**Updated for TensorFlow 2**

Google’s TensorFlow has been a hot topic in deep learning recently. The open source software, designed to allow efficient computation of data flow graphs, is especially suited to deep learning tasks. It is designed to be executed on single or multiple CPUs and GPUs, making it a good option for complex deep learning tasks, and recent versions can even be run on certain mobile operating systems. This introductory tutorial will give an overview of some of the basic concepts of TensorFlow in Python. These will be a good stepping stone to building more complex deep learning networks, such as Convolutional Neural Networks, natural language models, and Recurrent Neural Networks, in the package. We’ll be creating a simple three-layer neural network to classify the MNIST dataset.

This tutorial assumes that you are familiar with the basics of neural networks, which you can get up to speed on in the neural networks tutorial if required. To install TensorFlow, follow the instructions here. The code for this tutorial can be found in this site’s GitHub repository. Once you’re done, you also might want to check out a higher-level deep learning library that sits on top of TensorFlow called Keras – see my Keras tutorial.

First, let’s have a look at the main ideas of TensorFlow.

# 1.0 TensorFlow graphs

TensorFlow is based on graph-based computation – “what on earth is that?”, you might say. It’s an alternative way of conceptualising mathematical calculations. Consider the following expression $a = (b + c) * (c + 2)$. We can break this expression down into the following components:

\begin{align}
d &= b + c \\
e &= c + 2 \\
a &= d * e
\end{align}

Now we can represent these operations graphically as:

This may seem like a silly example – but notice a powerful idea in expressing the equation this way: two of the computations ($d=b+c$ and $e=c+2$) can be performed in parallel. By splitting up these calculations across CPUs or GPUs, we can achieve significant gains in computation time. These gains are a *must* for big data applications and deep learning – especially for complicated neural network architectures such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). The idea behind TensorFlow is to provide the ability to create these computational graphs in code and allow significant performance improvements via parallel operations and other efficiency gains.

We can look at a similar graph in TensorFlow below, which shows the computational graph of a three-layer neural network.

The data flowing between the different nodes in the graph are *tensors*, which are multi-dimensional data arrays. For instance, the input data tensor may be 5000 x 64 x 1, which represents a 64 node input layer with 5000 training samples. After the input layer, there is a hidden layer with rectified linear units as the activation function. There is a final output layer (called a “logit layer” in the above graph) that uses cross-entropy as a cost/loss function. At each point we see the relevant tensors flowing to the “Gradients” block, which finally flows to the Stochastic Gradient Descent optimizer that performs the back-propagation and gradient descent.

Here we can see how computational graphs can be used to represent the calculations in neural networks, and this, of course, is what TensorFlow excels at. Let’s see how to perform some basic mathematical operations in TensorFlow to get a feel for how it all works.

# 2.0 A Simple TensorFlow example

So how can we make TensorFlow perform the little example calculation shown above – $a = (b + c) * (c + 2)$? First, there is a need to introduce TensorFlow variables. The code below shows how to declare these objects:

```python
import tensorflow as tf

# create TensorFlow variables
const = tf.Variable(2.0, name="const")
b = tf.Variable(2.0, name='b')
c = tf.Variable(1.0, name='c')
```

As can be observed above, TensorFlow variables can be declared using the *tf.Variable* function. The first argument is the value to be assigned to the variable. The second is an optional name string which can be used to label the constant/variable – this is handy for when you want to do visualizations. TensorFlow will infer the type of the variable from the initialized value, but it can also be set explicitly using the optional *dtype* argument. TensorFlow has many of its own types like tf.float32, tf.int32 etc.
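For instance, the following short sketch (not part of the tutorial’s code) sets the *dtype* explicitly rather than letting it be inferred:

```python
import tensorflow as tf

# explicitly set the data type rather than letting TensorFlow infer it
w = tf.Variable(3, dtype=tf.float32, name='w')
print(w.dtype)  # <dtype: 'float32'>
```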

The objects assigned to the Python variables are actually TensorFlow tensors. Thereafter, they act like normal Python objects – therefore, if you want to access the tensors you need to keep track of the Python variables. In previous versions of TensorFlow, there were global methods of accessing the tensors and operations based on their names. This is no longer the case.

To examine the tensors stored in the Python variables, simply call them as you would a normal Python variable. If we do this for the “const” variable, you will see the following output:

```
<tf.Variable 'const:0' shape=() dtype=float32, numpy=2.0>
```

This output gives you a few different pieces of information – first is the name ‘const:0’ which has been assigned to the tensor. Next is the data type, in this case, a TensorFlow float 32 type. Finally, there is a “numpy” value. TensorFlow variables in TensorFlow 2 can be converted easily into numpy objects. Numpy stands for Numerical Python and is a crucial library for Python data science and machine learning. If you don’t know Numpy, what it is, and how to use it, check out this site. The command to access the numpy form of the tensor is simply *.numpy()* – the use of this method will be shown shortly.

Next, some calculation operations are created:

```python
# now create some operations
d = tf.add(b, c, name='d')
e = tf.add(c, const, name='e')
a = tf.multiply(d, e, name='a')
```

Note that *d* and *e* are automatically converted to tensor values upon the execution of the operations. TensorFlow has a wealth of calculation operations available to perform all sorts of interactions between tensors, as you will discover as you progress through this book. The purpose of the operations shown above is pretty obvious: they instantiate the operations $b + c$, $c + 2.0$, and $d * e$. However, these operations are an unwieldy way of doing things in TensorFlow 2. The operations below are equivalent to those above:

```python
d = b + c
e = c + 2
a = d * e
```

To access the value of variable *a*, one can use the *.numpy()* method as shown below:

```python
print(f"Variable a is {a.numpy()}")
```

The computational graph for this simple example can be visualized by using the TensorBoard functionality that comes packaged with TensorFlow. This is a great visualization feature and is explained more in this post. Here is what the graph looks like in TensorBoard:

The larger two vertices or nodes, *b* and *c*, correspond to the variables. The smaller nodes correspond to the operations, and the edges between the vertices are the scalar values emerging from the variables and operations.
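The TensorBoard mechanics are covered in the linked post, but as a rough sketch of how such a graph can be captured in TensorFlow 2 (a minimal example using *tf.function* tracing; the log directory name here is arbitrary):

```python
import tensorflow as tf

# wrap the calculation in a tf.function so TensorFlow builds a graph for it
@tf.function
def simple_graph(b, c):
    d = b + c
    e = c + 2.0
    return d * e

writer = tf.summary.create_file_writer("logs")
tf.summary.trace_on(graph=True)
a = simple_graph(tf.constant(2.0), tf.constant(1.0))  # run once while tracing is on
with writer.as_default():
    tf.summary.trace_export(name="simple_graph", step=0)
# the graph can then be viewed with: tensorboard --logdir logs
```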

This $a = (b + c) * (c + 2)$ example is a trivial one – what would it look like if there was an array of *b* values from which an array of equivalent *a* values would be calculated? TensorFlow variables can easily be instantiated using numpy arrays, like the following:

```python
import numpy as np

b = tf.Variable(np.arange(0, 10), name='b')
```

Calling *b* shows the following:

```
<tf.Variable 'b:0' shape=(10,) dtype=int32, numpy=array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])>
```

Note the numpy value of the tensor is an array. Because the numpy variable passed during the instantiation is a range of int32 values, we can’t add it directly to *c* as *c* is of float32 type. Therefore, the tf.cast operation, which changes the type of a tensor, first needs to be utilized like so:

```python
d = tf.cast(b, tf.float32) + c
```

Running the rest of the previous operations, using the new *b* tensor, gives the following value for *a*:

```
Variable a is [ 3.  6.  9. 12. 15. 18. 21. 24. 27. 30.]
```

In numpy, the developer can directly access *slices* or individual indices of an array and change their values directly. Can the same be done in TensorFlow 2? Can individual indices and/or slices be accessed and changed? The answer is yes, but not quite as straight-forwardly as in numpy. For instance, if *b* was a simple numpy array, one could easily execute b[1] = 10 – this would change the value of the second element in the array to the integer 10. Because *b* is a TensorFlow variable, however, the *assign* method of the sliced tensor must be used instead:

```python
b[1].assign(10)
```

Re-running the operations above with the updated *b* then flows through to *a* like so:

```
Variable a is [ 3. 33.  9. 12. 15. 18. 21. 24. 27. 30.]
```

The developer could also run the following, to assign a slice of *b* values:

```python
b[6:9].assign([10, 10, 10])
```

A new tensor can also be created by using the slice notation:

```python
f = b[2:5]
```

The explanations and code above show you how to perform some basic tensor manipulations and operations. In the section below, an example will be presented where a neural network is created using the Eager paradigm in TensorFlow 2. It will show how to create a training loop, perform a feed-forward pass through a neural network and calculate and apply gradients to an optimization method.

# 3.0 A Neural Network Example

In this section, a simple three-layer neural network built in TensorFlow is demonstrated. In following chapters, more complicated neural network structures such as convolutional neural networks and recurrent neural networks are covered. For this example, though, things will be kept simple.

In this example, the MNIST dataset will be used, which is packaged as part of the TensorFlow installation. This MNIST dataset is a set of 28×28 pixel grayscale images which represent hand-written digits. It has 60,000 training rows and 10,000 testing rows. It is a very common, basic, image classification dataset that is used in machine learning.

The data can be loaded by running the following:

```python
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
```

As can be observed, the Keras MNIST data loader returns Python tuples corresponding to the training and test set respectively (Keras is another deep learning framework, now tightly integrated with TensorFlow, as mentioned earlier). The data sizes of the tuples defined above are:

- *x_train*: (60,000 x 28 x 28)
- *y_train*: (60,000)
- *x_test*: (10,000 x 28 x 28)
- *y_test*: (10,000)

The *x* data is the image information – 60,000 images of 28 x 28 pixels size in the training set. The images are grayscale (i.e. black and white), with pixel values ranging from 0 to 255, where 255 is the maximum white intensity. The *x* data will need to be scaled so that it resides between 0 and 1, as this improves training efficiency. The *y* data is the matching image labels – signifying what digit is displayed in the image. This will need to be transformed to “one-hot” format.

When using a standard, categorical cross-entropy loss function (this will be shown later), a one-hot format is required when training classification tasks, as the output layer of the neural network will have the same number of nodes as the total number of possible classification labels. The output node with the highest value is considered as a prediction for that corresponding label. For instance, in the MNIST task, there are 10 possible classification labels – 0 to 9. Therefore, there will be 10 output nodes in any neural network performing this classification task. If we have an example output vector of [0.01, 0.8, 0.25, 0.05, 0.10, 0.27, 0.55, 0.32, 0.11, 0.09], the maximum value is in the second position / output node, and therefore this corresponds to the digit “1”. To train the network to produce this sort of outcome when the digit “1” appears, the loss needs to be calculated according to the difference between the output of the network and a “one-hot” array of the label 1. This one-hot array looks like [0, 1, 0, 0, 0, 0, 0, 0, 0, 0].

This conversion is easily performed in TensorFlow, as will be demonstrated shortly when the main training loop is covered.
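As a quick preview (a minimal sketch, separate from the tutorial’s code), the *tf.one_hot* function performs exactly this conversion:

```python
import tensorflow as tf

# convert the integer label 1 into a one-hot vector of length 10
label = tf.constant(1)
print(tf.one_hot(label, 10).numpy())
# [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
```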

One final thing that needs to be considered is how to extract the training data in batches of samples. The function below can handle this:

```python
def get_batch(x_data, y_data, batch_size):
    idxs = np.random.randint(0, len(y_data), batch_size)
    return x_data[idxs,:,:], y_data[idxs]
```

As can be observed in the code above, the data to be batched, i.e. the *x* and *y* data, is passed to this function along with the batch size. The first line of the function generates a random vector of integers, with random values between 0 and the length of the data passed to the function. The number of random integers generated is equal to the batch size. The *x* and *y* data are then returned, but the return data is only for those random indices chosen. Note that this is performed on numpy array objects – as will be shown shortly, the conversion from numpy arrays to tensor objects will be performed “on the fly” within the training loop.
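As a quick sanity check of how this might be used (assuming the MNIST arrays loaded earlier are in scope – this snippet is not part of the original code):

```python
# draw a random batch of 100 samples and inspect the shapes
batch_x, batch_y = get_batch(x_train, y_train, batch_size=100)
print(batch_x.shape)  # (100, 28, 28)
print(batch_y.shape)  # (100,)
```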

There is also the requirement for a loss function and a feed-forward function, but these will be covered shortly.

```python
# Python optimisation variables
epochs = 10
batch_size = 100

# normalize the input images by dividing by 255.0
x_train = x_train / 255.0
x_test = x_test / 255.0

# convert x_test to tensor to pass through model
# (train data will be converted to tensors on the fly)
x_test = tf.Variable(x_test)
```

First, the number of training epochs and the batch size are created – note these are simple Python variables, not TensorFlow variables. Next, the input training and test data, *x_train* and *x_test*, are scaled so that their values are between 0 and 1. Input data should always be scaled when training neural networks, as large, uncontrolled, inputs can heavily impact the training process. Finally, the test input data, *x_test* is converted into a tensor. The random batching process for the training data is most easily performed using numpy objects and functions. However, the test data will not be batched in this example, so the full test input data set *x_test* is converted into a tensor.

The next step is to set up the weight and bias variables for the three-layer neural network. There are always *L* – 1 sets of weight/bias tensors, where *L* is the number of layers. These variables are defined in the code below:

```python
# now declare the weights connecting the input to the hidden layer
W1 = tf.Variable(tf.random.normal([784, 300], stddev=0.03), name='W1')
b1 = tf.Variable(tf.random.normal([300]), name='b1')
# and the weights connecting the hidden layer to the output layer
W2 = tf.Variable(tf.random.normal([300, 10], stddev=0.03), name='W2')
b2 = tf.Variable(tf.random.normal([10]), name='b2')
```

The weight and bias variables are initialized using the *tf.random.normal* function – this function creates tensors of random numbers, drawn from a normal distribution. It allows the developer to specify things like the standard deviation of the distribution from which the random numbers are drawn.

Note the shape of the variables. The W1 variable is a [784, 300] tensor – the 784 nodes are the size of the input layer. This size comes from the flattening of the input images – if we have 28 rows and 28 columns of pixels, flattening these out gives us 1 row or column of 28 x 28 = 784 values. The 300 in the declaration of W1 is the number of nodes in the hidden layer. The W2 variable is a [300, 10] tensor, connecting the 300-node hidden layer to the 10-node output layer. In each case, a name is given to the variable for later viewing in TensorBoard – the TensorFlow visualization package. The next step in the code is to create the computations that occur within the nodes of the network. If the reader recalls, the computations within the nodes of a neural network are of the following form:

$$z = Wx + b$$

$$h=f(z)$$

Where *W* is the weights matrix, *x* is the layer input vector, *b* is the bias and *f* is the activation function of the node. These calculations comprise the feed-forward pass of the input data through the neural network. To execute these calculations, a dedicated feed-forward function is created:

```python
def nn_model(x_input, W1, b1, W2, b2):
    # flatten the input image from 28 x 28 to 784
    x_input = tf.reshape(x_input, (x_input.shape[0], -1))
    x = tf.add(tf.matmul(tf.cast(x_input, tf.float32), W1), b1)
    x = tf.nn.relu(x)
    logits = tf.add(tf.matmul(x, W2), b2)
    return logits
```

Examining the first line, the *x_input* data is reshaped from (batch_size, 28, 28) to (batch_size, 784) – in other words, the images are flattened out. On the next line, the input data is then converted to *tf.float32* type using the TensorFlow cast function. This is important – the *x_input* data comes in as *tf.float64* type, and TensorFlow won’t perform a matrix multiplication operation (*tf.matmul*) between tensors of different data types. This re-typed input data is then matrix-multiplied by *W1* using the TensorFlow *matmul* function (which stands for matrix multiplication). Then the bias *b1* is added to this product. On the line after this, the ReLU activation function is applied to the output of this line of calculation. The ReLU function is usually the best activation function to use in deep learning – the reasons for this are discussed in this post.

The output of this calculation is then multiplied by the final set of weights *W2*, with the bias *b2* added. The output of this calculation is titled *logits*. Note that no activation function has been applied to this output layer of nodes (yet). In machine/deep learning, the term “logits” refers to the un-activated output of a layer of nodes.

The reason no activation function has been applied to this layer is that there is a handy function in TensorFlow called *tf.nn.softmax_cross_entropy_with_logits*. This function does two things for the developer – it applies a softmax activation function to the logits, which transforms them into a quasi-probability (i.e. the sum of the output nodes is equal to 1). This is a common activation function to apply to an output layer in classification tasks. Next, it applies the cross-entropy loss function to the softmax activation output. The cross-entropy loss function is a commonly used loss in classification tasks. The theory behind it is quite interesting, but it won’t be covered in this book – a good summary can be found here. The code below applies this handy TensorFlow function, and in this example, it has been nested in another function called *loss_fn*:

```python
def loss_fn(logits, labels):
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels,
                                                                           logits=logits))
    return cross_entropy
```

The arguments to *softmax_cross_entropy_with_logits* are *labels* and *logits*. The *logits* argument is supplied from the outcome of the *nn_model* function. The usage of this function in the main training loop will be demonstrated shortly. The *labels* argument is supplied from the one-hot *y* values that are fed into *loss_fn* during the training process. The output of the *softmax_cross_entropy_with_logits* function is the cross-entropy loss value for each sample in the batch. To train the weights of the neural network, the average cross-entropy loss across the samples needs to be minimized as part of the optimization process. This is calculated by using the *tf.reduce_mean* function, which, unsurprisingly, calculates the mean of the tensor supplied to it.
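To make the behaviour concrete, here is a small sketch with made-up logits and labels (not from the tutorial) showing the per-sample losses and their mean:

```python
import tensorflow as tf

# two samples, three classes: one confident correct prediction, one poor one
logits = tf.constant([[4.0, 0.5, 0.1],
                      [0.2, 0.3, 0.1]])
labels = tf.constant([[1.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0]])

per_sample = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
print(per_sample.numpy())                   # one loss value per sample
print(tf.reduce_mean(per_sample).numpy())   # the mean loss used for training
```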

The next step is to define an optimizer function. In many examples within this book, the versatile *Adam* optimizer will be used. The theory behind this optimizer is interesting, and is worth further examination (such as shown here) but won’t be covered in detail within this post. It is basically a gradient descent method, but with sophisticated averaging of the gradients to provide appropriate momentum to the learning. To define the optimizer, which will be used in the main training loop, the following code is run:

```python
# setup the optimizer
optimizer = tf.keras.optimizers.Adam()
```

The *Adam* object can take a learning rate as input, but for the present purposes, the default value is used.
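For example, a learning rate could be supplied explicitly (0.001 is the default, so the line below is equivalent to the one above):

```python
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
```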

## 3.1 Training the network

Now that the appropriate functions, variables and optimizers have been created, it is time to define the overall training loop. The training loop is shown below:

```python
total_batch = int(len(y_train) / batch_size)
for epoch in range(epochs):
    avg_loss = 0
    for i in range(total_batch):
        batch_x, batch_y = get_batch(x_train, y_train, batch_size=batch_size)
        # create tensors
        batch_x = tf.Variable(batch_x)
        batch_y = tf.Variable(batch_y)
        # create a one hot vector
        batch_y = tf.one_hot(batch_y, 10)
        with tf.GradientTape() as tape:
            logits = nn_model(batch_x, W1, b1, W2, b2)
            loss = loss_fn(logits, batch_y)
        gradients = tape.gradient(loss, [W1, b1, W2, b2])
        optimizer.apply_gradients(zip(gradients, [W1, b1, W2, b2]))
        avg_loss += loss / total_batch
    test_logits = nn_model(x_test, W1, b1, W2, b2)
    max_idxs = tf.argmax(test_logits, axis=1)
    test_acc = np.sum(max_idxs.numpy() == y_test) / len(y_test)
    print(f"Epoch: {epoch + 1}, loss={avg_loss:.3f}, test set accuracy={test_acc*100:.3f}%")
print("\nTraining complete!")
```

Stepping through the lines above, the first line is a calculation to determine the number of batches to run through in each training epoch – this will ensure that, on average, each training sample will be used once in the epoch. After that, a loop for each training epoch is entered. An *avg_loss* variable is initialized to keep track of the average cross-entropy cost/loss for each epoch. The next line is where randomised batches of samples are extracted (*batch_x* and *batch_y*) from the MNIST training dataset, using the *get_batch()* function that was created earlier.

Next, the *batch_x* and *batch_y* numpy variables are converted to tensor variables. After this, the label data stored in *batch_y* as simple integers (i.e. 2 for the handwritten digit “2” and so on) needs to be converted to “one hot” format, as discussed previously. To do this, the *tf.one_hot* function can be utilized – the first argument to this function is the tensor you wish to convert, and the second argument is the number of distinct classes. This transforms the *batch_y* tensor from size (batch_size,) to (batch_size, 10).

The next line is important. Here the TensorFlow *GradientTape* API is introduced. In previous versions of TensorFlow a static graph of all the operations and variables was constructed. In this paradigm, the gradients that were required to be calculated could be determined by reading from the graph structure. However, in Eager mode, all tensor calculations are performed on the fly, and TensorFlow doesn’t know which variables and operations you are interested in calculating gradients for. The Gradient Tape API is the solution for this. Whatever variables and operations you wish to calculate gradients over you supply to the “*with GradientTape() as tape:*” context manager. In a neural network, this involves all the variables and operations involved in the feed-forward pass through your network, along with the evaluation of the loss function. Note that if you call a function within the gradient tape context, all the operations performed within that function (and any further nested functions) will be captured for gradient calculation as required.

As can be observed in the code above, the feed-forward pass and the loss function evaluation are encapsulated in the functions which were explained earlier: *nn_model* and *loss_fn*. By executing these functions within the gradient tape context manager, TensorFlow knows to keep track of all the variables and operation outcomes to ensure they are ready for gradient computations. Following the function calls *nn_model* and *loss_fn* within the gradient tape context, we have the place where the gradients of the neural network are calculated.

Here, the gradient tape is accessed via its name (*tape* in this example) and the gradient function *tape.gradient()* is called. The first argument to this function is the dependent variable of the differentiation, and the second argument is the independent variable/s. In other words, if we were trying to calculate the derivative *dy/dx*, the first argument would be *y* and the second would be *x*. In the context of a neural network, we are trying to calculate *dL/dw* and *dL/db*, where *L* is the loss, *w* represents the weights and *b* the bias weights. Therefore, in the code above, the reader can observe that the first argument is the *loss* output from *loss_fn* and the second argument is a list of all the weight and bias variables throughout the simple neural network.
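As a standalone illustration of this *dy/dx* idea (a minimal sketch, separate from the network code):

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2           # y is recorded as a function of x
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())     # 6.0, i.e. the derivative 2x evaluated at x = 3
```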

Back in the training loop, the line following the gradient calculation is where these gradients are zipped together with the weight and bias variables and passed to the optimizer to perform the gradient descent step. This is executed easily using the optimizer’s *apply_gradients()* function.

The line following this is the accumulation of the average loss within the epoch. This concludes the inner training loop over batches. In the outer epoch loop, after each epoch of training, the accuracy of the model on the test set is evaluated.

To determine the accuracy, first the test set images are passed through the neural network model using *nn_model*. This returns the *logits* from the model (the un-activated outputs from the last layer). The “prediction” of the model is then calculated from these logits – whichever output node has the highest logit value constitutes the digit prediction of the model. To determine the highest logit value for each test image, we can use the *tf.argmax()* function. This function mimics the numpy *argmax()* function, which returns the index of the highest value in an array/tensor. The logits output from the model in this case will be of the following dimensions: (test_set_size, 10) – we want the argmax function to find the maximum across the “column” dimension, i.e. across the 10 output nodes for each image. The “row” dimension corresponds to axis=0, and the column dimension corresponds to axis=1. Therefore, supplying the axis=1 argument to the *tf.argmax()* function produces a (test_set_size,) tensor of integer predictions.

In the following line, these *max_idxs* are converted to a numpy array (using *.numpy()*) and compared for equality with the test labels (also integers – you will recall that we did not convert the test labels to a one-hot format). Where the labels are equal, this will return a “true” value, which is equivalent to an integer of 1 in numpy, or alternatively a “false” / 0 value. By summing up the results of these comparisons, we obtain the number of correct predictions. Dividing this by the total size of the test set, the test set accuracy is obtained.
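The same accuracy calculation on a tiny, made-up set of logits and labels (purely illustrative) looks like this:

```python
import numpy as np
import tensorflow as tf

# toy logits for three test images over three classes
test_logits = tf.constant([[2.0, 0.1, 0.3],
                           [0.2, 3.0, 0.1],
                           [1.5, 0.2, 0.9]])
y_true = np.array([0, 1, 2])   # integer labels, as in y_test

max_idxs = tf.argmax(test_logits, axis=1)                  # -> [0, 1, 0]
test_acc = np.sum(max_idxs.numpy() == y_true) / len(y_true)
print(test_acc)   # 0.666..., two of the three predictions are correct
```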

__Note__: if some of these explanations aren’t immediately clear, it is a good idea to jump over to the code supplied for this chapter and run it within a standard Python development environment. Insert a breakpoint in the code that you want to examine more closely – you can then inspect all the tensor sizes, convert them to numpy arrays, apply operations on the fly and so on. This is all possible within TensorFlow 2 now that the default operating paradigm is Eager execution.

The epoch number, average loss and accuracy are then printed, so one can observe the progress of the training. The average loss should be decreasing on average after every epoch – if it is not, something is going wrong with the network, or the learning has stagnated. Therefore, it is an important variable to monitor. On running this code, something like the following output should be observed:

```
Epoch: 1, loss=0.317, test set accuracy=94.350%
Epoch: 2, loss=0.124, test set accuracy=95.940%
Epoch: 3, loss=0.085, test set accuracy=97.070%
Epoch: 4, loss=0.065, test set accuracy=97.570%
Epoch: 5, loss=0.052, test set accuracy=97.630%
Epoch: 6, loss=0.048, test set accuracy=97.620%
Epoch: 7, loss=0.037, test set accuracy=97.770%
Epoch: 8, loss=0.032, test set accuracy=97.630%
Epoch: 9, loss=0.027, test set accuracy=97.950%
Epoch: 10, loss=0.022, test set accuracy=98.000%

Training complete!
```

As can be observed, the loss declines monotonically, and the test set accuracy steadily increases. This shows that the model is training correctly. It is also possible to visualize the training progress using TensorBoard, as shown below:

I hope this tutorial was instructive and helps get you going on the TensorFlow journey. Just a reminder, you can check out the code for this post here. I’ve also written an article that shows you how to build more complex neural networks such as convolutional neural networks, recurrent neural networks, and Word2Vec natural language models in TensorFlow. You also might want to check out a higher-level deep learning library that sits on top of TensorFlow called Keras – see my Keras tutorial.

Have fun!

# Comments

Hi, I like the idea of explaining using the simple equation, great idea. I didn’t get the tensor/array output – could you post all the code? Also, the code for the TensorBoard visualization would be nice (I know you are planning to go into that in more detail in another tutorial, but it would be great to take a look at it now).

Hi Tomas – no problems, you can find the code here: https://github.com/adventuresinML/adventures-in-ml-code. I’ve put another link to this repository in the article to make it clearer. Thanks for the feedback.

Thanks, great article.

I used the code from this post and it worked instantly. This is a great article and great code so I added the link to the collection of neural networks with python.

great article. Thank you


Hi

Great tutorial, one of the (few..) best explained on the web.

I have a question: is it possible to give an image path to the model so it can recognize the content of the image (a number in this case) and print the accuracy?

I already have a TensorFlow model which predicts given numbers (based on MNIST) but it fails a bit. I would like to print the accuracy or, better, use a model like this with TF deeply integrated to predict these numbers.

Thank you

Hi Lucian, thanks for the comment. I’m sorry, I’m not quite sure what you mean by image path? The code given here does predict the MNIST numbers and prints the accuracy. Are you asking whether there is a more accurate deep learning model to predict numbers and other image content? If so, there is – a convolutional neural network. Check out this post to learn how to implement in TensorFlow: Convolutional Neural Networks Tutorial in TensorFlow

I hope this helps

Shouldn’t

a=d*e in the 1st paragraph breakdown? Not a=d*c

Hi John, yes it should – thanks for picking this up. I’ve fixed it

No problem – I initially thought I might have missed a new way to break down functions!!

Thank you very much for posting this. Very informative. Keep up the good work 🙂

Hi Andy,

Amazing tutorial, I’d say the best I’ve found in 2 days of google searches!

As an aside, would you be able to write a similar tutorial for a Regression example? Or using different training methods?

I know that it is just a matter of changing the softmax to maybe relu or something like that, and changing the number of output neurons. However I feel like it would be really helpful for someone who is just getting started, as there is really NO tutorial on how to build a NN using TF for a regression problem. If you don’t have the time, would you be able to just post some code? I reckon you could re-use most of the code written here.

Great job anyway!

Hi Andy,

Thank you very much for posting this tutorial.

I tried to run the convolutional_neural_network_tutorial.py code, but my computer crashes.

The characteristics of my Computer are the following:

Processor: Intel i5-7200 CPU 2.50GHz, 2.70GHz

RAM: 4 GB

Operating System: Windows 10

Is the size of my RAM insufficient to execute this code?

Thank you.

Really great article, thank you very much for the good work!

Great Article. The code worked perfectly. I used TensorFlow running on Docker and had no issues following up. Thanks a lot.

Thank you so much.

This blog is the best way to dive into TF for the first timers. Appreciate your work

Glad it is a help for you
