Python TensorFlow Tutorial – Build a Neural Network

Updated for TensorFlow 2

Google’s TensorFlow has been a hot topic in deep learning recently.  The open source software, designed to allow efficient computation of data flow graphs, is especially suited to deep learning tasks.  It is designed to be executed on single or multiple CPUs and GPUs, making it a good option for complex deep learning tasks, and it can even be run on certain mobile operating systems.  This introductory tutorial to TensorFlow will give an overview of some of the basic concepts of TensorFlow in Python.  These will be a good stepping stone to building more complex deep learning networks, such as Convolutional Neural Networks, natural language models, and Recurrent Neural Networks in the package.  We’ll be creating a simple three-layer neural network to classify the MNIST dataset.  This tutorial assumes that you are familiar with the basics of neural networks, which you can get up to speed with in the neural networks tutorial if required.  To install TensorFlow, follow the instructions here. The code for this tutorial can be found in this site’s GitHub repository.  Once you’re done, you also might want to check out a higher level deep learning library that sits on top of TensorFlow called Keras – see my Keras tutorial.

First, let’s have a look at the main ideas of TensorFlow.

1.0 TensorFlow graphs

TensorFlow is built around graph-based computation – “what on earth is that?”, you might say.  It’s an alternative way of conceptualising mathematical calculations.  Consider the following expression $a = (b + c) * (c + 2)$.  We can break this function down into the following components:

\begin{align}
d &= b + c \\
e &= c + 2 \\
a &= d * e
\end{align}

Now we can represent these operations graphically as:

[Figure: Simple computational graph]

This may seem like a silly example – but notice a powerful idea in expressing the equation this way: two of the computations ($d=b+c$ and $e=c+2$) can be performed in parallel.  By splitting up these calculations across CPUs or GPUs, this can give us significant gains in computation time.  These gains are a must for big data applications and deep learning – especially for complicated neural network architectures such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).  The idea behind TensorFlow is to give the developer the ability to create these computational graphs in code, and to allow significant performance improvements via parallel operations and other efficiency gains.

We can look at a similar graph in TensorFlow below, which shows the computational graph of a three-layer neural network.

[Figure: TensorFlow data flow graph]

The animated data flows between different nodes in the graph are tensors which are multi-dimensional data arrays.  For instance, the input data tensor may be 5000 x 64 x 1, which represents a 64 node input layer with 5000 training samples.  After the input layer, there is a hidden layer with rectified linear units as the activation function.  There is a final output layer (called a “logit layer” in the above graph) that uses cross-entropy as a cost/loss function.  At each point we see the relevant tensors flowing to the “Gradients” block which finally flows to the Stochastic Gradient Descent optimizer which performs the back-propagation and gradient descent.

Here we can see how computational graphs can be used to represent the calculations in neural networks, and this, of course, is what TensorFlow excels at.  Let’s see how to perform some basic mathematical operations in TensorFlow to get a feel for how it all works.

2.0 A Simple TensorFlow example

So how can we make TensorFlow perform the little example calculation shown above – $a = (b + c) * (c + 2)$? First, there is a need to introduce TensorFlow variables.  The code below shows how to declare these objects:

import tensorflow as tf
import numpy as np  # numpy is used later for array creation and batching

# create TensorFlow variables
const = tf.Variable(2.0, name='const')
b = tf.Variable(2.0, name='b')
c = tf.Variable(1.0, name='c')

As can be observed above, TensorFlow variables can be declared using the tf.Variable function.  The first argument is the value to be assigned to the variable. The second is an optional name string which can be used to label the constant/variable – this is handy for when you want to do visualizations.  TensorFlow will infer the type of the variable from the initialized value, but it can also be set explicitly using the optional dtype argument.  TensorFlow has many of its own types like tf.float32, tf.int32 etc.
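For instance, to set the type explicitly rather than have it inferred, a declaration might look like the following (the variable g here is purely illustrative):

g = tf.Variable(3, dtype=tf.float32, name='g')  # stored as float32 despite the integer initial value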

The objects assigned to the Python variables are actually TensorFlow tensors. Thereafter, they act like normal Python objects – therefore, if you want to access the tensors you need to keep track of the Python variables. In previous versions of TensorFlow, there were global methods of accessing the tensors and operations based on their names. This is no longer the case.

To examine the tensors stored in the Python variables, simply call them as you would a normal Python variable. If we do this for the “const” variable, you will see the following output:

<tf.Variable 'const:0' shape=() dtype=float32, numpy=2.0>

This output gives you a few different pieces of information – first, is the name ‘const:0’ which has been assigned to the tensor. Next is the data type, in this case, a TensorFlow float 32 type. Finally, there is a “numpy” value. TensorFlow variables in TensorFlow 2 can be converted easily into numpy objects. Numpy stands for Numerical Python and is a crucial library for Python data science and machine learning. If you don’t know Numpy, what it is, and how to use it, check out this site. The command to access the numpy form of the tensor is simply .numpy() – the use of this method will be shown shortly.

Next, some calculation operations are created:

# now create some operations
d = tf.add(b, c, name='d')
e = tf.add(c, const, name='e')
a = tf.multiply(d, e, name='a')

Note that d and e are automatically converted to tensor values upon the execution of the operations. TensorFlow has a wealth of calculation operations available to perform all sorts of interactions between tensors, as you will discover as you progress through this book.  The purposes of the operations shown above are pretty obvious, and they instantiate the operations b + c, c + 2.0, and d * e. However, these operations are an unwieldy way of doing things in TensorFlow 2. The operations below are equivalent to those above:

d = b + c
e = c + 2
a = d * e

To access the value of variable a, one can use the .numpy() method as shown below:

print(f"Variable a is {a.numpy()}")

The computational graph for this simple example can be visualized by using the TensorBoard functionality that comes packaged with TensorFlow. This is a great visualization feature and is explained more in this post. Here is what the graph looks like in TensorBoard:

[Figure: Simple TensorFlow graph]

The larger two vertices or nodes, b and c, correspond to the variables. The smaller nodes correspond to the operations, and the edges between the vertices are the scalar values emerging from the variables and operations.

The example above is a trivial example – what would this look like if there was an array of b values from which an array of equivalent a values would be calculated? TensorFlow variables can easily be instantiated using numpy variables, like the following:

b = tf.Variable(np.arange(0, 10), name='b')

Calling b shows the following:

<tf.Variable 'b:0' shape=(10,) dtype=int32, numpy=array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])>

Note the numpy value of the tensor is an array. Because the numpy variable passed during the instantiation is a range of int32 values, we can’t add it directly to c as c is of float32 type. Therefore, the tf.cast operation, which changes the type of a tensor, first needs to be utilized like so:

d = tf.cast(b, tf.float32) + c

Running the rest of the previous operations, using the new b tensor, gives the following value for a:

Variable a is [ 3.  6.  9. 12. 15. 18. 21. 24. 27. 30.]

In numpy, the developer can directly access slices or individual indices of an array and change their values directly. Can the same be done in TensorFlow 2? Can individual indices and/or slices be accessed and changed? The answer is yes, but not quite as straight-forwardly as in numpy. For instance, if b was a simple numpy array, one could easily execute b[1] = 10 – this would change the value of the second element in the array to the integer 10. In TensorFlow 2, the same result is achieved by calling the assign method on a sliced Variable, as shown below:

b[1].assign(10)

This will then flow through to a like so:

Variable a is [ 3. 33.  9. 12. 15. 18. 21. 24. 27. 30.]

The developer could also run the following, to assign a slice of b values:

b[6:9].assign([10, 10, 10])

A new tensor can also be created by using the slice notation:

f = b[2:5]
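A quick sanity check of this new tensor, assuming the assignments above have been run:

print(f.numpy())  # [2 3 4] – the third through fifth elements of b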

The explanations and code above show you how to perform some basic tensor manipulations and operations. In the section below, an example will be presented where a neural network is created using the Eager paradigm in TensorFlow 2. It will show how to create a training loop, perform a feed-forward pass through a neural network and calculate and apply gradients to an optimization method.

3.0 A Neural Network Example

In this section, a simple three-layer neural network built in TensorFlow is demonstrated.  In following chapters more complicated neural network structures such as convolutional neural networks and recurrent neural networks are covered.  For this example, though, it will be kept simple.

In this example, the MNIST dataset will be used, which is packaged as part of the TensorFlow installation. The MNIST dataset is a set of 28×28 pixel grayscale images which represent hand-written digits.  The Keras loader used below provides 60,000 training rows and 10,000 testing rows. It is a very common, basic, image classification dataset that is used in machine learning.

The data can be loaded by running the following:

from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

As can be observed, the Keras MNIST data loader returns Python tuples corresponding to the training and test set respectively (Keras is another deep learning framework, now tightly integrated with TensorFlow, as mentioned earlier). The data sizes of the tuples defined above are:

  • x_train: (60,000 x 28 x 28)
  • y_train: (60,000)
  • x_test: (10,000 x 28 x 28)
  • y_test: (10,000)

The x data is the image information – 60,000 images of 28 x 28 pixel size in the training set. The images are grayscale (i.e. black and white), with pixel values between 0 and 255, where 255 represents maximum white intensity. The x data will need to be scaled so that it resides between 0 and 1, as this improves training efficiency. The y data is the matching image labels – signifying what digit is displayed in the image. This will need to be transformed into “one-hot” format.

When using a standard, categorical cross-entropy loss function (this will be shown later), a one-hot format is required when training classification tasks, as the output layer of the neural network will have the same number of nodes as the total number of possible classification labels. The output node with the highest value is considered as a prediction for that corresponding label. For instance, in the MNIST task, there are 10 possible classification labels – 0 to 9. Therefore, there will be 10 output nodes in any neural network performing this classification task. If we have an example output vector of [0.01, 0.8, 0.25, 0.05, 0.10, 0.27, 0.55, 0.32, 0.11, 0.09], the maximum value is in the second position / output node, and therefore this corresponds to the digit “1”. To train the network to produce this sort of outcome when the digit “1” appears, the loss needs to be calculated according to the difference between the output of the network and a “one-hot” array of the label 1. This one-hot array looks like [0, 1, 0, 0, 0, 0, 0, 0, 0, 0].

This conversion is easily performed in TensorFlow, as will be demonstrated shortly when the main training loop is covered.
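As a quick standalone preview, the tf.one_hot function performs this conversion – here, converting the integer label 1 into a 10-element one-hot array:

label = tf.constant(1)
print(tf.one_hot(label, 10).numpy())
# [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]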

One final thing that needs to be considered is how to extract the training data in batches of samples. The function below can handle this:

def get_batch(x_data, y_data, batch_size):
    # draw batch_size random indices into the dataset
    idxs = np.random.randint(0, len(y_data), batch_size)
    # return only the samples and labels at those random indices
    return x_data[idxs,:,:], y_data[idxs]

As can be observed in the code above, the data to be batched i.e. the x and y data is passed to this function along with the batch size. The first line of the function generates a random vector of integers, with random values between 0 and the length of the data passed to the function. The number of random integers generated is equal to the batch size. The x and y data are then returned, but the return data is only for those random indices chosen. Note, that this is performed on numpy array objects – as will be shown shortly, the conversion from numpy arrays to tensor objects will be performed “on the fly” within the training loop.
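A quick usage check, assuming the MNIST arrays loaded earlier:

batch_x, batch_y = get_batch(x_train, y_train, batch_size=100)
print(batch_x.shape, batch_y.shape)  # (100, 28, 28) (100,)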

There is also the requirement for a loss function and a feed-forward function, but these will be covered shortly.

# Python optimisation variables
epochs = 10
batch_size = 100

# normalize the input images by dividing by 255.0
x_train = x_train / 255.0
x_test = x_test / 255.0
# convert x_test to tensor to pass through model (train data will be converted to
# tensors on the fly)
x_test = tf.Variable(x_test)

First, the number of training epochs and the batch size are created – note these are simple Python variables, not TensorFlow variables. Next, the input training and test data, x_train and x_test, are scaled so that their values are between 0 and 1. Input data should always be scaled when training neural networks, as large, uncontrolled, inputs can heavily impact the training process. Finally, the test input data, x_test is converted into a tensor. The random batching process for the training data is most easily performed using numpy objects and functions. However, the test data will not be batched in this example, so the full test input data set x_test is converted into a tensor.

The next step is to set up the weight and bias variables for the three-layer neural network.  There are always L - 1 sets of weight/bias tensors, where L is the number of layers.  These variables are defined in the code below:

# now declare the weights connecting the input to the hidden layer
W1 = tf.Variable(tf.random.normal([784, 300], stddev=0.03), name='W1')
b1 = tf.Variable(tf.random.normal([300]), name='b1')
# and the weights connecting the hidden layer to the output layer
W2 = tf.Variable(tf.random.normal([300, 10], stddev=0.03), name='W2')
b2 = tf.Variable(tf.random.normal([10]), name='b2')

The weight and bias variables are initialized using the tf.random.normal function – this function creates tensors of random numbers, drawn from a normal distribution. It allows the developer to specify things like the standard deviation of the distribution from which the random numbers are drawn.

Note the shape of the variables. The W1 variable is a [784, 300] tensor – the 784 nodes are the size of the input layer. This size comes from the flattening of the input images – if we have 28 rows and 28 columns of pixels, flattening these out gives us 1 row or column of 28 x 28 = 784 values.  The 300 in the declaration of W1 is the number of nodes in the hidden layer. The W2 variable is a [300, 10] tensor, connecting the 300-node hidden layer to the 10-node output layer. In each case, a name is given to the variable for later viewing in TensorBoard – the TensorFlow visualization package. The next step in the code is to create the computations that occur within the nodes of the network. If the reader recalls, the computations within the nodes of a neural network are of the following form:

$$z = Wx + b$$

$$h=f(z)$$

Where W is the weights matrix, x is the layer input vector, b is the bias and f is the activation function of the node. These calculations comprise the feed-forward pass of the input data through the neural network. To execute these calculations, a dedicated feed-forward function is created:

def nn_model(x_input, W1, b1, W2, b2):
    # flatten the input image from 28 x 28 to 784
    x_input = tf.reshape(x_input, (x_input.shape[0], -1))
    x = tf.add(tf.matmul(tf.cast(x_input, tf.float32), W1), b1)
    x = tf.nn.relu(x)
    logits = tf.add(tf.matmul(x, W2), b2)
    return logits

Examining the first line, the x_input data is reshaped from (batch_size, 28, 28) to (batch_size, 784) – in other words, the images are flattened out. On the next line, the input data is then converted to tf.float32 type using the TensorFlow cast function. This is important – the x_input data comes in as tf.float64 type, and TensorFlow won’t perform a matrix multiplication operation (tf.matmul) between tensors of different data types. This re-typed input data is then matrix-multiplied by W1 using the TensorFlow matmul function (which stands for matrix multiplication). Then the bias b1 is added to this product. On the line after this, the ReLU activation function is applied to the output of this line of calculation. The ReLU function is usually the best activation function to use in deep learning – the reasons for this are discussed in this post.

The output of this calculation is then multiplied by the final set of weights W2, with the bias b2 added. The output of this calculation is titled logits. Note that no activation function has been applied to this output layer of nodes (yet). In machine/deep learning, the term “logits” refers to the un-activated output of a layer of nodes.
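To see the shapes in action, here is a quick trace through nn_model with a dummy batch (the all-ones tensor is simply a stand-in for 100 input images, and this assumes the weight and bias variables declared above):

dummy_batch = tf.ones((100, 28, 28), dtype=tf.float64)  # stand-in for 100 input images
out = nn_model(dummy_batch, W1, b1, W2, b2)
print(out.shape)  # (100, 10) – one logit per output node, per sample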

The reason no activation function has been applied to this layer is that there is a handy function in TensorFlow called tf.nn.softmax_cross_entropy_with_logits. This function does two things for the developer – it applies a softmax activation function to the logits, which transforms them into a quasi-probability (i.e. the sum of the output nodes is equal to 1). This is a common activation function to apply to an output layer in classification tasks. Next, it applies the cross-entropy loss function to the softmax activation output. The cross-entropy loss function is a commonly used loss in classification tasks. The theory behind it is quite interesting, but it won’t be covered in this book – a good summary can be found here. The code below applies this handy TensorFlow function, and in this example,  it has been nested in another function called loss_fn:

def loss_fn(logits, labels):
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels,
                                                                              logits=logits))
    return cross_entropy

The arguments to softmax_cross_entropy_with_logits are labels and logits. The logits argument is supplied from the outcome of the nn_model function. The usage of this function in the main training loop will be demonstrated shortly. The labels argument is supplied from the one-hot y values that are fed into loss_fn during the training process. The output of the softmax_cross_entropy_with_logits function will be the output of the cross-entropy loss value for each sample in the batch. To train the weights of the neural network, the average cross-entropy loss across the samples needs to be minimized as part of the optimization process. This is calculated by using the tf.reduce_mean function, which, unsurprisingly, calculates the mean of the tensor supplied to it.
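As a toy check of loss_fn, with made-up numbers rather than MNIST outputs:

logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([[1.0, 0.0, 0.0]])  # one-hot label for class 0
print(loss_fn(logits, labels).numpy())  # approximately 0.417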

The next step is to define an optimizer function. In many examples within this book, the versatile Adam optimizer will be used. The theory behind this optimizer is interesting, and is worth further examination (such as shown here) but won’t be covered in detail within this post. It is basically a gradient descent method, but with sophisticated averaging of the gradients to provide appropriate momentum to the learning. To define the optimizer, which will be used in the main training loop, the following code is run:

# setup the optimizer
optimizer = tf.keras.optimizers.Adam()

The Adam object can take a learning rate as input, but for the present purposes, the default value is used.
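If you did want to set the learning rate explicitly, the declaration would look like this (0.001 is the default value):

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)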

3.1 Training the network

Now that the appropriate functions, variables and optimizers have been created, it is time to define the overall training loop. The training loop is shown below:

total_batch = int(len(y_train) / batch_size)
for epoch in range(epochs):
    avg_loss = 0
    for i in range(total_batch):
        batch_x, batch_y = get_batch(x_train, y_train, batch_size=batch_size)
        # create tensors
        batch_x = tf.Variable(batch_x)
        batch_y = tf.Variable(batch_y)
        # create a one hot vector
        batch_y = tf.one_hot(batch_y, 10)
        with tf.GradientTape() as tape:
            logits = nn_model(batch_x, W1, b1, W2, b2)
            loss = loss_fn(logits, batch_y)
        gradients = tape.gradient(loss, [W1, b1, W2, b2])
        optimizer.apply_gradients(zip(gradients, [W1, b1, W2, b2]))
        avg_loss += loss / total_batch
    test_logits = nn_model(x_test, W1, b1, W2, b2)
    max_idxs = tf.argmax(test_logits, axis=1)
    test_acc = np.sum(max_idxs.numpy() == y_test) / len(y_test)
    print(f"Epoch: {epoch + 1}, loss={avg_loss.numpy():.3f}, test set accuracy={test_acc * 100:.3f}%")

print("\nTraining complete!")

Stepping through the lines above, the first line is a calculation to determine the number of batches to run through in each training epoch – this will ensure that, on average, each training sample will be used once in the epoch.  For MNIST with batch_size = 100, this works out to int(60,000 / 100) = 600 batches per epoch.  After that, a loop for each training epoch is entered. An avg_loss variable is initialized to keep track of the average cross entropy cost/loss for each epoch. The next line is where randomised batches of samples are extracted (batch_x and batch_y) from the MNIST training dataset, using the get_batch() function that was created earlier.

Next, the batch_x and batch_y numpy variables are converted to tensor variables. After this, the label data stored in batch_y as simple integers (i.e. 2 for handwritten digit “2” and so on) needs to be converted to “one hot” format, as discussed previously. To do this, the tf.one_hot function can be utilized – the first argument to this function is the tensor you wish to convert, and the second argument is the number of distinct classes. This transforms the batch_y tensor from size (batch_size,) to (batch_size, 10).

The next line is important. Here the TensorFlow GradientTape API is introduced. In previous versions of TensorFlow a static graph of all the operations and variables was constructed. In this paradigm, the gradients that were required to be calculated could be determined by reading from the graph structure. However, in Eager mode, all tensor calculations are performed on the fly, and TensorFlow doesn’t know which variables and operations you are interested in calculating gradients for. The Gradient Tape API is the solution for this. Whatever variables and operations you wish to calculate gradients over you supply to the “with GradientTape() as tape:” context manager. In a neural network, this involves all the variables and operations involved in the feed-forward pass through your network, along with the evaluation of the loss function. Note that if you call a function within the gradient tape context, all the operations performed within that function (and any further nested functions), will be captured for gradient calculation as required.

As can be observed in the code above, the feed forward pass and the loss function evaluation are encapsulated in the functions which were explained earlier: nn_model and loss_fn. By executing these functions within the gradient tape context manager, TensorFlow knows to keep track of all the variables and operation outcomes to ensure they are ready for gradient computations. Following the function calls nn_model and loss_fn within the gradient tape context, we have the place where the gradients of the neural network are calculated.

Here, the gradient tape is accessed via its name (tape in this example) and the gradient function tape.gradient() is called. The first argument to this function is the dependent variable of the differentiation, and the second argument is the independent variable/s. In other words, if we were trying to calculate the derivative dy/dx, the first argument would be y and the second would be x for this function.  In the context of a neural network, we are trying to calculate dL/dw and dL/db where L is the loss, w represents the weights and b the biases. Therefore, in the code above, the reader can observe that the first argument is the loss output from loss_fn and the second argument is a list of all the weight and bias variables throughout the simple neural network.
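A minimal standalone illustration of this dy/dx pattern, separate from the network code (the variable x here is a throwaway example):

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2
print(tape.gradient(y, x).numpy())  # 6.0, since dy/dx = 2x = 6 at x = 3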

The next line is where these gradients are zipped together with the weight and bias variables and passed to the optimizer to perform the gradient descent step. This is executed easily using the optimizer’s apply_gradients() function.

The line following this is the accumulation of the average loss within the epoch. This constitutes the inner-epoch training loop. In the outer epoch training loop, after each epoch of training, the accuracy of the model on the test set is evaluated.

To determine the accuracy, first the test set images are passed through the neural network model using nn_model. This returns the logits from the model (the un-activated outputs from the last layer). The “prediction” of the model is then calculated from these logits – whatever output node has the highest logit value constitutes the digit prediction of the model. To determine the highest logit value for each test image, we can use the tf.argmax() function. This function mimics the numpy argmax() function, which returns the index of the highest value in an array/tensor. The logits output from the model in this case will be of the following dimensions: (test_set_size, 10) – we want the argmax function to find the maximum across the 10 output nodes for each sample. The “row” dimension corresponds to axis=0, and the column dimension corresponds to axis=1. Therefore, supplying the axis=1 argument to the tf.argmax() function produces a (test_set_size,) vector of integer predictions.
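A small illustration of tf.argmax along axis=1, with made-up logit values:

logits = tf.constant([[0.1, 2.0, 0.3],
                      [1.5, 0.2, 0.1]])
print(tf.argmax(logits, axis=1).numpy())  # [1 0]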

In the following line, these max_idxs are converted to a numpy array (using .numpy()) and compared for equality with the test labels (also integers – you will recall that we did not convert the test labels to a one-hot format). Where the labels are equal, the comparison returns a “true” value, which is equivalent to an integer of 1 in numpy, or alternatively a “false” / 0 value. By summing up the results of these comparisons, we obtain the number of correct predictions. Dividing this by the total size of the test set, the test set accuracy is obtained.
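The same pattern in miniature, with made-up predictions and labels:

preds = np.array([7, 2, 1, 0])
labels = np.array([7, 2, 6, 0])
print(np.sum(preds == labels) / len(labels))  # 0.75 – three of four predictions correct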

Note: if some of these explanations aren’t immediately clear, it is a good idea to jump over to the code supplied for this chapter and run it within a standard Python development environment. Insert a breakpoint in the code that you want to examine more closely – you can then inspect all the tensor sizes, convert them to numpy arrays, apply operations on the fly and so on. This is all possible within TensorFlow 2 now that the default operating paradigm is Eager execution.

The epoch number, average loss and accuracy are then printed, so one can observe the progress of the training. The average loss should be decreasing on average after every epoch – if it is not, something is going wrong with the network, or the learning has stagnated. Therefore, it is an important variable to monitor. On running this code, something like the following output should be observed:

Epoch: 1, loss=0.317, test set accuracy=94.350%

Epoch: 2, loss=0.124, test set accuracy=95.940%

Epoch: 3, loss=0.085, test set accuracy=97.070%

Epoch: 4, loss=0.065, test set accuracy=97.570%

Epoch: 5, loss=0.052, test set accuracy=97.630%

Epoch: 6, loss=0.048, test set accuracy=97.620%

Epoch: 7, loss=0.037, test set accuracy=97.770%

Epoch: 8, loss=0.032, test set accuracy=97.630%

Epoch: 9, loss=0.027, test set accuracy=97.950%

Epoch: 10, loss=0.022, test set accuracy=98.000%

Training complete!

As can be observed, the loss declines monotonically, and the test set accuracy generally increases. This shows that the model is training correctly. It is also possible to visualize the training progress using TensorBoard, as shown below:

[Figure: TensorBoard plot of the increase in accuracy over 10 epochs]

I hope this tutorial was instructive and helps get you going on the TensorFlow journey.  Just a reminder, you can check out the code for this post here.  I’ve also written an article that shows you how to build more complex neural networks such as convolutional neural networks, recurrent neural networks, and Word2Vec natural language models in TensorFlow.  You also might want to check out a higher level deep learning library that sits on top of TensorFlow called Keras – see my Keras tutorial.

Have fun!
