An introduction to TensorFlow queuing and threading

One of the great things about TensorFlow is its ability to handle multiple threads and therefore allow asynchronous operations.  If we have large datasets, this can significantly speed up the training process of our models.  This functionality is especially handy when reading, pre-processing and extracting mini-batches of our training data.  The secret to being able to do professional and high-performance training of our models is understanding TensorFlow queuing operations.  The particular queuing operations/objects we will be looking at in this tutorial are FIFOQueue, RandomShuffleQueue, QueueRunner, Coordinator, string_input_producer and shuffle_batch, but the concepts that I will introduce are common to the multitude of queuing and threading operations available in TensorFlow.

Note: While the content of this post is still relevant and is the core of TensorFlow’s efficient data consumption pipelines, there is an updated API called the Dataset API. After reading this post, it might be an idea to check out my post on the Dataset API too.

If you’re a beginner to TensorFlow, I’d recommend first checking out some of my other TensorFlow tutorials Python TensorFlow Tutorial – Build a Neural Network and/or Convolutional Neural Networks Tutorial in TensorFlow.  If you’re more of a video learning person, get up to speed with the online course below.


Recommended online course: If you want a video introduction to TensorFlow, I recommend the following inexpensive Udemy course: Complete Guide to TensorFlow for Deep Learning with Python


As usual, all the code for this post is on this site’s Github repository.

TensorFlow queuing and threads – introductory concepts

We know from everyday experience that certain tasks can be performed in parallel, and when we do such tasks in parallel we can get great reductions in the time it takes to complete complex jobs.  The same is true in computing – often our CPU will get stuck waiting for the completion of a single task, such as reading in data from a file or database, and this blocks any other tasks from occurring in the program.  Needless to say, this impacts performance and doesn’t utilize our CPUs effectively.

These types of issues are tackled in computing by using threading.  Threading involves multiple tasks running asynchronously – when one thread is blocked, another thread gets to run.  When we have multiple CPU cores, different threads can also run truly in parallel, at the same time.  Unfortunately, threading is notoriously difficult to manage, especially in Python.  Thankfully, TensorFlow has come to the rescue and provided us with a means of including threading in our input data processing.
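
To make the idea concrete before getting to TensorFlow, here is what running a blocking task on a separate thread looks like in plain Python – a minimal sketch using the standard threading module (slow_load is just a made-up stand-in for a slow read):

import threading
import time

def slow_load():
    # stand-in for a blocking task, e.g. reading a large file from disk
    time.sleep(2)
    print("worker thread: data loaded")

# run the slow task on a worker thread so the main thread isn't blocked
worker = threading.Thread(target=slow_load)
worker.start()
print("main thread: carrying on with other work while the data loads")
worker.join()  # wait for the worker to finish before exiting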

In fact, TensorFlow has released a performance guide which specifically recommends the use of threading when inputting data to our training processes.  Their method of threading is called Queuing.  Often when you read introductory tutorials on TensorFlow (mine included), you won’t hear about TensorFlow queuing.  Instead, you’ll see the following feed_dict syntax as the method of feeding data into the training graph:

sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

Here data is fed into the final training operation via the feed_dict argument.  TensorFlow, in its performance guide, specifically discourages the use of the feed_dict method.  It’s great for tutorials if you want to focus on core TensorFlow functionality, but not so good for overall performance.  This tutorial will introduce you to the concept of TensorFlow queuing.

What are TensorFlow queues exactly?  They are data storage objects which can be loaded and unloaded with information asynchronously using threads.  This allows us to stream data into our training algorithms more seamlessly, as loading and unloading of data can be performed at the same time (or when one thread is blocking) – with our queue being “topped up” with new data when required to ensure a steady stream of data.  This process will be shown more fully below, as I introduce different TensorFlow queuing concepts.

The first TensorFlow queue that I will introduce is the first-in, first-out queue called FIFOQueue.

The FIFOQueue – first in, first out

The illustration below, from the TensorFlow website,  shows a FIFOQueue in action:

A FIFOQueue in action

Here is what is happening in the gif above – first a FIFOQueue object is created with a capacity of 3 and a data type of float.  An enqueue_many operation is then performed on the queue – this basically loads up the queue to capacity with the vector [0, 0, 0].  Next, the code creates a dequeue operation – where the first value to enter the queue is unloaded.  The next operation simply adds 1 to the dequeued value.  The last operation adds this incremented number back onto the end of the FIFOQueue to “top it up” – making sure it doesn’t run out of values to dequeue.  These operations are then run and you can see the result – a kind of slowly incrementing counter.
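
In TensorFlow 1.x code, the counter in the animation corresponds roughly to the following – a minimal sketch, closely following the example in the TensorFlow documentation:

import tensorflow as tf

# a first-in, first-out queue with room for three float32 values
q = tf.FIFOQueue(capacity=3, dtypes=tf.float32)
# fill the queue to capacity with zeros
init = q.enqueue_many(([0., 0., 0.],))
# dequeue the oldest value, add one to it and enqueue the result at the back
x = q.dequeue()
y = x + 1
q_inc = q.enqueue([y])

with tf.Session() as sess:
    sess.run(init)
    for _ in range(5):
        # fetch y so we can see the value, and q_inc so it is enqueued again
        val, _ = sess.run([y, q_inc])
        print(val)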

Let’s have another look at how this works with a slightly more involved example:

dummy_input = tf.random_normal([3], mean=0, stddev=1)
dummy_input = tf.Print(dummy_input, data=[dummy_input],
                           message='New dummy inputs have been created: ', summarize=6)
q = tf.FIFOQueue(capacity=3, dtypes=tf.float32)
enqueue_op = q.enqueue_many(dummy_input)
data = q.dequeue()
data = tf.Print(data, data=[q.size()], message='This is how many items are left in q: ')
# create a fake graph that we can call upon
fg = data + 1

In the code example above, I first create a random normal tensor of size 3, and then I create a printing operation so we can see what values have been randomly selected.  After that, I set up a FIFOQueue, with capacity = 3 as in the example above.  I enqueue all three values of the random tensor in the enqueue_op.  Then I immediately attempt to dequeue a value from q and assign it to data.  Another print operation follows and then I create basically a fake graph, where I simply add 1 to the dequeued data variable.  This step is required so TensorFlow knows that it needs to execute all the preceding operations which lead up to producing data.  Next, we start up a session and run:

with tf.Session() as sess:
    # first load up the queue
    sess.run(enqueue_op)
    # now dequeue a few times, and we should see the number of items
    # in the queue decrease
    sess.run(fg)
    sess.run(fg)
    sess.run(fg)
    # by this stage the queue will be empty - if we run fg again, the
    # dequeue will block, waiting for new data
    sess.run(fg)
    # this will never print:
    print("We're here!")

All that is performed in the code above is running the enqueue_many operation (enqueue_op) which loads up our queue to capacity, and then we run the fake graph operation, which involves emptying our queue of values, one at a time.  After we’ve run this operation a few times the queue will be empty – if we try and run the operation again, the main thread of the program will hang or block – this is because it will be waiting for another operation to be run to put more values in the queue.  As such, the final print statement is never run.  The output looks like this:

New dummy inputs have been created: [0.73847228 0.086355612 0.56138796]
This is how many items are left in q: [3]
This is how many items are left in q: [2]
This is how many items are left in q: [1]

Once the output gets to the point above you’ll actually have to terminate the program, as it is blocked.  Now, this isn’t very useful.  What we really want to happen is for our little program to reload or enqueue more values whenever our queue is empty or is about to become empty.  We could fix this by explicitly running our enqueue_op again in the code above to reload our queue with values, as in the sketch below.
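
A minimal sketch of that manual approach, reusing the q, enqueue_op and fg operations defined above, might look like this:

with tf.Session() as sess:
    for _ in range(3):
        # manually refill the queue with three new random values
        sess.run(enqueue_op)
        # drain the three values we just enqueued
        sess.run(fg)
        sess.run(fg)
        sess.run(fg)
    # this now prints, because we never dequeue from an empty queue
    print("We're here!")

This works, but for large, more realistic programs, manually refilling queues like this will become unwieldy.  Thankfully, TensorFlow has a solution.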

QueueRunners and the Coordinator

The first object that TensorFlow has for us is the QueueRunner object.  A QueueRunner will control the asynchronous execution of enqueue operations to ensure that our queues never run dry.  Not only that, but it can create multiple threads of enqueue operations, all of which it will handle in an asynchronous fashion.  This makes things easy for us.  We have to add all our queue runners, after we’ve created them, to the GraphKeys collection called QUEUE_RUNNERS.  This is a collection of all the queue runners, and adding our runners to this collection allows TensorFlow to include them when constructing its computational graph (for more information on computational graphs check out my TensorFlow tutorial).  This is what the first half of our previous code example now looks like after incorporating these concepts:

dummy_input = tf.random_normal([5], mean=0, stddev=1)
dummy_input = tf.Print(dummy_input, data=[dummy_input],
                           message='New dummy inputs have been created: ', summarize=6)
q = tf.FIFOQueue(capacity=3, dtypes=tf.float32)
enqueue_op = q.enqueue_many(dummy_input)
# now setup a queue runner to handle enqueue_op outside of the main thread asynchronously
qr = tf.train.QueueRunner(q, [enqueue_op] * 1)
tf.train.add_queue_runner(qr)

data = q.dequeue()
data = tf.Print(data, data=[q.size(), data], message='This is how many items are left in q: ')
# create a fake graph that we can call upon
fg = data + 1

The first change is to increase the size of dummy_input – more on this later.  The most important change is the qr = tf.train.QueueRunner(q, [enqueue_op] * 1) operation.  The first argument in this definition is the queue we want to run – in this case, it is the q assigned to the creation of our FIFOQueue object.  The next argument is a list argument, and this specifies how many enqueue operation threads we want to create.  In this case, the “* 1” is not strictly required, but it is meant to be illustrative – I am just creating a single enqueuing thread which will run asynchronously with the main thread of the program.  If I wanted to create, say, 10 threads, this line would look like:

qr = tf.train.QueueRunner(q, [enqueue_op] * 10)

The next addition is the add_queue_runner operation which adds our queue runner (qr) to the QUEUE_RUNNERS collection.

At this point, you may think that we are all set – but not quite.  Finally, we have to add a TensorFlow object called a Coordinator.  A coordinator object helps to make sure that all the threads we create stop together – this is important at any point in our program where we want to bring all the multiple threads together and rejoin the main thread (usually at the end of the program).  It is also important if an exception occurs on one of the threads – we want this exception broadcast to all of the threads so that they all stop.  More on the Coordinator object can be found here – in our code, we will be implementing it rather naively.  The session part of our example now looks like this:

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    # now dequeue a few times, and we should see the number of items
    # in the queue decrease
    sess.run(fg)
    sess.run(fg)
    sess.run(fg)
    # previously the main thread blocked / hung at this point, as it was waiting
    # for the queue to be filled.  However, it won't this time around, as we
    # now have a queue runner on another thread making sure the queue is
    # filled asynchronously
    sess.run(fg)
    sess.run(fg)
    sess.run(fg)
    # this will print, but not necessarily after the 6th call of sess.run(fg)
    # due to the asynchronous operations
    print("We're here!")
    # we have to request all threads now stop, then we can join the queue runner
    # thread back to the main thread and finish up
    coord.request_stop()
    coord.join(threads)

The first line creates a generic Coordinator object and the second starts our queue runners, specifying our coordinator object which will handle the stopping of the threads.  We can now run sess.run(fg) as many times as we like, with the queue runners ensuring that the FIFOQueue always has data in it when we need it – it will no longer hang or block.  Finally, once we are done we ask the threads to stop (coord.request_stop()) and then we ask the coordinator to join the threads back into the main program thread (coord.join(threads)).  The output looks like this:

New dummy inputs have been created: [-0.81459045 -1.9739552 -0.9398123 1.0848273 1.0323733]
This is how many items are left in q: [0][-0.81459045]
This is how many items are left in q: [3][-1.9739552]
New dummy inputs have been created: [-0.03232909 -0.34122062 0.85883951 -0.95554483 1.1082178]
This is how many items are left in q: [3][-0.9398123]
We're here!
This is how many items are left in q: [3][1.0848273]
This is how many items are left in q: [3][1.0323733]
This is how many items are left in q: [3][-0.03232909]

The first thing to notice about the above is that the printing of outputs is all over the place i.e. not in a linear order.  This is because of the asynchronous running of the threads and enqueuing operations.  The second thing to notice is that our dummy inputs are of size 5, while our queue only has a capacity of 3.  In other words, when we run the enqueue_many operation we, in a sense, overflow the queue.  You’d think that this would result in the overflowed values being discarded (or an exception being raised), but if you look at the flow of outputs carefully, you can see that the enqueue operation simply blocks, holding the extra values in “stasis” until there is room in the queue to load them.  This is a pretty robust way for TensorFlow to handle things.

Ok, so that’s a good introduction to the main concepts of queues and threading in TensorFlow.  Now let’s look at using these objects in a more practical example.

A more practical example – reading the CIFAR-10 dataset

The CIFAR-10 dataset is a series of labeled images which contain objects such as cars, planes, cats, dogs etc.  It is a frequently used benchmark for image classification tasks.  It is a large dataset (166MB) and is a prime example of where a good data streaming queuing routine is needed for high performance.  In the following example, I am going to show how to read in this data using a FIFOQueue and create data-batches using another queue object called a RandomShuffleQueue.  To learn more about batching, have a look at my Stochastic Gradient Descent tutorial.  Included in the code example is a number of steps required to process the images, but I am not going to concentrate on these steps in this tutorial – that’s fodder for a future post.  Rather, I will focus on the queuing aspects.  The code will include a number of steps:

  1. Create a list of filenames which hold the CIFAR-10 data
  2. Create a FIFOQueue to hold the randomly shuffled filenames, and associated enqueuing
  3. Dequeue files and extract image data
  4. Perform image processing
  5. Enqueue processed image data into a RandomShuffleQueue
  6. Dequeue data batches for classifier training (the classifier training won’t be covered in this tutorial – that’s for a future post)

This process will closely resemble the following gif, again from the TensorFlow site:

Filename and processing queue

The main flow of the program looks like this:

def cifar_shuffle_batch():
    batch_size = 128
    num_threads = 16
    # create a list of all our filenames
    filename_list = [data_path + 'data_batch_{}.bin'.format(i + 1) for i in range(5)]
    # create a filename queue
    file_q = cifar_filename_queue(filename_list)
    # read the data - this contains a FixedLengthRecordReader object which handles the
    # de-queueing of the files.  It returns a processed image and label, with shapes
    # ready for a convolutional neural network
    image, label = read_data(file_q)
    # setup minimum number of examples that can remain in the queue after dequeuing before blocking
    # occurs (i.e. enqueuing is forced) - the higher the number the better the mixing but
    # longer initial load time
    min_after_dequeue = 10000
    # setup the capacity of the queue - this is based on recommendations by TensorFlow to ensure
    # good mixing
    capacity = min_after_dequeue + (num_threads + 1) * batch_size
    image_batch, label_batch = cifar_shuffle_queue_batch(image, label, batch_size, capacity, min_after_dequeue, num_threads)
    # now run the training
    cifar_run(image_batch, label_batch)

I’ll go through each of the main queuing steps below.

The filename queue

First, after defining a few parameters, we create a filename list to pull in the 5 binary data files which comprise the CIFAR-10 data set.  Then we run the cifar_filename_queue() function which I’ve created – it looks like this:

def cifar_filename_queue(filename_list):
    # convert the list to a tensor
    string_tensor = tf.convert_to_tensor(filename_list, dtype=tf.string)
    # randomize the tensor
    string_tensor = tf.random_shuffle(string_tensor)
    # create the queue
    fq = tf.FIFOQueue(capacity=10, dtypes=tf.string)
    # create our enqueue_op for this q
    fq_enqueue_op = fq.enqueue_many([string_tensor])
    # create a QueueRunner and add to queue runner list
    # we only need one thread for this simple queue
    tf.train.add_queue_runner(tf.train.QueueRunner(fq, [fq_enqueue_op] * 1))
    return fq

The first thing that is performed in the above function is to convert the filename_list to a tensor.  Then we randomly shuffle the list and create a capacity = 10 FIFOQueue.  We then enqueue fq with our tensor of randomly shuffled file names and add a queue runner.  This is all pretty straightforward and produces a randomly shuffled queue of filenames to dequeue from.  We only need one thread to perform this operation, as it is pretty simple.  We return the filename queue, fq, from the function.
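
To sanity check the filename queue in isolation, something like the following sketch could be used (this assumes data_path and cifar_filename_queue are defined as above; the printed order reflects the shuffle):

filename_list = [data_path + 'data_batch_{}.bin'.format(i + 1) for i in range(5)]
file_q = cifar_filename_queue(filename_list)
next_file = file_q.dequeue()

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    # pull a few filenames off the queue
    for _ in range(3):
        print(sess.run(next_file))
    coord.request_stop()
    coord.join(threads)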

Next up in the main flow of our program is the read_data function.

The FixedLengthRecordReader

The read_data function takes the filename queue, dequeues file names and extracts the image and label data from the CIFAR-10 data set.  Most of the function deals with preprocessing the image data, so we’ll skip over most of it (you can have a look at the code on Github if you like).  However, there is a special TensorFlow object that we want to pay attention to:

reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
result.key, value = reader.read(file_q)

The FixedLengthRecordReader is a TensorFlow reader which is especially useful for reading binary files, where each record or row is a fixed number of bytes.  Previously in read_data the number of bytes per record or data file row is calculated and stored in record_bytes.  Of particular note is that this reader also implicitly handles the dequeuing operation from file_q (our filename queue).  So we don’t have to worry about explicitly dequeuing from our filename queue.  The reader will also parse the files it dequeues and return the image data.  The rest of the read_data function deals with shaping up the image and label data from the raw binary information.  Note that the read_data function returns a single image and label record, of size (24, 24, 3) and (1), respectively.  The image size, (24, 24, 3), represents a 24 x 24 pixel image, with an RGB depth of 3.
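
For reference, a condensed sketch of a read_data style function is shown below.  It assumes the standard CIFAR-10 binary layout (1 label byte followed by a 32 x 32 x 3 = 3,072 byte image per record) – the full version on the Github repository does some additional image processing, but the queue-related parts are the same:

def read_data(file_q):
    # CIFAR-10 binary layout: 1 label byte + 32 * 32 * 3 image bytes per record
    label_bytes = 1
    height, width, depth = 32, 32, 3
    image_bytes = height * width * depth
    record_bytes = label_bytes + image_bytes
    # the reader implicitly dequeues filenames from file_q as it needs them
    reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
    key, value = reader.read(file_q)
    # decode the raw bytes and split the record into label and image
    record = tf.decode_raw(value, tf.uint8)
    label = tf.cast(tf.strided_slice(record, [0], [label_bytes]), tf.int32)
    label.set_shape([1])
    image = tf.reshape(tf.strided_slice(record, [label_bytes], [record_bytes]),
                       [depth, height, width])
    # convert from [depth, height, width] to [height, width, depth] and to float
    image = tf.cast(tf.transpose(image, [1, 2, 0]), tf.float32)
    # crop to the (24, 24, 3) shape used in this post
    image = tf.image.resize_image_with_crop_or_pad(image, 24, 24)
    return image, label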

The next step in the main flow of the program is to set up the minimum number of examples in the upcoming RandomShuffleQueue.

The minimum number of examples in the RandomShuffleQueue

When we want to extract randomized batch data from a queue which is fed by a queue of filenames, we want to make sure that the data is truly randomized across the data set.  To ensure this occurs, we want new data flowing into the randomized queue regularly.  TensorFlow handles this by including an argument for the RandomShuffleQueue called min_after_dequeue.  If, after a dequeuing operation, the number of examples or samples in the queue falls below this value it will block any further dequeuing until more samples are added to the queue.  In other words, it will force an enqueuing operation.  TensorFlow has some things to say about what our queue capacity and min_after_dequeue values should be to ensure good mixing when extracting random batch samples in their documentation.  In our case, we will follow their recommendations:

# setup minimum number of examples that can remain in the queue after dequeuing before blocking
# occurs (i.e. enqueuing is forced) - the higher the number the better the mixing but
# longer initial load time
min_after_dequeue = 10000
# setup the capacity of the queue - this is based on recommendations by TensorFlow to ensure
# good mixing
capacity = min_after_dequeue + (num_threads + 1) * batch_size

The RandomShuffleQueue

We now want to setup our RandomShuffleQueue which enables us to extract randomized batch data which can then be fed into our convolutional neural network or some other training graph.  The RandomShuffleQueue is similar to the FIFOQueue, in that it involves the same sort of enqueuing and dequeuing operations.  The only real difference is that the RandomShuffleQueue dequeues elements in a random manner.  This is obviously useful when we are training our neural networks using mini-batches.  The implementation of this functionality is in my function cifar_shuffle_queue_batch, which I reproduce below:

def cifar_shuffle_queue_batch(image, label, batch_size, capacity, min_after_dequeue, threads):
    tensor_list = [image, label]
    dtypes = [tf.float32, tf.int32]
    shapes = [image.get_shape(), label.get_shape()]
    q = tf.RandomShuffleQueue(capacity=capacity, min_after_dequeue=min_after_dequeue,
                              dtypes=dtypes, shapes=shapes)
    enqueue_op = q.enqueue(tensor_list)
    # add to the queue runner
    tf.train.add_queue_runner(tf.train.QueueRunner(q, [enqueue_op] * threads))
    # now extract the batch
    image_batch, label_batch = q.dequeue_many(batch_size)
    return image_batch, label_batch

We first create a variable called tensor_list, which is simply a list of the image and label data – this is the data which is enqueued to the RandomShuffleQueue.  We then specify the data types and tensor shapes which match this data and are required as input to the RandomShuffleQueue definition.  Because of the large volume of data, we set up 16 threads for this queue.  The enqueuing and adding to the QUEUE_RUNNERS collection operations are things we have seen before.  In the final line of the function, we perform a dequeue_many operation and the number of examples we dequeue is equal to the batch size we desire for our training.  Finally, the image batches and label batches are returned as a tuple.

All that is left now is to specify the session which runs our operations.

Running the operations

The final function I created in the main flow of the program is called cifar_run:

def cifar_run(image, label):
    with tf.Session() as sess:
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        for i in range(5):
            image_batch, label_batch = sess.run([image, label])
            print(image_batch.shape, label_batch.shape)

        coord.request_stop()
        coord.join(threads)

In this function, all I do is run the operations which were passed into this function – image and label.  Remember that to execute these operations, the dequeue_many operation must be run for the RandomShuffleQueue along with all the preceding operations in the computational graph (i.e. pre-processing, file name queue etc.).  Running these operations returns the actual batch data, and I then print the shape of these batches.  I perform 5 batch extractions, but one could perform an indefinite number of these extractions, with the enqueuing and dequeuing all being taken care of via the queue runners.  The output looks like this:

(128, 24, 24, 3) (128, 1)
(128, 24, 24, 3) (128, 1)
(128, 24, 24, 3) (128, 1)
(128, 24, 24, 3) (128, 1)
(128, 24, 24, 3) (128, 1)

This output isn’t very interesting – but it shows you that the whole queuing process is working as it should – each time returning 128 examples (128 is our specified batch size) of image and label data.  You can also look at each batch and find that the data is indeed randomized as we had hoped it would be.  So there you have it, you now know how TensorFlow queuing and threads work.

In the above explanation, for illustrative purposes, I’ve actually shown you the long way of creating filename and random batch shuffle queues.  TensorFlow has created a couple of helper functions which reduce the amount of code we need to implement these queues.

The string_input_producer and shuffle_batch

There are two queue helpers in TensorFlow which basically replicate the functionality of my custom functions which utilize FIFOQueue and RandomShuffleQueue.  These functions are string_input_producer, which takes a list of filenames and creates a FIFOQueue with enqueuing implicitly provided, and shuffle_batch, which creates a RandomShuffleQueue with enqueuing and batch-sized dequeuing already provided.  In my main program (cifar_shuffle_batch) you can replace my cifar_filename_queue and cifar_shuffle_queue_batch functions with calls to string_input_producer and shuffle_batch respectively, like so:

# file_q = cifar_filename_queue(filename_list)
file_q = tf.train.string_input_producer(filename_list)

and:

# image_batch, label_batch = cifar_shuffle_queue_batch(image, label, batch_size, capacity, min_after_dequeue, num_threads)
image_batch, label_batch = tf.train.shuffle_batch([image, label], batch_size, capacity, min_after_dequeue,
                                                      num_threads=num_threads)

By running the script (at the Github repository here) with these replacements, you will get the same results as before.
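
For reference, with these two helpers the main flow of the program condenses to something like the following sketch (read_data and cifar_run are unchanged from above):

def cifar_shuffle_batch():
    batch_size = 128
    num_threads = 16
    min_after_dequeue = 10000
    capacity = min_after_dequeue + (num_threads + 1) * batch_size
    filename_list = [data_path + 'data_batch_{}.bin'.format(i + 1) for i in range(5)]
    # string_input_producer creates the shuffled filename queue and its queue runner
    file_q = tf.train.string_input_producer(filename_list)
    # read_data is unchanged - the FixedLengthRecordReader dequeues from file_q
    image, label = read_data(file_q)
    # shuffle_batch creates the RandomShuffleQueue, its queue runners and the
    # batch-sized dequeue operation in a single call
    image_batch, label_batch = tf.train.shuffle_batch(
        [image, label], batch_size, capacity, min_after_dequeue,
        num_threads=num_threads)
    cifar_run(image_batch, label_batch)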

We have now covered how TensorFlow queuing and threading works.  I hope you now feel confident to implement these concepts in your TensorFlow programs, which will allow you to build high-performance TensorFlow training algorithms.  As always, have fun.


Recommended online course: If you want learn more about TensorFlow and like video courses I recommend the following inexpensive Udemy course: Complete Guide to TensorFlow for Deep Learning with Python


 
