Series Introduction

This blog is part of "A Guide To TensorFlow", where we explore the TensorFlow API and use it to build multiple machine learning models for real-life examples. In this blog we shall learn about TensorFlow placeholders and variables, and quickly learn to use TensorBoard, which is one of the best features of TensorFlow.
Check out the other parts of the series: Part 1, Part 2


In the past two guides, we learned how to build a basic computational graph and run it in a session. But so far we have only used tf.constant, where the tensor values are specified in the code itself. It is important to be able to define our operations and build our computation graph irrespective of the available data; in other words, we need a way to provide data dynamically from a client program or a helper function. We do that with what is called a "placeholder". Placeholders, as the name implies, act as if they were Tensor objects, but they do not have their values specified when created. Instead, they hold the place for a Tensor that will be fed at runtime, in effect becoming an "input" node. Placeholders are created using the tf.placeholder Operation.
tf.placeholder takes in the following parameters:

  • dtype: The type of elements in the tensor to be fed.
  • shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape.
  • name: A name for the operation (optional).

The following code shows how to create a placeholder.

# Creates a placeholder vector of length 2 with data type int32
a = tf.placeholder(tf.int32, shape=[2], name="my_input")

After this we can use the placeholder as if it were any other Tensor object. Evaluating this tensor directly, however, produces an error: its value must be supplied through the optional feed_dict argument to Session.run(), Tensor.eval(), or other evaluation functions. We use the handle to the placeholder's output as the key of the dictionary (in the above code, the variable a), and the Tensor object we want to pass in as its value:

import numpy as np

# Create a dictionary to pass into `feed_dict`
# Key: `a`, the handle to the placeholder's output Tensor
# Value: A vector with value [5, 3] and int32 data type
input_dict = {a: np.array([5, 3], dtype=np.int32)}

# Fetch the value of `a`, feeding `input_dict` into the placeholder
with tf.Session() as sess:
    print(sess.run(a, feed_dict=input_dict))
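The placeholder-and-feed_dict pattern mirrors an ordinary function call. Here is a plain-Python analogy (not TensorFlow code) using the same graph we built in the first guide:

```python
# Plain-Python analogy: a placeholder behaves like a function parameter.
# The "graph" is defined without knowing the input's value; the value is
# "fed" only when the function is called.
def run_graph(a):       # 'a' plays the role of the placeholder
    b = sum(a)          # like tf.reduce_sum
    c = a[0] * a[1]     # like tf.reduce_prod on a length-2 vector
    return b + c        # like tf.add

# Feeding a value at "run time", just like feed_dict
result = run_graph([5, 3])  # 8 + 15 = 23
print(result)
```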


A placeholder is used to feed data into our Ops. Tensor and Operation objects are immutable, but often there are parameters or variables that change frequently over multiple operations; machine learning tasks, by their nature, need a mechanism to save changing values over time. This is accomplished in TensorFlow with Variable objects, which contain mutable tensor values that persist across multiple calls to Session.run(); that is, a variable maintains its state in the graph across calls to run(). You can create a Variable by using its constructor, tf.Variable():

import tensorflow as tf 
# Pass in a starting value of three for the variable
my_var = tf.Variable(3, name="my_variable")
# Variables can be used in TensorFlow anywhere you might use a Tensor
add = tf.add(5, my_var)
mul = tf.multiply(8, my_var)

The Variable() constructor requires an initial value for the variable, which can be a Tensor of any type and shape. Often we use a Tensor of zeros, ones, or random numbers. We can make use of one of the various helper Ops in TensorFlow to create these common values. Some of these helper Ops are:

  • tf.zeros()
  • tf.ones()
  • tf.random_normal()
  • tf.random_uniform()

Each of these takes in a shape parameter which specifies the dimension of the desired Tensor.
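Since these helpers mirror their NumPy counterparts, a quick NumPy sketch may help build intuition for how the shape parameter behaves (this illustrates the shapes only, not the TensorFlow Ops themselves):

```python
import numpy as np

# NumPy counterparts of the helper Ops, shown only to illustrate `shape`
zeros = np.zeros([2, 3])                       # 2x3 tensor of 0.0
ones = np.ones([4])                            # length-4 vector of 1.0
normal = np.random.normal(0.0, 1.0, [3, 3])    # 3x3, mean 0, stddev 1
uniform = np.random.uniform(0.0, 1.0, [2, 2])  # 2x2, values in [0, 1)

print(zeros.shape, ones.shape, normal.shape, uniform.shape)
```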

Random Variables

In future blogs we will be building neural networks, where we initialize the weights with random values; this is where tf.random_normal() comes in really handy. We can use the function as follows:

# 3x3x3 Tensor of normally distributed numbers; mean 0 and standard deviation 2
normal = tf.random_normal([3, 3, 3], mean=0.0, stddev=2.0)

Note: It is common practice to use tf.truncated_normal() instead of tf.random_normal(), as it does not create any values more than two standard deviations away from the mean. This prevents one or two numbers from being significantly different from the other values in the tensor. tf.truncated_normal() therefore selects random numbers from a normal distribution whose values stay close to the mean; for example, with mean 0.0 and stddev 0.05, all values lie within -0.1 to 0.1.
In machine learning we usually want our weights to be close to 0 to help prevent dead neurons.
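To see what the truncation does, here is a NumPy sketch that mimics tf.truncated_normal() by redrawing any sample that lands more than two standard deviations from the mean (an illustration of the idea, not TensorFlow's actual implementation):

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=1.0):
    """Redraw samples until all lie within 2 standard deviations of the mean."""
    samples = np.random.normal(mean, stddev, shape)
    out_of_range = np.abs(samples - mean) > 2 * stddev
    while out_of_range.any():
        samples[out_of_range] = np.random.normal(mean, stddev, out_of_range.sum())
        out_of_range = np.abs(samples - mean) > 2 * stddev
    return samples

weights = truncated_normal([3, 3], mean=0.0, stddev=0.05)
# Every value lies within [-0.1, 0.1]
```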

Initializing Variables

When we launch the graph, variables have to be explicitly initialized before we can run Ops that use their values. There is an initializer Op that lets us do that. There are other means of initializing variables, such as restoring the variable from a save file, or simply running an assign Op that assigns a value to the variable. The reason we have to do this is that Variable objects live in the Graph like most other TensorFlow objects, but their state is actually managed by a Session, making it compulsory to initialize the Variable within a Session. This causes the Session to start keeping track of the ongoing value of the Variable. This is typically done by passing the tf.global_variables_initializer() Operation to Session.run():

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)

In some situations we might have to initialize only a particular set of variables; in such cases we can use tf.variables_initializer(), which takes in a list of Variables to be initialized:

var1 = tf.Variable(0, name="initialize_me")
var2 = tf.Variable(1, name="no_initialization")
init = tf.variables_initializer([var1], name="init_var1")
with tf.Session() as sess:
    sess.run(init)

If we need to create a variable whose initial value depends on another variable, we can use the other variable's initialized_value(). This ensures that variables are initialized in the right order. For example:

# An existing variable
old = tf.Variable(tf.random_normal([2, 2]), name="old")
# Create another variable with the same value as 'old'
new_one = tf.Variable(old.initialized_value(), name="new_one")
# Now the new variable can be initialized
init = tf.variables_initializer(var_list=[new_one])

Modifying Variables

Variable.assign() is an Op that can be used to change the value of a Variable. Since it is an Operation, it must be run in a Session to take effect. It outputs a Tensor that holds the new value of ref after the assignment, which makes it easier to chain operations that need to use the updated value. Basic usage of Variable.assign():

# Create variable with starting value of 1
my_var = tf.Variable(1)

# Create an operation that multiplies the variable by 2 each time it is run
my_var_times_two = my_var.assign(my_var * 2)

Variable.assign() takes the following parameters:

  • ref: It is a mutable Tensor (Variable) whose value we wish to change. ref may be uninitialized.
  • value: It is a Tensor whose value we want the ref to take. It must have the same type as ref.
  • validate_shape: An optional bool. If true, the operation will validate that the shape of 'value' matches the shape of the Tensor being assigned to. If false, 'ref' will take on the shape of 'value'. Default value is True.
  • use_locking: An optional bool. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. Default value is True.
  • name: A name for the operation (optional).

Every time Variable.assign() is run in a Session, the value is modified; running it twice applies the assign operation twice to the original variable. Let's use the previous example to illustrate this:

# Create variable with starting value of 1
my_var = tf.Variable(1)

# Create an operation that multiplies the variable by 2 each time it is run
my_var_times_two = my_var.assign(my_var * 2)

# Initialization operation
init = tf.global_variables_initializer()

# Start a session
with tf.Session() as sess:
    sess.run(init)                     # Initializing the Variable
    print(sess.run(my_var_times_two))  # output = 2
    print(sess.run(my_var_times_two))  # output = 4
    print(sess.run(my_var_times_two))  # output = 8
    print(sess.run(my_var_times_two))  # output = 16

Note: Because the state of a Variable is maintained by a Session, each Session can have its own current value for a Variable defined in a graph.
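This separation between the graph (which holds only the Variable's definition) and the Session (which holds its current value) can be pictured with a plain-Python analogy (hypothetical classes, not the TensorFlow API):

```python
# Plain-Python analogy: the graph holds only the Variable's initial value;
# each "session" keeps its own current value for it.
graph_definition = {"my_var": 1}  # initial value, defined once in the graph

class FakeSession:
    def __init__(self, graph):
        self.values = dict(graph)  # initialization copies the initial values

    def run_times_two(self, name):
        self.values[name] *= 2     # like running my_var.assign(my_var * 2)
        return self.values[name]

sess1 = FakeSession(graph_definition)
sess2 = FakeSession(graph_definition)
sess1.run_times_two("my_var")      # sess1's my_var is now 2
print(sess2.values["my_var"])      # sess2 still sees 1
```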

For simple incrementing and decrementing of Variables, TensorFlow includes the Variable.assign_add() and Variable.assign_sub() methods:

# Increment by 1
sess.run(my_var.assign_add(1))

# Decrement by 1
sess.run(my_var.assign_sub(1))

Trainable Variables

The variables we create are "trainable" by default; this means that various Optimizer classes (which will be covered in the future) can modify them multiple times over the course of execution, without us explicitly asking them to. At times we need Variables in our graph that should only be changed manually and never by the Optimizer classes. In such cases we can make the variables non-trainable.
To create a non-trainable Variable, we simply set the trainable parameter to False when creating it:

# Creating a non-trainable variable
non_trainable = tf.Variable(0, trainable=False)

There is no easy way to change a variable from trainable to non-trainable or vice versa. There is also no easy way to check whether a variable is trainable: you need to check whether it appears in the list returned by tf.trainable_variables().

Making variables non-trainable is typically done for step counters or anything else that isn't going to be involved in the calculation of a machine learning model.


Let's look at the graph we built in our first blog.
Our Graph
Here's the code for the graph; we add a new statement in our session, tf.summary.FileWriter(), and then we close that writer.

import tensorflow as tf

a = tf.constant([5,3], name="input_a")
b = tf.reduce_sum(a, name="sum_b")
c = tf.reduce_prod(a, name="prod_c")
d = tf.add(b,c, name="add_d")

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./my_graph', sess.graph)
    sess.run(d)
    writer.close()

After we run this Python file we will see no output. The next step is to enter the following command in the terminal, from the same directory as the Python file: tensorboard --logdir="my_graph"

Don't be alarmed by the "No scalar data was found" warning message. That just means we didn't save out any summary statistics for TensorBoard to display. That's fine, though, as we're here to admire our beautiful graph. Click on the "Graphs" link at the top of the page, and you should see a graph similar to this:

TensorBoard Output

Name Scopes

Real-world models contain a lot of nodes, and even with the name parameter, keeping track of all the variables and their context is a complex job. To manage this level of complexity, TensorFlow offers a mechanism called name scopes, which allows us to group Operations into larger, named blocks. When you launch your graph in TensorBoard, each name scope encapsulates its own Ops, making the visualization much more digestible. For basic name scope usage, simply add your Operations inside a with tf.name_scope(<name>) block:

import tensorflow as tf

with tf.name_scope("Scope_A"):
    a = tf.add(1, 2, name="A_add")
    b = tf.multiply(a, 3, name="A_mul")

with tf.name_scope("Scope_B"):
    c = tf.add(4, 5, name="B_add")
    d = tf.multiply(c, 6, name="B_mul")

e = tf.add(b, d, name="output")

Logging this graph to TensorBoard, we get an output similar to the following:

This is all we need to know for now; we will explore more of TensorBoard in the future.

In the next guide we will explore an example that uses all of the components we’ve discussed: Tensors, Graphs, Operations, Variables, placeholders, Sessions, and name scopes. We’ll also include some TensorBoard summaries so we can keep track of the graph as it runs.