Linear Regression using TensorFlow

This guest post by Giancarlo Zaccone, the author of Deep Learning with TensorFlow, shows how to run linear regression on a real-world dataset using TensorFlow.

In statistics and machine learning, linear regression is a technique that's frequently used to measure the relationship between variables. It is a simple and effective algorithm that can also be used for predictive modeling.

Linear regression models the relationship between a dependent variable, yi, an independent variable, xi, and a random term, b. This can be written as follows:

yi = W*xi + b

Here, W is the weight applied to the independent variable.
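To make the model concrete before turning to TensorFlow, here is a minimal NumPy sketch that recovers W and b from noisy samples using the closed-form least-squares solution. This is an illustrative toy example only; the data and variable names are made up for this sketch and do not appear in the book:

import numpy as np

# Toy data generated from y = 2x + 1 with a little noise
x = np.linspace(0, 4, 50)
y = 2.0 * x + 1.0 + np.random.normal(scale=0.1, size=x.shape)

# Append a column of 1s so the bias b is learned as an extra weight
X = np.c_[x, np.ones_like(x)]

# Least-squares solution of X [W, b]^T = y
W, b = np.linalg.lstsq(X, y, rcond=None)[0]
print(W, b)  # should come out close to 2 and 1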

In this article, you’ll see an example of linear regression using TensorFlow with a real dataset. Many datasets are available online to test regression; one of them is the Boston housing dataset, which can be downloaded from the UCI Machine Learning Repository at https://archive.ics.uci.edu/ml/datasets/Housing. It is also available as a preprocessed dataset with scikit-learn.

Running linear regression on a real dataset

Start by importing all the required libraries, including TensorFlow, NumPy, Matplotlib, and scikit-learn:

import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split

Next, prepare the training set consisting of features and labels from the Boston housing dataset. The read_boston_data() method loads the dataset from scikit-learn and returns the features and labels separately:

def read_boston_data():
    boston = load_boston()
    features = np.array(boston.data)
    labels = np.array(boston.target)
    return features, labels

Now that you have the features and labels, you need to normalize the features using the normalizer() method. Here is its implementation:

def normalizer(dataset):
    mu = np.mean(dataset, axis=0)
    sigma = np.std(dataset, axis=0)
    return (dataset - mu) / sigma
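As a quick sanity check (a toy example, not part of the original listing), you can confirm that normalizer() gives each column zero mean and unit standard deviation:

toy = np.array([[1.0, 10.0],
                [2.0, 20.0],
                [3.0, 30.0]])
print(normalizer(toy).mean(axis=0))  # approximately [0. 0.]
print(normalizer(toy).std(axis=0))   # approximately [1. 1.]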

The bias_vector() method is used to append the bias term (a column of all 1s) to the normalized features that you prepared in the previous step. It corresponds to the b term in the equation y = W*x + b:

def bias_vector(features, labels):
    n_training_samples = features.shape[0]
    n_dim = features.shape[1]
    f = np.reshape(np.c_[np.ones(n_training_samples), features],
                   [n_training_samples, n_dim + 1])
    l = np.reshape(labels, [n_training_samples, 1])
    return f, l
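To see what bias_vector() does to the shapes, here is a small illustrative check (the toy arrays are made up for this example):

toy_features = np.ones((4, 3))  # 4 samples, 3 features
toy_labels = np.arange(4.0)     # 4 target values
f, l = bias_vector(toy_features, toy_labels)
print(f.shape)  # (4, 4): a column of 1s has been prepended
print(l.shape)  # (4, 1)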

Now invoke these methods and split the dataset into training and testing—75% for training and the rest for testing:

features, labels = read_boston_data()
normalized_features = normalizer(features)
data, label = bias_vector(normalized_features, labels)
n_dim = data.shape[1]

# Train-test split
train_x, test_x, train_y, test_y = train_test_split(data, label, test_size=0.25, random_state=100)
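If everything worked, the shapes should line up as follows (a sanity check that is not part of the original listing). The Boston dataset has 506 samples and 13 features, so n_dim is 14 after adding the bias column, and a 75/25 split should give roughly 379 training and 127 test rows:

print(train_x.shape, train_y.shape)  # expected: (379, 14) (379, 1)
print(test_x.shape, test_y.shape)    # expected: (127, 14) (127, 1)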

Next, define the training parameters (learning rate and number of epochs) and the TensorFlow data structures: placeholders for the features and labels, and a variable for the weights:

learning_rate = 0.01
training_epochs = 100000
log_loss = np.empty(shape=[0], dtype=float)  # starts empty; one loss value is appended per epoch

X = tf.placeholder(tf.float32, [None, n_dim])  # takes any number of rows but n_dim columns
Y = tf.placeholder(tf.float32, [None, 1])      # takes any number of rows but only 1 continuous column
W = tf.Variable(tf.ones([n_dim, 1]))           # W weight vector

Well done! You have prepared the data structures required to construct the TensorFlow graph. Now it's time to construct the linear regression model, which is pretty straightforward:

y_ = tf.matmul(X, W)
cost_op = tf.reduce_mean(tf.square(y_ - Y))
training_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_op)

In the above code segment, the first line multiplies the features matrix by the weights matrix to produce the predictions. The second line computes the loss, which is the mean squared error of the regression. Finally, the third line performs one step of gradient descent (GD) optimization to minimize that error.
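What GradientDescentOptimizer does under the hood can be written out by hand. The following NumPy sketch (an illustrative helper, gd_step, which does not appear in the book) applies the same update, using the analytic gradient of the mean squared error:

def gd_step(W, X_batch, y_batch, lr):
    # One gradient-descent step on the MSE loss, mirroring the TF graph above
    preds = X_batch @ W                                            # y_ = XW
    grad = (2.0 / len(X_batch)) * (X_batch.T @ (preds - y_batch))  # d(MSE)/dW
    return W - lr * grad

Each call moves W a small step against the gradient, which is exactly what every run of training_step does in the session below.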

Note that before you start training the model, you need to initialize all the variables. The initialize_all_variables() method is deprecated in recent TensorFlow 1.x releases, so global_variables_initializer() is used here:

init_op = tf.global_variables_initializer()

Fantastic! Now that you've prepared all the components, you're ready to train the model. Create a TensorFlow session, initialize the variables, and run the training loop:

sess = tf.Session()
sess.run(init_op)

for epoch in range(training_epochs):
    sess.run(training_step, feed_dict={X: train_x, Y: train_y})
    log_loss = np.append(log_loss, sess.run(cost_op, feed_dict={X: train_x, Y: train_y}))

Once the training is completed, you can make predictions on unseen data. However, it's even more exciting to see a visual representation of the training progress, so plot the cost as a function of the number of iterations using Matplotlib:

plt.plot(range(len(log_loss)), log_loss)
plt.axis([0, training_epochs, 0, np.max(log_loss)])
plt.show()

Here's what the output of the above code looks like:

[Figure: training cost plotted against the number of epochs]

Make some predictions on the test dataset and calculate the mean squared error (MSE):

pred_y = sess.run(y_, feed_dict={X: test_x})
mse = tf.reduce_mean(tf.square(pred_y - test_y))
print("MSE: %.4f" % sess.run(mse))

The above code yields the following output:


MSE: 27.3749
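Since pred_y and test_y are both plain NumPy arrays at this point, the same figure can be reproduced without building an extra TensorFlow op. This one-liner is just an equivalent sanity check, not part of the original listing:

print("MSE: %.4f" % np.mean(np.square(pred_y - test_y)))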

The last thing to do is to plot the predicted values against the measured ones, along with a dashed reference line marking perfect predictions:

fig, ax = plt.subplots()
ax.scatter(test_y, pred_y)
ax.plot([test_y.min(), test_y.max()], [test_y.min(), test_y.max()], 'k--', lw=3)
ax.set_xlabel('Measured')
ax.set_ylabel('Predicted')
plt.show()

The following is the output of the above code:

[Figure: scatter plot of predicted versus measured values, with the dashed line marking perfect predictions]

If this article piqued your interest in deep learning and TensorFlow, or if you want to know more about deep learning concepts like feed-forward neural networks and ANNs, you can explore Deep Learning with TensorFlow. It introduces the core concepts of deep learning and sheds light on implementation and research details of cutting-edge architectures, enabling you to apply advanced concepts to your own projects.