
We are not going to cover any PyTorch basics here.

This is just a short post to see what implementing linear regression in PyTorch looks like.

Simple Linear Regression and comparing with PyTorch

In [1]:
import numpy as np

import matplotlib.pyplot as plt
In [2]:
np.random.seed(0)
x_data = np.random.randint(0,100,100)

For every x value we generate a corresponding y value that varies linearly with it.

For this, we use a slope of 3 and add some Gaussian noise.

In [3]:
np.random.seed(0)
random_noise = np.random.normal(0,35,100)
In [4]:
y_data = 3* x_data + random_noise
In [5]:
plt.plot(x_data,y_data,'.')
plt.xlabel('X data', size=15)
plt.ylabel('Y data', size=15)
plt.show()

Before proceeding we need a random initial guess for the weight (the slope here).

In [6]:
np.random.seed(0)
w = np.random.randint(10)
In [7]:
y_pred_data = w*x_data
In [8]:
plt.plot(x_data,y_data,'.')
plt.xlabel('X data', size=15)
plt.ylabel('Y data', size=15)
plt.plot(x_data,y_pred_data)
plt.show()

Not so bad. We'll use the gradient descent method to optimize the weight.

We start by picking a learning rate.

In [9]:
learning_rate = 0.00001
In [10]:
data_size = len(y_data)
In [11]:
for epoch in range(500):
    y_pred_data = w*x_data
    Error = y_pred_data - y_data
    w = w - (learning_rate/data_size)* np.dot(x_data.T,Error)
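As a sanity check (not part of the original notebook): for a one-parameter model y = w·x fitted through the origin, least squares has the closed-form solution w = Σxᵢyᵢ / Σxᵢ², and that is exactly the fixed point the loop above converges to.

```python
import numpy as np

# Recreate the same data as above
np.random.seed(0)
x_data = np.random.randint(0, 100, 100)
np.random.seed(0)
noise = np.random.normal(0, 35, 100)
y_data = 3 * x_data + noise

# Closed-form least-squares slope for a through-the-origin model y = w*x
w_closed = np.dot(x_data, y_data) / np.dot(x_data, x_data)
print(round(w_closed, 3))  # ~3.051, matching the gradient-descent result below
```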

The weight we got using numpy:

In [12]:
w_using_numpy = w
w_using_numpy
Out[12]:
3.0508619564229016
In [13]:
plt.plot(x_data,y_data,'.')
plt.xlabel('X data', size=15)
plt.ylabel('Y data', size=15)
plt.plot(x_data,y_pred_data)
plt.show()

Now we'll try the same thing using PyTorch.

If you have not installed PyTorch, a CPU-only installation (no GPU) is enough for this post.

In [14]:
import torch
from torch.autograd import Variable

We use the same random seed.

In [15]:
np.random.seed(0)
w = np.random.randint(10)

Convert the numpy arrays to PyTorch tensors, and wrap the weight in a Variable so autograd can track its gradient.

In [16]:
w = Variable(torch.Tensor([w]), requires_grad = True)
In [17]:
x_data = torch.Tensor(x_data)

y_data = torch.Tensor(y_data)
In [18]:
for epoch in range(100):
    y_pred_data = x_data*w
    MSE = sum((y_pred_data-y_data)**2)/data_size
    MSE.backward()
    w.data = w.data - learning_rate*w.grad.data
    w.grad.data.zero_()
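A side note: the `Variable` wrapper used above has been deprecated since PyTorch 0.4; plain tensors now track gradients directly via `requires_grad`. A minimal sketch of the same loop in current PyTorch style (assuming a reasonably recent PyTorch; the starting weight here is arbitrary):

```python
import numpy as np
import torch

# Recreate the same data as above
np.random.seed(0)
x = torch.tensor(np.random.randint(0, 100, 100), dtype=torch.float32)
np.random.seed(0)
y = 3 * x + torch.tensor(np.random.normal(0, 35, 100), dtype=torch.float32)

w = torch.tensor([9.0], requires_grad=True)  # arbitrary starting weight
learning_rate = 1e-5

for epoch in range(100):
    mse = torch.mean((w * x - y) ** 2)  # same loss as sum(...)/data_size above
    mse.backward()
    with torch.no_grad():  # update the weight outside of autograd tracking
        w -= learning_rate * w.grad
        w.grad.zero_()

print(round(w.item(), 2))  # close to the value found above
```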
In [19]:
w.data.item()
Out[19]:
3.053943157196045
In [20]:
print(f'Weights using numpy: {round(w_using_numpy,3)}\nWeights using PyTorch: {round(w.data.item(),3)}')
Weights using numpy: 3.051
Weights using PyTorch: 3.054

The cell below is just for notebook styling; you can ignore it.

In [21]:
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))