Thursday, January 24, 2019

Python Data: Forecasting Time Series Data using Autoregression

This is (yet) another post on forecasting time series data (you can find all the forecasting posts here).  In this post, we are going to talk about Autoregression models and how you might be able to apply them to forecasting time series problems.

Before we get into forecasting time series, let’s talk a bit about autoregression models as well as some of the steps you need to take before you dive into using them to forecast time series data. You can jump over to view my jupyter notebook (simplified without comments) here.

Autoregression vs Linear Regression

Autoregression is a modeling technique for time series data that assumes a linear continuation of the series, so that previous values in the time series can be used to predict future values.  Some of you may be thinking that this sounds just like a linear regression; it is, in general, the same idea, with the addition of ‘lag variables’ as features of the model.

With a linear regression model, you take all of the previous data points and build a model to predict a future data point using a simple linear relationship. The simple linear regression model is explained in much more detail here. An example of a linear model can be found below:

y = a + b*X

where a and b are coefficients found during the optimization/training process of the linear model.
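To make that concrete, here’s a minimal sketch of fitting such a model with scikit-learn (the data here is made up purely for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression

# made-up example: ten time steps with a noisy linear trend
X = np.arange(10).reshape(-1, 1)
y = 3.0 + 2.0 * np.arange(10) + np.random.randn(10)

# fit y = a + b*X; intercept_ is a, coef_ holds b
model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)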

With the autoregression model, you’re using previous data points to predict future data point(s), but with multiple lag variables. Autocorrelation and autoregression are discussed in more detail here. An example of an autoregression model can be found below:

y = a + b1*X(t-1) + b2*X(t-2) + b3*X(t-3)

where a, b1, b2 and b3 are coefficients found during the training of the model, and X(t-1), X(t-2) and X(t-3) are input variables at previous times within the data set.
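To show that an autoregression really is just a linear regression on lagged copies of the series, here’s a quick sketch (the sample values and column names are made up for illustration):

import pandas as pd
from sklearn.linear_model import LinearRegression

# a small made-up series
s = pd.Series([112.0, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118])

# build lag features X(t-1), X(t-2), X(t-3) and drop the rows
# that have no lagged values available
df = pd.DataFrame({'y': s, 'lag1': s.shift(1),
                   'lag2': s.shift(2), 'lag3': s.shift(3)}).dropna()

# fitting a linear regression on the lags recovers a, b1, b2 and b3
ar = LinearRegression().fit(df[['lag1', 'lag2', 'lag3']], df['y'])
print(ar.intercept_, ar.coef_)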

The above is not nearly enough statistical background to truly understand linear and autoregression models, but I hope it gives you some basic understanding of how the two approaches differ.  Now, let’s dig into how to implement this with python.

Forecasting Time Series with Autoregression

For this type of modeling, you need to be aware of the assumptions autoregression models make before you begin working with the data.

Assumptions:

  • The previous time step(s) is useful in predicting the value at the next time step (dependence between values)
  • Your data is stationary. A time series is stationary if its mean (and/or variance) is constant over time. There are other statistical properties to look at as well, but looking at the mean is usually the fastest/easiest; a quick sketch of that check follows below.

If your time series data isn’t stationary, you’ll need to make it that way with some form of trend and seasonality removal (we’ll talk about that shortly).   If your time series data values are independent of each other, autoregression isn’t going to be a good forecasting method for that series.
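As a quick sketch of the ‘look at the mean’ check mentioned above, you can compare the mean of the first and second halves of the series (halves_mean_check is just a made-up helper name):

import pandas as pd

def halves_mean_check(series):
    # a stationary series should have roughly the same mean
    # in its first and second halves
    half = len(series) // 2
    print('mean of 1st half: %.2f, mean of 2nd half: %.2f'
          % (series[:half].mean(), series[half:].mean()))

# made-up example with an obvious upward shift in the mean
halves_mean_check(pd.Series([1, 2, 1, 2, 5, 6, 5, 6]))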

Let’s get into some code and some actual ‘doing’ rather than ‘talking’.

For this example, I’m going to use the retail sales data that I’ve used in the past.  Let’s load the data and take a look at the plot.

### Initial imports to get started.

import pandas as pd
import matplotlib.pyplot as plt

%matplotlib inline
 
plt.rcParams['figure.figsize']=(20,10)
plt.style.use('ggplot')

sales_data = pd.read_csv('retail_sales.csv')
sales_data['date']=pd.to_datetime(sales_data['date'])
sales_data.set_index('date', inplace=True)

sales_data.plot()

Nothing fancy here…just simple pandas loading and plotting (after the standard imports for this type of thing).

The plot looks like the following:

retail sales data for Forecasting Time Series Data using Autoregression Models

Let’s check for dependence (aka correlation), which is the other assumption for autoregression models. A visual method for checking correlation is to use the pandas lag_plot() function to see how well the values of the original sales data are correlated with each other. If they are highly correlated, we’ll see a fairly close grouping of data points that align along some point/line on the plot.

pd.plotting.lag_plot(sales_data['sales'])

lag plot for sales data

Because we don’t have many data points, this particular lag_plot() doesn’t look terribly convincing, but there is some correlation in there (along with some possible outliers).

A great example of correlated values can be seen in the below lag_plot() chart. These are taken from another project I’m working on (and might write up in another post).

lag plot example of good correlation

Like good data scientists/statisticians, we don’t want to rely solely on a visual representation of correlation, though, so we’ll use autocorrelation plots to look at the correlations in our data.

Using pandas, you can plot an autocorrelation plot using this command:

pd.plotting.autocorrelation_plot(sales_data['sales'])

The resulting chart contains a few lines separate from the autocorrelation function itself. The dark horizontal line at zero just denotes the zero line, the lighter solid horizontal lines mark the 95% confidence band, and the dashed horizontal lines mark the 99% confidence band; correlations that reach beyond those lines are considered significant.

auto correlation plot

From the plot above, we can see that there’s significant correlation from roughly t=1 to t=12, with a steep decline in correlation after that timeframe.  Since we are looking at monthly sales data, this seems to make sense, with correlations falling off at the start of the new fiscal year.

We can test this concept by checking the Pearson correlation of the sales data with lagged values, using the approach below.

sales_data['sales'].corr(sales_data['sales'].shift(12))

We used ’12’ above because that looked to be the highest correlation value on the autocorrelation chart. The output of the above command gives us a correlation value of 0.97, which is quite high (actually almost too high for my liking, but it is what it is).
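If you want to see where that correlation peaks rather than just testing a single lag, a quick sketch like the following (using the same shift() approach) prints the Pearson correlation for every lag from 1 to 12:

# check the correlation of the sales data at each lag from 1 to 12
for lag in range(1, 13):
    corr = sales_data['sales'].corr(sales_data['sales'].shift(lag))
    print('lag %2d: %.2f' % (lag, corr))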

Now, let’s take a look at stationarity.  I can tell you just from looking at that chart that we have a non-stationary dataset, due to the increasing trend from lower left to upper right as well as some seasonality (you can see large spikes at roughly the same time within each year).  There are plenty of tests you can run to determine whether seasonality and/or trend exist in a time series, but for the purpose of this example, I’m going to do a quick/dirty plot to see trend/seasonality using the seasonal_decompose() method found in the statsmodels library.

from statsmodels.tsa.seasonal import seasonal_decompose
decomposed = seasonal_decompose(sales_data['sales'], model='additive')
x = decomposed.plot() #See note below about this

Note: In the above code, we assign the output of decomposed.plot() to x. The plot() method returns a matplotlib Figure, and in a jupyter notebook with %matplotlib inline the figure is drawn once by the backend and then drawn again as the cell’s output, so without the assignment you see the plot twice. Assigning the return value suppresses the duplicate display.

The resulting plot is below.

retail sales - decomposed Forecasting Time Series Data using Autoregression Models

Now we know for certain that we have a time series with a trend (second panel from the top) and seasonality (third panel from the top).  Now what?  Let’s make it stationary by removing/reducing trend and seasonality.

For the purposes of this particular example, I’m just going to use the quick/dirty method of differencing to get a more stationary model.

sales_data['stationary']=sales_data['sales'].diff()

Plotting this new set of data gets us the following plot.

retail sales differenced for time series forecasting with autoregression

Running seasonal_decompose() on this new data gives us:

retail sales differenced decomposed

From this new decomposed plot, we can see that there’s still some trend and even some seasonality. That’s unfortunate, because it means we’d need to look at other methods to truly remove trend and seasonality from this particular data series. For this example, though, I’m going to play dumb, say that it’s good enough, and keep going (in reality, it might be good enough, or it might not be).
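If you’d rather not rely on eyeballing decomposition plots, one common formal check (not part of the original walkthrough, but worth knowing) is the Augmented Dickey-Fuller test from statsmodels; a p-value below roughly 0.05 suggests the series is stationary:

from statsmodels.tsa.stattools import adfuller

# run the ADF test on the differenced series
result = adfuller(sales_data['stationary'].dropna())
print('ADF statistic: %.3f' % result[0])
print('p-value: %.3f' % result[1])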

Forecasting Time Series Data – Now on to the fun stuff!

Alright, now that we know our data fits our assumptions (at least well enough for this example), let’s build the model. For this, we’ll use the AR() model in the statsmodels library. I’m using this particular model because it auto-selects the lag value for modeling, which can simplify things. Note: this may not be the ideal approach, but it is a good approach when first starting this type of work.

from statsmodels.tsa.ar_model import AR

#create train/test datasets
X = sales_data['stationary'].dropna()
train_data = X[1:len(X)-12]
test_data = X[len(X)-12:]

#train the autoregression model
model = AR(train_data)
model_fitted = model.fit()

In the above, we are simply creating a testing and training dataset and then creating and fitting our AR() model. Once you’ve fit the model, you can look at the chosen lag and parameters of the model using some simple print statements.

print('The lag value chosen is: %s' % model_fitted.k_ar)

The lag value chosen is: 10

print('The coefficients of the model are:\n %s' % model_fitted.params)

The coefficients of the model are:
 const             7720.952626
L1.stationary       -1.297636
L2.stationary       -1.574980
L3.stationary       -1.403045
L4.stationary       -1.123204
L5.stationary       -0.472200
L6.stationary       -0.014586
L7.stationary        0.564099
L8.stationary        0.792080
L9.stationary        0.843242
L10.stationary       0.395546

If we look back at our autocorrelation plot, we can see that a lag of 10 is where the line first touches the 95% confidence level. That’s usually how you’d select the lag value if you were choosing it manually when first running autoregression models, so the automatic selection makes sense.
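Here’s a rough sketch of that manual selection, using the acf() function from statsmodels to find the first lag whose autocorrelation falls inside the 95% band (approximately +/- 1.96/sqrt(n)):

import numpy as np
from statsmodels.tsa.stattools import acf

# autocorrelation values for lags 0..20 of the differenced series
values = acf(X, nlags=20)
band = 1.96 / np.sqrt(len(X))

# find the first lag whose autocorrelation drops inside the band
first_inside = next(lag for lag, v in enumerate(values[1:], start=1)
                    if abs(v) < band)
print('first lag inside the 95%% band: %d' % first_inside)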

Now, let’s make some forecasts and see how they compare to actuals.

# make predictions 
predictions = model_fitted.predict(
    start=len(train_data), 
    end=len(train_data) + len(test_data)-1, 
    dynamic=False)

# create a comparison dataframe
compare_df = pd.concat(
    [sales_data['stationary'].tail(12),
    predictions], axis=1).rename(
    columns={'stationary': 'actual', 0:'predicted'})

#plot the two values
compare_df.plot()

In this bit of code, we’ve made predictions and then combined the prediction values with the ‘test’ data from the sales_data dataframe.

comparison plot of predicted vs actual

That’s really not a bad model, as it captures the trend and movements (highs/lows, etc.) well, but it doesn’t quite get the extreme values.   Let’s check the R² score (coefficient of determination) of the predictions.

from sklearn.metrics import r2_score

r2 = r2_score(sales_data['stationary'].tail(12), predictions)

This gives us an R² value of 0.64, which isn’t terrible, but there is room for improvement here.
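If you also want the root mean squared error itself, a quick sketch using scikit-learn:

from math import sqrt
from sklearn.metrics import mean_squared_error

# RMSE between the held-out actuals and the predictions
rmse = sqrt(mean_squared_error(sales_data['stationary'].tail(12), predictions))
print('RMSE: %.2f' % rmse)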

One thing to note about the statsmodels AR() library is that it is difficult to use in an ‘online’ fashion (e.g., train a model and then add new data points as they come in). You’d need to either retrain your model with each new data point added or save the coefficients from the model and compute predicted values yourself as needed.
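As a rough sketch of that ‘save the coefficients’ idea (assuming the model_fitted and X from above), a prediction for the next step is just the constant plus the dot product of the coefficients with the most recent lagged values:

import numpy as np

params = model_fitted.params.values   # [const, b1, ..., bk]
k = model_fitted.k_ar

# most recent k values, newest first, to line up with b1..bk
lags = X.values[-k:][::-1]
next_value = params[0] + np.dot(params[1:], lags)
print('next predicted (differenced) value: %.2f' % next_value)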

I hope this has been a good introduction to forecasting time series data using autoregression in python. As always, if you have any questions or comments, leave them in the comment section or contact me.

Note: If you have some interest in learning more about determining stationarity and other methods for eliminating trend and seasonality beyond just differencing, let me know and I’ll put another post up that talks about those things in detail.

Contact me / Hire me

If you’re working for an organization and need help with forecasting, data science, machine learning/AI or other data needs, contact me and see how I can help. Also, feel free to read more about my background on my Hire Me page. I also offer data science mentoring services for beginners wanting to break into data science… if this is of interest, contact me.


