Prediction (out of sample)¶
[1]:
%matplotlib inline
[2]:
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
plt.rc("figure", figsize=(16, 8))
plt.rc("font", size=14)
Artificial data¶
[3]:
nsample = 50
sig = 0.25
x1 = np.linspace(0, 20, nsample)
X = np.column_stack((x1, np.sin(x1), (x1 - 5) ** 2))
X = sm.add_constant(X)
beta = [5.0, 0.5, 0.5, -0.02]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
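The noise draws above are not seeded, so the numbers in the outputs below will differ from run to run. A minimal sketch if you want reproducible data, using NumPy's Generator API (the seed value is arbitrary):

rng = np.random.default_rng(12345)  # fixed seed makes the draws reproducible
y = y_true + sig * rng.normal(size=nsample)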
Estimation¶
[4]:
olsmod = sm.OLS(y, X)
olsres = olsmod.fit()
print(olsres.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       0.985
Model:                            OLS   Adj. R-squared:                  0.984
Method:                 Least Squares   F-statistic:                     1001.
Date:                Wed, 30 Nov 2022   Prob (F-statistic):           6.97e-42
Time:                        21:29:37   Log-Likelihood:                 1.5101
No. Observations:                  50   AIC:                             4.980
Df Residuals:                      46   BIC:                             12.63
Df Model:                           3
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          4.9351      0.083     59.155      0.000       4.767       5.103
x1             0.5126      0.013     39.836      0.000       0.487       0.538
x2             0.4233      0.051      8.368      0.000       0.321       0.525
x3            -0.0204      0.001    -18.021      0.000      -0.023      -0.018
==============================================================================
Omnibus:                        0.479   Durbin-Watson:                   2.585
Prob(Omnibus):                  0.787   Jarque-Bera (JB):                0.459
Skew:                          -0.215   Prob(JB):                        0.795
Kurtosis:                       2.811   Cond. No.                         221.
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
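Everything shown in the summary is also available programmatically on the results object, for example:

print(olsres.params)      # estimated coefficients
print(olsres.bse)         # standard errors of the coefficients
print(olsres.conf_int())  # confidence intervals (alpha=0.05 by default)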
In-sample prediction¶
[5]:
ypred = olsres.predict(X)
print(ypred)
[ 4.42616728 4.88308276 5.30561197 5.67068697 5.9635649 6.18025023
6.32815119 6.42486262 6.4952752 6.56748597 6.66818207 6.81825612
7.02937362 7.30205634 7.6255966 7.9798165 8.33838311 8.67313487
8.95870849 9.17670603 9.31871766 9.38770334 9.39750655 9.37057978
9.33429574 9.31645001 9.34069437 9.42265083 9.56734503 9.7683805
10.00898959 10.26478838 10.50778283 10.71096817 10.8527657 10.92056809
10.91281073 10.83922883 10.71925789 10.57884057 10.44616431 10.34703029
10.30061431 10.31631629 10.39221589 10.51538799 10.66402628 10.81102598
10.92843704 10.99205628]
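As a quick sanity check, the in-sample residual scale should be close to the noise standard deviation sig = 0.25 used to generate the data:

resid = y - ypred  # identical to olsres.resid
rmse = np.sqrt(np.mean(resid ** 2))
print(rmse)  # roughly 0.25, up to sampling variation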
Create a new sample of explanatory variables Xnew, predict and plot¶
[6]:
x1n = np.linspace(20.5, 25, 10)
Xnew = np.column_stack((x1n, np.sin(x1n), (x1n - 5) ** 2))
Xnew = sm.add_constant(Xnew)
ynewpred = olsres.predict(Xnew) # predict out of sample
print(ynewpred)
[10.97337822 10.84121789 10.61217442 10.32407512 10.02671403 9.76966064
9.59012349 9.50384026 9.50122465 9.54971339]
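predict returns point forecasts only. If you also want uncertainty around them, the results object provides get_prediction, whose summary_frame contains confidence intervals for the mean as well as prediction intervals for new observations; a sketch:

pred = olsres.get_prediction(Xnew)
frame = pred.summary_frame(alpha=0.05)  # 95% intervals
print(frame[["mean", "obs_ci_lower", "obs_ci_upper"]])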
Plot comparison¶
[7]:
fig, ax = plt.subplots()
ax.plot(x1, y, "o", label="Data")
ax.plot(x1, y_true, "b-", label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), "r", label="OLS prediction")
ax.legend(loc="best")
[7]:
<matplotlib.legend.Legend at 0x7f3036648160>
[Figure: data points, true curve, and OLS in-sample and out-of-sample predictions]
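Assuming frame from the get_prediction sketch above, the out-of-sample prediction intervals can be added to the comparison plot as a shaded band:

fig, ax = plt.subplots()
ax.plot(x1, y, "o", label="Data")
ax.plot(x1, y_true, "b-", label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), "r", label="OLS prediction")
ax.fill_between(x1n, frame["obs_ci_lower"], frame["obs_ci_upper"],
                color="r", alpha=0.2, label="95% prediction interval")
ax.legend(loc="best")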
Predicting with Formulas¶
Using formulas can make both estimation and prediction a lot easier.
[8]:
from statsmodels.formula.api import ols
data = {"x1": x1, "y": y}
res = ols("y ~ x1 + np.sin(x1) + I((x1-5)**2)", data=data).fit()
We use the I() function to indicate the identity transform, i.e., we do not want any formula expansion magic applied to **2.
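Without I(), patsy would read - and ** as formula operators (term removal and interaction powers) rather than as arithmetic. Since everything inside a function call is evaluated as ordinary Python, an equivalent formula that avoids I() is, for example:

res_alt = ols("y ~ x1 + np.sin(x1) + np.power(x1 - 5, 2)", data=data).fit()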
[9]:
res.params
[9]:
Intercept 4.935120
x1 0.512554
np.sin(x1) 0.423272
I((x1 - 5) ** 2) -0.020358
dtype: float64
Now we only have to pass the single variable x1, and the transformed right-hand-side variables are constructed automatically.
[10]:
res.predict(exog=dict(x1=x1n))
[10]:
0 10.973378
1 10.841218
2 10.612174
3 10.324075
4 10.026714
5 9.769661
6 9.590123
7 9.503840
8 9.501225
9 9.549713
dtype: float64
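The formula interface supports the same uncertainty machinery: get_prediction accepts the same exog dictionary and applies the formula transforms automatically, e.g.:

pred = res.get_prediction(exog=dict(x1=x1n))
print(pred.summary_frame(alpha=0.05))  # point forecasts plus 95% intervals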