Autoregressive integrated moving average
In statistics and econometrics, and in particular in time series analysis, an autoregressive integrated moving average (ARIMA) model is a generalization of an autoregressive moving average (ARMA) model. Both of these models are fitted to time series data either to better understand the data or to predict future points in the series (forecasting). ARIMA models are applied in some cases where data show evidence of nonstationarity, where an initial differencing step (corresponding to the "integrated" part of the model) can be applied one or more times to eliminate the nonstationarity.^{[1]}
The AR part of ARIMA indicates that the evolving variable of interest is regressed on its own lagged (i.e., prior) values. The MA part indicates that the regression error is actually a linear combination of error terms whose values occurred contemporaneously and at various times in the past. The I (for "integrated") indicates that the data values have been replaced with the difference between their values and the previous values (and this differencing process may have been performed more than once). The purpose of each of these features is to make the model fit the data as well as possible.
Non-seasonal ARIMA models are generally denoted ARIMA(p,d,q) where parameters p, d, and q are non-negative integers: p is the order (number of time lags) of the autoregressive model, d is the degree of differencing (the number of times the data have had past values subtracted), and q is the order of the moving-average model. Seasonal ARIMA models are usually denoted ARIMA(p,d,q)(P,D,Q)_{m}, where m refers to the number of periods in each season, and the uppercase P, D, Q refer to the autoregressive, differencing, and moving average terms for the seasonal part of the ARIMA model.^{[2]}^{[3]}
When two of the three orders are zero, the model may be referred to by the non-zero component alone, dropping "AR", "I" or "MA" from the acronym describing the model. For example, ARIMA(1,0,0) is AR(1), ARIMA(0,1,0) is I(1), and ARIMA(0,0,1) is MA(1).
ARIMA models can be estimated following the Box–Jenkins approach.
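As an illustrative sketch of the estimation idea (not the full Box–Jenkins procedure; the helper name `fit_ar1` is hypothetical, and this moment-based estimator covers only the simplest AR(1) case):

```python
import random

def fit_ar1(x):
    """Estimate the AR(1) coefficient as the lag-1 autocovariance divided
    by the sample variance (the Yule-Walker estimate for p = 1)."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n                       # variance
    c1 = sum((x[t] - mean) * (x[t - 1] - mean) for t in range(1, n)) / n
    return c1 / c0

# Simulate an AR(1) series with phi = 0.8 and recover the coefficient.
rng = random.Random(42)
x = [0.0]
for _ in range(4999):
    x.append(0.8 * x[-1] + rng.gauss(0, 1))
phi_hat = fit_ar1(x)   # close to 0.8 for a long series
```

For higher orders the same idea generalizes to the full system of Yule–Walker equations; software packages typically use maximum likelihood instead.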
Definition
Given a time series of data X_t, where t is an integer index and the X_t are real numbers, an ARMA(p', q) model is given by

X_t - \alpha_1 X_{t-1} - \dots - \alpha_{p'} X_{t-p'} = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q}

or equivalently by

\left(1 - \sum_{i=1}^{p'} \alpha_i L^i\right) X_t = \left(1 + \sum_{i=1}^{q} \theta_i L^i\right) \varepsilon_t

where L is the lag operator, the \alpha_i are the parameters of the autoregressive part of the model, the \theta_i are the parameters of the moving average part and the \varepsilon_t are error terms. The error terms \varepsilon_t are generally assumed to be independent, identically distributed variables sampled from a normal distribution with zero mean.
Assume now that the polynomial \left(1 - \sum_{i=1}^{p'} \alpha_i L^i\right) has a unit root (a factor (1 - L)) of multiplicity d. Then it can be rewritten as:

1 - \sum_{i=1}^{p'} \alpha_i L^i = \left(1 - \sum_{i=1}^{p'-d} \varphi_i L^i\right) (1 - L)^d

An ARIMA(p,d,q) process expresses this polynomial factorisation property with p = p' - d, and is given by:

\left(1 - \sum_{i=1}^{p} \varphi_i L^i\right) (1 - L)^d X_t = \left(1 + \sum_{i=1}^{q} \theta_i L^i\right) \varepsilon_t

and thus can be thought of as a particular case of an ARMA(p+d, q) process having an autoregressive polynomial with d unit roots. (For this reason, no ARIMA model with d > 0 is wide-sense stationary.)
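The factorisation can be checked numerically: multiplying a stationary AR polynomial by (1 - L)^d recovers the autoregressive polynomial of the equivalent ARMA(p+d, q) process. A minimal sketch in plain Python (the helper name `polymul` is illustrative; coefficient lists are in powers of L, lowest first):

```python
def polymul(a, b):
    """Multiply two polynomials in L given as coefficient lists,
    lowest power first: a[i] is the coefficient of L**i."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# ARIMA(1,1,0) with phi_1 = 0.5: AR polynomial (1 - 0.5 L)(1 - L).
arima_poly = polymul([1.0, -0.5], [1.0, -1.0])
# Result [1.0, -1.5, 0.5] is the ARMA(2,0) polynomial 1 - 1.5 L + 0.5 L^2,
# which has the unit root L = 1 alongside the stationary root L = 2.
```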
The above can be generalized as follows:

\left(1 - \sum_{i=1}^{p} \varphi_i L^i\right) (1 - L)^d X_t = \delta + \left(1 + \sum_{i=1}^{q} \theta_i L^i\right) \varepsilon_t

This defines an ARIMA(p,d,q) process with drift \delta / \left(1 - \sum \varphi_i\right).
Other special forms
The explicit identification of the factorisation of the autoregression polynomial into factors as above can be extended to other cases, firstly to apply to the moving average polynomial and secondly to include other special factors. For example, having a factor (1 - L^s) in a model is one way of including a non-stationary seasonality of period s into the model; this factor has the effect of re-expressing the data as changes from s periods ago. Another example is the factor (1 + L), which includes a (non-stationary) seasonality of period 2. The effect of the first type of factor is to allow each season's value to drift separately over time, whereas with the second type values for adjacent seasons move together.
Identification and specification of appropriate factors in an ARIMA model can be an important step in modelling as it can allow a reduction in the overall number of parameters to be estimated, while allowing the imposition on the model of types of behaviour that logic and experience suggest should be there.
Differencing
Differencing in statistics is a transformation applied to time-series data in order to make it stationary. A stationary time series has properties that do not depend on the time at which the series is observed.
To difference the data, the difference between consecutive observations is computed. Mathematically, this is shown as

y'_t = y_t - y_{t-1}
Differencing removes the changes in the level of a time series, eliminating trend and seasonality and consequently stabilizing the mean of the time series.
Sometimes it may be necessary to difference the data a second time to obtain a stationary time series, which is referred to as second-order differencing:

y''_t = y'_t - y'_{t-1} = (y_t - y_{t-1}) - (y_{t-1} - y_{t-2}) = y_t - 2y_{t-1} + y_{t-2}
Another method of differencing data is seasonal differencing, which involves computing the difference between an observation and the corresponding observation in the previous year. This is shown as:

y'_t = y_t - y_{t-m}, where m is the number of observations per year (e.g. m = 12 for monthly data).
The differenced data is then used for the estimation of an ARMA model.
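The differencing steps above can be sketched in a few lines of plain Python (the function name `difference` is illustrative):

```python
def difference(series, lag=1):
    """lag=1 gives first differences; lag=m gives seasonal differences."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

y = [1, 4, 9, 16, 25, 36]        # quadratic trend: clearly non-stationary
d1 = difference(y)               # [3, 5, 7, 9, 11]: a linear trend remains
d2 = difference(d1)              # [2, 2, 2, 2]: constant, hence stationary
```

Each application of `difference` shortens the series by `lag` observations, which is why the degree of differencing d is kept as small as possible in practice.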
Examples
Some well-known special cases arise naturally or are mathematically equivalent to other popular forecasting models. For example:
 An ARIMA(0,1,0) model (or I(1) model) is given by X_t = X_{t-1} + \varepsilon_t, which is simply a random walk.
 An ARIMA(0,1,0) with a constant, given by X_t = c + X_{t-1} + \varepsilon_t, is a random walk with drift.
 An ARIMA(0,0,0) model is a white noise model.
 An ARIMA(0,1,2) model is a Damped Holt's model.
 An ARIMA(0,1,1) model without constant is a basic exponential smoothing model.^{[4]}
 An ARIMA(0,2,2) model is given by X_t = 2X_{t-1} - X_{t-2} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}, which is equivalent to Holt's linear method with additive errors, or double exponential smoothing.^{[4]}
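The random-walk cases above are easy to simulate; a brief sketch assuming standard normal errors (the function name is hypothetical):

```python
import random

def simulate_arima_010(n, c=0.0, seed=1):
    """Simulate X_t = c + X_{t-1} + e_t with e_t ~ N(0, 1): a random
    walk when c = 0, a random walk with drift when c != 0."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(c + x[-1] + rng.gauss(0, 1))
    return x

walk = simulate_arima_010(500, c=0.2)
steps = [walk[t] - walk[t - 1] for t in range(1, len(walk))]
# First differencing removes the unit root: only the drift plus white
# noise remains, so the mean of the steps should be near c = 0.2.
```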
Choosing the order
To determine the order of a non-seasonal ARIMA model, a useful criterion is the Akaike information criterion (AIC). It is written as

AIC = -2 \log(L) + 2(p + q + k + 1)

where L is the likelihood of the data, p is the order of the autoregressive part and q is the order of the moving average part. The k is an indicator for the intercept of the ARIMA model: k = 1 if there is an intercept in the ARIMA model (c ≠ 0) and k = 0 if there is no intercept (c = 0).
The corrected AIC for ARIMA models can be written as

AICc = AIC + \frac{2(p + q + k + 1)(p + q + k + 2)}{T - p - q - k - 2}

where T is the number of observations.
The Bayesian information criterion (BIC) can be written as

BIC = AIC + (\log(T) - 2)(p + q + k + 1)
The objective is to minimize the AIC, AICc or BIC value: the lower the value of one of these criteria across a range of candidate models, the better that model suits the data. Note, however, that the AIC and the BIC serve two different purposes: the AIC aims to select the model that best approximates the underlying process for prediction, while the BIC aims to identify the true data-generating model among the candidates. The BIC approach is often criticized because complex real-life data are rarely generated exactly by any candidate model; nevertheless, it remains a useful selection method, as it penalizes additional parameters more heavily than the AIC does.
AICc can only be used to compare ARIMA models with the same orders of differencing. For ARIMAs with different orders of differencing, RMSE can be used for model comparison.
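The three criteria can be written directly as functions; a small sketch in Python (the names are illustrative, and in practice the log-likelihood comes from the fitted model):

```python
import math

def aic(loglik, p, q, k):
    """AIC = -2 log(L) + 2 (p + q + k + 1)."""
    return -2.0 * loglik + 2.0 * (p + q + k + 1)

def aicc(loglik, p, q, k, n):
    """Corrected AIC; n is the number of observations T."""
    m = p + q + k + 1
    return aic(loglik, p, q, k) + (2.0 * m * (m + 1)) / (n - m - 1)

def bic(loglik, p, q, k, n):
    """BIC = AIC + (log(T) - 2) (p + q + k + 1)."""
    return aic(loglik, p, q, k) + (math.log(n) - 2.0) * (p + q + k + 1)

# For each candidate (p, q, k) with the same differencing order d,
# compute the criterion from the fitted log-likelihood and keep the
# model with the lowest value.
```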
Forecasts using ARIMA models
The ARIMA model can be viewed as a "cascade" of two models. The first is non-stationary:

Y_t = (1 - L)^d X_t

while the second is wide-sense stationary:

\left(1 - \sum_{i=1}^{p} \varphi_i L^i\right) Y_t = \left(1 + \sum_{i=1}^{q} \theta_i L^i\right) \varepsilon_t

Now forecasts can be made for the process Y_t, using a generalization of the method of autoregressive forecasting.
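A sketch of this two-stage scheme in plain Python, for the simple case of no MA part (q = 0) and d = 1; the helper names are hypothetical. The stationary differenced process is forecast by iterating the AR recursion, and the forecasts are then integrated back to the original scale:

```python
def forecast_ar(history, phi, h):
    """Iterated h-step forecasts for a pure AR(p) model: each forecast
    is fed back in as if it were an observation."""
    hist = list(history)
    out = []
    for _ in range(h):
        nxt = sum(c * hist[-i - 1] for i, c in enumerate(phi))
        out.append(nxt)
        hist.append(nxt)
    return out

def undifference(last_value, diff_forecasts):
    """Integrate forecasts of the first-differenced series Y_t back to
    the scale of the original series X_t (the d = 1 case)."""
    out, x = [], last_value
    for d in diff_forecasts:
        x += d
        out.append(x)
    return out

# AR(1) with phi = 0.5 on the differenced series, last observation X_T = 10:
y_fc = forecast_ar([4.0], [0.5], 3)   # [2.0, 1.0, 0.5], decaying toward 0
x_fc = undifference(10.0, y_fc)       # [12.0, 13.0, 13.5]
```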
Forecast intervals
The forecast intervals (confidence intervals for forecasts) for ARIMA models are based on assumptions that the residuals are uncorrelated and normally distributed. If either of these assumptions does not hold, then the forecast intervals may be incorrect. For this reason, researchers plot the ACF and histogram of the residuals to check the assumptions before producing forecast intervals.
95% forecast interval: \hat{X}_{T+h|T} \pm 1.96 \sqrt{v_{T+h|T}}, where v_{T+h|T} is the variance of X_{T+h} given X_1, \dots, X_T.
For h = 1, v_{T+1|T} = \hat{\sigma}^2 for all ARIMA models regardless of parameters and orders.
For ARIMA(0,0,q), X_t = \varepsilon_t + \sum_{i=1}^{q} \theta_i \varepsilon_{t-i} and

v_{T+h|T} = \hat{\sigma}^2 \left(1 + \sum_{i=1}^{h-1} \theta_i^2\right) for h = 2, 3, \dots, taking \theta_i = 0 for i > q.
In general, forecast intervals from ARIMA models will increase as the forecast horizon increases.
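For the pure moving average case the interval can be written out directly; a minimal sketch (hypothetical function name, assuming normally distributed residuals as stated above):

```python
def ma_interval(theta, sigma2, h, point_forecast=0.0):
    """95% forecast interval at horizon h for an ARIMA(0,0,q) model,
    i.e. a pure MA(q): the forecast error variance is
    sigma^2 * (1 + theta_1^2 + ... + theta_{h-1}^2), which grows with h
    until h exceeds q and then stays flat."""
    v = sigma2 * (1.0 + sum(t * t for t in theta[:h - 1]))
    half_width = 1.96 * v ** 0.5
    return (point_forecast - half_width, point_forecast + half_width)
```

For h = 1 this reduces to ±1.96σ, matching the statement that v_{T+1|T} = σ̂² for every ARIMA model; the flattening beyond h = q + 1 reflects the finite memory of a pure MA process, whereas models with d > 0 have intervals that keep widening.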
Variations and extensions
A number of variations on the ARIMA model are commonly employed. If multiple time series are used then the X_t can be thought of as vectors and a VARIMA model may be appropriate. Sometimes a seasonal effect is suspected in the model; in that case, it is generally considered better to use a SARIMA (seasonal ARIMA) model than to increase the order of the AR or MA parts of the model.^{[5]} If the time series is suspected to exhibit long-range dependence, then the d parameter may be allowed to have non-integer values in an autoregressive fractionally integrated moving average model, which is also called a fractional ARIMA (FARIMA or ARFIMA) model.
Software implementations
Various packages that apply methodology like Box–Jenkins parameter optimization are available to find the right parameters for the ARIMA model.
 EViews: has extensive ARIMA and SARIMA capabilities.
 Julia: contains an ARIMA implementation in the TimeModels package^{[6]}
 Mathematica: includes ARIMAProcess function.
 MATLAB: the Econometrics Toolbox includes ARIMA models and regression with ARIMA errors

 NCSS: includes several procedures for ARIMA fitting and forecasting.^{[7]}^{[8]}^{[9]}
 Python: the "statsmodels" package includes models for time series analysis – univariate time series analysis: AR, ARIMA – vector autoregressive models, VAR and structural VAR – descriptive statistics and process models for time series analysis.
 R: the standard R stats package includes an arima function, which is documented in "ARIMA Modelling of Time Series". Besides the ARIMA(p,d,q) part, the function also includes seasonal factors, an intercept term, and exogenous variables (xreg, called "external regressors"). The CRAN task view on Time Series is the reference with many more links. The "forecast" package in R can automatically select an ARIMA model for a given time series with the auto.arima() function. The package can also simulate seasonal and nonseasonal ARIMA models with its simulate.Arima() function. It also has a function Arima(), which is a wrapper for the arima from the "stats" package.^{[10]}
 Ruby: the "statsample-timeseries" gem is used for time series analysis, including ARIMA models and Kalman filtering.
 SAFE TOOLBOXES: includes ARIMA modelling and regression with ARIMA errors.
 SAS: includes extensive ARIMA processing in its Econometric and Time Series Analysis system: SAS/ETS.
 IBM SPSS: includes ARIMA modeling in its Statistics and Modeler statistical packages. The default Expert Modeler feature evaluates a range of seasonal and nonseasonal autoregressive (p), integrated (d), and moving average (q) settings and seven exponential smoothing models. The Expert Modeler can also transform the target timeseries data into its square root or natural log. The user also has the option to restrict the Expert Modeler to ARIMA models, or to manually enter ARIMA nonseasonal and seasonal p, d, and q settings without Expert Modeler. Automatic outlier detection is available for seven types of outliers, and the detected outliers will be accommodated in the timeseries model if this feature is selected.
 SAP: the APO-FCS package^{[11]} in SAP ERP from SAP allows creation and fitting of ARIMA models using the Box–Jenkins methodology.
 SQL Server Analysis Services: from Microsoft includes ARIMA as a Data Mining algorithm.
 Stata includes ARIMA modelling (using its arima command) as of Stata 9.
 Teradata Vantage has the ARIMA function as part of its Machine learning engine.
 TOL (Time Oriented Language) is designed to model ARIMA models (including SARIMA, ARIMAX and DSARIMAX variants).
 Scala: sparktimeseries library contains ARIMA implementation for Scala, Java and Python. Implementation is designed to run on Apache Spark.
 PostgreSQL/MadLib: Time Series Analysis/ARIMA.
 X-12-ARIMA: from the US Bureau of the Census.
References
 ^ For further information on Stationarity and Differencing see https://www.otexts.org/fpp/8/1
 ^ "Notation for ARIMA Models". Time Series Forecasting System. SAS Institute. Retrieved 19 May 2015.
 ^ Hyndman, Rob J; Athanasopoulos, George. 8.9 Seasonal ARIMA models. Forecasting: principles and practice. oTexts. Retrieved 19 May 2015.
 ^ ^{a} ^{b} "Introduction to ARIMA models". people.duke.edu. Retrieved 2016-06-05.
 ^ Swain, S; et al. (2018). "Development of an ARIMA Model for Monthly Rainfall Forecasting over Khordha District, Odisha, India". Recent Findings in Intelligent Computing Techniques. Advances in Intelligent Systems and Computing. 708. pp. 325–331. doi:10.1007/978-981-10-8636-6_34. ISBN 978-981-10-8635-9.
 ^ TimeModels.jl www.github.com
 ^ ARIMA in NCSS.
 ^ Automatic ARMA in NCSS.
 ^ Autocorrelations and Partial Autocorrelations in NCSS.
 ^ "8.7 ARIMA modelling in R". OTexts. www.otexts.org. Retrieved 2016-05-12.
 ^ "Box Jenkins model". SAP. Retrieved 8 March 2013.
Further reading
 Asteriou, Dimitros; Hall, Stephen G. (2011). "ARIMA Models and the Box–Jenkins Methodology". Applied Econometrics (Second ed.). Palgrave MacMillan. pp. 265–286. ISBN 9780230271821.
 Mills, Terence C. (1990). Time Series Techniques for Economists. Cambridge University Press. ISBN 9780521343398.
 Percival, Donald B.; Walden, Andrew T. (1993). Spectral Analysis for Physical Applications. Cambridge University Press. ISBN 9780521355322.