Strong consistency of the least squares estimator in regression models with adaptive learning

Norbert Christopeit, Michael Massmann

Research output: Contribution to Journal › Article › Academic › peer-review

Abstract

This paper studies the strong consistency of the ordinary least squares (OLS) estimator in linear regression models with adaptive learning. It is a companion to Christopeit & Massmann (2018), which considers the estimator’s convergence in distribution and its weak consistency in the same setting. Under constant gain learning, the model is closely related to stationary, (alternating) unit root or explosive autoregressive processes. Under decreasing gain learning, the regressors in the model are asymptotically collinear. The paper examines, first, the issue of strong convergence of the learning recursion: it is argued that, under constant gain learning, the recursion does not converge in any probabilistic sense, while for decreasing gain learning, rates are derived at which the recursion converges almost surely to the rational expectations equilibrium. Secondly, the paper establishes the strong consistency of the OLS estimators, under both constant and decreasing gain learning, as well as rates at which the estimators converge almost surely. In the constant gain model, separate estimators for the intercept and slope parameters are juxtaposed with the joint estimator, drawing on the recent literature on explosive autoregressive models. Thirdly, it is emphasised that strong consistency is obtained in all models although the near-optimal condition for the strong consistency of OLS in linear regression models with stochastic regressors, established by Lai & Wei (1982a), is not always met.
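For readers less familiar with this setting, the following is a minimal simulation sketch (in Python) of a prototypical adaptive-learning regression of the kind the abstract describes. The specification y_t = alpha + delta * a_{t-1} + eps_t, the expectation recursion a_t = a_{t-1} + gamma_t * (y_t - a_{t-1}), and all parameter values are illustrative assumptions, not taken from the paper; the sketch only shows how the regressor is generated under constant versus decreasing gain and how the joint OLS estimator of intercept and slope is computed.

# Illustrative sketch only: model specification, gain sequences and parameter
# values are assumptions chosen for demonstration, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def simulate_and_estimate(T=5000, alpha=1.0, delta=0.5, gain="decreasing", gamma0=0.05):
    a = np.empty(T + 1)   # agents' expectations a_t
    y = np.empty(T)       # observed outcomes y_t
    a[0] = 0.0            # arbitrary initial belief
    for t in range(T):
        y[t] = alpha + delta * a[t] + rng.standard_normal()
        # gain sequence: constant gamma0, or decreasing 1/(t+1)
        gamma = gamma0 if gain == "constant" else 1.0 / (t + 1)
        a[t + 1] = a[t] + gamma * (y[t] - a[t])
    # joint OLS of y_t on (1, a_{t-1})
    X = np.column_stack([np.ones(T), a[:T]])
    (alpha_hat, delta_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
    return alpha_hat, delta_hat, a[-1]

for gain in ("decreasing", "constant"):
    alpha_hat, delta_hat, a_T = simulate_and_estimate(gain=gain)
    print(f"{gain} gain: alpha_hat={alpha_hat:.3f}, delta_hat={delta_hat:.3f}, a_T={a_T:.3f}")

Under the assumed values, decreasing gain drives a_t towards the rational expectations equilibrium alpha/(1 - delta) = 2, so the two regressors (the constant and a_{t-1}) become nearly collinear, which is the difficulty the strong consistency results address; under the assumed constant gain, a_t keeps fluctuating around the equilibrium rather than converging to it, in line with the non-convergence of the recursion noted in the abstract.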

Original language: English
Pages (from-to): 1646-1693
Number of pages: 48
Journal: Electronic Journal of Statistics
Volume: 13
Issue number: 1
DOI: 10.1214/19-EJS1558
Publication status: Published - 1 Jan 2019

Keywords

  • Adaptive learning
  • Almost sure convergence
  • Non-stationary regression
  • Ordinary least squares

Cite this

Christopeit, Norbert; Massmann, Michael. Strong consistency of the least squares estimator in regression models with adaptive learning. In: Electronic Journal of Statistics. 2019; Vol. 13, No. 1, pp. 1646-1693.
@article{3224c87c87884ba082b54c7751360d95,
title = "Strong consistency of the least squares estimator in regression models with adaptive learning",
abstract = "This paper looks at the strong consistency of the ordinary least squares (OLS) estimator in linear regression models with adaptive learning. It is a companion to Christopeit & Massmann (2018) which considers the estimator’s convergence in distribution and its weak consistency in the same setting. Under constant gain learning, the model is closely related to stationary, (alternating) unit root or explosive autoregressive processes. Under decreasing gain learning, the regressors in the model are asymptotically collinear. The paper examines, first, the issue of strong convergence of the learning recursion: It is argued that, under constant gain learning, the recursion does not converge in any probabilistic sense, while for decreasing gain learning rates are derived at which the recursion converges almost surely to the rational expectations equilibrium. Secondly, the paper establishes the strong consistency of the OLS estimators, under both constant and decreasing gain learning, as well as rates at which the estimators converge almost surely. In the constant gain model, separate estimators for the intercept and slope parameters are juxtaposed to the joint estimator, drawing on the recent literature on explosive autoregressive models. Thirdly, it is emphasised that strong consistency is obtained in all models although the near-optimal condition for the strong consistency of OLS in linear regression models with stochastic regressors, established by Lai & Wei (1982a), is not always met.",
keywords = "Adaptive learning, Almost sure convergence, Non-stationary regression, Ordinary least squares",
author = "Norbert Christopeit and Michael Massmann",
year = "2019",
month = "1",
day = "1",
doi = "10.1214/19-EJS1558",
language = "English",
volume = "13",
pages = "1646--1693",
journal = "Electronic Journal of Statistics",
issn = "1935-7524",
publisher = "Institute of Mathematical Statistics",
number = "1",

}

TY - JOUR

T1 - Strong consistency of the least squares estimator in regression models with adaptive learning

AU - Christopeit, Norbert

AU - Massmann, Michael

PY - 2019/1/1

Y1 - 2019/1/1

KW - Adaptive learning

KW - Almost sure convergence

KW - Non-stationary regression

KW - Ordinary least squares

UR - http://www.scopus.com/inward/record.url?scp=85067004516&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85067004516&partnerID=8YFLogxK

U2 - 10.1214/19-EJS1558

DO - 10.1214/19-EJS1558

M3 - Article

VL - 13

SP - 1646

EP - 1693

JO - Electronic Journal of Statistics

JF - Electronic Journal of Statistics

SN - 1935-7524

IS - 1

ER -