Methods for computing numerical standard errors: Review and application to value-at-risk estimation

David Ardia*, Keven Bluteau, Lennart F. Hoogerheide

*Corresponding author for this work

Research output: Contribution to Journal › Article › Academic › peer-review



The numerical standard error (NSE) is an estimate of the standard deviation of a simulation result if the simulation experiment were to be repeated many times. We review standard methods for computing the NSE and perform Monte Carlo experiments to compare their performance in the case of high/extreme autocorrelation. In particular, we propose an application to risk management in which we assess the precision of the value-at-risk measure when the underlying risk model is estimated by simulation-based methods. Overall, heteroscedasticity and autocorrelation (HAC) estimators with prewhitening perform best in the presence of large/extreme autocorrelation.
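To illustrate the NSE concept described in the abstract, the following is a minimal sketch (not taken from the paper) of one standard NSE estimator, the batch-means method, applied to a simulated AR(1) series. The function name, batch count, and AR(1) parameters are illustrative assumptions; the point is that a naive standard error that ignores autocorrelation understates the true uncertainty of the simulation average.

```python
import numpy as np

def nse_batch_means(x, n_batches=30):
    """NSE of mean(x) estimated from non-overlapping batch means.

    Illustrative sketch: splits the draws into batches whose means are
    approximately independent, then uses their spread to estimate the
    standard deviation of the overall simulation average.
    """
    n = len(x) // n_batches * n_batches
    batch_means = x[:n].reshape(n_batches, -1).mean(axis=1)
    # Standard error of the grand mean, treating batch means as i.i.d.
    return batch_means.std(ddof=1) / np.sqrt(n_batches)

# Simulate a strongly autocorrelated AR(1) chain: x_t = phi * x_{t-1} + eps_t
rng = np.random.default_rng(42)
phi, n = 0.9, 100_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

naive_se = x.std(ddof=1) / np.sqrt(n)  # ignores autocorrelation
nse = nse_batch_means(x)               # accounts for it via batching
```

With phi = 0.9 the batch-means NSE comes out several times larger than the naive standard error, which is exactly the gap the methods reviewed in the paper are designed to capture.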

Original language: English
Article number: 20170011
Pages (from-to): 1-9
Number of pages: 9
Journal: Journal of Time Series Econometrics
Issue number: 2
Early online date: 21 Jul 2018
Publication status: Published - 26 Jul 2018


Keywords

  • bootstrap
  • HAC kernel
  • Markov chain Monte Carlo (MCMC)
  • Monte Carlo
  • numerical standard error (NSE)
  • spectral density
  • value-at-risk
  • Welch
