Abstract
Numerical standard error (NSE) is an estimate of the standard deviation of a simulation result, were the simulation experiment to be repeated many times. We review standard methods for computing NSE and perform Monte Carlo experiments to compare their performance in the case of high/extreme autocorrelation. In particular, we present an application to risk management in which we assess the precision of the value-at-risk measure when the underlying risk model is estimated by simulation-based methods. Overall, heteroskedasticity and autocorrelation consistent (HAC) estimators with prewhitening perform best in the presence of high/extreme autocorrelation.
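To make the idea concrete, below is a minimal Python sketch of one standard approach: the NSE of a sample mean computed from a Bartlett-kernel (Newey-West) long-run variance, optionally with AR(1) prewhitening in the spirit of Andrews-Monahan. This is an illustrative sketch, not the paper's implementation; the rule-of-thumb truncation lag and the AR(1) prewhitening filter are assumptions chosen for clarity.

```python
import numpy as np

def hac_long_run_variance(x, lag):
    """Bartlett-kernel (Newey-West) long-run variance of a demeaned series."""
    n = x.size
    lrv = x @ x / n  # lag-0 autocovariance
    for k in range(1, lag + 1):
        gamma_k = x[k:] @ x[:-k] / n
        lrv += 2.0 * (1.0 - k / (lag + 1.0)) * gamma_k  # Bartlett weight
    return lrv

def nse(draws, prewhiten=True, lag=None):
    """NSE of the sample mean of `draws` (e.g. an MCMC chain)."""
    x = np.asarray(draws, dtype=float)
    n = x.size
    x = x - x.mean()
    if lag is None:
        # Common rule-of-thumb truncation lag (an assumption, not the paper's choice).
        lag = int(4 * (n / 100.0) ** (2.0 / 9.0))
    if prewhiten:
        # AR(1) prewhitening: filter out first-order dependence, apply the
        # kernel to the residuals, then recolour the estimate.
        rho = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
        resid = x[1:] - rho * x[:-1]
        lrv = hac_long_run_variance(resid - resid.mean(), lag) / (1.0 - rho) ** 2
    else:
        lrv = hac_long_run_variance(x, lag)
    return np.sqrt(lrv / n)

# Example: a highly autocorrelated AR(1) chain, where the prewhitened
# estimate is much closer to the true NSE than the plain kernel estimate.
rng = np.random.default_rng(0)
chain = np.empty(100_000)
chain[0] = 0.0
for t in range(1, chain.size):
    chain[t] = 0.99 * chain[t - 1] + rng.standard_normal()
print(nse(chain, prewhiten=True), nse(chain, prewhiten=False))
```

For this chain the true NSE of the mean is about 0.32; the plain kernel estimate with a short rule-of-thumb lag understates it substantially, which is the behaviour the paper's Monte Carlo comparison targets.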
Original language | English |
---|---|
Article number | 20170011 |
Pages (from-to) | 1-9 |
Number of pages | 9 |
Journal | Journal of Time Series Econometrics |
Volume | 10 |
Issue number | 2 |
Early online date | 21 Jul 2018 |
DOIs | |
Publication status | Published - 26 Jul 2018 |
Keywords
- bootstrap
- GARCH
- HAC kernel
- Markov chain Monte Carlo (MCMC)
- Monte Carlo
- numerical standard error (NSE)
- spectral density
- value-at-risk
- Welch