Abstract
Neural networks have been shown to be extremely effective rainfall-runoff models, in which river discharge is predicted from meteorological inputs. However, the question remains: what have these models learned? Is it possible to extract information about the learned relationships that map inputs to outputs, and do these mappings represent known hydrological concepts? Small-scale experiments have demonstrated that the internal states of long short-term memory networks (LSTMs), a neural network architecture well suited to hydrological modelling, can be interpreted. By extracting the tensors that represent the learned translation from inputs (precipitation, temperature, and potential evapotranspiration) to outputs (discharge), this research seeks to understand what information the LSTM captures about the hydrological system. We assess the hypothesis that the LSTM replicates real-world processes and that we can extract information about these processes from its internal states. We examine the cell-state vector, which represents the memory of the LSTM, and explore the ways in which the LSTM learns to reproduce stores of water, such as soil moisture and snow cover. We use a simple regression approach, a probe, to map the LSTM state vector to our target stores (soil moisture and snow). Good correlations (R² > 0.8) between the probe outputs and the target variables of interest provide evidence that the LSTM contains information that reflects known hydrological processes, comparable with the concept of variable-capacity soil moisture stores. The implications of this study are threefold: (1) LSTMs reproduce known hydrological processes. (2) Whereas conceptual models have theoretical assumptions embedded a priori, the LSTM derives these relationships from the data, and the learned representations are interpretable by scientists. (3) LSTMs can be used to estimate intermediate stores of water such as soil moisture. While machine learning interpretability is still a nascent field and our approach reflects a simple technique for exploring what the model has learned, the results are robust to different initial conditions and to a variety of benchmarking experiments. We therefore argue that deep learning approaches can be used to advance our scientific goals as well as our predictive goals.
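The probing approach summarised above can be sketched in a few lines of code. This is not the authors' released code; it is a minimal illustration, assuming the LSTM cell states have already been extracted as a (time, hidden size) array and that a reference soil-moisture series is available. The variable names, the synthetic placeholder data, and the train/test split are all illustrative assumptions.

```python
# Minimal sketch of a linear "probe" from LSTM cell states to a target store.
# Assumes cell_states is a (n_timesteps, hidden_size) array extracted from a
# trained LSTM, and soil_moisture is a (n_timesteps,) reference series.
# Both are replaced here with synthetic placeholder data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_timesteps, hidden_size = 1000, 64
cell_states = rng.normal(size=(n_timesteps, hidden_size))          # placeholder states
soil_moisture = cell_states[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=n_timesteps)

# Chronological split so the probe is evaluated on time steps it was not fitted on.
split = int(0.8 * n_timesteps)
probe = LinearRegression().fit(cell_states[:split], soil_moisture[:split])
predicted = probe.predict(cell_states[split:])

# An R^2 above a threshold such as 0.8 (the level reported in the abstract) would
# indicate that the cell states linearly encode the target store.
print(f"Probe R^2: {r2_score(soil_moisture[split:], predicted):.3f}")
```

The point of using a plain linear regression is that any skill the probe shows must come from information already encoded in the cell states, not from the probe itself.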
| Original language | English |
|---|---|
| Pages (from-to) | 3079-3101 |
| Number of pages | 23 |
| Journal | Hydrology and Earth System Sciences |
| Volume | 26 |
| Issue number | 12 |
| Early online date | 20 Jun 2022 |
| DOIs | |
| Publication status | Published - 2022 |
Bibliographical note
Acknowledgements. The authors would like to thank the teams responsible for releasing CAMELS GB (Coxon et al., 2020b) and the authors and maintainers of the neuralhydrology code base for training machine learning models for rainfall-runoff modelling. Thomas Lees is supported by NPIF award NE/L002612/1; Simon J. Dadson is supported by NERC grant NE/S017380/1. We were further supported by Verbund AG for Daniel Klotz and by the Linz Institute of Technology DeepFlood project for Martin Gauch. Part of the research was developed in the Young Scientists Summer Program at the International Institute for Applied Systems Analysis, Laxenburg (Austria), with financial support from the United Kingdom National Member Organization.
Publisher Copyright:
© 2022 Thomas Lees et al.