Causal discovery through machine learning is a major open challenge of utmost importance. A central lesson from causality research is that, to achieve causal discovery, one needs to perform interventions whenever such an option is available. In this paper, we develop an RL agent that performs causal discovery by learning to intervene on previously unseen environments and constructing a causal model. Beyond the resulting graph's usefulness in explaining the data-generating process, the construction of the graph is itself explainable at every step; that is, we can trace and pin down why and how the graph is constructed. We conduct an ablation study to understand how much interventional learning contributes to our generalisation performance. We further show that our agent compares favourably to state-of-the-art algorithms in both accuracy and run-time efficiency, even in the presence of varying degrees of uncertainty.
|Publication status||Submitted - Nov 2021|
- Causal discovery
- Reinforcement learning