oai:arXiv.org:2406.16708
Computer Science
2024
6/26/2024
Temporal causal discovery is a crucial task aimed at uncovering the causal relations within time series data.
Recent temporal causal discovery methods typically train deep learning models on prediction tasks to uncover the causality between time series.
They capture causal relations by analyzing the parameters of some components of the trained models, e.g., attention weights and convolution weights.
However, this mapping from model parameters to causality is incomplete, as it fails to account for the other components, e.g., fully connected layers and activation functions, which are also significant for causal discovery.
To facilitate the utilization of whole deep learning models in temporal causal discovery, we propose an interpretable transformer-based causal discovery model termed CausalFormer, which consists of a causality-aware transformer and a decomposition-based causality detector.
The causality-aware transformer learns causal representations of time series data via a prediction task, using the designed multi-kernel causal convolution, which aggregates each input time series along the temporal dimension under the temporal priority constraint (i.e., causes precede their effects).
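To make the temporal priority constraint concrete, below is a minimal PyTorch sketch of a left-padded (causal) 1-D convolution in which the output at time t depends only on inputs at times <= t. The class name, the per-series grouping (a loose echo of the multi-kernel, per-series aggregation idea), and all hyperparameters are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    # Left-padded 1-D convolution: the output at time t sees only inputs at
    # times <= t, enforcing the temporal priority constraint.
    def __init__(self, n_series, kernel_size):
        super().__init__()
        self.pad = kernel_size - 1
        # groups=n_series gives each input series its own kernel, loosely
        # mirroring per-series temporal aggregation (an assumption here).
        self.conv = nn.Conv1d(n_series, n_series, kernel_size, groups=n_series)

    def forward(self, x):            # x: (batch, n_series, time)
        x = F.pad(x, (self.pad, 0))  # pad on the left only, never the future
        return self.conv(x)

x = torch.randn(8, 3, 50)            # 3 series, 50 time steps
y = CausalConv1d(n_series=3, kernel_size=5)(x)
print(y.shape)                       # torch.Size([8, 3, 50])
```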
Then, the decomposition-based causality detector interprets the global structure of the trained causality-aware transformer using the proposed regression relevance propagation to identify potential causal relations and construct the causal graph.
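As a rough illustration of relevance propagation, the following sketch applies the standard epsilon rule of layer-wise relevance propagation to a single linear layer; the function name and epsilon value are assumptions, and the paper's regression relevance propagation performs this kind of decomposition over the full causality-aware transformer rather than one layer.

```python
import torch

def lrp_linear(a, weight, relevance, eps=1e-6):
    # Epsilon-rule relevance propagation through one linear layer:
    # redistribute the output relevance to the inputs in proportion to
    # each input's contribution to the pre-activation.
    # a: (in,) activations, weight: (out, in), relevance: (out,)
    z = weight @ a                        # pre-activations, shape (out,)
    s = relevance / (z + eps * z.sign())  # stabilized relevance per output
    return a * (weight.t() @ s)           # input relevances, shape (in,)

# Toy usage: relevance of a scalar regression output w.r.t. 4 inputs.
a = torch.randn(4)
w = torch.randn(1, 4)
r_out = w @ a                             # start from the prediction itself
r_in = lrp_linear(a, w, r_out)
print(r_in.sum(), r_out)                  # relevance is (approximately) conserved
```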
Experiments on synthetic, simulated, and real datasets demonstrate the state-of-the-art performance of CausalFormer on discovering temporal causality.
Our code is available at https://github.com/lingbai-kong/CausalFormer.
Kong, Lingbai; Li, Wengen; Yang, Hanchen; Zhang, Yichao; Guan, Jihong; Zhou, Shuigeng, 2024, CausalFormer: An Interpretable Transformer for Temporal Causal Discovery