
Causal AI Book

Causal AI is Robert Osazuwa Ness' book on causality. This page contains links to tutorials, notebooks, references and errata.

Chapter 1: Introduction

Book recommendations

This book takes an opinionated approach to causality that focuses on graphs, probabilistic machine learning, Bayesian decision-making, and deep learning tools such as PyTorch.

For books with alternative perspectives that focus on econometrics, social science, and practical data science themes, check out:

Key references in the chapter
  • D'Amour, A., Heller, K., Moldovan, D., Adlam, B., Alipanahi, B., Beutel, A., Chen, C., Deaton, J., Eisenstein, J., Hoffman, M.D. and Hormozdiari, F., 2020. Underspecification presents challenges for credibility in modern machine learning. arXiv preprint arXiv:2011.03395.

Chapter 2: Primer on probability modeling

Our course on probabilistic machine learning covers in detail the elements of Bayesian and probabilistic inference introduced in this chapter.

Book recommendations

Chapter 3: Building a causal graphical model

Causal abstraction
  • Beckers, S. and Halpern, J.Y., 2019, July. Abstracting causal models. In Proceedings of the AAAI conference on artificial intelligence (Vol. 33, No. 01, pp. 2678-2685).
  • Beckers, S., Eberhardt, F. and Halpern, J.Y., 2020, August. Approximate causal abstractions. In Uncertainty in artificial intelligence (pp. 606-615). PMLR.
  • Rischel, E.F. and Weichwald, S., 2021, December. Compositional abstraction error and a category of causal models. In Uncertainty in Artificial Intelligence (pp. 1013-1023). PMLR.
Independence of mechanisms
  • Schölkopf, B., Janzing, D., Peters, J., Sgouritsa, E., Zhang, K. and Mooij, J., 2012. On causal and anticausal learning. arXiv preprint arXiv:1206.6471.
  • Rojas-Carulla, M., Schölkopf, B., Turner, R. and Peters, J., 2018. Invariant models for causal transfer learning. Journal of Machine Learning Research, 19(36), pp.1-34.
  • Besserve, M., Shajarisales, N., Schölkopf, B. and Janzing, D., 2018, March. Group invariance principles for causal generative models. In International Conference on Artificial Intelligence and Statistics (pp. 557-565). PMLR.
  • Parascandolo, G., Kilbertus, N., Rojas-Carulla, M. and Schölkopf, B., 2018, July. Learning independent causal mechanisms. In International Conference on Machine Learning (pp. 4036-4044). PMLR.
Causal data fusion and transfer learning
  • Bareinboim, E. and Pearl, J., 2016. Causal inference and the data-fusion problem. Proceedings of the National Academy of Sciences, 113(27), pp.7345-7352.
  • Rojas-Carulla, M., Schölkopf, B., Turner, R. and Peters, J., 2018. Invariant models for causal transfer learning. Journal of Machine Learning Research, 19(36), pp.1-34.
  • Magliacane, S., van Ommen, T., Claassen, T., Bongers, S., Versteeg, P. and Mooij, J.M., 2017. Causal transfer learning. arXiv preprint arXiv:1707.06422.
Causally invariant prediction
  • Arjovsky, M., Bottou, L., Gulrajani, I. and Lopez-Paz, D., 2019. Invariant risk minimization. arXiv preprint arXiv:1907.02893.
  • Heinze-Deml, C., Peters, J. and Meinshausen, N., 2018. Invariant causal prediction for nonlinear models. Journal of Causal Inference, 6(2), p.20170016.
  • Rosenfeld, E., Ravikumar, P. and Risteski, A., 2020. The risks of invariant risk minimization. arXiv preprint arXiv:2010.05761.
  • Lu, C., Wu, Y., Hernández-Lobato, J.M. and Schölkopf, B., 2021. Nonlinear invariant risk minimization: A causal approach. arXiv preprint arXiv:2102.12353.

Chapter 4: Testing your causal graph

Tools for d-separation
  • NetworkX's d_separation algorithm (see the sketch after this list).
  • pgmpy's get_independencies method enumerates all d-separations in a DAG.
  • Dagitty.net provides an online application for building DAGs and evaluating d-separation, as well as an R package.
  • The dsep function in the bnlearn R package evaluates simple d-separation statements as true or false.
  • Causalfusion is an online app like Dagitty but with a valuable set of additional features. You need to apply for access.
Background on statistical hypothesis testing
Tools for causal discovery
  • The PyWhy suite contains the popular causal-learn library for causal discovery (a minimal sketch follows this list).
  • PyWhy also contains an experimental library called dodiscover, which focuses on providing a user-friendly interface for causal discovery.
  • The bnlearn package is an R package with a corresponding Python library.
False discovery rate and causal discovery
  • Wikipedia page on the multiple comparisons problem that occurs when doing repeated hypothesis testing. Standard statistical remedies are to apply a family-wise error rate correction or to control the false discovery rate (see the sketch after this list).
  • Peña, J.M., 2008, March. Learning Gaussian graphical models of gene networks with false discovery rate control. In European Conference on Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics (pp. 165-176). Springer.
  • The bnlearn package implements the interleaved incremental association algorithm with FDR in the iamb.fdr function.
  • Gasse, M., Aussem, A. and Elghazel, H., 2014. A hybrid algorithm for Bayesian network structure learning with application to multi-label learning. Expert Systems with Applications, 41(15), pp.6755-6772.
Go deeper on functional constraints (Verma constraints)
  • Tian, J. and Pearl, J., 2012. On the testable implications of causal models with hidden variables. arXiv preprint arXiv:1301.0608.
  • Bhattacharya, R. and Nabi, R., 2022, August. On testability of the front-door model via Verma constraints. In Uncertainty in Artificial Intelligence (pp. 202-212). PMLR.
More on causal faithfulness
  • Zhang, J. and Spirtes, P., 2016. The three faces of faithfulness. Synthese, 193, pp.1011-1027.
  • Ramsey, J., Zhang, J. and Spirtes, P.L., 2012. Adjacency-faithfulness and conservative causal inference. arXiv preprint arXiv:1206.6843.
Selected readings on causal discovery
  • Spirtes, P., 2001, January. An anytime algorithm for causal inference. In International Workshop on Artificial Intelligence and Statistics (pp. 278-285). PMLR.
  • Spirtes, P., Glymour, C. and Scheines, R., 2001. Causation, prediction, and search. MIT press. (Introduces the PC algorithm, though one might enjoy this intro by Brady Neal)
  • Chickering, D.M., 2002. Optimal structure identification with greedy search. Journal of machine learning research, 3(Nov), pp.507-554.
  • Friedman, N. and Koller, D., 2003. Being Bayesian about network structure. A Bayesian approach to structure discovery in Bayesian networks. Machine learning, 50, pp.95-125.
  • Heckerman, D., Meek, C. and Cooper, G., 2006. A Bayesian approach to causal discovery. Innovations in Machine Learning: Theory and Applications, pp.1-28.
  • Meek, C., 2013. Causal inference and causal explanation with background knowledge. arXiv preprint arXiv:1302.4972.
  • Cooper, G.F. and Yoo, C., 2013. Causal discovery from a mixture of experimental and observational data. arXiv preprint arXiv:1301.6686.
  • Ogarrio, J.M., Spirtes, P. and Ramsey, J., 2016, August. A hybrid causal search algorithm for latent variable models. In Conference on probabilistic graphical models (pp. 368-379). PMLR.
  • Ness, R.O., Sachs, K. and Vitek, O., 2016. From correlation to causality: statistical approaches to learning regulatory relationships in large-scale biomolecular investigations. Journal of Proteome Research, 15(3), pp.683-690.
  • Glymour, C., Zhang, K. and Spirtes, P., 2019. Review of causal discovery methods based on graphical models. Frontiers in genetics, 10, p.524.
  • Zheng, Y., Huang, B., Chen, W., Ramsey, J., Gong, M., Cai, R., Shimizu, S., Spirtes, P. and Zhang, K., 2024. Causal-learn: Causal discovery in python. Journal of Machine Learning Research, 25(60), pp.1-8.

Chapter 5: Building causal graphs with deep probabilistic machine learning

Causal and anticausal learning
Vision as Inverse Graphics
  • Intro to Vision as Inverse Graphics from Max Planck Institute for Intelligent Systems
  • Romaszko, L., Williams, C.K., Moreno, P. and Kohli, P., 2017. Vision-as-inverse-graphics: Obtaining a rich 3d explanation of a scene from a single image. In Proceedings of the IEEE International Conference on Computer Vision Workshops (pp. 851-859).
Causal representation learning and disentanglement
  • Intro to Causal Representation Learning from Max Planck Institute for Intelligent Systems
  • Kumar, A., Sattigeri, P. and Balakrishnan, A., 2017. Variational inference of disentangled latent concepts from unlabeled observations. arXiv preprint arXiv:1711.00848.
  • Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Schölkopf, B. and Bachem, O., 2019, May. Challenging common assumptions in the unsupervised learning of disentangled representations. In international conference on machine learning (pp. 4114-4124). PMLR.
  • Yang, M., Liu, F., Chen, Z., Shen, X., Hao, J. and Wang, J., 2020. Causalvae: Structured causal disentanglement in variational autoencoder. arXiv preprint arXiv:2004.08697.
  • Wang, Y. and Jordan, M.I., 2021. Desiderata for representation learning: A causal perspective. arXiv preprint arXiv:2109.03795.
  • Ahuja, K., Hartford, J. and Bengio, Y., 2021. Properties from mechanisms: an equivariance perspective on identifiable representation learning. arXiv preprint arXiv:2110.15796.
  • Reddy, A.G. and Balasubramanian, V.N., 2022, June. On causally disentangled representations. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 7, pp. 8089-8097).
Miscellaneous
  • The Ali Rahimi quote comparing machine learning to alchemy can be found at the 11:59 mark in this video of his NIPS 2017 Test-of-Time Award presentation.

Chapter 6: Structural Causal Models

Chapter 7: Interventions

Chapter 8: Counterfactuals

  • Chapter 8 notebooks
  • Pearl, J., 2010. Brief report: On the consistency rule in causal inference: Axiom, definition, assumption, or theorem? Epidemiology, pp.872-875.
  • Beckers, S., 2021. Causal sufficiency and actual causation. Journal of Philosophical Logic, 50(6), pp.1341-1374.
  • Knobe, J. and Shapiro, S., 2021. Proximate cause explained. The University of Chicago Law Review, 88(1), pp.165-236.

Chapter 9: The Counterfactual Inference Algorithm

Intractable likelihood methods in probabilistic inference
  • Papamakarios, G., Nalisnick, E., Rezende, D.J., Mohamed, S. and Lakshminarayanan, B., 2021. Normalizing flows for probabilistic modeling and inference. Journal of Machine Learning Research, 22(1), pp.2617-2680.
  • Matsubara, T., Knoblauch, J., Briol, F.X. and Oates, C.J., 2022. Robust generalised Bayesian inference for intractable likelihoods. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(3), pp.997-1022.
  • Ritchie, D., Horsfall, P. and Goodman, N.D., 2016. Deep amortized inference for probabilistic programs. arXiv preprint arXiv:1610.05735.
  • Murphy, K.P., 2023. Probabilistic machine learning: Advanced topics. MIT Press.

Chapter 10: Causal Hierarchy and Identification

Do-calculus, Pearl's Causal Hierarchy and Identification algorithms
  • The Y0 repository for causal inference and identification
  • Bareinboim, E., Correa, J.D., Ibeling, D. and Icard, T., 2022. On Pearl’s hierarchy and the foundations of causal inference. In Probabilistic and Causal Inference: The Works of Judea Pearl (pp. 507-556).
  • Shpitser, I. and Pearl, J., 2006, July. Identification of joint interventional distributions in recursive semi-Markovian causal models. In Proceedings of the National Conference on Artificial Intelligence (Vol. 21, No. 2, p. 1219). AAAI Press.
  • Shpitser, I. and Pearl, J., 2008. Complete identification methods for the causal hierarchy. Journal of Machine Learning Research, 9, pp.1941-1979.
  • Huang, Y. and Valtorta, M., 2006. Pearl's calculus of intervention is complete. arXiv preprint arXiv:1206.6831.
Potential outcomes, single world intervention graphs, and related concepts
  • Malinsky, D., Shpitser, I. and Richardson, T., 2019, April. A potential outcomes calculus for identifying conditional path-specific effects. In The 22nd International Conference on Artificial Intelligence and Statistics (pp. 3080-3088). PMLR.
  • Shpitser, I., Richardson, T.S. and Robins, J.M., 2022. Multivariate counterfactual systems and causal graphical models. In Probabilistic and Causal Inference: The Works of Judea Pearl (pp. 813-852).
  • Robins, J.M. and Richardson, T.S., 2010. Alternative graphical causal models and the identification of direct effects. Causality and psychopathology: Finding the determinants of disorders and their cures, 84, pp.103-158.
  • Richardson, T.S. and Robins, J.M., 2013, July. Single world intervention graphs: a primer. In Second UAI workshop on causal structure learning, Bellevue, Washington.
  • Robins, J., VanderWeele, T.J. and Richardson, T.S., 2007. Contribution to the discussion of 'Causal effects in the presence of non-compliance: a latent variable interpretation' by A. Forcina. Metron, LXIV(3), pp.288-298.
  • Geneletti, S. and Dawid, A.P., 2007. Defining and identifying the effect of treatment on the treated (Tech. Rep. No. 3). Imperial College London, Department of Epidemiology and Public Health.
Identification of effect of treatment on the treated
  • Shpitser, I. and Tchetgen, E.T., 2016. Causal inference with a graphical hierarchy of interventions. Annals of statistics, 44(6), p.2433.
Partial Identification Bounds
  • Mueller, S. and Pearl, J., 2022. Personalized decision making: A conceptual introduction. arXiv preprint arXiv:2208.09558.
  • Li, A. and Pearl, J., 2022. Probabilities of causation with nonbinary treatment and effect. arXiv preprint arXiv:2208.09568.
  • Li, A. and Pearl, J., 2022, June. Unit selection with causal diagram. In Proceedings of the AAAI conference on artificial intelligence (Vol. 36, No. 5, pp. 5765-5772).

Chapter 11: Building a Causal Effect Estimation Workflow

Chapter 12: Building a Causal Effect Estimation Workflow

Chapter 13: Causality and Large Language Models

Key references in the chapter
  • Kıcıman, E., Ness, R., Sharma, A. and Tan, C., 2023. Causal reasoning and large language models: Opening a new frontier for causality. arXiv preprint arXiv:2305.00050.