Judea Pearl, the inventor of Bayesian networks, recently published a book called The Book of Why: The New Science of Cause and Effect. The book covers a great many things, including a detailed history of how the fields of causality and statistics have long been at odds, Pearl’s own do-calculus framework for teasing causal inferences from observational data, and why (in Pearl’s view) the future of AI depends on causality.
One of the key points in Pearl’s book is that observational data (data collected from real-world systems) can, on its own, convey only associations between variables. To learn which variables in the data act as causes, and which are effects of those causes, we need something more. The implications are profound. A pharmaceutical company can never tell whether a particular drug is an effective treatment for a disease simply by observing the outcomes of patients who have taken it. Scientists cannot prove that smoking causes lung cancer merely by observing the outcomes of smokers and non-smokers. And yet both are things that we, as a society, can do today.
The traditional method for establishing cause and effect is the randomized controlled trial, in which test subjects are randomly assigned to a treatment group or a control group. To prove the effectiveness of a drug, for example, patients are randomly assigned to receive the drug or not. Randomization guarantees that, in expectation, the two groups are alike in every respect except the treatment. If the outcomes of the treatment group are then better than those of the control group, you can conclude that the treatment caused the improvement.
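The logic of randomization can be sketched in a few lines of simulation. Everything below is invented for illustration (the baseline recovery rate, the treatment effect, the sample size); the point is only that random assignment makes the difference in group means an unbiased estimate of the treatment effect:

```python
import random

random.seed(0)

# Hypothetical trial: every subject has the same baseline chance of a
# good outcome, and the treatment adds a fixed boost. All numbers are
# invented for this sketch.
def simulate_trial(n=100_000, baseline=0.30, effect=0.10):
    treated, control = [], []
    for _ in range(n):
        group = random.choice(["treatment", "control"])  # random assignment
        p = baseline + (effect if group == "treatment" else 0.0)
        outcome = 1 if random.random() < p else 0
        (treated if group == "treatment" else control).append(outcome)
    # Because randomization balances everything else across the groups,
    # the difference in mean outcomes estimates the causal effect.
    return sum(treated) / len(treated) - sum(control) / len(control)

print(round(simulate_trial(), 2))  # close to the true effect of 0.10
```

With enough subjects, the estimate converges on the true effect; no knowledge of the subjects' other attributes is needed, which is exactly what randomization buys you.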
But randomized controlled trials are expensive, slow, and oftentimes impossible. You cannot ethically force a group of patients to smoke for a lifetime simply for the sake of proving that smoking causes cancer. And indeed, this is precisely the dilemma that made it extremely difficult to prove that smoking causes cancer, a topic Pearl covers in exquisite detail in his book. On top of that, observational data is cheap and plentiful. Is there truly no way to tease causality from observational data?
In a recent panel on causality from the Machine Learning Summer School in South Africa, Columbia University professor David Blei explained that this conundrum is precisely what has motivated him to pursue causality in his research.
“When you sit down and read all the books about causal inference and all the papers about it, it’s very theoretical but there’s one message that you get, from the historical perspective anyway, which is that causal inference from observational data is impossible . . . To me that seemed silly, that with, say you’re a hospital and you have 250 million electronic health records of what medicines people received and what happened to those people. It seemed silly to say that it is impossible to learn, say that Advil helps headaches.”
There are many ways in which causal understanding could improve the fields of machine learning and AI, and the ability to reliably infer causation from observational data is a hot topic. Judea Pearl’s do-calculus is primarily a framework for doing just that. Under the right conditions and with some assumptions, causality can be inferred from purely observational data.
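One of the "right conditions" is the so-called backdoor criterion: if you observe every confounder that influences both the treatment and the outcome, stratifying on it recovers the causal effect from purely observational data. Here is a minimal sketch on synthetic data; the generative model and all of its numbers are invented for illustration, not taken from Pearl's book:

```python
import random

random.seed(1)

# Hypothetical model: confounder Z drives both treatment X and outcome Y,
# and X itself adds a true causal boost of 0.2 to P(Y=1).
def draw():
    z = 1 if random.random() < 0.5 else 0
    x = 1 if random.random() < (0.8 if z else 0.2) else 0  # Z drives X
    p_y = 0.2 + 0.2 * x + 0.4 * z                          # Z drives Y too
    y = 1 if random.random() < p_y else 0
    return x, y, z

data = [draw() for _ in range(200_000)]

def p_y_given(x, z):
    rows = [y for (xi, y, zi) in data if xi == x and zi == z]
    return sum(rows) / len(rows)

# Naive contrast of outcomes by treatment is confounded by Z.
naive = (sum(y for x, y, _ in data if x == 1) / sum(1 for x, _, _ in data if x == 1)
         - sum(y for x, y, _ in data if x == 0) / sum(1 for x, _, _ in data if x == 0))

# Backdoor adjustment: average the within-stratum contrasts, weighted by P(Z=z).
p_z1 = sum(z for _, _, z in data) / len(data)
adjusted = sum((p_y_given(1, z) - p_y_given(0, z)) * pz
               for z, pz in ((0, 1 - p_z1), (1, p_z1)))

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")  # adjusted is close to 0.20
```

The naive comparison overstates the effect because treated subjects disproportionately have Z = 1; the adjusted estimate recovers the true effect. The catch, of course, is the assumption that Z is the only confounder and that it was measured.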
A machine learning model that captures the causal relationships in its data is more likely to generalize to new settings, one of the most difficult challenges in machine learning. A model that associates the rising of the sun with the crow of a rooster may adequately predict when the sun will rise: if the rooster has just crowed, the sun will rise shortly thereafter. This model will not, however, generalize to settings where there is no rooster. There, it would never predict a sunrise, because it has never observed such a data point. A model that instead captured the causal relationship between the two, namely that the imminent sunrise causes the rooster’s crow, would know that the sun will rise even without the rooster.
Causality is also tightly related to fairness in machine learning, a topic we care deeply about. In The Book of Why, Pearl discusses the “Berkeley admissions paradox,” the story of a statistician’s attempt in the 1970s to detect potential discrimination against women in admissions at UC Berkeley. Pearl shows how traditional statistics applied to observational data alone can lead to competing conclusions: depending on how you slice the data, you can conclude that the university discriminated against women or in their favor. Only the language of causality lets us draw the correct conclusion.
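The reversal Pearl describes is an instance of Simpson’s paradox, and it is easy to reproduce with toy numbers. The figures below are invented for illustration, not Berkeley’s actual data; the trick is that more women applied to the harder department:

```python
# Hypothetical admissions counts illustrating Simpson's paradox.
admissions = {
    # department: {gender: (admitted, applied)}
    "easy_dept": {"men": (600, 800), "women": (80, 100)},
    "hard_dept": {"men": (20, 100),  "women": (200, 800)},
}

def rate(admitted, applied):
    return admitted / applied

# Per-department rates: women are admitted at a HIGHER rate in both.
for dept, by_gender in admissions.items():
    m = rate(*by_gender["men"])
    w = rate(*by_gender["women"])
    print(f"{dept}: men {m:.0%}, women {w:.0%}")

# Aggregated rates: women appear to be admitted at a LOWER rate overall,
# because they disproportionately applied to the harder department.
totals = {g: [sum(x) for x in zip(*(d[g] for d in admissions.values()))]
          for g in ("men", "women")}
for g, (admitted, applied) in totals.items():
    print(f"overall {g}: {rate(admitted, applied):.0%}")
```

Whether the per-department or the aggregate view is the "right" one depends on the causal story (does gender affect department choice, which in turn affects admission?), which is exactly why statistics alone cannot settle the question.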
The role of causality in AI and machine learning is a controversial topic, and Pearl has no problem stoking that controversy in his book. Regardless, The Book of Why has helped revive the topic of causality in the ML and AI communities. At the recent Machine Learning Summer School in South Africa, there were multiple sessions on causality. At the recent Fairness, Accountability, and Transparency conference, multiple discussions were devoted to causality. Textbooks on causality are being published, and job postings asking for causal inference expertise are popping up. Though the immediate future of causality in machine learning is likely (still) limited to randomized controlled trials like A/B testing, the potential to draw causal conclusions from near-unlimited quantities of observational data is too great to ignore. Finally, Pearl argues that cause and effect are the key mechanisms through which humans process the complex world around them, and that we can never reach true artificial general intelligence without equipping machines with notions of cause and effect.