It’s easy to understand why some people would see data mining as the finish rather than the first step. It promises a solution using available technology. It saves us, as well as future machines, the work of having to consider and articulate substantive assumptions about how the world operates. In some fields our knowledge may be in such an embryonic state that we have no clue how to begin drawing a model of the world. But Big Data will not solve this problem. The most important part of the answer must come from such a model, whether sketched by us or hypothesized and fine-tuned by machines. __ The Book of Why
Why has “machine intelligence” been stuck for decades at the level of data mining, data correlation, pattern recognition, and the like? Google, IBM, MIT, Apple, Amazon, China, and others are all trying to improve the efficiency of Rung 1 “thinking.”
It is a race to the bottom rung, by the world’s foremost “artificial intelligence” researchers.
In The Book of Why, philosopher and artificial intelligence researcher Judea Pearl reintroduces the concept of “causation” to the rational discussion of cognition, hoping to move machine intelligence to a higher level of potency.
I summarize the rungs of Pearl’s “Ladder of Causation” as follows (from the bottom rung to the top rung of the ladder):
Rung 1: Association (Seeing, Observing). Rung 1 deals with identifying regularities in past behavior. It seeks to identify and codify patterns, relationships and associations in past behaviors (data) in order to predict future behaviors (statistical prediction, not yet causal inference).
Rung 2: Intervention (Doing, Intervening). Rung 2 transitions from asking what happened to asking what would happen under possible interventions or different scenarios. It seeks to leverage the patterns, relationships and associations of the key entity (human or device) to predict what would happen and to prescribe potential actions under possible interventions (scenarios).
Rung 3: Counterfactuals (Imagining, Retrospection, Understanding). Rung 3 poses the “counterfactual” questions; that is, what would happen or what different outcome might have occurred if a different path had been taken. It seeks to leverage the key entity insights (behaviors, tendencies, propensities) to envision what would happen (potential different outcomes) if a different path had been taken.
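The three rungs can be made concrete with a toy structural causal model in the spirit of Pearl’s well-known barometer example. The model, numbers, and function names below are my own illustration, not code from the book: atmospheric pressure causes both rain and the barometer reading, so the two correlate (Rung 1), yet forcing the dial changes nothing (Rung 2), and a counterfactual reasons backward from what was observed (Rung 3).

```python
import random

def simulate(n=10000, do_barometer=None, seed=0):
    """Toy structural causal model: pressure causes both rain and the barometer."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        pressure = rng.gauss(0, 1)        # exogenous cause
        # An intervention (Pearl's do-operator) overrides the normal mechanism:
        barometer = pressure if do_barometer is None else do_barometer
        rain = pressure < -0.5            # rain depends on pressure, not the dial
        rows.append((barometer, rain))
    return rows

# Rung 1 (association): in observational data, a low barometer predicts rain.
obs = simulate()
low = [rain for b, rain in obs if b < -0.5]
p_rain_given_low = sum(low) / len(low)           # close to 1.0

# Rung 2 (intervention): forcing the barometer low does not cause rain.
forced = simulate(do_barometer=-2.0)
p_rain_do_low = sum(rain for _, rain in forced) / len(forced)  # ~P(pressure < -0.5)

# Rung 3 (counterfactual): we observed barometer = -1.0, so pressure was -1.0
# (abduction). Had we forced the dial to 0, rain would still have fallen,
# because the rain mechanism reads pressure, not the barometer.
inferred_pressure = -1.0
counterfactual_rain = inferred_pressure < -0.5   # True: moving the dial changes nothing
```

The gap between `p_rain_given_low` and `p_rain_do_low` is exactly the gap between Rung 1 and Rung 2: the data alone cannot tell a machine that the dial is an effect, not a cause; that knowledge lives in the model.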
Imagine a cognition machine that is capable of designing and performing a series of experimental interventions, based upon a number of competing hypotheses. Imagine the same machine as capable of integrating the results of its experiments to reformulate the scenarios underlying the various competing claims (and correcting for the discovery of faulty assumptions) to incrementally home in on the answers to important questions. The reliable and efficacious achievement of such non-human “interventional thinking” would place the machine on the level of Rung 2 — the “what if” level.
A higher Rung 3 level of “thinking” would involve a solid enough understanding of a complex scenario to allow an insightful imagining of what might happen if a number of alternative pathways of action were taken. Such thinking would begin to approach what a human child is capable of from its earliest moments.
With these three rungs of cognitive skill, a more child-like cognition machine could then be trained to achieve skillful human-level “thinking” through experience and directed exercises. Note that such a machine would be too complex for humans to foresee a precise training pathway to achieve superhuman machine intelligence. But humans would have many counterfactual suggestions, and for a while the human contribution to the learning programs of such machines might be significant.
How did humans come to have such thinking skills?
First, very early in our evolution, we humans realized that the world is not made up only of dry facts (what we might call data today); rather, these facts are glued together by an intricate web of cause-effect relationships. Second, causal explanations, not dry facts, make up the bulk of our knowledge, and should be the cornerstone of machine intelligence. Finally, our transition from processors of data to makers of explanations was not gradual; it was a leap that required an external push from an uncommon fruit. This matched perfectly with what I had observed theoretically in the Ladder of Causation: No machine can derive explanations from raw data. It needs a push.
… In his book Sapiens, historian Yuval Harari posits that our ancestors’ capacity to imagine nonexistent things was the key to everything, for it allowed them to communicate better. Before this change, they could only trust people from their immediate family or tribe. Afterward their trust extended to larger communities, bound by common fantasies (for example, belief in invisible yet imaginable deities, in the afterlife, and in the divinity of the leader) and expectations. Whether or not you agree with Harari’s theory, the connection between imagining and causal relations is almost self-evident. It is useless to ask for the causes of things unless you can imagine their consequences. __ Judea Pearl, The Book of Why
No one knows what happened to the human brain to allow it to experiment on Rung 2 or to envision counterfactuals on Rung 3. Similarly, it is not easy to understand how to embody such types of thinking in machines — particularly when the world’s AI researchers are achieving impressive incremental results using Rung 1 statistical tools. But eventually these tools hit their limits.
With Bayesian networks, we had taught machines to think in shades of gray, and this was an important step toward humanlike thinking. But we still couldn’t teach machines to understand causes and effects. We couldn’t explain to a computer why turning the dial of a barometer won’t cause rain. Nor could we teach it what to expect when one of the riflemen on a firing squad changes his mind and decides not to shoot. Without the ability to envision alternate realities and contrast them with the currently existing reality, a machine cannot pass the mini-Turing test; it cannot answer the most basic question that makes us human: “Why?” I took this as an anomaly because I did not anticipate such natural and intuitive questions to reside beyond the reach of the most advanced reasoning systems of the time.
Only later did I realize that the same anomaly was afflicting more than just the field of artificial intelligence (AI). The very people who should care the most about “Why?” questions—namely, scientists—were laboring under a statistical culture that denied them the right to ask those questions. Of course they asked them anyway, informally, but they had to cast them as associational questions whenever they wanted to subject them to mathematical analysis. __ The Book of Why
And so we remain mired in lower levels of thinking, when the solution of our many problems requires an altogether higher level of rationality combined with skillful creativity.
A Morbid Twist to the Statistical Culture Has People Living in Fear
A more dispassionate use of statistical thinking would provide the general public with genuine risk estimates that each individual could use for himself to judge his own survival prospects. Instead, government, media, and the academic culture are manipulating statistics to keep people huddled apart in fear.
Many of us are now under extreme isolation measures, following government edicts. Most of us have refrained from engaging in any social activities and traditional commercial transactions, and even if restrictions are relaxed, concern over the risk will make revival of the national economy difficult. Cable news and media sensationalism have generated a perceived risk that is largely unwarranted. For the population under 65, the crude death rate, even with Covid-19, has not substantially altered the risk of dying. A commonsense approach would suggest placing additional resources and focus on the most vulnerable (strict isolation and more care for those with comorbidities) and ramping up development of therapies to treat the illness. We need to directly address the occupational risk for health-care workers (e.g., more protection, shorter shifts, and more compensation). More important, we need to do these things not just to improve prospects for survivability of the nation’s residents but to promote confidence that it is “safe” to return to normal routines.
It is essential to convey to the public the true nature of the risk. CDC should publish risk estimates for age groups, including the joint chance of getting infected and dying. Give the public hard data on the joint risk of contracting and dying from the virus. Once the public understands that the risk is low, that modest measures can reduce the base level of risk, and that many normal activities of day-to-day life remain well within their personal risk tolerance, the public and a large segment of the workforce (most of whom have extremely low probabilities of dying from the coronavirus) more likely will head back to work. This would likely yield most of the benefits of the current lockdown and keep us from permanently damaging the national economy. __ Source
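The “joint chance of getting infected and dying” that the passage asks the CDC to publish factors into two pieces: the chance of contracting the virus times the chance of dying given infection. A minimal sketch with hypothetical numbers (the probabilities below are illustrative assumptions, not CDC estimates or actual Covid-19 figures):

```python
# Hypothetical illustrative inputs -- NOT actual CDC or Covid-19 figures.
p_infected = 0.05              # assumed chance of contracting the virus
p_die_given_infected = 0.001   # assumed fatality rate for a low-risk age group

# The joint risk the passage asks for: chance of both getting infected AND dying.
p_infected_and_die = p_infected * p_die_given_infected
print(f"Joint risk: {p_infected_and_die:.6f}")   # 0.000050, i.e. 5 in 100,000
```

Publishing the joint figure rather than the conditional fatality rate alone matters, because the conditional rate overstates the risk faced by someone who has not yet been infected.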
Experimental drug study shows promise
US state by state lockdown restraints and relaxations
Herd immunity would require 60% immunity
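The 60% figure linked above is consistent with the standard epidemiological herd-immunity threshold, 1 − 1/R0, under a basic reproduction number of about 2.5. The R0 value below is my assumption for illustration, not a number taken from the linked source:

```python
# Herd immunity threshold: the immune fraction at which each case infects,
# on average, fewer than one susceptible person.
r0 = 2.5                      # assumed basic reproduction number
threshold = 1 - 1 / r0
print(f"Herd immunity threshold: {threshold:.0%}")   # 60%
```

A higher assumed R0 raises the threshold (R0 of 4 gives 75%), which is why published herd-immunity estimates vary.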
Given that healthy persons under the age of 50 are 99.9% likely to survive an encounter with Wuhan CoV-19, the incremental re-opening plans provided so far would appear sufficiently — if not overly — cautious. Over the long run, many more of our healthy and vital young and middle-aged people will die from the follow-on effects of the gestapo lockdowns than from this virus from Wuhan City.