Humans crave security. They will go to great lengths in an attempt to make themselves feel safe and secure. Unfortunately, there is no limit to potential risks. The ones we focus on are often the least likely to do us in.
“Every step we take to suppress the risks . . . will provoke some other, offsetting step.” So “neither the economy nor the natural world turned out to be as amenable to human management” as was believed.
As Velleius Paterculus observed in the history of Rome that he wrote circa 30 AD, “The most common beginning of disaster was a sense of security.”
The important systems that impinge upon the human world are all chaotic in nature. They are inherently unpredictable. Population masses in democratic societies demand security and prosperity from their governments, but, like sheep trusting a senile shepherd, they misplace their trust.
Consider a financial system. The “system” is not just all the private financial actors—bankers, brokers, investors, borrowers, savers, traders, speculators, hucksters, rating agencies, entrepreneurs, principals and agents—but equally all the government actors—multiple legislatures and central banks, the treasuries and finance ministries that must constantly borrow, politicians with competing ambitions, all varieties of regulatory agencies and bureaucrats, government credit and subsidy programs, multilateral bodies. All are intertwined, all interacting with one another, all forming expectations of the others’ likely actions, all strategizing.
No one is outside the system; all are inside the system. Its complexity leaves the many and varied participants inescapably uncertain of the outcomes of their interactions.
Within the interacting system, a fundamental strategy, as Ip says, is “to do something risky, then transfer some of the risk to someone else.” This seems perfectly sensible—say, getting subsidized flood insurance for your house built too near the river, or selling your risky loans to somebody else. But “the belief that they are now safer encourages them to take more risk, and so the aggregate level of risk in the system goes up.”
“Or,” he continues, “it might cause the risky activity to migrate elsewhere.” Where will the risk migrate to? According to Stanton’s Law, which seems right to me, “Risk migrates to the hands least competent to manage it.” Risk “finds the vulnerabilities we missed,” Ip writes. This means we are always confronted with uncertainty about what unforeseen vulnerabilities the risk will find.
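The moral-hazard dynamic Ip describes can be sketched in a toy model. Everything here is a hypothetical assumption of mine, not from Ip: a single agent gets a linear benefit from taking risk, faces a convex (quadratic) expected loss, and an insurer absorbs a fraction of that loss. Transferring risk lets the agent rationally choose a higher risk level, and total expected loss in the system rises.

```python
# Toy moral-hazard sketch (illustrative assumptions, not from Ip's book):
# the agent picks a risk level r to maximize
#     payoff(r) = r - (1 - coverage) * r**2
# i.e. linear benefit from risk-taking, quadratic expected loss,
# of which an insurer absorbs the fraction `coverage`.

def optimal_risk(coverage: float) -> float:
    # First-order condition: 1 - 2 * (1 - coverage) * r = 0
    return 1.0 / (2.0 * (1.0 - coverage))

def aggregate_expected_loss(coverage: float) -> float:
    # Total expected loss in the system (agent's share plus the
    # insurer's share) at the agent's chosen risk level: r**2
    return optimal_risk(coverage) ** 2

uninsured = aggregate_expected_loss(0.0)  # agent bears every loss
insured = aggregate_expected_loss(0.5)    # half the loss is transferred

print(uninsured, insured)  # 0.25 1.0
```

Under these assumed payoffs the agent is individually better off with insurance (payoff 0.5 versus 0.25), yet aggregate expected loss quadruples from 0.25 to 1.0: the risk did not vanish, it migrated to the insurer and grew along the way.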
Governments, media, academics, think tanks, activist groups, NGOs, foundations — all tend to focus upon specific risks which seem most threatening to the ruling elites, regardless of actual underlying hazards. The risk du jour for modern human cattle and sheep is apocalyptic climate change — but the nitwits assure you that if doom doesn’t come from climate armageddon, it will come from something else on their long list of catastrophes, or perhaps from some other doom du jour, favoured by one special interest group or another.
Everything You Think You Know, Just Ain’t So
Given the chaotic nature of the interlocking systems in which humans are immersed, it is unlikely that highly confident intellectuals and pseudo-intellectuals will be able to anticipate what is coming.
In “On the Psychology of Prediction,” Nobelist Daniel Kahneman explains how the intuition of experts so frequently leads them to make erroneous predictions. This is true not only in finance, climate, and geopolitics — it holds in virtually any chaotic system that has bred an army of “experts” to tame and predict it.
Although certain members of the Al Fin Institutes predicted the fall of the USSR more than a decade before it occurred, it is unlikely that many of the people you now admire — those who were alive at the time — even imagined such a thing happening. Most “experts” strongly believed the opposite: that the USSR would grow stronger, and that the US was the likelier of the two to collapse.
We Cannot Help Predicting the Future
All of us are constantly predicting the future, whether we think about it or not. Right now, some small part of your brain is trying to predict what this show is going to be about. How do you do that? You factor in what you’ve heard so far… Maybe you know a lot, maybe you’ve never heard of it, you might think it’s some kind of communicable disease! When you predict the future, you look for cognitive cues, for data, for guidance.
… Whenever we go to a cocktail party, or a colloquium, or whatever where opinions are being shared, we frequently make likelihood judgments about possible futures. And the truth or falsity of particular claims about futures. The prediction business is a big business on Wall Street, and we have futures markets and so forth designed to regulate speculation in those areas. Obviously, government has great interest in prediction. They create large intelligence agency bureaucracies and systems to help them achieve some degree of predictability in a seemingly chaotic world.
… The thing about the radically unpredictable environments is that they often appear for long periods of time to be predictable. So, for example, if you had been a political forecaster predicting regime longevity in the Middle East, you would have done extremely well predicting in Egypt that Mubarak would continue to be the president of Egypt year after year after year in much the same way that if you had been a Sovietologist you would have done very well in the Brezhnev era predicting continuity. There’s an aphorism I quote in the “Expert Political Judgment” book from Karl Marx. I’m obviously not a Marxist but it’s a beautiful aphorism that he had which was that, “When the train of history hits a curve, the intellectuals fall off.”
Human herds want so badly for the world to be predictable. Usually, they want the world to be predictably secure. But other types of human herds — the doomer herds — want the world to be predictably insecure. They wish for peak oil doom, climate apocalypse, nuclear armageddon, pandemic holocaust, catastrophic supervolcanoes and massive transcontinental earthquakes . . . And so doomer herds have their own prediction rackets, echo choirs, circular jerkular communal gatherings.
But whether they crave doom or security, humans crave predictability in their world. Meaning and predictability. Sadly, neither truly appears to exist in the grand universe of clashing chaotic systems — although humans have observed only a tiny corner of that universe, and that through a glass darkly.
It is the Nature of Arrogant Fools to Lead
And so human institutions of all types — governments, universities, activist lobbies, think tanks, foundations, and any other institution where failure can easily be brushed under the rug — tend to be led by arrogant fools who maintain a pretence of superior knowledge and predictive power. Whether led by a singleton, a committee, or an entire assembly of arrogant idiots, the hierarchy typically self-organises in the same way, with fools and psychopaths occupying the top rungs.
Where Competence Matters
The exception to the rule of top-down idiocy lies in areas where humans must predict accurately and competently — when building bridges, performing surgery, designing secure banking cyber-systems, starting new businesses, or operating passenger vehicles such as planes, trains, ships, and school buses. In areas where competence is critical — and where incompetence cannot be hidden or denied — bright, creative, and competent humans tend to rise to the top.
That is only possible in opportunity societies, however, which is why opportunity societies — such as the US in the 1800s and early 1900s — were the centre of so much disruptive innovation, and the epicentres of explosive adoption of disruptive innovations that originated elsewhere. Nations that suffocate opportunity tend not to innovate as well, nor to broadly implement and build upon the disruptive innovations that do come along.
It is not that entrepreneurs, inventors, innovators, and venture capitalists are much better at predicting the future than other intelligent humans — they are little better at it than blind biological evolution is. Rather, in opportunity societies, economies and technologies co-evolve and adapt much as species evolve and adapt in biological evolution. Luck and good location usually play as important a role as bright vision.
In more stifling societies, the process of evolution and disruptive innovation proceeds at a relative crawl. Often, such societies are prone to abrupt revolutions or to conquest from outside.
Creation of Islands of Competence and Opportunity Inside Otherwise Stifling Societies
Human herds continue to follow arrogant leaders who appeal to their sense of security, predictability, and vanity. And so they cede ever more of their own power to psychopaths and bureaucracies that claim to know what is best for them.
As most nations of Europe and the Anglosphere slide toward the stifling hyper-statist end of the spectrum, with its concomitant over-centralisation and stunting of opportunity, it becomes critical for persons of clearer perspective to design and build refuges of competence, free thought and expression, and disruptive innovation.
Such islands of competence and opportunity serve to create parallel infrastructures of advanced existence. They are meant to maintain the productive systems of reason, creativity, and creative transformation which allowed humans to touch the moon — and will allow them to move much further outward and deeper inward. But they must keep a low profile, and hide in plain sight as it were.
Hope for the best. Prepare for the worst. And in the end, it will likely be the doom you never expected that gets you.