… “any sufficiently advanced benevolence may be indistinguishable from malevolence.” __ Charles T. Rubin
We are being warned by prominent scientists, engineers, and men of commerce that machine intelligence — as implemented by military powers — is a threat to humanity. But as with nuclear weaponry, the Pandora’s Box of advanced computing was opened long ago.
There are many approaches to “artificial intelligence,” and it is not clear which approach — or combination of approaches — is likely to present the greatest threat to humans.
Machine Intelligence via Brain Simulation / Neuronal Networks
The best-known projects to develop intelligent machines involve some form of “brain simulation,” or neuronal networks: copying how the brain generates intelligence.
From Jeff Hawkins to Randall Koene to Henry Markram to the successors of Gerald Edelman — and more — scores of artificial intelligence researchers have based their theories upon how brains and neuronal nets work.
It seems a reasonable way to approach machine intelligence, since the human brain is the only known example in the universe of the type of intelligence that can do the things that humans are most interested in.
But many researchers and philosophers are not convinced that emulating the human brain is the best way to approach the problem of machine intelligence.
Thinking and acting like a human was a popular way of defining AI in the early days. Indeed, the pioneering paper in the field — Alan Turing’s ‘Computing Machinery and Intelligence’ — adopts an ‘acting like a human’ definition of AI. But that popularity has now waned. This is for several reasons, chief among them being the fact that designing systems that try to mimic human cognitive processes, or that are behaviourally indistinguishable from humans, is not very productive when it comes to building actual systems. ___ http://hplusmagazine.com/2015/07/15/is-regulation-of-artificial-intelligence-possible/
Other Approaches to AI — A Brief History
Symbolic
When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI “good old fashioned AI” or “GOFAI”. During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background. Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.
- Cognitive simulation
- Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.
- Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms. His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning. Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.
- “Anti-logic” or “scruffy”
- Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving difficult problems in vision and natural language processing required ad-hoc solutions – they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford). Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.
- When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications. This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software. The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.
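The symbolic, GOFAI-style approach described above can be sketched as a tiny forward-chaining rule engine: intelligence as explicit symbol manipulation over hand-built knowledge. The facts and rules below are invented for illustration, not drawn from any historical system.

```python
# Minimal forward-chaining inference in the GOFAI spirit: apply
# hand-written symbolic rules until no new facts can be derived.
# Facts and rules are illustrative placeholders.

facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# prints: ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```

The hand-built, one-concept-at-a-time character of the knowledge base is exactly what made systems like Cyc "scruffy" and labor-intensive to scale.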
By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.
- Bottom-up, embodied, situated, behavior-based or nouvelle AI
- Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive. Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.
- Computational intelligence and soft computing
- Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle 1980s. Neural networks are an example of soft computing — they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often enough. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.
In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI’s recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a “revolution” and “the victory of the neats.” Critics argue that these techniques (with few exceptions) are too focused on particular problems and have failed to address the long-term goal of general intelligence. There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.
Integrating the approaches
- Intelligent agent paradigm
- An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. More complicated agents include human beings and organizations of human beings (such as firms). The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works – some agents are symbolic and logical, some are sub-symbolic neural networks and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields—such as decision theory and economics—that also use concepts of abstract agents. The intelligent agent paradigm became widely accepted during the 1990s.
- Agent architectures and cognitive architectures
- Researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multi-agent system. A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling. Rodney Brooks’ subsumption architecture was an early proposal for such a hierarchical system.
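The intelligent agent paradigm and Brooks' subsumption architecture can be sketched together in a few lines: an agent maps percepts to actions, and its controller is a stack of simple reactive layers in which a higher-priority layer, when triggered, suppresses (subsumes) the layers below it. The sensor names and behaviors here are hypothetical, for illustration only.

```python
# A reactive agent with a Brooks-style subsumption controller.
# Each layer is a trivial rule; layers are tried in priority order.

def avoid_obstacle(percept):
    """Highest-priority layer: a reflexive safety behavior."""
    return "turn_left" if percept.get("obstacle_ahead") else None

def seek_goal(percept):
    """Middle layer: goal-directed behavior."""
    return "move_toward_goal" if percept.get("goal_visible") else None

def wander(percept):
    """Lowest layer: the default; always produces an action."""
    return "move_forward"

LAYERS = [avoid_obstacle, seek_goal, wander]  # highest priority first

def choose_action(percept):
    """The agent function: map a percept to an action."""
    for layer in LAYERS:
        action = layer(percept)
        if action is not None:   # this layer subsumes those below it
            return action

# Perceive-act cycle: sense the environment, pick an action, repeat.
for percept in [{"obstacle_ahead": True}, {"goal_visible": True}, {}]:
    print(choose_action(percept))
# prints: turn_left, move_toward_goal, move_forward
```

Note that no layer plans or models the world; competent-looking behavior emerges from the priority ordering alone, which is the non-symbolic point Brooks was making.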
The 60+ year history of AI is instructive for its many different approaches, none of which yielded quick or total success. This lack of achievement in the field of machine intelligence might be reassuring to many who would otherwise fear non-human intelligences — particularly those in control of weapons systems.
And yet, in thousands of universities, national labs, corporate labs, and research think tanks around the world, advanced knowledge of computing and simulated intelligent systems is accumulating and progressing in fits and starts. It is not yet possible to predict when benevolent/malevolent AI will appear — only that it will eventually be possible.
And so far, there is not much incentive for governments, militaries, and the public to put a “speed limit” on what machine intelligences will be able to do.
In connection with machine intelligence, it does not seem very promising to try to limit the power or ability of computers. The danger (or promise) that computers might develop characteristics that lead some people to call them conscious — and that this age of intelligent machines would mean our extinction — seems remote when compared with their practical benefits. We already rely so heavily on computers that the incentives to make them easier to use and more powerful are very great. Computers already do a great many things better than we can, and there seems to be no natural place to enforce a stopping point to further abilities. __ Artificial Intelligence and Human Nature
And yet we do not necessarily want “intelligent machines” to decide whether we live or die. We have seen enough movies such as Terminator and The Matrix to imagine the sorts of ruined or dystopian worlds which intelligent machines might make for us.
And so bright people with a philosophical bent such as Nick Bostrom, Eliezer Yudkowsky, and others, spend a good deal of time trying to work out ways of assuring that our machine intelligences will be benevolent, rather than malevolent.
All of these concerns may seem like putting the cart before the horse, given that genuine human-level or superhuman-level machine intelligence seems many decades away — no matter which approach is taken.
But how much “intelligence” does a machine need to make life or death decisions over human futures? Not much, apparently, because it is already being done. The reliability of giant electric power grids, for example, is controlled by hackable electronic systems. This is a life or death issue for many thousands — perhaps millions — yet powerful interests continue to push for “smarter grids,” which would be even more easily hacked and sabotaged.
The same caveat applies to multiple critical human infrastructures in advanced nations. Computer systems and networked electronic systems already control so much of our critical infrastructure that we do not have to wait to see the day when our fates rest within the grasp of (very unintelligent) machine intelligences. These machines can be hacked, have been hacked, and will be hacked again.
Perhaps it is not the intelligent machines of the future that we should fear, but rather the advanced governmental and independent systems of hackers who can already put millions of humans at risk — treating it as a game.
Machines are stupid in many ways, but very clever and successful in other ways. This is a reflection of the cleverness and success of their human designers, but it also reveals the way in which the application of simple rules can result in a very complex and perhaps unanticipated result.
We are seeing the same type of “emergence of complex results” in genetics, nanotechnologies, additive manufacturing, synthetic biology, chemical analysis and synthesis, and many other areas of science and technology which may coincidentally have military applications.
It is not that advanced artificial intelligence is not potentially dangerous — it is. But it is not alone in that regard. Humans tend to be able to focus on only one danger at a time. That limitation has proven generally surmountable up until now. But in this age of mounting, potentially existential hazards, we need to develop better workarounds for that limitation. And we will, as best we can.
The widespread descent of human governments and systems into “Idiocracy” does not particularly help things. The distinction between systems and alliances that are working to destroy the human future, and those thinkers and groups that are attempting to build an expansive and abundant human future, is slowly becoming clearer.
Hope for the best, prepare for the worst. It is never too late to have a Dangerous Childhood.