… “any sufficiently advanced benevolence may be indistinguishable from malevolence.” __ Charles T. Rubin
We are being warned by prominent scientists, engineers, and men of commerce that machine intelligence — as implemented by military powers — is a threat to humanity. But just as with nuclear weaponry, the Pandora’s box of advanced computing has long since been opened.
There are many approaches to “artificial intelligence,” and it is not clear which approach — or combination of approaches — is likely to present the greatest threat to humans.

[Film still: ‘Terminator 3: Rise of the Machines’ (2003). Photo credit: Warner Bros./Everett/REX]
Machine Intelligence via Brain Simulation / Neuronal Networks
The best-known projects to develop intelligent machines involve some form of “brain simulation,” or neuronal networks: copying how the brain generates intelligence.
From Jeff Hawkins to Randall Koene to Henry Markram to the successors of Gerald Edelman — and more — scores of artificial intelligence researchers have based their theories upon how brains and neuronal nets work.
It seems a reasonable way to approach machine intelligence, since the human brain is the only known example in the universe of the type of intelligence that can do the things that humans are most interested in.
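For readers who want a concrete picture, here is a minimal sketch in Python of the abstraction these projects start from: a single artificial “neuron” that computes a weighted sum of its inputs and squashes the result through a nonlinearity. The weights and inputs below are invented for illustration; real brain-simulation efforts model enormously more detail (spiking dynamics, dendritic structure, neurochemistry).

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed through
    a sigmoid activation. This is the crude caricature of a biological
    neuron that neuronal-network approaches build upward from."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid: squashes into (0, 1)

# Invented example values: two inputs, hand-picked weights.
print(neuron([0.5, 0.9], [0.4, -0.6], bias=0.1))  # prints a value in (0, 1)
```

Networks of such units, with weights adjusted by a learning rule rather than set by hand, are the basis of the neuronal-network approach.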
But many researchers and philosophers are not convinced that emulating the human brain is the best way to approach the problem of machine intelligence.
Thinking and acting like a human was a popular way of defining AI in the early days. Indeed, the pioneering paper in the field — Alan Turing’s ‘Computing Machinery and Intelligence’ — adopts an ‘acting like a human’ definition of AI. But that popularity has now waned. This is for several reasons, chief among them being the fact that designing systems that try to mimic human cognitive processes, or that are behaviourally indistinguishable from humans, is not very productive when it comes to building actual systems. __ http://hplusmagazine.com/2015/07/15/is-regulation-of-artificial-intelligence-possible/
Other Approaches to AI — A Brief History
Symbolic
When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford, and MIT, and each developed its own style of research. John Haugeland named these approaches to AI “good old fashioned AI” or “GOFAI.”[101] During the 1960s, symbolic approaches achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[102] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence, and considered this the goal of their field.
- Cognitive simulation: Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research, and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[103][104]
- Logic-based: Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[95] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning, and learning.[105] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming.[106]
- “Anti-logic” or “scruffy”: Researchers at MIT (such as Marvin Minsky and Seymour Papert)[107] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[96] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[108]
- Knowledge-based: When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[109] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[31] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications. (A toy sketch of this symbolic style of inference follows below.)
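To make the symbolic idea concrete, here is a minimal sketch in Python of the kind of forward-chaining rule engine that classic expert systems were built around: knowledge lives in explicit symbols, and inference is symbol manipulation. The facts and rules are invented for illustration, not drawn from any real system.

```python
# A toy forward-chaining inference engine, in the GOFAI spirit:
# knowledge is stored as explicit symbolic facts and if-then rules.
facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "may_have_flu"),  # if all conditions hold...
    ({"may_have_flu"}, "recommend_rest"),          # ...assert the conclusion
]

changed = True
while changed:  # keep applying rules until no new fact can be derived
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes the derived symbols "may_have_flu", "recommend_rest"
```

Knowledge bases such as Cyc scale roughly this pattern up to millions of hand-entered facts and rules, which is exactly why the “scruffy” approach proved so labor-intensive.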
Sub-symbolic
By the 1980s, progress in symbolic AI seemed to stall, and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning, and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[97]
- Bottom-up, embodied, situated, behavior-based, or nouvelle AI: Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[110] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception, and visualization) are required for higher intelligence.
- Computational intelligence and soft computing: Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle 1980s.[111] Neural networks are an example of soft computing — they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often enough. Other soft computing approaches to AI include fuzzy systems, evolutionary computation, and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[112] (A bare-bones example of this style of approximate search follows below.)
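As one bare-bones example of soft computing, the sketch below (Python, with an invented objective function) shows a minimal evolutionary search: mutate a candidate solution, and keep the mutant if it scores better. Nothing is proven with logical certainty; an approximate answer is accepted as good enough.

```python
import random

def fitness(x):
    """Invented objective: the closer x is to 3.0, the better."""
    return -(x - 3.0) ** 2

# A (1+1) evolutionary strategy: one parent, one mutated child per step.
candidate = random.uniform(-10.0, 10.0)
for _ in range(1000):
    child = candidate + random.gauss(0.0, 0.1)  # small random mutation
    if fitness(child) > fitness(candidate):     # keep whichever scores better
        candidate = child

print(round(candidate, 2))  # usually near 3.0, but only approximately
```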
Statistical
In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI’s recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a “revolution” and “the victory of the neats.”[34] Critics argue that these techniques (with few exceptions[113]) are too focused on particular problems and have failed to address the long-term goal of general intelligence.[114] There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.[115][116]
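For a minimal taste of the statistical turn, here is a line fitted to data by ordinary least squares, in Python, where the fitted parameters can be measured and verified against held-out data. The data points below are invented for illustration.

```python
# Ordinary least squares for y = a*x + b, using the closed-form solution:
# slope = covariance(x, y) / variance(x); intercept from the means.
xs = [1.0, 2.0, 3.0, 4.0]  # invented data
ys = [2.1, 3.9, 6.2, 8.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
var_x = sum((x - mean_x) ** 2 for x in xs)

a = cov_xy / var_x       # slope
b = mean_y - a * mean_x  # intercept
print(f"y = {a:.2f}x + {b:.2f}")  # a measurable, checkable result
```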
Integrating the approaches
- Intelligent agent paradigm: An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. More complicated agents include human beings and organizations of human beings (such as firms). The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works – some agents are symbolic and logical, some are sub-symbolic neural networks, and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields — such as decision theory and economics — that also use concepts of abstract agents. The intelligent agent paradigm became widely accepted during the 1990s.[2] (A toy agent sketch appears after this list.)
- Agent architectures and cognitive architectures: Researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multi-agent system.[117] A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling.[118] Rodney Brooks’ subsumption architecture was an early proposal for such a hierarchical system.
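Here is a minimal sketch of the agent abstraction in Python (the percepts and rules are invented): an agent is anything that maps percepts to actions, whatever machinery sits in between. This simple reflex agent is layered in the subsumption spirit, with a low-level reactive behavior that overrides higher-level goals.

```python
def reflex_agent(percept):
    """Map a percept directly to an action. The ordering of the checks
    gives a subsumption-style layering: the lowest, reactive layer
    (obstacle avoidance) takes priority over higher-level behavior."""
    if percept.get("obstacle"):  # reactive layer: survival first
        return "turn_away"
    if percept.get("dirt"):      # task layer: pursue the goal
        return "clean"
    return "wander"              # default layer: explore

# An invented percept sequence standing in for an environment.
for percept in [{"dirt": True}, {"obstacle": True}, {}]:
    print(reflex_agent(percept))  # clean, turn_away, wander
```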
The 60+ year history of AI is instructive for all the different approaches that have failed to yield quick or total success. This lack of achievement in the field of machine intelligence might be reassuring to many who would otherwise fear non-human intelligences — particularly those in control of weapons systems.
And yet, in thousands of universities, national labs, corporate labs, and research think tanks around the world, advanced knowledge of computing and simulated intelligent systems is accumulating and progressing in fits and starts. It is not yet possible to predict when benevolent/malevolent AI will appear — only that it will eventually be possible.
And so far, there is not much incentive for governments, militaries, and the public to put a “speed limit” on what machine intelligences will be able to do.
In connection with machine intelligence, it does not seem very promising to try to limit the power or ability of computers. The danger (or promise) that computers might develop characteristics that lead some people to call them conscious — and that this age of intelligent machines would mean our extinction — seems remote when compared with their practical benefits. We already rely so heavily on computers that the incentives to make them easier to use and more powerful are very great. Computers already do a great many things better than we can, and there seems to be no natural place to enforce a stopping point to further abilities. __ Artificial Intelligence and Human Nature
And yet we do not necessarily want “intelligent machines” to decide whether we live or die. We have seen enough movies such as Terminator and The Matrix to imagine the sorts of ruined or dystopian worlds which intelligent machines might make for us.
And so bright people with a philosophical bent such as Nick Bostrom, Eliezer Yudkowsky, and others, spend a good deal of time trying to work out ways of assuring that our machine intelligences will be benevolent, rather than malevolent.
All of these concerns may seem like putting the cart before the horse, given that genuine human-level or superhuman-level machine intelligence seems many decades away — no matter which approach is taken.
But how much “intelligence” does a machine need to make life or death decisions over human futures? Not much, apparently, because it is already being done. The reliability of giant electric power grids, for example, is controlled by hackable electronic systems. This is a life or death issue for many thousands — perhaps millions — yet powerful interests continue to push for “smarter grids,” which would be even more easily hacked and sabotaged.
The same caveat applies to multiple critical human infrastructures in advanced nations. Computer systems and networked electronic systems already control so much of our critical infrastructure that we do not have to wait to see the day when our fates rest within the grasp of (very unintelligent) machine intelligences. These machines can be hacked, have been hacked, and will be hacked again.
Perhaps it is not the intelligent machines of the future that we should fear, but rather the advanced governmental and independent networks of hackers who can already put millions of humans at risk — treating it as a game.
Machines are stupid in many ways, but very clever and successful in other ways. This is a reflection of the cleverness and success of their human designers, but it also reveals the way in which the application of simple rules can result in a very complex and perhaps unanticipated result.
We are seeing the same type of “emergence of complex results” in genetics, nanotechnologies, additive manufacturing, synthetic biology, chemical analysis and synthesis, and many other areas of science and technology which may coincidentally have military applications.
It is not that advanced artificial intelligence is not potentially dangerous — it is. But it is not alone in that regard. Humans tend to be able to focus on only one danger at a time. That limitation has proven generally surmountable up until now. But in this age of mounting, potentially existential hazards, we need to develop better workarounds for that limitation. And we will, as we can.
The widespread descent of human governments and systems into “Idiocracy” does not particularly help things. The distinction between systems and alliances who are working to destroy the human future, and those thinkers and groups who are attempting to build an expansive and abundant human future, is slowly becoming clearer.
Hope for the best, prepare for the worst. It is never too late to have a Dangerous Childhood.
Further reading:
Branches of AI by John McCarthy
Much more detail on approaches to Artificial Intelligence — Santa Fe Inst.
4-Volume Handbook of Artificial Intelligence by Barr & Feigenbaum (download from Archive.org)
Thousands of Links for AI Topics
Can Artificial Intelligence Be Regulated?
Artificial Intelligence and Human Nature
https://reason.com/archives/2014/09/12/will-superintelligent-machines-destroy-h
Comments:
Reblogged this on The Arts Mechanical and commented:
The way things are going, getting a true AI isn’t going to happen anytime soon. Machines are going to remain stupid for a long time yet.
I read the Dettmers blog posting you linked to previously. His arguments about the molecular biology of the brain (which parallel my own) are one of the reasons why I do not expect true AI until the end of this century, at the earliest. I don’t consider “rogue” AI to be an issue in the foreseeable future.
Besides, I think the AI singularity boosterism is somewhat of a cargo cult. E.g. we create the AI and it does all of the hard work to develop biological immortality, FTL, and Eric Drexler’s nanotechnology. The reality is that we have to do all of this hard work ourselves. There isn’t going to be any shortcut on this.
https://timdettmers.wordpress.com/2015/07/27/brain-vs-deep-learning-singularity/
This is the Dettmers article you refer to. When combined with other recent neuroscience findings, it certainly portrays the task of emulating the brain in a more daunting light.
A semi-intelligent machine can do a lot of harm without intending to. It is best for humans themselves to grow smarter and wiser, and simply use dumb machine assistants to manage complex systems.
In the coming Idiocracy, it is likely that stupider humans will choose to hand over more decision-making power to machines — simply because the stupider people will lack the knowledge required to manage complex systems themselves.
I agree with all of what you say here. Machine learning will get better, and we will get decent machine vision and related capabilities. This will make for better robotics and other automation capabilities. As an automation engineer, I expect to make use of these capabilities in my work over the coming years. However, based on the arguments of Dettmers and others, I don’t expect machine sentience in the foreseeable future.
Cars effectively replaced horses even though cars need roads and no one has ever made a car that can jump over a fence. We don’t need sentient A.I. to create the future we want. We simply need to make machines that can do the work that needs to be done.
Dettmers’ arguments are essentially an updated version of arguments from guys I know in the life extension and cryonics milieu, who also had doubts about near-term A.I., based on what we knew of neurobiology in the 1990s. The guy I’m thinking of is already in cryo-suspension (he had brain cancer). Most people in the cryonics scene are also dubious about the possibility of A.I.