On the Verge of an AI-pocalypse?

For 70 years we have been told that we are on the verge of a machine apocalypse — the near-total replacement of humans by machines across society. But the gap between the reality and the hype remains wide.

Artificial Intelligence: Failed Early Predictions

In the 1950s and 1960s, computing pioneers were convinced that humans were on the brink of the emergence of revolutionary “thinking machines.” Smart computers would design smarter computers, and so on, until machine intelligence would eclipse the intelligence of humans.

In the 1960s, pioneers in artificial intelligence made grand claims that AI systems would surpass human intelligence before the end of the 20th century. Except for beating the world chess champion in 1997, none of the other predictions have come true. __ http://www.jfsowa.com/pubs/micai.pdf


Research in artificial intelligence has produced a number of technological breakthroughs in isolated areas, but the quest to create a machine with human level intelligence across a wide range of problems continues to run into significant obstacles.

Now, as Moore’s Law seems to be starting some sort of long goodbye, a couple of themes are dominating discussions of computing’s future. One centers on quantum computers and stupendous feats of decryption, genome analysis, and drug development. The other, more interesting vision is of machines that have something like human cognition. They will be our intellectual partners in solving some of the great medical, technical, and scientific problems confronting humanity. And their thinking may share some of the fantastic and maddening beauty, unpredictability, irrationality, intuition, obsessiveness, and creative ferment of our own. __ IEEE Spectrum: Can We Copy the Brain?

Frustration over past failures, combined with opportunism, is leading researchers toward an imitation of nature.

The Return of AI Giddiness in the 21st Century

The failure to achieve “artificial general intelligence,” or human level intelligence, has not stopped 21st century researchers from continuing to pursue the holy grail of the great machine ascendancy. Consider the recent special issue of the IEEE Spectrum, an international journal for electrical engineers, which examines how machines might become intelligent by imitating or “recreating” the human brain:

Karlheinz Meier of Heidelberg University describes various approaches to computer imitation of the human brain — including his own research efforts — in the article linked below:

Copying brain operation in electronics may actually be more feasible than it seems at first glance. It turns out the energy cost of creating an electric potential in a synapse is about 10 femtojoules (10⁻¹⁵ joules). The gate of a metal-oxide-semiconductor (MOS) transistor that is considerably larger and more energy hungry than those used in state-of-the-art CPUs requires just 0.5 fJ to charge. A synaptic transmission is therefore equivalent to the charging of at least 20 transistors. What’s more, on the device level, biological and electronic circuits are not that different. So, in principle, we should be able to build structures like synapses and neurons from transistors and wire them up to make an artificial brain that wouldn’t consume an egregious amount of energy. __ Imitating the Brain with Electronics
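To make the quoted arithmetic concrete, here is a minimal back-of-the-envelope calculation using only the figures given above (the variable names are our own):

```python
# Energy figures from the quoted passage; variable names are ours.
SYNAPTIC_EVENT_FJ = 10.0    # ~10 fJ to create an electric potential in a synapse
MOS_GATE_CHARGE_FJ = 0.5    # ~0.5 fJ to charge one large MOS transistor gate

# One synaptic transmission is energetically equivalent to charging
# this many transistor gates:
equivalent_gates = SYNAPTIC_EVENT_FJ / MOS_GATE_CHARGE_FJ
print(f"1 synaptic event ~ charging {equivalent_gates:.0f} transistor gates")
# prints: 1 synaptic event ~ charging 20 transistor gates
```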

The brighter theorists in AI are no longer discussing “intelligent algorithms,” but are instead pursuing the “supra-algorithmic space” in which human brains operate. They have not quite located the level of salience within the brain’s cognitive operations, but they are trying very hard.

Jeff Hawkins of Numenta describes his own multidisciplinary approach to the problem in the article linked below:

The solution is finally coming within reach. It will emerge from the intersection of two major pursuits: the reverse engineering of the brain and the burgeoning field of artificial intelligence. Over the next 20 years, these two pursuits will combine to usher in a new epoch of intelligent machines.

The neocortex stores these patterns [of perception] primarily by forming new synapses. This storage enables you to recognize faces and places when you see them again, and also recall them from your memory. For example, when you think of your friend’s face, a pattern of neural firing occurs in the neocortex that is similar to the one that occurs when you are actually seeing your friend’s face.

While it is true that today’s AI techniques reference neuroscience, they use an overly simplified neuron model, one that omits essential features of real neurons, and they are connected in ways that do not reflect the reality of our brain’s complex architecture. These differences are many, and they matter. They are why AI today may be good at labeling images or recognizing spoken words but is not able to reason, plan, and act in creative ways.

__ Jeff Hawkins in IEEE Spectrum
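For contrast, the “overly simplified neuron model” Hawkins criticises is, in most of today’s deep learning systems, little more than a weighted sum passed through a fixed nonlinearity. A minimal sketch of such a “point neuron” (illustrative values only, using NumPy):

```python
import numpy as np

def point_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """The standard simplified neuron of deep learning: a weighted sum of
    inputs plus a bias, passed through a fixed nonlinearity (here, ReLU).
    Dendrites, ion channels, and the hundreds of synaptic protein types of
    a real neuron are all collapsed into this single expression."""
    return max(0.0, float(np.dot(inputs, weights) + bias))

# Illustrative values only -- not taken from any real model
x = np.array([0.5, -1.2, 3.0])   # incoming activations
w = np.array([0.8, 0.1, -0.4])   # synaptic weights
print(point_neuron(x, w, bias=0.2))   # -> 0.0 (the ReLU clips the negative sum)
```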

AI is Not Ready to Tackle the Human Brain as a Whole

Even the rat brain is currently too much of a challenge. At this time it is all that neuroscientists can do to comprehensively map 1 cubic mm of a rat’s brain.

That may not sound like much, but that tiny cube contains about 50,000 neurons connected to one another at about 500 million junctures called synapses. The researchers hope that a clear view of all those connections will allow them to discover the neural “circuits” that are activated when the visual cortex is hard at work. The project requires specialized brain imaging that shows individual neurons with nanometer-level resolution, which has never before been attempted for a brain chunk of this size. __ http://spectrum.ieee.org/biomedical/imaging/ai-designers-find-inspiration-in-rat-brains

Think about that for a moment: The five-year, $100 million project is hard pressed to thoroughly understand a single cubic millimeter of rat visual cortex! Not only that, but the result is a “static” map that can only be modeled in a computer, not observed in action as it works within the living creature going about its natural life.
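A rough sense of scale, combining the figures quoted above with the widely cited (and approximate) estimate of 86 billion neurons in a human brain:

```python
# Figures quoted above for 1 cubic mm of rat visual cortex:
NEURONS_PER_MM3 = 50_000
SYNAPSES_PER_MM3 = 500_000_000

# Widely cited, approximate estimate for a whole human brain:
HUMAN_BRAIN_NEURONS = 86e9

synapses_per_neuron = SYNAPSES_PER_MM3 / NEURONS_PER_MM3
equivalent_cubes = HUMAN_BRAIN_NEURONS / NEURONS_PER_MM3

print(f"~{synapses_per_neuron:,.0f} synapses per neuron in the sample")
print(f"a human brain holds the neurons of ~{equivalent_cubes:,.0f} such cubes")
# -> ~10,000 synapses per neuron
# -> ~1,720,000 such cubes
```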

Some Differences Between Brains and Electronic Machines

The grand pioneer of information theory, Claude Shannon, was quoted in 1961 as saying:

…I believe that… there is very little similarity between the methods of operation of [present-day] computers and the brain. Some of the apparent differences are the following. In the first place, the wiring and circuitry of the computers are extremely precise and methodical. A single incorrect connection will generally cause errors and malfunctioning. The connections in the brain appear, at least locally, to be rather random, and even large numbers of malfunctioning parts do not cause complete breakdown of the system. In the second place, computers work on a generally serial basis, doing one small operation at a time. The nervous system, on the other hand, appears to be more of a parallel-type computer with a large fraction of neurons active at any given time. In the third place, it may be pointed out that most computers are either digital or analog. The nervous system seems to have a complex mixture of both representations of data.

These and other arguments suggest that efficient machines for such problems as pattern recognition, language translation, and so on, may require a different type of computer than any we have today. It is my feeling that this computer will be so organized that single components do not carry out simple, easily described functions. One cannot say that this transistor is used for this purpose, but rather that this group of components together performs such and such function. If this is true, the design of such a computer may lead us into something very difficult for humans to invent and something that requires very penetrating insights… I know of very few devices in existence which exhibit this property of diffusion of function over many components… __ http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts

Computing has advanced since Shannon’s day, but the essence of many of his reservations holds true. Neural nets, fuzzy logic, genetic algorithms, and many other clever approaches to overcoming the limits of digital computing algorithms are still not bringing us close to human level AI.
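One of Shannon’s contrasts, graceful degradation under component failure, is easy to demonstrate with a toy experiment. The sketch below builds a small two-layer network with random weights (purely hypothetical, nothing trained) and cuts a growing fraction of its connections; the output drifts gradually instead of failing outright, the way a conventional program with one wrong instruction would:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network with random weights -- purely hypothetical,
# nothing here is trained or taken from a real system.
W1 = rng.normal(size=(100, 50))
W2 = rng.normal(size=(50, 10))
x = rng.normal(size=100)

def forward(w1, w2):
    return np.tanh(x @ w1) @ w2

baseline = forward(W1, W2)

for failure_rate in (0.01, 0.05, 0.20):
    # "Malfunctioning parts": silence a random fraction of connections.
    damaged = forward(W1 * (rng.random(W1.shape) > failure_rate),
                      W2 * (rng.random(W2.shape) > failure_rate))
    drift = np.linalg.norm(damaged - baseline) / np.linalg.norm(baseline)
    print(f"{failure_rate:4.0%} of connections cut -> output drifts by {drift:.1%}")
```

No single connection carries an identifiable function, which is exactly the “diffusion of function over many components” Shannon described.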

Some specific challenges of imitating the brain:

Traditionally, neurons were viewed as units that collect thousands of inputs, transform them computationally, and then send signals downstream to other neurons via connections called synapses. But it turns out that this model is too simplistic; surprising computational power exists in every part of the system. Even a single synapse contains hundreds of different protein types having complex interactions. It’s a molecular computer in its own right.

And there are hundreds of different types of neurons, each performing a special role in the neural circuitry. Most neurons communicate through physical contact, so they grow long skinny branches to find the right partner. Signals move along these branches via a chain of amplifiers. Ion pumps keep the neuron’s cell membrane charged, like a battery. Signals travel as short sharp changes of voltage, called spikes, which ripple down the membrane.

The power of the brain goes beyond its internal connections, and includes its ability to communicate with other brains. Some animals form swarms or social groups, but only humans form deep hierarchies. This penchant, more than any unique cognitive ability, enables us to dominate the planet and construct objects of exquisite complexity. __ http://spectrum.ieee.org/computing/software/in-the-future-machines-will-borrow-our-brains-best-tricks by Fred Rothganger of Sandia National Labs
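The spiking behaviour Rothganger describes, a membrane charged like a battery that emits sharp voltage spikes, is often approximated in neuromorphic work by a “leaky integrate-and-fire” model. A minimal sketch, with arbitrary round-number parameters rather than physiological ones:

```python
import numpy as np

def leaky_integrate_and_fire(input_current, dt=0.1, tau=10.0,
                             v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Crude spiking-neuron model: the membrane voltage leaks toward its
    resting level, integrates incoming current, and emits a spike (then
    resets) when it crosses a threshold. Parameters are arbitrary round
    numbers for illustration, not physiological measurements."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        v += dt * (-(v - v_rest) / tau + current)  # leak + integration
        if v >= v_threshold:                       # threshold crossing
            spike_times.append(step * dt)          # record the spike
            v = v_reset                            # reset the membrane
    return spike_times

drive = np.full(1000, 0.15)                 # constant input for 100 time units
print(leaky_integrate_and_fire(drive))      # a regular train of spikes
```

Even this is a caricature: it captures the spike-and-reset rhythm while ignoring everything the quote says about molecular computation inside each synapse.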

More from the IEEE Spectrum special edition:

Could We Build an Artificial Brain Now? No, but . . .

Deep Learning via Neuromorphic Chips

Navigating Using Artificial Rat Brains

Can We Quantify Machine “Consciousness”?

Human Level AI “Right Around the Corner”?

AI Optimists Were Wrong 60 Years Ago

Are their predictions more accurate now? Ray Kurzweil predicts human level AI by 2029. Jürgen Schmidhuber of the Swiss AI Lab IDSIA predicts human level AI “soon.” Jeff Hawkins expects it to take 20 years. Psychology professor Gary Marcus of NYU predicts such machines within 20 to 50 years. Most other “experts” are more cautious in their predictions.

Minds are not Like Computers

Early AI researchers imagined that they were devising algorithmic “thinking machines” which worked, in essence if not in substance, the way minds work. But that optimistic idea fell by the wayside.

Few researchers now have the goal of devising a single method that will by itself give rise to a thinking machine; instead, typical research projects attempt to tackle small subsystems of intelligence. The hypothesis, again, is that the separability of the mind into layers implies that each layer, like a computer system, is composed of distinct modules that can be studied and replicated independently. Among those researchers whose ultimate goal is still to create a truly thinking machine, the hope is that, when the subsystems become sufficiently advanced, they can be joined together to create a mind. __ Why Minds are Not Like Computers

It is likely that those researchers who wish to create individual working components of thinking minds — then join them together to achieve human level AI — are barking up the wrong tree. Such an approach may well create “brain prostheses” which can be implanted to “replace” damaged brain components. But enough of the original brain must remain to coordinate activity and compensate for the shortcomings of the artificial prosthetic devices.

Those theorists who drone on about creating a “thinking algorithm” are even more deluded as to the nature of human cognition. And those who believe that assembling a large enough conglomerate of computing devices will result in a thinking machine are the most deluded of all.

By examining the basic misconceptions of AI researchers, it becomes easier to understand why the research has become so fragmented, specialised, and divergent. No single approach — or simple combination of approaches — is achieving what is desired. And so, unfortunately, billions of dollars and many of the best human minds are sent on wild goose chases in search of a holy grail that cannot be found at the level of thinking currently being utilised.

We will continue to reap significant benefits from this research, of course. But anyone who is well informed in the multiple disciplines of neuroscience, philosophy, psychology, computing, and information theory cannot help but be frustrated by the failure of both the researchers and the funding agencies to see beyond the fumbling levels of thought currently in use. This is particularly true for any such person with even a milligram of intuition or lateral thinking skill.

All of the cutting edge repositories of modern technological wealth — including Amazon, Google, Apple, IBM, etc. — are in pursuit of the holy grail of machine intelligence. The rewards of winning such a race are incalculable, particularly if human-level AI quickly morphs into superhuman-level AI.

But to this point none of the published approaches seem to have overcome the theoretical limitations pointed out by so many AI sceptics over the past several decades.

While AI may not “stink” as badly as it did for many years, it can still be labeled “malodorous” in comparison to the many claims still being made by its enthusiasts. AI is likely to remain in the category of subservient “human helpers” for many decades to come.

The Age of Machines Has Not Been Canceled

At the same time, the steady replacement of human workers by machines — which started thousands of years ago and was accelerated in the industrial revolution — shows no sign of slowing down in the modern age of advanced computing.

Humans must therefore develop great flexibility of mind and body if they are to avoid becoming underskilled dependents of the steadily encroaching “age of machines.” It is never too late for a Dangerous Childhood.

More:

Deep Neural Nets: Someone needs to tell the writers of this type of article that such computing devices can no longer be properly referred to as “algorithmic.” Training algorithms are utilised, but that is also true for training humans. The actual functioning devices — neural nets or human brains — do not utilise algorithms to complete their tasks in any meaningful sense of the word. The use of the word to describe non-algorithmic mechanisms betrays the lack of relevant vocabulary for what is on the way.
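The distinction can be made concrete with a toy example: the training loop below is an explicit algorithm (gradient descent), but what it produces is just a set of numbers whose behaviour was never written down as rules. A minimal sketch, fitting a single weight to invented data:

```python
import numpy as np

rng = np.random.default_rng(1)
xs = rng.normal(size=100)
ys = 2.0 * xs + 0.1 * rng.normal(size=100)   # invented data: y is roughly 2x plus noise

# The TRAINING procedure is an explicit, step-by-step algorithm ...
w = 0.0
for _ in range(200):
    grad = np.mean(2 * (w * xs - ys) * xs)   # gradient of the squared error
    w -= 0.1 * grad                          # gradient-descent update

# ... but the TRAINED artifact is just a number (in a deep net, billions of
# numbers). Nobody wrote down rules describing its behaviour.
print(f"learned weight: {w:.3f}")            # close to 2.0
```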


2 Responses to On the Verge of an AI-pocalypse?

  1. yoananda says:

    The danger is rather / also that humans behave more and more like machines.

  2. Matt Musson says:

    Time flies like an arrow.
    Fruit flies like a banana.

    Let me know when you have a computer that can understand those two statements.
