Artificial Intelligence: Does It Still Stink?

Despite nearly 65 years of failed promises, interest in artificial intelligence (AI) is once again picking up in the venture/startup world.

One of the most talked about VC deals in March, for example, was a $40 million round for Vicarious FPC, an artificial intelligence company that had so much hype around it that the biggest names of the tech world – including Mark Zuckerberg and Elon Musk (and Ashton Kutcher) – lined up to participate. And that comes just two months after Google made a monster $400 million bet on DeepMind, an AI start-up based in London. __WP

More on the Vicarious startup from WSJ

Behind the scenes, Jeff Hawkins and his company Numenta continue working hard on the approach to artificial intelligence spelled out in Hawkins’ 2004 book, “On Intelligence.”

The road to a successful, widely deployable framework for an artificial mind is littered with failed schemes, dead ends, and traps. No one has come to the end of it, yet. But while major firms like Google and Facebook, and small companies like Vicarious, are striding over well-worn paths, Hawkins believes he is taking a new approach that could take him and his colleagues at his company, Numenta, all the way.

For over a decade, Hawkins has poured his energy into amassing knowledge about how the brain works and how its operation might be modeled in software. Now, he believes he is on the cusp of a great period of invention that may yield some very powerful technology. __Jeff Hawkins and His Brain-Like AI

Ramez Naam says: “Not so fast. The Singularity is Further Than it Appears”.

The most successful and profitable AI in the world is almost certainly Google Search. In fact, in Search alone, Google uses a great many AI techniques. Some to rank documents, some to classify spam, some to classify adult content, some to match ads, and so on. In your daily life you interact with other ‘AI’ technologies (or technologies once considered AI) whenever you use an online map, when you play a video game, or any of a dozen other activities.
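One of those workhorse techniques, spam classification, is often done with nothing more exotic than a naive Bayes text classifier. Here is a minimal sketch with made-up toy messages; it is illustrative only, and certainly not how Google’s actual systems work:

```python
from collections import Counter
import math

# Toy labeled messages -- stand-ins, not any real system's training data.
spam = ["win cash now", "free cash prize now", "claim your free prize"]
ham = ["meeting at noon", "lunch at noon tomorrow", "project meeting notes"]

def word_counts(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts):
    total = sum(counts.values())
    # Laplace smoothing so an unseen word doesn't zero out the product
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def classify(msg):
    # Equal class priors in this toy example
    spam_score = log_likelihood(msg, spam_counts)
    ham_score = log_likelihood(msg, ham_counts)
    return "spam" if spam_score > ham_score else "ham"

print(classify("free cash now"))     # classified as spam
print(classify("meeting tomorrow"))  # classified as ham
```

Nothing in this sketch is “about to become sentient” either; it is counting words and multiplying probabilities.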

None of these is about to become sentient. None of these is built towards sentience. Sentience brings no advantage to the companies who build these software systems. Building it would entail an epic research project – indeed, one of unknown length involving uncapped expenditure for potentially decades – for no obvious outcome. So why would anyone do it?

… IBM’s ‘Blue Brain’ project has used one of the world’s most powerful supercomputers (an IBM Blue Gene/P with 147,456 CPUs) to run a simulation of 1.6 billion neurons and almost 9 trillion synapses, roughly the size of a cat brain. The simulation ran around 600 times slower than real time – that is to say, it took 600 seconds to simulate 1 second of brain activity….

… the IBM Blue Brain simulation uses neurons that accumulate inputs from other neurons and which then ‘fire’, like real neurons, to pass signals on down the line. But those neurons lack many features of actual flesh and blood neurons. They don’t have real receptors that neurotransmitter molecules (the serotonin, dopamine, opiates, and so on…

…. consider three more discoveries we’ve made in recent years about how the brain works, none of which are included in current brain simulations.
First, there’re glial cells. Glial cells outnumber neurons in the human brain. And traditionally we’ve thought of them as ‘support’ cells that just help keep neurons running. But new research has shown that they’re also important for cognition. Yet the Blue Gene simulation contains none.

Second, very recent work has shown that, sometimes, neurons that don’t have any synapses connecting them can actually communicate. The electrical activity of one neuron can cause a nearby neuron to fire (or not fire) just by affecting an electric field, and without any release of neurotransmitters between them. This too is not included in the Blue Brain model.

Third, and finally, other research has shown that the overall electrical activity of the brain also affects the firing behavior of individual neurons by changing the brain’s electrical field. Again, this isn’t included in any brain models today.
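The accumulate-and-fire behavior Naam describes is the core of most large-scale neuron simulations. A minimal leaky integrate-and-fire neuron, with purely illustrative parameters (not those of the Blue Gene model), makes the simplification vivid: everything he lists as missing (receptors, neurotransmitters, glia, field effects) is simply absent.

```python
# Minimal leaky integrate-and-fire neuron: accumulates input current,
# leaks back toward rest, and emits a spike when it crosses threshold.
# Parameters are illustrative, not taken from any published brain model.

def simulate(input_current, steps, dt=1.0,
             tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for t in range(steps):
        # Leak toward resting potential, plus the injected current
        dv = (-(v - v_rest) + input_current) / tau
        v += dv * dt
        if v >= v_thresh:  # threshold crossed: fire and reset
            spikes.append(t)
            v = v_reset
    return spikes

# A constant supra-threshold current produces regular, clock-like spiking.
print(simulate(input_current=1.5, steps=50))
```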

… I think the near future will be one of quite a tremendous amount of technological advancement. I’m extremely excited about it. But I don’t see a Singularity [(super)-human level AI] in our future for quite a long time to come. __Singularity? Not Yet

One can find all of these arguments — and a lot more — in the Al Fin blog archives on Artificial Intelligence. But then, anyone who depends upon the mainstream for information is always the last to know — and the most misinformed.

The human brain is a massively complex puzzle — from its structure to real-time gene expression to its multiple levels and types of communication and modulation. The Homo sapiens brain is the only working prototype of human-level intelligence, and we are not even close to understanding how it produces the thing we call intelligence. It is therefore not surprising that all of the purely abstract and disconnected approaches to artificial intelligence are only frail and mocking shadows of the brain of a human infant — much less the brain of a competent and adaptable adult.

That is likely to remain the case for at least the next half century — until humans learn to train ambitious multi-competent, polymath human minds with advanced practical skills and lively creative imaginations.

It is never too late to have a dangerous childhood. Even for a lazy, self-satisfied species.

This entry was posted in Human Brain, Machine Intelligence, Technology.

3 Responses to Artificial Intelligence: Does It Still Stink?

  1. jabowery says:

    Although I haven’t yet had a chance to pursue it in my current work, I’m interested in applying Hecht-Nielsen’s “confabulation theory” to see how well it stacks up against Bayesian methods.

    The basic theory is outlined in his 2004 paper “Cogent Confabulation”. In particular, he posits that the confabulation equation is a major discovery that debunks what he calls the “Bayesian religion” by providing a scalable model of cognition in which the parallel processing elements perform functions similar to the brain’s thalamocortical modules. At present I’m looking at some C code I found on GitHub that implements the primary component, called a Thalamocortical Module.

    It is, of course, tempting to dismiss his extreme claim, that he has somehow debunked the “Bayesian religion”, as some sort of mental aberration — perhaps resulting from his having hit the jackpot with the sale of his company for between $3B and $4B to one of the most prominent credit rating agencies in the world. Moreover, there appears to have been a remarkable drop-off in citations to this theory.

    On the other hand, he did sell his company for between $3B and $4B to one of the most prominent credit rating agencies in the world. Moreover, an intriguing reference to this theory is his presentation at a national lab, which just might explain the drop-off in citations if anything like the proposed project went forward — although one would then have expected them to delete the presentation from the web server.

    If we give the first of Clarke’s Laws any credence (“When a distinguished but elderly scientist states that something is possible, he is almost certainly right.”), then RHN’s age, his standing as a founding father of neural network technology, and the fact that he is commenting on his own specialization should all be given some weight.

    With this in mind, I would ask you to review the presentation — which I located at Sandia’s website — made by RHN at Sandia. Note he proposes an “Extraction System Organization” with a budget rising to $300B/year by 2015.

    In particular, I found this item interesting:

    Collectors and Analysts have no need to know how extraction system works (this knowledge should be highly restricted) – users need only know extraction system’s capabilities and how to use it

    You can download RHN’s book Confabulation Theory: The Mechanism of Thought.
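The selection rule at the heart of confabulation theory, at least as commonly summarized from the “Cogent Confabulation” paper, can be contrasted with a naive Bayesian classifier in a few lines: confabulation picks the conclusion maximizing the product of p(assumed fact | conclusion), while the Bayesian score also weighs the prior on the conclusion. The numbers below are made up for illustration; this is not the Thalamocortical Module code mentioned in the comment above.

```python
import math

# Toy conditional probabilities p(assumed_fact | conclusion).
# Purely illustrative numbers -- not from Hecht-Nielsen's paper.
p_fact_given = {
    "rain":   {"wet_streets": 0.9, "umbrellas": 0.8},
    "parade": {"wet_streets": 0.5, "umbrellas": 0.5},
}
prior = {"rain": 0.2, "parade": 0.8}

facts = ["wet_streets", "umbrellas"]

def cogency(c):
    # Confabulation: product of p(fact | conclusion); no prior on c
    return math.prod(p_fact_given[c][f] for f in facts)

def naive_bayes_score(c):
    # Naive Bayesian score: prior on c times the same likelihood product
    return prior[c] * cogency(c)

confab = max(p_fact_given, key=cogency)
bayes = max(p_fact_given, key=naive_bayes_score)
# With these numbers the two rules disagree: confabulation picks "rain",
# while the strong prior makes the Bayesian score pick "parade".
print(confab, bayes)
```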

  2. alfin2101 says:

    Thanks for the ideas and pointers.

    AI theory has been almost nothing but wild goose chases from the beginning. So many bright minds pursuing so many blind pathways. Such prolonged futility is to be expected from an inbred system of education, management, and research, particularly when trying to break new ground in a poorly understood field.

    Hecht-Nielsen’s ideas are promising. The base-level theory seems rudimentary but sound. The application-level results (sales of $4 billion to credit rating agencies) suggest high-level competency. But there is a wide gap between the two, and I am not so sure that conventionally trained people — no matter how intelligent — will be able to juggle all the necessary balls, swords, flaming torches, and roaring chain saws to reach the important connecting insights.


  3. jabowery says:

    It was bad enough when Minsky pulled a fast one to deep-six data-driven approaches back in the 60s (basically that’s what he did in his book “Perceptrons”), giving rise to decades of nonsense culminating in the hilarious “Cyc” project. (While working at Memex Corp with him, I asked Charles Smith, the guy who financed the PDP books, what he thought of Lenat’s vision for AI, and he just burst out laughing; he couldn’t even bring himself to comment coherently.)

    But since the DotCon bubble burst at the same time the H-1b program was expanded to fill the vanishing jobs, the Nation of Settlers has basically been persona non grata in Silicon Valley in particular — except of course the few enormously rich guys who are surrounded with sycophants and toadies from cultures in which those are a very highly evolved strain. It’s just too embarrassing to admit stuff like: Noyce was from Grinnell, IA; the first supercomputer was built on a farm in Chippewa Falls, WI; and the first computer was built — according to court findings — in an agricultural school in Ames, IA prior to WWII.

    Hecht-Nielsen is one of those guys who normally would be deep-sixed in sycophantry — unable to think straight due to the toady pheromone-triggered endorphins — but I think the fact that he had to overcome the Minsky insanity in order to make any headway with neural networks, and the fact that he found a commercial track forward rather than technosocialist/academic political posturing, somehow established an immune response.

    The result is apparent in this video taken in Silicon Valley several years ago, where you’ll see guys try to make arguments like “but isn’t this just another form of Bayesian reasoning?” The IBM presentation RHN gave drew similar comments that were even more hilarious, like “but isn’t this just another probabilistic model?” Anything but admit that a guy who looks like one of THEM isn’t totally under control. To paraphrase Walter Kurtz: “The Stupidity…. The Stupidity….”

Comments are closed.