Humans, Cyborgs, and Intelligent Machines

Barrat argues that the time it will take for an ASI to surpass human-level intelligence, rendering us ant-like in comparison, could be a matter of days, if not mere hours, after it is created. Worse (it keeps getting worse), human researchers may not even know they have created this potent ASI until it is too late to attempt to contain it. An ASI birthed in a supercomputer may choose, Barrat writes, to hide itself and its capabilities lest the human masters it knows so much about attempt to shut it down. Then it would silently replicate itself and spread. With no need to eat or sleep, and with an intelligence that is constantly improving and war-gaming survival strategies, ASI could hide, wait, and grow its capabilities while humanity plods along, blissfully unaware. _Greg Scoblete

Unaltered humans of average intelligence are becoming an endangered species in the modern western world. Threatened by a flood of immigrant labor on one hand and machines replacing human jobs on the other, the average person of few skills can find himself in an existential bind.

We are already facing stiff competition from non-thinking machines for tech jobs and a wide array of other occupations.

So far, most brighter humans who are quick on their feet have been able to stay ahead of that threat reasonably well. But even non-thinking machines are becoming more versatile in the tasks they can accomplish. Humans beware.

More from Brian Wang:

One possible answer to this dilemma is the cyborg approach. When body parts can be replaced by advanced prosthetic devices, an average person may suddenly become above average in some way. Sure, the line between human and machine becomes blurred, but a person has to provide for himself and his family somehow.

More alien than extreme cyborgs and non-thinking machines will be the intelligent machines — which will have even less in common with humanity than the borgs. A recent book review on the KurzweilAI website discusses James Barrat’s recent work on the coming AI apocalypse: Our Final Invention.

Luke Muehlhauser takes the threat seriously, and suggests some precautions that humans can take to protect themselves against the potential of hostile AI takeover.

But we need more than discussion; we need action. Here are some concrete ways to take action:

  • Raise awareness of the issue. Write about Our Final Invention on your blog or social media. If you read the book, write a short review of it on Amazon.
  • If you’re a philanthropist, consider supporting the ongoing work at MIRI or FHI.
  • If you’d like to see whether you might be able to contribute to the ongoing research itself, get in touch with one of the two leading institutes researching these topics: MIRI or FHI. If you’ve got significant mathematical ability, you can also apply to attend a MIRI research workshop.

Our world will not be saved by those who talk. It will be saved by those who roll up their sleeves and get to work (and by those who support them).

_Muehlhauser at KurzweilAI

Muehlhauser makes a good point when he says that “our world will not be saved by those who talk.” But is it possible that he is taking the threat of a hostile AI takeover a bit too seriously?

Statistician Matt Briggs takes a more skeptical view of the prospects for an imminent takeover by machine intelligences.

Barrat’s fears mark a corrosive mixture of Disney-style anthropomorphisation and rampant scientism. Whatever can be made can have eyes drawn on it, therefore it must be alive, and if some scientist says it’s thinking, then it must be thinking, and if it’s thinking it must be smarter than us, therefore it’s out to get us. Curiously, Barrat tries to evade the anthropomorphisation critique by claiming he’s not engaging in it—right before he does it. _Matt Briggs

Once an AI enthusiast, Al Fin has recently come to view prospects for the near- to mid-term development of human-level machine intelligence with a great deal of skepticism. And for very good reason. Most AI researchers continue to approach the problem from an algorithmic standpoint, which works very well for automation and simple-minded robotic devices. But for human-level, autonomous intelligences, algorithms are hopeless kludges.

This is not to say that alternatives to algorithms will not eventually prove workable as conceptual platforms for intelligent machines. If they do, we will certainly need to make provisions to deal with the consequences. But it would not be wise to wait too long before beginning to prepare.

What is not an option is to wait until AI gets out of hand and then try mounting a “war of the worlds” campaign against superintelligent AGI. This makes for great cinema, but it’s wholly unrealistic. AIs would get too smart and too powerful for us to have any chance against them. _Seth Baum

Unfortunately, we humans can only focus on a few potential catastrophes at a time. That is why our society’s fascination with faux catastrophes such as climate apocalypse and resource scarcity Armageddon is so tragically wasteful. Too many resources are being squandered on false but fashionable dooms, which only adds to the designed dumbing down of society.

Further, humans have become so compartmentalized conceptually, and so limited in competence by over-specialization, that they generally lack the mental flexibility required to deal with the rapidly changing occupational landscape of the future.

Modern humans do not develop their potential, and it is in that sense of human limitation that machines can easily be seen capable of replacing humans. But what if humans woke up to their potential, and learned how to develop it to high levels in their offspring?

That would be a different story. A story of Dangerous Children working together to build the Next Level. Not a politically correct story, if you want to know the truth. But a more interesting story than anything you will hear from the skankstream.


6 Responses to Humans, Cyborgs, and Intelligent Machines

  1. Abelard Lindsey says:

I’m an AI skeptic as well. For one thing, Moore’s Law is approaching its end. Even if we get molecular electronics, Moore’s Law will end around 2030 or so. For another, semiconductor circuits do not have the inherent dynamism that human brains have. For example, our brains rewire their dendritic connections every time we sleep. Semiconductor devices don’t do this.

What we will see is continued automation of jobs at all levels. Indeed, computerization is more of a threat to white collar work than to blue collar work. White collar work involves the managing and manipulation of information, something computers are designed to do. Blue collar workers do physical work, something computers are less adept at doing.

    All of the “fluff” people from the economic bubbles of 1995 to 2008 will be permanently out of work. This, of course, will piss them off.

  2. Matt Musson says:

    Next up – Robot Marketing Managers and Cyborg Ad men.

  3. Good luck trying to stop the singularity, assuming of course it’s possible. I share Abelard Lindsey’s skepticism; real AI may never be possible. However… if it is technologically possible, then it will happen. Skynet here we come.

  4. Stephen says:

    For those of us who despise the idea of becoming a cyborg, I wonder if we cannot develop a biological singularity through gene selection and gene therapy.

  5. Stephen says:

If AI is eventually possible, could it be designed so that it cannot go against humanity, and programmed to feel something like pride, satisfaction, and happiness from helping humanity? I remember a novel from the 1990s called Serpent’s Walk in which the supercomputer has been designed in this manner and is simply not capable of going against humanity. If not, a mechanized singularity becomes as frightening to consider as James Barrat thinks it will be.
