Artificial Superior Intelligence, by Andrew Joppa


 

“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

― Eliezer Yudkowsky

 

From Today’s Headlines:

 

July 17, 2022

How Artificial Intelligence Brings the End of the World

This vast multi-faceted subject will bewilder almost everyone for years to come.

We know this because Elon Musk and Mark Zuckerberg, billionaire brainiacs, disagree diametrically.  So how can the rest of us find solid ground? 

Musk casually asserts, “A.I. is a fundamental threat to the existence of human civilization.”  Zuckerberg answers with some petulance, “A.I. is going to make our lives better. … Doomsday scenarios are pretty irresponsible.”….

Scientists claim to have built a robot they describe as ‘self-aware’

 

Device reportedly learned to recognize itself within its physical environment.

Updated: July 16, 2022

A team of U.S.-based scientists is claiming to have developed an artificial intelligence that has achieved a degree of self-awareness — an accomplishment that, if true, would indicate a major leap forward for AI technology.

 

These types of stories and debates are appearing more and more frequently, and with growing intensity. It is an issue that must be discussed. It is one of the true existential issues facing our species.

 

Modern humans, Homo sapiens sapiens, legitimately consider themselves the most intelligent life form on the planet Earth.  While certain brain attributes seem more finely honed in other animal forms — a sense of direction and prolonged memory, for example — there can be little doubt that the capacity and reasoning capability of the human brain are without rival.

 

Our great intelligence has enabled us to harness nature and bend it to our will.  Because of this intellectual superiority, and its extension into tool making mastery, we are the dominant, and controlling, life form on the planet.  To a greater or lesser extent, all other life forms are subject to our control and…whims. We could debate forever whether that should be the case…but there is no doubt it is true.

 

We know of no moment when our species was not the supreme intelligence, and we consequently have no real method of appreciating the implications if that were no longer the case. It is that inability that might usher in situations of unfathomable potential.

 

We tend to define the quality and quantity of intelligence on a human scale.  That is, a chimpanzee may be measured as having 20% of the intelligence of a human being, while a chicken may be only 1% as intelligent (both percentages created for illustration only).  Thus, the chimpanzee is given a higher ethical weight because of its gross intellectual similarity to humans.  The chicken is nothing more than foodstuff. Plant life, with no measurable intellect, is used indiscriminately, with no ethical concern given to it as a living entity…kill that weed…eat your broccoli. No guilt is necessary.

 

Within the next few decades, or tomorrow, all these intellectual elements that have created and defined the human experience may be permanently altered. What we may discover is that our “superior” intelligence has only been relative to the other life forms that have evolved with us.  We may find that the human brain, although possessing amazing capacity and power, has, like any non-upgradable computer, reached the end of its potential. We may discover that persistent evolutionary artifacts have restricted our brain’s ability to mutate into new and more powerful forms. Some would say we are actually reversing the power of human intelligence…but that’s the subject of another essay.

 

The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.

— Stephen Hawking told the BBC

 

The question that will emerge, and will then be answered — but not by us — is whether Homo sapiens will be the future chimpanzee, chicken, plant…or merely molecular material in the hands of a new, dominant life form: ASI…Artificial Superior Intelligence. These “machines” may be some 1,000 times more intelligent than humans (or more)…and may possess none of the constraining ethical codes that limit human actions.  This new life form may regard humanity as little more than a nuisance to be tolerated…or, at worst, to be eliminated.

 

“There is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035.” —Gray Scott

 

Many projections of the impact of ASI suggest a world where all human problems are solved, as this superior machine intelligence finds answers to everything…including immortality.  Many other projections are far more ominous, foreseeing the total end of protoplasmic human life.  These latter voices must be heard…now…it will be too late to deal with the danger once AGI (Artificial General Intelligence) is achieved. These voices are not a few chosen merely to make a point.  These are the thoughts of major leaders in the world of AI.

 

Stephen Hawking, Bill Gates, Elon Musk and Bill Joy (among many others) have lined up to warn us about something that may soon end life as we know it. In the last few years, Artificial Intelligence has come under unprecedented attack. Two Nobel prize-winning scientists, a space-age entrepreneur, two founders of the personal computer industry — one of them the richest man in the world — have, with eerie regularity, stepped forward to warn about a time when humans will lose control of intelligent machines and be enslaved or exterminated by them. It’s hard to think of an historical parallel to this outpouring of scientific angst. Big technological change has always caused unease. But when have such prominent, technologically savvy people raised such an alarm?

 

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most.”

—Elon Musk wrote in a comment on Edge.org

 

Their hue and cry is even more remarkable because two of the protestors — Bill Gates and Steve Wozniak — helped create the modern information technology landscape in which an A.I. renaissance now appears. And one, Stuart Russell, is a leading A.I. expert: Russell co-authored the field’s standard text, Artificial Intelligence: A Modern Approach.

 

But what exactly are these science and industry giants up in arms about? And should we be worried too?

 

The crux of the problem is that we don’t know how to control superintelligent machines. Many assume they will be harmless or even grateful. But important research conducted by A.I. scientist Steve Omohundro indicates that they will develop basic drives. Whether their job is to mine asteroids, pick stocks, or manage our critical infrastructure of energy and water, they’ll become self-protective and seek resources to better achieve their goals. They’ll fight us to survive, and they won’t want to be turned off. Omohundro’s research concludes that the drives of superintelligent machines will be on a collision course with our own unless we design them very carefully. We are right to ask, as Stephen Hawking did, “So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right?”

 

Wrong! With few exceptions, they’re developing products, not exploring safety and ethics. In the next decade, artificial intelligence-enhanced products are projected to create trillions of dollars in economic value. Shouldn’t some fraction of that be invested in the ethics of autonomous machines, solving the A.I. control problem and ensuring mankind’s survival? That is, if it can be solved at all.

 

We deeply fret over the remote possibility of a North Korean nuclear attack on Hawaii.  Perhaps we should devote some time and resources to the real threat of the annihilation of the human species, attacked intentionally or incidentally by an Artificial Superior Intelligence.

 

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” —Ray Kurzweil
