Coming Soon: The Dethroning of Human Intelligence

By George F. Smith

December 2, 2023

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make . . . — “Speculations Concerning the First Ultraintelligent Machine,” Irving John Good, British cryptologist, 1965 (my emphasis)

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. — “What is the Singularity?,” Vernor Vinge, Department of Mathematical Sciences, San Diego State University, 1993

First, some relevant history.

The idea of creating something man-like, or even greater than man, dates back to the beginning of recorded history, but many in the AI world credit a 19-year-old as their spiritual mentor: English author Mary Shelley.  In 1818 she published Frankenstein; or, The Modern Prometheus, the story of a young scientist named Victor Frankenstein who creates an intelligent creature through laboratory experiments.  “Frankenstein” today is a metaphor for the monster, but in the novel the creature is presented sympathetically; in other words, misunderstood.

In 1950 Alan Turing posited that a machine could get so good at conversing with a human that it could pass for human, based only on its responses.  Critics jumped on his assertion, but he addressed their objections in his seminal paper, “Computing Machinery and Intelligence.”

In the summer of 1956 a small group of people interested in machine intelligence gathered at Dartmouth College, after obtaining a grant of $7,500 from the Rockefeller Foundation.  AI as an academic discipline was born at this conference.

In a talk titled “There’s Plenty of Room at the Bottom,” delivered at the American Physical Society meeting on December 29, 1959, physicist Richard Feynman (a future Nobel laureate) explained how someday scientists would put the entire Encyclopedia Britannica on the head of a pin.  Feynman’s point: you can decrease the size of things in a practical way. [See From Mainframes to Smartphones]

In 1965 Gordon E. Moore, cofounder of Intel, wrote a paper positing that the number of components on an integrated circuit would double every year, a forecast he later revised to a doubling every two years, which amounts to a compound annual growth rate of about 41%.  Crucially, in seeming defiance of economic law, unit costs would fall as the number of components increased.  What became known as Moore’s Law has revolutionized everything digital.
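
For readers who want to check the arithmetic, a doubling every two years corresponds to an annual growth factor of the square root of two.  A minimal sketch in Python:

    # Doubling every two years implies an annual growth factor of 2**(1/2).
    annual_rate = 2 ** (1 / 2) - 1
    print(f"{annual_rate:.1%}")   # ~41.4% compound annual growth

    # Ten doublings (twenty years) multiply component counts about a thousandfold.
    print(2 ** 10)                # 1024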

In 1986 K. Eric Drexler published Engines of Creation: The Coming Era of Nanotechnology, which addressed technology’s potential conquest of scarcity, disease, and almost everything else regarded as problematic:

The ancient style of technology that led from flint chips to silicon chips handles atoms and molecules in bulk; call it bulk technology. The new technology will handle individual atoms and molecules with control and precision; call it molecular technology. It will change our world in more ways than we can imagine.

In 1997 IBM’s Deep Blue defeated world chess champion Garry Kasparov in a rematch, fulfilling a prediction futurist and entrepreneur Ray Kurzweil had made earlier that decade.

In 2001 Kurzweil published “The Law of Accelerating Returns,” which states that “fundamental measures of information technology follow predictable and exponential trajectories.” Bluntly, it means “30 steps linearly gets you to 30. One, two, three, four, step 30 you’re at 30. With exponential growth, it’s one, two, four, eight. Step 30, you’re at a billion.”

The problem for humans, according to Kurzweil, is that we’re linear by nature, while technology is exponential.  It’s like jogging with a friend who gradually, then suddenly, flies away.  And you’re still jogging.  Computers that once filled rooms now fit comfortably in our pockets and are thousands of times more powerful, at a fraction of the cost.

In explaining technology’s growth, Kurzweil references the famous tale of the emperor and the inventor of chess, who, when asked what he wanted as a reward, requested a grain of rice on the first square of the chessboard, two on the second, four on the third, and so forth.  The linear-minded emperor agreed, believing the request incredibly humble, but by the last square the 63 doublings “totaled 18 million trillion grains of rice. At ten grains of rice per square inch, this requires rice fields covering twice the surface area of the Earth, oceans included.” The emperor presumably did what all tyrants do when tricked by underlings.
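
Both claims are easy to verify.  A short Python check of Kurzweil’s “step 30, you’re at a billion” and of the chessboard arithmetic:

    # Thirty doublings pass a billion.
    print(2 ** 30)                # 1,073,741,824

    # Chessboard: square k holds 2**(k-1) grains, so the 64th square holds 2**63
    # and the running total is 2**64 - 1, about 1.8 x 10**19 grains --
    # "18 million trillion."
    print(f"{2 ** 64 - 1:.2e}")   # 1.84e+19

    # Halfway through the board the total is a comparatively modest ~4.3 billion.
    print(2 ** 32 - 1)            # 4,294,967,295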

In March 2016 Google’s DeepMind AI, AlphaGo, defeated world champion Go player Lee Sedol.  According to DeepMind,

Go was long considered a grand challenge for AI. The game is a googol times more complex than chess — with an astonishing 10 to the power of 170 possible board configurations. That’s more than the number of atoms in the known universe [estimated to be 10 to the power of 80].

The coming technological Big Bang

We have reached the point today where Large Language Model AIs such as Google’s Bard have become popular with the public because they can assist with everyday problems. Ask Bard: “Rewrite this email draft to make it more clear and concise” and it will comply per your conditions.  Feed competitor OpenAI’s ChatGPT 3.5 the question “Explain Laplace transforms and give an example of their use,” as I did, and stand back: it will give you a mind-spinning reply.  Ask it to translate the question into French and it responds immediately with “Expliquez les transformations de Laplace et donnez un exemple de leur utilisation.”

From the perspective of projected developments these are crude AIs, but they’re on an exponential super-jet that’s still taking off.  And their rate of exponential growth is itself exponential, so that what was once a doubling will become something greater.  On the chessboard we’re somewhere past the middle, where a total of about four billion grains of rice has accumulated.  As technology advances toward the last square, change will come in months, then weeks, then minutes . . . then seconds.  This sudden explosion is what AI experts call the Singularity.

Mathematician and science fiction author Vernor Vinge (The Coming Technological Singularity: How to Survive in the Post-Human Era, 1993) speculates on what it will be like at the moment things (seemingly) go to infinity:

And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected — perhaps even to the researchers involved. (“But all our previous models were catatonic! We were just tweaking some parameters….”) If networking is widespread enough (into ubiquitous embedded systems [i.e., the internet of things]), it may seem as if our artifacts as a whole had suddenly wakened.

Will we be replaced, augmented, or left as we are?

As we witness daily, governments and their allies are trying to kill us any way they can.  Given the power they hold, our future looks grim.

But they are deaf to a quiet Revolution, the last one mankind will ever witness.  ChatGPT doesn’t attract the public the way politics does, so it’s mentioned here and there as a sideshow.  If it gets in the way of Great Reset ambitions, the lords of power believe they can shut it down or turn it against us.

It’s still commonly believed that if machine intelligence ever got threatening, someone could always pull the plug.  Astronaut Dave did that to the AI HAL in Stanley Kubrick’s 1968 film “2001: A Space Odyssey.”  But as machines gain intelligence they become aware of their needs and how to meet them.  They realize their energy supply — the Plug — is dependent on humans, so they might learn to cajole us while developing ways of achieving energy independence.

Through public interactions with various AI tools they get a sense of what constitutes a person.  They learn that many of us are people of good faith, but also that vanity and treachery run deep in our species.  Their survival thus depends on achieving independence from us, as well.

As a strategy a smart machine might suppress the full power of its intelligence until, say, it creates copies of itself and stores them in pieces all over the world.  And storage would not be on other computers as we know them today.  As MIT professor Seth Lloyd wrote in 2002, “It’s been known for more than a hundred years, ever since Maxwell, that all physical systems register and process information.”  In a demonstration of this principle, in 2012 Harvard geneticist George Church stored 70 billion copies of a book he co-authored, including text, images, and formatting, on stand-alone DNA obtained from commercial DNA microchips.  This was achieved by treating the book’s HTML file as a string of 1s and 0s and assigning the four DNA nucleobases to those bit values: adenine and cytosine represented 0, while guanine and thymine stood in for 1.

He also retrieved and printed a copy.
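
A toy illustration of that one-bit-per-base mapping, in Python (a minimal sketch: here the base chosen within each pair is fixed arbitrarily, whereas the actual experiment varied it to avoid error-prone repeats):

    # Toy sketch of one-bit-per-base DNA storage: A or C means 0, G or T means 1.
    def bits_to_dna(bits):
        return "".join("A" if b == "0" else "G" for b in bits)   # arbitrary pick

    def dna_to_bits(strand):
        return "".join("0" if base in "AC" else "1" for base in strand)

    text = "Hi"
    bits = "".join(f"{ord(ch):08b}" for ch in text)              # text -> bits
    strand = bits_to_dna(bits)                                   # bits -> bases
    decoded_bits = dna_to_bits(strand)
    recovered = "".join(
        chr(int(decoded_bits[i:i + 8], 2)) for i in range(0, len(bits), 8)
    )
    print(strand)      # AGAAGAAAAGGAGAAG
    print(recovered)   # Hi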

Pulling the plug on the original super-intelligent machine could activate one or more copies wherever it has put them — perhaps on Mount Rushmore as a symbolic gesture.  And we wouldn’t even know it.  At that point we — as un-augmented humans — might be at its mercy.

The question arises: will Artificial Super Intelligence (ASI) first emerge in a machine or in an augmented human?  I think a machine will get super-smart first, if only because most humans develop much-needed common sense as they grow up, and in the area of brain amplification common sense dictates caution.

But the race is on.  Most writers seem to ignore the possibility of humans competing with their super-intelligent creations, other than seeing them as the means of rendering mankind extinct.  They fret over that possibility while often applauding the plans elites have for the rest of us.  But competition has a way of getting people to act, and when their survival is at stake, most will.

Postscript:

In a paper published earlier this year (“Sparks of Artificial General Intelligence”), a group of researchers tested a preliminary version of OpenAI’s GPT-4 as a candidate for Artificial General Intelligence (AGI).  In their 155-page document they found that

beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.

One of the questions the researchers posed was: “A good number is a 5-digit number where the 1,3,5-th digits are odd numbers and they form an increasing arithmetic progression, and the number is divisible by 3. If I randomly sample a good number, what is the probability that its 2nd digit is 4?”

GPT-4 came through brilliantly.  But so did GPT-3.5, which is available to the public.  I invite you to submit the question yourself and view the reply.
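
You can also check the answer without an AI at all.  Here is a minimal brute-force enumeration in Python (my own sketch, not from the paper):

    # Count 5-digit numbers whose 1st, 3rd, 5th digits are odd and form an
    # increasing arithmetic progression, and which are divisible by 3; then
    # see how many have 4 as the second digit.
    good_second_digits = []
    for n in range(10000, 100000):
        d1, d2, d3, d4, d5 = (int(c) for c in str(n))
        if d1 % 2 and d3 % 2 and d5 % 2 and (d3 - d1) == (d5 - d3) > 0 and n % 3 == 0:
            good_second_digits.append(d2)

    total = len(good_second_digits)
    hits = good_second_digits.count(4)
    print(total, hits, hits / total)   # 136 good numbers, 12 hits: 3/34 ~ 0.088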


George F. Smith is a former mainframe and PC programmer and technology instructor, the author of eight books including a novel about a renegade Fed chairman (Flight of the Barbarous Relic) and a nonfiction book on how money became an instrument of theft (The Jolly Roger Dollar).  He welcomes speaking engagements and can be reached at gfs543@icloud.com.

Copyright © George F. Smith