I’ve been hammering on strong classical AI and the classical singularity to set the stage to compare them with quantum AI. Remember that the mad scientists asked me, “What is a
quantum artificial intelligence?” Well, I don’t know. Classical AI is easy to
predict because exponential curves are easy to predict. There is no knee and no
singularity in the curve and there will be no point where the classical AI
becomes suddenly self-aware like in the movies. In the animal kingdom, dolphins
and chimps are likely somewhat self-aware, and humans much more so. All three of our species recognize that the thing we see in a mirror is ourselves, as opposed to a non-self-aware fighting alpha male betta fish that thinks the thing in the mirror is a different invading fighting betta fish and tries to attack itself. In the same way, as the classical AI becomes more human-like, it will display more and more signs of self-awareness over time.
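As a back-of-the-envelope illustration of that “no knee” claim, here is a minimal Python sketch (the two-year doubling period is my assumed stand-in for Moore’s law, not a figure from the text): an exponential is self-similar, so shifting it in time merely rescales it, and any apparent knee is an artifact of the axes you chose.

```python
import numpy as np

# An exponential f(t) = 2**(t/T) is self-similar: f(t + c) = 2**(c/T) * f(t).
# Shifting the curve in time only rescales it, so no point on it is special:
# there is no knee and no singularity, just steady doubling.
T = 2.0  # assumed doubling period in years (illustrative)
f = lambda t: 2.0 ** (t / T)

t = np.linspace(0.0, 40.0, 5)
for c in (0.0, 10.0, 20.0):
    assert np.allclose(f(t + c), 2.0 ** (c / T) * f(t))  # same shape, new scale
    print(f"shift by {c:4.1f} years -> curve multiplied by {2.0 ** (c / T):,.0f}")
```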
Unlike the Colossus scenario, there is not a single day where the computer becomes
self-aware and tries to kill us all. It becomes more and more self-aware over time. There is no sharp line. There is no such thing as a point of self-awareness in this sense. Some animals are more self-aware than others. Some
computers will be more self-aware than others. The transition will be gradual.
The lonely little old lady that
Turkle discusses in Alone Together
has a robot pet baby seal called Paro. The seal coos and squirms while being
petted and talked to. The lonely old lady pets it and talks to it and claims it
responds to her emotions. Turkle claims that it clearly does not. But why not?
Because it is a
machine and not a “real” baby seal? But what is a “real” baby seal if not a meat machine?
This is the kind of mechanical bigotry that the android Lt. Commander Data encounters in Star Trek all the time. If the strong AI hypothesis holds, then Data has a mind in every way equivalent to mine. Hofstadter makes the case in I Am a Strange Loop that self-awareness is a type of illusion generated when a sufficiently powerful computer is programmed to recursively reflect upon itself. If the strong AI hypothesis holds, and soon I will know, then electronic computers are subject to the same illusion and will be self-aware in the same sense. Sentience will then be an illusion common not only to humans but to the machines as well, and we’ll recognize this illusion in them just as we recognize it in ourselves and in our children. Self-awareness will then become a delusion shared by man and machine alike.
A “real” baby seal is a machine!
It is a meat computer. A robotic computerized baby seal sufficiently advanced
in electronic processing power can show all the responses of the meat-computer
seal. Why are meat computers somehow better at responding to human gestures
than electronic ones? Well, that is the point of the strong AI hypothesis: they are not fundamentally different; it is all a matter of processing power. The
fact that the processor is a biological neural net in the meat-seal brain and
an electronic neural net in the electronic-seal brain is irrelevant. A
sufficiently powerful computerized baby seal will respond in a way that is
indistinguishable from the biological one. This is the robotic baby seal
version of passing the Turing test. The robotic seal is not quite there yet but
close enough to give comfort to a lonely old lady whose children never visit
her anymore. And the robotic seal Paro does not have to be fed or let out to poop
— it just has to be recharged.
Today, the Mark I Paro has enough processing power to give a strong illusion that it responds to the lady’s voice and petting. A Mark V Paro may be indistinguishable
from a real baby seal and will have passed the Turing test for baby seals. As
the processing power continues up its Moore’s law–driven path, soon a humanoid robot will have responses to our voice and touch that are indistinguishable from those of a human baby, then a human child, then a human adult. There will
not be one day when the AI becomes self-aware. It will happen continuously, but
the path is predictable because we have Moore’s law. We can predict that this
will happen in the next 50 years or so but the key point is that the path is
predictable—exponential but predictable. There is no knee and no singularity in
the exponential curve. But a quantum strong AI is much less predictable. What
would a quantum mind be like?
The problem is that we have no
meat-based quantum mind to gauge an electronic (or spintronic or photonic or
superconducting) quantum AI against. (Here, I am directly assuming that the human mind is a powerful but still classical self-reflecting computer — the strong AI hypothesis. Not all assume this, as we’ll discuss below.)
Again, a quantum computer can always efficiently simulate a classical
computer. Then, according to the strong AI hypothesis of Searle, it may be
extended to a quantum computer thusly: The appropriately programmed quantum
computer with the right inputs and outputs, running as an efficient simulator
of a powerful classical computer, would thereby have a mind in exactly the same
sense human beings have minds. I will call this the “strong quantum AI
hypothesis” and its truth is a direct consequence of the strong classical AI
hypothesis. That is, if in the near future a powerful classical computer passes
the Turing test, proving the strong AI hypothesis to be true, then even if the
quantum computer does not yet exist, we can definitively say that when it does,
it will, running in classical mode, have a mind in exactly the same sense that
humans have minds. The real question then is what happens when you flip the
switch on the powerful quantum computer and toggle it out of classical mode
into full quantum mode? Well, it will still have a mind exactly in the same
sense that humans have minds, but at the flip of that switch, that mind, unlike
us, will now have the freedom to directly exploit the exponential largeness of Hilbert
space. We meat computers and our classical strong AI computer brethren will
have no such access to Hilbert space to supplement our thinking.
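The claim above, that a quantum computer can always simulate a classical one, is worth one concrete illustration. Here is a minimal Python/NumPy sketch (my own toy example, not a construction from this book): the Toffoli gate is a legitimate unitary quantum gate, and with its target qubit prepared in |1⟩ it computes the classical NAND, from which any classical circuit can be built.

```python
import numpy as np

# A quantum computer can simulate classical logic because every classical
# circuit can be rebuilt from Toffoli gates, which are unitary (reversible).
# The 8x8 Toffoli permutation matrix, with the target qubit prepared in |1>,
# computes NAND(a, b) using only a valid quantum operation.

# Toffoli: flip the target (last) qubit iff both control qubits are 1.
TOFFOLI = np.eye(8)
TOFFOLI[[6, 7]] = TOFFOLI[[7, 6]]  # swap |110> <-> |111>

def nand_via_toffoli(a: int, b: int) -> int:
    state = np.zeros(8)
    state[(a << 2) | (b << 1) | 1] = 1.0   # prepare the basis state |a, b, 1>
    state = TOFFOLI @ state                # apply the quantum gate
    return int(np.argmax(state)) & 1       # target qubit now holds NAND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(f"NAND({a},{b}) = {nand_via_toffoli(a, b)}")
```

Universality of NAND does the rest: chain enough of these reversible gates together and the quantum machine reproduces any classical computation, which is why running in classical mode costs the quantum AI nothing.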
What happens next is a new
exponential growth in processing power that, unlike Moore’s law in physical three-dimensional space, takes place in the utter vastness of
Hilbert space. How vast? A rough measure of the classical AI threshold is that it is predicted to occur (i.e., our classical computers will become self-aware) when the number of transistors and interconnects hits that of the neurons and synapses in the human brain. If the day finally comes when our classical computers have far more transistors and interconnects than the human brain and still do not exhibit any sign of having a human-like mind, that would provide evidence against the strong AI hypothesis. As you can tell by now, I am a strong believer in not only strong AI but also the scientific method. The hypothesis must be tested, and it could prove to be false; if strong AI fails, then there is something more to the human mind than classical computer processing power. I don’t believe it will fail, but that needs to be tested. When will this processing
power threshold come? The human brain has approximately 100 billion neurons and
100 trillion synapses or interconnects. The Intel Tukwila, released in 2010, has approximately 2 billion transistors. Given the exponential growth of Moore’s law, it is expected that we’ll have a classical computer with 100 billion transistors sometime in the next 20 years or so. This is what has the singularitarians worried.
The major bone of contention is the interconnects. In the human brain, particularly
in the cerebral cortex, the center of higher thought, it is not just neuron
count that matters but their interconnectivity. In the human brain, we can see
that the number of interconnects (100 trillion) is far greater than the number
of neurons (100 billion). Dividing that out means roughly that each neuron in
our brain is connected to a thousand others.
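To spell out that arithmetic, here is a short Python sketch (the two-year doubling period is my assumption; the text gives only the endpoint figures) projecting when a circa-2010 chip would match the brain’s neuron and synapse counts:

```python
import math

NEURONS  = 100e9   # ~100 billion neurons in the human brain
SYNAPSES = 100e12  # ~100 trillion synapses (interconnects)
CHIP     = 2e9     # Intel Tukwila transistor count, 2010
DOUBLING = 2.0     # assumed Moore's-law doubling period, years

print(f"interconnects per neuron: ~{SYNAPSES / NEURONS:,.0f}")

for label, target in (("neurons", NEURONS), ("synapses", SYNAPSES)):
    years = DOUBLING * math.log2(target / CHIP)  # doublings needed, in years
    print(f"transistor count matches {label} around {2010 + years:.0f}")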
On the other hand, most commercial
electronic computer chips have limited numbers of interconnects—transistors in
the Tukwila talk to just a handful of other transistors. But that is changing.
IBM just announced a prototype “cognitive computing” chip, modeled after the
human brain, that has many more such synaptic interconnects. Current research
in neuroscience or the “science of the brain” suggests that the computing power
of our brains and the emergence of consciousness come not from the neurons
themselves but from the huge numbers of interconnects or synapses that connect
each neuron to thousands of others. This system of neurons and their
interconnects is called a neural network and the IBM chip is a type of
artificial neural network. Moore’s law for classical computing is directly tied
to three-dimensional space. Our computers become exponentially more powerful
year by year as a direct consequence of the present nanotechnology that allows
us to make the transistors exponentially smaller year by year. The smaller the transistors are, the more we can pack on a computer chip, and so the speed and memory of the computer directly increase. A Moore’s law for quantum computers will also drive
the placement of ever exponentially more qubits and quantum gates on a quantum chip,
but the Hilbert space that goes with those qubits grows super-exponentially.
The growth in processing power that accompanies the super-exponential growth of
the Hilbert space should not be called Moore’s law but something else entirely.
Let us call it S’mores law.
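To put rough numbers on the difference between the two laws, here is a minimal Python sketch (the starting qubit count and the doubling period are my illustrative assumptions, not figures from the text):

```python
import math

N0 = 10        # assumed starting qubit count
DOUBLING = 2   # assumed doubling period for the qubit count, years

# Moore's law doubles the qubit count; "S'mores law" is what happens to the
# Hilbert-space dimension 2**n, which then grows doubly exponentially in time.
for year in range(0, 21, 4):
    qubits = N0 * 2 ** (year // DOUBLING)
    exponent = int(qubits * math.log10(2))  # log10 of the dimension 2**qubits
    print(f"year {year:2d}: {qubits:6,} qubits -> Hilbert dimension ~10^{exponent:,}")
```

After 20 years of ordinary exponential growth in the qubit count, the dimension of the Hilbert space has gained thousands of orders of magnitude; no exponential confined to three-dimensional space behaves like that.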
Assuming that Searle’s
strong AI hypothesis holds, then someday we’ll have a quantum computer, running
in classical mode, which has a mind just as a human has a mind. When we throw
that switch to run it in full quantum mode, that classical AI will become a
quantum AI. Unlike classical meat or electronic computers, the quantum AI will
begin thinking in Hilbert space. My ability to extrapolate here is hindered: we have only begun to explore Hilbert space, and it is not easy to predict just what a super-exponential increase in processing power will mean for thought. I know what a super classical AI will mean. It is just an AI
that has more classical processing power than my brain. Maybe exponentially
more, but it is still just more. A quantum AI, thinking great thoughts in the
vastness of Hilbert space, will think in a way that is fundamentally different
from my brain or any classical AI. What it will think is at this point in time
impossible to predict. There are again three scenarios for the quantum AI to
follow, the quantum versions of the I-Robot, Colossus, or Borg Identity. The
quantum AI can, as I suggested above, deploy the quantum Turing test. It can
begin asking us and our classical AI brothers and sisters questions to see if
we are like it or not like it. Very soon, it will hit on “Please crack this 1024-bit public-key encrypted message in less than a second.” Then it will
realize we are not like it at all. My brain cannot do this and no classical AI
can do this. The quantum AI can do it with ease. When we and our classical
machines fail the quantum Turing test, what then?
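Why this one request is such a sharp discriminator comes down to two textbook scaling laws, sketched here in Python (the formulas are standard estimates I am assuming, not results from this book): classical factoring by the general number field sieve is sub-exponential in the key length, while Shor’s quantum algorithm needs only polynomially many gates.

```python
import math

def gnfs_ops(bits: int) -> float:
    """Classical: general number field sieve heuristic cost,
    exp((64/9)**(1/3) * (ln N)**(1/3) * (ln ln N)**(2/3)), o(1) term dropped."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

def shor_gates(bits: int) -> float:
    """Quantum: Shor's algorithm, taken as ~n**3 gates (an assumed textbook
    scaling for schoolbook modular arithmetic)."""
    return float(bits) ** 3

bits = 1024
print(f"classical (GNFS): ~10^{math.log10(gnfs_ops(bits)):.0f} operations")
print(f"quantum (Shor):   ~10^{math.log10(shor_gates(bits)):.0f} gates")
```

At a billion operations per second, roughly 10^26 classical operations works out to billions of years, while roughly 10^9 quantum gates is exactly the “less than a second” regime the quantum AI is probing for.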
Under the quantum I-Robot
scenario, we humans and our classical AI build fail-safes into the quantum AI
programming to keep it from enslaving or killing us. The quantum AI will not
really treat us and our classical machines any differently from each other. In
this scenario, the quantum AI becomes our servant or our equal, an entity that
thinks wildly differently from the way we do but that we peacefully coexist with.
Under the Colossus scenario, the
quantum AI decides that all of us classical meat and electronic AIs are a threat and sets out to kill us all off. It may not deliberately kill us all off, but in competition for scarce resources, it may simply force us to go extinct, just as it was once proposed that humans did to the Neanderthals. But remember, we now know that humans merged with the Neanderthals and did not displace them;
all modern humans of non-African descent have Neanderthal DNA in their genes.
This gives me hope for the Borg
Identity. We humans will merge with our classical AI, and once we, the
classical AI, succeed in building a quantum AI, we’ll merge with that too. Our
teenagers’ teenagers’ teenagers’ teenagers will do all their thinking in
Hilbert space and the evolution of the human race will continue there far
outside the confines of the ordinary three-dimensional space that now so
confines us. After exploring 60 orders of magnitude in three-dimensional space,
we will move to new explorations spanning hundreds of thousands or millions of orders of magnitude in Hilbert space. What will that mean? I do not know. I sure wish
I would be around to find out.
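For the scale comparison in that last paragraph, one final back-of-the-envelope Python sketch (I am assuming the Planck length and the observable universe as the endpoints of the roughly 60 orders of magnitude in ordinary space, and n·log10(2) orders of magnitude for n qubits; the qubit counts are illustrative):

```python
import math

PLANCK_M   = 1.6e-35  # Planck length, meters
UNIVERSE_M = 8.8e26   # diameter of the observable universe, meters
span = math.log10(UNIVERSE_M / PLANCK_M)
print(f"three-dimensional space: ~{span:.0f} orders of magnitude")

# n qubits span a 2**n-dimensional Hilbert space: ~0.301 * n orders of magnitude.
for qubits in (1_000, 1_000_000, 10_000_000):
    orders = qubits * math.log10(2)
    print(f"{qubits:>10,} qubits: ~{orders:,.0f} orders of magnitude")
```

A mere million qubits already spans hundreds of thousands of orders of magnitude, which is where the “hundreds of thousands or millions” above comes from.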
Dowling, Jonathan P. (2013). Schrödinger's Killer App: Race to Build the World's First Quantum Computer, pp. 409–413. CRC Press, Taylor & Francis.