In a recent article in Quanta Magazine, much ado was made about the so-called Neven's law:
"That rapid improvement has led to what’s being called 'Neven’s law,' a new kind of rule to describe how quickly quantum computers are gaining on classical ones. The rule began as an in-house observation before Neven mentioned it in May at the Google Quantum Spring Symposium. There, he said that quantum computers are gaining computational power relative to classical ones at a 'doubly exponential' rate — a staggeringly fast clip."
Hartmut Neven is the director of the Google Quantum Artificial Intelligence Lab. In some sense — at least for the quantum cognoscenti! — this is a trivial observation. We have known for well over 30 years that the processing power of a quantum computer scales exponentially in the number of qubits. Hence, once the number of qubits on a chip itself began to grow exponentially, it was simple to deduce that the processing power would grow doubly exponentially.
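To see why the two exponentials compound, here is a toy sketch (the starting qubit count and the six-month doubling period are illustrative assumptions, not data from Google or the book):

```python
import math

# Toy illustration of the doubly exponential scaling: if the number of
# qubits n doubles at a fixed cadence, the Hilbert-space dimension 2**n
# (a rough proxy for processing power) grows doubly exponentially in time.
START_QUBITS = 50        # assumed qubit count at t = 0
DOUBLING_PERIOD = 0.5    # assumed years per doubling of the qubit count

def qubits(t_years: float) -> float:
    """Qubit count after t_years of exponential growth."""
    return START_QUBITS * 2 ** (t_years / DOUBLING_PERIOD)

def hilbert_exponent(n_qubits: float) -> float:
    """Orders of magnitude of the Hilbert-space dimension: log10(2**n) = n*log10(2)."""
    return n_qubits * math.log10(2)

for t in range(4):
    n = qubits(t)
    print(f"year {t}: ~{n:.0f} qubits, Hilbert space ~10^{hilbert_exponent(n):.0f}")
```

The exponent of 10 itself grows exponentially with time, which is exactly the "doubly exponential" rate Neven described.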
Nevertheless, the general public has difficulty grasping ordinary exponential growth — much less doubly exponential growth! So in my 2013 book, Schrödinger's Killer App: Race to Build the World's First Quantum Computer, I wrote an entire short section on this double-exponential growth in processing power. At the time I called it "Moore's Law for Quantum Computing."
This double-exponential growth has now become a reality because, as Neven noted, the number of qubits on superconducting and other platforms is doubling every six months. I thought it would be useful to set the historical record straight by posting here the excerpt from my book that discusses this double-exponential scaling, a feature of quantum computers that I propose we call the Dowling-Neven Law, given that I cooked it up six years before Neven did.
––––––––––––––––––––––––––––––––––––––––––––––––––––––
Excerpt from: Schrödinger's Killer App: Race to Build the World's First Quantum Computer, by Jonathan P. Dowling (Taylor and Francis CRC Press, 2013) pp. 391–392.
––––––––––––––––––––––––––––––––––––––––––––––––––––––
"I will take [Bill] Phillips up on his wager
and bet that we have a quantum computer capable of running Shor’s factoring
algorithm in 50 years or so. I will also take the science fiction writer’s point
of view and extrapolate wildly beyond known physics to get us there. (Remember,
the science fiction writers are more often right than the too-conservative
scientists.) There is consensus in the community that such a practical
factoring engine will require around a trillion (10^12) qubits. Conjecturing
something like Moore’s law for quantum computing, I will further conjecture that
new technologies are around the corner that will allow us to double the size of
our quantum processors every few years. So if we think of the new NIST ion trap
working in a few years (2020) with 1,000 entangled ions, we then just scale this up,
doubling the number of qubits every few years, until we get to a billion
entangled qubits. In figure 6.4, I plot this trajectory with the number of
qubits on the red horizontal scale, the size of the corresponding Hilbert space
on the green vertical scale, and the year along the diagonal on the orange
scale. I show a photograph of a few-qubit ion trap around the year 2000, a 1,000-qubit ion trap around
the year 2020, a fanciful rendition of a 100,000-qubit machine made from a carbon graphene
lattice in 2040, and then I run out of ideas for the hardware. But you can bet
that scientists in 2040 will not run out of ideas (they have thirty years of
new discoveries in hand that I don’t) and they will build the million-qubit
machine by 2060 and finally the Internet-hacking billion-qubit machine running
Shor’s algorithm (for which I display the quantum circuit) by 2080. The growth
in the number of qubits, as per Moore’s law, is exponential by year. Since the
size of the Hilbert space, on the vertical green scale, is exponential in the number
of qubits, it is therefore super-exponential. Following this trajectory, if I hedge William Phillips’ bet and we
have a billion-qubit machine in 70 years, by 2080, then we humans will have
explored 3,000,000,000,000 (three trillion) orders of magnitude in size in Hilbert space. That is to be
compared to the rather paltry 60
orders of magnitude in three-dimensional space humans have explored in all of
recorded history, tens of thousands of years. The exploration of all of
three-dimensional space in all of human history is nothing compared to the
exploration of Hilbert space in the next 70 years. That is the promise of
quantum technology.
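A quick back-of-the-envelope check of this trajectory (a sketch, not part of the book text; the three-year doubling period is an assumed reading of "every few years"):

```python
import math

# Back-of-the-envelope check: start from 1,000 entangled qubits in 2020 and
# double the qubit count every few years (3 years assumed here) until the
# billion-qubit machine.  The doubling period is an illustrative assumption.
start_year, start_qubits = 2020, 1_000
target_qubits = 1_000_000_000
years_per_doubling = 3

doublings = math.log2(target_qubits / start_qubits)    # ~19.9 doublings
arrival = start_year + doublings * years_per_doubling  # ~2080

print(f"doublings needed: {doublings:.1f}")
print(f"estimated arrival year: {arrival:.0f}")
```

Roughly twenty doublings at a few years apiece lands right around 2080, the date quoted in the excerpt.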
Taking the position that Hilbert space is real, just as real as three-space, this is not just a
mathematical exercise. If Hilbert space is a physical resource then by 2080
we’ll have a 10^3,000,000,000,000-dimensional resource at our disposal. It is difficult to wrap your brain around
the size of this number.[i] This is
a one followed by three trillion zeros. This book you are reading has around a
million characters in it. Hence to print out the number 10^3,000,000,000,000 in full on paper
would require a printing of 3,000,000
(three million) books this size filled with all zeros; that is about a tenth of
all the books in the US Library of Congress, one of the largest book
repositories in the world. The number of particles in the entire universe is
only about 10^80, or one followed by only 80 zeros. In the Church of
the Larger Hilbert space we believe that not only are these huge numbers
attainable but that they are attainable in a generation and that they
correspond to the size of a real thing, Hilbert space, that we can use as a resource to build things
with. What is in Hilbert space? Well, nothing at the moment, I’m sure.
That is, whenever we opened a new window on three-dimensional space,
by inventing the telescope or the microscope, we found things that were already
there: new planets in the former case and microbes in the latter. This is
why the exploration of three-dimensional space is a science. As our observing
tools see bigger or smaller things we find stuff that has been there all along
but that we just could not see before. Not so in Hilbert space. Hilbert space
is not independent of us, there for us to find stuff in, but rather we create
Hilbert space and then use it in turn to power new types of machines. The
exploration of Hilbert space is much less a science and much more of a
technology. I rather doubt that when we build the billion-qubit quantum
computer it will open a portal into an exponentially large Hilbert space
that contains Hilbert space creatures that will leap out at us and gobble us
all up. Rather, Hilbert space is empty until we make it and begin to
manipulate our quantum mechanical states inside of it and fill it with our
technologies. It is because we make it that it is a technology. What will we
make in a 10^3,000,000,000,000-dimensional Hilbert space? Well, there is plenty
of room in there, so I would be surprised if all we came up with was a quantum
computer.
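The printing arithmetic above is easy to verify (a sketch; the million characters per book is the rough figure given in the text):

```python
# Check of the printing arithmetic: a one followed by three trillion zeros,
# printed in books of roughly a million characters each.
zeros_to_print = 3_000_000_000_000   # digits in 10^3,000,000,000,000
chars_per_book = 1_000_000           # approximate length of the book, per the text
print(f"{zeros_to_print // chars_per_book:,} books")  # 3,000,000 books
```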
Figure 6.4: Exponentially large Hilbert space over the next fifty years. The red
horizontal scale assumes a type of Moore’s law for quantum processors with the
number of qubits in the processor following exponential growth, doubling in
size every few years. (This is the type of scaling we’ll have to have if a
universal quantum computer with a billion qubits is to be built in fifty
years.) The dimension of the Hilbert space scales exponentially again with the
number of qubits and so it scales ‘super’ exponentially along the vertical
green axis. The diagonal orange arrow indicates the approximate year. The figures
show an ion trap quantum computer with one, two, three, six, and ten qubits
(lower left), where ten qubits gives a Hilbert space dimension of 2^10
or about 1,000. The second graphic is the NIST ion trap that may have 1,000
entangled qubits in 2020, with a Hilbert space dimension of 2^1,000 or
10^300. The third graphic is a schematic of the carbon graphene structure
that might have 1,000,000 entangled qubits, or a Hilbert space of 2^1,000,000,
which is about 10^300,000. The final graphic (upper right) is a circuit
for Shor’s algorithm running on a billion-qubit machine (of unknown technology)
that has a Hilbert space dimension of 2^100,000,000,000 or 10^300,000,000,000.
This chart implies that we will cover 300,000,000,000
orders of magnitude of Hilbert space in the next fifty years compared to 60 orders of magnitude (figure 6.3) in three-dimensional space covered in
the past several thousand years.[ii]
[i] See "The Biggest Numbers in the Universe," by Bryan Clair, Strange Horizons (2 April 2001), <http://www.strangehorizons.com/2001/20010402/biggest_numbers.shtml>.
[ii] The figure is a composite. The two ion trap photos are courtesy of NIST and, as works of the US Government, are not subject to copyright. The graphene molecule, a two-dimensional crystalline form of carbon that has been proposed as a platform for quantum computing, is a schematic that is the work of Krapnik <http://en.wikipedia.org/wiki/File:CF_1.png>, and the circuit diagram of Shor’s algorithm was created by Bender2k14 <http://en.wikipedia.org/wiki/File:Shore_code.svg>; both are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.