During the period 2011–2013 I was honored to participate as
a co-investigator on one of the four teams funded under the Intelligence
Advanced Research Projects Activity (IARPA) Quantum Computer Science Program (QCS). To quote from the IARPA web page, “The IARPA Quantum
Computer Science (QCS) Program explored questions relating to the computational
resources required to run quantum algorithms on realistic quantum computers.”
First let me state that this blog post represents my own
viewpoint and opinion of the program and not that of IARPA or other
participating teams or government agencies involved. However, as most of my
readers know, given my involvement in the US Department of Defense (DoD) and
Intelligence Agency programs in quantum computing from their inception in 1994,
both as an advisor to the government and as a government-funded researcher, I
perhaps have a unique perspective that, hopefully, can help us see the big
picture of what emerged from the QCS program. (For readers who don’t know my
background in this area, please pick up a copy of my new book, Schrödinger’s Killer App: Race to Build the World’s First Quantum Computer, and then very
carefully set it down again.)
Quantum computing is not merely the sum of a qubit hardware
technology, quantum control, quantum error correction, and an algorithm,
but rather a system where the combination of those elements may work better
or worse than the sum of the parts.
This, of course, leads to the research questions related to where those
nonlinearities are, how to avoid or exploit them as appropriate, and how to
build tools to allow easier control of those nonlinearities. Our particular
team’s work on decoherence-free subspaces, in this context, was an early
example of what we hoped to achieve in that it ignores the line between quantum
computing and quantum error correction and focuses on how to build a protocol
that performs both functions in an integrated way better than you'd get with
individual pieces.
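For readers who have not met a decoherence-free subspace before, here is a minimal textbook example (my own illustration, not necessarily the specific encodings our team studied): two qubits that see the same random dephasing can hide one logical qubit in a subspace that collective dephasing cannot touch.

```latex
% Textbook two-qubit DFS for collective dephasing (illustrative example only).
% Both qubits acquire the same unknown phase phi, so the encoded states pick
% up only a common, unobservable global phase.
\begin{align*}
  |0_L\rangle &= |01\rangle, \qquad |1_L\rangle = |10\rangle,\\
  U(\phi)     &= e^{-i\phi Z_1/2}\, e^{-i\phi Z_2/2},\\
  U(\phi)\bigl(\alpha|0_L\rangle + \beta|1_L\rangle\bigr)
              &= \alpha|0_L\rangle + \beta|1_L\rangle .
\end{align*}
```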
In particular, I would first like to point out that our team
was a unified and integrated research program and not just a loose federation
of independent research efforts. Frankly, going into this, I thought such
unification would be difficult, if not impossible, to attain. My fear was that
our team, consisting of tens of researchers with backgrounds in classical
computer science, quantum physics, engineering, and so forth, would have a great
deal of trouble even communicating with each other — getting beyond the
specialty-specific jargon. This was perhaps a problem in the first few
months but we all learned from each other very rapidly and by the end of the
first few months of Phase I, everybody was on the same plane and had a much
better view of the integrated whole than, say, when we wrote the original
proposal. Sometimes throwing a bunch of people from disparate backgrounds into
a room and leaving them to figure out how to communicate actually does work.
As this intra-team communication progressed, I personally
came to better understand in just what ways a quantum computer is more than the
sum of its individual parts. That is, a quantum computer is not just a particular
qubit hardware platform cobbled together with quantum control, quantum
error correction, quantum compilers, and a programming language. For example, as a
theoretical physicist who has worked close to the qubit hardware for many
years, I found it difficult to wrap my brain around what a quantum programming
language would even look like. Hence it was quite comforting to me, when I met
our team member Peter Selinger in person for the first time at the kick-off
meeting, that he confessed that he, in spite of taking the lead, both
internationally and on our team, in developing exactly such a programming
language (Quipper),
had trouble wrapping his brain around some of the things we found
in the government-furnished information, such as the description of errors in
terms of Kraus operators. Thus, while our team had experts in all
of the disparate sub-areas needed to carry out the requisite resource
estimates, at least initially I am sure none of us really saw the big picture.
That changed very quickly as the project began to pick up steam and our team
was able to integrate these disparate sets of expertise, so that what we
produced was a type of interactive and nonlinear system analysis
in which the various pieces of the quantum computer were being tailored in real
time to work well with all the other pieces, in a way that at the outset I had
not expected to happen so quickly.
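As an aside for readers who, like me back then, had never met a Kraus operator in the wild: the idea is simply that an error channel maps a density matrix rho to a sum of terms K_i rho K_i-dagger. Here is a minimal sketch (my own textbook-style illustration, not the actual GFI error model) for a single-qubit depolarizing channel:

```python
# Minimal sketch: a single-qubit depolarizing channel written as Kraus operators.
# This is a textbook-style illustration, not the actual GFI error model.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing_kraus(p):
    """Kraus operators {K_i} for a depolarizing channel with error probability p."""
    return [np.sqrt(1 - p) * I,
            np.sqrt(p / 3) * X,
            np.sqrt(p / 3) * Y,
            np.sqrt(p / 3) * Z]

def apply_channel(kraus_ops, rho):
    """rho -> sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

kraus = depolarizing_kraus(0.01)
# Trace preservation: sum_i K_i^dagger K_i = I.
assert np.allclose(sum(K.conj().T @ K for K in kraus), I)

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # the state |0><0|
print(np.round(apply_channel(kraus, rho), 4))    # slightly mixed output state
```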
An outsider once asked me what the QCS program was all about
and, somewhat flippantly, I replied that it was the Manhattan Project for
quantum computing. (The only difference is that, for the Manhattan Project, the
end goal was to ensure that something blew up but for the QCS program the end
goal was to make sure that nothing ‘blew up’.) In hindsight I do not think this
remark is flippant at all. To be sure, we were not, unlike the Manhattan Project,
trying to build an actual working device, but like the Manhattan Project we
had experts from disparate fields all working on different aspects of the
development of the resource estimates. Let me give a particular example. Going
into this program, no one on our team really understood the interplay between the
resource estimates, the type of quantum error correction (QEC) code employed,
and the type of error mitigation techniques, such as quantum control (QC) and
decoherence-free subspaces (DFS), that needed to be used. We all understood that
there was going to be a tremendous amount of overhead coming from the QEC
codes, particularly if the errors were near threshold, where many levels of
concatenation would be required, but we did not really have a quantitative feel
for how this all played out. As the baseline resource estimates of our team and
the other three teams began to yield initially rather long time scales
for running various algorithms, we were then able to go back and quantitatively
investigate the quantum error mitigation methods with a much better
understanding. As it was unclear which mitigation would work best with which
government-provided physical machine description (PMD), we tried a number of
them in parallel, including active quantum control with feedback, dynamical
decoupling, and DFS. This is similar to the Manhattan Project,
where they pursued both uranium and plutonium designs in parallel, as well as
spherical-implosion and gun designs for inducing detonation. One of the things
that we quickly realized is that for most of the PMDs the noise and decoherence
were not well enough understood or characterized in the experimental literature
for the theorists to actually optimize over the differing mitigation protocols.
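To give a sense of why sitting near threshold is so costly, the standard textbook scaling for a concatenated code already tells the story (this is a generic estimate, not our program's actual GFI numbers): with physical error rate p just below the threshold p_th, the logical error rate falls off slowly with the number of concatenation levels k, while the qubit overhead grows exponentially in k.

```latex
% Generic concatenated-code scaling (textbook estimate, not QCS program data).
% p    : physical error rate per gate
% p_th : fault-tolerance threshold of the code
% k    : number of levels of concatenation
% A    : physical qubits per logical qubit for one level of the code
p_L^{(k)} \;\approx\; p_{\mathrm{th}}\left(\frac{p}{p_{\mathrm{th}}}\right)^{2^{k}},
\qquad
N_{\mathrm{physical}} \;\sim\; A^{k}.
% Example: if p/p_th = 0.9, reaching p_L ~ 10^{-15} needs
% 2^k >~ ln(10^{-15})/ln(0.9) ~ 330, i.e. k ~ 9 levels of concatenation;
% with A = 7 (the Steane code) that is 7^9 ~ 4 x 10^7 physical qubits
% per logical qubit.
```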
Another interesting result to come out of our team's ‘divide
and conquer’ strategy was that, going into the program, I had the feeling that
each of the QEC encodings would perform reasonably well. I would wager this was
the assumption of many in the field. What clearly came out of this program is
that some QEC encodings, for the particular PMDs investigated here, really do
perform much better, and do so across all PMDs. Although resistant to this idea at first, we
finally did convince ourselves that what some vocal members of other teams had
been saying all along was likely correct: surface codes were clearly the best
shot at moving towards a scalable quantum computer with current levels of
errors in the gates and the qubit storage. However, we were able to quantify
just in what way that was correct using the tools developed in our research
program, so we did not have to just take another team’s word for it.
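As a rough illustration of the kind of comparison the tools made rigorous, one can size a surface code with a common back-of-the-envelope heuristic; the numbers below are illustrative assumptions of mine, not the program's actual estimates.

```python
# Rough surface-code sizing sketch with illustrative numbers only;
# the actual QCS estimates came from the full tool chain, not from this formula.

def surface_code_distance(p, p_th, target_logical_rate):
    """Smallest odd code distance d for which the common heuristic
    p_L ~ 0.1 * (p / p_th) ** ((d + 1) / 2) drops below the target rate."""
    d = 3
    while 0.1 * (p / p_th) ** ((d + 1) / 2) > target_logical_rate:
        d += 2
    return d

p, p_th = 2e-3, 1e-2          # assumed physical error rate and threshold
target = 1e-15                # assumed logical error rate for a long algorithm
d = surface_code_distance(p, p_th, target)
physical_qubits = 2 * d ** 2  # data + measurement qubits, a common rule of thumb

print(f"distance d = {d}, about {physical_qubits} physical qubits per logical qubit")
# With these assumed numbers: d = 41 and roughly 3400 physical qubits per logical qubit.
```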
It is interesting to reflect on the role of error correction
codes in the development of classical computers. In the 1950s von Neumann (and
others) developed rudimentary classical error correction codes to address the
problem that the digital computers of the time, such as the ENIAC and the
EDVAC, were blowing out vacuum tubes about as fast as they could be replaced.
These codes are not used today because of the technological growth curve that
has taken us from vacuum tubes to the integrated circuit where fault tolerance
is built into the hardware. To foresee that classical technology curve in the
1950s, von Neumann would have had to predict the invention of the integrated
circuit (1960s), the Mead-Conway VLSI design rules (1980s), and the consequent
path of Moore’s Law that has given us the fault-tolerant computers of today. In
many ways the QCS program is much like an attempt to carry out resource
estimates for various classical computer algorithms while assuming that the
hardware is some variant of the ENIAC or other vacuum tube technology. Indeed,
a single ion-trap quantum computer with millions or billions of qubits, along
with all the requisite trapping and gate implementation hardware would easily
fill a large warehouse and weigh several tons, just as the ENIAC did in its
day.
Very likely, if resource estimates for now-common classical
algorithms had been carried out in the 1950s, the researchers would also have run
into long time scales for their baseline resource estimates (for, say, the
algorithm for playing World of Warcraft on an ENIAC) — particularly if large
amounts of classical error correction needed to be deployed. Yet we now know
that the then-unforeseen breakthroughs leading up to Moore’s Law allowed us to
reduce the size and weight of such a computer so that it fits in my pocket
(iPhone), with run times for most algorithms of interest measured in
milliseconds. The wrong message to take away from a baseline resource estimate
made in the 1950s for classical computers would have been that classical computing was
doomed. In the same way, the wrong message to take away from the QCS
program would be that the large baseline resource estimates that PLATO and all the teams
found are a sign that quantum computing is doomed. These baseline
resource estimates (BRE) form an upper
bound against which all unforeseen (but highly probable to be found) future
technological advances in quantum computer hardware or architecture
will be measured. All new technologies grow slowly at first, but
inevitably new discoveries and innovations come to bear and the growth then
settles into an exponential pace of development. There will someday be a
quantum Moore’s Law as these new discoveries are deployed, and in tens of
years the warehouse-sized quantum computer will fit in my new grandnephew’s
pocket.
Our team’s BRE work came out of the need to understand the
problem space and develop a baseline.
Most non-algorithm work that has considered quantum algorithms at all
has looked at Shor's or possibly Grover's.
These algorithms do have the advantage that they are relatively simple
and their practical value would be immediate.
However, if we presume that quantum computers are going to be generally
useful, rather than dedicated problem solvers, then we also assume the space of
algorithms will continue to grow and that we'll have other, more complex
algorithms that are useful. Focusing on the simplest algorithms has, in
computer science, traditionally led to bad decisions about what is possible and
what resources are necessary. Thus, the QCS program has examined a range of
algorithms — from some that would be practical to run if a quantum computer of
modest resources could be built, up to some that will likely never be
practical to run.
Thus the QCS program is similar to the classical computer
algorithm analysis in HACKMEM performed
in 1972 in “Proposed Computer Programs, In Order Of Increasing Running Time”
(problems 77–96). They consider several
games, up to checkers, and what it would take to “solve” them. They also consider hex, chess, and go,
analyzing their complexity. In
particular, their results would have seemed to indicate that chess and go would
be intractable on a classical computer. In contrast to what some might take
away from the QCS baseline resource estimates, the HACKMEM group did not
therefore conclude that classical computing is impossible or worthless. Rather, this sort of analysis is generally
taken as useful for gaining a broad view of what is and is not feasible, as
well as for understanding when heuristic shortcuts will
be necessary. Thus, chess is an example
of a game where classical brute-force computational techniques are
insufficient, so chess-playing computers use a combination of techniques — from
stored game openings and endings to search techniques that increase the
probability of finding a good move in a reasonable amount of time, at the
possible expense of finding the best move. Contrary to what one might have
taken away from the HACKMEM results in 1972, the best chess players
in the world have been computers ever since IBM’s Deep Blue beat world chess champion
Garry Kasparov in 1997.
The QCS BREs are also necessary simply as a baseline. For
networking audiences, I typically point out that in networking we always have
one: in networking programs, we’re either asked
how much we can improve over TCP (for pure networking) or how much worse than
TCP we are (for security-focused programs).
For quantum computing, we haven't had a similar baseline to compare
against. Even if it's imperfect, this is
something that the community very much needs to make it easier to compare
claims and to make it harder for people to report results that work only on
their carefully chosen problem. (It may
be going too far to point out that various people have started referring to what
D-Wave presents results against as "the D-Wave problem" since it was
carefully chosen to match the capabilities of their machine.)
So what did we learn by the end of the QCS program? Along
the lines of the Manhattan Project analogy, we have learned that it is indeed
possible to put together a team of smart people with disparate backgrounds
ranging from physics to computer science and hammer out a common language and
mode of thinking that allowed us to successfully make reasonable BREs for the
wildly different PMDs and algorithms. Going into this at the kick-off meeting, I
was not sure even that would be possible. And yet here we are at the end of
Phase II, and all four teams, often taking different approaches, found similar
patterns for all of the algorithms investigated.
We learned in a quantitative way just how bad the QEC
overhead is and in just what way that overhead affects the various run times
for different choices off the menu. Before QCS there was only a qualitative
feeling for how this would play out. In particular, QCS gave us a robust way to
compare different QEC schemes to each other. Before QCS, different researchers
chose QEC schemes based on what was either popular at the time or what they
were most familiar with. Specifically, I was not convinced that one QEC scheme, the surface
code, would be dramatically better than the others — in spite of dramatic
claims made by members of other teams. Where were the numbers to back up these
claims? Well, the QCS program produced them, and the numbers produced by our own
team told us that indeed surface error-correcting codes were probably
the way to go.
Another critical point that came out of the QCS program was
the interplay between quantum error correction (QEC), quantum control (QC), and
other error reduction methods such as decoherence-free subspaces (DFS) and
quantum dynamical decoupling (QDD). The physical machine descriptions (PMD) and
government-furnished information (GFI) were presented to us in such a way that,
unless something was done, the native error rates in the hardware were all sitting
right at the threshold where the QEC would work. My understanding is that
the government folks expected all the teams to use QC, DFS, and QDD to lower
these native error rates to a point where large-depth QEC codes could be avoided.
It did not become clear until the meeting in Princeton in July of 2012 that
nearly all the noise models provided to the performing teams by the government
teams were of a sort where quantum control techniques would not work.
The most critical point to come out at that meeting was
that, in fact, in most quantum computing experiments these noise spectra
are either unknown or unmeasured, which I suspect led to the simplest
assumption: make them all Gaussian, precisely the type of noise
against which the most powerful quantum control methods are useless. At one review
meeting it was deemed important to get the experimenters and the quantum
controllers all into the same room for a two-day meeting to hash out just what
the theorists needed from the experiments to optimize the control techniques.
Such a meeting, I think, would still be critical for the advancement of the
field of quantum computing. It was, in my opinion, this being stuck right at
the error correction threshold that led to the huge code depths and the long
time scales of the resource estimates.
It appears to me that, rather than having a generic quantum
control or other decoherence-mitigation methodology that would be applied to
all hardware platforms, it will be critical in the short term to develop
quantum control techniques that are specifically tailored to the noise spectrum
of each of the physical devices. This suggests a program where the quantum
control and QEC communities sit down with the experimenters and hash out just
what experiments need to be done and what type of data is needed in what form.
Next, from these data, specific noise-spectrum models would be developed, and
finally the QEC and QC theorists would produce tailored quantum control
techniques to match the noise spectra. Close collaboration with the
experimentalists is also needed so that the QC theorists understand,
particularly for active quantum control, just what types of weak and strong
measurements are possible in each experimental implementation. For example, in
a recent experiment with ion traps that demonstrated Uhrig dynamical
decoupling (UDD) as a decoherence mitigation protocol, the experimenters
deliberately added noise to their system, noise with a spectrum that UDD was
designed to correct, and then corrected it. This is clearly not the way to go.
The QC methods should be tailored to the native noise spectrum and not the
reverse.
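For context, UDD prescribes a specific non-uniform spacing of its pi pulses, derived for noise spectra with a hard high-frequency cutoff, which is exactly why matching the sequence to the native spectrum, rather than injecting noise the sequence is already known to suppress, is the right order of operations. The standard pulse-timing rule is:

```latex
% Uhrig dynamical decoupling: the j-th of n pi pulses, applied over a total
% evolution time T, occurs at
t_j \;=\; T\,\sin^{2}\!\left(\frac{j\pi}{2(n+1)}\right), \qquad j = 1,\dots,n.
```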
Another big success of the QCS program was the ability to
exploit the Gottesman-Knill theorem in order to characterize errors in
large-depth codes that cannot be handled by a straightforward Monte Carlo
method due to the exponentially large Hilbert space. In the proposal-writing
stage in 2010 we had some vague ideas that something like this would be
possible, in that the gates in the Clifford group, while efficiently simulatable
classically, might still span enough of the computational circuit space that
we could compute errors in large-depth circuits without having to build a
quantum computer to do so. The concrete result was the resource-estimating tool
(QASM-P) that we developed,
and I was really surprised at how well this worked. Such a tool will be a mainstay
for quantum circuit characterization for years to come, at least until we have
enough quantum computers around where we can start using few-qubit machines to
characterize few-more-qubit machines in a bootstrapping approach. QASM-P was in
fact just one of several stand-alone tools developed by our team on the fly to
address particular computations as needed. What I find amazing is that even
though these tools were developed by different team members, it was possible to
integrate their usage across all the resource estimates and produce coherent
results.
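To give a flavor of why Clifford-group circuits are so cheap to track classically (a minimal illustration of the Gottesman-Knill idea, emphatically not the QASM-P implementation itself), here is a sketch that propagates a Pauli error through Clifford gates using only two bits per qubit, ignoring global phases:

```python
# Minimal Gottesman-Knill-style sketch: propagate a Pauli error through a
# Clifford circuit by conjugation, tracking only (x, z) bits per qubit.
# Illustration only; this is not the QASM-P tool, and global phases are ignored.

def new_pauli(n):
    """Identity Pauli on n qubits: x[i], z[i] flag an X / Z factor on qubit i."""
    return {"x": [0] * n, "z": [0] * n}

def apply_h(p, q):        # H: X <-> Z
    p["x"][q], p["z"][q] = p["z"][q], p["x"][q]

def apply_s(p, q):        # S: X -> Y (= XZ up to phase), Z -> Z
    p["z"][q] ^= p["x"][q]

def apply_cnot(p, c, t):  # CNOT: X_c -> X_c X_t, Z_t -> Z_c Z_t
    p["x"][t] ^= p["x"][c]
    p["z"][c] ^= p["z"][t]

def to_string(p):
    return "".join("IXZY"[p["x"][i] + 2 * p["z"][i]] for i in range(len(p["x"])))

# Example: an X error on qubit 0 pushed through H(0), CNOT(0,1), S(1).
err = new_pauli(2)
err["x"][0] = 1           # start with X on qubit 0
apply_h(err, 0)           # becomes Z on qubit 0
apply_cnot(err, 0, 1)     # a Z on the control stays put
apply_s(err, 1)           # nothing on qubit 1 to transform
print(to_string(err))     # prints "ZI"
```

The cost of this bookkeeping is linear in the number of qubits and gates, which is why errors in enormous Clifford-dominated circuits can be analyzed without ever touching the exponentially large Hilbert space.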
This issue of large-scale circuit performance
characterization, to my mind, is the real roadblock for building large quantum
computers. It is related to the issues that currently plague, for example, ion
trap computers with just about 10 qubits. The known protocols for
characterizing the gates and states are quantum process and quantum state
tomography, protocols that require a number of data points that grows
exponentially with the number of qubits. Blatt is fond of saying that he could
easily build a 16-qubit ion trap register but that the classical resources for
characterizing the device’s performance are lacking. Here the Clifford-group
approach does not help. The example I like to use is that when Intel tries to
characterize the performance of its latest chip, it does not do so using the
ENIAC. Hence there will come a time when we will have to use few-qubit machines
to characterize the performance of few-more-qubit machines and in this way
bootstrap our way up to many-qubit machines. However, this quantum-to-quantum
bootstrapping characterization protocol does not yet exist, even in theory.
Until we get around this problem we’ll be limited to machines of only 10–20
qubits by pursuing the bottom-up approach of adding a new qubit to the device
every year or so.
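To put rough numbers on that exponential wall (standard parameter counting, not a claim about any particular experiment): full state tomography of an n-qubit register must estimate 4^n - 1 real parameters, and full process tomography roughly squares that.

```latex
% Brute-force characterization cost for n qubits (d = 2^n):
\text{state tomography: } d^{2}-1 = 4^{n}-1 \text{ real parameters}, \qquad
\text{process tomography: } d^{4}-d^{2} = 16^{n}-4^{n} \text{ real parameters}.
% For a hypothetical n = 16 register, 4^{16}-1 \approx 4.3 \times 10^{9}
% parameters for the state alone.
```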
So in summary, when I first read the Broad Agency
Announcement (BAA)
call for proposals for the QCS program I thought the program was wildly
overambitious in scope. I also thought that any winning team would quickly
fragment into the different team-member groups working in isolation and that it
would be impossible to extract any big picture from the outcome. As my fellow
team members will attest, I was hesitant to even join the project for these
reasons. I now, quite frankly, state that I was wrong. Our team and in fact all
the teams pulled together on what really was an immense project and the output,
as stated above, was clearly much greater than the sum of its parts. As readers
of this document will know, I have been involved in the DoD and Intelligence Agency
programs in quantum computing since their inception in 1994 (an inception I myself helped
to incept). In those nearly 20 years I have witnessed experimental physics
groups, physics theory groups, computer science and algorithm groups,
complexity theory groups, and error correction groups all working,
really, mostly in isolation from each other.
Until the QCS program I did not really have even an
intuitive sense of how all of this work would somehow fit together into the
development of a quantum computer. One of the critical successes of the QCS
program is that it developed the tools and, more importantly, the common language needed
for me (and now many others) to see the big picture. To mangle an old metaphor,
I feel like a blind man who for 20 years has been groping his way around an
elephant, only to find that a cruel trick had been played on me and that I was
not blind but just blindfolded. The QCS program has removed that blindfold for
me and I’m sure for many others. I have always been an outspoken proponent of
the government program to pursue the development of a quantum computer. But I
have always had some nagging doubts. Perhaps I had missed something. Perhaps
there is some fundamental physical reason or practical technological reason why
a large-scale universal quantum computer could never be built — as many
distinguished scientists still claim to this day. (See Chapter 15 of Aaronson’s
book for a list of such claims.)
Well after participating in the QCS program I have gone from
a somewhat hesitant promoter of quantum computing development to now being all
in. There is nothing that we have missed. The development of a large-scale
universal quantum computer is a formidable technological challenge, and it will
likely take tens of years to complete, but there is no chance it is a
technological or physical impossibility. To my mind the QCS program results have
ruled the latter out — or at least painted it into a very small corner of
improbability. The development of a large-scale universal machine is a
mathematical certainty and it is just a matter of time before we get there.
Of this I have no longer any doubt.
Acknowledgements: I would like to thank Scott Alexander from Applied Communication Sciences, our team's Principal Investigator, for comments and suggestions, and particularly for adding the bit about HACKMEM. This
blog post was supported by the Intelligence Advanced Research Projects Activity
(IARPA) via Department of Interior National Business Center contract number
D12PC00527. The U.S. Government is authorized to reproduce and distribute
reprints for Governmental purposes notwithstanding any copyright annotation
thereon. Disclaimer: The views and conclusions contained herein are those of
the authors and should not be interpreted as necessarily representing the
official policies or endorsements, either expressed or implied, of IARPA,
DoI/NBC, or the U.S. Government.
Quite a summary, Jon! I know many people thought the project was nuts, but I thought it was the most important thing around.
Exciting optimism, I'm hoping the universal quantum computer comes about sooner than later.