In January of 2011, I went on a six-month sabbatical — my first sabbatical ever — and, as is often the case on such ventures (in order to have such sabbatical requests approved by parsimonious deans), I declared to the dean that I would write a book.

Luna Han, my editor at Taylor & Francis, had been hounding me for some years to write a textbook. Having moved to academia in 2004 (after 10 years of working for the federal government), I was not in a good position to write one, as I did not have years and years of class notes that could be 'easily' up-converted into a book. I did, however, have lots and lots of notes and anecdotes spanning the 15 years I was involved in the US Government program to build a quantum computer.

Hence it was with some trepidation that I called up Ms. Han and declared, "Well, I have good news and bad news!" Ever the stoic, Luna replied, "Whenever anybody says that, it is only ever bad news." I continued on, "Well, the good news is I'm done with chapter one!" Luna's temper improved slightly, "Sounds good so far...." Then I went in for the kill, "It's a popular book and not a textbook." This did not go over well, as popular books have much lower profit margins and so forth, but she told me to email her chapter one anyway and she would take a look. A few days later I got her response, "I love what I'm reading ... I certainly hope you'll consider sending me the full [book] proposal."

The proposal was reviewed (and at times reviled) by my friends and colleagues, but the reviews were (for the most part) so positive that I went under contract with Taylor & Francis and began typing four hours a day. I shunned all refereeing, committee meetings, conference travel, and proposal reviews, and did nothing much else than work on the book for two years. I fell so far off the grid that some of my colleagues were asking around to see if I had met some untimely end. I submitted the book on schedule in September of 2012, then worked with the typesetters for the next few months, spent my Xmas break in 2012 proofreading the 450-page proofs, and then *Schrödinger's Killer App: Race to Build the World's First Quantum Computer* was off to press.

I then emerged in January of 2013 like Rip Van Winkle after sleeping for two years following an all-night bender with some magical dwarfs; rubbing my eyes and blinking, I strolled down into my village only to discover that everybody was talking about the boson-sampling problem. My reaction was to inquire, "What the heck is the boson-sampling problem!?" I was entreated to read a 94-page preprint posted in 2010 by Scott Aaronson and Alex Arkhipov on the arXiv, but to a great extent I found this paper, written in the language of quantum computational complexity theory, to be *almost* completely incomprehensible. However, I understood enough to realize that these computer scientists were claiming that we quantum physicists had missed something very important about the nature of linear optical interferometers with Fock-state (number-state) inputs. Well, I have been working on the theory of linear optical interferometers for 25 years, and this clearly was now a turf war. What the Fock did I miss? It turns out that what I had missed was precisely the Fock. (If you don't want to read that 94-page paper either, then try this two-page introduction to boson sampling theory and experiment, written by James Franson.)

Rather than take a crash course in quantum complexity theory and then go figure out that 94-page paper, I decided to roll up my sleeves, dig into these interferometers myself in my own way, and figure out just what the heck was going on — avoiding quantum computer complexity class theory like the plague — using only elementary tools from quantum optics theory. Until the rise of the boson-sampling problem, I would attest that nearly every quantum optician on Earth, including myself, did not think there was much interesting going on in passive, linear optical interferometers, no matter what quantum state of light you put into them — Fock or not. (For a fun and flippant overview of why we all thought this way about these interferometers, see my recent lecture given at the 2013 Rochester Quantum Information and Measurement Conference; a conference where Robert Boyd came up to me and declared, "Jon! It's so good to see you! Nobody has seen you in a couple of years and people were asking if you were all right.")

It turned out that two of my undergraduates, Bryan Gard and Robert Cross, also in 2010, were working on a problem that was closely related to the boson-sampling problem (but none of us knew it at the time); they were analytically and numerically studying quantum random walks with multi-photon Fock states in a linear optical interferometer. I gave this to them as an undergraduate 'starter' problem, motivated by experiments in Jeremy O'Brien's group with two-photon Fock states. Since I did not expect anything too interesting to happen when you went from quantum random walks with one or two photons to random walks with multiple photons, I expected the undergrads to come up with a closed-form solution predicting the photon output from an arbitrary photon input.

Then I went on sabbatical and was buried in the writing of my book for two years, and I did not pay too much attention to them when they complained that they could not come up with even a numerical simulation for more than a few photons in an interferometer with only a few modes. They particularly complained that, "The problem blows up exponentially fast!" "These undergrads," I thought, "they see exponential blow-ups whenever the math gets a little hairy." Of course, as usual, my undergrads were right and I was wrong. The thing does blow up exponentially fast.

In January of 2013 we were trying to get this random-walk paper published, and after endless rounds of refereeing, we finally did; it recently appeared in JOSA B as "Quantum Random Walks with Multiple Photons." But in the final round of refereeing, in response to a toss-off comment we made in the paper about the apparent exponential growth of the problem's complexity, a referee suggested, "Were the authors to make this comparison [to the boson sampling problem], they would be in a position to comment on the computational hardness of such systems, which would be insightful." My thought, and that of Bryan and Robert, was, "What the heck is the boson sampling problem!?"

The referee cited an experimental paper out of Andrew White's group, and then I suddenly remembered that Andrew had given a lecture on this at a NASA conference in January of 2012. However, I was then so engrossed in the writing of my book that the only take-home message I got from his talk was that Andrew was experimenting with permanents, and I joked that perhaps he was experimenting with new hairdos. Suddenly things started to click, and I finally became fully aware of that daunting 94-pager by Aaronson and Arkhipov.

So Bryan and Robert and I rolled up our sleeves even farther and tackled this problem again from the point of view of counting all the resources and comparing coherent-state and squeezed-state inputs to Fock-state inputs. Sure enough, although not much interesting happens with coherent and squeezed states, everything blows up with the Fock. When I showed co-author Hwang Lee our proof that the Hilbert-space dimension blows up as a function of the number of modes and number of photons, he retorted, "This is shocking!" But what is shocking to a quantum optical theorist is not necessarily shocking to a quantum computational theorist.
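The counting behind that blow-up can be sketched in a few lines of Python (my own illustration, not taken from our paper): the number of ways to distribute n indistinguishable photons over m modes is the stars-and-bars binomial C(n + m - 1, n), which explodes when the photon number scales with the mode number.

```python
from math import comb

def fock_hilbert_dim(n_photons: int, n_modes: int) -> int:
    """Dimension of the n-photon sector of an m-mode interferometer:
    the number of ways to distribute n indistinguishable photons over
    m modes, C(n + m - 1, n), by the stars-and-bars argument."""
    return comb(n_photons + n_modes - 1, n_photons)

# Let the photon number scale with the mode number and watch it explode.
for n in (2, 4, 8, 16):
    print(f"{n} photons in {n} modes: dimension {fock_hilbert_dim(n, n)}")
```

The growth is exponential in the photon number, which is exactly the blow-up the undergrads ran into in their numerical simulations.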

We fired off the paper to *Physical Review Letters* (PRL) — and the referees immediately pounced. One said that our result was not new and that our proof was not nearly as good as invoking the "collapse of the polynomial hierarchy," as did Aaronson and Arkhipov. At that point I had no idea what the polynomial hierarchy even was — some computer science-y thing — and I certainly did not much care if it collapsed or not, and so we retorted, "*Physical Review* is a physics journal and not a computer science journal." Comments from Aaronson and Peter Rohde were much more helpful. They both pointed out that, due to the Gottesman-Knill theorem, it is now well known, in spite of Feynman's arguments to the contrary, that sometimes systems with exponentially large Hilbert spaces are still efficiently simulatable — who knew!? When Aaronson and Rohde both pointed out that *fermionic* linear interferometers with Fock-state inputs also have an exponentially large Hilbert space — I didn't believe it! But in spite of that blow-up, the fermionic devices are still efficiently simulatable due to special properties of matrix determinants that matrix permanents don't have. Sometimes, boys and girls, there are polynomial shortcuts through your exponentially large Hilbert space....
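That determinant shortcut is concrete enough to demonstrate (a sketch of my own, using only the standard definitions): Gaussian elimination evaluates an n × n determinant in O(n³) arithmetic steps, while the best known general-purpose algorithm for the permanent, Ryser's formula, still needs O(2ⁿn²) steps.

```python
from itertools import combinations

def permanent(a):
    """Matrix permanent via Ryser's inclusion-exclusion formula:
    O(2^n * n^2) work; no polynomial-time algorithm is known."""
    n = len(a)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

def determinant(a):
    """Determinant via Gaussian elimination with partial pivoting:
    O(n^3) work -- this is the fermions' polynomial shortcut."""
    m = [row[:] for row in a]
    n, det = len(m), 1.0
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[pivot][i]) < 1e-12:
            return 0.0
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            det = -det
        det *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return det
```

The two functions differ only in which signed sum over the matrix entries they compute, yet row-reduction tricks cancel the determinant's exponential sum down to cubic work, and nothing analogous is known for the permanent.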

Rolling up our sleeves again (this time to our eyeballs), first I had to take a crash course on fermionic interferometers (think neutrons) and prove the Hilbert space blows up there too. (It does.) Then we had to augment our paper and bring the argument back around to matrix permanents (nothing to do with Andrew White's hairdo) and matrix determinants. (We did.) And now our revised paper, "Classical Computers Very Likely Can Not Efficiently Simulate Multimode Linear Optical Interferometers with Arbitrary Fock-State Inputs — An Elementary Argument," sits in the limbo that awaits quantum optics papers that are not too horribly rejected from *Physical Review Letters*, and so you try to transfer them to *Physical Review A* (PRA) instead. In Catholic school I was taught that limbo was just like heaven — except you never got to see the face of God. Similarly, PRA-limbo is just like PRL-heaven — except you never get to see the face of Basbas....

But now, finally, to the point of this post. This post has a point? Finally! I think it was Einstein who once said that doing physics research is like staggering around drunk in a dark, unfamiliar, furniture-filled room, bumping over chairs and ottomans, while groping for the light switch. In this vein, having me explain to you how we do research in our group is a bit like having your *Wurst* vendor explain to you how he makes his sausages. However, if you want to pursue a career in sausage making, it might be best to know what you are getting into up front. After all, you want to make sure you are always getting the best of the *Wurst*! However, when it comes to the experiments done so far on boson sampling, it is not quite yet the best....

Now, after pondering the theory of this boson-sampling business for six months and now knowing enough to be dangerous, I have finally decided to dive fully in and read the flurry of nearly simultaneously published recent papers claiming to implement boson sampling with three or four photons [Broome2013, Spring2012, Crespi2012, Tillmann2013, Spagnolo13]. It is clear to me now that, in order to implement boson sampling, *the input photons must be in Fock states* — *that is the whole point.* Coherent states and squeezed states and other so-called Gaussian states simply will not do, as it is well known (due to the work of Sanders and Braunstein and others) that linear optical interferometers with Gaussian-state inputs are always efficiently simulatable.
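To see why Gaussian inputs are classically easy, consider the simplest case (a sketch of my own, not drawn from the papers cited above): a coherent-state input with amplitudes α exits as a coherent state with amplitudes Uα, so simulating the entire m-mode interferometer is a single matrix-vector multiply, O(m²) work with no exponential blow-up anywhere.

```python
def propagate_coherent(u, alpha):
    """Coherent-state amplitudes transform classically under a linear
    interferometer with unitary matrix u: beta = u @ alpha."""
    m = len(alpha)
    return [sum(u[i][j] * alpha[j] for j in range(m)) for i in range(m)]

# A 50:50 beam splitter acting on one coherent state and vacuum:
# it splits the amplitude, and the output is again a product of
# coherent states -- nothing a classical computer can't track.
s = 2 ** -0.5
bs = [[s, s], [s, -s]]
beta = propagate_coherent(bs, [1.0, 0.0])
```

Compare this single matrix multiply with the exponentially large Fock-state bookkeeping of the previous sections; the whole boson-sampling story lives in that gap.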

The whole point of boson sampling is that the input needs to be in the form of pure photon Fock states in order to perform the sampling. That is, thinking of the interferometer as the information processor, the processing must take place on Fock states and not on Gaussian states. If you input *only* Gaussian states, as in the four-photon experiment of Crespi *et al.*, you are clearly not doing boson sampling at all. If you mix one-photon Fock states with a two-mode Gaussian state (as all the three-photon experiments do), it is not clear what you are doing, but you are certainly not implementing boson sampling as advertised. Yet all these experiments do this. I'll focus on the three-photon experiments, as they are the most tricky. (The single four-photon experiment can be discarded out of hand, as there is not a Fock state in sight.)

To do this right, one would need three distinct, heralded single-photon Fock states. Instead, all the three-photon experiments herald only one single-photon Fock state and then mix it with the output of a spontaneous parametric downconverter (SPDC) — *an output which is a Gaussian state!* In particular, it is *not* the product of two distinct single-photon Fock states that is required for boson sampling.

The output of the SPDC, in the Fock basis, is a superposition of mostly the twin vacuum state, some twin single-photon Fock states, fewer twin double-Fock states, and so on — all summing up to a Gaussian state. The experimentalists' hope is that, by post-selecting only on events where three photons (in total) are counted, the experiment is equivalent to having three distinct single-photon Fock states to begin with.

*It is not.*
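That Fock-basis expansion is easy to write down explicitly (a sketch assuming the textbook two-mode squeezed vacuum with squeezing parameter r): the probability of finding exactly n photon pairs is tanh²ⁿ(r)/cosh²(r), so the vacuum term always dominates and the higher pair numbers never quite vanish.

```python
from math import cosh, tanh

def pair_probability(n: int, r: float) -> float:
    """Probability of exactly n photon pairs in the two-mode squeezed
    vacuum emitted by an SPDC source with squeezing parameter r."""
    return tanh(r) ** (2 * n) / cosh(r) ** 2

# Mostly vacuum, some single pairs, fewer double pairs, and so on:
probs = [pair_probability(n, 0.5) for n in range(4)]
```

Post-selecting on a total of three detected photons picks out particular terms of this superposition after the interferometer has acted, which is not the same thing as feeding in the n = 1 term alone.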
What matters for boson sampling is what you put *in* to the interferometer, and not so much what you do to what comes *out.* This post-selection process fools you into thinking that only three photons went in. *That is not true.* A single photon went in, mixed with a Gaussian, two-mode squeezed vacuum (of indeterminate photon number) — and a two-mode squeezed vacuum state is definitely not two photons in a pair of Fock states.

To kick in the large Hilbert space and the permanents of matrices and the things that make boson sampling interesting, the processor must process *only* Fock states. You can't just pretend at the end of the experiment that what you sent in were Fock states. Whatever these experiments are doing, it is not boson sampling as advertised. A three-photon boson-sampling experiment will require that all three of the input photons be heralded — not just one (or, in the case of the four-photon experiment, not just *none*).

Now, before my old friend Andrew White (in his curly new hairdo) comes flying down from Brisbane to pummel me here in Sydney, where I am visiting for a few weeks, let me say that all of these experiments were impressive *tours de force* carried out by some of the best quantum mechanics in the galaxy. Each and every experiment required hard work and lots of hours in the lab, and most of these experiments are on the forefront of integrated quantum photonics, which will have a plethora of applications to quantum optical computing, metrology, and imaging. And someday soon one of these extraordinary experimental groups will demonstrate boson sampling with three photons — but I venture that this day has not yet come.

And yes, yes, yes, I know why all the experimentalists do it this way — I may not know much about quantum complexity theory, but I do know a thing or two about quantum optics. The point is that, with the SPDC sources, the probability of heralding exactly *N* photons scales exponentially poorly with *N*. That is, if they tried to do this with three *really* heralded single photons, the data collection would have taken months, and if they tried to do this with four *really* heralded single photons, it would have taken years.
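The scaling is brutal even with made-up but plausible numbers (everything below is an illustrative assumption of mine, not data from any of the papers): if each source independently heralds a photon with probability p per pulse, an N-fold coincidence shows up only once every p⁻ᴺ pulses.

```python
def seconds_per_coincidence(n_sources: int, herald_prob: float,
                            pulse_rate_hz: float) -> float:
    """Expected time between N-fold heralding coincidences, assuming each
    of n_sources independently heralds one photon per pulse with
    probability herald_prob."""
    return 1.0 / (pulse_rate_hz * herald_prob ** n_sources)

# Illustrative numbers only: p = 0.001 per pulse at a 1 MHz pulse rate.
t3 = seconds_per_coincidence(3, 1e-3, 1e6)  # about 1,000 s per three-fold event
t4 = seconds_per_coincidence(4, 1e-3, 1e6)  # about 1,000,000 s -- over a week per event
```

Each extra heralded photon multiplies the wait by 1/p, which is why the experiments lean on post-selection instead.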

Nonetheless, while it is true that in previous experiments in other areas of quantum optics it does not really matter too much whether you have *real* single-photon inputs or post-select the outputs and then pretend that you had real single-photon inputs, for boson sampling *this distinction is critical*. The linear optical interferometer must process *only* Fock input states to be in the boson-sampling regime. If, instead, it processes only Gaussian states, or admixtures of Gaussian states with Fock states, well then it is not carrying out boson sampling; it is doing something else — no matter how much you pretend to the contrary.

To summarize this entire diatribe in three words:

*What? The Fock!*