In January of 2011, I went on a six-month sabbatical — my first sabbatical ever — and, as is often the case on such ventures (in order to have the sabbatical request approved by a parsimonious dean), I declared to the dean that I would write a book.
Luna Han, my editor at Taylor & Francis, had been hounding me for some years to write a textbook. Having moved to academia in 2004 (after 10 years of working for the federal government), I was not in a good position to write one, as I did not have years and years of class notes that could be 'easily' up-converted into a textbook. I did, however, have lots and lots of notes and anecdotes spanning the 15 years I was involved in the US Government program to build a quantum computer.
Hence it was with some trepidation that I called up Ms. Han and declared, "Well, I have good news and bad news!" Ever the stoic, Luna replied, "Whenever anybody says that, it is only ever bad news." I continued, "Well, the good news is I'm done with chapter one!" Luna's temper improved slightly: "Sounds good so far...." Then I went in for the kill: "It's a popular book and not a textbook." This did not go over well, as popular books have much lower profit margins and so forth, but she told me to email her chapter one anyway and she would take a look. A few days later I got her response: "I love what I'm reading ... I certainly hope you'll consider sending me the full [book] proposal."
The proposal was reviewed (and at times reviled) by my friends and colleagues, but the reviews were (for the most part) so positive that I went under contract with Taylor & Francis and began typing four hours a day. I shunned all refereeing, committee meetings, conference travel, and proposal reviews, and did little else but work on the book for two years. I fell so far off the grid that some of my colleagues were asking around to see if I had met some untimely end. I submitted the book on schedule in September of 2012, worked with the typesetters for the next few months, spent my Xmas break in 2012 proofreading the 450-page proofs, and then Schrödinger's Killer App: Race to Build the World's First Quantum Computer was off to press.
I then emerged in January of 2013 like Rip Van Winkle, having slept for two years after an all-night bender with some magical dwarfs; rubbing my eyes and blinking, I strolled down into my village only to discover that everybody was talking about the boson-sampling problem. My reaction was to inquire, "What the heck is the boson-sampling problem!?" I was entreated to read a 94-page preprint posted in 2010 by Scott Aaronson and Alex Arkhipov on the arXiv, but to a great extent I found this paper, written in the language of quantum computer complexity class theory, to be almost completely incomprehensible. However, I understood enough to realize that these computer scientists were claiming that we quantum physicists had missed something very important about the nature of linear optical interferometers with Fock-state (number-state) inputs. Well, I have been working on the theory of linear optical interferometers for 25 years, and this was now clearly a turf war. What the Fock did I miss? It turns out that what I had missed was precisely the Fock. (If you don't want to read that 94-page paper either, then try this two-page introduction to boson-sampling theory and experiment, written by James Franson.)
Rather than take a crash course in quantum complexity theory and then go figure out that 94-page paper, I decided to roll up my sleeves, dig into these interferometers myself in my own way, and figure out just what the heck was going on — avoiding quantum computer complexity class theory like the plague — using only elementary tools from quantum optics theory. Until the rise of the boson-sampling problem, I would attest that nearly every quantum optician on Earth, including myself, did not think there was much interesting going on in passive, linear optical interferometers, no matter what quantum state of light you put into them — Fock or not. (For a fun and flippant overview of why we all thought this way about these interferometers, see my recent lecture given at the 2013 Rochester Quantum Information and Measurement Conference; a conference where Robert Boyd came up to me and declared, "Jon! It's so good to see you! Nobody has seen you in a couple of years, and people were asking if you were all right.")
It turned out that two of my undergraduates, Bryan Gard and Robert Cross, were also working in 2010 on a problem closely related to the boson-sampling problem (though none of us knew it at the time): they were analytically and numerically studying quantum random walks with multi-photon Fock states in a linear optical interferometer. I gave this to them as an undergraduate 'starter' problem, motivated by experiments in Jeremy O'Brien's group with two-photon Fock states. Since I did not expect anything too interesting to happen when you went from quantum random walks with one or two photons to random walks with multiple photons, I expected the undergrads to come up with a closed-form solution predicting the photon output from an arbitrary photon input.
Then I went on sabbatical and was buried in the writing of my book for two years, and I did not pay too much attention to them when they complained that they could not come up with even a numerical simulation for more than a few photons in an interferometer with only a few modes. They particularly complained that "the problem blows up exponentially fast!" "These undergrads," I thought, "they see exponential blow-ups whenever the math gets a little hairy." Of course, as usual, my undergrads were right and I was wrong. The thing does blow up exponentially fast.
In January of 2013 we were trying to get this random-walk paper published, and after endless rounds of refereeing we finally did; it recently appeared in JOSA B as "Quantum Random Walks with Multiple Photons." But in the final round of refereeing, in response to a throwaway comment we made in the paper about the apparent exponential growth of the problem's complexity, a referee suggested, "Were the authors to make this comparison [to the boson sampling problem], they would be in a position to comment on the computational hardness of such systems, which would be insightful." My thought, and that of Bryan and Robert, was, "What the heck is the boson sampling problem!?"
The referee cited an experimental paper out of Andrew White's group, and then I suddenly remembered that Andrew had given a lecture on this at a NASA conference in January of 2012. However, I was then so engrossed in the writing of my book that the only take-home message I got from his talk was that Andrew was experimenting with permanents, and I joked that perhaps he was experimenting with new hairdos. Suddenly things started to click, and I finally became fully aware of that daunting 94-pager by Aaronson and Arkhipov.
So Bryan and Robert and I rolled up our sleeves even farther and tackled this problem again, this time from the point of view of counting all the resources and comparing coherent-state and squeezed-state inputs to Fock-state inputs. Sure enough, although not much interesting happens with coherent and squeezed states, everything blows up with the Fock. When I showed co-author Hwang Lee our proof that the Hilbert-space dimension blows up as a function of the number of modes and the number of photons, he retorted, "This is shocking!" But what is shocking to a quantum optical theorist is not necessarily shocking to a quantum computational theorist.
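(For the curious, here is a quick way to see the blow-up; the little Python sketch and the numbers below are my own illustration, not anything lifted from our paper. The number of Fock basis states for n photons distributed over m modes is the binomial coefficient C(n + m - 1, n), which already explodes once n is allowed to grow along with m.)

```python
from math import comb

def fock_space_dim(n_photons: int, n_modes: int) -> int:
    """Number of Fock basis states for n indistinguishable photons in m modes:
    the 'stars and bars' count C(n + m - 1, n)."""
    return comb(n_photons + n_modes - 1, n_photons)

# Illustrative scaling only: n photons in m = 2n modes.
for n in (2, 4, 8, 16, 32):
    print(f"n = {n:2d}, m = {2*n:2d}: dimension = {fock_space_dim(n, 2*n):,}")
```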
We fired off the paper to Physical Review Letters (PRL) — and the referees immediately pounced. One said that our result was not new and our proof was not nearly as good as invoking the "collapse of the polynomial hierarchy" as Aaronson and Arkhipov did. At that point I had no idea what the polynomial hierarchy even was — some computer science-y thing — and I certainly did not much care whether it collapsed or not, and so we retorted, "Physical Review is a physics journal and not a computer science journal." Comments from Aaronson and Peter Rohde were much more helpful. They both pointed out that, due to the Gottesman-Knill theorem, it is now well known, in spite of Feynman's arguments to the contrary, that some systems with exponentially large Hilbert spaces are still efficiently simulatable — who knew!? When Aaronson and Rohde both pointed out that fermionic linear interferometers with Fock-state inputs also have an exponentially large Hilbert space, I didn't believe it! But in spite of that blow-up, the fermionic devices are still efficiently simulatable, due to special properties of matrix determinants that matrix permanents don't have. Sometimes, boys and girls, there are polynomial shortcuts through your exponentially large Hilbert space....
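(To make the permanent-versus-determinant contrast concrete, here is a toy Python sketch of my own, nothing from the actual papers. The determinant of an n-by-n matrix is computable in polynomial time; the textbook formula for the permanent is the same sum over permutations with all the minus signs deleted, and it is precisely those missing signs that kill the known polynomial shortcuts.)

```python
import numpy as np
from itertools import permutations

def permanent_bruteforce(A):
    """Permanent by direct summation over all n! permutations (no sign factors).
    Ryser's formula reduces this to roughly 2^n operations, which is still exponential."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

A = np.random.default_rng(42).normal(size=(6, 6))
print("determinant (polynomial time):", np.linalg.det(A))
print("permanent (exponential time): ", permanent_bruteforce(A))
```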
Rolling up our sleeves again (this time to our eyeballs), first I had to take a crash course on fermionic interferometers (think neutrons) and prove that the Hilbert space blows up there too. (It does.) Then we had to augment our paper and bring the argument back around to matrix permanents (nothing to do with Andrew White's hairdo) and matrix determinants. (We did.) And now our revised paper, "Classical Computers Very Likely Can Not Efficiently Simulate Multimode Linear Optical Interferometers with Arbitrary Fock-State Inputs-An Elementary Argument," sits in the limbo that awaits quantum optics papers that are not too horribly rejected from Physical Review Letters, and so you try to transfer them to Physical Review A (PRA) instead. In Catholic school I was taught that limbo was just like heaven — except you never got to see the face of God. Similarly, PRA-limbo is just like PRL-heaven — except you never get to see the face of Basbas....
But now, finally, to the point of this post. This post has a point? Finally! I think it was Einstein who once said that doing physics research is like staggering around drunk in a dark, unfamiliar, furniture-filled room, bumping over chairs and ottomans while groping for the light switch. In this vein, having me explain to you how we do research in our group is a bit like having your Würst vendor explain to you how he makes his sausages. However, if you want to pursue a career in sausage making, it might be best to know what you are getting into up front. After all, you want to make sure you are always getting the best of the Würst! However, when it comes to the experiments done so far on boson sampling, it is not quite yet the best....
Now, after pondering the theory of this boson-sampling business for six months and knowing just enough to be dangerous, I have finally decided to dive fully in and read the flurry of nearly simultaneously published recent papers claiming to implement boson sampling with three or four photons [Broome2013, Spring2012, Crespi2012, Tillmann2013, Spagnolo13]. It is clear to me now that, in order to implement boson sampling, the input photons must be in Fock states — that is the whole point. Coherent states, squeezed states, and other so-called Gaussian states simply will not do, as it is well known (due to the work of Sanders, Braunstein, and others) that linear optical interferometers with Gaussian-state inputs are always efficiently simulatable.
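(A minimal sketch of why the Gaussian case is easy, in the standard covariance-matrix formalism; the code, the parameter values, and the hbar = 1 convention below are my own illustration. A Gaussian state is completely specified by a mean vector and a covariance matrix, and a passive linear interferometer simply rotates both by a matrix whose size is polynomial in the number of modes, so there is never a permanent in sight.)

```python
import numpy as np

def symplectic_from_unitary(U):
    """Passive interferometer (m x m unitary U on the mode operators) written as a
    2m x 2m real transformation on quadratures ordered (x_1..x_m, p_1..p_m)."""
    X, Y = U.real, U.imag
    return np.block([[X, -Y], [Y, X]])

def propagate_gaussian(mean, cov, U):
    """Push a Gaussian state (mean vector, covariance matrix) through U.
    Cost is polynomial in the number of modes: it is just matrix multiplication."""
    S = symplectic_from_unitary(U)
    return S @ mean, S @ cov @ S.T

# Example: a two-mode squeezed vacuum on modes (0, 1) of a 4-mode device, vacuum elsewhere.
m, r = 4, 0.5                                    # illustrative numbers only
cov = np.eye(2 * m) / 2                          # vacuum covariance, hbar = 1
c, s = np.cosh(2 * r) / 2, np.sinh(2 * r) / 2
cov[np.ix_([0, 1], [0, 1])] = [[c, s], [s, c]]             # x-x block of the TMSV
cov[np.ix_([m, m + 1], [m, m + 1])] = [[c, -s], [-s, c]]   # p-p block of the TMSV
rng = np.random.default_rng(7)
U = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))[0]  # random unitary
mean_out, cov_out = propagate_gaussian(np.zeros(2 * m), cov, U)
print(np.round(cov_out, 3))
```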
The whole point of boson sampling is that the input needs to be in the form of pure photon Fock states in order to perform the sampling. That is, thinking of the interferometer as the information processor, the processing must take place on Fock states and not on Gaussian states. If you input only Gaussian states, as in the four-photon experiment of Crespi et al., you are clearly not doing boson sampling at all. If you mix one-photon Fock states with a two-mode Gaussian state (as all the three-photon experiments do), it is not clear what you are doing, but you are certainly not implementing boson sampling as advertised. Yet all these experiments do this. I'll focus on the three-photon experiments, as they are the trickiest. (The single four-photon experiment can be discarded out of hand, as there is not a Fock state in sight.)
To do this right, one would need three distinct, heralded single-photon Fock states. Instead, all the three-photon experiments herald only one single-photon Fock state and then mix it with the output of a spontaneous parametric downconverter (SPDC) — an output which is a Gaussian state! In particular, it is not the product of two distinct single-photon Fock states that boson sampling requires.
The output of the SPDC, in the Fock basis, is a superposition of mostly the twin vacuum state, some twin single-photon Fock states, still fewer twin two-photon Fock states, and so on — all summing up to a Gaussian state. The experimentalists' hope is that, by post-selecting only on events where three photons (in total) are counted, the experiment becomes equivalent to having three distinct single-photon Fock states to begin with.
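(For readers who like to see the state written out, here it is in the standard textbook form; the squeezing parameter r and phase φ below are generic symbols, not values from any particular experiment.)

```latex
|\mathrm{TMSV}\rangle \;=\; \frac{1}{\cosh r}\,\sum_{n=0}^{\infty}
  \left(e^{i\phi}\tanh r\right)^{n}\,|n,n\rangle
```

The desired (1,1) term is only one branch of a superposition that also contains (0,0), (2,2), and every higher twin-Fock pair.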
It is not.
What matters for boson sampling is what you put into the interferometer, not so much what you do to what comes out. This post-selection process fools you into thinking that only three photons went in. That is not true. A single photon went in, mixed with a Gaussian two-mode squeezed vacuum (of indeterminate photon number) — and a two-mode squeezed vacuum state is definitely not two photons in a pair of Fock states.
To kick in the large Hilbert space, the permanents of matrices, and the other things that make boson sampling interesting, the processor must process only Fock states. You can't just pretend at the end of the experiment that what you sent in were Fock states. Whatever these experiments are doing, it is not boson sampling as advertised. A three-photon boson-sampling experiment will require that all three of the input photons be heralded — not just one (or, in the case of the four-photon experiment, not just none).
Now before my old friend Andrew White (in his curly new hairdo) comes flying down from Brisbane to pummel me here in Sydney, where I am visiting for a few weeks, let me say that all of these experiments were impressive tours de force carried out by some of the best quantum mechanics in the galaxy. Each and every experiment required hard work and lots of hours in the lab and most of these experiments are on the forefront of integrated quantum photonics, which will have a plethora of applications to quantum optical computing, metrology, and imaging. And someday soon one of these extraordinary experimental groups will demonstrate boson sampling with three photons — but I venture that this day has not yet come.
And yes, yes, yes, I know why the experimentalists all do it this way — I may not know much about quantum complexity theory, but I do know a thing or two about quantum optics. The point is that with SPDC sources the probability of heralding exactly N photons scales exponentially poorly with N. That is, if they had tried to do this with three truly heralded single photons, the data collection would have taken months, and with four truly heralded single photons, it would have taken years.
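(A toy calculation of my own, with made-up numbers, just to show the scaling: if each pulse yields a heralded single photon with probability p, then N independent heralds coincide with probability roughly p^N, so each additional heralded photon multiplies the data-collection time by about 1/p.)

```python
# Made-up illustrative numbers; real heralding probabilities depend on the pair-generation
# rate, coupling losses, and detector efficiency of the particular experiment.
p_herald = 1e-2   # assumed probability that a given pulse delivers one heralded photon

for n in (1, 2, 3, 4):
    slowdown = (1.0 / p_herald) ** (n - 1)
    print(f"{n} simultaneously heralded photons: about {slowdown:,.0f}x the one-photon collection time")
```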
Nonetheless, while it is true that in previous experiments in other areas of quantum optics it does not really matter too much whether you have real single-photon inputs or instead post-select the outputs and then pretend that you had real single-photon inputs, for boson sampling this distinction is critical. The linear optical interferometer must process only Fock input states to be in the boson-sampling regime. If, instead, it processes only Gaussian states, or admixtures of Gaussian states with Fock states, well then it is not carrying out boson sampling; it is doing something else — no matter how much you pretend to the contrary.
To summarize this entire diatribe in three words: What? The Fock!
(1) Yes, completely agree that the existing experiments don't demonstrate what's needed for scalable BosonSampling -- ultimately you'd want single-photon input states and no heavy reliance on postselection. But while I'm as hazy on the details of SPDC as you are on the polynomial hierarchy, for me the "acid test" of whether an n-photon experiment has *something* to do with BosonSampling is simply whether or not you can demonstrate that the transition amplitudes involve the appropriate nxn complex permanents. Do you dispute that the recent experiments did THAT (with n=3)?
(2) Sanders et al. only showed that, if you're restricted to Gaussian input states AND Gaussian measurements, then there's an efficient classical simulation. If you have Gaussian input states and photon-number measurements, then it's a big open problem (one we've thought about on and off, with little progress): our hardness arguments based on the permanent no longer go through, but we don't know any efficient classical simulation either.
(3) "Turf war"?! Do you know anyplace where any quantum optics expert ever stated in print that, with single-photon inputs and nonadaptive measurements, you can't do anything that's not classically simulable? We couldn't find anyplace where anyone had even *raised the question* in those terms -- maybe because people were trapped in the false dichotomy of "either universal for QC or else trivial", or maybe just because the people who knew what bosons were considered the polynomial hierarchy manifestly irrelevant to anything they cared about, and vice versa. :-)
Hi Scott:
As always, thanks for your comments. Firstly, given your fondness for umlauts, I hope we can at least agree on the spelling of 'schmämpling'?
;-)
(1) I think it is likely the n=3 experiments did something like what you claim they are doing. However, there is a truth-in-advertising issue here. As far as I can tell, all the (n=3) experimental papers claim they are doing kosher boson sampling with pure Fock-state inputs, and it is clear that they are not. The state in the interferometer in the three-photon experiments is properly a photon-added two-mode squeezed Gaussian state, and the entire experiment should be described in those terms and the complexity worked out. It currently is not done that way at all. I think there is likely something related to permanents going on in this case, but it is non-kosher --- halal? --- boson sampling.
(2) Agreed, there is the Gaussian measurement of Gaussian states versus photon-number measurement of Gaussian states. I'm having this exact discussion with Dominic Berry now. If all the states are purely Gaussian, such as in the n=4 experiment, then I can efficiently write down the most general input state, use matrix transfer, and efficiently compute the output state (or equivalently its Wigner function) without ever resorting to computing the permanent of any matrix. Once I have the output state in closed form, I can compute anything you would like: correlation functions, higher-order moments, detector coincidence probabilities, all in closed form (I conjecture), once again without permanents, regardless of whether your detectors are 'Gaussian' --- homodyne? --- or number resolving. If there is something computationally complex here I just don't see it. However, it is a completely different ball game if you add even a single Fock state to the Gaussian mix, as you say.
(3) How about "Türf" war? I believe from my informal polling of the quantum optics folks that indeed we were all trapped in the false dichotomy of "either universal for QC or else trivial." The trivial assumption came from a failed history of trying to construct a universal QC from only passive linear optics. After so many failures, many of which were never published, it became 'obvious' to the quantum opticians that there was nothing computationally interesting going on in a passive, linear, optical interferometer. This is why your result with Alex is so important. It has woken some of us in the quantum optics community out of our torpor. However, given the response to my talks on this topic at quantum optics conferences in the past two months, the vast majority of the quantum optics people remain either blissfully unaware of the extent and nature of your result or do not understand the result at all. I feel like a hobbit running around the forest trying to rouse the sleepy tree-like ents to the growing power of Saruman! At lunch a few weeks ago a world-class quantum optics experimentalist confessed to me that he had never heard of the permanent of a matrix before. That is what we are dealing with at the moment.
(1) Now I find myself confused about something: if the input states were 2-photon Gaussians with 1 photon added, then why would one have observed 3x3 permanents in the amplitudes? Was further postselection done on the 2 Gaussian photons, in order to make them behave effectively like 2 Fock-state photons? If so, that's consistent with what I understood. If not, then the experiments are much more of a "cheat" than I thought -- but then, again, I don't understand why the amplitudes should have been permanental at all.
(2) Well, Alex worked for some time on Gaussian input / number-resolving detector problem, and was unable to find an efficient classical simulation. You should ask him for the details, but briefly the issue is this: suppose you have an n-photon Gaussian state, and suppose you condition on (say) 1 photon being observed in the first mode. Then the reduced state of the remaining n-1 photons need no longer be Gaussian! So the calculation breaks down if you want (say) n-photon coincidence probabilities. As it turns out, there's a polynomial-time algorithm to compute the moment-generating function (something we certainly don't know in the case of Fock-state inputs), but that doesn't give you the n-photon coincidence probabilities or an efficient sampling algorithm either.
Hi Scott, if you're at home you're sure up early. Let me see if I can clarify.
(1) For the 3-photon case the unheralded state was a two-MODE Gaussian state of *indeterminate* photon number. Only the average photon number can be specified for an unheralded SPDC output. This is my primary complaint. It is not correct to state that this state had 'two Gaussian photons' --- that language is really meaningless. It is instead a superposition of (0,0), (1,1), (2,2), (3,3) ... (∞,∞) photons in pairs, with exponentially decreasing probability, that all sum up to a Gaussian state of indeterminate total photon number. And yet the introductions of these papers pretend it was a pure (1,1) state, which it is not. To this two-MODE Gaussian SPDC state of *uncertain photon number* they then add the heralded single photon that is indeed in a Fock state. The resultant is a single-photon-added Gaussian state of still *indeterminate photon number* and surely not exactly three photons. They then post-select on the events where this state collapses to exactly three photons on the output, throwing away all the higher-order terms up to (∞,∞). At no time were there *exactly* three photons on the input, and at no time were there *exactly* two photons in the SPDC output. I suspect this all mimics boson sampling, but if one counts resources fairly one finds it is indeed a cheat.
(2) I will see Alex in January and we'll hash this out. Again it is meaningless to say you have an 'n-photon Gaussian state'. Only the *mean* photon number of a Gaussian state can be specified and not the total photon number. That is a hallmark of a Gaussian state. Let's take an interferometer with only coherent states in it, perfectly respectable Gaussian states of *uncertain* total photon number but well defined mean photon number. The argument you give breaks down. Remove a single photon from a coherent Gaussian state and you retain still a coherent Gaussian state of *indeterminate* photon number. Things are more interesting if you switch to, say, a squeezed Gaussian state (still of uncertain photon number). I agree that if you remove a single photon by heralding, then this is the so-called photon-subtracted squeezed state, which for sure is no longer Gaussian, but it *still* has indeterminate photon number --- you can't say for sure even that exactly one photon is missing from it. There is even a paradoxical behavior where removing a photon from a Gaussian state causes the mean number of photons to go *up* and not down. This calculation really needs to be done entirely in a Gaussian state basis, with standard Wigner or Q-function techniques, to see really what is going on. Photon number counting arguments fail when it comes to Gaussian states.
Yes, now that I have a baby in daycare, I'm on an early schedule for almost the first time in my life!
Sorry, yes, I meant either a 2-mode Gaussian state or 2 photons in expectation -- of course the actual photon number is indeterminate. This doesn't have any effect on my point (2): yes, the moment-generating function is efficiently computable, and no, we don't know whether the distribution is efficiently samplable. I'm glad that you and Alex will have a chance to hash this out.
Regarding point (1): well, I guess the situation is a bit like Bell inequality violation, which was originally demonstrated in a very "cheating" way, but with less and less cheating over the last 30 years. It sounds like a good challenge for experimentalists to demonstrate 3-photon BosonSampling *with* postselection, but only postselection that happens before the photons enter the beamsplitter network (at which point you must have 3 single-photon Fock states).
Schwinger had it written into his contract at Harvard that he would never be asked to teach a class before 11AM. When the chair once, jokingly, asked him to teach a class at 8AM, Schwinger replied, "I'm not usually up THAT late...."
So we have converged on all points. We agree that better and more clear-cut experiments need to be performed with all-heralded single-photon input states — post-selected at the beginning, as you say — so as to be sure that what the linear optical interferometer is 'processing' is really a collection of Fock states and not some admixture of Focks and Gaussians where what it is doing is less clear.
A separate matter then is the efficiency of pure Gaussian-state processors with number-resolved detection. It is news to me, and probably to others of my kin, that an efficient method to compute the moments does not immediately imply efficient sampling. I will have to think this over, but I think I am beginning to understand the point. With nary a permanent in sight in this case, if there is something (provably) computationally complex going on here, I conjecture it is something completely different from boson sampling.
These two results indicate a renaissance in the field of linear optical interferometry, I reckon.
Quick thought, there is a way in which you get the first pair of modes "for free":
Assume there is no loss and assume we have perfect number-resolving detectors. You are then allowed to input one mode pair directly from SPDC without heralding because if that pair were to turn out to be |00> or |22> or higher then at the output you'd know you had too many or too few photons. In essence, you get to use the heralding mode in your sampling. Additional photons *must* be separately heralded because if you have more than one Gaussian pair you could end up with |2200> in place of |1111> and you couldn't distinguish these with the measurement outputs.
So, the experiments to date using 3 photons are operating under this sort of setup and taking advantage of the "free" pair. That said, clearly we do not have perfect number-resolving detectors, nor loss-free apparatuses. So, there will undoubtedly be errors and masquerading (count 3 photons but you didn't really have |11100...> input). However, this is already true with a fully heralded setup due to dark counts. The philosophy is that we do the best we can to keep these errors small and present what we have as is. (Certainly a fully heralded setup would have less errors at the tradeoff of being more difficult.) Moreover, we don't even have complexity results for BosonSampling that take practical errors into account (loss, dark counts, etc.). As far as I understand it, the approximate BosonSampling results can really only speak meaningfully about quantum detector efficiency, and even here it is an argument about asymptotically perfect detectors.
As for the complexity scaling of taking advantage of this "free" pair, clearly it carries over fine. Why? Because we only get *one* free pair and the complexity argument only cares about the asymptotic limit in which case we have one non-Fock mode pair and tons of Fock modes ;-) This exposes the dirty truth about BosonSampling: no experiment can ever say anything about the asymptotic result, approximate or otherwise. Furthermore, there is nothing interesting about individual BosonSampling instances being physically run. It's not like we have done something objectively "hard". Hardness only exists in an asymptotic sense.
If you take quantum mechanics as absolutely true, then Scott and Alex have already done the interesting part and there is nothing left to see. However, what remains an interesting experimental question is to demonstrate that quantum mechanics *is* valid in these regimes, that the probabilities are really given by permanents. This has yet to be demonstrated with large n. The recent experiments are a step down that path. Furthermore, they represent a step towards implementing the sort of systems needed for any optical quantum computation. For this sort of demonstration, the asymptotic efficiency is unimportant.
Even a fully heralded case wouldn't be any "truer". The reason being that as n scales, the heralding probability will fall off exponentially. So, without efficient, on-demand single photon generation, we can't do anything close to being considered efficient BosonSampling. In fact, even if we had magical sources and magical detectors the game wouldn't be over, because you can't really count resources experimentally. How long did it take your grad student to build and align the apparatus for each n? Could the complexity be secretly hidden in the complexity of construction? Hence again why it is meaningless to try to experimentally "prove" asymptotic results. In fact, you can even think about the "wait until success" approach of SPDC generation as a sort of complexity of construction: we have to wait around until SPDC tells us the full apparatus (including input state) has been constructed.
This post was me. Name got screwed up.
Hi Justin, thanks for your comments.
One must make a clear distinction about the output of the SPDC here. It does *not* (like some double-barreled machine gun) fire out pairs of photons with different probabilities, mostly a (0,0), sometimes much less often a (1,1), and sometimes much, much less often a (2,2) and so on all the way up to (∞,∞).
The output of the SPDC is a *quantum superposition* of all of these potentialities each with a different quantum *probability amplitude*.
My beef is that the experimentalists are treating it as if it is firing out these pairs randomly with a Poisson distribution (like the machine gun), and then they post-select when they get a 'good' pair, in this case a (1,1).
If that was what an SPDC did, then I would have no issue with these experiments as what would enter the interferometer, corresponding to such post-selections, would be a pair of (1,1) Focks, as required for boson sampling, and everything would be fine and dandy.
However the input state is not this but a two-mode-squeezed-vacuum Gaussian state with all of these pairs in superposition and indeterminate total photon number. This is then mixed with the only real Fock state in the experiment, the heralded single photon in the third mode, but this does *not* give three Fock states, as required by boson sampling, but instead a single-photon-added-two-mode-squeezed-vacuum (SPATMoSVac) state. (Borzhemoi!)
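(Writing the input state out in my own notation, with the heralded photon in mode 3, the unheralded SPDC output in modes 1 and 2, and a generic squeezing parameter r:)

```latex
|\psi_{\mathrm{in}}\rangle \;=\; |1\rangle_{3}\otimes\frac{1}{\cosh r}
  \sum_{n=0}^{\infty}\left(e^{i\phi}\tanh r\right)^{n}|n,n\rangle_{12}
```

This state has nonzero amplitude on every odd total photon number 1, 3, 5, ..., and only the post-selection at the detectors singles out the three-photon sector.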
Such a SPATMoSVac state is not Gaussian, but it still has an indeterminate number of photons in all superpositions of Focks. It certainly does not have exactly three photons in three one-photon Fock states. What they then do is process this state through the interferometer and then *collapse* it at the detectors, keeping only the events where they get exactly three photons out.
That is not the same as sending exactly three photons in.
Perhaps there is something computationally interesting about boson sampling with SPATMoSVac states but that has not yet been shown. Hence this leads to my claim that, whatever they are doing, it is not the original boson sampling protocol as advertised.
I am well aware of the nature of the SPDC output (to be fair, even this exponential mode-pair superposition is a simplification). I agree, it is not a mixture of such states but a superposition; the key point here is that post-selection makes the a posteriori situation indistinguishable from the case where it "was that way" from the beginning.
Let me try to clarify, because that sounds like trash. If you input an SPDC state into two modes of some interferometer, and then measure the output with perfect number-resolving detectors, and find a total of 2 photons, then you can treat that entire instance of the experiment as if the input state was |11>. This is the same reason that heralding works in the first place: if you detect exactly 1 photon in the idler mode then the signal mode is in a |1> state. This is also the same reason you can play funky games with "teleporting back in time" and "entanglement swapping nonexistent photons".
So, an experiment *with perfect number-resolving detectors* post-selected on detection of two photons is the same if carried out with a SPDC state as if it was carried out with two heralded single photons from separate SPDC generations. Well...actually that's a lie. The single SPDC state will have frequency entanglement between the modes, so you can't really think of the photons as independent. This will also imply that heralded photons from separate SPDC processes will not be perfectly identical/coherent with each other. But, these (valid) criticisms are separate from your dispute. For what it's worth, there is some work going on in my group right now to address these things.
Hi Justin; since I don't know your background (nor that of people following this discussion) I am trying to keep the language as simple and tutorial-like as possible. You and I may know a lot about SPDC sources but I'm hoping to also reach out to those of my readers who do not.
I understand your argument, and I am not totally against it when invoked to carry out teleportation in time and the swapping of nonexistent photons; I'm less for it when it comes to an issue of computational complexity, where resources must be carefully tabulated.
Here is my counter argument.
If I work with my Gaussian or SPATMoSVac states, I can compute the sampling probabilities for all of the four- and three-photon experiments without ever computing the permanent of a matrix. Consistent with your argument, I could switch to the number basis, and then the same computation *would* require computing the permanent of a matrix.
It is the perceived lack of an efficient classical algorithm for computing permanents that makes this all interesting for the computer scientists.
However, I contend that if I work with all *real* heralded single-photon states, then I must always compute permanents to calculate the sampling probabilities of the outputs in the experiment. In that scenario there is no 'hidden basis', like the Gaussian or SPATMoSVac states, in which the classical simulation of the experiment is computationally easier.
As these experiments scale up to large numbers of modes and photons this subtle nuance will become ever more critical. My hope is to start this discussion at this early stage to nip future misunderstandings along these lines in the bud.
Firstly, it is not being invoked for a computational complexity issue. The point I am trying to make is that none of these experiments should be thought of as "doing something really hard for the first time". Moreover, I am of the opinion that we shouldn't ever think of the BosonSampling experiments as intrinsically interesting from a complexity standpoint. There is way too much in the way of resource counting. Instead, what I propose is that these experiments are interesting because they are, and should be thought of as, demonstrations that the physics really works the way Scott and Alex needed it to work for their arguments about the computational complexity of the physics. I am pretty confident that Scott and I agree on these points, and these are the things he and I have been telling experimentalists since before the first mount hit the first optics board.
I agree with you that postselection arguments are hairy when made about complexity issues. In fact, this is why I argue that a fully-heralded SPDC source is no better from a complexity standpoint, because the heralding probability falls off exponentially. But, if you agree that postselection arguments are valid for calculating the *result* of a particular experiment, as in the cases of teleportation and friends, then you have to concede that it is valid for verifying the behavior of single-photon states in the way that these experiments have, issues of the complexity of said verification aside.
Regarding your claim about Gaussian states, something is getting lost in translation here. Because it sounds to me like you are claiming you can use Gaussian state calculations to efficiently compute the probabilities of single-photon source, single-photon detector experiments. Forget about permanents because nothing in Scott and Alex's result says anything about whether or not you have to compute permanents, and this is one of the points that often gets muddled. I think the complexity of computing permanents should be thought of as a sort of cousin of this result. The bottom line question is whether the results of a given experiment with a given source and given detectors can be efficiently sampled. "Not have to compute a permanent" to do so does not equate to "easy", and "can do so by computing permanents" does not equate to "hard".
Now, what I think you are hinting at is that the Gaussian state inputs sort of contain multiple single-photon source experiments within them, and this is why there is a suspicion that combined with single-photon detectors that they might be interesting from a complexity standpoint. But it is unclear still, and this brings up a crucial point.
Now for the ringer, because I suspect this is where we might find strong agreement on the state of all this. If as we scale this up, we continue to introduce unheralded SPDC mode pairs, I completely agree with you: we will be performing experiments whose results do not speak for the results of single photon experiments, and how interesting the results of said experiment are depends on complexity arguments for Gaussian inputs that we do not have yet. The reason being, we will no longer be able to use the detector results to postselect the single photon scenario reliably. If on the other hand we only use *one* SPDC mode pair and the rest (many) are fully-heralded pure-state single photons and our detectors are perfect AND number resolving and there is no loss and so on and so on, THEN we can say something about a single-photon experiment's results. But, as soon as we introduce more SPDC mode pairs, that goes out the window. If the experiments carry along this way with only one SPDC mode pair, then I hope we can agree that there is no new complexity argument to hash out anyway (regardless of the fact that it is irrelevant to the point of the experiments), because the thing that we are increasing asymptotically is the number of true single photons, the Gaussian resources are remaining constant.
As a quick addendum in case my early-on implication wasn't clear, an experiment with all fully-heralded SPDC sources is precisely an experiment with Gaussian states and postselection. Heralding is postselection. So the only thing that can separate the usefulness of a fully-heralded experiment from that of another postselected SPDC experiment is whether or not either can reliably postselect the scenario in which all of the inputs were true single photons. A fully-heralded one can always do so. If the other one can, then they are totally equivalent from a numbers point of view (but may differ in the efficiency of getting there). My claim is that with proper resources, 1 unheralded SPDC pair + the rest heralded *can* be used to reliably postselect the appropriate input scenario.
DeleteThis comment has been removed by the author.
Justin, I'll have to think about your claim that one unheralded SPDC pair, along with all the rest heralded, can always be used to reliably postselect the appropriate input scenario. I suspect you are correct. My complaint then is that this nuance should have been discussed in the experimental papers, and I just don't see that. In addition, in each experimental talk I've heard, the claim is that the three-photon experiment is in a computational-complexity regime of interest. I am not so ready to concede that point.
Far from my being a boson-sampling skeptic, our paper just appeared on the arXiv showing, counter to my intuition, that SPDC sources with multiplexing can be used to do boson sampling efficiently as long as the detector efficiency is high enough.
Boson sampling is dead!
Long-live boson sampling!
;-)
Sure thing. Keep in mind there are a lot of idealities I am assuming that are *not* true in practice. So in practice there will be some extra errors associated with this method, but my feeling is that they aren't *too* terrible.
Heh, it's been a long battle with the experimentalists to keep them from saying things about complexity that they shouldn't. There isn't really any meaning behind the complexity of any single instance of any problem, at least not in the typical sense.
Yep, there are still a few tricks out there to pull, and we will undoubtedly see them start creeping up in this context in the not-too-distant future.