Thursday, October 31, 2013

The Generation of Super-Resolving Single-Photon Path-Entangled States



Wei Feng, Kebei Jiang, Michelle L.-J. Lollie, M. Suhail Zubairy, Jonathan P. Dowling

One of the big drawbacks of using N00N states in quantum lithography is the need for an N-photon absorbing resist, which has a very low cross section for large N. In this paper, we propose two protocols for generating super-resolving single-photon path-entangled states from general maximally path-entangled N00N states. We also show that both protocols generate the desired state with different probabilities depending on the type of detectors being used. Such super-resolving single-photon path-entangled states preserve high resolving power but remove the requirement of a multi-photon absorbing resist, which makes them, in principle, a perfect candidate for quantum lithography.

Friday, September 13, 2013

On the Curious Consistency of Non-Relativistic Quantum Theory with Non-Quantum Relativity Theory

For years I have puzzled over what some have called the ‘tension’ between non-relativistic quantum theory and non-quantum relativity theory. Standard quantum information theory is of the non-relativistic variety, based on ordinary quantum mechanics without any appeal to, say, relativistic quantum field theory. In run-of-the-mill quantum information theory there is nary a c, the speed of light, in sight. The term ‘relativity’ shows up only four times in the Nielsen and Chuang book, Quantum Computation and Quantum Information (a book we all call ‘Mike and Ike’ after the first names of the authors). In all four of these places it appears only as a foil against which quantum information theory is proffered. The authors point out, for example, that you cannot use quantum teleportation to send signals faster than the speed of light — but why not?

I would naively expect non-relativistic quantum theory to make predictions that outright conflict with the predictions of non-quantum relativity theory. Non-relativistic Newtonian physics, for example, makes just such conflicting (and wrong) predictions: it states that an object’s mass is conserved, that said object may always be accelerated up to arbitrarily high velocities, and that information-bearing signals may travel faster than light. This directly contradicts the confirmed predictions of special relativity, but nobody cares, because nobody expects non-relativistic Newtonian mechanics to be consistent with relativity.

And yet, over time, we have come to expect — in many venues — non-relativistic quantum information theory (and the non-relativistic quantum theory upon which it is based) to be consistent with non-quantum relativity. One example is the caveat in Mike and Ike that quantum teleportation cannot be used to send signals faster than light. The correct statement is closer to this: quantum teleportation cannot be used to send signals instantaneously. According to the non-relativistic version of quantum teleportation, in some sense the state to be teleported is transferred instantaneously, but it can only be extracted with the help of a luminal-speed communication from Alice to Bob through some classical channel.

Another example of this ‘tension’ comes to us from Nick Herbert’s publication years ago on a superluminal-signaling scheme he called FLASH. This scheme is discussed in my book, Schrödinger’s Killer App, in David Kaiser’s book, How the Hippies Saved Physics, and in an arXiv posting by Asher Peres, “How the No-Cloning Theorem Got Its Name.” The FLASH scheme, like teleportation, required two parties, Alice and Bob, to have set up shared entangled states, but, unlike teleportation, it also required a noiseless state-amplification machine. The communication scheme was not only superluminal; it was instantaneous. Many physicists, including Peres and Glauber, knew the scheme had to be wrong, and the downfall of FLASH came with the invention of the no-cloning theorem and its closely related cousin, the no-noiseless-amplification theorem. Cloning and noiseless-amplification devices violate the unitary and linear nature of quantum theory and so cannot exist. Everybody heaved a sigh of relief, but I was puzzled. Why were Glauber and Peres so sure that non-relativistic quantum theory should not contradict non-quantum relativity theory? If it had, I would have said this is of no more concern than noting that Newtonian mechanics contradicts relativity. One would not expect any non-relativistic theory to be consistent with a relativistic one and would in fact expect them to make inconsistent predictions.

Another example along these lines is Bohr’s resolution of Einstein’s photon-in-a-box paradox from the Sixth Solvay Congress in 1930. Einstein cooked up a thought experiment about weighing a box before and after a photon was allowed to escape from it. He showed that this scheme apparently violated the Heisenberg energy-time uncertainty relationship. Bohr resolved the paradox and saved the uncertainty principle by invoking the gravitational red shift, a general relativistic effect. Why on earth should general relativity have anything to do with the non-relativistic Heisenberg uncertainty principle? And yet the consistency of the latter requires the former.

A final example, which I lift from Scott Aaronson’s book, Quantum Computing Since Democritus, involves black holes. Here the idea is to resolve the black-hole information paradox. If you throw information-bearing quantum states into a black hole and they disappear forever, then this would violate unitarity. One proposed resolution is that, near the event horizon, the quantum state is somehow duplicated — in apparent violation of the no-cloning theorem — and one copy vanishes into the singularity while the other is re-emitted as Hawking radiation. (This is a concept close to a resolution proposed by Chris Adami and Greg Ver Steeg in their 2004 arXiv posting, “Classical Information Capacity of Quantum Black Holes.”)

To violate the no-cloning theorem, you could grab the copy that comes out, then rocket into the black hole and grab the other copy, and thus have two copies of the unknown quantum state. However, if you try to do this, it apparently takes so long for the first copy to be emitted that, by the time you grab on to it, the second copy has always already been gobbled up by the singularity, and the no-cloning theorem is saved. Why on earth should the non-relativistic version of the no-cloning theorem be consistent with the relativistic theory of black holes? To quote Aaronson, “So it’s funny that there are these little things that seem like they might cause a conflict with quantum mechanics … but when you really examine them, it no longer seems like they do.”

It’s not funny — it’s downright bizarre.

What this tension between non-relativistic quantum theory and non-quantum relativity theory suggests to me is that there is some ur-theory, likely a phenomenological one, which unifies non-relativistic quantum theory and non-quantum relativity theory. Now I know what you are all going to say, “Sure — it's a quantum theory of gravity!” Indeed, if we had that, I expect all this tension would go away. But for quantum theories of gravity — string theory and loop quantum gravity — the tension between relativity and quantum mechanics is down at the Planck length or up at the Planck energy. In the examples I discuss above, this tension is at ordinary length and energy scales. I don’t need physics at the Planck scale to talk about superluminal signaling, photons in boxes, or black-hole information paradoxes.

Hence I suggest that there is some intermediate unified theory between quantum gravity and what we have now and that this theory in certain limits produces non-relativistic quantum theory and non-quantum relativity theory. The best lame analogy I can come up with is the unification of electricity and magnetism within Maxwell’s equations, which are phenomenological equations derived from close observations of experiments. We know now that the electromagnetic field must be quantized — à la quantum electrodynamics — when wavelengths are short and energies large and photons are needed. But the unquantized Maxwell equations served us quite well for 100 years before we hit that point. In this lame analogy, electricity and magnetism are the analog of non-relativistic quantum theory and non-quantum relativity theory; Maxwell’s equations are the analog of the unifying (but yet unknown) ur-theory, and quantum electrodynamics is the analog of a full theory of quantum gravity.

So how do we find this phenomenological ur-theory that unifies non-relativistic quantum theory with non-quantum relativity theory? Continue to explore these tensions between the two, both in theory and in experiment. Gisin and his group have done experiments measuring the speed of the collapse of the wave function of entangled photon pairs over large distances, with detectors in different moving frames. Work along these lines should be encouraged, not disparaged.

(THU 26 SEP 2013)

Just found this comment while reading the book by Haroche and Raimond:

"The consistency between the EPR correlations and relativity is by itself also strange, in some way. Quantum laws do ultimately satisfy relativity in their field theory version, but the measurement description we have implicitly invoked to compute the EPR correlations is non-relativistic. If it had violated causality, we could have invoked the necessity to use a relativistic version of measurement theory, dealing with the proper time of detection events. We do not have to do this. Non-relativistic quantum physics is non-local in a way subtle enough not to contradict the inherently relativistic causality principle." (Italics mine.)

Haroche, Serge; Raimond, Jean-Michel (2006-08-10). Exploring the Quantum: Atoms, Cavities, and Photons (Oxford Graduate Texts) (p. 65). Oxford University Press. Kindle Edition.

Wednesday, July 31, 2013

Spontaneous parametric down-conversion photon sources are scalable in the asymptotic limit for boson-sampling

Boson-sampling has emerged as a promising avenue towards post-classical optical quantum computation, and numerous elementary demonstrations have recently been performed. Spontaneous parametric down-conversion is the mainstay for single-photon state preparation, the technique employed in most optical quantum information processing implementations to date. Here we present a simple architecture for boson-sampling based on multiplexed parametric down-conversion and demonstrate that the architecture is limited only by the post-selected detection efficiency. That is, given that detection efficiencies are sufficiently high to enable post-selection, photon-number errors in the down-converters are sufficiently low as to guarantee correct boson-sampling most of the time. Thus, we show that parametric down-conversion sources will not present a bottleneck for future boson-sampling implementations. Rather, photodetection efficiency is the limiting factor, and thus future implementations may continue to employ down-conversion sources.

For all of my colleagues who thought from my previous blog post that I was a boson sampling skeptic.....

Sunday, July 28, 2013

Sampling — Schmämpling!

In January of 2011, I went on a six-month sabbatical — my first sabbatical ever — and, as is often the case on such ventures (in order to have such sabbatical requests approved by parsimonious deans), I declared to the dean that I would write a book.

Luna Han, my editor at Taylor & Francis, had been hounding me for some years to write a textbook. Having moved to academia in 2004 (after 10 years of working for the federal government), I was not in a good position to write a textbook, as I did not have years and years of class notes that could be 'easily' up-converted into one. I did, however, have lots and lots of notes and anecdotes spanning the 15 years I was involved in the US Government program to build a quantum computer.

Hence it was with some trepidation that I called up Ms. Han and declared, "Well, I have good news and bad news!" Ever the stoic, Luna replied, "Whenever anybody says that, it is only ever bad news." I continued on, "Well, the good news is I'm done with chapter one!" Luna's temper improved slightly, "Sounds good so far...." Then I went in for the kill, "It's a popular book and not a textbook." This did not go over well, as popular books have much lower profit margins and so forth, but she told me to email her chapter one anyway and she would take a look. A few days later I got her response, "I love what I'm reading ... I certainly hope you’ll consider sending me the full [book] proposal."

The proposal was reviewed (and at times reviled) by my friends and colleagues, but the reviews were (for the most part) so positive that I went under contract with Taylor & Francis and began typing four hours a day. I shunned all refereeing, committee meetings, conference travel, and proposal reviews, and did little else than work on the book for two years. I fell so far off the grid that some of my colleagues were asking around to see if I had met some untimely end. I submitted the book on schedule in September of 2012, worked with the typesetters for the next few months, spent my Xmas break in 2012 proofreading the 450-page proofs, and then Schrödinger's Killer App: Race to Build the World's First Quantum Computer was off to press.

I then emerged in January of 2013 like Rip Van Winkle, having slept for two years after an all-night bender with some magical dwarfs; rubbing my eyes and blinking, I strolled down into my village only to discover that everybody was talking about the boson-sampling problem. My reaction was to inquire, "What the heck is the boson-sampling problem!?" I was entreated to read a 94-page preprint posted in 2010 by Scott Aaronson and Alex Arkhipov on the arXiv, but to a great extent I found this paper, written in the language of quantum computer complexity class theory, to be almost completely incomprehensible. However, I understood enough to realize that these computer scientists were claiming that us quantum physicists had missed something very important about the nature of linear optical interferometers with Fock-state (number-state) inputs. Well, I have been working on the theory of linear optical interferometers for 25 years, and this clearly now was a turf war. What the Fock did I miss? It turns out that what I had missed was precisely the Fock. (If you don't want to read that 94-page paper either, then try this two-page introduction to boson-sampling theory and experiment, written by James Franson.)

Rather than take a crash course in quantum complexity theory and then go figure out that 94-page paper, I decided to roll up my sleeves, dig into these interferometers myself in my own way, and figure out just what the heck was going on — avoiding quantum computer complexity class theory like the plague — using only elementary tools from quantum optics theory. Until the rise of the boson-sampling problem, I would attest that nearly every quantum optician on Earth, including myself, did not think there was much interesting going on in passive, linear optical interferometers, no matter what quantum state of light you put into them — Fock or not. (For a fun and flippant overview of why we all thought this way about these interferometers, see my recent lecture given at the 2013 Rochester Quantum Information and Measurement Conference; a conference where Robert Boyd came up to me and declared, "Jon! It's so good to see you! Nobody has seen you in a couple of years and people were asking if you were all right.")

It turned out that two of my undergraduates, Bryan Gard and Robert Cross, also in 2010, were working on a problem that was closely related to the boson-sampling problem (though none of us knew it at the time); they were analytically and numerically studying quantum random walks with multi-photon Fock states in a linear optical interferometer. I gave this to them as an undergraduate 'starter' problem, motivated by experiments in Jeremy O'Brien's group with two-photon Fock states. Since I did not expect anything too interesting to happen when you went from quantum random walks with one or two photons to random walks with multiple photons, I expected the undergrads to come up with a closed-form solution predicting the photon output from an arbitrary photon input.

Then I went on sabbatical and was buried in the writing of my book for two years, so I did not pay too much attention to them when they complained that they could not come up with even a numerical simulation for more than a few photons in an interferometer with only a few modes. They particularly complained that, "The problem blows up exponentially fast!" "These undergrads," I thought, "they see exponential blow-ups whenever the math gets a little hairy." Of course, as usual, my undergrads were right and I was wrong. The thing does blow up exponentially fast.

In January of 2013 we were trying to get this random walk paper published, and after endless rounds of refereeing, we finally did and it recently appeared in JOSA B as, "Quantum Random Walks with Multiple Photons." But in the final round of refereeing, in response to a toss-off comment we made in the paper about the apparent exponential growth of the problem's complexity, a referee suggested, "Were the authors to make this comparison [to the boson sampling problem], they would be in a position to comment on the computational hardness of such systems, which would be insightful." My thought, and that of Bryan and Robert, was, "What the heck is the boson sampling problem!?"

The referee cited an experimental paper out of Andrew White's group, and then I suddenly remembered that Andrew gave a lecture on this at a NASA conference in January of 2012. However, I was then so engrossed in the writing of my book that the only take-home message I got from his talk was that Andrew was experimenting with permanents, and I joked that perhaps he was experimenting with new hairdos. Suddenly things started to click, and I finally became fully aware of that daunting 94-pager by Aaronson and Arkhipov.

So Bryan and Robert and I rolled up our sleeves even farther and tackled this problem again from the point of view of counting all the resources and comparing coherent-state and squeezed-state inputs to Fock-state inputs, and sure enough, although not much interesting happens with coherent and squeezed states, everything blows up with the Fock. When I showed co-author Hwang Lee our proof that the Hilbert-space dimension blows up as a function of the number of modes and the number of photons, he retorted, "This is shocking!" But what is shocking to a quantum optical theorist is not necessarily shocking to a quantum computational theorist.
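To see the blow-up for yourself: the dimension of the Fock space for n indistinguishable photons in m modes is the "stars and bars" binomial coefficient C(n+m-1, n), which grows exponentially when photons and modes grow together. A minimal sketch (the function name is mine):

```python
from math import comb

def fock_hilbert_dim(n_photons: int, n_modes: int) -> int:
    """Number of ways to distribute n indistinguishable photons
    over m modes: the 'stars and bars' count C(n+m-1, n)."""
    return comb(n_photons + n_modes - 1, n_photons)

# One photon in two modes: |1,0> and |0,1>, so dimension 2.
print(fock_hilbert_dim(1, 2))  # 2

# n photons in n modes grows roughly like 4^n / sqrt(n):
for n in (2, 4, 8, 16):
    print(n, fock_hilbert_dim(n, n))
```

By n = 16 photons in 16 modes the dimension already exceeds 3 × 10^8, which is why the undergrads' numerical simulations choked on "only a few photons in a few modes."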

We fired off the paper to Physical Review Letters (PRL) — and the referees immediately pounced. One said that our result was not new and our proof was not nearly as good as invoking the "collapse of the polynomial hierarchy," as did Aaronson and Arkhipov. At that point I had no idea what the polynomial hierarchy even was — some computer science-y thing — and I certainly did not much care if it collapsed or not, and so we retorted, "Physical Review is a physics journal and not a computer science journal." Comments from Aaronson and Peter Rohde were much more helpful. They both pointed out that, due to the Gottesman-Knill theorem, it is now well known, in spite of Feynman's arguments to the contrary, that sometimes systems with exponentially large Hilbert spaces are still efficiently simulatable — who knew!? When Aaronson and Rohde both pointed out that fermionic linear interferometers with Fock-state inputs also have an exponentially large Hilbert space — I didn't believe it! But in spite of that blow-up, the fermionic devices are still efficiently simulatable due to special properties of matrix determinants that matrix permanents don't have. Sometimes, boys and girls, there are polynomial shortcuts through your exponentially large Hilbert space....
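The determinant/permanent distinction behind that fermion/boson split is striking, because the two are sums over the same permutations of the same matrix entries, differing by nothing but a sign. That sign is what gives the determinant a polynomial-time evaluation (Gaussian elimination), while the permanent is #P-hard (Valiant's theorem). A brute-force sketch for small matrices (naive O(n!·n), for illustration only; function names are mine):

```python
from itertools import permutations

def _perm_sum(M, signed):
    """Sum over all permutations sigma of prod_i M[i][sigma(i)],
    optionally weighted by the permutation's sign."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        term = 1
        for i, j in enumerate(sigma):
            term *= M[i][j]
        if signed:
            # sign = (-1)^(number of inversions in sigma)
            inv = sum(1 for a in range(n) for b in range(a + 1, n)
                      if sigma[a] > sigma[b])
            term *= (-1) ** inv
        total += term
    return total

def determinant(M):  # polynomial algorithms exist (e.g., LU decomposition)
    return _perm_sum(M, signed=True)

def permanent(M):    # no known polynomial algorithm; #P-hard in general
    return _perm_sum(M, signed=False)

M = [[1, 1], [1, 1]]
print(determinant(M), permanent(M))  # 0 2
```

The minus signs allow massive cancellations that row-reduction exploits; strip them away and the shortcut vanishes, which is the polynomial escape hatch fermions get and bosons don't.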

Rolling up our sleeves again (this time to our eyeballs), first I had to take a crash course on fermionic interferometers (think neutrons) and prove that the Hilbert space blows up there too. (It does.) Then we had to augment our paper and bring the argument back around to matrix permanents (nothing to do with Andrew White's hairdo) and matrix determinants. (We did.) And now our revised paper, "Classical Computers Very Likely Can Not Efficiently Simulate Multimode Linear Optical Interferometers with Arbitrary Fock-State Inputs-An Elementary Argument," sits in the limbo that awaits quantum optics papers that are not too horribly rejected from Physical Review Letters, whence you try to transfer them to Physical Review A (PRA) instead. In Catholic school I was taught that limbo was just like heaven — except you never got to see the face of God. Similarly, PRA-limbo is just like PRL-heaven — except you never get to see the face of Basbas....

But now, finally, to the point of this post. This post has a point? Finally! I think it was Einstein who once said that doing physics research is like staggering around drunk in a dark, unfamiliar, furniture-filled room, bumping over chairs and ottomans, while groping for the light switch. In this vein, having me explain to you how we do research in our group is a bit like having your Wurst vendor explain to you how he makes his sausages. However, if you want to pursue a career in sausage making, it might be best to know what you are getting into up front. After all, you want to make sure you are always getting the best of the Wurst! However, when it comes to the experiments done so far on boson sampling, it is not quite yet the best....

Now, after pondering the theory of this boson-sampling business for six months, and now knowing enough to be dangerous, I have finally decided to dive fully in and read the flurry of nearly simultaneously published recent papers claiming to implement boson sampling with three or four photons [Broome2013, Spring2012, Crespi2012, Tillmann2013, Spagnolo13]. It is clear to me now that, in order to implement boson sampling, the input photons must be in Fock states; that is the whole point. Coherent states and squeezed states and other so-called Gaussian states simply will not do, as it is well known (due to work of Sanders and Braunstein and others) that linear optical interferometers with Gaussian-state inputs are always efficiently simulatable.

The whole point of boson sampling is that the input needs to be in the form of pure photon Fock states in order to perform the sampling. That is, thinking of the interferometer as the information processor, the processing must take place on Fock states and not on Gaussian states. If you input only Gaussian states, as in the four-photon experiment of Crespi, et al., you are clearly not doing boson sampling at all. If you mix one-photon Fock states with a two-mode Gaussian state (as all the three-photon experiments do), it is not clear what you are doing, but you are certainly not implementing boson sampling as advertised. Yet all these experiments do this. I'll focus on the three-photon experiments, as they are the trickiest. (The single four-photon experiment can be discarded out of hand, as there is not a Fock state in sight.)

To do this right, one would need three distinct, heralded single-photon Fock states. Instead, all the three-photon experiments herald only one single-photon Fock state and then mix it with the output of a spontaneous parametric down-converter (SPDC) — an output which is a Gaussian state! In particular, it is not the product of two distinct single-photon Fock states that is required for boson sampling.

The output of the SPDC, in the Fock basis, is a superposition of mostly twin vacuum states, some twin single-Fock states, fewer twin double-Fock states, and so on — all summing up to a Gaussian state. The experimentalists' hope is that by post-selecting only on events where three photons (in total) are counted, this experiment is equivalent to having three distinct single-photon Fock states to begin with.

It is not. 

What matters for boson sampling is what you put into the interferometer, not so much what you do to what comes out. This post-selection process fools you into thinking that only three photons went in. That is not true. A single photon went in, mixed with a Gaussian, two-mode squeezed vacuum (of indeterminate photon number) — and a two-mode squeezed vacuum state is definitely not two photons in a pair of Fock states.
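For concreteness: in the Fock basis the two-mode squeezed vacuum reads |ψ⟩ = (1/cosh r) Σₙ tanhⁿ r |n,n⟩, so the twin-photon-number probabilities fall off geometrically but never actually vanish. A sketch of the distribution (the squeezing value r is illustrative, not taken from any particular experiment):

```python
from math import tanh, cosh

def tmsv_twin_prob(n: int, r: float) -> float:
    """Probability of finding exactly n photons in EACH mode of a
    two-mode squeezed vacuum with squeezing parameter r:
    P(n) = tanh(r)^(2n) / cosh(r)^2."""
    return tanh(r) ** (2 * n) / cosh(r) ** 2

r = 0.5  # illustrative squeezing
# Mostly twin vacuum, geometrically fewer twin single- and double-Fock terms:
for n in range(5):
    print(n, round(tmsv_twin_prob(n, r), 4))
```

The n = 0 and n ≥ 2 terms are always present, which is precisely why the post-selected state is not the twin single-photon Fock state the sampling argument requires.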

To kick in the large Hilbert space, the permanents of matrices, and the other things that make boson sampling interesting, the processor must process only Fock states. You can't just pretend at the end of the experiment that what you sent in were Fock states. Whatever these experiments are doing, it is not boson sampling as advertised. A three-photon boson-sampling experiment will require that all three of the input photons be heralded — not just one (or, in the case of the four-photon experiment, not just none).

Now, before my old friend Andrew White (in his curly new hairdo) comes flying down from Brisbane to pummel me here in Sydney, where I am visiting for a few weeks, let me say that all of these experiments were impressive tours de force carried out by some of the best quantum mechanics in the galaxy. Each and every experiment required hard work and lots of hours in the lab, and most of these experiments are at the forefront of integrated quantum photonics, which will have a plethora of applications to quantum optical computing, metrology, and imaging. And someday soon one of these extraordinary experimental groups will demonstrate boson sampling with three photons — but I venture that this day has not yet come.

And yes, yes, yes, I know why the experimentalists all do it this way — I may not know much about quantum complexity theory, but I do know a thing or two about quantum optics. The point is that with SPDC sources the probability of heralding exactly N photons scales exponentially poorly with N. That is, if they tried to do this with three really heralded single photons, the data collection would have taken months, and if they tried to do this with four really heralded single photons, it would have taken years.
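That exponential penalty is easy to quantify: if each independent SPDC source heralds a single photon in a given pulse with probability p, then all N sources herald simultaneously with probability p^N, so the expected wait between usable events grows as (1/p)^N. A back-of-the-envelope sketch (the per-pulse probability is illustrative, not taken from any of the cited experiments):

```python
def expected_trials(p_single: float, n_photons: int) -> float:
    """Mean number of pulses before all n independent sources herald
    at once, each heralding with per-pulse probability p_single."""
    return (1.0 / p_single) ** n_photons

p = 0.01  # illustrative per-pulse heralding probability
for n in (1, 2, 3, 4):
    # the wait grows exponentially with n
    print(n, expected_trials(p, n))
```

With these (made-up but not unreasonable) numbers, going from three to four heralded photons costs another factor of a hundred in collection time, which is the months-versus-years trade-off above.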

Nonetheless, while it is true that in previous experiments in other areas of quantum optics it does not really matter too much whether you have real single-photon inputs, or post-select the outputs and then pretend that you had real single-photon inputs, for boson sampling this distinction is critical. The linear optical interferometer must process only Fock input states to be in the boson-sampling regime. If, instead, it processes Gaussian states or admixtures of Gaussian states with Fock states, well then it is not carrying out boson sampling; it is doing something else — no matter how much you pretend to the contrary.

To summarize this entire diatribe in three words: What? The Fock!

Tuesday, July 23, 2013

NP or NOT NP — What is the Question?

I'm reading Scott Aaronson's book, Quantum Computing Since Democritus, which according to Amazon is frequently bought together with Schrödinger's Killer App (the second, and funniest, book on quantum computing to appear this year).

In spite of Stephen Wolfram's scathing review, and the apparent exponential growth of mind-bending computer-science acronyms, I am finding it quite enjoyable; as a simple-minded quantum physicist, I am also finding that I'm learning quite a lot and, against my own will, thinking a lot about computer complexity theory.

A particular comment by Aaronson, connecting the P vs. NP business to something in physics, struck a nerve. I first heard about this business in graduate school in the 1980s, when the mathematics students would jokingly scrawl "P=NP!" on the chalkboard, only to find the next day that somebody had added a slash so it read "P≠NP!" They all had a good laugh, and the physics students all just shook their heads.

I had to wrestle with this again in 1994 after Shor's algorithm hit the fan and, as a reviewer of quantum computing proposals for the Department of Defense, I had to have at least some idea of what all these proposals were talking about. However, I felt it was all really a bunch of non-physical mumbo jumbo about what the computer scientists thought were hard or not-hard problems, even though they could not prove much of anything.

I really did not care if it turned out that P=NP or not, and most physicists I talk to don't much care either. If it indeed turns out someday that P=NP, it would just mean that the computer scientists were not trying hard enough to find a proof, and in any case it would have little to do with physics.

However, I'm ready to not quite eat, but perhaps just reheat, my 'umble pie. The bit in Scott's book that struck a nerve was his discussion of the feeling that if P=NP, even on a quantum computer, then we would have 'God-like' powers. Again, that sounds just like hype — but here's the physics.

From the 1998 paper by Daniel Abrams and Seth Lloyd, we know that if quantum mechanics were nonlinear, with even a small nonlinearity of the Weinberg type, then we could build super-duper quantum computers to solve not only NP-complete problems but also the supposedly even harder class of #P problems, all in polynomial time on the nonlinear quantum mechanics machine.

However, the Weinberg model for nonlinear quantum mechanics would also permit superluminal communication and the violation of the second law of thermodynamics. The superluminal communication is the worst, as this would lead to a violation of Einstein causality, something that never happens in ordinary linear quantum theory. It is well known that with superluminal communication, which allows you to send messages backwards in time, you can get a computational speedup by just letting your classical computer run for an exponential amount of time and then having it transmit the answer backwards in time to you, so that the overall effect is a polynomial-time computation.

So if Nature gave us Weinberg nonlinear quantum mechanics instead of ordinary linear quantum mechanics, we could use it to solve NP-complete and #P problems in polynomial time, violate causality, and, for extra credit, violate the second law of thermodynamics. Physicists should be very concerned if we could do the latter two things, and hence, I posit along with Aaronson, that physicists should be very concerned if we could do the first two things as well.

Beware! There is no proof that the ability to solve NP-complete or #P problems implies the violation of causality or the second law. But it is suggestive that there is one resource, nonlinear quantum mechanics of the Weinberg type, that allows you to do all of these; that is, it gives us 'God-like' powers.

I suggest then that physicists who shudder at the thought of a theory that implies a violation of Einstein causality and the second law should also shudder at the thought that someday we will find a polynomial route to solving NP-complete problems using the laws of physics as we currently understand them.

Tuesday, July 16, 2013

The Riemann Conjecture

Mein Lieber Herr Riemann,
All night I will dream on,
'Bout how you deserve a lecture.
But of course I allude
To your famous and shrewd
Outstanding and unsolved conjecture.

Oh, I owe you my life,
My 3 kids and my wife,
For the proof of the Prime Number Theorem.
Your zeta function trick
Made the proof really slick,
And those primes — no more do I fear 'em.

But I just stop to think,
How I've taken (hic) to drink,
And evolved this hysterical laugh —
Because still I don't know
If ζ's roots all go
On the line Re z = 1/2!

So I don't sleep at night,
And I'm losing my sight
In search of this darn thing's solution.
As my mind starts to go
My calculations grow
In a flood of "complex" confusion.

I bought a computer;
Not any astuter,
It ran for nearly 10 years — no jive!
But still it doesn't know
If zeta's roots all go
On that line Re z = .5

Now I sit in my room —
I feel doomed in the gloom —
And entombed by mountains of paper.
Still, I pray that some night
My "ol' lightbulb" will light
With the clue that could wrap-up this caper!

— "The Riemann Conjecture," Jonathan P. Dowling, Mathematics Magazine 62 (June 1989) 197; reprinted in Gamma: Exploring Euler's Constant, by Julian Havil (Princeton University Press, 2003); see also the German-language edition, "Die Riemannsche Vermutung," in GAMMA - Eulers Konstante, Primzahlstrände und die Riemannsche Vermutung (Springer Heidelberg, 2007).

Wednesday, July 10, 2013

Postdoc, Ergo Proper Doc!

Postdoc in Quantum Information and/or applications to complexity theory, foundations, quantum gravity, quantum many-body physics, and thermodynamics

Limit of tenure: up to 3 years
Closing date: Sept 7, or until the positions are filled

The quantum information group at University College London (UCL) invites applications for two post-doctoral research positions in the groups of Fernando Brandao and Jonathan Oppenheim. We are especially interested in applications of quantum information theory to research areas such as complexity theory, foundations, quantum gravity, quantum many-body physics, and thermodynamics. Other faculty in Quantum Information Theory include Dan Browne, Simone Severini, Alessio Serafini and Sougato Bose.

The start date of the appointment is flexible, and both junior and senior research associate positions are available. The salary is £32,375-£39,132 p.a.

For any further queries on the scientific details and requirements of the post, please contact Jonathan Oppenheim (j.oppenheim[at] and/or Fernando Brandao (fgslbrandao[at]

Applications should include a full CV, a publications list, a summary of research interests and previous contributions (two pages), and the names of three referees. Applications should be emailed to, and the candidate should arrange for the three reference letters to be sent to by the closing date, with the subject line being the applicant's name.

UCL values diversity and is committed to equality of opportunity.

Thursday, July 4, 2013

Estimation of Phase and Diffusion: Combining Quantum Statistics and Classical Noise


Coherent ensembles of $N$ qubits present an advantage in quantum phase estimation over separable mixtures, but coherence decay due to classical phase diffusion reduces the overall precision. In some contexts, the strength of the diffusion may itself be the parameter of interest. We examine the estimation of both phase and diffusion in large spin systems using a novel mathematical formulation. For the first time, we show a closed-form expression for the quantum Fisher information for the estimation of a unitary parameter in a noisy environment. The optimal probe state has a non-Gaussian profile and differs also from the canonical phase state; it saturates a new tight precision bound. For noise below a critical threshold, entanglement always leads to enhanced precision, but the shot-noise limit is beaten only by a constant factor, independent of $N$. We provide upper and lower bounds on this factor, valid in the low- and high-noise regimes. Unlike for other noise types, we show that for $N \gg 1$ phase and diffusion can be measured simultaneously and optimally.
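For readers who want to play with the noiseless limit of this problem: for a pure probe state and unitary phase encoding exp(-iφG), the quantum Fisher information reduces to four times the variance of the generator G, so an entangled GHZ (N00N-like) probe reaches the Heisenberg limit N² while a separable probe sits at the shot-noise limit N. A minimal Python sketch (my own illustration of the standard textbook formula, not code from the paper):

```python
import numpy as np

def qfi_pure(state, G):
    """Quantum Fisher information of a pure state under phase encoding
    exp(-i * phi * G): F_Q = 4 * (<G^2> - <G>^2)."""
    mean = np.vdot(state, G @ state).real
    mean_sq = np.vdot(state, G @ (G @ state)).real
    return 4.0 * (mean_sq - mean ** 2)

N = 4                       # number of qubits
dim = 2 ** N
# Collective generator J_z = (1/2) * sum_k sigma_z^(k), diagonal in the
# computational basis with eigenvalue (N - 2 * popcount) / 2 per bitstring.
Jz = np.diag([(N - 2 * bin(i).count("1")) / 2 for i in range(dim)])

ghz = np.zeros(dim)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)          # (|00..0> + |11..1>) / sqrt(2)
plus = np.full(dim, 1 / np.sqrt(dim))      # separable |+>^N probe

print(qfi_pure(ghz, Jz))    # N**2 = 16: Heisenberg limit
print(qfi_pure(plus, Jz))   # N = 4: shot-noise limit
```

The phase-diffusion noise treated in the paper degrades these numbers; the point of the sketch is only the noiseless entangled-versus-separable gap.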

Wednesday, July 3, 2013

✭✭✭✭✭ Historical, Scientific, Hysterical View of Quantum Information

Second Amazon five-star review of my book, Schrödinger's Killer App, and this one is not written by my sister or any other close relative!

"Quantum computing is one of the most popular and well-funded ideas in modern physics research. This book explains, in a lively way, the ideas behind this field, its history, and colorful characters who are playing a role in making QC a reality. It starts with an overview of quantum mechanics so you can appreciate why people are going through so much trouble to make a quantum computer. Before you know it you're learning (and understanding) advanced quantum information, and how science is done in the government and academic worlds.

If you are a high school student thinking of becoming a scientist, a teacher who wants to liven up the standard curriculum, a science lover, a fan of sci-fi, considering investing in a quantum computer "company," or just love good writing, you'll want to read this book. However, if you like dry, long-winded, opaque academic writing, you're going to have to find another author.

Physicists in the field should attend a lecture by Prof. Dowling before reading this book for full effect. Knowing his personality amplifies the hilarity of his stories."

— Ellie Arroway, 29 June 2013

Friday, June 21, 2013

On The Computational Power of Linear Optics II

"Classical Computers Very Likely Can Not Efficiently Simulate Multimode Linear Optical Interferometers with Arbitrary Fock-State Inputs-An Elementary Argument," Version 3.0.

With thanks to Scott Aaronson and a referee for suggestions for improvement, we have expanded our argument, originally based on the exponential blowup of the Hilbert space for a bosonic, Fock-state interferometer, to analyze the similar exponential blowup for a fermionic, Fock-state interferometer. We then examine both interferometers using an input-output formalism that allows us to treat each as a boundary-value problem. In the limit where the particle coherence length is much larger than the interferometer, we argue, sticking very close to the physics, that in order to access these exponentially large Hilbert spaces the particles must arrive at the beamsplitters in such a way that they are indistinguishable there, and hence their multi-particle wavefunctions must be properly symmetrized everywhere in space.

In this model, then, to compute an arbitrary output state (our goal) we must simply prepare an arbitrary input state and then use the efficient matrix transfer to transform the input to the output. Therein lies the rub. For fermions there is an efficient protocol for constructing an arbitrary input state, but for bosons there is not. In other words, for a large interferometer we cannot even set up the initial conditions in the general case, much less compute the corresponding output.

For fermions the appropriate anti-symmetrization of the total input wavefunction (the product of the spatial and spin wavefunctions) may be carried out efficiently (in polynomial time) by constructing and evaluating the Slater determinant of the basis functions. For bosons the equivalent procedure would be to symmetrize the total input wavefunction using the 'Slater' permanent. (Slater never wrote down any such thing.) For bosons the most efficient known algorithm for computing the permanent is the Ryser algorithm, discovered 50 years ago, which scales exponentially in the size of the matrix.
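The determinant-versus-permanent gap is easy to see in code: the determinant of an n-by-n matrix falls to LU decomposition in O(n³) time, while Ryser's formula for the permanent, still the best known exact method, sums over all 2ⁿ - 1 nonempty column subsets. A quick Python sketch (my own illustration, not from our paper):

```python
import itertools
import numpy as np

def ryser_permanent(A):
    """Permanent via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i ( sum_{j in S} A[i, j] ).
    Exact, but O(2^n * n^2): exponential in the matrix size."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            col_sum = A[:, list(cols)].sum(axis=1)  # row sums over subset S
            total += (-1) ** r * np.prod(col_sum)
    return (-1) ** n * total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.linalg.det(A))     # determinant: 1*4 - 2*3 = -2 (polynomial time)
print(ryser_permanent(A))   # permanent:   1*4 + 2*3 = 10 (exponential time)
```

The fermionic case stays tractable because elementary row operations preserve the determinant; no analogous reduction is known for the permanent, whose exact evaluation is #P-hard.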

Thus we conclude our elementary argument that linear optical interferometers likely cannot be efficiently simulated on a classical computer. We do this using simple arguments from quantum optics and undergraduate quantum physics, without resorting to complexity-theoretic notions such as the 'collapse of the polynomial hierarchy.' The small amount of quantum computational complexity that remains in our argument can now be discussed in language that can be understood by the ordinary quantum physicist on the street (or in the gutter).

An amusing and informative lecture on these results, given this past week at the Quantum Information and Measurement Conference in Rochester, NY,  may be found here [PDF, PPT].

Friday, June 14, 2013

Entangled-Photon Holes!

Entangled-photon-hole-boosted quantum teleportation of Schrödinger-cat states

We study the teleportation of non-Gaussian, non-classical Schrödinger-cat states of light using two-mode squeezed vacuum light embedded with Franson's entangled photon holes. We discuss two figures of merit: (a) the fidelity of teleportation and (b) the maximum negativity of the Wigner function at the output. We show that the addition of entangled photon holes to two-mode squeezed vacuum light lowers the requirements on the amount of squeezing necessary to achieve any given teleportation fidelity, or to achieve negative values of the Wigner function at the output.

Sunday, June 2, 2013

What IS a Quantum Computer?

Having a delightful time in Beijing visiting the Computational Science Research Center, which last week hosted the Fifth International Workshop on Quantum Optics and New Materials — this time to celebrate the 60th birthday of my old friend and collaborator, M. Suhail Zubairy.

On the last day, Wednesday, I sat at lunch at the vegetarian table with Zubairy, Barry Sanders, Selim Shahriar, Wolfgang Schleich, Girish Agarwal, and Jörg Evers. The discussion turned to the computational complexity of linear optical interferometers (the topic of my talk that morning), then more broadly to D-Wave's machine, and then somebody (it might have been me, but I had so many vegetables, who can tell) asked each person in turn to define a quantum computer.

To us quantum opticians, the fact that there was no agreement at all came as a bit of a surprise. Shahriar held out for a universal machine capable of running Shor's algorithm. I pointed out that, as per a 1998 article by Seth Lloyd, coherence and strong projective measurements are necessary and sufficient to run Grover's algorithm with a quadratic speedup; shouldn't such a machine be called a type of quantum computer?
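For context, Grover's quadratic speedup is simple enough to verify with a toy statevector simulation: searching N items takes about (π/4)√N oracle queries, versus the ~N/2 a classical search would need on average. A minimal sketch (function name and parameters are mine, purely illustrative):

```python
import numpy as np

def grover_search(n_items, marked):
    """Toy statevector simulation of Grover search. Returns the number of
    oracle calls used (about pi/4 * sqrt(N)) and the success probability."""
    psi = np.full(n_items, 1 / np.sqrt(n_items))   # uniform superposition
    k = int(np.pi / 4 * np.sqrt(n_items))          # optimal iteration count
    for _ in range(k):
        psi[marked] *= -1                  # oracle: phase-flip the marked item
        psi = 2 * psi.mean() - psi         # diffusion: inversion about the mean
    return k, psi[marked] ** 2

calls, prob = grover_search(1024, marked=7)
print(calls, prob)   # 25 oracle calls (vs ~512 classically), success > 0.99
```

Note that the simulation uses nothing but coherent superposition and a final strong projective measurement, which is Lloyd's point.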

Shahriar rejected that as it was not universal. Then came the D-Wave discussion. If the D-Wave machine can do something useful, also exploiting quantum coherence, should it not also be classified as a type of quantum computer? Sanders seemed to side against this. I was more positive.

And so the discussion continued as the vegetables arrived in a never-ending sea of serving platters. In the end Zubairy thought it was quite surprising that we could not all agree on the definition of a quantum computer. I rejoined that people still argue over what was the first classical computer. The Babbage difference engine? The ENIAC? The Atanasoff–Berry Computer? How about that ancient Greek computer, the Antikythera Mechanism? If historians of computer science can't even agree on what is a classical computer, what hope do we have of agreeing on what is a quantum computer?

My own definition? Any computer that exploits some of the weirder features of quantum mechanics to attain a computational advantage over the best classical machine, no matter how slight, should be at least entertained as a type of quantum computer.

Friday, May 31, 2013


My book is inspiring people in Djakarta to get Schrödinger's-Killer-App-Cat Tattoos! (Cattoos!?)

I wonder if Scott Aaronson's book is inspiring people in Djibouti to get Democritus tattoos (Democritoos!?)


"Fergus is a medical physicist who worked with ionizing radiation. He is also very fascinated by the philosophy of physics, which is why he has this Schrödinger's cat tattoo inspired by 'Schrödinger's Killer App: Race to Build the World's First Quantum Computer'. Of Schrödinger, he believes "the concept of uncertainty is a very good show." As if a Schrödinger cat tattoo is just not cool enough on its own, note that it also includes the radiation warning symbol and that the box is, fittingly, one of the many impossible boxes designed by M.C. Escher."

Wednesday, May 22, 2013

Stephen Hawking Reviews my Book?

As far as I can tell, this is a review of my book by Stephen Hawking....

Quantum Radar!

Super-Resolving Quantum Radar: Coherent-State Sources with Homodyne Detection Suffice to Beat the Diffraction Limit

There has been much recent interest in quantum metrology for applications to sub-Rayleigh ranging and remote sensing, such as in quantum radar. For quantum radar, atmospheric absorption and diffraction rapidly degrade any actively transmitted quantum states of light, such as N00N states, so that in this high-loss regime the optimal strategy is to transmit coherent states of light, which suffer no worse loss than the linear Beer's law for classical radar attenuation, and which provide sensitivity at the shot-noise limit in the returned power. We show that coherent radar radiation sources, coupled with a quantum homodyne detection scheme, provide both longitudinal and angular super-resolution well below the Rayleigh diffraction limit, with sensitivity at the shot-noise limit in terms of the detected photon power. Our approach provides a template for the development of a complete super-resolving quantum radar system with currently available technology.

Thursday, May 2, 2013

Schrödinger's Killer App!

The eBook has been available for download since May 1st, and the print version comes out in two weeks.

Let the games begin!

Schrödinger's Killer App — Race to Build the World's First Quantum Computer

By Jonathan P. Dowling

To Be Published May 14th 2013 by Taylor & Francis – 453 pages

Monday, April 15, 2013

Classical Computers Very Likely Can Not Efficiently Simulate Multimode Linear Optical Interferometers

In 2010 Scott Aaronson and Alex Arkhipov posted a 95-page preprint on the quant-ph arXiv arguing that linear optical interferometers cannot be efficiently simulated on classical computers. The argument is couched in quantum computational complexity theory, which can be daunting to those not skilled in that art.

Independently, in 2011, our group was led to a similar conclusion in our attempts to predict the output of a specific linear optical interferometer we call the quantum pachinko machine. In light of the Arkhipov and Aaronson result we have gone back to investigate the complexity of this specific device, and we have a new paper that explicitly constructs the Hilbert state space of the machine and shows that its dimension grows exponentially with all the resources needed to build the device.

Our argument is only about four pages long and follows the logic of Feynman: if the Hilbert space is exponentially large, then it is unlikely that a classical computer can simulate such a machine. We don't invoke any elements of quantum complexity theory in our argument, which may come as a relief to some physicists. I present the abstract of our new paper here:

"Aaronson and Arkhipov recently used computational complexity theory to argue that classical computers very likely cannot efficiently simulate linear, multimode, quantum-optical interferometers with arbitrary Fock-state inputs [S. Aaronson and A. Arkhipov, arXiv:1011.3245]. Here we present an elementary argument that utilizes only techniques from quantum optics. We explicitly construct the Hilbert space for such an interferometer and show that its dimension scales exponentially with all the physical resources. Finally, we also show that the Schrödinger and Heisenberg pictures of quantum theory, while mathematically equivalent, are not in general computationally equivalent."

Thus such linear optical interferometers, particularly with arbitrary Fock-state inputs, are likely not efficiently simulatable on a classical computer. The blow-up in the Hilbert space is directly connected to the bosonic Fock states at the input. This comes as a bit of a surprise to us in the quantum optics community, where the conventional wisdom had been that any linear optical device is always efficiently simulatable regardless of the input. To quote one of our co-authors, Hwang Lee, "This is shocking!"
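The counting behind the blow-up is elementary: n indistinguishable photons distributed over m modes span a Fock space of dimension C(n + m - 1, n) (the stars-and-bars count), which for n = m grows roughly like 4ⁿ, up to polynomial factors. A short sketch of the count (my own illustration):

```python
from math import comb

def fock_space_dim(n_photons, m_modes):
    """Number of Fock basis states for n indistinguishable photons in
    m modes: the stars-and-bars count C(n + m - 1, n)."""
    return comb(n_photons + m_modes - 1, n_photons)

# n photons in n modes: the dimension grows roughly like 4**n.
for n in (2, 8, 16, 32):
    print(n, fock_space_dim(n, n))
```

The same count for a single photon in m modes is just m, which is why single-photon (or classical-light) interferometers stay easy to simulate while multi-photon Fock-state inputs do not.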

Hence such linear optical interferometers, with their exponentially large Hilbert space, meet Feynman's necessary criterion for a universal quantum computer. Whether it is a universal quantum computer or not is an open question.

However, large-number photonic Fock states are highly susceptible to loss and dephasing, much more so than Gaussian states, and so whatever this thing is it is not a fault tolerant quantum computing machine. Error correction of some sort will need to be deployed to show that this is useful. Nevertheless, as Arkhipov and Aaronson note, linear optics with bosonic inputs is an interesting playground in quantum information processing.

We heartily agree with their assessment in our new paper.