Tuesday, March 14, 2017

Why a hole is like a beam splitter — a general diffraction theory for multimode quantum states of light


Within the second-quantization framework, we develop a formalism for describing a spatially multimode optical field diffracted through a spatial mask and show that this process can be described as an effective interaction between various spatial modes. We demonstrate a method to calculate the quantum state of the diffracted optical field for any given quantum state of the incident field. Using numerical simulations, we also show that with single-mode squeezed-vacuum input, the prediction of our theory is in qualitative agreement with our experimental data. We also give several additional examples of how the theory works for various quantum input states that may be easily tested in the lab, including two single-mode squeezed vacua and single- and two-photon inputs, where we show the diffraction process produces two-mode squeezed vacuum, number-path entanglement, and a Hong-Ou-Mandel-like effect, analogous to a beam splitter.
Subjects: Quantum Physics (quant-ph)
Cite as: arXiv:1703.03818 [quant-ph]

Tuesday, February 28, 2017

Quantum Artificial Intelligence


I’ve been hammering on strong classical AI and the classical singularity to set the stage to compare them with quantum AI. Remember that the MadScientists asked me, “What is a quantum artificial intelligence?” Well, I don’t know. Classical AI is easy to predict because exponential curves are easy to predict. There is no knee and no singularity in the curve, and there will be no point where the classical AI suddenly becomes self-aware like in the movies. In the animal kingdom, dolphins and chimps are likely somewhat self-aware and humans, much more so. All three of our species recognize that the thing we see in a mirror is ourselves, as opposed to a non-self-aware alpha-male betta fighting fish that thinks the thing in the mirror is a different, invading betta and tries to attack itself. In the same way, as the classical AI becomes more human-like, it will display more and more signs of self-awareness over time.
Unlike the Colossus scenario, there is not a single day when the computer becomes self-aware and tries to kill us all. It becomes more and more self-aware over time. There is no sharp line. There is no such thing as a point of self-awareness in this sense. Some animals are more self-aware than others. Some computers will be more self-aware than others. The transition will be gradual.
The little old lonely lady that Turkle discusses in Alone Together has a robot pet baby seal called Paro. The seal coos and squirms while being petted and talked to. The lonely old lady pets it and talks to it and claims it responds to her emotions. Turkle claims clearly it does not. But why not? Because it is a machine and not a “real” baby seal? But what is a “real” baby seal if not a meat machine?

This is the kind of mechanical bigotry that the android Lt. Commander Data encounters in Star Trek all the time. If the strong AI hypothesis holds, then Data has a mind in every way equivalent to mine. Hofstadter makes the case in I Am a Strange Loop that self-awareness is a type of illusion generated when a sufficiently powerful computer is programmed to recursively reflect upon itself. If the strong AI hypothesis holds, and I soon will know, then electronic computers are subject to the same illusion and will be self-aware in the same sense. Sentience will not then be an illusion common only to humans but to the machines as well, and we’ll recognize this illusion in them just as we recognize it in ourselves and in our children. Self-awareness will then become a delusion shared by man and machine alike. 

A “real” baby seal is a machine! It is a meat computer. A robotic computerized baby seal sufficiently advanced in electronic processing power can show all the responses of the meat-computer seal. Why are meat computers somehow better at responding to human gestures than electronic ones? Well, that is the point of the strong AI hypothesis. They are not different fundamentally; it is all a matter of processing power. The fact that the processor is a biological neural net in the meat-seal brain and an electronic neural net in the electronic-seal brain is irrelevant. A sufficiently powerful computerized baby seal will respond in a way that is indistinguishable from the biological one. This is the robotic baby seal version of passing the Turing test. The robotic seal is not quite there yet but close enough to give comfort to a lonely old lady whose children never visit her anymore. And the robotic seal Paro does not have to be fed or let out to poop — it just has to be recharged.
 
Today, Paro Mark I is sufficiently powerful in processing power to give a strong illusion that it responds to the lady’s voice and petting. Mark V Paro may be indistinguishable from a real baby seal and will have passed the Turing test for baby seals. As the processing power continues up its Moore’s law–driven path, soon a human robot will have responses to our voice and touch that will be indistinguishable from that of a human baby, then a human child, then a human adult. There will not be one day when the AI becomes self-aware. It will happen continuously, but the path is predictable because we have Moore’s law. We can predict that this will happen in the next 50 years or so, but the key point is that the path is predictable — exponential but predictable. There is no knee and no singularity in the exponential curve. But a quantum strong AI is much less predictable. What would a quantum mind be like?

The problem is that we have no meat-based quantum mind to gauge an electronic (or spintronic or photonic or superconducting) quantum AI against. (Here, I am directly assuming that the human mind is a powerful but still classical self-reflecting computer — the strong AI hypothesis. Not all assume this, as we’ll discuss below.) Again, a quantum computer can always efficiently simulate a classical computer. Then, according to the strong AI hypothesis of Searle, it may be extended to a quantum computer thusly: The appropriately programmed quantum computer with the right inputs and outputs, running as an efficient simulator of a powerful classical computer, would thereby have a mind in exactly the same sense human beings have minds. I will call this the “strong quantum AI hypothesis,” and its truth is a direct consequence of the strong classical AI hypothesis. That is, if in the near future a powerful classical computer passes the Turing test, proving the strong AI hypothesis to be true, then even if the quantum computer does not yet exist, we can definitively say that when it does, it will, running in classical mode, have a mind in exactly the same sense that humans have minds. The real question then is what happens when you flip the switch on the powerful quantum computer and toggle it out of classical mode into full quantum mode? Well, it will still have a mind exactly in the same sense that humans have minds, but at the flip of that switch, that mind, unlike us, will now have the freedom to directly exploit the exponential largeness of Hilbert space. We meat computers and our classical strong AI computer brethren will have no such access to Hilbert space to supplement our thinking.

What happens next is a new exponential growth in processing power that, unlike Moore’s law, which takes place in physical three-dimensional space, takes place in the utter vastness of Hilbert space. How vast? A rough measure of the classical AI threshold is that it is predicted to occur (i.e., our classical computers will become self-aware) when the number of transistors and interconnects hits that of the human brain. When the day finally comes that our classical computers have far more transistors and interconnects than the human brain, and the classical computer still does not exhibit any sign of having a human-like mind, this would provide evidence against the strong AI hypothesis. As you can tell by now, I am a strong believer in not only strong AI but also the scientific method. The hypothesis — that if strong AI fails, there is something more to the human mind than classical computer processing power — must be tested, and it could prove to be false. I don’t believe it, but that needs to be tested. When will this processing-power threshold come? The human brain has approximately 100 billion neurons and 100 trillion synapses or interconnects. The Intel Tukwila, released in 2010, has approximately 1 billion transistors. This is what has the singularitarians worried. Given the exponential growth of Moore’s law, it is expected that we’ll have a classical computer with 100 billion transistors in around 20 years. The major bone of contention is the interconnects. In the human brain, particularly in the cerebral cortex, the center of higher thought, it is not just neuron count that matters but their interconnectivity. In the human brain, the number of interconnects (100 trillion) is far greater than the number of neurons (100 billion). Dividing that out means roughly that each neuron in our brain is connected to a thousand others.
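The back-of-the-envelope arithmetic above can be checked in a few lines of Python. This is a minimal sketch: the neuron, synapse, and Tukwila counts are the figures quoted in the text, while the idealized 2-year doubling period is my assumption, not a claim from the book.

```python
import math

# Figures quoted in the text above.
neurons = 100e9          # ~10^11 neurons in the human brain
synapses = 100e12        # ~10^14 synapses (interconnects)
transistors_2010 = 1e9   # Intel Tukwila (2010), ~1 billion transistors

# Each neuron is connected to roughly this many others:
links_per_neuron = synapses / neurons
print(links_per_neuron)  # -> 1000.0

# Doublings needed to climb from 10^9 to 10^11 transistors,
# assuming an idealized doubling every ~2 years:
doublings = math.log2(neurons / transistors_2010)
print(round(doublings, 1), "doublings,", round(2 * doublings, 1), "years")
# -> 6.6 doublings, 13.3 years
```

At an idealized 2-year doubling the threshold lands closer to 13 years than 20; slower real-world doubling periods stretch the estimate toward the book’s figure.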

On the other hand, most commercial electronic computer chips have limited numbers of interconnects—transistors in the Tukwila talk to just a handful of other transistors. But that is changing. IBM just announced a prototype “cognitive computing” chip, modeled after the human brain, that has many more such synaptic interconnects. Current research in neuroscience or the “science of the brain” suggests that the computing power of our brains and the emergence of consciousness come not from the neurons themselves but from the huge numbers of interconnects or synapses that connect each neuron to thousands of others. This system of neurons and their interconnects is called a neural network and the IBM chip is a type of artificial neural network. Moore’s law for classical computing is directly tied to three-dimensional space. Our computers become exponentially more powerful year by year as a direct consequence of the present nanotechnology that allows us to make the transistors exponentially smaller year by year. The smaller the transistors are, the more we can pack on a computer chip and so directly the speed and memory of the computer increases. A Moore’s law for quantum computers will also drive the placement of ever exponentially more qubits and quantum gates on a quantum chip, but the Hilbert space that goes with those qubits grows super-exponentially. The growth in processing power that accompanies the super-exponential growth of the Hilbert space should not be called Moore’s law but something else entirely. Let us call it S’mores law. 
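To see why the growth that goes with Hilbert space deserves its own name, note that the state of n qubits requires 2^n complex amplitudes to describe, so adding a single physical qubit doubles the size of the state description; if the qubit count itself then grows at a Moore’s-law pace, the Hilbert-space dimension grows doubly exponentially. A minimal Python sketch (the qubit counts chosen here are illustrative, not a hardware roadmap):

```python
# The Hilbert space of n qubits has dimension 2**n: each added physical
# qubit doubles the number of complex amplitudes in the state vector.
for n in (10, 50, 300):
    dim = 2 ** n
    print(f"{n:4d} qubits -> Hilbert-space dimension 2^{n} = {dim:.3e}")
# 300 qubits already correspond to ~2e90 amplitudes, more than the
# estimated number of atoms in the observable universe (~10^80).
```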

Assuming that Searle’s strong AI hypothesis holds, then someday we’ll have a quantum computer, running in classical mode, which has a mind just as a human has a mind. When we throw that switch to run it in full quantum mode, that classical AI will become a quantum AI. Unlike classical meat or electronic computers, the quantum AI will begin thinking in Hilbert space. My ability to extrapolate here is hindered in that we have only begun to explore Hilbert space, and it is not easy to predict, following a super-exponential increase in processing power, just what it will mean for thought. I know what a super classical AI will mean. It is just an AI that has more classical processing power than my brain. Maybe exponentially more, but it is still just more. A quantum AI, thinking great thoughts in the vastness of Hilbert space, will think in a way that is fundamentally different from my brain or any classical AI. What it will think is at this point in time impossible to predict. There are again three scenarios for the quantum AI to follow, the quantum versions of the I-Robot, Colossus, or Borg Identity. The quantum AI can, as I suggested above, deploy the quantum Turing test. It can begin asking us and our classical AI brothers and sisters questions to see if we are like it or not like it. Very soon, it will hit on “Please crack this 1024-bit public-key-encrypted message in less than a second.” Then, it will realize we are not like it at all. My brain cannot do this and no classical AI can do this. The quantum AI can do it with ease. When we and our classical machines fail the quantum Turing test, what then?

Under the quantum I-Robot scenario, we humans and our classical AI build fail-safes into the quantum AI programming to keep it from enslaving or killing us. The quantum AI will not really treat us and our classical machines any differently from each other. In this scenario, the quantum AI becomes our servant or our equal, an entity that thinks in ways wildly different from ours but with which we peacefully coexist.

Under the Colossus scenario, the quantum AI decides that all of us, classical meat and electronic AIs alike, are a threat and decides to kill us all off. It may not deliberately kill us all off, but in competition for scarce resources, it may just force us to go extinct, just as it was once proposed the humans did to the Neanderthals. But remember, we now know that the humans merged with the Neanderthals and did not displace them; all modern humans of non-African descent have Neanderthal DNA in their genes.
This gives me hope for the Borg Identity. We humans will merge with our classical AI, and once we, the classical AI, succeed in building a quantum AI, we’ll merge with that too. Our teenagers’ teenagers’ teenagers’ teenagers will do all their thinking in Hilbert space and the evolution of the human race will continue there far outside the confines of the ordinary three-dimensional space that now so confines us. After exploring 60 orders of magnitude in three-dimensional space, we will move to new explorations in hundreds of thousands or millions of orders of magnitudes in Hilbert space. What will that mean? I do not know. I sure wish I would be around to find out.
Dowling, Jonathan P. (2013-04-11). Schrödinger's Killer App: Race to Build the World's First Quantum Computer (Pages: 409–413). Taylor and Francis CRC Press.

Wednesday, February 8, 2017

Gaussian Beam-Propagation Theory for Nonlinear Optics — Featuring an Exact Treatment of Orbital Angular Momentum Transfer


We present a general Gaussian spatial-mode propagation formalism for describing the generation of higher-order multi-spatial-mode beams generated during nonlinear interactions. Furthermore, to implement the theory, we simulate orbital angular momentum transfer interactions and show how one can optimize the interaction to reduce the undesired modes. Past theoretical treatments of this problem have often been phenomenological, at best. Here we present an exact solution for the single-pass, no-cavity regime, in which the nonlinear interaction is not overly strong. We apply our theory to two experiments, with very good agreement, and give examples of several more configurations, easily tested in the laboratory.
Subjects: Optics (physics.optics)
Cite as: arXiv:1702.01095 [physics.optics]


Friday, February 3, 2017

The Future Quantum Internet Will Have “Made In China” Stamped All Over It!


From 2006 through 2010, I participated in a large, $1.5-million-a-year Quantum Computing Concept Maturation (QCCM) project in optical quantum computing that was funded by the Intelligence Advanced Research Projects Activity (IARPA), which was formerly known as the Disruptive Technology Office, which was formerly known as the Advanced Research and Development Activity (ARDA), which was formerly known as the NSA, all of which were funding agencies for the US intelligence community. The changes in names, acronyms, and, more importantly, the logos took place at a frightening pace that made it hard for the research scientists to keep up. I personally had funding for optical quantum computing from 2000 to 2010, which came under the umbrella of each of these agencies in sequence, and there were even two separate logos for IARPA in use at the same time. When the acronym IARPA showed up in 2007, all my colleagues would ask me, what the heck is IARPA? To this I would respond, it is the Defense Advanced Research Projects Agency (DARPA) for spies (see Figure 4.9).
But in any case, the photonic QCCM, led by American physicist Paul Kwiat, had collaborators that stretched from Austria (Anton Zeilinger) to Australia (physicist Andrew White). We had what we all thought were great results; in 2010 we submitted what was essentially a renewal proposal, but we were not funded, and neither was anybody else in photonic quantum computing.
In the case of photonic qubits, this dropping of optical quantum computing by IARPA was a bit hasty in my opinion. While the photonic quantum computer may be a bit of a long shot for the scalable quantum computer, all hardware platforms are a long shot, and photonics is the only technology that would allow us to build the scalable quantum Internet. There is a good analogy. In the 1970s and 1980s, there were predictions that silicon chip technology was coming to an end, and there was a great DoD-funded push to develop scalable classical optical computers. The thought was that as we put more and more circuits closer and closer together on the silicon chips, the electromagnetic cross talk between the wires and the transistors would grow without bound, limiting the number of processors on a chip. What was not foreseen was the development of good integrated circuit design rules, developed by American computer scientists Carver Mead, Lynn Conway, and others, which showed that the cross talk could be completely eliminated. But until that was understood, the funding for the competing optical-computing effort rose, ran for a while, and then collapsed in the mid-1980s, when it became clear that the Intel silicon chips were not going anywhere and that predictions of their demise were overrated. The optical classical computer program was viewed as a colossal failure, and to say you were working on optical classical computing became the kiss of death. But it was not a failure at all. The optical switches and transistors developed for the scalable optical classical computer found their way into the switches and routers and hubs for the fiber-optic-based classical Internet. The future quantum Internet will also require the manipulation of photons at the quantum level — a quantum repeater is a device for transmitting quantum information over long distances. The quantum repeater is a small, special-purpose, optical quantum computer that executes a particular error correction protocol.
The future of the quantum Internet is in photons and the short circuiting of the development of optical quantum information processors in the United States means that the future quantum Internet will have “Made in China” stamped all over it.



Figure 4.9: A composite study of the logos of ARDA throughout the ages. From 2000 to 2010, I had continuous funding for research in quantum information processing, which as far as I could tell came from one place, but for which I had to change logos five times, starting with the NSA logo on the left (2000) and ending with the second IARPA logo on the right (2010). The penultimate IARPA logo (second from the right) had a life span of only 2 weeks, and you can see that it is the logo for the Director of Central Intelligence with the letters IARPA badly and hastily photoshopped across it. In the background is a spoof of a composite map of the lands of Arda from the fictional works of J.R.R. Tolkien. (The sea monster and sailing ship are taken from ancient manuscripts and no longer subject to copyright. The map is based on “A Map of Middle Earth and the Undying Lands: A Composite Study of the Lands of ARDA,” author unknown.) (Explanation of the jokes: The Unlying Lands should be the Undying Lands in Tolkien’s works. Nimanrø should be Numenor, Mittledöd should be Middle Earth, Odinaiä is Ekkaia, and Darpagar is Belegaer. ODNI is the Office of the Director of National Intelligence, NIMA is the National Imaging and Mapping Agency [now the National Geospatial-Intelligence Agency], NRO is the National Reconnaissance Office, DARPA is the Defense Advanced Research Projects Agency [parts of which were carved out into ARDA], and DoD is the Department of Defense.)

This post is directly quoted from my book, Jonathan P. Dowling, Schrödinger's Killer App — Race to Build the World's First Quantum Computer (Taylor & Francis, 2013) pp. 171–173.

Thursday, January 19, 2017

Statement of Teaching Philosophy

From the — I actually wrote this!? — file....



Statement of Teaching Philosophy
Jonathan P. Dowling

As the physicist and famed textbook author John D. Jackson once said, “Asking an experienced teacher about his teaching philosophy is like asking a fish about his swimming philosophy — it had better be second nature!” Another quote I am fond of came from one of my own women undergraduate students, who told me, “The reason you are such a great teacher is that there is no concept too easy for you to explain!” This taught me a lot. Anybody at my level can explain the difficult concepts — that is the easy part. Apparently my gift is having the patience and wisdom to take a concept that may seem “easy” to me and parse it in a way that it resonates with each student.

At the graduate level, I view my role as instructor to be far more than that of a lecturing and grading automaton; rather, I am a role model for the graduate students to emulate as they become teachers and physicists themselves. As a student, I was able to learn how to teach by emulating the masters, and how not to teach by studying the pitfalls. Physics is a notoriously difficult subject to teach well, as the subject matter has a tendency to be delivered in a dry, stuffy atmosphere, where any creative embellishments in the delivery of the material are viewed as a distraction from the accurate conveyance of the subject matter.

In order to make the subject matter of physics come alive, I have taken it upon myself to read many biographies and histories of famous physicists and their discoveries, and I weave these stories into my presentations in such a way that the spirit of Einstein and Schrödinger may live again in the room, as we learn together about not only their successes but also their failures. Too often, physics is presented as a fait accompli, which sprang complete from the foreheads of our forefathers in its present, canned, homogenized, and pasteurized form. Young scientists, struggling with their own successes and failures in their own research, need to know it is okay to make mistakes. This lesson is one of the most important for a student to learn. The second most important point is to have fun in what you are doing and to love it. It is hard for some young people to believe that physics should be fun, so I try to bring them around by having fun with it myself in the classroom. With that, they also learn that life is too short not to do what they love most.

Once you get across the point that learning physics can be fun, and that it is fine to make mistakes, then the students are all on board and it is smooth sailing. The quiet ones open up. The bold ones become wise and thoughtful. The assignments seem less tedious, the exams less onerous. For required core graduate courses, such as quantum mechanics, this lesson needs to be learned to face the vast amount of required material and the grueling exams. For graduate electives, such as the two-semester quantum optics course I designed and executed, it is critical to get this out front so the students will find time in their busy schedules to work hard with me on a mere elective.

From an operational point of view, I mix up the traditional lecture format with either planned or unplanned in-class student presentations. One of my favorite modes of classroom instruction is to take a seat among the students and discuss with them, not lecture to them, with each taking a turn to lead the discussion or go to the board and make their point, in mathematics or prose, and to have their peers and me weigh in on that point. Once students are broken of the fear of speaking up in class, learning begins. This goes for my formal classes, and for my own research group meetings and seminars, where even the youngest undergraduate student must be made to feel free to challenge a point the professor is making — especially since the undergraduate student is often right! A recent visitor to our group, Prof. Hal Metcalf from the State University of New York, was enthusiastic about our group discussions because, unlike in other research groups he’s visited, everybody in my group has a voice — and more importantly — is not afraid to use it!

In the end, as I teach, I am myself a role model for the student, not just as an aspiring physicist and scientist, but as a member of the educated citizenry that will guide the future of our world.