Category Archives: Common knowledge

Dealing with basic topics in quantum physics: ideas on how to present them, frequent misunderstandings…

Collapse models collapse in my esteem

In preparing my teaching for the coming semester, I was led to consider the possibility of introducing the students to collapse models. From afar, I was keen to adopt a moderate stance like “yeah, this is not standard stuff, but it’s intriguing, and it’s worth knowing”. (un)Fortunately, while in my research I fall every now and then into the sin described in my previous post, when it comes to teaching I am really incapable of regurgitating material from a book or a review article. So I spent some time thinking about how I would present collapse models to an audience who will have already studied Bell’s theorem in its device-independent approach (lecture notes available here). And I came to the conclusion that — I probably won’t present them.

Let us take one step back. The desire for collapse is triggered by the quantum description of setups like the double slit experiment. Each electron produces a very sharp dot on the screen, as one would expect from a particle. However, after detecting many electrons, the overall distribution of dots is an interference pattern, like the one expected for a wave. These are the facts. The rest depends on how you narrate the story. In the most frequently encountered narrative, the electron is delocalized like a wave before hitting the screen, then it “collapses” to a given location upon hitting the screen. A collapse model is a model that aims at describing how this transition happens.

Some very smart people, rigorously trained in quantum thinking since the cradle, realize immediately that such a narrative is fishy, denounce it as such and ask us to move on. Less smart and/or less rigorously trained people, like me, need more evidence to be convinced. While preparing my teaching, I suddenly collected such evidence for myself. And now I am trying to share it with you.

So, let’s take my starting point and that of my future students: we know Bell inequalities. We know in particular that any classical mechanism aimed at reproducing quantum correlations must be non-local. “Wow, cute: non-locality!” Well, not so cute. For one, the hypothetical mechanism must be infinitely rigid in space-time, or in other words, it must propagate information at infinite speed (yes, infinite, not “just” faster-than-light). For two, the predictions of quantum physics are more restrained than those of the most general non-local theory (even under the so-called “no-observable-signaling” assumption): so, if you toy around with a non-local mechanism, you must further constrain it ad hoc in order to recover the observations. In other words, not only does a non-local mechanism bring no additional predictive power: it must be fabricated so as to match the observations, which we continue predicting using quantum theory. Really, not so cute.
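To put a number on that last claim, here is a minimal sketch (my own illustration, not taken from any paper; the function names and conventions are mine): it enumerates all local deterministic strategies for the CHSH scenario and compares the best value they reach with the quantum (Tsirelson) bound and with the value reached by the most general no-signaling box.

    # Compare CHSH values: local deterministic strategies vs quantum vs a general no-signaling box.
    import itertools, math

    def chsh(E):
        # E[(x, y)] is the correlator <a*b> for settings x, y in {0, 1}
        return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

    # Local deterministic strategies: each side outputs +/-1 as a fixed function of its own setting only.
    best_local = max(
        chsh({(x, y): a[x] * b[y] for x in (0, 1) for y in (0, 1)})
        for a in itertools.product((-1, 1), repeat=2)
        for b in itertools.product((-1, 1), repeat=2)
    )

    quantum = 2 * math.sqrt(2)  # Tsirelson bound, reached with suitable measurements on a singlet
    pr_box = chsh({(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): -1})  # the "PR box": no-signaling, but not quantum

    print(best_local, quantum, pr_box)  # 2, 2.828..., 4

Quantum correlations sit strictly between the local bound (2) and the no-signaling maximum (4): whatever non-local mechanism you postulate must be trimmed down by hand so that it never exceeds 2*sqrt(2).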

Back to collapse now. A collapse model worthy of its name would certainly be applicable beyond the example of localization in the double-slit experiment. Specifically, take a Bell experiment: two photons are prepared in an entangled state, so the polarization of each one is undetermined. Upon measurement, one is found with horizontal polarization (H), the other with right circular polarization (R). This is also a case of “collapse”, where something got determined that was previously undetermined. So the collapse model should describe it too.

Now it’s time to be more precise: what is your aim exactly in constructing a collapse model? Here are two options:

  • You want a deterministic process: something that explains that in this run of the experiment, given whatever variable in the measurement apparatus, the first photon would necessarily collapse into H; and the second photon would necessarily collapse into R. This would certainly be a very pleasant complement to quantum physics for a classical mind. But Bell’s theorem is clear: the “whatever variable” that triggers such a collapse must be unpleasantly non-local as defined above. Are you ready to buy it? Then I have infinitely many collapse models ready for you. But think twice: are you really making physics more understandable by choosing this path?
  • You want a stochastic description: here, I am a bit at a loss as to what this wish amounts to. If by “stochastic” one means “classically stochastic”, we are back to the previous case. In fact, Bell’s theorem does not apply only to deterministic models, but also to classically stochastic ones (i.e. all those where the stochastic element can be attributed to ignorance; mathematically, those that can be described as convex combinations of deterministic models; the sketch right after this list makes this explicit). If by “stochastic” one means “any form of mathematical model with some stochastic element” — well, then quantum mechanics is there, and there does not seem to be any need to complement it with a collapse model.
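Here is the promised sketch (again my own illustration, with the same conventions as the previous snippet): the CHSH value of a convex mixture of local deterministic strategies is just the weighted average of the individual values, so no amount of classical randomness lifts it above 2.

    # Convex mixtures of local deterministic strategies never beat the deterministic CHSH bound of 2.
    import itertools, random

    def chsh_det(a, b):
        # a, b are tuples of +/-1 outputs, indexed by the local setting
        return a[0]*b[0] + a[0]*b[1] + a[1]*b[0] - a[1]*b[1]

    strategies = [(a, b) for a in itertools.product((-1, 1), repeat=2)
                         for b in itertools.product((-1, 1), repeat=2)]

    for _ in range(5):
        weights = [random.random() for _ in strategies]
        total = sum(weights)
        value = sum(w / total * chsh_det(a, b) for w, (a, b) in zip(weights, strategies))
        print(round(value, 3))  # always between -2 and 2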

In a nutshell, it seems to me that collapse models were maybe a legitimate quest at a time when “localization” was presented as the fundamental non-classical feature of quantum physics (the very smart fellows mentioned above will tell you that there has never been such a time for them, but again, this post is for normal people like me). Now we have Bell’s theorem and the corresponding experiments. You don’t need to make Bell’s theorem your new foundational cornerstone, if you don’t want to; just take it as one of the many discoveries made in the 20th century thanks to quantum physics. In the light of this discovery, the fog of collapse models, which could be entertained for some time, seems to be dissipating, leaving little trace behind.

P.S. This ends up being a “negative” post: I criticize collapse models without proposing my own positive solution. At least, I know that there is one path that is not worth exploring. I am leaving now for three weeks of holidays and maybe I’ll find time to explore some other path (though, most probably, I won’t think about physics at all).


Measuring uncertainty relations

In the space of two weeks, two works have appeared in Nature Physics about measuring uncertainty relations. In the first, an experiment is actually performed to test (and, needless to say, verify) the validity of an uncertainty relation which applies to more situations than the one originally devised by Heisenberg. In the second, it is proposed that the techniques of quantum optics could be used to probe modifications of the usual uncertainty relation due to gravity. Now, to finally have a tiny bit of evidence for quantum gravity: that would really be a breakthrough!

Faithful to my principle of not doing “refereeing on demand”, this is not an unrequested referee report: in fact, I have only browsed those papers, certainly not in enough depth to make judgments. The authors are serious so, by default, I trust them on all the technicalities. The question that I want to raise is: what claims can be made from an uncertainty relation?

An uncertainty relation looks like this:

[something related to the statistics of measurements, typically variances or errors] >= [a number that can be computed from the theory]

which has to be read as: if the right-hand side is larger than 0, then there MUST be some error, or some variance, or some other form of “uncertainty” or “indeterminacy”. Let me write the relation above as D >= C for shorthand.
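For concreteness, the best-known relation of this form is Robertson’s inequality (standard textbook material, not specific to either paper):

Delta(A) * Delta(B) >= (1/2) |<[A,B]>|

which, for position and momentum with [x,p] = i*hbar, becomes the familiar Delta(x) * Delta(p) >= hbar/2. In the shorthand D >= C, the product of standard deviations is D and the commutator term is C.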

Now, let’s see what a bad measurement can do for you. A bad measurement may introduce more uncertainties than are due to quantum physics. In other words, one may find D(measured)=C+B, where B is the additional contribution of the bad measurement. It may be the case that your devices cannot be improved, and so you can’t remove B. Now, the second paper proposes an experiment whose goal is precisely to show that D(measured)=C+G, where G is a correction due to gravity. Obviously, much more than the mere observation of the uncertainty relation will be needed if anyone is to believe their claim: they will really have to argue that there is no way to remove G, and that this is not because their devices are performing poorly. The problem is that there is always a way of removing G: a bad measurement can do it for you!

Indeed, a bad measurement may also violate the uncertainty relation. Let me give an extreme example: suppose that you forget to turn on the powermeter that makes the measurement. The result of position measurement will be systematically x=0, no error, no variance. Similarly, the result of momentum measurement will be systematically p=0, no error, no variance. In this situation, D(measured)=0. Of course, nobody would call that a “measurement”, but hey, that may well be “what you observe in the lab”. To be less trivial, suppose that the needle of your powermeter has become a bit stiff, rusty or whatever: the scale may be uncalibrated and you may easily observe D(measured)<C.
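Here is a toy simulation of that extreme example (my own, with made-up numbers, just to make the point graphic): honest sampling of a minimum-uncertainty Gaussian state saturates Delta(x)*Delta(p) >= hbar/2, while the switched-off powermeter happily “violates” it.

    # Toy illustration: honest sampling of a minimum-uncertainty state vs a meter that always reads 0.
    import numpy as np

    hbar = 1.0
    sigma_x = 1.0                   # position spread of a Gaussian wavepacket
    sigma_p = hbar / (2 * sigma_x)  # corresponding momentum spread (minimum-uncertainty state)

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, sigma_x, 100_000)  # outcomes of position measurements
    p = rng.normal(0.0, sigma_p, 100_000)  # outcomes of momentum measurements (separate runs)
    print(x.std() * p.std(), ">=", hbar / 2)  # ~0.5: the relation is (just) satisfied

    x_off = np.zeros(100_000)  # powermeter left off: every reading is 0
    p_off = np.zeros(100_000)
    print(x_off.std() * p_off.std(), "<", hbar / 2)  # 0: an apparent "violation" from a bad measurement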

So, a bad measurement can affect the uncertainty relation both ways, pushing D(measured) either above or below the theoretical bound C.

Now, there are reasonable ways of getting around these arguments. For instance, by checking functional relations: don’t measure only one value, but several values, in different configurations. If the results match what you expect from quantum theory, a conspiracy becomes highly improbable, and indirectly this hints that your measurement was not bad after all. This is the case, for instance, of Fig. 5a of the first paper mentioned above.

Still, I am left wondering if the tool of the uncertainty relation is at all needed, since by itself it constitutes very little evidence. Let me ask it this way: why, having collected enough statistics for a claim, should one process the information into an “uncertainty relation”? The information was already there, and probably much more of it than gets finally squeezed into those variances or errors. OK, maybe it’s just the right buzzword to get your serious science into Nature Physics: after all, “generalized uncertainty relation” will appeal to journalists much more than “a rigorous study of the observed data”.

Advances in foundations?

Yesterday I attended a talk by Daniel Terno. It was about one of his recent works, re-examining the so-called delayed-choice experiment and its implications. It was a well-delivered talk with an artistic touch, as usual for the speaker. Berge Englert, who was in the audience, made a few historical observations of which I was not aware: notably, that the idea of delayed choice dates back to 1941 [C.F. von Weizsäcker, Zeitschrift für Physik 118 (1941) 489] and that Wheeler conveniently omitted to quote this reference on some occasion (I don’t know how he knows this, but Berge knows a lot of things). I learned something during those 45 minutes 🙂

I went home mulling over the end of the exchange between Berge and Daniel. Berge stressed that he does not understand why people still work on such things as local variable models. He added that, in his opinion (well informed as usual), all the foundational topics have been discussed and settled by the founders, and the rest are variations or re-discoveries by people who did not bother to inform themselves. Daniel replied that he basically agrees, but since these days many people are excited about closing loopholes, he argued that these discussions are relevant to the community. I think both are right, but they also overlooked an important point.

I agree with Berge that there is no “fundamental” reason to keep researching local variables and their alternative friends (contextuality, Leggett-Garg…). Journals like Nature and Science are filled with such experiments these days; but this fashion and the consequent flow of publications is driven neither by controversy nor by the desire to acquire new knowledge, because everyone knows what the outcome of the experiment will be. It is a very clear form of complacency on the part of a community. I agree with Daniel that there is some need to put order in the clamor of claims, so a clean analysis like the one in his paper is very welcome.

However, I think that there is a reason to close those loopholes: device-independent assessment! If quantum information tasks are to become practical, this is a really meaningful assessment. Experimentalists (I mean, the serious ones) do worry about side channels. If they could go for a device-independent assessment, their worries would be excluded by observation. But to get there, you need to close the detection loophole.

I also think that the notion of device-independent is a genuine advance in foundations. I side with Berge when it comes to all the debates about “wave vs particle”, “indeterminacy”, “incompatible measurements”… On such topics, yes, the founders settled pretty much everything. But I don’t see how people immersed in those terms of debate could have anticipated an assessment of non-classicality made without describing the physical system at all: that is, without saying that “it” is an electromagnetic field in a box (Planck), or a magnetic moment in a magnetic field (Zeeman and Stern-Gerlach).

Now, you may wonder why device-independent does not receive so much public praise and excitement as the other stuff. I don’t know, but several reasons may contribute to this situation:

* The debates on waves and particles have percolated into the general educated public. Since there is no “clear” explanation available (there cannot be, if by “clear” we mean “understandable in everyday terms”), these educated people think that the problem is still open. Scientific journalists, for instance, pick up immediately every paper that hints at some advance in wave-particle blabla — I suggest they should always consult Berge before writing enthusiastic nonsense. The idea of device-independent is too novel to generate such excitement.

* None of the great North-American prophets of the church of larger Hilbert space (i.e. the quantum information community) is preaching for device-independent. The topic is being pushed from some places in Europe, where they have a network (principal investigators: Antonio Acin, Nicolas Gisin, Serge Massar, Stefano Pironio, Jonathan Barrett, Sandu Popescu, Renato Renner) and from Singapore (Artur Ekert and myself).

* Device-independent theory is tough (you need to compute bounds without assuming anything about your system, using only the fact that you observed some statistics); experiments are even tougher (you need to close the detection loophole at the very least; as for the locality loophole, either you close it too, or you need a good reason to argue it away). So it’s a sort of “elite” topic, which does not gain visibility from mass production — yes, a constant flow of papers, even if most of them are deemed wrong or pointless, does contribute to the impression that a topic is hot and interesting.

And finally, the most powerful reason: I am neither Scott Aaronson nor John Baez, so nobody reads my blog 😉

Everyone is speaking of it, part II

The more I hear about this result, the more I fear that the media have picked it up only because they misread the meaning of the title… Let me explain what is done there as simply as I can. I’ll let the reader decide if they think the media could have understood this 🙂

Let L (lambda in the paper) be a list of deterministic instructions of the type {“Measurement A -> give result a”, for all measurements}. Since quantum states do not predict deterministic results for all measurements, a single list is trivially inadequate. But there is a very natural way to generate randomness: just pick different lists {Lk} with probability pk each. So, the model is:

Quantum state of a single object <-> {Lk, pk}.
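To fix ideas, here is one way such a model could be encoded (a toy encoding of my own, not the notation of the paper; the settings X, Y, Z and the numbers are invented):

    # A "list" assigns a definite outcome to every measurement; a state is a distribution over lists.
    import random

    L1 = {"X": +1, "Y": +1, "Z": +1}   # two hypothetical instruction lists for a single qubit
    L2 = {"X": +1, "Y": -1, "Z": -1}

    state = [(L1, 0.5), (L2, 0.5)]     # the candidate description {Lk, pk} of a quantum state

    def measure(state, setting):
        # the randomness comes only from not knowing which list was drawn at the source
        lists, probs = zip(*state)
        chosen = random.choices(lists, weights=probs)[0]
        return chosen[setting]

    print([measure(state, "Y") for _ in range(10)])  # looks random, but each run had a definite answer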

What the paper proves is that no two quantum states can share any list: the set of lists with non-zero probability uniquely identifies a state. In other words, giving the possible lists, or even just one of them, amounts to describing the state…

… for a single object! Indeed, Bell’s theorem proves that neither a product of lists {La, pa} x {Lb, pb}, nor even a single correlated distribution {La x Lb, pab} over product lists, can describe entanglement. So, lists just don’t seem to do the job. Personally, I can’t believe that the randomness of one quantum object comes from a list, when we know that the randomness of two quantum objects cannot come from a list.

In the same vein, I have a small problem with the logic of the proof. One constructs a family of product states, which should obviously be described by products of lists, and measures them by projecting onto a suitable family of entangled states, which… which… wait a second: how does one describe entangled states in that model?? It seems that the closest attempt was Spekkens’ toy model, which reproduces many nice features of quantum physics, but unfortunately not (guess what?) the violation of Bell’s inequalities. Maybe the contradiction exploited in the proof comes from the fact that there is no description of entangled states in a model with lists?

That being said, this paper does add something for those who were still trying to believe in lists as an explanation of quantum randomness — and the more this idea is shown to be inadequate, the better 🙂

Note added: I was convinced that this post misses the point, but it triggered some nice follow-up; so please read the subsequent thread of comments: the “truth” may be at the bottom — or in the exchange 😉

Everyone is speaking of it

Everyone in my field has been fascinated by a work of Pusey, Barrett and Rudolph. I did not understand it, so I wrote to the authors. Jon Barrett replied in a crystal clear way. Here are his words on what they have proved:

“Suppose there is a hidden variable model, such that a (single) system has a hidden state lambda, and when I prepare the system in a quantum state |phi>, this actually corresponds to some probability distribution rho_{phi}(lambda). If I prepared |psi> instead, then this would correspond to the probability distribution rho_{psi}(lambda). It is obvious that rho_{phi}(lambda) and rho_{psi}(lambda) cannot be the same distribution. But we ask, can it happen that the distributions rho_{psi} and rho_{phi} overlap? The answer is obviously “no” if |phi> and |psi> are orthogonal. But what if they are non-orthogonal? We show that the answer is “no” in this case too. For any pair of quantum states which are not identical, the distributions rho_{phi}(lambda) and rho_{psi}(lambda) must be completely disjoint. Hence for any lambda, there is only one possible quantum state such that Probability(lambda|psi) is nonzero. This means that if someone knows lambda, then they know the quantum state. The whole of the quantum state must be “written into” the actual physical state of the system”. Jon also pointed me to Matt Leifer’s post, which is indeed a very clear critical appraisal.
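In symbols, the claim is that for any two non-identical states |phi> and |psi>,

rho_{phi}(lambda) * rho_{psi}(lambda) = 0 for every lambda,

i.e. the two distributions have disjoint supports, so each lambda is compatible with at most one quantum state.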

Now, why are people excited?

Well, many seem to argue that this paper falsifies some attempts at “being moderate”, cornering people into adopting one of the following positions:

* A realism with “crazy” features (many-worlds with the existence of all potentialities; or Bohmian mechanics with potentials that change non-locally);

* A complete renunciation of realism: we cannot hold any meaningful discourse on quantum physics; we just apply the theory that gives us correct predictions for our observations.

I am not sure that the alternatives are so sharp… anyway, now I have to go back to atomic physics and figure out formulas that describe reality — sorry: the perception of the experimentalists 😉

 

Q & indeterminism

In the workshop I attended last week, an important question was raised: does quantum physics imply indeterminism? We have all been told this in the basic lectures, but… how do we know? I addressed this question partly in another post, to which I refer for more details; but I prefer to devote a new specific post to this important and often asked question. So here is my answer:

With one-particle phenomena, we can’t know: hidden variable descriptions, however hated and non-orthodox they may be, are possible. This means that it is possible to keep a deterministic view of such phenomena. So, if you have been told that the double slit or the Stern-Gerlach experiments necessarily imply indeterminism, think again.

When one goes to two particles, the violation of Bell’s inequalities implies that at least one of the following three assumptions is wrong (see the other post):
1. Determinism (or outcome independence)
2. No-signaling
3. “Free will”, or more precisely, the fact that the choices of the measurements are not correlated with the source.
Adepts of Bohmian mechanics give up 2 (their “quantum potential” is a signaling hidden variable); adepts of many-worlds interpretations will give up 3 in a complicated way. If you want to keep 2 and 3, then indeed quantum phenomena imply indeterminism. An important remark: it is not quantum theory (i.e. a mathematical construction), but the observed violation of Bell’s inequalities (i.e. a fact) that implies indeterminism.
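For reference, here is a standard way (not spelled out above) to write the class of models that the violation rules out: those in which the observed statistics decompose as

p(a,b|x,y) = sum_lambda q(lambda) p(a|x,lambda) p(b|y,lambda)

where the product structure encodes assumptions 1 and 2 (each outcome depends only on the local setting and on lambda), and the fact that q(lambda) does not depend on the settings x, y encodes assumption 3.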

Workshop about Q in school

I am attending a very interesting workshop with many secondary-school teachers from Switzerland. It’s in French, but I am blogging about it in English. Today, two notes:

Note 1

Jean-Marc Levy-Leblond has presented a nice conceptual framework to understand how the notion of “wave-particle duality” arose. He pointed out that the classical notion of “particle” is discontinuous (or discrete) both in its spatial extension and in its quantity; the classical notion of field is “continuous” in both. The “quantons” (he likes this term, and it’s practical indeed) are continuous in spatial extension but discrete in quantity.

A nice hint… but no need to expand: he immediately said that, nowadays, the notion of “wave-particle duality” should just be erased forever from our vocabulary — with which I could not agree more.

Note 2

This note is just devoted to stating my admiration for the teachers I talked to (to be extended by extrapolation to all the participants). With some, I even ended up already talking about the latest results on axiomatics, on device-independence, on indeterminism… I’ll blog about these ideas one day. Now it’s dinner time.

Just for laughs…

A friend of mine reminded me just now of something that happened during one of my outreach talks, some years ago: I had spoken of the Schrödinger cat, and someone asked if it would work for other animals as well…

Seven “typically quantum” phenomena

I normally don’t like New Scientist: their journalism suggests that we should change our view of the natural world every week, which is a bit too fast for me to follow (true enough, Nature and more recently PRL are going down the same drain).

However, this article is nice — that is, perfectly in line with the scope of this blog 🙂 And the author knows well that the phenomena he describes are not this week’s latest craze, but the accumulation of decades of evidence and reflection.