Advances in foundations?

Yesterday I attended a talk by Daniel Terno. It was about one of his recent works, re-examining the so-called delayed-choice experiment and its implications. It was a well-delivered talk with an artistic touch, as usual for the speaker. Berge Englert, who was in the audience, made a few historical observations of which I was not aware: notably, that the idea of delayed choice dates back to 1941 [C.F. von Weizsäcker, Zeitschrift für Physik 118 (1941) 489] and that Wheeler conveniently omitted to cite this reference on some occasion (I don't know how he knows this, but Berge knows a lot of things). I learned something during those 45 minutes 🙂

I went home mulling over the end of the exchange between Berge and Daniel. Berge stressed that he does not understand why people still work on such things as local hidden variable models. He added that, in his opinion (well informed as usual), all the foundational topics have been discussed and settled by the founders, and the rest are variations or re-discoveries by people who did not bother to inform themselves. Daniel replied that he basically agrees, but argued that, since many people these days are excited about closing loopholes, these discussions are relevant to the community. I think both are right, but both also missed an important point.

I agree with Berge that there is no "fundamental" reason to keep doing research on local hidden variables and their alternative friends (contextuality, Leggett-Garg…). Journals like Nature and Science are filled with such experiments these days; but this fashion and the consequent flow of publications is not driven by controversy, nor by the desire to acquire new knowledge, because everyone knows what the outcome of the experiment will be. It is a most blatant form of complacency on the part of a community. I agree with Daniel that there is some need to put order in the clamor of claims, so a clean analysis like the one in his paper is very welcome.

However, I think that there is a reason to close those loopholes: device-independent assessment! If quantum information tasks are to become practical, this kind of assessment is really meaningful. Experimentalists (I mean, the serious ones) do worry about side channels. If they could go for device-independent assessment, those worries would be excluded by observation alone. But to get there, you need to close the detection loophole.

I also think that the notion of device-independence is a genuine advance in foundations. I side with Berge when it comes to all the debates about "wave vs particle", "indeterminacy", "incompatible measurements"… On such topics, yes, the founders settled pretty much everything. But I don't see how people immersed in those terms of debate could have anticipated an assessment of non-classicality made without describing the physical system at all: that is, without saying that "it" is an electromagnetic field in a box (Planck), or a magnetic moment in a magnetic field (Zeeman and Stern-Gerlach).
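To make this concrete with my favourite example (mine, not something raised in the exchange above): in the CHSH test, two black boxes accept settings $a, a'$ and $b, b'$ and output $\pm 1$. From the observed correlators alone, any local hidden variable model must satisfy

$$S = E(a,b) + E(a,b') + E(a',b) - E(a',b') \leq 2,$$

while suitably measured quantum systems can reach $S = 2\sqrt{2} \approx 2.83$ (Tsirelson's bound). Nothing in this statement refers to what is inside the boxes.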

Now, you may wonder why device-independence does not receive as much public praise and excitement as the other stuff. I don't know, but several reasons may contribute to this situation:

* The debates on waves and particles have percolated into the general educated public. Since there is no "clear" explanation available (there cannot be, if by "clear" we mean "understandable in everyday terms"), these educated people think that the problem is still open. Scientific journalists, for instance, immediately pick up every paper that hints at some advance in wave-particle blabla; I suggest they always consult Berge before writing enthusiastic nonsense. The idea of device-independence is too novel to generate such excitement.

* None of the great North-American prophets of the church of the larger Hilbert space (i.e. the quantum information community) is preaching for device-independence. The topic is being pushed from a few places in Europe, where there is a network (principal investigators: Antonio Acin, Nicolas Gisin, Serge Massar, Stefano Pironio, Jonathan Barrett, Sandu Popescu, Renato Renner), and from Singapore (Artur Ekert and myself).

* Device-independent theory is tough: you need to compute bounds without assuming anything about your system, using only the fact that you observed some statistics (a toy sketch of this statistics-only reasoning follows at the end of the post). Experiments are even tougher: you need to close the detection loophole at the very least; as for the locality loophole, either you close it too or you need a good reason to argue it away. So it's a sort of "elite" topic, which does not gain visibility from mass production (yes, a constant flow of papers, even if most of them are deemed wrong or pointless, does contribute to the impression that a topic is hot and interesting).

And finally, the most powerful reason: I am neither Scott Aaronson nor John Baez, so nobody reads my blog 😉
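For the curious, here is the sketch promised above: a minimal toy calculation of the CHSH value from observed counts alone, assuming nothing about the devices. The counts are hypothetical numbers of my own invention; in a real experiment, silently discarding the rounds where a detector did not click is precisely the fair-sampling assumption that closing the detection loophole removes.

```python
# A minimal toy sketch (my own illustration): estimating the CHSH value
# from observed coincidence counts alone, with no model of the devices.

def correlator(counts):
    """E = (N++ + N-- - N+- - N-+) / N for one pair of settings."""
    n_pp, n_pm, n_mp, n_mm = counts
    n = n_pp + n_pm + n_mp + n_mm
    return (n_pp + n_mm - n_pm - n_mp) / n

# Hypothetical counts (N++, N+-, N-+, N--) for the four setting pairs.
observed = {
    ("a", "b"):   (420, 80, 75, 425),
    ("a", "b'"):  (410, 90, 85, 415),
    ("a'", "b"):  (430, 70, 80, 420),
    ("a'", "b'"): (90, 410, 415, 85),
}

S = (correlator(observed[("a", "b")])
     + correlator(observed[("a", "b'")])
     + correlator(observed[("a'", "b")])
     - correlator(observed[("a'", "b'")]))

print(f"CHSH value S = {S:.2f}")  # any local model gives |S| <= 2
print("violates the local bound" if abs(S) > 2 else "no violation observed")
```

The point of the exercise: the verdict "violates the local bound" is reached without ever saying what the boxes contain, which is exactly what "device-independent" means.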



Posted on January 20, 2012, in Common knowledge, Latest topics. 3 Comments.

  1. Valerio, cheer up, I do read your blog (every now and then), and it reads well 🙂

  2. Hi Dr. Valerio, I am a big fan of your blog and I really enjoy reading it.

  3. I read yours! hahhaha…..
