Category Archives: Latest topics

Some results which, even though they sit at the cutting edge of current research, may also help in spreading quantum physics to larger audiences.

Collapse models collapse in my esteem

In preparing my teaching for the coming semester, I was led to consider the possibility of introducing the students to collapse models. From afar, I was keen to adopt a moderate stance like “yeah, this is not standard stuff, but it’s intriguing, and it’s worth knowing”. (un)Fortunately, while in my research I fall every now and then into the sin described in my previous post, when it comes to teaching I am really incapable of regurgitating material from a book or a review article. So I spent some time thinking about how I would present collapse models to an audience who will have already studied Bell’s theorem in its device-independent approach (lecture notes available here). And I came to the conclusion that — I probably won’t present them.

Let us take one step back. The desire for collapse is triggered by the quantum description of setups like the double slit experiment. Each electron produces a very sharp dot on the screen, as one would expect from a particle. However, after detecting many electrons, the overall distribution of dots is an interference pattern, like the one expected for a wave. These are the facts. The rest depends on how you narrate the story. In the most frequently encountered narrative, the electron is delocalized like a wave before hitting the screen, then it “collapses” to a given location upon hitting the screen. A collapse model is a model that aims at describing how this transition happens.

Some very smart people, rigorously trained in quantum thinking since the cradle, realize immediately that such a narrative is fishy, denounce it as such and ask us to move on. Less smart and/or less rigorously trained people, like me, need more evidence to be convinced. What happened in preparing my teaching is that I suddenly collected such evidence for myself. And now I am trying to share it with you.

So, let’s take my starting point and that of my future students: we know Bell inequalities. We know in particular that any classical mechanism aimed at reproducing quantum correlations must be non-local. “Wow, cute: non-locality!” Well, not so cute. For one, the hypothetical mechanism must be infinitely rigid in space-time, or in other words, it must propagate information at infinite speed (yes, infinite, not “just” faster-than-light). For two, the predictions of quantum physics are more constrained than those of the most general non-local theory (even under the so-called “no-observable-signaling” assumption): so, if you toy around with a non-local mechanism, you must further constrain it ad hoc in order to recover the observations. In other words, not only does a non-local mechanism bring no additional predictive power: it must be fabricated precisely so as to match the observations, which we continue predicting using quantum theory. Really, not so cute.
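To put numbers on that last claim, here is a small self-contained sketch (my own illustration, with a standard choice of state and settings, nothing taken from the lecture notes): the CHSH combination of correlators reaches at most 2 for any local deterministic strategy, 2√2 for suitable quantum measurements on a maximally entangled pair, while a hypothetical no-signaling “PR box” would reach 4. The gap between 2√2 and 4 is precisely the sense in which quantum predictions are more constrained than those of the most general non-local theory.

```python
# Toy check of the CHSH values quoted above (my own sketch, standard settings).
import itertools
import numpy as np

# 1. Local deterministic strategies: each side pre-assigns outcomes +/-1.
best_local = max(
    abs(a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1)
    for a0, a1, b0, b1 in itertools.product([+1, -1], repeat=4)
)

# 2. Quantum strategy: spin observables cos(t)Z + sin(t)X on the state |Phi+>.
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

def corr(ta, tb):
    """Correlator <A(ta) x B(tb)> on the maximally entangled state."""
    A = np.cos(ta) * Z + np.sin(ta) * X
    B = np.cos(tb) * Z + np.sin(tb) * X
    return phi_plus @ np.kron(A, B) @ phi_plus

a = [0, np.pi / 2]           # Alice's two settings
b = [np.pi / 4, -np.pi / 4]  # Bob's two settings
S_quantum = corr(a[0], b[0]) + corr(a[0], b[1]) + corr(a[1], b[0]) - corr(a[1], b[1])

print(best_local)   # 2
print(S_quantum)    # about 2.828, i.e. 2*sqrt(2); the PR box would give 4
```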

Back to collapse now. A collapse model worthy of its name would certainly be applicable beyond the example of localization in the double-slit experiment. Specifically, take a Bell experiment: two photons are prepared in an entangled state, so the polarization of each one is undetermined. Upon measurement, one is found with horizontal polarization (H), the other with right circular polarization (R). This is also a case of “collapse”, where something got determined that was previously undetermined. So the collapse model should describe it too.

Now it’s time to be more precise: what is your aim exactly, in constructing a collapse model? Here come two options:

  • You want a deterministic process: something that explains that in this run of the experiment, given whatever variable in the measurement apparatus, the first photon would necessarily collapse into H; and the second photon would necessarily collapse into R. This would certainly be a very pleasant complement to quantum physics for a classical mind. But Bell’s theorem is clear: the “whatever variable” that triggers such a collapse must be unpleasantly non-local as defined above. Are you ready to buy it? Then I have infinitely many collapse models ready for you. But think twice: are you really making physics more understandable by choosing this path?
  • You want a stochastic description: here, I am a bit at a loss as to what this wish amounts to. If by “stochastic” one means “classically stochastic”, we are back to the previous case. In fact, Bell’s theorem does not apply only to deterministic models, but also to classically stochastic ones (i.e. all those where the stochastic element can be attributed to ignorance; mathematically, those that can be described as convex combinations of deterministic models; see the reminder just after this list). If by “stochastic” one means “any form of mathematical model with some stochastic element” — well, then quantum mechanics is already there, and there does not seem to be any need to complement it with a collapse model.
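For completeness, here is the textbook form of such models (standard notation, not tied to any specific paper): a local, classically stochastic model for settings x, y and outcomes a, b is any decomposition of the form

```latex
% A local (classically stochastic) model: a convex mixture, weighted by q(lambda),
% of products of local response functions. Bell's theorem rules all of these out.
\[
  p(a,b\,|\,x,y) \;=\; \sum_{\lambda} q(\lambda)\, p_A(a\,|\,x,\lambda)\, p_B(b\,|\,y,\lambda),
  \qquad q(\lambda)\ge 0, \quad \sum_{\lambda} q(\lambda) = 1 .
\]
```

Deterministic models are the special case in which each local response function p_A, p_B takes only the values 0 and 1.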

In a nutshell, it seems to me that collapse models were maybe a legitimate quest at a time when “localization” was presented as the fundamental non-classical feature of quantum physics (the very smart fellows mentioned above will tell you that there has never been such a time for them, but again, this post is for normal people like me). Now we have Bell’s theorem and the corresponding experiments. You don’t need to make Bell’s theorem your new foundational cornerstone, if you don’t want to; just take it as one of the many discoveries made in the 20th century thanks to quantum physics. In the light of this discovery, the fog of collapse models, which could be entertained for some time, seems to be dissipating and leaving little trace behind.

P.S. This ends up being a “negative” post: I criticize collapse models without proposing my own positive solution. At least, I know that there is one path that is not worth exploring. I am leaving now for three weeks of holidays and maybe I’ll find time to explore some other path (though, most probably, I won’t think about physics at all).

Physics and the bumper sticker

In the remote preparation for my Coursera on randomness, I read Nate Silver‘s The signal and the noise. I am not sure how much of it will enter my course, since I don’t plan to enter into the topics he deals with (politics, the stock market, climate change, prevention of terrorism, baseball and poker). But the conclusion struck a chord.

The author lists seven approximations to describe the “efficient market hypothesis”, which run: 1. No investor can beat the stock market, 2. No investor can beat the stock market over the long run, and so on until approximation 7, which is a sentence five lines long. Then he adds (emphasis is mine):

“The first approximation — the unqualified statement that no investor can beat the stock market — seems to be extremely powerful. By the time we get to the last one, which is full of expressions of uncertainty, we have nothing that would fit on a bumper sticker. But it is also a more complete description of the objective world.”

Sound familiar? Let’s give it a try:

Example 1:

  • Bumper sticker: No extension of quantum theory can have improved predictive power
  • Expression full of uncertainty: the authors work under the assumption of no-signaling (so, if you are Bohmian, don’t worry, our result does not concern you). Then they assume a lot of quantum physics, but not all of it, otherwise the claim would be tautological. Beyond the case of the maximally entangled state, which had been settled in a previous paper, they prove something that I honestly have not fully understood. Indeed, so many other colleagues have misunderstood this work that the authors prepared a page of FAQs (extremely rare for a scientific paper) and a later, clearer version.
  • Comment: the statement “Colbeck and Renner have proved that quantum theory cannot be extended” is amazingly frequent in papers, referee reports and discussions. Often, it comes in the version: “why are people still working on [whatever], since Colbeck and Renner have conclusively proved…?” It is pretty obvious, however, that many colleagues making that statement are not aware of the “details” of what Colbeck and Renner have proved: they have simply memorized the bumper-sticker statement. I really don’t have a problem with Colbeck and Renner summarizing their work in a catchy title; what is worrisome is that other experts repeat the catchy title and base decisions solely on it.

Example 2:

  • Bumper sticker: The quantum state cannot be interpreted statistically [Yes, I know that the title of the final version is different, but this is the title that sparked the curiosity of the media]
  • Expression full of uncertainty: the authors work with a formalization of the notions of “ontic” and “epistemic” that is accepted by many people, though not by Chris Fuchs and some of his friends. They add a couple of other reasonable assumptions, where by “reasonable” I mean that I would probably have used them in a first attempt to construct an epistemic model. Then they prove that such an epistemic model is inconsistent.
  • Comment: too many people have commented on this paper. The latest contrary claim was posted online today; I have not read it, because I am really not following the debate, but for those who are interested, here it is.

Example 3:

  • Bumper sticker: either our world is fully deterministic or there exist in nature events that are fully random [the use of “either-or” makes it too elaborated for a real bumper sticker, but for someone who browses these papers, the sentence is basic enough]
  • Expression full of uncertainty: the authors consider a very weak source of randomness, something like a very biased coin; in fact, it can be more perverse than that, because it can have correlations over various tosses. But it cannot be completely perverse: the authors make an assumption about its structure (technically known as “Santha-Vazirani”, after the two researchers who proposed it; the condition is sketched just after this list). Then they prove that, if this source is used as a seed for a specific quantum experiment, the outcomes of the experiment are guaranteed to be much more random. In the limiting case of an experiment lasting an infinitely long time, and whose results do not deviate by any amount from the optimal result allowed by quantum physics, the source can contain almost no randomness, while the final list will be almost fully random.
  • Comment: in a paper just published, we studied what happens if we remove the Santha-Vazirani assumption, so that the source can be as perverse as you wish. Not surprisingly, the conclusions become more pessimistic: now, one would need a fair amount of initial randomness in order for the quantum step to produce further randomness. Nothing wrong at all: some guys get a good result with an assumption, others test the limits of the assumption, this is the normal course of science. But read the bumper-sticker statement again: taken in itself, out of the paper where it belongs, that statement has not been “scientifically proved” — it even sounds close to impossible to prove without the crucial assumption.
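For readers who want the assumption spelled out, here is the Santha-Vazirani condition in its usual form (as I recall it; the papers above may use slightly different conventions): every bit may be biased and may depend on all the previous ones, but never deterministically.

```latex
% Santha-Vazirani source with parameter 0 <= epsilon < 1/2: each bit is at most
% epsilon-biased, even conditioned on the whole past of the sequence.
\[
  \tfrac{1}{2} - \varepsilon \;\le\; P\!\left(x_i = 1 \,\middle|\, x_1, \dots, x_{i-1}\right) \;\le\; \tfrac{1}{2} + \varepsilon
  \qquad \text{for all } i .
\]
```

For ε = 0 the bits are perfectly random; the larger ε, the more perverse the source is allowed to be. Removing the bound altogether is the situation studied in the paper mentioned in the comment above.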

Scientific madeleines

The two conferences I attended these last weeks (CEQIP and Vaxjo) were pretty good in science, food, drink, location and atmosphere. For me, they were also full of Proustian madeleines: I have met again so many colleagues and realized how they have actually shaped my life, even when the interaction had been short.

  • Mario Ziman is one of the organizers of CEQIP. I met him at my very first conference in quantum information, in the castle of Budmerice near Bratislava, back in 2001. He was doing his PhD under the supervision of Vladimir Buzek; I had recently started my post-doc with Nicolas Gisin. As an outcome of those discussions, Mario and I (and Nicolas and Vlado and another student called Peter) worked on two papers about entanglement and thermalization. At that time, it was a rather unusual topic; now it is a big one, and at CEQIP alone we had at least three presentations. None of the young authors was probably even aware of our old works, but Mario and I knew better than to struggle for recognition: we simply sat there in the back, enjoying the progress of the field and exchanging nods.
  • I have had fewer interactions with the other organizer, Jan Bouda; but I cannot forget a funny moment when he was visiting Singapore, probably in 2007. In the big old office of what was to become CQT, Andreas Winter, Nicolas Brunner and I asked him to explain his research. He started out: “I don’t know if you are familiar with quantum cryptography”… This time, I discovered that Jan is very familiar with Moravian wines and their weaker and stronger relatives.
  • Another Slovak at CEQIP: Martin Plesch. He is presently working in Brno and has picked up the topic of randomness. At the conference in Budmerice in 2001, he was an undergrad. He had been tasked with driving Nicolas Gisin and me to Vienna airport on the last day. It was raining, we were a bit late, and Martin was going rather fast on those country roads, keeping really, really close to the car in front.
  • In Vaxjo I met again Hans-Thomas Elze, a German working in Pisa, who is the organizer of a series of conferences in Tuscany. When I went in 2004, it was held in Piombino. At that time, Hans-Thomas was still working in Brazil: as a result, the proceedings of that conference were published in the Brazilian Journal of Physics. My paper dealt with an unconventional question and (as you can imagine from the journal) was forgotten until the group of Stefan Wolf made great progress in 2011. The final solution of the problem appeared in Nature Physics. In Vaxjo, Hans-Thomas invited me to attend his next conference in September 2014. I don’t think there is an Etruscan Journal of Physics, but we’ll see…
  • For a few years now, I have been crossing paths with Mauro D’Ariano at least once a year, and we always have good conversations. In the middle of complaints about bureaucracy, punctuated by the typical Italian word –zz-, he keeps an exemplary scientific drive. A few years ago, we were having a fast-food lunch at the March Meeting in Boston. He was telling me that, in his maturity, he wanted to start tackling “really serious” problems. Concretely, he had been reading a lot about field theory, cosmology, relativity… and was declaring his disappointment at finding gaps in the usual arguments. He had decided to try and reconstruct physics from scratch… well, from some quantum form of scratch. Normally, I tend to dismiss beginners who find problems in what others have devoted their lives to — but here, and with Mauro, I could only agree. A few years have passed: his attempt at reconstructing all that we know from basic quantum building blocks has not hit a wall; on the contrary, he and his collaborators are deriving more and more results, and even the “experts” are starting to take them quite seriously. Thanks Mauro for showing what serious and constant work can do!

Why am I writing all this? For no special reason other than to record minute events and the people who are part of my life as a physicist.

Anything wrong with tomography?

“Quantum tomography”, or “state estimation”, is solidly established — or so it seemed until some months ago.

The notion is pretty simple and peacefully accepted: it’s just the quantum analog of reconstructing a statistical distribution from a sample. How would you know if a die is balanced? You cast it many times and infer the probabilities. Ideally, you should cast it infinitely many times; fortunately, the whole field of statistics provides rigorous ways of assessing how much you can trust your inference from finite samples.

You can reconstruct a quantum state in a similar way. There is one main difference: the quantum state contains information about the statistics of all possible measurements and, as is well known, in quantum physics not all measurements are compatible. This is solved by sampling not just one measurement, but several. For instance, if you want to reconstruct the state of a spin 1/2, you need to reconstruct the statistics of measurements along three orthogonal directions x, y, z. It’s like saying that you have to cast a die in three different ways, if you’ll pardon the lousy analogy.
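To make the spin-1/2 case concrete, here is a minimal sketch of the reconstruction (my own toy illustration, with made-up numbers, not taken from any of the papers discussed here): the three expectation values fix the Bloch vector, and linear inversion rebuilds the state.

```python
# Toy linear-inversion tomography of one qubit: rho = (I + r . sigma)/2.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(rho):
    """The three numbers one would estimate by sampling measurements along x, y, z."""
    return np.real([np.trace(rho @ s) for s in (sx, sy, sz)])

def reconstruct(r):
    """Linear inversion: rebuild the density matrix from the expectation values."""
    return 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

rho_true = np.array([[0.9, 0.3], [0.3, 0.1]], dtype=complex)  # a valid qubit state
r = bloch_vector(rho_true)      # in the lab: estimated from the three samplings
print(np.allclose(reconstruct(r), rho_true))                  # True
```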

In the lab, tomography has been used for decades, for characterization: you think you have a source that produces a given quantum state and use tomography to check how well you succeeded. Often, tomography is used to certify that the state of two or more quantum objects is “entangled”.

Theorists have been working on devising various improvements. The biggest challenge is the fact that many statistical schemes may end up reconstructing a state that is not a valid one (think of reconstructing the statistics of a die and finding that the result 5 happens with negative probability!). Also, tomography is a useful motivation for studying the structure of “generalized quantum measurements” (the kind that deserve the awful acronym POVMs) and it plays a crucial role even in some “interpretations” of quantum physics, notably “quantum bayesianism” (I can’t really get to the bottom of it: Chris Fuchs speaks so well that, whenever I listen to him, I get carried away by the style and forget to try and understand what he means. If you really want to make the effort, read this paper).
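And here is the validity problem in the same toy setting (again my own sketch, with hypothetical shot numbers): estimate the three expectation values from finite statistics, plug them into the linear inversion, and positivity is no longer guaranteed.

```python
# Finite statistics can produce an "invalid" reconstructed state.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(1)
r_true = [0.6, 0.0, 0.8]   # a pure state: its Bloch vector sits on the sphere
N = 200                    # shots per measurement direction (arbitrary choice)

r_est = [rng.choice([1, -1], size=N, p=[(1 + r) / 2, (1 - r) / 2]).mean()
         for r in r_true]
rho_est = 0.5 * (np.eye(2) + r_est[0] * sx + r_est[1] * sy + r_est[2] * sz)

# Eigenvalues of rho_est are (1 +/- |r_est|)/2: whenever statistical fluctuations
# push |r_est| above 1, one eigenvalue comes out negative, like a die face with
# negative probability.
print(np.linalg.eigvalsh(rho_est))
```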

All is well… until, a few months ago, reports appeared that quite elementary sources of possible errors had been underestimated:

  • One such source is systematic errors. Consider the example of the spin 1/2: certainly, experimentalists can’t align their devices exactly along x, y and z. They can calibrate the directions only as well as the precision of their calibration devices allows. According to a paper by Gisin’s group in Geneva, the effect of the remaining error has been largely neglected. While probably not dramatic for one spin, the required corrections may become serious when it comes to estimating the state of many, possibly entangled, spins (a toy illustration follows this list).
  • Another quite obvious possibility is a drift of the source. When we cast a die many times, we make the assumption that we are always casting the same die. This is not necessarily true down to ultimate principles: some tiny pieces of matter may be detached at each collision of the die with the floor, so the die may be lighter and deformed after many trials. This deterioration seems inconsequential for a die. But things may be different when it comes to quantum states that are produced by complex laboratory equipment, which has the nasty tendency of not being as stable as your mobile telephone (for those who don’t know, in a research lab the stabilization and calibration of the setup typically takes months: once that is done, the actual collection of interesting data may take only a few days or even hours). Two papers, one from December 2012 and the other posted three days ago but written earlier, explore the possibility of tomography when the source of quanta is not producing the same state in each instance.
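As a toy illustration of the systematic-error point above (my own sketch, with hypothetical numbers), a small uncorrected misalignment of the measurement directions biases the reconstructed Bloch vector by a fixed amount that no amount of statistics will shrink:

```python
# Systematic misalignment: the bias does not average away with more data.
import numpy as np

def rot_z(eps):
    """Rotation of the measurement directions by a small angle eps around z."""
    c, s = np.cos(eps), np.sin(eps)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

r_true = np.array([0.6, 0.0, 0.8])
eps = 0.05                        # about 3 degrees of residual misalignment

axes = rot_z(eps) @ np.eye(3)     # columns: the directions actually measured
r_reported = axes.T @ r_true      # each reported component is n_i . r_true

print(r_reported - r_true)        # a fixed offset, independent of the sample size
```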

Does this whole story undermine quantum tomography? Does it even cast a gloomy shadow on science? My answer is an unambiguous NO. All the previous works on tomography were done under some assumptions. Whenever those assumptions hold, whenever there is reason to trust them, those works are correct. If the assumptions can be doubted, then obviously the conclusions should be doubted too. With these new developments, people will be able to do tomography even under more relaxed assumptions: great! The lesson to be learned is: state your assumptions (OK, you may not want to state all the assumptions in every technical paper aimed at your knowledgeable peers: but you must be aware of them, and state them whenever you write a review paper, lecture notes or similar material).

The Nobel Prize and quantum information

Everyone is happy with the attribution of the 2012 Nobel Prize for physics, and so definitely am I. However, I cannot fully agree with those of my colleagues who are hailing this attribution as “a Nobel Prize for quantum information”. Serge Haroche and Dave Wineland started working on those experiments well before the idea of the quantum computer. Did they join the quantum information community, or is it the community that joined them? There is no sharp answer of course, because the cross-fertilization of ideas goes both ways; but I think that Serge and Dave would be more or less where they are without quantum information.

By their choice, the Nobel committee endorses great developments in atomic physics and quantum optics. The endorsement of quantum information proper is still pending.

A parable

Recently, I have read a paper in a prestigious journal in physics, whose logic was a bit stretched. Let me paraphrase it for you.

Italians are known to be good soccer players. Recently, some authors have noticed that Singaporeans may also be pretty decent soccer players. In this paper, we prove that Singaporeans can even be better than Italians.
For the test, the Singaporeans were chosen from one of the many soccer schools in the island; the Italians were chosen among the finalists of the certamen ciceronianum, the most famous competition of Latin prose writing. The age and bodily weight distributions were the same for both samples.
Each player was asked to try and score a penalty kick with the heel. Remarkably, the Singaporeans fared far better than the Italians. This conclusively proves that Singaporeans can be better soccer players than Italians in some tasks.

Reference: Xxxxx et al, Nature Physics Y, yyyy (20zz)

Complacency in science

I have finally read Galbraith’s Short history of financial euphoria, which Alain Aspect suggested to me during a random dinner chat a few months ago. It’s nice: it’s the first time I understand something about finance. And it triggered a concern about academia.

In finance as well as in academia, people often fall into euphoria over something that is, by all rational standards, rather worthless. In my field of research, for instance, the latest craze is the following process:

  1. Write down a new version of some criterion that tests that “something is quantum” (a new Bell inequality, a new test of contextuality, a version of Leggett-Garg…); the simpler — the more trivial — the better, because of point 2.
  2. Find a couple of friends to do an experiment for you. Better if they have been running their setup for ages and have exhausted all the serious science that could possibly be done with it, because they will be more than happy to learn that their old machinery can still be used to perform “fundamental tests”. Moreover, since your test is simple and simple quantum physics has been tested to exhaustion, you have no doubt that the experimental results will uphold your theory.
  3. If you can, present it as “the first step towards [a big goal]”. Never mind that it is rather the last use of a setup that has had its day (I refrained from using “swan song”, because the last song of the swan is supposed to be the most beautiful; the last concert of an 80-year-old pop star would be a more appropriate metaphor). If you can’t invoke the future, present it as “the conclusive proof of [some quantum claim]”. Never mind that the claim is usually the same, namely that results of measurements are not pre-established, that there is intrinsic randomness, or however you want to phrase it. Also never mind the fact that there cannot be a “conclusive claim” every month.

The euphoria mechanism is entertained as follows:

  • The big journals (Nature at the forefront) prefer to publish tons of poor science rather than risk losing a single real breakthrough. So, if someone claims to have solved “the mystery of the quantum” (the general readership of Nature finds quantum physics mysterious), better take them seriously.
  • In turn, people notice that “if you do that, you publish in Nature”. Since “that” is not that difficult after all, it’s worthwhile going for it.
  • Once you have published in Nature (or Science or…), you are hailed as a hero by the head of your Department, by the communication office of your university, by the agencies that granted you the funds.
  • Put yourself now at the other end, namely in the place of someone who would like to raise a dissenting voice and reveal the triviality of the result. All the legitimate authorities (peer-reviewed journals, heads of prestigious Departments, grant agencies, even popular magazines and newspapers!) are against you. Isn’t it “obvious”, then, that you are only venting your jealousy, the jealousy of the loser?

So far, the analogy with financial euphoria is clear. I guess (though I have not studied the statistics) that the crash is also analogously fast: it happens when some of the editors of the main journals take the conscious decision to have “no more of that”, because they realize that there is really nothing to gain. The rumor spreads that “refereeing has become tough”; the journals are accused of having become irrational, since “if they accepted the previous paper, why do they refuse this one?” (while it’s actually one of their few moments of rationality).

And the consequences? The same too, but fortunately without criminal prosecutions, despair and suicides. The very big fish get out unscathed: either their science is really serious (that is, they have invested only a small amount of their scientific capital in the euphoric topic); or their power is really big (that is, they have invested only a small amount of their political power in backing the euphoria). The opportunists will try to follow the wind, as they should, and will be forgotten, as they should be. Those who face an uncertain destiny are the young fellows, who were doing serious science when the euphoria caught them at the right time and in the right place. Because of this, they have been raised to prominence. Somehow, all their capital is invested in that topic. Will they be able to find their way out and continue doing serious science? Or will they end up teaming up with their buddies, setting up a specialized journal for themselves and publishing there into their old age? If one day you find me as the founder of a journal called “Nonlocality”, please wake me up.

Happy Easter!

Measuring uncertainty relations

In the space of two weeks, two works have appeared in Nature Physics about measuring uncertainty relations. In the first, an experiment is actually performed to test (and, needless to say, verify) the validity of an uncertainty relation which applies to more situations than the one originally devised by Heisenberg. In the second, it is proposed that the techniques of quantum optics may be used to probe modifications of the usual uncertainty relation due to gravity. Now, to finally have a tiny bit of evidence for quantum gravity: that would really be a breakthrough!

Faithful to my principle of not doing “refereeing on demand”, this is not an unrequested referee report: in fact, I have only browsed those papers, certainly not in enough depth to make judgments. The authors are serious so, by default, I trust them on all the technicalities. The question that I want to raise is: what claims can be made from an uncertainty relation?

An uncertainty relation looks like this:

[something related to the statistics of measurements, typically variances or errors] >= [a number that can be computed from the theory]

which has to be read as: if the right-hand side is larger than 0, then there MUST be some error, or some variance, or some other form of “uncertainty” or “indeterminacy”. Let me write the relation above as D >= C for short.
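As a concrete, textbook instance of D >= C (not necessarily the generalized relation tested in the papers, which, again, I have only browsed), the Robertson form reads:

```latex
% Robertson uncertainty relation: the product of standard deviations (the "D" side)
% is bounded from below by a number computed from the theory (the "C" side).
\[
  \underbrace{\sigma_A\,\sigma_B}_{D} \;\ge\; \underbrace{\tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr|}_{C},
  \qquad \text{e.g.} \qquad \sigma_x\,\sigma_p \;\ge\; \frac{\hbar}{2}.
\]
```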

Now, let’s see what a bad measurement can do for you. A bad measurement may introduce more uncertainty than is due to quantum physics. In other words, one may find D(measured) = C + B, where B is the additional contribution of the bad measurement. It may be the case that your devices cannot be improved, so you can’t remove B. Now, the second paper proposes an experiment whose goal is precisely to show that D(measured) = C + G, where G is a correction due to gravity. Obviously, much more than the mere observation of the uncertainty relation will be needed if anyone is to believe their claim: they will really have to argue that there is no way to remove G, and that this is not because their devices are performing poorly. The problem is that there is always a way of removing G: a bad measurement can do it for you!

Indeed, a bad measurement may also violate the uncertainty relation. Let me give an extreme example: suppose that you forget to turn on the powermeter that makes the measurement. The result of the position measurement will be systematically x=0: no error, no variance. Similarly, the result of the momentum measurement will be systematically p=0: no error, no variance. In this situation, D(measured)=0. Of course, nobody would call that a “measurement”, but hey, it may well be “what you observe in the lab”. To be less trivial, suppose that the needle of your powermeter has become a bit stiff, rusty or whatever: the scale may be uncalibrated and you may easily observe D(measured)<C.

So, a bad measurement can influence the observed side of the uncertainty relation both ways: it can push D(measured) above C, or below it.

Now, there are reasonable ways of getting around these arguments. For instance, by checking functional relations: don’t measure only one value, but several values, in different configurations. If the results match what you expect from quantum theory, a conspiracy becomes highly improbable; and indirectly it hints that your measurement was not bad after all. For instance, this is the case of Fig. 5a of the first paper mentioned above.

Still, I am left wondering if the tool of the uncertainty relation is at all needed, since by itself it constitutes very little evidence. Let me ask it this way: why, having collected enough statistics for a claim, should one process the information into an “uncertainty relation”? The information was already there, and probably much more of it than gets finally squeezed into those variances or errors. OK, maybe it’s just the right buzzword to get your serious science into Nature Physics: after all, “generalized uncertainty relation” will appeal to journalists much more than “a rigorous study of the observed data”.

An ongoing experiment in sociology of science

Two months ago, Pusey, Barrett and Rudolph posted on the arXiv a paper with the title The quantum state cannot be interpreted statistically. It generated a lot of hype, and also serious interest. As for myself, I have not finished understanding it (my latest attempt is at the bottom of this series of comments), but this is not the matter now.

Two days ago, the same Barrett and Rudolph posted a new paper on the arXiv (with two other co-authors). The title: The quantum state can be interpreted statistically.

What’s going on? There is no mystery: in the first paper, the no-go theorem was proved under some assumptions; in the second paper, one of the assumptions is removed and an explicit model is constructed. Scientifically, this is nice: it clarifies the conditions under which a statistical interpretation can be constructed.

This post is not meant as a criticism of the scientific content of those papers — I don’t do refereeing on demand, and even less by blog. I am just alerting those interested in the sociology of science. Indeed, in the coming week or so, it will be very instructive to monitor the reaction of the scientific media to this new paper. The claim as stated in the title would deserve the same hype as the previous one (if you care about A, the propositions “A is false” and “A is true” carry the same importance). My prediction is that, while there may be discussions in scientific blogs, the journals will not pick up the story this time. Anyway, let’s not waste more time by discussing all possible scenarios: we’ll discuss post factum the one that will actually have happened.

Advances in foundations?

Yesterday I attended a talk by Daniel Terno. It was about one of his recent works, re-examining the so-called delayed-choice experiment and its implications. It was a well-delivered talk with an artistic touch, as usual for the speaker. Berge Englert, who was in the audience, made a few historical observations of which I was not aware: notably, that the idea of delayed choice dates back to 1941 [C.F. von Weizsäcker, Zeitschrift für Physik 118 (1941) 489] and that Wheeler conveniently omitted to quote this reference on some occasion (I don’t know how he knows this, but Berge knows a lot of things). I learned something during those 45 minutes 🙂

I went home mulling over the end of the exchange between Berge and Daniel. Berge stressed that he does not understand why people still work on such things as local variable models. He added that, in his opinion (well informed as usual), all the foundational topics have been discussed and settled by the founders, and the rest are variations or re-discoveries by people who did not bother to inform themselves. Daniel replied that he basically agrees, but since these days many people are excited about closing loopholes, he argued that these discussions are relevant to the community. I think both are right, but they also forgot an important point.

I agree with Berge that there is no “fundamental” reason to keep researching local variables and their alternative friends (contextuality, Leggett-Garg…). Journals like Nature and Science are filled with such experiments these days; but this fashion, and the consequent flow of publications, is driven neither by controversy nor by the desire to acquire new knowledge, because everyone knows what the outcome of the experiments will be. It is a most blatant form of complacency on the part of a community. I agree with Daniel that there is some need to put order in the clamor of claims, so a clean analysis like the one in his paper is very welcome.

However, I think that there is a reason to close those loopholes: device-independent assessment! If quantum information tasks are to become practical, this is a really meaningful assessment. Experimentalists (I mean, the serious ones) do worry about side channels. If they could go for a device-independent assessment, their worries would be excluded by observation. But to get there, you need to close the detection loophole.

I also think that the notion of device-independence is a genuine advance in foundations. I side with Berge when it comes to all the debates about “wave vs particle”, “indeterminacy”, “incompatible measurements”… On such topics, yes, the founders settled pretty much everything. But I don’t see how people immersed in those terms of debate could have anticipated an assessment of non-classicality made without describing the physical system at all: that is, without saying that “it” is an electromagnetic field in a box (Planck), or a magnetic moment in a magnetic field (Zeeman and Stern-Gerlach).

Now, you may wonder why device-independence does not attract as much public praise and excitement as the other stuff. I don’t know, but several reasons may contribute to this situation:

* The debates on waves and particles have percolated into the general educated public. Since there is no “clear” explanation available (there cannot be, if by “clear” we mean “understandable in everyday terms”), these educated people think that the problem is still open. Scientific journalists, for instance, pick up immediately every paper that hints at some advance in wave-particle blabla — I suggest they always consult Berge before writing enthusiastic nonsense. The idea of device-independence is too novel to generate such excitement.

* None of the great North-American prophets of the church of the larger Hilbert space (i.e. the quantum information community) is preaching for device-independence. The topic is being pushed from a few places in Europe, where they have a network (principal investigators: Antonio Acin, Nicolas Gisin, Serge Massar, Stefano Pironio, Jonathan Barrett, Sandu Popescu, Renato Renner), and from Singapore (Artur Ekert and myself).

* Device-independent theory is tough (you need to compute bounds without assuming anything about your system, using only the fact that you observed some statistics); experiments are even tougher (you need to close the detection loophole at the very least; as for the locality loophole, either you close it too, or you need a good reason to argue it away). So it’s a sort of “elite” topic, which does not gain visibility from mass production — yes, a constant flow of papers, even if most of them are deemed wrong or pointless, does contribute to the impression that a topic is hot and interesting.

And finally, the most powerful reason: I am neither Scott Aaronson nor John Baez, so nobody reads my blog 😉