An ongoing experiment in sociology of science

Two months ago, Pusey, Barrett and Rudolph posted on the arXiv a paper with the title The quantum state cannot be interpreted statistically. It generated a lot of hype, and also serious interest. As for myself, I have not fully understood it yet (my latest attempt is at the bottom of this series of comments), but that is not the point now.

Two days ago, the same Barrett and Rudolph posted a new paper on the arXiv (with two other co-authors). The title: The quantum state can be interpreted statistically.

What’s going on? There is no mystery: in the first paper, the no-go theorem was proved under some assumptions; in the second paper, one of the assumptions is removed and an explicit model is constructed. Scientifically, this is nice: it clarifies the conditions under which a statistical interpretation can be constructed.

This post is not meant as a criticism of the scientific content of those papers — I don’t do refereeing on demand, and even less by blog. I am just alerting those interested in the sociology of science. Indeed, in the coming week or so, it will be very instructive to monitor the reaction of the scientific media to this new paper. The claim as stated in the title would deserve the same hype as the previous one (if you care about A, the propositions “A is false” and “A is true” carry the same importance). My prediction is that, while there may be discussions on scientific blogs, the journals will not pick up the story this time. Anyway, let’s not waste more time discussing all possible scenarios: we’ll discuss post factum the one that actually happens.

Advances in foundations?

Yesterday I attended a talk by Daniel Terno. It was about one of his recent works, re-examining the so-called delayed-choice experiment and its implications. It was a well-delivered talk with an artistic touch, as usual for the speaker. Berge Englert, who was in the audience, made a few historical observations of which I was not aware: notably, that the idea of delayed choice dates back to 1941 [C.F. von Weizsäcker, Zeitschrift für Physik 118 (1941) 489] and that Wheeler conveniently omitted to quote this reference on some occasion (I don’t know how he knows this, but Berge knows a lot of things). I learned something during those 45 minutes 🙂

I went home mulling over the end of the exchange between Berge and Daniel. Berge stressed that he does not understand why people still work on such things as local variable models. He added that, in his opinion (well informed as usual), all the foundational topics have been discussed and settled by the founders, and the rest are variations or re-discoveries by people who did not bother to inform themselves. Daniel replied that he basically agrees, but since these days many people are excited about closing loopholes, he argued that these discussions are relevant to the community. I think both are right, but both also missed an important point.

I agree with Berge that there is no “fundamental” reason to keep doing research on local variables and their alternative friends (contextuality, Leggett-Garg…). Journals like Nature and Science are filled with such experiments these days; but this fashion and the consequent flow of publications is not driven by controversy, nor by the desire to acquire new knowledge, because everyone knows what the outcome of the experiments will be. It is a most blatant form of complacency within a community. I agree with Daniel that there is some need to put order in the clamor of claims, so a clean analysis like the one in his paper is very welcome.

However, I think that there is a reason to close those loopholes: device-independent assessment! If quantum information tasks are to become practical, this is a really meaningful assessment. Experimentalists (I mean, the serious ones) do worry about side channels. If they could go for device-independent assessment, their worries would be excluded by observation. But to get there, you need to close the detection loophole.

I also think that the notion of device-independent is a genuine advance in foundations. I side with Berge when it comes to all the debates about “wave vs particle”, “indeterminacy”, “incompatible measurements”… On such topics, yes, the founders settled pretty much everything. But I don’t see how people immersed in those terms of debate could have anticipated an assessment of non-classicality made without describing the physical system at all: that is, without saying that “it” is an electromagnetic field in a box (Planck), or a magnetic moment in a magnetic field (Zeeman and Stern-Gerlach).

Now, you may wonder why device-independent does not receive as much public praise and excitement as the other stuff. I don’t know, but several reasons may contribute to this situation:

* The debates on waves and particles have percolated into the general educated public. Since there is no “clear” explanation available (there cannot be, if by “clear” we mean “understandable in everyday terms”), these educated people think that the problem is still open. Scientific journalists, for instance, immediately pick up every paper that hints at some advance in wave-particle blabla — I suggest they should always consult Berge before writing enthusiastic nonsense. The idea of device-independent is too novel to generate such excitement.

* None of the great North-American prophets of the church of the larger Hilbert space (i.e. the quantum information community) is preaching for device-independent. The topic is being pushed from some places in Europe, where they have a network (principal investigators: Antonio Acin, Nicolas Gisin, Serge Massar, Stefano Pironio, Jonathan Barrett, Sandu Popescu, Renato Renner) and from Singapore (Artur Ekert and myself).

* Device-independent theory is tough (you need to compute bounds without assuming anything about your system, using only the fact that you observed some statistics); experiments are even tougher (you need to close the detection loophole at the very least; as for the locality loophole, either you close it too, or you need a good reason to argue it away). So it’s a sort of “elite” topic, which does not gain visibility from mass production — yes, a constant flow of papers, even if most of them are deemed wrong or pointless, does contribute to the impression that a topic is hot and interesting.

And finally, the most powerful reason: I am neither Scott Aaronson nor John Baez, so nobody reads my blog 😉

A tale of 2011

Many things happened in 2011, for which I can only be thankful. I wanted to put one on record which may otherwise be missed, because it is about a “failure” — or better said: a beautiful reaction to a disappointing realization.

Starting in August 2010, a student of mine, Thinh, had been studying a new class of protocols for quantum cryptography, inspired by a previous work. By April, he had managed to extend the key mathematical objects to very general scenarios. This was his Final Year Project (FYP), which the university rated as “Outstanding”. A few months later, together with Lana (a post-doc), we prepared a paper and submitted it to Physical Review Letters (PRL; for the unaware: one of the most prestigious journals in physics).

When the referee report came, the tone was expected: “good work but not of broad enough interest” — very common nowadays for quantum cryptography. The referee stressed how much he/she liked our generalization, i.e. Thinh’s result. With a few modifications, we could have had the paper published in Physical Review A (PRA; still a very good journal, edited by the same society; a Tier 1 journal at NUS, for the sake of the bureaucrats who care about such classifications).

However, one of the referee’s small comments caught our attention: we realized that the family of protocols we had considered was uninteresting! In a nutshell, these protocols collect a lot of information, but then discard much of it and rely on the rest. Why should one do that? In other words, all that we did was correct and even elegant, but the object of our study was sort of pointless.

Now you see the alternatives we were facing: (1) sweep this realization under the carpet, make the modifications suggested by the referee and submit to PRA, with quasi-certainty of being accepted; (2) forget about this paper and instead write a technical note, explaining why these protocols are not interesting, to be sent to a very specialized (i.e. less visible) journal. For me, there was no doubt that (2) was the correct course, but I let Thinh and Lana decide — and I am very proud to say that they took the right decision 🙂 The paper has duly been re-written and is under consideration at a specialized journal of our field.

Now comes the scary part of it. I told this story to several friends working in the academic world, over coffees or lunches or other informal meetings. Many of them, especially the younger ones, were astonished: “Wow, you guys are so honest! I know many who would never have passed up the chance of publishing in a Tier 1 journal”. For myself, I am sure that Thinh and Lana have made a bigger step in their careers by choosing the right course: if you keep your standards high, Tier 1 publications will come.

Happy New Year!


Everyone is speaking of it, part II

The more I hear about this result, the more I fear that the media have picked it up only because they misread the meaning of the title… Let me explain what is done there as simply as I can. I’ll let the reader decide if they think the media could have understood this 🙂

Let L (lambda in the paper) be a list of deterministic instructions of the type {“Measurement A –> give result a”, for all measurements}. Since quantum states do not predict deterministic results for all measurements, a single list is trivially inadequate. But there is a very natural way to generate randomness: just pick different lists {Lk} with probability pk each. So, the model is:

Quantum state of a single object <–> {Lk, pk}.

What the paper proves is that no two quantum states can share any list: the set of lists with probability non-zero uniquely identifies a state. In other words, giving the possible lists, or even just one of them, is equivalent to describing the state…

… for a single object! Indeed, Bell’s theorem proves that not only a product of lists {La,pa}x{Lb,pb}, but even a general distribution over product lists {LaxLb, pab}, cannot describe entanglement. So lists just don’t seem to do the job. Personally, I can’t believe that the randomness of one quantum object comes from a list, when we know that the randomness of two quantum objects cannot come from a list.
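The failure of lists for two objects is easy to check by brute force. Here is a small sketch of mine (not from any of the papers discussed): enumerate all deterministic two-setting lists for each party and compute the CHSH combination; any model {LaxLb, pab} is a convex mixture of these values, so it can never beat the maximum over deterministic lists, whereas entangled states violate that bound.

```python
from itertools import product

# A deterministic "list" L for one party: a pre-assigned answer (+1 or -1)
# for each of two possible measurement settings.
single_party_lists = list(product([+1, -1], repeat=2))

# A product list (La x Lb) fixes all four correlators E(x, y) = a(x) * b(y).
# The CHSH combination E(0,0) + E(0,1) + E(1,0) - E(1,1) for each product list:
chsh_values = [
    a[0] * b[0] + a[0] * b[1] + a[1] * b[0] - a[1] * b[1]
    for a in single_party_lists
    for b in single_party_lists
]

# A distribution {La x Lb, pab} gives a convex mixture of these values,
# so it is bounded by the maximum over deterministic product lists:
local_bound = max(chsh_values)
print(local_bound)  # 2, while entangled quantum states reach 2*sqrt(2) ≈ 2.83
```

Sixteen product lists suffice for the enumeration, and every one of them gives a CHSH value of +2 or -2: no mixing weights pab can push the average above 2.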

In the same vein, I have a small problem with the logic of the proof. One constructs a family of product states, which should obviously be described by products of lists, and measures them by projecting on a suitable family of entangled states, which… which… wait a second: how does one describe entangled states in that model?? It seems that the closest attempt was Spekkens’ toy model, which reproduces many nice features of quantum physics, but unfortunately not (guess what?) the violation of Bell’s inequalities. Maybe the contradiction exploited in the proof comes from the fact that there is no description of entangled states in a model with lists?

That being said, this paper does add something for those who were still trying to believe in lists as the explanation of quantum randomness — and the more this idea is shown to be inadequate, the better 🙂

Note added: I was convinced that this post misses the point, but it triggered some nice follow-up; so please read the subsequent thread of comments: the “truth” may be at the bottom — or in the exchange 😉

Everyone is speaking of it

Everyone in my field has been fascinated by a work of Pusey, Barrett and Rudolph. I did not understand it, so I wrote to the authors. Jon Barrett replied in a crystal clear way. Here are his words on what they have proved:

“Suppose there is a hidden variable model, such that a (single) system has a hidden state lambda, and when I prepare the system in a quantum state |phi>, this actually corresponds to some probability distribution rho_{phi}(lambda). If I prepared |psi> instead, then this would correspond to the probability distribution rho_{psi}(lambda). It is obvious that rho_{phi}(lambda) and rho_{psi}(lambda) cannot be the same distribution. But we ask, can it happen that the distributions rho_{psi} and rho_{phi} overlap? The answer is obviously “no” if |phi> and |psi> are orthogonal. But what if they are non-orthogonal? We show that the answer is “no” in this case too. For any pair of quantum states which are not identical, the distributions rho_{phi}(lambda) and rho_{psi}(lambda) must be completely disjoint. Hence for any lambda, there is only one possible quantum state such that Probability(lambda|psi) is nonzero. This means that if someone knows lambda, then they know the quantum state. The whole of the quantum state must be “written into” the actual physical state of the system”. Jon also pointed me to Matt Leifer’s post, which is indeed a very clear critical appraisal.
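In symbols (my own shorthand, not notation taken from the paper), Jon’s statement boils down to a claim about the supports of the two distributions:

```latex
% Preparing |\psi\rangle or |\phi\rangle induces distributions
% \rho_\psi(\lambda) and \rho_\phi(\lambda) over the hidden variable \lambda.
% The theorem: for any pair of distinct states, even non-orthogonal ones,
\operatorname{supp} \rho_\psi \,\cap\, \operatorname{supp} \rho_\phi
  \;=\; \emptyset
\qquad \text{whenever } |\psi\rangle \neq |\phi\rangle .
% Equivalently: each \lambda has nonzero probability for at most one quantum
% state, so whoever knows \lambda can read off the quantum state uniquely.
```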

Now, why are people excited?

Well, many seem to argue that this paper falsifies some attempts at “being moderate”, cornering people into adopting one of the following positions:

* A realism with “crazy” features (many-worlds with the existence of all potentialities; or Bohmian mechanics with potentials that change non-locally);

* A complete renunciation of realism: we cannot hold any meaningful discourse on quantum physics; we can only apply the theory that gives us correct predictions for our observations.

I am not sure that the alternatives are so sharp… anyway, now I have to go back to atomic physics and figure out formulas that describe reality — sorry: the perception of the experimentalists 😉


The randomness of evaluation

You may expect that, with all the advice I am giving out, I am a great expert at evaluating presentations… Well, I recently had another proof that the task is probably undecidable.

I was in a panel evaluating student presentations. My gut rated some of them as perfect as one can expect at that level; others as catastrophic as it can get; a couple of them average. But instead of using our guts, we had agreed on an evaluation grid, giving points for criteria like “quality of the slides”, “delivery”, “ability to answer questions”. Then I had to admit that the delivery of the “perfect” presentations was maybe not so perfect, that the slides of the “catastrophic” ones were not so catastrophic… In short, the analytical approach tends to squeeze the distribution around the average, while the synthetic approach of the gut tends to divide presentations into just “good” and “bad”.

Now, which approach should one follow? There is no clear-cut answer. In our super-bureaucratized universities, the analytical approach is almost compulsory: the other would be regarded as too subjective. Moreover… hmmm… I myself don’t trust the gut feelings of some of my colleagues, so why should they trust mine?

However, let me tell you a personal story which shows how the analytical approach can lead to monumental failures. It happened a few years ago.

I was assigned to evaluate some projects for a scientific competition of high school students. In one of the projects, the student had combined several notions and techniques of modern physics, in order to propose a new device. According to all the criteria I was asked to judge, the project was absolutely outstanding: “creativity”, “effort”, “understanding of the basics”, “quality of the report” etc. It suffered from a little problem: the whole proposal did not make any sense! The reason is that those notions and techniques just could not be combined in the same implementation. Unfortunately, “correctness” was NOT one of the criteria we were asked to assess and grade. I had to use the space left blank for “additional remarks” and write there “The whole proposal is flawed”.

This happened twice, first evaluating the report and later a poster presentation. Guess what? The bureaucrats had no time to read the additional remarks; they just summed up the points in the main table: the student won the very first prize with high honors and was invited to attend the international phase of the competition in the U.S.

This is enough to prove my point. However, for completeness, let me tell you how the story ends. Upon receiving notification of the triumph, the school contacted me with an enthusiastic e-mail, amounting to: “We heard that you were involved in the evaluation; now he has to go to the U.S. and we need to make sure that the proposal looks perfect there: would you mind helping us fix all the remaining details?” To which I replied in capital letters: THAT PROPOSAL IS FLAWED, followed by the list of flaws. After some days of silence, a much humbler e-mail popped up in my box: “Is there anything we can do to fix it?” Well, to fix the science, one could just replace electrons with neutrons. But since the whole device was meant for use in a computer, the applied part would then become a proposal to put sources of neutrons (= a nuclear reaction) in personal computers, which may not be deemed practical or even desirable… Finally, the outbreak of H1N1 solved the case: all student trips to the U.S. were canceled and the project was forgotten. The school and the student retained all the honors — which they both fully deserve, maybe not on the basis of that project, but for the ensemble of their achievements; of course, this is only my gut’s assessment 😉

Presentations: special for students

The mistakes addressed here may happen at any level, but they are very frequent in a specific case: students having to present their first research projects within their university. So I’ll write with them in mind; feel free to generalize.

Mistake 1: assume that the basics of your work are known by everyone in the audience

Let me explain this by a public confession: at this very moment, I would not be able to recover from memory such basic stuff as the equation of an incompressible fluid, the critical exponent of the magnetization in the Ising model, the definition of Young’s modulus, or the reaction the neutrino was first postulated to explain… Similarly, most of my colleagues may need to be reminded that Bell’s inequalities are tests on two entangled particles, or that quantum cryptography is about Alice trying to establish a secret key with Bob while Eve tries to spy, etc.

We don’t need much to remember these things: show the equations or the schemes on a slide, spend 20-30 seconds of words, and it will come back. So the point is not to spend too much time recalling trivialities, just to set the stage quickly.

Mistake 2: assume that you have to show all that you have done (especially the tedious part)

Many students think that they have to impress the audience with their advanced results; or even worse, that they should show how much they have worked. The tedious part of your work must indeed be reported somewhere: in the written report that normally accompanies this kind of university project. That is where the lines and lines of code, of equations, of calibration procedures for your devices belong… But not in the oral presentation.

Why? Well, my explanation is: a reader can choose which parts of your report to read carefully and which to skim; but in an oral presentation, the audience cannot really choose. So, since you have only one message to convey, try not to convey your sufferings 😉

Last year we had a fantastic presentation by a student on chaos-induced tunneling (a topic in quantum chaos, of which I knew nothing before). He introduced the main ideas, then showed one (one!) graph and said “this is the spectrum for the model I have studied: this peak corresponds to this orbit, etc”. It was patent to everyone that, in order to get there, he had to write many complicated equations in a computer, then sort out the results — but he was kind enough not to bother us with all the definitions and show us directly the digested result.

As a counter-example, I remember a student who spent her presentation telling us how she wrote a code to map a database that was using x=0 or 1 onto another database that was rather using x=+1 or -1. The audience can safely assume that you have made no mistake in such technicalities. Incidentally, that student had forgotten to tell us what the information in the database was…

Mistake 3: lack of suitable citations

I have seen many presentations in which the only references given looked like: [Very famous fellow 1905], [My supervisor 2007, 2009]; the latter being the two articles, often published in minor journals, that the supervisor suggested as a starting point. By the end of your project, you are supposed to know many more references, notably books and review papers.

A correct presentation has:

  1. Basic introduction, with reference: [Very famous fellow 1905]
  2. Overview of progress along the years, with reference: [Book 1960], [Recent review 1998], and maybe a couple of milestone papers
  3. Scope of my project, with reference: [My supervisor 2007, 2009] and [some other active fellow, 2008].

Mistake 4: repeat blindly what you have read

Let me explain this by two examples I witnessed:

  • Presentation on a source of entanglement by Kwiat et al. The student had read the following motivation in the introduction of the paper: “The generation of entangled photons is promising for applications, for instance in super-dense coding”. He just repeated that sentence in his presentation. I asked him what “super-dense coding” is: he had no idea!
  • Presentation on decoherence. The student wrote on a slide that “typical decoherence times are 10^{-23} seconds”. I asked him which physical systems he had in mind; answer: “hmmm, none specifically, this is very general”. Well, if it were so general, we would never have observed any quantum phenomenon! That number is probably an estimate for a macroscopic system.

I like to compare this to a sentence from the 1st Letter of St Peter: “Always be prepared to give an answer to everyone who asks you to give the reason for the hope that you have” — just apply this to your presentation: be ready to explain why you are saying that!

If this last point is not clear yet, don’t worry: a new round of presentations is coming soon, I’ll surely have new examples to update you 😉

Presentations (2) let’s begin

What do you start your presentation with? I hear a roaring choir of voices shouting: “The outline!”


Wrong! If people don’t have a clue what you are going to talk about, the outline will be wasted. Your first slide should be a sort of abstract: “I am working in this field, there have been lots of advances, we are currently here, and my talk will present how to move to there”.

Let me illustrate it with an example (it’s not a real presentation, just one I rapidly cooked up for this post). Suppose I want to present the research reported in this nice collaboration between my student Yimin and a group in Bilbao. The title is “Ultrafast gates in circuit QED”. The outline would look something like this:

Hmmm… apart from the fact that “introduction” and “conclusion” are sort of obvious… what is coupling to what? what is a qubit? a gate? come on, what are we talking about? The public can’t appreciate this outline, they will forget it even before you switch to the next slide!

However, consider putting this slide BEFORE the outline:

(it would be even better to add a drawing and maybe some references).

Now it starts making more sense — at least, if you are a little bit into quantum optics. We are speaking of some things that behave like atoms; we want to couple them using some field, in a regime that was not explored before. Now go back to the outline: the talk will describe (1) this new regime, (2) how to build the artificial atom, (3) the proposal for the gate that couples two such atoms.

OK, if you are completely unfamiliar with this part of physics, you are as lost as before on the specific topic of our paper… but you have got the idea concerning the presentation, have you not 😉

Trust science news?

Yesterday I was invited as a panelist in a discussion about objectivity in science, organized by the students of the philosophy interest group (they don’t seem to use an acronym, I wonder why…). I told the story of the black paper of quantum cryptography and the reactions it provoked: how nobody ever questioned the “truth” of what was written there, some experts legitimately questioned the “convenience” of writing that piece (which admittedly had too pessimistic an undertone), and some non-experts got mad because they had been using “the success of cryptography” to push their own agenda and did not expect those problems.

The other panelists and the students contributed many interesting ideas. I select two of them for consideration:

  1. A student described quantum physics as “not predictive”. I had to correct him: it is probably the most predictive sector of science, both in precision and in scope. But it is true that this is NOT how people perceive quantum physics: it is associated with weird claims (true) bordering on science-fiction (wrong). It is really a priority in the communication of science to convey the idea that quantum physics is first and foremost a solid body of theoretical and experimental knowledge. As for its weirdness, it’s fascinating, not as funky science-fiction, but as deep philosophy of nature!
  2. Well-known problems in the communication of science were raised: overstatements by scientists themselves and by the media, the proliferation of crackpots who pass themselves off as “experts” on some topic… The moderator then asked: how can the public tell right from wrong? One of us gave the only possible answer: “ask someone you trust”. The bottom line is that science is a human endeavor in which, I believe, sound knowledge can be reached (I am a “realist” in this sense), but it is far from “brute evidence”.

Hmmm… I just wonder how close to brute evidence we can come in quantum physics with the device-independent program… I’ll have to try and explain this to the people I met yesterday 🙂

Presentations (1) basics

Time for student presentations (talks and posters) is approaching in my university, so let me start a few posts on this exercise. Today, the two basic rules. Disclaimer: I hesitated to write these because they are very, very basic… but then I thought about some presentations I heard recently, and not by students… and decided that maybe it is useful after all 🙂 so let’s go.

Basic rule 1: choose your message and get it through

I bet that each time you were disappointed by a presentation, it was for one of the following two reasons: (i) “I did not understand”, or more precisely “I don’t know what I should have taken away”, or (ii) the content itself was disappointing. For (ii), there is no real remedy. But assuming that the presenter is competent and is dealing with potentially interesting content, the success of the presentation depends on choosing the message to be conveyed and correctly gauging the audience.

At the moment of preparing, think: what do I want these people to remember? Normally, one message is enough. Surely you have done a lot of calculations, or have set up a complicated experiment. You can use one slide, or one corner of your poster, to recall all this; but nothing more… unless of course the message you chose to convey is precisely “I toiled a lot on this and now I want you to share in my toil”.

If you don’t choose the message, the message will be chosen for you; and most of the time, it will be “this presentation was a waste of time”.

Basic rule 2: stick to the time

Even if you are the smartest presenter with the most exciting subject, people get very nervous if your talk lasts longer than usual; or if you are so talkative in front of your poster that people don’t manage to get away.

For talks, just run through the presentation once or twice beforehand, fully and aloud. If some friends are willing to listen, great; otherwise, shut yourself in your room and do it. Golden rule: 1 slide = 2-3 minutes (yes: a 15-minute presentation is only 6-7 slides, plus the title!).

For posters, look the people you are talking to in the eye: incipient boredom is written there very clearly.