Random for whom?

At the turn of the year, I have to admit that I have not used this blog very often; but I just wrote a post for another one, so I may as well link it here:

http://www.iqoqi-vienna.at/random-for-whom/

I am preparing another text on randomness, of a more unusual kind… more soon 🙂 In the meantime, Merry Christmas and Happy New Year!

Free will, the quantum and the cosmos

The scientific media have recently reported a proposal aimed at closing the “free-will loophole” in Bell experiments with light coming from distant quasars. Wow: free will, quantum mysteries and the cosmos in a single paper! This could only make headlines!

Let me take you on a tour to understand what this is about. The idea that some form of “free will” (in precise jargon, “measurement independence”) is required for a Bell test to be meaningful is not new: John Bell himself stressed it. But I dare say that Francis Bacon and Galileo, and probably even Aristotle, had understood measurement independence — not as applied to Bell tests, for sure, but more modestly as a cornerstone of the scientific method. If you measure the position of a star, you assume that the star is not changing its behavior just because it is being observed. If you drop a stone from different heights, you assume that the stone does not adapt its motion to the height you dropped it from. Think what would be left of physics and chemistry if matter could look around, guess which experiment it is about to be submitted to, and rapidly cook up an adaptive strategy. And why do you think biologists try not to be seen when they observe the behavior of animals which, unlike stones, might indeed adapt to the observers’ presence? Well, you get the point.

Still, when it comes to quantum physics, some results are so counter-intuitive that some people are ready to doubt everything. Could it be that the famous two entangled photons, at the moment of leaving the source, are aware of the measurements they are going to be submitted to? Or, reversing the roles, is it conceivable that the photons actively influence the random number generators that choose the measurements, in order to bias the choice towards a favorable one? Are photons closer to zebras than to stones? Raise this kind of concern about electric trains and robots, and you get rightly classified in the paranormal category. Raise it about entangled photons, and it becomes respectable science (yes, I ride on it too).

Anyway, let’s continue playing the game. I play the guy who is strongly suspicious of a conspiracy between the thing you call “a source of entangled photons” and the things you call “random number generators” (RNGs). Can you do something to convince me that my suspicions are unfounded? If I am absolutely paranoid, you can’t: super-determinism or Matrix-like simulations cannot be falsified by observation. But suppose I am willing to concede something. Specifically, I concede that the RNGs are really initially uncorrelated with the source. Then, measurement independence can only be violated if some signal propagates from the RNGs to the source. Haha, here you catch me: every signal must propagate no faster than the speed of light (OK, OK, I concede you that one too). So, if suitable space-like separation is guaranteed, the signal will never reach the source in time: the photons have left the source without knowing which measurements await them. Measurement independence is guaranteed. This reasoning was first made explicit (to my knowledge) in a paper by Zeilinger’s group. It is nice science: state assumptions and derive falsifiable consequences.
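As an aside for the quantitatively minded, the whole argument boils down to checking that the relevant events are space-like separated. Here is a minimal sketch with made-up numbers (the 30 m and 50 ns are assumptions for illustration, not the parameters of any actual experiment):

```python
# Minimal sketch (made-up numbers): check that the choice of measurement is
# space-like separated from the emission of the photons, so that no signal
# travelling at the speed of light can carry the choice to the source in time.

C = 299_792_458.0  # speed of light in m/s

def is_space_like(distance_m, time_s):
    """True if light cannot cover distance_m within time_s,
    i.e. the two events are space-like separated."""
    return distance_m > C * time_s

# Hypothetical numbers: the RNG sits 30 m from the source and fixes its
# choice 50 ns before the photons leave the source.
distance = 30.0          # metres between RNG and source (assumption)
time_available = 50e-9   # seconds between choice and emission (assumption)

print(is_space_like(distance, time_available))  # True: 30 m > ~15 m
```

If the condition fails, a light-speed signal from the RNG could in principle reach the source before the photons leave, and the guarantee evaporates.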

But now… hmmm… maybe I want to walk back a little on my concession: maybe you can try and convince me that the random number generators are really independent of the source, at least up to some degree of confidence. You think and… I SAID YOU THINK: pause and think about how you would try to convince me, before continuing!

Got it? OK, let’s compare our answers.

What I would like to see is a SIMPLE random number generator, something I can reasonably trust. A resistor directly connected to an oscilloscope, showing a trace that behaves like thermal noise, might do it; or a visible light beam sent onto a bulky beam splitter before reaching some detectors. Or, if I don’t trust electronics (maybe the conspirators use the power network of your city), I may be happy with a grain of pollen performing Brownian motion. Or maybe you are so kind that you go all the way and ask two human beings to act as random number generators (although, as is well known, we humans are not great at generating randomness). I know well that all these options are questionable, but again, if I don’t concede that there can be randomness somewhere, you will never be able to convince me otherwise.

What probably would NOT convince me is to be shown a set of two telescopes pointing at some distant points of the universe, connected with all kinds of filters and electronics, accompanied only by your plea to “trust that the only signal detected at the end of the measurement chain is that of some very distant quasars, which have not talked to each other, nor to any matter here on Earth, for billions of years”.

Still, who can resist the charm of having free will, quantum mysteries and the cosmos in a single paper?

Collapse models collapse in my esteem

In preparing my teaching for the coming semester, I was led to consider the possibility of introducing the students to collapse models. From afar, I was keen to adopt a moderate stance like “yeah, this is not standard stuff, but it’s intriguing, and it’s worth knowing”. (un)Fortunately, while in my research I fall every now and then into the sin described in my previous post, when it comes to teaching I am really incapable of regurgitating material from a book or a review article. So I spent some time thinking about how I would present collapse models to an audience who will already have studied Bell’s theorem in its device-independent approach (lecture notes available here). And I came to the conclusion that — I probably won’t present them.

Let us take one step back. The desire for collapse is triggered by the quantum description of setups like the double slit experiment. Each electron produces a very sharp dot on the screen, as one would expect from a particle. However, after detecting many electrons, the overall distribution of dots is an interference pattern, like the one expected for a wave. These are the facts. The rest depends on how you narrate the story. In the most frequently encountered narrative, the electron is delocalized like a wave before hitting the screen, then it “collapses” to a given location upon hitting the screen. A collapse model is a model that aims at describing how this transition happens.
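If you like to see the narrative-free part laid out, here is a small simulation of the facts alone (my own illustrative sketch; the cos² fringe profile and all the numbers are idealizations, not a model of any real experiment): every detection is a single sharp dot, and the interference pattern appears only in the accumulated statistics.

```python
# Illustrative sketch: single sharp dots accumulating into an interference
# pattern. The cos^2 fringe profile and all numbers are idealizations.
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-1.0, 1.0, 2000)          # positions on the screen (arbitrary units)
intensity = np.cos(5 * np.pi * x) ** 2    # idealized two-slit fringe profile
prob = intensity / intensity.sum()        # normalize to a probability distribution

# Each electron gives ONE sharp dot, sampled from that distribution.
dots = rng.choice(x, size=20000, p=prob)

# A single dot says nothing about fringes; the histogram of many dots does.
counts, edges = np.histogram(dots, bins=50)
print(counts)  # the fringe structure appears only in the accumulated counts
```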

Some very smart people, rigorously trained in quantum thinking since the cradle, realize immediately that such a narrative is fishy, denounce it as such and ask us to move on. Less smart and/or less rigorously trained people, like me, need more evidence to be convinced. What happened to me in preparing my teaching is that I suddenly collected for myself such evidence. And now I am trying to share it with you.

So, let’s take my starting point and that of my future students: we know Bell inequalities. We know in particular that any classical mechanism aimed at reproducing quantum correlations must be non-local. “Wow, cute: non-locality!” Well, not so cute. For one, the hypothetical mechanism must be infinitely rigid in space-time, or in other words, it must propagate information at infinite speed (yes, infinite, not “just” faster-than-light). For two, the predictions of quantum physics are more constrained than those of the most general non-local theory (even under the so-called “no-observable-signaling” assumption): so, if you toy around with a non-local mechanism, you must further constrain it ad hoc in order to recover the observations. In other words, not only does a non-local mechanism bring no additional predictive power: it must be fabricated precisely so as to match the observations, which we continue predicting using quantum theory. Really, not so cute.
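To put numbers on “more constrained”: in the CHSH formulation, any local (classically stochastic) model gives |S| ≤ 2, quantum physics gives |S| ≤ 2√2 (the Tsirelson bound), and the most general no-signaling box can reach |S| = 4. A minimal sketch of the quantum value for the singlet state, at the standard optimal angles:

```python
# Minimal sketch: CHSH value for the singlet state at the standard angles.
# Local (classically stochastic) models:  |S| <= 2
# Quantum physics:                        |S| <= 2*sqrt(2)  (Tsirelson bound)
# General no-signaling boxes:             |S| <= 4           (e.g. the PR box)
import numpy as np

def E_singlet(a, b):
    """Correlator E(a,b) = -cos(a-b) for spin measurements along directions
    at angles a and b on the singlet state (standard textbook result)."""
    return -np.cos(a - b)

# Standard optimal angles for CHSH with the singlet
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

S = E_singlet(a0, b0) - E_singlet(a0, b1) + E_singlet(a1, b0) + E_singlet(a1, b1)
print(abs(S), 2 * np.sqrt(2))  # both ~2.828, above the local bound of 2
```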

Back to collapse now. A collapse model worthy of its name would certainly be applicable beyond the example of localization in the double slit experiment. Specifically, take a Bell experiment: two photons are prepared in an entangled state, so the polarization of each one is undetermined. Upon measurement, one is found with horizontal polarization (H), the other with right circular polarization (R). This is also a case of “collapse”, where something got determined that was previously undetermined. So the collapse model should describe it too.

Now it’s time to be more precise: what is your aim exactly, in constructing a collapse model? Here come two options:

  • You want a deterministic process: something that explains that in this run of the experiment, given whatever variable in the measurement apparatus, the first photon would necessarily collapse into H; and the second photon would necessarily collapse into R. This would certainly be a very pleasant complement to quantum physics for a classical mind. But Bell’s theorem is clear: the “whatever variable” that triggers such a collapse must be unpleasantly non-local as defined above. Are you ready to buy it? Then I have infinitely many collapse models ready for you. But think twice: are you really making physics more understandable by choosing this path?
  • You want a stochastic description: here, I am a bit at a loss as to what this wish amounts to. If by “stochastic” one means “classically stochastic”, we are back to the previous case. In fact, Bell’s theorem does not apply only to deterministic models, but also to classically stochastic ones (i.e. all those where the stochastic element can be attributed to ignorance; mathematically, those that can be described as convex combinations of deterministic models — the standard form is written out just after this list). If by “stochastic” one means “any form of mathematical model with some stochastic element” — well, then quantum mechanics is there, and there does not seem to be any need to complement it with a collapse model.
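For the record, “convex combinations of deterministic models” has a standard mathematical form, the local-hidden-variable decomposition to which Bell’s theorem applies:

$$ P(a,b|x,y) \;=\; \int d\lambda\; q(\lambda)\, P(a|x,\lambda)\, P(b|y,\lambda), $$

where x, y label the measurement choices, a, b the outcomes, and λ is the classical variable distributed with density q(λ). Any model of this form, whether each P(a|x,λ) is deterministic or not, satisfies the Bell inequalities.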

In a nutshell, it seems to me that collapse models were maybe a legitimate quest at a time when “localization” was presented as the fundamental non-classical feature of quantum physics (the very smart fellows mentioned above will tell you that there has never been such a time for them, but again, this post is for normal people like me). Now we have Bell’s theorem and the corresponding experiments. You don’t need to make Bell’s theorem your new foundational cornerstone, if you don’t want to; just take it as one of the many discoveries made in the 20th century thanks to quantum physics. In the light of this discovery, the fog of collapse models, which could be entertained for some time, seems to be dissipating and leaving little trace.

P.S. This ends up being a “negative” post: I criticize collapse models without proposing my own positive solution. At least, I know that there is one path that is not worth exploring. I am leaving now for three weeks of holidays and maybe I’ll find time to explore some other path (though, most probably, I won’t think about physics at all).

Physics and the bumper sticker

In the remote preparation for my Coursera course on randomness, I read Nate Silver’s The Signal and the Noise. I am not sure how much of it will enter my course, since I don’t plan to cover the topics he deals with (politics, the stock market, climate change, prevention of terrorism, baseball and poker). But the conclusion struck a chord.

The author lists seven approximations to describe the “efficient market hypothesis”, which run: 1. No investor can beat the stock market, 2. No investor can beat the stock market over the long run, and so on, until approximation 7, which is a sentence five lines long. Then he adds (emphasis is mine):

“The first approximation — the unqualified statement that no investor can beat the stock market — seems to be extremely powerful. By the time we get to the last one, which is full of expressions of uncertainty, we have nothing that would fit on a bumper sticker. But it is also a more complete description of the objective world.”

Sounds familiar? Let’s give it a try:

Example 1:

  • Bumper sticker: No extension of quantum theory can have improved predictive power
  • Expression full of uncertainty: the authors work under the assumption of no-signaling (so, if you are Bohmian, don’t worry, their result does not concern you). Then they assume a lot of quantum physics, but not all of it, otherwise the claim would be tautological. Beyond the case of the maximally entangled state, which had been settled in a previous paper, they prove something that I honestly have not fully understood. Indeed, so many other colleagues have misunderstood this work that the authors prepared a page of FAQs (extremely rare for a scientific paper) and a later, clearer version.
  • Comment: the statement “Colbeck and Renner have proved that quantum theory cannot be extended” is amazingly frequent in papers, referee reports and discussions. Often, it comes in the version: “why are people still working on [whatever], since Colbeck and Renner have conclusively proved…?” It is pretty obvious, however, that many colleagues making that statement are not aware of the “details” of what Colbeck and Renner have proved: they have simply memorized the bumper-sticker statement. I really don’t have a problem with Colbeck and Renner summarizing their work in a catchy title; what is worrisome is that other experts repeat the catchy title and base decisions solely on it.

Example 2:

  • Bumper sticker: The quantum state cannot be interpreted statistically [Yes, I know that the title of the final version is different, but this is the title that sparked the curiosity of the media]
  • Expression full of uncertainty: the authors work with a formalization of the notions of “ontic” and “epistemic” that is accepted by many people, though not by Chris Fuchs and some of his friends. They add a couple of other reasonable assumptions, where by “reasonable” I mean that I would probably have used them in a first attempt to construct an epistemic model. Then they prove that such an epistemic model is inconsistent.
  • Comment: too many people have commented on this paper already. The latest contrary claim was posted online today; I have not read it, because I am really not following the debate, but for those who are interested, here it is.

Example 3:

  • Bumper sticker: either our world is fully deterministic or there exist in nature events that are fully random [the use of “either-or” makes it too elaborate for a real bumper sticker, but for someone who browses these papers, the sentence is basic enough]
  • Expression full of uncertainty: the authors consider a very weak source of randomness, something like a very biased coin; in fact, it can be more perverse than that, because it can have correlations across successive tosses. But it cannot be completely perverse: the authors make an assumption about its structure (technically known as “Santha-Vazirani”, after the two persons who proposed it; a minimal statement of the assumption is written out after this list). Then they prove that, if this source is used as a seed for a specific quantum experiment, the outcomes of the experiment are guaranteed to be much more random. In the limiting case of an experiment lasting an infinitely long time, and whose results do not deviate at all from the optimal result allowed by quantum physics, the source can contain almost no randomness, while the final list will be almost fully random.
  • Comment: in a paper just published, we studied what happens if we remove the Santha-Vazirani assumption, so that the source can be as perverse as you wish. Not surprisingly, the conclusions become more pessimistic: now, one would need a fair amount of initial randomness in order for the quantum step to produce further randomness. Nothing wrong at all: some guys get a good result with an assumption, others test the limits of the assumption; this is the normal course of science. But read again the bumper-sticker statement: taken in itself, out of the paper where it belongs, that statement has not been “scientifically proved”; without the crucial assumption, it even sounds closer to being impossible to prove.
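For reference, here is a minimal statement of the Santha-Vazirani assumption mentioned above: each bit of the source may be biased and may depend on all the previous ones, but the bias is bounded by some ε < 1/2,

$$ \tfrac{1}{2} - \varepsilon \;\le\; P(x_i = 0 \,|\, x_1, \dots, x_{i-1}) \;\le\; \tfrac{1}{2} + \varepsilon . $$

Remove this bound, letting the conditional probabilities reach 0 or 1 on some bits, and the source becomes “as perverse as you wish”; that is the case studied in the paper mentioned in the comment.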

Scientific madeleines

The two conferences I attended these last weeks (CEQIP and Vaxjo) were pretty good in science, food, drink, location and atmosphere. For me, they were also full of Proustian madeleines: I have met again so many colleagues and realized how they have actually shaped my life, even when the interaction had been short.

  • Mario Ziman is one of the organizers of CEQIP. I met him at my very first conference in quantum information, in the castle of Budmerice near Bratislava, back in 2001. He was doing his PhD under the supervision of Vladimir Buzek; I had recently started my post-doc with Nicolas Gisin. As an outcome of those discussions, Mario and I (and Nicolas and Vlado and another student called Peter) worked on two papers about entanglement and thermalization. At that time, it was a rather unusual topic; now it is a big one, and at CEQIP alone we had at least three presentations on it. Probably none of the young authors was even aware of our old works, but Mario and I knew better than to struggle for recognition: we simply sat there in the back, enjoying the progress of the field and exchanging nods.
  • I have had fewer interactions with the other organizer, Jan Bouda; but I cannot forget a funny moment when he was visiting Singapore, probably in 2007. In the big old office of what was to become CQT, Andreas Winter, Nicolas Brunner and I asked him to explain his research. He started out: “I don’t know if you are familiar with quantum cryptography”… This time, I discovered that Jan is very familiar with Moravian wines and their weaker and stronger relatives.
  • Another Slovak at CEQIP: Martin Plesch. He is presently working in Brno and has picked up the topic of randomness. At the conference in Budmerice in 2001, he was an undergrad. He had been tasked with driving Nicolas Gisin and me to Vienna airport on the last day. It was raining, we were a bit late, and Martin was going rather fast on those country roads, keeping really, really close to the car in front.
  • In Vaxjo I met again Hans-Thomas Elze, a German working in Pisa, who is the organizer of a series of conferences in Tuscany. When I went in 2004, it was held in Piombino. At that time, Hans-Thomas was still working in Brazil: as a result, the proceedings of that conference were published in the Brazilian Journal of Physics. My paper dealt with an unconventional question and (as you can imagine from the journal) was forgotten, until the group of Stefan Wolf made great progress in 2011. The final solution of the problem appeared in Nature Physics. In Vaxjo, Hans-Thomas invited me to attend his next conference in September 2014. I don’t think there is an Etruscan Journal of Physics, but we’ll see…
  • For a few years now, I have been crossing paths with Mauro D’Ariano at least once a year, and we always have good conversations. In the middle of complaints about bureaucracy, punctuated by the typical Italian word –zz-, he keeps an exemplary scientific drive. A few years ago, we were having a fast-food lunch at the March Meeting in Boston. He was telling me that, in his maturity, he wanted to start tackling “really serious” problems. Concretely, he had been reading a lot about field theory, cosmology, relativity… and was declaring his disappointment at finding gaps in the usual arguments. He had decided to try and reconstruct physics from scratch… well, from some quantum form of scratch. Normally, I tend to dismiss beginners who find problems in what others have devoted their lives to — but here, and with Mauro, I could only agree. A few years have passed: his attempt at reconstructing all that we know from basic quantum building blocks has not hit a wall; on the contrary, he and his collaborators are deriving more and more results, and even the “experts” are starting to take them quite seriously. Thanks Mauro for showing what serious and constant work can do!

Why am I writing all this? For no special reason other than to record minute events and people who are part of my life as a physicist.

Peer review: quick guide for rejection

Recently, I have been very lucky with the referees of my papers. It is a good time to recall some occasions when I was not so lucky in the past, without the danger of venting unresolved personal anger, and to give some advice to young referees.

So, imagine you receive an article to referee from a prestigious journal; you don’t really understand what it is about, but your gut feeling is that it should be rejected. There are good and bad ways of doing that.

Here are some hateful strategies which border on the unethical:

  1. The paper presents a new idea and studies only the simplest case: dismiss it with “There is not much interest in the topic” or “The idea is interesting but is not applied to a realistic scenario”.
  2. The paper provides the solution to an open problem: dismiss it with “The idea is not new” or “The result is not surprising”.
  3. The ever-successful “it is not of sufficient broad interest to justify publication in this journal” can be applied at any time.

Notice how 1 and 2 can be used in sequence to block a full series of papers: one can first reject the initial idea by saying that the authors should work more, then reject the extensions by stressing that the idea has already been presented before.

How can you tell that these strategies are nasty? Basically, they use a very generic statement, against which the authors have no possibility of a scientific reply. If the authors are told “not of broad interest”, their only hope is to argue “yes, look closer, it is interesting”, which is not very convincing; if the authors are told “not surprising”, they can only try to convince the editor that “surprise” is maybe not a criterion, but editors get very nervous if you try to teach them criteria for acceptance…

So, what are better ways of rejecting? It depends on the paper, but when you write a report, you can check a few points:

  • If your criticism could be applied to a paper you admire, it is wrong. For instance, with strategy 1 above, you could have rejected the original quantum teleportation paper: bad idea, for sure. As for strategy 2, I am sure you yourself have written some paper with very good, but “not surprising”, results. Never use an argument that could be used against yourself!
  • If your report is topic-independent, it is wrong. Always base your rejection on the scientific content of the paper: “This is a marginal improvement over [work that you consider similar]”, “This is not new since it was already mentioned in [review paper]”…
  • As a referee you must feel moderately competent in the field; otherwise it is better to decline (journals know that I decline a lot). Assuming you think you are competent, the burden of clarity lies on the authors, not on you. So it is fine to write something like “It is not clear to me what these guys are trying to prove; they say it is very different from [reference], but it looks the same to me. Unless they clarify their contribution, I cannot accept the paper”.

P.S. As written in the first paragraph, this post refers to the case where the referee does not have strong objective arguments for rejection, but just a gut feeling (which may be perfectly right and licit) that a paper is of low quality. If the referee has objective arguments, of course he/she should use them!

The decline of impact factors

The influence of the impact factor is declining, according to a statistical survey which I reached starting from a blog post in Physics Today.

Many, including myself, will certainly welcome a scientific world in which it is no longer true that “research published in Nature is, by that very fact, of the highest quality” and that “a young scientist who has published in Nature has far higher chances of getting a job”. But we should not forget that the issue runs deeper.

In the past, careers in science were supposedly determined by a panel of wise men (I would like to add “and women”, but it would be an anachronism): as is well known, oligarchy is fair only in the eyes of those who share the same wisdom as the oligarchs. Presently, the panel of wise persons is still required for hiring, promotions etc., but there is now a demand for control by an independent, supposedly neutral authority. This motivates the demand for metrics, which may reduce the influence of some people’s whims but introduces other problems. I fear that we will never hit on the perfect system.

Back to the statistical survey: figures 4 and 5 are really intriguing: they indicate that only a few of the most cited papers are published in the most cited journals, and that the percentage has been declining since around 1990. I am not sure if this is an instance of Simpson’s paradox… What is even more intriguing is that each figure has two graphs, and it seems to me that, by the definitions used, the two graphs should add up to 100%; but they don’t. So either something is wrong with me, or with this analysis: better finish this post and go back to work.

Pope & media: basic guide

This time, I decided to post about something that is not related to science: the resignation of the Pope. After all, it’s public knowledge that I am a practicing Catholic. I know pretty well that most of those who browse this blog are not, and many just don’t care about religion: take this post as an exercise in critical spirit. I am not going to give you “my opinion”, because I am really nobody to have an opinion on such things. But I want to give you a guide to read the media, in case you follow the developments in the coming weeks.

Let us start with an obvious fact: journalists can’t be experts in everything and have to produce stories that attract attention. Also, they have to craft the story in the way the reader expects it. When it comes to scientific topics, we know pretty well how a piece of news should be cast in order to make it into the mass media: it will have to sound either like science fiction (faster-than-light communication, parallel universes, time travel…) or like an answer to our ultimate concerns (the existence of God, free will, faster computers and flatter screens).

Now, how do the media craft a story about the Catholic church? Since the 1960s, it has been customary to use the bipartite categorization “conservatives versus liberals” (at the beginning, the terms used to be “reactionaries versus progressives”, but the ideology that used that language has become less fashionable in the past decades). From afar, this may seem like as suitable a scheme as any other. In reality, this scheme is as wrong as wave-particle duality in quantum physics: by describing the truth as a tension between two extremes, it misses… well, the truth. I am going to propose to you an alternative scheme: it’s tripartite, but I guess you can handle the complication.

At one extreme you have those that we scientists tend not to like very much. They think that the church has gone astray in the last 50 years or so, by speaking in favor of religious freedom, by daring to hold prayer sessions with members of other religions, and by accepting the claims of science. To be fair, you won’t find many Catholics thinking this way: they are minorities, to be found essentially in those nations in which Evangelical Fundamentalism is strong (osmosis happens), in some Alpine valleys, and maybe in some particularly stuffy sacristies (but I have not visited the latter).

At the other extreme, you have those who, in the words of a famous author, want to “reduce the Catholic church to yet another liberal Protestant denomination”. The media have a lot of sympathy for those, and maybe my readers too. But my readers are supposed also to understand (rationally, if not emotionally) why myself and many other Catholics don’t want to go that way either.

So far, we have the bipartite scheme. Notice how all those who don’t fit exactly into either of the above categories will be treated by the media as torn between the two, “conservative here, liberal there”. Have you not found this tension in most of the recent media portraits of Pope Benedict? Whatever your opinion of this Pope, he is certainly not a torn, tormented soul: the serenity of the intellectual is one of the traits unanimously noticed. Have you not found the same tension in most of the portraits of the cardinals that are presented as possible successors? “Cardinal X will be liberal on this topic and conservative on this other one”. And if you ask me, you will find out that, in what I consider my personal coherence, I am very “liberal” on some topics and very “conservative” on others (I keep these discussions off my blog, so you have to ask me personally).

In reality, Pope Benedict and most Catholics including myself (and, you can bet it, including the next Pope) belong to a third category: those who know that Ecclesia semper reformanda (“the Church is always in need of change”: I wrote it in Latin to show that it’s quite an old idea, we did not need the pressure of the media to realize it) but who believe in the promise of Jesus that his message will always be preserved in that Church. Of course, this category does not define a monolithic bloc: there are differences of opinion, at times significant ones. Does discord grow? Sadly, at times it does: just as in science, among specialists, we have different opinions on how to make the field progress and we often forget that we have a common goal, the progress of the field. Anyway, whether the Catholics in this category manage to recall that, beyond our differences, we have a common goal, is probably no longer the concern of my reader. So I stop here: just keep in mind this third category, if you want to understand a bit better the media reports in the coming weeks.

Anything wrong with tomography?

“Quantum tomography”, or “state estimation”, is solidly established — or so it seemed until some months ago.

The notion is pretty simple and peacefully accepted: it’s just the quantum analog of the reconstruction of a statistical distribution from a sample. How would you know if a die is balanced? You cast it many times and infer the probabilities. Ideally, you should cast it infinitely many times; fortunately, the whole field of statistics provides rigorous ways of assessing how much you can trust your inference from finite samples.

You can reconstruct a quantum state in a similar way. There is one main difference: the quantum state contains information about the statistics of all possible measurements and, as is well known, in quantum physics not all measurements are compatible. This is solved by sampling not just one measurement, but several. For instance, if you want to reconstruct the state of a spin 1/2, you need to reconstruct the statistics of measurements along three orthogonal directions x, y, z. It’s like saying that you have to cast a die in three different ways, if you pardon the lousy analogy.
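For the spin-1/2 example, here is a minimal sketch of the simplest (“linear inversion”) reconstruction, with made-up numbers: estimate the three expectation values from finite samples, then build ρ = (1 + r·σ)/2 from the estimated Bloch vector r. It is only an illustration, not a full statistical treatment.

```python
# Minimal sketch: linear-inversion tomography of one spin-1/2 (qubit).
# Estimate <sx>, <sy>, <sz> from finite samples, then reconstruct
# rho = (I + r . sigma)/2. All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# "True" Bloch vector used to generate the data (an assumption of this example)
r_true = np.array([0.3, 0.0, 0.8])

def sample_mean(r_component, n_shots):
    """Simulate n_shots outcomes +/-1 whose expectation value is r_component."""
    p_plus = (1 + r_component) / 2
    outcomes = np.where(rng.random(n_shots) < p_plus, 1, -1)
    return outcomes.mean()

n = 500  # shots per measurement direction
r_est = np.array([sample_mean(c, n) for c in r_true])

rho_est = 0.5 * (I2 + r_est[0] * sx + r_est[1] * sy + r_est[2] * sz)
print(np.round(rho_est, 3))
print("Bloch vector length:", np.linalg.norm(r_est))  # > 1 means an unphysical estimate
```

With finite (or biased) data, the estimated Bloch vector can come out longer than 1, i.e. the reconstructed matrix is not a valid state; this is exactly the kind of issue discussed below.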

In the lab, tomography has been used for decades, for characterization: you think you have a source that produces a given quantum state and use tomography to check how well you succeeded. Often, tomography is used to certify that the state of two or more quantum objects is “entangled”.

Theorists have been working on devising various improvements. The biggest challenge is the fact that many statistical schemes may end up reconstructing a state that is not a valid one (think of reconstructing the statistics of a die and finding out that the result 5 happens with negative probability!). Also, tomography is a useful motivation to study the structure of “generalized quantum measurements” (the kind that deserve the awful acronym POVMs) and plays a crucial role even in some “interpretations” of quantum physics, notably “quantum Bayesianism” (I can’t really get to the bottom of it: Chris Fuchs speaks so well that, whenever I listen to him, I get carried away by the style and forget to try and understand what he means. If you really want to make the effort, read this paper).

All is well… until, a few months ago, reports appeared that quite elementary sources of possible errors had been underestimated:

  • One such source is systematic errors. Consider the example of the spin 1/2: certainly, experimentalists can’t align their devices exactly along x, y and z. They can calibrate the directions only as well as the precision of their calibration devices allows. According to a paper by Gisin’s group in Geneva, the effect of the remaining error has been largely neglected (a toy illustration follows this list). While probably not dramatic for one spin, the required corrections may become serious when it comes to estimating the state of many, possibly entangled spins.
  • Another quite obvious possibility is a drift of the source. When we cast a die many times, we make the assumption that we are always casting the same die. This is not necessarily true down to ultimate principles: some tiny pieces of matter may be detached by each collision of the die with the floor, so the die may be lighter and deformed after many trials. This deterioration seems inconsequential for a die. But things may be different when it comes to quantum states that are produced by complex laboratory equipment, which has the nasty tendency of not being as stable as your mobile telephone (for those who don’t know, in a research lab the stabilization and calibration of the setup typically takes months; once that is done, the actual collection of interesting data may take only a few days or even hours). Two papers, one from December 2012 and the other posted three days ago but written earlier, explore the possibility of tomography when the source of quanta is not producing the same state in each instance.
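Here is a toy illustration of the first point (my own sketch, not the calculation of the Geneva paper): a small unknown tilt of the “z” measurement axis biases the reconstructed Bloch vector even in the limit of infinite statistics, and can even push the naive estimate outside the set of valid states.

```python
# Toy sketch: a small unknown tilt of the "z" measurement axis produces a
# systematic bias that no amount of statistics can remove (infinite-data limit).
import numpy as np

r_true = np.array([0.7, 0.0, 0.7])   # true Bloch vector (assumption, |r| < 1)
tilt = np.deg2rad(3.0)               # unknown 3-degree misalignment (assumption)

# The experimenter believes she measures along z, but the axis is tilted towards x:
# in the infinite-statistics limit she records n . r_true instead of r_true[2].
n_actual = np.array([np.sin(tilt), 0.0, np.cos(tilt)])
measured_z = n_actual @ r_true

r_reconstructed = np.array([r_true[0], r_true[1], measured_z])
print("true z component   :", r_true[2])
print("reconstructed z    :", round(measured_z, 4))
print("length of estimate :", round(np.linalg.norm(r_reconstructed), 4))  # > 1: not a valid state
```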

Does this whole story undermine quantum tomography? Does it even cast a gloomy shadow on science? My answer is an unambiguous NO. All the previous works on tomography were done under some assumptions. Whenever those assumptions hold, whenever there is reason to trust them, those works are correct. If the assumptions can be doubted, then obviously the conclusions should be doubted too. With these new developments, people will be able to do tomography even under more relaxed assumptions: great! The lesson to be learned is: state your assumptions (OK, you may not want to state all the assumptions in all your technical papers aimed at your knowledgeable peers: but you must be aware of them, and state them whenever you write a review paper, lecture notes or similar material).

The Nobel Prize and quantum information

Everyone is happy with the attribution of the 2012 Nobel Prize for physics, and so, definitely, am I. However, I cannot fully agree with those of my colleagues who are hailing this attribution as “a Nobel Prize for quantum information”. Serge Haroche and Dave Wineland started working on those experiments well before the idea of the quantum computer was around. Did they join the quantum information community, or is it the community that joined them? There is no sharp answer of course, because the cross-fertilization of ideas goes both ways; but I think that Serge and Dave would be more or less where they are even without quantum information.

By their choice, the Nobel committee endorses great developments in atomic physics and quantum optics. The endorsement of quantum information proper is still pending.