Category Archives: Quantum
In preparing my teaching for the coming semester, I was led to consider the possibility of introducing the students to collapse models. From afar, I was keen to adopt a moderate stance like “yeah, this is not standard stuff, but it’s intriguing, and it’s worth knowing”. (un)Fortunately, while in my research I fall every now and then into the sin described in my previous post, when it comes to teaching I am really incapable of regurgitating material from a book or a review article. So I spent some time thinking about how I would present collapse models to an audience who will have already studied Bell’s theorem in its device-independent approach (lecture notes available here). And I came to the conclusion that — I probably won’t present them.
Let us take one step back. The desire for collapse is triggered by the quantum description of setups like the double slit experiment. Each electron produces a very sharp dot on the screen, as one would expect from a particle. However, after detecting many electrons, the overall distribution of dots is an interference pattern, like the one expected for a wave. These are the facts. The rest depends on how you narrate the story. In the most frequently encountered narrative, the electron is delocalized like a wave before hitting the screen, then it “collapses” to a given location upon hitting the screen. A collapse model is a model that aims at describing how this transition happens.
Some very smart people, rigorously trained in quantum thinking since the cradle, realize immediately that such a narrative is fishy, denounce it as such and ask us to move on. Less smart and/or less rigorously trained people, like me, need more evidence to be convinced. What happened to me in preparing my teaching is that I suddenly collected for myself such evidence. And now I am trying to share it with you.
So, let’s take my starting point and that of my future students: we know Bell inequalities. We know in particular that any classical mechanism aimed at reproducing quantum correlations must be non-local. “Wow, cute: non-locality!” Well, not so cute. For one, the hypothetical mechanism must be infinitely rigid in space-time, or in other words, it must propagate information at infinite speed (yes, infinite, not “just” faster-than-light). For two, the predictions of quantum physics are more restrained than those of the most general non-local theory (even under the so-called “no-observable-signaling” assumption): so, if you toy around with a non-local mechanism, you must further constrain it ad hoc in order to recover the observations. In other words, not only does a non-local mechanism bring no additional predictive power: it must be so fabricated as to match the observations, which we continue predicting using quantum theory. Really, not so cute.
Back to collapse now. A collapse model worthy of its name would certainly be applicable beyond the example of localization in the double slit experiment. Specifically, take a Bell experiment: two photons are prepared in an entangled state, so the polarization of each one is undetermined. Upon measurement, one is found with horizontal polarization (H), the other with right circular polarization (R). This is also a case of “collapse”, where something got determined that was previously undetermined. So the collapse model should describe it too.
Now it’s time to be more precise: what is your aim exactly, in constructing a collapse model? Here come two options:
- You want a deterministic process: something that explains that in this run of the experiment, given whatever variable in the measurement apparatus, the first photon would necessarily collapse into H; and the second photon would necessarily collapse into R. This would certainly be a very pleasant complement to quantum physics for a classical mind. But Bell’s theorem is clear: the “whatever variable” that triggers such a collapse must be unpleasantly non-local as defined above. Are you ready to buy it? Then I have infinitely many collapse models ready for you. But think twice: are you really making physics more understandable by choosing this path?
- You want a stochastic description: here, I am a bit at a loss as to what this wish is. If by “stochastic” one means “classically stochastic”, we are back to the previous case. In fact, Bell’s theorem does not apply only to deterministic models, but also to classically stochastic ones (i.e. all those where the stochastic element can be attributed to ignorance; mathematically, those that can be described as convex combinations of deterministic models). If by “stochastic” one means “any form of mathematical model with some stochastic element” — well, then quantum mechanics is there, and there does not seem to be the need to complement it with a collapse model.
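The claim that Bell’s theorem covers all convex mixtures of deterministic models can be checked by brute force in a few lines. The sketch below (Python, my choice for illustration) enumerates every local deterministic strategy for the CHSH test; since a classically stochastic model is a convex mixture of these, its CHSH value cannot exceed the deterministic maximum of 2, while quantum physics reaches 2√2.

```python
# Each local deterministic strategy assigns a fixed outcome (+1 or -1)
# to each of the two settings of each party. A classically stochastic
# model is a convex mixture of such strategies, so its CHSH value is
# bounded by the deterministic maximum computed here.
import itertools

local_bound = max(
    a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1   # CHSH expression
    for a0, a1, b0, b1 in itertools.product([+1, -1], repeat=4)
)

print(local_bound)       # 2: the local bound
print(2 * 2 ** 0.5)      # ~2.83: the quantum (Tsirelson) bound
```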
In a nutshell, it seems to me that collapse models were maybe a legitimate quest at a time when “localization” was presented as the fundamental non-classical feature of quantum physics (the very smart fellows mentioned above will tell you that there has never been such a time for them, but again, this post is for normal people like me). Now we have Bell’s theorem and the corresponding experiments. You don’t need to make of Bell’s theorem your new foundational cornerstone, if you don’t want to; just take it as one of the many discoveries made in the 20th century thanks to quantum physics. In the light of this discovery, the fog of collapse models, which could be entertained for some time, seems to be dissipating, leaving little trace.
P.S. This ends up being a “negative” post: I criticize collapse models without proposing my own positive solution. At least, I know that there is one path that is not worth exploring. I am leaving now for three weeks of holidays and maybe I’ll find time to explore some other path (though, most probably, I won’t think of physics altogether).
In the remote preparation for my Coursera on randomness, I read Nate Silver‘s The signal and the noise. I am not sure how much of it will enter my course, since I don’t plan to enter into the topics he deals with (politics, the stock market, climate change, prevention of terrorism, baseball and poker). But the conclusion struck a chord.
The author lists seven approximations to describe the “efficient market hypothesis”, which run: 1. No investor can beat the stock market, 2. No investor can beat the stock market over the long run, and so on until approximation 7, which is a five-line-long sentence. Then he adds (emphasis is mine):
“The first approximation — the unqualified statement that no investor can beat the stock market — seems to be extremely powerful. By the time we get to the last one, which is full of expressions of uncertainty, we have nothing that would fit on a bumper sticker. But it is also a more complete description of the objective world.”
Sounds familiar? Let’s give it a try:
- Bumper sticker: No extension of quantum theory can have improved predictive power
- Expression full of uncertainty: the authors work under the assumption of no-signaling (so, if you are Bohmian, don’t worry, our result does not concern you). Then they assume a lot of quantum physics, but not all of it, otherwise the claim would be tautological. Beyond the case of the maximally entangled state, which had been settled in a previous paper, they prove something that I honestly have not fully understood. Indeed, so many other colleagues have misunderstood this work that the authors prepared a page of FAQs (extremely rare for a scientific paper) and a later, clearer version.
- Comment: the statement “Colbeck and Renner have proved that quantum theory cannot be extended” is amazingly frequent in papers, referee reports and discussions. Often, it comes in the version: “why are people still working on [whatever], since Colbeck and Renner have conclusively proved…?” It is pretty obvious however that many colleagues making that statement are not aware of the “details” of what Colbeck and Renner have proved: they have simply memorized the bumper-sticker statement. I really don’t have a problem with Colbeck and Renner summarizing their work in a catchy title; what is worrisome is that other experts repeat the catchy title and base decisions solely on it.
- Bumper sticker: The quantum state cannot be interpreted statistically [Yes, I know that the title of the final version is different, but this is the title that sparked the curiosity of the media]
- Expression full of uncertainty: the authors work with a formalization of the notions of “ontic” and “epistemic” that is accepted by many people, though not by Chris Fuchs and some of his friends. They add a couple of other reasonable assumptions, where by “reasonable” I mean that I would probably have used them in a first attempt to construct an epistemic model. Then they prove that such an epistemic model is inconsistent.
- Comment: too many people have commented on this paper. The latest contrary claim has been posted online today, I have not read it because I am really not following the debate, but for those who are interested, here it is.
- Bumper sticker: either our world is fully deterministic or there exist in nature events that are fully random [the use of “either-or” makes it too elaborate for a real bumper sticker, but for someone who browses these papers, the sentence is basic enough]
- Expression full of uncertainty: the authors consider a very weak source of randomness, something like a very biased coin; in fact, it can be more perverse than that, because it can have correlations over various tosses. But it cannot be completely perverse: the authors make an assumption about its structure (technically known as “Santha-Vazirani”, after the names of the first two persons who proposed it). Then they prove that, if this source is used as seed for a specific quantum experiment, the outcomes of the experiment are guaranteed to be much more random. In the limiting case of an experiment lasting an infinitely long time, and whose results do not deviate by any amount from the optimal result allowed by quantum physics, the source can contain almost no randomness, while the final list will be almost fully random.
- Comment: in a paper just published, we studied what happens if we remove the Santha-Vazirani assumption, so that the source can be as perverse as you wish. Not surprisingly, the conclusions become more pessimistic: now, one would need a fair amount of initial randomness in order for the quantum step to produce further randomness. Nothing wrong at all: some guys get a good result with an assumption, others test the limit of the assumption, this is the normal course of science. But read again the bumper-sticker statement: taken in itself, out of the paper where it belongs, that statement has not been “scientifically proved” — it even sounds closer to being impossible to prove without the crucial assumption.
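For concreteness, the Santha-Vazirani structural assumption can be sketched as a toy simulator (the function names are mine, purely for illustration): each bit may be biased, even adversarially and conditionally on all previous bits, but the conditional bias is bounded by a parameter delta.

```python
import random

def sv_bits(n, delta, bias_strategy=None):
    """Toy Santha-Vazirani source: the probability of each bit, given
    all previous bits, lies in [1/2 - delta, 1/2 + delta]. An optional
    adversary (bias_strategy) picks the conditional bias in that band."""
    bits = []
    for _ in range(n):
        bias = bias_strategy(bits) if bias_strategy else 0.0
        bias = max(-delta, min(delta, bias))   # enforce the SV bound
        bits.append(1 if random.random() < 0.5 + bias else 0)
    return bits

# A maximally perverse adversary with delta = 1/2 can fix every bit,
# i.e. the "source" then contains no randomness at all:
stream = sv_bits(8, 0.5, lambda past: +0.5)
print(stream)   # [1, 1, 1, 1, 1, 1, 1, 1]
```

Removing the assumption, as in the paper discussed above, amounts to allowing delta = 1/2: the bound then constrains nothing, which is why some genuine initial randomness must be supplied elsewhere.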
The two conferences I attended these last weeks (CEQIP and Vaxjo) were pretty good in science, food, drink, location and atmosphere. For me, they were also full of Proustian madeleines: I have met again so many colleagues and realized how they have actually shaped my life, even when the interaction had been short.
- Mario Ziman is one of the organizers of CEQIP. I met him at my very first conference in quantum information, in the castle of Budmerice near Bratislava, back in 2001. He was doing his PhD under the supervision of Vladimir Buzek; I had recently started my post-doc with Nicolas Gisin. As an outcome of those discussions, Mario and I (and Nicolas and Vlado and another student called Peter) worked on two papers about entanglement and thermalization. At that time, it was a rather unusual topic; now it is a big one: in CEQIP alone we had at least three presentations. Probably none of the young authors was even aware of our old works, but Mario and I knew better than to struggle for recognition: we simply sat there in the back, enjoying the progress of the field and exchanging nods.
- I have had fewer interactions with the other organizer, Jan Bouda; but I cannot forget a funny moment when he was visiting Singapore, probably in 2007. In the big old office of what was to become CQT, Andreas Winter, Nicolas Brunner and I asked him to explain his research. He started out: “I don’t know if you are familiar with quantum cryptography”… This time, I discovered that Jan is very familiar with Moravian wines and their weaker and stronger relatives.
- Another Slovak in CEQIP: Martin Plesch. He is presently working in Brno and has picked up the topic of randomness. In the conference in Budmerice in 2001, he was an undergrad. He had been tasked to drive Nicolas Gisin and me to Vienna airport on the last day. It was raining, we were a bit late, and Martin was going rather fast on those country roads, keeping really, really close to the car in front.
- In Vaxjo I met again Hans-Thomas Elze, a German working in Pisa, who is the organizer of a series of conferences in Tuscany. When I went in 2004, it was held in Piombino. At that time, Hans-Thomas was still working in Brazil: as a result, the proceedings of that conference were published in the Brazilian Journal of Physics. My paper dealt with an unconventional question and (as you can imagine from the journal) was forgotten until the group of Stefan Wolf made great progress in 2011. The final solution of the problem appeared in Nature Physics. In Vaxjo, Hans-Thomas invited me to attend his next conference in September 2014. I don’t think there is an Etruscan Journal of Physics, but we’ll see…
- For a few years now, Mauro D’Ariano and I have crossed paths at least once per year, and we always have good conversations. In the middle of complaints about bureaucracy, punctuated by the typical Italian word –zz–, he keeps an exemplary scientific drive. A few years ago, we were having a fast-food lunch at the March Meeting in Boston. He was telling me that, in his maturity, he wanted to start tackling “really serious” problems. Concretely, he had been reading a lot about field theory, cosmology, relativity… and was declaring his disappointment at finding gaps in the usual arguments. He had decided to try and reconstruct physics from scratch… well, from some quantum form of scratch. Normally, I tend to dismiss beginners who find problems in what others have devoted their lives to — but here, and with Mauro, I could only agree. A few years have passed: his attempt to reconstruct all that we know from basic quantum building blocks has not hit the wall; on the contrary, he and his collaborators are deriving more and more results, and even the “experts” are starting to take them quite seriously. Thanks Mauro for showing what serious and constant work can do!
Why am I writing all this? For no special reason other than to record minute events and people who are part of my life as a physicist.
“Quantum tomography”, or “state estimation”, is solidly established — or so it seemed until some months ago.
The notion is pretty simple and peacefully admitted: it’s just the quantum analog of the reconstruction of a statistical distribution from a sampling. How would you know if a die is balanced? You cast it many times and infer the probabilities. Ideally, you should cast it infinitely many times; fortunately, the whole field of statistics provides rigorous ways of assessing how much you can trust your inference from finite samples.
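As a minimal sketch of that classical inference (assuming independent, identically distributed casts of the very same die), one can estimate the probabilities by frequencies and attach a crude confidence radius, here taken from Hoeffding’s inequality:

```python
import math
import random

random.seed(1)
N = 10000
counts = [0] * 6
for _ in range(N):
    counts[random.randrange(6)] += 1   # simulate casting a fair die

freqs = [c / N for c in counts]

# Hoeffding's inequality: with probability >= 1 - alpha, each frequency
# lies within eps of the true probability (assumes i.i.d. casts).
alpha = 0.05
eps = math.sqrt(math.log(2 / alpha) / (2 * N))
print(freqs, "+/-", round(eps, 3))
```

The point of the whole post is precisely which assumptions hide behind “i.i.d. casts of the same die”.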
You can reconstruct a quantum state in a similar way. There is one main difference: the quantum state contains information about the statistics of all possible measurements and, as is well known, in quantum physics not all measurements are compatible. This is solved by sampling not just for one measurement, but for several. For instance, if you want to reconstruct the state of a spin 1/2, you need to reconstruct the statistics of measurements along three orthogonal directions x,y,z. It’s like saying that you have to cast a die in three different ways, if you pardon the lousy analogy.
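The spin-1/2 case can be made explicit in a few lines (a sketch, numpy assumed): the three expectation values along x, y, z form the Bloch vector, which fixes the state by linear inversion.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def reconstruct(ex, ey, ez):
    """Linear-inversion tomography of a spin 1/2 from the measured
    expectation values <X>, <Y>, <Z> (the Bloch vector)."""
    return 0.5 * (np.eye(2) + ex * X + ey * Y + ez * Z)

# The state "spin up along z" has <X> = <Y> = 0, <Z> = 1:
rho = reconstruct(0.0, 0.0, 1.0)
print(rho.real)   # [[1, 0], [0, 0]]
```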
In the lab, tomography has been used for decades, for characterization: you think you have a source that produces a given quantum state and use tomography to check how well you succeeded. Often, tomography is used to certify that the state of two or more quantum objects is “entangled”.
Theorists have been working on devising various improvements. The biggest challenge is the fact that many statistical schemes may end up reconstructing a state that is not a valid one (think of reconstructing the statistics of a die and finding out that the result 5 happens with negative probability!). Also, tomography is a useful motivation to study the structure of “generalized quantum measurements” (the kind that deserve the awful acronym POVMs) and plays a crucial role even in some “interpretations” of quantum physics, notably “quantum bayesianism” (I can’t really get to the bottom of it: Chris Fuchs speaks so well that, whenever I listen to him, I get carried away by the style and forget to try and understand what he means. If you really want to make the effort, read this paper).
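The “invalid state” problem is easy to exhibit for a spin 1/2 (a self-contained sketch, numpy assumed): finite-sample estimates of the three expectation values can produce a Bloch vector of length greater than 1, and the linearly inverted “state” then has a negative eigenvalue, the quantum analog of the die whose result 5 comes out with negative probability.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Suppose finite statistics overestimate <X> and <Z>: the estimated
# Bloch vector has length sqrt(0.8**2 + 0.8**2) > 1, which no quantum
# state can have.
ex, ey, ez = 0.8, 0.0, 0.8
rho = 0.5 * (np.eye(2) + ex * X + ey * Y + ez * Z)

eigenvalues = np.linalg.eigvalsh(rho)
print(eigenvalues)   # one eigenvalue is negative: not a valid state
```

Schemes like maximum-likelihood estimation exist precisely to return the nearest valid state instead of this unphysical matrix.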
All is well… until, a few months ago, reports appeared that quite elementary sources of possible errors had been underestimated:
- One such source is systematic error. Consider the example of the spin 1/2: certainly, experimentalists can’t align their devices exactly along x, y and z. They can calibrate their directions as well as the precision of their calibration devices allows. According to a paper by Gisin’s group in Geneva, the effect of the remaining error has been largely neglected. While probably not dramatic for one spin, the required corrections may become serious when it comes to estimating the state of many, possibly entangled spins.
- Another quite obvious possibility is a drift of the source. When we cast a die many times, we make the assumption that we are always casting the same die. This is not necessarily true down to ultimate principles: some tiny pieces of matter may be detached by each collision of the die with the floor, so the die may be lighter and deformed after many trials. This deterioration seems inconsequential with a die. But things may be different when it comes to quantum states that are produced by complex laboratory equipment that has the nasty tendency of not being as stable as your mobile telephone (for those who don’t know, in a research lab, the stabilization and calibration of the setup typically takes months: once it is done, the actual collection of interesting data may only take a few days or even hours). Two papers, one from December 2012 and the other posted three days ago but written earlier, explore the possibility of tomography when the source of quanta is not producing the same state in each instance.
Does this whole story undermine quantum tomography? Does it even cast a gloomy shadow on science? My answer is an unambiguous NO. All the previous works on tomography were done under some assumptions. Whenever those assumptions hold, whenever there is reason to trust them, those works are correct. If the assumptions can be doubted, then obviously the conclusion should be doubted too. With these new developments, people will be able to do tomography under even more relaxed assumptions: great! The lesson to be learned is: state your assumptions (OK, you may not want to state all the assumptions in all your technical papers aimed at your knowledgeable peers: but you must be aware of them, and state them whenever you write a review paper, lecture notes or similar material).
Everyone is happy with the attribution of the 2012 Nobel Prize for physics, and so definitely am I. However, I cannot fully agree with those of my colleagues who are hailing this attribution as “a Nobel Prize for quantum information”. Serge Haroche and Dave Wineland started working on those experiments well before the idea of the quantum computer. Did they join the quantum information community, or is it the community that joined them? There is no sharp answer of course, because the cross-fertilization of ideas goes both ways; but I think that Serge and Dave would be more or less where they are without quantum information.
By their choice, the Nobel committee endorses great developments in atomic physics and quantum optics. The endorsement of quantum information proper is still pending.
Recently, I have read a paper in a prestigious journal in physics, whose logic was a bit stretched. Let me paraphrase it for you.
Italians are known to be good soccer players. Recently, some authors have noticed that Singaporeans may also be pretty decent soccer players. In this paper, we prove that Singaporeans can even be better than Italians.
For the test, the Singaporeans were chosen from one of the many soccer schools in the island; the Italians were chosen among the finalists of the certamen ciceronianum, the most famous competition of Latin prose writing. The age and bodily weight distributions were the same for both samples.
Each player was asked to try and score a penalty kick with the heel. Remarkably, the Singaporeans fared far better than the Italians. This conclusively proves that Singaporeans can be better soccer players than Italians in some tasks.
Reference: Xxxxx et al, Nature Physics Y, yyyy (20zz)
How do we know that no signal can travel faster than the speed of light? There cannot be direct evidence of this fact. The indisputable fact is that the speed of light is the same in all reference frames.
To my knowledge, augmented with the browsing capability of my students, the only answer lies in the following deduction:
(1) The speed of light being the same in all reference frames and the principle of relativity lead, as well known, to the Lorentz transformation.
(2) Armed with the Lorentz transformation, one can rather easily show that a signal propagating faster than light could allow me to send a message to myself from the future to the past. Einstein himself was well aware of this (I don’t know if he was even the first to notice it).
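The step in (2) can be made explicit with one line of the Lorentz transformation (a standard textbook derivation, sketched here for completeness):

```latex
% Time interval between emission and reception of a signal, as seen
% from a frame moving at speed v:
\Delta t' = \gamma \left( \Delta t - \frac{v\,\Delta x}{c^2} \right),
\qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}\,.
% If the signal travels at speed u = \Delta x / \Delta t > c, then
\Delta t' = \gamma\,\Delta t \left( 1 - \frac{u v}{c^2} \right) < 0
\quad \text{for any frame with } c^2/u < v < c\,.
```

In such a frame the signal is received before it is sent; having an observer in relative motion bounce a second superluminal signal back closes the loop, and the message reaches one’s own past.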
Now, why is it a problem that I can send a message to myself from the future to the past? Normally, the answer lists all kinds of crazy things I could do, like winning all my bets and analogous stuff. For me, the most dramatic consequence is all that I, and many others, can no longer do. First of all, when I receive at time t0 a message sent at time t1, I know for sure that I’ll have to send the message at time t1: it’s unavoidable. Moreover, all the events that happened between t0 and t1 that the message informs me about have also become unavoidable. For instance, the message may inform me that a friend who is walking alongside me at t0 will be killed by a car, and even if this is going to happen one hour later, all I can do is to inform him that his life is about to end. Isn’t this absurd enough?
Well, I definitely find it is… but not necessarily someone who believes in full determinism! For such a person, all the information about what is going to happen in the future is already fully contained in the universe now, and always has been. There is nothing wrong, in the physical universe, with some existing piece of information getting stored in the neurons that call themselves “I” before the corresponding fact can be observed by all the brains: after all, that is what happens when we predict something with the laws of physics (say, the passing of an asteroid close to the earth).
The deduction made above is absolutely conclusive only if one believes in the possibility of creation of information that was previously not available. It is not such an outlandish belief: people who believe in human free will have upheld it for centuries, and within physics, quite a few interpretations of quantum physics also uphold it. But it is funny to find it appearing in a topic usually supposed to derive from special relativity alone.
Two disclaimers. First, I am NOT claiming that I believe something can go faster than light — indeed, since I believe in free will, I do find the deduction above perfectly convincing. Second, I am also aware that, if anything were to go faster than light, it should be massless: for massive objects, the Lorentz transformation predicts infinite inertia at a speed approaching that of light, independently of any argument about signaling to the past.
P.S. Within a few hours from the first post, I noticed another important assumption in the deduction above: the assumption that any physical phenomenon can be harnessed to send a signal — even more specifically, that it can be harnessed to implement the protocol that allows sending messages to the past (which implies sending the superluminal signal to an observer in relative motion and having it reflected back to myself).
The advantage of holidays: one does not need to be serious all the time… and this is probably the only time when the really serious questions can come to the surface. I have jotted down here three of them, each one followed by an explanation of the context – I have got a few more questions, and more specific ones, but these will become research projects for my students and I am not going to reveal them here.
Question 1: is quantum physics imposing on us by its sheer weight?
The inadequacy of classical physics to describe nature is an established fact. Post factum, even the source of the problem has been clearly identified: the “classical prejudice” consists in believing that a definite value (true or false) can be assigned to any physical property at any time. But physics is not in a state of despair: positively, quantum physics has scored an impressive list of successes and is presently unchallenged. But what is quantum physics?
There is no characterization of quantum physics in terms of a small set of physical principles or phenomena. Relativity arises from the constancy of the speed of light, while the only compact definition of quantum physics is the description of its mathematical recipes. The legitimacy of those recipes is justified a posteriori: the number of correct predictions that are obtained is so large that one must be very careful before even daring to challenge a theory with such achievements. This is very reasonable, but should we just be content with it? I would really like to be able to say what quantum physics is in a few sentences, instead of burying the question under the sheer weight of the number of its achievements.
This question has no answer yet, but the answer may come one day through works like http://arxiv.org/abs/1112.1142. The next two questions are more undefined.
Question 2: how necessary is quantum physics?
Here it gets very speculative – but don’t forget, I was on holidays. It is well known that our universe is extremely well tuned, and this suggests prima facie that it has been tuned by an intelligent being. For several reasons, it seems desirable to have at least an alternative to such a conclusion. The currently fashionable alternative goes along the following lines: our universe would be extraordinary if it were unique; however, if all kind of universes are being “tried” in parallel, there is nothing astonishing in us living in the “right” one – by definition, if the universe is the wrong one, we cannot be there.
At first, this solution is meant to convey the idea of universes similar to ours, but in which the values of some physical constants differ. One may also admit that universes with more (or fewer) dimensions co-exist with ours. However, to be rigorous, one must admit the possibility of universes with laws that are absolutely different from ours: not only in the values of constants or in the number of dimensions, but in the very meaning of what “matter” is. Or maybe not? Maybe some features of our universe, duly extrapolated, are a super-universal necessity? Maybe reality is more constrained than speculative logic? As you see, we are not going to find the ultimate answer to this question – but it is good at times to sit at the verge of nonsense and feel its vertiginous call.
Question 3: emergence, seriously?
When Nobel Prize winners feel the need to contribute seriously to humanity, they write books and give talks about their vision of the world in prophetic terms, hoping that the future will vindicate part of their vision. In this exercise, particle physicists tend to adopt a bottom-up approach: everything is made of elementary particles and, in principle, everything could be explained at that level (though admittedly we are happy not to have to, when it comes to putting a satellite in orbit). Condensed matter physicists like to convey a different view, one in which the mess… sorry, the complexity they deal with every day is presented as irreducible: they like then to say that there is new physics emerging at larger scales than that of elementary particles. Their favorite example is the quantum Hall effect: it cannot arise without disorder in the atomic arrangement, and yet most of its features are described only in terms of elementary constants (think about it calmly: each atom sees some disorder around itself, but somehow all the atoms together manage to forget the details of the mess and act in a clean, universal way). Sounds nice… but somehow, emergentists have never managed to look consistent in my eyes.
On the one hand, as noted above, operational emergentism is a necessity: it is a practical impossibility to use many-body quantum physics to describe the motion of a satellite, and no human-built computer will ever be able to do that. But if this is the meaning of emergence, it is trivial. The deep question is whether emergence is real. Let us ask it this way: is nature doing physics from the bottom up, performing the computation that we can’t dream of simulating? Or, on the contrary, does it really have layers of complexity? When hard pressed, it seems to me that even the emergentists are scared of what their idea may ultimately mean, namely that order may appear in some cases from nothing below: a very suspicious conclusion, especially for the evolutionist cosmogony of our time…
You want my opinion? Well, when I was a teenager and all my friends started smoking, by that very fact I started finding smoking silly. In the same vein, since the multiverse is so popular, I believe in one single universe, ours; since everyone believes that everything is quantum, I believe that there may be a real boundary where the quantum behavior stops; and I am sympathetic with the emergentists. On holidays, one can afford to be wrong.
I have finally read Galbraith’s Short history of financial euphoria, which Alain Aspect suggested to me during a random dinner chat a few months ago. It’s nice: it’s the first time I understand something about finance. And it triggered a concern about academia.
In finance as well as in academia, people often fall into euphoria over something that is, by all rational standards, rather worthless. In my field of research, for instance, the latest craze is the following process:
- Write down a new version of some criterion that tests that “something is quantum” (a new Bell inequality, a new test of contextuality, a version of Leggett-Garg…); the simpler — the more trivial — the better, because of point 2.
- Find a couple of friends to do an experiment for you. Better if they have been running their setup for ages and have exhausted all the serious science that could possibly be done with it, because they will be more than happy to learn that their old machinery can still be used to perform “fundamental tests”. Moreover, since your test is simple and simple quantum physics has been tested to exhaustion, you have no doubt that the experimental results will uphold your theory.
- If you can, present it as “the first step towards [a big goal]”. Never mind that it is rather the last use of a setup that has had its time (I refrained from using “swan song”, because the last song of the swan is supposed to be the most beautiful; the last concert of an 80-year-old pop star would be a more appropriate metaphor). If you can’t invoke the future, present it as “the conclusive proof of [some quantum claim]”. Never mind that the claim is usually always the same, namely, that results of measurements are not pre-established, that there is intrinsic randomness, or however you want to phrase it. Also never mind the fact that there cannot be a “conclusive claim” every month.
The euphoria mechanism is sustained as follows:
- The big journals (Nature at the forefront) prefer to publish tons of poor science rather than risk losing a single real breakthrough. So, if someone claims to have solved "the mystery of the quantum" (the general readership of Nature finds quantum physics mysterious), better to take them seriously.
- In turn, people notice that "if you do that, you publish in Nature". Since "that" is not that difficult after all, it's worthwhile going for it.
- Once you have published in Nature (or Science or…), you are hailed as a hero by the head of your Department, by the communication office of your university, by the agencies that granted you the funds.
- Put yourself now at the other end, namely in the place of the one who would like to raise a dissenting voice and reveal the triviality of the result. All the legitimate authorities (peer-reviewed journals, heads of prestigious Departments, grant agencies, even popular magazines and newspapers!) are against you. Isn't it "obvious" then that you are only venting your jealousy, the jealousy of the loser?
So far, the analogy with financial euphoria is clear. I guess (though I have not studied the statistics) that the speed of the crash is also analogously fast: it happens when some of the editors of the main journals make a conscious decision to have "no more of that", because they realize that there is really nothing to gain. The rumor spreads that "refereeing has become tough"; the journals are accused of having become irrational, since "if they accepted the previous paper, why do they refuse this one?" (while it's one of their few moments of rationality).
And the consequences? The same too, but fortunately without criminal prosecutions, despair and suicides. The very big fish get out unscathed: either their science is really serious (that is, they have invested only a small amount of their scientific capital in the euphoric topic); or their power is really big (that is, they have invested only a small amount of their political power in backing the euphoria). The opportunists will try to follow the wind, as they should, and will be forgotten, as they should be. Those who face an uncertain destiny are the young fellows who were doing serious science when the euphoria caught them at the right time and the right place. Because of this, they have been raised to prominence. Somehow, all their capital is invested in that topic. Will they be able to find their way out and continue doing serious science? Or will they end up teaming up with their buddies, setting up a specialized journal for themselves and publishing there until their old age? If one day you find me as the founder of a journal called "Nonlocality", please wake me up.
In the space of two weeks, two works appear in Nature Physics about measuring uncertainty relations. In the first, an experiment is actually performed to test (and, needless to say, verify) the validity of an uncertainty relation which applies to more situations than the one originally devised by Heisenberg. In the second, it is proposed that the techniques of quantum optics may be used to probe modifications of the usual uncertainty relation due to gravity. Now, to finally have a tiny bit of evidence for quantum gravity: that would really be a breakthrough!
Faithful to my principle of not doing “refereeing on demand”, this is not an unrequested referee report: in fact, I have only browsed those papers, certainly not in enough depth to make judgments. The authors are serious so, by default, I trust them on all the technicalities. The question that I want to raise is: what claims can be made from an uncertainty relation?
An uncertainty relation looks like this:
[something related to the statistics of measurements, typically variances or errors] >= [a number that can be computed from the theory]
which has to be read as: if the right-hand side is larger than 0, then there MUST be some error, or some variance, or some other form of "uncertainty" or "indeterminacy". Let me write the inequality above as D >= C for short.
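To make D >= C concrete, here is a minimal numerical sketch (my own illustration, not taken from either paper) of the best-known instance, Heisenberg's position-momentum relation Delta_x * Delta_p >= hbar/2, with hbar set to 1 for convenience. For a Gaussian wavepacket the left-hand side should exactly saturate the bound:

```python
import numpy as np

# Gaussian wavepacket on a grid; hbar = 1 throughout (illustrative choice).
hbar = 1.0
sigma = 0.7                                   # arbitrary width
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize the wavefunction

# Position spread Delta_x from the probability density |psi|^2.
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# Momentum spread Delta_p from the Fourier transform of psi.
p = 2 * np.pi * hbar * np.fft.fftfreq(len(x), d=dx)
dp = p[1] - p[0]
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= np.sum(prob_p) * dp                 # normalize the momentum density
mean_p = np.sum(p * prob_p) * dp
delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(delta_x * delta_p)  # ~0.5, i.e. the bound hbar/2 is saturated
```

In this case D is the product Delta_x * Delta_p and C = hbar/2 = 0.5; for any state other than a Gaussian, the product would come out strictly larger.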
Now, let’s see what a bad measurement can do for you. A bad measurement may introduce more uncertainties than are due to quantum physics. In other words, one may find D(measured)=C+B, where B is the additional contribution of the bad measurement. It may be the case that your devices cannot be improved, and so you can’t remove B. Now, the second paper proposes an experiment whose goal is precisely to show that D(measured)=C+G, where G is a correction due to gravity. Obviously, much more than the mere observation of the uncertainty relation will be needed, if someone has to believe their claim: they will really have to argue that there is no way to remove G and not because their devices are performing poorly. The problem is that there is always a way of removing G: a bad measurement can do it for you!
Indeed, a bad measurement may also violate the uncertainty relation. Let me give an extreme example: suppose that you forget to turn on the powermeter that makes the measurement. The result of the position measurement will be systematically x = 0: no error, no variance. Similarly, the result of the momentum measurement will be systematically p = 0: no error, no variance. In this situation, D(measured) = 0. Of course, nobody would call that a "measurement", but hey, that may well be "what you observe in the lab". To be less trivial, suppose that the needle of your powermeter has become a bit stiff, rusty or whatever: the scale may be uncalibrated, and you may easily observe D(measured) < C.
So, a bad measurement can influence the uncertainty relation both ways: it can push the measured uncertainty D either above or below the bound C.
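These two failure modes can be mimicked with a toy simulation (again my own illustration, with invented noise parameters): start from outcomes that saturate Delta_x * Delta_p = 0.5 for hbar = 1, then corrupt the position readings either with extra technical noise or with an uncalibrated scale.

```python
import numpy as np

rng = np.random.default_rng(0)
hbar, sigma, n = 1.0, 1.0, 200_000

# Ideal outcomes for a minimum-uncertainty Gaussian state:
# Delta_x = sigma and Delta_p = hbar / (2 * sigma).
x_true = rng.normal(0.0, sigma, n)
p_true = rng.normal(0.0, hbar / (2 * sigma), n)

def product(xs, ps):
    """Measured uncertainty product D = Delta_x * Delta_p."""
    return xs.std() * ps.std()

print(product(x_true, p_true))      # ~0.5: the quantum bound C is saturated

# Bad measurement #1: extra technical noise on the position readout.
# The recorded product becomes D(measured) = C + B, *above* the bound.
x_noisy = x_true + rng.normal(0.0, 0.5, n)
print(product(x_noisy, p_true))     # ~0.56 > 0.5

# Bad measurement #2: a stiff, uncalibrated needle that compresses every
# reading by a constant factor. Now D(measured) < C: the "violation"
# signals nothing but a bad apparatus.
x_squeezed = 0.5 * x_true
print(product(x_squeezed, p_true))  # ~0.25 < 0.5
```

Of course, no real experimentalist would miss such gross calibration errors; the point is only that the value of D, taken alone, does not distinguish new physics from a bad apparatus.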
Now, there are reasonable ways of getting around these arguments. For instance, by checking functional relations: don’t measure only one value, but several values, in different configurations. If the results match what you expect from quantum theory, a conspiracy becomes highly improbable; and indirectly it hints that your measurement was not bad after all. For instance, this is the case of Fig. 5a of the first paper mentioned above.
Still, I am left wondering if the tool of the uncertainty relation is at all needed, since by itself it constitutes very little evidence. Let me ask it this way: why, having collected enough statistics for a claim, should one process the information into an “uncertainty relation”? The information was already there, and probably much more of it than gets finally squeezed into those variances or errors. OK, maybe it’s just the right buzzword to get your serious science into Nature Physics: after all, “generalized uncertainty relation” will appeal to journalists much more than “a rigorous study of the observed data”.