Recently, I have been very lucky with the referees of my papers. It is a good time to recall the occasions when I was not so lucky in the past, without the danger of venting unresolved personal anger, and to give some advice to young referees.
So, imagine you receive an article to referee from a prestigious journal; you don’t really understand what it is about, but your gut feeling is that it should be rejected. There are good and bad ways of doing that.
Here are some hateful strategies that border on the unethical:
- The paper presents a new idea and studies only the simplest case: dismiss it with “There is not much interest in the topic” or “The idea is interesting but is not applied to a realistic scenario”.
- The paper provides the solution to an open problem: dismiss it with “The idea is not new” or “The result is not surprising”.
- The ever-successful “it is not of sufficiently broad interest to justify publication in this journal” can be applied at any time.
Notice how 1 and 2 can be used in sequence to block a full series of papers: one can first reject the initial idea by saying that the authors should work more, then reject the extensions by stressing that the idea has already been presented before.
How do you see that these strategies are nasty? Basically, they rely on a very generic statement, against which the authors have no possibility of a scientific reply. If the authors are told “not of broad interest”, their only hope is to argue “yes, look more closely, it is interesting”: not very convincing. If the authors are told “not surprising”, they can only try to convince the editor that “surprise” is maybe not a criterion; but editors get very nervous if you try to teach them criteria for acceptance…
So, what are better ways of rejecting? It depends on the paper, but when you write a report, you can check a few items:
- If your criticism can be applied to a paper you admire, it is wrong. For instance, with strategy 1 above, you could have rejected the original quantum teleportation paper: bad idea, for sure. As for criterion 2, I am sure you have written yourself some paper with very good, but “not surprising” results. Never use an argument that can be used against yourself!
- If your report is topic-independent, it is wrong. Always base your rejection on the scientific content of the paper: “This is a marginal improvement over [work that you consider similar]”, “This is not new since it was already mentioned in [review paper]”…
- As a referee you must feel moderately competent in the field, otherwise it is better to decline (journals know that I decline a lot). Assuming you think you are competent, the burden of clarity lies on the authors, not on you. So it is fine to write something like “It is not clear to me what these guys are trying to prove, they say it is very different from [reference] but it looks the same to me. Unless they clarify their contribution, I cannot accept”.
P.S. As written in the first paragraph, this post refers to the case where the referee does not have strong objective arguments for rejection, but just a gut feeling (which may be perfectly right and legitimate) that the paper is of low quality. If the referee has objective arguments, of course he/she should use them!
Many, including myself, would certainly welcome a scientific world in which it is no longer true that “research published in Nature is, by that very fact, of the highest quality” and that “a young scientist who has published in Nature has far higher chances of getting a job”. But we must not forget that the issue is deeper.
In the past, careers in science were supposedly determined by a panel of wise men (I would like to add “and women”, but it would be an anachronism): as is well known, oligarchy is fair only in the eyes of those who share the same wisdom as the oligarchs. Presently, the panel of wise persons is still required for hiring, promotions etc., but there is a demand for control by an independent, supposedly neutral authority. This motivates the call for metrics: it may reduce the influence of the whims of some people, but it introduces other problems. I fear that we won’t hit upon the perfect system.
Back to the statistical survey: figures 4 and 5 are really intriguing. They indicate that only a few of the most cited papers are published in the most cited journals, and that the percentage has been declining since around the year 1990. I am not sure if this is an instance of Simpson’s paradox… What is even more intriguing is that each figure has two graphs, and it seems to me that, by the definitions used, the two graphs should add up to 100%; but they don’t. So either something is wrong with me, or with this analysis: better to finish this post and go back to work.
This time, I decided to post about something that is not related to science: the resignation of the Pope. After all, it’s public knowledge that I am a practicing Catholic. I know pretty well that most of those who browse this blog are not, and many just don’t care about religion: take this post as an exercise in critical spirit. I am not going to give you “my opinion”, because I am really nobody to have an opinion on such things. But I want to give you a guide to read the media, in case you follow the developments in the coming weeks.
Let us start with an obvious fact: journalists can’t be experts in everything and have to produce stories that attract attention. Also, they have to craft the story in the way the reader expects it. When it comes to scientific topics, we know pretty well how a piece of news should be cast in order to make it into the mass media: it will have to sound either like science fiction (faster-than-light communication, parallel universes, time travel…) or like an answer to our ultimate concerns (the existence of God, free will, faster computers and flatter screens).
Now, how do the media craft a story about the Catholic church? Since the 1960s, it has been customary to use the bipartite categorization “conservatives versus liberals” (at the beginning, the terms used to be “reactionaries versus progressives”, but the ideology that used that language has become less fashionable in the past decades). From afar, this may seem like as suitable a scheme as any other. In reality, this scheme is as wrong as the wave-particle duality in quantum physics: by describing the truth as a tension between two extremes, it misses… well, the truth. I am going to propose an alternative scheme to you: it’s tripartite, but I guess you can handle the complication.
At one extreme you have those that we scientists tend not to like very much. They think that the church has gone astray in the last 50 years or so, by speaking in favor of religious freedom, by daring to hold prayer sessions with members of other religions, and by accepting the claims of science. To be fair, you won’t find many Catholics thinking this way: they are minorities, to be found essentially in those nations in which Evangelical Fundamentalism is strong (osmosis happens), in some Alpine valleys, and maybe in some particularly stuffy sacristies (but I have not visited the latter).
At the other extreme, you have those who, in the words of a famous author, want to “reduce the Catholic church to yet another liberal Protestant denomination”. The media have a lot of sympathy for those, and maybe my readers do too. But my readers are also supposed to understand (rationally, if not emotionally) why I and many other Catholics don’t want to go that way either.
So far, we have the bipartite scheme. Notice how all those who don’t fit exactly into either of the above categories will be treated by the media as torn between the two, “conservative here, liberal there”. Have you not found this tension in most of the recent media portraits of Pope Benedict? Whatever your opinion of this Pope, he is certainly not a torn, tormented soul: the serenity of the intellectual is one of the traits unanimously noticed. Have you not found the same tension in most of the portraits of the cardinals presented as possible successors? “Cardinal X will be liberal on this topic and conservative on this other”. And if you ask me, you will find out that, in what I consider my personal coherence, I am very “liberal” on some topics and very “conservative” on others (I keep these discussions off my blog, so you have to ask me personally).
In reality, Pope Benedict and most Catholics including myself (and, you can bet on it, including the next Pope) belong to a third category: those who know that Ecclesia semper reformanda (“the Church is always in need of change”: I wrote it in Latin to show that it’s quite an old idea, we did not need the pressure of the media to realize it) but who believe in the promise of Jesus that his message will always be preserved in that Church. Of course, this category does not define a monolithic bloc: there are differences of opinion, at times significant ones. Does discord grow? Sadly, at times it does: just as in science, among specialists, we have different opinions on how to make the field progress and we often forget that we have a common goal, the progress of the field. Anyway, whether the Catholics in this category manage to recall that, beyond our differences, we have a common goal, is probably no longer the concern of my reader. So I stop here: just keep in mind this third category, if you want to understand the media reports in the coming weeks a bit better.
“Quantum tomography”, or “state estimation”, is solidly established — or so it seemed until some months ago.
The notion is pretty simple and widely accepted: it’s just the quantum analog of reconstructing a statistical distribution from a sample. How would you know if a die is balanced? You cast it many times and infer the probabilities. Ideally, you should cast it infinitely many times; fortunately, the whole field of statistics provides rigorous ways of assessing how much you can trust your inference from finite samples.
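To make the classical half of the analogy concrete, here is a minimal sketch (in Python, with illustrative numbers of my own choosing) of inferring a die’s face probabilities from a finite sample, together with the rough error bars that statistics attaches to the inference:

```python
import random

def estimate_die(n_casts, weights=None, seed=0):
    """Cast a (possibly biased) die n_casts times and estimate face probabilities."""
    rng = random.Random(seed)
    faces = [1, 2, 3, 4, 5, 6]
    counts = {f: 0 for f in faces}
    for _ in range(n_casts):
        counts[rng.choices(faces, weights=weights)[0]] += 1
    # Frequentist estimate with a standard error ~ sqrt(p(1-p)/N) per face
    est = {}
    for f in faces:
        p = counts[f] / n_casts
        err = (p * (1 - p) / n_casts) ** 0.5
        est[f] = (p, err)
    return est

for f, (p, err) in estimate_die(100_000).items():
    print(f"face {f}: {p:.3f} ± {err:.3f}")
```

With 10^5 casts, each frequency comes with an uncertainty of a bit above 0.001, which quantifies how far the finite sample can be trusted.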
You can reconstruct a quantum state in a similar way. There is one main difference: the quantum state contains information about the statistics of all possible measurements and, as is well known, in quantum physics not all measurements are compatible. This is solved by sampling not just one measurement, but several. For instance, if you want to reconstruct the state of a spin 1/2, you need to reconstruct the statistics of measurements along three orthogonal directions x, y, z. It’s like saying that you have to cast a die in three different ways, if you’ll pardon the lousy analogy.
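As an illustration of the quantum case, here is a hedged sketch of the simplest reconstruction scheme for a spin 1/2, known as linear inversion from the three Pauli expectation values; the state, shot numbers and seed are my own toy choices, and real experiments use more careful estimators:

```python
import numpy as np

# Pauli matrices: the "three ways of casting the die" for a spin 1/2
PAULIS = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}
I2 = np.eye(2, dtype=complex)

def simulate_counts(rho, axis, n_shots, rng):
    """Sample +/-1 outcomes of an ideal Pauli measurement on state rho."""
    p_plus = np.real(np.trace(rho @ (I2 + PAULIS[axis]) / 2))
    ups = rng.binomial(n_shots, p_plus)
    return (ups - (n_shots - ups)) / n_shots   # empirical <sigma_axis>

def linear_inversion(expectations):
    """Reconstruct rho = (I + sum_k <sigma_k> sigma_k) / 2."""
    rho = I2.copy()
    for axis, mean in expectations.items():
        rho += mean * PAULIS[axis]
    return rho / 2

rng = np.random.default_rng(42)
true_rho = np.array([[0.85, 0.30], [0.30, 0.15]], dtype=complex)  # a valid toy state
expect = {a: simulate_counts(true_rho, a, 100_000, rng) for a in "xyz"}
rho_hat = linear_inversion(expect)
print(np.round(rho_hat, 3))
```

With few shots, linear inversion can even return a matrix with a negative eigenvalue, i.e. not a valid state at all; that difficulty comes up again further down.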
In the lab, tomography has been used for decades, for characterization: you think you have a source that produces a given quantum state and use tomography to check how well you succeeded. Often, tomography is used to certify that the state of two or more quantum objects is “entangled”.
Theorists have been working on devising various improvements. The biggest challenge is that many statistical schemes may end up reconstructing a state that is not a valid one (think of reconstructing the statistics of a die and finding out that the result 5 happens with negative probability!). Also, tomography is a useful motivation to study the structure of “generalized quantum measurements” (the kind that deserves the awful acronym POVM) and it plays a crucial role even in some “interpretations” of quantum physics, notably “quantum Bayesianism” (I can’t really get to the bottom of it: Chris Fuchs speaks so well that, whenever I listen to him, I get carried away by the style and forget to try and understand what he means. If you really want to make the effort, read this paper).
All is well… until, a few months ago, reports appeared that quite elementary sources of possible errors had been underestimated:
- One such source is systematic error. Consider the example of the spin 1/2: certainly, experimentalists can’t align their devices exactly along x, y and z. They can calibrate the directions only as well as the precision of their calibration devices allows. According to a paper by Gisin’s group in Geneva, the effect of the remaining error has been largely neglected. While probably not dramatic for one spin, the required corrections may become serious when it comes to estimating the state of many, possibly entangled, spins.
- Another quite obvious possibility is a drift of the source. When we cast a die many times, we make the assumption that we are always casting the same die. This is not necessarily true down to ultimate principles: some tiny pieces of matter may be detached by each collision of the die with the floor, so the die may be lighter and deformed after many trials. This deterioration seems inconsequential for a die. But things may be different when it comes to quantum states produced by complex laboratory equipment, which has the nasty tendency of not being as stable as your mobile telephone (for those who don’t know: in a research lab, the stabilization and calibration of the setup typically takes months; once it is done, the actual collection of interesting data may take only a few days or even hours). Two papers, one from December 2012 and the other posted three days ago but written earlier, explore the possibility of tomography when the source of quanta does not produce the same state in each instance.
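To see why the first source of error above cannot be beaten by statistics alone, here is a small illustrative computation (toy numbers of my own choosing): even with infinitely many shots, measuring along an axis tilted away from the intended z direction biases the inferred expectation value.

```python
import numpy as np

# True Bloch vector of the prepared state (a pure state along z, for illustration)
r_true = np.array([0.0, 0.0, 1.0])

def measured_mean(r, tilt_deg):
    """Exact <sigma_n> along an axis tilted tilt_deg away from z (no statistical noise)."""
    t = np.radians(tilt_deg)
    n = np.array([np.sin(t), 0.0, np.cos(t)])  # misaligned measurement axis
    return float(n @ r)

for tilt in (0.0, 1.0, 3.0):
    print(f"tilt {tilt} deg: inferred <sigma_z> = {measured_mean(r_true, tilt):.5f}")
```

A 3° misalignment shifts the inferred ⟨σz⟩ from 1 to about 0.9986, a bias that no amount of additional data removes; harmless for one spin, but such biases can compound when estimating many entangled spins.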
Does this whole story undermine quantum tomography? Does it even cast a gloomy shadow on science? My answer is an unambiguous NO. All the previous works on tomography were done under some assumptions. Whenever those assumptions hold, whenever there is reason to trust them, those works are correct. If the assumptions can be doubted, then obviously the conclusion should be doubted too. With these new developments, people will be able to do tomography even under more relaxed assumptions: great! The lesson to be learned is: state your assumptions (OK, you may not want to state all the assumptions in all your technical papers aimed at your knowledgeable peers: but you must be aware of them, and state them whenever you write a review paper, lecture notes or similar material).
Everyone is happy with the attribution of the 2012 Nobel Prize in Physics, and I definitely am too. However, I cannot fully agree with those of my colleagues who are hailing this attribution as “a Nobel Prize for quantum information”. Serge Haroche and Dave Wineland started working on those experiments well before the idea of the quantum computer. Did they join the quantum information community, or is it the community that joined them? There is no sharp answer of course, because the cross-fertilization of ideas goes both ways; but I think that Serge and Dave would be more or less where they are even without quantum information.
With this choice, the Nobel committee endorses great developments in atomic physics and quantum optics. An endorsement of quantum information proper is still pending.
Recently, I have read a paper in a prestigious journal in physics, whose logic was a bit stretched. Let me paraphrase it for you.
Italians are known to be good soccer players. Recently, some authors have noticed that Singaporeans may also be pretty decent soccer players. In this paper, we prove that Singaporeans can even be better than Italians.
For the test, the Singaporeans were chosen from one of the many soccer schools on the island; the Italians were chosen among the finalists of the certamen ciceronianum, the most famous competition in Latin prose writing. The age and body-weight distributions were the same for both samples.
Each player was asked to try and score a penalty kick with the heel. Remarkably, the Singaporeans fared far better than the Italians. This conclusively proves that Singaporeans can be better soccer players than Italians in some tasks.
Reference: Xxxxx et al, Nature Physics Y, yyyy (20zz)
Yesterday I was talking with a colleague about some sloppy papers published in the (supposedly) best journals — don’t try to find out which papers: you won’t guess, the sample is too big. At some point, my friend said about the authors: “It will backfire on them”. He said it with a note of sadness, because he has sincere concern for those people. I have thought it myself many times. But… will it really?
How many people are prisoners of their lust and commit horrible crimes! Many, probably most, go unpunished, even by their own conscience that they have managed to silence. Why should anything happen to people who are just prisoners of their mathematics, whose only mistake consists in not noticing that their definitions do not describe reality?
How many people commit financial crimes and injustices that ruin lives, and live without worries other than that of one day being robbed of their riches by poor fellows who will be called “criminals”! Why should some scientists, very decent fellows, be the victims of unfailing divine wrath just because they embellish (I am not speaking of faking) their results in order to get an additional $5k per year of travel money?
No, I don’t think it will backfire: I don’t think we will see those sloppy works denounced and their authors forced to make amends. When the lobbying that keeps them going comes to an end, they will probably just be forgotten… but so will most of the works that one can consider “serious”: time is quite blind in erasing stuff.
If you want to uphold supposedly high standards, you must find other motives than the mere fear of being criticized one day.
How do we know that no signal can travel faster than the speed of light? There cannot be direct evidence of this fact. The indisputable fact is that the speed of light is the same in all reference frames.
To my knowledge, augmented by the browsing capability of my students, the only answer lies in the following deduction:
(1) The constancy of the speed of light in all reference frames, together with the principle of relativity, leads, as is well known, to the Lorentz transformation.
(2) Armed with the Lorentz transformation, one can rather easily show that a signal propagating faster than light would allow me to send a message to myself from the future to the past. Einstein himself was well aware of this (I don’t know whether he was the first to notice it).
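Step (2) can be made concrete with a few lines of arithmetic. In units where c = 1, the Lorentz transformation gives Δt′ = γ(Δt − uΔx); for a signal covering Δx = vΔt this is γΔt(1 − uv), which turns negative as soon as uv > 1. A toy numerical check (values chosen only for illustration):

```python
# Units with c = 1. A signal covers dx = v*dt in frame S.
# In a frame S' moving at speed u, the Lorentz transformation gives
#   dt' = gamma * (dt - u*dx) = gamma * dt * (1 - u*v),
# negative (arrival before emission) as soon as u*v > 1, i.e. v > 1/u > 1.

def dt_prime(v, u, dt=1.0):
    gamma = 1.0 / (1.0 - u * u) ** 0.5
    return gamma * dt * (1.0 - u * v)

print(dt_prime(v=0.5, u=0.8))   # subluminal signal: stays positive in every frame
print(dt_prime(v=2.0, u=0.8))   # superluminal signal: negative in the moving frame
```

With u = 0.8, any signal faster than 1/0.8 = 1.25 times light speed arrives, in the moving frame, before it was sent.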
Now, why is it a problem that I can send a message to myself from the future to the past? Normally, the answer lists all kinds of crazy things I could do, like winning all my bets and similar stuff. For me, the most dramatic consequence is all that I, and many others, can no longer do. First of all, when I receive at time t0 a message sent at time t1, I know for sure that I’ll have to send the message at time t1: it’s unavoidable. Moreover, all the events that happened between t0 and t1 that the message informs me about have also become unavoidable. For instance, the message may inform me that a friend who is walking alongside me at t0 will be killed by a car; even if this is going to happen one hour later, all I can do is inform him that his life is about to end. Isn’t this absurd enough?
Well, I definitely find it is… but not necessarily someone who believes in full determinism! For such a person, all the information about what is going to happen in the future is already fully contained in the universe now, and always has been. There is nothing wrong, in the physical universe, with some existing piece of information being stored in the neurons that call themselves “I” before the corresponding fact can be observed by all the other brains: after all, that is what happens when we predict something with the laws of physics (say, the passing of an asteroid close to the earth).
The deduction made above is absolutely conclusive only if one believes in the possibility of creating information that was previously not available. It is not such an outlandish belief: people who believe in human free will have upheld it for centuries, and within physics, quite a few interpretations of quantum physics also uphold it. But it is funny to find it appearing in a topic usually supposed to derive from special relativity alone.
Two disclaimers. First, I am NOT claiming that I believe something can go faster than light — indeed, since I believe in free will, I do find the deduction above perfectly convincing. Second, I am also aware that, if anything were to go faster than light, it would have to be massless: for massive objects, the Lorentz transformation predicts an inertia that diverges as the speed approaches that of light, independently of any argument about signaling to the past.
P.S. Within a few hours of the first posting, I noticed another important assumption in the deduction above: the assumption that any physical phenomenon can be harnessed to send a signal — even more specifically, that it can be harnessed to implement the protocol that allows sending messages to the past (which involves sending the superluminal signal to an observer in relative motion and having it reflected back to myself).
The advantage of holidays: one does not need to be serious all the time… and this is probably the only time when the really serious questions can come to the surface. I have jotted down three of them here, each one followed by an explanation of the context – I have a few more questions, and more specific ones, but these will become research projects for my students and I am not going to reveal them here.
Question 1: is quantum physics imposing on us by its sheer weight?
The inadequacy of classical physics to describe nature is an established fact. Post factum, even the source of the problem has been clearly identified: the “classical prejudice” consists in believing that a definite value (true or false) can be assigned to any physical property at any time. But physics is not in a state of despair: positively, quantum physics has scored an impressive list of successes and is presently unchallenged. But what is quantum physics?
There is no characterization of quantum physics in terms of a small set of physical principles or phenomena. Relativity arises from the constancy of the speed of light, while the only compact definition of quantum physics is the description of its mathematical recipes. The legitimacy of those recipes is justified a posteriori: the number of correct predictions is so large that one must be very careful before even daring to challenge a theory with such achievements. This is very reasonable, but should we just be content with it? I would really like to be able to say what quantum physics is in a few sentences, instead of burying the question under the sheer weight of its achievements.
This question has no answer yet, but the answer may come one day through works like http://arxiv.org/abs/1112.1142. The next two questions are more undefined.
Question 2: how necessary is quantum physics?
Here it gets very speculative – but don’t forget, I was on holidays. It is well known that our universe is extremely finely tuned, and this suggests prima facie that it has been tuned by an intelligent being. For several reasons, it seems desirable to have at least an alternative to such a conclusion. The currently fashionable alternative goes along the following lines: our universe would be extraordinary if it were unique; however, if all kinds of universes are being “tried” in parallel, there is nothing astonishing about us living in the “right” one – by definition, if the universe is the wrong one, we cannot be there.
At first, this solution is meant to convey the idea of universes similar to ours, but in which the values of some physical constants differ. One may also admit that universes with more (or fewer) dimensions co-exist with ours. However, to be rigorous, one must admit the possibility of universes with laws that are absolutely different from ours: not only in the values of constants or in the number of dimensions, but in the very meaning of what “matter” is. Or maybe not? Maybe some features of our universe, duly extrapolated, are a super-universal necessity? Maybe reality is more constrained than speculative logic? As you see, we are not going to find the ultimate answer to this question – but it is good at times to sit at the verge of nonsense and feel its vertiginous call.
Question 3: emergence, seriously?
When Nobel Prize winners feel the need to contribute seriously to humanity, they write books and give talks about their vision of the world in prophetic terms, hoping that the future will vindicate part of their vision. In this exercise, particle physicists tend to adopt a bottom-up approach: everything is made of elementary particles and, in principle, everything could be explained at that level (though admittedly we are happy not to have to, when it comes to putting a satellite in orbit). Condensed-matter physicists like to convey a different view, one in which the mess… sorry, the complexity they deal with every day is presented as irreducible: they like to say that new physics emerges at scales larger than that of elementary particles. Their favorite example is the quantum Hall effect: it cannot arise without disorder in the atomic arrangement, and yet most of its features are described only in terms of fundamental constants (think about it calmly: each atom sees some disorder around itself, but somehow all the atoms together manage to forget the details of the mess and act in a clean, universal way). Sounds nice… but somehow, emergentists have never managed to look consistent in my eyes.
On the one hand, as noted above, operational emergentism is a necessity: it is a practical impossibility to use many-body quantum physics to describe the motion of a satellite, no human-built computer will ever be able to do that. But if this is the meaning of emergence, it is trivial. The deep question is whether emergence is real. Let us ask it this way: is nature doing physics from bottom-up, performing the computation that we can’t dream of simulating? Or, on the contrary, does it really have layers of complexity? When hard pressed, it seems to me that even the emergentists are scared of what their idea may ultimately mean, namely that order may appear in some cases from nothing below: a very suspicious conclusion, especially for the evolutionist cosmogony of our time…
You want my opinion? Well, when I was a teenager and all my friends started smoking, by that very fact I started finding smoking silly. In the same vein, since the multiverse is so popular, I believe in one single universe, ours; since everyone believes that everything is quantum, I believe that there may be a real boundary where quantum behavior stops; and I am sympathetic with the emergentists. On holidays, one can afford to be wrong.
I have finally read Galbraith’s A Short History of Financial Euphoria, which Alain Aspect suggested to me during a random dinner chat a few months ago. It’s nice: it’s the first time I have understood something about finance. And it triggered a concern about academia.
In finance as well as in academia, people often fall into euphoria over something that is, by all rational standards, rather worthless. In my field of research, for instance, the latest craze is the following process:
- Write down a new version of some criterion that tests that “something is quantum” (a new Bell inequality, a new test of contextuality, a version of Leggett-Garg…); the simpler — the more trivial — the better, because of point 2.
- Find a couple of friends to do an experiment for you. Better if they have been running their setup for ages and have exhausted all the serious science that could possibly be done with it, because they will be more than happy to learn that their old machinery can still be used to perform “fundamental tests”. Moreover, since your test is simple and simple quantum physics has been tested to exhaustion, you have no doubt that the experimental results will uphold your theory.
- If you can, present it as “the first step towards [a big goal]”. Never mind that it is rather the last use of a setup that has had its time (I refrained from using “swan song”, because the last song of the swan is supposed to be the most beautiful; the last concert of an 80-year-old pop star would be a more appropriate metaphor). If you can’t invoke the future, present it as “the conclusive proof of [some quantum claim]”. Never mind that the claim is always the same, namely, that the results of measurements are not pre-established, that there is intrinsic randomness, or however you want to phrase it. Also never mind the fact that there cannot be a “conclusive proof” every month.
The euphoria mechanism is sustained as follows:
- The big journals (Nature at the forefront) prefer to publish tons of poor science rather than risk losing a single real breakthrough. So, if someone claims to have solved “the mystery of the quantum” (the general readership of Nature finds quantum physics mysterious), better to take them seriously.
- In turn, people notice that “if you do that, you publish in Nature”. Since “that” is not that difficult after all, it’s worthwhile going for it.
- Once you have published in Nature (or Science or…), you are hailed as a hero by the head of your Department, by the communication office of your university, by the agencies that granted you the funds.
- Put yourself now at the other end, namely in the place of the one who would like to raise a dissenting voice and reveal the triviality of the result. All the legitimate institutions (peer-reviewed journals, heads of prestigious Departments, grant agencies, even popular magazines and newspapers!) are against you. Isn’t it “obvious” then that you are only venting your jealousy, the jealousy of the loser?
So far, the analogy with financial euphoria is clear. I guess (though I have not studied the statistics) that the speed of the crash is also analogously fast: it happens when some of the editors of the main journals make a conscious decision to have “no more of that”, because they realize that there is really nothing to gain. The rumor spreads that “refereeing has become tough”; the journals are accused of having become irrational since “if they accepted the previous paper, why do they refuse this one” (while it’s one of their few moments of rationality).
And the consequences? The same too, but fortunately without criminal pursuits, despair and suicides. The very big fish get out unscathed: either their science is really serious (that is, they have invested only a small amount of their scientific capital in the euphoric topic), or their power is really big (that is, they have invested only a small amount of their political power in backing the euphoria). The opportunists will try to follow the wind, as they should, and will be forgotten, as they should. Those who face an uncertain destiny are the young fellows who were doing serious science when the euphoria caught them at the right time and in the right place. Because of this, they have been raised to prominence; somehow, all their capital is invested in that topic. Will they be able to find their way out and continue doing serious science? Or will they end up teaming up with their buddies, setting up a specialized journal for themselves and publishing there until old age? If one day you find me as the founder of a journal called “Nonlocality”, please wake me up.