Category Archives: Philosophy

Occasionally, I may be tempted to extrapolate… I’ll try to keep this to a minimum :)

Determinism and the speed of light

How do we know that no signal can travel faster than the speed of light? There cannot be direct evidence of this fact. The indisputable fact is that the speed of light is the same in all reference frames.

To my knowledge, augmented by the browsing capabilities of my students, the only answer lies in the following deduction:

(1) The constancy of the speed of light in all reference frames, together with the principle of relativity, leads, as is well known, to the Lorentz transformation.

(2) Armed with the Lorentz transformation, one can rather easily show that a signal propagating faster than light would allow me to send a message to myself from the future to the past. Einstein himself was well aware of this (I don’t know whether he was the first to notice it).
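The first half of point (2) can be checked numerically. This sketch is mine, not part of the original deduction: in units where c = 1, an event reached by a faster-than-light signal from the origin is assigned a negative time coordinate in a suitably boosted (but still subluminal) frame, i.e. the arrival precedes the emission.

```python
import math

c = 1.0  # units in which the speed of light is 1

def lorentz_t(t, x, v):
    """Time coordinate of the event (t, x) seen from a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    return gamma * (t - v * x / c**2)

# A hypothetical signal leaves the origin at speed u = 2c and reaches
# the event (t = 1, x = 2).
t_arrival, x_arrival = 1.0, 2.0

# In any frame moving at velocity v with c**2/u < v < c, that arrival
# is assigned a negative time: it happens *before* the emission at (0, 0).
v = 0.8  # c**2/u = 0.5 < 0.8 < 1
t_prime = lorentz_t(t_arrival, x_arrival, v)
print(t_prime)  # negative (about -1): emission and arrival are reordered

# A subluminal signal (u = 0.5c, reaching the event (1, 0.5)) never
# suffers this fate: its time coordinate stays positive in every frame.
print(lorentz_t(1.0, 0.5, v))  # about +1: order preserved
```

Turning this reordering into a message to myself requires the round trip described in the P.S. at the end of this post: the superluminal signal is sent to an observer in relative motion, who reflects it back.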

Now, why is it a problem that I can send a message to myself from the future to the past? Normally, the answer lists all kinds of crazy things I could do, like winning all my bets and similar stuff. For me, the most dramatic consequence is all that I, and many others, could no longer do. First of all, when I receive at time t0 a message sent at time t1, I know for sure that I’ll have to send that message at time t1: it is unavoidable. Moreover, all the events between t0 and t1 that the message informs me about have also become unavoidable. For instance, the message may inform me that a friend who is walking alongside me at t0 will be killed by a car; and even if this is going to happen one hour later, all I can do is inform him that his life is about to end. Isn’t this absurd enough?

Well, I definitely find it is… but not necessarily someone who believes in full determinism! For such a person, all the information about what is going to happen in the future is already fully contained in the universe now, and always has been. There is nothing wrong, in such a physical universe, with some existing piece of information getting stored in the neurons that call themselves “I” before the corresponding fact can be observed by all the other brains: after all, that is what happens whenever we predict something with the laws of physics (say, the passing of an asteroid close to the earth).

The deduction made above is absolutely conclusive only if one believes in the possibility of creating information that was not previously available. This is not such an outlandish belief: people who believe in human free will have upheld it for centuries, and within physics, quite a few interpretations of quantum physics uphold it as well. But it is funny to find it appearing in a topic usually supposed to derive from special relativity alone.

Two disclaimers. First, I am NOT claiming that I believe something can go faster than light — indeed, since I believe in free will, I do find the deduction above perfectly convincing. Second, I am also aware that, if anything were to go faster than light, it would have to be massless: for massive objects, the Lorentz transformation predicts inertia growing without bound as the speed approaches that of light, independently of any argument about signaling to the past.

P.S. Within a few hours of the first post, I noticed another important assumption in the deduction above: the assumption that any physical phenomenon can be harnessed to send a signal — even more specifically, that it can be harnessed to implement the protocol that allows sending messages to the past (which requires sending the superluminal signal to an observer in relative motion and having it reflected back to myself).

Questions without answer

The advantage of holidays: one does not need to be serious all the time… and this is probably the only time when the really serious questions can come to the surface. I have jotted down three of them here, each followed by an explanation of its context – I have a few more questions, and more specific ones, but those will become research projects for my students and I am not going to reveal them here.

Question 1: is quantum physics imposing on us by its sheer weight?

The inadequacy of classical physics to describe nature is an established fact. Post factum, even the source of the problem has been clearly identified: the “classical prejudice” consists in believing that a definite value (true or false) can be assigned to any physical property at any time. But physics is not in a state of despair: on the positive side, quantum physics has scored an impressive list of successes and is presently unchallenged. But what is quantum physics?

There is no characterization of quantum physics in terms of a small set of physical principles or phenomena. Relativity arises from the constancy of the speed of light, while the only compact definition of quantum physics is the description of its mathematical recipes. The legitimacy of those recipes is justified a posteriori: the number of correct predictions they yield is so large that one must be very careful before even daring to challenge a theory with such achievements. This is very reasonable, but should we just be content with it? I would really like to be able to say what quantum physics is in a few sentences, instead of burying the question under the sheer weight of the number of its achievements.

This question has no answer yet, but the answer may come one day through works along these lines. The next two questions are even less well defined.

Question 2: how necessary is quantum physics?

Here it gets very speculative – but don’t forget, I was on holidays. It is well known that our universe is extremely finely tuned, and this suggests prima facie that it has been tuned by an intelligent being. For several reasons, it seems desirable to have at least an alternative to such a conclusion. The currently fashionable alternative goes along the following lines: our universe would be extraordinary if it were unique; however, if all kinds of universes are being “tried” in parallel, there is nothing astonishing in our living in the “right” one – by definition, if a universe is the wrong one, we cannot be there.

At first, this solution is meant to convey the idea of universes similar to ours, but in which the values of some physical constants differ. One may also admit that universes with more (or fewer) dimensions co-exist with ours. However, to be rigorous, one must admit the possibility of universes with laws that are absolutely different from ours: different not only in the values of the constants or in the number of dimensions, but in the very meaning of what “matter” is. Or maybe not? Maybe some features of our universe, duly extrapolated, are a super-universal necessity? Maybe reality is more constrained than speculative logic? As you see, we are not going to find the ultimate answer to this question – but it is good at times to sit at the edge of nonsense and feel its vertiginous call.

Question 3: emergence, seriously?

When Nobel Prize winners feel the need to contribute seriously to humanity, they write books and give talks about their vision of the world in prophetic terms, hoping that the future will vindicate part of their vision. In this exercise, particle physicists tend to adopt a bottom-up approach: everything is made of elementary particles and, in principle, everything could be explained at that level (though admittedly we are happy not to have to, when it comes to putting a satellite in orbit). Condensed matter physicists like to convey a different view, one in which the mess… sorry, the complexity they deal with every day is presented as irreducible: they like to say that new physics emerges at scales larger than that of elementary particles. Their favorite example is the quantum Hall effect: it cannot arise without disorder in the atomic arrangement, and yet most of its features are described in terms of elementary constants alone (think about it calmly: each atom sees some disorder around itself, but somehow all the atoms together manage to forget the details of the mess and act in a clean, universal way). Sounds nice… but somehow, emergentists have never managed to look consistent in my eyes.

On the one hand, as noted above, operational emergentism is a necessity: it is a practical impossibility to use many-body quantum physics to describe the motion of a satellite; no human-built computer will ever be able to do that. But if this is the meaning of emergence, it is trivial. The deep question is whether emergence is real. Let us ask it this way: is nature doing physics from the bottom up, performing the computation that we cannot dream of simulating? Or, on the contrary, does it really have layers of complexity? When hard pressed, it seems to me that even the emergentists are scared of what their idea may ultimately mean, namely that order may in some cases appear from nothing below: a very suspicious conclusion, especially for the evolutionist cosmogony of our time…


You want my opinion? Well, when I was a teenager and all my friends started smoking, by that very fact I started finding smoking silly. In the same vein, since the multiverse is so popular, I believe in one single universe, ours; since everyone believes that everything is quantum, I believe that there may be a real boundary where quantum behavior stops; and I am sympathetic to the emergentists. On holidays, one can afford to be wrong.

Everyone is speaking of it, part II

The more I hear about this result, the more I fear that the media have picked it up only because they misread the meaning of the title… Let me explain what is done there as simply as I can. I’ll let the reader decide whether they think the media could have understood this 🙂

Let L (lambda in the paper) be a list of deterministic instructions of the type {“Measurement A –> give result a”, for all measurements}. Since quantum states do not predict deterministic results for all measurements, a single list is trivially inadequate. But there is a very natural way to generate randomness: just pick different lists {Lk}, each with probability pk. So the model is:

Quantum state of a single object <–> {Lk, pk}.

What the paper proves is that no two quantum states can share any list: the set of lists with non-zero probability uniquely identifies a state. In other words, giving the possible lists, or even just one of them, is equivalent to describing the state…

… for a single object! Indeed, Bell’s theorem proves that neither a product of lists {La,pa}x{Lb,pb}, nor even a single correlated list of products {LaxLb, pab}, can describe entanglement. So lists just don’t seem to do the job. Personally, I can’t believe that the randomness of one quantum object comes from a list, when we know that the randomness of two quantum objects cannot come from a list.
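The inadequacy of lists for two objects can be checked by brute force in the simplest Bell scenario (the CHSH one: two possible measurements with outcomes ±1 on each side). This sketch is mine, not from the paper: every deterministic product list stays within the local bound of 2, so no mixture {LaxLb, pab} of them can exceed it either, whereas entangled states reach 2√2.

```python
import itertools, math

# A deterministic product list assigns an outcome +/-1 to each of the two
# possible measurements on each side: (a0, a1) for Alice, (b0, b1) for Bob.
best = 0.0
for a0, a1, b0, b1 in itertools.product([+1, -1], repeat=4):
    chsh = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    best = max(best, abs(chsh))

print(best)              # 2: the local (Bell) bound, for every list
print(2 * math.sqrt(2))  # ~2.828: the value reached by quantum entanglement
```

Since the CHSH quantity is linear, averaging over lists with probabilities pab cannot beat the best single list: the bound of 2 holds for the whole model.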

In the same vein, I have a small problem with the logic of the proof. One constructs a family of product states, which should obviously be described by products of lists, and measures them by projecting onto a suitable family of entangled states, which… which… wait a second: how does one describe entangled states in that model?? It seems that the closest attempt was Spekkens’ toy model, which reproduces many nice features of quantum physics, but unfortunately not (guess what?) the violation of Bell’s inequalities. Maybe the contradiction exploited in the proof comes from the fact that there is no description of entangled states in a model with lists?

That being said, this paper does add something for those who were still trying to believe in lists as explaining quantum randomness — and the more this idea is shown to be inadequate, the better 🙂

Note added: I have been convinced that this post misses the point, but it triggered some nice follow-up; so please read the subsequent thread of comments: the “truth” may be at the bottom — or in the exchange 😉

Everyone is speaking of it

Everyone in my field has been fascinated by a work of Pusey, Barrett and Rudolph. I did not understand it, so I wrote to the authors. Jon Barrett replied in a crystal clear way. Here are his words on what they have proved:

“Suppose there is a hidden variable model, such that a (single) system has a hidden state lambda, and when I prepare the system in a quantum state |phi>, this actually corresponds to some probability distribution rho_{phi}(lambda). If I prepared |psi> instead, then this would correspond to the probability distribution rho_{psi}(lambda). It is obvious that rho_{phi}(lambda) and rho_{psi}(lambda) cannot be the same distribution. But we ask, can it happen that the distributions rho_{psi} and rho_{phi} overlap? The answer is obviously “no” if |phi> and |psi> are orthogonal. But what if they are non-orthogonal? We show that the answer is “no” in this case too. For any pair of quantum states which are not identical, the distributions rho_{phi}(lambda) and rho_{psi}(lambda) must be completely disjoint. Hence for any lambda, there is only one possible quantum state such that Probability(lambda|psi) is nonzero. This means that if someone knows lambda, then they know the quantum state. The whole of the quantum state must be “written into” the actual physical state of the system”. Jon also pointed me to Matt Leifer’s post, which is indeed a very clear critical appraisal.
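A side remark of mine, not from the authors: the reason the non-orthogonal case is the interesting one is that non-orthogonal states cannot be distinguished with certainty by any measurement, so overlapping distributions over lambda would have looked perfectly natural. A minimal check of the overlap for the standard pair |0> and |+>:

```python
import math

# |0> and |+> = (|0> + |1>)/sqrt(2), written as real 2-component vectors
ket0 = (1.0, 0.0)
ket_plus = (1.0 / math.sqrt(2), 1.0 / math.sqrt(2))

# Squared overlap |<0|+>|^2: nonzero, so the two preparations cannot be
# told apart with certainty by any single measurement
overlap = sum(a * b for a, b in zip(ket0, ket_plus)) ** 2
print(overlap)  # about 0.5, yet PBR forces rho_0 and rho_+ to be disjoint
```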

Now, why are people excited?

Well, many seem to argue that this paper falsifies some attempts at “being moderate”, cornering people into adopting one of the following positions:

* A realism with “crazy” features (many-worlds with the existence of all potentialities; or Bohmian mechanics with potentials that change non-locally);

* A complete renunciation of realism: we cannot hold any meaningful discourse on quantum physics; we can just apply the theory that gives us correct predictions for our observations.

I am not sure that the alternatives are so sharp… anyway, now I have to go back to atomic physics and figure out formulas that describe reality — sorry: the perception of the experimentalists 😉


Trust science news?

Yesterday I was invited as a panelist in a discussion about objectivity in science, organized by the students of the philosophy interest group (they don’t seem to use an acronym, I wonder why…). I told the story of the black paper of quantum cryptography and the reactions it provoked: nobody ever questioned the “truth” of what was written there; some experts legitimately questioned the “convenience” of writing that piece (which admittedly had too pessimistic an undertone); and some non-experts got mad because they had been using “the success of cryptography” to push their own agenda and did not expect those problems.

The other panelists and the students contributed many interesting ideas. I select two of them for consideration:

  1. A student described quantum physics as “not predictive”. I had to correct him: it is probably the most predictive sector of science, both in precision and in scope. But it is true that this is NOT the perception people have of quantum physics: quantum physics is associated with weird claims (true) bordering on science fiction (wrong). It is a real priority in the communication of science to convey the idea that quantum physics is first and foremost a solid body of theoretical and experimental knowledge. As for its weirdness, it is fascinating, not as funky science fiction, but as deep philosophy of nature!
  2. Well-known problems in the communication of science were raised: overstatements by scientists themselves and by the media, the proliferation of crackpots who pass themselves off as “experts” on some topic… The moderator then asked: how can the public tell right from wrong? One of us gave the only possible answer: “ask someone you trust”. The bottom line is that science is a human endeavor, in which I believe sound knowledge can be reached (I am a “realist” in this sense), but it is far from “brute evidence”.

Hmmm… I just wonder how close to brute evidence we can come in quantum physics with the device-independent program… I’ll have to try and explain this to the people I met yesterday 🙂

Q & indeterminism

In the workshop I attended last week, an important question was raised: does quantum physics imply indeterminism? We have all been told so in the basic lectures, but… how do we know? I addressed this question partly in another post, to which I refer for more details; but I prefer to devote a new, specific post to this important and often-asked question. So here is my answer:

With one-particle phenomena, we can’t know: hidden-variable descriptions, however hated and non-orthodox they may be, are possible. This means that it is possible to keep a deterministic view of such phenomena. So, if you have been told that the double-slit or the Stern-Gerlach experiments necessarily imply indeterminism, think again.

When one goes to two particles, the violation of Bell’s inequalities implies that at least one of the following three assumptions is wrong (see the other post):
1. Determinism (or outcome independence)
2. No-signaling
3. “Free will”, or more precisely, the fact that the choices of the measurements are not correlated with the source.
Adepts of Bohmian mechanics give up 2 (their “quantum potential” is a signaling hidden variable); adepts of many-worlds interpretations give up 3, in a complicated way. If you want to keep 2 and 3, then indeed quantum phenomena imply indeterminism. An important remark: it is not quantum theory (i.e. a mathematical construction), but the observed violation of Bell’s inequalities (i.e. a fact), that implies indeterminism.

Workshop about Q in school

I am attending a very interesting workshop with many secondary school teachers from Switzerland. It’s in French, but I blog about it in English. Today, two notes:

Note 1

Jean-Marc Levy-Leblond presented a nice conceptual framework for understanding how the notion of “wave-particle duality” arose. He pointed out that the classical notion of a “particle” is discontinuous (or discrete) both in its spatial extension and in its quantity; the classical notion of a field is continuous in both. The “quantons” (he likes this term, and it is practical indeed) are continuous in spatial extension but discrete in quantity.

A nice hint… but no need to expand: he immediately said that, nowadays, the notion of “wave-particle duality” should just be erased forever from our vocabulary — and I could not agree more.

Note 2

This note is just devoted to stating my admiration for the teachers I talked to (to be extended, by extrapolation, to all the participants). With some, I even ended up already talking about the latest results on axiomatics, on device-independence, on indeterminism… I’ll blog about these ideas one day. Now it’s dinner time.

The meaning of “violation of Bell’s inequalities”

I can safely say that I have known for a long time what the meaning of Bell’s inequalities is, and I have even tried to convey it (I’ll post my usual presentation soon, for comparison). But yesterday, I heard one of the clearest expositions ever, one that may even re-shape the way I present these topics in the future. It was a talk by Michael Hall, based on one of his recent works. Let me try to summarize it here, for everyone’s benefit and my own record.

The starting point is an attempt to explain quantum correlations as deriving from some “underlying information” [a much better name than “local hidden variables” indeed!]. In this context, one would like to make three very plausible assumptions:

  1. Determinism (or equivalently “outcome independence”): the “underlying information” specifies the outcomes (or: given the “underlying information”, there are no correlations).
  2. No-signaling: even knowing the “underlying information”, the marginal statistics of one particle are independent of the measurement performed on the other particle.
  3. Measurement independence: the “underlying information” does not have any influence on the measurement settings that are used.

Given these assumptions, Bell’s inequalities follow. Since quantum correlations violate the inequalities, at least one of the three assumptions must be wrong.
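The violation itself can be computed directly. Here is a minimal numpy sketch of mine (not from Michael Hall's talk): the CHSH combination of correlations, bounded by 2 under the three assumptions, reaches 2√2 for the maximally entangled state |Phi+> with standard measurement angles.

```python
import numpy as np

# Maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def meas(theta):
    """Spin observable at angle theta in the x-z plane (eigenvalues +/-1)."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def E(a, b):
    """Correlation <A(a) x B(b)> in the state |Phi+>."""
    return phi @ np.kron(meas(a), meas(b)) @ phi

# Standard CHSH angles
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(S)  # about 2.828 = 2*sqrt(2) > 2: one of the assumptions must fail
```

For |Phi+> and measurements in the x-z plane, E(a, b) = cos(a − b), which is how the four terms add up to 2√2.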

In the standard interpretation of quantum mechanics, the “underlying information” is the quantum state; 1 is fully denied, while 2 and 3 are retained. Personally, this is clearly where I stand.

In Bohmian mechanics, the “underlying information” is the quantum potential, which must change instantaneously everywhere upon a local measurement: therefore, 2 is denied, while 1 and 3 are retained.

It seems harder to give up 3: it would mean that the choice of the setting by (ideally) a human being is influenced by the source. Or is it really harder? Many physicists I talked to, once pressed sufficiently, admit to advocating a fully deterministic view of the world, in which free will is an illusion (I find this terrible, but, as we know, there is no way of disproving someone who believes it).

Anyway, as Michael said, this is “philosophy” (albeit an interesting one). What he did was to try to quantify: “how much” determinism, no-signaling or free will must one give up in order to reproduce the observations? His preliminary studies indicate that:

  1. if you want to give up determinism, you have to give it up fully, i.e. one bit per pair of particles (an extension of something we proved back in 2008);
  2. if you want to give up no-signaling, you have to signal quite a lot too;
  3. however, it is enough to give up very little free will: only 1/15 of a bit per pair of particles, according to some figure of merit.

As for myself, I am not ready to give up even that amount of free will; but it’s intriguing nonetheless.

A last note: in the discussion, Rafael Rabelo pointed out another assumption in the derivation of Bell’s theorem: the fact that the devices have no memory, i.e. they do not take into account the settings and outcomes of the previous runs. I seem to remember that this changes little, but who knows? Anyway, see how a competent and aware student may remind an expert of something that his “fully general” analysis did not take into account 🙂