Sunday, February 10, 2013

Quantum correspondence on black holes

Some Q&A from correspondence about my recent paper on quantum mechanics of black holes. See also Lubos Motl's blog post.
Q: Put another way (loosely following Bousso), consider two observers, Alice and Bob. They steer their rocket toward the black hole and into "the zone" or "the atmosphere". Then Bob takes a lifeboat and escapes to asymptotic infinity while Alice falls in. I hope you agree that Bob's and Alice's observations should agree up to the point where their paths diverge. On the other hand, it seems that Bob, by escaping to asymptotic infinity, can check whether the evolution is unitary (or at least close to unitary). I wonder which parts of this you disagree with.

A: If Bob has the measurement capability to determine whether Psi_final is a unitary evolution of Psi_initial, he has to be able to overcome decoherence ("see all the branches"). As such, he cannot have the same experience as Alice of "going to the edge of the hole" -- that experience is specific to a certain subset of branches. (Note: Bob not only has to be able to see all the semiclassical branches, he also has to be able to detect small-amplitude branches where some of the information leaked out of the hole, perhaps through nonlocality.) To me, this is the essential content of complementarity: the distinction between a super-observer (who can overcome decoherence) and ordinary observers who cannot. Super-observers do not (in the BH context) record a semiclassical spacetime, but rather a superposition of such (plus even more wacky low-amplitude branches).
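To make the super-observer versus ordinary-observer distinction concrete, here is a minimal toy sketch (my own illustration, not a calculation from the paper): a single qubit in a superposition gets recorded by a handful of environment qubits. An ordinary observer with access only to the qubit sees a mixed state, i.e., two decohered branches; confirming that the global evolution is still unitary (the overall state is still pure) requires a joint measurement on the qubit and every environment qubit.

```python
# Toy illustration: decoherence hides coherence from any observer who
# cannot measure the environment jointly with the system.
import numpy as np

def kron_all(ops):
    """Tensor product of a list of vectors/operators."""
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def cnot(control, target, n_qubits):
    """CNOT acting on n_qubits, as a permutation matrix."""
    dim = 2 ** n_qubits
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n_qubits - 1 - k)) & 1 for k in range(n_qubits)]
        if bits[control] == 1:
            bits[target] ^= 1
        j = sum(b << (n_qubits - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1.0
    return U

# System starts in (|0> + |1>)/sqrt(2); n_env environment qubits start in |0>.
n_env = 8
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (zero + one) / np.sqrt(2)
psi = kron_all([plus] + [zero] * n_env)

# "Decoherence": each environment qubit records which branch it is on.
n_qubits = 1 + n_env
for t in range(1, n_qubits):
    psi = cnot(0, t, n_qubits) @ psi

# Ordinary observer: reduced density matrix of the system qubit alone.
rho = np.outer(psi, psi.conj())
rho_sys = rho.reshape(2, 2 ** n_env, 2, 2 ** n_env).trace(axis1=1, axis2=3)
print(np.round(rho_sys, 3))                  # diag(0.5, 0.5): two branches, no coherence

# Super-observer: the global state is still pure (unitarity intact).
print(np.isclose(np.trace(rho @ rho), 1.0))  # True
```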

In the paragraph of my paper that directly addresses AMPS, I simply note that the "B" used in logical step (1) and the "B" used in step (2) are totally different objects. One is defined on a specific branch, the other is defined over all branches. Perhaps an AMPS believer can reformulate my compressed version of the firewall argument to avoid this objection, but I think the original argument has a problem.


Q: I think all one needs for the AMPS argument is that the entanglement entropy of the radiation decreases with the newly emitted quanta. This is, of course, a very tough quantum computation, but I don't see the obstruction to it being run on disparate semiclassical branches, to use your language. I was imagining doing a projective measurement of the position of the black hole (which should be effectively equivalent to decoherence of the black hole's position); this still leaves an enormous Hilbert space of states associated with the black hole unprojected/cohered. I am not sure whether you are disagreeing with that last statement or not, but let me proceed. Then it seems we are free to run the AMPS argument. There is only a relatively small amount of entanglement between the remaining degrees of freedom and the black hole's position. Thus, unitarity (and Page-style arguments) suggests a particular result for the quantum computation by Bob mentioned above, on whichever branch of the wave function we are discussing (which seems to be effectively the same branch that Alice is on).

A: An observer who is subject to the decoherence that spatially localizes the hole would find S_AB to be much larger than S_A, where A are the (early, far) radiation modes and B are the near-horizon modes. This is because it takes enormous resources to detect the AB entanglement, whereas A looks maximally mixed. I think this is discussed rather explicitly in arXiv:1211.7033 -- one of Nomura's papers that he made me aware of after I posted my paper. Measurements related to the unitarity, purity, or entanglement of, e.g., the AB modes can only be implemented by what I call super-observers: they would see multiple BH spacetimes. Since at least some A and B modes move on the light cone, these operations may require non-local actions by Bob.
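A small worked example of the S_AB versus S_A point (again a toy of my own, not the calculation in arXiv:1211.7033): take one "early radiation" mode A maximally entangled with one "near-horizon" mode B. The super-observer assigns the pair a pure state, S_AB = 0, even though A alone is maximally mixed. An observer without the enormous resources needed to detect the AB correlations can only characterize A and B separately, and so effectively assigns the pair the product of the two marginals, for which S_AB = S_A + S_B > S_A.

```python
# Toy contrast: the joint entropy seen with and without access to the
# AB correlations, even though the global state is pure.
import numpy as np

def entropy_bits(rho):
    """Von Neumann entropy of a density matrix, in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# A = one "early radiation" mode, B = one "near-horizon" mode,
# maximally entangled: |psi> = (|00> + |11>)/sqrt(2).
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_AB = np.outer(psi, psi)

# Marginals: each mode on its own looks maximally mixed (thermal-like).
rho_A = rho_AB.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
rho_B = rho_AB.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

print(entropy_bits(rho_AB))                 # 0.0: super-observer sees AB as pure
print(entropy_bits(rho_A))                  # 1.0: A alone is maximally mixed
print(entropy_bits(np.kron(rho_A, rho_B)))  # 2.0: joint entropy inferred
                                            # without access to AB correlations
```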


Q: Do you think there is an in-principle obstruction that prevents observers from overcoming decoherence? Is there some strict delineation between what can be put in a superposition and what cannot?

A: This is an open question in quantum foundations: at what point are there not enough resources in the entire visible universe to defeat decoherence, so that one has de facto wavefunction collapse? Omnes wrote a whole book arguing that once you have decoherence due to Avogadro's number of environmental DOF, the universe does not contain sufficient resources to detect the other branch. It does seem true to me that if one wants to make the BH paradox sharp, which requires that the mass of the BH be taken to infinity, then, yes, there is an in-principle gap between the two. The resources required grow exponentially with the BH entropy.
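For a sense of scale, a back-of-the-envelope using only the standard Bekenstein-Hawking entropy (the numbers below are my own rough estimate): a solar-mass black hole has S/k_B of order 10^77, so a super-observer would have to manipulate a Hilbert space of dimension of order exp(10^77), and S grows like M^2 as the mass is taken to infinity.

```python
# Back-of-the-envelope (rough, assumes the standard Bekenstein-Hawking
# entropy S = 4*pi*k_B*G*M^2/(hbar*c)): the dimension of the black hole
# Hilbert space, exp(S/k_B), for a solar-mass hole.
import math

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar  = 1.055e-34   # reduced Planck constant, J s
c     = 2.998e8     # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

S_over_kB = 4 * math.pi * G * M_sun**2 / (hbar * c)
print(f"S/k_B     ~ {S_over_kB:.1e}")      # ~ 1e77
print(f"dim(H_BH) ~ exp({S_over_kB:.1e})")
# S grows like M^2, so as M is taken to infinity the resources needed to
# "see all the branches" diverge: the in-principle gap.
```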
