Saturday, July 05, 2014

Physics and the Horizons of Truth


I came across a PDF version of this book online. It contains a number of fine essays, including the ones excerpted below. A recurring question concerning Gödel's incompleteness results is whether they affect "interesting" mathematical questions.
CHAPTER 21 The Gödel Phenomenon in Mathematics: A Modern View: ... Hilbert believed that all mathematical truths are knowable, and he set the threshold for mathematical knowledge at the ability to devise a “mechanical procedure.” This dream was shattered by Gödel and Turing. Gödel’s incompleteness theorem exhibited true statements that can never be proved. Turing formalized Hilbert’s notion of computation and of finite algorithms (thereby initiating the computer revolution) and proved that some problems are undecidable – they have no such algorithms.

Though the first examples of such unknowables seemed somewhat unnatural, more and more natural examples of unprovable or undecidable problems were found in different areas of mathematics. The independence of the continuum hypothesis and the undecidability of Diophantine equations are famous early examples. This became known as the Gödel phenomenon, and its effect on the practice of mathematics has been debated since. Many argued that though some of the inaccessible truths above are natural, they are far from what is really of interest to most working mathematicians. Indeed, it would seem that in the seventy-five years since the incompleteness theorem, mathematics has continued thriving, with remarkable achievements such as the recent settlement of Fermat’s last “theorem” by Wiles and the Poincaré conjecture by Perelman. Are there interesting mathematical truths that are unknowable?

The main point of this chapter is that when knowability is interpreted by modern standards, namely, via computational complexity, the Gödel phenomenon is very much with us. We argue that to understand a mathematical structure, having a decision procedure is but a first approximation; a real understanding requires an efficient algorithm. Remarkably, Gödel was the first to propose this modern view in a letter to von Neumann in 1956, which was discovered only in the 1990s.

Meanwhile, from the mid-1960s on, the field of theoretical computer science has made formal Gödel’s challenge and has created a theory that enables quantification of the difficulty of computational problems. In particular, a reasonable way to capture knowable problems (which we can efficiently solve) is the class P, and a reasonable way to capture interesting problems (which we would like to solve) is the class NP. Moreover, assuming the widely believed P ≠ NP conjecture, the class NP-complete captures interesting unknowable problems. ...
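A minimal illustration of the P/NP asymmetry the excerpt describes: checking a proposed solution is easy even when finding one is believed hard. The CNF encoding below is my own convention for the sketch, not anything from the chapter.

```python
# Sketch: verifying a SAT certificate takes time linear in the formula
# size (verification is in P), while *finding* a satisfying assignment
# is NP-complete -- believed intractable in general if P != NP.

def verify_sat(clauses, assignment):
    """Check a candidate assignment against a CNF formula.

    clauses: list of clauses; each clause is a list of ints, where
             k means "variable k is true" and -k means "variable k is false".
    assignment: dict mapping variable number -> bool.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: False, 3: True}))   # True
print(verify_sat(formula, {1: False, 2: False, 3: False})) # False
```

The asymmetry between these few lines and the exponential search they shortcut is exactly the gap the chapter identifies with the modern Gödel phenomenon.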
This volume also includes Paul Cohen's essay (chapter 19) on his work on the Continuum Hypothesis and his interactions with Gödel. See also Horizons of Truth.
Cohen: ... I still had a feeling of skepticism about Gödel's work, but skepticism mixed with awe and admiration.

I can say my feeling was roughly this: How can someone thinking about logic in almost philosophical terms discover a result that had implications for Diophantine equations? ... I closed the book and tried to rediscover the proof, which I still feel is the best way to understand things. I totally capitulated. The Incompleteness Theorem was true, and Gödel was far superior to me in understanding the nature of mathematics.

Although the proof was basically simple, when stripped to its essentials I felt that its discoverer was above me and other mere mortals in his ability to understand what mathematics -- and even human thought, for that matter -- really was. From that moment on, my regard for Gödel was so high that I almost felt it would be beyond my wildest dreams to meet him and discover for myself how he thought about mathematics and the fount from which his deep intuition flowed. I could imagine myself as a clever mathematician solving difficult problems, but how could I emulate a result of the magnitude of the Incompleteness Theorem? There it stood, in splendid isolation and majesty, not allowing any kind of completion or addition because it answered the basic questions with such finality.
My recent interest in this topic parallels a remark by David Deutsch
The reason why we find it possible to construct, say, electronic calculators, and indeed why we can perform mental arithmetic, cannot be found in mathematics or logic. The reason is that the laws of physics "happen" to permit the existence of physical models for the operations of arithmetic such as addition, subtraction and multiplication.
that suggests the primacy of physical reality over mathematics (usually the opposite assumption is made!) -- the parts of mathematics which are simply models or abstractions of "real" physical things are most likely to be free of contradiction or misleading intuition. Aspects of mathematics which have no physical analog (e.g., infinite sets) are prone to problems in formalization or mechanization. Physics (models which can be compared to experimental observation; actual "effective procedures") does not ever require infinity, although infinity may be a conceptual convenience. Hence one suspects, along the lines above, that mathematics without something like the "axiom of infinity" might be well-defined. Is there some sort of finiteness restriction (e.g., an upper bound on Gödel number) that evades Gödel's theorem? If one only asks arithmetical questions about numbers below some upper bound, can't one avoid undecidability?
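The closing question can be made concrete: once every quantifier in an arithmetical statement is bounded by a fixed N, its truth value is a finite computation, so Gödel-style undecidability cannot arise. A toy sketch, with the bound and the Goldbach-style example chosen arbitrarily for illustration:

```python
# Sketch: a bounded arithmetical statement is decidable by brute force.
# "Every even n with 4 <= n <= N is a sum of two primes" -- a finite
# check, however slow, so no incompleteness obstruction applies below
# the cutoff.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def bounded_goldbach(N):
    """Decide the Goldbach property for all even numbers up to N."""
    return all(
        any(is_prime(p) and is_prime(n - p) for p in range(2, n))
        for n in range(4, N + 1, 2)
    )

print(bounded_goldbach(1000))  # True
```

Of course, the unbounded statement (for *all* even n) is a different animal: no finite enumeration settles it, which is precisely where the bounded/unbounded distinction in the paragraph bites.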

16 comments:

Alex Jacobson said...

Proof Envy http://rjlipton.wordpress.com/2014/06/11/proof-envy/

Hacienda said...

That last paragraph you wrote was brilliant.

dxie48 said...

"primacy of physical reality over mathematics"
However, "physics" is the mind's abstraction of the physical reality, and thus it is on par with math.

"Physics does not ever require infinity"
However, it does encompass or give rise to singularity.

5371 said...

I can't imagine why anyone would find the continuum hypothesis uninteresting.
It's not just convenience but coherence that is lost if one tries to wholly exclude the infinite from physics. Nor have intuitionism, still less finitism or ultrafinitism, shown themselves to be useful mathematical tools, despite the intrinsic interest of showing that many things still work in their new context.

billyjoerob said...

Search Aristotelian mathematics Franklin, interesting stuff & new book on similar topics.

Aaron Sheldon said...

Recursion comes at the cost of decidability.

MUltan said...

That is similar to something I wrote on the Ultranet list back in 2004. If the ZF axioms include the axiom of infinity, then the existence of infinity cannot be deduced from the other axioms. The maximum physically significant number at any given time might be something like the number of Planck 4-volumes in the past light-cone raised to the power of the number of possible permutations of particles in the universe, or perhaps the number of permutations of possible particles (for instance, converting all mass to microwave-background-temperature photons). So if the universe is open (no big crunch), then the maximum possibly physically significant number grows without bound but is never a completed infinity.



The continuum problem has a length-scale cutoff well above the scale you would get by putting any of the above numbers in the denominator of a length. Probing below a certain resolution would involve using so much energy in such a small volume that persistent black holes would form.

Aaron Sheldon said...

That idea fundamentally conflicts with quantum mechanics. Only infinite-dimensional spaces support the commutation relationships observed in quantum systems. You do not get the Heisenberg uncertainty relationship in finite-dimensional spaces; there would always be at least one QFT field mode where both momentum and position were simultaneously observable.


You can prove this yourself from the basic properties of the determinant on finite dimensional spaces.
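For what it's worth, the standard finite-dimensional obstruction is usually phrased via the trace rather than the determinant: the trace of any commutator of finite matrices vanishes identically, while [X, P] = iℏI would require trace iℏn ≠ 0. A self-contained sketch (the matrix entries are arbitrary):

```python
# Sketch of the no-go argument: for any n x n matrices X and P,
# trace(XP - PX) = 0, because trace(XP) = trace(PX). But the canonical
# commutation relation [X, P] = i*hbar*I would force the trace to be
# i*hbar*n != 0 -- so it has no finite-dimensional representation.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# Any two matrices will do; these entries are arbitrary.
X = [[1.0, 2.0], [3.0, 4.0]]
P = [[0.0, 1.0], [5.0, -2.0]]

XP, PX = matmul(X, P), matmul(P, X)
comm = [[XP[i][j] - PX[i][j] for j in range(2)] for i in range(2)]
print(trace(comm))  # 0.0 -- vanishes for any finite X, P
```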

MUltan said...

That's an interesting point, which I'll have to think about. I suspect that there aren't any observable differences between literal infinity and being without bound. Renormalization's unsatisfactory meaning seems like a reason to prefer finite concepts, but physicists seem to have developed a taste for it, despite perhaps not being too clear on what they are actually doing.

highly_adequate said...

Yeah, the real problem is with putting together any kind of theory that looks like physics as we know it which does not involve infinities. People have tried -- some of them exceedingly smart people -- with no outcome that is convincing. Intuitionist and similar kinds of mathematical approaches produce their own highly counterintuitive implications.


Certainly physics as we know it involves continuous functions and therefore the continuum, so the question of the continuum hypothesis is a real one in the standard form of physics.

steve hsu said...

No actual calculation in physics requires infinity. The easiest way to see this is to note that calculations required to compare theory to experiment can be done on finite computing machines. The continuum is an idealization and we are not sure, due to quantum gravity, whether the structure of spacetime is actually continuous. (The same applies to quantum fields and even to Hilbert space.) Some *theories* of physics may invoke infinity, but careful consideration reveals that the necessity of infinity will never be *experimentally testable* (i.e., there are related theories which do not invoke infinity which cannot be excluded by experiment).

steve hsu said...

I think the evidence is quite strong that there is a "minimal length" in spacetime. The continuum is an *idealization* that comports with our intuition because this minimal length (the Planck length) is so much smaller even than the size of atoms.

http://arxiv.org/abs/hep-th/0405033

Aaron Sheldon said...

Newton and Leibniz are rolling in their graves at that statement. To start with, the whole of calculus needs not only infinity but also the continuum, and without calculus you cannot derive any of the functions used in falsification experiments.



As for space-time being discrete: considering how carefully Lorentz invariance has been tested, and the latest gamma-ray observations that severely constrain any frequency dependence of the speed of light (which would occur if space-time were discrete), my money is on both infinities and the continuum. First, because there have been no experiments to falsify those assumptions; second, because of the enormous utility of the assumptions in theory and calculation.

steve hsu said...

Lorentz invariance could be an emergent symmetry. It doesn't require anything about discreteness at small scales -- see the lattice (where Euclidean invariance is emergent at long distances), for example.


Obviously, there is a discrete version of calculus (see numerical analysis).
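A one-line instance of the "discrete calculus" point above: a derivative replaced by a finite difference quotient on a grid, as numerical analysis does routinely (the test function and step size are arbitrary choices for illustration):

```python
# Sketch: the continuum derivative f'(x) replaced by a symmetric
# (central) difference quotient with a small but finite step h.
import math

def central_difference(f, x, h):
    """Approximate f'(x) by (f(x+h) - f(x-h)) / (2h); error is O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Approximate d/dx sin(x) at x = 1; the exact value is cos(1).
approx = central_difference(math.sin, 1.0, 1e-5)
print(abs(approx - math.cos(1.0)) < 1e-9)  # True
```

Everything here is a finite computation, which is the substance of the claim that comparing theory to experiment never forces an actual infinity into the calculation.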

Aaron Sheldon said...

Then I challenge you to, for example, derive Maxwell's equations, or the stress-energy of a non-Riemannian manifold, using only finite difference equations; and remember, by your assumptions you have to state a cutoff for the calculations a priori -- you are not allowed to cheat and say "we will do such and such in finite steps, but take the steps off to infinity."


As for Lorentz invariance being discretely emergent: the subgroup of one-dimensional boosts is not cyclic, so the smallest discrete subgroup of the Lorentz symmetries would be isomorphic to the integers. Unless you want to claim that the boosts just stop at some level -- but given the momenta of protons in the LHC, electrons at SLAC, and cosmic rays, that level is very large.

Aaron Sheldon said...

Among other things infinity is the only meaningful way to understand approximation and error. That is, it provides meaning to statements of the form "if you do X for N steps we will be within Epsilon of the solution, and doing M more steps will reduce the error by Delta"
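The comment's "N steps, within epsilon" template is itself a finite statement once N and epsilon are fixed; bisection is the textbook instance (the equation solved and the step count below are illustrative choices):

```python
# Sketch: after N bisection steps on [a, b], the returned midpoint is
# within (b - a) / 2**N of a root -- a concrete "do X for N steps,
# be within epsilon" guarantee, even if its standard justification
# (the intermediate value theorem) lives in continuum mathematics.

def bisect(f, a, b, steps):
    """Bisection for a root of f in [a, b], assuming f(a), f(b) differ in sign."""
    for _ in range(steps):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m  # root lies in the left half
        else:
            a = m  # root lies in the right half
    return (a + b) / 2

# Root of x^2 - 2 in [1, 2]: after 50 steps the error is below (2-1)/2**50.
root = bisect(lambda x: x * x - 2, 1.0, 2.0, 50)
print(abs(root - 2 ** 0.5) < 1e-12)  # True
```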
