Tuesday 3 June 2008

Occam's Prior and Bayesian Science

Moving aside from new forces and particle physics for a moment, here I'll talk about the core process of scientific investigation. The traditional view for the last 70 years has been the Popperian view (due to Karl Popper): that science proceeds by falsifying theories. This is all very well, but suppose more than one theory is currently viable. As discussed in New Scientist recently, we can use Bayesian probability theory to assign a probability of truth to each hypothesis. Bayesian theory works using simple maths,


log[P(H_i | E)] = log[P_0(H_i)] + Sum_evidence[ log(P(E | H_i)) - log(P(E)) ]


This allows one to update one's belief in a particular hypothesis on partial evidence, rather than requiring complete falsification. The first term above, P_0, is the prior probability of the hypothesis being true. Where, however, should this prior come from?
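
To make this concrete, here is a minimal sketch in Python of accumulating log-posteriors for two competing hypotheses as evidence arrives; the hypotheses, likelihoods and numbers are purely illustrative, not taken from any real experiment.

import math

# Accumulate log-posteriors for competing hypotheses from a stream of
# evidence, using
#   log P(H_i | E) = log P_0(H_i) + sum_E [ log P(E | H_i) - log P(E) ].
# Hypothesis names and numbers below are illustrative only.
log_post = {"H1": math.log(0.5), "H2": math.log(0.5)}

# Each evidence item gives (P(E | H1), P(E | H2)); P(E) is the mixture
# of these under the current beliefs.
evidence = [(0.8, 0.3), (0.6, 0.4)]

for p_e_h1, p_e_h2 in evidence:
    w1, w2 = math.exp(log_post["H1"]), math.exp(log_post["H2"])
    p_e = (w1 * p_e_h1 + w2 * p_e_h2) / (w1 + w2)  # marginal likelihood P(E)
    log_post["H1"] += math.log(p_e_h1) - math.log(p_e)
    log_post["H2"] += math.log(p_e_h2) - math.log(p_e)

for h, lp in log_post.items():
    print(h, math.exp(lp))

Each piece of evidence nudges the probabilities up or down without anything ever being completely falsified.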
One can't scientifically start from our cultural preference that one thing is true rather than another; instead we should have a good mathematical basis for knowing how likely a theory is to be true before we start measuring. One can begin with Occam's Razor, which states roughly that the simpler a theory is, the more likely it is to be true. In fact this can be encoded mathematically by minimum message length (possibly in more than one way), or perhaps by Kolmogorov complexity.
Either way, we get a probability factor from the minimum statement of the physics of the theory as some computer program (Kolmogorov), or as some transmitted message (MML). It's clear that as physicists we should take mathematical truth for free. We do, however, need some optimal or near-optimal language to write, or compile, our physics theories down to.
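
As a rough numerical illustration (assuming, purely for the sake of argument, that we could already count the bits needed to state each theory in some agreed language), a Solomonoff/MML-style prior weights each theory by two to the minus its description length:

# Sketch of an Occam-style prior: P_0(H) proportional to 2**(-L(H)),
# where L(H) is the description length of the theory in bits in some
# fixed reference language. The lengths below are made up.
description_length_bits = {"theory_A": 100, "theory_B": 130}

unnorm = {h: 2.0 ** (-L) for h, L in description_length_bits.items()}
total = sum(unnorm.values())
prior = {h: p / total for h, p in unnorm.items()}

for h, p in prior.items():
    print(h, p)

A theory that needs 30 more bits to state is about a billion times less probable a priori, before any data arrives.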


We don't have this optimal language yet, so it's difficult to put a number on how much adding each extra unknown constant costs our theories, or how much adding an extra field (spinor or vector) costs, or what changing a gauge group from SU(3) to SU(4) costs. So far we can guess that these things do cost something, but we have no real idea how much. So there is a case for inventing a whole new scientific field (actually it would be a mathematical subject, prior to physics) for calculating Occam's prior for each particular theory. A toy version of what such a calculation might look like is sketched below.
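
Here is that deliberately crude sketch; every bit-cost below is a placeholder I've made up, not the output of any real calculation.

# Crude placeholder cost model: charge a flat number of bits per
# unexplained real constant, per extra field, and per gauge-group
# generator, then turn the total into an Occam prior via 2**(-bits).
BITS_PER_CONSTANT = 30.0   # made-up cost of one unexplained real constant
BITS_PER_FIELD = 10.0      # made-up cost of one extra spinor or vector field
BITS_PER_GENERATOR = 2.0   # made-up cost per gauge-group generator

def description_cost(n_constants, n_fields, n_generators):
    # Total description length in bits under this toy model.
    return (n_constants * BITS_PER_CONSTANT
            + n_fields * BITS_PER_FIELD
            + n_generators * BITS_PER_GENERATOR)

# Example: enlarging the gauge group from SU(3) (8 generators) to SU(4)
# (15 generators) costs 7 * BITS_PER_GENERATOR = 14 extra bits here.
print(description_cost(0, 0, 15) - description_cost(0, 0, 8))

The point is not the numbers but the shape of the calculation: once the language is fixed, each ingredient of a theory has a definite price.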


This might not be easy; there are all sorts of philosophical points to consider. For example, the standard model of particle physics, although it fits experiments nearly perfectly (only dark energy and dark matter stand out as unexplained), is considered ugly from Occam's point of view because it has 26 real-number constants unexplained by the theory. Surely supersymmetry should then be considered uglier still: it has about 120 unknown constants. String theory (which is often, but not always, supersymmetric) is considered beautiful (a very high Occam's prior) because it has only one constant, the string tension; some, for example Luboš Motl, claim it is also unique in that it has no nearby sibling theories. Unfortunately we do need to compactify its extra dimensions down to our 3+1, and this can happen in some 10^120 different ways. Do these ways (which it has been argued can be ruled out by the anthropic principle) count against Occam's prior as much as supersymmetry's extra constants? I don't know, but I believe it would be possible in principle to calculate.
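
One crude way to attack that last question (a sketch under entirely made-up cost assumptions, reusing the placeholder per-constant cost from above): charge each unexplained constant a flat number of bits, and charge string theory log2 of the number of vacua for the choice of compactification.

import math

# Back-of-envelope comparison; all cost assignments are placeholders.
BITS_PER_CONSTANT = 30.0

sm_bits = 26 * BITS_PER_CONSTANT                # standard model: 26 constants
susy_bits = 120 * BITS_PER_CONSTANT             # supersymmetry: ~120 constants
string_bits = 1 * BITS_PER_CONSTANT + 120 * math.log2(10)  # 1 constant + picking one of ~10^120 vacua

for name, bits in [("standard model", sm_bits),
                   ("supersymmetry", susy_bits),
                   ("string theory", string_bits)]:
    print(f"{name}: ~{bits:.0f} bits, prior factor ~2^-{bits:.0f}")

Under these made-up costs the vacuum choice is cheaper than supersymmetry's extra constants, but a different bit-cost per constant could easily reverse that, which is exactly why the calculation needs to be done properly.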


A subject dealing with Occam's prior might in the end also answer the question of what happens if, some day, we have a TOE, a theory of everything. Have we then ended science with a perfect theory, or might there be a further theory, somehow simpler or more powerful, that we have overlooked? Only by measuring the complexity of our TOE, and checking that no simpler theory can fit our measurements, can we know for sure.


Even without a firm way of calculating Occam's prior, can we use the idea to shed light on physics' current problems of explaining dark energy and dark matter? I claim that we can. The standard model was nice, but it is now effectively falsified: it has no explanation for either of the two dark constituents of the cosmos. Supersymmetry, with its lightest supersymmetric particle (LSP), might be popular, but it is a whole family of theories and has nearly a hundred extra unknown constants, so Occam's prior doesn't like it much. Mirror matter doesn't have that many proponents; maybe as few as twenty physicists have worked on it. It doubles the standard model, so that it contains a second, mirror, standard model, complete with mirror photons and mirror protons etc., which hardly interact with ordinary particles at all. There are just two possible mixings between the ordinary interactions and the mirror interactions, and even if we had to count all the constants in the standard model twice (rather than assuming they have the same reason behind them and so count only once), we'd still have fewer than half the number of free constants of supersymmetry. Occam's prior says (without attempting to fully calculate it) that mirror matter is what cosmologists should be trying to falsify, not the cold dark matter from supersymmetry. A rough version of that comparison is sketched below.
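
Here is the rough comparison, again with a placeholder per-constant cost; only the constant-counting comes from the argument above.

import math

BITS_PER_CONSTANT = 30.0  # placeholder, as in the earlier sketches

# Extra unexplained constants on top of the standard model, counted
# roughly as in the text above.
extra_constants = {
    "supersymmetry": 100,             # nearly a hundred extra unknowns
    "mirror matter (shared constants)": 2,     # just the two new mixings
    "mirror matter (constants counted twice)": 2 + 26,
}

for theory, n in extra_constants.items():
    penalty_bits = n * BITS_PER_CONSTANT
    # log10 of the Occam-prior suppression factor 2**(-penalty_bits)
    print(f"{theory}: ~10^-{penalty_bits * math.log10(2):.0f} prior suppression")

However the per-constant cost is chosen, the gap between two extra mixings and a hundred extra constants is enormous.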


What about dark energy? Well, cosmologists have parametrised it and produced lots of theories of dark energy. But the only two particle-physics-motivated theories of dark energy I know of are mass-varying neutrinos (MVN) and my own axial force theory. MVN theory has one scalar field with an ad-hoc potential and one coupling constant, while my theory has a standard inverse-square potential and one force constant. So I'd claim (biasedly, of course) that the axial force theory wins by one unexplained function: the ad-hoc potential.

1 comment:

Anonymous said...

I always thought that the formalisation of Occam's razor in Bayesian probability is the maximum entropy principle.