Why is the speed of light constant in the vacuum?
The constancy of the speed of light c in the vacuum was the key insight in Einstein’s work on relativity. However, this was an assumption rather than a proof. It is an assumption that has worked well, in that the resulting theory shows good agreement with many observations. The Michelson-Morley experiment directly tested the idea of Earth moving through a stationary ether, by looking for differences in the speed of light in different directions, and found no evidence to support such a theory. There is no empirical evidence that convincingly shows the speed of light to be variable in vacuo in the vicinity of Earth. However, it is possible that the speed of light is merely locally constant, and different elsewhere in the universe. In our latest paper we show why this might be so.
Fundamental questions about light
There are several perplexing questions about light:
- What is the underlying mechanism that makes the speed of light constant in the vacuum?
- What properties of the universe cause the speed of light to have the value it has?
- If the speed of light is not constant throughout the universe, what would be the mechanisms?
- How does light move through the vacuum?
- The vacuum has properties: electric and magnetic constants. Why, and what causes these?
- How does light behave as both a wave and particle? (Wave-particle duality)
- How does a photon physically take two different paths? (Superposition in interferometers)
- How does entanglement work at the level of the individual photon?
These are questions of fundamental physics, and of cosmology. Consequently there is ongoing interest in the speed of light at the foundational level. The difficulty is that neither general relativity nor quantum mechanics can explain why c should be constant, or why it should have the value it does. Nor, for that matter, does string/M-theory. Gaining a better understanding of this has the potential to bridge the particle and cosmology scales of physics.
Is the speed of light really constant? Everywhere? At all times?
There has been ongoing interest in developing theories where c is not constant. These are called variable speed of light (VSL) theories [see paper for more details]. The primary purpose of these is to explore for new physics at deeper levels, with a particular interest in quantum gravity. For example, it may be that the invariance of c breaks down at very small scales, or for photons of different energy, though such searches have been unsuccessful to date. Another approach is cosmological. If the speed of light were variable, it could solve certain problems. Specifically, the horizon and flatness problems might be resolved if there were a faster c in the early universe, i.e. a time-varying speed of light, which would provide an alternative to inflation. There are several other possible applications for a variable speed of light theory in cosmology.
However there is one big problem:
In all existing VSL theories the difficulty is providing reasons for why c should vary with time or geometric scale.
The theories require the speed of light to be different at genesis, and then somehow change slowly or suddenly switch over at some time or event, for reasons unknown. None of the existing VSL theories describe why this should be, nor do they propose underlying mechanics. This is problematic, and contributes to existing VSL theories not being widely accepted.
Cordus theory predicts the speed of light is variable, and attributes it to fabric density
In our paper [apr.v8n3p111] we apply the non-local hidden-variable (Cordus) theory to this problem. It turns out that it is a logical necessity of the theory that the speed of light be variable. The theory also predicts a specific underlying mechanism for this. Our findings are that the speed of light is inversely proportional to fabric density. This is because the discrete fields of the photon interact dynamically with the fabric and therefore consume frequency cycles of the photon. The fabric arises from aggregation of discrete force emissions (fields) from massy particles, which in turn depends on the proximity and spatial distribution of matter.
This theory offers a conceptually simple way to reconcile the refraction of light in both gravitational situations and optical materials: the density of matter affects the fabric density, and hence affects the speed of light. So when light enters a denser medium, say a glass prism, it encounters an abrupt increase in fabric density, which slows its speed. Likewise light that grazes past a star is subject to a small gradient in the fabric, hence resulting in gravitational bending of the light-path. Furthermore, the theory accommodates the constant speed of light of general relativity, as a special case of a locally constant fabric density. In other words, the fabric density is homogeneous in the vicinity of Earth, so the speed of light is also constant in this locality. However, in a different part of the universe where matter is sparser, the speed of light is predicted to be faster. Similarly, at earlier epochs when the universe was denser, the speed of light would have been slower. This also means that the results disfavour the universal applicability of the cosmological principle of homogeneity and isotropy of the universe.
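The optical half of this reconciliation can be illustrated with ordinary textbook numbers. The sketch below uses only the standard refraction formulas (the refractive index 1.5 is an assumed example value), not the fabric model itself:

```python
import math

c = 299792458.0  # speed of light in vacuum [m/s]

# Light crossing from vacuum into glass (illustrative refractive index n = 1.5).
n1, n2 = 1.0, 1.5
v_glass = c / n2                 # propagation speed inside the glass [m/s]

# Snell's law: n1*sin(theta1) = n2*sin(theta2), so the ray bends toward the normal.
theta1 = math.radians(30.0)      # angle of incidence
theta2 = math.asin(n1 * math.sin(theta1) / n2)

print(v_glass)                   # ≈ 2.0e8 m/s: slower in the denser medium
print(math.degrees(theta2))      # ≈ 19.47 degrees: bent toward the normal
```

Whatever the underlying mechanism, any account of refraction has to reproduce these standard numbers; the fabric interpretation attributes the slowdown to the abrupt increase in fabric density at the interface.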
The originality in this paper is in proposing underlying mechanisms for the speed of light. Uniquely, this theory identifies fabric density as the dependent variable. In contrast, other VSL models propose that c varies with time or some geometric-like scale, but struggle to provide plausible reasons for that dependency.
This theory predicts that the speed of light is inversely proportional to the fabric density, which in turn is related to the proximity of matter. The fabric fills even the vacuum of space, and the density of this fabric is what gives the electric and magnetic constants their values, and sets the speed of light. The speed of light is constant in the vicinity of Earth, because the local fabric density is relatively isotropic. This explanation also accommodates relativistic time dilation, gravitational time dilation, gravitational bending of light, and refraction of light. So the speed of light is a variable that depends on fabric density, hence is an emergent property of the fabric.
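The claimed dependency can be written as a toy relation. Everything in this sketch, including the function name, the normalised density unit, and the chosen reference point, is a hypothetical illustration of the stated inverse proportionality, not a quantity defined in the paper:

```python
C_LOCAL = 299792458.0  # measured speed of light near Earth [m/s]
RHO_LOCAL = 1.0        # local fabric density, normalised to 1 (hypothetical unit)

def speed_of_light(rho_fabric):
    """Toy model of the Cordus prediction: c inversely proportional to fabric density."""
    return C_LOCAL * RHO_LOCAL / rho_fabric

print(speed_of_light(1.0))  # local value, 299792458 m/s
print(speed_of_light(0.5))  # sparser region of the universe: light predicted faster
print(speed_of_light(2.0))  # denser early universe: light predicted slower
```

The essential point is that the local measured value of c then carries no special status: it is simply the value corresponding to the fabric density in our locality.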
The paper is available open access: http://dx.doi.org/10.5539/apr.v8n3p111
The fabric density concept is covered at http://dx.doi.org/10.2174/1874381101306010077.
The corresponding theory of time, which predicts that time speeds up in situations of lower fabric density, is at http://dx.doi.org/10.5539/apr.v5n6p23.
Citation for published paper:
Pons, D. J., Pons, A. D., & Pons, A. J. (2016). Speed of light as an emergent property of the fabric. Applied Physics Research, 8(3): 111-121. http://dx.doi.org/10.5539/apr.v8n3p111
Original work on physics archive (2013) : http://vixra.org/abs/1305.0148
Which is better for fundamental physics: beautiful mathematics based on pure concepts, or qualitative insights based on natural phenomena?
According to Lee Smolin in a 2015 arXiv paper, it’s the latter.
Mathematical symmetries: More or fewer?
How to find a better physics?
26 February 2016, Christchurch, New Zealand
This is an expansion of a post at Physics Forum https://www.physicsforums.com/threads/smolin-lessons-from-einsteins-discovery.849464/#post-5390859
 1. Smolin, L.: Lessons from Einstein’s 1915 discovery of general relativity. arXiv:1512.07551, 1-14 (2015). http://arxiv.org/abs/1512.07551
Fine structure constant α
This is a dimensionless constant, represented by the symbol α (alpha), which relates the electric charge, the vacuum permittivity, the Planck constant, and the speed of light.
The equation is as follows:
The impedance of free space is Zo = 1/(εo·c) = 2αh/e², with electric constant εo (also called vacuum permittivity), the speed of light in the vacuum c, and the fine structure constant α = e²/(2εohc), with elementary charge e [coulombs], Planck constant h, and c as before. All of these are generally considered to be physical constants, i.e. fixed values for the universe.
One example of how this relationship may be used is as follows. Given the elementary charge, the vacuum permittivity, and the Planck constant, the equation for α may be rearranged to give the speed of light, c = e²/(2εohα), and hence to explain why c has the value it does. The equation may be rearranged into other equivalent forms.
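As a worked numerical check of these relationships, the following sketch computes α and Zo from the standard CODATA constant values (nothing here is specific to the Cordus paper):

```python
# Verify the stated relationships between the fine structure constant,
# the electric constant, and the impedance of free space.
e = 1.602176634e-19      # elementary charge [C] (exact in SI since 2019)
h = 6.62607015e-34       # Planck constant [J s] (exact in SI since 2019)
c = 299792458.0          # speed of light in vacuum [m/s] (exact)
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m] (CODATA 2018)

alpha = e**2 / (2 * eps0 * h * c)  # fine structure constant, dimensionless
Z0 = 1 / (eps0 * c)                # impedance of free space [ohm]

print(1 / alpha)              # ≈ 137.036
print(Z0)                     # ≈ 376.73 ohm
print(2 * alpha * h / e**2)   # same value, confirming Zo = 2αh/e²
```

Rearranging, c = e²/(2·εo·h·α) recovers the speed of light from the other three constants, which is the sense in which the alpha equation "explains" the value of c.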
What is the physical meaning of the fine structure constant?
This is a more difficult question, especially when coupled with the question, Why does alpha take the value it does? This is something of a mystery.
We believe we can answer some parts of this question. In a recent paper applying the Cordus theory, it is proposed that both the vacuum permittivity and the speed of light are dependent variables, and situationally specific. It is proposed that εo represents the density of the discrete forces in the fabric, and thus depends on the spatial distribution of mass within the universe. Thus the electric constant is recast as an emergent property of the fabric, and hence of matter.
From this perspective α is a measure of the transmission efficacy of the fabric, i.e. it determines the relationship between the electric constant of the vacuum fabric, and the speed of propagation c through the fabric.
This is consistent with the observation that α appears wherever electrical forces and propagation of fields occur, and this includes cases such as electron bonding.
The reason the speed of light is limited to a certain finite value is explained by this theory as a consequence of the fabric density creating a temporal impedance. Thus a denser fabric results in a slower speed of light, which is consistent with time dilation and with optical refraction generally. In the Cordus theory the speed of light is in turn determined by the density of the fabric discrete forces, and is therefore locally constant and relativistic, but ultimately dependent on the past history of matter density in the locally available universe. Thus the vacuum (fabric) has a finite speed of light, despite the instantaneous communication across the fibril of the particule. The Cordus theory is consistent with the known impedance of free space, though it comes at it from a novel direction.
The implication is that the electric constant of free space is not actually constant, but rather depends on the fabric density, and hence on the spatial distribution of matter. The fabric density also determines the speed of light in a given situation, and α is the factor that relates the two for this universe. It would appear to be a factor set at the genesis of the universe.
Pons, D. J. (2015). Inner process of photon emission and absorption. Applied Physics Research, 7(4), 14-26. DOI: http://dx.doi.org/10.5539/apr.v7n4p24
Title: Conceptual framework for a novel non-local hidden-variable theory of physics: Cordus theory
17 Sept 2015, 15h00, venue Ers446
Content: As per http://vixra.org/abs/1104.0015
How does the synchronous interaction, or strong nuclear force, attract nucleons and hold the nucleus together?
The Cordus theory of the synchronous interaction is key to the concept of the nuclear polymer.
How does the strong force work?
Conventionally the strong nuclear force is proposed to arise by the exchange of gluons of various colour. The theory for this is quantum chromodynamics (QCD). This force is then proposed to be much stronger in attraction than the electrostatic repulsion of protons of like charge, hence ‘strong’. Rather strangely, the theory requires the force to change and become repulsive at close range. This is to prevent it from collapsing the protons into a singularity (single point). Quite how this change operates is not explained, and the theory as a whole also cannot explain even the simplest atomic nucleus, let alone any of the features of the table of nuclides. So there is a large gap between the colour force of QCD and any realistic explanation of atomic structure. QCD, gluons, and the strong attraction-repulsion force have no proven external validity: the concepts don’t extend to explain anything else.
It is time to attempt a different approach. Remember, it is necessary to explain not only how the quarks are bonded, but also how the protons and neutrons are bonded, and onward to explain why any one nuclide is stable/unstable/non-existent. That means seeking explanations to the bigger picture, rather than creating a narrowly-focussed mathematical model of one tiny part of the problem.
What holds protons and neutrons together in the nucleus?
Here is our progress so far. First, note that conventionally the strong nuclear force overcomes the electrostatic repulsion of protons. In contrast, the Cordus theory proposes that the protons and neutrons are locked together by synchronisation of their emitted discrete forces. This is a radically different mechanism that has nothing to do with overcoming electrostatic repulsion.
‘The Cordus theory proposes that the strong force arises from the synchronisation of discrete forces between the reactive ends of different particules. The emission directions represent the particule’s directional engagement with the external environment, and so two particules that co-locate one of each of their reactive ends need to share this access, and this is proposed as the basis for the synchronicity requirement. This causes the emission of the particules’ discrete forces to be interlocked. The discrete forces cause the reactive ends to be pulled into (or repelled from) co-location and held there. Hence the strong nature of the forces, its apparent attractive-repulsive nature, and its short range.’
What is the synchronous interaction?
Second, note that the conventional idea is that the strong force is one of a set that also includes the electrostatic, magnetic, and gravitational forces (EMG). In contrast the Cordus theory proposes that the electrostatic repulsion force is inoperable inside the atomic nucleus. So there is no need for a ‘strong’ force to ‘overcome’ the proton electrostatic repulsion. You can either have the EMG forces or the synchronous interaction, not both. The factor that determines which operates is whether the assembly of matter is discoherent or coherent.
‘Unexpectedly, the Cordus theory predicts that this synchronous force only applies to particules in coherent assembly. In such situations the synchronicity of emission means also that the assembled particules must energise at the same frequency (or a suitable harmonic), and either in or out of phase. Thus the synchronous interaction is predicted to be limited to particules in coherent assembly relationships, with the electro-magneto-gravitational forces being the corresponding interaction for discoherent assemblies of matter.’
This is a radical departure from the orthodox perspective, which otherwise sees the strong and electrostatic forces as operating concurrently. The Cordus theory predicts that the interaction between neighbouring protons in the nucleus is entirely synchronous (strong force) and that there is no electrostatic repulsion (at least for small nuclei).
What determines nuclide stability?
Third, the Cordus theory proposes, by logical extension, that the synchronous interaction makes two distinct types of bond, differentiated by same vs. opposed phase (cis- and transphasic) of the reactive ends. This concept does not exist in conventional theories of the strong force which are based on 0D points.
What is the nuclear polymer structure of the atomic nucleus?
By logical progression, this concept led to the conclusion that protons and neutrons are bound together in a chain, or as we call it, a nuclear polymer. This proves to be a powerful concept, because with it we are able to explain nuclide structures. The following diagram shows how the principle is applied to some example nuclides.
More information may be found in the following references. They are best read in the order given, rather than the order published.
19 July 2015, Christchurch, New Zealand
 Pons, D. J., Pons, A. D., and Pons, A. J., Synchronous interlocking of discrete forces: Strong force reconceptualised in a NLHV solution. Applied Physics Research, 2013. 5(5): p. 107-126. DOI: http://dx.doi.org/10.5539/apr.v5n5p107
 Pons, D. J., Pons, A. D., and Pons, A. J., Nuclear polymer explains the stability, instability, and non-existence of nuclides. Physics Research International 2015. 2015(Article ID 651361): p. 1-19. DOI: http://dx.doi.org/10.1155/2015/651361
 Pons, D. J., Pons, A. D., and Pons, A. J., Explanation of the Table of Nuclides: Qualitative nuclear mechanics from a NLHV design. Applied Physics Research 2013. 5(6): p. 145-174. DOI: http://dx.doi.org/10.5539/apr.v5n6p145
Our previous work has shown that it is possible to explain the existence of the nuclides H-Ne, specifically why each is stable/unstable/non-existent. This is achieved under the assumption that the protons and neutrons are rod-like structures. Previous work in the Cordus theory has shown how the discrete fields of these particules would interlock by synchronising their emissions. Hence the STRONG NUCLEAR FORCE was explained as a SYNCHRONOUS INTERACTION of discrete field emissions.
Now we have published the details of these mechanics. See citation below. The theory predicts the nuclear morphology, i.e. the types of shapes that the protons and neutrons can make in their bonding arrangements. It turns out that this is best described as a NUCLEAR POLYMER. Thus the atomic nucleus is proposed to consist of a chain of protons and neutrons. In the lightest nuclides this chain may be open ended, but in general the chain has to be closed. It appears that for stability the proton and neutron need to alternate, and this explains why neutrons are always needed in the nucleus above 1H1. The theory also predicts that the neutrons can form CROSS-BRIDGES, and that these stabilise the loop into smaller loops. This also explains another puzzling feature of the table of nuclides, which is why disproportionately more neutrons are required for heavier elements. In addition the theory predicts that the sub-loops of the nuclear polymer are required to take specific shapes. This paper explains all these underlying principles and applies them to explain the hydrogen and helium nuclides.
The significance of this is the following. First, this is the first published theory of why individual isotopes are stable or unstable, or even non-existent. By comparison no other theory has done this: neither the binding energy approach, the semi-empirical mass formula (SEMF), the various bag theories, nor quantum chromodynamics (QCD). Second, this has been achieved with a hidden-variable theory. This is a surprise, since such theories have otherwise been scarce and hard to develop. The only one of note has been the de Broglie-Bohm theory of the pilot wave, and that certainly does not have application to anything nuclear. So the first theory to explain the stability features of the table of nuclides for the lighter elements is a non-local hidden-variable theory rather than an empirical model, quantum theory, or string theory. That is deeply unexpected. It vindicates the hidden-variable approach, which has long been neglected.
Ultimately any theory of physics is merely a proposition of causality, and while any theory may be validated as sufficiently accurate at some level, there is always opportunity for further development. The Cordus theory and its nuclear mechanics implies that quantum mechanics is a stochastic approximation based on zero-dimensional point morphology of what the Cordus theory asserts is a deeper structure to matter.
Of course there is still much work to do. Showing that a hidden-variable theory explains these nuclides is an achievement but is not proof that the theory is valid. In the future we will need to expand the theory to the larger table of nuclides. If it can explain them, well that would be something. Also, it would be interesting to devise a mathematical formalism for the Cordus theory. Doing so would provide another method to explore the validity of the theory.
Dirk Pons, 9 July 2015, Christchurch
Pons, D. J., Pons, A. D., and Pons, A. J., Nuclear polymer explains the stability, instability, and non-existence of nuclides. Physics Research International 2015. 2015(Article ID 651361): p. 1-19. DOI: http://dx.doi.org/10.1155/2015/651361 (open access) or http://vixra.org/abs/1310.0007 (open access)
Problem – The explanation of nuclear properties from the strong force upwards has been elusive. It is not clear how binding energy arises, or why the neutrons are necessary in the nucleus at all. Nor has it been possible to explain, from first principles of the strong force, why any one nuclide is stable, unstable, or non-existent.
Approach – Design methods were used to develop a conceptual mechanics for the bonding arrangements between nucleons. This was based on the covert structures for the proton and neutron as defined by the Cordus theory, a type of non-local hidden-variable design with discrete fields.
Findings – Nuclear bonding arises from the synchronous interaction between the discrete fields of the proton and neutron. This results in not one but multiple types of bond, cis- and transphasic, and assembly of chains and bridges of nucleons into a nuclear polymer. The synchronous interaction constrains the relative orientation of nucleons, hence the nuclear polymer takes only certain spatial layouts. The stability of nuclides is entirely predicted by the morphology of the nuclear polymer and the cis/transphasic nature of the bonds. The theory successfully explains the qualitative stability characteristics of all hydrogen and helium nuclides.
Originality – Novel contributions include: the concept of a nuclear polymer and its mechanics; an explanation of the stability, instability, or non-existence of nuclides starting from the strong/synchronous force; and an explanation of the role of the neutron in the nucleus. The theory opens a new field of mechanics by which nucleon interactions may be understood.
Our previous work indicates that, under the rules of this framework of physics, the neutrino and antineutrino (neutrino-species) interact differently with matter. Specifically, (a) they interact differently with the proton compared to the neutron, and (b) they are not only by-products of the decay of those nucleons, as in the conventional understanding, but can also be inputs that initiate decay. (See previous posts.)
Extending that work to the nuclides more generally, we are now able to show how it might be that decay rates could be somewhat erratic for β+, β-, and EC. It is predicted on theoretical grounds that the β-, β+ and electron capture processes may be induced by pre-supply of neutrino-species, and that the effects are asymmetrical for those species. Also predicted is that different input energies are required, i.e. that a threshold effect exists. Four simple lemmas are proposed with which it is straightforward to explain why β- and EC decays would be enhanced and correlate to solar neutrino flux (proximity & activity), and alpha (α) emission unaffected.
Basically the observed variability is proposed to be caused by the way neutrinos and antineutrinos induce decay differently. This is an interesting and potentially important finding because there are otherwise no physical explanations for how variable decay rates might arise. So the contribution here is providing a candidate theory.
We have put the paper out to peer-review, so it is currently under submission. If you are interested in preliminary information, the pre-print may be found at the physics archive:
This work makes the novel contribution of proposing a detailed mechanism for neutrino-species induced decay, broadly consistent with the empirical evidence.
New Zealand, 14 Feb 2015
You may also be interested in related sites talking about variable decay rates:
See also the references in our paper for a summary of the journal literature.
UPDATE (20 April 2015): This paper has been published as DOI: 10.5539/apr.v7n3p18 and is available open access here http://www.ccsenet.org/journal/index.php/apr/article/view/46281/25558