**How does it operate?** This is unknown. The experimental evidence is that it *does* exist, but the mechanism is not known. Classical Newtonian mechanics implies the effect should not exist. General relativity makes no provision for it. Quantum mechanics (QM) accepts it as real, and can express the outcomes mathematically, but does not describe *how* entanglement operates at the physical level.

**Does new physics offer new explanations for entanglement?** Yes. This is where the Cordus theory of fundamental physics offers a candidate solution. In the paper ‘A physical basis for entanglement in a non-local hidden variable theory’ (2017) (https://doi.org/10.4236/jmp.2017.88082) we show that superposition and entanglement may be qualitatively explained if particles were to have the internal structure proposed by the Cordus theory.

This is a non-local hidden-variable (NLHV) theory, and hence naturally supports non-local behaviour. *Locality* is the expectation that a point object is only affected by the values of fields and external environmental variables at that point, not by remote values. Entanglement is a type of *non-local* behaviour – the particles evidently behave as if affected by events happening some distance away from the point that defines the particles.
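For readers who want the quantitative benchmark behind ‘non-local behaviour’, the sketch below computes the standard CHSH quantity from textbook quantum optics. It is independent of the Cordus theory, and is included purely to show how far entangled correlations exceed what any local account permits:

```python
import numpy as np

def E(a, b):
    """Quantum correlation for polarisation-entangled photon pairs
    measured at polariser angles a and b (radians)."""
    return np.cos(2 * (a - b))

# Standard CHSH angle choices (radians)
a, a2 = 0.0, np.pi / 4            # Alice: 0 deg and 45 deg
b, b2 = np.pi / 8, 3 * np.pi / 8  # Bob: 22.5 deg and 67.5 deg

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"CHSH S = {S:.3f}")  # ~2.828 = 2*sqrt(2), above the local bound of 2
```

Any theory in which each photon is influenced only by conditions at its own location is bounded by S ≤ 2; the observed value of about 2.83 is what an explanation of entanglement has to reproduce.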

As a type of *hidden-variable* theory, the theory proposes, and this is important, that fundamental particles have internal structure. This is a major departure from QM and its assumption that particles are zero-dimensional points without sub-structure.

*Figure: Qualitative explanation of two-photon entanglement. The photons are predicted to originate from a Pauli pair of electrons – these electrons are bonded in a transphasic interaction and hence their emitted photons also have that interaction. Consequently the four reactive ends of the two photons are linked by fibrils, even as they move further apart. As a result the behaviours of the photons are coupled: hence entanglement.*

The explanation from the Cordus theory is that there is no single point that defines the position of the particule. Its two reactive ends between them occupy a volume of space, and its discrete fields extend outward to occupy a further volume external to the reactive ends.

The Cordus theory explains that locality fails because the particule is affected by what happens at *both *reactive ends, and by the externally-originating discrete forces it receives at both locations. A *principle of Wider Locality* is proposed, whereby the particule is affected by the values of external discrete forces (hence also conventional fields) in the vicinity of both its reactive ends.

The ability to explain entanglement conceptually in terms of physical realism is relevant because it rebuts the claim that such a hidden-variable theory cannot exist. This is significant because it has previously been believed that only QM could explain this phenomenon.

CITATION:

Pons, D. J., Pons, A. D., & Pons, A. J. (2017). A physical basis for entanglement in a non-local hidden variable theory. *Journal of Modern Physics, 8*(8), 1257-1274. doi: https://doi.org/10.4236/jmp.2017.88082 or http://file.scirp.org/Html/10-7503127_77506.htm or http://vixra.org/abs/1502.0103


In the paper http://dx.doi.org/10.4236/jmp.2016.710094 we show how to solve this explanatory problem. We show that it is possible to explain many optical phenomena involving energy conversion. The solution involves new physics at the sub-particle level, in the form of a non-local hidden-variable (NLHV) solution.

It has long been known that the bonding commitments of the electron affect its energy behaviour but the mechanisms for this have been elusive. We show how the degree of bonding constraint on the electron determines how it processes excess energy, see figure. A key concept is that the span and frequency of the electron are inversely proportional. This explains why energy changes cause positional distress for the electron.
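A minimal way to formalise that inverse relation, using placeholder symbols of our own (s for span, f for frequency; neither the notation nor the derivation below is taken from the paper):

```latex
s \;\propto\; \frac{1}{f},
\qquad E = h f
\quad\Longrightarrow\quad
s \;\propto\; \frac{h}{E}.
```

On this reading, pumping energy into the electron raises E and therefore shrinks s, so a bonded electron must physically re-position its reactive ends – the ‘positional distress’ referred to above.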

Natural explanations are given for multiple emission phenomena: Absorbance; Saturation; Beer-Lambert law; Colour; Quantum energy states; Directional emission; Photoelectric effect; Emission of polarised photons from crystals; Refraction effects; Reflection; Transparency; Birefringence; Cherenkov radiation; Bremsstrahlung and Synchrotron radiation; Phase change at reflection; Force impulse at reflection and radiation pressure; Stimulated emission (laser).

The originality of this work is the elucidation of a mechanism for how the electron responds to combinations of bonding constraint and pumped energy. The crucial insight is that the electron size and position(s) are coupled attributes of its frequency and energy, where the coupling is achieved via physical substructures. The theory is able to provide a logically coherent explanation for a wide variety of energy conversion phenomena.

Dirk Pons

Christchurch, New Zealand

15 June 2016

*More information – The full paper (gold open access) is available at:*

Pons, D.J., Pons, A.D., and Pons, A.J., (2016), Energy conversion mechanics for photon emission per non-local hidden-variable theory. Journal of Modern Physics, 7(10), 1049-1067. http://dx.doi.org/10.4236/jmp.2016.710094


http://dx.doi.org/10.5539/apr.v8n3p111

The constancy of the speed of light *c* in the vacuum was the key insight in Einstein’s work on relativity. However, this was an assumption rather than a proof. It is an assumption that has worked well, in that the resulting theory shows good agreement with many observations. The Michelson-Morley experiment directly tested the idea of Earth moving through a stationary ether, by looking for differences in the speed of light in different directions, and found no evidence for such an ether. There is no empirical evidence that convincingly shows the speed of light to be variable in vacuo in the vicinity of Earth. However, it is possible that the speed of light is merely locally constant, and different elsewhere in the universe. In our latest paper we show why this might be so.

There are several perplexing questions about light:

- What is the underlying mechanism that makes the speed of light constant in the vacuum?
- What properties of the universe cause the speed of light to have the value it has?
- If the speed of light is not constant throughout the universe, what would be the mechanisms?
- How does light move through the vacuum?
- The vacuum has properties: electric and magnetic constants. Why, and what causes these?
- How does light behave as both a wave and particle? (Wave-particle duality)
- How does a photon physically take two different paths? (Superposition in interferometers)
- How does entanglement work at the level of the individual photon?

These are questions of fundamental physics, and of cosmology. Consequently there is on-going interest in the speed of light at the foundational level. The difficulty is that neither general relativity nor quantum mechanics can explain why c should be constant, or why it should have the value it does. Neither for that matter does string/M theory. Gaining a better understanding of this has the potential to bridge the particle and cosmology scales of physics.

There has been ongoing interest in developing theories where c is not constant. These are called *variable speed of light* (VSL) theories [see paper for more details]. The primary purpose of these is to explore for new physics at deeper levels, with a particular interest in quantum gravity. For example, it may be that the invariance of c breaks down at very small scales, or for photons of different energy, though such searches have been unsuccessful to date. Another approach is cosmological. If the speed of light were variable, it could solve certain problems. Specifically, the horizon, inflation and flatness problems might be resolved if there were a faster c in the early universe, i.e. a time-varying speed of light. There are several other possible applications for a variable speed of light theory in cosmology.

However there is one big problem:

**In all existing VSL theories the difficulty is providing reasons for why c should vary with time or geometric scale.**

The theories require the speed of light to be different at genesis, and then somehow change slowly or suddenly switch over at some time or event, for reasons unknown. None of the existing VSL theories describe why this should be, nor do they propose underlying mechanics. This is problematic, and contributes to existing VSL theories not being widely accepted.

In our paper [apr.v8n3p111] we apply the non-local hidden-variable (Cordus) theory to this problem. It turns out that it is a logical necessity of the theory that the speed of light be variable. The theory also predicts a specific underlying mechanism for this. Our findings are that the speed of light is inversely proportional to fabric density. This is because the discrete fields of the photon interact dynamically with the fabric and therefore consume frequency cycles of the photon. The fabric arises from aggregation of discrete force emissions (fields) from massy particles, which in turn depends on the proximity and spatial distribution of matter.

This theory offers a conceptually simple way to reconcile the refraction of light in both gravitational situations and optical materials: the density of matter affects the fabric density, and hence the speed of light. So when light enters a denser medium, say a glass prism, it encounters an abrupt increase in fabric density, which reduces its speed. Likewise, light that grazes past a star is subject to a small gradient in the fabric, resulting in gravitational bending of the light-path. Furthermore, the theory accommodates the constant speed of light of general relativity as a special case of a locally constant fabric density. In other words, the fabric density is homogeneous in the vicinity of Earth, so the speed of light is also constant in this locality. However, in a different part of the universe where matter is more sparse, the speed of light is predicted to be faster. Similarly, at earlier epochs when the universe was denser, the speed of light would have been slower. This also means that the results disfavour the universal applicability of the cosmological principle of homogeneity and isotropy of the universe.
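Reading the claimed inverse relation at face value suggests a toy calculation, sketched below. The functional form c(ρ) = c₀·ρ₀/ρ and its normalisation are our own assumptions for illustration; the paper proposes the proportionality, not these particular numbers:

```python
C0 = 299_792_458.0  # measured speed of light in the local vacuum (m/s)

def speed_of_light(fabric_density, local_vacuum_density=1.0, c0=C0):
    """Toy model of the claim that c is inversely proportional to fabric
    density, normalised so the local vacuum density gives c0. The form
    c = c0 * rho0 / rho is our assumption, not derived in the paper."""
    return c0 * local_vacuum_density / fabric_density

def refractive_index(fabric_density, local_vacuum_density=1.0):
    """n = c_vacuum / c_medium, which under the toy model reduces to the
    ratio of fabric densities."""
    return fabric_density / local_vacuum_density

# A medium with 1.5x the local vacuum fabric density behaves like glass:
print(refractive_index(1.5))   # 1.5
print(speed_of_light(1.5))     # ~2.0e8 m/s: light slows in the denser fabric
```

The same single relation then covers both cases in the paragraph above: an abrupt density step gives optical refraction, and a gentle density gradient gives gravitational bending.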

The originality in this paper is in proposing underlying mechanisms for the speed of light. Uniquely, this theory identifies fabric density as the dependent variable. In contrast, other VSL models propose that c varies with time or some geometric-like scale, but struggle to provide plausible reasons for that dependency.

This theory predicts that the speed of light is inversely proportional to the fabric density, which in turn is related to the proximity of matter. The fabric fills even the vacuum of space, and the density of this fabric is what gives the electric and magnetic constants their values, and sets the speed of light. The speed of light is constant in the vicinity of Earth, because the local fabric density is relatively isotropic. This explanation also accommodates relativistic time dilation, gravitational time dilation, gravitational bending of light, and refraction of light. So the speed of light is a variable that depends on fabric density, hence is an *emergent property of the fabric*.

The paper is available open access: http://dx.doi.org/10.5539/apr.v8n3p111

*The fabric density concept is covered at http://dx.doi.org/10.2174/1874381101306010077.*

*The corresponding theory of time, which predicts that time speeds up in situations of lower fabric density, is at http://dx.doi.org/10.5539/apr.v5n6p23.*

**Citation for published paper:**

Pons, D. J., Pons, A. D., & Pons, A. J. (2016). Speed of light as an emergent property of the fabric. Applied Physics Research, 8(3): 111-121. http://dx.doi.org/10.5539/apr.v8n3p111

Original work on physics archive (2013) : http://vixra.org/abs/1305.0148


According to Lee Smolin in a 2015 arxiv paper [1], it’s the latter.

As I understand him, Smolin’s main point is that elegant qualitative explanations are more valuable than beautiful mathematics, and that physics fails to progress when ‘*mathematics [is used] as a substitute for insight into nature*’ (p13).

The symmetry methodology receives criticism for the proliferation of assumptions it requires, and for its lack of explanatory power. Likewise particle supersymmetry is identified as having the same failings. Smolin is also critical of string theory, writing, ‘*Thousands of theorists have spent decades studying these [string theory] ideas, and there is not yet a single connection with experiment*’ (p6-7).

Smolin is especially critical of the idea that progress might be found in increasingly elaborate mathematical symmetries.

I also wonder whether the ‘symmetries’ idea is overloaded. The basic concept of symmetry is that some attribute of the system should be preserved when transformed about some dimension. Even if it is possible to represent this mathematically, we should still be prudent about which attributes, transformations, and dimensions to accept. Actual physics does not necessarily follow mathematical representation. There is generally a lack of critical evaluation of the validity of specific attributes, transformations, and dimensions for the proposed symmetries. The *time* variable is a case in point. Mathematical treatments invariably consider it to be a dimension, yet empirical evidence overwhelmingly shows this not to be the case.

Irreversibility shows that time does not exhibit symmetry. The time dimension cannot be traversed in a controlled manner, neither forward nor, especially, backward. Also, a complex system of particles will not spontaneously revert to its former configuration. Consequently *time* cannot be considered a dimension about which it is valid to apply a symmetry transformation, even when one exists mathematically. Logically, we should therefore discard any mathematical symmetry that has a time dimension in it. That reduces the field considerably, since many symmetries have a temporal component.
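A standard textbook contrast makes the point concrete (this is conventional physics, not a claim specific to the argument here): Newton’s second law is unchanged under the substitution t → −t, whereas the diffusion equation, which describes irreversible behaviour, is not:

```latex
m\,\frac{d^{2}x}{dt^{2}} = F(x)
\;\xrightarrow{\;t \to -t\;}\;
m\,\frac{d^{2}x}{dt^{2}} = F(x)
\quad\text{(invariant)}
\\[6pt]
\frac{\partial u}{\partial t} = D\,\nabla^{2}u
\;\xrightarrow{\;t \to -t\;}\;
-\frac{\partial u}{\partial t} = D\,\nabla^{2}u
\quad\text{(not invariant)}
```

The tension is that the reversible micro-laws are the ones dressed in temporal symmetries, while the behaviour we actually observe at the macro level follows the irreversible form.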

Alternatively, if we are to continue to rely on temporal symmetries, it will be necessary to understand how the mechanics of irreversibility arises, and why those symmetries are exempt therefrom. I accept that relativity considers time to be a dimension, and has achieved significant theoretical advances with that premise. However relativity is also a theory of macroscopic interactions, and it is possible that assuming time to be a dimension is a sufficiently accurate premise at this scale, but not at others. Our own work suggests that time could be an emergent property of matter, rather than a dimension (http://dx.doi.org/10.5539/apr.v5n6p23). This makes it much easier to explain the origins of the arrow of time and of irreversibility. So it can be fruitful, in an ontological way, to be sceptical of the idea that mathematical formalisms of symmetry are necessarily valid representations of actual physics. It might be reading too much into Smolin’s meaning when he says that ‘time… properties reflect the positions … of matter in the universe’ (p12), but that seems consistent with our proposition.

The solution, Smolin says, is to ‘*begin with new physical principles*’ (p8). Thus we should expect new physics will emerge by developing qualitative explanations based on intuitive insights from natural phenomena, rather than trying to extend existing mathematics. Explanations that are valuable are those that are efficient (fewer parameters, less tuning, and not involving extremely big or small numbers) and logically consistent with physical realism (‘tell a coherent story’). It is necessary that the explanations come first, and the mathematics follows later as a subordinate activity to formalise and represent those insights.

However it is not so easy to do that in practice, and Smolin does not have suggestions for where these new physical principles should be sought. His statement that ‘*no such principles have been proposed*’ (p8) is incorrect. We and others have proposed new physical principles – ours is called the Cordus theory and is based on a proposed internal structure to particles. Other theories exist, see vixra and arxiv. The bigger issue is that physics journals are mostly deaf to propositions regarding new principles. Our own papers have been summarily rejected by editors many times due to ‘lack of mathematical content’, or ‘we do not publish speculative material’, or ‘extraordinary claims require extraordinary evidence’. In an ideal world all candidate solutions would at least be admitted to scrutiny, but this does not actually happen, and there are multiple existing ideas in the wild that never make it through to the formal journal literature frequented by physicists.

Even then, those ideas that do undergo peer review and are published are not necessarily widely available. The problem is that the academic search engines, like Elsevier’s Compendex and Thomson’s Web of Science, are selective in the journals they index, and fail to provide reliable coverage of the more radical elements of physics. (Google Scholar appears to provide an unbiased assay of the literature.) Most physicists would have to go out of their way to inform themselves of the protosciences and new propositions that circulate in the wild outside their bubbles of knowledge. Not all those proposals can possibly be right, but neither are they all necessarily wrong. In mitigation, the body of literature in physics has become so voluminous that it is impossible for any one physicist to be fully informed about all developments, even within a sub-field like fundamental physics. But the point remains that new principles of physics do exist, based on intuitive insights from natural phenomena, and with high explanatory power – exactly as Smolin expected things to develop.

Smolin suspects that true solutions will have *fewer* rather than more symmetries. This is also consistent with our work, which indicates that both the asymmetrical leptogenesis and baryogenesis processes can be conceptually explained as consequences of a single deeper symmetry (http://dx.doi.org/10.4236/jmp.2014.517193). That deeper symmetry is the matter-antimatter species differentiation (http://dx.doi.org/10.4006/0836-1398-27.1.26), which also explains asymmetries in decay rates (http://dx.doi.org/10.5539/apr.v7n2p1).

In a way, though he does not use the words, Smolin tacitly endorses the principle of physical realism: that physically observable phenomena do have deeper causal mechanics involving parameters that exist objectively. He never mentions the hidden-variable solutions. Perhaps this is indicative of the position of most theorists: that the hidden-variable sector has been unproductive. Everyone has given up on it as intractable, and now ignores it. According to Google Scholar, ours looks to be the only group left in the world that is publishing non-local hidden-variable (NLHV) solutions. Time will tell whether or not these are strong enough, but they do already embody Smolin’s injunction to take a fresh look for new physical principles.

Dirk Pons

26 February 2016, Christchurch, New Zealand

*This is an expansion of a post at Physics Forum **https://www.physicsforums.com/threads/smolin-lessons-from-einsteins-discovery.849464/#post-5390859*

References

[1] Smolin, L.: Lessons from Einstein’s 1915 discovery of general relativity. arXiv:1512.07551, 1-14 (2015). http://arxiv.org/abs/1512.07551


This is a dimensionless constant, represented with the symbol α (alpha), and it relates together the electric charge, the vacuum permittivity, and the speed of light.

The equation is as follows:

The impedance of free space is Z₀ = 1/(ε₀c) = 2αh/e², with electric constant ε₀ (also called the vacuum permittivity), the speed of light in the vacuum c, and the fine structure constant α = e²/(2ε₀hc), with elementary charge e [coulombs], Planck constant h, and c as before. All of these are generally considered physical constants, i.e. fixed values for the universe.

One example of how this relationship may be used is as follows. Given the electric charge, and the vacuum permittivity, then the alpha equation may be used to explain why the speed of light has the value it does. The equation may be rearranged into other equivalent forms.
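These identities are easy to verify numerically. The sketch below uses standard CODATA values in SI units; it simply checks the relationships as stated and involves no assumptions beyond those constants:

```python
# CODATA values (SI units)
e    = 1.602176634e-19    # elementary charge (C)
h    = 6.62607015e-34     # Planck constant (J s)
c    = 299792458.0        # speed of light in vacuum (m/s)
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)

alpha = e**2 / (2 * eps0 * h * c)   # fine structure constant
Z0    = 1 / (eps0 * c)              # impedance of free space (ohms)

print(f"alpha = {alpha:.9f}  (1/alpha = {1/alpha:.3f})")   # ~1/137.036
print(f"Z0 = {Z0:.3f} ohm")                                # ~376.730 ohm
print(f"2*alpha*h/e^2 = {2 * alpha * h / e**2:.3f} ohm")   # same value, as the identity requires
```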

This is a more difficult question, especially when coupled with the question, *Why does alpha take the value it does?* This is something of a mystery.

We believe we can answer some parts of this question. In a recent paper on the Cordus theory it has been proposed that both the vacuum permittivity and the speed of light are dependent variables, and situationally specific. It is proposed that ε₀ represents the density of the discrete forces in the fabric, and thus depends on the spatial distribution of mass within the universe. Thus the electric constant is recast as an emergent property of the fabric, and hence of matter.

*From this perspective α is a measure of the transmission efficacy of the fabric, i.e. it determines the relationship between the electric constant of the vacuum fabric, and the speed of propagation c through the fabric.*

This is consistent with the observation that *α* appears wherever electrical forces and propagation of fields occur, and this includes cases such as electron bonding.

The reason the speed of light is limited to a certain finite value is explained by this theory as a consequence of the fabric density creating a temporal impedance. Thus denser fabric results in a slower speed of light, and this is consistent with time dilation and optical refraction generally. In the Cordus theory the speed of light is determined by the density of the fabric discrete forces, and is therefore locally constant and relativistic, but ultimately dependent on the past history of matter density in the locally available universe. Thus the vacuum (fabric) has a finite speed of light, despite instantaneous communication across the fibril of the particule. This Cordus theory is consistent with the known impedance of free space, though it comes at it from a novel direction.

The implication is that the electric constant of free space is not actually constant, but rather depends on the fabric density, and hence on the spatial distribution of matter. The fabric density also determines the speed of light in the situation, and *α* is the factor that relates the two for this universe. It would appear to be a factor set at the genesis of the universe.

Pons, D. J. (2015). Inner process of Photon emission and absorption. Applied Physics Research, 7(4), 14-26. doi: http://dx.doi.org/10.5539/apr.v7n4p24

17 Sept 2015, 15h00, venue Ers446

Content: As per http://vixra.org/abs/1104.0015

Conventionally the strong nuclear force is proposed to arise by the exchange of gluons of various colour. The theory for this is quantum chromodynamics (QCD). This force is then proposed to be much stronger in attraction than the electrostatic repulsion of protons of like charge, hence ‘strong’. Rather strangely, the theory requires the force to change and become repulsive at close range. This is to prevent it from collapsing the protons into a singularity (single point). Quite how this change operates is not explained, and the theory as a whole also cannot explain even the simplest atomic nucleus, let alone any of the features of the table of nuclides. So there is a large gap between the colour force of QCD and any realistic explanation of atomic structure. QCD, gluons, and the strong attraction-repulsion force have no proven external validity: the concepts don’t extend to explain anything else.

It is time to attempt a different approach. Remember, it is necessary to explain not only how the quarks are bonded, but also how the protons and neutrons are bonded, and onward to explain why any one nuclide is stable/unstable/non-existent. That means seeking explanations to the bigger picture, rather than creating a narrowly-focussed mathematical model of one tiny part of the problem.

Here is our progress so far. First, note that conventionally the strong nuclear force overcomes the electrostatic repulsion of protons. In contrast, the Cordus theory proposes that the protons and neutrons are locked together by *synchronisation* of their emitted forces. These forces are proposed to be *discrete*. This is a radically different mechanism that has nothing to do with the electrostatic force.

‘The Cordus theory proposes that the strong force arises from the synchronisation of discrete forces between the reactive ends of different particules. The emission directions represent the particule’s directional engagement with the external environment, and so two particules that co-locate one of each of their reactive ends need to share this access, and this is proposed as the basis for the synchronicity requirement. This causes the emission of the particules’ discrete forces to be interlocked. The discrete forces cause the reactive ends to be pulled into (or repelled from) co-location and held there. Hence the strong nature of the forces, its apparent attractive-repulsive nature, and its short range.’

Second, note that the conventional idea is that the strong force is one of a set that also includes the electrostatic, magnetic, and gravitational forces (EMG). In contrast the Cordus theory proposes that the electrostatic repulsion force is *inoperable* inside the atomic nucleus. So there is no need for a ‘strong’ force to ‘overcome’ the proton electrostatic repulsion. You can either have the EMG forces or the synchronous interaction, not both. The factor that determines which operates is whether the assembly of matter is discoherent or coherent.

‘Unexpectedly, the Cordus theory predicts that this synchronous force only applies to particules in coherent assembly. In such situations the synchronicity of emission means also that the assembled particules must energise at the same frequency (or a suitable harmonic), and either in or out of phase. Thus the synchronous interaction is predicted to be limited to particules in coherent assembly relationships, with the electro-magneto-gravitational forces being the corresponding interaction for discoherent assemblies of matter.’

This is a radical departure from the orthodox perspective, which otherwise sees the strong and electrostatic forces as operating *concurrently*. The Cordus theory predicts that the interaction between neighbouring protons in the nucleus is entirely synchronous (strong force) and that there is no electrostatic repulsion (at least for small nuclei).

Third, the Cordus theory proposes, by logical extension, that the synchronous interaction makes two distinct types of bond, differentiated by same vs. opposed phase (cis- and transphasic) of the reactive ends. This concept does not exist in conventional theories of the strong force which are based on 0D points.

By logical progression, this concept led to the conclusion that protons and neutrons are bound together in a chain, or as we call it, a *nuclear polymer*. This proves to be a powerful concept, because with it we are able to explain nuclide structures. The following diagram shows how the principle is applied to some example nuclides.

More information may be found in the following references. They are best read in the order given, rather than the order published.

Dirk Pons,

19 July 2015, Christchurch, New Zealand

[1] Pons, D. J., Pons, A. D., and Pons, A. J., Synchronous interlocking of discrete forces: Strong force reconceptualised in a NLHV solution. Applied Physics Research, 2013. 5(5): p. 107-126. DOI: http://dx.doi.org/10.5539/apr.v5n5p107

[2] Pons, D. J., Pons, A. D., and Pons, A. J., Nuclear polymer explains the stability, instability, and non-existence of nuclides. Physics Research International 2015. 2015(Article ID 651361): p. 1-19. DOI: http://dx.doi.org/10.1155/2015/651361

[3] Pons, D. J., Pons, A. D., and Pons, A. J., Explanation of the Table of Nuclides: Qualitative nuclear mechanics from a NLHV design. Applied Physics Research 2013. 5(6): p. 145-174. DOI: http://dx.doi.org/10.5539/apr.v5n6p145


**Now we have published the details of these mechanics.** See citation below. The theory predicts the nuclear morphology, i.e. the types of shapes that the protons and neutrons can make in their bonding arrangements. It turns out that this is best described as a NUCLEAR POLYMER. Thus the atomic nucleus is proposed to consist of a chain of protons and neutrons. In the lightest nuclides this chain may be open-ended, but in general the chain has to be closed. It appears that for stability the proton and neutron need to alternate, and this explains why neutrons are always needed in the nucleus above 1H1. The theory also predicts that the neutrons can form CROSS-BRIDGES, and that these stabilise the loop into smaller loops. This also explains another puzzling feature of the table of nuclides, which is why disproportionately more neutrons are required for heavier elements. In addition the theory predicts that the sub-loops of the nuclear polymer are required to take specific shapes. This paper explains all these underlying principles and applies them to explain the hydrogen and helium nuclides.
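To make the alternation rule concrete, here is a toy encoding of our own devising; the list-of-'p'/'n' representation and the function below are purely illustrative, not the paper’s formalism. It checks that a proposed open or closed nuclear polymer never places like nucleons adjacent:

```python
def alternates(chain, closed=True):
    """Check the p/n alternation rule for a toy 'nuclear polymer'.
    chain: a sequence like ['p', 'n', 'p', 'n']. For a closed loop the
    last nucleon is also compared with the first. This is an illustrative
    encoding of the qualitative rule only, not the paper's model."""
    pairs = zip(chain, chain[1:] + (chain[:1] if closed else []))
    return all(a != b for a, b in pairs)

print(alternates(['p', 'n'], closed=False))   # True: 2H as an open chain
print(alternates(['p', 'n', 'p', 'n']))       # True: 4He as a closed loop
print(alternates(['p', 'p', 'n', 'n']))       # False: adjacent like nucleons
```

Even this crude check reproduces the flavour of the result: a lone proton (1H) needs no partner, but any longer chain forces neutrons in between the protons.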

The significance of this is the following. First, this is the first published theory of why individual isotopes are stable or unstable, or even non-existent. By comparison, no other theory has done this: neither the binding-energy approach, the semi-empirical mass formula (SEMF), the various bag theories, nor quantum chromodynamics (QCD). Second, this has been achieved with a hidden-variable theory. This is a surprise, since such theories have otherwise been scarce and hard to develop. The only one of note has been the de Broglie-Bohm theory of the pilot wave, and that certainly does not have application to anything nuclear. So the first theory to explain the stability features of the table of nuclides for the lighter elements is a non-local hidden-variable theory rather than an empirical model, quantum theory, or string theory. That is deeply unexpected. It vindicates the hidden-variable approach, which has long been neglected.

Ultimately any theory of physics is merely a proposition of causality, and while any theory may be validated as sufficiently accurate at some level, there is always opportunity for further development. The Cordus theory and its nuclear mechanics implies that quantum mechanics is a stochastic approximation based on zero-dimensional point morphology of what the Cordus theory asserts is a deeper structure to matter.

Of course there is still much work to do. Showing that a hidden-variable theory explains these nuclides is an achievement but is not proof that the theory is valid. In the future we will need to expand the theory to the larger table of nuclides. If it can explain them, well that would be something. Also, it would be interesting to devise a mathematical formalism for the Cordus theory. Doing so would provide another method to explore the validity of the theory.

Dirk Pons, 9 July 2015, Christchurch

Pons, D. J., Pons, A. D., and Pons, A. J., *Nuclear polymer explains the stability, instability, and non-existence of nuclides.* Physics Research International 2015. **2015**(Article ID 651361): p. 1-19. DOI: http://dx.doi.org/10.1155/2015/651361 (open access) or http://vixra.org/abs/1310.0007 (open access)

*Problem – The explanation of nuclear properties from the strong force upwards has been elusive. It is not clear how binding energy arises, or why the neutrons are necessary in the nucleus at all. Nor has it been possible to explain, from first principles of the strong force, why any one nuclide is stable, unstable, or non-existent. Approach – Design methods were used to develop a conceptual mechanics for the bonding arrangements between nucleons. This was based on the covert structures for the proton and neutron as defined by the Cordus theory, a type of non-local hidden-variable design with discrete fields. Findings –*


Extending that work to the nuclides more generally, we are now able to show how it might be that decay rates could be somewhat erratic for β+, β-, and EC. It is predicted on theoretical grounds that the β-, β+ and electron capture processes may be induced by pre-supply of neutrino-species, and that the effects are asymmetrical for those species. Also predicted is that different input energies are required, i.e. that a threshold effect exists. Four simple lemmas are proposed with which it is straightforward to explain why β- and EC decays would be enhanced in correlation with solar neutrino flux (proximity and activity), while alpha (α) emission is unaffected.

Basically the observed variability is proposed to be caused by the way neutrinos and antineutrinos induce decay differently. This is an interesting and potentially important finding because there are otherwise no physical explanations for how variable decay rates might arise. So the contribution here is providing a candidate theory.
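As an illustration of what a proximity-driven correlation would look like, the sketch below modulates a base decay rate by the annual 1/r² variation in solar neutrino flux due to Earth’s orbital eccentricity. The linear coupling and its magnitude are hypothetical placeholders of ours; the paper proposes a mechanism, not this model or these numbers:

```python
import numpy as np

ECC = 0.0167  # Earth's orbital eccentricity; perihelion falls in early January

def relative_neutrino_flux(day_of_year):
    """Approximate annual 1/r^2 variation of the solar neutrino flux at
    Earth, normalised to ~1 and peaking near perihelion (~3 January)."""
    theta = 2 * np.pi * (day_of_year - 3) / 365.25
    r = 1 - ECC * np.cos(theta)   # heliocentric distance in AU (first order)
    return 1.0 / r**2

def beta_decay_rate(lam0, day_of_year, coupling=1e-3):
    """Toy induced-decay model: base rate lam0 modulated linearly by the
    neutrino flux. The coupling constant is a hypothetical placeholder."""
    return lam0 * (1 + coupling * (relative_neutrino_flux(day_of_year) - 1.0))

days = np.arange(365)
rates = beta_decay_rate(1.0, days)
print(f"peak-to-trough modulation: {rates.max() - rates.min():.2e}")  # ~6.7e-5 here
```

In such a model the β- and EC channels would show a small annual oscillation locked to the orbit, while the α channel, having no neutrino coupling, would stay flat – which is the qualitative pattern described above.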

We have put the paper out to peer-review, so it is currently under submission. If you are interested in preliminary information, the pre-print may be found at the physics archive:

http://vixra.org/abs/1502.0077

*This work makes the novel contribution of proposing a detailed mechanism for neutrino-species induced decay, broadly consistent with the empirical evidence.*

Dirk Pons

New Zealand, 14 Feb 2015

You may also be interested in related sites talking about variable decay rates:

http://phys.org/news202456660.html

https://tnrtb.wordpress.com/2013/01/21/commentary-on-variable-radioactive-decay-rates/

See also the references in our paper for a summary of the journal literature.

UPDATE (20 April 2015): This paper has been published as DOI: 10.5539/apr.v7n3p18 and is available open access here http://www.ccsenet.org/journal/index.php/apr/article/view/46281/25558

Here is a handy mnemonic for remembering all these decays, based on the equation below: *pie with icing equals nuts with egg below and a dash of vinegar*

p (proton) | + | 2y (energy) | <=> | n (neutron) | + | e̲ (antielectron, i.e. positron) | + | v (neutrino)
pie | with | icing | equals | nuts | with | egg below | and | a dash of vinegar

Then rearrange this to suit. Remember to invert the matter-antimatter species when you move a particle across the equality (*species transfer rule*). Note that we use underscore to show antimatter species, and this is the same as the overbar with which you may be more familiar. (We don’t use overbar because it is a confounded symbol used in other contexts such as h-bar. Underscore is a fresh and clearer way to designate antimatter species. It is also a visual reminder that this mechanics needs to be understood from within the NLHV framework of the Cordus theory, i.e. we are not talking about the usual zero-dimensional point particles of quantum mechanics here. Underscore is also easier to print and therefore use.)

The equation as written is focussed on the *proton decay*, which is β+. It is called beta plus because it gives a positive charge output in the form of the e̲, hence ‘+’.

β+ proton decay: p + 2y => n + e̲ + v

For *electron capture* just move the e̲ across the equality to the p side and change it to plain ‘e’ instead.

Electron capture (EC): p + e => n + v

For *neutron decay*, move both the e̲ and v across the equality, changing them to e and v̲. It is called beta ‘minus’ because the output is the negatively charged electron.

β- neutron decay: n => p + e + v̲
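The species transfer rule is mechanical enough to script. In this sketch, a trailing underscore stands in for the underline notation (our encoding choice, matching the convention above), and `invert` flips the matter/antimatter hand as a particle crosses the equality:

```python
def invert(p):
    """Flip matter/antimatter hand (species transfer rule). A trailing
    '_' marks an antimatter species, standing in for the underline."""
    return p[:-1] if p.endswith('_') else p + '_'

# Start from beta-plus proton decay: p + 2y <=> n + e_ + v
lhs, rhs = ['p', '2y'], ['n', 'e_', 'v']

# Beta-minus: move e_ and v across the equality, inverting each
for particle in ('e_', 'v'):
    rhs.remove(particle)
    lhs.append(invert(particle))

# Reading the equality right-to-left: n => p + e + v_ (plus 2y of energy out)
print(' + '.join(rhs), '<=>', ' + '.join(lhs))   # n <=> p + 2y + e + v_
```

The same two-line manipulation derives electron capture: move only the e̲ across, and it becomes a plain e on the proton side.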

Remember that electric charge and matter-antimatter species hand are not the same thing. This is an easy area in which to get confused. Electric charge (+/-) refers to the direction in which the discrete forces of the electric field travel, and may be outwards or inwards from the particle. The matter-antimatter species hand (m/m̲) refers to the handedness of the discrete field, which in the Cordus theory corresponds to the energisation sequence of the field (somewhat like the firing order of a three-cylinder internal combustion engine), which likewise has two possible values.

The mnemonic works for all three conventional decays providing you remember the *species transfer rule*, but I’m not convinced of the soundness of the dietary advice!

- Pons, D. J., Pons, A. D., and Pons, A. J., Asymmetrical neutrino induced decay of nucleons. Applied Physics Research, 2015. 7(2): p. 1-13. DOI: http://dx.doi.org/10.5539/apr.v7n2p1 or http://vixra.org/abs/1412.0279
