Archive for category Difficult problems in Physics

Physical explanation of entanglement

What is entanglement? Entanglement is a known physical phenomenon whereby particles affect each other despite being a macroscopic distance apart, and despite no apparent connection between them. The effect is typically seen in the spin, an orientation property of particles, whereby changing the spin of one particle results in a corresponding change in the spin of the other. Macroscopic entanglement requires special situations: it requires deliberate preparation and set-up of the experiment. It is a coherent behaviour, and the effect is lost when decoherence sets in, which occurs when the particles are disturbed by outside forces and fields. Consequently it is not generally observed in macroscopic phenomena at our level of existence, and for the same reasons neither is superposition (where a particle is simultaneously in two geometric locations).

How does it operate? This is unknown. The experimental evidence is that it does exist, but the mechanism is not known. Classical Newtonian mechanics implies the effect should not exist. General relativity makes no provision for it. Quantum mechanics (QM) accepts it as real, and can express the outcomes mathematically, but does not describe how entanglement operates at the physical level.

Does new physics offer new explanations for entanglement? Yes. This is where the Cordus theory of fundamental physics offers a candidate solution. In the paper ‘A physical basis for entanglement in a non-local hidden variable theory’ (2017) (https://doi.org/10.4236/jmp.2017.88082) we show that superposition and entanglement may be qualitatively explained if particles were to have the internal structure proposed by the Cordus theory.

This is a non-local hidden-variable (NLHV) theory, and hence it naturally supports non-local behaviour. Locality is the expectation that a point object is only affected by the values of fields and external environmental variables at that point, not by remote values. Entanglement is a type of non-local behaviour: the particles evidently behave as if affected by effects happening some distance away from the point that defines the particle.

As a type of hidden-variable theory, the theory proposes, and this is important, that fundamental particles have internal structure. This is a major departure from QM and its assumption that particles are zero-dimensional points without sub-structure.

Figure: Qualitative explanation of two-photon entanglement. The photons are predicted to originate from a Pauli pair of electrons – these electrons are bonded in a transphasic interaction and hence their emitted photons also have that interaction. Consequently the four reactive ends of the two photons are linked by fibrils, even as they move further apart. As a result the behaviours of the photons are coupled: hence entanglement.

The explanation from the Cordus theory is that there is no single point that defines the position of the particule. Its two reactive ends together span a volume of space, and its discrete fields extend outward to occupy a further volume external to the reactive ends.

The Cordus theory explains that locality fails because the particule is affected by what happens at both reactive ends, and by the externally-originating discrete forces it receives at both locations. A principle of Wider Locality is proposed, whereby the particule is affected by the values of external discrete forces (hence also conventional fields) in the vicinity of both its reactive ends.

The ability to explain entanglement conceptually in terms of physical realism is relevant because it rebuts the claim that such a hidden-variable theory cannot exist. This is significant because it has previously been believed that only QM could explain this phenomenon.

CITATION:

Pons, D. J., Pons, A. D., & Pons, A. J. (2017). A physical basis for entanglement in a non-local hidden variable theory. Journal of Modern Physics, 8(8), 1257-1274. doi: https://doi.org/10.4236/jmp.2017.88082 or http://file.scirp.org/Html/10-7503127_77506.htm or http://vixra.org/abs/1502.0103

 

Leave a comment

Why is the speed of light constant?

Why is the speed of light constant in the vacuum?

http://dx.doi.org/10.5539/apr.v8n3p111

The constancy of the speed of light c in the vacuum was the key insight in Einstein’s work on relativity. However, this was an assumption rather than a proof. It is an assumption that has worked well, in that the resulting theory has shown good agreement with many observations. The Michelson-Morley experiment directly tested the idea of Earth moving through a stationary ether, by looking for differences in the speed of light in different directions, and found no evidence to support such a theory. There is no empirical evidence that convincingly shows the speed of light to be variable in-vacuo in the vicinity of Earth. However, it is possible that the speed of light is merely locally constant, and different elsewhere in the universe. In our latest paper we show why this might be so.
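
For scale, the standard textbook estimate of the fringe shift that the Michelson-Morley apparatus should have detected in a stationary ether (quoted here for context, not taken from the paper) is:

ΔN ≈ (2L/λ)(v²/c²) ≈ (2 × 11 m / 550 nm) × (30 km/s ÷ 300,000 km/s)² ≈ 0.4 of a fringe

whereas the observed shift was only a small fraction of this.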

Fundamental questions about light

There are several perplexing questions about light:

  • What is the underlying mechanism that makes the speed of light constant in the vacuum?
  • What properties of the universe cause the speed of light to have the value it has?
  • If the speed of light is not constant throughout the universe, what would be the mechanisms?
  • How does light move through the vacuum?
  • The vacuum has properties: electric and magnetic constants. Why, and what causes these?
  • How does light behave as both a wave and particle? (Wave-particle duality)
  • How does a photon physically  take two different paths? (Superposition in interferometers)
  • How does entanglement work at the level of the individual photon?

These are questions of fundamental physics, and of cosmology. Consequently there is ongoing interest in the speed of light at the foundational level. The difficulty is that neither general relativity nor quantum mechanics can explain why c should be constant, or why it should have the value it does. Neither, for that matter, does string/M theory. Gaining a better understanding of this has the potential to bridge the particle and cosmology scales of physics.

Is the speed of light really constant? Everywhere? At all times?

There has been ongoing interest in developing theories where c is not constant. These are called variable speed of light (VSL) theories [see paper for more details]. The primary purpose of these is to explore for new physics at deeper levels, with a particular interest in quantum gravity. For example, it may be that the invariance of c breaks down at very small scales, or for photons of different energy, though such searches have been unsuccessful to date. Another approach is cosmological. If the speed of light were variable, it could solve certain problems. Specifically, the horizon and flatness problems, which inflation is usually invoked to solve, might instead be resolved if there were a faster c in the early universe, i.e. a time-varying speed of light. There are several other possible applications for a variable speed of light theory in cosmology.

However there is one big problem:

In all existing VSL theories the difficulty is providing reasons for why c should vary with time or geometric scale.

The theories require the speed of light to be different at genesis, and then to somehow change slowly, or suddenly switch over, at some time or event, for reasons unknown. None of the existing VSL theories describe why this should be, nor do they propose underlying mechanics. This is problematic, and contributes to existing VSL theories not being widely accepted.

Cordus theory predicts the speed of light is variable, and attributes it to fabric density

In our paper [apr.v8n3p111] we apply the non-local hidden-variable (Cordus) theory to this problem. It turns out that it is a logical necessity of the theory that the speed of light be variable. The theory also predicts a specific underlying mechanism for this. Our findings are that the speed of light is inversely proportional to fabric density.  This is because the discrete fields of the photon interact dynamically with the fabric and therefore consume frequency cycles of the photon. The fabric arises from aggregation of discrete force emissions (fields) from massy particles, which in turn depends on the proximity and spatial distribution of matter.

This theory offers a conceptually simple way to reconcile the refraction of light in both gravitational situations and optical materials: the density of matter affects the fabric density, and hence affects the speed of light. So when light enters a denser medium, say a glass prism, it encounters an abrupt increase in fabric density, which slows its speed. Likewise light that grazes past a star is subject to a small gradient in the fabric, hence resulting in gravitational bending of the light-path. Furthermore, the theory accommodates the constant speed of light of general relativity, as a special case of a locally constant fabric density. In other words, the fabric density is homogeneous in the vicinity of Earth, so the speed of light is also constant in this locality. However, in a different part of the universe where matter is more sparse, the speed of light is predicted to be faster. Similarly, at earlier epochs when the universe was more dense, the speed of light would have been slower. This also means that the results disfavour the universal applicability of the cosmological principle of homogeneity and isotropy of the universe.
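
Reading the stated inverse proportionality literally (the notation below, including the fabric density symbol φ, is illustrative rather than that of the paper), the idea can be sketched as:

c ∝ 1/φ, so that c₁/c₂ = φ₂/φ₁

Light passing from a region of fabric density φ₁ into a denser region φ₂ > φ₁ is therefore slowed by the factor φ₁/φ₂, and the ratio φ₂/φ₁ plays the role of an effective refractive index n = c₁/c₂ for that transition.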

The originality in this paper is in proposing underlying mechanisms for the speed of light.  Uniquely, this theory identifies fabric density as the dependent variable. In contrast, other VSL models propose that c varies with time or some geometric-like scale, but struggle to provide plausible reasons for that dependency.

Summary

This theory predicts that the speed of light is inversely proportional to the fabric density, which in turn is related to the proximity of matter. The fabric fills even the vacuum of space, and the density of this fabric is what gives the electric and magnetic constants their values, and sets the speed of light. The speed of light is constant in the vicinity of Earth, because the local fabric density is relatively isotropic. This explanation also accommodates relativistic time dilation, gravitational time dilation, gravitational bending of light, and refraction of light.  So the speed of light is a variable that depends on fabric density, hence is an emergent property of the fabric.

The paper is available open access: http://dx.doi.org/10.5539/apr.v8n3p111

 

The fabric density concept is covered at http://dx.doi.org/10.2174/1874381101306010077.

The corresponding theory of time, which predicts that time speeds up in situations of lower fabric density, is at http://dx.doi.org/10.5539/apr.v5n6p23.

 

 

Citation for published paper:

Pons, D. J., Pons, A. D., & Pons, A. J. (2016). Speed of light as an emergent property of the fabric. Applied Physics Research, 8(3): 111-121.  http://dx.doi.org/10.5539/apr.v8n3p111

Original work on physics archive (2013) : http://vixra.org/abs/1305.0148

 

Leave a comment

Beautiful mathematics vs. qualitative insights

Which is better for fundamental physics: beautiful mathematics based on pure concepts, or qualitative insights based on natural phenomena?

According to Lee Smolin in a 2015 arxiv paper [1], it’s the latter.

As I understand him, Smolin’s main point is that elegant qualitative explanations are more valuable than beautiful mathematics, that physics fails to progress when ‘mathematics [is used] as a substitute for insight into nature‘ (p13).
‘The point is not how beautiful the equations are, it is how minimal the assumptions needed and how elegant the explanations.‘ (http://arxiv.org/abs/1512.07551)
The symmetry methodology receives criticism for the proliferation of assumptions it requires, and for the lack of explanatory power. Likewise particle supersymmetry is identified as having the same failings. Smolin is also critical of string theory, writing, ‘Thousands of theorists have spent decades studying these [string theory] ideas, and there is not yet a single connection with experiment‘ (p6-7).

Mathematical symmetries: More or fewer?

Smolin is especially critical of the idea that progress might be found in increasingly elaborate mathematical symmetries.
I also wonder whether the ‘symmetries’ idea is overloaded. The basic concept of symmetry is that some attribute of the system should be preserved when transformed about some dimension. Even if it is possible to represent this mathematically, we should still be prudent about which attributes, transformations, and dimensions to accept. Actual physics does not necessarily follow mathematical representation. There is generally a lack of critical evaluation of the validity of specific attributes, transformations, and dimensions for the proposed symmetries. The *time* variable is a case in point. Mathematical treatments invariably consider it to be a dimension, yet empirical evidence overwhelmingly shows this not to be the case.
Irreversibility shows that time does not evidence symmetry. The time dimension cannot be traversed in a controlled manner, neither forward nor, especially, backward. Also, a complex system of particles will not spontaneously revert to its former configuration. Consequently *time* cannot be considered to be a dimension about which it is valid to apply a symmetry transformation, even when one exists mathematically. Logically, we should therefore discard any mathematical symmetry that has a time dimension to it. That reduces the field considerably, since many symmetries have a temporal component.
Alternatively, if we are to continue to rely on temporal symmetries, it will be necessary to understand how the mechanics of irreversibility arises, and why those symmetries are exempt therefrom. I accept that relativity considers time to be a dimension, and has achieved significant theoretical advances with that premise. However relativity is also a theory of macroscopic interactions, and it is possible that assuming time to be a dimension is a sufficiently accurate premise at this scale, but not at others. Our own work suggests that time could be an emergent property of matter, rather than a dimension (http://dx.doi.org/10.5539/apr.v5n6p23).  This makes it much easier to explain the origins of the arrow of time and of irreversibility. So it can be fruitful, in an ontological way, to be sceptical of the idea that mathematical formalisms of symmetry are necessarily valid representations of actual physics. It might be reading too much into Smolin’s meaning when he says that ‘time… properties reflect the positions … of matter in the universe’ (p12), but that seems consistent with our proposition.

How to find a better physics?

The solution, Smolin says, is to ‘begin with new physical principles‘ (p8). Thus we should expect new physics will emerge by developing qualitative explanations based on intuitive insights from natural phenomena, rather than trying to extend existing mathematics. Explanations that are valuable are those that are efficient (fewer parameters, less tuning, and not involving extremely big or small numbers) and logically consistent with physical realism (‘tell a coherent story’). It is necessary that the explanations come first, and the mathematics follows later as a subordinate activity to formalise and represent those insights.
However it is not so easy to do that in practice, and Smolin does not have suggestions for where these new physical principles should be sought. His statement that ‘no such principles have been proposed‘ (p8) is incorrect. We and others have proposed new physical principles: ours is called the Cordus theory, and is based on a proposed internal structure to particles. Other theories exist, see vixra and arxiv. The bigger issue is that physics journals are mostly deaf to propositions regarding new principles. Our own papers have been summarily rejected by editors many times due to ‘lack of mathematical content’, or ‘we do not publish speculative material’, or ‘extraordinary claims require extraordinary evidence’. In an ideal world all candidate solutions would at least be admitted to scrutiny, but this does not actually happen, and there are multiple existing ideas in the wild that never make it through to the formal journal literature frequented by physicists. Even then, those ideas that do undergo peer review and are published are not necessarily widely available. The problem is that the academic search engines, like Elsevier’s Compendex and Thomson’s Web of Science, are selective in the journals they index, and fail to provide reliable coverage of the more radical elements of physics. (Google Scholar appears to provide an unbiased assay of the literature.) Most physicists would have to go out of their way to inform themselves of the protosciences and new propositions that circulate in the wild outside their bubbles of knowledge. Not all those proposals can possibly be right, but neither are they all necessarily wrong. In mitigation, the body of literature in physics has become so voluminous that it is impossible for any one physicist to be fully informed about all developments, even within a sub-field like fundamental physics. But the point remains that new principles of physics do exist, based on intuitive insights from natural phenomena, and with high explanatory power, exactly as Smolin expected things to develop.
Smolin suspects that true solutions will have fewer rather than more symmetries. This is also consistent with our work, which indicates that both the asymmetrical leptogenesis and baryogenesis processes can be conceptually explained as consequences of a single deeper symmetry (http://dx.doi.org/10.4236/jmp.2014.517193), namely the matter-antimatter species differentiation (http://dx.doi.org/10.4006/0836-1398-27.1.26). That also explains asymmetries in decay rates (http://dx.doi.org/10.5539/apr.v7n2p1).
In a way, though he does not use the words, Smolin tacitly endorses the principle of physical realism: that observable physical phenomena do have deeper causal mechanics involving parameters that exist objectively. He never mentions the hidden-variable solutions. Perhaps this is indicative of the position of most theorists, that the hidden-variable sector has been unproductive: everyone has given up on it as intractable, and now ignores it. According to Google Scholar, ours looks to be the only group left in the world that is publishing non-local hidden-variable (NLHV) solutions. Time will tell whether or not these are strong enough, but they do already embody Smolin’s injunction to take a fresh look for new physical principles.

Dirk Pons

26 February 2016, Christchurch, New Zealand

This is an expansion of a post at Physics Forums  https://www.physicsforums.com/threads/smolin-lessons-from-einsteins-discovery.849464/#post-5390859

References

[1] Smolin, L.: Lessons from Einstein’s 1915 discovery of general relativity. arXiv:1512.07551, 1-14 (2015). http://arxiv.org/abs/1512.07551

 

 

Leave a comment

What does the fine structure constant represent?

Fine structure constant α

This is a dimensionless constant, represented with the symbol α (alpha), and it relates together the electric charge, the vacuum permittivity, and the speed of light.

The equation is as follows:

The impedance of free space is Z₀ = 1/(ε₀c) = 2αh/e², with electric constant ε₀ (also called the vacuum permittivity), the speed of light in the vacuum c, and the fine structure constant α = e²/(2ε₀hc), with elementary charge e [coulombs], Planck constant h, and c as before. All of these are generally considered physical constants, i.e. fixed values for the universe.

One example of how this relationship may be used is as follows. Given the electric charge and the vacuum permittivity (together with α and the Planck constant), the alpha equation may be used to explain why the speed of light has the value it does. The equation may be rearranged into other equivalent forms.
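
As a numerical check (using standard values for the constants, quoted here for illustration rather than taken from the post), the defining relation reproduces the familiar numbers:

α = e²/(2ε₀hc) ≈ (1.602×10⁻¹⁹ C)² / (2 × 8.854×10⁻¹² F/m × 6.626×10⁻³⁴ J·s × 2.998×10⁸ m/s) ≈ 7.30×10⁻³ ≈ 1/137.0

Z₀ = 1/(ε₀c) = 2αh/e² ≈ 376.7 Ω

Rearranged for the speed of light, c = e²/(2ε₀hα).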

 What is the physical meaning of the fine structure constant?

This is a more difficult question, especially when coupled with the question, Why does alpha take the value it does? This is something of a mystery.

We believe we can answer some parts of this question. In a recent paper on the Cordus theory it has been proposed that both the vacuum permittivity and the speed of light are dependent variables, and situationally specific. It is proposed that ε₀ represents the density of the discrete forces in the fabric, and thus depends on the spatial distribution of mass within the universe. Thus the electric constant is recast as an emergent property of the fabric, and hence of matter.

From this perspective α is a measure of the transmission efficacy of the fabric, i.e. it determines the relationship between the electric constant of the vacuum fabric, and the speed of propagation c through the fabric.

This is consistent with the observation that α appears wherever electrical forces and propagation of fields occur, and this includes cases such as electron bonding.

The reason the speed of light is limited to a certain finite value is explained by this theory as a consequence of the fabric density creating a temporal impedance. Thus denser fabric results in a slower speed of light, and this is consistent with time dilation, and with optical refraction generally. In the Cordus theory the speed of light is in turn determined by the density of the fabric discrete forces, and is therefore locally constant and relativistic, but ultimately dependent on the past history of matter density in the locally available universe. Thus the vacuum (fabric) has a finite speed of light, despite an instantaneous communication across the fibril of the particule. This Cordus theory is consistent with the known impedance of free space, though it comes at it from a novel direction.

The implication is that the electric constant of free space is not actually constant, but rather depends on the fabric density, hence on the spatial distribution of matter. The fabric density also determines the speed of light in the situation, and α is the factor that relates the two for this universe. It would appear to be a factor set at the genesis of the universe.

Reference

Pons, D. J. (2015). Inner process of photon emission and absorption. Applied Physics Research, 7(4), 14-26. doi: http://dx.doi.org/10.5539/apr.v7n4p24

Leave a comment

Variable decay rates of nuclides

Our previous work indicates that, under the rules of this framework of physics, the neutrino and antineutrino (neutrino-species) interact differently with matter. Specifically that (a) they interact differently with the proton compared to the neutron, and (b) they are not only by-products of the decay of those nucleons as in the conventional understanding, but also can be inputs that initiate decay. (See previous posts).

Extending that work to the nuclides more generally, we are now able to show how it might be that decay rates could be somewhat erratic for β+, β-, and EC. It is predicted on theoretical grounds that the β-, β+ and electron capture processes may be induced by a pre-supply of neutrino-species, and that the effects are asymmetrical for those species. It is also predicted that different input energies are required, i.e. that a threshold effect exists. Four simple lemmas are proposed with which it is straightforward to explain why β- and EC decays would be enhanced and correlate to solar neutrino flux (proximity and activity), while alpha (α) emission is unaffected.
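
For reference, the conventional (spontaneous) forms of these three decay channels are the standard textbook reactions below; the paper proposes that a pre-supply of neutrino-species can additionally induce them, with the detailed assignments and by-products given in the preprint:

β-: n → p + e⁻ + ν̄ₑ

β+: p → n + e⁺ + νₑ (for a proton bound in a nucleus)

EC: p + e⁻ → n + νₑ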

Basically the observed variability is proposed to be caused by the way neutrinos and antineutrinos induce decay differently. This is an interesting and potentially important finding because there are otherwise no physical explanations for how variable decay rates might arise. So the contribution here is providing a candidate theory.

We have put the paper out to peer-review, so it is currently under submission. If you are interested in preliminary information, the pre-print may be found at the physics archive:

http://vixra.org/abs/1502.0077

This work makes the novel contribution of proposing a detailed mechanism for neutrino-species induced decay, broadly consistent with the empirical evidence.

Dirk Pons

New Zealand, 14 Feb 2015

 

You may also be interested in related sites talking about variable decay rates:

http://phys.org/news202456660.html

https://tnrtb.wordpress.com/2013/01/21/commentary-on-variable-radioactive-decay-rates/

See also the references in our paper for a summary of the journal literature.

 

UPDATE (20 April 2015): This paper has been published as DOI: 10.5539/apr.v7n3p18 and is available open access here http://www.ccsenet.org/journal/index.php/apr/article/view/46281/25558

Leave a comment

Asymmetrical genesis

A solution to the matter-antimatter asymmetry problem

Problem: Why is there more matter than antimatter in the universe?

A deep question is why the universe has so much matter and so little antimatter.  The energy at genesis should have created equal amounts of matter and antimatter, through the pair-production process, which should have subsequently annihilated. Related questions are, ‘Why is there any matter at all?’ and ‘Where did the antimatter go, or how was it suppressed?’

It is not impossible that there are parts of the universe that consist of antimatter and thereby balance the matter, but neither is there any evidence that this is the case. Therefore it is generally accepted that the observed matter universe is more likely the result of an asymmetrical production of matter in the first place. Thus something in the genesis processes is thought to have skewed the production towards matter. But it is very difficult to see how physical processes, which are very even-handed, could have done this.

This is the asymmetrical genesis problem. There are two sub-parts: why there are more electrons than antielectrons around (asymmetrical leptogenesis), and why there are more nucleons (protons and neutrons) than their antimatter counterparts (asymmetrical baryogenesis).

Our latest work explores this problem [1]. The full paper is published in the Journal of Modern Physics (link here), and is open access (free download). A brief summary of the findings is given below.

Solution: Remanufacture of antielectrons

The theory we put forward is that the initial genesis process converted energy into equal quantities of matter and antimatter, in the form of electrons and antielectrons (positrons). A second process, which is defined in the theory, converted the antielectrons into protons. The antimatter component is predicted to be discarded by the production and emission of antineutrinos. Thus the antineutrinos were the waste stream or by-product of the process. Having converted antielectrons into protons, it is easy to explain how neutrons arise, via electron capture or beta plus decay. Thus the production processes are identified for all the building blocks of a matter universe.
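
Schematically, and only as a summary of the verbal description above (the exact number of antineutrinos and the energy bookkeeping are specified in the paper, not here):

γ + γ → e⁻ + e⁺ (pair production; standard physics, requiring a combined photon energy of at least 2mₑc² ≈ 1.022 MeV)

e⁺ + energy → p + antineutrino(s) (the proposed remanufacturing step; electric charge +1 is conserved, and the antimatter ‘hand’ is carried away by the antineutrinos)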

Therefore according to this interpretation, the asymmetry of baryogenesis is because the antimatter is hiding in plain sight, having been remanufactured into the protons and neutrons (matter baryons) themselves.

Approach: How was this solution obtained?

To solve the genesis problem, start by abandoning the idea that particles are 0-D points. This is a radical but entirely reasonable departure.  Instead, accept that particles of matter are two-ended cord-like structures [2].

These Cordus particules emit discrete forces, hence discrete fields. The nature of those emissions defines the characteristics of the particule in terms of charge and matter-antimatter species. In turn this defines the particule type: electron, antielectron, proton, etc. This also means that any process that changes the discrete field emission sequence also changes the identity of the particule.

This allows a novel breakthrough approach: we found a way to represent the discrete force structures, and we inferred a set of mechanics that define what transformations are possible under reasonable assumptions of conservation of charge and hand. We calibrated this against the known beta decay processes [3]. We created a calculus to represent these transformation processes: this is called the Cordus HED mechanics. (See paper for details). We call the process RE-MANUFACTURING, as it involves the re-arrangement of the discrete forces including the partitioning of an assembly into multiple particules, and the management of the matter-antimatter species hand (Latin manus: hand). The same HED mechanics is good for explaining other particule transformations like the decays.

Then we used the Cordus HED mechanics to search for possible solutions to the asymmetrical genesis problem. We looked at various options but only found one solution, and this is the one reported in the paper. Thus the HED mechanics predict a production process whereby the antielectron is converted into a proton. The HED mechanics is also very specific in its predictions of the by-products of this process, and this makes it testable and falsifiable.

The antimatter field structure of the antielectron is carried away by the antineutrinos as a waste stream. The antineutrinos have little reactivity, so they escape the scene, leaving the proton behind. This is fortunate since the theory also predicts that the protons would decay back to antielectrons if struck by antielectrons. This would have dissolved the universe even as it formed.

An explanation is provided for why the matter hand prevailed over antimatter during the cosmological start-up process. This is attributed to a dynamic process of domain warfare between the matter and antimatter species, wherein the dominance oscillated and became frozen into the matter production pathway as the universe cooled.

This is an efficient solution since it solves both asymmetrical leptogenesis and asymmetrical baryogenesis.

Summary

The genesis production sequence starts with a pair of photons being converted, via pair production, into an electron and antielectron. The Cordus theory explains how [4]. The antielectron remanufacturing processes, described here, convert the antielectron into a proton. The asymmetry in the manufacturing processes arises from domain warfare between the matter-antimatter species, and re-annihilation [5]. Neutrons are formed by electron capture or beta plus decay, for which a Cordus explanation is available [3]. Thus all the components of the atom are accounted for: proton, neutron, and electron. The Cordus theory also explains the strong force, as a synchronization between discrete forces of neighbouring particules [6], and the structure of the atomic nucleus [7]. The same theory also explains the stability trends and drip lines in the table of nuclides (H-Ne) [8]. This is much more than other theories, and shows the extent to which the Cordus theory is able to radically reconceptualise the genesis process.

Figure: Production process for the conversion of an antielectron into a proton.

 

Implications

This is a radical theory, since it forces one to think deeply and in a fresh way about foundational physics, how matter, energy, time, space, and force arise.

It is also a disruptive theory. First because it predicts that locality fails, and explains how. Locality means that particles are 0-D points and only affected by the fields at that 0-D location. A Cordus particule continuously breaks locality, at least at the small scale. Many physicists have been suspicious about locality, though have been reluctant to let go of it. The Cordus theory requires us to abandon locality.

The Cordus theory also strongly reasserts physical realism, and pushes back against QM’s denial thereof. QM gives weird explanations for double-slit behaviour, interferometer locus problems, superposition, and entanglement. The Cordus theory explains all these from the basis of physical realism, and without the weirdness. The quantum mechanical wave-function is now understood to be merely a stochastic approximation to a deeper and more deterministic reality. That QM gives weird explanations is not because reality is weird, but because QM is only an approximate mechanics for the foundational level. Naturally this is contentious, but such are the debates of science.

Keywords: matter-antimatter asymmetry problem; open questions in physics; baryogenesis; leptogenesis; Sakharov conditions; cosmology; genesis; big bang

References

  1. Pons, D.J., Pons, A.D., and Pons, A.J., Asymmetrical baryogenesis by remanufacture of antielectrons. Journal of Modern Physics, 2014. 5: p. 1980-1994. DOI: http://dx.doi.org/10.4236/jmp.2014.517193
  2. Pons, D.J., Pons, A.D., Pons, A.M., and Pons, A.J., Wave-particle duality: A conceptual solution from the cordus conjecture. Physics Essays, 2012. 25(1): p. 132-140. DOI: http://physicsessays.org/doi/abs/10.4006/0836-1398-25.1.132
  3. Pons, D.J., Pons, A.D., and Pons, A.J., Beta decays and the inner structures of the neutrino in a NLHV design. Applied Physics Research, 2014. 6(3): p. 50-63. DOI: http://dx.doi.org/10.5539/apr.v6n3p50
  4. Pons, D.J., Pons, A.D., and Pons, A.J., Pair production explained by a NLHV design. viXra, 2014. 1404.0051: p. 1-17. http://vixra.org/abs/1404.0051
  5. Pons, D.J., Pons, A.D., and Pons, A.J., Annihilation mechanisms. Applied Physics Research, 2014. 6(2): p. 28-46. DOI: http://dx.doi.org/10.5539/apr.v6n2p28
  6. Pons, D.J., Pons, A.D., and Pons, A.J., Synchronous interlocking of discrete forces: Strong force reconceptualised in a NLHV solution. Applied Physics Research, 2013. 5(5): p. 107-126. DOI: http://dx.doi.org/10.5539/apr.v5n5p107
  7. Pons, D.J., Pons, A.D., and Pons, A.J., Proton-Neutron bonds in nuclides: Cis- and Trans-phasic assembly with the synchronous interaction. viXra, 2013. 1309.0010: p. 1-26. http://viXra.org/abs/1309.0010
  8. Pons, D.J., Pons, A.D., and Pons, A.J., Explanation of the Table of Nuclides: Qualitative nuclear mechanics from a NLHV design. Applied Physics Research, 2013. 5(6): p. 145-174. DOI: http://dx.doi.org/10.5539/apr.v5n6p145

 

Leave a comment

What holds protons and neutrons together in the atomic nucleus?

Big questions, few answers

The nucleus consists of protons and neutrons. The more difficult question is explaining how these are bonded together. How are the protons held together in the nucleus? Why don’t protons in a nucleus repel each other? Since protons all have positive charge, they should REPEL each other with the electrostatic force. The atomic nucleus should fly apart, according to classical electrostatic theory. Yet it does not.
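
The scale of the problem can be illustrated with a standard order-of-magnitude estimate (textbook values, not taken from this post). Two protons about 1 fm apart have a Coulomb repulsion energy of

U = ke²/r ≈ (1.44 MeV·fm)/(1 fm) ≈ 1.4 MeV

corresponding to a repulsive force of roughly ke²/r² ≈ 230 N on each proton, an enormous force at the scale of a single subatomic particle. Whatever binds the nucleus must more than overcome this.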

Also, the neutrons have neutral charge, so what holds them in place? For that matter, what are the neutrons even doing in the nucleus? Why does the nucleus not consist only of protons?

There are a number of conventional theories in this area: liquid drop, shell models, and the strong force.

Liquid drop and semi-empirical mass formula

The liquid drop model assumes that the protons and neutrons (i.e. nucleons) are all thrown together without any specific internal structure or bonding arrangements: think marbles in a bag. It is called the ‘liquid drop’ model because it assumes surface tension and bulk effects. Its usual manifestation is the semi-empirical mass formula (SEMF), which is a ‘model’ because it fits coefficients to a type of power series. It represents the general trends in binding energy. On the positive side, it offers an underlying theoretical justification for the various terms. However there are also several criticisms of the SEMF. It is dependent on a very generous power series with no fewer than seven tuneable parameters: with that number of variables it is not surprising that a fit can be obtained. Unfortunately the fit is poor: it doesn’t model the light nuclides well, it totally fails to represent the extinction of the heaviest nuclides, it doesn’t model the isotope limits (drip lines) well, and it doesn’t accommodate the fact that the nucleons come in whole units. A further criticism is that the real nuclides show abrupt changes that the SEMF does not represent.
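
For reference, the basic five-term textbook form of the SEMF is shown below (the coefficients are typical fitted values from standard references, not from this post, and extended versions of the formula add further terms):

B(A,Z) = a_V·A − a_S·A^(2/3) − a_C·Z(Z−1)/A^(1/3) − a_A·(A−2Z)²/A + δ(A,Z)

with roughly a_V ≈ 15.8 MeV, a_S ≈ 18.3 MeV, a_C ≈ 0.71 MeV, a_A ≈ 23.2 MeV, and a pairing term δ of order ±12/√A MeV (zero for odd A).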

Shell model

There are also shell models. These are more abstract, being mathematical representations of combinations of protons and neutrons. However the shell models don’t really provide much insight into how the nucleons are bonded. The theory assumes that independent clusters (shells) of protons and neutrons exist. It is based on the mathematical idea of a harmonic oscillator in three coordinates, but it is difficult to give a physical interpretation of this. The theory predicts certain combinations of nucleons are especially stable, hence “magic numbers”. This model also predicts stability for large atoms (hence “island of stability”) beyond the current range of synthesised elements, though the predictions vary with the particular method used. The shell model has a good fit for atomic numbers below about 50, but becomes unwieldy for high atomic numbers. The related interacting boson model assumes that nucleons exist in pairs. However this limits the model to nuclides where p=n, which is an overly simplistic assumption.
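
For context, the standard textbook starting point for that idea (not specific to this post) is the three-dimensional harmonic oscillator, whose levels

E_N = ħω(N + 3/2), N = 0, 1, 2, …

hold 2, 6, 12, 20, 30, … nucleons respectively, giving cumulative shell closures at 2, 8, 20, 40, 70, …; a strong spin-orbit term then has to be added to recover the observed magic numbers 2, 8, 20, 28, 50, 82 and 126.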

Strong Nuclear force

From the perspective of the Standard Model of quantum theory, the protons in the nucleus do experience electrostatic repulsion, but the STRONG NUCLEAR FORCE is even stronger and holds the protons together. That force is proposed to be a residual of the strong force that acts at the quark level. Quantum chromodynamics (QCD) proposes that the quarks inside the protons are bonded by the exchange of gluon particles, in the strong force. The gluons are massless particles, and the quarks carry one of three types of charge, called colours (red, blue, green). Hence the force is also called the colour force. At the quark level this force is proposed to have some unusual characteristics:

(1) The strong force is strongly ATTRACTIVE at intermediate range, such that it overcomes the electrostatic force.

(2) At short range the strong force is presumed to be REPULSIVE. This attribute is needed to explain why the force does not contract the nucleus into a singularity.

(3) At long range the strong force is CONSTANT, and unlike the electro-magneto-gravitational (EMG) forces, does not decrease with distance. This is called colour confinement.
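
The long-range behaviour in particular is often summarised, for the quark-antiquark case, with the standard phenomenological (Cornell-type) potential, quoted here for context rather than taken from this post:

V(r) ≈ −(4/3)·αs·ħc/r + k·r, with k ≈ 1 GeV/fm

so the force dV/dr tends to the constant k at large separation, which is the constant long-range force (colour confinement) referred to in point (3), rather than falling off with distance as the electromagnetic force does.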

The colour force is consistent with known empirical data from the jets of material expelled at particle impacts. QCD is a good theory to explain what happens in high-energy particle impacts. But gluons have not been actually observed, only inferred from impacts, so other explanations are still possible.

Ideally the strong nuclear force would explain how the protons and neutrons are bonded together, but there is a conceptual chasm in this area. It is unclear how the residual nuclear force (at the nucleus level) emerges from the strong force (at the quark level), except as a general concept that the gluons leak out. The theoretical details are lacking. Nor is it clear what structure it might impose on how the nucleons are arranged within the nucleus. It is also thought that the residual strong force causes BINDING ENERGY, but again the exact mechanism is unknown. QCD is unable to predict even the most basic of nuclear attributes. It does not describe how multiple nucleons interact. It cannot explain the simplest nuclei, the hydrogen isotopes. It does not explain nuclear structure or the table of nuclides.

 

Issues

It is clear from observation that no nucleus exists with multiple protons and no neutrons, so evidently neutrons play an important role within the nucleus, one that is not represented in any of the existing theories.

Logically there should be a conceptual continuity between whatever force binds the protons and neutrons together, to an explanation of the properties of the nuclides. However QCD is stuck at the first stage, and the drop/shell models are marooned at the last stage.

Explaining why any one nuclide is stable, unstable (radioactive) or non-existent is not possible with any of these theories. Nor can existing theories explain why the nuclide series start and end where they do. They also do not explain why disproportionally more neutrons are needed as the number of protons increases (the stability line for p:n is curved as opposed to being a straight line).

If we take any one line of isotopes in the table of nuclides, such as Argon, then there are a number of questions.


Figure 1: Argon isotopes and key questions. Background image adapted from [1] https://www-nds.iaea.org/relnsd/vcharthtml/VChartHTML.html.

 

Answers in the Cordus mechanics – a design methodology

The Cordus physics answers many of these questions about the structure of the atomic nucleus. It is the only theory that can explain why any one nuclide is stable/unstable/non-existent, at least from H to Ne. The theory is based on COVERT STRUCTURES. This means that it predicts that particles are not actually zero-dimensional points, which is the standard premise of the conventional theories. Instead, the theory shows that solutions to these deep questions are possible provided one is willing to accept a design where particles have internal structures. Not just any covert structures either: the Cordus theory goes on to work out, using principles of engineering design, exactly what those structures would need to be. The theory predicts a specific string-like structure, and shows that if particles were to have this structure then many problems in fundamental physics can be given physically realistic explanations. The structure only becomes apparent at finer scales than the relatively coarse level at which quantum mechanics views the world.

What is the covert structure?

The theory predicts that a particle consists of two reactive ends, a small distance apart and joined by a fibril. These reactive ends emit discrete forces into the external environment. This whole structure is called a PARTICULE to differentiate it from a 0-D point particle. The particules react with other particules, e.g. bonding and forces, only at the reactive ends.


Figure 2: Proposed internal and external (discrete force) structures of the proton.

 

So, what does the Cordus theory say about nuclear structure?

The theory predicts that protons and neutrons are rod-like structures that interact at their two ends. So they have physical size. They join up in chains and networks, to form nuclear polymers. They preferentially bond proton-to-neutron, but will also bond proton-proton or neutron-neutron if there is no other choice. The theory explains how these bonds work, which is by the interlocking of the discrete fields of two or more nucleons. The atomic nucleus is thus proposed to consist of a polymer of protons and neutrons.


Figure 3: The synchronous interaction (strong force) bonds protons and neutrons together in a variety of ways, resulting in nuclear polymer structures. These are proposed as the structure of the nucleus.

 

Some simple geometrically plausible assumptions may be added, in which case the design is able to explain a wide range of nuclides. For example, it is necessary to assume that the polymer is generally a closed loop (with exceptions for the lightest nuclides) and that there can be bridges across the loop. The polymer is required to take a specific shape, which is to wrap around the edges of a set of interconnected cubes. The cube idea might seem a bit strange, but is merely a consequence of having discrete three-dimensional fields for the proton and neutron. This need not be contentious, as QCD likewise has three colour charges.

Findings

The results show that the stability of nuclides can be qualitatively predicted by morphology of the nuclear polymer and the phase (cis/transphasic nature, or spin) of the particules. The theory successfully explains the qualitative stability characteristics of the nuclides, at least from hydrogen to neon and apparently higher.

This provides a radical new perspective on nuclear mechanics. This is the first theory to explain what holds the protons and neutrons in the nucleus, what the neutrons do in the nucleus, and why each nuclide is stable or unstable. It also explains why only certain nuclides exist, as opposed to being non-existent. It is also the first to do all this from first principles. Now it is still possible that all this good news is down to lucky chance, and that the whole thing is spurious causality. If that were the case, then we would expect to see the theory collapse as it was applied to higher nuclides, or we would expect to see logical inconsistencies creep in as the theory was extended to other areas. So far there are none of those problems, but more work is necessary, and until then we admit that this is still an open question. Nonetheless, that it has been possible to achieve this, when no other theory has been able to come close to answering these questions, is promising.

The implications for fundamental physics are potentially far-reaching. Serious consideration must now be given to the likelihood that at the deeper level, particules have internal structure after all. This theory does not conflict with quantum mechanics but rather subsumes it: QM becomes a stochastic approximation to a deeper determinism, and the Standard Model of particle physics is re-interpreted as a set of zero-dimensional point-approximations to a finer-scaled covert structure. Now that would be something to be excited about.

 

Answers, according to the Cordus theory

Q: What is the atomic nucleus made of?

A: The nucleus consists of protons and neutrons that are rod-like structures (as opposed to 0-D points) that link into chains. These chains form a Nuclear polymer that is generally a closed loop (exceptions for the lightest nuclides) and there can be bridges across the loop. The polymer is required to take a specific shape, which is to wrap around the edges of a set of interconnected cubes.

Q: How are the protons held together in the nucleus?

A: An interlocking (synchronous) interaction. Forget the strong force: it turns out that’s not a helpful way to conceptualise the situation. What seems to be really happening is that the synchronous interaction holds both protons and neutrons together. It works by the synchronisation of discrete force emissions from neighbouring particules. One reactive end from each particule is thus locked to its neighbour. The other reactive ends are free to make bonds with other particules. This explains why the effect is so ‘strong’: it is an interlock. It also explains why the nucleus does not collapse in on itself (equivalent to the ‘repulsive’ strong force). Furthermore these discrete forces continue out into the external environment (equivalent to the ‘constant’ strong force at long range). In addition, the Cordus theory predicts that the electrostatic force does not operate in the nucleus, as it only applies to discoherent matter. Likewise the synchronous interaction only applies to coherent matter.

The theory gives an explanation of the nucleus, based in physical realism. This is a radical and highly novel outcome. If true, a conceptual revolution will be required at the fundamental level. Maybe it’s time … Physics is overdue for an earthquake.

Dirk Pons

Christchurch

12 Sept 2014
Read more…

D J Pons, A D Pons, A M Pons, and A J Pons, Wave-particle duality: A conceptual solution from the cordus conjecture. Physics Essays, 25(1): p. 132-140. DOI: http://physicsessays.org/doi/abs/10.4006/0836-1398-25.1.132 (2012).

D J Pons, A D Pons, and A J Pons, Synchronous interlocking of discrete forces: Strong force reconceptualised in a NLHV solution. Applied Physics Research, 5(5): p. 107-126. DOI: http://dx.doi.org/10.5539/apr.v5n5p107 (2013).

D J Pons, A D Pons, and A J Pons, Differentiation of Matter and Antimatter by Hand: Internal and External Structures of the Electron and Antielectron. Physics Essays, 27: p. 26-35. http://vixra.org/abs/1305.0157 (2014).

D J Pons, A D Pons, and A J Pons, Explanation of the Table of Nuclides: Qualitative nuclear mechanics from a NLHV design. Applied Physics Research, 5(6): p. 145-174. DOI: http://dx.doi.org/10.5539/apr.v5n6p145 (2013).

D J Pons, A D Pons, and A J Pons, Annihilation mechanisms. Applied Physics Research, 6(2): p. 28-46. DOI: http://dx.doi.org/10.5539/apr.v6n2p28 (2014).

D J Pons, A D Pons, and A J Pons, Beta decays and the inner structures of the neutrino in a NLHV design. Applied Physics Research, 6(3): p. 50-63. DOI: http://dx.doi.org/10.5539/apr.v6n3p50 (2014).

Leave a comment