Dirk Pons


Homepage: https://cordus.wordpress.com/

Physical explanation of entanglement

What is entanglement? Entanglement is a known physical phenomenon whereby particles affect each other despite being a macroscopic distance apart, and despite no apparent connection between them. The effect is typically seen in the spin, which is an orientation property of particles, whereby changing the spin of one particle results in the spin of the other also changing. Macroscopic entanglement requires special situations: it requires deliberate preparation and setting up of the experiment. It is a coherent behaviour, and the effect is lost when discoherence sets in, which occurs when the particles are disturbed by outside forces and fields. Consequently it is not generally observed in macroscopic phenomena at our level of existence, and for the same reasons neither is superposition (a particle simultaneously in two geometric locations).

How does it operate? This is unknown. The experimental evidence is that it does exist, but the mechanism is not known. Classical Newtonian mechanics implies the effect should not exist. General relativity makes no provision for it. Quantum mechanics (QM) accepts it as real, and can express the outcomes mathematically, but does not describe how entanglement operates at the physical level.
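Although the mechanism is unknown, the measurement statistics that QM predicts are precise, and they quantify how strong the effect is. As a hedged illustration (standard textbook QM, not specific to any interpretation or to the Cordus theory), the following sketch evaluates the CHSH quantity using the singlet-state correlation E(a,b) = -cos(a-b). Any model in which each particle is a local point object carrying preset values is bounded by |S| ≤ 2, whereas the entangled prediction reaches 2√2:

```python
import math

def E(a, b):
    # QM-predicted spin correlation for a singlet pair measured
    # along directions a and b (radians): E(a, b) = -cos(a - b)
    return -math.cos(a - b)

# Standard CHSH measurement angles for the two observers
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: local point-particle models give |S| <= 2,
# but the entangled singlet state gives 2*sqrt(2) ~ 2.828
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~2.828, exceeding the local bound of 2
```

This excess over the local bound is exactly the non-local behaviour that any hidden-variable theory must accommodate, which is why the non-local (NLHV) class of theories discussed below is relevant.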

Does new physics offer new explanations for entanglement? Yes. This is where the Cordus theory of fundamental physics offers a candidate solution. In the paper ‘A physical basis for entanglement in a non-local hidden variable theory’ (2017) (https://doi.org/10.4236/jmp.2017.88082) we show that superposition and entanglement may be qualitatively explained if particles were to have the internal structure proposed by the Cordus theory.

This is a non-local hidden-variable (NLHV) theory, and hence it naturally supports non-local behaviour. Locality is the expectation that a point object is only affected by the values of fields and external environmental variables at that point, not by remote values. Entanglement is a type of non-local behaviour: the particles evidently behave as if affected by events happening some distance away from the points that define them.

As a type of hidden-variable theory, the theory proposes (and this is important) that fundamental particles have internal structure. This is a major departure from QM and its assumption that particles are zero-dimensional points without sub-structure.

Figure: Qualitative explanation of two-photon entanglement. The photons are predicted to originate from a Pauli pair of electrons – these electrons are bonded in a transphasic interaction and hence their emitted photons also have that interaction. Consequently the four reactive ends of the two photons are linked by fibrils, even as they move further apart. As a result the behaviours of the photons are coupled: hence entanglement.

The explanation from the Cordus theory is that there is no single point that defines the position of the particule. Its reactive ends between them occupy a volume of space, and its discrete fields extend out to occupy a volume of space external to the reactive ends.

The Cordus theory explains that locality fails because the particule is affected by what happens at both reactive ends, and by the externally-originating discrete forces it receives at both locations. A principle of Wider Locality is proposed, whereby the particule is affected by the values of external discrete forces (hence also conventional fields) in the vicinity of both its reactive ends.

The ability to explain entanglement conceptually in terms of physical realism is relevant because it rebuts the claim that no such hidden-variable theory could exist. This is significant because it has previously been believed that only QM could explain this phenomenon.


Pons, D. J., Pons, A. D., & Pons, A. J. (2017). A physical basis for entanglement in a non-local hidden variable theory. Journal of Modern Physics, 8(8), 1257-1274. doi: https://doi.org/10.4236/jmp.2017.88082 or http://file.scirp.org/Html/10-7503127_77506.htm or http://vixra.org/abs/1502.0103




Optical phenomena involving energy conversion: Explanation based on new physics

Many optical phenomena have poor or no explanations at the level of individual photon particles. Examples are the processes of photon emission, photon absorption, phase change at reflection, and laser emissions. These are adequately described by the classical electromagnetic wave theory of light, but that applies to waves and is difficult to extend to individual particles. Quantum mechanics (QM) better represents the behaviour of individual particles, but its power of explanation is weak, i.e. it can put numbers to phenomena but its explanations cannot be grounded in physical realism. QM is unable to explain how the 0D point of the photon is absorbed into the 0D point of the electron, or how a 0D photon separates into an electron and antielectron (pair production), or how matter and antimatter annihilate back to photons.

In the paper http://dx.doi.org/10.4236/jmp.2016.710094 we show how to solve this explanatory problem. We show that it is possible to explain many optical phenomena involving energy conversion. The solution involves a new physics at the sub-particle level, in the form of a non-local hidden-variable (NLHV) solution.

Figure: Process of photon emission from an electron


It has long been known that the bonding commitments of the electron affect its energy behaviour, but the mechanisms for this have been elusive. We show how the degree of bonding constraint on the electron determines how it processes excess energy, see figure. A key concept is that the span and frequency of the electron are inversely proportional. This explains why energy changes cause positional distress for the electron.
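Whatever the internal mechanism, the quantitative bookkeeping of any emission event is the standard Planck relation E = hν = hc/λ. A small worked example (ordinary physics, not specific to the Cordus theory) showing the wavelength a given transition energy produces:

```python
# Photon wavelength for a given transition energy, via E = h*c/lambda
H = 6.62607015e-34      # Planck constant [J s]
C = 299792458.0         # speed of light in vacuum [m/s]
EV = 1.602176634e-19    # 1 electron-volt in joules [J]

def wavelength_nm(energy_eV):
    """Wavelength (nm) of a photon carrying the given energy."""
    return H * C / (energy_eV * EV) * 1e9

# A ~2 eV electron transition emits red light near 620 nm
print(round(wavelength_nm(2.0), 1))  # ~619.9
```

Any candidate mechanism for how the electron sheds energy must reproduce this relation, since it is so well confirmed experimentally.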

Natural explanations are given for multiple emission phenomena: absorbance; saturation; the Beer-Lambert law; colour; quantum energy states; directional emission; the photoelectric effect; emission of polarised photons from crystals; refraction effects; reflection; transparency; birefringence; Cherenkov radiation; bremsstrahlung and synchrotron radiation; phase change at reflection; force impulse at reflection and radiation pressure; stimulated emission (laser).

The originality of this work is the elucidation of a mechanism for how the electron responds to combinations of bonding constraint and pumped energy. The crucial insight is that the electron size and position(s) are coupled attributes of its frequency and energy, where the coupling is achieved via physical substructures. The theory is able to provide a logically coherent explanation for a wide variety of energy conversion phenomena.

Dirk Pons

Christchurch, New Zealand

15 June 2016


More information – The full paper (gold open access) is available at:

Pons, D.J., Pons, A.D., and Pons, A.J., (2016), Energy conversion mechanics for photon emission per non-local hidden-variable theory. Journal of Modern Physics, 7(10), 1049-1067.  http://dx.doi.org/10.4236/jmp.2016.710094




Why is the speed of light constant?

Why is the speed of light constant in the vacuum?


The constancy of the speed of light c in the vacuum was the key insight in Einstein’s work on relativity. However this was an assumption, rather than a proof. It is an assumption that has worked well, in that the resulting theory has shown good agreement with many observations. The Michelson-Morley experiment directly tested the idea of Earth moving through a stationary ether, by looking for differences in the speed of light in different directions, and found no evidence to support such a theory. There is no empirical evidence that convincingly shows the speed of light to be variable in vacuo in the vicinity of Earth. However it is possible that the speed of light is merely locally constant, and different elsewhere in the universe. In our latest paper we show why this might be so.

Fundamental questions about light

There are several perplexing questions about light:

  • What is the underlying mechanism that makes the speed of light constant in the vacuum?
  • What properties of the universe cause the speed of light to have the value it has?
  • If the speed of light is not constant throughout the universe, what would be the mechanisms?
  • How does light move through the vacuum?
  • The vacuum has properties: electric and magnetic constants. Why, and what causes these?
  • How does light behave as both a wave and particle? (Wave-particle duality)
  • How does a photon physically take two different paths? (Superposition in interferometers)
  • How does entanglement work at the level of the individual photon?

These are questions of fundamental physics, and of cosmology. Consequently there is ongoing interest in the speed of light at the foundational level. The difficulty is that neither general relativity nor quantum mechanics can explain why c should be constant, or why it should have the value it does. Neither for that matter does string/M theory. Gaining a better understanding of this has the potential to bridge the particle and cosmology scales of physics.

Is the speed of light really constant? Everywhere? At all times?

There has been ongoing interest in developing theories where c is not constant. These are called variable speed of light (VSL) theories [see paper for more details]. The primary purpose of these is to explore for new physics at deeper levels, with a particular interest in quantum-gravity.  For example, it may be that the invariance of c breaks down at very small scales, or for photons of different energy, though such searches have been unsuccessful to date.  Another approach is cosmological. If the speed of light was to be variable, it could solve certain problems. Specifically, the horizon, inflation and  flatness problems might be resolved if there were a faster c in the early universe, i.e. a time-varying speed of light.  There are several other possible applications for a variable speed of light theory in cosmology.

However there is one big problem:

In all existing VSL theories the difficulty is providing reasons for why c should vary with time or geometric scale.

The theories require the speed of light to be different at genesis, and then somehow change slowly or suddenly switch over at some time or event, for reasons unknown. None of the existing VSL theories describe why this should be, nor do they propose underlying mechanics. This is problematic, and contributes to existing VSL theories not being widely accepted.

Cordus theory predicts the speed of light is variable, and attributes it to fabric density

In our paper [apr.v8n3p111] we apply the non-local hidden-variable (Cordus) theory to this problem. It turns out that it is a logical necessity of the theory that the speed of light be variable. The theory also predicts a specific underlying mechanism for this. Our findings are that the speed of light is inversely proportional to fabric density.  This is because the discrete fields of the photon interact dynamically with the fabric and therefore consume frequency cycles of the photon. The fabric arises from aggregation of discrete force emissions (fields) from massy particles, which in turn depends on the proximity and spatial distribution of matter.

This theory offers a conceptually simple way to reconcile the refraction of light in both gravitational situations and optical materials: the density of matter affects the fabric density, and hence affects the speed of light. So when light enters a denser medium, say a glass prism, it encounters an abrupt increase in fabric density, which slows its speed. Likewise light that grazes past a star is subject to a small gradient in the fabric, hence the gravitational bending of the light-path. Furthermore, the theory accommodates the constant speed of light of general relativity as a special case of a locally constant fabric density. In other words, the fabric density is homogeneous in the vicinity of Earth, so the speed of light is also constant in this locality. However, in a different part of the universe where matter is more sparse, the speed of light is predicted to be faster. Similarly, at earlier time epochs when the universe was more dense, the speed of light would have been slower. This also means that the results disfavour the universal applicability of the cosmological principle of homogeneity and isotropy of the universe.
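The refraction half of this claim has a standard quantitative counterpart: if a medium slows light from c to v, the refractive index is n = c/v, and Snell's law gives the bending. The sketch below is ordinary optics; the identification of n with fabric density is the Cordus proposal and is not itself encoded here:

```python
import math

def refracted_angle_deg(theta1_deg, v1, v2):
    """Snell's law in speed form: sin(t2)/sin(t1) = v2/v1."""
    s = math.sin(math.radians(theta1_deg)) * v2 / v1
    return math.degrees(math.asin(s))

C = 299792458.0       # speed of light in vacuum [m/s]
V_GLASS = C / 1.5     # slower speed inside glass (n = 1.5)

# Light hitting glass at 30 degrees bends toward the normal
print(round(refracted_angle_deg(30.0, C, V_GLASS), 2))  # ~19.47
```

On the Cordus reading, the same calculation would apply to light crossing a fabric-density gradient near a massive body, with the speed change supplied by the fabric rather than by a material medium.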

The originality in this paper is in proposing underlying mechanisms for the speed of light.  Uniquely, this theory identifies fabric density as the dependent variable. In contrast, other VSL models propose that c varies with time or some geometric-like scale, but struggle to provide plausible reasons for that dependency.


This theory predicts that the speed of light is inversely proportional to the fabric density, which in turn is related to the proximity of matter. The fabric fills even the vacuum of space, and the density of this fabric is what gives the electric and magnetic constants their values, and sets the speed of light. The speed of light is constant in the vicinity of Earth, because the local fabric density is relatively isotropic. This explanation also accommodates relativistic time dilation, gravitational time dilation, gravitational bending of light, and refraction of light.  So the speed of light is a variable that depends on fabric density, hence is an emergent property of the fabric.

The paper is available open access: http://dx.doi.org/10.5539/apr.v8n3p111


The fabric density concept is covered at http://dx.doi.org/10.2174/1874381101306010077.

The corresponding theory of time, which predicts that time speeds up in situations of lower fabric density, is at http://dx.doi.org/10.5539/apr.v5n6p23.



Citation for published paper:

Pons, D. J., Pons, A. D., & Pons, A. J. (2016). Speed of light as an emergent property of the fabric. Applied Physics Research, 8(3): 111-121.  http://dx.doi.org/10.5539/apr.v8n3p111

Original work on physics archive (2013) : http://vixra.org/abs/1305.0148




Beautiful mathematics vs. qualitative insights

Which is better for fundamental physics: beautiful mathematics based on pure concepts, or qualitative insights based on natural phenomena?

According to Lee Smolin in a 2015 arxiv paper [1], it’s the latter.

As I understand him, Smolin’s main point is that elegant qualitative explanations are more valuable than beautiful mathematics, that physics fails to progress when ‘mathematics [is used] as a substitute for insight into nature‘ (p13).
‘The point is not how beautiful the equations are, it is how minimal the assumptions needed and how elegant the explanations.‘ (http://arxiv.org/abs/1512.07551)
The symmetry methodology receives criticism for the proliferation of assumptions it requires, and the lack of explanatory power. Likewise particle supersymmetry is identified as having the same failings. Smolin is also critical of string theory, writing, ‘Thousands of theorists have spent decades studying these [string theory] ideas, and there is not yet a single connection with experiment’ (p6-7).

Mathematical symmetries: More or fewer?

Smolin is especially critical of the idea that progress might be found in increasingly elaborate mathematical symmetries.
I also wonder whether the ‘symmetries’ idea is overloaded. The basic concept of symmetry is that some attribute of the system should be preserved when transformed about some dimension. Even if it is possible to represent this mathematically, we should still be prudent about which attributes, transformations, and dimensions to accept. Actual physics does not necessarily follow mathematical representation. There is generally a lack of critical evaluation of the validity of specific attributes, transformations, and dimensions for the proposed symmetries. The *time* variable is a case in point. Mathematical treatments invariably consider it to be a dimension, yet empirical evidence overwhelmingly shows this not to be the case.
Irreversibility shows that time does not exhibit symmetry. The time dimension cannot be traversed in a controlled manner, neither forward nor, especially, backward. Also, a complex system of particles will not spontaneously revert to its former configuration. Consequently *time* cannot be considered to be a dimension about which it is valid to apply a symmetry transformation, even when one exists mathematically. Logically, we should therefore discard any mathematical symmetry that has a time dimension to it. That reduces the field considerably, since many symmetries have a temporal component.
Alternatively, if we are to continue to rely on temporal symmetries, it will be necessary to understand how the mechanics of irreversibility arises, and why those symmetries are exempt therefrom. I accept that relativity considers time to be a dimension, and has achieved significant theoretical advances with that premise. However relativity is also a theory of macroscopic interactions, and it is possible that assuming time to be a dimension is a sufficiently accurate premise at this scale, but not at others. Our own work suggests that time could be an emergent property of matter, rather than a dimension (http://dx.doi.org/10.5539/apr.v5n6p23).  This makes it much easier to explain the origins of the arrow of time and of irreversibility. So it can be fruitful, in an ontological way, to be sceptical of the idea that mathematical formalisms of symmetry are necessarily valid representations of actual physics. It might be reading too much into Smolin’s meaning when he says that ‘time… properties reflect the positions … of matter in the universe’ (p12), but that seems consistent with our proposition.

How to find a better physics?

The solution, Smolin says, is to ‘begin with new physical principles‘ (p8). Thus we should expect new physics will emerge by developing qualitative explanations based on intuitive insights from natural phenomena, rather than trying to extend existing mathematics. Explanations that are valuable are those that are efficient (fewer parameters, less tuning, and not involving extremely big or small numbers) and logically consistent with physical realism (‘tell a coherent story’). It is necessary that the explanations come first, and the mathematics follows later as a subordinate activity to formalise and represent those insights.
However it is not so easy to do that in practice, and Smolin does not have suggestions for where these new physical principles should be sought. His statement that ‘no such principles have been proposed’ (p8) is incorrect. We and others have proposed new physical principles: ours is called the Cordus theory and is based on a proposed internal structure to particles. Other theories exist, see vixra and arxiv.

The bigger issue is that physics journals are mostly deaf to propositions regarding new principles. Our own papers have been summarily rejected by editors many times due to ‘lack of mathematical content’, or ‘we do not publish speculative material’, or ‘extraordinary claims require extraordinary evidence’. In an ideal world all candidate solutions would at least be admitted to scrutiny, but this does not actually happen, and there are multiple existing ideas in the wild that never make it through to the formal journal literature frequented by physicists.

Even then, those ideas that undergo peer review and are published are not necessarily widely available. The problem is that the academic search engines, like Elsevier’s Compendex and Thomson’s Web of Science, are selective in which journals they index, and fail to provide reliable coverage of the more radical elements of physics. (Google Scholar appears to provide an unbiased assay of the literature.) Most physicists would have to go out of their way to inform themselves of the protosciences and new propositions that circulate outside their bubbles of knowledge. Not all those proposals can possibly be right, but neither are they all necessarily wrong. In mitigation, the body of literature in physics has become so voluminous that it is impossible for any one physicist to be fully informed about all developments, even within a sub-field like fundamental physics.
But the point remains that new principles of physics do exist, based on intuitive insights from natural phenomena, and which have high explanatory power, exactly how Smolin expected things to develop.
Smolin suspects that true solutions will have fewer rather than more symmetries. This is also consistent with our work, which indicates that both the asymmetrical leptogenesis and baryogenesis processes can be conceptually explained as consequences of a single deeper symmetry (http://dx.doi.org/10.4236/jmp.2014.517193). That symmetry is the matter-antimatter species differentiation (http://dx.doi.org/10.4006/0836-1398-27.1.26). It also explains asymmetries in decay rates (http://dx.doi.org/10.5539/apr.v7n2p1).
In a way, though he does not use the words, Smolin tacitly endorses the principle of physical realism: that physical observable phenomena do have deeper causal mechanics involving parameters that exist objectively. He never mentions the hidden-variable solutions. Perhaps this is indicative of the position of most theorists: the hidden-variable sector has been unproductive, everyone has given up on it as intractable, and now ignores it. According to Google Scholar, ours appears to be the only group left in the world publishing non-local hidden-variable (NLHV) solutions. Time will tell whether or not these are strong enough, but they do already embody Smolin’s injunction to take a fresh look for new physical principles.

Dirk Pons

26 February 2016, Christchurch, New Zealand

This is an expansion of a post at Physics Forum  https://www.physicsforums.com/threads/smolin-lessons-from-einsteins-discovery.849464/#post-5390859


[1] Smolin, L.: Lessons from Einstein’s 1915 discovery of general relativity. arXiv:1512.07551, 1-14 (2015). http://arxiv.org/abs/1512.07551





What does the fine structure constant represent?

Fine structure constant α

This is a dimensionless constant, represented with the symbol α (alpha), and it relates together the electric charge, the vacuum permittivity, and the speed of light.

The equation is as follows:

The impedance of free space is Zo = 1/(εoc) = 2αh/e2, with electric constant εo (also called vacuum permittivity), the speed of light in the vacuum c, and the fine structure constant α = e2/(2εohc), with elementary charge e [coulombs], Planck constant h, and c as before. All these are generally considered physical constants, i.e. are fixed values for the universe.

One example of how this relationship may be used is as follows. Given the electric charge, and the vacuum permittivity, then the alpha equation may be used to explain why the speed of light has the value it does. The equation may be rearranged into other equivalent forms.
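The numbers can be checked directly from the CODATA values. A short script (standard constants, nothing theory-specific) verifying both forms of the impedance relation given above:

```python
# CODATA values (SI)
E_CHARGE = 1.602176634e-19   # elementary charge e [C]
EPS0 = 8.8541878128e-12      # electric constant (vacuum permittivity) [F/m]
H = 6.62607015e-34           # Planck constant [J s]
C = 299792458.0              # speed of light in vacuum [m/s]

alpha = E_CHARGE**2 / (2 * EPS0 * H * C)   # fine structure constant
Z0_a = 1 / (EPS0 * C)                      # impedance of free space, field form
Z0_b = 2 * alpha * H / E_CHARGE**2         # same impedance expressed via alpha

print(round(1 / alpha, 3))   # ~137.036
print(round(Z0_a, 3))        # ~376.730 ohms
```

Rearranging the same relation gives c = e²/(2εohα), which is the sense in which, given e and εo, the equation constrains the value of the speed of light.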

 What is the physical meaning of the fine structure constant?

This is a more difficult question, especially when coupled with the question, Why does alpha take the value it does? This is something of a mystery.

We believe we can answer some parts of this question. In a recent paper of the Cordus theory it has been proposed that both the vacuum permittivity and the speed of light are dependent variables, and situationally specific. It is proposed that εo represents the density of the discrete forces in the fabric, and thus depends on the spatial distribution of mass within the universe. Thus the electric constant is recast as an emergent property of the fabric, and hence of matter.

From this perspective α is a measure of the transmission efficacy of the fabric, i.e. it determines the relationship between the electric constant of the vacuum fabric, and the speed of propagation c through the fabric.

This is consistent with the observation that α appears wherever electrical forces and propagation of fields occur, and this includes cases such as electron bonding.

The reason the speed of light is limited to a certain finite value is explained by this theory as a consequence of the fabric density creating a temporal impedance. Thus denser fabric results in a slower speed of light, and this is consistent with time dilation, and with optical refraction generally. In the Cordus theory the speed of light is in turn determined by the density of the fabric discrete forces, and is therefore locally constant and relativistic, but ultimately dependent on the past history of matter density in the locally available universe. Thus the vacuum (fabric) has a finite speed of light, despite the instantaneous communication across the fibril of the particule. This Cordus theory is consistent with the known impedance of free space, though it comes at it from a novel direction.

The implications are the electric constant of free space is not actually constant, but rather depends on the fabric density, hence on the spatial distribution of matter. The fabric density also determines the speed of light in the situation, and α is the factor that relates the two for this universe. It would appear to be a factor set at genesis of the universe.


Pons, D. J. (2015). Inner process of photon emission and absorption. Applied Physics Research, 7(4), 14-26. doi: http://dx.doi.org/10.5539/apr.v7n4p24



Presentation to University of Canterbury Mathematics Department (17 Sept 2015)

Title: Conceptual framework for a novel non-local hidden-variable theory of physics: Cordus theory

17 Sept 2015, 15h00, venue Ers446

Content: As per http://vixra.org/abs/1104.0015


How does the synchronous interaction, or strong nuclear force, attract nucleons and hold the nucleus together?

The Cordus theory of the synchronous interaction is key to the concept of the nuclear polymer.

How does the strong force work?

Conventionally the strong nuclear force is proposed to arise by the exchange of gluons of various colours. The theory for this is quantum chromodynamics (QCD). This force is then proposed to be much stronger in attraction than the electrostatic repulsion of protons of like charge, hence ‘strong’. Rather strangely, the theory requires the force to change and become repulsive at close range. This is to prevent it from collapsing the protons into a singularity (single point). Quite how this change operates is not explained, and the theory as a whole also cannot explain even the simplest atomic nucleus, let alone any of the features of the table of nuclides. So there is a large gap between the colour force of QCD and any realistic explanation of atomic structure. QCD, gluons, and the strong attraction-repulsion force have no proven external validity: the concepts don’t extend to explain anything else.

It is time to attempt a different approach. Remember, it is necessary to explain not only how the quarks are bonded, but also how the protons and neutrons are bonded, and onward to explain why any one nuclide is stable/unstable/non-existent.  That means seeking explanations to the bigger picture, rather than creating a narrowly-focussed mathematical model of one tiny part of the problem.

What holds protons and neutrons together in the nucleus?

Here is our progress so far. First, note that conventionally the strong nuclear force overcomes the electrostatic repulsion of protons. In contrast the Cordus theory proposes that the protons and neutrons are locked together by synchronisation of their emitted electrostatic forces. These forces are proposed to be discrete. This is a radically different mechanism that has nothing to do with the conventional electrostatic force.

‘The Cordus theory proposes that the strong force arises from the synchronisation of discrete forces between the reactive ends of different particules. The emission directions represent the particule’s directional engagement with the external environment, and so two particules that co-locate one of each of their reactive ends need to share this access, and this is proposed as the basis for the synchronicity requirement. This causes the emission of the particules’ discrete forces to be interlocked. The discrete forces cause the reactive ends to be pulled into (or repelled from) co-location and held there. Hence the strong nature of the forces, its apparent attractive-repulsive nature, and its short range.’


Figure CM-06-01-01: The Cordus equivalent of the strong force is a synchronous interaction between particles.

 What is the synchronous interaction?

Second, note that the conventional idea is that the strong force is one of a set that also includes the electrostatic, magnetic, and gravitational forces (EMG). In contrast the Cordus theory proposes that the electrostatic repulsion force is inoperable inside the atomic nucleus. So there is no need for a ‘strong’ force to ‘overcome’ the proton electrostatic repulsion. You can either have the EMG forces or the synchronous interaction, not both. The factor that determines which operates is whether the assembly of matter is discoherent or coherent.

‘Unexpectedly, the Cordus theory predicts that this synchronous force only applies to particules in coherent assembly. In such situations the synchronicity of emission means also that the assembled particules must energise at the same frequency (or a suitable harmonic), and either in or out of phase. Thus the synchronous interaction is predicted to be limited to particules in coherent assembly relationships, with the electro-magneto-gravitational forces being the corresponding interaction for discoherent assemblies of matter.’

This is a radical departure from the orthodox perspective, which otherwise sees the strong and electrostatic forces as operating concurrently. The Cordus theory predicts that the interaction between neighbouring protons in the nucleus is entirely synchronous (strong force) and that there is no electrostatic repulsion (at least for small nuclei).

What determines nuclide stability?

Third, the Cordus theory proposes, by logical extension, that the synchronous interaction makes two distinct types of bond, differentiated by same vs. opposed phase (cis- and transphasic) of the reactive ends. This concept does not exist in conventional theories of the strong force which are based on 0D points.
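The in-phase vs. out-of-phase distinction can be illustrated numerically. The following is only a hedged toy (two abstract oscillators standing in for the energisation cycles of reactive ends; the mechanism itself is defined in the Cordus papers, not here): a cisphasic-like pair energises together (phase difference 0), a transphasic-like pair strictly alternates (phase difference π):

```python
import math

def phase_correlation(phi, n=1000):
    """Normalised average product of two unit oscillators offset by phase phi.
    +1: always energised together (cisphasic-like, in phase)
    -1: strictly alternating (transphasic-like, out of phase)"""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        total += math.cos(t) * math.cos(t + phi)
    return 2 * total / n   # equals cos(phi) for evenly spaced samples

print(round(phase_correlation(0.0), 3))       # 1.0  (in phase)
print(round(phase_correlation(math.pi), 3))   # -1.0 (out of phase)
```

The point of the toy is only that a single continuous parameter (relative phase) yields two sharply distinct coupling regimes, which is the kind of discrete bond-type differentiation the cis/transphasic proposal requires.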


Figure CM-06-03-01B


What is the nuclear polymer structure of the atomic nucleus?

By logical progression, this concept led to the conclusion that protons and neutrons are bound together in a chain, or as we call it, a nuclear polymer. This proves to be a powerful concept, because with it we are able to explain nuclide structures. The following diagram shows how the principle is applied to some example nuclides.


Figure CM-06-03-02-01-4


More information may be found in the following references. They are best read in the order given, rather than the order published.

Dirk Pons

19 July 2015, Christchurch, New Zealand


[1] Pons, D. J., Pons, A. D., and Pons, A. J., Synchronous interlocking of discrete forces: Strong force reconceptualised in a NLHV solution. Applied Physics Research, 2013. 5(5): p. 107-126. DOI: http://dx.doi.org/10.5539/apr.v5n5p107

[2] Pons, D. J., Pons, A. D., and Pons, A. J., Nuclear polymer explains the stability, instability, and non-existence of nuclides. Physics Research International 2015. 2015(Article ID 651361): p. 1-19. DOI: http://dx.doi.org/10.1155/2015/651361

[3] Pons, D. J., Pons, A. D., and Pons, A. J., Explanation of the Table of Nuclides:  Qualitative nuclear mechanics from a NLHV design. Applied Physics Research 2013. 5(6): p. 145-174. DOI: http://dx.doi.org/10.5539/apr.v5n6p145


