New Energy Technology 1990 (PACE, 1994)

Contents:
Introduction
Acknowledgements
Fundamentals
Clean energy developments
Wireless transmission of electrical energy


Dr. A. MICHROWSKI
Editor

THE PLANETARY ASSOCIATION
for
CLEAN ENERGY, Incorporated

OTTAWA, Canada

ISBN 0-919969-22-4

© Planetary Association for Clean Energy, Inc. 1990, 1994

Printed in Canada

City of Hull

Office of the Mayor

Aware of the significance of a movement such as the Planetary Association for Clean Energy, to which it subscribes without hesitation, the City of Hull readily recognizes the need for an approach directed toward the safeguarding, development and clean-up of the environment.

The present state of the planet demands that all its inhabitants take it in hand immediately.

Desertification, the thinning of the ozone layer, the destruction of the tropical forest and the pernicious effects of acid rain are all pressing problems that humanity must address before environmental deterioration reaches the point of no return.

Only the concerted effort of all those involved, and an all-out campaign to awaken the general public to this state of affairs, will enable us to mount an offensive and hope to reverse the situation we have created.

Closer ties among one and all, and the greatly multiplied exchanges that must urgently be undertaken, will undoubtedly pave the way for the emergence of solutions equal to the gravity of the questions facing modern society.

The City of Hull therefore reiterates its support for the Planetary Association for Clean Energy, considers its work indispensable, and strongly encourages it to pursue its humanitarian mission.

The Mayor,

Michel Légère

Maison du Citoyen
25 rue Laurier
Hull (Québec) J8X 4C8
(819) 595-7100

Introduction

In the mid-1970s we established a challenging and restrictive definition for "clean energy systems". These would be systems, it was postulated, that draw on a natural supply, that leave no residue, that are inexpensive and that are universally applicable. This list of requisite conditions was considered necessary in order to face the challenge of restoring the balance between human beings and Nature.

More than a decade later, working and theoretical models that appear to meet these criteria have in fact emerged. Some of them are presented in this book, often for the first time to the general public.

The mere existence of these technological opportunities presents a promising prospect that the current world-wide environmental and economic crisis might be turned around within a reasonable timeframe. When these technologies were the subject of the 1988 Third International New Energy Technology Symposium/Exhibition held in Hull, Quebec, conferees adopted a hopeful statement of Conclusions and Recommendations which read:

The prospect for new clean energy sources is extremely promising and what is now needed is open-minded assessment by the scientific and industrial communities and a willingness to fund practical R & D and applications based on the evidence presented at this Symposium.

We note that simultaneously with the Third International New Energy Technology Symposium/Exhibition, the World Conference on the Changing Atmosphere was being held in Toronto, organized by Environment Canada, with support from the United Nations Committee on the Greenhouse Effect, chaired by Dr. Kenneth Hare, who is chairman of Canada's Climate Program Planning Board.

The Toronto Conference addressed the profoundly dangerous problems of the deterioration of the Earth's environment as the "Greenhouse Effect" becomes recognized as a reality.

We note that the Toronto World Conference on the Changing Atmosphere recommended the abolition of the use of fossil fuels (the burning of which, by releasing carbon dioxide, increases the "Greenhouse Effect", raising atmospheric temperatures and contributing to droughts and possible large-scale disaster).

The assembly at the Third International New Energy Technology Symposium/Exhibition declared: "We believe that revolutionary technologies which permit a gradual transition from current polluting energy sources can be developed in time, based on what was presented and demonstrated at our conference."

Specifically, the Canadian Planetary Association for Clean Energy and the UK Advanced Energy Research Institute recommend international promotion and funding of the crucial development of the following Symposium highlights:

1) Revolutionary solar cell technology based on inexpensive plastic film (by Dr. Alvin Marks, inventor of Polaroid film), which could bring solar cell prices down to 5% of the current level, making them commercially feasible and widely available at last; the UK development of accurate, flexible mirrors for solar ray concentration; and the Marks Charged Aerosol Device, inexpensively retrofitted to industrial smokestacks and chimneys to clean up emissions which contribute to "Acid Rain";

2) UK breakthroughs in low-cost new methods for manufacturing high-output, efficient thermocouples to turn waste heat from utility and industrial plants into electricity;

3) The "MigmaCell", which may make feasible small-scale non-radioactive fusion, now 4/5 complete (by Dr. Bogdan Maglich of Princeton);

4) Recent developments in new energies from magnetic fields, the vacuum of space-time, and the emerging ether physics (presented by Dr. Harold Aspden, former head of IBM's European patent operations);

5) Demonstrated new uses of energy for novel modes of propulsion using internal inertial thrust with potential for land, sea, air, space utilization;

6) The potential for transmitting electricity to any part of the earth without wires, with negligible losses and capital costs (with construction of a pilot plant underway).

We therefore recommend the development of these new energy technologies as viable, potential solutions to the environmental crisis spotlighted by the World Conference on the Changing Atmosphere, suggesting urgent funding by private industry and by national and international organizations.

We also recommend the funding of the study of the impact of the potential new energy solutions on the economy of our world for the future. We believe that the outlines are now visible of the shape of a future with low-cost, clean, abundant energy and recommend that all appropriate governmental and private organizations hasten to the task of developing and implementing such systems, lest we further increase the damage already done.

In fact, these and other technologies, together with the related issues discussed in this book, provide enough plausible information on which to base an assessment of what a "clean energy" future might hold.

Reduced energy costs and the increased world-wide availability of the new, clean energy technology should have healthy effects on sustainable global economic development and on peace and security.

Annually, the world economy could save about US $400 billion (1988 dollars) through decreased capital costs, increased revenues and accelerated debt reduction arising from greater efficiencies in energy production, transmission and distribution. Further savings could be found in the use of better motors, in lesser dependency on such secondary products as tires, and in sales of lands and properties currently set aside for commercial energy generation, transmission and distribution. These particular economic advantages might not be recurring.

To better understand the implications of large-scale implementation of clean energy systems, consider the case of United States commercial electrical energy consumption. In 1987, the US electricity sector consumed 28 quadrillion BTUs ("quads") of primary energy, but only 8 quads were sold to consumers -- the remainder, 20 quads, or almost 30% of the entire US primary energy consumption, was used up in generating, transmitting and distributing the electricity. In fact, North American power companies lose between 2.5 and 3.5 times more energy than they deliver. These companies also face the overwhelming prospect of having to overhaul about half of their aging generating and transmission facilities during the 1990s without being able to find the necessary capital. It is in gaps such as these that clean energy systems can make major and visible inroads.
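As a rough check on these figures, the following sketch recomputes the loss ratio from the numbers quoted above; the total-primary-energy figure of about 76 quads is an assumption added here, not a value from the text:

    # Rough check of the 1987 US electricity figures quoted above (values in quads).
    primary_input = 28.0   # primary energy consumed by the electricity sector
    delivered = 8.0        # electricity actually sold to consumers
    losses = primary_input - delivered

    print(f"Energy lost per unit delivered: {losses / delivered:.1f}x")   # -> 2.5x

    total_primary = 76.0   # assumed total US primary energy consumption, 1987
    print(f"Losses as a share of all primary energy: {losses / total_primary:.0%}")  # -> 26%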

Other areas of progress might involve appreciable improvements in transportation and in industrial consumption.

Savings from the implementation of new, clean energy systems could not only offset the tremendous costs of world-wide environmental clean-up (estimated at $50 billion to $150 billion annually until the year 2000) but also improve the socio-economic welfare and physical well-being of individuals and nations. Some of the most remarkable improvements are foreseeable in the energy-poor Third World countries. Improvement of this kind could help ease numerous peace and security tensions, which are often the motive for much military arms spending - one of the greatest economic burdens on many nations.

There are a number of still little-understood spin-offs possible from clean energy technology. For example, the MigmaCell's technological principles could become the basis for the de-activation of toxic nuclear wastes and by-products, probably at minimal cost. Tesla technology might be used for the deliberate and remotely-controlled zonal repair of the ozone layer.

The technology related to the structuring of water might make clean water universally available, and might also lead to very advanced artificial intelligence units.

Indeed, the new, clean energy technology may be a very wise choice: inexpensive and highly competitive compared with the technologies currently applied and researched.

A. Michrowski

Acknowledgements

This effort has been made possible through individual and public donations. Individuals who have donated to this initiative (in alphabetical order) include: David Bird, Barbara B. Bronfman, P. Brown, David Geldart, Dr. Luisa Lunelli, Dr. Srecko Pregelj and Richard J. Reynolds III. Organizational support has been provided by the Hon. Dan Haley, Leonard Holihan and Prof. Jim Monroe.

Publicly funded support has been granted by the City of Hull, Quebec, the British Council and the Government of Canada.

The development of this text has been made possible by Louise Dagenais, Penny Geldart, Monique Michaud and Marion Van Goudover.

The scientific basis for tapping energy in the vacuum

Harold Aspden
Department of Electrical Engineering
University of Southampton
SOUTHAMPTON SO9 5NH
United Kingdom

The thought of extracting energy from a hidden but active field environment more plentiful than air or water is rejected outright by orthodox scientists. Yet there are sound scientific reasons for expecting that we should be able to extract "free energy" from that hidden field.

The zero-point energy field

In 1987, a Senior Research Fellow at the Institute for Advanced Studies at Austin, Texas circulated a paper that had just appeared in the Physical Review (1). It was entitled "Ground state of hydrogen as a zero-point state". It spoke of the energy of zero-point fluctuations, but nothing in the paper would excite the interest of a "free energy" enthusiast; the paper had to be acceptable to a scientific community hostile to such ideas, otherwise it might never have been published.

More revealing are the words in the special summary of the paper that its author, Dr. Harold E. Puthoff, also distributed. Quoting from the summary:

"One of the more bizarre predictions of modern quantum theory is that each cubic centimeter of space, including that of the most pristine vacuum of outer space, contains an enormous amount of untapped electromagnetic energy known as zero-point energy (it is the zero-point from which all other energies are measured). The amount of energy associated with this (usually unobserved) background is conservatively estimated to be of the order of nuclear energy densities or greater."

Dr. Puthoff goes on to explain how theorists have tended to question the enormity of the energy density involved, but how, over the years, the discovery of the Casimir force and other effects has given quantitative verification. The Casimir force is a force found to exist between closely-spaced metal plates; it results from unbalanced pressures in the zero-point field due to the presence of the plates. Dr. Puthoff's own paper announced another indication of the physical reality of this "ubiquitous" energy field, by showing how it accounts for the stability of matter at the atomic level. In his summary, he commented on this with the concluding words:
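For scale, the standard quantum-electrodynamic expression for the Casimir pressure between ideal parallel plates, P = pi^2*hbar*c/(240*d^4), can be evaluated directly; the sketch below and its one-micrometre plate separation are added here for illustration and are not part of Puthoff's summary:

    import math

    hbar = 1.054571817e-34  # reduced Planck constant, J*s
    c = 2.99792458e8        # speed of light, m/s

    def casimir_pressure(d):
        """Attractive Casimir pressure (Pa) between ideal plates separated by d metres."""
        return math.pi ** 2 * hbar * c / (240 * d ** 4)

    print(f"{casimir_pressure(1e-6):.2e} Pa")  # ~1.30e-03 Pa at a 1 micrometre gap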

"The significance of this observation is the understanding that the very stability of matter itself depends upon, and verifies the presence of, an underlying sea of electromagnetic energy of almost inconceivable magnitude, a vast reservoir of random energy that is universally present throughout space."

Now, Dr. Puthoff is not alone in researching this subject. There are many university scientists throughout the world who are working on the underlying stochastic electrodynamics. Sadly, however, many of these researchers are led to conclusions which are not to the liking of others in the scientific community, particularly those who treasure the sterile mathematical abstractions of Einstein's model of space.

One particular conclusion, which has been the basis of my own interest for more than 30 years, is the realization that when an electron responds to an electric field it moves exactly in such a way that it conserves the energy involved in this process. The isolated electron or any such fundamental charge does not, in my opinion, radiate energy. This is consistent with quantum theory and the result deduced by Dr. Puthoff from the atomic electron interactions with the zero-point energy background, but I see this in a different context. It is the fundamental basis of the inertial property of that electron. Its mass is nothing other than a measure of its response when affected by an energy field. By conserving energy, that electron moves as if it has just the mass or inertia that assures field fluctuations consistent with overall energy conservation.

Obviously, this argument leads to a formula relating both the mass and the energy of that electron with the propagation speed of the fluctuation involved. It leads to:

E = mc²

but without requiring Einstein's theory. Though I discovered this over 30 years ago, it was not until 1976 that I found a scientific journal willing to publish such a claim (2). It was heresy to suggest that the famous Einstein formula might have a simple physical basis unconnected with the doctrines preached by Einstein. Such is the faith in Einstein's theory that the vast majority of journal referees will only allow publications to trespass into relativistic territory when the scientific ideas advanced glorify Einstein's methods. There must be many scientists who have had their ideas totally suppressed by the unfair attitudes of those who seek to preserve the status quo. Yet any aspect of the scientific status quo is only worth preserving if it can survive criticism. Sadly, that criticism is suppressed, and the message to the public is therefore clear: Einstein's theory has weaknesses which those who speak for it cannot see; yet they sense this, and their inbred instincts cause them to fear the consequences of embroilment to the point where they are ready to fight off any intruder.
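Whatever derivation one prefers, the formula itself is easily checked numerically; the following sketch evaluates it for the electron using standard physical constants:

    m_e = 9.1093837015e-31   # electron rest mass, kg
    c = 2.99792458e8         # speed of light, m/s
    eV = 1.602176634e-19     # joules per electronvolt

    E = m_e * c ** 2         # rest energy, E = m c^2
    print(f"E = {E:.4e} J = {E / eV / 1e6:.3f} MeV")  # -> E = 8.1871e-14 J = 0.511 MeV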

I conclude this introductory comment by stressing that the vacuum is full of energy ready to be tapped, if only we can find the right techniques, and I express the view that those who control research funding are ensnared by the adverse doctrines of Einstein's theory. How can one think otherwise when, in 1988, a leading scientist, by then retired from lifelong service at the UK National Physical Laboratory, where he measured those quantities that are so germane to Einstein's theory, time and the speed of light, published an article in which he called Einstein's theory a "swindle" (4)? In an earlier article, again not written until after he had retired, the same author, Dr. Louis Essen, the pioneer of the caesium atomic clock, had declared that the hope for the future lay in supporting those who reject Einstein's theory and search for that hidden energy that pervades the vacuum state (3).

Is there an ether?

The orthodox scientist will tell you that there is no ether because of an experiment performed by Michelson and Morley in 1887. A more informed scientist might add that a further experiment performed by Trouton and Noble in 1903 (5) is equally relevant in proving that there is no ether. To these scientists one can now say: "Get with it, there is something wrong with the theory of both of these experiments and a new experiment has now taken over".

An experiment has been performed in the State of Washington by E. W. Silvertooth, another retiree, but in this case from a senior position in which he exercised his expert knowledge of optical systems. The experiment was first reported in 1986 in the journal Nature (6) and then more fully elsewhere (7) in 1987. Silvertooth avoids the retro-reflections of the Michelson-Morley experiment and uses a transparent photo-detector, in linear translational motion relative to the optical source, to scan along a beam set up by interfering rays passing through one another from opposite directions. He finds that the single-laser apparatus he now uses can sense, in an enclosed laboratory, our motion through space in the direction of the constellation Leo at nearly 400 km/s. This is what both Michelson and Morley, and Trouton and Noble, were trying to detect in their experiments.

The Michelson-Morley experiment did not work because the retro-reflection set up standing waves that were locked onto the mirror surfaces, so their energy was dragged along by the apparatus. That energy affected the speed of light in those standing wave components along the beam. Michelson and Morley did not know about those standing wave properties, because standing waves as such were only discovered by Wiener in 1890, three years after their experiment. The Trouton-Noble experiment did not work because it assumed that the Lorentz force law applies to non-circuital charge motion, whereas that law is empirically based on effects that demand circuital charge motion, and the experiment does not involve circuital charge motion.

These are, however, specialist points that are best discussed elsewhere. From our point of view here it suffices to declare that Silvertooth has detected our motion through the ether and so we can now begin to think of the properties of a real ether in which those zero-point energy fluctuations are seated.

1987 was the centennial year of the Michelson-Morley experiment. It gave me occasion to mention the Silvertooth experiment in a letter published in Physics Today (8). It is claimed that Silvertooth's experiment has been repeated by Marinov (9) and that the positive result is confirmed. A further repeat of the experiment is also being considered at this time, on the initiative of Professor R. Monti in Bologna, Italy.

It should not therefore be too long before we see the consequences of the detection of the ether rumbling through the orthodoxy of modern science. Undoubtedly there will be efforts to patch up the relativistic doctrine in some way, but by then the new research opportunities will have outpaced mere theory and we will, I trust, have broken through into the new world that recognizes the hidden energy field.

The gyromagnetic anomalies

Magnetism is a phenomenon that features prominently in devices aimed at extracting free energy from the vacuum medium. It has long been realized that there are certain anomalies in magnetism evident in gyromagnetic reactions.

The orthodox scientist will tell you that an electron is a point charge that spins to set up a quantum of spin magnetic moment. How a point can spin is something that stretches the imagination. It takes one into that fuzzy and abstract world that exists only in the mind of the mathematician. How a point charge can spin and yet set up a related magnetic effect that somehow depends upon motion relative to the observer is even more taxing on the imagination. For these reasons I say to the orthodox scientist: "Think again and figure out something better".

The whole problem centres on the factor of two. When the magnetism in a steel rod is suddenly reversed, the rod has a tendency to rotate about its axis. If the rod is mounted so as to rotate freely, the reversal of magnetism produces a measurable angular motion. This corresponds to exactly half that expected if we suppose that magnetism comes from the orbital motion of electrons. The factor of half, or two if the formulae are inverted, is anomalous and has led to a fanciful relativity-based mathematical interpretation for which the Nobel prizewinner Paul Dirac is famous.
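For orientation, the conventional bookkeeping behind this factor can be stated compactly (standard textbook relations, added here; they do not appear in the original text). For an electron in orbital motion, the ratio of magnetic moment to angular momentum is

$$\frac{\mu}{L} = \frac{e}{2m_e},$$

whereas gyromagnetic measurements of the Einstein-de Haas type on ferromagnets give very nearly $e/m_e$, twice the orbital value; orthodox theory attributes the doubling to electron spin with $g = 2$.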

Let us think again on this matter. When we consider how energy travels between two interacting electric charges that are subject to an instantaneously-acting electric field, we realize that energy has to travel a certain distance commensurate with the distance between the charges in order to feed the kinetic energy of those charges. This takes a little time and it is convenient to imagine that the energy does travel at the speed of light. In this way, it can be shown that the energy in transit accounts for the magnetic interaction of two moving charges.

The key to understanding this process is to regard that zero-point energy in the ether as a local source of energy. Thus, when the instantaneous action between the two charges says that they have altered their relative positions, owing to their motion, their kinetic energies adjust by energy exchange involving the local zero-point energy background. Such energy is "radiation" energy and it communicates momentum, and so force, in relation to the energy involved divided by the speed of light. The total energy in transit is set by the rate of energy transfer and by the transit time, which is the separation distance divided by the speed of light. Accordingly, the overall force effect that accounts for the electrodynamic action is the electrostatic Coulomb force divided by the speed of light squared and multiplied by the square of the separation speed.
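Stated compactly, this argument reproduces the familiar ordering of the two interactions (a standard result, restated here for clarity):

$$F_{\text{magnetic}} \sim F_{\text{Coulomb}}\,\frac{v^2}{c^2},$$

so that for charges moving at, say, $v = 10^6$ m/s the magnetic interaction is weaker than the electrostatic one by a factor of about $v^2/c^2 \approx 1.1 \times 10^{-5}$.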

The point of this argument is to show how vital that background energy in the ether is to the physical justification for the electrodynamic action. Now, this ignores any reaction in the ether apart from its role in providing an energetic environment and in determining the speed of light, but we see the ether as containing electric charge itself in some kind of neutral composition. Therefore, this electrodynamic action between charges can never occur in isolation without inducing a secondary reaction, either in the ether or in any enveloping substance, such as the free electrons in the steel rod just mentioned. Detailed analysis then shows that the optimum reaction will halve any magnetic effect produced by the primary action. This brings us to a position where the ether has to be seen as having special properties which enhance the primary field effects just enough to keep the energy balance. The magnetic energy is stored in the reacting system and the resultant fields conform with the unity state that we associate with a true vacuum.

In spite of this neutralizing action, we are still left with an observable gyromagnetic effect that has double the magnetic strength for the same mechanical action. This is exactly what is found if we assume that magnetism in a steel rod is due to orbital electron motion.

In summary, we can say that Dirac's spin interpretation cannot be used to reject ideas about the orbital nature of ferromagnetism, and that there is an essential reaction energy density trapped in any magnetic field. The question then is whether this energy can be extracted.

Well, of course, it can. Imagine that we supply current to magnetize a solenoid. The energy fed into the inductive field is stored somewhere, though our textbooks do not go into the details. In fact it is stored in that reacting charge motion that sets up the half-cancelling back-field. Thus, when the solenoid is switched off, it is this reacting charge that feeds energy back to the solenoid circuit. Now ask what happens if that reaction is in a substance and involves electrons in free motion. The energy of that motion is part of their thermal energy. So if we switch off a solenoid containing a core of such material we will actually extract heat energy from that material.

So this tells us that if we energize a solenoid having a copper core, say, the energy supplied to the coil will all be used to generate heat. Suppose then that we extract that heat and use it efficiently for some useful purpose. Suppose further that the solenoid itself is superconducting, meaning that there is no continuous energy loss owing to ohmic resistance. Let the core cool to its initial ambient temperature as this heat is extracted. Then let us switch off that D.C. current, de-energizing the solenoid slowly so as to minimize eddy-current losses. This will return all the inductive energy by extracting it from the thermal activity of the electrons in that copper core.
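A minimal numerical sketch of the energy bookkeeping in this thought experiment follows; the inductance, current and core mass are illustrative assumptions, not values from the text:

    L_coil = 10.0      # solenoid inductance, henries (assumed)
    I = 100.0          # steady magnetizing current, amperes (assumed)
    m_core = 1.0       # mass of the copper core, kg (assumed)
    c_cu = 385.0       # specific heat of copper, J/(kg*K)

    E_inductive = 0.5 * L_coil * I ** 2   # energy stored on energizing, joules
    print(f"Stored inductive energy: {E_inductive:.0f} J")   # -> 50000 J

    # If switch-off really did recover this energy from the thermal motion of
    # the core's electrons, the corresponding cooling of the core would be:
    dT = E_inductive / (m_core * c_cu)
    print(f"Equivalent temperature drop of the core: {dT:.0f} K")   # -> ~130 K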

The energy supplied in the first place was not wasted. It was all available as heat output. Yet much of that energy is returned as electricity when the solenoid is switched off and this energy comes from a cooling of that copper core. We have "free energy" in the sense that ambient heat energy has been tapped to produce a useful energy output.

The orthodox scientist and also the orthodox engineer will suggest that this is not really practical but that is not our immediate concern since we seek here to make a point of principle. "Free energy" can be accessed by tapping the thermal background. Now let us go just a little further with this argument and say that the magnetic field produced by the solenoid is so powerful that when the energy is extracted from the reacting electrons in the copper core the energy drawn out exceeds all the thermal energy of that core. What would happen? Would the core cool to absolute zero, that is minus 273 degrees Centigrade? Perhaps, but in addition could we not expect to find that this process might well draw on that zero-point energy in the background vacuum field?

I can only theorize about the answers to these questions but I submit that one day, if we do get strong room temperature superconductors that can withstand very high magnetic fields, we will get to those answers in a way that could lead to practical devices. In the meantime, let us here remember that orthodox scientists do use the technique of adiabatic demagnetization to achieve supercooling to very low temperatures. What is proposed should not, therefore, be seen as pseudo-science. Furthermore, it is known from experiment that the application of short duration intense magnetic fields (several hundred kilogauss) to copper cores can reveal that they have momentarily experienced a near-molten state from which they were spontaneously cooled when the magnetization pulse terminated. The technological challenge is to get that heat energy out whilst the pulse is on and before the cooling phase begins as the pulse subsides.

I could give many references that bear on what has just been said, but will just mention one, namely my own discussion of this subject in a book dated 1969 (10). The book aroused no particular interest, possibly because its title was "Physics without Einstein", but in a world intent on discovering new sources of energy it is surprising that what I said on this subject has not been investigated or, at least, contradicted by now. I mention also that my Ph.D., which was based on experimental research at Cambridge in England, was for work on the anomalous magnetic reaction effects induced in ferromagnetic substances. That research did not extend to the ideas just presented, but it gave me a relevant scientific background and so a basic confidence in what I was later to propose.

Conductivity anomalies

The scientific world has been shaken by the recent discovery of "warm" superconductivity. Superconductivity at temperatures in excess of the liquid nitrogen temperature is a phenomenon that defies orthodox scientific expectation. The question we should be asking is whether we are looking at zero electrical resistance or negative electrical resistance. The latter would imply a source of "free energy", whereas the former is merely a state of no ohmic loss.

It is conceivable to have particles such as electrons or even protons traveling through conductors and not causing thermal oscillations that imply heat loss. Therefore, logically we must be looking at a system in which there is a transfer of the heat energy associated with the random motion of the atoms that make up the conductor to an ordered motion of the charges carrying the current. Superconductivity sets in when the break-even point is reached and more energy is fed from the thermal condition to the current condition than is dissipated as ohmic loss and so fed back into heat.

This is how I, as a non-expert on matters relating to superconductivity, must view the whole process. It follows that the question of interest to me is what happens if those "warm" superconductors are operated at temperatures much lower than the threshold level. My hope is that the circuit might develop an EMF of its own and so supply "free energy" by feeding a current which can be used in an ohmic load, energy which is sourced in the heat of that superconductive element. This imaginary device would need to be cooled down, and to have a current fed through it, to prime it for operation; but, once primed, it could continue to feed current to a load and at the same time cool itself to remain superconductive. Indeed, to keep it operating and feeding current one would need to allow some ambient heat energy to reach the superconductive element, but only at the rate needed to sustain the electrical output.

The orthodox scientist would say that this is a pipe dream. Certainly it cannot work according to the second law of thermodynamics, though it does satisfy the first law. Be this as it may, in advising on entrepreneurial activity into new and safe sources of energy, I would not recommend turning away anyone who claimed that he or she could demonstrate a source of electrical energy from a primed conductor system fed only on ambient heat.

One may also wonder whether what scientists regard as superconductivity is really an essential preliminary to this prospect of "free energy" from a conductor device. We know of thermoelectric phenomena in conductors comprising junctions between dissimilar metals. In a sense these different junctions exhibit positive and negative resistance. A positive resistance produces heat in absorbing electrical power and a negative resistance cools down in supplying such electrical power. By appropriate selection of metals and operating temperatures of the junctions, one can wonder whether we may be able to fabricate a circuit in which the negative resistance junctions are more effective than the positive resistance junctions.

The result could be a solid-state device which can feed a steady supply of electricity by drawing on heat at the ambient air temperature.

Another version of the same pipe dream? There are certain scientific factors that need looking into from an experimental point of view, but there is nevertheless a sufficient scientific basis in such a "free energy" proposition to warrant the investment involved. It is probable that the Second Law of Thermodynamics will not yield ground on the "free energy" issue, but we must at least try to penetrate that barrier. At the very least, I expect that we will eventually discover thermoelectric techniques by which to derive electrical power efficiently from low temperature differentials, and so gain our energy at the expense of the atmospheric conditions.
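For comparison, conventional thermoelectric theory already quantifies what a small temperature differential can yield; the sketch below uses the standard figure-of-merit efficiency formula, with an illustrative ZT of 1 and a 10 K differential around room temperature (both assumptions added here):

    import math

    def te_efficiency(T_hot, T_cold, ZT):
        """Standard thermoelectric-generator efficiency for a figure of merit ZT."""
        carnot = 1 - T_cold / T_hot
        root = math.sqrt(1 + ZT)
        return carnot * (root - 1) / (root + T_cold / T_hot)

    print(f"{te_efficiency(303.0, 293.0, 1.0):.2%}")  # ~0.57% for a 10 K differential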

Other developments

Space does not permit discussion of the possibility of deriving "free energy" from special kinds of electric motor. Nor does it allow discussion as part of this paper of the current interest in gyroscopic propulsion, which brings with it the prospect of levitation and so energy saving in a new means of transportation.

It is, however, appropriate to mention that there is a scientific basis for suspecting that energy can be transferred to and from that zero-point energy in the vacuum field by techniques involving electric motor generators. In evaluating any claims of "free energy" machines of this kind, one should be prepared to give more credibility to the inventor who says his machine also has the surprising property of being able to lose energy. By "free energy" we think of a machine that is more than 100 per cent efficient, but we should also have in mind the machine that is less than 0 per cent efficient. A plausible machine would be one that is reversible, working either way, that is, one able to control the energy transfer from and to that zero-point background.

The secret of such a device will surely be based upon the role of that zero-point vacuum state in determining the Planck quantum of action. This is what governs the quanta of energy radiation across empty space, the so-called photons. It is also what sets the magnetic polarization on a per atom basis of the ferromagnetic substances used in our electrical machines. However, what "free energy" inventors must realize is that the ferromagnet is intrinsically always magnetically saturated. All we do in magnetizing it is to re-orientate the microscopic domains within the substance. This hardly affects the magnetic energy density in these domains, at least for the level of polarizing fields used in most practical machines. Consequently, there seems no basis for extracting energy from that zero-point field.

This having been said, imagine that we do force a much higher level of magnetization so that those quantized orbital electrons do draw on the zero-point energy to help to power the forces acting between the poles of an air gap. Having done this, imagine what happens if we tap that energy in the air gap, using it to drive a motor, whilst the magnetizing current is switched on. Surely those orbital electrons in the ferromagnet will make their own contribution to the energy in the air gap, just as the supplied magnetizing current will feed in some energy. Then, with the poles having moved close together, let us switch off the current. I suspect that energy used as output will then have transferred from those two sources, the magnetizing circuit and the zero-point vacuum field, but only the part needed to sustain the inductive reaction effects in the magnetic core will be recovered.

In summary, I subscribe to the view that there could be ways of designing electrical machines which can transfer energy either way between the zero-point vacuum field and our material environment. "Free energy" in this sense is a distinct possibility and thinkers in this field should not be deterred by the opinions of orthodox scientists who have heard of Einstein but have not heard of that vast reservoir of zero-point energy.

References

1. Puthoff, H. E., Physical Review D, 35, 3266 (1987).
2. Aspden, H., Int. Jour. Theor. Phys., 15, 631 (1976).
3. Essen, L., Wireless World, 84, 44 (October 1978).
4. Essen, L., Electronics and Wireless World, 94, 126 (February 1988).
5. Trouton, F. T. and Noble, H. R., Proceedings of the Royal Society, 72, 132 (1903).
6. Silvertooth, E. W., Nature, 322, 590 (1986).
7. Silvertooth, E. W., Speculations in Science and Technology, 10, 3 (1987).
8. Aspden, H., Physics Today, 41, 132 (March 1988).
9. Marinov, S., in: "Progress in Space-Time Physics 1987", J. P. Wesley, Editor, Benjamin Wesley, Federal Republic of Germany, pp. 16-31 (1987).
10. Aspden, H., "Physics without Einstein", Sabberton, Southampton, pp. 27-37 (1969).

Strategic patenting for the inventor

Harold Aspden
Department of Electrical Engineering,
University of Southampton,
SOUTHAMPTON SO9 5NH
United Kingdom

Free inventors and those who invest in their inventions often have very little appreciation of the value and limitations of patent coverage. This paper presents a personal perspective by someone having extensive experience in a corporate patent environment, but who has also developed a strong scientific interest in electromagnetic phenomena and energy technology. The views expressed are a statement of the strategy the author would pursue if he were to invent a new energy device and had limited backing. These views do not constitute professional advice and should not be regarded as such.

Introducing an Invention

Imagine that you made an invention. You have invented a new machine that is able to deliver electrical power from the air temperature fluctuations in the local environment. Your machine is "clean" in every respect. It is solid state, involves no harmful chemicals, meets any safety standard and involves only modest production facilities. It does, however, involve a new physical process which, once publicized, would make it relatively easy for others to devise alternative ways of performing the same function.

Your position is that you are a "free inventor", which means, for example, that you are not tied to a corporate entity so far as this particular invention is concerned and can invent on your own behalf. You naturally want the world at large to enjoy the benefits that your invention offers, but you expect to receive public recognition for your achievement and you hope to become very rich in the process. You have reached Stage I by working out how such a machine can be built.

Now, at this stage, it is of vital importance to be sure that your machine, as conceived, will work in practice. Stage II is the stage at which you are standing in front of a working device. You may well feel sure that your "paper proposal" must work, but that is not the same as knowing it will work and it leaves you far short of being able to demonstrate that working machine to a would-be backer. So, what is your course of action, assuming you need financial backing to reach Stage II?

Very probably no financial backing will be forthcoming if all you can offer is a theoretical proposition, particularly if you have not taken some steps to secure a priority date for a patent and verify that what you propose is really new.

The Patent Application

How do you know that your "invention" has really not been thought of before? Well, you say, "if it had, then we would see it used and would all be enjoying a higher standard of living." This attitude invites the comment that it might well have been thought of before and it may even have been covered in a published patent specification but yet it has not found commercial use, simply because it is not commercially viable.

Still, you cannot let go of a good idea and you are obliged, indeed driven by internal forces, to proceed further. Whether you find a backer before or after you initiate the patent process, and before or after you reach Stage II by building a working machine, you should, in your own interests, do something to secure that basic patent protection.

This involves filing a patent application at a Patent Office and it is here that you will be seeking professional advice on the preparation of the patent specification. Then your problems begin to mount up. They are no longer just technical problems or financial problems; you have to describe a working machine even if it is only one you imagine and it has to be claimed in a way that gives legal definition to what you have invented. That means that you have to be able to explain to the patent attorney the underlying principles of machine operation and how it works. It is his task to write a general description in suitable style and structure the patent claims that will define your monopoly, should the patent be granted.

Now a word of caution. Any suggestion that you have found a really novel Earth-shaking energy conversion process will inevitably attract suspicion, if not scorn. Unless you have a very thorough scientific training in the subject involved, the chances are that what you see as an invention is ill-founded. The odds of success are such that, before proceeding even to arouse outsider interest in the project, common sense requires that you do some experimental tests to verify the foundations of your proposal. Without such evidence or an adequate scientific argument, you have no way of giving the patent attorney the information needed to proceed with a patent application. I will assume, however, that you have overcome that hurdle and can proceed with the patent process.

By filing an application for a patent you:

(a) create a legal priority date for your rights,
(b) put on public record a declaration of what you believe you have invented and
(c) initiate a process by which you can have an official assessment of the novelty of your invention.

What does that all really mean? Well, it assures that if your patent specification is published your contribution will, at least, be of historical record in public archives. It may not mean fame and fortune, but your contribution is published. It means also that if anyone else tries to patent the same invention at a later time, the fact that your "priority date" is earlier will be effective in limiting that rival action. Of more practical importance is the official patent search that will determine how novel your invention might be.

That search is very important to you and it is particularly important for you to know the result early enough to allow you to determine whether to extend your patent coverage to a multiplicity of countries. You need that information because it is a costly business and there is no point in spending a lot of money on patents if your invention lacks novelty, because at the end of the day your patent coverage could be worthless.

So that search data is important in giving confidence to backers who might be putting their money into your project.

To sum up, you have to be able to support an initial patent application to draw an official opinion on the novelty of your invention, in time for you to decide how to proceed in getting multi-country patent coverage.

The Patented Machine

It is all too easy to think that, if you are working on a prototype machine and you have filed a patent application on that machine, then you have patent cover. You must be alert to the possibility that the machine design you eventually settle upon and the patent claims you end up with may have moved in opposite directions, leaving you with no real protection at all. I suspect that there are too many instances where the patent expense has gone into getting the same basic patent coverage in a large number of countries, rather than into filing, in just a few countries, a series of improvement patents that keep up with the machine development.

The investor who puts his money into such an invention might think the machine that evolves with that development is covered, merely because there is a long list of national patent application numbers all founded on that same initial patent application. Yet it has to be expected that problems requiring more ingenuity will be encountered in that development. Therefore, it must be expected that more inventions will arise. It may even be that the official patent search brings to light earlier proposals which stand in the way of a patent grant, but which nevertheless reveal ideas that stimulate improvement of the machine in a way that is itself inventive. So, almost inevitably, in the electrotechnical field at least, one should be looking for a way of securing patent coverage involving several patents based on several inventive features in the same machine.

If, therefore, you are inclined to think along the lines that a particular patent covers a particular machine, you may easily misunderstand what the patent system is all about. A patent gives you no rights to make any machine, as such. It gives you, instead, the right to stop others from making machines that are copies of your machine or machines of their own design, if those machines happen to include an invention for which your patent was granted. That invention is defined by the legal wording of the claims of your patent and not by the technical description in the specification. If those claims as granted turn out to be very limited in scope, the essential characteristic that you think makes your machine work may not be protected. A single machine can incorporate many inventions and so infringe many patents. So, my point is that, if you are to spend a given number of dollars on patent costs, it can be better to obtain five patents on improvements in, say, three countries than to settle for fifteen patents on one supposedly basic feature, by spreading the cover to fifteen countries.

The Strategy for Patent Coverage

A simple strategy is to solicit patent coverage in one country, namely the US. This can be very rewarding from a licensing point of view if the invention is a good one, but there is always the risk that the patent may not be granted and, if it is granted, that it may be contested. A potential licensee during the patent application stage, knowing that you have "all your eggs in one basket" in your US application, may be inclined to wait to see what emerges and whether you are granted a patent at all. By having one or two parallel patent applications in other countries, where different examiners judge your invention and patents can be granted earlier, that potential licensee will be more likely to come to terms at an earlier phase. Furthermore, there is less likelihood that the validity of your patent will be challenged, whether by an infringer or a potential licensee, if it is known that you have several patent applications covering different inventions but all on a related theme of interest. The odds of winning such contests, the time delays involved and the ability of the respective parties to sustain legal expense over a protracted period are all factors that come to bear in these matters. It is generally better to sit as a patentee holding several patents and patent applications on different but relevant inventions than to have extensive world patent coverage on a single invention. I still qualify this by reminding you that I am speaking of electrotechnological inventions. In pharmaceuticals, where a single drug patent can cover the composition of one product, patent strategy considerations are very different. We are here concerned with the invention seen as a breakthrough in the energy field, and especially one that does not emerge from research in the laboratories of a major corporation.

With a limited country patent coverage goes the risk that competitors will be able to manufacture in many countries without worrying about infringement of your patents. You may see this as a lost opportunity. So, let us suppose that you have not sought coverage in Japan or Germany. Both are large markets having extensive manufacturing bases and export capacity and offer great potential for your invention. Patent filing in these countries could be particularly advantageous if you have special reason for thinking that a Japanese or German company is interested in your ideas. However, imagine that you have sought protection only in the US and UK, for example. Your patent specification will be published by the British Patent Office 18 months after your initial filing date. This is a public disclosure of the description which accompanies your claim for the grant of a patent. The ultimate grant process takes appreciably longer, but that UK Patent Specification will give a measure of your invention by showing the result of the Patent Office search. This can help in interesting licensees. Now, imagine that you can attract some news media publicity for your invention. It comes to the attention of manufacturers looking for new products. A company in Japan or Germany searches the official patent records and sees that the field is clear for local manufacture and sale at least. They decide to build a machine incorporating your invention.

Remember here that it is often not the President or a Board Director of that company who acts in response to your news coverage. If it were, that would be more likely to result in your being contacted with a license opportunity in mind. The chances are that one of the technical employees of that company will read about your device in some scientific journal. That may merely seed a technical idea in that person's mind. There will be no special interest in glorifying your work and acclaiming what you have done as a great discovery. Human nature amongst the technically minded can be such that they like to exercise their own creative brain power, and that can lead to second-hand versions of machines developing from your basic theme. They have, in effect, jumped on your idea and, in the light of some special skill they have, or some special knowledge about the technical trends, facilities, etc. within their own company, the scene is set for something similar to your invention to emerge under the house name of that organization.

From your viewpoint it matters not how the seeds you have sown bear their fruit, just so long as something happens. The company, whether Japanese or German, French, Swiss, etc., has no license from you and you have no patent in those countries. Their machine is a commercial success. Your efforts, even to attract a licensee in the UK or US, have failed, and your own attempts to build a prototype machine have no commercial pay-off. You have the satisfaction that at least that UK patent specification shows that you are the true and first inventor, and you may have a few newspaper clippings to show to your friends. But have you really lost out? Ask whether it is likely that a Japanese or German company, having succeeded in exploiting what is truly your invention, can viably market such a product without wanting to export it to the US and Europe. If they do, then they need a license from you to sell in the US or UK. No such company can be happy about marketing a product in part of Europe without selling it in the UK. International companies cannot abide such distorted market situations when all that blocks them is a mere need for a patent license. If they do take extreme steps to avoid selling their product in these two countries, you still have a course of action. You can point to the success of their product elsewhere to justify the commercial viability of your invention. This will surely attract strong licensee interest from companies who compete with those Japanese or German interests. There will be interest in obtaining an exclusive license from you. These companies can contemplate worldwide marketing of your invention if they have a license from you for the US and UK. Furthermore, should the Japanese or German rivals have secured some specific patent coverage on what they are selling, there is always the potential for a cross-license arrangement favorable to your licensees, thanks to those key patents that you hold in the US and UK.

Had you spent a fortune obtaining patents in numerous countries you might not be much better off at the end of the day, given that your invention succeeds. However, should the invention fail, then you will have saved a very considerable outlay and your backers may still be around to invest in your next speculative invention.

As a "free inventor" in the energy field, which particularly includes an inventor in the "free energy" field, I would therefore save patent expense by very restricted country patent filing and put my money into patenting the succession of developments on the basic theme. Now, I have just mentioned "free energy", that is energy that can come in abundance if we know where to look for it and is free in the sense that it costs very little to transport it or convert it into useful form. Hydroelectric power is "free energy" but there are substantial costs in building a major dam and the available sites for such facilities are limited. However, it would be an ideal source of relatively cheap energy were it more accessible and plentiful universally.

So my comments now address the special field that many would classify as "perpetual motion". It is generally known that Patent Offices turn their backs on applications which they see as in this category.

The Perpetual Motion Syndrome

If you go to a Patent Office and offer a patent application claiming perpetual motion, you will get the feeling that the official rejection is telling you to get on your machine, set it going in a direction that takes you well away from the Patent Office and keep going. The Patent Office is staffed by reasonable people, but they are protected from wasting their time on crazy ideas, thanks to the rules and regulations prescribed. So my task now is to tell you how that great discovery of yours which does smack of "perpetual motion" might be navigated through Patent Office channels to secure a valid grant.

Here, again, I am saying what I would do myself, in the light of my patent knowledge, but I confess that, so far, I have not had the opportunity to practice what I am now about to preach. That great new "free energy" invention is something I have to look forward to having under my control. However, I have heard of others who believe they have made such inventions and who have tried or are trying to get patents and are running into difficulties.

There is a whole field of activity that is excluded from patentability by law and the way that law is interpreted. The wording of the law differs from country to country but the effects are much the same everywhere.

To be patentable, an invention has to apply to something having technical qualities in an engineering or scientific sense. It must be of use industrially, and this excludes accountancy methods, methods of calculation, mere theoretical ideas and scientific principles, as such. What we are concerned with here is something that is rejected on grounds which were expressed in earlier British law by words referring to the patent application as: "frivolous because it claims as an invention anything obviously contrary to well established natural laws".

The operative law today is consistent with that governing the grant of European Patents by the European Patent Office in Munich, Germany. The guidelines for examination actually specify that a perpetual motion machine should be rejected for being alleged "to operate in a manner clearly contrary to well-established physical laws."

Now, here I want to point out something very important. The guidelines go on to say: "Objection could arise only in so far as the claim specifies the intended function or purpose of the invention, but if, say, a perpetual motion machine is claimed merely as an article having a particular specified construction then objection should not be made."

What this means is that your patent application must not claim perpetual motion, but if the claims are founded on a legitimate technical device which can function without breaching the bounds of well-established physical law, then even if those claims could include a perpetual motion device as well, you may still secure a grant.

Let me put this in a different way and in a slightly different technical context. Suppose your machine, in your opinion, defies one of Newton's laws of motion and offers the potential of easy interplanetary space travel. You would be foolish to claim a spacecraft based on a description of a machine design which obviously does not operate in accordance with accepted Newtonian law. In this extreme situation you could, however, claim a machine the essential part of which has exactly that construction, provided your patent puts the emphasis on the utility of that machine as a device for testing Newton's laws. With an appropriate wording of the claim, the use of the same machine for transportation purposes could be covered anyway. The grant of the patent is then justified, unless what you write is truly frivolous. Obviously, if you can actually demonstrate the working machine to the Patent Office examiners, it is clear that you have passed some boundary beyond which those Newtonian laws do not reach. Then, I feel sure, any such machine that obviously has practical utility will become patentable; the fields in which the accepted physical laws are well established have merely been restricted, and so the law is not breached.

On the more direct subject of the "free energy" machine, there are two laws of thermodynamics which will usually apply. The first law requires that energy should be conserved in any physical process, and I suggest to you that a "free energy" machine, which ostensibly does not comply with accepted physical laws, must draw on energy from some source. If that source is known and catered for in your patent description, then so much the better. It will, in my opinion, be either the thermal energy of the system or the zero-point energy that pervades the vacuum field. For reasons of my own, I suspect that the surplus energy in the zero-point field can only materialize as matter, e.g., creating protons in a stable situation or leptons (such as electrons or mu mesons) in a transient fluctuating situation.

So your "free energy" source is likely to be energy drawn from the thermal state of the machine or its environment. There are magneto-caloric effects operative in this way in certain magnetic apparatus. Cooling by adiabatic demagnetization is well established in physics.

The next hurdle is the second law of thermodynamics, which is generally the one that most critics would say was standing in the way. No machine should be able to run on heat energy unless that energy is degraded from a higher temperature to a lower temperature. This is a law waiting to be broken, in my opinion, unless that machine is a pure heat engine. From a formal point of view it should only be seen as a law that is well established for heat engines, as such, but when one comes to magnetic machines, the only thermodynamic law that seems of any relevance is that first law relating to energy conservation. So my perspective on this subject is that "free energy" sourced in the thermal background is a worthy subject of research.

A perpetual motion machine that runs by cooling down and, in doing useful work, regenerates heat sounds absurd. If you sit back and think about it, it makes no sense at all. Why should physical phenomena combine to do nothing but spin wheels forever, just because somebody has put together an electro-mechanical contraption of some kind? It is truly absurd. However, just remember that we are sitting on an Earth that spins and that has a magnetic field. We look at a sun that spins and feeds heat energy into space. Many physicists believe that on a cosmic time scale of tens of billions of years there is a slow resonance effect by which the processes that power the stars are regenerated as the universe expands and contracts ad infinitum. We ascribe to God the power of setting up that kind of perpetual motion, but still try to justify as much as we can by pure physics. God is not a factor in the governing equations of physics. Why then should we be so sure that a small "perpetual motion" machine can never be built by human hand?

I could, of course, go on to talk about the more general features of the patent system. There are tactics in licensing that even apply to a single national patent that arises once a prototype device can be shown to work. Suffice it to say that the patent system, unquestionably, can help the free inventor striving to make a break-through in the energy field, but it can eat up financial resources unless you are careful.

Maxwell's lost unified field theory of electromagnetics and gravitation

T. E. Bearden
A.D.A.S.
P.O. Box 1472
HUNTSVILLE, Alabama 35807
United States of America

It is revealing to discuss the basic genesis of modern electromagnetic theory starting with Maxwell's original theory expressed in quaternions (1) (2) (3).

Most of us are familiar with four fundamental equations of theoretical electromagnetics, universally taught in Western universities and colleges as "Maxwell's Equations". (See Table 1) It may come as somewhat of a surprise, at least to the casual scientist or engineer, that these equations never appeared anywhere in Maxwell's fundamental "Treatise". (4) (5). In fact, they are entirely due to the interpretation of a single brilliant man, Oliver Heaviside (6) (7).

The Early Struggle in EM Theory

Maxwell wrote his first paper on electromagnetics in 1864 - during the time of the US Civil War, and the paper was published in 1865 (8). At that time, the modern form of vector analysis had not yet been completed (9). The prevailing mathematics available for use in deeper electrical physics was the quaternion theory founded by Hamilton in 1843 (10). Hamilton's quaternion theory was the first significant nonarithmetic mathematical system (11).

Maxwell's original expression of his theory was written in quaternions and quaternion-like mathematics. It attracted singularly little attention (12), and was considered only speculation until Heinrich Hertz discovered electromagnetic waves in 1885-1888 (13) (14).

Indeed, early on, mathematicians strongly attacked Maxwell for his - to them - revolutionary and startling concept that energy could exist in a massless wave and travel through space (15). While that concept is considered self-evident by today's scientist and engineer, it was considered incredible and startling when Maxwell proposed it.

TABLE 1:
DIFFERENTIAL VECTOR FORM OF HEAVISIDE/MAXWELL EQUATIONS

Maxwell's equations (Gaussian units):

$$\nabla \cdot \mathbf{E} = 4\pi\rho \qquad \nabla \cdot \mathbf{B} = 0$$

$$\nabla \times \mathbf{E} = -\frac{1}{c}\frac{\partial \mathbf{B}}{\partial t} \qquad \nabla \times \mathbf{B} = \frac{4\pi}{c}\mathbf{J} + \frac{1}{c}\frac{\partial \mathbf{E}}{\partial t}$$

Combining these equations with the Lorentz force equation and Newton's Second Law of Motion is thought to provide a complete description of the classical dynamics of interacting particles and electromagnetic fields.
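For reference, the Lorentz force equation referred to here is, in the same Gaussian units,

$$\mathbf{F} = q\left(\mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B}\right) = \frac{d\mathbf{p}}{dt}$$

which, together with the four field equations above, closes the classical system.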


a. Two vectors, not interlocked. Two vectors which are not interlocked by the medium (abstract vector space) simply pass through each other and do not interact. They cannot be said to have a common resultant, except fleetingly.

b. Two vectors, interlocked. Two vectors which are interlocked by the medium (abstract vector space) do not pass through each other but do interact. They can be said to have a common translation resultant externally. However, internally they must be said to produce a stress in the medium (abstract vector space).

Figure 1: A serious flaw exists in the application of abstract vector analysis to physical systems. Only when the local gravitational effects are fleeting or negligible does this flaw become negligible - and the vector theory become a valid representation (model).

Maxwell himself was an excellent mathematician of the time (16) (17), of ability far beyond that of most of the contemporary electrical theorists and experimenters (18) (19). Possibly as a result, both his early lectures and writings were difficult - or even nearly impossible - for his contemporaries to comprehend (20). It required the translation of Maxwell's theory (20) into the abbreviated, clearer and more readily understood vector mathematics of Oliver Heaviside (22), and the publication of clearer and much simpler expositions by Heaviside (23), before "Maxwell's theory" - or at least the Heaviside subset of it - began to capture the attention of leading university electrical scientists (24).

At the same time, the major expositor of the tough, obtuse and very difficult quaternion theory - Prof. Peter Guthrie Tait - was a stubborn, fiery, argumentative mathematician rather than a physicist. He had also delayed preparing his exposition of quaternion theory until a number of years had passed and his mentor, Hamilton, had had time to rework his own obtuse, difficult book (25) (26). While Tait delayed, scientists and engineers beset with practical problems in the real world of industry were frantically seeking a simplified theory that: (1) could be readily grasped and understood, and (2) could immediately be applied to solve their practical problems of equipment design and building. The only available and accessible simplified theory of electromagnetism that fitted their urgent needs was the clear, simplified and eminently practical work of Oliver Heaviside - who himself held no degree and was self-educated.

Accordingly, the die was cast. Working engineers and leading scientists focused upon Heaviside's vector interpretation of Maxwell's difficult quaternionic expressions. In Heaviside's version the engineering calculations were enormously eased, and electrical engineers could get solutions to their pressing problems and get on with their business of constructing electrical devices and electrical machinery. Except for a very few mathematical scientists who could handle the heavy labours of quaternions, Heaviside's electromagnetics rapidly became the de facto standard.

A major schism developed between the increasingly isolated few quaternionists and the steadily multiplying vectorists, slowly growing to white heat in the literature. A final duel to the death became inevitable.

The duel exploded before the turn of the century and a short, sharp debate occurred among about 30 or so scientists and in about 12 journals (27). The culmination was quick - complete victory by the vectorists. The quaternion EM theory was simply cast out, and the scientific community turned to Heaviside's limited vector subset of the Maxwell theory. The short "debate" only confirmed what had already become an accomplished fact: the Heaviside vector analysis translation of the EM subset of Maxwell's theory was already universally accepted and applied.

Vector Analysis Excised Electrogravitation

Ironically, in their great haste to seize upon Heaviside's simpler, clearer explanation of EM and get on with solving practical engineering problems, the nineteenth century scientists gave up something of far greater value: the unification of EM and gravity, and the ability to directly engineer gravitation itself.

Maxwell had actually written a unified field theory of electromagnetics and gravitation - not just the unification of electricity and magnetism as is commonly believed (28). Further, this can readily be shown by examining some significant - even startling - elementary differences between quaternion mathematics and the present vector mathematics (29) (30) (31) (32).

Let us briefly look at one of these key differences, to show that the present vector mathematics expression of Maxwell's theory is only a subset of his quaternion theory (33).

What Heaviside's theory specifically omitted was electrogravitation (EG) - the ability to transform electromagnetic force field energy into gravitational potential energy, and vice-versa. And that was omitted because of the assumptions of the vector theory concerning: (1) EM vector field combination, and (2) a zero-vector resultant of the interaction of multiple nonzero EM force vectors (34) (35).

Briefly, in Heaviside's vector mathematics, the abstract vector space in which the vectors exist has no stress nor consequent "curvature" in it. That is, the mathematical vector space does not change due to interactions between the vectors it "contains". This, of course, is not necessarily true in the "real space" of the physical world. Thus when such an abstract vector space and its concomitant coordinate system are taken to model physical space (physical reality), the model will be valid only when the physical space itself has no appreciable local curvature, and is in a state of total equilibrium with respect to its interactions with observable charged particles and masses.

So abstract vector theory implicitly assumes "no locked-in stress energy of the vector space itself". By assumption, the only interactions are between the objects (the vectors) placed in/on that space. Therefore, when two or more translation vectors sum or multiply locally to a zero-vector translation resultant, in such an "unstressable" vector space one is justified in: (1) replacing the system of summing/multiplying translation vector components with a zero-vector, and (2) discarding the previous translation vector components of the zero-vector system. That is, one may properly equate the translation zero-vector system with a zero-vector, since the presence or absence of the combined vectors can have no further action. Specifically, axiomatically they exert no stress on the abstract vector space. Under those assumptions, the system can be replaced by its equivalent zero-vector alone.

Note that, applied to electromagnetics, this modeling procedure eliminates any theoretical possibility of electrogravitation (EM stress curvature of local space-time) a priori.

Force Vectors are Translations of Stress

Conceptually, a force vector is actually a release of some implied stress in a local medium. The force is applied to create stress in a second local region immediately adjacent to the primary region of stress. Of course the stress being thus "translated" by the force vector may be either tensile or compressive in nature, but a priori the force vector always represents the translation of that stress from its tail-end toward its head.

Consequently, an EM force vector is a gradient (inflow or outflow) in a scalar EM potential (stress), where the referent potential stress may be either tensile or compressive. Since modern Heaviside-type vectors do not distinguish between, or even recognize, the two "head and tail" scalar EM potentials involved in a vector, one needs to refer to Whittaker (36) to get it right. Whittaker, a fine mathematician in his own right, showed that any vector field can be replaced by two scalar waves. Unfortunately, the electrogravitational implications of Whittaker's profound work were not recognized and followed up, and their connection to Maxwell's quaternionic EM theory was not noticed nor examined.

So the idea of a vector EM force represents a release of a primary "tail-associated" scalar potential, and a bleedoff of that potential. It represents an increase in its primary "head-associated" scalar potential, and a bleed into that potential. Each scalar potential itself represents trapped EM energy density in the local vacuum, in the form of two or more (even an infinite number of) internal (infolded) EM force vector components (which may be either fixed, dynamic, or a blend of the two). The trapped energy density, however, may be either positive or negative with respect to the local energy density of the standard ambient vacuum, since the potential may be either compressive or tensile.

Maxwell's Electrogravitation Was Lost

Today we know that all potentials are gravitational and curve space-time; it is well-known in general relativity that gravitational curvature is simply the set of many potentials. One therefore can see that a vector EM wave represents a progressive translation wave in the vacuum EM potential - in the local EM-induced curvature of space-time. This EM change in local curvature of space-time moves away at the speed of light, producing only the most fleeting or momentary changes in curvature of any localized space-time region.

We note here as an aside: Only in a standing EM wave and in phase waves of coupled EM waves is there any deviation from the "momentary and then lost" change in the local space-time curvature. In those cases, the "persistence" of the local change in potential is an inverse function of frequency; hence at extremely low frequencies EM potential change persistence is sufficient to produce some very small electrogravitational effects. The effect is still slight, however, since the normal concept of "standing EM wave" represents a standing force vector situation, which is actually a stabilized spatial bleedoff of the potential. If a standing scalar EM wave is produced - as in the two opposing pump waves in nonlinear-optics pumped phase conjugate mirrors - then the stress is not primarily spatial, but temporal. Spatial effects then occur by particle-coupling - either in the virtual particle flux of vacuum or from the nucleus of an atom to the electron shells, and out into the material lattice structure. In the case of a standing scalar wave, electrogravitational effects are highly magnified because of the conversion of the primary potential stress to the time component. This increases the EG effects obtainable by a factor of up to 9 x 10^16. For this situation, at ELF standing scalar EM wave frequencies, very appreciable electrogravitational effects can be locally obtained.
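The factor of 9 x 10^16 is presumably the square of the speed of light in SI units:

$$c^2 = (3 \times 10^{8}\ \mathrm{m/s})^2 = 9 \times 10^{16}\ \mathrm{m^2/s^2}$$

the usual conversion factor between spatial and temporal measures (x = ct); the original does not state this identification explicitly.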

The "Bottom Line" of EM Force-Field Theory

Thus concentrating only on the force fields of Maxwellian EM theory is equatable to concentrating on the situation where any localized electrogravitational effect that is temporarily formed is instantly released at the speed of light. The EG-effect in such a system is so small and fleeting, that the possibility of any persisting or significant local gravitational effects may be ignored.

Because of this, in any EM theory based only on the force fields and focusing only on their effects, then: (1) EM forces and their derivative effects may be represented by elementary Heaviside/Gibbs vector theory, including the equivalence of the zero-translation vector and any zero summed/multiplied system of nonzero translation vectors, (2) a system of EM forces which sum or multiply to a zero resultant may be discarded outright and the zero-vector substituted, (3) the effect of an EM potential is only to serve as an accumulator from which an EM force may be produced, and only its nonzero gradient will be thought to have any physical significance, (4) translation of EM forces (i.e., of potential gradients) and their effects on charged matter will assume primary importance, and (5) the potentials themselves may be regarded as simply mathematical conveniences and of little importance. This is precisely the subset of Maxwell's electromagnetics extracted and written so clearly by Heaviside.

As was Hertz, Heaviside was adamantly opposed to attaching any sort of physical reality to the potentials, preferring that they should be excised and "murdered". An indelible imprint - that potentials were "mysticism" and at best only mathematical conveniences - was imposed upon physics by Heaviside and Hertz (21). This rigid mindset was not to lessen (at least in quantum mechanics) until 1959; it was not to be assuaged until the mid-1960's (22) (37).

How EM Potentials are Regarded Today

So in 1988, we have finally arrived at the state where the potentials are more-or-less understood by a consensus of quantum physicists as being the primary EM reality, while the force fields are now seen to be secondary effects generated from the potentials.

This understanding, however, still has a long way to go before it penetrates the main bastions of physics and electrical engineering. Most scientists and electrical engineers are still adamantly committed to the Heaviside version of Maxwell's theory, and are strongly conditioned that the EM force fields are the primary effectors in electromagnetics.

They are also nearly totally resistant to the idea that there may be a fundamental error in automatically replacing a zero-resultant system of EM translation force vectors with a zero factor, rather than replacing the system with the combination of a conditional zero vector (conditional for translation only) and a scalar stress potential. Consequently, most orthodox scientists and engineers are still strongly conditioned against quaternions, and erroneously believe that Heaviside's translation was complete. Seemingly it has never occurred to most mathematicians and scientists that zero-vectors are usually not truly equal. Stress-wise, zero resultant combinant systems of multiple translation vectors usually differ in: (1) magnitude, (2) polarization, (3) type of stress, (4) frequency components, (5) nonlinear components, and (6) dynamic internal variation (38).

Vectors Versus Quaternions: The Cross Product

In a conventional 3-dimensional vector, one may have three vector components, such as (in Cartesian coordinates):

$$\mathbf{v} = a\mathbf{i} + b\mathbf{j} + c\mathbf{k} \qquad (1)$$

where $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ are unit vectors in the directions of the x, y and z axes respectively and a, b and c are constants. In the right side of equation (1), the three components of vector $\mathbf{v}$ are:

$$a\mathbf{i}, \qquad b\mathbf{j}, \qquad c\mathbf{k} \qquad (2)$$

Obviously if the vector components of vector $\mathbf{v}$ are zero vectors, then:

$$\mathbf{v} = \mathbf{0} \qquad (3)$$

We shall be interested in the vector product of two identical vectors $\mathbf{A}$ and $\mathbf{A}$, where

$$\mathbf{A} \times \mathbf{A} = A A \sin\theta \, \mathbf{n} = \mathbf{0} \qquad (4)$$

where A is the length (magnitude) of vector $\mathbf{A}$, $\theta$ is the angle between the two vectors (in this case zero), $\mathbf{n}$ is the unit vector normal to the plane of the two vectors, and $\mathbf{0}$ is the zero vector.

After Heaviside and Gibbs, electrical engineers are trained to replace the cross product $\mathbf{A} \times \mathbf{A}$ with the zero vector $\mathbf{0}$, discarding the components of the zero vector system as having no further consequences, either electromagnetically or physically.

Now let us look at the comparable quaternion expression of this situation. First, in addition to the three vector components, a quaternion also has a scalar component, w. So the quaternion q corresponding to vector $\mathbf{A}$ is:

$$q = w + \mathbf{A} = w + a\mathbf{i} + b\mathbf{j} + c\mathbf{k} \qquad (5)$$

The physical interpretation of equation (5) is that there locally exists a stress w in the medium and a translation change $\mathbf{A}$ in that stress.

When the quaternion is multiplied times itself (that is, times an identical quaternion) - taking w = 0, so that q is the pure quaternion corresponding to $\mathbf{A}$ - the vector part zeros, just as it did for the vector expression. However, the scalar part does not go to zero. Instead, we have:

$$q \times q = (0 + \mathbf{A})(0 + \mathbf{A}) = -\mathbf{A} \cdot \mathbf{A} + \mathbf{A} \times \mathbf{A} = -A^2 + \mathbf{0} \qquad (6)$$

There is a very good physical interpretation of this result. The zero translation vector resultant $\mathbf{0}$ for the system shows that the system now does not produce translation of a charged particle. Because the force vectors have been infolded, the scalar term shows that the system is stressing, and the magnitude of that stress is given by the scalar term $-A^2$.

Notice that the zero vector in equation (6) does not represent the absence of translation vectors, but the presence of a system of multiple (in this case, two) vectors, one of them acting upon the other in such a manner that their external translation effect has been lost and only their stress effect remains (39). The quaternion scalar expression has, in fact, captured the local stress due to the forces acting one on the other, so to speak; it is focused on the local stress placed upon the abstract vector space, adding a higher dimension to it.

In other words, the $-A^2$ in equation (6) represents the internal stress action of a nontranslating system of vectors that are present, infolded, and acting internally together on the common medium that entraps them and locks them together. The two translation vectors have formed a deterministically substructured medium-stressing system, and this is a local gravitational effect.

One sees that, if we would capture gravitation in a vector mathematics theory of EM, we must again restore the scalar term and convert the vector to a quaternion, so that one captures the quaternionically infolded stresses. These infolded stresses actually represent curvature effects in the abstract vector space itself. Changing to quaternions changes the abstract vector space, adding higher dimensions to it.
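As a purely numerical illustration of the algebra just described - this sketch is mine, assuming Python with NumPy, and is not part of the original paper - the following contrasts the Gibbs/Heaviside cross product of a vector with itself against the Hamilton product of the corresponding pure quaternion with itself:

```python
# A minimal sketch: cross product vs. Hamilton quaternion product
# for two identical vectors. Assumes only NumPy.
import numpy as np

def hamilton_product(q1, q2):
    """Multiply two quaternions given as (w, x, y, z) arrays."""
    w1, v1 = q1[0], q1[1:]
    w2, v2 = q2[0], q2[1:]
    w = w1 * w2 - np.dot(v1, v2)                 # scalar part
    v = w1 * v2 + w2 * v1 + np.cross(v1, v2)     # vector part
    return np.concatenate(([w], v))

A = np.array([3.0, 4.0, 0.0])                    # |A| = 5

# Vector analysis: A x A is the zero vector, and the whole system
# is then conventionally discarded.
print(np.cross(A, A))                            # [0. 0. 0.]

# Quaternion analysis: embed A as a "pure" quaternion (w = 0).
qA = np.concatenate(([0.0], A))
print(hamilton_product(qA, qA))                  # [-25.  0.  0.  0.]
# The vector (translation) part still zeros, but the scalar part
# -|A|^2 = -25 survives: the term equation (6) identifies with
# the trapped stress.
```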

Artificial Electrogravitational Timestress

Let us assume for a moment that the two identical vectors $\mathbf{A}$ and $\mathbf{A}$ are electric forces. Then equation (6) represents the case where they are "locked together" in a local medium.

We now recall the modern quantum mechanical view that no "static" thing exists as such in the universe. A macroscopic "static" force - at quantum level - represents a continual constant rate of quantum change. In the case of an electrical force, it represents a continual constant rate of flux exchange of virtual photons.

The zero vector in equation (6) represents a constant exchange of macroscopically organized virtual energy into the local medium. Consequently, it represents continual internal work into and onto the local medium, but without translating it.

So a zero vector system of nonzero vector components represents internal or "infolded" constantly-working forces (internal to the medium) where the system does not cause translation of the point or region of application, whereas a nonzero vector and a nonzero vector system represent external forces which cause at least some translation of the point or region of application, unless this translation is nullified by other forces (40).

Physically, equation (6) may now be seen to state that: (1) internal forces (in the form of an internal stress) are present in the local medium but no translation force is present, and (2) these internal forces are continually performing internal work on the local medium without external translation. Since the translation vector component has spatially zeroed, then the scalar component that results may be taken to represent the time rate of expenditure of this internal work that is being done on the local medium - that is, it represents the extra internal power (which is simply the extra energy density of time) now being expended locally in and on the medium as a sink. If the vector components of the zero vector system are oriented outward, then the scalar stress component changes sign and it represents the extra internal power locally flowing out of the medium as a source.

Infolded Structuring is Dynamic and Complex

As can be realized, by changing the magnitudes, phasing, directions, rotations and dynamic frequencies of the vector components of the zero-vector stress system, very elaborate and sophisticated structuring of the local space-time medium (the local vacuum) may be deterministically constructed and controlled at will.

The continually performed internal work represents an increase or decrease in the local energy density of the medium, hence in the stress of the medium. However, note that this stress - either compressive or tensile - is in and on the rate of flow of local time in the region. This timestress represents an artificial stress potential, where by "artificial" we mean that the timestress of the local medium is structured and macroscopically patterned (and controlled) deterministically; translation of that timestress is spatially radiating out over a finite macroscopic neighbourhood of the local point or region of application. This may be contrasted to a "natural" potential, where the internal component stress vectors in the surrounding spatial neighbourhood are microscopic and randomly varying in all directions.

In equation (6), then, we have a local gravitational effect - a local increase in the energy density of the vacuum. Because the large EM force is utilized rather than the weak G-force, and because it is a timestress condition, it is a powerful local general relativistic effect. Because the local vacuum flux is significantly altered, we have a locally curved space-time which is significantly anisotropic, in violation of one of the fundamental (and crippling) assumptions of Einsteinian general relativity. Further, this is an electrogravitational effect, since it is a gravitational effect produced by purely electromagnetic means (41).

We have therefore produced a local curvature of space-time, and done so electromagnetically.

What is even more astonishing to the conventional relativist is that this local curvature - startling enough in its own right - is also deterministically structured, and we can control the structuring at will. Hence we can engineer (structure) the vacuum itself (42).

But to return to equation (6).

The EG Sine-Squared Stress Wave

Suppose that $\mathbf{A}$ represents a time-varying $\mathbf{E}$-field vector and its amplitude is of the form:

$$A = E_0 \sin \omega t \qquad (7)$$

Then:

$$q \times q = -A^2 = -E_0^2 \sin^2 \omega t \qquad (8)$$

and this is a scalar EM stress wave of variation of the local curvature of the vacuum. It is a powerful electrogravitational (scalar electromagnetic) wave, particularly if we produce it as a standing wave and use it to "pump" atomic nuclei in a rhythmically varying manner (43).

Briefly, a sine-squared wave has the appearance of a sort of "skinny" sine wave - a near-sawtoothed wave - that is now oscillating about an increased bias. In other words, the wave of equation (8) represents a scalar EM (an EG) wave that pumps the atomic nuclei of a targeted material, holding those nuclei at an excited average potential level. The wave has strong and very useful applications in - among other things - electrohealing (44).
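The "increased bias" is just elementary trigonometry:

$$\sin^2 \omega t = \frac{1}{2} - \frac{1}{2}\cos 2\omega t$$

so the wave of equation (8) is a constant ("DC") offset of magnitude $\frac{1}{2}E_0^2$ plus an oscillation at twice the original frequency; the wave never changes sign, which is the bias referred to.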

Example: Application to Explain Four-Wave Mixing

Now most modulations are represented by similar multiplication between two waves. Suppose we have two equal-amplitude, continuous monochromatic $\mathbf{E}$-field sine-waves, introduced into a nonlinear dielectric medium in antiparallel and antiphased fashion (45). The medium will act as a modulator, causing the two waves to "lock together", so that their $\mathbf{E}$-fields sum everywhere to a zero resultant vector spatially. A standing sine-squared scalar EM (electrogravitational) wave of the stress of time will be formed by the waves, very similar to equation (8) above.

This scalar wave will not appreciably react with the orbital electron shells of an atom of the dielectric, but will instead pass through these outer "Faraday cages", reaching directly into the highly nonlinear nucleus itself.

We invoke the quantum mechanical picture of the nucleus as: (1) a region of local sharp curvature of space-time, (2) incredibly dynamic, with particles of every kind continually changing, transmuting, giving off other particles and waves, being absorbed, etc., (3) containing violent and dynamic charges and locally trapped fierce currents, and with field strength fluctuations reaching 10 and above, (4) in violent virtual particle exchange with the neighbouring vacuum, and (5) on the average, positively charged, so that it is -- on the average -- time reversed (46).

The presence of the sine-squared EG wave in the nucleus alters the nuclear potential by - on the average - the "DC" component potential amount. However, this delta in the potential is dynamic, varying as the sine-squared. This dynamically oscillating potential wave constitutes a pump wave on the nucleus itself, and it is rhythmically pumping the amplitude of the nuclear potential itself. We may think of the pumped nucleus as now conditioned to function as a parametric amplifier, ready to be given another "signal input" (47).

Now let us introduce yet another small sine wave into the nonlinear dielectric. It will modulate each of the two pump wave components, forming a scalar modulation upon the scalar sine-squared pump wave, and riding directly into the nucleus. In positive time, this now constitutes a "signal input" to the "parametric amplifier nucleus". The input is absorbed and amplified, up to the level of the pumping energy available in the pump wave that can be "scavenged up and gated".

Internal Absorption Can Be External Emission

However, the nucleus, being time reversed, also produces a time-reversed absorption which is seen spatially by the external observer as constituting emission! That is, in his own positive time, the external observer sees the time-reversed absorption as an emission event. Further, this is a time reversal - and hence an "emission" to the external observer - of the entire parametrically amplified signal wave. (After all, in reversed time it is the pump wave that is modulating the signal wave - a principle of importance.)

So the powerfully amplified signal wave in the parametric amplifier is seen by the external spatial observer to be emitted from the nucleus. In short, a time-reversed and powerful scalar wave is emitted by the nucleus, passing back along the exact path taken by the original "signal wave". To a time-reversed entity, that invisible path is its path ahead of it in positive observer time. The external observer sees the emitted wave emerge as a powerfully amplified time-reversed EM wave, backtracking precisely back along the exact path taken by the signal wave, and appearing everywhere in phase spatially with the continuous signal wave.

In 4-space, of course, the time-reversed wave is out of phase in the fourth dimension, time.

Four-Wave Mixing is Like a Triode

This is the mechanism by which four-wave mixing provides a powerfully amplified time-reversed replica of the signal wave.
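For comparison with standard nonlinear optics (a textbook result, stated here in conventional notation rather than taken from this paper): in degenerate four-wave mixing, two counterpropagating pump waves $E_1$ and $E_2$ and a signal wave $E_3$ interact in a medium with third-order susceptibility $\chi^{(3)}$ to generate a fourth wave

$$E_4 \propto \chi^{(3)} E_1 E_2 E_3^{*}$$

proportional to the complex conjugate - the time-reverse - of the signal; with sufficient pump power the conjugate replica is indeed amplified, $|E_4| > |E_3|$.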

Note that the entire process can be compared to a triode: the signal wave constitutes the grid signal, the pump wave constitutes the plate voltage, and the nucleus of the atom in the dielectric provides the self-powered cathode.

We put in the signal wave (grid signal) and get out a 180-degree phase-shifted, amplified phase conjugate replica (amplified plate signal). The difference is that the PCR is phase-shifted in time, not space.

Negative Energy, Nuclear Binding and Transmutation

Note also that negative energy is already involved in the time-reversed nucleus of the atom, as in negative time (48). Excess "negative energy" in the nucleus means "additional binding energy", which will be expressed as additional inertia and coupled onto the electron shells. In this case the "inertial mass" of the pumped material increases, inversely as the pumping frequency. Less "negative energy" in the nucleus means "decreased binding energy", which places the nucleus in an unstable state. The nucleus can actually be transmuted by this means (in many cases toward barium, which apparently has the least binding energy per nucleon). Transmutation to an isomer appears easiest, though this is not always the case (49). It seems theoretically possible to design a complex pumping mixture of frequencies and power levels which will cause a specific radioactive nucleus to undergo transition to a harmless element or combination of elements. The main point is, scalar EM allows direct production of structured electrogravitational potentials in the nucleus, opening up the possibility of direct and controlled engineering of the nucleus itself.

Perspective

Obviously, in this short paper we have only scratched the surface. We have presented only the barest illustration of how Maxwell's original quaternion theory was actually a unified field theory of electrogravitation, where gravitation deals with the stress (enfolded and trapped forces) of the medium, and electrogravitation deals with the electromagnetic stress (enfolded and trapped EM forces) of the medium.

Of course, a great deal more work is necessary, but at least this indicates the way to go to obtain a unified field theory of electromagnetics and gravity that is practical and engineerable (50). I can only state that the indicated approach works in the laboratory, and let it go at that without further elaboration.

Recapitulation: From Maxwell to 1900

In summary, Maxwell himself was well-aware of the importance and reality of the potential stress of the medium (51). However, after Maxwell's death, Heaviside - together with Hertz - was responsible for striving to strip away the electromagnetic potentials from Maxwell's theory, and for strongly conditioning physicists and electrical engineers that the potentials were only mathematical conveniences and had no physical reality. Heaviside also discarded the scalar component of the quaternion, and - together with Gibbs - finalized the present modern vector analysis.

The scalar component of the quaternion, however, was the term which precisely captured the electrogravitational stress of the medium. By discarding this term, Heaviside (aided by Hertz and Gibbs) actually discarded electrogravitation, and the unified EM-G field aspects of Maxwell's theory. However, the theory and the calculations were greatly simplified in so doing, and this excision of electrogravitation provided a theory that was much more easily grasped and applied by scientists and engineers - even though they were now working in a subset of Maxwell's theory in which gravity and EM remained mutually exclusive and did not interact with each other.

Shortly before 1900, the vectorists' view prevailed, and the Heaviside version of Maxwell's theory became the established and universal "EM theory" taught in all major universities - and erroneously taught as "Maxwell's theory"! Though gravitation had been removed, the beautiful unification of the electrical and magnetic fields had been retained, and so the rise in applied and theoretical electromagnetics and electromagnetic devices began, ushering in the modern age.

Impact on Einstein's General Relativity Theory

Unfortunately, however, the excision of electrogravitation from Maxwell's theory was later to leave Albert Einstein with a quandary: it seemed that the only way space-time could be curved measurably was by and at a huge collection of mass, such as the Sun or a star. Accordingly, in constructing his theory of general relativity, Einstein assumed that the local space-time was never curved (since obviously the observer and his lab instruments would not be sitting on the surface of the Sun or of a distant star). Consequently, he did not write an unrestricted general theory of anisotropic space-time, but instead he wrote a highly restricted sort of "special relativity with distant perturbations" - which, nonetheless, was a revolutionary and epochal achievement.

Ironically, Einstein then spent the remainder of his life vainly trying to find a way to reintroduce electromagnetic fields into his general relativity, and to provide a unified field theory of gravity and electromagnetics. He failed, because his own prior assumption of a locally flat space-time had already effectively ruled out the very thing he sought.

Effect on Western Search For a Unified Field Theory

Today the magical unified theory of gravitation and electromagnetics continues to elude Western scientists, because they nearly universally adhere to Einstein's rejection of a locally-curved space-time. In so doing, the West largely rules out any local, laboratory-bench development of, and experimentation with, general relativistic systems. And in turn, that relegates general relativity to a non-experimental theory and, except for cosmological observations, a sort of "special relativity with distant perturbations."

The "locally flat space-time" assumption saves the conservation laws - and Western scientists have now become nearly totally dogmatic in their subservience to conservation. To challenge the conservation laws - and Einstein's restricted general relativity - leads to ostracization by his peers and vigourous suppression (52).

Soviet Theory of an Anisotropic Space-time

Soviet scientists, on the other hand, regularly publish papers where Einstein's crippling "local flat space-time" assumption is removed and the anisotropy of space-time is unrestricted, strongly implying that they might have developed an experimental unified field theory (53). They also are quite frank to publish statements that in a general relativistic system, conservation laws do not apply (54).

In numerous previous papers and books, the present author has presented extensive evidence of the Soviet weaponization of electrogravitation and hence of a unified field theory (55).

Impact on Science and Humanity

Thus a great irony now is evident in Western science. More than 120 years ago, Maxwell wrote the first paper in his unified field theory of electrogravitation. Had Western scientists and mathematicians given greater attention to Maxwell's quaternion theory, by 1900 we should have been developing antigravity propulsion systems and interplanetary exploration vehicles.

Certainly humankind could have been lifted to much greater heights than where we are today. And along the way, we just might have avoided two great and bloody World Wars and a host of smaller ones.

In the modern geometrodynamic view, all forces are considered to arise from, and be rooted in, the curvature of space-time - in gravitation. If the curvature of space-time itself can easily be engineered and controlled by electromagnetic means, the extensive application of our present advanced state of electromagnetic development and devices can lead to control of the world of physical reality on a scale heretofore only dreamt of in the minds of our greatest visionaries.

Consider such a vision by Albert Einstein. Quoting:

"It would of course be a great step forward if we succeeded in combining the gravitational field and the electromagnetic field into a single structure. Only so could the era in theoretical physics inaugurated by Faraday and Clerk Maxwell be brought to a satisfactory close."

With the mastery of electrogravitation and the control of physical reality itself in our grasp, the freeing of humankind from want, misery, and poverty would directly follow. The impact on mankind's development would be almost beyond present human conception. Consider this vision from Teilhard de Chardin of the mastery of physical reality and the elimination of man's inhumanity to his fellow man:

"Someday, after we have mastered the winds, the waves, the tides and gravity, we shall harness for God the energies of love. Then for the second time in the history of the world man will have discovered fire."

Like Prometheus of old, in his quaternion EM theory James Clerk Maxwell produced a blazing coal of fire, literally taking the fire of gravitation from Olympus and giving it to human beings. Uncomprehending, scientists heaped ashes over the fiercely glowing coal, and only warmed themselves with the tiny trickle of electromagnetic heat that escaped the dampening ashes. For over a hundred years, the fiery coal has been quietly lying there, buried under the ashes, still glowing brightly.

It is time to be bold. For the enrichment of all mankind, let us uncover Maxwell's long-dormant fiery coal and fan into full bloom the Promethean flame of power that lies sleeping within.

NOTES AND REFERENCES

1. James Clerk Maxwell was born on June 13, 1831 in Edinburgh, Scotland. In 1847 he entered the University of Edinburgh, then transferred to Cambridge in the fall of 1850. After graduation, he stayed on at Cambridge in a research position. He was elected a Fellow of Trinity College and placed on the staff of college lecturers. In 1856 he returned to Scotland, where he took up the Chair of Natural Philosophy at Marischal College, Aberdeen. In autumn 1860 he took a new position as Chair and Professor of Natural Philosophy and Astronomy at King's College, London (a position he held until 1865, at which time he resigned).

Maxwell was economically independent. He was elected to the Royal Society in 1861, while at King's College. From 1865 to 1871 he resided at his ancestral Scottish country home, Glenlair, developing his major ideas into book form.

Maxwell returned to Cambridge in 1871, where he became the first holder of the Cavendish Chair of Experimental Physics. There he also supervised the construction and operation of Cavendish Laboratory. His treatise on electromagnetism appeared in 1873. He held his position at Cambridge until he died on Nov. 5, 1879, at age 48, of a form of stomach cancer - the same ailment that had killed his mother when he was a child.

2. His famous treatise was J.C. Maxwell, "A Treatise on Electricity and Magnetism", Oxford University Press, Oxford, 1873. For an elegant and readable account of Maxwell's life and achievements, see I. Tolstoy, "James Clerk Maxwell: A Biography", Canongate, Edinburgh, 1981. Maxwell's compact and powerful quaternionic expression of the general equations of the electromagnetic field is given in Article 619, Vol. 2, p. 258 of his Treatise. See also H.J. Josephs, "The Heaviside papers found at Paignton in 1957", in "Electromagnetic Theory by Oliver Heaviside", including an account of Heaviside's unpublished notes for a fourth volume, and with a foreword by Sir Edmund Whittaker, Vol. III, Third Edition, Chelsea Publishing Co., New York, 1971, p. 660. Just how much more powerful was Maxwell's quaternionic expression of EM theory than Heaviside's vector interpretation was succinctly expressed by Josephs: "Hamilton's algebra of quaternions, unlike Heaviside's algebra of vectors, is not a mere abbreviated mode of expressing Cartesian analysis, but is an independent branch of mathematics with its own special rules of operation and its own special theorems. A quaternion is, in fact, a generalized or hypercomplex number..." (Josephs, ibid., p. 660)

3. The prevailing view in physics - and in most physics textbooks - is that Faraday - himself uneducated and woefully ill-prepared mathematically - discovered and formulated the concept of "lines of force", or field lines; and that Maxwell, his mathematical interpreter, then tinkered together the equations to explain electromagnetic radiation on the basis of Faraday's field concepts. However, there certainly can be serious ground for contesting this simplified view. As White states, "The mathematics which Maxwell used to develop Faraday's results came out of a body of work which had as its implicit subject unified field theory. Leonhard Euler, Pierre-Simon Laplace, Joseph-Louis Lagrange, and Karl Friedrich Gauss prepared these mathematical and theoretical foundations, elaborated by Sir William Hamilton, which shaped the positive content of Maxwell's work... As the story goes, Maxwell first elaborated the equations which describe the magnetic effects of an electrical current and the ability of a magnet in motion to induce electricity, and then, by algebraic substitution, came on the wave equations. In fact, James MacCullagh, a collaborator of Sir William Hamilton, and Franz Neumann, a collaborator of Gauss, Wilhelm Weber and Bernhard Riemann, produced these same equations between the years 1839 and 1848, at least a decade before Maxwell began his scientific career... Field theory, as it was developed through the work of Euler and Lagrange, elaborated by Gauss, and totally redefined by Riemann, depends upon the concept of potential energy." (Carol White, "Energy Potential: Towards a New Electromagnetic Field Theory", with excerpts from two original works by B. Riemann, Campaigner Publications Inc., New York, 1977, p. 19-20). White gives a critical discussion of the way in which standard textbooks have assigned credit for priorities and conceptual contributions in the foundations of theoretical electromagnetics. Extending Riemann's considerations, White focuses strong attention on the potentials and on a new approach to electromagnetics.

There is certainly a great deal wrong with modern EM theory, as is well-known to a small but growing circle of scientists. As Dr. Domina Spencer of the University of Connecticut states, "Since the turn of the century there has been a lot of first class experimental and theoretical work that reveals problems with relativistic electromagnetic theory, but this work has been virtually ignored by the mainstream physicist". Dr. Spencer and her colleagues are embarked on a thorough review of all the experimental work that has been performed on electromagnetic phenomena since Ampere published his first results in 1824. (Note that Soviet scientists did such a review immediately after World War II.)

For more on the subject of what's wrong with the present foundations of EM theory, see particularly Peter Graneau and P.N. Graneau, "Ampere-Neumann Electrodynamics of Metals", Hadronic Press, Nonantum, Massachusetts, 1985; P. Graneau and P.N. Graneau, "Electrodynamic Explosions in Liquids", Applied Physics Letters, 46, 1985, p. 468. See also H.E. Puthoff, "Ground State of Hydrogen as a Zero-Point-Fluctuation-Determined State", Physical Review D, 35 (10), May 15, 1987, p. 3266-3269; Puthoff, "Zero-Point Fluctuations of the Vacuum as the Source of Atomic Stability and the Gravitational Interaction", Proceedings of the British Society for the Philosophy of Science International Conference, "Physical Interpretations of Relativity Theory", Imperial College, London, Sept. 1988. Also see very important work by Dr. Henry Monteith, referenced elsewhere in this paper. See also Cynthia Kolb Whitney, "Electromagnetic Fields Near Dynamic Systems of Charged Particles", Hadronic Journal, 10, 1987, p. 299-301; Whitney, "Field-to-matter Energy Transfer", "Manifest Covariance in Relativistic Potential Theory", Physics Essays, 1 (1), 1988, p. 15-17; Whitney, "Generalized Functions in Relativistic Potential Theory", Hadronic Journal, 10, 1987, p. 91-93. For a lay description of some of the exciting work and problems of foundations of electromagnetic theory, see articles by Chappell Brown in Electronic Engineering Times: "Anomalies in Electromagnetic Law Spur Debate", Sept. 14, 1987; "Railgun Research Shoots Holes in Lorentz's Theory", Apr. 6, 1987; "Electrons and Conduction: Not So Simple After All", Dec. 28, 1987. Finally, it is hoped that this present paper will help shed at least a little light on the subject.

4. J.C. Maxwell, A Treatise on Electricity and Magnetism, Oxford University Press, Oxford, 1873.

5. For confirmation that the Heaviside equations - which presently are erroneously called "Maxwell's equations" - are not to be found anywhere in any of Maxwell's books or papers, see Josephs, ibid., p. 647. See also Sir Edmund Whittaker, "Oliver Heaviside", Bulletin of the Calcutta Mathematical Society, 20, 1928-1929, p. 202. See also Paul J. Nahin, "Oliver Heaviside: Sage in Solitude", IEEE Press, New York, 1988, p. 9, note 3. Today - ironically - most engineers and scientists who study and utilize "Maxwell's equations" have examined neither Maxwell's original work nor the theory of quaternions.

6. Oliver Heaviside was born in poverty on May 18, 1850 in Camden Town, the youngest of four children. Young Heaviside was forced to drop out of high school and go to work. His aunt, however, had married well, to Professor Wheatstone of King's College, London - who was later to become Sir Charles Wheatstone, F.R.S. By his uncle's influence, Heaviside was appointed to a telegrapher's position at Newcastle in 1868. Gradually, he began to theoretically attack the problems in telegraphy, but was forced by increasing deafness to resign in 1874 and return to live with his parents in London.

Heaviside never possessed a formal university degree, but was - much later, in the early 20th century - to be awarded an honorary doctorate.

Studying mathematics on his own, Heaviside had begun to write improvements for telegraphy, and in 1873 began using calculus. He also studied differential equations and made regular contributions to the Telegraphic Journal, the English Mechanic, and the Philosophical Magazine, with seven papers by 1874.

Heaviside was astounded by Maxwell's "Treatise on Electricity and Magnetism", published in 1873, and Maxwell became his undying hero. Heaviside mastered the manuscript in two years - something few men have done to this day.

With the invention of the telephone in 1877, Heaviside began also to study telephonic transmission. Then Maxwell died in 1879. In 1885-87 Heaviside published in the Electrician a series of articles under the title "Electromagnetic Induction and Propagation", where for the first time he gave a clear and modern vector exposition of Maxwell's theory. Heaviside was violently opposed to the potentials, however, remarking that they were "metaphysical" and that it was even "best to murder the lot". He focused strongly on the EM force fields as the primary EM causative entities. This attitude was to spread and condition generations of electrical scientists - that the EM potentials were only mathematical conveniences.

Though self-educated, Heaviside was a true genius. He also developed the energy flow in the EM field, developed the skin effect, speculated analytically on faster-than-light charged particles, discovered the theory of distortionless signal transmission, and articulated the concept of inductively loaded circuits including self-induction. He had difficulty in getting his papers accepted for publication, since he made use of unusual methods of his own in solving problems. But in 1892 his collected papers were published in two volumes under the title of "Electrical Papers". Later his "Electromagnetic Theory" also dealt with a number of important problems.

Heaviside, followed by Gibbs, attacked the quaternionist expression of Maxwell's theory, though he held the highest regard for Maxwell himself. By 1892-3 the controversy between the multiplying vectorists and the few remaining quaternionists exploded into a duel to the death, and the vectorists quickly won. Interest in quaternions then dropped sharply, and vector EM theory in accordance with Heaviside's interpretation came to be universally accepted.

Heaviside also had his bitter opponents, and even his EM theory was very slow in being accepted by the mathematical physicists, many of whom snobbishly considered Heaviside crude and uneducated. In his writings Heaviside himself often subtly railed at the rigour demanded by the mathematicians, and sometimes essentially used brute force to get the correct results even though mathematical rigor suffered.

Heaviside made major improvements in electrical transmission theory, propagation theory, and advanced the operational calculus to study transients. In "Electromagnetic Theory" (1893-1912), he postulated that the mass of an electric charge would increase as its velocity increased, anticipating one aspect of special relativity. In 1902, he predicted the ionosphere and the Earth-ionospheric duct. Eventually he was awarded an honorary doctorate and was once considered for the Nobel Prize.

Heaviside, ever the outcast and apart from his peers, died in a nursing home at Torquay on Feb. 3, 1925.

7. It is little known that, in his later years, Heaviside may again have turned to quaternion operations, and even developed a "unified" theory of electromagnetics and gravity. These papers were never published, but were reported found in 1957 where Heaviside had lived for some years (some electrical scientists, however, continue to dispute the authenticity of the papers). Little or no adequate review of this unified theory has been made, though several writers have not hesitated to express judgements pro and con as to its authenticity, its promise, or its usefulness (e.g., see Josephs, ibid.; H. J. Josephs, "History Under the Floorboards", Journal of the IEE 5, Jan. 1959, pp. 26-30; H.J. Josephs, "Postscript to the Work of Heaviside", Journal of the IEE 9, Sept. 1963, p. 511-512; B.R. Gossick, "Heaviside's 'Posthumous Papers"', Proceedings of the IEE 121, Nov. 1974, p. 1444-1446; Paul J. Nahin, "Oliver Heaviside: Sage in Solitude", IEEE Press, 1988, p. 305-307).

My own comment is that this (purportedly Heaviside's) unified theory should be examined experimentally, not just mathematically, to ascertain whether or not it works. Certainly Heaviside had long considered localization of energy: e.g., in 1893 ("Electromagnetic Theory", p. 455), he wrote: "To form any notion at all of the flux of gravitational energy, we must first localize the energy... whether this notion will turn out to be a useful one is a matter for subsequent discovery". At least he understood the requirement for a local change in the energy density of the medium by electromagnetic means.

Ironically, then, the man who almost single-handedly "slew" quaternions and Maxwell's quaternion theory may eventually have returned to them to try to capture the elusive gravity which - by the present author's thesis - he had inadvertently discarded earlier when he struck down the scalar component of the quaternion and converted it to a vector.

8. J.C. Maxwell, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London, 155, 1865, p. 459-512. (Presented in 1864.)

9. For an excellent discussion of the development of vector analysis, see M.J. Crowe, "A History of Vector Analysis: The Evolution of the Idea of a Vectorial System", University of Notre Dame Press, Notre Dame, Indiana, 1967.

10. In 1835, Sir William Rowan Hamilton's memoir, "On a General Method in Dynamics", was published, expressing the equations of motion in a canonical form that captured the duality between the components of momentum and the coordinates. The deep significance of this duality was not fully appreciated until the rise of quantum mechanics nearly 100 years later. In 1843, Hamilton discovered quaternions, though his investigations in algebra had begun 10 years earlier. In 1853, his "Lectures on Quaternions" - a most difficult and awkward book - appeared. Hamilton spent the remaining 22 years of his life developing the algebra of quaternions and its applications. His quaternion work was published posthumously in 1866 as "The Elements of Quaternions". The mantle of "quaternion champion" then passed to Professor Peter Guthrie Tait, who had patiently delayed publication of his own book on quaternion theory until after the book of Hamilton, his mentor, was published. See P.G. Tait, "An Elementary Treatise on Quaternions", Oxford University Press, Oxford, 1875.

11. Hamilton's quaternion algebra was a landmark; for the first time, it freed algebra from the commutative postulate of multiplication.

12. See Paul J. Nahin, ibid., p. 100-101. See also Freeman Dyson, "The Maxwell Equations", in M.S. Berger, "J.C. Maxwell, The Sesquicentennial Symposium", Elsevier Science Publishers B.V., Amsterdam, 1984, p. 17-22. Quoting:

"... the mathematicians of the nineteenth century failed miserably to grasp the great opportunity offered to them in 1865 by Maxwell. If they had taken Maxwell's equations to heart as Euler took Newton's, they would have discovered, among other things, Einstein's theory of special relativity, the theory of topological groups and their linear representations, and probably large pieces of the theory of hyperbolic differential equations and functional analysis. A great part of twentieth century physics and mathematics could have been created in the nineteenth century, simply by exploring to the end the mathematical concepts to which Maxwell's equations naturally lead."

13. Heinrich Hertz discovered (proved) the existence of Maxwell's electromagnetic waves in 1888, almost a decade after Maxwell's untimely death.

14. In 1880 Hertz received his doctorate from the University of Berlin, where he had studied under the renowned physicist Hermann von Helmholtz. He began serious study of Maxwell's electromagnetic theory in 1883. While professor of physics at the Karlsruhe Polytechnic, between 1885 and 1889, he produced electromagnetic waves in the laboratory, just as predicted by Maxwell. In his lab, he was able to measure the wavelength and velocity of these waves, and he showed that their susceptibility to refraction and reflection was the same as that of light and heat waves. This established that light and heat are electromagnetic waves, which until then was only an unproved theory. With experimental confirmation by Hertz, Maxwell's theory - especially as interpreted in a much simpler vector form by Oliver Heaviside and Hertz - became predominant.

15. See Roger Penrose, "Integrals for General-Relativistic Sources: A Development From Maxwell's Electromagnetic Theory", in M.S. Berger, ibid., p. 211-243. Quoting:

"With the notable exception of Faraday before him, no other major physicist of his day had apparently regarded the concept of 'field' as anything more than a convenient mathematical auxiliary to the prevailing point-particle-action-at-a-distance view of physical reality. The idea that 'disembodied fields' can propagate through empty space carrying energy as they go, was as startling and revolutionary an idea at the time as radio is commonplace to us today."

16. Maxwell had a distinguished academic background. He received a mathematics degree from Trinity College, Cambridge in 1854, became Professor of Natural Philosophy at Marischal College, Aberdeen, Scotland in 1856, and in 1860 was appointed to King's College in London. After a short retirement, he became the first Cavendish Professor of Physics at Cambridge.

17. See note 12 above with respect to the delay in physics occasioned by the leading mathematicians of Maxwell's day ignoring the impact of Maxwell's theory. For another cogent argument about what might have been discovered much earlier in physics if quaternions had not also been cast aside, see James D. Edmonds, Jr., "Quaternion Quantum Theory: New Physics or Number Mysticism?", American Journal of Physics, 42 (3), Mar. 1974, p. 220-223. For yet another argument about what quaternions might have had to say about gravitation and a unified field theory, see this present paper.

18. Of course there were exceptions, but most engineers of the day were little skilled in mathematics. Electrical theory, instruction, and knowledge were particularly primitive. Most electrical engineers desperately needed something as simple as possible, to solve their signaling and power transmission problems. Even many eminent electrical scientists, such as J.J. Thomson, themselves never quite grasped what Maxwell's theory was all about. Also, the prevailing electrodynamics theories of the time were action-at-a-distance models, such as those of Carl Friedrich Gauss and, later, Wilhelm Weber. The mathematicians of Maxwell's time had developed a taste for quite different directions of endeavour, and those who had themselves lost touch with physics could not assess the merits of Maxwell's theory (see Dyson, ibid., p. 21). Maxwell's own presentations were abstruse and difficult; since he used a mechanical model of the ether, his presentations were filled with clunking gears, ratchets, and distracting machinery - sufficient to rout all but the hardiest theorists. Until Hertz proved Maxwell's EM waves in 1888 (over 20 years after Maxwell began publishing his theory), most scientists felt very constrained by the action-at-a-distance competition to Maxwell's theory.

19. Oliver Heaviside's clear and simplified vector exposition of Maxwell's theory began to be published in The Electrician. With Maxwell's untimely death, Heaviside became his tireless successor and unyielding advocate, though other brilliant scientists such as Gibbs, Hertz and Lorentz also made great contributions.

20. In "A Treatise on Electricity and Magnetism", Maxwell did not develop the analytical consequences of the energy concept. Instead, his paper is filled with descriptions of early Victorian ideas about the nature of electrical energy, expressed in a maze of symbols representing quaternion formulations of scalar and vector potential functions, etc. As a result, engineers of the day found Maxwell's chapter on the general equations of EM field theory quite unreadable.

Even years later, in the 1880's, it was still almost impossible to find a teacher who comprehended Maxwell's electrodynamics. Michael Pupin, for example, travelled from the US to England in vain, seeking such a professor. Finally, in Berlin he found one - Helmholtz - who was able to teach him Maxwell's theory (see Dyson, ibid., p. 21).

21. In 1892, Heaviside's series of papers in The Electrician and elsewhere was published as his "Electrical Papers", MacMillan and Co., in two volumes. Much later, this book provided a basis for his "Electromagnetic Theory", The Electrician Publishing Co., London and New York, 1922. The second edition, with an introduction by E. Weber, was published at New York in 1950. The third edition, with a foreword by Sir Edmund Whittaker, was published by Chelsea Publishing Company, New York, 1971.

22. Heaviside railed at the elusive idea of the potential, and focused electromagnetics upon the force fields, as did Hertz and Gibbs. Scientists and the literature were strongly indoctrinated with the dogma that the potentials were only mathematical conveniences. (Before one censures Heaviside, Gibbs and Hertz too strenuously for their shortsightedness, one should recall that, classically, forces and force fields - not energy - had been uppermost in scientific theory.)

It was not until 1959 that scientists were goaded once again into facing the unpleasant fact that the potentials were the primary reality and the translation force fields were simply made from them by operations. See Y. Aharonov and D. Bohm, "Significance of Electromagnetic Potentials in the Quantum Theory", Physical Review, Second Series, 115 (3), Aug. 1, 1959, p. 485-491.
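For reference (an addition to the original note), the Aharonov-Bohm result can be stated in a single line: an electron taken around a region of confined magnetic flux \Phi acquires a relative phase that depends on the potential alone, even though it travels only where the force fields vanish,

\Delta\varphi = \frac{q}{\hbar}\oint \mathbf{A}\cdot d\mathbf{l} = \frac{q\Phi}{\hbar},

which is why the effect is taken as evidence for the physical primacy of the potentials.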

Even so, this latter view was still not fully accepted until the mid-1980's, and it is only recently that the potentials - so beloved and emphasized by Maxwell himself - are once again accepted as the heart and soul of electromagnetics. See Bertram Schwarzschild, "Currents in Normal-Metal Rings Exhibit Aharonov-Bohm Effect", Physics Today, 39 (1), Jan. 1986, p. 17-20 for confirmation that the AB effect has been proven to the satisfaction of all but the most diehard skeptics.

Even so, the primacy of the potentials is still fully accepted only by a handful of scientists. See S. Olariu and I. Iovitzu Popescu, "The Quantum Effects of Electromagnetic Fluxes", Reviews of Modern Physics 57 (2), April 1985 for an exhaustive discussion of the Aharonov-Bohm effect (which proves the physical reality and primacy of the potential) and an extensive list of references.

23. Particularly in his earlier papers in The Electrician, and in his "Electrical Papers" in 1892.

24. In addition to being an excellent experimentalist, Heinrich Rudolf Hertz (1857-1894) was also a noted theorist - one who, like Maxwell, died at an untimely early age. Hertz also made a theoretical reformulation of Maxwell's theory, removing the potentials and focusing on the force fields, as did Heaviside. In his "Electrical Waves" (see Dover, New York, 1962, p. 196-197; first published in English in 1893), Hertz stated:

"... I have been led to endeavour for some time past to sift Maxwell's formulae and to separate their essential significance from the particular form in which they first happened to appear. The results at which I have arrived are set forth in the present paper. Mr. Oliver Heaviside has been working in the same direction ever since 1885. From Maxwell's equations he removes the same symbols (the potentials) as myself; and the simplest form which these equations thereby attain is essentially the same as that (at) which I arrive. In this respect, then, Mr. Heaviside has the priority... "

25. For Tait's quaternion theory, see P.G. Tait, "An Elementary Treatise on Quaternions", Oxford University Press, Oxford, 1875, 1st edition.

26. For details of the long struggle Heaviside had with his adversary Tait, see "The Great Quaternionic War", Nahin, ibid., p. 187-215. See also M. J. Crowe, "A History of Vector Analysis", University of Notre Dame Press, Notre Dame, 1967, passim. See also A.M. Bork, "Vectors Versus Quaternions - The Letters in Nature", American Journal of Physics, 34, Mar. 1966, p. 202-211.

27. With the availability of excellent and extensive expositions of the vector interpretation of the translation force-field subset of Maxwell's theory by Heaviside and Hertz, and with the nearly insurmountable difficulty associated with the complex quaternions and potentials which few scientists understood, the rejection of the quaternionic form of Maxwell's theory and the acceptance of the vector subset was inevitable.

28. Again recall that even Heaviside, the mighty mouse of a man who, together with Gibbs, slew quaternions, much later may have again turned to quaternions to grapple with the elusive gravity.

To deal with curved space-time, one must deal with potentials, for - in a strict sense - a potential is a curvature of space-time. It is also a trapped spatio-temporal stress, the nature of the stress being determined by the nature of the stressing fields comprising the potential. The stress may be either compressive or tensile, and may contain a complex infolded structure of infinite variability. Gravity is not determined by force fields (the escape of curvature of space-time), but by potentials (the stabilized presence of curvature of space-time).

In addition, one runs headlong into the need for negative energy and negative time. For example, if two like charges are brought together, energy is required to overcome the repulsion, and this energy "goes into the field" to give a positive energy density of space. Two masses, however, attract each other; it takes the exertion of energy to keep them apart - or, in other words, the field energy is negative in this case. Maxwell was much perplexed by this problem, as was Heaviside - and as have been most other physicists who have struggled with it, down to and including the physicists of today.
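A worked comparison (added here, using standard textbook sign conventions rather than anything from Maxwell or Heaviside themselves) makes the asymmetry explicit. The electrostatic field energy density is positive definite, while its Newtonian gravitational analogue is negative definite:

u_{EM} = \frac{\epsilon_0}{2}E^2 \ge 0, \qquad u_{grav} = -\frac{g^2}{8\pi G} \le 0.

Assembling like charges stores positive energy in the field; assembling masses releases energy, leaving the field energy negative.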

Actually, if we accept the negative field energy requirement, we can expect to meet negative energy when time is reversed, since the fundamental quantum (photon) is composed of (+ΔE)(+Δt) and the time-reversed quantum (anti-photon) is composed of (-ΔE)(-Δt). Therefore the main involvement of gravitation should be with a time-reversed region - such as the positively charged (time-reversed) atomic nucleus - and it is.

As early as 1898 Carl Barus - in a paper titled "A Curious Inversion in the Wave Mechanism of the Electromagnetic Theory of Light", American Journal of Science 5 (Fourth Series), May 1898, p. 343-348 - showed an interpretation of Maxwell's electromagnetic wave equations that could "make the wave run backward". His paper was ignored, but it may have been the first indication of what today in nonlinear phase conjugate optics is known as the time-reversed EM wave.

In the early 1970's Western scientists discovered a strange thing in the open Soviet literature: the production of a time-reversed (TR) wave in nonlinear optics. Indeed, such a wave is a solution to the wave equation, and so the solution applies to all manner of waves (it has been accomplished, for example, with sound waves).

Time-reversed EM waves were controversial at first, since many physicists (even today!) naively equate time reversal with the science fiction notion of "traveling backwards in time". It is nothing of the sort, of course. For an object to travel backwards in time, the entire universe sans the object would have to be time-reversed to a previous state, so that the "present object" is seen as being present in a "past state of the universe". Time-reversing a single wave means that only the single wave is seen by the external observer to be affected by the reversal process. He sees (in his own forward time) a successive series of spatial positions of the reversing wave. He sees the rest of the universe moving forward in time in a normal fashion.

Since time is not observable, we simply see such a TR wave, not as time-reversed, but as spatially-reversed. (We see a time-reversed particle as both spatially-reversed and charge-reversed.) Though much about time reversal is still unsettled, it is a legitimate concept and has been in physics for decades. (See Robert G. Sachs, "The Physics of Time Reversal", University of Chicago Press, Chicago, 1987, for a broad and comprehensive coverage of the role of time-reversal in physics, and its clear distinctions from space reversal and velocity reversal. For an excellent introduction to the nonlinear optics time-reversed EM wave, see David M. Pepper, "Nonlinear Optical Phase Conjugation", Optical Engineering, 21 (2), Mar./Apr. 1982, p. 156-183.)
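The precise sense in which the nonlinear-optics literature speaks of "time reversal" can be shown in one line (a standard identity, added here for orientation). For a monochromatic scalar wave, conjugating the complex amplitude is equivalent to reversing the time argument:

E(z,t) = \mathrm{Re}\left[A e^{i(\omega t - kz)}\right] \quad\Rightarrow\quad E_{pc}(z,t) = \mathrm{Re}\left[A^{*} e^{i(\omega t + kz)}\right] = E(z,-t).

The conjugate wave retraces the path of the original, which is why the external observer sees a spatially-reversed wave rather than any "travel into the past".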

So today we are aware that a time-reversed EM wave can readily be produced. The present author has already pointed out that an EM wave carries both energy and time, and that a time-reversed EM wave has both its energy and time content reversed in sign. Such a TR wave carries negative energy and negative time. So a normal (forward-time) photon must be considered as comprised of (+ΔE)(+Δt), while an anti-photon (time-reversed photon) must be considered by the external observer as comprised of (-ΔE)(-Δt). (See Bearden, "Extraordinary Physics" in: "AIDS: Biological Warfare", Tesla Book Co., Greenville, Texas 1988, p. 74-203.) For important involvement of negative time/negative energy in the nucleus, see C.W. Rietdijk, "How Do 'Virtual' Photons and Mesons Transmit Forces Between Charged Particles and Nucleons?", Foundations of Physics, 7 (5-6), June 1977, p. 351-374. As early as 1973, the present author pointed out the involvement of negative time in mass; see Bearden, "Quiton/Perception Physics: A Theory of Existence, Perception and Physical Phenomena", NTIS, AD 763210, 1973. For a beautiful consideration of negative energy in a theory of gravitation, see Frederick E. Alzofon, "Antigravity With Present Technology: Implementation and Theoretical Foundation", in AIAA/SAE/ASME Joint Propulsion Conference, 17th, Colorado Springs, Colorado, July 27-29, 1981, New York: American Institute of Aeronautics and Astronautics Report Number AIAA-81-1608, 1981.

In an atom, normally photons are emitted by radiation from negative charges, while anti-photons are emitted by radiation from positive charges - such as the positively charged nucleus of an atom.

EM mixing stress and time reversal can be utilized to create an amplified time-reversed replica of a small input signal wave. When EM wave mixing stress is rhythmically applied to the atom, a scalar EM stress wave system is formed with zero E and B vector resultants. This wave passes through the electron shells and pumps the nucleus to an excited state. Input of another EM wave to the atom modulates the scalar pump wave, in turn modulating the pumping of the nucleus and accomplishing 4-wave mixing. The time-reversed (positively-charged) nucleus acts as a pumped phase conjugate mirror and emits a phase conjugate (time-reversed) EM wave, which travels out of the nucleus as a modulation upon the scalar pump wave "bridge to the outside".

As is well-known in four-wave mixing, the emission of a phase conjugate replica does not change the momentum of the mirror; i.e., it does not cause recoil. When it strikes another stressed atom, however, it modulates the stressing scalar waves and penetrates into the nucleus, where it is absorbed. Absorption of the negative energy/negative time wave in the nucleus does cause recoil negatively! Thus all atomic nuclei are continually being drawn to each other by negative recoil from four-wave mixing reactions. This is the genesis of gravity.
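For orientation (added here; this is the textbook description of degenerate four-wave mixing, e.g. in Pepper's review cited elsewhere in these notes, not a formula from the present paper), two counterpropagating pump waves E1 and E2 and a probe wave E3 drive a third-order polarization in the mirror medium that radiates the phase-conjugate replica E4:

P^{(3)} = \epsilon_0 \chi^{(3)} E_1 E_2 E_3^{*}, \qquad E_4 \propto E_3^{*},

with E4 counterpropagating against the probe - the "time-reversed replica" referred to in the text.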

Of course, since time is not observable, we do not observe reversed time - we do not at all see any sort of "travel into the past", as explained previously. In a universe moving in positive time, we will simply see the time-reversed wave as spatially reversed - since time itself is not observable - and exhibiting very peculiar behaviour and negentropy by proceeding from disorder back to order. We will also see such a TR wave converging upon its path rather than diverging, a characteristic which is itself a move from disorder to order, and negentropic.

As can be seen, when we include TR waves, the second law of thermodynamics (which assumes only positive-time EM energy) must be reversed, so that it becomes the law of negentropy. The present second law is only half the case; addition of the law of negentropy completes the other half of it.

Note also that random time-reversed EM waves (time-reversed "heat"), when added to normal heat, cool the region (reduce the algebraic size of the positive heat). They reduce the disorder of the region by multi-photon (multi-wave) mixing. Electrostatic cooling should be reexamined in light of this characteristic union of disordering and ordering.

29. The negative energy solutions and potentials of quaternion theory are particularly interesting, though they have been little pursued by theorists. It is also to be highly regretted that very early work on "reversed EM waves", such as the paper by Barus, ibid., was not vigorously followed up at the time.

30. As will be seen, the scalar component of the quaternion can infold and capture the stress energy of a zero-translation-resultant electromagnetic stress system, which constitutes the capture of an electrogravitational potential. Regular periodic oscillations (in magnitude, relative components, phase, etc.) of this potential constitute powerful standing gravitational waves of local curvature in space-time. Direct and significant local general relativity (GR) then exists in the laboratory, and direct GR experiments may readily be conducted.

31. In Heaviside EM theory, we are taught to discard zero translation resultant electromagnetic stresses. We are taught to discard those aspects of EM that: (1) form gravitational and inertial effects, and (2) are capable of directly reaching and engineering the atomic nucleus in a controlled fashion. Consequently, our present crude "engineering" of the nucleus is largely restricted to introducing a whole neutral particle or violently striking the atom with a speeding particle "hammer". With scalar EM we should be able to tune, change and control the nucleus like fine-tuning a precision watch.

32. See also James D. Edmonds Jr., "Quaternion Quantum Theory: New Physics or Number Mysticism?", American Journal of Physics, 42 (3), Mar. 1974, p. 220-223. See also his paper, "Maxwell's Eight Equations As One Quaternion Equation", American Journal of Physics, 46, Apr. 1978, p. 430-431.

33. See also Dyson, ibid. and Josephs, ibid. Also, Dr. Henry Monteith of Sandia National Laboratories has independently discovered that Maxwell's quaternion theory contained a unified field theory of gravitation and electromagnetics. See Monteith, "Dynamic Gravity and Electromagnetic Processes: Parts I and II", July 1988. See also Monteith, "Visualization and Duality in Mathematical Physics", Sandia National Laboratories, Albuquerque, April 15, 1986. Monteith has extended quaternion theory to include the hyperbolic quaternion, and has shown that his extended theory contains both spinor and twistor theory as subsets, and is a full theory of anisotropic space-time. Presently, he is preparing a major book on this subject, and he may very well be the scientist who writes the great new unified EM-G field theory so long sought by physicists.

34. See papers by T.E. Bearden, published by the Tesla Book Co., POB 1469, Greenville, Texas: "Comments on the New Tesla Electromagnetics: Part I: Discrepancies in Present EM Theory", 1982; "Part III: Clarifying the Vector Concept", 1983; "Part IV: Vectors and Mechanisms Clarified", 1983; "Solutions to Tesla's Secrets and the Soviet Tesla Weapons", 1981; "Soviet Weather Engineering Over North America", 1-hr. videotape, 1985; "Star Wars Now! The Bohm-Aharonov Effect, Scalar Interferometry, and Soviet Weaponization", 1984; "Fer-de-Lance: A Briefing on Soviet Scalar Electromagnetic Weapons", 1986; Chapter 4: "Extraordinary Physics" in "AIDS: Biological Warfare", 1988, p. 74-203. See also Bearden, "Tesla's Electromagnetics and Its Soviet Weaponization", Proc. 1984 Tesla Centennial Symp., International Tesla Society, Colorado Springs, Colorado 1984; Bearden, "Soviet Phase Conjugate Weapons: Weapons That Use Time-Reversed Electromagnetic Waves", Bulletin, Committee to Restore the Constitution, POB 986, Ft. Collins, Colorado 80522, Jan. 1988.

35. Of interest in EM theory is the appearance of closed circuital fluxes of EM energy in a region, which has bothered a very great number of physicists and electrical engineers, including Maxwell and Heaviside. Heaviside's derivation of the Poynting vector, with a vector G term describing these "closed-loop energy traps", was published in The Electrician on Feb. 21, 1885. Heaviside wrestled with this G-vector, but dismissed it as an unnecessary and useless introduction of an auxiliary circuital flux. Actually, such a trapped closed-loop energy flow constitutes a special kind of dynamic structure internal to an electromagnetic potential, in the view of the present author. Further, since this EM potential has a definite deterministic pattern and structure, it may be regarded as an artificial potential, in contradistinction to a normal potential with a random energy flux structure. If this view is correct, then Heaviside (and other electrical physicists, subsequently) may again have discarded one part of electrogravitation - because, after all, in modern general relativity, gravitation primarily consists of a number of potentials, trapped energy density of vacuum constitutes a curvature of space-time, and closed circuital fluxes of EM energy represent dynamic internal energy-density structures.

Such closed loop energy flows actually exist and are definitely real.

For example, the Earth's atmosphere experiences E-field lines pointing radially downward, and magnetic field lines directed from the Magnetic North Pole to the South Pole. The E × H expression then gives a perpetual energy flow from East to West, in closed loops, even when G = 0. H. Skilling ("Fundamentals of Electric Waves", John Wiley, New York, 1948, p. 132) called the idea that such loops have physical significance "absurd". Richard Feynman ("Lectures On Physics", Vol. 2, Addison-Wesley, Reading, Massachusetts 1964, p. 17-5 to 17-6 and 27-11) showed that such an energy flow is required by the conservation of angular momentum. It has also been shown for the Earth (E.M. Pugh and G.E. Pugh, "Physical Significance of the Poynting Vector in Static Fields", American Journal of Physics 35, Feb. 1967, p. 153-156) that Feynman's thought experiment is quantitatively correct.
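As a rough numeric illustration (added here; the field values are typical textbook magnitudes, and the sense of circulation depends on the sign conventions chosen, so this is a sketch rather than a result from the paper), one can evaluate S = E × H for fair-weather ground-level values:

# Rough magnitude of the static Poynting flux S = E x H at the
# Earth's surface, using assumed textbook values.
import numpy as np

mu0 = 4e-7 * np.pi                  # vacuum permeability, H/m
E = np.array([0.0, 0.0, -100.0])    # ~100 V/m fair-weather field, downward (-z)
B = np.array([0.0, 2e-5, 0.0])      # ~20 uT horizontal geomagnetic component (+y)

H = B / mu0                          # magnetic field strength, A/m
S = np.cross(E, H)                   # Poynting vector, W/m^2

print(S)                             # horizontal, azimuthal flow
print(np.linalg.norm(S), "W/m^2")    # on the order of 10^3 W/m^2

The striking point is the magnitude: a steady circulating flux on the kW/m^2 scale that transports no net energy anywhere, exactly the kind of closed circuital flow discussed above.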

36. E.T. Whittaker, "On an Expression of the Electromagnetic Field Due to Electrons By Means of Two Scalar Potential Functions", Proc. Lond. Math. Soc., Series 2, Volume 1, 1903, p. 367-372.

37. Even today it is still in vogue in physics and electrical engineering. Specifically, engineers almost never try to design equipment that utilizes potentials in the absence of the force fields. There are a few notable exceptions, of course. The author and Frank Golden did work in the 1970's with "free A-field" equipment developed by Golden, and Dr. William Tiller did important theoretical work in curl-free vector potentials (free A-field). See his US Patent No. 4,447,779, "Apparatus and Method For Determination of a Receiving Device Relative to a Transmitting Device Utilizing a Curl-Free Magnetic Vector Potential Field", Jan. 31, 1981; US Patent No. 4,429,280, "Apparatus and Method For Demodulation of a Modulated Curl-Free Magnetic Vector Potential", Jan. 31, 1984; US Patent No. 4,432,098, "Apparatus and Method For Transfer of Information By Means of a Curl-Free Magnetic Vector Potential Field", Feb. 14, 1984. For an exhaustive discussion of the Aharonov-Bohm effect (which establishes the reality and primacy of the potentials), see S. Olariu, "The Quantum Effects of Electromagnetic Fluxes", Reviews of Modern Physics, 57 (2), Apr. 1985, p. 339.

38. Indeed, so far as is known to the present author, there is still not a single EM textbook that even recognizes and addresses the issue of whether or not a zero-resultant force vector system can be exclusively equated to - and totally replaced by - a zero translation vector resultant. The reason may be that, unconsciously, physicists have wished to avoid the incredible implications of infolded and inwardly structured EM energy (as perhaps witnessed by the rather short shrift given to David Bohm's fundamental and revolutionary "hidden variable" theory of quantum mechanics). Not only does one face the implications of the internal structuring of EM energy, but one also faces the implications of curving local space-time (violating Einstein's general relativity) and deterministically substructuring that curvature of space-time, vacuum potentials, and the very vacuum itself.

This profoundly affects one of the fundamental assumptions of quantum mechanics: that the nature of quantum change is totally statistical. With internally structured potentials and direct control and manipulation of "hidden variables", one can engineer quantum change itself, even before collapse of the wave function. That is, one can speak of engineering physical reality itself.

Indeed, one would now be dealing with physics on a notion of the "information content" of hidden physical processes, where the hidden informational content of interacting energies can produce startling and unusual phenomenology, including violation of all present macroscopic conservation and exclusion laws. Very strange effects are already known in quantum mechanics, which conceivably may be due to the interaction of such infolded information structures. For example, one can sometimes even influence or decide the outcome of at least one type of experimental interaction after it has apparently already happened. In the two-slit experiment, one can wait until after the interactions with the electron are completed, and still select whether the electron will exhibit a classical particle or a quantum interference (wave) nature in the interaction. See, for example, John Archibald Wheeler, "The 'Past' and the 'Delayed-Choice' Double-Slit Experiment", in A.R. Marlow, ed., "Mathematical Foundations of Quantum Theory", Academic Press, N.Y., 1978.

Soviet scientists have particularly focused on the infolded structure of electromagnetic waves, referring to this structure as the "information content" of the fields. They have intensely applied this approach to the study of biological systems; for example, see N.D. Devyatkov and M.B. Golant, "Prospects for the use of millimeter-range electromagnetic radiation as a highly informative instrument for studying specific processes in living organisms", Soviet Technical Physics Letters, 12 (3), Mar. 1986, p. 118-119; see also N.D. Devyatkov (ed.), "Applications of low-intensity millimeter-wave radiation in biology and medicine" (in Russian), IRE Akad. Nauk. SSR, Moscow 1985. Further, each type of cellular disease has its particular EM radiation structure; it has been shown that the EM radiation structure (the EM information) emitted by diseased cells is capable of inducing that same disease physiology and symptomology in distant cells. See Vlail Kaznacheyev, "Electromagnetic bioinformation in intercellular interactions", PSI Research, 1 (1), Mar. 1982, p. 47-76. It follows that time-reversing (phase-conjugating) the mitogenetic "disease" information signal could provide a "healing" signal for a specific cellular disease condition (see Bearden, "AIDS: Biological Warfare", 1988 for an extended discussion, and appreciable details of the Priore device which utilized such an approach to demonstrate nearly 100% cures of terminal cancers, leukemias, and other diseases in laboratory animals).

The weapon implications using modulated electromagnetic carriers are obvious, and it is significant that: (1) over-the-horizon (OTH) beams from giant Soviet microwave OTH radars continually intersect over North America; (2) the world's greatest expert in EM induction of cellular disease at a distance - V. Kaznacheyev - is associated with two secret institutes in the outskirts of Moscow which produce microwave-directed energy weapons; and (3) extensive health changes have occurred over the decades in personnel in the US Embassy in Moscow, where weak microwave radiation has been beamed against the building since the early 1950's. Actual measured EM field data inside the Embassy reveal a strong correlation between the locations where induced health problems in Embassy personnel occurred, and the locations where the EM force fields from the Soviet microwave radiation were minimal or zero. Note that the areas where EM force fields are absent or minimal represent those areas where the potentials are strongest. The high correlation of disease induction with those specific areas strongly indicates that the Soviets have deliberately used structured EM potentials in the microwave radiation to induce diseases in Embassy personnel. It is obvious that this has been a continuing test stimulus to see (by US response at the site or lack of it) whether or not we are knowledgeable in scalar EM and in structured EM potential disease induction technology and weapons. Installation of aluminum screens over the windows merely decreased the force field components, not the potentials. Obviously we have continually certified our ignorance of scalar EM.

39. The physical interpretation of the zero vector is interesting. To the external observer, a zero translation vector applied to an object or point merely means the absence of observed translation of that object or point. If, in addition, no infolded finite vector components exist in the zero-vector, then no internal stress due to the zero-vector exists internal to the object or point. If, on the other hand, infolded finite vector components exist in the zero-vector (that is, if it is a zero-vector-resultant system of multiple non-zero translation vectors), then internal stress due to the zero-vector system exists internal to the object or point. In other words, there are two kinds of zero-translation vectors: (1) those that have no infolded internal finite structure, and (2) those that do have an internal, infolded finite structure. One of these zero vectors is stress-free, while the other is a stressing system. The latter class of zero translation vector constitutes a potential, contains a massless charge, and represents a direct curvature of local space-time with a deterministic, infolded structure that affects a surrounding region of space-time.

40. See also R. Chen, "Cancellation of Internal Forces", American Journal of Physics, 49(4), Apr. 1981, p. 372 for a discussion of summation vectors and internal vectors. Internal forces occur in equal and opposite pairs (i.e., as stresses), so they contribute nothing to the sum (i.e. to external translation).
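A trivial numeric sketch (an addition, not from Chen's paper) shows the distinction: the vector sum can vanish while the internal stress does not.

# Two equal and opposite forces on the ends of a rod: the resultant
# (external translation) is zero, but the pair still stresses the rod.
import numpy as np

forces = [np.array([+1.0, 0.0]), np.array([-1.0, 0.0])]   # newtons

resultant = sum(forces)                                   # -> [0. 0.]
internal = sum(np.linalg.norm(f) for f in forces)         # -> 2.0

print(resultant, internal)  # zero translation, nonzero internal stress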

41. Unfortunately, Einstein studied the prevalent Heaviside version of Maxwell's electromagnetic theory in university. Therefore he studied only a subset, and one in which EM and G are mutually exclusive a priori. Since in that version the potentials were regarded as mere mathematical conveniences rather than as curvatures of space-time, he would of necessity assume that EM forces did not curve local space-time. After all, EM forces simply radiate away as EM radiation, fleeing at the speed of light. So no EM force was going to be around long enough to warp space-time.

Accordingly, Einstein was left with the weak gravitational force between masses as the only force with which to curve space-time. The mass-attraction G-force is so weak that only adjacent to a stupendous collection of mass - such as the Sun or a star - would there be sufficient space-time curvature to even detect. Einstein reasoned that the observer and the laboratory would never be on the surface of the Sun or at the surface of a star; consequently, he assumed that - where the observer and the laboratory were located - the local frame would be a Lorentz frame and local space-time would never be curved. In other words, Einstein did not write a general theory of curved (anisotropic) space-time at all. Instead, he wrote a severely restricted subset of such a theory. He wrote a sort of special relativity with distant perturbations, where all the "general" relativity occurs at an appreciable distance from the observer, and then only at or near a huge collection of mass.

This had several results (considered advantages by orthodox scientists). (1) It strongly implied that one would never have a direct experimental science of general relativity on the laboratory bench. After all, if the local space-time is uncurved, it means there is no local general relativity. (2) It assumed that it was impossible to utilize the far stronger EM forces (which are some 10^36 to 10^42 times as strong as the G-force) to make gravitational potential. And with the denigration of the EM potentials as having no physical reality, Heaviside's force-field-oriented EM theory diverted one directly away from considering the constitution of EM potentials as having any relevance to anything physical. (3) It saved the sacrosanct conservation laws, which had been raised to a dogma.
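The quoted range can be checked from elementary constants (a standard estimate, added here). For two protons the Coulomb repulsion exceeds the Newtonian attraction by about 10^36, and for two electrons by about 10^42, independent of separation since both forces scale as 1/r^2:

# Ratio of Coulomb to gravitational force between two identical particles.
import math

e    = 1.602e-19    # elementary charge, C
eps0 = 8.854e-12    # vacuum permittivity, F/m
G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
m_p  = 1.673e-27    # proton mass, kg
m_e  = 9.109e-31    # electron mass, kg

coulomb = e**2 / (4 * math.pi * eps0)
print(f"protons:   {coulomb / (G * m_p**2):.1e}")   # ~1.2e36
print(f"electrons: {coulomb / (G * m_e**2):.1e}")   # ~4.2e42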

Ironically, after assuming that local space-time was never curved, Einstein spent the rest of his life futilely striving to get electromagnetics back into the general relativity fold, to form a unified field theory. He failed - never realizing that it was his own assumption of a locally-uncurved space-time that doomed all his efforts to failure.

In the West, Einstein's severely restricted general relativity has itself become tantamount to dogma, and any challenge to the sacrosanctity of the conservation laws results in immediate alarm. This is not true at all in the Soviet Union, where leading academicians regularly publish papers detailing aspects of an unrestricted general relativity, where local space-time can be curved and where all conservation laws can be violated.

For example, see A.A. Vlasov and V.I. Denisov, "Einstein's formula for gravitational radiation is not a consequence of the general theory of relativity", Theoretical and Mathematical Physics, 53 (3) June 1983 (English translation; Russian publication Dec. 1982), p. 406-418. Quoting: "... in general relativity there are no energy-momentum conservation laws for a system consisting of matter and the gravitational field."

See also V.I. Denisov and A.A. Logunov, "New theory of space-time and gravitation", July 1982, p. 3-76. This paper (p.3) points out that "... the gravitational field in general relativity is completely different from other physical fields and is not a field in the spirit of Faraday and Maxwell.".

A 1984 Soviet paper by senior Russian physicist G. Yu. Bogoslovsky, "Generalization of Einstein's Relativity Theory for the anisotropic space-time", is also very relevant. See also V.I. Denisov and A.A. Logunov, "The inertial mass defined in General Relativity has no physical meaning", preprint p. 0214, Institute of Nuclear Research, USSR Academy of Science, Moscow, 1981.

For documentation of a near-conspiracy in the West against refutation of Einstein's restricted general relativity, see Ruggero Maria Santilli, "Ethical Probe on Einstein's followers in the U.S.A.: An Insider's View", Alpha Publishing, POB 82, Newtonville, MA 02160, 1984.

The assumed mutual exclusion of EM and G can be theoretically shown to be false. See R.M. Santilli, "Partons and Gravitation: Some Puzzling Questions", Annals of Physics, 83 (1), Mar. 1974, p. 108-157. It can also be experimentally proven to be false.

For some years John Hutchison of Vancouver, Canada has performed experiments where he places a sample (material or object) between two giant Tesla coils, then violently activates the coils. The two coils provide "strong bucking EM force fields" into and onto the sample, with many, many frequencies and randomly varying multi-wave interactions. The irradiation also produces strong, fluctuating ELF components, in and on the central object. Gravitation in a mass varies as a function of the magnitude of the Δt carried by the interacting pump photons, hence inversely as the energy and frequency of the pump photons.

When phasing conditions and target internal conditions are just right, nonlinearities in the central target object act to a certain degree as a nonlinear medium of sufficient reflectivity to be considered a pumped phase conjugate mirror (though certainly one of very low efficiency). Under fortuitous conditions, the object is levitated, since its nuclei are being pumped with an ELF scalar wave and forced to produce a great deal of excess negative time. In negative time, masses repel rather than attract; hence the more Δt in the photons, the more antigravity produced and built up in the nucleus.

While Hutchison's experiments are relatively uncontrolled, he has produced verifiable results: for example, a major German laboratory has found alterations in the metal of his samples that were previously unknown, and which cannot be duplicated by any other known procedure. His approach is essentially no cruder than the "get a bigger hammer" approach of high energy physics; he just does not have the facilities, funds, and team of supporting scientists to meticulously instrument his results and stabilize his fields. And he certainly has suffered great derision from orthodox engineers and scientists who do not at all understand the principles utilized in his experiments. Nevertheless, Hutchison is right and all the deriding pundits are wrong.

42. Utilizing the microstructure of the vacuum is something which orthodox scientists have never even tried. For confirmation, see Tsung Dao Lee, "Particle physics and Introduction to Field Theory", Harwood Academic Publishers, N. Y., 1981, Second printing with corrections, 1982, p. 1957.

43. The sine-squared wave is of extraordinary importance. Its characteristics were discovered experimentally and independently by John Bedini, as a special means he utilized to greatly enhance and control EM effects on cells and cellular structures. In a nonlinear medium such as living tissue, an ordinary EM sine-squared wave of appropriate frequency causes generation of its own phase-conjugated replica in the body. The two waves are locked together by the modulation effect, and form a sine-squared scalar EM wave which penetrates widely throughout the body, even to the atomic nuclei, with greatly decreased power levels required. By modulating this wave with specific "healing photons" designed for a specific disease, internal EM healing via structured EM potentials (i.e., by the specifically-tailored information content of the EM potentials) can be introduced throughout the body's own "cellular information system". Photobiology, little known in the West, nevertheless has great promise and offers a potential for healing a great many diseases presently impossible or difficult to cure.
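One elementary property worth recording here (a standard trigonometric identity, added for reference and not taken from Bedini's work) is that a sine-squared wave is not a pure tone: it carries a steady offset plus a double-frequency component,

\sin^2(\omega t) = \tfrac{1}{2}\left[1 - \cos(2\omega t)\right],

so in a nonlinear medium it behaves differently from a plain sinusoid of either frequency.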

44. Along the lines developed in Bearden, "AIDS: Biological Warfare", Tesla Book Co., 1988.

45. For a comprehensive engineering overview of the theory of four-wave mixing, see David M. Pepper, "Nonlinear Optical Phase Conjugation", Optical Engineering, 21 (2), Mar./Apr. 1982, p. 156-183.

46. For a detailed, cautious overview of time-reversal in physics in general, see Robert G. Sachs, "The Physics of Time Reversal", University of Chicago Press, Chicago, 1987.

47. For a useful discussion of the theory of parametric oscillation, see V.V. Migulin et al, "Basic Theory of Oscillations", Ed. V.V. Migulin, translated from Russian by George Yankovsky, Mir Publishers, Moscow, 1983 (revised from the 1978 Russian edition).

48. See Bearden, "Extraordinary Physics", ibid. for a discussion of negative time.

49. A close associate, John Bedini, has accomplished such transmutation of elements in the laboratory in proprietary experiments. Indeed, transmutation can be accomplished by scalar EM means at extremely low power and energy - at levels so weak that living systems can and do accomplish limited transmutation of elements. Louis Kervran was a Nobel nominee in 1977 for proving just that. For example, see his "Biological Transmutations", Crosby Lockwood, London, 1972; "Transmutations Biologiques", Librairie Maloine, Paris, 1962; "Transmutations a faible energie (naturelles et biologiques)", Librairie Maloine, Paris, 1972.

50. For an extended discussion, the reader is again referred to Bearden, "Extraordinary Physics" in "AIDS: Biological Warfare", 1988.

51. Maxwell himself was well aware of the importance of EM stress in the medium, though he had apparently not realized that this represented electrogravitational potential. For example, quoting from his "Treatise", Vol. 1, 3rd edition (New York, 1954), p. 10: "There are physical quantities of another kind which are related to directions in space, but which are not vectors. Stresses and strains in solid bodies are examples, and so are some of the properties of bodies considered in the theory of elasticity and in the theory of double refraction. Quantities of this class require for their definition nine numerical specifications. They are expressed in the language of quaternions by linear and vector functions of a vector."

Note that, since Maxwell assumed a material ether, he obviously assumed it to have such stress and strain characteristics, and knew that this situation was captured by the quaternions.

52. Santilli, "Ethical Probe... ", 1984.

53. Denisov and Logunov, "New Theory... ", ibid.; Vlasov and Denisov, ibid.; Bogoslovsky, ibid.

54. Denisov and Logunov, "The Inertial Mass... ", ibid.

55. See the several weapons references by Bearden, previously listed above.

Orgone energy as a motor force

Richard A. Blasband, M.D.
P. O. Box 0870
INVERNESS, California 94937
United States of America

In 1940, in the course of investigations into the nature of the emotions, the psychoanalyst Wilhelm Reich discovered a heretofore little known cosmic energy that functioned within biological systems as the life energy. Further investigations over the next seven years revealed that this energy, "orgone energy", could be accumulated from the atmosphere, concentrated within an enclosure, and used as a motor force.

In 1937, believing that human emotions were essentially bioelectrical in nature, Reich studied changes in electrical skin potentials on subjects in various states of emotive expression (1). This study confirmed his clinical impression and theoretical concept that in emotional states "something moved" within the organism. This "something", however, while measurable in electrical terms, could not be electricity per se, as the intensity of the feelings felt and expressed by subjects in the study could not be accounted for by the few dozen millivolts registered on the skin surface. Furthermore, the application of electricity to the body was always perceived as alien and disturbing.

In order to further delineate the nature of the "something", Reich studied biological energy sources, foodstuffs, under the microscope. He discovered that all foods, regardless of their nature, when subjected to boiling, broke down into microscopic vesicles that moved from place to place, showed internal pulsation, and had a bluish glimmer in their transparent, liquid content. Reich named the vesicles "bions" (2). Reich then found that inorganic materials such as earth, iron filings, carbon and sand, when subjected to autoclavation or incandescent heat, would also break down into bions similar in appearance to those obtained from organic substances. Furthermore, when placed in nutrient media, the carbon and sand bions could be cultured.

The bions showed some remarkable properties. These included the ability to immobilize or destroy bacteria, to produce a strong inflammatory reaction when placed close to the skin, to luminate, and to charge rubber with static electricity. The atmosphere in Reich's laboratory in Oslo was always "heavy"; metallic instruments spontaneously became magnetized; photographic plates spontaneously fogged; Reich tanned, even in the winter, and felt unusually strong and well, except for an inflammation of the eyes that was apparently related to observing the bions through the microscope.

Orgone energy from bions and the atmosphere

Reich was, however, concerned that the cultures might be radiating radioactively. He consulted a radiation specialist, who ruled out radioactivity. Experiments over the following months convinced Reich that the radiation from the bion cultures could not be accounted for by any known conventional form of energy. He was forced to conclude that he was working with a natural force previously unknown to Western science. He named it "orgone energy" because of its ability to be absorbed by organic materials and the fact that his research began with the clinical study of the function of the orgasm in humans.

The orgone energy luminated in the form of "purple fogs" and fine, lightning-like, whitish sparks. In order to better visualize the lumination, Reich placed bion culture dishes inside a metal-lined, wooden box, thinking that the metal would reflect the radiation to the inside of the box, thereby making it more visible, while the wood would prevent the radiation from escaping. Through a glass plate in one side of the box he was, indeed, better able to see the lumination. To his surprise, however, he found the light effects persisted after removal of the cultures and even after thoroughly airing and washing the enclosure. It was then that Reich realized that the orgone energy was everywhere and that in some way the structure of the enclosure made it possible to concentrate the energy from the atmosphere.

Through further experimentation he found that orgone energy was attracted to and repelled from metals, and absorbed by non-metallic substances. Therefore, an enclosure consisting of alternating layers of non-metallic and metallic materials, with the metal innermost, would establish a gradient of energy from the atmosphere to the interior of the enclosure. Most often the materials used were celotex, rock wool, steel wool and galvanized iron, although one could also use plastic, fiberglass and other metals (Figure 1).

Aluminum could be used in strictly physical experimentation, but could be toxic to living organisms. Reich later found by objective measurement that a box consisting of six alternating layers of material would concentrate orgone energy to eight times its level in the surrounding atmosphere.


Figure 1. Section of the basic design of an Orgone accumulator.

To = temp above accumulator; Ti = temp within; T = control (room temp); El = electroscope; arrows = direction of radiation. Size: 1 cubic foot

The interior of the "orgone energy accumulator", as Reich called the enclosure, felt warm and tingling even though the inner metal wall felt cold. This subjective impression was objectively confirmed by measuring the temperature within the accumulator. It was always warmer than the ambient air or a suitable control box, by several tenths of a degree up to two degrees Centigrade (3).

Reich knew the tremendous significance of this finding. It was a violation of the Second Law of Thermodynamics, which was considered inviolable by classical physics. The accumulator could raise its own temperature without work being done to do so. A variety of controlled studies by Reich and, in recent years, by his students have confirmed this phenomenon (4,5,6).


Figure 2. "Background, cosmic" radiation at about 1000 volts.

Another means of objectifying the presence of an anomalous energetic force is the behaviour of a static electroscope placed within the enclosure. The so-called "natural leak" of charge from a statically-charged electroscope is significantly slowed down within the accumulator (7). No known classical electrical process can account for this phenomenon. Reich came to see static electricity as being a common manifestation of orgone energy. The natural leak from the electroscope within the accumulator is slowed because the electroscope discharges into a higher orgone concentration than exists in the outside air.

Slowing of the electroscopic discharge rate and the temperature increase within the accumulator (temperature within the accumulator minus temperature of the ambient air, To - T) parallel each other. Both are dependent on external energetic factors, most significantly the weather, and can be used to forecast coming weather changes.

The genesis of the orgone energy motor

In 1947, following seven years of investigation of the biological and physical properties of orgone energy, Reich acquired a Geiger-Muller field meter in preparation for studying the interaction between orgone energy and radioactivity. At his laboratory in Rangely, Maine, Reich found that the GM counter initially reacted normally, registering the background count caused by natural radioactivity and gamma radiation of cosmic origin. It was, however, unresponsive to proximity to orgone energy accumulating structures. Within a few days, inexplicably, the instrument appeared "dead", being unresponsive to background radiation and even to a small x-ray source.

The device was checked periodically, but remained completely unreactive until about two months later, when on routine check the pointer of the impulse recorder began instantly to rotate at the rate of one full turn per second, a great velocity for this device. This corresponded to about 100 impulses per second, an enormous reaction compared to the normal background count of 15 - 25 counts per minute. On further testing Reich obtained counts of six to eight thousand counts per minute (cpm), yielding 1.15 rotations per second, a continuous rotation of the recorder (8). At that time the highest count ever obtained with radioactive substances was 3000 cpm with that brand of GM counter. Reich realized that he was witnessing a possible motor force in orgone energy. The orgone energy was, somehow, through the GM counter, being transformed into electromagnetic and mechanical energy. By a detailed functional dissection of the GM effect and the use of special vacuum tubes to intensify the concentration of orgone energy, Reich later found the way to run an electric motor more directly on orgone energy.
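The figures quoted are internally consistent, as a quick conversion shows (an added cross-check using only numbers from the text, with the impulses-per-rotation factor of about 100 inferred from them):

# Cross-check of the counts quoted in the text.
impulses_per_rotation = 100          # inferred: 1 rotation/s at ~100 impulses/s
background_cpm = (15, 25)            # normal background, counts/min

cpm_at_1_rot = impulses_per_rotation * 60      # 6000 cpm at one rotation/s
print(cpm_at_1_rot / background_cpm[1],
      cpm_at_1_rot / background_cpm[0])        # ~240x to 400x background

print(7000 / 60 / impulses_per_rotation)       # 7000 cpm -> ~1.17 rotations/s
                                               # (text: 1.15 rotations/s)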


Figure 3. Geiger effect of orgone energy by cpm and voltage increases.


Figure 4. Self-charging capacity of orgone energy.


Figure 5. Reaction of a control tube 100 feet away from laboratory for one month.

Reich found that the metal outer cylinder of the counter tube of the GM device attracted orgone energy from the atmosphere and that the GM effect could be killed by simply removing the metal cylinder from the glass counting tube. The motor effect reappeared instantly upon replacing the metal cylinder or on putting the naked glass counter tube into an orgone energy accumulator. The effect would diminish before rainstorms and recover after a storm had passed through. This was consistent with earlier observations of temperature and electroscopic changes within the accumulator in varying weather conditions.

These and other observations convinced Reich that the motor effect occurred because the counter tube had "soaked up" orgone energy through constant exposure to the high ambient orgonotic charge in the laboratory. Since the counter tube consists of an inner metal cathode and an outer nonmetallic protective coating, the counter tube is essentially an orgone energy accumulator. The orgone energy within the counter tube was then excited to a lightning-like state of pointed rays by the electrical stimulus from the GM device. In this state, orgone energy could be counted by the device. A variety of control experiments demonstrated that the GM motor effect of high cpm could be explained only by the excitation of high concentrations of orgone energy within the counter tube.

Reich obtained a more sophisticated GM counter that permitted varying the voltage to the counter tube. Like the field meter it initially counted only the background radiation, but within three days registered 3600 cpm in bursts and at four weeks showed continuous rotation of the impulse counter, close to 2000 cpm at 1000 volts of excitation.

Figure 2 shows the power of concentrated orgone energy when compared to a radioactive source (radium), and so-called cosmic radiation (9).

Figure 3 shows the non-mechanical, functional qualities of orgone energy by the non-linear changes in cpm with increases in voltage (10).

Figure 4 shows the capacity of orgone energy to charge itself. Counts were made with the GM tube placed within a 1 cm lead and 1/4 cm iron cylinder within a one cubic foot orgone energy accumulator. The GM counter was operating during six consecutive minutes at a steady 950 volts. Note the sharp increase in impulses after 2 minutes without any additional voltage to excite the energy in the counter tube (11).

Figure 5 shows the reaction of a control tube kept 100 feet away from the laboratory for one month. Despite the distance it had soaked up sufficient orgone energy to yield a rotary motor effect by merely being in the energy field of the laboratory (12).

The classical view of the operation of a Geiger-Muller Counter is that radioactivity triggers the gas within the counter tube into an ionized state. The ionization then lowers the resistance to the passage of electricity between a cathode and anode within the tube. A circuit amplifies the electrical flow so that it may be read out and thus register indirectly the quantity of radioactivity passing through the counter tube. In the classical view then, the incident radiation indirectly produces the impulse which activates the recorder. Reich's next task was to determine whether or not this theory held true for the motor phenomenon with which he was working. Or, could it be possible, he asked, that atmospheric orgone energy impulses counted by the GM counter tube, were directly activating the electromagnetic system of the impulse recorder?

To answer this question Reich performed an ingenious series of experiments wherein he functionally dissected the orgone-charged GM system utilizing calibrated electroscopes and a volt-ammeter attached in a variety of ways to the counter tube and GM amplifier. In this way he found that the amount of energy coming from the tube ranged from 100 to 500 electrostatic volts, a tremendous amount of voltage, which could in no way be accounted for by classical ionization theory. He also found that the energy entering the amplifier from the counter tube was different from the energy leaving it, and that it was in the process of moving through the amplifier that the orgone energy was transformed into electromagnetic energy.

Reich felt that the motor reaction could be improved if he could simplify the whole system by eliminating everything that stood in the way of the direct transformation of orgone energy into a mechanical motor force. His first step was to try an orgone-charged, gas-free counter tube in the GM counter. This failed to produce any reaction. But when he used a specially constructed vacuum tube that functioned like an orgone energy accumulator (the "Vacor" tube), he got a powerful reaction. It was constructed with inner parallel aluminum plates, attached to the cathode and anode respectively. The vacuum was 1/2 micron of pressure, sufficient to rule out the presence of any gas.

After soaking in an orgone energy accumulator for several weeks, and despite the absence of gas, this tube luminated a deep blue colour when excited by an orgone-charged plastic rod. With excitation by an electrical tension of 100 to 1000 volts, the luminating colour in the tube went through changes identical to those seen as the night sky changes to dawn, and then to full daylight. It seemed very likely to Reich that dawn and daylight lumination on the planet were a result of excitation from the Sun, triggering changes in the orgone energy field of the Earth (13).

With the hook-up to the GM counter, the Vacor Tube yielded thousands of impulses per second at 350 - 500 volts of tension. This was much higher than the yield from the usual GM counter tube, which required 750 - 1000 volts to trigger 100 - 130 counts per second, at best. Elimination of the high voltage circuit between the Vacor Tube and the impulse counter permitted even higher counts to come through from the tube, up to 20 - 25,000 counts per second. An electroscope measured the tension between two aluminum plates in the Vacor Tube. It was an extremely high 34,000 volts.

In 1949 Reich reported his success with the Orgone Energy Motor Force:

On June 24th, 1948, at 1 p.m., I succeeded in setting a motor (Western Electric, KS-9154, Serial No. 1227) into motion by means of the Orgone Energy Motor Force which I had discovered by way of the Geiger-Muller counter on August 8th, 1947... An activated filament of electronic amplifiers, without any high voltage, is sufficient to transmit the ORGONOTIC MOTOR FORCE.

In order to set the Orgone Motor into motion, a certain function, called Y, is necessary. This function cannot be divulged at the present time.

The sources of orgone energy used hitherto are the following:

a) Orgone-charged Vacor tubes
b) Atmospheric Orgone
c) Earth Orgone
d) Organismic Orgone Energy

No material as is being used in the process of nuclear fission is required. The succession of impulses can be regulated. The sequence of impulses is even and continuous. The relation of the amount of used orgone energy to the tremendous reservoir of the Cosmic Energy Source is minimal.

The speed of the motor action can be regulated. It depends on:

a) the number of vacor tubes connected,
b) weather conditions in accordance with orgonotic functions found hitherto, such as temperature difference To-T, speed of electroscopic discharge, etc.,
c) Function Y.

The functions of the vacuum tubes (vacor tubes) refute the theories of "empty space." Field actions are due to the activity of the universal cosmic orgone energy. The strength of the energy field within the vacuum tube can be demonstrated and measured with a specific functional set-up (14).

Reich demonstrated the motor to reliable witnesses, including a reporter from a local newspaper. He died, however, without revealing the nature of function Y, because he felt the world was not prepared to assume responsibility for what would be an unlimited source of power.

Bibliography

1. Reich, W. "Experimental Investigation of the Electrical Function of Sexuality and Anxiety". The Journal of Orgonomy. 3:1. 1969.

2. Reich, W. The Cancer Biopathy. Orgone Institute Press. New York, 1948. p. 11.

3. ibid. p. 95.

4. Rosenblum, C.F. (pseudonym for C.F. Baker). "The Orgone Accumulator Temperature Difference: Experimental Protocol". J. Orgonomy. 6:1. 1972.

5. Blasband, R.A. "Thermal Orgonometry". J. Orgonomy. 5:2. 1971.

6. Seiler, H.P. "New Experiments in Thermal Orgonometry". J. Orgonomy. 16:2. 1982.

7. Reich, W. The Cancer Biopathy. p. 108.

8. Reich, W. "The Geiger-Muller Effect of Cosmic Orgone Energy (1947)". Orgone Energy Bulletin. 3:4. 1951. p. 201.

9. ibid. p. 231.

10. ibid. p. 230.

11. ibid. p. 232.

12. ibid. p. 233.

13. ibid. p. 249.

14. Reich, W. "A Motor Force in Orgone Energy". Orgone Energy Bulletin. 1:1. 1949. p. 7.

The Hutchison effect - a lift and disruption system

George D. Hathaway, P. Eng.
Hathaway Consulting Services
39 Kendal Avenue
TORONTO, Ontario M5R 1L5
Canada

The following may shed light on a most unusual phenomenon which we have called the "Hutchison Effect". It involves a very strange arrangement of technologies, including those of Nikola Tesla and Robert Van de Graaf. This is a topic very conducive to wandering, because it brings in all the most amazing kinds of effects one would love to have in one's basement: material levitating and floating around, steel bars breaking without a hand being laid on them, and all sorts of other weird and wonderful things.

Pharos Technologies Ltd. was a company formed by myself and a gentleman by the name of Alex Pezarro, who you may recall made a presentation at the 1983 2nd International Symposium on Non-Conventional Energy Technology in Atlanta. Alex talked about one of his pet projects, which was oil and gas discovery by novel means. In 1980, we formed this small company to try to promote what we then called the Hutchison Effect. We also termed it in our early presentations: LADS or the Lift and Disruption System. The following series of graphs were created in 1984 to present to various parties interested in funding this technology. The first graph indicates the topics covered in these presentations.

I INTRODUCTION HISTORY

L.A.D.S. IS CAPABLE OF:

- INDUCING LIFT AND TRANSLATION IN BODIES OF ANY MATERIAL

PROPULSIVE

- SEVERELY DISRUPTING MOLECULAR BONDS IN ANY MATERIAL RESULTING IN CATASTROPHIC DISRUPTIVE FRACTURING
- CAUSING CONTROLLED PLASTIC DEFORMATION IN METALS
- CREATING UNUSUAL AURORA-LIKE LIGHTING EFFECTS IN MID-AIR
- INDUCING APPARENT LARGE-SCALE MAGNETIC MONOPOLES IN METALS
- CAUSING CHANGES IN CHEMICAL COMPOSITION OF METALS
- OTHER LONG-RANGE EFFECTS

ENERGETIC

ALL AT LOW POWER AND AT A DISTANCE


The Lift and Disruption System, or the Hutchison Effect, is divided primarily into two categories of phenomena: propulsive and energetic. The system is capable of inducing lift and translation in bodies of any material; that is, it will propel bodies upwards and also move them sideways. There are actually four kinds of trajectories which can be produced, and I will explain these shortly. It also has very strange energetic properties, including severely disrupting intermolecular bonds in any material, resulting in catastrophic and disruptive fracturing, samples of which are described here. It is also capable of causing controlled plastic deformation in metals, creating unusual aurora-like lighting effects in mid-air, causing changes in the chemical composition of metals (it varies the distribution of the chemical content), and other long-range effects at distances up to around 80 feet (24 metres) from the central core of the apparatus - all at low power and at a distance.

I INTRODUCTION + HISTORY

II VISUAL EVIDENCE - FILM + STILLS

- UNPRECEDENTED ACTIVITY FROM A SINGLE SYSTEM

- OVERVIEW OF CROWDED, PRIMITIVE ORIGINAL LAB

- TRUE SYSTEM WITH MANY INTERRELATED PARTS

- STATE OF ORIGINAL SET-UP:

- POOR CONNECTIONS
- HAND WOUND COILS ETC.
- LACK OF INSTRUMENTATION

- DEVELOPED TOTALLY FORTUITOUSLY FROM EXPERIMENTATION WITH EARLY A.C. AND STATIC MACHINES

- DIFFERENCE BETWEEN ORIGINAL AND LATEST LABORATORIES

- BASED ON IDEA OF INDUCING "SWIRL" OR ROTATION IN E.M. FIELDS

- BASEMENT OF HOUSE - L.A.D.S. DRAWS MAXIMUM OF 1.5 KW. FROM HOUSE MAINS

- EARLY EVIDENCE OF POWER OF L.A.D.S.

- BEST LIFT EPISODES IN EARLY BASEMENT LAB

- PHASE 0 DEVELOPMENT OF PROGRAM BY PHAROS TECHNOLOGIES LTD.

- POOR PHOTOGRAPHIC RECORD IN EARLIEST TRIALS

- UNPREDICTABILITY OF L.A.D.S. IN EARLY TRIALS

- HIGHLIGHT - BURNOUT OF ARMATURE + FIELD COILS OF SABRE SAW

- RE-ESTABLISHMENT OF L.A.D.S. LAB UNDER PHASE 1 PROGRAM

- MANY MATERIALS CAPABLE OF BEING SELECTIVELY INFLUENCED

- SUCCESS AT RE-CONSTRUCTING L.A.D.S. IN NEW ENVIRONMENT

- INDEPENDENT QUALIFIED WITNESSES

The system is a single entity, made up of many discrete components. It has many interrelated parts, unfortunately continually being added to by the inventor. It was discovered fortuitously by Hutchison, who was experimenting with early Tesla systems and static machines such as Van de Graaf generators.

The earliest explanation was given by Mel Winfield of Vancouver, whose name may be familiar from Dr. Nieper's 1988 Congress in Germany. He suggested that the phenomena were due to a method of making the electromagnetic fields spin or swirl in some unknown way.

Pharos Technologies was involved in three phases of development, the first of which was in the basement of a house in Vancouver. This is where John Hutchison's original work was done. The mind-boggling collection of apparatus can be seen on the video (shown during the lecture and available from the publisher) and in Figures 11 and 12. That was the Phase 0 development. Phase I was when we stepped in with some money, took the equipment from the original location, and put it in a more reasonable setting. Phase II was a third location, prior to the equipment being dismantled and put into storage by John.

The main thing about this technology, apart from its unusual phenomenology, is that it is highly transitory. The phenomena come and go virtually as they please: one has to sit with the apparatus for anywhere from six hours to six days before actually seeing something occur. This makes it virtually impossible to interest someone in developing it or assisting with funding. You cannot ask a prospective backer to sit and wait, perhaps for six days, for a phenomenon they are interested in developing commercially, and perhaps see nothing at all. So one can imagine that we have had some difficulty in the past in financing this program.

Note in Figure 11 one of the Tesla coils in the foreground; the main coil is 4 1/2 feet (1.4 m) high. It was extremely difficult to get around in the first lab (Phase 0). The first laboratory in Vancouver was so densely packed with equipment that you could not find a place to put your foot down: you had to step around all sorts of objects on the floor.

Disruptive phenomena

In the video a bushing is shown breaking up. It was a steel bushing about 2 inches (5 cm) in diameter by 3 to 4 inches (9 cm) long. John still has that in his lab and I have some to show as well (Figures 1 and 2).

The next part of the video is well known, and I will try to explain some of its phenomenology. It starts with John warming up the system. To determine the optimum place for positioning the test objects, which will either take off or burst, he puts coins and bits of styrofoam where he believes the active zone is going to be. The first thing that happens is that a quarter ($.25 coin) starts to flip and vibrate. Now he knows he should concentrate on putting specimens in that zone, and he does so. We see some water in a coffee cup that appears to be swirling, although it is not: the surface is merely rippling by some electromagnetic means, and the coffee cup is dancing around on top of a yellow milk carton. It is another way for him to determine where the zone is. Then we see a flat file 8 inches (20 cm) long breaking apart. This file broke into four more or less equal-length sections. Normally, if you break a bar magnet, it breaks north-south, north-south, north-south, etc., so the parts tend to stick back together again. In this case the segments were magnetized the wrong way, by some phenomenon I do not understand, and they repel each other when they are put together at the breaks. This may be indicative of the development of large-scale monopolar regions of such intensity that they disrupt the material itself. It is as reasonable an explanation as I, or anyone else, have been able to come up with.

Lifting phenomena

We then proceed to document some lifting phenomena. The objects that are lifted in the first part of this section are on the order of a few pounds. All of them lift off with a twist. They spiral as they lift off. There has to be a particular geometry with respect to down (gravity) for them to take off. Some objects, if you lie them on their sides, won't take off. If you turn them on their ends, they will take off. The geometrical form of the objects, their composition and their relationship to their environment, the field structure around them that is being created by the device, all play a part in how these things take off.

There are four main modes of trajectory that these objects can follow if they do take off. The first is a slow looping arc, where the objects take off very slowly, over a couple of seconds, loop, and fall back somewhere else. It is almost as if the Earth moves underneath them while they are in flight, and they fall back in different locations. The second type of trajectory is a ballistic take-off: an impulse of energy at the beginning of the trajectory with no further power applied thereafter, and the object hits the ceiling and comes back down. A third type of trajectory is a powered one, where there appears to be continuous application of lifting force; I have some evidence of this taken from the video. The fourth trajectory is hovering, where objects just rise up and sit there. The objects can be of any material whatsoever - sheet metal, wood, styrofoam, lead, copper, zinc, amalgams - and they all either take off, or burst apart, or do nothing, which is what happens 99% of the time.

Lighting phenomena

Following that is a strange lighting phenomenon. This occurred only once, but fortunately while John was filming. Incidentally, this early film, with the most spectacular results observed, was taken by John himself in 1981. All of a sudden a sheet of iridescence descended between the camera and some of the apparatus, and one sees that sheet of light. It had a strange pinkish centre, hovered there for a while, and then disappeared. John thought he was hallucinating, but when the film was developed it turned out something was definitely there.

In this same video, we observe heavier objects taking off, including a 19-pound (8.6 kg) bronze bushing, and water in a cup that is dancing around, the surface of which is vibrating. There are no ultrasonic or sonic devices in this particular series of experiments. There are no magnetic components underneath or over top. There are no field coils underneath, over top, or anywhere within 6 feet (1.8 m). These images were taken while the apparatus was performing at peak, and show the best results from the earliest experiments.

Sometimes, instead of lifting objects, John will purposely try to destroy them. In one case, a 1/4" round rattail file rests on a plywood base, held down from taking off by two plywood pieces. Beside it are some quarter and penny coins. The file is glowing white hot, and yet there is no scorching of the wooden pieces which hold it down. Neither are any of the coins affected. This is explainable in terms of RF heating theory, because eddy-current heating can be confined to the surface, leaving the piece almost cool to the touch very shortly thereafter. It is still unusual that no conductive heat was transferred to the wood.

From time to time there are scorch marks on the boards from other experiments. The apparatus makes fire spontaneously in parts of the lab if you're not careful.

The original (Phase 0) lab set-up was primitive, crowded, had poor connections, and had hand-wound coils. However, the films that have most of the best lift episodes were done in this early set-up, drawing a maximum of 1.5 kilowatts continuously from house-mains.

Disruption effects


RESULTS OF PHYSICAL, CHEMICAL AND ENERGETIC ANALYSES

- A WEALTH OF CONFIRMATORY PHYSICAL SAMPLES INCLUDING:

WATER
ALUMINUM
IRON, STEEL
MOLYBDENUM STEEL
WOOD
COPPER, BRONZE
+COMBINATIONS OF ABOVE

- B.C. INSTITUTE OF TECHNOLOGY:

- HARDNESS
- BRITTLENESS & DUCTILITY
- OPTICAL MICROSCOPY

- ALL SHAPES, SIZES, AND MASSES

- B.C. HYDRO R/D LABORATORY:

- SCANNING ELECTRON MICROSCOPY
- ENERGY DISPERSIVE ANALYSIS

- CERTAIN MATERIALS SUBJECT TO CERTAIN INFLUENCES PREFERENTIALLY

- U. OF TORONTO DEPARTMENT OF METALLURGY:

- SCANNING ELECTRON MICROSCOPY
- ENERGY DISPERSIVE ANALYSIS (X-RAY)


- LOS ALAMOS TESTS

The disruption part of this Lift and Disruption System has produced confirmatory physical samples that include water, aluminum, iron, steel, molybdenum, wood, copper, bronze, etc., with many shapes, sizes and masses. Certain materials are subject to certain influences depending on shape, composition and other factors.

We have tested various pieces that have broken apart for hardness, ductility, etc. We have used optical and electron microscopes. We have taken SEMs with EDAs (Energy Dispersive Analysis) to determine the composition at various points.

Two samples of aluminum are shown: one in the centre of Figure 1, which is twisted up in a left-handed spiral, and one in Figure 2 on the left, which was blown into little fibers. Lying on the ruler in Figure 1, to the left of centre, is a molybdenum rod of the kind used in nuclear reactors. These rods are supposed to withstand temperatures of about 5,000° F. We watched these things wiggle back and forth, stopped the apparatus halfway through a wiggle, and that is the result. Figure 2 (left) shows the piece of cast aluminum that burst apart.

In general, Figure 1 shows a collection of pieces of metal that have been blasted apart or twisted. The largest piece (in the background) is about 12 to 13 inches long and two inches in diameter, of regular mild steel; a 3/8-inch-long part was blasted off the end and crumbled like a cookie. Fragments have been analyzed to have anomalously high silicon content, although the original material was not a silicon steel. The standing piece on the left is 5 - 6 inches tall and 1 1/4 inches in diameter. It is a piece of case-hardened steel. The case-hardening has been blown off at the top, and about 3/4" of it vapourized during an experiment. Then there are various pieces of aluminum and steel. On the right of Figure 2 is a boring bar; you can still see the old tool bit that John was using in it. It was on a shelf about 10 feet away from the centre of the apparatus, and he did not see it happen. It just bent up into a tight U and deposited a quantity of copper at the bend. The copper seemed to somehow magically come out of the solid solution, if it was ever in solution in the first place, and agglomerate as globs at the break. As far as the aluminum is concerned, it is a volume effect, not merely an eddy-current surface effect: the whole thing is blasted right through.

Figures 3 to 6 show some of the scanning electron microscope photos taken at the University of Toronto. Figure 3 shows an aluminum specimen at about 70 times magnification; the whole surface is torn apart, as if it had been gouged randomly by some mechanical means. It has not yet been smoothed, polished, and subjected to x-ray dispersion analysis. A piece of iron is shown in Figure 4; it was analyzed for composition, which showed anomalously high amounts of copper.

With a little higher magnification for Figures 7 and 8, we see what happens in a polished aluminum sample under the SEM. Figure 7 shows two main horizontal fracture zones.

This is a polished sample, which is why it looks clean. Notice the unusual globules forming (positions B and C). We examined these particular globules, and they are virtually pure elements: one is copper, another is manganese, and others are different elements. These globules seem to arrange themselves along planes, and these planes are no doubt the ones that split apart and delaminate into fibers.

Figures 9 and 10 show the relative elemental abundances at locations H and D of Figures 7 and 8. Normally, the aluminum comes out looking like Figure 9: the average is mostly aluminum, of course, but with a bit of copper in it. Yet Figure 10 shows an area around where the fractures occur, and we see we have actually located one of the copper blobs, plus some chlorine from our fingers (you usually see some chlorine and sodium from the salt on your hands if you touch samples). It is certainly telling us that something unusual is happening. I have not seen another apparatus which makes the alloying material in an alloy come out of the solid solution. Usually it is totally dispersed in the melt, but in this case we are "undispersing" it somehow.

The Pharos experimental set-up for the Hutchison effect

PHASE 1: PHYSICAL LAYOUT


This plan view shows the first (1983) set-up under Pharos' control.

The field-shaping unit is basically an elevated aluminum sphere about 11 inches in diameter. The essential ingredients of the power supply are two 15 kilovolt neon transformers. Large steel masses were all over the place. In his first and most effective experiments, John had a 400 kilohertz continuous wave generator instead of the small Tesla coil - basically a low-frequency radio transmitter, switched on for the operation, with a 3-foot whip antenna. Later he replaced it, likely because it broke, with the small Tesla coil, which sits about three feet off the ground and is about 1 1/2 feet high.

This lab was set up to try to attract some more funding and I personally put it together, trying to pick the essential bits of the apparatus out and assemble them myself. That is the lab from which a number of these samples came.

Spark gaps and tank circuits line one wall. There's a 21 kilovolt transformer in front of the inductors from a Picker X-ray machine which powers a number of these spark gaps. The gaps fire at 60 cycle rep rate. There is a double-ended "dumbbell" Tesla coil suspended from the ceiling. The large Tesla coil, the field-shaper, Van de Graaf generator, and a Tesla disruptive discharge coil are also shown. This latter is a double-ended, iron-core transformer. The distance is approximately 12 feet between the large Tesla coil and the small Tesla coil. Between them is what is called the active area, and that is basically a platform on which we put objects of whatever material we wish, and hope that they'll leap to the ceiling or burst apart. The main tuning control consists of several high-voltage variable capacitors and various inductors.

Figure 11 shows the lab that I set up in 1983. I admit it is rather messy. I tried to set it up exactly as John had set it up, and so I did not make nice connections, etc.; I wanted it to be just the same as what he had done, except that I tried to use a minimum number of components. The large Tesla coil is 4 1/2 feet tall (secondary), with a few thousand windings of number 27 or 30 enameled wire. It has a toroidal coil of about 12 gauge resting near its top. The Van de Graaf is about 250,000 volts DC maximum, with an approximately 11 to 13 inch diameter ball. Also visible are various tuning capacitors. You can see high-voltage transmitting caps of very large capacity and RF coils here and there. Overhead is the double-ended "dumbbell" Tesla coil with its electrodes and double toroid primary. Down below, out of sight, is a spark gap that snaps every 40 seconds or so, and in the back corner is the small Tesla coil. It is a double 807 tube Tesla coil with a nice spot frequency of about 760 kilohertz. The large Tesla coil, when powered normally, resonates at somewhere around 330 kilohertz.
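As an aside for readers who want to relate the quoted frequencies to coil values: the resonant frequency of an LC circuit is f = 1/(2*pi*sqrt(L*C)). The sketch below uses purely hypothetical inductances and capacitances, chosen only to land near the figures quoted above; the actual values for these coils were never measured.

import math

def resonant_frequency(L_henry, C_farad):
    # Self-resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C))
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

# hypothetical coil values chosen only to land near the quoted figures
print("large coil: %.0f kHz" % (resonant_frequency(20e-3, 11.6e-12) / 1e3))  # ~330 kHz
print("small coil: %.0f kHz" % (resonant_frequency(2e-3, 22e-12) / 1e3))     # ~760 kHz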

Figure 12 shows another photo of a later set-up (Phase 2), in early 1987, where several unusual phenomena were filmed by a television crew and shown on the national news. This was John's lab before he tore it apart. It is shown merely to suggest the size and scale of the devices.


Block - Circuit Diagram

The general block diagram shows the Van de Graaf by itself on the left; it goes through a gap and a capacitor. The gap is never firing to ground! The small Tesla coil is shown underneath. It is a little experimental Tesla coil powered all by itself (dual 807 tubes). All components are powered from a single 15 amp, 110 volt, 60 Hz supply. The main spark gap, shown by itself, is about 3/8" wide and is powered by a 15 kilovolt DC supply across a capacitor. It snaps every 40 seconds or so and causes a great blast. There is no time correspondence between the snapping of that gap and objects taking off or dismembering themselves.

Neither John nor I know the specific function of any of this apparatus in producing these phenomena, and one of the primary reasons for this presentation is to foster collective investigation leading to an understanding of what is going on. I do not know the mechanism whereby this assemblage of components causes objects to lift. I can come to some reasonable conclusions and explanations as to why it causes things to burst apart. What is not understandable is how it causes objects to lift.

Field strength readings

I should mention some of the field strength readings that we have taken. Some of these results are shown in Figures 13 and 14. The magnetic field is taken with a field strength meter using an 8" vertical loop. Electric field measurements were also made. The top two traces of Figure 13 show the 60 cycle bursts, the classical kind of decaying Tesla waveform. The bottom four traces are spectral analyses. The middle left shows the small Tesla coil by itself, with a little side band, but its main peak is approximately 760 kHz. (CTR is the centre frequency, in spectrum-analysis terminology; in this case, centre frequency 760 kHz, dispersion 10 kHz, and the vertical scale is relative strength.) The large Tesla coil, shown bottom left (centre frequency around 350 kHz), has a very messy, noisy spectrum because the large Tesla coil is not powered in the normal way: it is powered merely inductively. There is also a peak around 610 kHz (middle right), which is probably a side band. Bottom right has a centre frequency of 300 kHz, probably from the fluorescent lights. We tried to scan from low frequency right up to several megahertz.

Figure 14 shows field strength measurements at approximately 350 kilohertz. We took a relative field strength reading from which I have imputed a strength in microvolts per metre, the vertical scale going up to about 7,000. The solid line indicates the measurements that we made with approximate error bounds, and the horizontal scale is in feet from the centre of the apparatus. The dotted line is an inverse square line just for reference. There is nothing very unusual here.

Tom Valone (Buffalo, New York): Are you actually telling us that you only have 2,000 microvolts per metre as the peak? It's amazing; I expected at least kilovolts per metre.

George Hathaway: The maximum, if we extrapolate that curve, is about 100,000 microvolts per metre right in the centre of the active area. I should caution: this measurement was taken when the apparatus was not working to full potential. Whether the field strength goes way up when major events happen, I am not sure. This was a normal run where some slight movement was happening, to make sure the apparatus was functioning, but nothing major was occurring.

Tom Valone: When you say the field strength may go way up, how far do you mean?

George Hathaway: I have no idea. We were not able to have the field strength meter in place at the time the best lifting or disruption was taking place. Therefore, I cannot tell you what the electrical field strength would be when the major phenomena were occurring. I can only imagine, based on engineering principles, that it would be much higher than 0.1 volts/metre. Don't forget this is only the AC portion of the field.
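To put the discussion of Figures 13 and 14 in concrete terms, here is a small sketch comparing a set of readings against an inverse-square reference and converting the extrapolated peak into volts per metre, as done above. The distance/field pairs below are hypothetical placeholders, not the measured data.

# Sketch of the comparison in Figure 14: relative field strength versus
# distance, against an inverse-square reference. The readings below are
# hypothetical placeholders, not the actual measured values.
distances_ft = [2, 4, 8, 16, 32]
readings_uV_per_m = [7000, 1800, 450, 110, 30]   # hypothetical

ref = readings_uV_per_m[0]  # normalize the 1/r^2 reference to the first point
for d, e in zip(distances_ft, readings_uV_per_m):
    inverse_square = ref * (distances_ft[0] / d) ** 2
    print("%4d ft: measured %6d uV/m, 1/r^2 reference %7.0f uV/m" % (d, e, inverse_square))

# the extrapolated peak quoted in the discussion: 100,000 uV/m = 0.1 V/m
print("extrapolated centre field: %.1f V/m" % (100000 * 1e-6))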

Something I have a little more control over is an analysis of the lifting capability. Figure 15 shows a strip of the 8 mm film of that 19-pound bronze bushing taking off, in slow motion. This is what I consider a powered take-off, and it is confirmed by the measurements. I measured the distance between the bottom end in its resting position and the bottom end as it leaves the frame, and plotted that.

Marcel Vogel (San Jose, California): Look at the right-hand side at the series of patterns that you are seeing there. (Figure 15)

George Hathaway: That's the pattern of the milk carton on which this sample is sitting.

Marcel Vogel: Is it a milk carton or is it a reflection from that surface?

George Hathaway: That's a milk carton. If you wish me to run the video again with this particular segment, I will and you can confirm that.

Marcel Vogel: If it was a beat wave you would have a very valuable bit of information.

George Hathaway: That's true. We also have another valuable bit of information in the lengths of the breaks of the file. That gives us an indication of the wavelength of the impinging fields, but nowhere near the kind of frequency that I would expect to be required to do any of this. But that is a good point: one should always analyze the spatial distribution of how things break for clues as to the range of operating frequencies.

Now if we plot this take-off and derive an acceleration-versus-time graph, we get Figure 16. I do not have my error analysis, so I cannot give you a standard deviation on some of these points, but the result is a linearly-rising acceleration curve. There is increasing power being provided to the object as it lifts - and it is a 19 pound bushing!

Increasing propulsive power is being applied to this object, as witnessed by the rising acceleration curve. The actual measurements run to about 0.16 seconds; beyond that is an extrapolation. The -9.1 in the acceleration equation is merely an artifact of my measurement problem in analyzing that film strip. Keep in mind, this means that when it hits the ceiling, this 19 lb. bushing is traveling at 20 m/sec (45 mph, 72 km/h) and still accelerating!
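As a consistency check on these figures, one can integrate a linearly rising acceleration a(t) = k*t from rest: v = k*t**2/2 and x = k*t**3/6, so the time to reach a ceiling at height h with speed v is t = 3h/v. The sketch below assumes a hypothetical 2.4 m ceiling and takes the quoted 20 m/s at face value.

# If acceleration rises linearly from rest, a(t) = k*t, then
# v(t) = k*t**2/2 and x(t) = k*t**3/6, so t_at_ceiling = 3*h/v.
v_ceiling = 20.0   # m/s, the figure quoted in the text
h_ceiling = 2.4    # m, an assumed ordinary basement ceiling height

t_hit = 3.0 * h_ceiling / v_ceiling          # from x/v = t/3
k = 2.0 * v_ceiling / t_hit**2               # jerk implied by those numbers
print("time to ceiling : %.2f s" % t_hit)                      # ~0.36 s
print("implied jerk k  : %.0f m/s^3" % k)                      # ~309 m/s^3
print("accel at impact : %.0f m/s^2 (~%.0f g)" % (k * t_hit, k * t_hit / 9.81))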

I am at sea in trying to determine how the device can provide lift. In the "Theoretical background" listing below, I mention a few names that might have something to do with an explanation of it.

DISCUSSION OF CURRENT & EARLY THEORIES IN CLASSICAL & QUANTUM PHYSICS

ENERGETIC EFFECTS

PROPULSIVE EFFECTS

- G. LeBON

- HOOPER

- VALLEE

- HOLT

- BOYER

- GRAHAM & LAHOZ

- PRIGOGINE

- ZINSSER/PESCHKA

PLUS MANY OTHERS NOT MENTIONED HERE

Finally here is a listing of a few potential applications of this effect if it can be produced in such a format that it is repeatable and controllable: rocket payload assist, materials handling and warehousing, floating things into position, materials handling of hot objects, objects that are highly radioactive or dangerous, forging and casting, extruding of metals, alloying, power production, conversion, etc., and defence applications.

In conclusion, this is an extremely difficult technology to wrap one's mind around. I have had a great deal of difficulty in convincing scientists to think about this possibility, let alone try to provide some mechanisms for understanding its operation.

APPLICATIONS

PROPULSIVE :

- MICRO-GRAVITY ENVIRONMENTS ON EARTH
- ROCKET PAYLOAD BOOST ASSIST
- MATERIALS HANDLING & WAREHOUSING

ENERGETIC :

- FORGING, CASTING, EXTRUDING OF METALS
- ALLOYING
- POWER PRODUCTION, CONVERSION & TRANSMISSION

OTHER :

- DEFENSE APPLICATIONS

ETC. ETC. ETC.

I hope I'll be able to engender some interest so that people will think about it. Perhaps some will, if they have some equipment, do some experiments as well.

I must caution anyone who is pursuing this that it is an extremely dangerous apparatus. It has never knocked any of my fillings out, but it certainly has the potential for doing so. It has smashed mirrors, in one of its incarnations, 80 feet away. It has overturned a large metal object of about 50 or 60 pounds, about 100 feet away. And its effects cannot be pinpointed unless we are lucky: we try to find the active area and hope that something will happen there, but perhaps something very far away will happen instead. The apparatus is capable of starting fires anywhere. It will start fires in concrete - little bursts of flame here and there - and it will cause your mains circuits to have problems. We have blown fuses as well as circuit-breakers and large lights.

It also tends to destroy itself. A classic case of that was when we had some important potential investors looking to help develop it. On the morning it was to be shown, it blew one of its own transformers apart, and so, needless to say, we could not do a successful demonstration.

Marcel Vogel: Congratulations. I find it exceedingly exciting and interesting. I too have experienced the generation of power like this with a crystal - just a single, natural quartz crystal cut in a special form. I generated fields which have knocked out electrical equipment, and generated power which has destroyed matter. My suggestion to you is to do specific gravity measurements on the pieces of metal at the beginning and end of the specimen. What I think is happening is that there is intervibrational activity going on; namely, you are stimulating the lattice motion, and when it gets to a critical space the lattice collapses, and then you get that stratification which is characteristic. I saw it in a series of metal samples; it looked like leafing in the aluminum and metal. That should be critically studied, as it is a very important thing that can help you to understand.

George Hathaway: You're suggesting specific gravity measurements?

Marcel Vogel: Absolutely.

Jacques Gagnon (Montreal, Quebec): Were there any of these effects when John was not there?

George Hathaway: None of the large effects have occurred when John was not there. We had some minor occurrences when I was personally adjusting the set-up, but I cannot suggest that these were the same kinds of things that you saw, because they could easily be blamed on mere electrostatics, and anyone can do lots of funny things with electrostatics. They were rather unusual, but I cannot claim to have seen anyone else, including myself, make the apparatus work. Basically that translates into: no one else has had the patience to sit with it and adjust it, without John being there, for hours and hours.

Jacques Gagnon: Roughly what is his background? Did he study how he thinks he is doing this?

George Hathaway: That's a good point. John has a high school education; he does not have any formal electrical or university training. He has been experimenting with Tesla coils. In fact, the way he stumbled upon this was in trying to duplicate Tesla's transmission of electrical power without wires. Into one experiment he inserted a Van de Graaf generator which he was repairing for a friend.

He cannot explain these things in terms that people who've had training in these fields would like to use. He talks about energy fields, he talks about energy moving around and being transported from one place to another. He talks about interaction between energy and gravity. That is the extent to which he can explain what his understanding is. He has an incredible intuitive capacity to follow the flow of energy that he is trying to manipulate. Something far beyond me. I have no concept of the kind of understanding that he has. He's been at it since he was about 6 or 7 years old, continuously. He has a government pension for a medical problem so he has lots of time. Time is necessary to develop that kind of technology, if you are not concerned about particular results in getting somewhere. Unfortunately, most of the rest of us don't have that kind of time and we want to produce something that is tangible, something usable, something that we can develop into useful products. That is of very little interest to John per se. He's interested certainly in getting the technology moving, but not at our pace. And that has been one of the causes of having this thing sitting in storage and taking a long time to develop. So he has a good intuitive feel of what is going on. He cannot explain it in words that you and I could understand, and he's been at it for so long that it doesn't really matter. He has no need to converse with us in those kinds of terms, and I doubt that he could.

Dr. Harold Aspden (University of Southampton, England): I have been greatly impressed by this, of course. It is incredible. I would not have believed this from a distance, but it is great to see the demonstration, and I now have confidence that this is a real effect. My first reaction is that I would want to look at the breaking of the specimens with an eye to what is called the exploding-wire phenomenon. This is where you pass very big currents very rapidly through wires and they break up into tiny millimetre-scale sections, as if they had been chopped up, with no evidence of melting. This is a phenomenon studied particularly by Peter Graneau, and it should be considered in regard to the rupturing process. I cannot escape the fact that there must be some evidence, some action, of the ether in this activity.

I think the relevance of the tornado to this is of very great interest, because there is evidence of patterns in fields - circular patterns in special groups - and that has something to do with the magnetic fields that are created. That, to me, is evidence that you can get some kind of vortex or spin in the ether itself, and I would look at this phenomenon as perhaps arising from the induction of filamentary vortices or spins which tend to pull up these specimens. Having said that, and suspecting that there is another way, I would never lean over a cup of coffee that is vibrating with a camera just above it, because my poor head would get in the way of these things, and I would be very scared to go anywhere near that type of activity. So I am a bit concerned that you can have all these things happening, and then move in with a camera to take those pictures! How close did you dare go to the real centre of activity?

George Hathaway: We were within 6 to 8 feet of it. John respects his apparatus when it's going, and he will not enter into it. He knows the limits of it and he tells us what the limits are, and we stay outside those limits. I suffered a severe migraine headache after my first two encounters with it, but I cannot ascribe them directly to the apparatus. I was so excited after seeing this thing work for the first time, and the second time, that my mind was going at 1200 miles per hour, and that is what I attribute my headaches to. John, on the other hand, has complained of microwave clicks deep inside his head. The microwave clicks are a phenomenon that has occurred in radar technicians, where for some time they hear clicking sounds deep inside their heads. John has complained about that but he has not complained about any major effects. We perhaps have just been lucky, or perhaps somehow, he has been protecting us. I don't like to bring up the PK (psychokinetic) end of all this but it certainly may be relevant.

Regarding Peter Graneau's work, I have discussed this with him and he is aware of what is going on. He is very interested in following it up. As regards tornadoes, they may well be relevant too: there is film evidence that tornadoes have very interesting electromagnetic phenomena going on inside them - bodies levitating, going up and down very slowly in the eye of a tornado, and emitting showers of sparks.

Marcel Vogel: I want to add one more thing as a word of caution. Just taking water and spinning it around a crystal in the wrong direction - I did it but once in my life, in 1984, and I was flung 10 ft. away from the experiment against the wall, and the next day my face was burnt as if exposed to intense radiation. My eyes were closed. It was witnessed by five persons. That was from merely letting 100 cc of water spin around a crystal that was charged. So you must proceed cautiously with these forces. I speak from experience.

Bernard Grad (Institut Armand Frappier, Montreal, Quebec): Just one comment. First of all, let me explain that I am no physicist. I have had conventional university training in physics, but I am essentially a biologist, and I am especially interested in the energy fields of living things. The immediate thing that struck me about your talk is that the phenomena are very reminiscent of poltergeist activity. I don't want you to begin to think mystically as soon as I say this: I myself see a lot of poltergeist activity as a direct result of intense and disturbed energy fields in people living under specific circumstances. The fact that this phenomenon is seen only in the presence of this man, who has been working at this from a very early age, implies to me that his organism has a specific need in this regard.

I can tell you about one little experiment; I have done work in relation to this energy. A healer was onstage, and to his side (the audience was facing him) his wife was sitting at a table, such as you are, with a microphone. Over on the side of the stage was a generator, which made this an unusual situation. While he was healing, to the surprise and astonishment of everyone, a waveform appeared, directed towards the motor, to such an extent that it frightened everybody; but it was dampened as soon as he stopped healing and as soon as she turned the microphone away. I just want to put some focus on this direction. I think these are very interesting phenomena - by no means mystical phenomena, I want to emphasize, but phenomena that can be investigated scientifically. Another total surprise: he is a person who never had a formal education, yet he constantly speaks of an energy field, which is, by the way, the way many healers speak.

George Hathaway: We had considered that kind of approach (the PK, or psychokinetic, approach) as a possible explanation as well. We tended to downplay it for a number of reasons, including the fact that John was very excited about two particular demonstrations we were going to give for rather high-powered investors. On both occasions the apparatus failed. One could say that there was some kind of negative influence, and that one unconscious side of John was fooling his other unconscious side into not proceeding with this. But he certainly was excited, and he wanted to get going again.

Anonymous: My wife and I are in touch with John Hutchison regularly, and we have a large archive of his information. He has stated that he does not wish this technology to be used for any destructive or military means, and that he has withheld certain information so that it cannot be used by other people. This may be one of the reasons why no one else has been able to replicate exactly what he has done: he has not told anyone everything that he is doing.

Bernard Grad: Have you tried to selectively isolate components in the electrical experiment so as to pinpoint whatever may be the cause of this?

George Hathaway: We were going to embark on a program of doing just that in our phase of work in 1982, but unfortunately things fell apart with John, and we were not able to continue that research. John has an interest in putting more things into the apparatus, not fewer.


Figure 1. Examples of disruptive phenomena, including a broken bushing.


Figure 2. Two samples of disruptive phenomena: contortion and segmentation


Figure 3. Aluminium specimen from one of John Hutchison's experiments, October 1984 (70x magnification)


Figure 4. Fractured iron rod/bar which includes regions which were mapped by x-ray: see also figures 9 and 10


Figure 5. Scanning electron microscope photo taken at the University of Toronto of an aluminium sample subjected to the Hutchison effect


Figure 6. Scanning electron microscope photo taken of an iron sample subjected to the Hutchison effect


Figure 7, Figure 8. Higher magnification of a polished aluminium sample, with pure-element globules emerging after the Hutchison effect


Figure 9, Figure 10. Spectral plots of a typical aluminium sample compared with an area where fractures developed under the Hutchison effect


Figure 13. Field strength readings during experiments. Trace: 60 Hz bursts with classical Tesla coil decays.


Figure 13. Field strength readings during experiments. Small coil peaking at 760 kHz.


Figure 13. Field strength readings during experiments. 610 kHz sideband.


Figure 13. Field strength readings during experiments. Large coil at about 350 kHz.


Figure 13. Field strength readings during experiments. A 300 kHz emission source.


Figure 14. Field strength measurements during Hutchison effect experiments at about 350 kHz, showing strength versus distance from source


Figure 15. Strip of 8 mm film of a 19 pound bronze bushing in powered take-off, in slow motion.


Figure 16. Acceleration/time plot of the linearly-rising powered take-off of a 19 pound bushing.

Developments in inertial thrust

J. Scott Strachan
6 Marchhall Crescent
EDINBURGH EH 16 5HN
United Kingdom

The obvious primary target for inertial thrust is "parity", i.e. the ability of a device to lift its own weight, followed by improvements in efficiency and reliability. Any demonstration of parity would inevitably attract unlimited funds for development. It should be noted that even without parity, but with high reliability, the system would be far and away the most useful dirigible satellite power source yet created, and would have a substantial market straight away for this purpose.

It is my opinion that the RLF project in particular is of critical importance at this time. It is a complex project and many problems lie between our present level of knowledge and a practical system. If we were to embark on this project and fail, the whole field would be set back by many years. It must not fail.

Theoretical Framework

The question immediately thrown up by Professor Eric Laithwaite's "Through the Looking Glass" experiments was: "Was Newton wrong?". To answer this question it is worthwhile looking at exactly what the assertions were, and which of their implications are general and which specific. The assertions are:

1. that a precessing massive spin plane has no inertia in the direction of precession;
2. that consequently there is no apparent centrifugal force (implying no centripetal force?);
3. that the forced precession of a massive spin plane will result in an inertia-less precession at right angles to the force;
4. that this means that displacement of a mass can occur without an equal and opposite reaction.

It can be seen that the proof of assertion (1) leads inevitably to the veracity of assertions (2), (3) and (4).

Consider a flywheel mounted on a shaft which is itself mounted on a second shaft by means of a gimbal such that:

shaft 1 may pivot in the plane in line with shaft 2, and
shaft 2 can be rotated. This forms a system which precesses about the axis of shaft 2 in response to a force on the flywheel in line with shaft 2 (Fig.1).


Figure 1.

The behaviour of a flywheel mounted such that its precession and spin planes are about the same point is well known. The behaviour of a system such as that defined above, where the spin plane of the flywheel and the precession axis are separate, has given rise to a number of interesting observations, some of which, it has been claimed, would make Newton "turn in his grave". It is the purpose of this paper to explain these observations and to demonstrate that some of the hopes raised by the said observations - of the realization of a linear force from a rotating system - are valid, but that Newton's laws remain substantially intact, the effect, as will be seen, being the result of a more rigorous definition of mass than was hitherto thought necessary.

Mathematical Expression for the Precessing System

The problem of finding a mathematical expression for the behaviour of the precessing system described above is one of defining a consistent frame of reference from which the forces and velocities can be measured. To this end, I have chosen the axis and spin plane of the flywheel as the origin of measurement of force and velocity. Thus the spin speed of the flywheel is expressed in the conventional ω, i.e. radians per second, but the velocity of the flywheel through space is expressed in metres per second. The importance of this becomes clear when it is realized that the radius of precession measured to a point on the rim of the flywheel in the precession plane is greater than the radius of precession at the axis of the flywheel. Thus the velocity of a point on the rim of a stationary flywheel is greater than the velocity of the axis of the flywheel. Thus in terms of these velocities a point on the rim of a rotating flywheel can be said to accelerate at a rate defined by (from Figure 2):



Figure 2.

This acceleration is in the direction of precession as the point moves from a vertical position in line with the precession axis to a horizontal position in line with the precession plane. It is counter to the direction of precession as the point moves from the horizontal to the vertical position.

Now, the important point is that at both sides of the flywheel in the precession plane, the velocity of the point relative to the precession axis is greater than at the vertical position. If the rotation of the flywheel is subtracted, this is perfectly valid, since we are considering only the acceleration as defined by the precession velocity.

The resultant of this effect is best appreciated by doing a simple thought experiment or, if you prefer, by performing the actual experiment described on a record turntable. Imagine a flywheel constructed of a number of hollow spokes, each containing a measured mass. Now imagine the flywheel rotating at a constant speed. At a point in its rotation, a mass on one side of the wheel is displaced towards the axis. At the opposite side, at the same time, a mass is displaced away from the axis. It can be seen that the Coriolis force from this displacement is in the same direction on both sides, and if the displacement is equal the acceleration of the masses is equal, in the sense that one mass is decelerated in the direction opposite to the other mass's acceleration.

The magnitude of the acceleration is defined by sin ωt (R1 - R2) on one side and sin ωt (R1 + R2) on the other side, bearing in mind that the actual vector of velocity is opposite on each side of the flywheel. In this case, provided that no other force is applied to the masses (other than the new centripetal force the masses now experience), the displacement so caused will be returned to origin during the period 90° to 180° later. The system will then oscillate about its axis due to the imbalance. But if the axis is allowed to displace in response to the said Coriolis force, and the masses are continually displaced in the manner described but allowed to return to their original position, it can be seen that a total displacement of the axis of the flywheel is achieved in direct response to the force applied to the said masses, but at right angles to the applied force. This occurs because, provided the masses balance 90° after the impulse, the reaction takes place out of phase.
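The turntable thought experiment can be checked numerically. The sketch below, with purely illustrative values for the spin rate and displacement speed, evaluates the Coriolis acceleration a = -2ω × v for a mass moving inward on one side of the wheel and a mass moving outward on the opposite side, and shows that the two accelerations point the same way.

import numpy as np

# Two masses on opposite sides of a wheel spinning at omega about z:
# one displaced inward, one outward, at the same radial speed u.
# In the rotating frame the Coriolis acceleration is a = -2 * (omega x v).
omega = np.array([0.0, 0.0, 5.0])     # spin, rad/s (hypothetical)
u = 0.1                               # radial displacement speed, m/s

r_a = np.array([0.5, 0.0, 0.0])       # mass A on the +x side
r_b = -r_a                            # mass B on the opposite side
v_a = -u * r_a / np.linalg.norm(r_a)  # A moves inward
v_b = +u * r_b / np.linalg.norm(r_b)  # B moves outward

a_a = -2.0 * np.cross(omega, v_a)
a_b = -2.0 * np.cross(omega, v_b)
print("Coriolis accel on A:", a_a)    # both come out along +y:
print("Coriolis accel on B:", a_b)    # the same direction on both sides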

Defining the Flywheel Process

In a real flywheel the process is slightly more complex. Effectively, the acceleration between the precession plane and the precession axis plane can be considered as an addition to and subtraction from the centripetal force on the mass points on the flywheel rim, as defined by:

F = M ω² R
Since we are dealing with acceleration only, M can be dispensed with for the time being. There is no actual displacement of mass with respect to the axis of the flywheel; we are dealing only with the apparent acceleration a, which gives a resultant velocity Vr at right angles to a. It can be seen that the vertical acceleration is directly proportional to V at the precession-plane rim of the flywheel minus V at the axis of the flywheel, provided that the velocities are measured as a vector through the plane of the flywheel. It can be seen, therefore, that the vertical acceleration is proportional to the angular velocity of the precession and to the spin velocity of the flywheel, which together define the acceleration between V at a point on the rim of the flywheel vertical to the axis of precession and V at a point horizontal to the plane of precession. Hence, if we use the following notation:

Spin velocity = spin of flywheel in rad/s = ω
Precession radius at axis of flywheel = p
Precession radius at rim of flywheel on the precession plane = L
Precession velocity at rim of flywheel = Z
Precession velocity at axis of flywheel = V
Radius of flywheel = R

it can be seen that Z - V is the vector of velocity through the flywheel plane, so that the vertical acceleration (notation a) is:

a = ω (Z - V)
N.B. the above holds for each 90° rotation of the flywheel. (1)
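Since the printed equations were lost in reproduction, the following sketch should be read only as a numerical rendering of the reconstructed relation a = ω(Z - V) above, with Z and V taken as the precession rate times the two precession radii, and with the rim radius L read geometrically as √(p² + R²); every value is hypothetical.

import math

# Numerical reading of the reconstructed relation a = omega * (Z - V).
# All values are hypothetical, chosen only for illustration.
omega = 2.0 * math.pi * 50.0   # flywheel spin, rad/s (50 rev/s)
omega_p = 2.0 * math.pi * 1.0  # precession rate, rad/s (1 rev/s)
R = 0.10                       # flywheel radius, m
p = 0.20                       # precession radius at the flywheel axis, m
L = math.hypot(p, R)           # rim point in the precession plane (geometric assumption)

V = omega_p * p                # precession velocity at the axis
Z = omega_p * L                # precession velocity at the rim
a = omega * (Z - V)            # reconstructed vertical acceleration term
print("V = %.3f m/s, Z = %.3f m/s" % (V, Z))
print("a_vertical ~ %.2f m/s^2" % a)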

The force required is defined by F = Ma, this being the resultant vector of force in the flywheel plane. This vector acts to change the momentum of the flywheel and increases the resistance to the motion of the flywheel precession.

The equations can be simplified by using ωR for the flywheel and ω'R1 and ω'R2 for the precession, but in this form the "nuts and bolts" of the theory are more readily seen.

It should also be noted that the conventional rotation of the flywheel plane about its axis, at right angles to an induced rotation about its axis, still takes place, and this results in any movement of the flywheel following a path rotating about the axis of precession which, as can be seen from the above equations, results in a constant angular velocity in a plane at right angles to the applied force. Note that when the flywheel is under an acceleration equal to a, as defined above, it need not displace vertically for the force to balance; but if a is greater than any vertical acceleration (such as gravity), it must displace in the path described. If it does not, then the force applied results in a direct acceleration of the flywheel, with the force F = Ma modified as above, the difference between the two forces being absorbed in the rotation of the flywheel just as before.

It should also be apparent that if R is greater than P, the conventional precession rotation will result in a net downward force on the axis of precession; and if R is less than P, the precession rotation will result in a net side force on the axis of precession which, if restricted, returns us to the losses described in the discussion of restricted displacement above.

Use of these equations can predict the behaviour of all of the observed phenomena described in Professor Laithwaite's "Engineer Through the Looking Glass" and reveals a number of useful methods of exploiting the effects. I should point out, of course, that these equations deal with an idealized flywheel where all the mass is on the rim, and that for the real world the acceleration is an integration between Z and V. A simplification is to define the effective mass radius and calculate Z from the effective R. For instance, for a flat disc flywheel the effective R would equal R/√2, this being the radius of the effective centre of mass at the effective rim of the flywheel. Obviously, for a very good flywheel where most of the mass is on the rim, the basic equations are quite adequate and will give results within a few percent of the measured values. If a test of the accuracy of these equations is desired, then the use of the flat disc flywheel with the radius correction taken into account results in exact values, although, as the equations show, the efficiency of such a system in terms of force for mass is pretty poor.
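The R/√2 figure for a flat disc is the standard radius of gyration, k² = (∫r² dm)/M = R²/2 for a uniform disc; the printed expression was lost, so this reading is offered as the conventional one. A quick numerical check, summing thin rings whose mass is proportional to their radius:

import math

# Check that a uniform flat disc behaves like a rim-loaded flywheel of
# radius R/sqrt(2): the radius of gyration k satisfies k**2 = R**2 / 2.
R = 1.0
N = 100000
total, weight = 0.0, 0.0
for i in range(1, N + 1):
    r = R * i / N
    dm = r           # mass of a thin ring is proportional to its radius
    total += r * r * dm
    weight += dm
k = math.sqrt(total / weight)
print("effective radius: %.4f R" % k)             # ~0.7071 R
print("R / sqrt(2)     : %.4f R" % (1 / math.sqrt(2)))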

Working Systems

One system is described in the accompanying synopsis of my patent and is, in terms of duty cycle, one of the most dramatic. However, its efficiency is not great, and the immense stresses caused in the structure by wasted energy make it difficult to scale up to a more useful power.

The simplest system makes use of the displacement principle and consists of a system such as the one described throughout this paper, driven in the following manner. A limit is placed on the precession vertical to the precession axis, and means are provided to drive the precession through approximately 90° around the precession axis. The resultant reaction moves the system opposite to the direction of the flywheel during the driven portion of precession. The force required to drive through the 90° is, of course, equal to the mass of the flywheel times its acceleration, plus the force Ma defined above, which is always through the plane of the flywheel, so that it behaves as friction to the motor. The displaced mass of the flywheel is, however, decelerated by the internal force, as discussed earlier, giving the effect of the disappearance of the inertia of the flywheel as soon as the 90° forward precession period is over. Thus the system axis continues at the velocity attained in each pulse, as a reaction to the mass of the flywheel, until the next pulse. It can be seen that the duty cycle of this system depends on the force applied to the flywheel vertical to the precession axis. Thus, if the force is gravity, the duty cycle will be poor, but it can be improved by the use of a separate force on the flywheel vertical to the axis of precession, such as a powerful spring. This also allows the system to operate at any angle.

The forces on this system are very direct and thus the efficiency is quite high despite the losses described above. It should be noted that in this system R must equal P, otherwise the precession axis would tend to wobble alarmingly and the losses would be greatly increased.

Flywheel Apparatus


Scheme 1

The apparatus comprises a pair of flywheels (30a and 30b), each rigidly mounted on a respective shaft (32a and 32b) so as to be pivotably movable in a vertical plane about a central point (34). It is preferred that the distance (D) between the centre of the flywheel (30) and the central point (34) be equal to the radius (R) of the flywheel. The shafts (32a and 32b) are each journalled in bearings (36a and 36b) fixed to arms (38a and 38b), mounted by pivot bearings (40) to a central tubular shaft (42). The tubular shaft (42) is journalled in bearings (41) in a base frame (43). A stationary central shaft (45) is secured to the base frame (43), and bearings (47) are provided between it and the tubular shaft (42).

Means are provided for driving the flywheels in rotation. In this embodiment, each flywheel (30a and 30b) is driven by a respective electric motor (44a and 44b) via pulleys (46) and flexible (e.g. rubber) bands (48). The motors (44) are secured to a beam (49) fixed to tubular shaft (42) to rotate therewith.

Means are provided to rotate the central tubular shaft (42) so that the flywheels (30) may be forcibly precessed. A critical aspect of the invention is that these means for forcibly precessing the flywheels must be capable of rotating the assembly at a fixed speed irrespective of the forces acting upon it; most particularly, it must not be possible to appreciably accelerate the forced precession. One suitable method for achieving this is shown in the present embodiment, in which the tubular shaft (42) is driven in rotation by a motor (52) via a substantial worm and pinion reduction transmission (54, 56) which prevents back transmission of force.

Further, means are provided for rapidly displacing the flywheels (30) downwards against their precession resultant at at least one position in the precession path. Such means may be, as shown in the present embodiment, a cylindrical cam (58) acting on cam followers (60) on the arms (38), the cam (58) being mounted on the stationary central shaft (45).

Preferably, the cam (58) has a profile (58a) which forces the flywheels (30) down twice in each precession revolution at 0° and 180° positions, and allows them to rise to a maximum height at 90° and 270° positions. Alternatively, solenoids or other linear actuators may be used for this purpose.

The precession resultant will hereinafter be referred to as the precession lift. Since the precession lift does not result in an equal and opposite down force on the pivot (40), the cam (58) which accelerates the flywheels (30) downwards experiences a precession reaction upwards. At the bottom of the cam form (58a), the flywheels (30) are again swept upwards, with the reaction-reduced precession lift resulting in a net linear force. This action depends on the precession not being allowed to accelerate as the flywheels are forced downwards.

The energy for the work done comes from the flywheel spin inertia, and thus from the motors (44) or other means driving the flywheels. It is necessary for this drive to be delivered in a manner which does not restrict the vertical motion of the flywheels. The use of the rubber band (48) has been found to impose an acceptably small restriction on this motion in a small-scale system. Other possible means of delivering drive would include flexible shafts, and forming the flywheels with blades driven by gas jets.

It is believed that the efficiency of the system,

(linear force)/(power supplied) x 100%,

is approximately 40%, disregarding mechanical losses in the drive.

''EZKL'' - Energy zoned kinetic leverage: next generation propulsion system

Brandson Roy Thomson
Fortune Ventures Inc.
118 Emerald Grove Drive
WINNIPEG, Manitoba R3J 1H2
Canada

The Energy Zoned Kinetic Leverage (EZKL) propulsion system is referred to as the next generation because its principles differ from all propulsion systems presently applied to transportation. Those systems develop a force imposed, with friction, against an external medium to cause a reaction that translates into a propulsion force or thrust. We are familiar with these because they are the only ones manufactured for cars, trucks, trains and aircraft.

This is the only form of system available at present because the engineered design of all systems is focused upon the accepted concept that a "medium must be present to produce a propulsion force by way of reaction". This perspective is founded upon the acceptance that Sir Isaac Newton's Laws of Motion must be applied verbatim, from which it is interpreted that a medium is absolutely necessary. The EZKL system's principles do not violate any of these laws of motion; rather, this new propulsion force relies upon the dependability of the second half of Newton's First Law of Motion ("unless it is compelled to change that state by forces impressed upon it"), which subjects the rigidity of all laws to conditional change.

The concept of converting rotary motion into linear motion producing no reaction is not a new idea, but it has been assumed to be impossible, as it appears to violate certain laws of physics, and its pursuit has, up until now, been deemed foolhardy.

EZKL converts rotary motion into linear motion, producing thrust with no apparent external reaction. Internally manipulated masses cause vibratory centripetal forces to be imposed upon each wheel's axle in a controlled manner. This centripetal force is the same force that causes the potent action termed "vibration", which is usually viewed as a negative and destructive element. A single wheel is "out of balance" and, if rotated, simply oscillates around its centre of mass or balance (Figure 1).


Figure 1

Converting oscillating motion to linear motion

The first step is to convert this oscillating motion into linear motion confined to two directions. Then, by joining two wheels to a common plane with their axles parallel and causing the wheels to turn symmetrically opposite, all of these centripetal forces are regimented into four equal directions (Figure 2).


Figure 2

The forces imposed along the common plane allow no movement in that direction, as they are neutralized by equal and opposite forces produced from within each opposing wheel; the forces imposed perpendicular to the plane, however, alternate and cause the common plane to vibrate. A bias of the perpendicular centripetal forces is then introduced, producing a continuous resultant force in a single direction that acts upon the whole of the mass of the system, propelling it and any mass to which it is attached.

The EZKL concept

If a six ounce mass is attached to a 2 1/2" arm fixed to an axle turning at 1,000 revolutions per minute, it is observed that the 6 ounce mass attempts to pull the axle in the direction toward the mass; this pull is centrifugal in origin (Figure 3).


Figure 3

We determine the strength of that centripetal force imposed upon the axle by first determining the mass velocity in feet per second:

v = 2πrN/60 ft/sec

and secondly the centripetal force in pounds using:

F = Wv²/(gr) lb

where W is the weight of the mass in pounds, r the radius in feet, N the rotational speed in rpm and g = 32.2 ft/sec².
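As a worked sketch of these two formulas (an illustration, not from the original text), the six ounce mass on the 2 1/2" arm at 1,000 rpm gives:

import math

# Velocity and centripetal force for the Figure 3 example.
W = 6 / 16.0     # weight of the mass, lb (6 oz)
r = 2.5 / 12.0   # arm length, ft (2 1/2 inches)
N = 1000         # shaft speed, rpm
g = 32.2         # gravitational acceleration, ft/sec^2

v = 2 * math.pi * r * N / 60   # mass velocity, ft/sec (~21.8166)
F = W * v ** 2 / (g * r)       # centripetal force, lb (~26.6)

print("v = %.4f ft/sec, F = %.2f lb" % (v, F))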

Examining one EZKL wheel module and its components, we see internally mounted a 1.25" pitch-diameter sun gear, which is held stationary by a shaft protruding through the wheel's axle as the wheel is rotated (Figure 4).


Figure 4

Two like gears, mounted inside the wheel on opposite sides of the sun gear, are bearing-mounted to the wheel and, being meshed with the fixed sun gear, rotate two revolutions for each wheel revolution (Figure 5: "p"). A crankshaft throw located at the centre line of mesh of each planetary gear continuously alters its distance from the centre of the wheel axle. Thus its velocity continuously accelerates and decelerates relative to the wheel axle centre, producing an inverted heart-shaped orbit (Figure 5: throw "t", t-orbit "tO").
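The geometry of this orbit can be sketched parametrically. In the illustration below (my reconstruction, not the author's drawing), the planet-gear centre is assumed to ride at 1.25" from the wheel axle and the throw at the 0.625" pitch radius of the planetary gear, turning twice per wheel revolution; this yields a cusped, heart-shaped path whose velocity momentarily vanishes at the "point":

import math

# Parametric sketch of the crank-throw orbit (assumed geometry).
D = 1.25    # planet-gear centre distance from the wheel axle, inches
c = 0.625   # throw radius about the planet-gear centre, inches

for k in range(13):
    theta = 2 * math.pi * k / 12      # wheel angle
    phi = 2 * theta + math.pi         # planet gear: 2 revs per wheel rev
    x = D * math.cos(theta) + c * math.cos(phi)
    y = D * math.sin(theta) + c * math.sin(phi)
    # throw speed per unit wheel angle; zero at the cusp ("point")
    vx = -D * math.sin(theta) - 2 * c * math.sin(phi)
    vy = D * math.cos(theta) + 2 * c * math.cos(phi)
    print("theta=%5.1f deg  x=%7.3f  y=%7.3f  |v|=%6.3f"
          % (math.degrees(theta), x, y, math.hypot(vx, vy)))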

Attaching to each throw a free-swinging, pendulum-shaped planet mass, whose centre of mass is equidistant from the centre of the planetary gear axle, and causing the wheel to turn, allows the planet mass to describe a concentric orbit path with a continuously changing degree of curve. This causes the planet to describe a loop over the centre of the wheel's axle, putting the wheel out of balance so that it oscillates around its centre of mass (Figure 5: planet path "pO").


Figure 5.

Relative to the centre of the wheel's axle (which will be referred to as the common plane), the velocity of the planet continuously accelerates, peaking at the furthest distance from the wheel's axle; this position is referred to as the major arc peak. It produces the greatest centripetal force upon the axle (Figure 6: v"M").

Upon passing the peak position of the major arc, the planet velocity continuously decelerates relative to the axle centre until the crank throw arrives at the "point" of its inverted heart-shaped orbit; the velocity of the throw then ceases momentarily as the planet describes the minor arc over the wheel axle. This produces the lowest centripetal force imposed upon the axle, in the same direction as the major arc (Figure 6: v"m").


Figure 6.

Passing this "point", the throw causes the planet velocity to return to the acceleration phase of the planet orbit, and the centripetal force imposed upon the axle begins to increase again.

Determining the velocity of the planet mass at the moment it is at the peak of the major arc is accomplished in two steps:

1) A spot is determined upon the inside of the wheel that locates the centre of the planet mass; its radius from the wheel's centre is 2 1/2", and the wheel is turning at 1,000 rpm. We determine the velocity of the spot as:

v = 2π x (2.5/12) x (1000/60) = 21.8166 ft/sec

2) The planetary gear is rotating at 2,000 rpm, and the radius from the planetary axle to the centre of mass of the planet is 1 1/4". Therefore, the planet is being carried past this "spot" at:

v = 2π x (1.25/12) x (2000/60) = 21.8166 ft/sec

Therefore, the velocity of the planet mass at this position relative to the axle, which is part of the common plane, is determined as:

v = 21.8166 + 21.8166 = 43.6332 feet/second

The centripetal force imposed upon the axle or common plane at this moment in time, in the direction toward the six ounce planet mass, is determined as:

F = 178.48651 lbs

Determining the velocity of the planet mass at the peak of the minor arc is calculated from the centre of the crank throw, which has come to rest at the "point" of the throw orbit while the planetary gear axle maintains a uniform 1,000 rpm. As the radius from the throw centre to the planetary axle centre is equal in length to that from the throw centre to the centre of the planet mass, the radius of the circle described by the planet around the throw is 0.625 inches, and the velocity of the planet mass at the peak of the minor arc is calculated as:

v = 2π x (0.625/12) x (1000/60) = 5.454 feet/second

Therefore the centripetal force imposed upon the wheel's axle in the direction of the planet mass at the peak of the minor arc is calculated at:

F = 2.7888 lbs

Aggregating the forces for concurrent imposition

Therefore, the centripetal force imposed upon the wheel's axle is continuously changing as the planet's velocity continuously changes, and its strength depends upon the planet's location in its orbit, its velocity being taken relative to the axle centre. The forces imposed upon the axle by the two planets at the major and minor arc peak positions at the same moment aggregate to:

178.48651 + 2.7888 = 181.28 lbs

imposed upon the common plane in one direction, toward the planet at the major arc position and away from the common plane.
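These figures can be checked numerically. In the sketch below (an illustration, not the author's calculation), the velocities follow exactly as described; recovering the stated forces of 178.48651 lbs and 2.7888 lbs requires taking the centripetal divisor g·r as 4 ft²/sec² (for example, r = 1.5" with g = 32 ft/sec²), a choice the paper does not state explicitly:

import math

# Reproducing the stated EZKL velocities and forces at 1,000 rpm.
W = 0.375                      # planet weight, lb (6 oz)

v_spot = 2 * math.pi * (2.5 / 12) * 1000 / 60     # 21.8166 ft/sec
v_gear = 2 * math.pi * (1.25 / 12) * 2000 / 60    # 21.8166 ft/sec
v_major = v_spot + v_gear                         # 43.6332 ft/sec
v_minor = 2 * math.pi * (0.625 / 12) * 1000 / 60  # 5.454 ft/sec

G_R = 4.0   # ft^2/sec^2, reverse-engineered from the paper's figures
F_major = W * v_major ** 2 / G_R   # ~178.49 lb
F_minor = W * v_minor ** 2 / G_R   # ~2.79 lb

print("major arc force: %.5f lb" % F_major)
print("minor arc force: %.4f lb" % F_minor)
print("per wheel: %.2f lb, two wheels: %.2f lb"
      % (F_major + F_minor, 2 * (F_major + F_minor)))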

To allow continuance of the centripetal force calculations for the other arcs, two of these wheels are joined with parallel axles to a common plane and turned symmetrically opposite, allowing the major arc in each wheel to impose its force at the same moment in a direction perpendicular to, and away from, the common plane (Figure 7).


Figure 7: Four equal arc centrifugal forces imposed in one direction away from the common plane

It is observed that the centripetal forces imposed along the common plane are neutralized by equal and opposite like forces produced from within each opposing wheel, prohibiting each wheel axle from movement along the direction of the common plane. Also, when each wheel turns 90 degrees away from the peak of the major arc, each of the four planets is turned 180 degrees; they then describe four arcs whose radii are all equal in length, as the axles of the planetary gears align their centres with the common plane.

It is determined that the velocities of the planets in this equal arc position are equal to each other, and that, as each planet is equal in size and weight (6 oz. each), the centripetal forces transferred and imposed upon the common plane in their direction are, by way of axiom, equal to each other.

The force produced by the major and minor arc of each wheel causes the common plane to be propelled in the direction perpendicular to this plane; at 1,000 rpm, it was determined that each major arc imposed 178.48 lbs. and each minor arc 2.78 lbs. This totalled 181.26 lbs, and as two wheels impose this force upon the plane, the common plane is propelled in this direction with a force of 362.53 lbs., imposed over the length of time each planet occupies this zone. Newton's Third Law of Motion indicates that the four equal arcs imposing their centripetal force in the direction opposite to the two major and two minor arcs aggregate to an equal and opposite total force of 362.53 lbs. Therefore, each equal arc produces a centripetal force of:


362.53 / 4 = 90.63 lbs.

The common plane is now in a state of vibration, as a force of 362 lbs is imposed alternately in opposing directions as a result of joining two out-of-balance wheels at their axes and converting their rotational oscillation around the centre of their mass into a vibratory action in two directions only. Obviously, this vibration would cause a two-wheel contraption to self-destruct. This vibratory action is therefore neutralized by attaching two additional like wheels to the common plane, with their rotation advanced 90 degrees, so that the second pair impose their equal arc centripetal force at the same time the first set of wheels describe the two major and two minor arc centripetally imposed forces. This neutralizes all forces in all directions affecting the whole of the system's mass, and vibratory movement ceases (Figure 8).


Figure 8.

We observe the four wheels rotating symmetrically opposite, and the system remains stationary as all internally imposed centripetal forces affecting the whole of its mass are at present neutralized by equal and opposite like and unlike forces.

Role of magnetic force

Located directly prior to the wheel's axle centre, for each planet, are the two plunger poles of a unique but powerful electromagnet, between which each planet passes in an alternating manner. These poles are referred to as the planet trap; when activated, they impress a magnetic force upon the planet mass, causing its movement to cease prior to reaching the peak of the minor arc, and converting its momentum into kinetic energy which is transferred to the wheel's axle through the medium of the electromagnetic field. The kinetic force imposed upon the axle is along the direction of the common plane, where it is neutralized by equal and opposite like forces produced from within the opposing wheel (Figure 9).


Figure 9.

The magnetic force continues to be impressed upon the planet mass as the wheel rotates 90 degrees from the position of the peak of the minor arc. At this position the planetary gear has rotated 180 degrees and, although the crank throw is in the position where the planet would describe an equal arc, the planet is prohibited from describing it because the centre line of the planet mass is taken out of alignment with the planetary gear axle, at a throw of 105 degrees. Since this arc is not described, the normal centripetal force of this equal arc is neither produced nor imposed upon the axle. Therefore only two of the four equal arc centripetal forces are imposed upon the common plane (Figure 10).


Figure 10.

The captive planet is released when the throw reaches the peak position of the equal arc; as the wheel continues to rotate, the planetary gear, rotating at double the wheel rpm, causes the throw to pull the planet with a whip-like action, returning it to its orbit position before the major arc peak is reached. The new forces causing this whip-like action are imposed along the common plane, where they are neutralized by equal and opposite like forces produced at the same time by the opposing wheel.

Prohibiting two equal arcs from being described reduces the total centripetal force imposed in the direction of the equal arcs, perpendicular to the common plane, from 362.53 lbs to 181.26 lbs. A bias of vibration is therefore caused: the reinstated major arc and the trapped minor arc together produce an aggregated centripetal force of 362.53 lbs imposed in their direction, overcoming the opposing force of 181.26 lbs imposed at the same moment in the opposite direction (Figure 11).


Figure 11.

Therefore, a resultant force of 181.26 lbs is imposed upon the common plane, in a single direction away from the common plane toward the peak of the major arc, at a frequency of four 181 lbs force applications per revolution of the wheels.

At 1,000 rpm, there are 66 2/3 applications of 181 lbs force imposed in one direction per second, causing the system to be propelled with continuous acceleration. The magnet field must produce a shear force sufficient to overcome the kinetic value produced at the peak of the minor arc as its momentum is converted to kinetic energy; the strength of this kinetic force (v = 5.454 ft/sec relative to the common plane; m = 0.375 lb) is:

F = 2.7888 lbs

Experimental research has determined that the kinetic force is overcome by a doubled shear force. The shear force of the planet trap has now been developed to produce a range from 5 lbs to 40 lbs, utilizing a 12 volt supply to the magnet coils. The working prototype, being developed at personal expense, measures 18" x 18" x 10", with a gross weight of 85 lbs. A 12 volt battery supplies the magnet coils, and the prime force turning the wheels is a 3/8" chuck rechargeable Makita drill. This prototype is designed to operate in the range of 500 to 1,500 rpm, providing 40 lbs of thrust imposed 33 times per second, varying up to 400 lbs of thrust imposed 100 times per second, with instant reverse available.
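The pulse-rate arithmetic checks out directly; a small sketch (assuming the stated four force applications per wheel revolution):

# Thrust application frequency versus wheel speed (4 pulses per rev).
def pulses_per_second(rpm, pulses_per_rev=4):
    return pulses_per_rev * rpm / 60.0

for rpm in (500, 1000, 1500):
    print("%5d rpm -> %6.2f applications per second"
          % (rpm, pulses_per_second(rpm)))
# 500 -> 33.33, 1000 -> 66.67, 1500 -> 100.00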

EZKL advantages

The advantages of the EZKL system are noteworthy and attractive for numerous applications. In the case of recreational boating these are:

1. Elimination of accidental injury from a propeller blade
2. Reduced fuel consumption, which reduces hydrocarbon emissions
3. Reduced noise pollution, due to the lesser prime force required.

Similarly, advantages have been noted for helping manned gliders remain airborne, and for new designs of air-planing vehicles intended to exploit the continuous acceleration characteristic.

Suitability has been recognized for long-distance urban bus and tractor-trailer transportation, taking advantage of the 2,000 to 3,000 lb force capability. Fuel consumption is expected to be reduced by approximately 70%. Because the force can be instantly reversed, the vehicle could be pulled to a stop without relying upon friction between the rubber tires and the road surface, thereby increasing passenger and cargo safety, even on icy terrain.

The EZKL models

The photo to the left shows the simplified version built specifically to demonstrate that a bias of the centripetal forces does in fact produce a resultant linear propelling force converted from rotary motion. The photo to the right shows the first working model for practical transportation application.


The simplified version


The first working model for practical transportation application.


2-wheel system

The 2-wheel system (1983) demonstrated self-propulsion on foam across water, and (1989) swung to one side only in a pendulum test.


Troller

Troller, 30 - 40 rpm, 60 lbs., self-contained with 12V battery. Propelled the author in a 10' aluminium canoe 200' (360 lbs. total mass).


4-wheel pan

4-wheel pan (1988), built to observe the interaction of the internal masses during operation and to study the advance and retard of planet trapping and release at varying rates of rotation. Weight: 45 lbs.

The structuring of fluidic materials by crystals

Marcel J. Vogel, Ph.D.
Jennet Grover
Birthe Madsen
P.R.I.
1725 Little Orchard Street C
SAN JOSE, California 95125
United States of America

This paper deals with the formation in fluidic materials of an intermediate state, which is given the term "lyotropic mesophase system". This state, once achieved, may be detected by means of melting point determination under crossed-field polarized light and by spectrophotometry (ultraviolet, visible and infrared). We have further examined the water specimens with the Omega 5 Metatronics Machine, a modified radionic unit, and have been able to detect the differences between:

A) different crystal shapes (i.e., 6-sided, 8-sided and 13-sided)
B) crystals with and without a program
C) projecting colour through the Omega 5 to the fluid
D) information transfer from one crystal to another

Laboratory Equipment

1. Perkin-Elmer Model 267 Infrared Spectrophotometer
2. Cary Model #15 Ultraviolet/Visible Spectrophotometer
3. Fisher Surface Tensiomat Model #21
4. BE-Vincent Machine for pH and rH measurements
5. Fisher Electrophotometer #11
6. Amber Conductivity Meter # 64
7. Zeiss Ultraphot IIIB Microscope with Microspectrophotometric Attachment with Computer
8. Omega 5 Metatronics Machine (instrument designed to detect and measure fields stored in crystals)

Industrial Unit

The industrial unit consists of a coil of stainless steel tubing, 3/4" in diameter and of 7 turns, housed in a wooden box. In the centre of the coil the crystal is mounted in a replaceable mounting. The crystals used are the following:

A) In the laboratory unit (large unit):

1. 6-sided double-terminated crystal 6 1/4" x 2 1/4" (15.875 x 5.7 cm)
2. 8-sided double-terminated crystal 5 3/4" x 2" (14.6 x 5.1 cm)
3. 13-sided double-terminated crystal 5 3/4" x 2" (14.6 x 5.1 cm)

B) In the Omega 5 unit:

4. 6-sided double-terminated crystal 4 1/4" x 1 3/4" (10.8 x 4.45 cm)
5. 8-sided double-terminated crystal 4 1/2" x 1 1/2" (11.4 x 3.8 cm)

The fluids used are:

Water:
a) Alhambra Purified Water for all distilled water purposes, sodium-free
b) Tap water
c) Reverse Osmosis tap water

Wine:
Varietal red and white wines

Experimental Procedure

1000 cc samples of the three waters listed above were obtained. Each water was poured once around the large crystal in the industrial unit. The crystal had been cleared of any charge and program when it left the laboratory.

A sample of the processed water was collected (50 cc) in a plastic vessel and another sample was taken for spectrophotometry in its plastic cuvette.

The control runs were:

a) water samples as received with no treatment;
b) water samples run through industrial unit with crystals cleared;
c) water samples run through the industrial unit and projecting from the Omega 5 unit with the 6- and 8-sided charged crystals;
d) water samples run with the Omega 5 unit turned "off"; and,
e) water samples run with Omega 5 unit turned "on".

This series of runs establishes a set of baseline measurements which can be used for comparisons. Runs d) and e) repeat b) and c) as a repeatability check.

Colour Experimentation

Next, we broadcast colours with the Omega 5 through both the 6- and 8-sided crystals. The colours chosen were: F. Red #25, H. Orange #23, J. Yellow #12, L. Green #89, N. Blue #80, P. Purple #48, and R. Black Lite (4 watt UV lamp).

Between each colour transmission, a run was made with the Omega 5 unit turned "off". The run numbers assigned to these procedures were: G., I., K., M., O., Q., S.

The Omega 5 readings resulted from transmission of colour projected and amplified via a charged 6-sided crystal to a 6-sided crystal in the industrial unit, which picks up a charge. The results, measured one week after the runs were completed, are shown in Figure 1. They indicate the energy level stored in the water after treatment.


Figure 1. OM-5 readings 6 to 6 projection.

The following observations can be made:

a. Red suppresses any field stored in the water.
b. The fields increase in each colour until we have a maximum at the UV or black light region.
c. The greatest effect is with the Alhambra Purified Water.
d. The next greatest effect is with Reverse Osmosis water.
e. Energy can be transferred to tap water by projecting purple or UV.
f. When the projection was turned "off", the field decreased to 0 (zero).

This experiment offers a good indication that the fields which crystals emit can be transferred to and stored by fluids, especially water.

The same procedures were then followed in the laboratory with 8-sided crystals being used in the industrial unit and the OM-5 machine. A primary observation is an overall increase in the energy storage in the waters and the very high capacity of the UV mode to store energy in tap water.

Many of the events did not come down to zero. We are studying these results and feel that what may have happened is that there was a "contamination" of the waters we used from the previous broadcasting in the 6 to 6 experiment.

Taking each of the water samples and doing an off-line test gave the following results. Freshly acquired, untreated samples (controls) gave zero readings with the OM-5 instrument. When we broadcast purple #48 through the 8-sided crystal in the presence of these samples, the samples changed in value from the broadcasting. We then erased the water samples with the bulk demagnetizer and they all returned to their original values. These values are listed below (RO = reverse osmosis water sample):

Untreated samples, OM-5 readings (water controls):

ALH-  111  000  000
RO-   000  000  000
TAP-  000  000  000

Broadcasting charged 8-sided crystal with purple #48 for 10 seconds, then measured:

ALH-  454  111  111
RO-   454  000  000
TAP-  454  000  000

Demagnetized same samples and re-measured:

ALH-  111  000  000
RO-   000  000  000
TAP-  000  000  000


In summary of our 8 to 8 projection (Figure 2):

a. Alhambra Purified Water gave the strongest set of stored fields.
b. The greatest effect is with UV.
c. There was very little difference between RO and tap water.


Figure 2. OM-5 readings 8 to 8 projection.

When projecting from an 8-sided to a 13-sided crystal in the industrial unit, it was observed that there was a further shift away from the baseline and that, surprisingly, Red (F) did not suppress as it did in the other two sets of runs (Figure 3).


Figure 3. OM-5 readings 8 to 13 projection

The dotted lines represent areas that we did not run, as we ran out of Alhambra water. The highest reading took place with the projection of purple #48 (P) to the water. Readings were high on all three samples.

Microspectrophotometry of the Water Samples

A Zonax attachment to the Zeiss Ultraphot IIIB Microscope gave the ability to make transmission spectrophotometric readings of the water samples. A typical reading can be found in Figure 4.


Figure 4. Typical spectroscopic reading Reverse Osmosis, yellow #12, OM-5 projected water sample

From all the readings run on the samples, we took the following areas of the spectra and plotted the changes we noticed in the samples. At all times we compared in the same graph the three samples of water: Alhambra Purified water, reverse osmosis water and tap water. The wavelengths selected for evaluation were 440, 510, 610 and 690 nanometers, and the transmittance was plotted as a function of: a) change after colour transmission and b) charge or no charge to the crystal.


Figure 5. Transmittance values obtained with 6-sided crystal

The graph in Figure 5 gives the transmittance values for the four wavelengths as obtained with a 6-sided crystal. Note that tap water and the RO water match. The Alhambra water is different. These graphs set the standard for comparison to all subsequent work with the 6-sided crystal.


Figure 6. Transmittance values obtained with 8-sided crystal

In Figure 6, transmittance values were obtained with an 8-sided crystal. Here we find a very close matching of the three samples, with a deviation at 690 nm. This deviation was the same with the 6-sided crystal, with a reversal of absorbency at 690 nm for the Alhambra water. This is the standard of comparison for the 8-sided crystal.

With the application of a 13-sided crystal, we find a similar pattern, with the Alhambra water showing the highest value. The feeling is that this change results from the bottled water picking up a charge as we progressed in our experimentation. This is also true of the RO water which was in a container.

The Omega 5 readings on the control waters were:

A - Controls:

ALH-  111  000  000
RO-   000  000  000
TAP-  000  000  000

A - 8-sided samples:

ALH-  454  000  000
RO-   454  000  000
TAP-  454  000  000

A - 13-sided samples:

ALH-  454  4.54 x 10
RO-   454  111  111
TAP-  454  000  000

Notice the large variation in the field in the control water to start. This was not known at the start of the experiment, as these readings were all done one week after the series were run in the laboratory. This can help to account for the variation of the samples at the start of the experiment. As Fred Alan Wolf speaks of the Quantum Effect between humans and matter, so too we are seeing this Quantum Effect between matter itself.

Spinning of Water Around The Crystal

This set of results is a control for the broadcast of colours through each of the crystals. What was done here was to spin 1000 cc of each of the waters around the 6-, 8- and 13-sided crystals. Each of these crystals had been cleared of any charge, as far as we knew at that time. In these graphs, we summarize the effect of the 6-, 8- and 13-sided crystals.

In the Alhambra Purified Water (run B), the greatest deviation noted was with the 6-sided crystal. As we progressed to the 8- and 13-sided crystals, there seemed to be an imprinting in the equipment from the first run. The same thing occurs with the reverse osmosis water (run B); that is, a shift from the 6- to the 8- and 13-sided crystals. There is a greater variation between the three waters at 690 nm.

With tap water, the same effect occurs. The dates of each sample run are noted.


Figure 7. Transmittance values obtained with 13-sided crystal


Figure 8. Effect of spinning 1000 cc of water around 6-, 8- and 13-sided crystals


Figure 9. Imprinting in the equipment from first spinning "run" with reverse osmosis water


Figure 10. Imprinting in the equipment from first spinning "run" with tap water

Projecting colours

We next went on to projecting colours from a remote unit, the Omega 5, through to a 6- and an 8-sided crystal. These crystals were charged beforehand. We then transmitted red, orange, yellow, green, blue, purple and ultraviolet. Between each run, the Omega 5 was turned "off" and a blank run was made.

The 6 to 6 results on Alhambra Purified water show that, when compared to the B control, the red (F), purple (P) and yellow (J) stand out. These show the greatest deviation from the normal.

The 8 to 8 results on Alhambra Purified water indicate a significant drop in transmittance throughout the visible spectra of all samples compared to the standard Alhambra curve. We feel that this is an indication of a change of state in the water. With this phenomenon appearing in 6 separate samples, it could not be due to chance nor an accidental event. Red and orange showed the greatest deviation, while green was the lowest. When compared to the 6 to 6 runs, the same difference was noted.

The 8 to 13 series indicates transmission values normalizing around the standard. The projections of blue (M) and red (F) have increased absorbency, whereas yellow (J) shows a lowering of value.

In summary, in treating Alhambra water with projected colours through an 8-sided crystal, the most significant result was with the 8 to 8 projection, which gave an overall lowering of the absorbance of the water. This means that the sample became denser when compared to the other samples.

Colour treatment with Reverse Osmosis and tap waters

For the 6 to 6 cycle, when compared to the standard, all of the samples had lower absorbance than the standard run except for orange (H). The major deviation lower was purple (P). For the 8 to 8 run, there was a significant drop in absorbance in all the values of transmission from red to purple. This was very similar to the 8 to 8 run with Alhambra water. In the 8 to 13 runs, we are back to the same values as with the standard, as with the Alhambra water.

The 6 to 6 colour treatment with tap water indicates very little difference noticed against the standard. Purple (P) is the only run which stands out with a lower value. For the 8 to 8, all values were significantly lower in absorbance than the standard. This corresponded with the other two samples, and is consistent all through the run. With the 8 to 13, we are back to the same absorbance as with the previous waters. It is surprising to measure such consistency.

Figure 11. Projecting colours from OM-5 6 to 6 on Alhambra water


Figure 12. Projecting colours from OM-5 8 to 8 on Alhambra water


Figure 13. Projecting colours from OM-5 8 to 13 on Alhambra water


Figure 14. Projecting colours from OM-5 6 to 6 on reverse osmosis water


Figures 15, 16, 17. Projecting from OM-5: 8 to 8 and 8 to 13 (reverse osmosis water), and 8 to 8 on tap water


Figure 18, 19. pH and conductivity measurements on 8 to 8 "runs" with colour projections, with lab model (left) and with industrial unit (right)

In summary, in projecting 8 to 8 with the colours, there is a lowering of the absorbance in all three water samples. This is not true of the 6 to 6 or the 8 to 13 projections. The major source of influence here is the crystal geometry (8-sided double-terminated to 8-sided double-terminated crystal). There is a real communication link, which we have measured by spectrophotometry.

pH and Conductivity Measurements

We measured the pH, conductivity and surface tension of each of the water samples which we prepared. In the 8 to 8 colour run made with the small laboratory model projecting to an 8-sided crystal, there were significant pH changes in the Alhambra water with no change in the conductivity. There were large changes in conductivity in the tap and RO waters with little change in pH. We cannot comment on this at present.

In the industrial unit colour runs, we see in the 8 to 8 a repeat of the pH changes with the Alhambra water when projecting red (pH 6.0) and orange (pH 8.3). These units were in separate rooms, a good 15-20 feet (6 to 8 m) apart. Equally dramatic were the changes in conductivity of the tap and reverse osmosis waters with projected colour. Analysis and understanding will come from further experimentation.

Summary and Conclusions

1. Water can be modified in its conductivity and pH by spinning the fluid around a crystal tuned to a particular chromatic frequency.

2. This frequency can be transmitted from one location to another by using a matched pair of crystals.

3. Different groups of crystals produce significant variation in the characteristics of the water treated by them.

4. The fields that are stored in the water by crystals can be detected by Radionic type measurements.

5. These fields have a magnetic characteristic and can be erased by an AC bulk eraser.

6. The fields stored in water by crystals and colour are capable of doing useful work in purifying water and enhancing the flavour of wines, beverages and foodstuffs.

7. The fields that are created in the water are a permanent part of the system, unless deliberately erased.

8. Water may be boiled and not lose this charge.

Solar energy - The nature of natural and ''EWEC'' solar collectors

Philip S. Callahan
2016 N. W. 27th Street
GAINESVILLE, Florida 32605
United States of America

The physical symbol for energy is E, and is defined as that property of a system that is a measure of its capacity for doing work. It is hardly necessary to point out that every farm crop, grassland, forest and creature there is, plus all the seas of the world, depend on the sun for energy. It would be impossible to even begin to calculate a value for E that represents the amount of work in joules that spreads across the face of the Earth each second of each day. Imagine, for instance, the amount of work accomplished each day by each tree, each insect, each monkey, etc. etc., being multiplied by the thousands of different species, and the millions, or billions upon billions, of these "working" individuals. Despite this example of the power of the Sun, I was constantly informed by energy "experts" that there is not enough energy from the Sun to run the miserly few electrical systems in my home.

As the Japanese haiku poem by the gracious poet Shiko states:

"Through scarlet maple leaves. the western rays
Have set the finches' flitting wings ablaze."

The poet's eyes, of course, were the collectors of that flitting scarlet blaze, and the eye is composed of a lens and numerous spines, or rods and cones if you prefer.

In this paper I use the word spine to designate any natural object that takes the form of an elongated lens; in other words, a dielectric material (insulative substance) that geometrically may be considered as a lens taken at its centre point and pulled out into an elongated rod, cone, pyramid, etc., such as the helical cones on the legs of some mites (Figure 1).


Figure 1. An insect helical dielectric waveguide receiving antenna, called a spine or sensillum, responding to specific infrared wavelengths.

Spines on living systems are often set in complex patterns somewhat as in the case of manmade antennae (Figure 2).


Figure 2. Scanning electron photograph of tapered sensilla.

The amount of energy from the Sun impinging on collectors depends on the angle of the Sun above the horizon, the presence of water vapour and other atmospheric constituents, and atmospheric radiance itself (Figure 3).


Figure 3. Spectrum of absorbed solar energy as a function of atmospheric radiance and the zenith angle of the Sun above the horizon.

Solar cells presently have a conversion efficiency of about 15%. Thus, if 1,000 watts of sunlight fall upon one square meter of solar cell material, the resulting DC output is 150 watts.
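This arithmetic, together with the EWEC efficiency projections quoted below, can be put in a short sketch (illustrative only):

# DC output of a 1 square-metre collector under 1,000 W of sunlight.
def dc_output(irradiance_w, efficiency):
    return irradiance_w * efficiency

print(dc_output(1000, 0.15))  # silicon solar cell at 15%: 150 W
print(dc_output(1000, 0.50))  # EWEC lower projection (see below): 500 W
print(dc_output(1000, 0.70))  # EWEC upper projection (see below): 700 W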

The EWEC promises some major advantages over solar cells. Early calculations show that the EWEC may have a theoretical conversion efficiency of 50 - 70%. Mechanical flexibility appears inherent in the EWEC if the absorber elements are mounted on a flexible substrate, opening the way to manufacture by a sheet-roll process, a feat of economy not presently achieved with solar cells, which often crack and lose efficiency when flexed.

The biggest problem is not in miniaturizing the small collector spines, but rather in coupling into the collected infrared energy, and converting it to electricity. From my observations of insects and plants, there is no doubt this can be done by copying nature!

My experiments indicate that the hairs on the leaves of plants are really dielectric waveguide-antennae for collecting energy in the form of infrared or microwave signals from the Sun. Bailey and I have excellent models in plants and insects for our solar energy research. (The Sun has millions of narrow-band radiating emissions in all portions of the spectrum.)

I am suspicious of a researcher who is not infatuated with the elegance of nature, spines being a part of that elegance. Electromagnetic waves are as much a part of nature as are the insects and the plants themselves. I am disappointed in Hertz, for whom the electromagnetic waves are named. He was not alone in the discovery of radio waves - Hughes deserves equal recognition - and he once wrote to the Dresden Chamber of Commerce that all research into the characteristics of electromagnetic waves should be discouraged, because they could not be utilized to any good purpose. So be it!

A working model

The Sun gives off tremendous amounts of radiation in the 3 to 10 cm region. Lately, I have been utilizing a Gunn diode to generate 3 cm radiation and modeling wax-coated dielectric spines to test their efficiency as open resonator collectors of specific radiation bands (Figs. 4 & 5). I have demonstrated that long tapered spines, as occur on insect antennae, are directional (Fig. 4), whereas short curved spines, as occur on many plants (called trichomes), are omnidirectional (Fig. 5); they collect and amplify in a circular (omnidirectional) pattern. Such dielectric open resonators increase (amplify) the radiation by focusing, up to 4 times, e.g. from a baseline of 10 microamperes to between 30 and 45 microamperes (Figs. 4 & 5). As pointed out in our work, an array of such spines or cones, matched to a solar cell, would increase its efficiency from 15 or 20% to 70 or 80%, well within practical economic utilization.


Figures 4 and 5. Polar plots of long, smooth tapered and short, steeply bent sensilla. The long sensillum is a highly efficient directional end-aperture antenna with 4x amplification; the short one is omnidirectional with 2x amplification.

The atmosphere does indeed complicate the study of solar radiation considered as an energy source. The fact that the transmission of solar energy through the atmosphere is a complex problem should not be used, as it often is, to proclaim resonant solar collector systems impractical.

As I pointed out in my book "Tuning in to Nature" (1), the candle in many ways resembles the Sun. It gives off numerous narrow band visible and infrared emissions. If we tune to a candle from across a few yards of space, we should be able to tune to the Sun across a few million miles of space. To accomplish tuning in to the sun, we will without a doubt have to consider doing it with micrometer-long spines, or pits, in the same manner that plants and insects accomplish this.

Spine-like projections, considered as waveguides for collecting energy, have been thought of by other researchers. There are numerous historical examples of researchers in various parts of the world having the same idea at the same time; the best example is the simultaneous American and Soviet invention of the laser. My own work is also a good example of two researchers working on different projects, but with the same theoretical viewpoint: both of us were doing research on the same campus at the University of Florida, totally unaware of one another!

Professor Robert Bailey, of the Electrical Engineering Department, had designed and produced a metal model of what he calls an Electromagnetic Wave Energy Converter (EWEC). In 1973, he obtained a patent through NASA for his EWEC.

The drawing from his patent, which is a design for one that works in the microwave portion of the spectrum, is very similar to the tapered insect dielectric spines. When Professor Bailey was told about my work by another researcher who had heard my seminar before a meeting of electrical engineers, he immediately recognized the similarities of the problems involved in our respective research fields. He crossed the campus - a distance of approximately one mile - to tell me about his work. As fate would have it, we were slightly closer together than were the American and Soviet developers of the laser.

The EWEC was designed for the specific purpose of collecting the Sun's electromagnetic energy and converting it directly into electricity for domestic use.

Professor Bailey believes, as do I, that to efficiently tune into the Sun we must miniaturize his EWEC down to visible and infrared wavelengths, where the greater portion of the Sun's energy lies. This means utilizing dielectric materials, not metals, as dielectrics are the best resonators at these short wavelengths. We must design the EWEC as a dielectric antenna-collector of radiation.

The current chief solar energy conversion technique is the silicon solar cell, an established technology created for the space program. Sunlight impinging upon these smooth-surfaced cells causes electrons to flow, and thereby creates direct current.

The EWEC Converter

In essence, Professor Bailey's patent consists of a series of dielectric absorbers of solar radiation (Fig. 6). As he points out, there are three main considerations for such a solar collector modeled after natural insect and plant spine-like absorbers and focusers.


Figure 6. The EWEC (Electromagnetic Wave Energy Converter) US Patent Number 3,760,257 issued to Prof. Robert L. Bailey on September 18, 1973. It consists of a series of dielectric absorbers.

1. They are one of the principal elements of the invention. Without them, the invention is useless.

2. The least is known about such absorbers of any element used in the invention; considerably more precise scientific and engineering knowledge is needed before practical converters could be designed with assurance that they would work.

3. Knowledge learned about absorbers probably will have useful spinoffs to other areas, e.g. solar thermal absorbers for terrestrial and space use. There is some preliminary indication that better solar thermal absorbers could be made by not just "roughing up" the surface as is now done, but by causing the surface to have a coating of uniform height cones appropriately arrayed.

4. An additional spinoff from this research may accrue to the agricultural area of insect control. The understanding of absorbers from this research will have direct application to insect antennae and may lead to super-efficient electronic means of trapping insects, preventing crop and food destruction.

Information about the Electromagnetic Wave Energy Converter (EWEC) and its stage of development can be found in Professor Bailey's US Patent No. 3,760,257 (NASA), September 18, 1973. It is hoped other researchers will continue the development of this elegant concept.

References

1. Callahan, Philip S. Tuning in to Nature. Devin-Adair, Greenwich, 1975.

2. Callahan, Philip S. Dielectric waveguide modeling at 3.0 cm of the antenna sensilla of the lovebug, Plecia neartica Hardy. Applied Optics 24, April 15, 1985, pp. 1094-1097.

3. Engineering and Industrial Experiment Station, University of Florida, Gainesville. Final Report: Electromagnetic Wave Energy Conversion Research, to NASA Goddard Space Flight Center, Greenbelt, MD. UF Project 2451-E43, September 30, 1975. 49 p.

The electro-resonance generator

Leon R. Dragone
P.O. Box 175
SPRINGFIELD, Massachusetts 01101
United States of America

The basic equation governing the operation of this device is:

L(dI/dt) + Rt·I + Q/C = Eb - Ep      (1)

where:

L is the inductance of the coil,

Rt is the total resistance, including the coil's, load's and battery's,

Eb is the battery voltage,

C is the total capacitance in the circuit, parasitic and otherwise,

Q is the charge on that capacitance (dQ/dt = I),

Ep is the voltage across the arc.

Now it is an established fact that an arc has a negative dynamic resistance. Much empirical work on the electrical characteristics of arcs was carried out during the period 1920-40. One typical voltage-current relation for an arc is:

Ep = A + B/I^n

Using this, we can write (1) as:

L(dI/dt) + Rt·I + Q/C = Eb - A - B/I^n      (2)

It is clear that the power consumed as Joule heating of the resistances in the circuit is P = Rt·I², and the electrical energy consumed as heat during the time Δt is:

W = ∫ Rt·I² dt      (the integral being taken over Δt)
Now let us differentiate (2) with respect to time:

L(d²I/dt²) + Rt(dI/dt) + I/C = (nB/I^(n+1))(dI/dt)      (3)

This can be put into the form (set n = 1 for simplicity):

L(d²I/dt²) + (Rt - B/I²)(dI/dt) + I/C = 0      (4)

Now:

L(d²I/dt²) + I/C = 0

is the equation of an undamped oscillator, and the term

(Rt - B/I²)(dI/dt)

is the damping term. It is clear that the coefficient Rt - B/I² can be positive (dissipative), zero (superconductive), or negative. If negative, the second term acts to produce a negative Joule heating effect; i.e. heat is absorbed from the surroundings by the circuit, and produces a spontaneous increase of the current in the circuit. Clearly, this is in violation of the Second Law of Thermodynamics. When I increases to the point where Rt - B/I² > 0, the oscillations become damped out. The decaying exponential

exp[-(Rt - B/I²)t/2L]

is actually observed on the scope during operation of our device, and we can write:

I = I0·e^(-ct)·sin(ωt)      (c = constant)

as an empirical equation of this dissipative part of our current oscillations. During these decaying oscillations, the overall resistance in the circuit acts to damp out the current. This is a conventional and well-understood phenomenon; there is no free energy production in this mode.

The new or anomalous behaviour occurs when Rt - B/I² < 0, for then the so-called dissipative term clearly acts to sustain and build the amplitude of the current oscillations. (In this part of our analysis, only positive currents are used; i.e., I > 0.) This increase in current amplitude is due to the negative resistance Rt - B/I² < 0; since we identify resistance times current squared as a Joule heating effect, here it is negative: instead of dissipating electrical energy as heat, we are absorbing heat and converting it into electrical energy! To find the power dissipated, we write:

P = (Rt - B/I²)·I²

It is clear that if

Rt < B/I²

the coefficient of I² is negative, and so the power dissipated is negative. This means that the circuit is suffering a Joule cooling effect and adding electrical power to the system. This electrical power is evidenced by an increasing amplitude of the current (an experimentally observed fact), the circuit absorbing heat from the surroundings. The heat energy produced in the circuit in time Δt is:

Heat = Q = ∫ (Rt·I² - B) dt      (the integral being taken over Δt)

and when the second term predominates over the first, i.e., when I is small,

Heat = Q < 0

which shows that the circuit is producing negative heat; i.e., it is cooling, or taking in heat.

This heat is converted into an increase in amplitude of the current, i.e., into higher grade electrical energy.

This theoretical description of the system fits well with our experimental observations and can act as a guideline for further improvements. However, the system seems to demonstrate other anomalies. For example, while one part of the circuit shows negative current flow, another part simultaneously shows positive current flow. Also, there is strong radiative coupling between various parts of the system and the surroundings. Further, two types of arc have been experimentally discerned: the cool arc, white in colour, with which the anomalous current is associated; and the hot arc, blue in colour, with which no oscillation or anomalous current occurs.

Hence the identification of

Ep = A + B/I^n

as the fundamental voltage-current relation for our arc may be premature. However, the basic idea of a negative resistance (dynamic or otherwise) is fundamental to this system. These concepts are by no means new, and the literature contains much on this topic. For example, Van der Pol's equation, which governs the current flow in a triode, is (in normalized form):

d²I/dt² - ε(1 - I²)(dI/dt) + I = 0      (5)

Here the second term can be positive, zero, or negative, leading to damped, undamped, or building oscillatory solutions. We can calculate the negative or free energy directly from (5). The second, or damping, term is:

-ε(1 - I²)(dI/dt)

If we integrate this term with respect to time, we get the voltage:

V = -ε(I - I³/3)

This is the voltage across the circuit element which gives rise to the damping term. The energy delivered into this element in time Δt is:

E_in = ∫ V·I dt = -ε ∫ (I² - I⁴/3) dt      (the integral being taken over Δt)

Clearly if I > 0 is small, E_in < 0 is negative, showing that the circuit element is putting out electrical energy, i.e. free energy. This calculation is an example of non-dynamic negative resistance and shows why it is so important to discern the correct voltage-current relation for our arc: the measure of the free energy produced depends on a knowledge of this relation. More research is needed to obtain the correct empirical relation between voltage and current for our arc.
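This sign claim can be verified symbolically over one oscillation period; a sketch (assuming the standard Van der Pol form used above and a small sinusoidal current):

import sympy as sp

# Energy absorbed by the Van der Pol damping element over one period
# for I = a*sin(t). A negative result means the element delivers
# energy to the rest of the circuit.
t = sp.symbols('t')
a, eps = sp.symbols('a epsilon', positive=True)
I = a * sp.sin(t)
V = -eps * (I - I ** 3 / 3)            # integral of the damping term
E_in = sp.integrate(V * I, (t, 0, 2 * sp.pi))
print(sp.factor(E_in))  # equals pi*eps*a**2*(a**2 - 4)/4: negative for a < 2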


Electric arc

Advances with Viktor Schauberger's implosion system

Borge Frokjaer-Jensen
Danish Institute of Ecological Techniques
Ellebuen 21
2950 VEDBAEK
Denmark

The following continues the presentation made at the 1st International Symposium on Non-Conventional Energy Technology, held in Toronto in 1981. At that time I emphasized the implosion theory and some advances made by the Austrian scientist Viktor Schauberger. Now I am focusing on one of his designs, a water cleansing device, which has since been successfully developed.

When Viktor Schauberger died in 1958, at the age of 73, it was after a bitter fight with his contemporary scientific community. His ideas about the structure and function of nature were seen as controversial and difficult to comprehend. He advanced new ideas and theories about the essence of water, about plants, about agriculture and forestry, and about the energy supply. Yet his ideas did not die with him; interest in his theories is spreading.

Literature about Schauberger's inventions and theories is now available. In my opinion, the best guide is Bertil Gustavsson's book "Connections between the Implosion Theory and the Classic Sciences"; unfortunately, this book is in Swedish only. Another book, in German, is "Die Geniale Bewegungskraft" (The ingenious power of movement) by Schauberger's friend and collaborator Aloys Kokaly, who is also editor of "Kosmische Evolution"; this periodical gives firsthand information on natural biological processes observed by Schauberger. Still another recommended book is written by his friend Olof Alexandersson: "Living Water", Turnstone Press, 1982. Finally, the German periodical "Mensch und Technik - Naturgemass" has a special issue on Viktor and Walter Schauberger (issue H4, 1982, written by Harthun, Fischer, Neumann and Wieseke).

Schauberger noted that everything in nature has bipolarity: light and darkness; warm and cold; male and female; pressure and suction, etc. Between these opposites exists a constant flow of elevating qualities running from the lower to the higher one in a logarithmically spiralling movement. There are two spiralling movements: one outward-going, sucking energy away from the centre in eccentric centrifugence, and one inward-going, which presses energy in concentric centripetence towards the condensed centre.


Movements

Natural modes of motion

The core philosophy of Schauberger could be expressed as follows: force is life, and the secret of life is bipolarity. Without opposite poles in nature there is no attraction and repulsion; without attraction and repulsion there is no movement; and without movement there is no life.

Each movement has its characteristic qualities. The outgoing movement is accompanied by increased friction and increased pressure, with a rise of temperature, biological breakdown and decomposition; it is characterized by the processes of explosion, expansion and gravitation. The in-going movement is accompanied by decreased friction and decreased pressure, combined with decreased temperature and biological improvement; the corresponding processes are implosion, impansion and levitation.

Even when the two movements are naturally balanced as two opposing forces of equal strength, the inwards-going concentric spiral dominates in its reconstructive role.

The spiral movement may turn either counter-clockwise or clockwise. When the motion spirals counter-clockwise, it has a structuring and formative influence on the material (fluid or gas) flowing in the implosion spiral. If it turns the opposite way, its function is decomposing and disintegrative. The greatest vitalization is obtained when both spirals move inside each other. Such a resultant, or double concentric, spiral movement is called the implosion spiral and constitutes the essence of Schauberger's implosion theory.

Schauberger used this theory to explain certain natural phenomena such as typhoons, whirlpools, certain shell forms, the meandering of waters, and new forms of energy generators (including his levitating implosion disk).

The water cleansing device

The water-cleansing device is based on the implosion process. Figure 2 shows the concept presented at the 1981 Toronto symposium. Water runs from the left into an egg-shaped copper bowl through small nozzles in a spiralling copper tube. The nozzles rotate the water jets, and the spirally twisted copper tube causes the whole body of water to start a spiralling whirl, which spins in the bowl and down through the outlet tube. Experiments showed that impurities such as iron filings were pressed together into small balls in the centre of the tube by the impansion force. Figure 3 shows the copper spiral tube used in the prototypes. Unfortunately, this type of spiral tube could not give sufficient water velocity in the vortex.


Figure 3.

Figure 4 shows a similar product, also made of copper. This is dangerous if such an appliance operates continuously: impurities in the water may combine with the copper and form toxic byproducts.


Figure 4.

Figure 5 shows the current water-cleansing device, now in production. One of the participants in our research group, Irma Hoyrup, visualized a golden pear-shaped water-cleansing device, seen both externally and internally. In our device, water is sprayed into a golden pear-shaped bowl and forms a counter-clockwise spiralling rotation, the implosion movement, around an electrode in a conical golden tube. The water undergoes the changes pointed out by Schauberger: a reduced temperature and an increased vitality. Treated water is still spiralling when it leaves the device, as can be shown photographically at shutter speeds of 1/1000 of a second.


Figure 5.

The mechanical energy for the vitalizing process is supplied by the water tap pressure. The normal working range is a pressure of 3-7 bar, which corresponds approximately to 15-40 litres of water per minute, an advantage over other water-cleansing devices on the market. In the centre of the golden bowl there is an electrode made of layers of copper, silver and gold. This electrode is soldered to the bowl in order to give a good electrical connection. The electrical energy for the cleansing function is tapped directly from the whirling water: the faster the implosion whirl, the stronger the electric current through the spiralling water in the implosion centre around the structuring electrode. Measurements carried out on water containing average pollution levels show a 140 mV potential between the negative copper/silver/gold electrode and the positive copper/gold bowl. The electrical potential depends on the rotational speed in the spiral, and the current depends on the degree of pollution, reaching as high as a few milliamperes in averagely polluted water. Bacteria and viruses are effectively killed by this arrangement. Moreover, all the inner surfaces of the device are gold-plated and therefore chemically inert. The silver electrode's surface is treated to maximize the active area. As the water turbulence is broken up by the granulated silver surface, microscopic air bubbles are formed in the implosion spiral. This mechanical treatment causes the dissolved lime to decompose into carbon dioxide, which is released into the air, and calcium carbonate, which crystallizes as aragonite and therefore does not precipitate as lime scale.
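For a sense of scale, the electrical side of this action can be estimated from the figures just quoted. The short Python sketch below is an illustration only; the 3 mA value is an assumption taken from the "few milliamperes" upper end mentioned above.

    # Scale of the galvanic action in the device, from the figures quoted
    # above: 140 mV between electrode and bowl, and a pollution-dependent
    # current of up to a few milliamperes (3 mA assumed here).
    potential_v = 0.140      # measured potential, volts
    current_a = 0.003        # assumed current, amperes (quoted upper end)
    power_w = potential_v * current_a
    print(f"Electrical power through the water: {power_w * 1000:.2f} mW")
    # -> about 0.42 mW; the cleansing current is electrochemically tiny
    # compared with the mechanical energy of the 3-7 bar tap pressure.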

Finally, the water-cleansing and water-vitalizing function depends on a special interaction between pure materials: gold, silver and copper, the constituent parts of the device. Such interaction between gold and copper on the surface of a gilded copper plate was apparently known to the alchemists. Irma Hoyrup informed us about this layering process and the technique of properly fashioning the golden copper bowl.

Test results

This device has no filter, no reverse osmosis membrane and no magnets attached. Water pours out at a rate of up to 50 litres per minute. But does it cleanse the water? In Denmark, traditional chemical and biological analyses are made by Stein's Laboratories, which is authorized by the Danish government. Figure 7 shows the certificate of an analysis of polluted lake water before and after passing through the device.


Certificate of an analysis.

The main function of the device is to kill bacteria. Colon bacillus and faeces bacillus counts are reduced to about 15% of their originally very high values. Several biologists have confirmed that these results are unique, especially since there is no filter or semipermeable membrane in the device. Furthermore, the pH value is increased from 7.3 to 7.5, i.e. towards greater alkalinity.
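The 15% figure follows directly from the laboratory counts reproduced in Table 1 below; a one-line check:

    # Check of the quoted reduction, using the colon bacillus counts
    # from Table 1 (542 before, 79 after, per 100 ml).
    before, after = 542, 79
    print(f"Remaining fraction: {after / before:.1%}")   # -> 14.6%, i.e. about 15%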

Vitalised water

Viktor Schauberger stated that water could be "dead" or "alive". Water becomes dead after passing through kilometres of straight iron pipes; such water loses its biological effects and does not support living organisms. Living water may be found in wells and in unpolluted rivers with meandering flows. And water spiralled through an implosion whirl is biologically active.

Table 1

Extract of the results obtained by Stein's Laboratory with the improved Schauberger-type water-cleansing device

Reduction of minerals:

Permanganate, calcium, sodium, bicarbonate, sulphate, nitrate, nitrite and phosphorus (these reductions are minor)

Reduction of bacteriological counts:

Colon bacillus:       542 to 79 per 100 ml
Faeces bacillus:      542 to 79 per 100 ml
Number of germs:      7200 to 4500 per ml
Fluorescent germs:    400 to 100 per ml
pH value:             7.3 to 7.5

There exist several methods of measuring biological activity. We have used more than 100 Kirlian photographs to evaluate water before and after passing through the device. In Figure 8, the photo on the left shows the tap water before treatment.


Pretreated tap water.

The treated water shows at least three important differences:

1) the blue belt of light around the treated water is broader than around the untreated water;
2) the sparks penetrating the blue belt of light are more indistinct or fuzzy in the tap water; and
3) the sparks are measured to be approximately 25% longer around the treated water.

The little red circle in these photos indicates where the lead from the high-voltage source touched the drop of water placed on the back of the film. The exposure is made with 10 short single sparks of 15 kilovolts from a piezoelectric crystal. When specialists in Kirlian photography are asked to interpret these results, they concur that the water has improved in biological quality. Such water appears to have been structured like water treated by the method employed by Dr. Marcel Vogel.

The water has also been tested by the Helix Centre for Liquid Flow Research in Odense. This research centre applies non-conventional methods, such as those developed by Dr. Rudolf Steiner and kinesiological muscle tests, which have been used in the following analyses. In Table 2, the left column indicates the ideal rating; then follow the analyses of tap water, reverse osmosis water, and water from our device. Water treated by our device parallels the ideal rating.

Table 2

Comparative analysis of water for vitality,

Helix Centre, Odense
(June 15, 1988 data)


                 Ideal      Tap       R O       Device

COLD TAP WATER

Parathyroid        +         +         +         +
Overstress         +         /         +         +
Allergy            /         +         /         /
Thymus          100/100      /      94/100    100/100

BOILED WATER FROM COLD TAP WATER SOURCE

Parathyroid        +         /         +         +
Overstress         +         /         +         +
Allergy            /         +         /         /
Thymus          100/100      /      92/100    100/100

HOT TAP WATER

Parathyroid        +         +        n a        +
Overstress         +         /        n a        /
Allergy            /         +        n a        /
Thymus          100/100      /        n a      10/100

The device appears to perform better than a reverse osmosis unit, the best commercial competitor. The analysis of hot tap water indicates that such water is so spoiled that it is impossible to regenerate, even with our device.

Sprouting tests

A good way of testing the biological vitality of water is to grow seeds in it. Different seed varieties have been used, and the overall observation is that sprouts from seeds grown in treated water are somewhat stronger and thicker than sprouts grown in tap water. In Figure 9, the left photo shows beans grown for five days in tap water; notice the mould growth, and the smell of putrefaction was pronounced. Beans grown in treated water have sprouts that are stronger and longer, with no sign of mould and no bad smell.


Beans grown.

Seeds grown in treated water consume about 20% less water than seeds grown in tap water. These differences, measured after 20 hours, are statistically significant (99.9% level). It is possible that the seeds require less of the biologically active water to grow.

In a "Future Technology" gathering in Berlin, Dr. Hacheney from Dortmund, while presenting a paper on Kirlian photography of water, said: "We are living in a community where it is necessary to have a centralized supply of water, even though we know that the quality of tap water is not good for drinking. Therefore, we are forced to develop a device for water improvement which can be installed in the water supply at the consumers' end". Preliminary tests show that we really have got something special. Furthermore, it gives an excellent reduction of microorganisms.

Plastic engine technology

Covert Harris
PETCO
KINGSTON, Ontario
Canada

This paper will take a look at the short history of PETCO (Plastic Energy Technology Corporation), a leading firm in the development of plastic engines, as well as a quick look at the many advantages of portable plastic engines over the conventional two-cycle engines presently used in applications of 5 H.P. and under.

PETCO was founded in July 1985 by its first president, Gerry McKendry, along with senior vice-president and director of engineering, Lee Lilley. Lilley, the technical mastermind of PETCO, used to design and drive race cars, and thus has a natural understanding of and appreciation for the importance of producing light parts, since lighter means faster, which is the bottom line in car racing. Because of this, he first started experimenting with injection-moulded thermoplastics, and was able to move up to seventh place overall on the Formula I Grand Prix circuit. Lilley later designed an engine that runs 200 degrees cooler than conventional two-cycle engines. As thermodynamic efficiency improves, however, engine temperature may become less of a factor, even though plastic engines run cooler than conventional ones.

After building its first successful prototype in May of 1986, PETCO established a plant in Kingston. The facilities were quite small, only 33,000 square feet, with a capacity of 250,000 units annually. Even with the acquisition of a plastics company, which greatly improved self-sufficiency, PETCO was limited in its potential by the small size of the plant. In the fall of 1988, however, a new 157,000 square ft. plant was opened, capable of producing 2 million units per year.

Figure 1 shows the basic design of a small working plastic engine. The connecting rod, crankcase and gearcases, everything but the cylinder, piston and crankshaft, are plastic; since this design, the crankshaft has been made of plastic as well. The standard two-cycle engine works in two strokes: the piston goes up and compresses the gases, the gases are burned, the piston descends, forcing out the burnt gas, then goes up again bringing in fresh gas, and so on. Most of these engines bring in the gas on one side of the cylinder, about midway, and expel the gas on the other side, again about midway. This makes it inevitable that the two will mix to some degree, so it is impossible to keep the fresh gas completely pure. Lilley instead decided to run the gas into the crankcase below, timed and synchronized by a rotary poppet valve, and then into the crankshaft through a series of holes, so that the air from outside would actually cool the crankshaft from 300 degrees.


Plastic engine.

The holes also help atomize the gas further, so it burns even more easily than in a conventional engine. From the poppet valve in the head, fuel and incoming air are sprayed directly at the bottom of the sparkplug, generally the hottest part of the engine. That squirting of air actually cools the engine at the top without harming its operating efficiency. As the cool gases drift towards the bottom, they push the spent gases out through the end holes, expelling almost all of the burnt gas before it can contaminate the fresh charge.

Efficient, cleaner

Due to this greater purity, the gas can also burn more efficiently: instead of burning about 50% of the fuel, as conventional internal combustion engines do, it burns 95%. This, in turn, creates carbon dioxide instead of carbon monoxide, so the engine is not only environmentally cleaner but also utilizes its fuel more thoroughly.
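As a rough sketch of what these burn-efficiency figures would imply for fuel consumption (the percentages are the ones claimed above; all other engine losses are ignored):

    # Fuel needed for the same combustion energy at the two quoted burn
    # efficiencies: 50% for a conventional two-cycle engine, 95% claimed
    # for the plastic engine.  Combustion completeness only is considered.
    eff_conventional, eff_plastic = 0.50, 0.95
    fuel_ratio = eff_conventional / eff_plastic
    print(f"Fuel required relative to a conventional engine: {fuel_ratio:.0%}")
    # -> about 53%, i.e. nearly half the fuel for the same burned energy.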

Plastic offers further advantages. Injection-moulded parts can hold tighter tolerances than die-cast metal parts, since the plastic needs no machining or grinding, processes that are less consistent than injection moulding. At this point we can make engines up to 70% plastic, and we are aiming for a much higher percentage in the future. This allows manufacturers of products that use light engines, such as lawnmowers and chainsaws, to avoid having to design around the engine, since the plastic is more reliable.

Heat also doesn't take the toll on plastic engines that it does on standard two-cycle internal combustion engines: the plastic doesn't crack as easily, and it won't suffer from structural fatigue. Noise is also better contained within plastic, unlike metal, which tends to transmit and even amplify it.

On the question of lifetime, our first prototype at PETCO ran for more than 6,000 hours before we dismantled it; even then, it was dismantled not because of deterioration but because of obsolescence. A standard two-cycle engine in the 2 to 5 H.P. range will rarely last longer than 300 hours.

There are generally three different plastics used by PETCO in building our engines, normally supplied to us by I.C.I. (International Chemicals, Inc) and by our wholly-owned subsidiary, Paragon Plastics (1983) Limited. The main one, known as Feutron, is a compound of reinforcing glass fibre and polyetherimide thermoplastic resin. It is actually 50% glass, in 10 mm fibres, which gives the plastic great strength and excellent resistance to vibration, as well as heat resistance to 257 degrees Celsius. It also offers higher resistance to fuels and fuel mixtures.

So the definite advantages of plastic engines and the practicality of their applications make this a wide-open field for advancement in the decades to come. At present, use has been restricted to portable motors of rarely more than 20 H.P., and for the most part no more than 5 H.P. However, as plastic engines become even more energy- and cost-efficient, larger applications will undoubtedly be explored.

A novel D.C. motor

Gareth Jones
Talcen Eiddew, Carreglefn
AMLWCH, LL68 OPW
United Kingdom

This paper describes the design of a novel electric motor with unique performance characteristics. The motor was designed originally as a scientific experiment to verify certain conclusions from studies into the behaviour and properties of magnetic fields, and the approach to electric motor design is therefore, to say the least, unorthodox. It is believed, however, that one would not arrive at this very successful design by adhering rigidly to the tenets of electromagnetic theory.

Indeed, such is the present state of knowledge and understanding of magnetic fields that all too often an electromagnetic machine designed in accordance with accepted theory fails to live up to expectations in practice. This is such a common occurrence that electric motor manufacturers show a marked reluctance to develop radically new machines, preferring instead to refine tried and tested designs, some of which are over a century old.

This paper also describes how existing doubly excited machines such as alternators and synchronous motors can be converted easily and cheaply to DC motors with a performance at least comparable to that of their conventional counterparts. Remarkably, therefore, electric machine manufacturers have been making alternators and DC motors for over a century without knowing that one machine, the alternator, the cheapest and easiest to make, could fulfil both roles. It should also be borne in mind that a new theory does not make a new design possible: it has always been possible to build the motor described in this paper; only theoretical predictions prevented it from happening sooner.

Overview of the Electric Motor

Electric motors are energy converters, converting electrical energy to mechanical energy in two stages: first, electrical energy is converted into magnetic energy; secondly, the magnetic energy is converted into mechanical energy. These conversions can be achieved in numerous ways, and consequently there are numerous designs of electric motor.

All known electric motors have one property in common: mechanical energy is produced from the interaction between two magnetic fields, a primary field and a secondary field. It is not always obvious that two magnetic fields are involved, but closer examination will invariably prove this to be the case. Doubtful cases, such as reluctance motors, can often be resolved by considering a coil wound around a component and establishing whether or not an e.m.f. would be induced in the coil.

The primary or main field is produced from a winding connected either directly to the supply or through switches such as brushes or solid state devices. The secondary field can be produced in three different ways:

a) by induction from the main magnetic field;
b) by having a second winding connected to the supply;
c) from a permanent magnet.

Electric motors are often categorised by the method used to produce the fields: motors in which the secondary field is induced from the primary field, for example, are known as induction motors, while those that produce a second magnetic field from a secondary winding or a permanent magnet are called doubly excited motors.

To design an electric motor from first principles we need to consider three activities:

1) producing the main or primary magnetic field,
2) producing the secondary magnetic field,
3) producing a mechanical torque from the interaction between the two fields.

The Primary Field

To provide a continuous rotating torque, the current in the main field winding of every multi-pole electric motor must be interrupted or reversed at the end of each half cycle. With a.c. supplies this is achieved automatically, but in DC machines the current has to be switched, or commutated, either by mechanical switches such as brushes and a commutator, or by electronic switches such as thyristors and transistors.

At first glance, therefore, a primary field connected to an AC supply would appear to be the "best" main field winding, as this obviates the need for switching devices, and for most domestic and industrial applications this is true. Many applications, however, require a variable speed over a wide range; since the frequency of the supply is fixed, this is not easily achieved with AC motors, which have to run at or near synchronous speed. In general, AC motors are constant-speed, low-torque machines.

To produce a variable speed drive with full torque throughout the speed range, the main field winding of an electric motor has to be switched when the two fields are in a predetermined position relative to one another. If we take the extreme case, where a high continuous torque is required under stall conditions, then clearly a DC motor offers the best solution.

From the performance viewpoint, conventional doubly excited brushed DC motors are probably the "best" all-round motors: the speed can be varied over a wide range or, if the application requires, maintained constant within fine limits over the full power range by incremental control of the field current. By connecting the field windings in series, in parallel, or in a mixture of the two, called a compound connection, a very wide range of operating characteristics can be obtained. Of all hitherto known motor designs, this one offers the greatest variety of performance characteristics as well as the highest torque from a given volume.

With so many advantages in its favour, and given the ease and low cost of converting AC supplies to DC, one would expect conventional DC motors to be the most popular drive for both domestic and industrial applications. In fact this is not the case, because conventional DC motors suffer from very serious design limitations which make them expensive to manufacture compared with an AC induction motor; moreover, the brushes and commutator need regular maintenance, which adds to operating costs and decreases reliability.

The Specification

From the performance viewpoint the conventional brushed DC motor is the motor to emulate, and the specification for this design is a motor that:

a) will perform as well if not better than a conventional brushed DC motor,
b) is cheaper to manufacture than conventional DC motors,
c) is relatively maintenance-free.

To make the machine cheaper to manufacture and to reduce maintenance, it is desirable to have the main field on the stator, for ease of access to the power supply; otherwise, with the main field on the rotor, brushes would be needed to carry the full load power, and, as well as taking up space, these components require regular maintenance.

To provide continuous rotation and to avoid large variations in torque, it is desirable to have a rotating magnetic field on the stator. This is achieved by switching the stator windings in sequence, as in AC motors, and the most efficient way of achieving the effect is with three-phase windings. Three-phase windings are standard industrial windings and can be wound easily on automatic winding machines, which further reduces production costs.

For a motor operating on a DC supply, the secondary field cannot be induced. Besides, having a separate secondary field allows incremental changes in performance, a useful characteristic in many applications, so a separately excited secondary field can be desirable even though brushes and slip-rings would be required. Alternatively, the secondary field can be produced by permanent magnets fixed to the rotor. In this design, therefore, the secondary field can be produced either by a wound field or by permanent magnets, to suit the application.

Construction of the Jones Motor

The basic construction of the motor is now described and a typical example is illustrated in Figure 1.


Jones motor.

The stator is fabricated from steel laminations with semi-enclosed slots to accommodate the windings, and is wound with three-phase windings connected in a wye configuration. The manufacture and cost of the stator are therefore the same as for a three-phase AC motor.

The rotor magnetic field can be produced from either a winding connected to the supply through brushes and slip-rings as in rotating field alternators, or from permanent magnets attached to the rotor as shown in Figure 1.

Hitherto, it has not been possible to commutate large windings such as those found on AC motors, because of the very high induced e.m.f.'s involved. When the current passing through a coil is interrupted, a high voltage is induced in the coil, a phenomenon put to good use in boiler igniters and automobile ignition systems; in electric motors, however, the resulting spark often leads to catastrophic failure, by flashover in brushed DC motors and by rupture in solid state devices.

Commutation is effected in brushed DC motors by short-circuiting the segments to which the coil is connected, producing the desired rapid decrease in current and flux in the coil. Unfortunately, this rapid change in magnetic field induces a very undesirable large e.m.f. in the coil, proportional to the rate of change of current, and if commutation is not completed by the time the short circuit is removed, sparking will occur, with consequent damage to the brushes and commutator.

It has been established that sparkless commutation cannot be achieved unless the inductive e.m.f. is limited to about 10 volts per coil and the mean voltage between commutator segments does not exceed about 20 volts. Clearly, these factors impose a severe limitation on the design of conventional DC motors: even without a factor of safety, a 240 V DC motor requires a minimum of 24 segments on the commutator, and since each segment has to carry the full load current, the magnitude of the problem can be appreciated.
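The 24-segment figure follows from the quoted 20-volt limit once it is recalled that the armature winding forms a closed loop, so that the supply voltage appears across each half of the commutator. The sketch below assumes that standard machine-design rule (mean volts per segment = 2V/N):

    import math

    # Minimum commutator segments for sparkless commutation, using the
    # limits quoted above.  The armature winding is a closed loop, so the
    # mean voltage between adjacent segments is 2*V/N (standard rule,
    # assumed here).
    v_supply = 240.0        # supply voltage, volts
    v_segment_max = 20.0    # quoted limit on mean volts between segments
    n_min = math.ceil(2 * v_supply / v_segment_max)
    print(f"Minimum segments: {n_min}")   # -> 24, as stated in the text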

These design restrictions are largely overcome in conventional DC motors by producing the main field from several small coils connected to several commutator segments, rather than from a few large coils; in that way the induced e.m.f. per coil is reduced to a safe value. This solution, however, dictates the characteristic rotating-armature design of conventional DC motors, with a large number of coils connected to many commutator segments, because it would be impractical to have a large number of coils on the stator, particularly in large motors.

The question, therefore, arises as to how commutation is going to be achieved in the present design.

How it Works

To answer this question the operating principle of the Jones motor will be described in simple terms with reference to Figure 2 which shows a simple rotating field two pole motor. With the rotor poles near alignment with the stator poles, the stator winding can be energized in one of two directions, either to produce stator poles which attract the rotor poles as shown in Figure 2a, or to produce stator poles which repel the rotor poles as shown in Figure 2b. The question is, "which method will produce the best motor?"


Rotating field.

From conventional concepts, the "best" method would be to energize the stator poles to attract the rotor poles, and this conclusion can be established by considering the energy stored in two coils La and Lb having mutual inductance M. The energy stored is:

E = ½La·Ia² + ½Lb·Ib² ± M·Ia·Ib   (1)

where

M = k·√(La·Lb)

and k = coupling coefficient.

When the coils are energized to attract, the energy (Ea) stored in the circuit is:


Ea = ½La·Ia² + ½Lb·Ib² + M·Ia·Ib   (2)

and the mutual inductance M is a maximum when the poles are in alignment, and consequently, at the end of a cycle, when the force of attraction is a maximum and we need to commutate the windings, the stored energy in the windings is also a maximum. When the coils are energized to attract, therefore, the greater the force of attraction, the greater the energy to be dissipated.

From theoretical considerations this is a logical conclusion: since the electrical energy is a maximum when the coils are attracting, it appears that the mechanical energy is also a maximum, and that the greater the electrical energy converted, the greater the mechanical energy produced. The "best" method must therefore be to energize the stator poles to attract the rotor poles, as shown in Figure 2a, and all classical electric motors operate on this principle.

When the coils are energized to repel, the energy stored in the circuit (Er) is:


Er = ½La·Ia² + ½Lb·Ib² - M·Ia·Ib   (3)

which is less than the energy stored when the coils are attracting (Ea) by an amount 2·M·Ia·Ib.

Let us now consider the mechanical forces on the rotor. In a practical machine, the stator windings are inserted in slots around the inside diameter of the stator, and when a current-carrying conductor is placed in a magnetic field, it experiences a force in accordance with the fundamental equation:

F = B·I·l   (4)

where B is the flux density, I the current and l the length of the conductor in the field.

From this equation and indeed from the S.I. definition of the ampere, we can deduce that the magnitude of the force acting on a current-carrying conductor in a magnetic field is the same irrespective of the direction of the current.

We therefore have an anomaly: as we have seen from the above equation and from the definition of the ampere, the magnitude of the force between two coils is the same whether the coils are attracting or repelling, yet the energy required to produce a force of attraction, from equation (2), is much greater than that required to produce a force of repulsion, from equation (3). Since the function of an electric motor is to produce a torque from the force between two magnetic fields, the "best" method now appears to be a force of repulsion, as shown in Figure 2b, rather than a force of attraction, as shown in Figure 2a.

If the coils are identical and connected in series, so that the current in both coils has the same magnitude, then it can be shown that, when the coefficient of coupling is unity, the energy required to produce a force of attraction, from equation (2), simplifies to:

Ea = 2·L·I²   (5)

and the energy required to produce a force of repulsion from equation (3) simplifies to:


Er = 0   (6)

Another advantage of using repelling fields is that the mutual inductance is a maximum when the force is a maximum, but this occurs at the beginning of the cycle and not at the end. Also, with repelling fields and assuming unity coupling coefficient, the stored energy is zero, and no matter what magnitude of force we produce by increasing the current, the stored energy remains zero.
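A small numerical sketch of equations (1) to (6) makes the contrast concrete. The coil values below are arbitrary illustrative numbers, not data from the paper; the coils are taken as identical and series-connected, as in the text.

    import math

    # Stored energy for two coupled coils, per equations (1)-(3):
    #   attracting: Ea = 0.5*La*Ia^2 + 0.5*Lb*Ib^2 + M*Ia*Ib
    #   repelling:  Er = 0.5*La*Ia^2 + 0.5*Lb*Ib^2 - M*Ia*Ib
    # with M = k*sqrt(La*Lb).  Coil values are illustrative only.
    La = Lb = 0.1      # henry (identical coils, connected in series)
    I = 5.0            # amperes (same current in both coils)

    for k in (0.5, 0.9, 1.0):
        M = k * math.sqrt(La * Lb)
        Ea = 0.5 * La * I**2 + 0.5 * Lb * I**2 + M * I**2
        Er = 0.5 * La * I**2 + 0.5 * Lb * I**2 - M * I**2
        print(f"k={k:.1f}:  Ea={Ea:.3f} J   Er={Er:.3f} J")
    # At k=1 the attracting case stores Ea = 2*L*I^2 (equation 5), while
    # the repelling case stores nothing (equation 6), whatever the current.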

So much for producing a force; but we still have a commutation problem. Or do we? Let us now consider the induced e.m.f.'s. To simplify the description, the magnetic flux produced as a result of current in the stator coil will henceforward be referred to as the stator flux, and similarly the flux produced as a result of current in the rotor coil, or from a permanent magnet, will be referred to as the rotor flux.

With the stator coil de-energized, and the rotor flux threading the stator, the flux in the stator will be referred to as rotor flux, as it originates in the rotor.

Consider the stator winding de-energized and the rotor poles in alignment with the stator poles as shown in Figure 2c. The stator provides a low reluctance path for the rotor flux, and therefore most of the rotor flux locates in the stator.


Figure 2c+2d

When the stator winding is energized to repel the rotor, the stator flux makes the stator a high-reluctance path for the rotor flux, which is expelled from the stator as shown diagrammatically in Figure 2d. However, the two fields exert a force on one another in the airgap: an increase in stator flux is accompanied by a reduction in the rotor flux linking the stator, and similarly a decrease in stator flux causes an increase in the rotor flux linking the stator winding.

This is an important concept in this invention and indeed to electromagnetic theory, since it is implied that magnetic fields do not expand to infinity as is generally believed, and also that magnetic fields are compressible. These were some of the concepts that the motor was originally designed to demonstrate.

To understand the principles involved, consider the rotor locked in the alignment position and the stator winding energized to, say, 50% current. An increase in stator current, and consequently in stator flux, is accompanied by an equal reduction in the rotor flux linking the stator; similarly, a decrease in stator current is accompanied by an increase in the rotor flux linking the stator, because of the reduction in stator flux.

Contending fields and fluxes in the airgap

In practice, the boundary between the fields lies in the airgap; an increase in one field will compress the other, and a decrease in one field will allow the other to expand. These simple concepts are, of course, contrary to electromagnetic theory and anathema to modern physicists, but the fact is that each phenomenon described can be proved quite easily by monitoring the e.m.f.'s induced in the windings. Further discussion of the theory involved is, however, beyond the scope of this paper.

Let us now consider the e.m.f. induced in the stator winding, again in the locked rotor position. When the stator current is increased, producing more stator flux, the e.m.f. induced in the stator winding, in accordance with Lenz's law, opposes the applied e.m.f. and therefore acts in a direction which tends to reduce the current in the stator winding. At the same time, the increase in stator flux causes a decrease in the rotor flux linking the stator and, again from Lenz's law, the e.m.f. induced in the stator winding by the receding rotor flux will be in a direction which opposes that change, i.e. a direction which causes an increase in current.

Annulling the opposing fluxes

The induced e.m.f.'s are therefore in opposite directions, the growth of stator flux producing an opposing e.m.f. and the receding rotor flux producing an assisting e.m.f. Since the change in rotor flux linking the stator winding is caused by the change in stator flux, the e.m.f.'s induced in the stator are equal and opposite and sum to zero. This allows the stator winding to be interrupted without causing the high inductive e.m.f.'s which have plagued conventional DC motors for well over a century. To all intents and purposes, therefore, the stator winding of a Jones Motor presents a resistive load to the supply, and the commutation problems associated with conventional DC motors are obviated.

Figures 3 through 11 illustrate how the principle is applied to a practical machine. Figures 3, 6 and 9 show a cross-section through a two-pole motor and the current distribution around the stator, with the rotor in different positions. In practice the motor would be connected to the DC supply through an electronic inverter, but for descriptive purposes a mechanical inverter, which is also a practical solution, is shown in Figures 4, 7 and 10. Indeed, sparkless commutation has been achieved on a 4 kW two-pole machine with just two commutator segments and 380 volts between the segments. Figures 5, 8 and 11 show the effective connections to the supply for the rotor positions of Figures 3, 6 and 9, with the switches omitted for clarity.


Figures 3 to 11.

A mechanical commutator has as many segments as there are field poles, and the machine illustrated has two field poles. The commutator therefore has two segments 6a and 6b in Figure 4. The segments are connected to the terminals of the supply through slip-rings and brushes, which are not shown, with segment 6a connected to the positive terminal and segment 6b to the negative terminal.

Description of the current distribution in the Jones Motor

Three brushes 7a, 7b and 7c are symmetrically distributed around the commutator and connect the segments to the stator windings: brush 7a is connected to winding 2a, brush 7b to winding 2b, and brush 7c to winding 2c, as shown in Figures 4, 7 and 10.

At the instant shown in Figures 3, 4 and 5, the start of winding 2a is connected via brush 7a to the positive segment 6a, the start of winding 2b is also connected to the positive segment 6a via brush 7b, and the start of winding 2c is connected to the negative segment, as shown in Figure 5.

Current distribution around the stator at this instant is shown in Figure 3. The conductors on one half of the stator carry current in one direction, and those on the other half carry current in the opposite direction so that two stator poles are formed having a North pole between slots 2 and 3, and a South pole between slots 8 and 9. The stator North pole exerts a force of repulsion in the clockwise direction on the rotor North pole, and the stator South pole exerts a force of repulsion in the clockwise direction on the rotor South pole.

As the rotor and the commutator segments advance in the clockwise direction, brush 7b disengages from segment 6a and makes contact with negative segment 6b. The start of winding 2b is now connected to negative segment 6b via brush 7b, so that at this instant the start of winding 2a is connected to the positive terminal, and the start of windings 2b and 2c connected to the negative terminal as shown in Figure 8. The current distribution around the stator at this instant is illustrated in Figure 6 and again the conductors on one half of the stator carry current in one direction, and those on the other half carry current in the opposite direction so that again two stator poles are formed, a North pole between slots 4 and 5, and a South pole between slots 10 and 11.

The stator North pole exerts a force of repulsion in the clockwise direction on the rotor North pole, and similarly the stator South pole exerts a force of repulsion in the clockwise direction on the rotor South pole, and again the rotor advances in a clockwise direction until brush 7c disengages from segment 6b and makes contact with segment 6a and the cycle is repeated as illustrated in Figures 9, 10 and 11. In this way, the stator poles produced by the three phase windings advance around the stator "pushing" the rotor as illustrated in Figures 3, 6 and 9.

From Figures 3, 6 and 9 it can also be seen that all the stator conductors are energized at any given instant, just as in a conventional brushed DC motor; and, again like a conventional DC motor, all the conductors lie in the secondary magnetic field and all contribute to producing torque on the rotor. The torque per ampere-conductor produced by a motor operating on the Jones principle is therefore the same as that produced in conventional DC motors.

The mechanical commutator can be replaced by an electronic commutator, switching a three-phase inverter in synchronism and in phase with the rotor position; a diagram of the arrangement is shown in Figure 12:


Diagram
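The switching pattern just described can be summarized as a six-step sequence, with one polarity change per step. In the sketch below, the first three steps follow the connections given in the text for Figures 5, 8 and 11; the remaining steps are the assumed continuation of the same cycle:

    # Six-step commutation sequence for the three stator phases, as driven
    # by the two-segment commutator (or an electronic inverter).  Steps 1-3
    # follow the text; the rest are the assumed continuation of the cycle.
    SEQUENCE = [
        {"2a": "+", "2b": "+", "2c": "-"},   # Figures 3/4/5
        {"2a": "+", "2b": "-", "2c": "-"},   # Figures 6/7/8
        {"2a": "+", "2b": "-", "2c": "+"},   # Figures 9/10/11
        {"2a": "-", "2b": "-", "2c": "+"},   # assumed continuation
        {"2a": "-", "2b": "+", "2c": "+"},
        {"2a": "-", "2b": "+", "2c": "-"},
    ]

    def rail_connections(rotor_step: int) -> dict:
        """Return the supply rail (+ or -) for each phase start at a step."""
        return SEQUENCE[rotor_step % len(SEQUENCE)]

    for step in range(6):
        print(step, rail_connections(step))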

Advantages of the Jones Motor - brushless, more power, smaller

The torque per ampere-conductor produced by the motor is, therefore, the same as that in conventional DC motors, and in an electronically commutated version the Jones Motor has the advantage that it requires no brushes and commutator, so it will produce the same power from a smaller volume. The design allows the main winding, or armature, to be the stator component, which permits a larger conductor diameter, a lower winding resistance and a greater surface area for heat dissipation, all factors which reduce the size of motor needed for a given power. A motor operating on the Jones principle is therefore substantially smaller than a DC motor of the same rating.

The same comparison holds against conventional brushless DC motors, for in that case only two of the three phases are energized at any given instant, so the power produced is appreciably less than from a Jones Motor of the same volume; moreover, as was shown above, when the magnetic fields are attracting, not all of the input energy is utilised to produce useful mechanical output, although the construction is similar and these motors can be operated on the Jones principle. Also, since their back e.m.f. waveform is trapezoidal, conventional brushless DC motors produce a high ripple torque, a severe disadvantage in applications otherwise suited to the machine; the ripple torque of a well-designed motor operating on the Jones principle, like that of its a.c. counterpart, has been found to be very low and negligible for most applications.

Performance Characteristics

Apart from presenting a resistive load to the supply, producing torque from the reaction between two opposing magnetic fields provides another unique effect. Consider the effect of an increase in field current, and consequently in rotor flux: there will be a reduction in stator current owing to the increased back e.m.f., but the force between the fields will be greater because of the additional rotor flux. The result is a decrease in stator current, and the additional force will cause either an increase in speed if the torque remains constant, or an increase in torque if the speed is held constant. The motor therefore has the unique characteristic that both speed and torque are proportional to the field current.

This unique property of a motor operating on the Jones principle can also be established from equation (6), which shows that the energy stored in the circuit remains constant irrespective of changes in current. In a conventional motor, by contrast, a change in current is accompanied by a change in stored energy, in accordance with equation (5); therefore, in conventional motors only a portion of the input energy is converted to useful mechanical energy, the remainder being stored as magnetic energy which has to be dissipated at the end of each cycle.

The result of this phenomenon is that when the field current is increased, the input power decreases because of the increased back e.m.f., but the output power increases because of the additional force produced between the two fields, and therefore, the efficiency of a Jones Motor is proportional to the field current.

This characteristic is, as far as is known, unique to motors operating on this principle, known as the Jones principle, and it was to demonstrate this phenomenon that the motor was originally designed. The efficiency of the motor can be increased by increasing the field current, and this can be demonstrated until the back e.m.f. exceeds the applied e.m.f. over a portion of the cycle, whereupon the machine acts as a generator. When this occurs, the speed decreases, reducing the back e.m.f., and the machine reverts to motor action. This cycle occupies a very short time interval, and to all intents and purposes the machine speed is constant.

Designing a Jones Motor

The design and construction of a Jones motor are similar to those of a rotating-field three-phase alternator, and indeed conventional rotating-field alternators can easily be adapted to operate as DC motors on the Jones principle.

Typical performance characteristics are shown in the graphs of Figure 13, which were obtained with a 12-volt car alternator, itself a three-phase rotating-field machine. The design of three-phase alternators is well known; it is useful, therefore, to describe the performance of three-phase alternators after conversion to operate as Jones motors, since in this way direct comparisons can be made on cost and performance.


Graphs

Adapting Three Phase Alternators and cancelling inductive e.m.f.'s

Three-phase rotating-field alternators are converted to DC motors operating on the Jones principle simply by adding inexpensive, low-resolution rotor position sensors and driving the motor through a three-phase DC/AC inverter. Three rotor position sensors are required, one for each phase; the signal from each sensor is fed to the inverter which, in turn, connects the phase winding to either the positive or the negative DC supply terminal depending on the signal from the rotor position sensor.

The rotor position sensors are arranged such that they can be adjusted in relation to the stator, and it is clear from the above that correct phasing is essential to ensure the inductive e.m.f.'s cancel one another. An approximate setting may be established as follows:

1) Energize the field winding.

2) Inject DC current into one phase, making the phase terminal negative, and provide sufficient current to cause the rotor to move into alignment with the stator poles produced by the phase winding.

3) The rotor position sensor is adjusted so that switching occurs at this pole alignment point and the position of the rotor sensor, as well as the phase and sensor, are marked.

4) The sensor is then connected to the inverter, such that the output of the inverter switches from negative to positive when the rotor is caused to move through this position in the required direction of rotation.

5) Repeat for the other two phases.

6) Connect an ammeter in series with the supply to the stator windings.

The phase sequence of the inverter, as determined by the rotor position sensors, will now be correct, but the switching position may not be optimal, and high inductive e.m.f.'s can still be produced on switching. It is therefore recommended that the motor initially be energized from a low-voltage supply, or through resistors to limit the current, and that, with the motor running, the rotor position sensors be adjusted to give a minimum current reading on the ammeter.

When adjusting the motor at this stage, an incorrect setting can produce a weak-field effect reminiscent of series-connected DC motors, causing both the stator current and the speed to increase. This condition should be avoided by ensuring that the sensor is set to the position which gives minimum stator current.

Once this setting has been found, the current can be increased by removing the temporary resistors, or by connecting the motor to its normal supply voltage, and loading it to full-load torque. Again the rotor position sensors are adjusted to give minimum current. The dip in current is quite pronounced, similar to adjusting the field of a synchronous motor from leading to lagging power factor. Once this position has been found, the sensors can be locked in place; no further adjustment is required.
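In software terms, the whole procedure is a one-dimensional search for the sensor angle that minimizes stator current. The Python sketch below is only an outline of that logic; set_sensor_angle and read_ammeter are hypothetical stand-ins for the manual adjustment and the meter reading.

    def find_sensor_setting(set_sensor_angle, read_ammeter, step_deg=1.0):
        """Sweep the sensor through one revolution; keep the minimum-current angle."""
        best_angle, best_current = 0.0, float("inf")
        for i in range(int(360 / step_deg)):
            angle = i * step_deg
            set_sensor_angle(angle)
            current = read_ammeter()
            if current < best_current:
                best_angle, best_current = angle, current
        return best_angle, best_current

    # Demonstration with a mock rig whose stator current dips at 30 degrees
    # (a hypothetical stand-in for the real sensor mounting and ammeter):
    rig = {"angle": 0.0}
    set_angle = lambda a: rig.update(angle=a)
    read_amps = lambda: 5.0 + 0.002 * (rig["angle"] - 30.0) ** 2
    print(find_sensor_setting(set_angle, read_amps))   # -> (30.0, 5.0)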

Performance Calculations

Whether we start with a purpose designed machine or with an alternator, the finished machine will have three parameters which will give a good approximation of its performance as a DC motor:

1. The back e.m.f. constant, Ke, measured in volts (r.m.s.) per radian per second (at a given field current in the case of a wound-field machine).

2. The resistance of the stator winding.

3. The iron and mechanical losses due to friction and windage and brush friction, where applicable.

Since it is a DC motor, it is convenient to convert the parameters to their DC equivalents, and we begin with the back e.m.f. The back e.m.f. is measured by driving the machine with another motor and measuring the open-circuit terminal voltage (r.m.s. line-to-line) at the rated field current (wound-field machine), noting the speed of rotation.

The back e.m.f. constant (Ke) is found from:


Ke(AC) = V(line-to-line, r.m.s.) / ω   (6)

where ω is the speed of rotation in radians per second,

but since we are interested in the back e.m.f. reflected to the DC supply:

Ke(DC) = 1.35 × Ke(AC)   [V(DC) per rad per sec]   (7)

Probably the most important parameter for a DC motor is the torque constant Kt, and as in conventional brushed DC motors, the torque constant and back e.m.f. constant are numerically equal:

Kt = Ke(DC)   [N·m per ampere]   (8)

The second parameter is the resistance of the windings. From Figures 5, 8 and 11 it can be seen that at any given instant the phases are connected in a series-parallel combination, so that if Rp is the resistance per phase, the resistance (R) reflected to the DC supply is:

R = 1.5 × Rp   [ohms]   (9)

The third parameter is the iron and mechanical losses of the machine, which are the same as for other comparable machines. The performance can then be found from the classical DC equation:


V = Ke(DC)·ω + I·R   (10)

where V is the applied voltage, ω the speed in radians per second and I the current.
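A worked example of equations (6) to (10) may help. All measured values in the sketch below are illustrative assumptions, not figures from the paper:

    # Converting alternator measurements to DC-motor parameters, per
    # equations (6)-(10).  The measured values are illustrative only.
    v_oc_rms = 14.0        # open-circuit line-to-line volts, r.m.s. (assumed)
    speed_rad_s = 400.0    # speed during the measurement, rad/s (assumed)
    r_phase = 0.05         # resistance per phase, ohms (assumed)

    ke_ac = v_oc_rms / speed_rad_s          # equation (6): V(rms) per rad/s
    ke_dc = 1.35 * ke_ac                    # equation (7)
    kt = ke_dc                              # equation (8): N.m per ampere
    r_dc = 1.5 * r_phase                    # equation (9): series-parallel phases

    # Equation (10), V = Ke*w + I*R, rearranged to predict speed and
    # torque at a given supply voltage and load current:
    v_supply, i_load = 12.0, 20.0
    speed = (v_supply - i_load * r_dc) / ke_dc    # rad/s, before losses
    torque = kt * i_load                          # N.m, gross, before losses
    print(f"Ke(DC)={ke_dc:.4f}  speed={speed:.0f} rad/s  torque={torque:.2f} N.m")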

Permanent magnet motors - A brief overview

D.A. Kelly
Technidyne Associates
P.O. Box 11422
CLEARWATER, Florida 34616
United States of America

The quest to develop more efficient means of producing energy has mushroomed in recent years. Continued industrial expansion worldwide, the deleterious environmental impact of current energy systems, and diminishing resources are all pressing for clean energy solutions. The ideal solution would be a low-power, or even no-power, non-polluting device drawing on an unlimited resource, which upon mass production and deployment could keep up with the growing technology in other fields.

One avenue being explored, though without any real tangible success so far, is the Permanent Magnet Motor (PMM).

Of course, the PMM is in a sense a basically simple notion: magnets are placed in such a position that their mutual repulsion creates a spinning motion. What remains to be solved, however, is how to make this motion produce enough power to be a viable source of energy.

The first documented PMM was built in 1269 by Peter Peregrinus. However, only in the mid 20th century have real gains been made in this research field.

A PMM built in the late 1940s used a 4-wheel quad design; each 18" diameter magnetic wheel weighed about 150 lbs. A picture of this device is shown in Figure 1. The photograph was published in "Transverse Paraphysics" by J. Gallimore in 1982, whose book gave no textual explanation.

In 1954, Lee Bowman of California built a small-scale model PMM (Figure 2). The device consisted of three parallel shafts supported in bearings within end plates secured to a solid base plate. Three gears were secured at one end of the three shafts, at a two-to-one ratio, with the larger gear on the central shaft. At the opposite end, three discs were secured to the shaft ends, with one larger disc on the central shaft and two equal-size smaller discs on the two outer shafts; the discs were also fixed at a two-to-one ratio, the same as the gears at the opposite shaft ends. Eight Alnico rod permanent magnets were equally spaced on the one large disc, and four magnets each on the two smaller discs, so that their positions coincided as the three discs revolved. The elongated Alnico permanent magnets were placed on each of the discs so that they revolved parallel to the shafts, their ends passing each other with a close air gap of about .005". When the discs were moved by hand, the magnets passing each other were so phased as to be synchronized at each passing position.
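The two-to-one gearing and the 8:4 magnet counts are what keep the passings in phase; a minimal check of that geometry (an illustration only):

    # Phasing check for the Bowman geometry: the central disc carries 8
    # magnets and turns at half the speed of the outer discs, which carry
    # 4 magnets each, so magnet-passing events stay synchronized.
    magnets_large, magnets_small = 8, 4
    speed_large, speed_small = 1.0, 2.0   # relative shaft speeds (2:1 gearing)
    passings_large = magnets_large * speed_large
    passings_small = magnets_small * speed_small
    assert passings_large == passings_small   # 8 = 8: passings coincide
    print("Magnet passings per revolution of the central disc:", passings_large)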


Figure 1. Table-top, 4-Wheel/Quad design Permanent Magnet Motor.

Estimated construction date: late 1940s to early 1950s; shown in "Transverse Paraphysics", 1982, by J. G. Gallimore (Tesla Book Company). Estimated diameter of each magnetic wheel: 15" to 18". Estimated weight of each wheel: 150 lbs.

The operation of the magnetic device required the positioning of a single cylindrical permanent magnet placed at an angle relative to the lower quadrant of the end discs. This single magnet acted as the actuator, causing rotation of the discs by unbalancing the magnetic forces of the three magnetic discs.

While these early demonstrations did spark some interest, Bowman never received enough financial support to continue his research and his PMM was eventually dismantled and destroyed.

Probably the best-known attempt at a single-rotor PMM was that of Howard Johnson (Patent No. 4,151,431), but its failure to reach any degree of commercial acceptance must be attributed to its naturally slow speed of rotation.

Johnson's work was divided into two distinct types of unit, one linear and one rotary, with most interest directed toward the rotary version because of the possibility of its leading to a no-power-input generator. However, after considerable effort and exaggerated reports of success, it became obvious that the modest torque produced too little horsepower.


Figure 2. Bowman Permanent Magnet Motor.

The Johnson rotary PMM consists of multiple, equally spaced rectangular permanent magnets secured to a rotor component, with multiple arcuate, or banana-shaped, (special) permanent magnets evenly spaced as the stator P/M's. This geometry causes a positive preponderance of magnetic force vectors to act on the rotor P/M's, thus causing rotation in one direction. The main problem with these special types of P/M's is that it is impossible to obtain a major preponderance of magnetic force vectors acting on the rotor magnets, so a severe operational tradeoff must be made in order to achieve some degree of positive rotation.

A magnetic force preponderance of around 25% is about the optimum that can be expected for this type of design, and there is very little that can be done to improve on the basic geometric configuration. Some performance improvement and slight cost reduction can be expected by switching from high-cost samarium-cobalt P/M's to the latest NIB (neodymium-iron-boron) P/M's, but this will not overcome the basic deficiencies of the concept. This research thus demonstrated the impracticality of single-rotor PMM's.

Another PMM was built by Kure-Tekko of Japan. This compound P/M-E/M motor functions in a high-speed hybrid mode. The unit consists of a high-induction permanent magnet (samarium-cobalt P/M) located within a plain rotor component, which is given an initial rotating impulse by a precisely timed electromagnetic station slightly offset from the top dead centre of the stator.


Figure 3. Kure-Tekko type of permanent magnet motor, with top spinner.

Later, a modified version of the Kure-Tekko unit appeared, with a top magnetic attraction spinner added. The spinner revolves independently of the main rotor, attracting each of the rotor magnets in turn and driving it into a small air gap to start each rotational cycle. The spinner is powered by a 12-volt DC motor. While there is some potential here, and the main rotor runs continuously at 60 rpm, the device cannot truly be considered completely successful, since a major portion of the rotor's torque comes from the spinner (and its motor).

Another PMM prototype built in Japan was a large multi-wheel model by Kouhei Minato of Tokyo. The interesting feature of Mr. Minato's PMM geometry is that the angling of the magnets on both wheels provides a uniformly shifting attitude between the opposing magnet sets above and below the wheel centrelines. This uniformly shifting geometry is used to advantage in the concept.


Figure 5. Large multi-wheel prototype by Kouhei Minato of Tokyo.

The most distinctive feature of the P/M section is the novel, uniformly opening spiral path of permanent magnets arrayed as the stator of the unit. This uniformly diminishing repulsive magnetic path interacts directly with the SAM P/M segment, exerting a rotational "squeeze" on the rotor so that it revolves rapidly. Another way to view the reaction is that the rotor's P/M segment is forced to revolve from a higher magnetic repulsive potential to a lower one by natural magnetic potentials.

This specific permanent magnet motor application overcomes one of the serious deficiencies of all previous permanent magnet motor designs, bar none: the usual designs, such as the Johnson PMM, are severely handicapped by low-speed operation. A compound or hybrid E/M-P/M unit, however, can achieve a significant operating arc over the whole 290 degrees, and this full arcuate motion translates directly into high speed.


Figure 6. Minato's permanent magnet motor. Multi-wheel set up overcomes slow speed problem while torque outputs of rotors become cumulative.

The dual rotor-spinner PMM concept currently being developed at Technidyne features a relatively high-speed, motor-driven magnetic "spinner" as the input drive. The spinner concept was first used on the Kure-Tekko PMM unit, but its form is elongated in this design. The unit employs stationary reactive permanent magnet sets in an arc around the lower portion of the main rotor (not shown); these stator magnets assist the revolution of the main rotor.


Figure 7. Dual rotor and spinner permanent magnet motor by Technidyne, Florida. A high-speed motor-driven magnetic spinner is used as input drive means, after the Kure-Tekko system. The goal is to achieve over-efficiency.

Aneutronic energy - Search for nonradioactive nonproliferating nuclear power

Bogdan Maglich
The Tesla Foundation Inc.
P. O. Box 3037
Princeton, New Jersey 08543
United States of America

Can we design a nuclear power source that -- like Robbie in Asimov's classic tale "I, Robot" -- is preprogrammed never to harm a human?

Can there be a nuclear process whose fuel will never be converted into nuclear weapons?

The recent report (1) of a special committee of the US National Research Council implies that the world may be only one step away from being able to say "yes" to both of these questions. Conclusions of the First International Symposium on Feasibility of Aneutronic Power, held at the Institute for Advanced Study in Princeton in the Fall of 1987, suggest that this last step may well be imminent.

What is aneutronic?

Energy-releasing nuclear reactions involving nonradioactive nuclei (both as the reactants and reaction products) and producing no neutrons have been known for half a century. The first one discovered was the fission of lithium-7 by protons, p + 7Li → 2 4He + 17 MeV, observed at the Cavendish Lab by Cockcroft and Walton in 1932. A number of similar fission and fusion reactions were subsequently found. They can be divided into three classes: fission of light metals by protons, fission of light metals by 3He nuclei, which produces protons, and fusion reactions involving 3He, which produce protons. This is why these reactions have been referred to as the "proton-based fuel cycle." We will refer to them as aneutronic reactions.

We define a nuclear reaction as "aneutronic" if not more than 1% of the total energy released is carried by neutrons and if not more than 1% of the reactants ("fuel") and reaction products ("waste") are radionuclides. The definition is somewhat arbitrary and serves only as a guideline. Aneutronic reactions are neither conventional fission nor conventional fusion. Their final product in all cases is predominantly helium, a nonradioactive inert gas.
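These two criteria can be stated operationally. The following minimal sketch (illustrative only; the D+T numbers are standard textbook values quoted for contrast, not from this paper) applies the 1% tests to classify a reaction:

    # A sketch of the two 1% criteria for an "aneutronic" reaction.
    # neutron_energy_fraction: fraction of released energy carried by neutrons
    # radionuclide_fraction: fraction of fuel plus waste nuclei that are radionuclides
    def is_aneutronic(neutron_energy_fraction, radionuclide_fraction):
        return neutron_energy_fraction <= 0.01 and radionuclide_fraction <= 0.01

    # p + 7Li -> 2 4He: no neutrons; reactants and products are stable.
    print(is_aneutronic(0.0, 0.0))              # True
    # D + T -> 4He + n: the neutron carries ~14.1 of the 17.6 MeV released,
    # and the tritium fuel is itself a radionuclide.
    print(is_aneutronic(14.1 / 17.6, 0.5))      # False

By this test the conventional deuterium-tritium cycle fails on both counts, which is precisely the distinction the definition is meant to capture.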

Before the 1970's, no effort was made to develop a reactor based on aneutronic fuels as a power source, even though these reactions have the potential to release twice as much power per unit fuel weight as uranium fission. This neglect was due to the absence of the necessary technology and the lack of ecological or political motivations. Owing to the absence of chain reactions, aneutronic power production has no weapon (i.e., explosive) applications; hence there has been no military interest either.

Stimulated by the energy crisis, vigorous studies of aneutronic energy processes were begun in the 1970's by one individual (J. Rand McNally, Jr., Oak Ridge) and six groups worldwide. Described as "advanced-fuel fusion" (1-3), the research encompassed a broad class of reactions ranging from the neutronic DD fusion to the pure aneutronic fusion of Helium-3 or fission of light metals by protons. McNally showed that proton- and 3He-induced fission of light metals could be made into a chainlike reaction, though different from the uranium fission chain (4). Theoretical studies were carried out at the University of Illinois (5), UCLA (6-8), TRW, the Technical University of Graz, Austria (9), and the University of Buenos Aires (10). Experimental work (1,8,9,11,12), supported by theory, was done by Aneutronic Energy Labs of United Sciences, Inc. at Princeton ("AELabs," formerly known as Fusion Energy Corp. a.k.a. Aneutronix).

In 1978, the Department of Energy's Ad Hoc Committee on Fusion, headed by John Foster of TRW and Burton Richter of SLAC, strongly recommended research on the proton-based cycles as an area "in which significant breakthroughs might be made that might change the whole picture" (13). In 1980, the Department of Energy's Research Advisory Board, led by S. Buchsbaum and involving NASA Director J. Fletcher, M. Goldberger and W. Panofsky, et al., recommended a "strong program on the proton-based cycles." (14)

Between 1972 and 1984, 93% (about $21.5 million) of the $23 million in funding for aneutronic research came from the private sector: EPRI ($1.5 million) and AELabs, Inc. ($20 million). Seven percent ($1.5 million) came from the DOE. EPRI and DOE funding of all advanced fuel research was stopped, however, in 1980, after TRW and UCLA groups concluded jointly that Tokamak-type thermal plasma fusion machines could not burn aneutronic fuels (15).

Independently, an MIT study commissioned by the NSF concluded that (a) DT reactors would not be economically competitive with the next generation of fission reactors, and (b) advanced fuel could not burn in a Maxwellian plasma (16). The revival of interest in this energy source in the mid-1980's resulted from the introduction of new concepts that were borrowed from particle physics research.

Success of the Migma IV experiment

AELabs continued its experimental and theoretical efforts, based on a technique derived from colliding beams known as self-colliding orbits or migma-plasma (17). Migma-plasma, a nonthermal (non-Maxwellian) system, appeared to be free from the problems endemic to thermal plasma. It is a hybrid physical state between colliding beams and plasma, but it does not meet the definition of either. In an experiment carried out in 1982, referred to as Migma IV, AELabs demonstrated that a 1-MeV deuteron migma can be neutralized by oscillating electrons and exceed the space-charge density limit without instabilities. It was confined for 30 seconds at fuel densities at which a thermal plasma under similar confinement ("simple mirror") breaks down due to disruptive instabilities (18). The technical figure of merit known as the triple product (temperature x confinement time x density) reached 4 x 10^14 keV s per cubic centimeter in Migma IV, exceeding that reached by tokamaks. The fuel density of migma was 1000 times lower than that of the best tokamak, but migma's temperature was 100 times higher and its confinement 15 times longer, so that their product is 1,500 times higher than in the tokamak. The migma program had spent $23 million over 10 years; the Western world has spent $10 billion on the conventional plasma fusion program over the past 30 years. Describing this breakthrough in an article entitled "Clean Nuclear Power?", MIT's Technology Review reported in 1982:

"A growing community of physicists believes it may be possible to develop a type of nuclear power that does not require radioactive fuel and does not produce radioactive waste. Unlike today's fission plants or the fusion plants or the fusion plants generally promoted by government-sponsored research, a nuclear power plant of this sort could not be converted to an atomic bomb factory." (19)

At about the same time, Professor Bruno Coppi of MIT showed theoretically that 3He-based fusion, which is nonradioactive and nearly aneutronic, could ignite in a tokamak (20).

Reflecting this development, the Senate's Energy Appropriations Committee stated in 1982:

"To date, basic research in the field of nuclear fission and fusion has largely overlooked the potential for aneutronic nuclear alternatives using light metals, such as lithium, that produce no radioactive side effects. The Committee recommends that the Department of Energy give higher priority to this non-radioactive and nonproliferative nuclear potential. The Department should allocate the funds necessary to conduct a feasibility study to test the conditions necessary to produce a prototype aneutronic reactor, and to fully examine the theoretical principles of self-sustaining aneutronic energy." (21)

The DOE has not responded to the recommendation.

In 1984, Congress asked the Defense Department to undertake a feasibility study of aneutronic energy as a space power source (22). Since an aneutronic reactor would be small and light, unencumbered by massive shielding, it could offer a power source for aerospace, among other applications. Funds for aneutronic energy research were included in the budget for FY 1985. The initial phase of the program sought to establish the concept's feasibility and identify the subsequent steps needed to develop an aneutronic electrical power source for space applications.

In 1985, the US Air Force contracted Aneutronic Energy Laboratories of United Sciences, Inc., to conduct $1 million worth of plasma and energy balance simulation studies using two state-of-the-art CRAY supercomputers. The early results, presented at APS meetings in Washington in May 1986, and in Baltimore in November 1986, indicated that: (a) proton- and 3He-based fission of lithium-6 is indeed a chaining process; (b) in a diamagnetic migma, the scientific power out-to-input ratio Q could exceed 1, as compared to Q << 1 in the thermal plasma case; and (c) the 3He+d fusion reactor, as calculated by Coppi, seems feasible with a "scientific" Q = ∞ and with 100 times less radioactive waste and 20 times less neutron flux than uranium fission or tritium fusion (23,24).

In 1987, encouraged by the initial results of these computer simulation studies, the Air Force-sponsored R&D program called for $1.5 million in further feasibility studies for 1987-88, aimed at a conceptual point design of an aneutronic reactor for space uses, to be done by AELabs, with Bechtel National, Inc., and others as subcontractors.

In 1986, the US Air Force asked the Air Force Studies Board of the National Academy of Sciences to form a Committee on Aneutronic Fusion to "provide an assessment of the merits of pursuing aneutronic research for space-based propulsion applications." The committee report was released in the Fall of 1987.

The NRC report singled out one nuclear reaction that is the borderline case of this definition as most promising and closest to realization: the reactor fueled with a mixture of deuterium and helium-3, which is 1-3% neutronic. We refer to it as the "DeHe-3 reactor". The report describes it as "more feasible and attractive for select Air Force applications," and states that, for it, "... no insurmountable technical problems are envisioned" after the "... uncertainty of the physics of the fuel containment" is resolved. According to the report, if "further research" on aneutronic fusion fuels like the mixture of deuterium and light helium ("DeHe-3") should "demonstrate that substantial improvements in plasma lifetimes, density, and energy can be obtained," it would become a "viable option," since "no other insurmountable technical problems are envisioned." This means that, for aneutronic fusion, the scientific demonstration will practically be the engineering demonstration, because the use of aneutronic fuels "to reduce neutrons would reduce shielding requirements, radiation damage to materials, and radioactivity." This is in sharp contrast to the ongoing conventional fusion program (tokamak), which is based on radioactive tritium fuel and is believed to require 30 years from its scientific to its engineering demonstration.

The "further research" referred to in the report is the only active laboratory research program of its kind in the world: that of Aneutronic Energy Labs, Princeton, New Jersey, presently working under USAF funding. It has completed four out of five stages of its migma program of aneutronic energy at a cost of $23,000,000 (see diagram of progress). Its planned fifth and last stage of research is designed exactly to demonstrate the "substantial improvements" cited by the committee as the last step needed to prove the viability. But already in the fourth stage of its research (completed in 1982), the Aneutronic group has exceeded in the technical figure of merit the performance of the 30-year-old deuterium tritium fusion program, on which the Western world alone has spent over $10,000,000,000.

The self-collider or migma device is the only device built, so far, in which the ability to burn aneutronic fuel has been demonstrated. The "uncertainty" referred to in the NRC report arises from the fact that high fuel density is required for net power production, while only low and medium fuel densities were demonstrated in the self-collider. High fuel densities have been achieved in plasmas but not in a migma; a plasma cannot burn aneutronic fuels because it is not sufficiently hot. Four increase-in-density experiments on self-collider devices were successfully completed by United Sciences, Inc. in the period 1973-82, at a cost of $23,000,000 (1987 dollars), all of which came from private sources. In these experiments, referred to as Migma I, II, III, and IV, the fuel density increased 100 million-fold (see Diagram of Progress). Another 1000-fold density increase is required for a reactor demonstration.
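The size of the remaining step can be put in perspective with a rough calculation (a sketch only; it assumes the historical gain was spread evenly over the four experiments, which the text does not state):

    # Density progress across Migma I-IV and the gap remaining to a reactor.
    total_gain = 1e8          # 100 million-fold increase, Migma I through IV
    remaining = 1e3           # further 1000-fold needed for demonstration

    # Average per-experiment gain, if progress were spread evenly:
    per_stage = total_gain ** (1.0 / 4.0)
    print(per_stage)          # 100.0 -- roughly 100x per experiment

    # The planned fifth stage must therefore deliver about one order of
    # magnitude more than the historical per-stage average:
    print(remaining / per_stage)    # 10.0

On this reading, the proposed fifth stage asks for a density step somewhat larger than, but comparable to, each of the four already completed.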

International symposium on aneutronic power

At the initiative of AELabs, the First International Symposium on the Feasibility of Aneutronic Power was held at the Institute for Advanced Study, Princeton, NJ, on September 10-11, 1987. About 100 nuclear and plasma scientists from twenty countries attended the meeting. The keynote speaker was Professor Murray Gell-Mann of Cal Tech, Nobel Laureate in Physics.

The important results of the Symposium were:

- Independent confirmations of the results obtained by United Sciences under its USAF contract on the aneutronic reactor simulation were reported by research groups in Japan, Austria, at MIT and at Science Applications International: the DeHe-3 reactor is only 0.5% radioactive and 1-2% neutronic, that is, 200 times less radioactive and 100 times less neutronic than any other nuclear power system, fission or fusion.

- Professor M. Rosenbluth, director of the Center for Fusion Studies, University of Texas at Austin, who is considered to be the nation's leading plasma physicist, has produced a theory of migma stability. The results of his research group were presented at the Symposium.

The symposium proceedings are published as a book entitled "Aneutronic Energy" by the international journal Nuclear Instruments and Methods in Physics Research, which has the largest circulation in the nuclear sciences.

Strategic and commercial ramifications of aneutronic power

A: Aerospace

(1) Reactor weight. The absence of neutrons and radioactivity obviates the need for shielding. Since the weight of shielding in a nuclear fission reactor is at least 100 times greater than that of the reactor itself, a very large power-to-weight ratio (specific power) is projected: 1 megawatt per ton versus 1 megawatt per 100 tons with a uranium reactor (Bechtel's engineering studies are aimed at checking this).

(2) Fuel weight. As in all nuclear systems, aneutronic fuel energy is 100,000 times more concentrated than that of non-nuclear fuels, and the fuel weight is negligible: one kilogram of an aneutronic fuel such as helium-3 is equivalent to approximately 100 metric tons of fuel oil.

(3) Cost of fuel is projected to be 10% of the cost of uranium, as displayed below:

                      Fuel   Supplier   Purity (%)   Cost ($K/kg)   Unit fuel cost (FBU = 1.0), mil/kw(th) hr
Fusion                D      S.R.L.     99.1         1              0.0008
Conventional fusion   T      M.L.       94           7,500          42
Aneutronic fusion     3He    M.L.       99.9         750            4.5
(4) Fuel availability. Helium-3 exists in nature in small quantities, but it can be bred in the same reactor that it fuels. The US government has 500 kilograms in reserve, which would run 200 space-based reactors for 20 years; a back-of-envelope check of these figures follows this list. Helium-3 is bred, and the price quoted is the breeding price. (Proposals to mine it on the Moon, where it is 10 times more abundant, are unnecessary, as any reactor that can burn helium can breed helium from hydrogen.)

(5) Plant capital cost per kilowatt capacity is projected to be less than 10% of the fission reactor cost for a large reactor, dropping to 1% for a small reactor (small fission reactors are uneconomical).

(6) Heat loss (waste heat or "heat pollution"). Almost all energy produced in aneutronic reactions is converted into electricity, versus 33% in conventional nuclear reactors. What to do with waste heat from a space reactor is a major technical problem, as the waste heat has nowhere to go (there is no air to conduct it away). An aneutronic plant's waste heat is 10-15% of the total energy generated, while for fission the figure is 67%.

(7) Modularity. Units as small as 1 megawatt may be economical; thus a large plant consisting of many small nuclear power units becomes feasible.
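As promised in item (4), the reserve and equivalence figures imply a modest per-reactor fuel consumption. A back-of-envelope sketch, using only the numbers quoted above plus a nominal 42 GJ per metric ton of fuel oil (a standard heating value, assumed here rather than taken from the text):

    # Helium-3 reserve arithmetic from items (2) and (4) above.
    reserve_kg = 500.0
    reactors = 200
    years = 20.0
    kg_per_reactor_year = reserve_kg / (reactors * years)
    print(kg_per_reactor_year)      # 0.125 kg of 3He per reactor-year

    # 1 kg of 3He ~ 100 metric tons of fuel oil (item 2), at an assumed
    # 42 GJ per ton of fuel oil:
    joules_per_kg = 100 * 42e9
    seconds_per_year = 365.25 * 24 * 3600
    print(kg_per_reactor_year * joules_per_kg / seconds_per_year)   # ~1.7e4 W

On these figures each reactor averages a few tens of kilowatts or less, comparable in scale to the smallest (30 kWe) plant discussed in section B below.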

B. Power supply for radar and telecommunications

The smallest aneutronic power plant (30 kWe), similar in size to the proposed Migma V, would have wide application: this is the power needed to run a radar or CCC station.

C. Naval application

The advantage of lightweight aneutronic power production also applies to ship propulsion, where specific power is less critical than in the aerospace case, and where some shielding (allowing some neutronicity) can be tolerated.

D. Terrestrial applications for utilities

First, an aneutronic reactor can be small, producing 1-10 megawatts of electric power (MWe), while the minimum economical size of a fission or (projected) fusion power plant is about 1000 MWe. Hence the small nuclear power plant, impossible today, becomes feasible. A small power unit implies mass production, which results in a much lower capital cost per kilowatt of capacity than with large reactors that are built one or two at a time. (Initial capital cost is one of the major barriers to nuclear energy in developing countries and in smaller communities of developed countries.) Second, there are clear environmental advantages: nonradioactive fuel, nonradioactive waste, and the near-absence of waste heat (heat pollution).

E. Nonproliferation

Absence of neutrons means that the aneutronic reactor cannot breed plutonium for nuclear weapons. The weapons proliferation issue has been a second major barrier to the free sale of nuclear power plants to the developing world. The combination of a small power unit (small capital cost) and the absence of proliferation restrictions would open the way for the American nuclear industry to export power plants on a massive scale.

Since radioactive fuel, radioactive waste, heat pollution, and proliferation are the main current environmental and political issues for nuclear power, the implications of aneutronic nuclear energy for the environment are obvious: not only an acceptable but an attractive nuclear power technology.

References

1. "First Symposium on Clean Fusion (Advanced Fuel Fusion) 1976". Nuclear Instruments and Methods. 144, p. 1-86. 1977.

2. "Proceedings of the EPRI meeting on Advanced Fuel Fusion, 1978." Electric Power Research Institute, Palo Alto, California.

3. Ashworth, C. P.. "A user's perspective on fusion" Part II, 1977. AAAS Meeting, Denver (Pacific Gas and Electric Co., San Francisco). 1977.

4. McNally, J.R. Jr.. Nuclear Fusion. 11. p. 187. 1971; ibid 18, p. 133. 1978.

5. Miley, G.. Nuclear Instrument and Methods. 144, p. 9. 1977.

6. Conn, R., et al.. 8th International Conference on Plasma Physics and Controlled Nuclear Fusion Research. Brussels 1-10 July 1980. IAEA-CN-38/v-5.

7. Conn, R. and J. Shuy, "p+6 Li ignition and multipoles as advanced fuel cycle reactors". Nuclear Eng. Dept, University of Wisconsin. UWFDM-262. 26 sept. 1978.

8. Dawson, J.. "Advanced fusion reactors". Fusion. Vol. 1,B Academic Press. 1981.

9. Harms, A. and M. Heindler. Acta Phys. Austriaca. 52. p. 201. 1980.

10. Gratton, F.. Atomkernenergie (in English) 32. p. 121. 1978.

11. Ferrer, J. et al. Nuclear Instruments and Methods. 157. p. 269. 1978.

12. Maglich, B.. Atomkernenergie (in English). 32. p. 100. 1978.

13. Foster, J.S. et al. "Final Report of the ad hoc experts group on fusion. U.S. Dept. of Energy. Washington, D.C. June 7, 1978. Summarized in Phys. Today. Sept. 1978. p. 85.

14. Buchsbaum, S. J. et al. "Report on the DOE magnetic fusion program prepared by the Fusion Review Panel of the Energy Research Advisory Board". August 1980.

15. Gordon, J.D. et al. TRW Energy Division Group TRW-FRE-006 and EPRI RP 1663-1. 1981. (unpubl); S. Tamor, SAIC-85/3005/APPAT-63. 1986. (unpubl).

16. Lidsky, L.D. (MIT). "End product economics and fusion research program priorities". Study prepared for National Science Foundation. 1982.

17. Macek, R. and B. Maglich. "Particle accelerators. 1. p.121. 1970.

18. Salameh, D. Al et al. Physical Review Letters. 54. p. 796. 1985; L. Lara and F. Gratton. Physics of Fluids. 29. p. 2332. 1986.

19. Technology Review. Nov. -Dec. 1982. p. 46.

20. Atzemi, S., B. Coppi, "Comments". Plasma Physics. 6. p. 77. 1980; B. Coppi. Physica Scripta, T2. p. 590. 1982.

21. 98th Congress, Senate Report 98-153. June 16, 1983.

22. 98th Congress, H.R. Report of the Comm. on Appropriations 98- 1086. p. 240.

23. "Final report of phase 1 aneutronic power source feasibility study and simulation". USAF Contract F49620-85-C-0098. 1986. United Sciences Report UNS-82-042.

24. Bulletin of the American Physical Society. 31. No. 9, 1557-8. 1986. (Baltimore PPD meeting).

LUMELOID* solar plastic film and LEPCON* submicron dipolar antennae on glass


_____________________________________
* a registered trademark of Phototherm, Inc.

Dr. Alvin M. Marks
Advanced Research Development, Inc.
359R Main Street
ATHOL, Massachusetts 01331
United States of America

A most profound change in the electric utility industry could be wrought by a commercially available, low-cost, efficient source of electric power from the sun.

Examples of such forthcoming solar energy conversion technologies are the LEPCON* and LUMELOID* systems, which are the trademarks of Phototherm, Inc. of Amherst, New Hampshire, a public corporation (OTC), dedicated to the research, development, manufacture and marketing of these products.

Glass panels and plastic sheets of LEPCON* and LUMELOID* respectively convert sunlight to electric power with an efficiency of 70 to 80%, at a cost of $0.01 to $0.02 per kwhr. The investment in 1 square meter of a LEPCON* glass panel is about $250.00. It produces 500 W of electric power in bright sunlight. The investment then is $0.50/W, spread over a life expected to exceed 25 years.

The investment in LUMELOID*, a thin, continuously cast polymer film, including electrodes and lamination to a supporting sheet, is about $5/sq m. The investment cost will be $0.01/W, spread over an expected life of 6 to 12 months in strong sunlight.

LEPCON* panels are particularly applicable to large-scale solar/electric power farms. They may be sited to produce an average of 400 W/sq m, or 400 MW/sq km, during the daytime, for example in New Mexico, Nevada and similar regions where clouds seldom obscure the sun. An area 200 km x 200 km would produce 16 million MW at $0.01/kwhr during the daytime hours; this would be enough to supply the electric grid for the entire U.S. Two-thirds of this energy must be stored for use during the dark hours. Electric energy storage technologies are known, and are being developed, which would serve this requirement.
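These projections follow from simple cost and area arithmetic; a sketch using only the figures quoted above:

    # LEPCON panel and solar-farm arithmetic from the figures in the text.
    panel_cost_per_m2 = 250.0      # dollars per square meter
    panel_watts_per_m2 = 500.0     # watts in bright sunlight
    print(panel_cost_per_m2 / panel_watts_per_m2)   # $0.50 per watt

    # Farm output: 400 MW/sq km average over a 200 km x 200 km site.
    area_km2 = 200 * 200
    print(area_km2 * 400.0)        # 16,000,000 MW during daytime hours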


Alternatively, LUMELOID* sheets will be utilized by many consumers of electric power to produce their own electric power. Such sheets may be installed on roofs or building sides, and connected through an electric storage device and a DC-to-AC inverter to provide electric power directly for all domestic needs at a few cents per kwhr. Excess electric power may be fed into the grid and sold to the local electric company, which will provide standby power to the consumer. The existing electric power grid will remain essential, however, for industry and urban use, particularly in those areas where the sunlight is frequently obscured by clouds.

The economic implications

To totally convert the electric utility industry to solar electric power farms using LEPCON* panels will require an investment of trillions of dollars over many years. The economic and health benefits to the nation will be enormous:

1. Lower energy costs

2. Elimination of nuclear hazards

3. Elimination of the need to burn coal or oil fuel, thus diminishing air pollution, and preventing a disastrous Greenhouse Effect.

4. Decreased dependence on foreign oil imports, with consequent improvement in our balance of trade and reduction of the federal deficit.

5. A substantial increase in useful employment on a vast long-term project, which will enable a cutback in the funding of the wasteful military industrial complex.

6. If the electric utility industry becomes involved, as it must, then it can benefit from the large profits to be made in this huge endeavour. To start, it must provide the funds for the R & D, manufacturing facilities and the installation of the LEPCON* and LUMELOID* technologies.

The Technologies

Figure 1 shows a conventional metal-insulator-metal (MIM) tunnel diode, in which two dissimilar metals are separated by a small gap of about 30 Angstroms. Electrons can pass easily in one direction but not in reverse. Each metal has a different work function or natural electric barrier surrounding the metal. An electron moves readily in a metal, as though it were in empty space, but it bounces off a wall of the metal due to the potential barrier at the wall.


Figure 1. Prior art

Figure 2 shows a potential diagram for an MIM diode.


Figure 2. Potential diagram for MIM (metal-insulator-metal) diode

Figure 3 illustrates a quantum property of electrons known as "tunnelling". As an electron approaches a barrier with a small insulating gap, and an electric potential difference across it, it is either transmitted or reflected across the gap without loss of energy. In an MIM diode the electron can move more readily in one direction than in the other.


Figure 3. Quantum-mechanical tunnel penetration of a barrier. A plot of potential energy versus distance for a symmetrical rectangular barrier. E is the kinetic energy of the approaching particle; V is the barrier height above the particle energy; b is the barrier width.
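The barrier of Figure 3 can be quantified with the standard quantum-mechanical estimate for an opaque rectangular barrier, T ~ exp(-2b sqrt(2mV)/hbar), in the caption's notation. This is a textbook formula, not one given in the paper, and the 1 eV barrier height below is an illustrative assumption:

    import math

    # Tunnelling transmission through a rectangular barrier (opaque-barrier
    # approximation): T ~ exp(-2 * kappa * b), kappa = sqrt(2*m*V)/hbar,
    # with V the barrier height above the particle energy and b its width.
    HBAR = 1.054571817e-34        # J s
    M_E = 9.1093837015e-31        # electron mass, kg
    EV = 1.602176634e-19          # joules per electron-volt

    def transmission(V_eV, b_m):
        kappa = math.sqrt(2.0 * M_E * V_eV * EV) / HBAR
        return math.exp(-2.0 * kappa * b_m)

    # Illustrative: an assumed 1 eV barrier across the ~30 Angstrom MIM gap
    # mentioned above, versus a thinner 10 Angstrom gap.
    print(transmission(1.0, 30e-10))   # ~5e-14: nearly opaque
    print(transmission(1.0, 10e-10))   # ~4e-5: far more transparent

The steep dependence on gap width is the point: tunnel-diode behavior appears only when the insulating gap is of the order of tens of Angstroms.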

Figure 4 shows a submicron rectenna element of an array in a LEPCON* panel. This element is also known as an "antenna-well diode".


Figure 4. Submicron rectenna element, or "antenna-well diode"

The light photon has an electric field with its direction at right angles to the light ray direction. The photon energy is totally absorbed by an electron in a metal strip. The photon transfers its energy without loss to the electron as kinetic energy in the mostly empty space in the metal. The electron moves parallel to the electric vector of the light, to the right or to the left. If the electron moves to the right, it bounces off the high-potential barrier at the wall, and then moves to the left. So all the electrons eventually approach the tunnel diode on the left, where each electron is either transmitted or reflected without energy loss, as shown in Figure 3; all electrons are eventually transmitted through the tunnel diode without energy loss. However, a potential difference across the diode will convert the kinetic energy to electric energy which will just equal the photon energy. Thus the light photon energy is converted to electric energy without loss.

This differs from the conventional photovoltaic device, which requires that the electron move parallel to the light beam into a semiconductor layer, which it can only do after losing energy; in this sense photovoltaic devices are fundamentally flawed.

There are also many low-energy (thermal) electrons present in the metal strip which do not take part in this energy conversion, except to provide an extra electron for photon-electron interaction, and to replace, from another part of the circuit, the electrons being transmitted through the diode. In bright sunlight, about one photon-electron energy conversion will occur, on average, every nanosecond.

Figure 5 shows a LEPCON* series-parallel configuration. Light is resolved into two electric vectors: a first electric vector parallel to the array axis is totally absorbed and converted to electric power, and a second electric vector normal to the array axis is totally transmitted as polarized light. Previous work with polarized light materials and microwave rectenna arrays shows the system to be about 80% efficient. In this system, 40% of the light is then converted to electric power by the antenna array, and 40% is transmitted. The transmitted light may be passed through a second LEPCON* array at right angles to the first, which will convert 80% of the transmitted component to electric power, thus converting a total of 72% of the incident light to electric power.


Figure 5. Series-parallel configuration
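The 72% total follows directly from the stated per-array figures; a sketch of the arithmetic:

    # Crossed LEPCON arrays, per the figures in the text: the first array
    # converts 40% of incident light and transmits 40% as polarized light.
    converted_first = 0.40
    transmitted = 0.40
    # The second array, at right angles, converts 80% of what reaches it:
    converted_second = 0.80 * transmitted
    print(converted_first + converted_second)   # 0.72 -- 72% of incident light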

Figure 6 shows how a single LEPCON* array may be used in combination with a quarter-wave retardation sheet and light-reflecting layer to accomplish the complete conversion of the incident light to electric power at about the same efficiency.


Figure 6. Array with retardation sheet

LUMELOID*

A LUMELOID* sheet is a light/electric power converter. The sheet is a thin (8 micrometers, or .0003") polymeric film. The polymer film is prepared by a method similar to that now employed commercially in the manufacture of polarized film, and using much the same equipment, but with a different chemistry, and with electrodes embedded in the film to gather electric power.

Recent work in the field of photosynthesis in green plants has resulted in synthetic chemicals which mimic the natural process. In the plant, the photosynthetic chemical comprises an antenna which is similar to a long-chain carbon molecule known as polyacetylene. This is attached to an electron donor-acceptor complex, here shown as a large ring and a small ring, respectively known as porphyrin and quinone. The long-chain molecule acts as a conductive antenna, which resolves and converts one-half of the photon energy to electron energy, and transmits the other one-half, as described above for the LEPCON*. The electron energy is stored on the large donor ring, and transmitted by tunneling across a small gap, which is the insulating chain of carbon atoms between the large donor ring and the small acceptor ring. The small acceptor ring then holds the electron at a greater potential than at the start. So far this is analogous to the LEPCON*.

Figure 8 shows an energy diagram typical of a Donor-Acceptor Complex, which is analogous to the energy diagram shown in Figure 2 for a LEPCON*.


Figure 8. Energy diagram of a typical Donor-Acceptor Complex

However, in the green plant, natural photosynthesis uses the electric energy it has stored on the acceptor to drive the chemical synthesis of the carbohydrates and other complex chemicals in the living cell.

Figures 9 to 12 inclusive show the similarities and differences of LUMELOID* compared to natural photosynthesis. The basic difference is that the chemical synthesis step of the natural photosynthesis process is eliminated, and an entire photosynthetic molecule such as is shown in Figure 7 is connected head-to-tail to another such molecule. This is shown in Figure 12, where the long-chain conductor molecules (52) and the donor-acceptor molecular diodes (53), are oriented parallel to each other and connected head-to-tail within the polymer sheet.


Figure 7. A synthetic molecule which mimics photosynthesis

Figure 9 shows a cross-section of the polymer sheet parallel to the long axis of the photosynthetic molecule. Electrodes 41 and 42 are shown in Figures 9, 11 and 12, connecting the electric power output to the load 75. The light power input is represented by the photon 2, and the direction of the resolved electric vector is along the X-axis.


Figure 9.


Figure 10.


Figure 11.


Figure 12.

Figure 10 is a cross-section through the XOZ plane. The conductive chains are shown as large dots.

The manufacturing process

The manufacturing process resembles the conventional commercial manufacture of polarizing film. In Stage 1, a viscous polymer solution is made with these chemical constituents of polarizing film:

1) Solvent molecules;
2) long-chain polymer molecules;
3) iodine molecules;
4) cross-linking chemicals to tie the chains in a bundle after they are aligned;
5) (OH) groups on the side of the polymer chains to react with the cross-linkers.

In Stage 2, the polymer solution is cast on a moving belt of a non-reactant metal, partially dried to eliminate most of the solvent, and stretch oriented. The result is that the polymer chains are drawn parallel and the cross-linkers hold them that way. The separate iodine molecules now crystallize in the spaces between the parallel chains forming a linear electrical conductor. These react with light photons as described above, only in this case, since polarizers lack molecular diodes and electrodes, the electric power is dissipated internally as heat.

Figure 6 shows the first step in the manufacture of a LUMELOID* film, which is the preparation of a polymer solution similar to that used in the polarizing-film manufacturing process. In this case, however, there is a constituent No. 6 added: the molecular diode. When a molecular diode is exposed to light, its electric charges separate and it acquires a dipole moment; that is, it experiences a torque to align it parallel to the direction of the applied electric field.

The final stage in the manufacture of the LUMELOID* is similar to that described for polarizing film, except that the additional steps of simultaneously illuminating the film and applying an electric field are added to the stretching step, with the electrodes applied subsequently.

References

The following US Patents include extensive bibliographies: 4,445,050 (LEPCON*) and 4,574,161 (LUMELOID*). Additional patents containing further references have been filed in the US and foreign countries and will issue in due course.

Charged aerosol air purifiers for the suppression of acid rain

Alvin M. Marks
Advanced Research Development, Inc.
359R Main Street
ATHOL, Massachusetts 01331
United States of America

The emission of acid fumes from the combustion of fossil fuels has led to "Acid Rain". The emissions comprise fumes containing sulphur and nitrogen oxides, which are converted to sulphuric and nitric acid, organic and metal carcinogens, solid and liquid particulates, and carbon dioxide. The fumes originate in the industrial areas of the US and are carried by the wind to other states and to Canada, with a detrimental effect on the environment and health. The increase in carbon dioxide and other chemicals is causing a "Greenhouse effect", with a general warming of the climate of the Earth and possibly disastrous effects on agriculture. Moreover, the ozone layer is being depleted, and harmful ultraviolet rays from the Sun are causing an increase in skin cancer. "Acid Rain" does the most visible damage to trees, lakes, fish, and other wildlife. In recent years, the public has become aware of these problems, and is now aroused. Political action is demanded to clean up this air pollution.

However, proposed methods for the elimination of acid rain and other pollutants have been too costly or ineffective, and this has impeded progress on the cleanup.

During the early 1940's I became interested in charged aerosols as a means for the direct conversion of heat/kinetic power to electric power in a moving gas stream. A result of many years of R & D on charged aerosols is an efficient means to clean up air pollution. In 1965, I appeared before the Los Angeles Air Pollution Control Board and suggested that the smog afflicting that city could be eliminated by a charged aerosol spray from airplanes (6.3.1). In 1967, dramatic demonstrations of the charged aerosol air purifier were made to the predecessor of the U.S. Environmental Protection Agency, the Cincinnati Air Pollution Control Agency, and to the U.S. Senate, fully reported in the extensive testimony of record (10). After all this effort, no action was taken until, in the late 1970's, there was large-scale use by an industrial giant (TRW, Inc.) using micron-sized (not submicron) charged droplets. Many electric power and chemical plants in the U.S., Japan, and probably Canada and elsewhere were equipped with large-scale charged aerosol purifiers, resulting in millions of dollars in sales. I endeavoured to collect a royalty without success, because the cost of litigation was prohibitive. TRW designed equipment using charged droplets of too large a diameter: the surface area of the droplets was too small, decreasing their absorption effectiveness.

On March 31, 1970, U.S. Patent 3,503,704 was granted to Alvin M. Marks, entitled "Method and apparatus for suppressing fumes with charged aerosols", for air purification and other uses. Droplets from a capillary tube are passed through an electric field having an intensity just below breakdown. The droplets are thereby broken into minute particles and produce a fine spray. The large surface area of the spray effectively absorbs noxious gases. A simple device based on this principle was invented which should be placed on the smokestacks of every home, building and factory to eliminate noxious fumes at their source. It can be made in small manufacturing facilities, and its use should be mandated by law (2). When a charged aerosol source is placed in a moving gas stream, electric power can be generated in excess of that needed to operate the apparatus. A flow of clean air may be introduced around the capillary tube and charging electrode to avoid fouling and shorting (3). In another arrangement, high-temperature gases are first partially cooled before introduction to the charged aerosol.

Experimental work on charged aerosol air purifiers is reported in (3). A flow of noxious gases and particulates is passed through a conduit pipe. One or more ring-shaped electrodes are mounted downstream of the capillary tubes, and an intense electric field is applied between the capillary and the ring electrodes. A charged aerosol comprising monopolar charged liquid droplets passes through the ring electrode and mixes with the stream of noxious gas. The charged aerosol droplets have a very great surface area and rapidly and substantially completely absorb the noxious gases. As the liquid droplets progress through the conduit, they mutually repel each other by reason of their like charge, and coalesce upon the inner walls of the conduit, from which they are collected as a liquid together with the entrained and absorbed pollutants. Various apparatus employing charged aerosols are described; mathematical physics equations relating the variables, operating conditions, and ranges needed to achieve high efficiency are derived; and the parameters are experimentally evaluated.

It is shown that two charged aerosol air purifiers can be used simultaneously, interconnected as electric generators, so that the electric power required to operate the devices is obtained from the gas stream and is thus free (3).

Why are charged aerosols so effective, when other methods are relatively ineffective? Because by charging the water droplets, they break up into submicron droplets with over 10,000 times the surface area of the same volume of uncharged water droplets; this greatly increases the speed and the amount of noxious gases and particulates absorbed and reacted. Alkaline reactants may be included in the submicron water droplets to neutralize the acid in the noxious gases. The alkali reacts with, sequesters, and renders harmless the noxious gases. In stationary air purifiers, chemicals of considerable value may be recovered from the reactants.
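The surface-area gain from droplet breakup follows from volume conservation: splitting a drop of radius R into droplets of radius r multiplies the total surface area by R/r. A sketch (the starting and final radii are illustrative assumptions; the text specifies only "submicron"):

    # Surface-area gain when a drop of radius R shatters into droplets of
    # radius r at constant total volume: N = (R/r)^3 droplets, and
    # area ratio = N * r^2 / R^2 = R / r.
    def area_gain(R_m, r_m):
        return R_m / r_m

    # Illustrative: a 1 mm drop breaking into 0.1 micron droplets.
    print(area_gain(1e-3, 0.1e-6))   # 10000.0 -- the >10,000-fold gain cited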

Charged aerosol purifiers are simple, low-cost and effective means to eliminate "Acid Rain" which should be attached to every stack or chimney exhausting noxious gases.

Explanation of the Figures

Figure 1a is a cross-section of a simple charged aerosol air purifier (CAAP). It comprises a single grounded capillary tube mounted axially within a tube. An input of polluted air into the tube passes through a ring electrode charged to about 6 kV (+ or -). When a jet of fluid (water, water solution or suspension) issues from the capillary tube, a spray of submicron mono-charged droplets is produced in the polluted air. The charged droplets have an enormous surface area and quickly absorb the gas and particulates. The charged droplets and their load of pollutants repel each other and deposit on the wall of the tube, where they coalesce to a liquid film which runs off into a waste container. Only clean air is emitted (1).


Figure 1a.

Figure 1b is a sectional view of a CAAP mounted on the top of a chimney. The chimney could be on a house burning a fuel such as wood, which produces a polluted effluent. The upward flow of the polluted air is redirected downward and passes through a plurality of capillaries, which produce a charged aerosol spray. The charged droplets containing the pollutants collect on the wall of the bottom container and run off as waste fluid. Clean air is drawn upward through a central tube, aided by a small fan if necessary (2).


Figure 1b.

Figure 2 shows an airplane with two charged aerosol air purifiers, each mounted under one wing. One or more tanks are provided to hold a supply of alkaline water. Calcium hydroxide may be used as the alkali.

However, due to its low solubility in water (7), to obtain a high concentration (for example, 20%), a calcium hydroxide sol should be employed (8). Calcium hydroxide is often used as a fertilizer; and, when reacted with nitrogen oxides, produces other fertilizers such as calcium nitrate. Calcium sulphate and calcium sulphite also precipitate, and may be refined to provide pure sulphur. Hence the waste effluent recovered from a stationary CAAP may have considerable commercial value. The CAAP with calcium hydroxide in a charged aerosol droplet will react with CO2 in the air and form insoluble calcium carbonate, which precipitates from the water solution.

Calcium oxide is a major industrial chemical. In the commercial manufacture of calcium oxide, carbon dioxide is given off when calcium carbonate rock is heated (9). This CO2 goes into the atmosphere and contributes to the Greenhouse Effect. The trend could be reversed if this carbon dioxide were absorbed in greenhouses where useful plants are grown at accelerated rates due to a high concentration of CO2 or if other processes utilizing it in organic synthesis were incorporated in the cycle.


Figure 2.

In Figure 2, a positive (left) and a negative (right) charged aerosol are produced; they meet in the atmosphere downstream of the airplane, where the droplets may partially or wholly neutralize each other.

Alternatively, it may be advantageous to produce a charged aerosol of one sign only. When sprayed over foliage, the charged aerosol particles are attracted to any surface, top or bottom, and may be effective in neutralizing deposited acid on such surfaces. In this case a charged aerosol of one sign may be emitted, but the airplane will become oppositely charged and must be periodically or continuously discharged to the atmosphere. This may be done, for example, by ion-emitting points. The ion-emitting points may be located at a distance from the charged aerosol source on the wings or fuselage, or on a wire which may trail at a considerable distance. Another method is to alternately emit a positive- and a negative-charged aerosol from the same source (not shown).

Previous experimental work on the Charged Aerosol Wind/Electric Generator Project (4) resulted in a single capillary design suitable for laboratory tests. From this test device the data shown in Figure 3 were obtained, showing that capillaries made by a photo-lithographic process in a 12 µm thick stainless plate were suitable, and would result in excellent efficiencies of 70% with low water pressures of less than 5 psig.


Figure 3.

Figure 4 shows a charged aerosol source comprising a plurality of capillaries formed in thin stainless steel plates; Figures 5a and 5b show front and cross-sectional side views. Figure 6 shows a plurality of charged aerosol sources mounted in one-square-meter frames, which may be suspended under the airplane (instead of the simple, single capillary shown in Fig. 2).

Calculations were based on these assumed conditions: 1) a spray of 1000 kg of 20% calcium hydroxide suspension in water, to cover one square kilometer of ground or lake; 2) an airplane traveling at 200 m/s; 3) a spray covering a width of 25 m. The results of the calculation show that about 200 s (3.3 min) is required to deposit 200 mg/sq m of Ca(OH)2. The charged aerosol is emitted at the rate of 5 cubic meters at 5 psig pressure, i.e. 33,000 newtons/sq m (pascals). The array, which may comprise 75 µm orifices, is shown in Figure 6.


Figure 4,5,6.

In this example about 25 kW of electric power is required by the pumps and electrical system. The electric power may be supplied by a charged aerosol generator placed downstream of the charged aerosol jets, as shown in Figure 2. The example is illustrative; the parameters may be modified by actual environmental data on acid rain.
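The deposition figures can be checked directly from the assumed conditions (a sketch; it uses only the numbers listed above):

    # Aerial spraying arithmetic from the stated assumptions.
    load_kg = 1000.0        # spray load of 20% Ca(OH)2 suspension
    caoh2_fraction = 0.20
    area_m2 = 1.0e6         # one square kilometer
    speed_m_s = 200.0
    swath_m = 25.0

    # Time to sweep the area at the given speed and swath width:
    print(area_m2 / (speed_m_s * swath_m))           # 200 s, about 3.3 min

    # Deposited Ca(OH)2 per square meter, in milligrams:
    print(load_kg * caoh2_fraction * 1e6 / area_m2)  # 200 mg/sq m

Both quoted figures (200 s and 200 mg/sq m) are reproduced.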

References

1. U.S. Patent 3,503,704 issued to Alvin M. Marks on March 31, 1970. "Method and apparatus for suppressing fumes with charged aerosols"

2. U.S. Patent 3,502,662 issued to Alvin M. Marks on July 14, 1970. "Smokestack aerosol gas purifier".

3. U.S. Patent 3,960,505 issued to Alvin M. Marks on June 1, 1976. "Electrostatic air purifier using charged droplets".

4. Marks, Alvin M. "A charged aerosol wind/electric power generator using induction electric charging with a microwater jet". 17th IECEC Proceedings. 1982.

5. List of 18 patents in the field of charged aerosols.

6. Additional information on: papers, Patents and publicity, by categories:
.1 Wind/Electric Power Generator
.1.1 ibid 4.
.1.2 Science and Mechanics. Winter 1980, Cover Story "Incredible power fence" by James Hyypea.
.1.3 New York Times, January 25, 1984, "Generating low cost electricity"
.1.4 Patents: 4,206,396 and 4,443,248
.2 Heat/Electric (Tin-Aerosol) Generator
.2.1 17th IECEC Proceedings, 1982 "An Electrothermodynamic Ericsson Cycle heat/electric generator" by Alvin M. Marks.
.2.2 Science and Mechanics. September-October 1983. "Amazing tin aerosol generator" by James Hyypea
.2.3 Patents 4,395,648, 4,523,112 and 4,677,326
.2.4 19th IECEC Proceedings. 1984. "Electrothermodynamic equations of a charged aerosol generator"
.3 Charged Aerosol Air Purifier
.3.1 Los Angeles Herald-Examiner, January 25, 1965. "Instant rain to kill smog"
.3.2 Modern Plastics Magazine, August 1970. "More help for solid-waste disposability"
.3.3 Chemical Engineering. August 21, 1972.
.3.4 Patents 3,503,704, 3,520,662 and 3,960,505

7. CRC Handbook of Chemistry and Physics. 65th Ed. 1984-1985. Page B-82, No. C124, Calcium Hydroxide. Solubility in water: 0.186 g/100 g of water at 0 C; 0.077 at 100 C. Molecular weight 74.09.

8. The Merck Index Tenth Edition. 1983. p. 1657. No. 1663, Calcium Oxide.

9. General Chemistry, 2nd Edition. McQuarrie and Rock, W.H. Freeman and Company. p. 120-1.

10. Hearings before the Subcommittee on Antitrust and Monopoly of the Committee on the Judiciary, United States Senate, Ninetieth Congress First Session Pursuant to S. Res. 26; Part 6; "New technologies and concentration", October 2,3,4,6, 1967; Testimony of Alvin M. Marks, p 3340-60; on the air purifier, illustrations p 3352-53; discussion p 3360.

Clean engines - A combination of advanced materials and a new engine design

James E. Smith
Randolph A. Churchill
Jacky Prucz
West Virginia University
MORGANTOWN, West Virginia 26506
United States of America

The initial concept behind the Stiller-Smith Engine was stimulated by a children's toy called a "do-nothing" machine. The toy is a simple wooden block divided into quarters by two grooves, through which two small wooden pieces slide when the handle is turned (Figure 1). If the linkage that constitutes the handle is rotated at constant angular velocity, the action of the sliders is sinusoidal, linear motion.

The "do-nothing" machine is in essence a kinematic inversion of a Scotch yoke often called an eliptic trammel. Or, in Beyer (1) it is referred to as a double slider or swinging cross slider crank device (Figure 2). The principal of this device was used by Leonardo da Vinci as his "oval-former" to shape and cut ellipses. Further work on identifying the motion of this device was described in 1875 by Reuleaux (2) and also by Prechtel (3).

To understand the Stiller-Smith Mechanism first consider the motion of the double-cross slider. Figure 2 is a pictorial representation illustrating that the center point of the cross-slider link travels in a circular path.

Previous attempts to utilize a double-cross slider took advantage of this translation component. Figure 2 further indicates how the area around point B rotates in a direction counter to that of the translation. It is this rotational phenomenon which is harnessed by the Stiller-Smith Mechanism. Note that all points on the connecting "bar" travel in an ellipse; point B, being unique, travels in a circle.
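The motions described above can be written down directly. A minimal kinematic sketch (the bar half-length A is an arbitrary illustrative parameter):

    import math

    # Double cross-slider (elliptic trammel) kinematics: a bar of length 2A
    # has its ends constrained to two perpendicular grooves.  At crank
    # angle t (constant angular velocity means t grows linearly with time):
    #   slider 1: x = 2A cos(t)   -- pure sinusoidal, linear motion
    #   slider 2: y = 2A sin(t)
    #   midpoint (point B): (A cos(t), A sin(t)) -- a circle of radius A
    #   any other point of the bar traces an ellipse.
    A = 1.0
    for deg in (0, 45, 90):
        t = math.radians(deg)
        slider1 = 2 * A * math.cos(t)
        slider2 = 2 * A * math.sin(t)
        point_b = (A * math.cos(t), A * math.sin(t))
        print(deg, round(slider1, 3), round(slider2, 3), point_b)

Point B's circular path is what the Stiller-Smith Mechanism taps for rotary output, while the sliders' sinusoidal travel corresponds to the piston motion.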

Engine description

Though the Stiller-Smith Mechanism is planar and the resulting engine has planar characteristics, there is a certain three-dimensionality to the actual construction. The mechanism may be used in a number of configurations; however, the fundamental version involves four cylinders in a cruciform layout. Each piston is directly connected to the piston across from it via a non-articulating connecting rod. This arrangement promotes an efficient exhaust cycle opposite the firing cylinder. The two connecting rods are normally perpendicular to each other (Figure 3) (4).


Figure 1. The "do-nothing machine"


Figure 2. The double cross-slider motion


Figure 3. Engine component layout model

At the center of each rod is a yoke for a radial bearing. The trammel link is now a circular gear with pins protruding, one in each direction. This lets the "floating gear" sit between the two connecting rods with a pin in each bearing/yoke. Output shafts are placed in opposite corners from each other with eccentrically mounted gears meshed with the floating gear. Figure 4 illustrates this layout as it appears in the actual prototype.


Figure 4. Stiller-Smith engine prototype floating gear mechanism

The block has linear bearings for the connecting rods to pass through, which serve to isolate the combustion process from the mechanism. They also provide a scavenging area behind each piston for two-cycle operation, if preferred. (Two-cycle cylinders were easily adapted to the prototype shown by bolting "off-the-shelf" cylinders onto the square block.) The completed engine with mounts and test stand is shown in Figure 5. The engine is in an upright configuration to facilitate testing, but it may also be placed and oriented in a variety of positions, both vertical and horizontal.


Figure 5. Stiller-Smith engine prototype.

Device advantages

Exciting advantages are predicted for this mechanism in an internal combustion engine. Performance reliability and user familiarity must first be developed before large-scale commercialization can be realized. Until that time, this type of engine may find application in research fields as a test frame for materials or other engine systems (5). Potential advantages of this device can be summarized as follows:

Table 1. Potential Advantages

1. Increased power-to-weight ratio
2. Fewer moving parts
3. Improved balancing characteristics
4. Sinusoidal piston motion
5. Orientation variability
6. Isolated combustion/motion conversion processes
7. Non-articulating connecting rod
8. Improved ignition delay characteristics
9. Commonality of components
10. Decreased maintenance/down-time

These potential advantages are viewed as fundamental requirements for improvements in efficiency and in the processes relating to environmental improvements. Being lighter and smaller reduces vehicle size requirements and decreases the fuel consumption rates. The utilization of some of the specialty materials will allow for higher combustion chamber temperatures and thus the use of heavier or multiple fuels. In addition to the end-use benefits derived by the consumer using this engine, the user-transparent expenses (both monetary and environmental) of producing the engine and the fuels to operate it would be greatly reduced.

To make an engine of this type, or any engine, effective in a clean energy environment it must be able to operate reliably at its maximum efficiency. To achieve this goal new materials and techniques must be adopted. The following section briefly details some of the materials and processes that will be used in the future to maximize the effective use of fuel resources.

Ceramic materials

Tremendous achievements in materials science have been reported in recent years; no longer is a designer restricted to using simple irons and metals. New materials, however, both complicate and enhance the design procedure. The breadth of material possibilities and the speed of new material introduction make it difficult to remain up-to-date with available products. With this expansive selection, though, comes the possibility of vastly improving current products and promoting new ones. Plastics, polymers, ceramics, composites, and even organic materials are quickly being used in more applications.

The application of ceramic materials to internal combustion engines has become an important research and production strategy. The high insulating characteristics of these ceramics are being tested in an attempt to reduce the heat transfer away from the hot combustion gases. Ceramics have other advantages, such as lower density, which decreases moving mass. (This eases valve train requirements.)

Progress in raising combustion temperatures in the early days of engine design was restricted by the limitations of cast irons and other construction materials. Thick walled combustion chambers were built to conduct heat away from the burning gases in the cylinder. Little else was accomplished in the fields of reducing heat transfer and raising engine cylinder temperatures until after World War II. Materials that were then examined included glass derivatives and others thought to have low thermal conductivities (6). Glass has excellent insulating qualities, low expansion ratios, low cost, but unfortunately lacks sufficient strength for engines. Interest in glass (ceramic) coatings for engine components dates back to the 1950's (7). Some ceramics have been used in engines in small quantities, for example spark plug cases. (These act as an electrical insulator more than a thermal insulator.)

Not until the mid-1970's was significant progress made in combustion chamber materials. At this time compounds of silicon carbide (SiC) and silicon nitride (Si3N4) were made and used in cylinder construction. Table 2 lists some popular engine materials and their properties (8).

For example, SiC is a common abrasive used industrially. Advantages of SiC include good wear resistance, half the density of steel (reduced inertia), high-temperature capability (sintered at 2200 C), a low coefficient of friction, low-cost fabrication, and good corrosion resistance (9). Disadvantages are the material's brittleness, notch sensitivity, and 18% shrinkage during sintering. SiC is also a poor insulator, causing large induced thermal stresses. It is used in some high-temperature internal combustion engine applications and is to some degree successful. Efforts with SiC now center on monolithic structures (one-piece construction) because of their low cost and ease of manufacturing.

Table 2. Engine structural and insulating materials

Material      Ultimate flexure   Density   Young's modulus   Coeff. of therm. exp.,   Coeff. of therm.
              strength (MPa)     (g/cc)    at 1260 K (GPa)   300-1260 K (10^-6/K)     cond. (W/m K)
Si3N4         300                3.1       300               3.2                      12
SiC           450                3.15      400               4.5                      40
AMS           20                 2.2       12                0.6                      1
ZrO2          300                5.7       200               9.8                      2.5
Al2O3-TiO2    20                 3.2       23                3.0                      2
Cast iron     275                7.6       85                10                       75

Silicon nitride is still a popular constituent for engines and is currently being investigated by GTE Laboratories (10). GTE has developed a composite structure which is a silicon nitride matrix with a silicon carbide whisker component. Such a structure should decrease ceramic brittleness in engines. Increased resistance to cracking and breaking is an advantage of a whisker-reinforced material over a non-reinforced one. As with SiC, the low densities of these materials make them especially adaptable to reciprocating or oscillating parts such as pistons and valves.

Aluminum titanate (Al2O3-TiO2) has a desirably low thermal conductivity. However, the low strength of the material requires that a supporting base material be used, such as a metallic substrate. The low density of this material makes it desirable for oscillating parts, where component mass is an important consideration. Piston inserts and exhaust system liners are good applications of aluminum titanate.

Aluminum magnesium silicate (AMS) is another candidate that has a low coefficient of thermal expansion but poor strength. A particularly low coefficient of thermal expansion and a high resistance to thermal shock make this material applicable to situations of rapidly changing thermal loads but not mechanical loads. Difficulty in joining it with a metal structure would be encountered due to large discrepancies in expansion coefficients.

Another ceramic application, although not shown in the table, should be mentioned: the use of ceramic fibers of aluminum oxide as reinforcing material in nonferrous metals (11). Fiber quantities of 35-40% by volume in cast aluminum parts show rigidity improvements of three or four times the conventional value. The problem with this concept is production cost.

Opinions differ slightly on the desired requirements for new materials in engine components. The largest discrepancies concern thermal conductivity: some researchers favor high conductivity, others low (12,13). Another consideration is the thermal capacity of the material. Ceramics have about half the capacity of metals (11), a consequence of their lower density and conductivity. Lower thermal conductance means that less heat is dissipated into the structure from a ceramic lining or insert, so a ceramic combustion chamber surface will reach cyclic operating conditions faster than a similar metal surface.
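
A one-line diffusion estimate shows what the low conductivity buys. The sketch below, which assumes textbook specific heats (the paper gives none), compares how deeply combustion heat penetrates a zirconia surface versus a cast-iron surface in one engine cycle; the much thinner penetration in the ceramic is why its surface settles into cyclic operation faster and rejects less heat per cycle.

    import math

    # Per-cycle thermal penetration depth, delta ~ sqrt(alpha * t), using the
    # conductivity and density values of Table 2. Specific heats (c_p) are
    # assumed textbook values, not from the paper, so treat the absolute
    # numbers as illustrative only.
    materials = {
        #             k (W/m.K)  rho (kg/m^3)  c_p (J/kg.K, assumed)
        "ZrO2":      (2.5,       5700,         500),
        "Cast iron": (75.0,      7600,         460),
    }

    t_cycle = 0.06  # s, one 4-stroke cycle at 2000 rpm (two revolutions)

    for name, (k, rho, cp) in materials.items():
        alpha = k / (rho * cp)               # thermal diffusivity, m^2/s
        delta = math.sqrt(alpha * t_cycle)   # penetration depth per cycle
        print(f"{name:9s}: alpha = {alpha:.2e} m^2/s, "
              f"per-cycle penetration depth ~ {delta * 1000:.2f} mm")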

Zirconia is a ceramic material with very low thermal conductivity, good strength and thermal expansion coefficients similar to those of metals, and it is able to withstand much higher temperatures than metals. Full stabilization of zirconia can be accomplished with the addition of CaO, MgO or Y2O3.

Alloys with 20% yttria or 5% calcia create fully stabilized zirconia with good thermal expansion coefficients (14). Unfortunately, these tend to crack and spall off quickly. The resistance of yttria to fuel impurities is low, which decreases its reliability in an engine cylinder environment.

Magnesia or nickel has been added to partially stabilized zirconia (PSZ) to improve its strength and ductility characteristics respectively. Magnesia partially stabilized zirconia (MgPSZ), made with 20-24% magnesia, has a thermal expansion coefficient and elastic modulus close to those of iron and steel and is suitable for liners, valve guides and seats, hot plates, tappet inserts and piston caps in the engine cylinder. It has the highest fracture toughness of all the PSZ materials yet developed. Curing of MgPSZ is performed at 1700 C, with a final density of 5.70-5.76 g/cc.

Nickel PSZ has been developed in hopes of producing a material more ductile than other ceramics. In the harsh transient atmosphere of an IC engine this quality helps compensate for the induced thermal and mechanical stresses. Because of its size, nickel acts as a spherical ball bearing on the molecular level, allowing the molecules of the material to slide over one another (15,16). This decreases the possibility of sudden brittle failure.

Another promising material is Syalon (Si-Al-O-N), a ceramic system of silicon, aluminum, oxygen and nitrogen (17). Silicon nitride, mentioned above, is one derivative of this classification. A major advantage of the material is its low creep at high temperatures; properties are retained to around 1400 C. The material also has a low density and a low coefficient of friction, making it well suited to reciprocating parts such as valves and bearings.

Several other materials are presently being investigated for possible use in engine applications. One coating material which exhibits very promising, high-emissivity characteristics is called, simply, Ceramic Refractory type CT (18,19). High emissivity actually decreases the net heat transfer by radiating much of the heat back toward the combustion process. Figure 6 illustrates the emissivity of common materials.


Figure 6. Emissivity characteristics of ceramic CT

At 3000 F the emissivity is listed at 98%. This material is a water-based silica-alumina and is suitable only as a coating over an existing structure; monolithic pieces have not yet been developed. It is now used in furnaces, boilers, and on quench baskets and racks, and it may potentially improve fuel efficiencies. Recent developments in surface finish and hardness have made this material attractive to engine designers. Exact finishes are possible, and the coating is relatively easy to apply to the structure.

Note that emissivity is considered more important than the conductive heat transfer characteristics: reflecting heat away from the surface can be achieved without a dangerous rise in the surface temperature. Adding carbon black will further increase emissivity at the expense of some strength. These materials were also designed to resist corrosion, to be very durable, and to be non-toxic and non-flammable.
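
The scale of the effect can be sanity-checked with the Stefan-Boltzmann law. The sketch below evaluates the radiant flux implied by the quoted 98% emissivity at 3000 F; the comparison emissivity for plain oxidized iron is an assumed handbook value, not a figure from the paper.

    # Radiant flux q = epsilon * sigma * T^4 at the quoted 3000 F surface
    # temperature, for the coating and for an assumed oxidized-iron surface.
    SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W/m^2.K^4

    def radiant_flux(emissivity, temp_f):
        temp_k = (temp_f - 32.0) * 5.0 / 9.0 + 273.15   # Fahrenheit -> kelvin
        return emissivity * SIGMA * temp_k ** 4          # W/m^2

    for label, eps in [("Ceramic CT (quoted)", 0.98),
                       ("Oxidized iron (assumed)", 0.70)]:
        q = radiant_flux(eps, 3000.0)
        print(f"{label:24s} epsilon = {eps:.2f}: {q / 1000:.0f} kW/m^2")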

This refractory material also has an interesting cured structure that is worth presenting. The material is sprayed on much as a paint would be. Drying takes between 12 and 24 hours, and the piece is then cured by bringing it up to operating temperature. With this curing process the coating divides into three distinctive layers, as shown in Figure 7.


Figure 7. Layered structure of silica-alumina based mineral

Closest to the base material is a bonding or fusion layer that forms a chemical bond with the surface; this has proven very durable and corrosion resistant. The middle layer is one of expansion, accommodating any difference in thermal expansion between the other two layers. The outer layer is under more severe thermal stress and wants to expand more than the others; the expansion layer allows this to happen without disturbing the bonding layer. The outer layer is case hardened to provide a hard surface against wear. The material has also been shown to decrease the amount of deposition onto cylinder walls, owing to the non-reactive nature of the surface. The manufacturer indicates that Caterpillar applied this material to a cylinder wall in a test engine and that Purdue University also did some work with this material in engines.

Other ceramic, composite or advanced materials are being developed for different engine applications. Use of the lighter-weight ceramics should improve response time in moving parts. Engine drive-train components may use ceramic materials for their improved hardness and wear capabilities; friction coefficients and contact surfaces may also be improved. Some of the materials listed above may also work well as components in exhaust turbines, such as rotors and port linings. Particulate filters may be another useful application of ceramics or composites.

While the information above is only a cursory review of ceramics and their potential importance to clean engines, it is presented to stress the complexity and breadth of the art and the importance of expanding this science. Each of these materials, a combination thereof, or one yet to be discovered will help solve the combustion chamber materials problem, but only after years of exhaustive research.

Composite materials

Analysis and use of composite materials in internal combustion engines has become an important research topic (20). Several modeling approaches examine the elasto-dynamic response of composite connecting rods in an engine. The emphasis is on tailoring the material system and lay-up to reduce the magnitude and elasto-dynamic oscillation of the bearing loads. This constitutive modeling technique for composite links is consequently more complete and accurate, since it accounts for strength characteristics, extensional deformations and elastic coupling effects not included in prior efforts (21,22).

As an example the physical model and notation selected for the analysis of an in-line slider-crank mechanism are shown in Figure 8. The crank-shaft is rotating at a constant angular velocity and there is no energy dissipation in the system, due to either friction or material damping. The fiber-reinforced connecting rod is the only elastic link of the mechanism (Figure 9); other links are made of steel and considered rigid. A large-mass flywheel is presumed to be attached to the crank-shaft in order to keep its angular velocity constant.

A parametric study has been carried out on the analytical model described above. Its primary objective is to investigate the effects of the material design selected for the connecting rod on the dynamic response of the overall mechanism (20). The response characteristic chosen for this analysis is the vertical bearing load in the wrist-pin, which determines the side-wall force on the piston. The main input parameters are material properties and laminate lay-up of the fiber-reinforced connecting rod.
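
As a crude illustration of the rigid-body trend reported in Figure 10, the sketch below compares the peak wrist-pin side-wall force for a heavier steel rod and a lighter composite rod. Every number in it (geometry, speed, masses, the constant gas-force stand-in) is invented for illustration, and it lumps a third of the rod mass at the wrist pin, a common rigid-body approximation; the paper's own elasto-dynamic, laminate-aware model is far more complete.

    import math

    R, L = 0.04, 0.14            # crank radius and rod length, m (assumed)
    OMEGA = 2 * math.pi * 50     # crank speed, rad/s (3000 rpm, assumed)
    M_PISTON = 0.5               # piston mass, kg (assumed)
    F_GAS = 2000.0               # constant gas-force stand-in, N (assumed)

    def max_side_force(m_rod):
        """Peak side-wall force over one revolution, lumping ~1/3 of the
        rod mass at the wrist pin."""
        m_recip = M_PISTON + m_rod / 3.0
        lam = R / L
        peak = 0.0
        for i in range(720):
            th = math.radians(i / 2.0)
            phi = math.asin(lam * math.sin(th))          # rod obliquity
            # piston acceleration, first-order slider-crank approximation
            acc = R * OMEGA**2 * (math.cos(th) + lam * math.cos(2 * th))
            axial = F_GAS - m_recip * acc                # force along bore
            peak = max(peak, abs(axial * math.tan(phi))) # side-wall part
        return peak

    for name, m_rod in [("steel rod", 0.60), ("composite rod", 0.25)]:
        print(f"{name:13s}: peak side-wall force ~ {max_side_force(m_rod):.0f} N")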


Figure 8. Physical model and notation.

The time variation of the side-wall force on the piston over one cycle of operation is shown in Figure 10. The composite rod yields reductions in side-wall force magnitude, owing to its superior longitudinal strength and lower mass density.

These results indicate an upper limit for such benefits as they correspond to unidirectional fiber orientation along the connecting rod axis, which is the optimal lay-up for the particular loading case considered here. Lower benefits should be expected for different lay-ups which may be required by more general loading conditions. Besides mass characteristics, elasto-dynamic oscillations of the side-wall force magnitude are governed in this model primarily by the extensional and flexural stiffness of the connecting rod along its longitudinal axis. Since steel is stiffer than fiber-reinforced composites selected for this investigation, the elasto-dynamic benefits of using composite, rather than steel, connecting rods are due to reduced inertia effects.

Summary

Research and technology are at hand to create new concepts in internal combustion engines. Taking advantage of them requires a re-direction of present manufacturing, though not the retooling of a whole industry: only a rethinking of the way we convert our fuel to power.


Figure 9. Schematic of the ith ply in the connecting rod.


Figure 10. Side-wall forces predicted from rigid-body dynamics for various materials.

The composite material model presented permits a quantitative evaluation of the elasto-dynamic effects associated with material systems and lay-ups selected for fiber-reinforced tubular connecting rods in engines. It may be regarded as a step toward the development of a rational methodology for tailoring the directional dependent properties of composite-made machine parts to functional requirements of high-speed linkages. This is only one of many new techniques that are presently being used or emerging.

The Stiller-Smith Engine could revolutionize internal combustion engine design. The need for cleaner energy conversion and improved performance makes this engine very attractive for further study. What will make it become a leader in clean energy conversion will be the effective use of advanced materials, which is the primary goal of this on-going project.

References

1. Beyer, Rudolf. "The kinematic synthesis of mechanisms". McGraw Hill, New York, p. 62.

2. Reuleaux, F. "Theoretische Kinematik". Friedrich Vieweg & Sohn. Braunschweig. 1875. p. 336.

3. Prechtl, "Technologische Encyklopädie". (4) p. 425; (7) p. 246.

4. Smith, James E.. "The dynamic analysis of an elliptical mechanism for possible application to an internal combustion engine with a floating crank". Dissertation. West Virginia University. 1984.

5. George, Aaron C.. "A method for predicting cylinder pressure in the Stiller-Smith or other sinusoidal type engines". Thesis. West Virginia University. 1988.

6. Kamo, R. (Cummins) and Bryzik, W. (U.S. Army TARADCOM). "Adiabatic turbocompound engine performance prediction". SAE 780068.

7. French, D.D.J. (Ricardo Consulting Engineers) "Ceramics in reciprocating internal combustion engines". SAE 841135.

8. Kamo, R. and Bryzik, W., "Uncooled, unlubricated diesel?" Automotive Engineering, Vol. 87, n. 6. June 1979. pp. 59-61.

9. Timoney, S., (University College of Dublin, Ireland) and Flynn, G. (Carborundum Resistant Materials Co.). "A low friction, unlubricated SiC diesel engine". SAE 830313.

10. "Ceramic Composite". Automotive Engineering. December 1986. p. 22.

11. Walzer, Peter, Hartmut Heinrich, and Manfred Langer. "Ceramic components in passenger-car diesel engines". SAE 850567.

12. Bryzik, Walter and Roy Kamo. "TACOM/Cummins adiabatic engine program". SAE 830314.

13. Marmach, M., et al. "Toughened PSZ ceramic: their role as advanced engine components". SAE 830318.

14. Moorhouse, Peter and Michael P. Johnson. (NEI-International Research and Development Co. Ltd.). "Development of tribological surfaces and insulating coatings for diesel engines". SAE 870161. SP-700.

15. Larson, D.C., J. W. Adams, L. R. Johnson, A. P. S. Teotia, and L. G. Hill. "Ceramic materials for advanced heat engines". Noyes. New Jersey. 1985.

16. Carr, Jeffrey and Jack Jones (Kaman Sciences Corp.). "Post densified Cr2O3 Coatings for adiabatic engines". SAE 840432. SP-571.

17. Lumby, R. J., P. Hodgson, N. E. Cother and A. Szweda (Lucas Cookson Syalon Limited). "Syalon ceramics for advanced engine components". SAE 850521.

18. Hellander, John C. (Ceramic-Refractory Corporation, Transfer, PA). SAE Pittsburgh Section. 2/17/87.

19. Ceramic-Refractory Corp. information sheet, Pittsburgh, PA.

20. Prucz, Jacky., Joseph D'Acquisto, James Smith. "Performance enhancement of flexible linkages by using fiber-reinforced composites", American Institute of Aeronautics and Astronautics, Inc., 1988.

21. Thompson, B.S., D. Zuccaro, D. Gamache and M. V. Gandhi. "An experimental and analytical study of a four bar mechanism with links fabricated from a fiber-reinforced composite material". Mechanism & Machine Theory. (18) 2. 1983. p. 165-71.

22. Sung, C.K., B. S. Thompson. "Material selection: an important parameter in the design of high-speed linkages". Mechanism & Machine Theory. (19) 4/5. 1984. p. 389-96.

23. Churchill, R.A., J.E. Smith, N. M. Clark and R. Turton. "Low-heat rejection engines - a concept review". SAE 880014 (International Congress and Exposition). Feb 29 - Mar 4, 1988.

Developments on the flexible mirror

Scott Strachan
6 Marchhall Court,
EDINBURGH EH16 5HN
United Kingdom

Over the past few years our engineering group in Scotland has been working on the concept of optical-grade variable-focus mirrors.

A group at Strathclyde University had devoted considerable research effort to the problem of generating a parabolic figure from a pressure-stressed membrane and had expanded on the work of Mueller in this field, mainly by introducing a separate stretching frame in addition to the mounting rim of the membrane mirror itself.

Our group concluded that this approach was probably limited, in that even if successful, the cost of the final structure was unlikely to be competitive with conventional glass mirrors.

We therefore adopted the alternative approach of aiming at maximum repeatability of a predictable figure even if this figure was not ideally parabolic.

We believed that the error from parabolic and the inevitable slight astigmatic distortion could probably be corrected at low cost with a small-diameter distortable plastic meniscus lens.

We found that, provided the membrane was bonded to a rim of sufficient accuracy (approximately 1/60000th of the diameter as the tolerance on rim height, and 1/6000th on circularity), the figure was entirely predictable for any membrane with a thickness of between 1/1000th and 1/5000th of the diameter.

Such mirrors form near-parabolic figures in the focal range of f/5 to f/8. The parabolic error can be corrected by a meniscus lens at a cost of less than $10.00. The resulting mirror has a performance equivalent to a glass mirror with a figure accuracy of 1/8th of a wave of (yellow) light.
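
Those fractional tolerances are easier to appreciate in absolute terms. The short sketch below simply evaluates them for the 300 mm production mirror described next.

    # Rim and membrane tolerances, quoted above as fractions of the mirror
    # diameter, evaluated for a 300 mm mirror.
    d_mm = 300.0

    rim_height_tol = d_mm / 60000.0          # 1/60000th of diameter
    rim_circularity_tol = d_mm / 6000.0      # 1/6000th of diameter
    membrane_thickness = (d_mm / 5000.0, d_mm / 1000.0)   # allowed range

    print(f"rim height tolerance:  {rim_height_tol * 1000:.0f} micrometres")
    print(f"circularity tolerance: {rim_circularity_tol * 1000:.0f} micrometres")
    print(f"membrane thickness:    {membrane_thickness[0]:.2f} to "
          f"{membrane_thickness[1]:.2f} mm")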

We are now in small-scale production of these mirrors and, even at our present level of production, have achieved the low cost of $600.00 for a variable-focus 300 mm mirror with an uncorrected accuracy of 1/6th of a wave.

These mirrors have many uses: light intensifiers, thermal amplifiers, laser collimators, variable-focus cutting laser optics and, of course, telescopes. Though the mirror's accuracy and stability will never seriously compete with the best glass mirrors, the variable focus opens many applications not available to glass. When the low cost is taken into account, there are also many applications for large-aperture mirrors which were simply never worth the cost of a glass mirror.

Extracting electromagnetic energy from the nonlinear Earth as a self-pumped phase conjugate mirror

T.E. Bearden
A.D.A.S.
P. O. Box 1472
HUNTSVILLE, Alabama 35807
United States of America

At the beginning of the 20th century, Nikola Tesla constructed and attempted to complete a giant Earth transmitter on Long Island, New York, which he believed would energize the Earth itself into giant, amplified standing waves which could be "tapped" at other locations around the globe to provide cheap and amplified electrical power, all fueled by enormous energy freely fed into the standing wave from the Earth itself. Tesla's U.S. Patent No. 1,119,732, "Apparatus for transmitting electrical energy", on his "Magnifying Transmitter", was granted on Dec. 1, 1914, after nearly thirteen years of struggle with the Patent Office. His patent No. 645,576, "System of transmission of electrical energy", Mar. 20, 1900 and his Patent No. 649,621, "Apparatus for transmission of electrical energy", May 15, 1900, were also related to the magnifying transmitter.

In this paper we present a possible means of converting the Earth into a self-pumped phase conjugate mirror (SPPCM), containing a powerful spherical standing scalar EM wave. The SPPCM Earth then produces a self-powered, highly amplified phase conjugate EM replica wave in response to another input EM wave of nominal power transmitted separately into the Earth from any point.

We point out the special characteristics of this standing scalar EM wave and the Earth's reaction, both in the four spatial dimensions (1-3 and 5) of 5-dimensional Kaluza-Klein space and in the time (4th) dimension. (1)

Also we point out that quantization results in a reciprocating wave between electromechanical stress in the four spatial dimensions and canonical stress in the time dimension. This allows the stressed nonlinear Earth to act as a self-pumped phase conjugate mirror (2) so that stress energy of the Earth feeds energy into the standing scalar EM wave, resulting in very high energy gain and production of a very powerful electrogravitational resonance condition (standing wave) in the Earth itself.

This powerful resonant standing wave is then "tapped" in accordance with standard 4-wave mixing theory (3) to produce a highly amplified phase conjugate replica in response to a relatively small input signal from any distant Earth-coupled transmitter on the Earth's surface. (4) The phase conjugate replica signal is coupled back to the distant transmitter/receiver site where it is received, processed and fed into the electrical power grid to distribute electrical power.

A single inducing transmitter can be used to energize the entire Earth on a single frequency or "energy channel" (5). This channel can be tapped by numerous extraction transceivers around the Earth, furnishing energy from the Earth itself at each extraction site. Additional inducing transmitters can be added on different frequencies, to provide additional energy extraction channels (6).


Nikola Tesla's Magnifying Transmitter built on Long Island, New York

In this manner, enormous amounts of clean electrical energy, free from present Earth-polluting generator systems, can be cheaply and continuously extracted from the massive energy of the Earth itself and used to power the requirements of the nations and citizens of the Earth.

The primary energy source (Earth's heat and stress energy) is replenished from the Sun, the Earth's own internal reactions, and the vacuum itself; therefore, this method provides a permanent, clean and self replenishing source of electrical power for all humankind.

Establishing a Standing Scalar EM Wave in the Earth

Let us momentarily treat the Earth as an isotropic but broadly resonant nonlinear medium. We will consider the Earth's deep interior, which is under intense heat and pressure, as a special kind of cathode. We will attempt to produce a transformation of the Earth into a special triode, so that we may then introduce a relatively small "grid signal" and gate highly amplified energy from the Earth's interior self-powered cathode to an external plate load on the surface.

We envision a powerful transmitter operating at a fixed frequency within the Earth's resonant frequency band, transmitting a signal vertically into the Earth and utilizing a deeply buried ground plane for good Earth coupling (Figure 1). Due to Newton's Third Law (which we will discuss further shortly), a phase-reversed and opposite EM wave (Figure 2) is also produced. We assume that the two opposing EM waves are to function as a "pump wave", as in the pumped phase conjugate mirror theory well known in nonlinear optics (7).

The nonlinear Earth medium acts as a modulator, causing the two opposite waves to mutually modulate each other and lock together as a single wave. This resulting scalar EM wave is a resonant, standing spherical wave with zero E and B electromagnetic force field resultants. This standing scalar EM wave is set up throughout the spherical Earth medium (Figure 3). This wave has zero-resultant E and B fields, but consists of oscillations in the stress energy density of the vacuum itself. It is, therefore, a gravitational wave and travels through the atomic nuclei of the Earth as its transmission medium, rather than through the atomic orbital electron shells (8).

For each of the two EM vector wave components of the standing scalar wave, the reverse piezoelectric and magnetic nature of the Earth causes a concomitant standing mechanical wave to be produced, and again Newton's Third Law causes an opposing mechanical wave to also be produced.

At any small point in the Earth, opposing and balanced mechanical forces are produced which, being mechanically superposed by the nonlinear mass application point, sum to a zero mechanical translation resultant.

However, even though the summed mechanical wave has a zero-vector resultant with regard to translation, each of its paired and opposite internal mechanical force components forms a mechanical stress. Since the two zero-summed mechanical components are rhythmically varying in a directly opposing manner, together they constitute a rhythmically varying mechanical stress wave at each and every point. The summation of all such rhythmic, time-synchronous stress pairs at each point produces the total mechanical stress wave at that point in the Earth - much like repeatedly squeezing a ball in place between the outstretched fingers of one's two hands, without translating the ball. An analogy comparing the E and B field electromagnetic translation and swirl forces with the electrogravitational "squeeze" potential G is shown in Figure 4.

Production of a Standing Wave in Four Spatial Dimensions

Further, this rhythmically varying mechanical stress wave in the Earth is a standing wave, and is spirally co-incident with the standing scalar wave.

Thus there is formed a single, phase-locked, oscillating, standing EM/mechanical stress wave in the Earth medium, but this stress wave has no external electromagnetic field or mechanical strain vector.

In both cases (scalar EM wave and scalar mechanical stress wave), the wave constitutes a special kind of Kaluza-Klein gravitational wave. The scalar EM component constitutes an electrogravitational potential standing wave in the 5th dimension, since all EM is fifth-dimensional in the well-known Kaluza-Klein theory. The mechanical (3-space) component constitutes a standing, ordinary gravitational wave in our normal 3-space. Taken together, the two phase-locked waves constitute a full gravitational standing wave in the 4 spatial dimensions of KK 5-space, 3-space and the 5th dimension. We call this "spatial-only gravitational wave" in 5-space a k-wave, and we call the four spatial dimensions k-space.


Figure 1. Input into a cathode earth

[NOTE: PUMP WAVE IS A STANDING WHITTAKER WAVE]


Figure 2. Input + reaction + modulation

[NOTE: STANDING WHITTAKER WAVES]


Figure 3. Spherical scalar wave in internal earth stresses


Figure 4. Electrogravitational fields E, B, and G


Figure 5. Newton's Third Law

NOTE: emission of TR/PCR does not affect momentum of ball No. 2

Formation of a Canonical Wave of Time Stress

Now we examine the situation in the remaining KK dimension, the fourth or "time" dimension, caused by the creation of the k-wave.

In the five-dimensional KK model, we may regard a fundamental quantum as existing in five dimensions. This quantum consists of the product of two parts, one in the 4th dimension (time) and one in the combined 3-space and 5th dimension. (The 5th dimension in KK theory is "wrapped around" each point in 3-space, and intimately related to it.) Again, we shall refer to ordinary 3-space and the 5th dimension, taken together, as "k-space". Note that our two phase-locked "3-spatial and 5th dimensional stress waves" constitute a stress wave in k-space.

Since the 5-space quantum must possess a constant magnitude, when a regular stress oscillation is established in the k-space component of the 5-d quantum, an inverse stress oscillation also canonically exists in its 4th-dimension, or time, component.

Thus in 5-space we have actually created and established two phase-locked canonical stress waves: one stress wave in k-space, and its inverse replica in time - which we shall call the "t-wave".

When the magnitude of the k-wave is increasing, the magnitude of the t-wave is decreasing, and vice versa. We shall call this canonical symbiosis of the two waves the reciprocal wave principle. We call the wave itself the reciprocating wave, to accent that stress energy is oscillating back and forth between the time dimension and k-space. We also refer to the reciprocating wave as the r-wave.

Mechanism Producing Relativity and Time Dilation

We point out that this mechanism may actually produce the rotation of a frame of reference, and time dilation, as follows:

When a frame is rotating (the object is accelerating), the accelerated object has an external force acting upon it. By Newton's Third Law, the accelerated object itself also produces an equal and opposite reaction force, acting back upon the system furnishing the accelerating force.

In the accelerated object itself, its mass particles act as nonlinear media, and consequently the object is internally stressed in 3-space.

Quantum mechanics shows that the mechanism transporting the mechanical forces is electromagnetic (it is due to virtual photon exchange at base), and so the object is also stressed electromagnetically, or in the 5th dimension. Hence the object is stressed in k-space.

Since - quantum mechanically - all "static stress" actually exists dynamically through the exchange of virtual-state flux, a "static" force seen by the external classical observer is actually due to a dynamic flow, at base, of the exchange quanta produced in k-space.

At "zero velocity", a nonmoving object experiences (and each of its mass particles interacts with) the basic, background virtual particle stress (flux density) of vacuum. When the object is moving through the vacuum at some velocity, it meets a greater virtual particle flux density - much as a moving vehicle in the rain strikes more raindrops then when still. Thus the moving object experiences greater vacuum flux density, hence greater vacuum virtual particle stress.

So when the acceleration is removed after a higher velocity has been attained, the dynamic vacuum flux being "met" and interacted with by the mass particles of the moving object has been increased, and the object is under greater vacuum potential stress.

The reciprocal wave principle applies: the increased internal k-stress in the accelerated object produces a concomitant decrease in the t-stress. Hence accelerating an object to a higher velocity produces time dilation, while at the same time producing length contraction.
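
For reference, the textbook special-relativistic relations that this passage maps onto its k-stress/t-stress picture are the following (stated here in their standard form for the reader's convenience, not derived from the reciprocal wave principle):

    \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
    \Delta t' = \gamma\,\Delta t \;\; (\text{time dilation}), \qquad
    L' = L/\gamma \;\; (\text{length contraction}).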

The Reciprocating Wave

The reciprocal wave principle and mechanism are directly analogous to an inductive-capacitative oscillatory circuit: in the LC oscillator, electrical change stress is oscillating back and forth between a capacitance and an inductance. Five dimensionally speaking, in the r-wave, "gravitational stress potential" is oscillating back and forth between mechanical stress in k-space and time stress in the 4th dimension. We shall refer to this fundamental type of oscillatory stress wave as a reciprocating wave or r-wave.

Due to its zero-vector resultant E and B field nature, the r-wave primarily exists in, and acts upon, the nucleus of each atom, doing little to the atom's electron shells except "squeeze" the electrons in passage. R-waves are primarily emitted from, and absorbed by, the nucleus of the atom.

When a reciprocating wave is present in a mass object, the k-space component causes the object to act as a pumped phase conjugate mirror (PPCM). As a PPCM, it may be pumped by:

1. mechanical stress,
2. electromagnetic stress,
3. any combination of the two.

Further, standard 4-wave mixing theory may be employed to produce high amplification of the r-wave. If an external signal (such as an oscillating force, which quantum mechanically is electromagnetic and dynamic at its "virtual particle exchange" base level) is applied, then a time-reversed or phase conjugate replica is produced.

For a useful ratio, we shall divide the energy density of the phase conjugate replica wave by the energy density of the external input signal, and call the resulting dimensionless ratio the gain of the pumped phase conjugate mirror.

In the nominal case, where external pumping is not employed, a gain of unity is experienced. If external pumping by electromagnetic or mechanical stress, or a combination of the two, is applied, then a gain greater than unity can readily be obtained, and for pumping of great magnitude the gain can be very large indeed. In the oscillation condition, for example, the effective theoretical gain of a real system approaches infinity, which simply means that all the available energy in the scalar pump wave appears in the amplified phase conjugate replica output.
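
A minimal numerical sketch of the textbook four-wave-mixing gain law makes the "oscillation condition" concrete. In the standard theory (see the Pepper and Yariv references in the notes), the intensity reflectivity of a pumped phase conjugate mirror is R = tan^2(|kappa|L), where the coupling-length product |kappa|L grows with pump strength; R diverges as |kappa|L approaches 90 degrees. The values below are illustrative only.

    import math

    # Pumped-PCM intensity reflectivity R = tan^2(|kappa| L) from standard
    # degenerate four-wave-mixing theory. R -> infinity as |kappa|L -> 90
    # degrees, the "oscillation condition" referred to in the text.
    for kl_deg in [10, 30, 45, 60, 80, 89, 89.9]:
        kl = math.radians(kl_deg)
        print(f"|kappa|L = {kl_deg:5.1f} deg -> gain R = {math.tan(kl)**2:12.2f}")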

Immediately it can be seen that, theoretically, any PPCM device which will utilize environmental heat and/or stress for pumping, can be used with a weak "grid" input as a scavenging device to gather waste heat and stress and convert them into useful, coherent output.

It also follows that any such device is negentropic, since it converts disordered energy into ordered energy. The impact of this type of device upon the second law of thermodynamics is obvious. However, we should state in passing that the conventional second law is stated for positive-time operations only, and the device referred to here is utilizing time reversal (phase conjugated waves). As a gedanken experiment, the exercise strongly suggests that a corollary to the second law - the law of negentropy - is required to address time-reversed situations.

Mechanism Producing Newton's Third Law

The foregoing mechanisms are in fact responsible for producing the reaction force in Newton's Third Law.

For example, in two colliding rigid spheres (Figure 5), when ball one closely approaches to strike ball two, it produces a force in ball two due to its emitted virtual photons being absorbed in the mass particles - particularly the atomic nuclei - of ball two. Being positively charged, the atomic nuclei act as phase conjugate mirrors, and produce and emit a "time-reversed replica" of the virtual photons absorbed as a result of collision with ball one, without themselves changing the momentum of ball two. (It is well-known in phase conjugate mirror theory that the emission of a time-reversed wave by a PCM does not produce a reaction force on the emitting mirror, although absorption of a TR wave does produce a force on the absorbing mirror.) The time-reversed virtual photons emitted by ball two reverse back down the path taken by the input "signal" photons from approaching ball one, and travel back to the nuclei of the atoms in ball one. There they are absorbed and produce an absorption force (which we observe as a spatially-reversed replica force) in and on ball one.

In a linear situation (uncurved space-time), where nothing is done to interfere with the production and emission of the phase conjugate replica from the ball two "phase conjugate mirror" and its absorption in ball one, then an equal and opposite reaction force is produced in, and on ball one. As can be seen, this is the electromagnetic mechanism that generates Newton's Third Law.

We note in passing that, in a linear situation, whenever work is accomplished on a "receiving" object or system, an equal amount of work is accomplished on the "initiating" object or system. In a nonlinear situation (locally curved space-time), this need not be true since the vacuum now may contain either a sink for, or a source of, energy.

Given this mechanism for Newton's Third Law, if we curve local space-time (tamper with and change the local vacuum's virtual particle flux density), we may directly affect the phase conjugation mechanism and consequently alter the "linear case" Newtonian Third Law. In that case, the reaction force need not be exactly opposite to, nor equal in magnitude to, the action force.

This means that:

1. It is possible to build a Maxwell's Demon (9),
2. the conservation of energy law can be violated (10),
3. it is possible to build a so-called "free energy" device (11).

Conceivably, we may also utilize the nonlinear effect of a locally curved space-time on Newton's Third Law, to electromagnetically produce a unilateral space-drive translation force in and on a vehicle without the ejection of mass, but this is beyond the scope of the present paper.

The Earth as a Self-Pumped Phase Conjugate Mirror

To lend credence to Tesla's magnifying transmitter, we shall now apply these new principles to the situation produced in the Earth by our special standing reciprocal wave, so that almost limitless energy can be obtained for practical use. Bear in mind that we are still modeling an idealized isotropic medium, and the results will still have to be modified to function in the "real" Earth, which normally departs from the isotropic ideal sufficiently to cause destructive damping of our reciprocal wave.

To continue with an idealized Earth model: In the interior of the Earth, the core is considered to be under very great mechanical pressure and so hot that it is molten. It also consists of a great mix of materials, and so constitutes a highly nonlinear material medium.

With the establishment of a reciprocal wave in this idealized nonlinear material medium, a situation exists where,

1) the Earth's core forms an extremely efficient phase conjugate mirror (PCM), and,

2) due to its internal heat and pressure, this PCM is self-pumped with pump waves of extreme magnitude, particularly in the frequency region from slightly above 0 hertz to 1-2 megahertz. Figure 6 shows the Earth as a self-pumped phase conjugate mirror after initiation.

Further, spherical symmetry exists. The core of the Earth forms a spherical self-pumped phase conjugate mirror, the mid-sphere section of the Earth forms another, and so does the mantle of the Earth. All three self-pumped conjugate mirrors are further cohered together into a single self-pumped phase conjugate mirror system, phase-locked together by the standing reciprocal wave.

Now with a second well-grounded transmitter (part of a transceiver) located at any other point on the Earth's surface, let us transmit a signal into the Earth at the same frequency used by the original transmitter.

A standard 4-wave mixing (FWM) situation now exists, where a powerful "pump" wave (waves 1 and 2 in FWM) is self-furnished by the heat and pressure of the interior of the Earth and the input signal (wave 4 in FWM) is furnished by transmitter number two. According to the well-known FWM principle, in this case a powerfully amplified phase conjugate replica (PCR) signal (wave 3 in FWM) will be returned from the Earth to the transceiver site. (12)

In effect, the Earth has been converted into a giant self-powered triode, and now any distant input site will function as both a grid input and a plate collector for the triode Earth. Figure 7 represents this extraction of a powerfully amplified PCR wave, allowing energy to be extracted directly from the "self-pumped Earth PCM".

At the distant extraction site, a very large, specially tuned LC oscillatory circuit is used to receive the powerful electromagnetic PCR from the Earth, with very high amperage and voltage. Connecting transmission lines conduct the electromagnetic energy to a separate station where the surging power from the pumped Earth is processed by standard techniques and fed onto transmission lines that connect to a large electrical power grid.

According to well-known 4-wave mixing theory, for the oscillation condition (90 degrees), the gain of a pumped phase conjugate mirror approaches such a large number that it may be regarded as nearly infinite. (13)

Very large amounts of electrical power could be extracted from the Earth in this fashion, if the Earth behaved strictly as our idealized isotropic nonlinear medium.


Figure 6. Self-pumped phase conjugate mirror earth


Figure 7. Energy extracted from a single site


Figure 8. Energy extraction from multiple sites

Corrections for the Real Earth's Departure From the Ideal Model

However, the Earth deviates from an idealized isotropic medium, and its deviations will disrupt the idealized situation so that very appreciable damping of the reciprocal wave occurs, extinguishing the self-pumping feature. Thus large disturbances in and on the Earth - such as explosions, earthquakes and tremors - cause trembling throughout the Earth, but do not normally result in 4-wave mixing amplification of the effect, since the r-waves they temporarily initiate are severely damped.

Accordingly, the idealized scheme as previously described will not work without modification to offset the Earth's deviation from isotropy. Without modification of the technique, the coherent phase-locking of the k-wave and t-wave will be broken, and the reciprocating wave will be sharply damped and will not form a standing wave in the Earth. Consequently, random in-phase occurrences of opposing mechanical stress "tremor frequencies" and opposing heat "EM" stress waves will not "feed-in" to an established standing k-wave, for there will be none established to receive such inputs.

The major problem, then, is to establish in the Earth the standing EM wave of our frequency choice and a standing, phase-locked mechanical wave in phase with it, so that a standing k-wave and canonical standing t-wave - and hence a standing reciprocating wave - are established in the Earth's sphere.

So let us consider why a wave breaks up in a nonlinear medium.

The "breaking phase-lock" problem exists because the speed of a wave in a material medium depends not only on the medium's characteristics but also at least somewhat upon wave's amplitude. Hence, for a sine wave, the peaks travel faster than the lower parts of the wave, overtaking them and causing destructive interference, with consequent wave breakup and severe damping.

This exact problem has been met and successfully overcome with ultrasonic sound waves in the ocean (14). We shall apply the same technique to overcome our breakup and de-phasing problem.

A remarkable phenomenon occurs if two sine waves, separated by a small difference frequency, are simultaneously transmitted into the Earth (the real nonlinear medium). In this case we really wish to utilize the difference frequency between the two waves, treating it as if we had transmitted a single sine wave into the Earth at that beat frequency.

It has been shown mathematically that the difference frequency will be propagated through the nonlinear medium as a sine wave, and will not be subject to breakup and damping. We shall refer to this scheme of dual interference frequencies and the resulting beat (difference) frequency as triad usage. With this scheme, we can transmit two waves a fixed frequency apart, and later extract and use the beat frequency as if it were a single sine wave not subject to damping and breakup.

Accordingly, we modify our original transmission scheme and include triad usage. We transmit two frequencies into the Earth a fixed frequency apart. The difference frequency then will first form a standing EM scalar wave, and then our standing in-phase mechanical scalar wave, so that the two together constitute the desired standing k-wave.

From the characteristics of the fundamental quantum, nature itself then forms the phase-locked standing t-wave in inverse phase, completing a standing reciprocating wave in the real Earth.

Extracting Power From the K-Resonant Earth

We now modify our distant "tapping" station transceiver to incorporate triad usage. Again, twin frequencies with the proper separation frequency are transmitted at small power into the Earth as the "input". Twin-tuned LC oscillatory receivers are utilized and their outputs beat together as a beat frequency oscillator. The highly amplified beat frequency (the phase conjugate replica response wave from the self-pumped Earth phase conjugate mirror) is extracted as a sine wave and fed to the transmission lines connecting to the processing station.

A convenient beat frequency, for example, might be 12,000 Hz, and the two transmission frequencies might be 500,000 Hz and 512,000 Hz.

At the receiver, the beat frequency oscillator puts out 12,000 Hz at very high voltage and amperage. The voltage may be adjusted by biasing the ground potential of one of the two LC oscillators that are beat together. The voltage and amperage received from the Earth are varied by adjusting the transmitted voltage and amperage. Standard feedback control techniques are used to provide stability of the output power.
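
The arithmetic of this two-tone ("triad") scheme is easy to check numerically. The sketch below is a minimal signal-processing illustration only - it models the nonlinear medium as a simple square-law mixer and says nothing about the Earth-coupling claims - showing the 12,000 Hz difference frequency emerging from the two quoted transmission frequencies.

    import numpy as np

    # Two tones at 500,000 Hz and 512,000 Hz passed through a square-law
    # nonlinearity; the cross term contains the 12,000 Hz beat.
    fs = 10_000_000                         # sample rate, Hz
    t = np.arange(0, 0.01, 1 / fs)          # 10 ms of signal
    drive = np.sin(2 * np.pi * 500_000 * t) + np.sin(2 * np.pi * 512_000 * t)
    mixed = drive ** 2                      # square-law mixing

    spectrum = np.abs(np.fft.rfft(mixed))
    freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
    band = (freqs > 1_000) & (freqs < 100_000)   # look below 100 kHz, skip DC
    beat = freqs[band][np.argmax(spectrum[band])]
    print(f"recovered difference frequency: {beat:.0f} Hz")   # ~12,000 Hz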

At the processing station, the frequency is stepped down to standard 60 Hz power frequency, and the voltage is stepped down to a convenient high voltage transmission line voltage. To the transmission line leading to the power grid, the processing station appears as any other power station.

Since the Earth is spherical, the oscillation (90-degree) condition for the pumped phase conjugate Earth mirror exists at every point on the Earth's surface.

Therefore additional power stations can readily be added to extract additional power from the same standing r-wave in the Earth's core, without increasing the amount of power input at the activation transmitter site, and without adding any additional activation transmitters. All other transceiver sites will automatically be in oscillation condition for very high gain, so that great power can be extracted from each site. Only a very small transmitted input triad signal need be utilized at each of these additional Earth power extraction stations. Figure 8 shows the case for multiple-site energy extraction.

A single activation transmitter and a single triad transmission create a single channel (beat frequency) which can be tapped by a large number of separated power extraction stations. Together, the triad activation station, the single triad beat-frequency transmission channel, and the multiple power extraction stations on that channel constitute a single-channel power distribution system.

Multiple single channel power distribution systems may be established as desired, but one triad activation transmitter is required for each one (15).

Using these principles for portable power units and other systems

We now extend our discussion to point out that the vacuum itself is filled with powerful virtual energy fluxes, penetrating each and every point in space. According to modern physics, the particles in the nucleus of every atom of physical matter are in constant and violent flux exchange with the vacuum.

Accordingly, a proper nonlinear isotropic model of the Earth - using a mixture of materials such as normally comprise amorphous semiconductors - can conceivably be made to work in the same fashion, so as to extract energy directly from the vacuum flux (ether). In external appearance the basic "model Earth" may be simply a smoothly pressed pellet of material (16).

The "pumping" of the model may be accomplished by placing it under very strong static pressure, such as by special hydraulic means (17). An alternate method is to press and sinter the proper mix of nonlinear materials under very heavy pressure, so that locked-in stresses remain in the fine grains of the amorphous aggregate after the pressed material is removed from the press (18).

A signal input to the stressed "model Earth" medium may be made either electrically through an implanted electrode, by magnetic resonance if ceramic magnetic material is included in the stressed model, by sound, by pulsed infrared frequencies, or any other convenient method. In each case, an amplified phase conjugate replica of the input signal can conceivably be obtained, for the proper conditions and the proper resonant input frequency (19).

It is also conceivable that materials and assemblies can be found which will function to establish themselves as self-pumping PCM triodes in the presence of even small heat and pressure differentials in the local environment. If so, then tiny "triad usage" grid signals to the devices would directly "scavenge" the disordered heat energy and stress energy continually appearing in the environment. Such units would be negentropic, as previously pointed out.

The present author has performed consulting work on one device which utilizes sound to convert the human body to a PPCM-triode and applies "triad-usage" grid input sound signals to "scavenge" long-term, locked-in physical stress from the body, radiating the stress energy away as PCR sound energy. The device is safe and self-regulated, since only excess stress is scavenged and ejected. (Technical details cannot be given because of non-disclosure agreement.)

A great number of other applications for these new effects also exist. For example, there is no discernible reason why the internal strata and material composition of the Earth cannot be determined by analysis of the PCR spectra received from the SPPCM Earth in response to triad-usage stimuli. In this way the geolocation of scarce minerals, petroleum, etc. could be readily and accurately determined.

Use of Circular Polarized Transmission Waves

In addition to triad usage of the beat frequency, another means may be utilized to establish a standing scalar EM wave in the Earth. It has been shown that circularly polarized waves have standing-wave solutions in an isotropic nonlinear medium, while plane polarized waves do not (20). Thus circularly polarized energy offers a means of accomplishing the same end.

In the real Earth, the use of both the circularly polarized wave and triad usage is probably advisable, since the Earth medium is not precisely isotropic. Use of both methods simultaneously would appear to minimize the effects of the anisotropic Earth and yield the greatest obtainable efficiency.

Summary and Conclusions

Using a scalar EM approach, we have presented and discussed a means of producing or creating the following:

1. A standing scalar EM wave resonance in and through the Earth, through the medium of the atomic nuclei of the material comprising the Earth's sphere.

2. A special five-dimensional reciprocating wave where stress energy is oscillated canonically between the four spatial dimensions of Kaluza-Klein 5-space and the time dimension.

3. A special five-dimensional reciprocating wave where both mechanical and electrical energy are phase-locked together, and varied canonically with time oscillation.

4. A physical condition in the Earth so that the Earth itself acts as a self-pumped phase conjugate mirror, with very large self-pumping energy furnished by its internal pressure and heat.

5. A means of directly extracting controlled and variable amounts of electrical energy from the internal heat and pressure energy of the Earth, from one or more distant points, with minuscule input energy at each extraction point to initiate the extraction process.

6. A means of converting the Earth to a self-pumped phase conjugate mirror in a four wave mixing system, so that a very large power gain, much greater than unity, can be achieved in output energy and power versus input energy and power.

7. A mechanism underlying Newton's Third Law of Motion, and a means by which the Third Law can be manipulated to allow the production of a free energy device or a unilateral space translation force without the ejection of mass.

8. A mechanism underlying relativity, the curvature of space-time, and time dilation.

9. Conceptual adaptation of the techniques in principle to provide small, clean, portable power units.

10. A possible explanation for the results of such scientific giants as Nikola Tesla and T. Henry Moray: Tesla in powering the Earth with a single giant magnifying transmitter, and Moray in producing the first authenticated and rigorously demonstrated practical, portable "free-energy" device.

Up to now, scalar EM has attracted only scant attention from the scientific establishment in the Western world, even though the well-known "pump wave" in four-wave mixing is a scalar EM (electrogravitational) mechanism that makes possible the pumped phase conjugate mirror amplification of time-reversed EM waves.

Western scientists still have not grasped the great potential of scalar EM to utilize the phase conjugate replica, time-reversed waves, and vacuum structuring to achieve antigravity electromagnetically, engineer the nucleus of the atom in a controlled fashion, produce a unilateral thrust for propulsion without ejection of mass, directly tap and use the boundless energy of the universal vacuum, control and cure diseases electromagnetically, reverse the aging process, and rid the world of chemical, nuclear, electromagnetic and sonic pollution by our present industries and power systems (21).

Nature has been most kind. It appears that she has no immutable laws, for any natural law is subject to change if one can discover her higher methodology beyond the assumed limitations posed by that law.

Let us hasten to apply the new scalar electromagnetic principles to secure a fuller, healthier, more prosperous life for everyone, with liberty, justice, energy, transportation and health for all.

NOTES AND REFERENCES

1. In general relativity, time stress is particularly important in producing space-time curvature, since time is "denser" than length by a factor of c. We desire to form a local curvature of space-time in such a manner as to provide a local source of EM energy. Thus it is imperative to produce a wave of stress in and on time.

2. Since the Earth's internal pressure can furnish energy into (i.e., pump) the mirror during the mechanical (3-space) stress phase of the standing wave, and the zero-multiplied (modulated) zero-vector resultant heat energy components can pump the mirror during the EM (5th-dimensional) stress phase of the standing wave. For a comprehensive introduction to standard 4-space phase conjugation and pumped PCMs, see David M. Pepper, "Nonlinear Optical Phase Conjugation," Optical Engineering, 21 (2), Mar./Apr. 1982, pp. 156-183. See also B. Ya. Zeldovich et al., Principles of Phase Conjugation, vol. 42, Springer Series in Optical Sciences, Theodor Tamir, Ed., Springer-Verlag, New York, 1985. See also Amnon Yariv, Optical Electronics, Third Edition, Holt, Rinehart and Winston, New York, 1985, particularly Chapter 16: "Phase Conjugate Optics - Theory and Applications." For examples of laboratory-demonstrated self-pumping, see J.O. White et al., "Coherent Oscillation by Self-Induced Gratings in the Photorefractive Crystal Bi12SiO20 (BSO)," Appl. Phys. Lett., vol. 5, 1980, p. 102; Mary J. Miller et al., "Time Response of a Cerium-Doped Sr0.75Ba0.25Nb2O6 Self-Pumped Phase-Conjugate Mirror," Opt. Lett., 12 (5), May 1987, pp. 340-342; Mark Cronin-Golomb et al., "Passive (Self-Pumped) Phase Conjugate Mirror: Theoretical and Experimental Investigation," Appl. Phys. Lett., 41 (8), Oct. 15, 1982, pp. 689-691.

3. For an introduction to 4-wave mixing theory, see Pepper, ibid.; Zeldovich, ibid.; Yariv, ibid.

4. Per 4-wave mixing theory analogy, the distant transmitter inputs wave A4. The stress energy of the Earth inputs opposing waves A1 and A2, which collectively are referred to as the "pump wave". The nonlinear medium then produces a powerful A3 phase conjugate replica wave, which in a perfect case could contain as much power as is contained in pump waves A1 and A2. In the real Earth there is an efficiency factor for the process which must be determined by experiment.

5. This assumes that the self-pumping effect is attained successfully. The present author believes that conversion of the Earth to a self-pumped phase conjugate mirror was the secret of Nikola Tesla's magnifying transmitter.

6. In and around May Day, 1985 the Soviet Union conducted a giant, world-wide strategic exercise, powered by multiple such power extraction taps into the Earth. At the height of the exercise Frank Golden - who appears to be the only Westerner who presently can make such scalar EM measurements - measured and placed on the oscilloscope a total of 27 such power channels, each utilizing a pair of frequencies separated by 12 kHz. The author still recalls the feeling of absolute awe that came over him when he realized that beneath his feet, the entire Earth was in entrained giant electrogravitational resonance on 54 different frequencies, captured and held tightly in the grasp of the Soviet Union.

7. For an introduction to the theory of time-reversed EM waves, optical conjugation and pumped phase conjugate mirrors, see Pepper, ibid.; Zeldovich, ibid. See also Robert A. Fisher, Editor, "Optical Phase Conjugation", Academic Press, New York, 1983. For a more general discussion of the importance of time reversal to quantum mechanics and other aspects of modern physics, see Robert G. Sachs, "The Physics of Time Reversal", University of Chicago Press, Chicago, 1987.

Other interpretations of time-reversed electromagnetics are also possible. For an interesting and quite different adjunct type of time-reversed EM - one similar to the thinking of the present author - see Shiuji Inomata, "Consciousness and Complex Electromagnetic Fields", Electrotechnical Laboratory, 5-4-1 Mukodai-cho, Tanashi-City, Tokyo, Japan, 1976. Inomata adds a complex magnetic field to Maxwell's equations, restoring their symmetry. He does this by introducing an imaginary magnetic current and an imaginary magnetic charge. Both the electric and magnetic fields now are complex; each is the sum of a real component and an imaginary component. An equation then results for the new imaginary or "shadow" EM in which: (1) the roles of electricity and magnetism are inverted, (2) time is reversed, (3) every electronic charge is accompanied by a shadow monopole, (4) every electronic current coexists with a shadow magnetic current, (5) for the shadow current, everything is transparent, (6) negative energy is fed from an infinity point as an "advanced wave" in acausal and "action-at-a-distance" fashion, and (7) by using shadow electromagnetics and negative energy, "free energy" and negentropic devices are possible.

It should also be realized that the real nature of a photon vis-à-vis an EM wave has not been resolved in physics. A view in agreement with my own - that, for a monochromatic EM wave, a single wave cycle constitutes a photon - is presented by Robert L. Wadlinger, "What Does E = hν Mean?", Speculations in Science and Technology, 1 (5), 1978, pp. 469-476. See also W.M. Honig, Found. Phys. 4, 1974, pp. 367-380; Found. Phys. 6, 1976, pp. 37-57 and pp. 46-49; Int. Jour. Theor. Phys. 15, 1976, pp. 673-676. The concept of the single cycle as the simplest photon may be closely related to the solitary wave (soliton) concept introduced by Scott Russell in 1844; see A.C. Scott et al., Proc. IEEE 61, 1973, pp. 1443-1482. Such solitons exist as discontinuous solutions of many of the wave equations in physics. I personally view a modulated wave and a complex wavefront as having created a "giant photon" of complex shape, containing a specific substructure. From the quantum mechanical idea that a photon is a virtual electron/positron pair, I also view one part of the giant photon as carrying or containing positive-charge/negative-time, and the other part of the giant photon as carrying or containing negative-charge/positive-time. In this view, I consider the photon in vacuum as a deterministic structure in the vacuum's EM potentials, and refer to such structured photons as "vacuum engines". The "coupling together by modulation" of such a giant photon with its phase conjugate replica provides a "scalar vacuum engine," one which converts all its EM stress energy into gravitational stress of time. It is thus a "time-stress engine" of purely electrogravitational nature.

The scalar vacuum engine contains a stabilized, dynamic, deterministic pattern impressed in and on the local curvature of space-time. This type of vacuum engine - even one of minuscule power - passes through the electron shells of an atom and interacts directly within the atomic nucleus. Continual irradiation of the atomic nucleus with a specifically patterned scalar vacuum engine gradually "charges up" the nuclear potential with that specific engine's charge pattern or substructure. By producing the desired potential substructure in the atomic nucleus, the structure of the nucleus itself can be altered and directly "engineered" in controlled fashion, rather than just being bluntly struck with a particle hammer as in normal high-energy physics.

8. This point has previously been well-covered by the author. For example, see Bearden, "Extraordinary Physics", AIDS: Biological Warfare, Tesla Book Co., POB 1649, Greenville, Texas 75401, 1988, p. 74-203.

9. Regard "closure of a system" as essentially a statement of a locally flat space-time, and a linear statement of Newton's Third Law. On the other hand, if the local space-time is curved, then Newton's Linear Third Law need not apply because in this case the system is opened and can exhibit "hidden source" and "hidden sink" effects. If we control the local curvature and vacuum structuring, we can control and change the magnitude and direction of Newton's Third Law reaction force. Creation and use of structured potentials with macroscopic deterministic substructures in the local vacuum is one means of achieving local, deterministic space-time curvature with directed macroscopic source or sink currents - in effect, this process produces usable "Maxwell's Demons".

For a special kind of "Maxwell's Demon", see Cynthia Kolb Whitney, "Field-to-matter energy transfer", 1988 (to be published). Quoting: "Physicists have always believed that classical field-matter energy exchange is strictly one-way, fields receiving energy and matter losing it, via radiation. That belief is consistent with the accepted formulation for potentials created by relativistically moving sources. But that formulation has recently been shown to embed an error. Correction of the error allows reverse energy transfer, from fields to matter. Though previously unexpected, this mechanism becomes credible by offering a candidate explanation for certain otherwise mysterious natural phenomena. The mechanism behind the reverse energy transfer is relativistic torquing within any interacting multi-body system. The existence of relativistic torquing invites human intervention, to induce controlled energy transfer that can be tapped for human purposes such as propulsion. The design of an engineering system to demonstrate such a function on a laboratory scale is discussed."

For a deeper insight into this fundamental EM mistake that Whitney has discovered - one that has long been made and perpetuated in relativistic potential theory - see Cynthia Kolb Whitney, "Manifest Covariance in Relativistic Potential Theory", Physics Essays, 1(1), 1988, p. 18-19; "Generalized Functions in Relativistic Potential Theory", Hadronic J., 10, 1987, p. 91-93; Whitney, "Electromagnetic Fields Near Dynamic Systems of Charged Particles", Hadronic Journal, 10, 1987, p. 299-301.

For an important exposition of the importance of zero-point vacuum energy fluctuations and the interaction potential to a naturally-arising "already-unified" field theory, see H.E. Puthoff, "Zero-point Fluctuations of the Vacuum as the Source of Atomic Stability and the Gravitational Interaction", Proceedings, British Society of Science International Conference "Physical Interpretations of Relativity Theory", Imperial College, London, Sept. 1988. Puthoff puts it plainly: "Whether addressed simply in terms of Newton's Law, or with the full rigor of general relativity, gravitational theory is basically descriptive in nature, without revealing the underlying dynamics for that description. As a result, attempts to unify gravity with the other forces (electromagnetic, strong and weak nuclear forces), or to develop a quantum theory of gravity, have foundered again and again on difficulties that can be traced back to a lack of understanding at the fundamental level." Puthoff's paper now points the way directly to the understanding of gravitation at the fundamental level.

My own additional comment is that macroscopically structuring the local vacuum potential by use of scalar vacuum engines (as developed in Note 7 above) accomplishes limited directional structuring of zero-point vacuum fluctuations. By this means the nucleus of the atom can easily be reached and engineered, with milliwatts or even microwatts of input electromagnetic power. By using scalar vacuum engines, one can directly perform electrogravitational engineering. The results may be dramatic, particularly when higher power levels are utilized. For example, John Hutchison of Vancouver utilizes two separated, opposing, violently discharging Tesla coils to provide a 4-wave mixing "scalar pump wave" into a target material or object geometrically between the coils. When conditions are just right, the powerful scalar vacuum engines deposited into the nuclei of the target object create nuclear potentials that have an excess of negative time flux. Since in negative time gravitation is a repulsive force, levitation of macroscopic objects - some weighing over 60 pounds - has been achieved. The major barrier to the experiments is that they presently use relatively uncontrolled frequencies. Antigravity (production of excess negative time flux in the atomic nucleus) is largely an extremely low frequency effect, since the lower the energy increment that a photon carries, the greater the time increment it carries. All that is necessary to produce controlled antigravity is to produce a pumped phase conjugate mirror at under, say, 400 Hz. In that case a few hundred watts of pump power is sufficient to levitate one pound of mass. On the other hand, the effect cannot even be detected at optical frequencies, since the energy increment of an optical photon is extremely large and its time increment is extremely small. Pump waves at optical frequencies produce so little excess negative time flow in the atomic nuclei of the pumped phase conjugate mirror that the antigravity effect is essentially immeasurable.

10. This follows immediately since the electrogravitational standing wave represents a stabilized, standing oscillation in the local curvature of space-time. Einstein kept his general relativity in accord with conservation laws only by imposing the severe restriction that local space-time was never curved. For proof that in a curved space-time all conservation laws can be and are violated, see V.I. Denisov and A.A. Logunov, "The Inertial Mass Defined in the General Theory of Relativity Has No Physical Meaning", Teor. i. Matemat. Fizika, 51 (2), May 1982, p. 163-170 (in Russian); A.A. Vlasov and V.I. Denisov, "Einstein's Formula For Gravitational Radiation is Not a Consequence of the General Theory of Relativity", Teor. i. Matemat. Fizika, 53 (3), Dec. 1982, p. 406-418; V.I. Denisov and A.A. Logunov, "New Theory of Space-time and Gravitation", Teor. i. Matemat. Fizika, 50 (1), July 1982, pp. 3-76. Quoting Denisov: "... in general relativity there are no energy-momentum conservation laws for a system consisting of matter and the gravitational field... The gravitational field in general relativity is completely different from other physical fields, and is not a field in the spirit of Faraday and Maxwell."

Whitney has also identified a completely new process for accelerated systems: the reverse transfer of energy from field to matter, using the mechanism of relativistic torquing within any interacting multi-body system. Since this mechanism moves energy from fields to particle orbits (Whitney, "Equations of Motion for the Gravitational Two-Body Problem", Hadronic J., in publication), controlled intervention by humans is suggested - for example, alternating between a relativistic torquing state, to excite orbital particles from the background field, and a non-torquing state, in which the excited orbital particle decays, releasing ordinary energy externally for use. See Cynthia Kolb Whitney, "Field-to-Matter Energy Transfer". The combined dual process might provide a sort of "one-way gate valve" to gate energy from background fields to external loads - a special kind of "Maxwell's demon".

Also, nonlinear effects can result in increased localized absorption effects in small particulate matter. The "extra energy density" is extracted from localized regions of virtual particle flux (vacuum charge), and the vacuum simply replenishes itself from regions outside that locality. See H. Paul and R. Fischer, "Comment on 'How Can a Particle Absorb More Than the Light Incident on It?"', American Journal of Physics, 51 (4), Apr. 1983, p. 327.

11. A local space-time curvature in one direction yields a hidden local source in the vacuum itself (curvature in the opposite sense would yield a hidden sink). This vacuum energy hidden source can conceivably be tapped to produce useful energy. If so, since the disintegrated energy of vacuum would be the source, such a device would most probably produce negative energy consisting of phase-conjugated or time-reversed EM waves. (See C.W. Rietdijk, "How Do 'Virtual' Photons and Mesons Transmit Forces Between Charged Particles and Nucleons?", Foundations of Physics, 7 (5-6), June 1977, p. 351-374. As pointed out by Rietdijk, virtual photons and mesons transmitting coulomb and nuclear forces do not arise from temporary violations of energy conservation, as commonly accepted. They involve negative energy transmission, and traveling backward in time.) Circuits in purported prototype "vacuum energy tapping" devices (which in their mechanisms ultimately deal with extraction from the atomic nucleus) built by Moray, Bedini, Nelson et al., are reputed to "run cool", suggestive of a negative energy effect.

Indeed, the present author's own theory of ball lightning is that counterbalancing negative and positive energy radiation interactions are simultaneously and abruptly produced by enmeshed, multiple 4-wave mixing in a local region of air in a lightning strike or potential discharge. Multiple "EM stress waves" caused by multiple forks, etc. of the lightning strike/discharge are generated in a local nonlinear region of the air, to form a swirling plasma. When formative conditions are just so, this highly nonlinear plasma together with the "EM stress waves" becomes a pumped phase conjugate mirror ensemble. Other external input EM forks from all directions into each part of this mirror ensemble result in the release of phase conjugate (time reversed) replicas from each portion (submirror), radially outbound and containing negative energy. The outbound negative energy in turn constitutes inputs to the pumped phase conjugate mirror plasma and to each contacted submirror, generating time-reversed, amplified radially inbound energy. Reverberation and adjustment back and forth between the two modes results in an equilibrium between outbound (negative) and inbound (positive) energy states. When the configuration is disrupted sufficiently by anisotropic leakage or physical contact, the mirror ensemble effect is ruptured, and the ball/mirror ensemble explosively dissipates its entrapped energy.

12. See Pepper, ibid.; Yariv, ibid.; Fisher, ibid.; Zeldovich, ibid., for discussions of 4-wave mixing, pumped phase conjugate mirrors, and nonlinear power reflection coefficients (i.e., gain).

13. See Amnon Yariv, ibid., p. 510 for a plot showing the nonlinear power reflection coefficient's rise to infinite gain at the oscillation condition. It is stressed that "infinite gain" means that the device can extract and deliver in the PCR up to all the available energy with which it is being pumped (assuming a 100% efficient device). A 100% efficient device literally "scavenges up, orders, and transmits out" all available pumping energy. That is, in one sense it can scavenge out and radiate away all stress encountered - even if the stress's force components are disordered - if that stress acts as a pump.

Note especially that this effect offers a general and universal mechanism to freely tap energy from any source of continuing stress, whether the stress is due to magnetic force fields, magnetostatics, hydraulic pressure, mechanical stress, gas or fluid pressure, electrical force fields, electrostatics, etc. Quantum mechanically, all stress components are (at base level) electromagnetic in nature, even though the basic stress is due to virtual particle (e.g., virtual photon) exchange. Each of these virtual particle streams is fluctuating and dithering, and so can act as a "pump wave" to the proper PCM material. The stress source (e.g., opposing north poles of a permanent magnet) and the proper nonlinear PCM material will constitute a PPCM. All that is necessary is: (1) find a material and conditions for the oscillation condition, so that over-unity gain can be realized; (2) furnish the small "grid" input (input A4 in standard 4-wave mixing theory); (3) receive, collect and distribute to the load the concomitant amplified PCR energy. From this viewpoint, Tesla's magnifying transmitter concept actually utilizes a universal "free-energy" mechanism, and one where the existence of the basic "self-pumping" and "self-organizing" phenomena is now well-established in nonlinear optics.

14. Owen Flynn, "Parametric Arrays: A New Concept For Sonar", Electronic Warfare Magazine, June 1977, p. 107-112. Any two sine-wave frequencies as simultaneous drivers combine to produce a sine-wave difference frequency propagating in water, essentially without sidebands or reverberations. Its pattern has a main lobe approximately equal to that of the high frequency drive, but devoid of sidelobes. The level of the propagating difference frequency is proportional to both the product of the two fundamental drive levels and to the square of the desired value of difference frequency. An experimental parametric array was built by Westinghouse Electric Corp., Baltimore, for the Naval Underwater Systems Center, New London, Connecticut. Though derived for ultrasonic waves in an oceanic medium, the mathematics is general. It applies to the wave equation and hence to electromagnetic waves as well.
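
A minimal numerical sketch (my own, not Flynn's; the constant k stands in for the unspecified medium and geometry factors) illustrates the scaling law just stated:

    # Scaling of the parametric difference-frequency level: proportional to
    # the product of the two drive levels and to the square of the
    # difference frequency. k is a placeholder constant, not from Flynn.
    def difference_level(p1: float, p2: float, f1_hz: float, f2_hz: float,
                         k: float = 1.0) -> float:
        """Relative level of the propagating difference frequency."""
        f_diff = abs(f1_hz - f2_hz)
        return k * p1 * p2 * f_diff ** 2

    # Doubling the difference frequency at fixed drive quadruples the level:
    print(difference_level(1.0, 1.0, 100e3, 110e3))   # 1e8 * k
    print(difference_level(1.0, 1.0, 100e3, 120e3))   # 4e8 * k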

15. Apparently the Soviets have already developed and tested such multiple-channel "Earth power" systems, and used them to power a series of giant strategic scalar EM weapons. See note 6, above. Powering such weapon complexes from local power taps into the Earth itself would provide a major advantage in case of nuclear war, where the wholesale disruption of normal power systems by, say, giant electromagnetic pulses from high-altitude nuclear bursts of large yield, is almost a certainty.

16. Significantly, T.H. Moray used just such internally stressed pellets of nonlinear material in his well-documented energy device which reportedly produced 50,000 watts of power from a 55-pound device. He also incorporated a radioactive material so that a convenient "decay channel" existed in the virtual (potential) state of the vacuum flux, as a sort of "one-way railroad" from the nucleus to the external, macroscopic world outside the atom. See T.H. Moray, "The Sea of Energy", 5th Edition, History and biography by John E. Moray. Foreword by Thomas E. Bearden, Cosray Research Institute, 2505 South 4th East, Salt Lake City, Utah 84115, 1978 for details of the Moray device and its testing.

17. On one occasion known to the present author, a company engaged in pressing pellets of nonlinear material (with ingredients similar to the Moray pellet constituents) to very high pressure (i.e., to pressures of artificial diamond magnitude) encountered a very strange accident. During the intense part of the pressing operation, a giant, brilliant flash suddenly erupted from the mold, and a surge of thousands of amperes from the pellet materials destroyed the electrical system of the press and other nearby equipment.

18. T.H. Moray pressed his pellets in large railroad presses, achieving as much locked-in stress in his powdered pellet materials as possible. A sort of sintering process also seems to have been utilized by Moray, to "lock-in" stresses in the grains of his pellets.

19. Note that at radar frequencies, radar-absorbing materials vastly slow the travel of an entering EM radar wave. With special nonlinear RAM-type materials and multi-beam radiation, therefore, standing scalar EM waves can be obtained in materials of small size, simply by applying the pump concept. Theoretically, portable power units using the principles stated in this present paper can be small, and need not involve unduly large and bulky equipment.

We also accent that, at the quantum level, often a particle may already absorb more energy than is incident upon it, simply by taking extra energy from the local vacuum flux (which, in our opinion, acts as a "pump" for the charged particle in this case, converting the particle into a PPCM). For example, see Craig F. Bohren, "How can a particle absorb more than the light incident on it?", Am. J. Physics, 51 (4), Apr. 1983, p. 323-7. Also: H. Paul and R. Fischer, "Comments on 'How can a particle... ', Am. J. Phys., 51 (4), p. 327. Note that changes in effective absorption cross sections may actually be the result of the changing degree of operation of the particle/field or the macroscopic object/field as a PPCM.

20. The circularly polarized standing wave closed solution in isotropic nonlinear media is well-known in Soviet literature (see A. Ya. Terletskii, "Some exact wave solutions of nonlinear electromagnetic field equations", Dokl. Akad. Nauk SSSR, 19 (6), 1974, p. 344-5. The standing wave solutions of circularly polarized waves in an isotropic nonlinear medium are non-sinusoidal in form).

21. For possibilities - good and bad - posed by scalar EM and TR wave applications, see Bearden, "Fer-de-Lance", Tesla Book Co., 1986; "AIDS: Biological Warfare", Tesla Book Co., 1988; and "Soviet Phase Conjugate Weapons: Weapons that use time-reversed electromagnetic waves", Bulletin, CRCC, Ft. Collins, CO, Jan. 1988.

The distribution of electrical power by means of terrestrial cavity resonator modes

James F. Corum, Ph.D *
Associate Professor
Department of Electrical Engineering
West Virginia University
MORGANTOWN West Virginia 26506-6101
United States of America

* now:

Senior Research Scientist
Battelle Memorial Institute
505 King Avenue
COLUMBUS, Ohio 43201-2693
United States of America

The Earth-Ionosphere cavity can be used as a means to distribute electrical energy for industrial purposes at extremely low frequencies. The technology which will permit the wireless distribution of electrical power to or from remote geographical regions is now available for research and development.

It is advanced that the Earth-Ionosphere cavity possesses electrical properties which are appropriate for the wireless distribution of electrical energy to any point on Earth on an industrial scale. Such a remarkable proposition, though seemingly a fanciful concept, is actually no more profound than the notion advanced in the early 1950's to use a cavity resonator to wirelessly distribute microwave power to process food, i.e. - a microwave oven.

Electrical energy at the appropriate frequency may be introduced at one point in a cavity resonator and efficiently collected at another by devices tuned to the same frequency. The resonator itself serves as a two port reactive distribution system. The ELF (extremely low frequency) resonator formed by the cavity between the Earth and the lower E-region of the ionosphere is a natural resource that will actually permit the terrestrial distribution of electrical power across a continent, without the necessity of an interconnecting land-line grid of high tension transmission lines.

History

From an historical standpoint, it is significant that Nikola Tesla long ago envisaged such a global power distribution system. A flag of caution should be raised here. It has been common in the past to discard Tesla's far-sighted vision as baseless. We believe that such depreciation has stemmed from critics who were, in fact, uninformed as to Tesla's techniques, measurements and physical observations. After reviewing Tesla's technical disclosures, it is our considered judgement that not only is industrial scale power transmission practical, but that Tesla's actual data is consistent with the very best experimental data available today. It could only have been obtained as a result of authentic terrestrial resonance and power transmission measurements.

Tesla proposed that the Earth itself could be set into a resonant mode at frequencies on the order of 7.5 Hz. From his notes, his private correspondence, his diary and his patent disclosures, it is clear that Tesla's physical explanation and interpretations were erroneous. However, as is often the case, significant explorations or inventions have been made on the basis of faulty physical concepts. Experiments were done, demonstrations performed and data taken. Let us review some of the history behind this early research.

Tesla's ELF enterprise

In May of 1899, Tesla arrived at Colorado Springs, Colorado with $100,000. This is the same Tesla whose patented AC power system, purchased by the Westinghouse Corporation, was selected and installed in the original Niagara Falls electric power project of the 1890's. It is remarkable that almost a century later most of the civilized world still employs a power generation and distribution system in virtually the same form as his early disclosures.

Within three months of his arrival at Colorado Springs, he and his associates constructed a laboratory which housed a prodigious RF signal generator. The primary and secondary were wound on a circular fence 51 feet in diameter and had an input power in excess of 250 KW provided by the Colorado Springs Electric Power Company. The secondary was used to drive a helical resonator, or extra coil, 10 feet high, wound with 100 turns of #6 gauge wire on a coil form about 8 feet in diameter. Emanating from the midst of the extra coil was a tower about 150 feet high, capped with a copper sphere 3 feet in diameter. The resonant frequencies of the driving transmitter have been variously reported as between 50 KHz and 150 KHz. This transmitter, we believe, was used as one component in a recently uncovered process to produce significant currents in the vertical tower and its attachments at pulse frequencies of 7.5 Hz to 15 KHz.

A very colourful account of the first time Tesla fired up his equipment is given in O'Neill's now classic, though somewhat unreliable, biography. Tesla, on various occasions actually said that he had created sparks 150 feet in length. His experiments in Colorado Springs lasted nine months and cost in excess of $200,000.

Tesla returned to New York on January 21, 1900 and soon received the financial backing of J.P. Morgan, Thomas Fortune Ryan, John Jacob Astor and others. His patent application of January 18, 1902 reveals his intention to construct a massive Tesla coil driven generator for global power distribution.

The installation was subsequently constructed at Wardenclyffe, Long Island in 1902. The tower was 154 feet tall and the cap sphere was 50 feet in diameter. It was never completed, however, and was destroyed during World War I. Similar towers were to have been built at Niagara Falls, in Australia and in Europe. Tesla, however, had to abandon the Wardenclyffe project when his financial backers withdrew their support.

Physical Operation

Tesla had proposed that the Earth itself could be set into resonant electrical oscillations, which he experimentally determined to lie no lower than 6 Hz and no greater than 20 KHz. He claimed to have resonated the Earth in this frequency range by using a huge spark gap transmitter energized by the standard secondary of his monstrous Tesla Coil. His patent application of February 19, 1900, entitled "Apparatus for Transmission of Electrical Energy", is probably the closest description available of the equipment used at Colorado Springs the previous summer. Assuming Tesla's claimed demonstration of distant power transmission without wires as a working hypothesis, a plausible physical explanation is that the discharges from the electrode at the top of his giant tower would have had significant spectral components at the Schumann resonance frequencies and would have excited a standing wave mode in the Earth-Ionosphere cavity. These physical issues have been addressed in recently presented technical publications. The overwhelming documented technical evidence clearly substantiates the above position.

Schumann Resonances

In 1952, the German physicist W. O. Schumann recognized the possibility that a somewhat unusual example of a resonant cavity might be provided by the Earth itself as one boundary surface, and the ionosphere as the other. These two concentric spheres could then form the boundaries of a resonant electromagnetic cavity. (Sea water has a conductivity of 4 Siemens/meter while the ionosphere has an effective conductivity on the order of 1 milli-Siemen/m. Evidently, the structure can easily support damped oscillations.)

Determination of the cavity resonant frequencies follows from a solution of Maxwell's Equations subject to the given boundary conditions. At extremely low frequencies (ELF), where the wavelength is large compared to the effective height of the ionosphere, the electric field is essentially radial, and its amplitude distribution varies as the cosine of the polar angle measured from the position of the source antenna. Amplitude distributions for the first and second modes of oscillation of the Schumann cavity are as shown in Figure 1, when the Earth-Ionosphere cavity is excited by a source which launches vertically polarized electromagnetic waves from the North Pole.
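
As a numerical aside (my own sketch, not part of the solution outlined above), the idealized resonant frequencies of a lossless thin spherical shell follow the textbook formula f_n = c/(2πa)·√(n(n+1)), where a is the Earth's radius; real, lossy boundaries pull the observed peaks of Figure 2 below these ideal values:

    # Ideal (lossless-cavity) Schumann mode frequencies; observed peaks
    # (about 8, 14, 20 and 26 Hz in Figure 2) are lower because the real
    # Earth and ionosphere boundaries are lossy.
    import math

    C = 2.998e8             # free-space speed of light, m/s
    EARTH_RADIUS = 6.371e6  # mean Earth radius, m

    def ideal_schumann_frequency(n: int) -> float:
        """Ideal frequency of the n-th Schumann mode, in Hz."""
        return C / (2 * math.pi * EARTH_RADIUS) * math.sqrt(n * (n + 1))

    for n in range(1, 5):
        print(f"mode {n}: ideal {ideal_schumann_frequency(n):5.1f} Hz")
    # mode 1: ideal 10.6 Hz (observed ~8 Hz, lowered by cavity losses)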

Measured Electrical Properties

There are a variety of electrical properties of the Earth-Ionosphere cavity which have been experimentally determined over the past twenty years and are now well documented in scientific literature.

(a) Spectral Response

The resonant frequencies of the cavity have been predicted and observed. One would expect natural phenomena to excite cavity oscillations. This, indeed, does happen. The cavity is set into oscillation by solar flares, for example. But, by far the dominant natural phenomenon exciting cavity resonances is thunderstorm activity occurring world-wide. The power density spectrum of a lightning stroke is very broad, containing a wide band of frequencies. Electrically, the Earth-Ionosphere cavity behaves like a multiply tuned LC network driven by an impulse generator, and oscillations are excited at the natural resonant frequencies of the network. Thunderstorm activity is more or less continuously present on Earth, with the main centers of activity being Southeast Asia, the Congo and the Amazon Basin. Consequently, experimental measurements of the atmospheric noise power density spectrum would be expected to reveal peaks at the cavity resonant frequencies, should Schumann's hypothesis be correct. Figure 2 is a typical measurement of the atmospheric noise spectral density vs. frequency. The first few cavity resonances reported above are quite evident. This is how the measured values were determined.

These sorts of measurements have been reported by many observers over the past 20 years. The spectra are frequently skewed about the center frequency and may undergo variations up to about 1 Hz in periods on the order of a minute or so.


Figure 1. Radial electric field Er and azimuthal magnetic field intensity Hφ, for the first two cavity resonator modes of the Earth-Ionosphere shell.


Figure 2. Typical spectrum of cavity noise. Prominent Schumann resonances at 8, 14, 20 and 26 Hz are visible. Peaks at 32, 37 and 43 Hz are apparent.

(b) Cavity Q

An important practical question associated with the Earth-Ionosphere cavity is its ability to store or contain energy without dissipating it by heating up the Earth or the ionosphere boundaries. In electrical and microwave circuit theory, a quantity called the Q of the resonant cavity is determined as a ratio between the stored energy W and the energy loss per cycle in the cavity,

    Q = \omega_0 W / P_d

where P_d is the power dissipated and

    \omega_0 = 2 \pi f_0

is the resonant angular frequency assuming no losses. The Earth-Ionosphere cavity Q has been measured, and documented experimental data places it in the range between 3.8 and 7.8.
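
To give the quoted Q range some practical feel, a small sketch (my own, assuming the standard half-power relation Δf = f₀/Q) estimates how broad the fundamental resonance is:

    # Half-power bandwidth implied by the measured Q range at the ~7.8 Hz
    # fundamental. Assumption: the usual single-resonator relation applies.
    def half_power_bandwidth(f0_hz: float, q: float) -> float:
        """Half-power bandwidth of a resonance at f0_hz with quality q."""
        return f0_hz / q

    for q in (3.8, 7.8):
        print(f"Q = {q}: bandwidth ~ {half_power_bandwidth(7.8, q):.1f} Hz")
    # Q = 3.8 -> ~2.1 Hz; Q = 7.8 -> ~1.0 Hz: a broad, heavily damped resonator.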

(c) Propagation Attenuation Constant

While the above Q is relatively low for a tuned circuit, it does indicate that the waveguide propagation losses are surprisingly small. For electromagnetic propagation on a transmission line or in a waveguide, a forward traveling wave attenuates as

    e^{-\alpha z}

where

    \alpha

is the attenuation constant in nepers per meter. The measured value of the attenuation constant for ELF waves propagating in the Earth-Ionosphere cavity has been experimentally determined to be on the order of one quarter of a dB per thousand kilometers. By way of comparison, single circuit 200 KV 60 Hz overhead power transmission lines have attenuation constants on the order of 1.15 dB per thousand kilometers. Per unit distance, the experimentally established transmission losses for the Schumann cavity are thus only about one-fifth of those for conventional power transmission lines.
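
As a rough numerical illustration (my own arithmetic, assuming simple exponential decay at the quoted dB figures), the following sketch compares the power surviving a 10 megameter path in the two media:

    # Fraction of power left after d_mm megameters at alpha_db dB/Mm.
    def surviving_fraction(alpha_db: float, d_mm: float) -> float:
        """Power fraction remaining after d_mm Mm at alpha_db dB/Mm."""
        return 10.0 ** (-alpha_db * d_mm / 10.0)

    # Quarter-dB/Mm Schumann cavity vs. 1.15 dB/Mm overhead line, over 10 Mm:
    print(surviving_fraction(0.25, 10.0))   # ~0.56 -> about 56% remains
    print(surviving_fraction(1.15, 10.0))   # ~0.07 -> about 7% remains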

The issue of the practicality of the proposed distribution system does not rest upon the efficiency of the transmission medium. Rather, the technical issue to be faced concerns the electromagnetic coupling mechanism to be used. This issue, we believe, was addressed by Tesla, and the experimental results which he disclosed testify to his conspicuous success.

The Earth-Ionosphere cavity is, indeed, capable of being artificially excited into oscillation and the cavity can be employed as a medium for the global distribution of electrical power.

What is required is the creation of a practical engineering capability to efficiently launch electrical power into the cavity and to couple energy from the cavity.

It is absolutely astonishing that Tesla's public disclosures and technical publications match the electrical properties of the Schumann Cavity fifty years before there was even a theoretical model to predict rough values.

The conclusion should be obvious: Tesla could have only obtained these numbers by successfully stimulating the cavity. Tesla had to have solved the problem of launching energy into the Earth-Ionosphere waveguide and coupling energy from the cavity. We believe that the technical aspects of his apparatus have been sufficiently disclosed, in his patents, to be able to replicate his cavity stimulation and power transmission experiments. This experimental investigation should be carried out immediately. Clearly, whoever executes a sound and careful program of research along these lines will develop a technology capable of distributing electrical energy on a vast scale without the necessity of a land-line network.

It is evident that we are advocating one of the most visionary energy distribution systems ever conceived. And yet we maintain that it is technically sound and can be swiftly inaugurated at a fraction of the capital investment required by the only other alternative electrical power distribution system - high voltage overhead power transmission lines.

Tesla was aware of this and could clearly see through to the logical conclusion. When he returned to New York City in 1900, he wrote:

"Men could settle down everywhere, fertilize and irrigate the soil with little effort, and convert barren deserts into gardens, and thus the entire globe could be transformed and made fitter abode for mankind."

This program will inevitably have an even broader impact upon all the civilized world. The electrical power industry will experience a major innovation. The global economics of today, so dominated by petroleum, would be transformed overnight to reflect the importance of those nations which are happily endowed with natural resources appropriate for the generation of electrical energy.

Such research will not only revolutionize the areas of energy, transportation, agriculture and commerce, but, in all probability, could even inspire significant alterations in the present structure of world governing bodies. We are referring to the consequences initiated by a global diffusion of energy. International society could perhaps be on the verge of a metamorphosis comparable in magnitude to the great agitation, evolution and achievement which so characterized the European Renaissance and the forward progress of civilization to which it gave birth.

During the last century, natural science seemed, for all intents and purposes, to have reached its maturity. From our vantage point today, that period is called "the Golden Age of Classical Physics". Yet, almost a hundred years ago, remarkable discoveries began to be made which would engender profound modifications of classical physics. It was the experimental science of the 1890's which would soon give birth to what, today, we call modern physics. It was a renaissance no less than the transition which had occurred several centuries earlier in art, literature and natural philosophy.

It has been observed that, standing on the threshold of the 1890's, only a writer of science fiction could have dreamed of the revolution in physical thought which was to occur over the next few years. And even the poets and writers of that day were unable to grasp the impact which the new science would soon have on industry, the military and the political life of the entire planet, an impact which we have observed during the twentieth century.

Today, we stand on a similar threshold. But now it is technology which is experiencing such radical growth. We submit that the "high tech" society which we enjoy today may be but a destitute and primitive shadow of the flourishing civilization which could soon emerge across the threshold of the 21st century.

The power distribution system which we are proposing will surely require careful and considered investigation. There are no simple engineering answers.

Engineering has been called "that profession which utilizes the resources of the Earth for the benefit of mankind". We are proposing the initial step in what eventually will be an engineering project the scale of which civilization has never endeavored to attain before. But never since the days of Columbus, could so much be gotten for so small a financial investment. Never before in recorded history has it been within the grasp of the technical community to so dynamically influence the advancement of civilization.

The engineering challenge

There is a need for a practical waveguide probe capable of exciting the Earth-Ionosphere cavity at 8 Hz, where the wavelength is about 37.5 million meters. Poor radiation efficiency and physical size limitations for such probes in previously known technology have been overcome with our inventions, patented in the U.S. and other nations (Nos. 4,751,515 and 4,622,558). With these, a contrawound structure waveguide probe of reasonable size can be built which can excite the Earth-Ionosphere cavity. It employs the earth as an image current source, has a maximum dimension of 0.001 free space wavelengths (with much smaller sizes possible), and is designed to launch vertically polarized, omnidirectional energy efficiently into the cavity at its primary resonant frequency, or sufficiently close to a resonance frequency so as to be within the resonance frequency bandwidth. Because propagation losses are so low at the primary Schumann resonance frequency, signals at that frequency may be transmitted to any point of the earth without significant attenuation.
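
For scale, a few lines of arithmetic (my own, based only on the figures quoted above) show what these numbers imply:

    # Free-space wavelength at the primary Schumann frequency, and the
    # probe's stated maximum dimension of 0.001 wavelengths.
    C = 2.998e8                      # speed of light in free space, m/s
    f = 8.0                          # primary Schumann resonance, Hz
    wavelength = C / f               # ~3.75e7 m, i.e. about 37,500 km
    probe_max = 1e-3 * wavelength    # 0.001 wavelengths ~ 37.5 km
    print(f"{wavelength:.3g} m; probe max {probe_max/1e3:.1f} km")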

An important element of the inventions is that the path inhibits propagation, thereby creating slow waves, and provides an electromagnetically closed path so that a standing inhibited-velocity wave, or resonant operation, can be established in response to the flow of electrical current through the path.

One half of the electrically conducting path may be eliminated in embodiments of the structure by employing the image theory technique of known electromagnetic theory. Thus, a conducting image surface electrically supplies the missing portion of the path. The image surface may be a conducting sheet, a screen or wires arranged to act electrically as a conducting sheet, or may be the earth itself, in accordance with the improvement disclosed in the patents.

COMPARISON OF PHYSICAL PARAMETERS

Physical Parameter               Accepted Experimental Values   Predicted from Tesla's Disclosures
Attenuation Constant (dB/Mm)     0.20 - 0.30                    0.26
Resonant Frequency (Hz)          6.8 - 7.8                      6
Cavity Q                         3.8 ≤ Q ≤ 7.8                  3.2 ≤ Q ≤ 6.4
Coherence Time (sec.)            no data available              0.08484
Phase Velocity (fraction of c)   0.71 - 0.83                    0.8
Cavity Mode Structure            (see Figure 1)                 "Projections of all the stationary nodes onto the earth's diameter are equal."
Cavity Thickness (Km)            35 ≤ h ≤ 80                    "greater than 8 Km"; "about 20 Km"

Table 1. Documented numerical evidence that Tesla excited terrestrial resonances in 1899. Additionally there is a host of descriptive evidence.

Wireless transmission of power - resonating planet Earth

Toby Grotz
Project Tesla
Box 277
LEADVILLE, Colorado 80461
United States of America

Many researchers have speculated on the meaning of the phrase "non-Hertzian waves" as used by Dr. Nikola Tesla. Dr. Tesla first began to use this term in the mid 1890's in order to explain his proposed system of wireless transmission of power. In fact, it was not until the distinction was drawn between the method that Heinrich Hertz was using and the system Dr. Tesla had designed that Dr. Tesla was able to receive the endorsement of the renowned physicist, Lord Kelvin. (1)

To this day, however, there exists a confusion amongst researchers, experimentalists, popular authors and laymen as to the meaning of non-Hertzian waves and the method Dr. Tesla was promoting for the wireless transmission of power. In this paper, the terms pertinent to wireless transmission of power will be explained and the method to be used by present researchers in a recreation of the Colorado Springs experiment will be defined.

Early Theories of Electromagnetic Propagation

In pre-World War I physics, scientists postulated a number of theories to explain the propagation of electromagnetic energy through the ether. There were three popular theories present in the literature of the late 1800's and early 1900's. They were:

1. Transmission through or along the Earth.
2. Propagation as a result of terrestrial resonances.
3. Coupling to the ionosphere using propagation through electrified gases.

We shall confine our examination at this time to the latter two theories, as they were both used by Dr. Tesla at various times to explain his system of wireless transmission of power. It should be noted, however, that the first theory was supported by Fritz Lowenstein, the first vice-president of the Institute of Radio Engineers, a man who had the enviable experience of assisting Dr. Tesla during the Colorado Springs experiments of 1899. Lowenstein presented what came to be known as the "gliding wave" theory of electromagnetic radiation and propagation during a 1915 IRE lecture.

Dr. Tesla delivered lectures to the Franklin Institute at Philadelphia in February, 1893, and to the National Electric Light Association in St. Louis in March, 1893. The theory presented in those lectures proposed that the Earth could be considered as a conducting sphere and that it could support a large electrical charge. Dr. Tesla proposed to disturb the charge distribution on the surface of the Earth and record the period of the resulting oscillations as the charge returned to its state of equilibrium. The problem of a single charged sphere had been analyzed at that time by J.J. Thomson and A.G. Webster in "The Spherical Oscillator". This was the beginning of the science of terrestrial resonances, culminating in the 1950's and 60's with VLF radio engineering and the discoveries of W.O. Schumann and J.R. Wait.

The second method of energy propagation proposed by Dr. Tesla was that of the propagation of electrical energy through electrified gases. Dr. Tesla experimented with the use of high frequency RF currents to examine the properties of gases over a wide range of pressures. It was determined by Dr. Tesla that air under a partial vacuum could conduct high frequency electrical currents as well as or better than copper wires. If a transmitter could be elevated to a level where the air pressure was on the order of 75 to 130 millimeters of mercury and an excitation of megavolts was applied, it was theorized that:

"... the air will serve as a conductor for the current produced, and the latter will be transmitted through the air with, it may be, even less resistance than through an ordinary copper wire. " (2)

Resonating Planet Earth

Dr. James F. Corum, in chapter two of his soon to be published book, "A Tesla Primer", points out a number of statements made by Dr. Tesla which indicate that he was using resonator fields and transmission line modes:

1. When he speaks of tuning his apparatus until Hertzian radiations have been eliminated, he is referring to using ELF vibrations: "... the Hertzian effect has gradually been reduced through the lowering of frequency." (3)

2. "... the energy received does not diminish with the square of the distance, as it should, since the Hertzian radiation propagates in a hemisphere." (3)

3. He apparently detected resonator or standing wave modes: "... my discovery of the wonderful law governing the movement of electricity through the globe... the projection of the wavelengths (measured along the surface) on the earth's diameter or axis of symmetry... are all equal." (3)

4. "We are living on a conducting globe surrounded by a thin layer of insulating air, above which is a rarefied and conducting atmosphere... The Hertz waves represent energy which is radiated and unrecoverable. The current energy, on the other hand, is preserved and can be recovered, theoretically at least, in its entirety." (4)

As Dr. Corum points out, "The last sentence seems to indicate that Tesla's Colorado Springs experiments could be properly interpreted as characteristic of a wave-guide probe in a cavity resonator." (5) This was in fact what led Dr. Tesla to report a measurement which to this day is not understood, and which has led many to erroneously assume that he was dealing with faster-than-light velocities.

The controversial measurement: It does not indicate faster than light velocity

The mathematical models and experimental data used by Schumann and Wait to describe ELF transmission and propagation are complex and beyond the scope of this paper. Dr. James F. Corum, Kenneth L. Corum and Dr. A-Hamid Aidinejad have, however, in a series of papers presented at the 1984 Tesla Centennial Symposium and the 1986 International Tesla Symposium, applied the experimental values obtained by Dr. Tesla during his Colorado Springs experiments to the models and equations used by Schumann and Wait. The results of this exercise have proved that the Earth and the surrounding atmosphere can be used as a cavity resonator for the wireless transmission of electrical power.

Dr. Tesla reported that 0.08484 seconds was the time that a pulse emitted from his laboratory took to propagate to the opposite side of the planet and to return. From this statement many have assumed that his transmissions exceeded the speed of light, and many esoteric and fallacious theories and publications have been generated. As Corum and Aidinejad point out in their 1986 paper, "The Transient Propagation of ELF Pulses in the Earth Ionosphere Cavity", this measurement represents the coherence time of the Earth cavity resonator system. This is also known to students of radar systems as a determination of the range dependent parameter. The accompanying diagrams from Corum's and Aidinejad's paper graphically illustrate the point.
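
A short worked example (my own arithmetic, assuming the naive reading of the figure as a round trip over one full Earth circumference) shows where the faster-than-light inference comes from:

    # Naive reading of Tesla's 0.08484 s: a pulse travelling to the antipode
    # and back covers one Earth circumference, giving an apparent velocity.
    C = 2.998e8                      # speed of light, m/s
    EARTH_CIRCUMFERENCE = 4.0075e7   # m
    apparent_v = EARTH_CIRCUMFERENCE / 0.08484
    print(apparent_v / C)            # ~1.58 -- the spurious "1.5x light speed"
    # The Corum/Aidinejad reading removes the paradox: 0.08484 s is the
    # cavity's coherence time, not a propagation time.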

We now turn to a description of the methods to be used to build, as Dr. Tesla did in 1899, a cavity resonator for the wireless transmission of electrical power.

PROJECT TESLA: The wireless transmission of electrical energy using Schumann resonance

It has been proven that electrical energy can be propagated around the world between the surface of the Earth and the ionosphere at extreme low frequencies in what is known as the Schumann Cavity. Experiments to date have shown that electromagnetic waves of extreme low frequencies in the range of 8 Hz, the fundamental Schumann resonance frequency, propagate with little attenuation around the planet within the Schumann resonance cavity. Knowing that a resonant cavity can be excited and that power can be delivered to that cavity, similar to the methods used in microwave ovens for home use, it should be possible to resonate and deliver power via the Schumann cavity to any point on Earth. This will result in practical wireless transmission of electrical power.

Background

Although it was not until 1954-1959 that experimental measurements were made of the frequency that is propagated in the resonant cavity surrounding the Earth, recent analysis shows that it was Nikola Tesla who, in 1899, first noticed the existence of stationary waves in the Schumann cavity. Tesla's experimental measurements of the wavelength and frequency involved closely match Schumann's theoretical calculations. Some of these observations were made in 1899 while Tesla was monitoring the electromagnetic radiations due to lightning discharges in a thunderstorm which passed over his Colorado Springs laboratory and then moved more than 200 miles eastward across the plains. In his "Colorado Springs Notes", Tesla noted that these stationary waves "... can be produced with an oscillator," and added in parentheses, "This is of immense importance." (6) The importance of his observations is due to the support they lend to the prime objective of the Colorado Springs laboratory. The intent of the experiments and the laboratory Tesla had constructed was to prove that wireless transmission of electrical power was possible.

Exciting the Schumann resonance is analogous to pushing a pendulum. The intent of Project Tesla is to create pulses or electrical disturbances that would travel in all directions around the Earth in the thin membrane of non-conductive air between the ground and the ionosphere. The pulses, or waves, would follow the surface of the Earth in all directions, expanding outward to the maximum circumference of the Earth and contracting inward until meeting at a point opposite to that of the transmitter. This point is called the antipode. The traveling waves would be reflected back from the antipode to the transmitter to be reinforced and sent out again.

At the time of his measurements Tesla was experimenting with and researching methods for "... power transmission and transmission of intelligible messages to any point on the globe." Although Tesla was not able to commercially market a system to transmit power around the globe, modern scientific theory and mathematical calculations support his contention that the wireless propagation of electrical power is possible and a feasible alternative to the extensive and costly grid of electrical transmission lines used today for electrical power distribution.

The Need for a Wireless System of Energy Transmission

A great concern has been voiced in recent years over the extensive use of energy, the limited supply of resources, and the pollution of the environment from the use of present energy conversion systems. Electrical power accounts for much of the energy consumed. Much of this power is wasted during transmission from power plant generators to the consumer. The resistance of the wire used in the electrical grid distribution system causes a loss of 26-30% of the energy generated. This loss implies that our present system of electrical distribution is only 70-74% efficient.

A system of power distribution with little or no loss would conserve energy. It would reduce pollution and expenses resulting from the need to generate power to overcome and compensate for losses in the present grid system. Based on the 1971 world-wide power generation of 908 million kilowatts, approximately 207 million kilowatts are being produced simply to make up for losses. This results in a cost of 454 billion U.S. dollars at 5 cents per kilowatt-hour. The power wasted in transmission now costs over 100 billion dollars a year. Wireless transmission of power, if fully utilized, could save over 90 billion dollars per year. Any technology that can reduce these losses and the corresponding costs is of extreme importance.
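
A back-of-envelope check (my own arithmetic, assuming the quoted price is per kilowatt-hour) relates the loss figure to the claimed annual saving:

    # Annual cost of the power generated only to cover grid losses (1971).
    loss_power_kw = 207e6        # kW produced to make up for losses
    hours_per_year = 8760
    price_per_kwh = 0.05         # USD, as quoted above
    annual_loss_cost = loss_power_kw * hours_per_year * price_per_kwh
    print(f"${annual_loss_cost/1e9:.0f} billion per year")   # ~ $91 billion
    # ...consistent with the claimed potential saving of "over 90 billion
    # dollars per year" for a near-lossless distribution system.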

The proposed project would demonstrate a method of energy distribution calculated to be 90-94% efficient. An electrical distribution system based on this method would eliminate the need for an inefficient, costly, and capital intensive grid of cables, towers, and substations. The system would reduce the cost of electrical energy used by the consumer and rid the landscape of wires, cables, and transmission towers.

There are areas of the world where the need for electrical power exists, yet there is no method for delivering power. Africa is in need of power to run pumps to tap into the vast resources of water under the Sahara Desert. Rural areas, such as those in China, require the electrical power necessary to bring them into the 20th century and to equal standing with western nations.

As first proposed by Buckminster Fuller, wireless transmission of power would enable world-wide distribution of off-peak demand capacity. This concept is based on the fact that some nations, especially the United States, have the capacity to generate much more power than is needed. This situation is accentuated at night. The greatest amount of power used, the peak demand, is during the day. The extra power available during the night could be sold to the side of the planet where it is day time. Considering the huge capacity of power plants in the U.S., this system would provide a saleable product which could do much to aid our balance of payments.

In 1971, nine industrialized nations (with 25% of the world's population) used 690 million kilowatts, 76% of all power generated. The rest of the world used only 218 million kilowatts. By comparison, China generated only 17 million kilowatts and India generated only 15 million kilowatts (less than two percent each). If a conservative assumption is made that the three-quarters of the world which is now using only one-quarter of the current power production were eventually to consume as much as the first quarter, then an additional 908 million kilowatts will be needed. The demand for electrical power will continue to increase with the industrialization of the world.

A system of wireless transmission of power would make electrical energy available to people and nations which are not now privileged with the access to power developed nations take for granted.

Project Tesla: Objectives

The objectives of Project Tesla are divided into three areas of investigation:

1. Demonstration that the Schumann Cavity can be resonated with an open air, vertical dipole antenna;

2. Measurement of power insertion losses;

3. Measurement of power retrieval losses, locally and at a distance.

Methods

A full size, 51 foot diameter, air core, radio frequency resonating coil and a 120 foot tower have been constructed and are operational at an elevation of approximately 11,000 feet (3350 meters) for the experiment. This system is centered around a very powerful resonating Tesla coil. It was originally built in 1973-1974 and used until 1982 by the United States Air Force at Wendover AFB in Wendover, Utah. The USAF used the coil for simulating natural lightning for testing and hardening fighter aircraft. The system has a capacity of 150 kilowatts. The coil, which is the largest part of the system, has already been built, tested, and is operational.

A location at a high altitude is initially advantageous for reducing atmospheric losses which work against an efficient coupling to the Schumann cavity. The high frequency, high voltage output of the coil will be half wave rectified using a uniquely designed single electrode X-ray tube. The X-ray tube will be used to electrostatically charge a 120 ft. (37 m) tall, vertical mast which will function to provide a vertical current moment. The mast is topped by a metal sphere 30 inches (75 cm) in diameter. A circulating current of 1,000 amperes in the system will create an ionization and corona causing a large virtual electrical capacitance in the medium surrounding the sphere. Discharging the antenna 7-8 times per second through a fixed or rotary spark gap will create electrical disturbances, which will resonantly excite the Schumann cavity, and propagate around the entire Earth.

The propagated wave front will be reflected from the antipode back to the transmitter site. The reflected wave will be reinforced and again radiated when it returns to the transmitter. As a result, an oscillation will be established and maintained in the Schumann cavity. The loss of power in the cavity has been estimated to be about 6% per round trip. If the same amount of power is put into the cavity on each cycle of oscillation of the transmitter, there will be a net energy gain which will result in a net voltage, or amplitude, increase. This will result in reactive energy storage in the cavity. As long as energy is delivered to the cavity, the process will continue until the energy is removed by heating, lightning discharges, or, as is proposed by this project, loading by tuned circuits at distant locations for power distribution.
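
A toy iteration (my own sketch, assuming the quoted 6% round-trip loss and a constant injection of one energy unit per transmitter cycle) illustrates the build-up of stored energy described above:

    # Stored energy in the cavity: lose 6% per round trip, then re-inject
    # one unit. The stored energy builds toward the injection/loss balance.
    energy = 0.0
    for cycle in range(200):
        energy = energy * (1 - 0.06) + 1.0
    print(energy)   # -> approaches 1/0.06 ~ 16.7 units of stored energy
                    # per unit injected each cycle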

The resonating cavity field will be detected by stations both in the United States and overseas. These will be staffed by engineers and scientists who have agreed to participate in the experiment.

Measurement of power insertion and retrieval losses will be made at the transmitter site and at distant receiving locations. Equipment constructed especially for measurement of low frequency electromagnetic waves will be employed to measure the effectiveness of using the Schumann cavity as a means of electrical power distribution. The detection equipment used by project personnel will consist of a pick-up coil and industry standard low-noise, high gain operational amplifiers and active band pass filters.
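
As an illustrative sketch only (this is not the project's actual instrumentation; it assumes Python with NumPy and SciPy, and models the "active band pass filters" as a digital Butterworth band-pass covering roughly the first few Schumann resonances, 5-30 Hz):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def schumann_bandpass(signal: np.ndarray, fs: float,
                          lo: float = 5.0, hi: float = 30.0,
                          order: int = 4) -> np.ndarray:
        """Band-pass a pick-up-coil record to the Schumann band."""
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, signal)

    # Example: isolate a 7.8 Hz line buried in broadband coil noise.
    fs = 500.0
    t = np.arange(0, 60, 1 / fs)
    record = np.sin(2 * np.pi * 7.8 * t) + np.random.randn(t.size)
    clean = schumann_bandpass(record, fs)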

In addition to project detection there will be a record of the experiment made by a network of monitoring stations that have been set up specifically to monitor electromagnetic activity in the Schumann cavity. This effort is headed by Dr. D.D. Sentman, who is with the Institute of Geophysics and Planetary Physics at the University of California at Los Angeles. Dr. Sentman's project is funded by Los Alamos National Laboratory, the Lawrence Livermore Laboratory, and the National Aeronautics and Space Administration. Dr. Sentman has agreed to participate in verification of the goal of this proposal.

Evaluation Procedure

The project will be evaluated by an analysis of the data provided by local and distant measurement stations. The output of the transmitter will produce a 7-8 Hz sine wave as a result of the discharges from the antenna. The recordings made by distant stations will be time synchronized to ensure that the data received is a result of the operation of the transmitter.

Power insertion and retrieval losses will be analyzed after the measurements taken during the transmission are recorded. Attenuation, field strength, and cavity Q will be calculated using the equations presented in Dr. Corum's papers. If recorded results indicate power can be efficiently coupled into or transmitted in the Schumann cavity, a second phase of research involving power reception will be initiated.

Regulating Agencies

The Radio Regulations of the International Telecommunications Union (ITU), Article 2, Section 11 (Geneva, 1959), list world-wide frequency allocations from 10 kilohertz to 275 gigahertz. Frequencies below 10 kilohertz and above 275 gigahertz are not allocated. In the United States, the Federal Communications Commission has allocated frequencies in accordance with ITU regulations. In effect, there is no governmental agency in the world that has jurisdiction over the frequency of operation of Project Tesla.

Environmental Considerations

The extreme low frequency (ELF) fields present in the environment have several origins. The time varying magnetic fields produced as a result of solar and lunar influences on ionospheric currents are on the order of 30 nanoteslas. The largest time varying fields are those generated by solar activity and thunderstorms. These magnetic fields reach a maximum of 0.5 microteslas (uT). The magnetic fields produced as a result of lightning discharges in the Schumann cavity peak at 7, 14, 20 and 26 Hz. The magnetic flux densities associated with these resonant frequencies vary from 0.25 to 3.6 picoteslas per root hertz.

Exposure to man-made sources of ELF can be up to 1 billion (1,000 million) times stronger than that of naturally occurring fields. Household appliances operated at 60 Hz can produce fields as high as 2.5 mT. The field under a 765 kV, 60 Hz power line carrying 1 amp per phase is 15 µT. ELF antenna systems that are used for submarine communication produce fields of 20 µT. Video display terminals produce fields of 2 µT, some 1,000,000 times the strength of the natural Schumann resonance fields. Project Tesla will use a 150 kW generator to excite the Schumann cavity. Dr. Corum's calculations predict that the field strength due to this excitation at 7.8 Hz will be on the order of 46 picoteslas.
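The ratios quoted above can be verified with a few lines of arithmetic. In the sketch below the field values are those given in the text; the natural Schumann resonance level is rounded to 2 picoteslas for the comparison, an assumption made here purely for illustration.

# Rough comparison of man-made and natural ELF magnetic flux densities,
# using the values quoted in the text. The ~2 pT natural reference level
# is an assumed round figure (the text gives 0.25 - 3.6 pT per root hertz).
fields = {
    "natural Schumann resonances": 2e-12,   # ~2 pT assumed reference
    "Project Tesla (predicted)":   46e-12,  # 46 pT at 7.8 Hz
    "video display terminal":      2e-6,    # 2 uT
    "765 kV line at 1 A/phase":    15e-6,   # 15 uT
    "ELF submarine antenna":       20e-6,   # 20 uT
}
ref = fields["natural Schumann resonances"]
for name, b in fields.items():
    print(f"{name:30s}: {b:8.2e} T  ({b/ref:12,.0f} x natural)")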

References

1. "Tesla said". Compiled by John T. Ratzlaff. Tesla Book Company. Greenville, Texas. 1984.

2. "Dr. Nikola Tesla: selected patent wrappers". Compiled by John T. Ratzlaff. Tesla Book Company, 1980, Vol. 1, p. 128.

3. Tesla, Nikola. "The disturbing influence of solar radiation on the wireless transmission of energy". Electrical Review. July 6, 1912. p. 34-5.

4. Tesla, Nikola. "The effect of static on wireless transmission". Electrical Experimenter. January 1919, p. 627, 658.

5. Corum, James T., Kenneth L. Corum. "Tesla Primer and Handbook". unpublished.

6. Tesla, Nikola. "Colorado Springs notes, 1899 - 1900". Nikola Tesla Museum. Belgrad. 1978. p. 62.


High altitude Project Tesla transmitter, Leadville, Colorado

A quarter-wave coaxial cavity as a power processing plant

Mark Nash, James Smith, Robert Craven
339 Engineering Sciences Building
West Virginia University
MORGANTOWN, West Virginia 26506
United States of America

The coaxial conductor geometry has been a subject of interest in electrical engineering since the advent of radio and the foundations of the radio engineering discipline. The simplicity of this geometry, and the resonant modes associated with it, have drawn scientists back to the coaxial geometry time and again in the development of different technologies and analytic models. Almost all of these have been developed in the areas of communication technologies and have utilized portions of classical transmission line theory as the basis of their respective analytic models.

Because communication technologies involve only small energies, attention has been focused on a limited range of applications of the complex electromagnetic nature of resonant circuits and of the exploitable physical properties they possess. The analytical simplicity of the coaxial geometry provides a distributed resonant structure whose modes can be physically interpreted and visualized, with qualitative links to the analysis through transmission line theory; but, as is often the case, the complexity of the model analysis of cavity resonators hinders the physical interpretation of their properties.

The coaxial resonator may be viewed as a transitional geometry falling between transmission line and cavity resonators. With a proper analytic synthesis of the field theory, transmission line models, the lumped equivalent model, and cavity and waveguide analytic considerations, a broader and more complete understanding of all distributed resonant structures may be gained.

A review of analytic models makes it apparent that the potential offered by resonant cavities as RF power processing elements has remained undeveloped. This essentially new area of radio frequency engineering, RF power processing, has considerable potential.

Lumped Circuits

The lumped equivalent transmission line model is presented in many undergraduate electromagnetics texts and is considered one of the classical analytic techniques of electrical engineering. The origins of this analysis can be traced to the works of Oliver Heaviside (1880s) and S. A. Schelkunoff (1) (1930s). It provides the means by which distributed circuits may be compared to, and analyzed as, lumped circuits and is a critical analytical link between lumped and distributed resonant systems. The usual presentation of the model is focused on the impedance transforming properties of transmission line matching networks and does little to demonstrate or emphasize the resonant rise phenomenon. Figure 1 illustrates the coaxial resonator as it would be represented in lumped equivalent circuit theory. As the analysis of distributed circuits represented in this form is extensive, it will not be repeated here.


Figure 1(a): The coaxial resonator; 1(b): The lumped equivalent circuit

The equations for the lumped equivalent parameters and the characteristic impedance are:

$L = \frac{\mu}{2\pi}\,\ln(b/a)$   (H/m)   (1)

$C = \frac{2\pi\varepsilon}{\ln(b/a)}$   (F/m)   (2)

$R = \frac{R_s}{2\pi}\left(\frac{1}{a} + \frac{1}{b}\right)$   ($\Omega$/m)   (3)

$Z_0 = \sqrt{L/C} = 60\,\ln(b/a)$   ($\Omega$, air dielectric)   (4)


Resonance

The phenomenon of the resonant rise on a transmission line, and in any cavity resonator, is physically due to coherent reflection of forward and backward traveling waves occurring at critically spaced terminating surfaces. The occurrence of standing waves, or stationary field distributions, is common to all resonators and is indeed the mechanism by which energy storage and voltage, current and impedance transformation are realized in distributed circuits.

The simple field distributions of most cavity resonators share great qualitative, and indeed some analytic similarities with the standing waves present on transmission line resonators. A thorough conceptual appreciation of the phenomenon of resonant rise may be obtained from consideration of a vector diagram representation of the incident (E1) and reflected (E2) waves on a resonant length of line. Figure 2 shows the familiar sinusoidal voltage distribution associated with the open circuited lossless resonant line and the resulting phases of the two waves at different points in the distribution (2). Under lossless conditions the load reflection coefficient is unity with zero phase angle and the load impedance is infinite. As a result, the incident and reflected waves have equal magnitudes at the load, and the reflection occurs such that the resulting phases of the two waves are also the same. As the figure demonstrates, this phenomenon yields a terminal point voltage that is the arithmetic sum of the incident and reflected components.

$E_L = E_1 + E_2 = 2E_1$   (6)

It should also be noted that in this ideal case the currents of the incident and reflected wave are equal in magnitude but opposite in phase. Thus, the vector sum is zero and the current into the load is zero.


Figure 2. Vector Diagram representation of the incident and reflected voltage waveforms on resonant transmission lines (2)
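The vector picture of Figure 2 is easy to verify numerically. The following sketch (an illustration, not from the paper) superposes equal amplitude incident and reflected waves on a lossless open-circuited line: the voltage envelope doubles at the open end, while the current there sums to zero.

# Superposition of incident and reflected waves on a lossless open-circuited
# line; x is measured in wavelengths back from the open end (x = 0 at load).
import numpy as np

beta = 2 * np.pi                       # phase constant, one wavelength = 1.0
x = np.linspace(0.0, 0.25, 6)          # open end back to one quarter wave

v_inc = np.exp(-1j * beta * x)         # incident voltage wave, unit amplitude
v_ref = np.exp(+1j * beta * x)         # reflected wave, Gamma = 1 (open end)
v_total = np.abs(v_inc + v_ref)        # voltage envelope: 2|cos(beta x)|

i_inc = np.exp(-1j * beta * x)         # incident current wave
i_ref = -np.exp(+1j * beta * x)        # reflected current, opposite phase
i_total = np.abs(i_inc + i_ref)        # current envelope: 2|sin(beta x)|

for xi, v, i in zip(x, v_total, i_total):
    print(f"x = {xi:5.3f} wavelengths: |V| = {v:5.3f}, |I| = {i:5.3f}")
# At x = 0: |V| = 2 (arithmetic sum) and |I| = 0 (vector sum is zero).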

The criterion for resonance is inherent in the spatial voltage distribution and requires that the voltage maximum and minimum be separated by one quarter wavelength. This means that, regardless of the loading, the system must be electrically ninety degrees long. This can easily be demonstrated with a Smith chart analysis of the capacitively loaded resonator.

Foreshortened Coaxial Resonator

The example presented in this paper to illustrate the coaxial cavity resonator is a discharge device in which the potential for discharge is built up in a spherical capacitor. It is appropriate to show how this capacitive load will change the operating parameters of the resonator. Numerical values presented are appropriate for the demonstration model. The material presented comes from published work (3) and is in part the result of mutual theoretical development conducted by the authors and Dr. J.F. Corum on this research project. (Some of this material will also be found in the monograph "Vacuum Tube Tesla Coils" by James F. Corum and Kenneth L. Corum).

For the capacitively loaded transmission line, the length at resonance will be less than a quarter wavelength due to the change in the angle of the reflection coefficient at the load end. The effect is that the angle of the reflection coefficient at the open-circuited end is reduced from 0 degrees, for a fully unloaded line, to a negative angle:

$\Gamma_L = 1\,\angle\,\phi, \qquad \phi < 0$   (7)

The result of this change is that the line is not only physically shortened but the stationary $\lambda/4$ sinusoidal field distribution is effectively shortened as well. Figure 3 shows the comparison between the unloaded and loaded distributions.


Figure 3. The capacitively foreshortened line: (a) unloaded line (b) voltage distribution (c) loaded line and resulting reduction in attained load voltage

Lumped Equivalent Example

Consider the resonator shown in Figure 4 with its equivalent transmission line model. At the shorted input end (1) the impedance is $Z_L = R_{loss} + j0$ and the reflection coefficient is $\Gamma \approx 1\,\angle\,180°$. At the load end (2) the capacitance of the sphere must be calculated and the load parameters evaluated.


Figure 4. The capacitively top loaded coaxial cavity and its equivalent transmission line circuit for Smith chart analysis.

The capacitance to ground of an isolated sphere is given below and may be used as a reasonable approximation for the top loaded sphere:

$C_{sphere} = 4\pi\varepsilon_0\, r$   (8)

where $r$ = radius in meters. Thus $C_{top}$ = 5.65 pF.

The unloaded resonant frequency of the line is calculated from:

$c = \lambda f = 4\,l(m)\,f, \quad\text{or}\quad f = \frac{c}{4\,l(m)}$   (9)

This yields $f$ = 100 MHz, and $\omega = 2\pi f \approx 6.28 \times 10^8$ rad/sec.

Thus the proper load impedance is:

$Z_L = \frac{1}{j\omega C_{top}} = -j\,281.7\ \Omega$   (10)

The characteristic impedance of the resonator in an air dielectric is:

$Z_0 = 60\,\ln(b/a) = 72.8\ \Omega$   (11)

The normalized load reactance is:

$x_L = \frac{|X_L|}{Z_0} = \frac{281.7}{72.8} \approx 3.87$   (12)

The effect of the loading capacitance is to require the line to be shortened to maintain the same resonant frequency, or to lower the resonant frequency when added to a line of given length. The equivalent electrical line length with $C_{top}$ as a load will be:

$l'(\lambda) = 0.25\lambda + \Delta l(\lambda) = 0.2875\lambda$   (13)

or

$l'(m) = l'(\lambda)\cdot\lambda(m) = 0.863\ \text{m}$

and the resulting resonant frequency is:

$f' = \frac{c}{4\,l'(m)} = 86.91\ \text{MHz}$   (14)
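The same result can be obtained by solving the quarter-wave resonance condition directly. The sketch below is a numerical check under stated assumptions rather than the authors' procedure: it assumes the standard resonance condition $Z_0\tan(\beta l) = 1/(\omega C_{top})$ for the 0.75 m line, with $C_{top}$ from equation (8) for a 2 inch (5.08 cm) radius sphere. The bisection converges to about 86 MHz, within roughly one percent of the worked value of 86.91 MHz.

# Foreshortened quarter-wave resonance: solve Z0*tan(beta*l) = 1/(omega*C_top)
# by bisection. Assumed inputs follow the worked example: 0.75 m line,
# Z0 = 72.8 ohms (b/a = 1.451/0.431), C_top from an isolated 2 in radius sphere.
import math

C_LIGHT = 3.0e8                            # m/s
EPS0 = 8.854e-12                           # F/m

l = 0.75                                   # inner conductor length, m
z0 = 60.0 * math.log(1.451 / 0.431)        # characteristic impedance, eq (23)
r_sphere = 0.0508                          # sphere radius, m (2.0 in)
c_top = 4.0 * math.pi * EPS0 * r_sphere    # eq (8): ~5.65 pF

def residual(f):
    """Zero at resonance: Z0*tan(beta*l) - 1/(omega*C_top)."""
    beta = 2.0 * math.pi * f / C_LIGHT
    return z0 * math.tan(beta * l) - 1.0 / (2.0 * math.pi * f * c_top)

# Bisect between 50 MHz and just below the unloaded 100 MHz quarter-wave point.
lo, hi = 50e6, 99.9e6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

print(f"C_top = {c_top*1e12:.2f} pF")
print(f"unloaded f = {C_LIGHT/(4*l)/1e6:.1f} MHz")
print(f"foreshortened f = {0.5*(lo+hi)/1e6:.2f} MHz")   # about 86 MHz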

Coaxial Cavity Resonator

A transition from resonant transmission line to resonant cavity occurs for the coaxial line when the input is short circuited, as shown in Figure 5. Several things become apparent with this fundamental observation. The structure takes on the nature of a completely enclosed system, or hohlraum as such systems have been called (4), like that of a cavity resonator. The coaxial cavity is thus a transitional geometry exhibiting properties of both waveguides and cavities, and is an important conceptual link for developing all classes of cavity resonators for application as RF power processing elements.

It may be shown that optimal dimensions of the coaxial cavities may be developed from criteria for maximizing the energy storage (Q) and step up (SWR) of the cavity. The criteria for development of maximum Q may be seen in the optimization of the Q from power considerations. The Q of a resonant cavity is defined as before:


$Q = \omega\,\frac{\text{energy stored}}{\text{average power dissipated}}$   (15)

The energy stored and the energy dissipated are approximated as:

Energy stored $\propto$ (field energy density) $\times$ (cavity volume), and Energy dissipated $\propto$ (surface resistance) $\times$ (conductor surface area).

Therefore:

$Q \propto \frac{\text{volume}}{\text{surface area}}$
Thus, Q may be maximized for the geometry which yields the largest volume to surface area ratio. This occurs for the coaxial cavity when the ratio b/a equals 3.6 and the resonator has a characteristic impedance ($Z_0$) of 76.9 $\Omega$ (5). A comparison of obtainable unloaded Q's for different resonant systems is insightful. Table 1 shows the representative magnitudes for various unloaded resonant circuits.

Table 1: Representative magnitudes of unloaded Q for resonant structures

TYPE OF CIRCUIT        Qu (MAGNITUDE)
Lumped Tank            10,000
Tesla Coil             100,000
Coaxial Resonators     100,000
Cavity Resonators      1,000,000
The trend of increasing Q with increased volume to surface area ratio may be seen progressing down the table. The coaxial cavity compares well with all of the structures and was chosen for its simplicity of construction and design as well as its representative nature. The Q's of cavity resonators exceed those of coaxial resonators by an order of magnitude and might be chosen as a superior system for certain applications of power processing.

The criteria for the development of maximum step-up (SWR) parallel the shunt resistance considerations, since $R_{sh}$ is proportional to the SWR:

$R_{sh} = Z_0 \cdot \text{SWR}$   (17)


Figure 5. The quarter wavelength capacitively loaded cavity:

(a) spherical capacitively top-loaded cavity
(b) foreshortened capacitively loaded cavity
(c) transmission line equivalent

It is obvious that the shunt resistance is a function of the conductor losses and loading. The maximum shunt resistance occurs for b/a = 9.2, which yields a characteristic impedance of 133.1 $\Omega$ (5).
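Both optimum proportions can be recovered numerically. For a fixed outer radius, the conductor loss per unit length is proportional to $(1 + b/a)/\ln(b/a)$ and the shunt resistance to $\ln^2(b/a)/(1 + b/a)$; these proportionalities are the standard coaxial-line results and are assumed here rather than taken from the paper. A simple grid search reproduces b/a of about 3.6 for maximum Q and about 9.2 for maximum shunt resistance.

# Optimum b/a for a coaxial resonator with fixed outer radius b.
# Loss factor (per unit length) ~ (1 + x)/ln(x); Q ~ 1/loss, so minimize it.
# Shunt resistance ~ Z0^2 / loss ~ ln(x)^2 / (1 + x); maximize it.
import math

def loss_factor(x):
    return (1.0 + x) / math.log(x)

def shunt_factor(x):
    return math.log(x) ** 2 / (1.0 + x)

xs = [1.5 + 0.001 * i for i in range(20000)]   # search b/a in [1.5, 21.5]
best_q = min(xs, key=loss_factor)              # ratio maximizing Q
best_rsh = max(xs, key=shunt_factor)           # ratio maximizing R_sh

print(f"b/a for maximum Q    : {best_q:.2f} (Z0 = {60*math.log(best_q):.1f} ohms)")
print(f"b/a for maximum R_sh : {best_rsh:.2f} (Z0 = {60*math.log(best_rsh):.1f} ohms)")
# Expected: about 3.6 (Z0 ~ 77 ohms) and about 9.2 (Z0 ~ 133 ohms).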

Again, a comparison of obtainable shunt resistance (R sh) for different resonant systems is insightful. Table 2 shows the representative magnitudes of the various unloaded resonant circuits.

Table 2: Representative magnitudes of R_sh for resonant structures

TYPE OF CIRCUIT        R_sh (MAGNITUDE)
Lumped Tank            100
Tesla Coil             100
Coaxial Resonators     1000
Cavity Resonators      100,000

RF Power Processing

The first individual in the literature to realize that the RF resonant systems might be used to advantage in the processing (or transformation) of large electrical energies for engineering applications was Nikola Tesla. He first developed the "distributed helical resonator" strictly as a means of generating high voltages and the resulting spectacular discharges for which he is famous. He completed extensive empirical testing and optimization of this structure during the late 1890's and proposed a variety of possible applications including wireless transmission of power and the concept of directed energy weapons.

When examining Tesla's Colorado Springs device as a model for a power processing system, four basic processes can be observed: the conversion of power from 60 Hz to RF; the transformation by pulse modulation to high peak power and variable duty factor; the input coupling system; and the output coupling to the load (Figure 6). These block components may be implemented by standard RF techniques in a variety of ways depending on the magnitude of the powers to be processed and the desired efficiency. The blocks comprising the resonator have been developed in the preceding sections and only the source considerations remain. Tesla implemented pulse modulation via a special breakwheel. As the break occurs and the spark is quenched (Tesla used a magnetic field and forced air to quench the spark quickly), the high voltage transformer reactance is reintroduced across the primary tank, detuning it to lower the primary Q and reduce the impedance which is coupled into the secondary. The secondary, now free of the loading of the ringing primary, rings at its self-resonant frequency (f sec), which is identical to that of the extra coil, where the voltage (V sec) is stepped up by resonant rise (VSWR). The primary capacitance (C p) is recharged during the break interval.


Figure 6: (a) Block diagram of typical RF power processing system (b) Tesla's 1899 Colorado Springs apparatus system equivalent. (3)

The spark interval during the exchange of energy between the primary and secondary must be carefully controlled to avoid reflection of energy back into the primary. For efficient operation, the optimal spark dwell must be used. This provides for trapping of the energy in the secondary/extra coil circuit, which can be charged over many spark and break intervals to very large power levels. With repeated pulses of peak energy from the primary, the secondary master oscillator will charge the extra coil, and the system achieves base voltages which, when stepped up by the extra coil, will exceed the breakdown potential of $C_{top}$. It should be noted that with the tight coupling used by Tesla in 1899 (k = 0.6), practical for any of the resonator types, the breakwheel was required to switch within less than two cycles of oscillation, i.e.:

$N \approx \frac{1}{2k} = 0.83\ \text{cycles} < 2$
This is a phenomenal achievement with a breakwheel switch or modulator. This became the fundamental limitation preventing Tesla from exploring higher frequencies and smaller resonator geometries. The advent of the vacuum tube switch (not the oscillator) would remove some of these limitations.

State-of-the-Art Switching

The appropriate engineering choice of a vacuum tube replacement for the breakwheel is the hydrogen thyratron switch. State-of-the-art tubes achieve rise times on the order of a few nanoseconds and fall times (deionization times) on the order of ten nanoseconds. If it is assumed that the pulse duration must be on the order of twice the fall time to provide an efficient waveform, it is obvious that attainable frequencies are no greater than 100 MHz. This may seem surprising considering the frequencies attainable by Class C oscillators (GHz). However, given currently available dielectric insulators and the resulting physical sizes, this is as high a frequency as the physical dimensions of any of the open or cavity resonators can hope to be effectively insulated for in high power applications (hundreds of kilowatts).

A variety of power oscillator designs have been developed for different applications in industry and have become part of many standard texts on vacuum tube electronics. One in particular suggests the basis for the development of another alternative. In the tuned grid-tuned plate oscillator, the grid circuit acts as a master oscillator to drive the parallel tank in the plate circuit. The plate tank is tuned to a slightly different frequency from the grid circuit to provide a capacitive impedance large enough that this capacitance, in combination with that from the grid to the plate of the tube, meets the Barkhausen criterion. The use of a master oscillator is one critical element advantageous for driving a resonator; the removal of the double tuned circuit from the plate is another. Both of these may be achieved with a modified form of the tuned grid-tuned plate oscillator.

The general arrangement of this power oscillator is shown in Figure 7. It is to be noted that the link is untuned and must be constructed such that the grid to plate capacitance does not bring it to resonance and develop parasitic oscillations. This configuration avoids the spectral line splitting and efficiency limitations of the double tuned plate circuit and allows tight coupling to be utilized for efficient energy transfer to the resonator.


Figure 7: The optimal vacuum tube source configuration utilizing a tuned grid plate pulse modulated amplifier/oscillator with feedback via the grid to plate capacitance.

This is essentially a means of implementing an untuned plate circuit link coupled to the cavity resonator. The configuration places a master oscillator tank in the grid circuit so that the inefficiency of double tuned, coupled circuits is not present during the transfer of energy from the resonator to the high voltage discharge of the load. This essentially parallels the Colorado Springs device while utilizing the insight provided by Sloan (6) in his work with early vacuum tube oscillators. The use of a distributed circuit (cavity) oscillator tank is advantageous in minimizing the tank circuit losses, principally because of the low dielectric losses characteristic of gaseous dielectrics under pressure. This characteristic bodes extremely well for the future application of resonant cavities to current high energy physics technologies.

Empirical Verification

Unfortunately the tubes needed to construct the device in Figure 7 do not exist or were not available. The capacitor discharge electrodes that were constructed were moderate in size and functioned well enough to give strong evidence of the potential of $\lambda/4$ coaxial cavity resonators.

Two cavities were constructed and capacitively top loaded with small egg shaped (prolate spheroid) discharge electrodes as shown in Figure 8. The inner conductor length was chosen such that the electrodes partially protruded from the end of the outer conductor to provide a compromise between minimization of the radiation resistance (cavity loading), the advantages of increased distance to the outer conductor (arc-over considerations), and the resulting visual display. Each of the geometries was empirically examined for agreement with the performance predicted by the analysis. As the following analysis and comparison of results shows, the empirical results of both were reasonable and fell within the expected magnitudes of performance.

As a result of the limited source power (≈ 200 watts CW), the magnitude of attainable voltage (≈ 20 kV) required the selection of very small spheroidal capacitor discharge electrodes. In fact, any spherical capacitor designed to break down at approximately 20 kV has too low a capacitance to deliver any sizable charge to the spark discharge and to appreciably demonstrate lumped capacitive loading and foreshortening effects. The egg-shaped spheroid electrodes provide larger capacitance at a lower breakdown potential (though it is not accurately predictable). The concentration of charge at the egg tip, combined with the high E-field emission of charge, produces a small but impressively stable plasma discharge of two to four inches (5 to 10 cm) in length.


Figure 8: Experimental cavity dimensions (a) Cavity 1 b/a = 3.4; (b) Cavity 2 b/a = 9.2

The resonators were initially driven with a CW source (approximately 200 W), which yielded a vertical plasma (flame) two to three inches (5 to 7.5 cm) long, fed or pumped by a brush type discharge approximately 1.0 to 1.5 inches (2.5 to 3.8 cm) in length. The source was then pulse modulated over a range of duty factors to allow the class A amplifier used to be driven at larger peak pulse powers while maintaining between 150 and 200 watts average power input.

This provided larger base voltages over pulse lengths sufficient to charge the resonator to the breakdown potential of the discharge electrode. As a result of the larger base voltages obtained, the plasma length increased to between three and four inches (7.5 to 10 cm), fed by a brush type discharge 1.5 to 2.5 inches (3.8 - 6.3 cm) in length.
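The relation being exploited is simply that average power equals peak power times duty factor, so a fixed average input allows much larger peak powers at small duty factors. A short sketch with illustrative numbers (the specific duty factors are assumed here; the paper reports only the 150 to 200 watt average):

# Peak pulse power available from a fixed average power at various duty
# factors: P_avg = P_peak * duty, so P_peak = P_avg / duty.
P_AVG = 200.0                                # average input power, watts

for duty in (1.0, 0.5, 0.1, 0.01, 0.001):    # duty = 1.0 is continuous wave
    p_peak = P_AVG / duty
    print(f"duty = {duty:6.3f}: peak power = {p_peak:10.1f} W")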

The brush type discharge is one of five types seen and characterized by Tesla in various experiments he conducted. As stated above, what was observed in this experiment was actually a 1 to 2.5 inch (2.5 - 6.3 cm) blue-white brush discharge (this range in lengths covers the results obtained for both CW and pulse modulated sources) occurring at rates sufficient to couple energy into and sustain a white RF plasma (two to four inches / five to ten cm long), producing temperatures at the base of the flame (not the hot centre) sufficient to melt the tip of an aluminum electrode (> 600 °C). (Stainless steel electrodes were later introduced.) The larger discharges were achieved by pulse modulating the generator, allowing the amplifier to be driven at higher powers for short pulse periods (these being on the order of the fill time of the cavity). This produced higher peak voltages and hence the longer, hotter and more stable discharges.

The discharges in the pulsed mode were driven with pulse repetition rates in the range of 100 to 10,000 pps. The modulation of the discharges at these frequencies produced an almost painfully loud audible pitch. The frequency of the pitch was readily adjusted across the audio spectrum by varying the pulse repetition rate, and its intensity was also seen to vary with changes in the pulse duration. This modulation, as well as the unmodulated excitation, was easily detected on FM radio receivers within a few hundred feet (approximately 100 m) due to the intensity of the stray reactive fields off the end of the resonator. The radiated field components were measured at less than a milliwatt and are entirely negligible.

Coaxial Cavity Design Examples

The design of the coaxial cavity, and the calculation of the parameters of interest for evaluating its performance, is directly obtained from the analysis. The case of interest for evaluating potential resonator performance, and for comparison to the empirical results, is the capacitively top loaded resonator with minimum loading presented by the input coupling, since this load can be effectively removed from the system by utilizing synchronous switching of the energy input.

The design parameters needed are:

a = inner conductor radius (O.D.)
b = outer conductor radius (I.D.)
la = inner conductor length, or f = desired frequency (MHz) (specify one or the other)
lb = outer conductor length (if lb > la)
Ctop = top loading capacitance
εr = relative dielectric permittivity

All dimensions will be worked in inches and frequencies in MHz unless otherwise specified. A reasonable estimate of Ctop is the first requirement. The top loading capacitance may be a spheroid or the foreshortening capacitance from the end of the inner conductor (terminated in a circular plate) to the outer conductor walls. Measurements of the cavity frequency responses were made with the spheroid discharge electrode removed, and therefore with Ctop equal to the foreshortening capacitance. The recorded discharge parameters were obviously obtained with the spheroidal electrodes as the top loading capacitance.

The foreshortening capacitance may be roughly estimated from the area of the outer cylinder which extends beyond the inner conductor:

$C_{fs} = \frac{\varepsilon_0\, A}{d}$   (18)

where:

$A = 2\pi b\,(l_b - l_a)$ (area of the end of the outer conductor)

$d = \frac{b - a}{2}$ (mean distance to the outer conductor)
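Applying equation (18) to the cavity 1 dimensions of Figure 8 gives a feel for the magnitudes involved. The following sketch is an order-of-magnitude check using those dimensions; the result, roughly 1.7 pF, is consistent with the measured foreshortening of the nominally 100 MHz unloaded line into the mid 90 MHz range (Table 4).

# Foreshortening capacitance estimate, eq (18): C = eps0 * A / d, with
# A = 2*pi*b*(lb - la) the overhanging outer conductor area and
# d = (b - a)/2 the mean gap. Dimensions are those of cavity 1 (Figure 8).
import math

EPS0 = 8.854e-12                      # F/m
IN = 0.0254                           # meters per inch

a = 0.431 * IN                        # inner conductor radius, m
b = 1.451 * IN                        # outer conductor radius, m
la = 29.577 * IN                      # inner conductor length, m
lb = 30.000 * IN                      # outer conductor length, m

area = 2.0 * math.pi * b * (lb - la)  # overhang area, m^2
d = (b - a) / 2.0                     # mean distance to outer wall, m
c_fs = EPS0 * area / d                # eq (18)

print(f"C_fs = {c_fs*1e12:.2f} pF")   # roughly 1.7 pF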

The capacitance of the spheroid electrode used may be evaluated from an estimate of the mean radius, approximated by a sphere of the same surface area:

$C_{spheroid} \approx 4\pi\varepsilon_0\, r_{eff}$   (19)

This equation is for an isolated elevated sphere above a ground plane and does not account for the increased capacitance due to the vicinity of the surrounding outer conductor. For a rough approximation, the value for the elevated body above a ground plane can be increased such that:

$C_{top} \approx K\,C_{spheroid}, \qquad K > 1$   (20)

For small capacitance loads ($C_{top} \ll C$, i.e. when the top loading capacitance is less than ten percent of the resonator capacitance), a reasonable estimate of the foreshortened resonant frequency ($f_d$) may be obtained as follows.

Determine the approximate frequency of operation ($f_0$) from the inner conductor length:

$f_0 = \frac{V_f\, c}{4\, l_a}$   (21)

For linear conductors (fast wave structures) the velocity factor is assumed to be $V_f$ = 0.999.

The load impedance is then calculated from the appropriate top loading capacitance:

$Z_L = \frac{1}{j\,2\pi f_0\, C_{top}} = -jX_L$   (22)

The characteristic impedance is:

$Z_0 = 60\,\ln(b/a)$   (23)

The load reflection coefficient is then calculated from these values:

$\Gamma_L = \frac{Z_L - Z_0}{Z_L + Z_0} = |\Gamma_L|\,\angle\,\phi$   (24)

The calculation of $|\Gamma_L|$ is unnecessary if there is no radiation resistance component included (i.e. $\Gamma_L = 1.0\,\angle\,\phi$ for purely reactive loads), but is included here to demonstrate the effects of the loading resistance if it is not negligible (i.e. if large protruding discharge electrodes are used). At resonance:

$\theta = \beta l_a = 90° + \frac{\phi}{2}$   (25)

Therefore, the approximate resonant wavelength is:

$\lambda' = \frac{360°\; l_a}{\theta}$   (26)

The estimated resonant frequency (to within 2%) is then:

$f_d = \frac{V_f\, c}{\lambda'}$   (27)

The electrical length of the resonator may be calculated from equation 28:

$\theta = \frac{360°\; l_a}{\lambda'}$   (28)

The attenuation factor ($\alpha$) is calculated from equation 29:

$\alpha = \frac{R}{2 Z_0}, \qquad R = \frac{R_s}{2\pi}\left(\frac{1}{a} + \frac{1}{b}\right), \qquad R_s = \sqrt{\frac{\pi f \mu}{\sigma}}$   (nepers/m)   (29)

The propagation loss is ($\alpha l$), and the $Q_u$ of the resonator can be calculated from equation 30:

$Q_u = \frac{\beta}{2\alpha} = \frac{\pi}{4\,\alpha l}$   (30)

The final parameter of interest before calculating the step-up is the base impedance ($R_{base}$), calculated from equation 31:

$R_{base} = Z_0\,\alpha l$   (31)

Rbase is the input impedance of the cavity that the source (generator) would have to drive (or be matched to) if the resonator were to be driven by direct connection at the base.

The step-up is then calculated from equation 32:

$\text{step-up} = \frac{V_{top}}{V_{in}} \approx \text{SWR}'$   (32)

Consider the following examples:

Example 1: Maximum Q design, b/a = 3.4:

Let:

a = 0.431 in. (1.095 cm)
b = 1.451 in. (3.686 cm)
la = 29.577 in. (75.126 cm)
lb = 30.000 in. (76.200 cm)
εr = 1.0

Example 2: Maximum Rsh design, b/a = 9.2:

Let:

a = 0.158 in. (0.401 cm)
b = 1.451 in. (3.686 cm)
la = 29.577 in. (75.126 cm)
lb = 30.000 in. (76.200 cm)
εr = 1.0

The results of these theoretical calculations are given in Table 3. It is to be noted that the results are based on the estimated top loading capacitance and are therefore not strictly accurate. However, the estimates do agree well (within 1%) with operating frequencies measured in the lab and hence may be used with reasonable confidence.


Table 3. Tabulated theoretical results for critical parameters

The theoretical results are also obtainable, for the line without capacitive top loading, by calculating the lumped equivalent parameters and using the equations equivalent to those above. This proves to be an appropriate method of calculation when coupling considerations and design are to be developed. The material constants of the cavity may also be changed from an air dielectric and copper conductor to any alternatives more easily than with the above method (the constants in the above equations embed the material constants). A list of the material constants used is given below. The following calculations are done in meters to allow use of the more familiar constant values.

List of Material Constants

σ (copper) = 5.65 × 10⁷ mhos/m
μ = μ₀ = 1.257 × 10⁻⁶ H/m
εr = 1.0
σd (dielectric) = 2.5 × 10⁻¹⁴ mhos/m

The propagation losses ($\alpha l$) can now be calculated, and the resulting $Q_u$ and SWR determined from equations 33 and 34:

$Q_u = \frac{\pi}{4\,\alpha l}$   (33)

$\text{SWR} = \frac{1}{\alpha l}$   (34)

The foreshortened SWR is then:

$\text{SWR}' = \text{SWR}\,\sin(90° - \phi)$

The results of the two examples are tabulated below for comparison.

Example 1: Maximum Q, b/a = 3.4:

αl = 2.58 × 10⁻⁴ nepers
Qu = 3,040
SWR = 3,880

Example 2: Maximum Rsh, b/a = 9.2:

αl = 3.37 × 10⁻⁴ nepers
Qu = 2,330
SWR = 2,970
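These tabulated figures follow mechanically from equations (29), (33) and (34). The sketch below is a verification under stated assumptions: copper conductivity and permeability from the material constants list, $V_f$ = 0.999, and the reconstructed loss and Q expressions. It reproduces the tabulated αl, Qu and SWR to within a few percent.

# Design chain for the two example quarter-wave coaxial cavities.
# Assumptions: sigma = 5.65e7 mhos/m and mu0 = 1.257e-6 H/m (material
# constants list), Vf = 0.999, Qu = pi/(4*alpha*l), SWR = 1/(alpha*l).
import math

C_LIGHT = 3.0e8
MU0 = 1.257e-6
SIGMA_CU = 5.65e7
IN = 0.0254

def design(a_in, b_in, la_in, vf=0.999):
    a, b, la = a_in * IN, b_in * IN, la_in * IN
    f0 = vf * C_LIGHT / (4.0 * la)                      # eq (21)
    z0 = 60.0 * math.log(b / a)                         # eq (23)
    rs = math.sqrt(math.pi * f0 * MU0 / SIGMA_CU)       # surface resistance
    r_per_m = (rs / (2.0 * math.pi)) * (1.0/a + 1.0/b)  # series loss, ohm/m
    alpha_l = r_per_m * la / (2.0 * z0)                 # eq (29), total nepers
    qu = math.pi / (4.0 * alpha_l)                      # eq (33)
    swr = 1.0 / alpha_l                                 # eq (34)
    return f0, z0, alpha_l, qu, swr

for name, a_in, b_in in (("cavity 1", 0.431, 1.451), ("cavity 2", 0.158, 1.451)):
    f0, z0, al, qu, swr = design(a_in, b_in, 29.577)
    print(f"{name}: f0 = {f0/1e6:6.2f} MHz, Z0 = {z0:6.1f} ohms, "
          f"alpha*l = {al:.2e} Np, Qu = {qu:5.0f}, SWR = {swr:5.0f}")
# Tabulated: 2.58e-4 / 3.37e-4 Np, Qu 3,040 / 2,330, SWR 3,880 / 2,970.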

Empirical Results:

The response curves of the two example cavities without capacitive top loading were measured in the lab with a link coupled input load on the cavity. The singly loaded Q of each was then calculated from the plots of these curves shown in Figures 9 and 10. Though this is not directly comparable with the calculations of the unloaded Q above, it does place the magnitudes of the obtained results near the expected values.

            f0 (0 dB)    fL (-3 dB)    fu (-3 dB)    QL'
cavity 1    96.877       96.852        96.917        1938
cavity 2    95.640       95.611        95.690        1648

Table 4: Experimental data for determination of QL' (frequencies in MHz)

The loaded Q (QL') was then calculated from equation 35:

$Q_L' = \frac{f_0}{2\,\Delta f}$   (35)

The usual form of equation 35 in texts is:

$Q_L = \frac{f_0}{\Delta f}$   (36)

where $\Delta f$ is the difference in frequency between the 3 dB down points ($\Delta f = f_u - f_L$) of the response curve. The equation has been modified such that $\Delta f$ is, in this case, a measure of the difference in frequency between $f_0$ and the lower 3 dB down point ($\Delta f = f_0 - f_L$).

This change was made so that the capacitive loading effects of the link couple, observable in the non-symmetric nature of the right half of the response curves ($f_0$ to $f_u$), did not affect the calculation, and a closer approximation of the unloaded Q could be obtained (i.e. as close an approximation to the unloaded Q as is desired). This loading is not a critical concern since it may be effectively removed by synchronous switching of the source energy. Thus, a reasonable approximation of the singly loaded Q (the Q with input coupling reduced to a minimum), QL', may be obtained from the equation:

$Q_L' = \frac{f_0}{2\,(f_0 - f_L)}$   (37)

This is accomplished by minimizing the coupling (rotating the link such that it is parallel rather than perpendicular to the lines of magnetic flux) and measuring the response with a probe achieving slight coupling to the reactive fields at the open end of the cavity, as described above. The results obtained for QL' of each of the cavities are given in Table 4 (and Figures 9 and 10) with the data needed to calculate them.
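Equation (37) applied to the Table 4 frequencies reproduces the tabulated loaded Q values directly, as the few lines below confirm:

# Loaded Q from the measured response curves, eq (37): QL' = f0/(2*(f0 - fL)).
# Frequencies in MHz, taken from Table 4.
measurements = {
    "cavity 1": (96.877, 96.852),   # (f0, fL)
    "cavity 2": (95.640, 95.611),
}
for name, (f0, fl) in measurements.items():
    ql = f0 / (2.0 * (f0 - fl))
    print(f"{name}: QL' = {ql:.0f}")   # Table 4 lists 1938 and 1648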

Empirical measurement of the step-up is prohibitive, since any attempt to measure $V_{top}$ results in an additional capacitive load on the resonator and detunes it from resonance. A rough approximation of the potential can be made from the length of the brush discharge:

$V_{breakdown} \approx 10\ \text{kV/in} \times L_{dis}, \qquad \pm 10\%$   (38)

The brush discharge was observable as a blue-white discharge beginning at the tip of the egg shaped discharge electrode as a typical arc, then forking or branching out at its end (over the final 30% of its length). It was most easily observed with a pair of welder's goggles due to the surrounding white plasma (flame), which tended to wash out and mask the discharge outline. It is to be noted that all of the discharges occurred directly off the tip of the discharge electrodes with vertical orientation, regardless of the orientation of the resonator (i.e. if the resonator was tilted, the discharge was still maintained vertically).

The discharges were self-initiating if the average power was on the order of two hundred and fifty watts and the discharge electrode was clean. However, a discharge could be started by placing a metal object such as a screwdriver tip near the tip of the electrode, or by passing a lit match across the tip, with average powers on the order of 100 watts.

The brush discharges (not the plasma flames) of the respective cavities for the pulse modulated source were of the following dimensions, Table 5:

Table 5: Experimental values for determination of SWR'

Excitation

Device#

Ldis(in.)

Lflame(in.)

Pavg(Watts)

CW

cavity 1

1.0

2.0

200

CW

cavity 2

1.5

3.0

150

modulated

cavity 1

1.5

3.0

250

modulated

cavity 2

2.5

4.0

150

Example 1:

Ldis1 = 1.5 in. (3.8 cm)
Pavg = 250 W
Therefore: Vtop1 = 15.0 kV

Example 2:

Ldis2 = 2.5 in. (6.3 cm)
Pavg = 150 W
Therefore: Vtop2 = 25.0 kV

The input voltage at the base of the resonator impressed by the generator may be estimated by:

$V_{in} = \sqrt{R_{bg}\, P_{avg}}$   (39)

From Figure 8, the experimental foreshortening angle is $\phi = 10°$. So for cavity 1 with $Z_0 = 72.8\ \Omega$:

$R_{bg1} = 3.2\ \Omega$   (40)

From Table 5:

$V_{in1} = \sqrt{R_{bg}\,P_{avg}} = \sqrt{(3.2)(250)} = 28.3\ \text{V}$   (41)

For cavity 2 with $Z_0 = 136.2\ \Omega$:

$R_{bg2} = 11.2\ \Omega$   (42)

and:

$V_{in2} = \sqrt{(11.2)(150)} = 40.9\ \text{V}$   (43)


Figure 9. Response of cavity 1 (b/a = 3.4)


Figure 10. Response of cavity 2 (b/a = 9.2)

This yields the loaded step-up.

Example Cavity 1:

$\text{step-up}_1 = \frac{V_{top1}}{V_{in1}} = \frac{15.0\ \text{kV}}{28.3\ \text{V}} \approx 530$

Example Cavity 2:

$\text{step-up}_2 = \frac{V_{top2}}{V_{in2}} = \frac{25.0\ \text{kV}}{40.9\ \text{V}} \approx 611$

The step-up of cavity 2 should be approximately 25% higher than that of cavity 1 according to the analysis; as shown above, the experimental results fall within 10% of this predicted difference in the step-up.
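These step-up figures can be checked in a few lines. The sketch below takes the 10 kV per inch breakdown scaling of equation (38) and the base resistances of equations (40) and (42) as given:

# Loaded step-up = V_top / V_in, with V_top from the brush discharge length
# (eq 38: ~10 kV/in) and V_in = sqrt(R_bg * P_avg) from eqs (39)-(43).
import math

cases = {
    # name: (discharge length, in.; R_bg, ohms; P_avg, watts)
    "cavity 1": (1.5, 3.2, 250.0),
    "cavity 2": (2.5, 11.2, 150.0),
}
for name, (l_dis, r_bg, p_avg) in cases.items():
    v_top = 10e3 * l_dis                   # eq (38), +/- 10%
    v_in = math.sqrt(r_bg * p_avg)         # eqs (41), (43)
    print(f"{name}: V_top = {v_top/1e3:.1f} kV, V_in = {v_in:.1f} V, "
          f"step-up = {v_top/v_in:.0f}")
# Roughly 530 for cavity 1 and 610 for cavity 2, about 15% apart.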

Conclusions

The data did verify the ability to produce a discharge from the end of the inner conductor with relatively heavy loading on the cavity, and with unprecedentedly low power requirements compared with the power required to drive a Tesla coil to breakdown under similar loading. Once started, either of the resonators was able to maintain a plasma of at least an inch (2.5 cm) with only fifty to sixty watts of input. The power input could be reduced by reducing the coupling or the amplifier gain.

Recommendations have been made and commented on throughout the text to allow consideration of the topics as they are presented. The development of a full scale prototype RF power processing system is clearly indicated, to obtain more accurate data on the efficiencies achievable with state-of-the-art technologies. Additionally, a complete experimental determination of the loaded response of the cavity using pulsed excitation is necessary to evaluate the degree of efficiency obtainable with current technologies when using the phenomenon of switched resonant trapping. Such evaluation will allow targeting of new high power synchronous switch vacuum tube technologies for development.

Improved capacitive loads, storing larger charge densities than uninsulated spherical discharge electrodes, should be developed. The development of switches (controls) for high voltage discharges from such charge reservoirs is feasible and should be investigated as a new high energy technology. This new application would accommodate current directed energy technologies, which might in turn provide the needed impetus for further development of RF power processing technologies.

References

1. S.A. Schelkunoff. "The electromagnetic theory of coaxial transmission lines and cylindrical shields". Bell System Technical Journal. Vol. 13. p. 532-79. 1934.

2. F.E. Terman. Electronic and radio engineering. McGraw Hill. 1955. p. 91.

3. J.F. Corum and K.L. Corum. "A technical analysis of the extra coil resonator as a slow wave helical resonator", Proceedings, 2nd International Tesla Symposium. Colorado Springs p. 1-24. 1986.

4. W.W. Hansen. "A type of electrical resonator". Journal of Applied Physics. Vol.9. p. 654-63. Oct 1938.

5. T. Moreno. Microwave transmission design data. Dover. p.225-9. 1948.

6. D.H. Sloan. "A radio frequency high voltage generator". Physical Review. Vol. 47. p. 62-71. Jan., 1935.

LIST OF SYMBOLS

A = magnetic vector potential
a = (in.) inner conductor's outer radius
b = (in.) outer conductor's inner radius
C = (F/m) distributed equivalent capacitance
Ctop = top (end) loading capacitance of resonator
Csphere = capacitance of an elevated metal sphere
c = speed of light
d = tap distance from shorted end of line
E = electric field
f = (Hz, MHz) frequency
f0 = circuit self-resonant frequency
fL, fu = (Hz) lower and upper 3 dB corner frequencies
G = (Siemens/m) distributed equivalent shunt conductance
H = magnetic field
i = current
I = current
K = tapped transmission line coupling constant
k = coefficient of coupling
kc = critical coupling coefficient
L = (H/m) distributed equivalent inductance
la = (in.) length of inner coax conductor
lb = (in.) length of outer conductor (if lb > la)
M = mutual inductance coupling constant
N = number of quarter wavelengths
Q = resonant circuit quality factor
Qu = unloaded Q
QL' = semi-loaded Q (includes output end loading)
QL = fully loaded Q (includes input and output end loading)
Qload = Q of the output (open end) load alone
r = (m) radius of top loading spheroidal capacitance
R = (Ω/m) distributed equivalent resistance
Rs = skin effect resistance
R0 = real portion of characteristic impedance
Rbase = resistance referred to the base of the transmission line
Rbg = unloaded base equivalent transmission line resistance
S = VSWR
T = (sec.) phase period
Tbeat = (sec.) beat period of exchange of energy
Tfill = (sec.) cavity resonator fill time
Vf = velocity factor of a distributed circuit
v = longitudinal phase velocity
XL = (Ω) load reactance of resonator (transmission line)
Y = (mhos) admittance
Z = (Ω) impedance
Z0 = characteristic impedance of distributed circuit
Zin = impedance referred to the base of the line
Zo.c. = impedance referred to open circuited end of line
Zs.c. = impedance referred to shorted end of line

GREEK SYMBOLS

α = propagation attenuation constant
β = propagation phase constant
Γ = transmission line reflection coefficient
|Γ| = magnitude of reflection coefficient
γ = (α + jβ) longitudinal propagation constant
ε = dielectric permittivity
ε0 = free space dielectric permittivity
η = vacuum tube plate circuit efficiency
θ = transmission line electrical length
λ = wavelength
λ0 = free space wavelength
λg = wavelength in propagating media and conductor
σ = conductivity
J = current density
σd = dielectric conductance
τ = pulse duration
τc = cavity fill time rate constant
Φ = magnetic flux
φ = transmission line reflection coefficient phase
ω = radian rotational frequency