7. Chaos and physics

Hao Bai-lin

Although the word chaos first appeared as a physical term in Ludwig Boltzmann's assumption of molecular chaos more than one hundred years ago, the concept of chaos in its modern sense re-emerged in physics rather late. In fact, at the turn of the nineteenth and twentieth centuries physics had come very close to the discovery of chaos in both conservative and dissipative systems. In conservative systems there were the three-body and other non-integrable problems of dynamics, in particular the problem of the stability of the solar system, and the problems raised by the foundations of the then newly formulated statistical mechanics. In dissipative systems there was the challenge of turbulence, which had been calling for a fundamental explanation ever since the first quantitative measurements made by Osborne Reynolds in 1883.

Nevertheless, this delay was no accident. Mechanical determinism had dominated since the time of Newton, and irregular phenomena were dismissed as manifestations of external noise beyond the experimentalist's control. The mathematical theory of dynamical systems was still to be developed, and a powerful new weapon in the modern scientific arsenal - the high-speed electronic computer - had to wait several decades to see the light. Looking back, therefore, it was quite natural that the establishment of quantum mechanics and relativity, the two greatest physical theories of the first half of the twentieth century, together with the subsequent rapid development of high technology and the demands of the two World Wars, absorbed the efforts of the overwhelming majority of physicists. The difficult problems of "dynamics" were left to mathematicians for calm study under the names of the qualitative theory of differential equations, the theory of dynamical systems, ergodic theory, and so on. In fact, mathematicians did a good job, preparing the ground for the unprecedented interdisciplinary development this symposium has highlighted.

Determinism versus probabilism

To describe one and the same nature, physical science has had two systems of description: deterministic and probabilistic. Celestial mechanics, based on Newtonian mechanics and the universal law of gravitation, served as the touchstone for determinism. The deterministic view culminated on the night of 23 September 1846, when the German astronomer Johann Galle discovered Neptune at the first attempt, following the theoretical prediction of the French astronomer Le Verrier. This was a triumph for Laplacian mechanical determinism: physics seemed really able to predict the future of the universe, provided the present state was known with high enough precision.

However, in roughly the same period the development of heat engines pushed forward the study of the properties of gases and fluids. Here people used macroscopic notions such as pressure, temperature, and volume, and looked for their relations empirically. The laws of thermodynamics, based on an enormous collection of empirical facts, were at least as good as the laws of mechanics. The attempt to justify them by considering the elementary processes taking place in macroscopic bodies led to the establishment of statistical mechanics. A key point of the theory was to introduce a probabilistic element at one step or another. Traditionally, physicists have preferred the deterministic approach to the probabilistic one and adopted the latter only reluctantly, when the complexity of the problem and the lack of knowledge or data left no better alternative.

The twentieth century inherited this tradition. Although quantum mechanics challenged classical determinism, probabilistic concepts were invoked in a rather pragmatic way, in interpreting and relating theoretical calculations to experimental observations. In any case, the basic equation of quantum mechanics remains linear and entirely deterministic. Consequently, determinism continued to thrive in physics, with Albert Einstein as its standard-bearer. Although by the end of the nineteenth century Henri Poincaré had already recognized that stochastic behaviour is inherent in Newtonian mechanics, and many mathematicians complemented Poincaré's thesis with new proofs and examples, most physicists remained ignorant of this development. As late as 1981, A. S. Wightman complained that "... it is really a pedagogical scandal that after more than three quarters of a century the simple and enlightening ideas of Poincaré do not appear in elementary mechanics books written for physics students. Will we have to wait for the third millennium to see an analytic mechanics book for undergraduates in which there is described what really happens when two harmonic oscillators are coupled together with a non-linear coupling?" [1] Wightman went further: "One might almost think, in the spirit of the day, that there is a conspiracy against Poincaré." Fortunately, we do not have to wait for the next century. Many texts have appeared, if not written for undergraduate students, then at least for their professors.1

No wonder, then, that mathematicians like Jim Yorke and ecologists like Robert May, among others, were destined to wake physicists from their 300-year dream of determinism. Yet in the beginning many physicists took chaos for randomness and turned their backs. Only a handful of not-so-orthodox physicists grasped the new development with enthusiasm.

A class of ubiquitous phenomena

Due to the lack of theoretical understanding, chaotic phenomena must have been overlooked again and again in nonlinear electric circuits, in acoustic cavitation noise, in mechanical systems, and in many other experiments.

Once convinced of the existence of chaos, physicists saw it everywhere. They discovered chaotic behaviour in laboratories, in computer experiments, and in observations of natural processes. The explosion of the literature on chaos was to a great extent due to the efforts of physicists.2 Since natural phenomena are better left to our geophysicist colleagues, and there is no shortage of computer graphics, I will confine myself to a discussion of the situation in laboratory experiments.

As a theorist, I divide physical experiments on chaos roughly into three groups, according to how accurately the observations agree with theoretical predictions. The first group consists of experiments where the measurements fit the theoretical framework very well. In the first place I have in mind nonlinear RLC circuits with periodic forcing. The experimental setup may be described precisely by a system of ordinary differential equations, which in turn may be studied in detail on computers. Here we have not only qualitative agreement on the various routes to chaos, but also numerical confirmation of the universal exponents.
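
As a minimal sketch of such a setup (the equation and all parameter values here are illustrative, not taken from any particular experiment), one may integrate a Duffing-type forced oscillator, a standard stand-in for a driven nonlinear RLC circuit, and sample it stroboscopically once per forcing period:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Duffing-type oscillator as a stand-in for a periodically forced
# nonlinear RLC circuit: x'' + gamma*x' + x + beta*x**3 = F*cos(omega*t).
gamma, beta, F, omega = 0.1, 1.0, 5.0, 1.2   # illustrative values

def duffing(t, y):
    x, v = y
    return [v, -gamma * v - x - beta * x**3 + F * np.cos(omega * t)]

T = 2 * np.pi / omega                   # forcing period
t_eval = np.arange(0, 400 * T, T / 64)  # 64 samples per forcing period
sol = solve_ivp(duffing, (0, t_eval[-1]), [0.1, 0.0],
                t_eval=t_eval, rtol=1e-8, atol=1e-10)

# Stroboscopic (Poincare) section: one sample per forcing period,
# with the first 200 periods discarded as a transient.
strobe = sol.y[0][::64][200:]
print(strobe[:8])   # a period-n response repeats every n entries
```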

The third group includes experiments that show much more variation and complication; only some facets resemble one or another aspect of the theoretical models. Here we have mainly experiments on hydrodynamic instabilities and transitions to turbulence. The study of turbulence has returned to physical laboratories thanks to new measuring techniques and new ideas, including laser Doppler velocimetry, low-temperature measurement with effective suppression of ambient noise, renormalization group and fractal considerations, etc. Yet positive results are restricted to confined geometries (e.g., Rayleigh-Bénard or Couette-Taylor flows) and to the weakly turbulent regime, i.e., the first few steps in the transition to turbulence. Fully developed turbulence, especially in open flows, remains a puzzle for science, and the relevance of strange attractors to turbulence has been questioned. Nonetheless, the term turbulence is now widely used in physics to denote phenomena showing intrinsically irregular, non-periodic temporal, spatial, or spatio-temporal behaviour. The concept of turbulence is by no means less general than the concept of the solid state.

The second group, of intermediate cases, is of the most interest. Many well-controlled experiments on acoustic, optical, chemical, and solid-state "turbulence" fall into this group. Unable to discuss these in any detail here, I simply name some of them: acoustic cavitation and supersonic absorption in liquid helium, optical bistability in all-optical and hybrid devices, anomalous dynamical noise in Josephson junctions and SQUIDs (superconducting quantum interference devices), microwave ionization of hydrogen atoms and dissociation of molecules, electron-hole plasma and electronic transport in semiconducting germanium, spin-wave instability in YIG (yttrium iron garnet) spheres, and the Belousov-Zhabotinskii reaction carried out in physical laboratories.

I would like to make only one comment on the "physics" of these experiments. Many experimental setups in physics may be regarded as frequency transformers, and chaos may be viewed as a long-overlooked but rather common regime of nonlinear oscillation. If the frequency components of the input and output signals are the same, the system is essentially linear. In the presence of nonlinearity, new frequency components may appear. Sums and differences of the input frequencies, in particular higher harmonics, are trivial; they appear no matter how weak the nonlinearity. Subharmonics with well-defined thresholds, however, are not so easy to explain. They usually appear as the first tone in the overture to chaos. Any experimental situation where the occurrence of subharmonics was known before deserves a further search for chaos.
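
The appearance of a subharmonic can be illustrated in a few lines with the quadratic map, treating each iteration as one forcing period (a toy demonstration under that assumption, not a model of any specific experiment): below the first period-doubling threshold no subharmonic is present, while in the period-2 regime a sharp spectral line appears at half the forcing frequency.

```python
import numpy as np

def subharmonic_power(mu, n=1024, transient=1000):
    # Iterate x -> 1 - mu*x**2, treating each step as one forcing period.
    x = 0.3
    for _ in range(transient):
        x = 1.0 - mu * x * x
    orbit = np.empty(n)
    for i in range(n):
        orbit[i] = x
        x = 1.0 - mu * x * x
    power = np.abs(np.fft.rfft(orbit - orbit.mean())) ** 2
    return power[-1]   # spectral line at half the forcing frequency

print(subharmonic_power(0.5))  # below the threshold mu = 0.75: ~0
print(subharmonic_power(1.0))  # period-2 regime: a large subharmonic line
```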

In a word, chaos has played an eye-opening role for physicists. Since the early 1980s, strange attractors have no longer been strange objects.

The impact of physics on chaos

When physicists started to dig into chaos, they began to extend their vocabulary with terms that had been in use mainly among mathematicians, such as hyperbolicity, stable and unstable manifolds, homoclinic and heteroclinic intersections, Smale's horseshoe, and symbolic dynamics. "Aha! you are playing with mathematics," some sceptics criticized colleagues who had become deeply involved in the new game. But there is nothing strange in this situation. In the early 1920s few physicists knew about matrices and eigenvalue problems,3 not to mention linear operators in Hilbert space - notions now taught to every physics major.

However, physicists are not passive learners; they have enriched the study of chaos significantly with their own style of thinking and working. Physicists have added much flesh to the mathematical skeleton and have challenged mathematicians with many new problems.

The study of chaos emerged just after the success of the theory of phase transitions and critical phenomena, where the notions of scaling invariance and universality played a decisive role. Physicists had just mastered the renormalization group technique and were able to calculate various universal critical exponents from the linearized renormalization group equations. In this way they complemented structures known to mathematicians for many years, such as the period-doubling cascade, with new invariant exponents, now understood and determined theoretically to many digits. Various nonlinear models and systems have been divided into universality classes. The pioneering role in this approach was played by Mitchell Feigenbaum.
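
The flavour of this programme can be conveyed in a short computation: locate the superstable parameters μ_k of the period-2^k orbits of the quadratic map x_{n+1} = 1 - μ x_n^2 and form the ratios of successive parameter intervals, which converge to the universal constant δ = 4.6692.... (A minimal sketch; the map form is the one used later in this chapter, and the search step and tolerances are illustrative.)

```python
from scipy.optimize import brentq

def critical_orbit(mu, n):
    # n-th iterate of the critical point x = 0 under x -> 1 - mu*x**2
    x = 0.0
    for _ in range(n):
        x = 1.0 - mu * x * x
    return x

# Superstable parameters mu_k: the critical point is periodic with
# period 2**k, i.e. critical_orbit(mu_k, 2**k) = 0.
mus = []
root = 0.5                 # start the search below mu_1 = 1
for k in range(1, 7):
    n, step = 2 ** k, 1e-4
    a = root + step        # look for a sign change above the last root
    while critical_orbit(a, n) * critical_orbit(a + step, n) > 0:
        a += step
    root = brentq(lambda mu: critical_orbit(mu, n), a, a + step, xtol=1e-13)
    mus.append(root)

for k in range(1, len(mus) - 1):
    delta = (mus[k] - mus[k - 1]) / (mus[k + 1] - mus[k])
    print(f"delta_{k + 1} ~ {delta:.4f}")   # approaches 4.6692...
```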

Transient phenomena are an essentially physical notion, closely related to the finite precision of measurements. Here too a lot has been learnt from critical phenomena, in particular from so-called critical dynamics. There is critical slowing down near every bifurcation, and the slowing down itself is described by another universal critical exponent. In fact, my first modest contribution to the study of chaos was a two-page note [4] in which the slowing-down exponent at any finite period-doubling bifurcation point was shown to take the "mean field" value Δ = 1. This almost trivial result was followed by another simple development: at any finite bifurcation point, due to critical slowing down, numerical calculations always yield a converging cluster of points whose "operational dimension" may be shown by box-counting arguments to be 2/3 = 0.666.... To reconcile this with the value 0.538... at the accumulation point of the period-doubling cascade, one is led naturally to a scaling function D(k, ε) [5], where k is the order of the bifurcation and ε the box size. It so happens that interchanging the order of the limits k → ∞ and ε → 0 leads to the two different numbers given above. This kind of "crossover" is familiar from the theory of critical phenomena.
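
The box-counting argument behind the value 2/3 is easy to reproduce numerically (a minimal sketch of the argument, not of the original computation): near a period-doubling bifurcation the iterates approach the orbit as n^(-1/2), and the box-counting dimension of such an algebraically converging sequence is 1/(1 + 1/2) = 2/3.

```python
import numpy as np

# At a finite period-doubling bifurcation the deviation of the iterates
# from the orbit decays algebraically, ~ n**(-1/2) (critical slowing down).
# Box-counting such a cluster of points gives dimension 1/(1 + 1/2) = 2/3.
points = np.arange(1, 500_000) ** -0.5

epsilons = [1e-2, 1e-3, 1e-4]
counts = [np.unique(np.floor(points / eps)).size for eps in epsilons]

for i in range(len(epsilons) - 1):
    slope = np.log(counts[i + 1] / counts[i]) / np.log(epsilons[i] / epsilons[i + 1])
    print(f"slope between eps = {epsilons[i]:.0e} and {epsilons[i+1]:.0e}: {slope:.3f}")
```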

As a third example, one may take the role of noise. External noise is unavoidable in any real experiment and in computer simulations; chaos always appears dressed in noise. Therefore, one must be able to distinguish chaotic behaviour from random noise. I will not recount the significant progress achieved in recent years along this line, but would rather emphasize that noise also plays a positive role in constructing a more complete renormalization group description of the transition to chaos. It resembles the external magnetic field in the conventional theory of phase transitions, i.e., a field conjugate to the order parameter.

The last but not least issue I would like to mention is that physicists have played an active role in bringing together the experiments and the theory of chaos and in popularizing the new ideas among colleagues in other branches of science. Reconstruction of phase portraits from experimental data, as well as the extraction from time series of invariant characteristics such as dimensions, entropies, and Lyapunov exponents, has become a minor industry. Scientists are now in a position, in principle, to distinguish chaos from noise, to compare the strangeness of chaotic attractors, and so on. Chaos does not diminish the forecasting power of science, as a literal reading of the "butterfly effect" explained by E. N. Lorenz might suggest. It merely destroys the illusion of long-term forecasting that never existed. In fact, the study of chaos improves short-term prediction by incorporating the dynamical aspects of the underlying processes, and at the same time it provides better long-term forecasting of averaged quantities by making use of the various invariant characteristics.
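
The reconstruction step itself is almost trivial to sketch: from a single observed variable x(t) one builds delay coordinates (x(t), x(t + τ), x(t + 2τ)), which give a faithful image of the attractor. The sketch below does this for the x-component of the Lorenz system; the delay and integration parameters are conventional illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the Lorenz system and keep only the x-component, as if it
# were the single observable recorded in an experiment.
def lorenz(t, s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (r - z) - y, x * y - b * z]

dt = 0.01
t_eval = np.arange(0, 100, dt)
sol = solve_ivp(lorenz, (0, t_eval[-1]), [1.0, 1.0, 1.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)
x = sol.y[0][2000:]          # discard the transient

# Delay-coordinate reconstruction: (x(t), x(t + tau), x(t + 2*tau)).
tau = 10                     # delay in samples, i.e. 0.1 time units
embedded = np.column_stack([x[:-2 * tau], x[tau:-tau], x[2 * tau:]])
print(embedded.shape)        # points tracing out the reconstructed attractor
```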

Speaking about the popularization of mathematical ideas, I would like to draw a little on personal experience. Having found rather complicated bifurcation and chaos "spectra" in the parameter space of several systems of ordinary differential equations, we tried hard to understand the global structure of the parameter space and to find the systematics of the periodic solutions that were determined with confidence in numerical experiments. In doing so we came across an old mathematical subject, symbolic dynamics, which has been in use in dynamical systems theory and ergodic theory since the 1920s. We soon realized that symbolic dynamics is just what physicists call coarse-graining, and that it is a rigorous way to describe dynamics with finite precision. Therefore, it agrees with the spirit of physics very well. In fact, symbolic dynamics can be cast into a useful tool for practitioners in the physical sciences.

To show the power and beauty of symbolic dynamics, let me take the celebrated Lorenz model [6]. It is an autonomous system of three ordinary differential equations. Therefore, there is no natural unit of time with which to measure the periodic motions and name their periods, as one usually does with periodically forced systems. Skimming the hundred-odd publications on the Lorenz system, one sees that no authors have used an absolute nomenclature for the observed periods. Now this has been done by using a symbolic dynamics of three letters. Moreover, the majority of the stable periodic solutions of the Lorenz model are ordered in the same way as those of the one-dimensional antisymmetric cubic map

x_{n+1} = A x_n^3 + (1 - A) x_n.

Furthermore, symmetry breaking in periodic regimes and symmetry restoration in chaotic regimes, observed in many systems with a discrete symmetry since 1981 and known in the literature under various names (e.g., suppression of period doubling by symmetry breaking), may be explained easily in terms of symbolic dynamics. For example, only orbits of even period can undergo symmetry breaking, but not all of them are capable of doing so; the selection rules are given by simple symbolic dynamics considerations. To emphasize that at least a subset of symbolic dynamics is no longer an abstract chapter of mathematics, I usually precede the title with an adjective such as elementary [7] or applied [8].
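
As a minimal illustration of this coarse-graining (the parameter value is illustrative; at A = 4 the map reduces to the Chebyshev polynomial 4x^3 - 3x and is fully chaotic), one partitions the interval at the two critical points of the antisymmetric cubic map and reads off the three-letter symbolic name of an orbit:

```python
import numpy as np

# Antisymmetric cubic map f(x) = A*x**3 + (1 - A)*x on [-1, 1].
# At A = 4 (illustrative) it reduces to the Chebyshev polynomial
# 4x**3 - 3x, whose dynamics are fully chaotic.
A = 4.0
xc = np.sqrt((A - 1) / (3 * A))   # the two critical points +/- xc = +/- 0.5

def f(x):
    return A * x**3 + (1 - A) * x

def symbol(x):
    # Three-letter coarse-graining by the monotone branches of the map.
    if x < -xc:
        return "L"
    if x > xc:
        return "R"
    return "M"

x = 0.3
for _ in range(500):              # iterate away from the initial point
    x = f(x)
word = ""
for _ in range(20):
    word += symbol(x)
    x = f(x)
print(word)                       # the symbolic name of the orbit segment
```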

The problem of quantum chaos

In less than 20 years the chaotic "epidemic" has swept over almost all subfields of classical physics, but quantum mechanics seems to be more or less immune to this disease. Chaos is essentially a classical phenomenon. This is, of course, a deep fact whose meaning has yet to be fully appreciated.

Speaking about quantum mechanics, one has to distinguish two kinds of problems: time-independent and time-dependent. In the time-independent case the problem is the quantization of a classically non-integrable Hamiltonian. Apparently, A. Einstein was the first to raise the question. In a 1917 paper [9] Einstein pointed out that the Sommerfeld quantization rule is applicable only to integrable systems, where there are enough constants of motion to furnish the quantum numbers. The study of the quantization of non-integrable Hamiltonians has led back to the random matrix theory of energy level distributions in compound nuclei, developed in the 1960s. The only progress, so to speak, consists in the understanding that non-integrability is more essential than the number of degrees of freedom.
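
The random-matrix fingerprint is easy to demonstrate: for 2x2 matrices of the Gaussian orthogonal ensemble the level-spacing distribution follows the Wigner surmise P(s) = (πs/2) exp(-πs^2/4), exhibiting level repulsion, whereas uncorrelated (integrable-like) levels follow the Poisson law exp(-s). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# 2x2 GOE matrices [[a, b], [b, c]]: Gaussian entries, the off-diagonal
# variance half the diagonal one; spacing = sqrt((a-c)**2 + 4*b**2).
a = rng.normal(size=n)
c = rng.normal(size=n)
b = rng.normal(scale=np.sqrt(0.5), size=n)
s = np.sqrt((a - c) ** 2 + 4 * b ** 2)
s /= s.mean()                     # normalize to unit mean spacing

hist, edges = np.histogram(s, bins=30, range=(0.0, 3.0), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
wigner = 0.5 * np.pi * mid * np.exp(-0.25 * np.pi * mid ** 2)

# Level repulsion: P(s) -> 0 as s -> 0, unlike the Poisson law exp(-s).
print("   s   empirical   Wigner surmise")
for m, h, w in zip(mid[:6], hist[:6], wigner[:6]):
    print(f"{m:5.2f}   {h:9.3f}   {w:9.3f}")
```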

Time-dependent problems are closer to the spirit of classical chaos, as chaos appears in the t → ∞ limit of dynamical systems. Here one should admit that, in fact, very little is known about time-dependent quantum mechanics, apart from transition theory under time-periodic perturbations. The correspondence principle has not been established in the t → ∞ limit. Classical mechanics happens to be a singular limit of quantum mechanics as the Planck constant h goes to zero, and the singularity varies from case to case, i.e., it is not universal. The typical motion of a quantum system is not chaotic, but there may be some manifestation of classical chaos in a finite time or frequency range - or, as some people put it, there may be quantum signatures of classical chaos. M. V. Berry described this situation concisely by saying "quantum chaology, not quantum chaos." By quantum chaology he means "the study of semiclassical, but nonclassical, phenomena, characteristic of systems whose classical counterparts are chaotic" [10].

In the context of quantum chaos, it may be appropriate to recall a comment made by M. Born in the mid-1950s [11]. Born pointed out that, measured in its own natural unit of time, the microscopic world, say an atom, is much longer-lived than the macroscopic world, e.g., the solar system. With this comment in mind, we see the contrast: for the short-lived systems we speak about the infinite-time limit and chaos, but in the eternal microworld there is no chaos. The implication of this contrast has yet to be understood.

Is there new physics in chaos?

This question has to be considered from two different angles. The totality of our physical knowledge may be represented by the volume enclosed in a shuttle-shaped surface (see fig. 1). The two sharp tips of the shuttle represent the two generally recognized "frontiers" of physics, i.e., the study of the microworld and the exploration of the universe at cosmological scales, both requiring ever more sophisticated and expensive equipment, which has made them the privilege of a select community of scientists. The majority of physicists, however, work at the much wider real frontier: the investigation of the macroworld where we all live, and where success in basic research generally feeds back quickly and positively into society. In dealing with the macroworld, we face the problem of complexity. Indeed, the study of chaos has opened new horizons in the exploration of complexity. One of the most instructive morals we have learnt from chaos is that seemingly complex temporal behaviour, spatial patterns, or a combination of both may turn out to be the result of the repeated application of simple elementary rules or actions.


Fig. 1 The frontiers of physics (schematic)

We have seen the use of the one-dimensional cubic map. An even simpler and well-known example may be more instructive. The validity of the one-dimensional quadratic map

x_{n+1} = 1 - μ x_n^2

goes far beyond modelling insect populations without generation overlap. This is a lucky case in which many conclusions drawn from such a simple model need not be restricted to one-dimensional systems. We have learnt much from this innocent iteration of the parabola, yet its richness has not been exhausted. In particular, the alternation of periodic and chaotic regimes in many periodically forced systems may be understood with the help of this model, at least in certain parts of the parameter space.
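
A quick parameter scan conveys this alternation (a minimal sketch; the transient length and the period-detection tolerance are illustrative choices):

```python
def regime(mu, transient=2000, max_period=16, tol=1e-9):
    # Decide whether x -> 1 - mu*x**2 has settled onto a short cycle.
    x = 0.1
    for _ in range(transient):
        x = 1.0 - mu * x * x
    orbit = []
    for _ in range(2 * max_period):
        orbit.append(x)
        x = 1.0 - mu * x * x
    for p in range(1, max_period + 1):
        if all(abs(orbit[i] - orbit[i + p]) < tol
               for i in range(len(orbit) - p)):
            return f"period {p}"
    return "chaotic (or period > 16)"

# Periodic and chaotic regimes alternate as mu increases;
# mu = 1.76 sits inside the period-3 window of the chaotic region.
for mu in [0.5, 0.8, 1.3, 1.5, 1.76, 2.0]:
    print(f"mu = {mu:4.2f}: {regime(mu)}")
```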

I would like to emphasize the importance of one-dimensional mappings by comparing them with two other paradigms in physics: the two-body problem and Brownian motion. From the two-body problem one learns the essentials of the deterministic approach in physics, from the Kepler problem in celestial mechanics, through the perihelion motion of Mercury in the theory of relativity and the hydrogen atom in quantum mechanics, to the Lamb shift in quantum field theory. On Brownian motion is built the whole stochastic approach in physics, from the Langevin and Fokker-Planck equations to the path integral formulations. One-dimensional mappings provide a paradigm for understanding the physics of complexity. Who dares say that there is no new physics in chaos? It is more tempting to say the contrary: chaos not only brings about new physics, but also calls for a re-examination of the fundamental principles of physics. In this way we come to the second angle mentioned at the beginning of this section, which we discuss next.

Does chaos bring a new fundamental principle into physics?

Both the completely deterministic point of view and the purely probabilistic approach implicitly require a certain infinite process as a prerequisite. On the one hand, accepting the notion of an exact Newtonian orbit implies that, in principle, it is possible to measure the orbit with unlimited precision. Once we postulate a finite precision ε in the measurements, it becomes impossible to distinguish a purely "deterministic" orbit from the same orbit with random noise of order ε/2 added.

On the other hand, a finite sequence of N random numbers may pass a randomness test only within a tolerance of order 1/√N. In the N → ∞ limit the tolerance goes to zero, i.e., the sequence may be said to be purely random. As long as N remains finite, one may in principle generate the same sequence of numbers by a deterministic process. In fact, the pseudo-random number generators used in Monte Carlo calculations all work in this way.
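
The point is easily made concrete with the classical Park-Miller ("minimal standard") linear congruential generator: a completely deterministic rule produces a bit sequence whose sample mean deviates from 1/2 by no more than the O(1/√N) tolerance that any finite-sample frequency test must allow.

```python
import math

# Park-Miller "minimal standard" generator: x -> 16807 * x mod (2**31 - 1).
M, A = 2 ** 31 - 1, 16807

def lcg_bits(seed, n):
    x, bits = seed, []
    for _ in range(n):
        x = (A * x) % M
        bits.append(1 if x > M // 2 else 0)
    return bits

N = 100_000
mean = sum(lcg_bits(12345, N)) / N

# A frequency ("monobit") test can only demand that |mean - 1/2| be of
# order 1/sqrt(N); within that tolerance this deterministic sequence
# is indistinguishable from a truly random one.
print(f"mean = {mean:.5f}, tolerance ~ {1 / math.sqrt(N):.5f}")
```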

The truth lies somewhere between the purely deterministic and the entirely stochastic. However, in physics we do not have a fundamental principle reflecting this essential fact. A fundamental principle in physics possesses certain characteristic features. First, it cannot be proved or derived from other principles; only its consequences may be tested by ever more precise experiments, yet it can never be verified once and for all. Secondly, it may be formulated as a negative statement, and negation is often stronger and more general than affirmation. Thirdly, there is usually a fundamental physical constant associated with it. Checked against these features, quantum mechanics and relativity each have their well-known fundamental principles, but statistical physics is a fundamentally important theory without a fundamental principle. The law of large numbers is a provable mathematical theorem and hence cannot be taken as a fundamental principle. The ergodic hypothesis, as the theory develops, departs farther and farther from the foundations of statistical mechanics. Yet the deeper we enter the world of complexity, the more we rely on statistical physics. This is a somewhat unsatisfactory situation.

Do we need another fundamental principle in physics in order to make the physics of complexity a lawful part of our science? If so, what is it? Here I come to the most speculative part of my presentation, and I will describe only some very preliminary thoughts. We observe and measure the macroworld over finite time spans with finite precision to obtain finite sets of data, yet we aim at a rigorous description. We use computers of finite word length and finite memory to simulate physical processes in a finite number of steps. Everywhere we encounter finiteness. As we have already seen, with the introduction of finiteness the border between determinism and probabilism vanishes. All this hints at the necessity of raising finiteness to a fundamental principle.

One may even suggest various formulations of this principle. One formulation in deterministic language might be: in nature there is no process more stochastic than the Kolmogorov flow (in fact, this was a conjecture of Arnold; the K-flow is a step on the ladder of ergodicity). Another formulation in probabilistic language might be: there is no white noise in nature. Both formulations are physical statements that can be tested only by experiment. We do not know whether they are equivalent, nor do we know how to perform the experiments. One may go even further and speculate that the fundamental physical constant associated with this principle has in fact existed for more than a century: it is Boltzmann's constant k. These thoughts are illustrated in the schematic drawing of figure 2.


Fig. 2 Schematic drawing showing the possible relations of the fundamental physical theories. The horizontal dashed line divides the regions of zero and nonzero Planck's constant h; the vertical line separates the regions of zero and nonzero 1/c, c being the velocity of light; inside the circle Boltzmann's constant k is zero, outside it k ≠ 0

The theory of relativity was created by A. Einstein almost single-handedly. Quantum theory was formulated in the hands of several prominent masters of physics. The science of complexity, by its very nature, is being shaped by thousands of workers from various disciplines. The study of chaos will certainly help us understand the physics of complexity, with finiteness raised to a fundamental principle. At present, we are far from the completion of this programme.

Acknowledgements

I would like to take this opportunity to thank Professor Peng Huan-wu, who has constantly inspired me by asking about the fundamental principle underlying statistical physics and by emphasizing the importance of finiteness. Our own research mentioned in this talk has been supported by the Chinese Natural Science Foundation.

Notes

1. See, e.g., the first part of Ref. [2], where more than 200 books are listed.

2. In Ref. [2] more than 7,000 papers are listed, of which about 2,800 contain the word chaos or chaotic or strange attractor in their title.

3. See, e.g., the description in Mehra and Rechenberg's book [3].

References

[1] A. S. Wightman, in Perspectives in Statistical Physics, ed. H. J. Raveche, North-Holland (1981), p. 347.

[2] Zhang Shu-yu, Bibliography on Chaos, vol. 5 of Directions in Chaos, World Scientific Publishing Co. (1991).

[3] J. Mehra and H. Rechenberg, The Historical Development of Quantum Theory, vol. 3, Springer-Verlag (1982).

[4] B.-L. Hao, Phys. Lett. A86: 287-288 (1981).

[5] G. Hu and B.-L. Hao, Commun. Theor. Phys. 2: 1473 (1983).

[6] E. N. Lorenz, J. Atmos. Sci. 20: 130 (1963).

[7] B.-L. Hao, Elementary Symbolic Dynamics and Chaos in Dissipative Systems, World Scientific, (1989).

[8] W.-M. Zheng and B.-L. Hao, Applied Symbolic Dynamics, in Experimental Study and Characterization of Chaos, vol. 3 of Directions in Chaos, World Scientific (1990).

[9] A. Einstein, Verh. Dtsch. Phys. Ges. 19: 82 (1917).

[10] M. V. Berry, Physica Scripta 40: 355 (1989).

[11] M. Born, "Is Classical Mechanics in Fact Deterministic?" Phys. Blätter 11 (9): 49 (1955); reprinted in Physics of My Generation, Springer-Verlag (1964).