Schweber, S. S. (1994). QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga. Princeton: Princeton University Press.
Added by: Dominique Meeùs (2012-08-25 08:51:44)
Resource type: Book
ID no. (ISBN etc.): ISBN 0-691-03327-7
BibTeX citation key: Schweber1994
Categories: History, Physics
Keywords: quantum electrodynamics
Publisher: Princeton University Press (Princeton)
pp. xxi-xxvii, Introduction
In this introduction I sketch the history of quantum electrodynamics from 1927 to the late 1940s in order to delineate the main trends.
The foundational aspects of physics during the first half of the twentieth century have been principally concerned with the characterization of the “elementary” constituents of matter and the elucidation of the nature of the space-time framework in which their interactions take place. The discovery of the electron by Thomson, the precise characterization of its charge by Millikan, the demonstration of the nuclear atom by Rutherford, the photon hypothesis of Planck and Einstein, and Bohr’s explanation of the spectrum of hydrogen were some of the landmarks of that history. These early efforts culminated in the mid-twenties with the formulation of quantum mechanics by Heisenberg, Dirac, and Schrödinger (Kuhn 1978; Heilbron 1975; Segrè 1980; Pais 1986; Mehra and Rechenberg 1982-1988).
The revolutionary achievements in the period from 1925 to 1927 stemmed from the confluence of a theoretical understanding (the description of the dynamics of microscopic particles by quantum mechanics), and the apperception of an approximately stable ontology (electrons and nuclei). Approximately stable meant that these particles (electrons, nuclei), the building blocks of the entities (atoms, molecules, simple solids) that populated the domain that was being carved out, could be treated as ahistoric objects (whose physical characteristics were seemingly independent of their mode of production and whose lifetimes could be considered as essentially infinite). These entities could be assumed to be “elementary” pointlike objects that were specified by their mass, spin, and statistics (whether bosons or fermions), and by electromagnetic properties such as their charge and magnetic moment.
Quantum mechanics came to be seen as correctly describing that domain of nature delineated by Planck’s constant (h): any system whose characteristic length (l), mass (m), and time (t) were such that the product ml²/t was of the order of h, and such that l/t was much smaller than c, the velocity of light, was quantum mechanical and was to be described by the new nonrelativistic quantum mechanics.
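The criterion can be checked numerically. The sketch below (not from the text) applies it to hydrogen-atom scales, using the standard values of the electron mass, the Bohr radius, and the atomic unit of time; since the criterion is only an order-of-magnitude statement, comparing against the reduced constant ħ rather than h is immaterial.

```python
# Order-of-magnitude check of the criterion ml^2/t ~ h and l/t << c
# for a hydrogen atom. Numerical values are standard constants,
# not figures given in the source.
HBAR = 1.055e-34   # reduced Planck constant, J*s
C    = 2.998e8     # speed of light, m/s

m = 9.109e-31      # electron mass, kg
l = 5.29e-11       # Bohr radius, m
t = 2.42e-17       # atomic unit of time (hbar / Hartree energy), s

action = m * l**2 / t   # characteristic action of the system
speed  = l / t          # characteristic speed of the electron

print(f"action / hbar = {action / HBAR:.2f}")  # ~1: quantum mechanical
print(f"speed / c     = {speed / C:.4f}")      # ~1/137: nonrelativistic
```

The characteristic action comes out close to ħ and the characteristic speed close to c/137, so hydrogen falls squarely in the domain of nonrelativistic quantum mechanics, as the passage asserts.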
Quantum mechanics reasserted that the physical world presented itself hierarchically. The world was not carved up into terrestrial, planetary, and celestial spheres, but was layered by virtue of certain constants of nature. As Dirac (1930e) emphasized in the first edition of his The Principles of Quantum Mechanics, Planck’s constant allows the world to be parsed into microscopic and macroscopic realms. It is to be stressed that it is constants of nature—Planck’s constant, the velocity of light, the masses of “elementary” particles—that demarcate the domains. In the initial flush of success, quantum mechanics was believed to explain most of physics and all of chemistry. All that remained to be done was “fitting of the theory with relativity ideas.” Dirac’s (1929b) famous assertion reflected the confidence and hubris of the community:
“The general theory of quantum mechanics is now almost complete, the imperfections that still remain being in connection with the fitting of the theory with relativity ideas. These give rise to difficulties only when high-speed particles are involved, and are therefore of no importance in the consideration of atomic and molecular structure and ordinary chemical reactions… The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble.”
However, fitting the theory with relativity ideas proved to be a much more difficult problem than had been anticipated. How to synthesize the quantum theory with the theory of special relativity was—and has remained—the basic problem confronting “elementary” particle theorists since 1925-1927.
By the early thirties it had become clear that particle creation and annihilation were the genuinely novel features emerging from that synthesis. It should be recalled that the theoretical apparatus for the description of microscopic phenomena up to that time had been predicated on a metaphysics that assumed conservation of “particles.” Dirac’s (1927b,c) quantum electrodynamics was the first step in the elimination of that preconception (Bromberg 1976, 1977). Dirac’s (1931b) hole theory was the first instance of a relativistic quantum theory in which the creation and annihilation of matter was an intrinsic feature. Hole theory was the first insight into what was entailed by a quantum mechanical description of particles that conformed with the requirements of special relativity.
The history of elementary particle physics can be analyzed in terms of oscillations between two viewpoints: one which takes fields as fundamental, in which particles are the quanta of the fields; and the other which takes particles as fundamental, and in which fields are macroscopic coherent states (Weinberg 1977, 1985a, 1986a). Figure 1.1 outlines the history of relativistic quantum mechanics during the 1930s from this perspective. One research tradition (de Broglie → Schrödinger → Jordan → Pauli-Heisenberg) laid the foundation of the quantum theory of fields. The other, predicated on a metaphysics that took particles as the fundamental entities, has Dirac as its founding father and guiding spirit. It is exemplified by the hole theoretic formalism. In hole theory electrons were described by the Dirac equation. To this was appended the postulate that the vacuum was the state in which all the negative energy states were occupied. Pair creation was regarded as a transition of a “negative energy” electron to an unoccupied positive energy state—rather than the creation de novo of a positron and an electron (Dirac 1931b; Pais 1947, 1948). Although essentially equivalent to the field-theoretic approach for problems dealing with a single electron, the hole-theoretic formulation was ambiguous in its treatment of multi-electron systems. Hole theory was a “quasi” field theory in that it accepted the fact that a one-particle interpretation of the Dirac equation was impossible and recognized that it dealt with a (denumerable) infinity of particles. It was a particle theory in that particles were the primary entities and no reference was made to a quantized field. Hole theoretic QED was a particle theory as far as matter was concerned but a quantum field theory in its treatment of radiation (Heitler 1936). The electromagnetic field had a privileged role by virtue of the fact that it exhibited a classical limit. Both research traditions agreed that it should be quantized.
In his Nobel Prize speech, Feynman (1966a) made this insightful observation: “Theories of the known, which are described by different physical ideas, may be equivalent in all their predictions and hence scientifically indistinguishable. However, they are not psychologically identical when trying to move from that base into the unknown. For different views suggest different kinds of modifications which might be made and hence are not equivalent in the hypotheses one generates from them in one’s attempt to understand what is not yet understood.”
That the quantum field-theoretic approach was richer in potentialities and possibilities is made evident by the field-theoretic developments of the thirties. All these advances took as their point of departure insights gained from the quantum theory of the electromagnetic field, and in particular from the centrality of the concept of emission and absorption of quanta.
Fermi’s theory of beta decay and Yukawa’s theory of nuclear forces suggested that quantum field theory was the natural framework in which to attempt to understand what we now call the weak and strong interactions. By the late 1930s the formalism of quantum field theory was fairly well understood, and the state of affairs can be inferred from Pauli’s article in the Reviews of Modern Physics (Pauli 1941). But it was Wentzel’s Einführung in die Quantentheorie der Wellenfelder (Wentzel 1943) that presented a full account of relativistic quantum field theories and disseminated this approach to a wide audience after World War II.
In many ways, hole theory was equally successful. Most of the predictions of quantum electrodynamics during the 1930s—such as the cross sections for electron-positron pair production and annihilation, bremsstrahlung, Compton scattering—as well as the verification of the validity of the theory up to energies of the order of 137 mc² and even greater, were based on hole-theoretic QED calculations. Incidentally, it was confidence in this theory that was responsible for the discovery and postulation of a new particle in the cosmic radiation, the “mesotron,” the particle now identified as the muon (Cassidy 1981; Galison 1983). Heitler (1936) summarized the hole theoretic approach in his The Quantum Theory of Radiation, with a slightly revised edition appearing in 1944. Heitler’s books were the primary and standard sources for learning how to “calculate” quantum electrodynamic processes.
But both approaches—field theory and hole-theoretic QED—were beset by overwhelming divergence difficulties that manifested themselves in higher-order calculations (Weinberg 1977; Pais 1986). These difficulties impeded progress and gave rise to a deep pessimism about the formalisms at hand (Rueger 1991). Numerous proposals to overcome the problems of the divergences were advanced. They can be classified as follows (Aramaki 1987):
1. Attempts at eliminating the divergences. Born and Infeld’s (1935) non-linear theory of the electromagnetic field is one example of an attempt to remove the divergences (see also Pauli 1936). Wentzel’s (1933, 1934) λ-limiting procedure, which reinterpreted the meaning of a local interaction, is another. Several investigators tried to remove the divergences by introducing new interactions that “compensate” (i.e., cancel) the divergences of the original theory. Sakata’s (1947) C-meson field and Pais’s (1945, 1946, 1947) f-field are representative examples. This trick, however, only works for the self-energy divergences in lowest order of perturbation theory (Kinoshita 1951).
2. Attempts at circumventing difficulties. The procedure of redefining the charge-current operator in the problem of vacuum polarization so as to absorb the divergences (Dirac 1934a,d; Heisenberg 1934b; Weisskopf 1936) is the first example of the circumvention of the divergence difficulties by a process that would later be called “renormalization.” Pauli and Fierz (1937) in the quantum case, and Kramers (1938a,b) in the classical case, similarly removed the self-energy divergence of a charged particle in interaction with the radiation field by a redefinition of the mass parameter in terms of which the theory was originally formulated. Dancoff (1939) made an attempt to obtain a divergence-free formulation of hole-theoretic quantum electrodynamics to lowest order in perturbation theory by renormalizing both the charge and the mass of the electron. His failure to include all the contributions to that order of perturbation theory doomed the effort at the time.
3. Attempts to understand the structure of the theory and the nature of divergences. Weisskopf (1939) computed the self-energy of an electron in QED in higher orders and concluded that the self-energy divergences are logarithmic to all orders of perturbation theory.
Almost all the proposals to eliminate the divergences that were made during the 1930s ended in failure. The pessimism of the leaders of the discipline—Bohr, Pauli, Heisenberg, Dirac—was partly responsible for the lack of progress. They had witnessed the overthrow of the classical concepts of space-time and were responsible for the rejection of the classical concept of determinism in the description of atomic phenomena. They had brought about the quantum-mechanical revolution and they were convinced that only further conceptual revolutions would solve the divergence problem in quantum field theory. Heisenberg in 1938 noted that the revolutions of special relativity and quantum mechanics were associated with fundamental dimensional parameters: the speed of light, c, and Planck’s constant, h. These delineated the domain of classical physics. He proposed that the next revolution be associated with the introduction of a fundamental unit of length, which would delineate the domain in which the concept of fields and local interactions would be applicable (Heisenberg 1938a,b,c).
The circumvention of the divergence difficulties in the 1945-1950 period was the work of a handful of individuals—principally Kramers, Bethe, Schwinger, Tomonaga, Feynman, and Dyson—and the solution advanced was conservative and technical. It asked that the received dogma of quantum field theory and special relativity be taken seriously and that the limits of that synthesis be explored. Renormalization theory—the technical name for the proposed solution of the 1947-1950 period—revived the faith in quantum field theory.
The history of the developments of quantum field theory in the period from 1943 to the early 1950s is summarized in figure 1.2 in a form which again highlights the distinction between the field and the particle approach. Tomonaga, Schwinger, and Dyson were all field theorists. On the other hand, particles were the fundamental building blocks for Feynman, as had been the case for Dirac. In fact, Feynman can be said to have inherited Dirac’s mantle. Feynman diagrams visualize the fundamental processes in terms of space-time trajectories of particles. Schwinger (1948b,c) and Tomonaga (1943b, 1946, 1948), by exhibiting a field-theoretic formalism that identified and eliminated the divergences in low orders of perturbation theory in a relativistically and gauge-invariant fashion, established the validity of relativistic quantum field theories. In Schwinger’s and Tomonaga’s works, the elimination was accomplished by a renormalization procedure that identified the divergent terms according to their relativistic and gauge transformation properties, and then showed that these divergent contributions could be absorbed in a redefinition of the mass and charge parameters entering the original Lagrangian. Most importantly, the formalism could make predictions about observable phenomena (e.g., the magnetic moment of the electron, the Lamb shift, radiative corrections to Coulomb scattering). Feynman’s genius was such that his idiosyncratic approach (stemming from his work with Wheeler on an action-at-a-distance formulation of classical electrodynamics from which all reference to the electromagnetic field had been eliminated!) resulted in a highly effective computational scheme (Feynman 1949a,b). Dyson (1949a) then demonstrated that Feynman’s results and insights were derivable from Schwinger’s and Tomonaga’s formulation of QED.
Furthermore, Dyson (1949b) was able to exhibit a proof that mass and charge renormalization removed all the divergences from the S-matrix of QED to all orders of perturbation theory. It suggested that renormalized QED was a consistent quantum field theory.
Let me briefly indicate some of the accomplishments of the 1947-1951 period. Foremost was the establishment of local quantum field theory as the best suited framework and formalism for the unification of quantum theory and special relativity. Furthermore, quantum electrodynamics, with the addition of renormalization rules, yielded calculated values for the fine and the hyperfine structure of hydrogen and positronium in remarkable agreement with experiments (Baranger 1951; Baranger et al. 1953; Karplus et al. 1952; Karplus and Klein 1952); and the same was true for the magnetic moment of the electron and muon.
In addition, a deeper understanding of the consequences of synthesizing quantum field theory and special relativity was obtained. Thus, it was shown that a relativistically invariant quantum field theory of charged fields automatically contains in its description oppositely charged particles. Also, a deeper insight was obtained into the necessity of quantizing spin zero and integer spin field theories with commutation rules and odd half-integer spin theories with anticommutation rules (Feynman 1949b, 1987; Pauli 1950; Schwinger 1950). Finally, the most perspicacious theorists—for example, Gell-Mann—noted the ease with which other symmetries besides the space-time ones could be incorporated into a local quantum field theory, and made field theory the framework in terms of which to describe the plethora of elementary particles then being discovered.
In January 1950, J. Robert Oppenheimer gave a series of lectures on the “Problems in the Interaction of Elementary Particles” at the California Institute of Technology. He reported on “the very great effort” on the part of theoretical physicists that had been devoted “during the last few years” to these problems and took it as his task to make clear “to what this effort has led and where we stand today.” In his opening remarks Oppenheimer remarked that while preparing his lectures he was “appalled by how much we have learned and how little we knew [in the thirties].” He indicated that physics was “in the middle of a great and … extremely deep advance.” That “great advance” is the subject of this book.