
Real-Time 3D Holograms

Larry H. Bernstein, MD, FCAP, Curator



Next-Gen Holographic Microscope Offers Real-Time 3D Imaging

http://www.rdmag.com/news/2016/03/next-gen-holographic-microscope-offers-real-time-3d-imaging       KAIST

3-D Images of representative biological cells taken with the HT-1


Researchers have developed a powerful method for 3D imaging of live cells without staining.

Professor YongKeun Park of the Physics Department at the Korea Advanced Institute of Science and Technology (KAIST) is a leading researcher in the field of biophotonics and has dedicated much of his research career to working on digital holographic microscopy technology. Park and his research team collaborated with the R&D team of a start-up that Park co-founded to develop a state-of-the-art, 2D/3D/4D holographic microscope that would allow a real-time label-free visualization of biological cells and tissues.

The HT is an optical analogy of X-ray computed tomography (CT). Both X-ray CT and HT share the same physical principle—the inverse of wave scattering. The difference is that HT uses laser illumination, whereas X-ray CT uses X-ray beams. From the measurement of multiple 2D holograms of a cell, coupled with various angles of laser illuminations, the 3-D refractive index (RI) distribution of the cell can be reconstructed. The reconstructed 3D RI map provides structural and chemical information of the cell including mass, morphology, protein concentration and dynamics of the cellular membrane.

The HT enables users to quantitatively and non-invasively investigate the intrinsic properties of biological cells, for example, dry mass and protein concentration. Some of the research team’s breakthroughs that have leveraged HT’s unique and special capabilities can be found in several recent publications, including a lead article on the simultaneous 3-D visualization and position tracking of optically trapped particles which was published in Optica on April 20, 2015.
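Dry mass follows from the 3-D RI map through a standard linear relation between refractive index and protein concentration. The sketch below is illustrative only (not the HT-1's actual processing pipeline); the medium index, refractive increment, and voxel size are assumed typical values:

```python
def dry_mass_pg(ri_values, n_medium=1.337, alpha=0.19, voxel_um3=0.01):
    """Estimate cellular dry mass (pg) from refractive-index voxels.

    Uses concentration = (n - n_medium) / alpha, where alpha is the
    refractive increment of protein (~0.19 um^3/pg, a typical value).
    """
    total_concentration = 0.0
    for n in ri_values:
        # clamp at zero: voxels at or below the medium index carry no dry mass
        total_concentration += max(n - n_medium, 0.0) / alpha
    return total_concentration * voxel_um3

# 1000 voxels of cytoplasm-like RI 1.375 -> about 2 pg of dry mass
print(dry_mass_pg([1.375] * 1000))
```

The clamp at zero is a common simplification; real processing would also segment the cell from the background.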

Current fluorescence confocal microscopy techniques require the use of exogenous labeling agents to render high-contrast molecular information. Possible drawbacks therefore include:

  • photo-bleaching
  • photo-toxicity
  • interference with normal molecular activities

Fluorescence microscopy is considered particularly problematic for immune or stem cells that need to be reinjected into the body.

“As one of the two currently available, high-resolution tomographic microscopes in the world, I believe that the HT-1 is the best-in-class regarding specifications and functionality. Users can see 3D/4D live images of cells, without fixing, coating or staining cells. Sample preparation times are reduced from a few days or hours to just a few minutes,” said Park.

“Our technology has set a new paradigm for cell observation under a microscope. I expect that this tomographic microscopy will be more widely used in future in various areas of pharmaceuticals, neuroscience, immunology, hematology and cell biology,” Park added.

The researchers announced the launch of their new microscopic tool, the holotomography (HT)-1, to the global marketplace through the Korean start-up TomoCube. Two Korean hospitals, Seoul National University Hospital in Bundang and Boramae Hospital in Seoul, are currently using this microscope. The research team has also introduced the HT-1 at the Photonics West Exhibition 2016 that took place on February 16-18, 2016, in San Francisco, CA.


Chip-Based Atomic Physics Makes Second Quantum Revolution a Reality


A quartz surface above the electrodes used to trap atoms. The color map on the surface shows the electric field amplitude.


A University of Oklahoma-led team of physicists believes chip-based atomic physics holds promise to make the second quantum revolution—the engineering of quantum matter with arbitrary precision—a reality. With recent technological advances in fabrication and trapping, hybrid quantum systems are emerging as ideal platforms for a diverse range of studies in quantum control, quantum simulation and computing.

James P. Shaffer, professor in the Homer L. Dodge Department of Physics and Astronomy, OU College of Arts and Sciences; Jon Sedlacek, OU graduate student; and a team from the University of Nevada, Western Washington University, The United States Naval Academy, Sandia National Laboratories and Harvard-Smithsonian Center for Astrophysics, have published research important for integrating Rydberg atoms into hybrid quantum systems and the fundamental study of atom-surface interactions, as well as applications for electrons bound to a 2-D surface.

“A convenient surface for application in hybrid quantum systems is quartz because of its extensive use in the semiconductor and optics industries,” Sedlacek said. “The surface has been the subject of recent interest as a result of its stability and low surface energy. Mitigating electric fields near ‘trapping’ surfaces is the holy grail for realizing hybrid quantum systems,” added Hossein Sadeghpour, director of the Institute for Theoretical Atomic Molecular and Optical Physics, Harvard-Smithsonian Center for Astrophysics.

In this work, Shaffer finds that ionized electrons from Rydberg atoms excited near the quartz surface form a 2-D layer of electrons above the surface, canceling the electric field produced by rubidium surface adsorbates. The system is similar to electron trapping in a 2-D gas on superfluid liquid helium. The binding of electrons to the surface substantially reduces the electric field above the surface.

“Our results show that binding is due to the image potential of the electron inside the quartz,” said Shaffer. “The electron can’t diffuse into the quartz because the rubidium adsorbates make the surface have a negative electron affinity. The approach is a promising pathway for coupling Rydberg atoms to surfaces, as well as for using surfaces close to atomic and ionic samples.”
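A rough sense of the binding scale comes from the textbook hydrogen-like series of image-potential states above a dielectric, E_n = −(Ry/16)·((ε−1)/(ε+1))²/n². This is a back-of-envelope estimate, not the calculation in the paper, and the quartz dielectric constant used here (~4.5) is an assumed value:

```python
RYDBERG_EV = 13.605693  # hydrogen Rydberg energy, eV

def image_state_ev(n, eps):
    """Binding energy (eV) of the n-th image-potential state above a
    dielectric surface: E_n = -(Ry/16) * ((eps-1)/(eps+1))**2 / n**2."""
    beta = (eps - 1.0) / (eps + 1.0)  # image-charge reduction factor
    return -(RYDBERG_EV / 16.0) * beta**2 / n**2

# for quartz (eps ~ 4.5) the ground image state binds by a few tenths of an eV
print(image_state_ev(1, 4.5))
```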

A paper on this research was published in the American Physics Society’s Physical Review Letters. The OU part of this work was supported by the Defense Advanced Research Projects Agency Quasar program by a grant through the Army Research Office, the Air Force Office of Scientific Research and the National Science Foundation.


New Spin on Biomolecular Tags Lets MRI Catch Metabolic Wobbles


Duke scientists have discovered a new class of inexpensive and long-lived molecular tags that enhance MRI signals by 10,000-fold. To activate the tags, the researchers mix them with a newly developed catalyst (center) and a special form of hydrogen (gray), converting them into long-lived magnetic resonance “lightbulbs” that might be used to track disease metabolism in real time. [Thomas Theis, Duke University]

In principle, magnetic resonance imaging (MRI) could be used to track disease-related biomolecular processes. In practice, magnetic resonance signals die out too quickly. Also, these signals are detectable only with incredibly expensive equipment. The necessary devices, called hyperpolarizers, are commercially available, but they cost as much as $3 million each.

Yet magnetic resonance can be more practical, report scientists from Duke University. These scientists say that they have discovered a new class of molecular tags that enhance magnetic resonance signals by 10,000-fold and generate detectable signals that last over an hour, and not just a few seconds, as is the case with currently available tags. Moreover, the tags are biocompatible and inexpensive to produce, paving the way for widespread use of MRI to monitor metabolic processes of conditions such as cancer and heart disease in real time.

According to the Duke team, which was led by physicist Warren S. Warren, Ph.D., and chemist Thomas Theis, Ph.D., the hyperpolarization window to in vitro and in vivo biochemistry can be opened by combining two advances: (1) the use of 15N2-diazirines as storage vessels for hyperpolarization, and (2) a relatively simple and inexpensive approach to hyperpolarization called SABRE-SHEATH.

The details appeared March 25 in the journal Science Advances, in an article entitled, “Direct and Cost-Efficient Hyperpolarization of Long-Lived Nuclear Spin States on Universal 15N2-Diazirine Molecular Tags.” The article explains that the promise of magnetic resonance in tracking chemical transformations has not been realized because of the limitations of existing techniques, such as dissolution dynamic nuclear polarization (d-DNP). Such techniques have lacked adequate sensitivity and are unable to detect small numbers of molecules without using unattainably massive magnetic fields.

MRI takes advantage of a property called spin, which makes the nuclei in hydrogen atoms act like tiny magnets. Applying a strong magnetic field, followed by a series of radio waves, induces these hydrogen magnets to broadcast their locations. Most of the hydrogen atoms in the body are bound up in water; therefore, the technique is used in clinical settings to create detailed images of soft tissues like organs, blood vessels, and tumors inside the body.
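The frequency at which those hydrogen "magnets" broadcast is the Larmor frequency, set by the proton's gyromagnetic ratio and the applied field. A minimal illustration using the standard constant (the field strengths are just common scanner values):

```python
GAMMA_1H = 42.577  # proton gyromagnetic ratio, MHz per tesla

def larmor_mhz(b_tesla, gamma_mhz_per_t=GAMMA_1H):
    """Larmor (resonance) frequency of a nucleus in a magnetic field."""
    return gamma_mhz_per_t * b_tesla

# clinical scanners: ~63.9 MHz at 1.5 T, ~127.7 MHz at 3 T
print(larmor_mhz(1.5), larmor_mhz(3.0))
```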

With greater sensitivity, however, magnetic resonance techniques could be used to track chemical transformations in real time. This degree of sensitivity, say the Duke scientists, could be within reach.

“We use a recently developed method, SABRE-SHEATH, to directly hyperpolarize 15N2 magnetization and long-lived 15N2 singlet spin order, with signal decay time constants of 5.8 and 23 minutes, respectively,” wrote the authors of the Science Advances article. “We find >10,000-fold enhancements generating detectable nuclear MR signals that last for over an hour.” The authors added that 15N2-diazirines represent a class of particularly promising and versatile molecular tags and can be incorporated into a wide range of biomolecules without significantly altering molecular function.

“This represents a completely new class of molecules that doesn’t look anything at all like what people thought could be made into MRI tags,” said Dr. Warren. “We envision it could provide a whole new way to use MRI to learn about the biochemistry of disease.”

Qiu Wang, Ph.D., an assistant professor of chemistry at Duke and co-author on the paper, said the structure of 15N2-diazirine is a particularly exciting target for hyperpolarization because it has already been demonstrated as a tag for other types of biomedical imaging.

“It can be tagged on small molecules, macromolecules, amino acids, without changing the intrinsic properties of the original compound,” said Dr. Wang. “We are really interested to see if it would be possible to use it as a general imaging tag.” Magnetic resonance, added Dr. Theis, is uniquely sensitive to chemical transformations: “With magnetic resonance, you can see and track chemical transformations in real time.”

The scientists believe their SABRE-SHEATH catalyst could be used to hyperpolarize a wide variety of chemical structures at a fraction of the cost of other methods. “You could envision, in five or ten years, you’ve got the container with the catalyst, you’ve got the bulb with the hydrogen gas,” explained Dr. Warren. “In a minute, you’ve made the hyperpolarized agent, and on the fly you could actually take an image. That is something that is simply inconceivable by any other method.”


Direct and cost-efficient hyperpolarization of long-lived nuclear spin states on universal 15N2-diazirine molecular tags
Conventional magnetic resonance (MR) faces serious sensitivity limitations which can be overcome by hyperpolarization methods, but the most common method (dynamic nuclear polarization) is complex and expensive, and applications are limited by short spin lifetimes (typically seconds) of biologically relevant molecules. We use a recently developed method, SABRE-SHEATH, to directly hyperpolarize 15N2 magnetization and long-lived 15N2 singlet spin order, with signal decay time constants of 5.8 and 23 minutes, respectively. We find >10,000-fold enhancements generating detectable nuclear MR signals that last for over an hour. 15N2-diazirines represent a class of particularly promising and versatile molecular tags, and can be incorporated into a wide range of biomolecules without significantly altering molecular function.

Hyperpolarization enables real-time monitoring of in vitro and in vivo biochemistry

Conventional magnetic resonance (MR) is an unmatched tool for determining molecular structures and monitoring structural transformations. However, even very large magnetic fields only slightly magnetize samples at room temperature and sensitivity remains a fundamental challenge; for example, virtually all MR images are of water because it is the molecule at the highest concentration in vivo. Nuclear spin hyperpolarization significantly alters this perspective by boosting nuclear MR (NMR) sensitivity by four to nine orders of magnitude (1–3), giving access to detailed chemical information at low concentrations. These advances are beginning to transform biomedical in vivo applications (4–9) and structural in vitro studies (10–16).

Current hyperpolarization technology is expensive and associated with short signal lifetimes

Still, two important challenges remain. First, hyperpolarized MR is associated with high cost for the most widespread hyperpolarization technology [dissolution dynamic nuclear polarization (d-DNP), $2 million to $3 million for commercial hyperpolarizers]. Second, hyperpolarized markers typically have short signal lifetimes: typically, hyperpolarized signals may only be tracked for 1 to 2 min in the most favorable cases (6), greatly limiting this method as a probe for slower biological processes.
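The link between signal lifetime and tracking window can be sketched with simple exponential decay: a signal enhanced by a factor E over thermal stays above the thermal level for roughly T·ln E. This is an illustrative estimate (the detection threshold is an assumption, not a figure from the paper):

```python
import math

def detectable_minutes(decay_const_min, enhancement):
    """Time until an exponentially decaying hyperpolarized signal,
    starting `enhancement`-fold above thermal, falls back to thermal."""
    return decay_const_min * math.log(enhancement)

# 1H tag with T ~ 1 min and 10^4 enhancement: window of only ~9 min
# 15N2 singlet with T ~ 23 min:               window of several hours
print(detectable_minutes(1, 1e4), detectable_minutes(23, 1e4))
```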

The presented approach is inexpensive and produces long-lived signals

Here, we demonstrate that both of these challenges can be overcome simultaneously, setting the stage for hour-long tracking of molecular markers with inexpensive equipment. Specifically, we illustrate the potential of 15N2-diazirines as uniquely powerful storage vessels for hyperpolarization. We show that diazirine can be hyperpolarized efficiently and rapidly (literally orders of magnitude cheaper and quicker than d-DNP), and that this hyperpolarization can be induced in states that maintain hyperpolarization for more than an hour.

Our approach uses parahydrogen (p-H2) to directly polarize long-lived nuclear spin states. The first demonstration of parahydrogen-induced polarization (PHIP) was performed in the late 1980s (17–19). Early PHIP experiments relied on the addition of p-H2 to a carbon double or triple bond, incorporating highly polarized hydrogen atoms into molecules. This approach generally requires specific catalyst-substrate pairs; in addition, hydrogen atoms usually have short relaxation times (T1) that cause signal decay within a few seconds. A more recent variant, SABRE (signal amplification by reversible exchange) (20, 21), uses p-H2 to polarize 1H atoms on a substrate without hydrogenation. In SABRE, both p-H2 and substrate reversibly bind to an iridium catalyst and the hyperpolarization is transferred from p-H2 to the substrate through J-couplings established on the catalytic intermediate. Recently, we extended this method to SABRE-SHEATH (SABRE in SHield Enables Alignment Transfer to Heteronuclei) for direct hyperpolarization of 15N molecular sites (22–24). This method has several notable features. Low-γ nuclei (13C, 15N) tend to have long relaxation times, particularly if a proton is not attached. In addition, conventional SABRE relies on small differences between four-bond proton-proton J-couplings (detailed in the Supplementary Materials), whereas SABRE-SHEATH uses larger two-bond heteronuclear J-couplings. It is extremely simple: SABRE-SHEATH requires nothing but p-H2, the catalyst, and a shield to reduce Earth’s field by about 99%. After 1 to 5 min of bubbling p-H2 into the sample in the shield, we commonly achieve 10% nitrogen polarization, many thousands of times stronger than thermal signals (22). In contrast, d-DNP typically produces such polarization levels in an hour, at much higher cost.

Diazirines are small and versatile molecular tags

A general strategy for many types of molecular imaging is the creation of molecular tags, which ideally do not alter biochemical pathways but provide background-free signatures for localization. This strategy has not been very successful in MR because of sensitivity issues. Here, we demonstrate that SABRE-SHEATH enables a MR molecular beacon strategy using diazirines (three-membered rings containing a nitrogen-nitrogen double bond). They are highly attractive as molecular tags, primarily because of their small size. Diazirines have already been established as biocompatible molecular tags for photoaffinity labeling (25). They can be incorporated into many small molecules, metabolites, and biomolecules without drastically altering biological function. Diazirines share similarities with methylene (CH2) groups in terms of electronic and steric properties such that they can replace methylene groups without drastically distorting biochemical behavior. Furthermore, diazirines are stable at room temperature, are resistant to nucleophiles, and do not degrade under either acidic or alkaline conditions (25). With these attractive properties, diazirines have been used for the study of many signaling pathways. For example, they have been incorporated into hormones (26), epileptic drugs (27), antibiotics (28), hyperthermic drugs (29), anticancer agents (30), anesthetics (31), nucleic acids (32), amino acids (33), and lipids (34). They also have been introduced into specific molecular reporters to probe enzyme function and their binding sites such as in kinases (35), aspartic proteases (36), or metalloproteinases (37), to name a few. The nitrogen-nitrogen moiety is also intrinsically interesting, because the two atoms are usually very close in chemical shift and strongly coupled, thus suited to support a long-lived singlet state as described below.


Fig. 1. The hyperpolarization mechanism.

(A) The precatalyst, 15N2-diazirine substrate, and p-H2 are mixed, resulting in the activated species depicted in (B). (B) Both p-H2 and the free 15N2-diazirine [2-cyano-3-(D3 methyl-15N2-diazirine)-propanoic acid] are in reversible exchange with the catalytically active iridium complex. The catalyst axial position is occupied by IMes [1,3-bis(2,4,6-trimethylphenyl)-imidazolium] and Py (pyridine) as nonexchanging ligands. The structure shown is a local energy minimum of the potential energy surface based on all-electron DFT calculations and the dispersion-corrected PBE density functional. In the complex, hyperpolarization is transferred from the parahydrogen (p-H2)–derived hydrides to the 15N nuclei (white, hydrogen; gray, carbon; blue, nitrogen; red, oxygen).

Density functional theory calculations shed light on polarization transfer catalyst

The Ir complex conformation shown in Fig. 1B was determined by all-electron density functional theory (DFT) calculations [semilocal Perdew-Burke-Ernzerhof (PBE) functional (38), corrected for long-range many-body dispersion interactions (39), in the FHI-aims software package (40, 41); see the Supplementary Materials for details]. The calculations indicate a η1 single-sided N attachment rather than η2-N=N attachment of the diazirines. In the Ir complex, hyperpolarization is transferred from p-H2 gas (~92% para-state, 7.5 atm) to the 15N2-diazirine. Both p-H2 and substrate are in reversible exchange with the central complex, which results in continuous pumping of hyperpolarization: p-H2 is continually refreshed and hyperpolarization accumulated on the diazirine substrate.

An alternate polarization transfer catalyst is introduced

As opposed to the traditional [Ir(COD)(IMes)(Cl)] catalyst (18), the synthesized [Ir(COD)(IMes)(Py)][PF6] results in a pyridine ligand trans to IMes, improving our hyperpolarization levels by a factor of ~3 (see the Supplementary Materials). We have found that this new approach, which avoids competition from added pyridine, makes it possible to directly hyperpolarize a wide variety of different types of 15N-containing molecules (and even 13C). However, diazirines represent a particularly general and interesting class of ligands for molecular tags and are the focus here.

As depicted in Fig. 2A, the hyperpolarization proceeds outside the high-field NMR magnet at low magnetic fields, enabling SABRE-SHEATH to directly target 15N nuclei (22). To establish the hyperpolarization, we bubble p-H2 for ~5 min at the appropriate field. Then, the sample is transferred into the NMR magnet within ~10 s, and 15N2 signal detection is performed with a simple 90° pulse followed by data acquisition.


Fig. 2. Experimental and spectral distinction between magnetization and singlet spin order.

(A) Experimental procedure. The sample is hyperpolarized by bubbling parahydrogen (p-H2) through the solution in the NMR tube for 5 min and subsequently transferred into the high-field magnet for detection. If the hyperpolarization/bubbling is performed in a magnetic shield at ~6 mG, z-magnetization is created (black). If the hyperpolarization/bubbling is performed in the laboratory field anywhere between ~0.1 and ~1 kG, singlet order is created (blue). (B) Z-magnetization and singlet spin order can easily be distinguished based on their spectral appearance. a.u., arbitrary units. Z-magnetization produces an in-phase quartet (black). Singlet order gives an anti-phase quartet (blue). The spin system parameters for the 15N2 two-spin system are JNN = 17.3 Hz and Δδ = 0.58 parts per million.

Two types of hyperpolarized states can be created: Magnetization and singlet order

We create two different types of hyperpolarization on the 15N2-diazirine. We can hyperpolarize traditional z-magnetization, which corresponds to nuclear spins aligned with the applied magnetic field and is associated with pure in-phase signal as illustrated with the black trace in Fig. 2B. Alternatively, we can hyperpolarize singlet order on the 15N2-diazirine, which corresponds to an anti-aligned spin state, with both spins pointing in opposite directions, entangled in a quantum mechanically “hidden” state. This hidden singlet order is converted into a detectable state when transferred to a high magnetic field and associated with the anti-phase signal illustrated by the blue trace in Fig. 2B (see the Supplementary Materials for details). The difference in symmetry of z-magnetization and singlet order leads to differences in signal decay rates; z-magnetization is directly exposed to all NMR relaxation mechanisms and is often associated with shorter signal lifetimes, which may impede molecular tracking on biologically relevant time scales. Singlet order, on the other hand, is protected from many relaxation mechanisms because it has no angular momentum (42–52) and can therefore exhibit much longer lifetimes, enabling hour-long molecular tracking.

The type of hyperpolarized state is selected by the magnetic field

We can control which type of hyperpolarization we create by choosing the appropriate magnetic fields for the bubbling process. Z-magnetization is created in the SABRE-SHEATH mode at low magnetic fields inside a magnetic shield (22–24, 53, 54). This behavior is explained by resonance conditions for hyperpolarizing magnetization versus singlet order that we derive in the Supplementary Materials. The condition for creating magnetization, νH − νN = |JHH ± JNN|, is field-dependent in the NMR frequencies, νH and νN, and field-independent in the J-couplings. Accordingly, hyperpolarized magnetization is created at a magnetic field where the frequency difference matches the J-couplings. This magnetic field is ~6 mG, which is obtained by using JHH = −10 Hz, JNN = −17.3 Hz, γ1H = 4.2576 kHz/G, and γ15N = −0.4316 kHz/G (see the Supplementary Materials). The theoretical prediction of 6 mG matches the experimental maximum for hyperpolarized z-magnetization illustrated by the blue data in Fig. 3A.
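The ~6 mG figure can be reproduced from the quoted couplings and gyromagnetic ratios by solving (γ1H − γ15N)·B = |JHH ± JNN| for B. A quick numerical check of the arithmetic (a sketch, not the authors' code):

```python
GAMMA_H = 4.2576           # 1H gyromagnetic ratio, kHz/G
GAMMA_N = -0.4316          # 15N gyromagnetic ratio, kHz/G
J_HH, J_NN = -10.0, -17.3  # J-couplings, Hz

def matching_field_mG(sign):
    """Field (mG) at which nu_H - nu_N = |J_HH (+/-) J_NN| is satisfied."""
    target_hz = abs(J_HH + sign * J_NN)
    hz_per_gauss = (GAMMA_H - GAMMA_N) * 1e3  # frequency difference per gauss
    return target_hz / hz_per_gauss * 1e3     # gauss -> milligauss

# the '+' branch gives ~5.8 mG, matching the reported ~6 mG maximum
print(matching_field_mG(+1))
```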



The demonstrated hyperpolarization lifetimes, combined with the ease of hyperpolarization in these broadly applicable biomolecular tags, may establish a paradigm shift for biomolecular sensing and reporting in optically opaque tissue. The demonstrated lifetimes even exceed lifetimes of some common radioactive tracers used in positron emission tomography (PET) (for example, 11C, 20.3 min). However, unlike PET, MR is exquisitely sensitive to chemical transformations and does not use ionizing radiation (such that, for example, daily progression monitoring of disease is easily possible). The presented work may allow direct access to biochemical mechanisms and kinetics in optically opaque media. We therefore envision tracking subtle biochemical processes in vitro with unprecedented NMR sensitivities as well as real-time in vivo biomolecular imaging with hyperpolarized diazirines.


History of Quantum Mechanics

Curator: Larry H. Bernstein, MD, FCAP




A history of Quantum Mechanics


It is hard to realise that the electron was only discovered a little over 100 years ago in 1897. That it was not expected is illustrated by a remark made by J J Thomson, the discoverer of the electron. He said

I was told long afterwards by a distinguished physicist who had been present at my lecture that he thought I had been pulling their leg.

The neutron was not discovered until 1932 so it is against this background that we trace the beginnings of quantum theory back to 1859.

In 1859 Gustav Kirchhoff proved a theorem about blackbody radiation. A blackbody is an object that absorbs all the energy that falls upon it and, because it reflects no light, it would appear black to an observer. A blackbody is also a perfect emitter and Kirchhoff proved that the energy emitted E depends only on the temperature T and the frequency ν of the emitted energy, i.e.

E = J(T, ν).

He challenged physicists to find the function J.

In 1879 Josef Stefan proposed, on experimental grounds, that the total energy emitted by a hot body was proportional to the fourth power of the temperature. In the generality stated by Stefan this is false. The same conclusion was reached in 1884 by Ludwig Boltzmann for blackbody radiation, this time from theoretical considerations using thermodynamics and Maxwell's electromagnetic theory. The result, now known as the Stefan–Boltzmann law, does not fully answer Kirchhoff's challenge since it does not answer the question for specific wavelengths.
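Stated quantitatively, the law reads E = σT⁴, with σ the Stefan–Boltzmann constant, so doubling the temperature multiplies the radiated power sixteen-fold. A minimal numerical illustration:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(t_kelvin):
    """Total power radiated per unit area by a blackbody at temperature T."""
    return SIGMA * t_kelvin**4

# a blackbody at room temperature (300 K) radiates ~459 W per square metre
print(radiant_exitance(300.0))
```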

In 1896 Wilhelm Wien proposed a solution to the Kirchhoff challenge. However, although his solution matched experimental observations closely for small values of the wavelength, it was shown by Rubens and Kurlbaum to break down in the far infrared.

Kirchhoff, who had been at Heidelberg, moved to Berlin. Boltzmann was offered his chair in Heidelberg but turned it down. The chair was then offered to Hertz who also declined the offer, so it was offered again, this time to Planck and he accepted.

Rubens visited Planck in October 1900 and explained his results to him. Within a few hours of Rubens leaving Planck's house, Planck had guessed the correct formula for Kirchhoff's J function. This guess fitted the experimental evidence at all wavelengths very well, but Planck was not satisfied with it and tried to give a theoretical derivation of the formula. To do this he took the unprecedented step of assuming that the total energy is made up of indistinguishable energy elements – quanta of energy. He wrote

Experience will prove whether this hypothesis is realised in nature

Planck himself gave credit to Boltzmann for his statistical method, but Planck's approach was fundamentally different. However, theory had now deviated from experiment and was based on a hypothesis with no experimental basis. Planck won the 1918 Nobel Prize for Physics for this work.
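Planck's formula and Wien's earlier proposal can be compared directly. The sketch below (the wavelength and temperature choices are illustrative) reproduces Rubens and Kurlbaum's finding: the two laws agree closely at short wavelengths, but Wien's collapses in the far infrared:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck(lam_m, t_kelvin):
    """Planck spectral radiance B(lambda, T)."""
    x = H * C / (lam_m * K * t_kelvin)
    return 2 * H * C**2 / lam_m**5 / math.expm1(x)

def wien(lam_m, t_kelvin):
    """Wien's 1896 approximation to the same quantity."""
    x = H * C / (lam_m * K * t_kelvin)
    return 2 * H * C**2 / lam_m**5 * math.exp(-x)

# at 1 um and 2000 K the two agree to ~0.1%;
# at 50 um (far infrared) Wien's law yields only ~13% of the Planck value
print(wien(1e-6, 2000) / planck(1e-6, 2000))
print(wien(50e-6, 2000) / planck(50e-6, 2000))
```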

In 1901 Ricci and Levi-Civita published Absolute differential calculus. It had been Christoffel's discovery of 'covariant differentiation' in 1869 which let Ricci extend the theory of tensor analysis to Riemannian space of n dimensions. The Ricci and Levi-Civita definitions were thought to give the most general formulation of a tensor. This work was not done with quantum theory in mind but, as so often happens, the mathematics necessary to embody a physical theory had appeared at precisely the right moment.

In 1905 Einstein examined the photoelectric effect, the release of electrons from certain metals or semiconductors by the action of light. The electromagnetic theory of light gives results at odds with experimental evidence. Einstein proposed a quantum theory of light to solve the difficulty, and then he realised that Planck's theory made implicit use of the light quantum hypothesis. By 1906 Einstein had correctly guessed that energy changes occur in a quantum material oscillator in jumps which are multiples of hν, where h is Planck's constant and ν is the frequency. Einstein received the 1921 Nobel Prize for Physics, in 1922, for this work on the photoelectric effect.
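In the light-quantum picture, the maximum kinetic energy of an ejected photoelectron is hν − φ, where φ is the metal's work function; below the threshold frequency no electrons emerge, however intense the light. A small illustration (the sodium work function used here is a typical textbook value):

```python
H_EV = 4.135667696e-15  # Planck constant in eV s

def max_ke_ev(freq_hz, work_function_ev):
    """Maximum photoelectron kinetic energy, clamped at zero below threshold."""
    return max(H_EV * freq_hz - work_function_ev, 0.0)

# violet light (~7.3e14 Hz) on sodium (phi ~ 2.28 eV) ejects ~0.74 eV electrons;
# red light (~4e14 Hz) ejects none
print(max_ke_ev(7.3e14, 2.28), max_ke_ev(4e14, 2.28))
```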

In 1913 Niels Bohr wrote a revolutionary paper on the hydrogen atom. He discovered the major laws of the spectral lines. This work earned Bohr the 1922 Nobel Prize for Physics. Arthur Compton derived relativistic kinematics for the scattering of a photon (a light quantum) off an electron at rest in 1923.
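Bohr's model reproduces those spectral-line laws through the Rydberg formula, 1/λ = R(1/n₁² − 1/n₂²). A quick check against the familiar red Balmer line (the constant is the standard Rydberg value for hydrogen):

```python
RYDBERG_M = 1.0973731568e7  # Rydberg constant for hydrogen, m^-1

def line_nm(n_lower, n_upper):
    """Wavelength (nm) of the hydrogen line for the n_upper -> n_lower jump."""
    inv_lam = RYDBERG_M * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inv_lam

# the 3 -> 2 transition lands near 656 nm, the red H-alpha line
print(line_nm(2, 3))
```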

However there were concepts in the new quantum theory which gave major worries to many leading physicists. Einstein, in particular, worried about the element of 'chance' which had entered physics. In fact, Rutherford had introduced a spontaneous effect when discussing radioactive decay in 1900. In 1924 Einstein wrote:-

There are therefore now two theories of light, both indispensable, and – as one must admit today despite twenty years of tremendous effort on the part of theoretical physicists – without any logical connection.

In the same year, 1924, Bohr, Kramers and Slater made important theoretical proposals regarding the interaction of light and matter which rejected the photon. Although the proposals were the wrong way forward they stimulated important experimental work. Bohr addressed certain paradoxes in his work.

(i) How can energy be conserved when some energy changes are continuous and some are discontinuous, i.e., change by quantum amounts?
(ii) How does the electron know when to emit radiation?

Einstein had been puzzled by paradox (ii), and Pauli quickly told Bohr that he did not believe his theory. Further experimental work soon ended any resistance to belief in the light quantum. Other ways had to be found to resolve the paradoxes.

Up to this stage quantum theory was set up in Euclidean space and used Cartesian tensors of linear and angular momentum. However quantum theory was about to enter a new era.

The year 1924 saw the publication of another fundamental paper. It was written by Satyendra Nath Bose and rejected by a referee for publication. Bose then sent the manuscript to Einstein, who immediately saw the importance of Bose's work and arranged for its publication. Bose proposed different states for the photon. He also proposed that there is no conservation of the number of photons. Instead of statistical independence of particles, Bose put particles into cells and talked about statistical independence of cells. Time has shown that Bose was right on all these points.

Work of fundamental importance was also going on at almost the same time as Bose's. Louis de Broglie's doctoral thesis was presented, which extended the particle-wave duality for light to all particles, in particular to electrons. In 1926 Schrödinger published a paper giving his equation for the hydrogen atom, heralding the birth of wave mechanics. Schrödinger introduced operators associated with each dynamical variable.

The year 1926 saw Dirac complete the derivation of Planck's law, 26 years after its discovery. Also in 1926 Born abandoned the causality of traditional physics. Speaking of collisions, Born wrote

One does not get an answer to the question, What is the state after collision? but only to the question, How probable is a given effect of the collision? From the standpoint of our quantum mechanics, there is no quantity which causally fixes the effect of a collision in an individual event.

Heisenberg wrote his first paper on quantum mechanics in 1925 and two years later stated his uncertainty principle. It states that the process of measuring the position x of a particle disturbs the particle's momentum p, so that

Δx Δp ≥ ℏ = h/2π

where Δx is the uncertainty of the position and Δp is the uncertainty of the momentum. Here h is Planck's constant and ℏ is usually called the 'reduced Planck's constant'. Heisenberg states that

the nonvalidity of rigorous causality is necessary and not just consistently possible.
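The inequality can be put to numerical work. A minimal sketch, assuming the form Δx Δp ≥ ℏ stated above (the function name and the modern constant value are mine):

```python
import math

H = 6.626e-34             # Planck's constant, J s
HBAR = H / (2 * math.pi)  # the 'reduced Planck's constant' of the text

def min_momentum_uncertainty(dx):
    """Smallest momentum uncertainty allowed by Dx * Dp >= hbar."""
    return HBAR / dx

# An electron confined to an atom-sized region, Dx ~ 1e-10 m, cannot have a
# momentum sharper than about 1e-24 kg m/s, which is one way to see why
# electrons in atoms cannot simply sit still on the nucleus.
print(min_momentum_uncertainty(1e-10))
```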

Heisenberg's work used matrix methods made possible by the work of Cayley on matrices 50 years earlier. In fact 'rival' matrix mechanics, deriving from Heisenberg's work, and wave mechanics, resulting from Schrödinger's work, now entered the arena. These were not properly shown to be equivalent until the necessary mathematics was developed by Riesz about 25 years later.

Also in 1927 Bohr stated that space-time coordinates and causality are complementary. Pauli realised that spin, one of the states proposed by Bose, corresponded to a new kind of tensor, one not covered by the Ricci and Levi-Civita work of 1901. However the mathematics of this had been anticipated by Élie Cartan, who introduced a 'spinor' as part of a much more general investigation in 1913.

Dirac, in 1928, gave the first solution of the problem of expressing quantum theory in a form which was invariant under the Lorentz group of transformations of special relativity. He expressed d'Alembert's wave equation in terms of operator algebra.

The uncertainty principle was not accepted by everyone. Its most outspoken opponent was Einstein. He devised a challenge to Niels Bohr which he made at a conference which they both attended in 1930. Einstein suggested a box filled with radiation with a clock fitted in one side. The clock is designed to open a shutter and allow one photon to escape. Weigh the box, let the clock release a photon, then weigh the box again: the photon's energy and its time of escape can both be measured with arbitrary accuracy. Of course this is not meant to be an actual experiment, only a 'thought experiment'.

Niels Bohr is reported to have spent an unhappy evening, and Einstein a happy one, after this challenge by Einstein to the uncertainty principle. However Niels Bohr had the final triumph, for the next day he had the solution. The mass is measured by hanging a compensation weight under the box. This in turn imparts a momentum to the box, and there is an error in measuring the position. Time, according to relativity, is not absolute, and the error in the position of the box translates into an error in measuring the time.

Although Einstein was never happy with the uncertainty principle, he was forced, rather grudgingly, to accept it after Bohr's explanation.

In 1932 von Neumann put quantum theory on a firm theoretical basis. Some of the earlier work had lacked mathematical rigour, but von Neumann put the whole theory into the setting of operator algebra.


Article by: J J O’Connor and E F Robertson


A Brief History of Quantum Mechanics

Appendix A of
The Strange World of Quantum Mechanics

written by Dan Styer, Oberlin College Physics Department;
copyright © Daniel F. Styer 1999


One must understand not only the cleanest and most direct experimental evidence supporting our current theories (like the evidence presented in this book), but also how those theories came to be accepted through a tightly interconnected web of many experiments, no one of which was completely convincing but which taken together presented an overwhelming argument.

Thus a full history of quantum mechanics would have to discuss Schrödinger’s many mistresses, Ehrenfest’s suicide, and Heisenberg’s involvement with Nazism. It would have to treat the First World War’s effect on the development of science. It would need to mention “the Thomson model” of the atom, which was once the major competing theory to quantum mechanics. It would have to give appropriate weight to both theoretical and experimental developments.

Much of the work of science is done through informal conversations, and the resulting written record is often sanitized to avoid offending competing scientists. The invaluable oral record is passed down from professor to student repeatedly before anyone ever records it on paper. There is a tendency for the exciting stories to be repeated and the dull ones to be forgotten.

The fact is that scientific history, like the stock market and like everyday life, does not proceed in an orderly, coherent pattern. The story of quantum mechanics is a story full of serendipity, personal squabbles, opportunities missed and taken, and of luck both good and bad.

Status of physics: January 1900

In January 1900 the atomic hypothesis was widely but not universally accepted. Atoms were considered point particles, and it wasn’t clear how atoms of different elements differed. The electron had just been discovered (1897) and it wasn’t clear where (or even whether) electrons were located within atoms. One important outstanding problem concerned the colors emitted by atoms in a discharge tube (familiar today as the light from a fluorescent tube or from a neon sign). No one could understand why different gas atoms glowed in different colors. Another outstanding problem concerned the amount of heat required to change the temperature of a diatomic gas such as oxygen: the measured amounts were well below the value predicted by theory. Because quantum mechanics is important when applied to atomic phenomena, you might guess that investigations into questions like these would give rise to the discovery of quantum mechanics. Instead it came from a study of heat radiation.

Heat radiation

You know that the coals of a campfire, or the coils of an electric stove, glow red. You probably don’t know that even hotter objects glow white, but this fact is well known to blacksmiths. When objects are hotter still they glow blue. (This is why a gas stove should be adjusted to make a blue flame.) Indeed, objects at room temperature also glow (radiate), but the radiation they emit is infrared, which is not detectable by the eye. (The military has developed — for use in night warfare — special eye sets that convert infrared radiation to optical radiation.)

In the year 1900 several scientists were trying to turn these observations into a detailed explanation of, and a quantitatively accurate formula for, the color of heat radiation as a function of temperature. On 19 October 1900 the Berliner Max Planck (age 42) announced a formula that fit the experimental results perfectly, yet he had no explanation for the formula — it just happened to fit. He worked to find an explanation through the late fall and finally was able to derive his formula by assuming that the atomic jigglers could not take on any possible energy, but only certain special “allowed” values. He announced this result on 14 December 1900. The assumption of allowed energy values raises certain obvious questions. If a jiggling atom can only assume certain allowed values of energy, then there must also be restrictions on the positions and speeds that the atom can have. What are they?
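Planck's "allowed" values are the energies E = nhν for whole numbers n, and they feed directly into his formula: the average energy of a quantized oscillator falls below the classical value kT once hν becomes comparable to kT. A hedged sketch, using modern constant values and a function name of my own choosing:

```python
import math

H = 6.626e-34    # Planck's constant, J s
K_B = 1.381e-23  # Boltzmann's constant, J/K

def mean_oscillator_energy(freq, temp):
    """Planck's average energy of a quantized oscillator: h*nu / (exp(h*nu/kT) - 1)."""
    x = H * freq / (K_B * temp)
    return H * freq / math.expm1(x)

# At low frequency the quantized average approaches the classical value kT ...
assert abs(mean_oscillator_energy(1e9, 300) / (K_B * 300) - 1) < 0.01
# ... but at optical frequencies it is exponentially suppressed, which is what
# rescues the radiation formula from blowing up at short wavelengths.
print(mean_oscillator_energy(5e14, 300) / (K_B * 300))
```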

Planck wrote (31 years after his discovery):

I had already fought for six years (since 1894) with the problem of equilibrium between radiation and matter without arriving at any successful result. I was aware that this problem was of fundamental importance in physics, and I knew the formula describing the energy distribution . . .

Here is another wonderful story, this one related by Werner Heisenberg:

In a period of most intensive work during the summer of 1900 [Planck] finally convinced himself that there was no way of escaping from this conclusion [of “allowed” energies]. It was told by Planck’s son that his father spoke to him about his new ideas on a long walk through the Grunewald, the wood in the suburbs of Berlin. On this walk he explained that he felt he had possibly made a discovery of the first rank, comparable perhaps only to the discoveries of Newton.

(The son would probably remember the nasty cold he caught better than any remarks his father made.)

The old quantum theory

In the old quantum theory, classical mechanics was assumed to hold, but with the additional assumption that only certain values of a physical quantity (the energy, say, or the projection of a magnetic arrow) were allowed. Any such quantity was said to be “quantized”. The trick seemed to be to guess the right quantization rules for the situation under study, or to find a general set of quantization rules that would work for all situations.

For example, in 1905 Albert Einstein (age 26) postulated that the total energy of a beam of light is quantized. Just one year later he used quantization ideas to explain the heat/temperature puzzle for diatomic gases. Five years after that, in 1911, Arnold Sommerfeld (age 43) at Munich began working on the implications of energy quantization for position and speed.

In the same year Ernest Rutherford (age 40), a New Zealander doing experiments in Manchester, England, discovered the atomic nucleus — only at this relatively late stage in the development of quantum mechanics did physicists have even a qualitatively correct picture of the atom! In 1913, Niels Bohr (age 28), a Dane who had recently worked in Rutherford’s laboratory, introduced quantization ideas for the hydrogen atom. His theory was remarkably successful in explaining the colors emitted by hydrogen glowing in a discharge tube, and it sparked enormous interest in developing and extending the old quantum theory.
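Bohr's quantization reproduces hydrogen's discharge-tube colors through the Rydberg formula 1/λ = R(1/n₁² − 1/n₂²) for a jump from level n₂ down to n₁. A small sketch (the Rydberg constant value is a modern figure, and the function name is mine):

```python
RYDBERG = 1.0974e7  # Rydberg constant for hydrogen, 1/m

def hydrogen_line(n_lower, n_upper):
    """Wavelength (m) of the photon emitted in the n_upper -> n_lower jump,
    from the Bohr/Rydberg formula 1/lambda = R (1/n1^2 - 1/n2^2)."""
    inv_wavelength = RYDBERG * (1 / n_lower**2 - 1 / n_upper**2)
    return 1 / inv_wavelength

# The red Balmer line (3 -> 2) that dominates hydrogen's discharge-tube glow:
print(hydrogen_line(2, 3) * 1e9)  # ~656 nm
```

Matching observed lines this precisely, with no adjustable parameters beyond R, is what made Bohr's theory so electrifying in 1913.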

During the First World War (in 1915) William Wilson (age 40, a native of Cumberland, England, working at King’s College in London) made progress on the implications of energy quantization for position and speed, and Sommerfeld also continued his work in that direction.

With the coming of the armistice in 1918, work in quantum mechanics expanded rapidly. Many theories were suggested and many experiments performed. To cite just one example, in 1922 Otto Stern and his graduate student Walther Gerlach (ages 34 and 23) performed their important experiment that is so essential to the way this book presents quantum mechanics. Jagdish Mehra and Helmut Rechenberg, in their monumental history of quantum mechanics, describe the situation at this juncture well:

At the turn of the year from 1922 to 1923, the physicists looked forward with enormous enthusiasm towards detailed solutions of the outstanding problems, such as the helium problem and the problem of the anomalous Zeeman effects. However, within less than a year, the investigation of these problems revealed an almost complete failure of Bohr’s atomic theory.

The matrix formulation of quantum mechanics

As more and more situations were encountered, more and more recipes for allowed values were required. This development took place mostly at Niels Bohr’s Institute for Theoretical Physics in Copenhagen, and at the University of Göttingen in northern Germany. The most important actors at Göttingen were Max Born (age 43, an established professor) and Werner Heisenberg (age 23, a freshly minted Ph.D. from Sommerfeld in Munich). According to Born “At Göttingen we also took part in the attempts to distill the unknown mechanics of the atom out of the experimental results. . . . The art of guessing correct formulas . . . was brought to considerable perfection.”

Heisenberg particularly was interested in general methods for making guesses. He began to develop systematic tables of allowed physical quantities, be they energies, or positions, or speeds. Born looked at these tables and saw that they could be interpreted as mathematical matrices. Fifty years later matrix mathematics would be taught even in high schools. But in 1925 it was an advanced and abstract technique, and Heisenberg struggled with it. His work was cut short in June 1925.
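Born's observation can be made concrete. In modern notation, the arrays he recognized as matrices obey the commutation rule xp − px = iℏ. The toy construction below is mine, not Born's or Heisenberg's: it builds truncated harmonic-oscillator ladder matrices (in units with ℏ = 1) and shows the rule holding everywhere except at the truncation edge, a reminder that the true matrices are infinite.

```python
import math

def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = 6  # truncation size; the exact relation needs infinite matrices
# Ladder operator a: sqrt(n) on the superdiagonal.
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
adag = [[a[j][i] for j in range(N)] for i in range(N)]  # transpose (entries are real)

# With x = (a + adag)/sqrt(2) and p = i(adag - a)/sqrt(2), expanding the
# products gives [x, p] = i (a adag - adag a), so the bracket below should be
# the identity matrix (times hbar = 1).
aad = matmul(a, adag)
ada = matmul(adag, a)
comm = [[aad[i][j] - ada[i][j] for j in range(N)] for i in range(N)]

# Every diagonal entry is 1 except the last, an artifact of cutting the
# infinite matrices off at size N.
print([round(comm[i][i], 10) for i in range(N)])  # [1.0, 1.0, 1.0, 1.0, 1.0, -5.0]
```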

It was late spring in Göttingen, and Heisenberg suffered from an allergy attack so severe that he could hardly work. He asked his research director, Max Born, for a vacation, and spent it on the rocky North Sea island of Helgoland. At first he was so ill that he could only stay in his rented room and admire the view of the sea. As his condition improved he began to take walks and to swim. With further improvement he began also to read Goethe and to work on physics. With nothing to distract him, he concentrated intensely on the problems that had faced him in Göttingen.

Heisenberg reproduced his earlier work, cleaning up the mathematics and simplifying the formulation. He worried that the mathematical scheme he invented might prove to be inconsistent, and in particular that it might violate the principle of the conservation of energy. In Heisenberg’s own words:

The energy principle had held for all the terms, and I could no longer doubt the mathematical consistency and coherence of the kind of quantum mechanics to which my calculations pointed. At first, I was deeply alarmed. I had the feeling that, through the surface of atomic phenomena, I was looking at a strangely beautiful interior, and felt almost giddy at the thought that I now had to probe this wealth of mathematical structures nature had so generously spread out before me.

By the end of the summer Heisenberg, Born, and Pascual Jordan (age 22) had developed a complete and consistent theory of quantum mechanics. (Jordan had entered the collaboration when he overheard Born discussing quantum mechanics with a colleague on a train.)

This theory, called “matrix mechanics” or “the matrix formulation of quantum mechanics”, is not the theory I have presented in this book. It is extremely and intrinsically mathematical, and even for master mathematicians it was difficult to work with. Although we now know it to be complete and consistent, this wasn’t clear until much later. Heisenberg had been keeping Wolfgang Pauli apprised of his progress. (Pauli, age 25, was Heisenberg’s friend from graduate student days, when they studied together under Sommerfeld.) Pauli found the work too mathematical for his tastes, and called it “Göttingen’s deluge of formal learning”. On 12 October 1925 Heisenberg could stand Pauli’s biting criticism no longer. He wrote to Pauli:

With respect to both of your last letters I must preach you a sermon, and beg your pardon… When you reproach us that we are such big donkeys that we have never produced anything new in physics, it may well be true. But then, you are also an equally big jackass because you have not accomplished it either . . . . . . (The dots denote a curse of about two-minute duration!) Do not think badly of me and many greetings.

The wavefunction formulation of quantum mechanics

While this work was going on at Göttingen and Helgoland, others were busy as well. In 1923 Louis de Broglie (age 31) associated an “internal periodic phenomenon” — a wave — with a particle. He was never very precise about just what that meant. (De Broglie is sometimes called “Prince de Broglie” because his family descended from the French nobility. To be strictly correct, however, only his eldest brother could claim the title.)

It fell to Erwin Schrödinger, an Austrian working in Zürich, to build this vague idea into a theory of wave mechanics. He did so during the Christmas season of 1925 (at age 38), at the alpine resort of Arosa, Switzerland, in the company of “an old girlfriend [from] Vienna”, while his wife stayed home in Zürich.

In short, just twenty-five years after Planck glimpsed the first sight of a new physics, there was not one, but two competing versions of that new physics!

The two versions seemed utterly different and there was an acrimonious debate over which one was correct. In a footnote to a 1926 paper Schrödinger claimed to be “discouraged, if not repelled” by matrix mechanics. Meanwhile, Heisenberg wrote to Pauli (8 June 1926) that

The more I think of the physical part of the Schrödinger theory, the more detestable I find it. What Schrödinger writes about visualization makes scarcely any sense, in other words I think it is shit. The greatest result of his theory is the calculation of matrix elements.

Fortunately the debate was soon stilled: in 1926 Schrödinger and, independently, Carl Eckart (age 24) of Caltech proved that the two new mechanics, although very different in superficial appearance, were equivalent to each other. [Very much as the process of adding Arabic numerals is quite different from the process of adding Roman numerals, but the two processes nevertheless always give the same result.] (Pauli also proved this, but never published the result.)
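The numeral analogy can itself be run as code. In this sketch (my toy converters, purely illustrative), "adding in the Roman system" means converting to Arabic, adding, and converting back; however alien the procedures look side by side, they always agree, just as matrix and wave mechanics do.

```python
ROMAN = {1000: "M", 900: "CM", 500: "D", 400: "CD", 100: "C", 90: "XC",
         50: "L", 40: "XL", 10: "X", 9: "IX", 5: "V", 4: "IV", 1: "I"}

def to_roman(n):
    """Greedy conversion of a positive integer to Roman numerals."""
    out = []
    for value, symbol in ROMAN.items():
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

def from_roman(s):
    """Parse Roman numerals, subtracting a symbol when a larger one follows."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = values[ch]
        total += -v if nxt != " " and values.get(nxt, 0) > v else v
    return total

# XIV + XXVIII computed "the Roman way" agrees with 14 + 28 = 42:
print(to_roman(from_roman("XIV") + from_roman("XXVIII")))  # XLII
```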


With not just one, but two complete formulations of quantum mechanics in hand, the quantum theory grew explosively. It was applied to atoms, molecules, and solids. It solved with ease the problem of helium that had defeated the old quantum theory. It resolved questions concerning the structure of stars, the nature of superconductors, and the properties of magnets. One particularly important contributor was P.A.M. Dirac, who in 1926 (at age 22) extended the theory to relativistic and field-theoretic situations. Another was Linus Pauling, who in 1931 (at age 30) developed quantum mechanical ideas to explain chemical bonding, which previously had been understood only on empirical grounds. Even today quantum mechanics is being applied to new problems and new situations. It would be impossible to mention all of them. All I can say is that quantum mechanics, strange though it may be, has been tremendously successful.

The Bohr-Einstein debate

The extraordinary success of quantum mechanics in applications did not overwhelm everyone. A number of scientists, including Schrödinger, de Broglie, and — most prominently — Einstein, remained unhappy with the standard probabilistic interpretation of quantum mechanics. In a letter to Max Born (4 December 1926), Einstein made his famous statement that

Quantum mechanics is very impressive. But an inner voice tells me that it is not yet the real thing. The theory produces a good deal but hardly brings us closer to the secret of the Old One. I am at all events convinced that He does not play dice.

In concrete terms, Einstein’s “inner voice” led him, until his death, to issue occasional detailed critiques of quantum mechanics and its probabilistic interpretation. Niels Bohr undertook to reply to these critiques, and the resulting exchange is now called the “Bohr-Einstein debate”. At one memorable stage of the debate (Fifth Solvay Congress, 1927), Einstein made an objection similar to the one quoted above and Bohr

replied by pointing out the great caution, already called for by ancient thinkers, in ascribing attributes to Providence in every-day language.

These two statements are often paraphrased as, Einstein to Bohr: “God does not play dice with the universe.” Bohr to Einstein: “Stop telling God how to behave!” While the actual exchange was not quite so dramatic and quick as the paraphrase would have it, there was nevertheless a wonderful rejoinder from what must have been a severely exasperated Bohr.

The Bohr-Einstein debate had the benefit of forcing the creators of quantum mechanics to sharpen their reasoning and face the consequences of their theory in its most starkly non-intuitive situations. It also had (in my opinion) one disastrous consequence: because Einstein phrased his objections in purely classical terms, Bohr was compelled to reply in nearly classical terms, giving the impression that in quantum mechanics, an electron is “really classical” but that somehow nature puts limits on how well we can determine those classical properties. This is a misconception: the reason we cannot measure simultaneously the exact position and speed of an electron is that an electron does not have simultaneously an exact position and speed — an electron is not just a smaller, harder edition of a marble. This misconception — this picture of a classical world underlying the quantum world — is one to avoid.

On the other hand, the Bohr-Einstein debate also had at least one salutary product. In 1935 Einstein, in collaboration with Boris Podolsky and Nathan Rosen, invented a situation in which the results of quantum mechanics seemed completely at odds with common sense, a situation in which the measurement of a particle at one location could reveal instantly information about a second particle far away. The three scientists published a paper which claimed that “No reasonable definition of reality could be expected to permit this.” Bohr produced a recondite response and the issue was forgotten by most physicists, who were justifiably busy with the applications of rather than the foundations of quantum mechanics. But the ideas did not vanish entirely, and they eventually raised the interest of John Bell. In 1964 Bell used the Einstein-Podolsky-Rosen situation to produce a theorem about the results from certain distant measurements for any deterministic scheme, not just classical mechanics. In 1982 Alain Aspect and his collaborators put Bell’s theorem to the test and found that nature did indeed behave in the manner that Einstein (and others!) found so counterintuitive.

The amplitude formulation of quantum mechanics

The version of quantum mechanics presented in this book is neither matrix nor wave mechanics. It is yet another formulation, different in approach and outlook, but fundamentally equivalent to the two formulations already mentioned. It is called amplitude mechanics (or “the sum over histories technique”, or “the many paths approach”, or “the path integral formulation”, or “the Lagrangian approach”, or “the method of least action”), and it was developed by Richard Feynman in 1941 while he was a graduate student (age 23) at Princeton. Its discovery is well described by Feynman himself in his Nobel lecture:

I went to a beer party in the Nassau Tavern in Princeton. There was a gentleman, newly arrived from Europe (Herbert Jehle) who came and sat next to me. Europeans are much more serious than we are in America because they think a good place to discuss intellectual matters is a beer party. So he sat by me and asked, “What are you doing?” and so on, and I said, “I’m drinking beer.” Then I realized that he wanted to know what work I was doing and I told him I was struggling with this problem, and I simply turned to him and said “Listen, do you know any way of doing quantum mechanics starting with action — where the action integral comes into the quantum mechanics?” “No,” he said, “but Dirac has a paper in which the Lagrangian, at least, comes into quantum mechanics. I will show it to you tomorrow.”

Next day we went to the Princeton Library (they have little rooms on the side to discuss things) and he showed me this paper.

Dirac’s short paper in the Physikalische Zeitschrift der Sowjetunion claimed that a mathematical tool which governs the time development of a quantal system was “analogous” to the classical Lagrangian.

Professor Jehle showed me this; I read it; he explained it to me, and I said, “What does he mean, they are analogous; what does that mean, analogous? What is the use of that?” He said, “You Americans! You always want to find a use for everything!” I said that I thought that Dirac must mean that they were equal. “No,” he explained, “he doesn’t mean they are equal.” “Well,” I said, “let’s see what happens if we make them equal.”

So, I simply put them equal, taking the simplest example . . . but soon found that I had to put a constant of proportionality A in, suitably adjusted. When I substituted . . . and just calculated things out by Taylor-series expansion, out came the Schrödinger equation. So I turned to Professor Jehle, not really understanding, and said, “Well you see Professor Dirac meant that they were proportional.” Professor Jehle’s eyes were bugging out — he had taken out a little notebook and was rapidly copying it down from the blackboard and said, “No, no, this is an important discovery.”

Feynman’s thesis advisor, John Archibald Wheeler (age 30), was equally impressed. He believed that the amplitude formulation of quantum mechanics — although mathematically equivalent to the matrix and wave formulations — was so much more natural than the previous formulations that it had a chance of convincing quantum mechanics’s most determined critic. Wheeler writes:

Visiting Einstein one day, I could not resist telling him about Feynman’s new way to express quantum theory. “Feynman has found a beautiful picture to understand the probability amplitude for a dynamical system to go from one specified configuration at one time to another specified configuration at a later time. He treats on a footing of absolute equality every conceivable history that leads from the initial state to the final one, no matter how crazy the motion in between. The contributions of these histories differ not at all in amplitude, only in phase. . . . This prescription reproduces all of standard quantum theory. How could one ever want a simpler way to see what quantum theory is all about! Doesn’t this marvelous discovery make you willing to accept the quantum theory, Professor Einstein?” He replied in a serious voice, “I still cannot believe that God plays dice. But maybe”, he smiled, “I have earned the right to make my mistakes.”
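Wheeler's summary, that every history contributes with equal amplitude and only its phase differs, is easy to demonstrate numerically. The sketch below is my toy discretization of a free-particle action, not Feynman's actual calculation: each random path from start to end contributes exp(iS/ℏ), and the magnitudes of these contributions are identical no matter how wild the path.

```python
import cmath
import random

random.seed(1)

def action(path, dt=1.0, mass=1.0):
    """Free-particle action: sum of (m/2) * velocity^2 * dt over segments."""
    return sum(0.5 * mass * ((b - a) / dt) ** 2 * dt
               for a, b in zip(path, path[1:]))

def contribution(path, hbar=1.0):
    """Feynman's rule: each history contributes exp(i S / hbar)."""
    return cmath.exp(1j * action(path) / hbar)

# Three random histories from x=0 to x=1 in four steps: wildly different
# actions, yet every contribution has magnitude 1 -- only the phase differs.
paths = [[0.0] + [random.uniform(-2, 2) for _ in range(3)] + [1.0]
         for _ in range(3)]
for p in paths:
    amp = contribution(p)
    print(round(abs(amp), 6), round(cmath.phase(amp), 3))
```

Summing such contributions over all histories, with nearby smooth paths reinforcing and wild ones cancelling, is the "prescription" Wheeler describes to Einstein.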







