The word “predictable” first entered the English language two centuries ago. Its début came in neither a farmer’s almanac nor a cardsharp’s manual but in The Monthly Repository of Theology and General Literature, a Unitarian periodical. In 1820, one Stephen Freeman wrote a dense treatise in which he criticized the notion that human behavior—seemingly manifest “amidst the conflicting, boisterous, unreasonable wills of men, all acting, as they feel they do, their various parts with complete freedom of choice”—somehow existed outside the domain of cause and effect. Freeman (“free man,” no less!) argued, instead, that human consciousness and our perception of free will must be subject to chains of causation. “What but this certainty, this necessity, can render any event, even such as depends on the free-will of intelligent agents, predictable?” he asked.
This week, in the journal Nature, a collaboration of more than a hundred quantum physicists, distributed across twelve laboratories in eleven countries on five continents, turned Freeman’s formulation on its head. With the help of high-powered lasers, superconducting magnets, and state-of-the-art machine-learning algorithms, they concluded that “if human will is free, there are physical events . . . that are intrinsically random, that is, impossible to predict.” The group dubbed their experiment the Big Bell Test, after the renowned twentieth-century physicist John S. Bell.
The question at the center of Bell’s work is whether objects in the real world, including elementary particles, have definite properties of their own, independent of whether anyone happens to measure them. Quantum theory holds that they do not—that the act of performing a measurement doesn’t so much reveal a preëxisting value as summon it forth. (It is as though you had no definite weight until you stepped on your bathroom scale.) The Danish physicist Niels Bohr, writing in the nineteen-thirties, argued that the outcomes of quantum measurements were thus truly, inherently random.