My wife’s family was astonished when her Uncle Melvin showed up at our wedding more than 40 years ago. He had been in poor health for years and rarely ventured out of town, much less out of state. Then in his sixties, he was short, fat and sedentary; smoked cigars and enjoyed a drink now and then. I was told he had nearly died on more than one occasion. My wife and I dropped in on him when we were traveling in his vicinity a year or two later. I found him to be an affable host and a great storyteller, happily ensconced on his sun porch with a drink and a cigar. I remember thinking it was too bad I would probably never see him again, because I would have liked to get to know him better. As it turned out, I had plenty of opportunity. He stuck around for another 25 years or so, living into his nineties.
Although I was not yet connected to the insurance industry, you didn’t have to be an underwriter to see that no life insurer would prosper issuing policies on too many people like Uncle Mel, even if he were the exception that proved the rule. Of course, most people buy life insurance when they have young families to protect, long before cigars and drink and old age have taken their toll. In any case, as I later learned, underwriters are not primarily concerned with the fate of individuals. Their main task is to make certain that individuals are classified in the right group, based on their common risk characteristics, so that the insurance company can charge an appropriate premium. They rely on something called the law of large numbers, which relieves them of having to predict which particular policyholder will die prematurely, wreck his car or be sued when a tree in his yard falls on a neighbor’s roof.
Before the law of large numbers, people blamed their misfortune on fate or divine retribution. Then mathematicians figured out probabilities, initially in connection with games of chance. Common sense suggested that the more you rolled the dice, the more closely the cumulative outcome would conform to statistical averages. But it was hard to prove. (It was also tedious to check by hand, though that did not stop people from trying: the statistician Karl Pearson famously tossed a coin 24,000 times and recorded 12,012 heads, almost exactly half.) The mathematician Jacob Bernoulli worked out a proof, published posthumously in 1713, which has come to be known as the law of large numbers. His proof, which was 20 years in the making, concluded with this thought: “Whence, finally, this one thing seems to follow: that if observations of all events were to be continued throughout all eternity, (and hence the ultimate probability would tend toward perfect certainty), everything in the world would be perceived to happen in fixed ratios and according to a constant law of alternation, so that even in the most accidental and fortuitous occurrences we would be bound to recognize, as it were, a certain necessity and, so to speak, a certain fate.”
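Bernoulli’s conclusion is easy to see empirically with a short simulation. The sketch below (illustrative names and seed are mine) tosses a fair pseudorandom coin and shows the fraction of heads settling toward one half as the number of tosses grows:

```python
import random

def heads_fraction(n_tosses, seed=0):
    """Toss a fair pseudorandom coin n_tosses times; return the fraction of heads."""
    rng = random.Random(seed)  # fixed seed so the run is repeatable
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The fraction wanders widely for small samples, then settles near 0.5.
for n in (10, 1_000, 100_000):
    print(n, round(heads_fraction(n), 4))
```

No single toss is any more predictable for the averaging; only the aggregate behaves lawfully, which is exactly the point underwriters rely on.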
Although probability theory was originally developed for games of chance, there were obvious applications for insurance companies, which, after all, were in the business of offering protection against accidental and fortuitous occurrences. John Graunt, a pioneering demographer who started out as a haberdasher, was able to assemble the first mortality tables in 1662 by sampling the number of live births in London each year for the previous 60 years and determining how many of these individuals were still living. (It may say something about life spans in Graunt’s day that he did not bother to calculate mortality rates beyond age 60; Graunt himself died at 53.) The French mathematician and physicist Joseph Fourier, best known for his work on heat transfer, noted in the 1820s that average rates in Paris for births, deaths, suicides, marriages and even crimes remained remarkably steady from year to year. Building on such findings, the Belgian astronomer and mathematician Adolphe Quételet published a book in 1835 in which he introduced his concept of the “average man” (l’homme moyen), who is characterized by the mean values of measured variables that follow a normal distribution. Quételet wrote: “The greater the number of individuals, the more the individual is effaced and allows to predominate the series of general facts which depend on general causes according to which society exists and is maintained.”
One of the peculiarities of the law of large numbers, alluded to both by Quételet and by Bernoulli before him, is what the quantum physicist Erwin Schrödinger later referred to as the “order from disorder principle,” in which one can observe the “magnificent order of exact physical law coming forth from atomic and molecular disorder.” Nobody gets too worked up about this when it occurs at the molecular level or even when it involves the toss of a coin. But what are the implications when you are talking about human behavior? Presumably the individuals affected are not concerned with the mean values of measured variables when they decide to get married or commit a crime. The philosopher Immanuel Kant anticipated the central issue in 1784 when he noted that “the free will of humans has such a great influence on marriages, on the births that result from these, and on dying, it would seem that there is no rule to which these events are subject and according to which one could calculate their number in advance. And yet the relevant statistics compiled annually in large countries demonstrate that these events occur just as much in accordance with constant natural laws as do inconstancies in the weather…”
The suggestion that even the accidental and the fortuitous were subject to statistical necessity proved especially unsettling to those who believed in free will. But how do you counter the laws of probability with a moral imperative? Leading the charge against Bernoulli’s theorem – or at least against any deterministic implications thereof – was neither a theologian nor a philosopher but a mathematician who also happened to be a devout Christian. Pavel Nekrasov, a onetime seminarian who taught mathematics at Moscow University at the turn of the last century, seized on the fact that every flip of the coin is completely independent of the ones that came before and the ones that would come after. Statisticians like to say that a coin has no memory, meaning that it is no more or less likely to land on one side or the other just because it previously landed heads or tails a number of times in a row. There are no causal links between coin tosses. For Nekrasov, acts of free will were like the independent events in probability theory, each existing entirely on its own, even if it ultimately conformed to some statistical pattern.
Nekrasov’s mathematical defense of free will was based on the premise that the law of large numbers applied only to independent events, and since certain human activities like marriage and crime conformed to the law of large numbers, they must also be independent – i.e., the products of free will. However, Nekrasov’s rival, A.A. Markov, regarded such reasoning as an “abuse of mathematics” and set out to prove that the law of large numbers also applied to dependent variables. His most famous illustration used the distribution of vowels and consonants in Alexander Pushkin’s verse novel Eugene Onegin. These followed the normal rules of spelling and therefore were not wholly independent, yet they still conformed to the law of large numbers. The technique Markov developed, which is now called the Markov chain, has found wide application in the real world, including in the algorithm used in the Google search engine.
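Markov’s counting exercise can be sketched in a few lines of Python. The snippet below estimates the probability that a vowel follows a vowel, a consonant follows a vowel, and so on, from running text; for illustration it uses a short English sentence rather than the Russian of Eugene Onegin, and the function name and sample string are mine, not Markov’s:

```python
from collections import Counter

VOWELS = set("aeiou")

def transition_probs(text):
    """Estimate P(next letter class | current letter class) from running text,
    classifying each letter as vowel ("V") or consonant ("C")."""
    letters = [c for c in text.lower() if c.isalpha()]
    classes = ["V" if c in VOWELS else "C" for c in letters]
    pairs = Counter(zip(classes, classes[1:]))   # count V->V, V->C, C->V, C->C
    totals = Counter(classes[:-1])               # occurrences of each starting class
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

sample = "the law of large numbers also applies to dependent events"
probs = transition_probs(sample)
```

Because each letter’s class depends on the one before it (spelling forbids long runs of consonants, for instance), these events are plainly not independent; yet, as Markov showed, their long-run frequencies still stabilize, undercutting Nekrasov’s premise.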
In rescuing humanity from the iron grip of fate or divine retribution, have mathematicians merely imposed their own form of determinism? The founders of modern probability theory, including Bernoulli and Quételet, were mostly determinists. But it may be that concepts such as free will or determinism are too crude to capture the dynamic that applies when patterns emerge from singular events – Schrödinger’s “order from disorder” principle. As Uncle Mel’s case demonstrated, it is indeed possible for individuals to defy the odds. And yet, whether you call it free will or fate, the exception always seems to prove the rule.
Adolphe Quételet, On Man and the Development of His Faculties, or Essays on Social Physics (1835)
Immanuel Kant, “Idea for a Universal History” (1784)
