This is an extract from an unpublished paper. I first heard of this concept on the excellent Talking Politics podcast (RIP).
The Copernican principle (or Copernicus method) is a concept developed by the American astrophysicist J. Richard Gott, who first hypothesized it in 1969. It is a probability-based method for estimating the potential lifespan of an observable object using no information other than the object's age at the moment of observation.
The method divides the hypothetical lifespan of a thing into four quarters and assumes that the moment of observation falls at a random point within that lifespan; with 50% probability, the observation falls somewhere in the middle two quarters. This yields a lower bound and an upper bound on the thing's future lifespan. If one is observing the object at the very start of that middle period, three quarters of its lifespan remain, so the future is three times as long as the past. If one is instead at the very end of the middle period, only one quarter remains, so the future is one third as long as the past.
The canonical example used by Gott is the Berlin Wall. Here are the facts we know in this application:
- We visited the wall in 1969.
- The wall was eight years old at that time.
Thus the lower bound on the time left is given by 8 × ⅓, that is, two and two-thirds years. This means that the earliest date for the "end" of the object, i.e. the collapse of the Wall, is two-thirds of the way through 1971.
The upper bound, on the other hand, is given by 8 × 3, i.e. 24 years. This means that the upper bound for the fall of the Wall is 1993.
The Copernicus method would claim that there is a 50% chance that the actual lifespan falls within these bounds.
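The worked example above can be sketched as a small Python function (a minimal sketch; the function name and interface are my own, not Gott's):

```python
def copernican_bounds(age):
    """Return (lower, upper) bounds on an object's remaining lifespan.

    Under the Copernican principle, with 50% confidence the remaining
    lifespan lies between one third of the current age and three times
    the current age.
    """
    return age / 3, age * 3

# The Berlin Wall, observed in 1969 at age 8:
lower, upper = copernican_bounds(8)
print(lower, upper)                  # ≈ 2.67 years and 24 years
print(1969 + lower, 1969 + upper)    # ≈ two-thirds through 1971, and 1993
```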
As one can see, this produces a fairly wide interval: just over 21 years in the case of an 8-year-old object. A 45-year-old object gives an interval of 120 years. In general, for an object of age x the interval runs from x/3 to 3x, so its width is 3x − x/3 = 8x/3; equivalently, the age multiplied by two and two-thirds. So these bounds are very large, and remember they are always mediated by the condition that the probability of the true lifespan falling within them at all is only 0.5.
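The interval-width arithmetic is easy to sanity-check in Python (a quick sketch; the function name is mine):

```python
def interval_width(age):
    # Width of the 50% interval: 3*age - age/3, which equals 8*age/3.
    return 3 * age - age / 3

print(interval_width(8))    # just over 21 years for an 8-year-old object
print(interval_width(45))   # 120 years for a 45-year-old object
```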
A similar concept is the 'Lindy effect', first named by Albert Goldman. Goldman's concept was rather divorced from the one that is current now, though; the modern version finds its ancestor in Benoit Mandelbrot. The point can be crudely summarized in the following manner: the expected future lifespan of a thing is proportional to its past lifespan.
It's unclear whether the Copernican method applies to repeated 'observations'. For instance, if Gott were to revisit the Berlin Wall in, say, 1975, would the same calculation apply? Do we gain any more information from that six-year gap, beyond the fact that the wall is now 14 years old?
One can use this information to create heuristics for how long a piece of knowledge is likely to remain valuable. For instance, SQL was created in 1974, making it 48 years old as of this writing. The lower bound on the date of SQL's "end" is thus 2038, making it likely a worthwhile investment (though SQL is an extreme outlier in this scenario). As a matter of simple mental arithmetic: if one decides to focus on current media from 1992, such media has lasted thirty years, so the lower bound on its remaining lifespan under the Copernican principle is ten years.
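The heuristic can be sketched as follows, using the dates from the text (the function name is my own invention):

```python
def earliest_end(created, observed):
    """Earliest 'end' year under the Copernican lower bound.

    The object's age at observation is (observed - created); the lower
    bound on its remaining lifespan is one third of that age.
    """
    age = observed - created
    return observed + age / 3

# SQL, created 1974, observed 2022 at age 48: lower bound 2022 + 16 = 2038.
print(earliest_end(1974, 2022))
# Media from 1992, observed 2022 at age 30: lower bound 2022 + 10 = 2032.
print(earliest_end(1992, 2022))
```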
Of course, this only applies in the scenario where we lack other information. If we assume that certain allegations cast a shadow over Woody Allen's career, it may not be prudent to conclude, simply from the empirical fact of his films presenting themselves to consciousness, that the Copernican principle applies. The lifespan may be artificially shortened, or behave nonlinearly.