The Long View: Science and Cosmology

John was not a scientist. He never pretended to be one. I think that gave him a certain clarity of vision. I always respected his views on science, because he approached it as an interested outsider. This may be the shortest of his topical collections, but it is one of my favorites. This is an area where I had read most of the books before John reviewed them, but I still managed to learn a few things from him.

His review of a biography of Kurt Gödel is one of the more popular items he wrote, and it was influential in my own views of strong AI. John was also a bit skeptical of Stephen Jay Gould and Malcolm Gladwell, which increased my respect for him. Although I do feel a bit bad for Gladwell now that he isn't a media darling anymore. I guess I just don't like kicking a man while he is down.

Science and Cosmology

I suppose it is hard to have a broader interest than "cosmology." For some reason, I have always believed it to be a virtue to resist limiting my curiosity to things I might actually be able to understand. In any event, here are some pieces I have done about really, really big questions.

Being and Time (Martin Heidegger explains the world in terms of Death and Equipment.)
An Army of Davids (Glenn Reynolds argues for homebrewed beer and transhumanism.)
Two Scientists (Some thoughts on biographies of Albert Einstein and Marie Curie.)

Why post old articles?

Who was John J. Reilly?

All of John's posts here

An archive of John's site


The Long View 2002-06-02: The Menace in South Asia

Nagasaki

Even now that the Cold War is long past, most Westerners feel horror and shame at the thought of nuclear war. This is understandable, but the possibility of mutual assured destruction, and the intense cultural revulsion that possibility engenders in us, are the products of a particular time, place, and set of assumptions.

It took a great deal of time and effort to demonize all things nuclear. Immediately after the Second World War, there was intense optimism about harnessing atomic power for the good of mankind. For example, there was Operation Plowshare, which sought a way to turn the crude destructive power of the atom bomb to more mundane purposes, much the same way TNT and other explosives became tools of the construction and mining industries. The attitudes of 1950s America toward the power of the atom seem blithe to us now, but this is the direct result of a campaign to convince us of the utter horror and unwinnability of a nuclear war.

There was a losing side in that campaign, for which I feel some sympathy. While they lost the war of public opinion, they definitely won the actual Cold War. After the 9/11 attacks, Paul Krugman suggested creating an office of evil to help the government imagine horrible things, so we would not be surprised so badly next time. This role was long filled by men like Edward Teller and Herman Kahn, who were perfectly happy to think the unthinkable in order to better prepare for it. A later entry in the field was "The Strategy of Technology" by Possony, Pournelle, and Kane. They argued that a decisive advantage in war could be gained by the targeted pursuit of specific technologies, particularly in the Cold War, which was already a technological contest.

Arguably, this was in fact the strategy that impoverished the Soviet Union to the point where the dissent of client states like East Germany and Poland could fatally destabilize it. At present, however, the men who brought this about are more likely to be remembered for nuclear brinkmanship and warmongering than for successfully preventing the Cold War from turning into a hot one, and achieving victory as well.

What is perhaps even less well appreciated is how different the world is now from the peak of the Cold War. The US and Russia still have a lot of nuclear weapons, but the real worry these days is that some unpleasant little excuse for a country like North Korea or Pakistan will start something nuclear. It would be bad if they did, but to see MAD as the result is a failure of the imagination, or perhaps a success of propaganda. Look at the picture that heads this post, and imagine for yourself, "this is one of only two cities ever destroyed by nuclear weapons." And then try to believe your lying eyes.

The Menace in South Asia


There are three important points about the current confrontation between India and Pakistan. The first two are commonplaces. The third has not been addressed by policy makers, at least in public.

First, it is not likely that the fighting between the two countries will go beyond border skirmishes. This is not a situation like 1914 in Europe, when strategic plans had to be carried out like clockwork if they were to be carried out at all. Furthermore, the situations of the parties are not symmetrical. While Pakistan is perhaps most to blame because of its acquiescence in the use of its territory by militants, India would be the actual aggressor in a war. That country's friends and well-wishers have let the Indian government know that a war would delay India's accession to the ranks of the great powers.

Second, even if a serious invasion of Pakistan does occur, it is unlikely that the conflict will go nuclear. On the nuclear level, Pakistan would have to be the aggressor. It is hard to see what Pakistan could gain from that step. The use of tactical nuclear weapons to halt an Indian invasion could cause the Indians to escalate their goals from border security to the destruction of the Pakistani state. In any case, India will always be in a position to declare victory and withdraw. There is no necessary ladder of escalation.

Third, if there is a war and it does go nuclear, India is going to win decisively. Its traditional enemy will be dismembered and the fragments disarmed. The civilian casualties India would suffer, even in the worst-case scenarios, would be proportionately less than those suffered by Great Britain in the Blitz. The moral that the world would draw from a South Asian nuclear war is that nuclear wars are fightable and winnable.

The Cold War between the United States and the Soviet Union occasioned the creation, not just of new weapons systems, but of new disciplines in logic and political science. Those disciplines applied only in a historically unique situation of overwhelming firepower and comparably high levels of technical competence. Nuclear weapons began, however, as an incremental augmentation to the tactics of area bombing. A substantial amount of time passed before the Cold War competitors had the nuclear devices and the delivery systems that could threaten the existence of each other's societies. India and Pakistan are far from crossing that threshold.

Several countries around the world aspire to just the situation in South Asia, where the use of nuclear weapons is a rational option. An Indian victory would have obvious policy implications for Iran, Taiwan, the Koreas and even Japan.

Just yesterday, President George Bush made a speech at West Point in which he declared that deterrence is not enough. He is right, but few people have remarked on the scope of the police project he is proposing. Let us take a deep breath as we prepare to jump in.


The Long View: Gödel: A Life of Logic

Kurt Gödel

I like to think that I am re-posting all of John's blog as some sort of service to humanity, but really I just enjoy rediscovering gems like this one. John's review of a biography of Kurt Gödel has been definitive in shaping my opinions about AI and computation.

In short, I don't think strong AI is possible, and I think this is the true explanation of why computer scientists have spent the last seventy years looking for it without finding it.

Roger Penrose famously criticized strong AI in his book "The Emperor's New Mind." Wikipedia's summary claims that so many eminent scientists have criticized Penrose's position that it is effectively refuted, to which one might reply, "OK, then where are all the AIs?"

I think Penrose truly fails by looking for the mind in physics. He is really just embodying the spirit of the age, but it is a sad thing to see, given that he was perceptive enough to notice that the essence of thinking, abstraction, is not algorithmic.

There really is a similarity between what minds do and what computers do, but the real similarity makes AI less likely instead of more. Computers are the instantiation of the immaterial forms of Plato [thereby proving Aristotle right]. Ross's conference presentation also illustrates the dangers of treading outside one's field. I do it, I like to do it, but I am always aware that I can sound just as silly to others as they sometimes sound to me. Ross makes an off-hand comment in his presentation about the deadliness of dioxin, which was quite the trendy toxin for a while. Then the Russians tried to poison the Ukrainian presidential candidate Viktor Yushchenko with dioxin [a preview of the recent unpleasantness], presumably under the impression that it was exceptionally deadly, only to find that all it did was give him a bad case of acne. Oops. Maybe that was just a clever counter-intelligence ploy in the wilderness of mirrors, like the time we sold the Russians faulty gas equipment.

I'm not an expert in toxicology, but I at least need to know enough to be able to accurately communicate with the experts so I can demonstrate the products I design are safe. Dioxin isn't nice stuff, but the dangers were wildly overblown.

This was also the beginning of the end of my interest in Neal Stephenson's books. His environmental thriller Zodiac featured a plucky band of environmental crusaders who thwarted a plot to dump dioxin in Boston Harbor. I already knew that dioxin wasn't all it was cracked up to be, and once I noticed one thing that was a little off, I started to notice a lot of things that were a little off. Oh well.


A Life of Logic

by John Casti and Werner DePauli
Perseus Publishing, 2000
210 Pages, US$25
ISBN 0-7382-0274-6


Kurt Gödel (1906-1978) was the mathematician and logician whose now famous incompleteness theorem easily ranks among the most uncanny products of the notoriously uncanny first half of the European 20th century. This very brief book by two computer scientists does try to fit Gödel into the world of scientific Vienna in the 1920s and '30s. (The book started life as a program for Austrian television: there is a great deal of talk about mysteriously undecidable recipes for Sachertorte pastry.) The authors are more concerned, however, to explain the theorem itself, its relationship to the idea of computability, and the connection all these things have to such questions as the feasibility of artificial intelligence and time travel. This is an unmanageable amount of ground to cover, and the treatment is uneven. Still, simply addressing all these topics between two covers is an accomplishment. The authors provide a blessedly brief, ten-item reading list for those who want to look more deeply into the separate areas covered.

Gödel was born in the town of Brno, in what is now the Czech Republic, to a family that had grown wealthy from textile manufacturing. The Gödels were German-speaking. The authors tell us they were not Jewish, but we learn no more about confessional affiliation, beyond the fact Kurt was anti-Catholic all his life. Gödel entered the University of Vienna to study physics, but switched to mathematics after a few years. He soon became a member of the Vienna Circle, the influential group that sought to reduce all philosophical questions to problems of language.

Like Karl Popper and Ludwig Wittgenstein, who were more loosely associated with the Circle, Gödel was probably helped by his membership chiefly because it provided fodder for criticism. Indeed, few thinkers have ever been less interested than Gödel was in closing down metaphysics. If mathematical Platonism were a religion, Gödel would have been its Billy Sunday, his American evangelist contemporary. For Gödel, mathematical objects were as "given" as lumber. They are just another kind of semantic content of sentences. What Gödel did in his proof, the first published version of which appeared in 1931, was to show the weakness of syntax, the system by which semantic content is ordered. The incompleteness theorem shows that there are propositions we know to be true, but that are nevertheless logically unprovable. A slightly more rigorous formulation is that any logical system at least as complicated as arithmetic will be incomplete, because it will be able to produce statements that can be neither proven nor disproven within the terms of the system. The natural-language versions of the "Liar Paradox" are of this nature.

While Gödel was thinking these deep thoughts, the politics and economy of the German-speaking world were going to hell in a hand-basket. The failure of the Austrian bank, the Credit-Anstalt, in the same year as the publication of the theorem is usually blamed for blowing up the already stressed European financial system. Austria's First Republic, created when the Habsburg empire disintegrated after the First World War, collapsed into rule-by-decree in 1933. Nazi Germany annexed Austria in 1938. (This happened, it must be said, with the approval of most Austrians.) The Second World War began in 1939.

Gödel divided his time in those years between Vienna and the Institute for Advanced Studies at Princeton, New Jersey. The Institute, acting in large part under the influence of John von Neumann, served through most of the '30s as a haven for scientific refugees from Europe. Gödel was not neglected. He was offered and took several temporary appointments, but he kept going back to the University of Vienna. Although too unworldly to have ever engaged in politics, he did lose his license to lecture as a Privatdozent because of his connection with the Vienna Circle, which the Nazis regarded as too Leftist and too Jewish. However, he applied for and actually received a new license as a Docent of the New Order. It was only in 1940, when it was apparent he would be drafted, that he left Austria for good. He and his wife traveled east by train across the Soviet Union, then to Japan, then to the West Coast of the United States, and then to Princeton. His wife, Adele, did not like New Jersey, but they stayed permanently.

There are many legends about Gödel's antics at Princeton. This book gives us only a few of the best-known ones, such as how Einstein himself had to help calm Gödel down when the latter went to take the oath of citizenship. (It seems that Gödel had found a logical flaw in the federal constitution that would permit the creation of a dictatorship, and he insisted on telling the judge.) The most surprising thing to me, however, was that Gödel was actually a conscientious faculty member. His flaw was that he tended to obsess about the work of any committee on which he sat.

Although Gödel continued to produce significant mathematical results during his time at Princeton, he was never again as productive as he had been at Vienna. (His wife called the Institute "an old-folks' home," and she may have had a point.) In any case, his interests turned increasingly to philosophy. Gödel famously constructed an ontological proof of the existence of God (he was a great admirer of Leibniz, who had a proof of the same type), and an independent proof of personal immortality. (Karl Popper had one of these too, by the way.) We are told that Gödel was also interested in "the occult," but are given no specifics.

Gödel was paranoid, convinced that someone was trying to poison him. He therefore always made a great fuss about eating. When he died of what his doctor called "malnourishment and inanition," he weighed just 60 pounds. On the other hand, he also suffered throughout his life from some obscure gastro-intestinal disorder, so it is possible that an underlying basis for this behavior was simply never diagnosed.

Why should we care about crazy old Kurt and his annoying theorem? For one thing, it's immensely practical. The theorem, and Alan Turing's related Halting Problem that was developed at about the same time, are key to our understanding of what computer programs can and cannot do.
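The halting-problem side of this can be made concrete with a small sketch (my own illustration, not from the book): Turing's diagonal argument shows that for any claimed halting-decider, one can construct a program the decider must misjudge. The names `diagonal` and `troll` are mine.

```python
def diagonal(halts):
    """Given any claimed halting-decider halts(f) -> bool,
    build a program that defeats it."""
    def troll():
        if halts(troll):
            while True:   # decider said "halts", so loop forever
                pass
        # decider said "loops", so return (halt) immediately
    return troll

# Whatever a decider answers about its own diagonal program, it is wrong.
for verdict in (True, False):
    troll = diagonal(lambda f: verdict)
    if verdict:
        actually_halts = False  # troll would loop forever; don't run it
    else:
        troll()                 # safe: it returns at once
        actually_halts = True
    assert actually_halts != verdict
```

Since the fake decider here just returns a constant, the sketch only dramatizes the contradiction; the real theorem says the same trap closes on any decider, however clever.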

Perhaps the most interesting such question, covered at length in this book, is whether it is possible to construct an artificial computer intelligence. In "The Emperor's New Mind" (1989), Roger Penrose revived an argument based on Gödel's theorem against the possibility of an algorithmic machine mind. To put it briefly, Penrose pointed out that people can spot "Gödel sentences," true but unprovable propositions, that computer programs cannot detect. Thus, he reasoned, whatever else the human mind is doing when it spots these sentences, it is not computing. Refutations of Penrose are usually variations on the idea that Gödel's theorem applies only to consistent systems, and human beings clearly do not think consistently.

I should note that I find this argument mysterious. If human minds are being inconsistent when they do advanced mathematics, then how do we manage to reach the same conclusions consistently? In any case, even the most committed Artificial Intelligence believers have mostly abandoned the idea that a program for an artificial intelligence can be written. Now they hope to create Darwinistic, self-programming systems that will organize an intelligent entity spontaneously. Good luck.

Gödel's theorem serves in popular culture as a symbol of the supposed irrationality of reality. As the authors note, the theorem tends to be dragged out these days to "hit people over the head" with. The authors are too polite to point out that this most subtle of logical arguments is often employed by persons who cannot make any logical argument at all. Nonetheless, it is clear that the theorem and the body of study it makes possible are philosophically important, though people differ on just why. For me, the theorem is good evidence that the limits of language are not the limits of knowledge, or even of reason, broadly construed. This suggests that the world is objectively knowable. Surely this is a good thing.


Copyright © 2000 by John J. Reilly


The Long View 2002-05-02: Bill Clinton in 2005!

Another Constitutional law post from John. Since I'm not a lawyer, I'll refrain from commenting on the technical merits of his proposal other than to say it seems plausible to this non-specialist. I also think I remember a joke making the rounds a while ago about how W. was still eligible to run for re-election in 2008 since he wasn't really elected the first time.

Bill Clinton in 2005!

They tell a wonderful story about Kurt Gödel, the greatest of 20th century logicians. He fled Europe during World War II, and when he went to take the oath of U.S. citizenship before a federal judge, Albert Einstein himself came along as a witness. The judge chatted with his prominent visitors before the ceremony, unfortunately. Alluding to the collapse of law in Nazi Germany, the judge remarked that the Constitution prevented anything like that from happening in the United States. "Not true!" Gödel replied, and explained that he had found a logical flaw in the Constitution that could be used to found a dictatorship. It took Einstein two hours to calm him down.

Say what you like about the Clinton Administration, it did at least provide an eight-year tutorial in aspects of constitutional law that almost no one had ever heard of before. Indeed, the Clintons still have that effect, even though they left the White House almost a year and a half ago. Liz Smith, the gossip columnist, aired an argument in her column of May 7 for the proposition that Bill Clinton really could serve a third term. The notion is that, if Bill Clinton were elected vice president, presumably as number two on a Hillary ticket, he could succeed her if she did not serve out her term. Liz Smith has no pretensions to constitutional scholarship, and it is not clear who suggested the idea to her. Nonetheless, the argument is plausible.

The Twenty-second Amendment to the Constitution was ratified in 1951, in the aftermath of the administration of Franklin Delano Roosevelt, the only president to break with the tradition of the two-term maximum. (He sought and won four terms in office.) Common knowledge has it that the Constitution now prohibits anyone from serving as president for more than two terms. However, the Amendment does not quite say that:

Section 1. No person shall be elected to the office of the President more than twice, and no person who has held the office of President, or acted as President, for more than two years of a term to which some other person was elected President shall be elected to the office of the President more than once. But this Article shall not apply to any person holding the office of President when this Article was proposed by Congress, and shall not prevent any person who may be holding the office of President, or acting as President, during the term within which this Article becomes operative from holding the office of President or acting as President during the remainder of such term.

Note that this text does not address how long one may be president, but simply how one becomes president. It forbids anyone to be "elected" more than twice. Succeeding to the office is another matter, however. This provision does not, by its terms, forbid someone who has already been elected president twice from becoming president if the incumbent should die or resign. I might also remark that not only vice presidents can succeed to the presidency; a two-term president emeritus might be anywhere in the line of succession.

The Twelfth Amendment defines the operation of the Electoral College and how Congress should choose a president if the College does not give any candidate a majority. A seeming objection to the possibility of a president-for-life is offered by the last sentence of the Amendment, which says:

But no person constitutionally ineligible to the office of President shall be eligible to that of Vice-President.

At the time the Twelfth Amendment was ratified, the terms of eligibility in question were clearly those set out in Article II, Section 1, Paragraph 5, which require that the president be a natural-born citizen, at least 35 years old, and a US resident for at least 14 years. The Twelfth Amendment adds a further requirement that the president and vice president not be "inhabitants" of the same state. Did the addition of the Twenty-second Amendment add to the eligibility requirements?

Not by the letter of the text. The Twelfth Amendment is about how presidents are elected, not about who can serve. All we are told about "eligibility" is that it is the same for the vice president as for the president. If succession by a two-term president is possible under the Twenty-second Amendment, the Twelfth does nothing to change matters. But might the Twelfth Amendment make a former president ineligible to run for vice president? Probably not, because no provision of the Constitution makes someone who has been twice elected president "ineligible for the office of President." The Constitution simply forbids such a person to be elected yet again. If there is no such ineligibility for a president, then there is none for a vice president.

Even if my interpretation of the text were the only possible one, that would not settle the issue. A look at the statutory history of the Twenty-second Amendment might show that its drafters and the legislators who voted for it were all intent on ensuring that no one would ever again be president for more than eight years. In that case, a court asked to apply the Twenty-second Amendment would probably look to the intent of the Amendment, rather than to its literal terms. Of course, legislative history might also show that the drafters and ratifiers meant to leave open the possibility that an experienced gray head could serve again as president, presumably in some emergency when the government had been decapitated. When they spoke of "election," maybe that is what they meant.

The only place to look for precedents would be the states. I am not a great fan of term limits in any form, but many states have them. It is quite possible that just the question we have been considering has arisen before. State court decisions interpreting such statutes would not be binding on the federal judiciary, of course, but they might be persuasive. From what little I recall about the subject, I believe that the states have tended to interpret term limits narrowly rather than broadly. In other words, if an incumbent makes a plausible argument for why a term limit should not apply, the courts will usually accept it.

I doubt that the particular anomaly we have been considering is the one that Kurt Gödel was thinking about. I am also pretty sure that Bill Clinton has no intention of running for vice president in 2004, or in any other year. Still, it may someday be important that the rules for succession to the presidency are looser than those for election. Constitutional law is full of surprises.


The Long View: The Physics of Immortality

I'll freely admit this book review jaundiced me against Tipler's book. I'm still convinced this is not a bad thing. I brought this up in a philosophy course once, and my professor chided me. I'm still convinced this is not a bad thing. Tipler is an advocate of the idea that the universe may be a simulation. I regard this as at best unproven, and at worst ridiculous. Tipler uses this idea to explain how resurrection is just a complicated algorithm. I rather think he missed the point, but some rather smart people agree with him. On the plus side, Tipler's favorite dead theologian is Aquinas. While I do think Thomas got some of the foundational ideas of science right, I still think Tipler misses the point.

As a fair warning, my physics education stopped at the undergraduate level. My philosophical education stopped partway through the master's level. I'm an amateur, and I like it that way.

The Physics of Immortality
by Frank J. Tipler
Doubleday, 1994
ISBN: 0-385-46798-2 $24.95

Cultures have their insistences. Navajos, I am told, tend to leave unfinished some little detail of any work they do, just for good luck. Thus, a geometrical design will have a corner undone, or a familiar story will be told on any given occasion with a minor incident omitted. The Bolshevik regime in Russia was vehemently anti-religious, yet its leaders found it perfectly natural to embalm and perpetually display the body of Lenin, for all the world like the incorrupt body of a Russian saint. America too has its insistences, features of its culture which are often invisible to the natives but the most striking characteristics of the country in the eyes of foreigners. America, we know from earliest report, has always managed to be both extremely religious and implacably antimetaphysical. Thus, America is the world capital both of textual literalism in religion and of science ambitious to prophesy. Without careful watching, Americans will tend to reduce metaphysical questions to engineering problems, all the while believing that they are resolving real metaphysical difficulties.

A particularly vivid example of this tendency is provided by Frank J. Tipler's recent book, "The Physics of Immortality." In this book Dr. Tipler, Professor of Mathematical Physics at Tulane University, purports to demonstrate scientifically the existence of God, the resurrection of the dead and the moral coherence of the universe (indeed, of all universes, since the author is an adherent of the "Many Worlds" interpretation of quantum mechanics). "The Physics of Immortality" sets out an amplified and more extreme version of the speculations about the fate of the universe which appeared in "The Anthropic Cosmological Principle" (1986), a highly influential work which Dr. Tipler co-authored with the British astrophysicist, John D. Barrow. The gist of the earlier book, at least as I understood it, is that we are living in a very improbable universe. If any of the physical and mathematical constants on which physical reality depends were only slightly different, there would not only be no human race, there would be nothing worth mentioning. The Anthropic Principle is that, despite the modern cliche that we live in a hostile world unconcerned with human happiness, in reality the structure and history of the universe are friendly to man. Indeed, "The Anthropic Cosmological Principle" also claimed to prove that man is the only intelligent species in the universe, and will be the only progenitor of the greater intelligences yet to be. In "The Physics of Immortality," the author explains how the universe can be this way and what its future must be.


The Long View: The Fate of Noospheres

John was an accomplished essayist. His book reviews really are long form essays inspired by the book he was reading. Often I learned things from his book reviews that weren't contained in the book he reviewed. In my mind, John exemplified the ideal of a liberal education. He had his areas of expertise, but he was not unfamiliar with most of the major currents of thought in the Western world. Homo sum, humani nihil a me alienum puto.

John wrote this essay in 1997. It exhibits many of the themes you can find in his later work. Interest in fundamental questions in science. An ability to integrate science with the liberal arts. A certain sense of humor founded in a partially cyclical view of history. A Thomistically informed sense that formal causes explain many of the interesting features of modern science.

As best I know, John was a fan of Jerry Pournelle, as am I. Pournelle has admitted that he was influenced by cyclical theories of history, and that he is something of a Thomist. He has written books in which the survival of the human race depends upon colonizing other worlds. You can see that kind of thinking in The Fate of Noospheres: universal states that emerge cyclically, a narrow window in which we can spread ourselves to the stars, the propensity of man to ruin himself. John was heavily influenced by the apocalyptic [science] fiction of the 1970s.

You may notice that there is a note at the top of John's page that states that this item has been anthologized. I do not know whom John designated as the beneficiary of his estate. I think John's work should be widely read and appreciated. I think you should order the book, Apocalypse & Future, which contains this essay and many others. I just don't know whom, if anyone, benefits from its sale. Since John was a lawyer, I assume he took care of this. I just don't know the details.


Fifty years ago, Enrico Fermi formulated what to many people still seems to be the definitive argument against the existence of intelligent extraterrestrial life: "If they existed, they would be here." This essay argues that there is an explanation for the lack of apparent extraterrestrial intelligence other than nonexistence. I will also discuss some other explanations commonly put forward for why we would be unlikely to hear from alien civilizations, even if they did exist.

In essence, I am expanding on the hypothesis of the Jesuit paleontologist, Pierre Teilhard de Chardin (1881-1955), that the development of intelligence should be understood as a natural stage in the development of Earth's biosphere. Teilhard called this stage the development of the "noosphere," or region of mind. It is analogous to the biosphere, and so should also be understood as an ecology in which new emergent entities appear. The notion of the noosphere has undergone something of a revival in recent years, since the Internet has many of the characteristics Teilhard ascribed to this supposed theater of evolution. Though Teilhard's general theory of evolution has been criticized, perhaps rightly, for positing unnecessary vitalistic forces, nevertheless the basic outline of his model of history may tell us something important about the fate of our own noosphere, and by implication about the common fate of the noospheres of other planets.


Are there in fact any extraterrestrials?

Many other people, of course, have long claimed not only that the extraterrestrials are here, but that they have been personally assaulted by them. Putting aside the claims of UFO enthusiasts, however, the fact remains that Fermi's critique is acute. If species comparable to the human race occur at multiple times and places in the history of the universe, then they or their automata should have reached Earth a long time ago. This would be the case even if such species were very rare and interstellar travel were very difficult.

The rationale for this conclusion is simply a traditional Darwinian appeal to large numbers. That is: even evolutionary events that are vanishingly improbable at any one time become nearly inevitable if enough time is provided for them to occur. There is a large class of stars similar to Earth's sun in mass and composition, many of whose members are also billions of years older than Earth's sun. Any hypothetical planetary systems that circle these stars will be similarly older than Earth. A billion years is probably enough time to figure out how to do anything that is physically possible (certainly it is enough time for biology to do most things by accident). Sending machines between stars is physically possible, and at some point or points in the cultural history of an intelligent species, it will seem like a good thing to do. Various engineering schemes for making such exploration self-sustaining through the use of self-replicating robots have been proposed from time to time, as have estimates of how long it would take such a process to expand across a galaxy, or even the observable universe. (The most thorough treatment of the subject with which I am familiar can be found in Barrow & Tipler's "The Anthropic Cosmological Principle.") The upshot of these analyses is usually that it would take an intelligent species a remarkably short fraction of the age of the universe to make its presence conspicuous just about everywhere.
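The arithmetic behind these analyses can be sketched in a few lines. Every parameter value below (probe speed, hop distance, replication pause) is an illustrative assumption of the same order as those in the literature, not a figure taken from Barrow & Tipler:

```python
# Back-of-the-envelope version of the colonization-timescale argument.
# All parameter values are illustrative assumptions, not data.

def colonization_time_yr(galaxy_diameter_ly=100_000,  # Milky-Way scale
                         hop_ly=5.0,                  # distance to the next star
                         probe_speed_c=0.01,          # probes travel at 1% of c
                         build_time_yr=500.0):        # replication pause per colony
    """Years for a wave of self-replicating probes to cross the galaxy."""
    hops = galaxy_diameter_ly / hop_ly
    per_hop_yr = hop_ly / probe_speed_c + build_time_yr  # travel + rebuild
    return hops * per_hop_yr

t = colonization_time_yr()
print(f"{t:.1e} years to cross the galaxy")       # 2.0e+07 years
print(f"{t / 13.8e9:.2%} of the age of the universe")
```

Even with probes limited to a hundredth of lightspeed and a five-century pause at every stop, the wavefront crosses a Milky-Way-sized galaxy in tens of millions of years, a rounding error against the billions of years available.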

There is no evidence that this has happened on even one occasion, however, much less the multiple occasions that would be expected if inhabited worlds were numerous. While this is a question for which the absence of evidence can never be definitive evidence of absence, still there has already been enough research to call into grave doubt the most optimistic estimates for the number of intelligent species in the universe. There are several SETI (Search for Extraterrestrial Intelligence) projects, which have done or are now completing large radio telescope surveys of the whole sky and of our stellar neighborhood. The results should reveal whether there is in fact any planet within hundreds of light years, other than Earth itself, that is transmitting radio signals. The size of the samples is large enough that a "no" answer would be significant information. (So, of course, would a "yes," in which case this essay goes into the historical file of elaborately reasoned false hypotheses.) Every day that extraterrestrials go undiscovered increases the probability that there are none.

The problem with the mounting evidence that there are no extraterrestrials is that it should not be true. There is no good reason why intelligent life should not have arisen elsewhere than on Earth. However, there are a number of bad reasons why this should be the case.


Is the Earth a freak?

Perhaps the easiest objections to dispose of are those that argue that planets like Earth are themselves very rare. It is not hard to show that, if Earth's position from the sun were slightly different, or the surface temperature of the sun marginally higher or lower, then life on the surface of the Earth would be impossible. Similar arguments note that the gravitational pull of Earth's unusually large moon must have had an effect on the density of the atmosphere over time. Surely an otherwise similar planet without a similarly improbable companion would have an atmosphere like that of Venus.

The problem with these arguments is that they assume the factors deemed essential for life have been invariable over time. The history of the solar system is otherwise. The sun has been getting hotter throughout its lifetime, for instance, and the moon's orbit has been slowly expanding. The factors that make the Earth habitable are not independent, but affect each other. The non-mystical version of the Gaia hypothesis is almost certainly true: the atmosphere and the biosphere interact in such a way as to keep the surface temperature of the planet in a narrow band between the freezing and the boiling points of water. When conditions change in such a way as to tend to move the temperature out of this band, then the composition of the atmosphere changes to move it back. Why this happens is somewhat mysterious, but the existence of such a compensating mechanism does suggest that planets like Earth could exist within a wide range of masses and a wide range of distances from their suns.

No one argues that every planet with basic biology would necessarily go on to develop intelligent life. Speaking only with regard to the solar system, the best candidate for extraterrestrial life at this writing (July 24, 1997) is the Jovian moon Europa. This relatively small body appears to have a water ocean, in which it is easy to imagine life-forms existing like those at the volcanic vents at the bottoms of Earth's oceans. However, living as they would in lightless, aquatic conditions, it is very difficult to see why such creatures should progress further than their analogues on Earth. When we are talking about intelligent life, we are probably talking about life on fairly massive, dense bodies like the inner planets of the solar system. Any intelligent species is likely to be a land animal large enough to support an elaborate nervous system. The objections most difficult to answer, because so few of the questions involved are testable, are those which hold that animals should rarely if ever progress to tool-and-speech intelligence like that possessed by human beings.


Evolutionary objections to intelligent life

Many of the biological objections have been provided by evolutionary theorists such as Stephen Jay Gould. This approach to evolutionary history is a conscious attempt to combat the highly linear, progressive models of evolution that come to us from the nineteenth century. Gould's model of the biosphere as an expanding "sphere" of ever increasing biological diversity obviously has a lot to do with the relativist critical-literary theory that became popular in the closing decades of the 20th century. What we are really looking at is the ideology of ethnic and gender "diversity" being read into the history of the planet. If no civilization or culture is better than another, then neither is one biological lineage more central to the process of evolution than any other. Since man and peacocks, for example, both exist on Earth at the same time, this approach assumes that evolution is no more directed toward one than the other, and so that neither was more or less likely to have chanced into existence.

Scientists have to get their ideas from somewhere, so it is no objection to a scientific theory that it chimes with other notions that were abroad in the culture at the time the ideas were being conceived. Indeed, it would be odd if scientific inspiration worked any other way. What makes science different from most other human enterprises, however, is that such fashionable theories can be shown to be true or false on their merits.

Essentially, the "diversity theory" approach to the evolution of intelligence consists of reciting the details of the biological history that led to the human race in an aggrieved tone of voice, noting at every new development how improbable it was. Certain features of living organisms evolved only once, which suggests that they are hard to evolve, and so might not have evolved at all. By cataloguing these improbabilities in the biological history of the human race, some people argue that it can thereby be shown that such a sequence of accidents is highly unlikely to occur anyplace else in the universe.

Even analytically, there is something fishy about this school of thought. To some extent, the argument is a little like asserting that a pinball will stay in play forever, because any particular path it could travel to get to the hole at the bottom of the board is so unlikely. However, as we have already noted, unlikely results can nevertheless be inevitable. Furthermore, it is not at all clear that just because something happens once we may conclude that it was intrinsically unlikely to happen. To take the most glaring example, all the basic animal anatomies that exist today, indeed all that have ever existed, appeared within a remarkably short period of time about half-a-billion years ago. The reason seems to be that only so many basic structures are possible to multicellular organisms (keeping in mind that creatures as different as mice and whales can be said to have the same basic structures). There may be something about the chemistry of long molecules which dictates that many kinds of animals should have five digits, but that there should be no six-legged animals with backbones. Brian Goodwin, in his book How the Leopard Changed Its Spots, went so far as to argue that the real seat of heredity is in protein chains. Nuclear DNA may describe the structures of these chains, but actual organisms arise from the twisting and transformations they undergo for reasons of molecular geometry. If these things are in fact the case, then the structure of complex organisms is much less a matter of chance than has conventionally been assumed.
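The point that unlikely results can nevertheless be inevitable is easy to make quantitative: if an event has probability p in any one year, the chance that it happens at least once in n years is 1 − (1 − p)^n. The one-in-a-billion per-year figure below is an arbitrary assumption, chosen only to show the shape of the curve:

```python
# Chance of at least one occurrence of a rare event over long spans.
# The one-in-a-billion per-year probability is an arbitrary assumption.
p_per_year = 1e-9

for years in (1e6, 1e9, 5e9):
    p_at_least_once = 1 - (1 - p_per_year) ** years
    print(f"{years:.0e} years: {p_at_least_once:.3f}")
# prints roughly 0.001, 0.632, and 0.993: vanishingly unlikely in any
# one year, but close to certain on geological and cosmic timescales.
```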

What is true for the development of individual organisms is at least as true for the development of the biosphere as a whole. A probably unanswerable objection to the hypothesis of a directionless evolutionary process was presented by Kenneth Boulding in his book, "Ecodynamics." Boulding simply noted that food chains are in fact pretty pyramidal, with height on the pyramid corresponding roughly to things like size and behavioral flexibility. The creatures that live in the pyramid are simply the incidents of evolution; in principle, and often in fact, any given lineage can move down as well as up in the food chain. Smart predators, in other words, can become the ancestors of dumb bottom feeders. The pyramid itself, however, is a structure of biological niches, of ways for organisms to make a living. Obviously, early in evolution, the pyramid will not be filled up very high, since no creature will have any great degree of sophistication. In any case, the bulk of the pyramid will always be at the bottom; it will always be "The Age of Bacteria." (For that matter, as long as the stars shine, the universe will always be in "The Age of Hydrogen.") However, the pyramid will fill up closer and closer to the summit, simply because that is the one direction in which there is no competition. Eventually, probably within fairly narrow constraints of time, there will come a creature that can take advantage of the top niche, which is defined by language and tool use. This should be the pocket history of a myriad of worlds.

Finally, one may note that it is rarely a good idea to assume that you are unique. There is an argument against the existence of extraterrestrial intelligence which goes, roughly, that just because the chance is 50 million to one that any randomly selected person in Great Britain will be the Queen of England, this does not mean that there is no Queen of England. Earth, according to this logic, is the vanishingly rare Queen of England. The short (and maybe also the long) answer to this reasoning is that most people who think they are the Queen of England are folk who are off their anti-psychotic medication.

Let us therefore assume that there are and have been many planets in the universe roughly like Earth. Let us further assume that on all of them one or more species evolve that are roughly like human beings, in the sense that they occupy analogous ecological niches. (Under this definition, bumble bees are "like" humming birds, since they are in the same line of work. There may be comparable differences among intelligent species, but then again we may find that evolution is less imaginative than we are.) The fact that we do not observe them would then have to be explained by some historical development that prevents them from getting into space. Indeed, since our own civilization is in space to only the most limited degree, we cannot know for a fact that any civilization, ever, spreads across the universe, our own included. It might throw some light on the subject if we consider the possible scenarios under which we ourselves might remain substantially confined to Earth.


Do intelligent species destroy themselves?

A theory that had some popularity a few years ago might be called the "atomic suicide" hypothesis. The earliest instance of this proposal of which I am aware is a story by Isaac Asimov, entitled "The Gentle Vultures." The premise was that all intelligent species are primates and that all destroy their civilizations in atomic wars a few generations after a local industrial revolution starts. The only exception is one non-aggressive species (the "gentle vultures" of the title), who travel about the universe saving the survivors of these catastrophes and incidentally doing very well by themselves. The story deals with the problem presented by the fact that the human race, though just as aggressive as the rest, forestalled its destruction by chancing on the political device of the Cold War, and so threatened to escape into space.

Carl Sagan, in his later years, seemed to adopt a thesis like that of "The Gentle Vultures." It particularly attracted him because of his hypothesis, based on his comparative study of planetary atmospheres, that a general nuclear exchange would so obscure the sun as to put not just human life but terrestrial life in general in jeopardy. This was the celebrated "nuclear winter" hypothesis. Sagan remarked that one of the things the discovery of an artificial extraterrestrial radio source would tell us is that it is in fact possible to survive an early nuclear era. Even so, the possibility would remain that very few species succeed in doing so. This was one of Sagan's answers to Fermi's question.

While impossible to simply dismiss, at this writing the atomic suicide hypothesis looks more and more like a period-piece. For one thing, Sagan's "nuclear winter" hypothesis has not stood up to examination. (On the other hand, it was helpful in developing estimates of the effects of asteroidal impacts, a subject that became popular at about the same time.) More generally, one may note that a persistent feature of Western culture since the late nineteenth century has been the formation of collective images of scientific apocalypses, of which nuclear war is only one example. (Pandemics have already in large part displaced nuclear war in fiction.) Now, it is quite possible that images of apocalypses may eventually occasion one. Nevertheless, there is reason to believe from our own experience that this danger is intermittent, since not all eras are apocalyptically-minded. A species would have only to create a handful of distant colonies for this class of threat to its survival to be almost entirely eliminated.


Are intelligent species imprisoned by persistent Dark Ages?

The more subtle arguments for why many species never get into space are cultural. On Earth so far, after all, only one technological civilization has appeared, and its interest in space is both recent and shaky. This suggests that such societies are "hard to evolve," or at any rate rare. Certainly the science and technology of the West are creatures of a context of religion and geography and climate that was entirely fortuitous. If man does not begin the process of colonizing space during the Western era, it is not clear that a similar opportunity will arise again.

Even within the lifetime of Western civilization, there may be only a limited window of time in which to begin the process. Space flight is a form of exploration, and we know that civilizations on Earth have generally been interested in exploration for only limited portions of their histories. If you posit a purely cyclical model of history, then it is easy to imagine a future stage in the history of the West that will be as closed and inward-looking as Ming China. It is conceivable that human history could result in a radically conservative planetary society that would freeze over and never melt. Such societies elsewhere in the universe could even be invisible to radio telescopy, if they put their communications on cable and other point-to-point technologies. The sky could be full of what are in effect prison-planets, and we would never know about them. Especially not after we joined their number.

The problem with the proposition that technological civilization is an unlikely fluke is that, as in biology, unlikely flukes can nevertheless be inevitable flukes, if you wait long enough. For that matter, it may be that the apparently fortuitous origins of technological civilization are in fact simply a mask on a deeper determinism. Arnold Toynbee's universal model of history has been justly criticized on the details. Still, his basic insight that civilizations on Earth have appeared in "generations," each larger and more technologically capable than the last, remains perfectly valid. Cultural history, like evolutionary history, really is progressive, if by progressive you mean that emergent entities of an increasingly superior order tend to appear as the system ages. The willingness and the ability to get into space are thus likely to be features of every world's historical process.

Toynbee's conclusion that particular civilizations usually congeal into oppressive "universal states" does nothing to alter his essentially melioristic view of history. Universal states, polities such as the Roman Empire or Han China that encompass the whole of an old civilization, are quite mortal. They last about 500 years and then collapse. Teratoidal states, such as the one satirized in George Orwell's "1984," also occur in history. They are characterized by pretensions to universality and the ambition to remake human nature. Asoka's fantastically over-policed Mauryan Empire was one example. The Soviet Union was another. Neither lasted more than a few generations. Whatever other dangers they may pose, the ability to stop history is not one of them.

So, once again, we are faced with a puzzle. Intelligent life should evolve elsewhere in the universe. There is no reason to suppose that intelligent species are particularly likely to destroy themselves or to become culturally trapped. So where the hell are they?

The obvious solution, of course, is that the colonization of the universe is not in the natural order of things. To put it another way, the life cycle of the typical noosphere does not include extensive expansion beyond its own planet. Evolution has a goal, but it can be achieved locally.


The Omega Point

Speaking only of the fate of the Earth, Teilhard believed that the noosphere would collapse into what he called the "Omega Point." His argument, essentially, is that evolution is largely a product of feedback. Not just individual organisms evolve, but whole living systems evolve as their members interact with each other. What is true of biology is even more true of culture: human history accelerates as communications become easier and horizons broaden. When the horizon has moved right around the world, that is, when there is a single world culture, then the process can proceed only by becoming more intense.

"An intensification of what?" The lack of an answer to this question is perhaps the chief conceptual problem with Teilhard's system. He spoke of "consciousness" and "complexity" rising to ever greater heights. What these things might be, neither he nor his readers have ever been entirely sure. Teilhard himself did not seem to think that cybernetics was essential to the evolutionary process, so he was not talking about something as simple as ever faster information processing. On the other hand, he is alleged to have been interested in the possibility that psychic phenomena may be a hint of some emergent property that would transform human life at a later stage. Whatever the mechanism, Teilhard suggested that progress of this sort proceeded to an infinity in a finite length of time, an asymptotic limit perpendicular to the timeline of history that he called the "Omega Point."
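The idea of a quantity "proceeding to an infinity in a finite length of time" is not mathematically exotic. As a purely illustrative analogue (not anything Teilhard himself wrote down), growth whose rate feeds on itself quadratically blows up at a finite time:

```latex
% Hyperbolic growth: the solution diverges at t^{*} = 1/(k x_0).
\frac{dx}{dt} = k x^{2}
\quad\Longrightarrow\quad
x(t) = \frac{x_0}{1 - k x_0 t},
\qquad
x(t) \to \infty \ \text{as} \ t \to t^{*} = \frac{1}{k x_0}.
```

An Omega Point in this sense is simply the vertical asymptote at t*: an ordinary differential equation, not a mystical posit.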

Though he did not draw the analogy himself, many writers have since noted the similarity between Teilhard's eschatology and the collapse of stars above a certain mass into spatial singularities, which form astronomical "black holes." These, too, involve real infinities (at least as observed from the outside) that form in a finite length of time. Thus, we know that the concept of a singularity has relevance beyond pure mathematics.

Does it also have relevance to history? If something like this is also the goal of planetary histories, then alien civilizations would essentially disappear from the universe, probably within a few centuries of achieving a world society. Teilhard did once suggest that perhaps other worlds moved toward an Omega Point, as he believed the Earth was doing. However, he did not elaborate on the idea.


Variations on the idea of a historical singularity

Curiously, some of Teilhard's strongest critics have adopted his terminology and the general outline of his model of history. The cosmologist Frank Tipler, for instance, is a strong believer in computational artificial intelligence who therefore has no trouble describing what an "infinite mind" would be. It would be a computer that worked infinitely fast with an infinite degree of processing power. In The Physics of Immortality, he proposed that such an entity will in fact occur in the approach to a singularity at the end of the universe. Precisely why analogous entities should not occur in connection with lesser singularities is not at all clear.

While Tipler's ideas excited more comment than concurrence, there is also a class of cyber-enthusiasts called transhumanists who likewise equate consciousness with computation, but who hold that a "singularity" will happen locally, on Earth, at no distant date. This notion of a cyber-Omega Point is apparently well-enough known to be the object of satire. It seems to come in two flavors. There are those who argue that the singularity will remove the human race from the observable universe, after the manner of Teilhard's theory, while there are others who interpret "singularity" to mean only a point of inflection on the graph of history. The singularity then becomes the period of maximum rate of change. After it, the human race will be utterly changed, perhaps into something post-human. However, the critical period itself will probably be several years long, not a dimensionless and rather mystical point. It is interesting to note that this speculation reproduces, in all essentials, the theory propounded by the historian Henry Adams at the beginning of the 20th century (see his grimly-titled collection of essays, "The Degradation of the Democratic Dogma"). The difference is that Adams had history ending, in the sense of reaching maximum acceleration, sometime in the 1920s, whereas the transhumanists of the 1990s point to a date around the middle of the 21st century.
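The reading of the singularity as the period of maximum rate of change is just the midpoint of an S-shaped growth curve. A minimal sketch, with wholly arbitrary parameters:

```python
# On a logistic (S-shaped) curve, change is fastest at the midpoint
# and slows thereafter; the parameters here are arbitrary.
import math

def logistic(t, limit=1.0, rate=1.0, midpoint=0.0):
    return limit / (1 + math.exp(-rate * (t - midpoint)))

# Numerically locate where growth is fastest (central differences).
ts = [i / 100 for i in range(-500, 500)]
rates = [(logistic(t + 0.01) - logistic(t - 0.01)) / 0.02 for t in ts]
peak_t = ts[rates.index(max(rates))]
print(peak_t)   # 0.0 -- the midpoint, after which change decelerates
```

On this view, history does not end at the "singularity"; it merely stops accelerating, which is exactly the Adams-style point of inflection described above.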

Adams and the transhumanists seem to be suggesting that mankind would soon accelerate to the maximum technological capacity allowed by physics and then coast forever after. A singularity in this sense would only exacerbate the "where are they?" problem. Unless there is something we don't know about the difficulties of space travel, the laws of physics almost certainly allow for interstellar exploration, even if the fastest possible voyages between neighboring stars are years long. If every noosphere has a history that culminates in an Adams-like point of inflection, then they should explode into space early in their development. Again, they should be here. We should be them. Something is wrong.

The notion that the world will undergo some psychic revolution that will end history as we know it is scarcely new. Its chief expression in the 20th century was doubtless Arthur C. Clarke's novel, "Childhood's End." In that book, the advent of the Omega Point does not just end history, it blows up the planet. That book was, fundamentally, a horror story that put the worst possible interpretation on its premise. Still, something like the historical model it describes would have to be true in order to answer Fermi's question. If you accept Teilhard's view of the matter, perhaps the end of the world is not as bad as we have been led to believe.



This article appeared in the April 1998 issue of Ted Daniel's Millennial Prophecy Report 

Copyright © 1997 by John J. Reilly


A golden age of engineering

In a Twitter exchange with John D. Cook, John mentioned to me that he "heard it said we live in a golden age of engineering and a dark age of science".

That really got me thinking: do we live in a golden age of engineering and a dark age of science? I've spent some time pondering similar questions on this blog, but I suspect my thinking has changed recently.

On the affirmative, Moore's law has been in steady operation for fifty years. Computers really do keep getting smaller, faster, and cheaper. I can do things as an engineer that my predecessors would find fantastical. I can design an object using that cheaper computer, and have it 3-D printed in a week or less. Lots of people have heard of 3-D printing with plastic, but I do it with metal. Not only is the field of engineering more capable, it tackles bigger goals now.

I work as a manufacturing engineer. I'm not only expected to make a product that works; it needs to keep the customers safe, and also the people who work for me making it. I need to make sure the medical device I design is bio-compatible. I need to make sure the handle of the device is comfortable in the surgeon's hands. I need to make sure the manufacturing process is ergonomically friendly to the expected variation in human body sizes. And you still need to make money. The body of knowledge I have to integrate is way more complicated than that contemplated by any Victorian engineer. I need the assistance of many more domain experts who have studied these things and turned them into bodies of knowledge that deserve to be called science.

On the negative, we haven't been to the moon in forty years. Granted, we don't spend that kind of money on science [engineering] projects anymore. In inflation-adjusted dollars, a moon mission launch alone cost $2.7 billion, and the whole project adjusted for inflation was $170 billion. We spent about $2.7 billion on the Human Genome Project, and that was considered a little steep at the time.

As for technology, we lack the flying cars and interplanetary travel that have been staples of science fiction for nearly 100 years. It is not uncommon to see the claim that either technological or scientific progress has slowed in the twentieth and twenty-first centuries. Scott Locklin posted a typical example on Taki's Mag in 2009, The Myth of Technological Progress. John Horgan posted a slightly different take just this week on Scientific American, focusing mostly on the poor quality of some contemporary research papers. Bruce Charlton went the furthest, claiming that intelligence in the West has declined over the last 125 years or so.

Ultimately, I don't really believe that technological or scientific or intellectual activity has markedly slowed. I do believe we are interested in different things than we used to be interested in. Science has turned inward, and produces more and more technical work with a narrower audience. Engineering, which is really part of the world of business, produces some science, but modern business is fiercely focused on the bottom line, typically with a very short time horizon.

I have posited a cocktail party theory that science suffers from a lack of experience with practical problems. I suspect there is a synergy between engineering and science that made the scientific revolution possible. Ancient Greek science and modern science are both pretty good, and focus on understanding things for their own sake. However, that explosion of mental activity we now call the scientific revolution came about because science turned from knowledge for its own sake to useful knowledge that allows us to bend nature to our will. We now seem to be turning back to knowledge for its own sake. As an Aristotelian myself, I can't really fault this. However, it probably means that what scientists think of as an interesting problem and what the rest of society sees as an interesting problem will diverge. Ultimately, this probably means that the golden age of engineering will end too, because mental effort will be focused elsewhere.

The STEM Crisis is a Myth

Robert Charette at IEEE Spectrum has a really good piece on the unreality of a shortage of workers with an education in science, technology, engineering, and mathematics [STEM].

Charette looks at all the different ways in which STEM jobs and STEM workers are counted. Different agencies count in different ways. He also looks at this phenomenon over time in the US, and at how it plays out currently in other countries. India is apparently concerned it doesn't have enough STEM graduates, which strikes me as funny, since US government policy is to suck as many STEM workers out of India as possible.

I particularly liked this graph:

STEM Shortage

Bioshock Infinite videogame review

It's for the children

When I told the Magistra I had played Bioshock Infinite, and that I was shocked by its graphic violence, she was surprised. "I didn't think you would play another Bioshock game." After the review I gave Bioshock, I suppose I'm not surprised my wife would say that. I called the game disturbing, and I felt afterward that it might have scarred my soul a little bit. When I was younger, I didn't think much of the argument of Lt. Col. David Grossman (ret.) that we have learned how to overcome our innate resistance to killing another person, but now I am starting to see his point. Especially now that I am a father myself, graphic violence in games, and the FPS genre generally, hold less and less appeal for me. After the very first fight in Bioshock Infinite, I felt disturbed. I think it was the sense that I had violated the peace of Columbia that made me feel this so acutely. Until the first fight, every scene in Columbia was visually and musically idyllic and peaceful, even if there were a few subtle hints that all was not well.

It was the Beast of America trailer that convinced me I might want to play Bioshock Infinite. It is a pretty damn good trailer, and it hinted at a lot of the better elements of the game. Well done, whoever designed that trailer. Of course, I waited to pick the game up until the Steam summer sale, since I won't pay full price for a game anymore, but I was really interested.

After playing the game, I decided to write a review because I didn't see a lot of reviews that address what this game is about: The End of the World. Bioshock Infinite, like Bioshock before it, is about the apocalypse. Literally, it means 'disclosure' or 'revelation,' particularly of the circumstances attending the end of the world. In the narrowest sense, it is simply an ancient literary genre. It also refers to a dramatic event, which marks off a different type of time. (Apocalypse)

Everyone wants to talk about the ending, but I'm not that interested in dissecting it. I liked the ending. I was underwhelmed by the game until the ending, which catapulted it in my estimation from just another FPS to a classic. I am just tired of amateur philosophy hour and amateur physics hour. You all don't know what you are talking about.

I am much more interested in how we are all fascinated by the apocalypse, and kind of embarrassed by it at the same time. Richard Landes, a scholar of the apocalyptic, has a theory that I am sympathetic to. Millennial movements are major players in history, and millennialism is inherently interesting to all people. However, millennial predictions are usually [although not always] falsified in the lifetime of the believers, which leads to a persistent bias in the telling of history that minimizes the influence of the repeatedly discredited prophets of doom. Especially when some of the writers of history used to be believers.

Millennialism is not just a Christian thing, or a Western thing either. It is a universal human phenomenon, found in all cultures and religions. The Mahdist uprising in the Sudan, the Ghost Dance, and the Taiping Rebellion are all nineteenth century examples of millennial movements that are neither Western nor Christian, although the Taiping Rebellion did have some weird syncretistic elements. The millennium is a future paradisiacal state or stage of history, where constraints of human experience such as war, death, and poverty will no longer exist. It is the attempt to achieve this state that makes a movement millennial. Just so, both Bioshock and Bioshock Infinite are millennial through and through.

Columbia in its glory

Bioshock had an obvious hint in the name of Andrew Ryan's underwater city, Rapture. Even though Ryan himself was thoroughly areligious, his attempt to build a perfect city apart from the rest of mankind and bring about paradise is clear enough. Columbia follows this same blueprint, although we get to see Columbia at a different stage in its history. Booker DeWitt joins Columbia during its halcyon days, and we get to see the city in all its glory. When Jack descends into Rapture, by contrast, the introductory apocalypse has already occurred, reducing the wicked city to ruins. Jack serves as the agent who brings about Rapture's terminal apocalypse. As an aside, this is why I never played Bioshock 2. Ken Levine wasn't involved, so it was clearly not going to be as good, but also the story was done. A terminal apocalypse is the end. Period. There was clearly nothing more to be said about Rapture, although there was equally clearly more money to be made.

Booker DeWitt is the introductory apocalypse for Columbia, and fittingly, he is also the terminal apocalypse for Columbia, although not precisely in the way one might think. The way in which this plays out is why I enjoyed the ending so much, and also why I will forgo my usual practice of discussing the ending of whatever I am reviewing. It is just too much fun, and it also fits the template I am discussing here so well, yet simultaneously so imaginatively, that I have to leave it be.

Thematic continuity between Bioshock and Bioshock Infinite is not limited to the millennium; we also return to the struggle of the poor against the rich. Some have drawn links between the Occupy Wall Street protests and Bioshock Infinite, but I think this is mostly coincidence. The same theme was present in Bioshock, and the dehumanization and brutality of labor relations is an accurate reflection of America, and the world, in 1912. While Bioshock was set in 1960, it always seemed to me to belong in the 1930s, both because of the style of Rapture, and the way in which the working man in Rapture had no hope.

America by the 1950s and 1960s had figured out a pretty good solution for the working man. That was a time of unprecedented social mobility and economic growth. The economic pie was more evenly distributed then than any time before or since, and that was the fruit of the lessons learned by the ruling class in fin de siècle America.

Booker's Medal of Honor

Columbia represents America, exaggerating both its virtues and its sins. There really was something magical about Western Civilization just before the Great War, and that sense is captured perfectly in Bioshock Infinite. There were also many things deeply wrong, and you can see all of them on display as you progress through the game. Columbia celebrates the Massacre at Wounded Knee as a pivotal event in its history [because it is], but America at the time of the Massacre was far more ambivalent. Twenty Medals of Honor were awarded to men involved in the Massacre, but at the same time General Nelson A. Miles described it as, "Wholesale massacre, and I have never heard of a more brutal, cold-blooded massacre than at Wounded Knee." One of the recipients of the Medal of Honor, Lt. Harry L. Hawthorne, was posted to MIT as a professor of military science while recovering from his wounds, and was mocked so severely by the students that he gave up his posting at the school. We Americans did a lot wrong, but Columbia represents an America that never was.

Unequal bargaining

The plight of the underclass in Columbia is particularly stark. In the shantytown surrounding the factories that supply Columbia, you see workers bidding down their hours of pay to perform the same work, in one of the ugliest illustrations of unequal bargaining I have seen. Labor relations in the late nineteenth and early twentieth centuries were bad. This was the era when Communism was on the rise, papal encyclicals were searching for a viable middle between socialism and laissez faire capitalism, and strikes were something the public feared. The biggest conflict between labor and management in America was the Battle of Blair Mountain in 1921. It was the culmination of a struggle over coal mining in West Virginia, and nearly 10,000 men took up arms to fight their corporate oppressors.

Yet, America didn't burn in the fires of revolution. Most of those 10,000 miners just went home. Beginning in the 1920s, America started on a remarkable equalization of income that finally bore fruit in the 1950s. Racial inequality was not included in the settlement, but it seems that class war in America was averted by a conscious choice by the haves to give more to the have-nots. It was a stable solution for a while, but it seems to be breaking down again as income inequality rises in America.

Not liberation

Yet for all its sins, I wept for Columbia. I did not rejoice in its destruction, and I don't think we are meant to. All great cities are built on great injustice, including ours. That is what St. Augustine meant by the City of God and the City of Man. We owe allegiance to the imperfect polities in which we find ourselves, but in and of themselves they cannot truly deserve it. And the alternatives are usually much, much worse. The revolution Booker and Elizabeth unleash upon Columbia is not a liberation, but an orgy of vengeance and destruction.

Columbia deserved to be destroyed, but in the same way, all of our homes, great and small, bear the weight of similar sins. Perhaps this is the reason why we find the apocalypse so mesmerizing: we know we deserve it.


*In all of the foregoing discussion of millennialism and the apocalypse, I am greatly indebted to my late friend, John J. Reilly, and especially to his book, The Perennial Apocalypse, which provides most of the analytical elements in this post. You are missed, John; requiescat in pace.

My other videogame reviews

More Neil Armstrong Test Pilot Stories

Jerry Pournelle has a couple more good stories about Neil Armstrong's piloting ability:

And of course they all had the right stuff, and they knew it, and they knew that Armstrong had more of it than most. During the Apollo Lander Simulation flight – the trainer was dubbed the flying bedstead with good reason – in Arizona the computers glitched or the gyros tumbled so that the platform tumbled ninety degrees. If Armstrong had ejected with it in that attitude he would not have achieved enough altitude to allow the parachute to open. He kept his nerve and slowly rotated the platform as it fell, and when the angle was right – about 45 degrees I am told, I wasn’t there – ejected. Everything worked and he landed without injury. They’ve calculated that he had about three seconds to spare.

The computers overloaded during the Apollo 11 landings, and Armstrong came through again. This time he had twenty seconds of fuel to spare. The right stuff came through. The Eagle landed as the world watched, and the world would never be the same. Those of us who had a part in that can be sure of that. When I was growing up I knew from the first day I read Willy Ley’s book that I would live to see the first man on the Moon. I had not expected to outlive him, but Mankind’s conquest of space is not over. We’ll be back.

g-loading for pilots and astronauts

With the passing of Neil Armstrong, Charles Murray gives us this anecdote about Gemini 8:

Jerry began to reminisce about Gemini 8, Neil Armstrong’s previous space flight. Armstrong and his copilot, David Scott, had rendezvoused and docked with an Agena rocket as part of the rehearsal for techniques that would have to be used on the lunar mission. The combined vehicles had started to roll, so they undocked. But once it was on its own, the Gemini spacecraft started to roll even faster. Unbeknownst to the crew, one of the Gemini’s thrusters had locked on. The roll increased to one revolution per second.

I had known all this, but hadn’t thought much about it. And if you watch NASA’s version on You Tube, it is all made to sound as if the roll was a brief problem, never rising to the level of a crisis.

Actually, it was a moment that would have reduced me, and some extremely large proportion of the human race, to gibbering helplessness, no matter how well we were trained.

Imagine an amusement park ride that sits you in a pod, and that pod is twirled sideways at one revolution per second (you’ve never actually been on an amusement park ride remotely approaching that level of disorientation, because it would be prohibited). You have a panel in front of you with dozens of dials and small toggle switches, and you are supposed to toggle those switches in a prescribed sequence. While spinning one revolution per second. Pretty hard, trying to focus your eyes on those dials and coordinate your finger movement under those g forces so that you can even touch a switch that you’re aiming for. Now imagine that the sequence is not prescribed, but instead that there are many permutations, and you’re supposed to decide which permutation to do next based on what happened with the last one. Heavy cognitive demand there—long-term memory from training, short-term memory, induction, deduction. While spinning at one revolution per second. And now, to top it all off, if you don’t do it right, REALLY fast, you’re going to lose consciousness and die.

Jerry Bostick mused, “So there’s Neil, calmly toggling these little banana switches, moving through the alternatives, until he figures it out.” He shook his head in wonderment. “I’m not sure that any of our other pilots, and we had some great ones, could have analyzed the situation and solved it as quickly as he did.” I could forget about trying to make anything of Neil not being the first choice for the lunar landing.

As Tom Wolfe documented so memorably in The Right Stuff, many of the early astronauts were test pilots, men of exceptional skill and bravery. Now, astronauts are likely to be geezers with PhDs. Astronauts are likely to be very smart and hard-working, but we are probably no longer selecting for the skills Neil Armstrong exhibited.

Being smart is important to being a hell of a pilot, and if you run a statistical model on the data, as the Air Force has done, you will probably find general intelligence highly correlated with skill as a pilot. However, the Air Force decided to ignore the model and continues to test for pilot-specific skills as well. Gemini 8 gives you a good idea why.
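To put that one-revolution-per-second spin in perspective, here is a minimal sketch of the centripetal load it implies. The physics is just a = ω²r; the radius values are purely illustrative guesses at how far a crewman's head might sit from the spin axis, not actual Gemini 8 figures:

```python
import math

def centripetal_g(rev_per_s: float, radius_m: float) -> float:
    """Centripetal acceleration, in units of g, for a given spin rate and radius.

    a = omega^2 * r, where omega = 2 * pi * rev_per_s (rad/s).
    """
    omega = 2 * math.pi * rev_per_s
    return omega ** 2 * radius_m / 9.81

# Illustrative radii only -- the Gemini cabin was small, so the crew sat
# close to the spin axis; these are assumptions, not mission data.
for r in (0.3, 0.5, 1.0):
    print(f"r = {r} m -> {centripetal_g(1.0, r):.1f} g")
```

Even close to the axis the load is real, and the disorientation from a visual field spinning once per second is arguably worse than the raw g number suggests.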

Science Jobs

The Washington Post has an article on the disparity between our public policy goal of churning out lots of new science PhDs, and the not very well kept secret that many PhDs are unable to find work in their fields.

There are some really good points here. The life sciences have the worst PhD glut by far, but chemistry seems hard hit too. Much of this has to do with other public policy decisions. For example, the cuts to the pharmaceutical industry are probably related to the ongoing drive to cut the cost of drugs and reduce the massive profits that have accrued to brand name drugs. It is understandable why we want to do this, but it means that lots of cushy research jobs will have to be cut.

Traditional academic jobs are scarcer than ever. Once a primary career path, only 14 percent of those with a PhD in biology and the life sciences now land a coveted academic position within five years, according to a 2009 NSF survey.


The pharmaceutical industry once was a haven for biologists and chemists who did not go into academia. Well-paying, stable research jobs were plentiful in the Northeast, the San Francisco Bay area and other hubs. But a decade of slash-and-burn mergers; stagnating profit; exporting of jobs to India, China and Europe; and declining investment in research and development have dramatically shrunk the U.S. drug industry, with research positions taking heavy hits.


Two groups seem to be doing better than other scientists: physicists and physicians. The unemployment rate among those two groups hovers around 1 to 2 percent, according to surveys from NSF and other groups. Physicists end up working in many technical fields — and some go to Wall Street — while the demand for doctors continues to climb as the U.S. population grows and ages.

It is normal for PhD physicists to work outside of academia [paging Steve Hsu], especially in the finance industry these days, but you would have a hard time knowing that. I suspect this will need to become true for other fields of science as well, but their skills don't seem to translate as readily.


Jerry Pournelle recently recommended Freefall, and it is now my favorite web comic of the moment. It may have been slow going waiting for each strip to come out originally, but I have already breezed through the first couple of years of comics. This is definitely engineer humor, but it can also make you think.

There are nice little science tidbits scattered throughout, but also some fun ruminations on political philosophy, ethics, and common sense. From the point of view of a genetically engineered dog. =)

I also still think of the WWF as the World Wrestling Federation.