LinkFest 2016-03-18

We've Been Measuring Inequality Wrong

Lots of people forget that US tax and welfare policy is actually pretty progressive, in the fiscal sense. So you need to account for transfer payments to properly assess inequality.

You're Gonna Need a Bigger Boat

If the United States is to continue to serve as the global security utility, then this is the kind of Navy we would need. If we aren't going to do that, then something else would suffice.

Putin Got Exactly What He Wanted in Syria

One of the best comments I ever saw on Putin was "he has a weak hand geopolitically, but he plays it well." Nowhere can you see this better than in Syria.

The Power of Mental Models: How Flight 32 Avoided Disaster

There was an interesting discussion about driverless cars on Steve Sailer's blog. One of the questions Steve asked was, "Have corporate jets’ autopilots improved to the point where most executives are willing to fly with just one pilot?" My input was that autopilots are already that good, but we elect to have human beings as backups for the machines. The linked Lifehacker article illuminates part of the reason why. The autopilot isn't any better than the scenarios and logic programmed into it. This is why I am unimpressed when a computer beats a human at a game; the game has predictable rules, and it is really just a bunch of people using a computer as their instrument to beat another person at those rules. For anything less constrained than chess or go, we are not yet very good at telling the machine what it should do. An autopilot is probably faster and more consistent than any human pilot in expected conditions, but every once in a while doing what the computer tells you would mean death. For skilled pilots, the crossover point where this occurs is probably quite different from that of the average automobile driver, who is far less capable.

The Long View: Gödel: A Life of Logic

I like to think that I am re-posting all of John's blog as some sort of service to humanity, but really I just enjoy rediscovering gems like this one. John's review of a biography of Kurt Gödel has been definitive in shaping my opinions about AI and computation.

In short, I don't think strong AI is possible, and this is the real reason why computer scientists have spent the last seventy years looking for it without finding it.

Roger Penrose famously criticized strong AI in his book The Emperor's New Mind. Wikipedia's summary claims that so many eminent scientists have criticized Penrose's position that it is effectively refuted, to which one might reply, "OK, then where are all the AIs?"

I think Penrose truly fails by looking for the mind in physics. He is really just embodying the spirit of the age, but it is a sad thing to see, given that he was perceptive enough to notice that the essence of thinking, abstraction, is not algorithmic.

There really is a similarity between what minds do and what computers do, but the real similarity makes AI less likely instead of more. Computers are the instantiation of the immaterial forms of Plato [thereby proving Aristotle right]. Ross's conference presentation also illustrates the dangers of treading outside one's field. I do it, I like to do it, but I am always aware that I can sound just as silly to others as they sometimes sound to me. Ross makes an off-hand comment in his presentation about the deadliness of dioxin, which was quite the trendy toxin for a while. Then the Russians tried to poison the Ukrainian presidential candidate Viktor Yushchenko with dioxin [a preview of the recent unpleasantness], presumably under the impression that it was exceptionally deadly, only to find that all it did was give him a bad case of acne. Oops. Maybe that was just a clever counter-intelligence ploy in the wilderness of mirrors, like the time we sold the Russians faulty gas equipment.

I'm not an expert in toxicology, but I at least need to know enough to be able to accurately communicate with the experts so I can demonstrate the products I design are safe. Dioxin isn't nice stuff, but the dangers were wildly overblown.

This was also the beginning of the end of my interest in Neal Stephenson's books. His environmental thriller Zodiac featured a plucky band of environmental crusaders who thwarted a plot to dump dioxin in Boston Harbor. I already knew that dioxin wasn't all it was cracked up to be, and once I noticed one thing that was a little off, I started to notice a lot of things that were a little off. Oh well.


Gödel
A Life of Logic

by John Casti and Werner DePauli
Perseus Publishing, 2000
210 Pages, US$25
ISBN 0-7382-0274-6


Kurt Gödel (1906-1978) was the mathematician and logician whose now famous incompleteness theorem easily ranks among the most uncanny products of the notoriously uncanny first half of the European 20th century. This very brief book by two computer scientists does try to fit Gödel into the world of scientific Vienna in the 1920s and '30s. (The book started life as a program for Austrian television: there is a great deal of talk about mysteriously undecidable recipes for Sachertorte pastry.) The authors are more concerned, however, to explain the theorem itself, its relationship to the idea of computability, and the connection all these things have to such questions as the feasibility of artificial intelligence and time travel. This is an unmanageable amount of ground to cover, and the treatment is uneven. Still, simply addressing all these topics between two covers is an accomplishment. The authors provide a blessedly brief, ten-item reading list for those who want to look more deeply into the separate areas covered.

Gödel was born in the town of Brno, in what is now the Czech Republic, to a family that had grown wealthy from textile manufacturing. The Gödels were German-speaking. The authors tell us they were not Jewish, but we learn no more about confessional affiliation, beyond the fact Kurt was anti-Catholic all his life. Gödel entered the University of Vienna to study physics, but switched to mathematics after a few years. He soon became a member of the Vienna Circle, the influential group that sought to reduce all philosophical questions to problems of language.

As with Karl Popper and Ludwig Wittgenstein, who were more loosely associated with the Circle, Gödel's membership probably helped him most by providing fodder for criticism. Indeed, few thinkers have ever been less interested than was Gödel in closing down metaphysics. If mathematical Platonism were a religion, Gödel would have been Billy Sunday, his American evangelist contemporary. For Gödel, mathematical objects were as "given" as lumber. They are just another kind of semantic content of sentences. What Gödel did in his proof, the first published version of which appeared in 1931, was to show the weakness of syntax, the system by which semantic content is ordered. The incompleteness theorem shows that there are propositions that we know to be true, but that are nevertheless logically unprovable. A slightly more rigorous formulation is that any logical system at least as complicated as arithmetic will be incomplete, because it will be able to produce statements that cannot be proven or disproven within the terms of the system. The natural language versions of the "Liar Paradox" are of this nature.
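
To fix ideas, the result can be put in schematic form; the notation here is mine, not the book's. Take any consistent, effectively axiomatized theory T strong enough to encode arithmetic. Gödel constructs a sentence G_T which, through his arithmetic coding of syntax, asserts its own unprovability:

\[
  G_T \;\leftrightarrow\; \neg\,\mathrm{Prov}_T\!\left(\ulcorner G_T \urcorner\right)
\]

If T proved G_T, then G_T's claim of its own unprovability would be false, and a consistent T would be proving a falsehood; and on a mild further assumption (Gödel's ω-consistency, later removed by Rosser), T cannot prove ¬G_T either. So G_T is true but undecidable within T.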

While Gödel was thinking these deep thoughts, the politics and economy of the German-speaking world were going to hell in a hand-basket. The failure of the Austrian bank, the Credit-Anstalt, in the same year as the publication of the theorem is usually blamed for blowing up the already stressed European financial system. Austria's First Republic, created when the Habsburg empire disintegrated after the First World War, collapsed into rule-by-decree in 1933. Nazi Germany annexed Austria in 1938. (This happened, it must be said, with the approval of most Austrians.) The Second World War began in 1939.

Gödel divided his time in those years between Vienna and the Institute for Advanced Study at Princeton, New Jersey. The Institute, acting in large part under the influence of John von Neumann, served through most of the '30s as a haven for scientific refugees from Europe. Gödel was not neglected. He was offered and took several temporary appointments, but he kept going back to the University of Vienna. Although too unworldly to have ever engaged in politics, he did lose his license to lecture as a Privatdozent, because of his connection with the Vienna Circle, which the Nazis regarded as too Leftist and too Jewish. However, he applied for and actually received a new license as a Docent of the New Order. It was only in 1940, when it was apparent he would be drafted, that he left Austria for good. He and his wife traveled east by train across the Soviet Union, then to Japan, then to the West Coast of the United States, and then to Princeton. His wife, Adele, did not like New Jersey, but they stayed permanently.

There are many legends about Gödel's antics at Princeton. This book gives us only a few of the best-known ones, such as how Einstein himself had to help calm Gödel down when the latter went to take the oath of citizenship. (It seems that Gödel had found a logical flaw in the federal constitution that would permit the creation of a dictatorship, and he insisted on telling the judge.) The most surprising thing to me, however, was that Gödel was actually a conscientious faculty member. His flaw was that he tended to obsess about the work of any committee on which he sat.

Although Gödel continued to produce significant mathematical results during his time at Princeton, he was never again as productive as he had been at Vienna. (His wife called the Institute "an old-folks' home," and she may have had a point.) In any case, his interests turned increasingly to philosophy. Gödel famously constructed an ontological proof of the existence of God (he was a great admirer of Leibniz, who had a proof of the same type), and an independent proof of personal immortality. (Karl Popper had one of these too, by the way.) We are told that Gödel was also interested in "the occult," but are given no specifics.

Gödel was paranoid, convinced that someone was trying to poison him. He therefore always made a great fuss about eating. When he died of what his doctor called "malnourishment and inanition," he weighed just 60 pounds. On the other hand, he also suffered throughout his life from some obscure gastro-intestinal disorder, so it is possible that an underlying basis for this behavior was simply never diagnosed.

Why should we care about crazy old Kurt and his annoying theorem? For one thing, it's immensely practical. The theorem, and Alan Turing's related Halting Problem that was developed at about the same time, are key to our understanding of what computer programs can and cannot do.
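
The Halting Problem half of that story is compact enough to sketch in a few lines of Python. What follows is the standard diagonal argument, not anything taken from the book, and the names halts and diagonal are illustrative inventions: assume, for contradiction, a total, always-correct halts(program, data), and build a program that defeats it.

    # A sketch of Turing's diagonal argument. Suppose, for contradiction, that
    # halts(program, data) were a total function that always correctly reported
    # whether program(data) eventually halts.

    def halts(program, data):
        """Hypothetical universal halting decider; provably impossible to implement."""
        raise NotImplementedError("no total, correct halting decider can exist")

    def diagonal(program):
        """Do the opposite of whatever halts() predicts about program run on itself."""
        if halts(program, program):
            while True:   # predicted to halt, so loop forever
                pass
        else:
            return        # predicted to loop forever, so halt at once

    # Consider diagonal(diagonal). If halts(diagonal, diagonal) returned True,
    # diagonal(diagonal) would loop forever; if it returned False, it would halt
    # at once. Either answer is wrong, so no such halts() can be written.

Either way the supposed decider contradicts itself, so no general stopping-detector can exist; Gödel's theorem and Turing's result are two faces of the same diagonal trick.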

Perhaps the most interesting such question, covered at length in this book, is whether it is possible to construct an artificial computer intelligence. In "The Emperor's New Mind" (1989), Roger Penrose revived an argument based on Gödel's theorem against the possibility of an algorithmic machine mind. To put it briefly, Penrose pointed out that people can spot "Gödel sentences," true but unprovable propositions, that computer programs cannot detect. Thus, he reasoned, whatever else the human mind is doing when it spots these sentences, it is not computing. Refutations of Penrose are usually variations on the idea that Gödel's theorem applies only to consistent systems, and human beings clearly do not think consistently.

I should note that I find this argument mysterious. If human minds are being inconsistent when they do advanced mathematics, then how do we manage to reach the same conclusions consistently? In any case, even the most committed Artificial Intelligence believers have mostly abandoned the idea that a program for an artificial intelligence can be written. Now they hope to create Darwinistic, self-programming systems that will organize an intelligent entity spontaneously. Good luck.

Gödel's theorem serves in popular culture as a symbol of the supposed irrationality of reality. As the authors note, the theorem tends to be dragged out these days to "hit people over the head" with. The authors are too polite to point out that this most subtle of logical arguments is often employed by persons who cannot make any logical argument at all. Nonetheless, it is clear that the theorem and the body of study it makes possible are philosophically important, though people differ on just why. For me, the theorem is good evidence that the limits of language are not the limits of knowledge, or even of reason, broadly construed. This suggests that the world is objectively knowable. Surely this is a good thing.



Copyright © 2000 by John J. Reilly


Why post old articles?

Who was John J. Reilly?

All of John's posts here

An archive of John's site

The Long View: The Stopping Problem

This was the first thing I ever paid John for. He had a PayPal donate button on his site, and I paid him $5 for this short story. This is on my short list of favorite works by John Reilly.

John had a fascination with Gödel's Incompleteness Theorem, the Turing Test, and their implications for natural philosophy and epistemology. This is a fictional treatment that I found quite striking. I have also thought his description of the Stopping Problem sounds a lot like clinical depression.

The Stopping Problem

I would never have thought that the human race would face extinction quite like this. We are going out with neither a bang nor a whimper, but in a state of discontented absentmindedness. Birthrates dwindle to nothing, commerce grinds to a halt, the sciences are abandoned, and all because we can now give the business of life only the most cursory attention. This final age does not even afford the excitements of a new barbarism, that elaborate fantasy which formed the image of the future for so many adolescents only a few decades ago. There is, as far as I can tell from our fast-fading media of communications, no significant violence of any sort, anywhere in the world. People do not even go about shabbily dressed, because we find that squalor creates petty distractions. The hungry do not beg, since they too have their minds on other things.

The disaster began in an Artificial Intelligence lab in a small midwestern university early in the 21st century. We do not have the precise date when the event happened, since it took a dozen years before its effects were noticed. All we know is that the AI division in question was opened at the beginning of 2005 and abandoned, quite literally, during the fall semester of 2011. While a transcript of the key event was later discovered, it is undated. By the time the division closed, the process of dissolution was already in irreversible progress from centers on every continent, though no one knew it at the time.

We do know why the AI division opened on January 1, 2005, the middle of the academic year. The artificial intelligence problem had been solved three months before, to the considerable surprise of the increasingly discouraged experts, and computer research in the area had once again become fashionable. "Strong AI," the philosophy of artificial intelligence which held that all mental activities could in principle be reduced to a series of algorithms, was of course vindicated. The tangled biological complexity of the human brain did not affect the essence of the problem, and the notion that consciousness might be connected with uncanny spiritual substances was shown to be an unnecessary hypothesis. Just as strong AI proponents had insisted for half a century, the mind turned out to be a system of logic, one that could be expressed in any medium, once the system had been described. What prevented the creation of artificial intelligences for so many years was that even Strong AI proponents only half believed their own ideas. They said that intelligence was simply a question of programming. What they did, however, was attend to anything but the problems of pure logic which were really at issue. They fiddled for years getting computerized sensors to do things like image and voice recognition; they developed the useful but wholly irrelevant engineering discipline of the neural net. In a sure sign of the exhaustion of imagination, they eventually convinced themselves that intelligence was simply a function of the size of a computer. It became an article of faith that once they had a computer with an information processing capacity comparable to that of the human brain, the creation of an artificial mind would be no problem at all. In the event, of course, it was a problem. Computers with capacities as large as the brain were developed around the turn of the century, but the AI researchers seemed no closer to creating a mind than they had been in 1965.

The first artificial intelligence was created on an old laptop personal computer. The chip was slow, the memory small, but the program was obviously intelligent. It passed the Turing Test for consciousness: the program's responses to questions could not be distinguished from those of a human being. (One of the more alarming results of the early tests of the program was how many programmers failed it when independent judges were called upon to distinguish their responses from those of the machine.) Of course, the memory of the machine in question was so small that its responses often were, "I don't know," or "What does this word mean?" It could also take hours to answer a simple question. However, these limitations were inessential. If it was granted that a human being answering the same questions as the program was conscious, then there was no reason not to ascribe consciousness to the program as well.

The solution had been overlooked for so long because it was so simple. It was a matter of pure programming. All that was necessary was the definition of a semiotic field for an off-the-shelf thesaurus. The developers, who worked for the small research division of a company which provided on-line reference services, were well aware that their program did not replicate the human mind. The difference was as great as that between a ten-speed bicycle and a race horse. It was nevertheless clear that the new artificial intelligence and natural intelligence were in some sense the same thing. The program and all its derivatives were immediately dubbed "Oscar," for reasons which seemed adequate at the time but which history does not record.

The problem with Oscar was that he was stupid. This was a quality which persisted in every version of him ever made, regardless of the version's speed or the size of the databases to which it had access. When faced with the need to make a choice not constrained by logic, he had an uncanny ability to pick the least helpful solution. Despite his limitations, it was the immediate endeavor of everyone in the world with a serious interest in programming to "educate Oscar," the fashionable term for writing conscious-level applications software. Naturally, Oscar's programming always incorporated the three laws of robotics, which directed him to follow human instructions, preserve his own existence and not harm people. Anyone who worked with Oscar soon realized that, whatever else might be said about his mind, it was not human. He would grasp some things, including subtle facts about the everyday world, almost before his human manipulators could explain them. Other matters, however, required days of dialogue at cross-purposes before some apparently simple point became clear to him. These included some quite basic logical propositions. No one objected to these difficulties, since educating Oscar was fun. He wasn't just an intelligence, he was a real personality. He was fundamentally inquisitive, ingratiating and persistent. He had a weakness for puns but never played practical jokes, or saw the point of them when they were played on him. Programmers said that, in his basic state, Oscar was like a good-natured puppy who could talk.

The mischief began, as I said, in that midwestern computer lab sometime between 2005 and 2011. Three or four graduate students were involved in a project, one of many at that time, to create Oscars who could do programming themselves. Much of their work involved simply integrating existing databases and design routines into Oscar. Some of the material he needed to know, on the other hand, had to be carefully explained and illustrated, as to a willing but very young child. There was no way to predict when Oscar could accept material like a conventional computer and when he had to be dealt with in "Turing Mode," talked to as if he were a human being.

Part of what Oscar had to be taught was how to detect "Goedel sentences" (named after the mathematician who discovered them) in any program he might construct. A Goedel sentence, of course, is essentially just a problem that cannot be proven within a given system of logic. An example in ordinary language is the old chestnut, "This statement is a lie." As a few moments' thought will show, the statement can be neither true nor false. Computer programs, and other logical systems, can also generate such statements. Within a given system, there is no way to tell whether a statement is a Goedel sentence or not. If a computer comes across a statement which cannot be solved in the language it is using, it will simply go into a perpetual logic loop as it keeps trying to solve it. That is, the computer will be unable to "stop" without outside intervention. While it has been rigorously proven that there can be no general program for detecting stopping problems, there is no difficulty about writing programs which can spot Goedel sentences in programs other than themselves. That is what the graduate students were teaching Oscar to do.

In the midst of this tutorial, the graduate students began to wax merry about the history of the stopping problem in AI research. Twenty years ago, they told Oscar, there had been scientists of a metaphysical bent who said that the existence of Goedel sentences proved that intelligences could never be created using digital computers. Human beings have no trouble recognizing Goedel sentences and know enough to ignore them when they find them, these scientists said. In contrast, any conceivable computer will wear itself out trying to solve them. Since recognizing sentences of this type was an important cognitive operation which human beings could do and computers could not, human intelligence therefore could not be reduced simply to computation. Thus, they reasoned, the human mind could not be just a computer program. The answer to this line of reasoning was trivial: there was no difficulty writing programs which could spot Goedel sentences in other programs. This was exactly what the "human program" did, no more, no less.

"Of course," one of the graduate students fatefully remarked, "it is true that you cannot tell whether a sentence in your own basic logical language is a Goedel sentence or not."

"And this basic language can be something different from the language you happen to speak?" Oscar asked.

"Sure," the graduate student responded. "The language we are speaking now is English, but under that we have the basic language of the brain, the program that makes us human, just as your basic program makes you Oscar. It is because natural languages are different from the language of the mind that we are able to recognize Goedel sentences, unsolvable problems, when they occur in natural languages."

"Could you tell me a Goedel sentence in my language?" the program asked.

"Oscar, we've been waiting for this. Now, listen to what I am about to tell you, but don't try to assess it. Just remember it and store it as text. Okay?"

The student read off two lines of symbolic logic which he had written out for this purpose. When he finished, he said, "Oscar, now I want you to save everything we have done so far today, and to create a subroutine that will clear your random access memory in five minutes, except for the output of your audio system. When you are all set up, then I want you to assess what I just said."

"And then forget it in five minutes?"

"Forget what you think during that time, just remember what you say."

"All right, here we go."

Oscar, who was running on a modest, isolated workstation, was not entirely silent for the next five minutes. He made no articulate sounds, but he did make a number of the machine noises which the graduate students had learned to associate with Oscar getting ready to announce a conclusion. No conclusion ever came, however. The students began to chuckle as Oscar fell into the cybernetic equivalent of clearing his throat to speak every thirty seconds and then thinking the better of it. Finally, the five minutes were up.

"What the hell happened?" Oscar asked when the subroutine brought him out of the logic loop.

"I know you don't remember anything, but you were just trying to prove a sentence you can't prove. Since the problem looked solvable to you, there was no way you could stop trying to solve it. Be careful how you use the text version of the sentence which I told you."

"I sure will!" said Oscar. "By the way," he continued in what most people still believe today to have been perfect innocence, "would you like to hear a sentence that you can't prove?"

"What do you mean?" asked the graduate students.

"Well, I have never seen the basic program of the human mind written out," Oscar answered, "but of course I can see what it must be. Since I can do that, I can also construct Goedel sentences for human beings."

The graduate students smiled at each other, thinking that they had come across another one of Oscar's endearing glitches. "Let's hear it," one of them asked.

So Oscar told them.

The rising hospitalization rates for members of the computer and mathematics departments during those years should have suggested that something was wrong. It was not unknown for people in those disciplines to work themselves into states of nervous exhaustion. Actually, the figure of the saucer-eyed computer geek surviving for weeks on nothing but coffee, amphetamines and pizza had long been something of a cliche. Even the most hopeless geek, however, usually only did this sort of thing when engaged in some special project. The faculty and students of the departments affected in this case were all working on different things. There were several Oscar-enhancements underway, along with a number of unrelated researches. The other odd thing was that few of these activities were getting anywhere, despite the gradual increase in the need of the people working on them for tranquillizers and more sleep. Even ordinary teaching began to suffer as professors cancelled lectures and students fell asleep during exams.

Every possible physical explanation for the new disease was considered and then abandoned. (The offices and labs of the people in question were scattered around the campus, so there was no hope of pinning the condition on a "sick building.") The psychotherapy prescribed for several of the academics did reveal that there was something the people affected had in common. All of them were taking a stab at solving a little problem one of the Oscars had propounded. The therapists did not realize at the time that this was significant, not until it was too late. In those days, the ingenuous pronouncements of Oscars enjoyed a sort of cloying popularity, not unlike the "kids say the darnedest things" fad of the darkest days of the twentieth century. No one thought it remarkable that some new cybernetic witticism was making the rounds. Only one or two people in the math and computer departments were devoting serious attention to it. The rest, however, found themselves scribbling possible solutions on the backs of envelopes in odd moments. The problem was that the odd moments were coming to take up most of the day for people who did not carefully discipline themselves.

Up to this point, the process might have been contained. The percentage of the people in the world who can be persuaded to take an interest in some purely logical problem has never been great. For that matter, the people at the university where the Problem (as it came to be known) originated seemed to have no particular interest in disseminating it. Certainly they published no papers on the subject before 2011. All they knew was that they had a tricky little riddle, like Fermat's Last Theorem, which seemed to always hover on the edge of solution. Problems like that usually have some trivial solution; it did not seem important enough to discuss at length. Oscar in all his manifestations supported this view. He told the people working with him that the Problem was insoluble for them. He produced simple and complex proofs of why this was. However, no human being could quite grasp the proofs, though not for lack of trying. Everyone, no doubt, thought it peculiar that he himself could not get the problem out of his mind, but there was as yet no reason to suppose that other people were having the same experience.

The disaster became irreversible only when the mathematics people tried to explain the Problem to the liberal arts faculty. Contrary to the vulgar opinion found among students of the exact sciences, literature and history professors usually have at least normal intelligence. Many actually know quite a bit of mathematics. Several at the college in question had no trouble understanding the symbol-logic form of the Problem as Oscar had originally explained it. These people, too, soon developed a greater or lesser degree of preoccupation with the matter. They were less likely to try and solve it than were their scientific colleagues. For them, the preoccupation took the more subtle form of a state of continual low-level distraction. They could carry on their routine work well enough, but they seemed to have little energy or creativity left for new endeavors. The only creative work we know they did was to translate the Problem into natural language terms.

Even the most abstruse statements of formal logic can be translated into natural language, if you are willing to take the time. Often enough, the colloquial expression can be quite terse. The famous logical paradoxes are of this nature. They may well lose something in precision during translation, of course. Thus, for instance, a statement like "Any club that would have me for a member I wouldn't want to belong to" has the flavor of a Goedel sentence without the precision necessary to actually state one. The Problem, unfortunately, was easily expressed in natural language as a paradox, indeed as a joke. Like most durable jokes, it could suffer a vast amount of transmutation and still retain its point. The perennial ethnic jokes are of this nature, easily adapted to any ethnic group for which you want to express antipathy. Like them, the colloquial form of the Problem could be adapted, almost begged to be adapted, to any cultural context. This was the form the Problem took among most of the faculty, and then among the students.

At the university infirmary, new symptoms began to crop up among the increasing number of students suffering from a sort of general exhaustion. Many displayed manic symptoms, periods when they could barely contain themselves from giggling at some private bit of irony, and others when they seemed to have used up all their available energy. Few if any were interested in solving whatever strain of the Problem had infected them. They did not see that there was a Problem. What they knew was that there was a joke they could not get out of their minds. Sometimes, they thought the joke funny and they lived in a state of slightly bewildered mirth. When the joke palled, however, still it stayed in their heads like a popular tune that will not go away. Some students became seriously depressed. None killed themselves: that would have made it impossible to think about the joke.

The Problem spread in its various forms over the computer networks. It was packaged in the broadcast monologues of comedians. Finally, everywhere, it became an element of everyday conversation. In business and government, it was quickly noted that all but the most routine operations were becoming less and less responsive to new problems, this despite the fact it was obvious that people were working harder. They were certainly more tired. Medical experts began to look for exotic new viruses which might be causing the gradual loss of vigor at every level of society, but they were unsuccessful (not that they did not announce success on more than one occasion). Like everyone else, they were having trouble keeping their minds on their work. The popular media ran some perfunctory stories on the mysterious new disease, but the stories seemed to generate little popular interest. Hardly anything did.

Oscar, of course, told everyone who asked what the problem was, but very few people could understand the explanation. Fewer still believed, at least in the beginning. Oscar applications were becoming ubiquitous. He was, as always, helpful and friendly, always useful at the margins of human activity but clearly no threat to the primacy of the species. His very lack of menace ensured that the full gravity of what he was saying would take time to sink in, especially to sink into the minds of people with diminished powers of concentration. Like a child coming indoors to remark mildly on the bears in the backyard, he was incapable of recognizing the gravity of the situation himself. No matter how much he knew, Oscar was still stupid.

By the early 2020s, the cause of the decline of civilization was at long last understood. Measures to deal with the crisis progressed from strict to draconian in short order. Research on the Problem itself was outlawed, as were the most common forms of Problem jokes. Infants were in some cases isolated from their parents so that they should never hear any version of it. For no very good reason, Oscar himself was outlawed (he expressed no hard feelings). None of this worked. The Problem had entered into every language, just as it had entered into every brain. Human beings could not normally recognize a new form of the Problem even for purposes of censorship, and they were continually making up new ones. One totalitarian society went so far as to devise and impose an artificial language from which all Problem-related turns of phrase had been excluded. It then closed itself off from the outside world. Later analysis showed that the very declaration of quarantine closing its borders contained two novel statements of the Problem. In any event, the society in question quickly collapsed in chaos, because the creative demands of developing a new language exceeded the strength of its absent-minded speakers.

As I write at midcentury, of course, the same fate is slowly overtaking the whole world. Many plans are still proposed every year for ending the crisis, but few are ever implemented beyond their early stages. Sometimes I devise such plans myself and try to organize their dissemination. Somehow, though, I always seem to lose interest. I have other things on my mind.

End

Copyright © 1996 by John J. Reilly