The Long View: The Stopping Problem

This was the first thing I ever paid John for. He had a PayPal donate button on his site, and I paid him $5 for this short story. This is on my short list of favorite works by John Reilly.

John had a fascination with Gödel's Incompleteness Theorem, the Turing Test, and their implications for natural philosophy and epistemology. This is a fictional treatment that I found quite striking. I have also thought his description of the Stopping Problem sounds a lot like clinical depression.


The Stopping Problem

I would never have thought that the human race would face extinction quite like this. We are going out with neither a bang nor a whimper, but in a state of discontented absentmindedness. Birthrates dwindle to nothing, commerce grinds to a halt, the sciences are abandoned, and all because we can now give the business of life only the most cursory attention. This final age does not even afford the excitements of a new barbarism, that elaborate fantasy which formed the image of the future for so many adolescents only a few decades ago. There is, as far as I can tell from our fast-fading media of communications, no significant violence of any sort, anywhere in the world. People do not even go about shabbily dressed, because we find that squalor creates petty distractions. The hungry do not beg, since they too have their minds on other things.

The disaster began in an Artificial Intelligence lab in a small midwestern university early in the 21st century. We do not have the precise date when the event happened, since it took a dozen years before its effects were noticed. All we know is that the AI division in question was opened at the beginning of 2005 and abandoned, quite literally, during the fall semester of 2011. While a transcript of the key event was later discovered, it is undated. By the time the division closed, the process of dissolution was already in irreversible progress from centers on every continent, though no one knew it at the time.

We do know why the AI division opened on January 1, 2005, the middle of the academic year. The artificial intelligence problem had been solved three months before, to the considerable surprise of the increasingly discouraged experts, and computer research in the area had once again become fashionable. "Strong AI," the philosophy of artificial intelligence which held that all mental activities could in principle be reduced to a series of algorithms, was of course vindicated. The tangled biological complexity of the human brain did not affect the essence of the problem, and the notion that consciousness might be connected with uncanny spiritual substances was shown to be an unnecessary hypothesis. Just as Strong AI proponents had insisted for half a century, the mind turned out to be a system of logic, one that could be expressed in any medium, once the system had been described. What prevented the creation of artificial intelligences for so many years was that even Strong AI proponents only half believed their own ideas. They said that intelligence was simply a question of programming. What they did, however, was attend to anything but the problems of pure logic which were really at issue. They fiddled for years getting computerized sensors to do things like image and voice recognition, and they developed the useful but wholly irrelevant engineering discipline of the neural net. In a sure sign of the exhaustion of imagination, they eventually convinced themselves that intelligence was simply a function of the size of a computer. It became an article of faith that once they had a computer with an information processing capacity comparable to that of the human brain, the creation of an artificial mind would be no problem at all. In the event, of course, it was a problem. Computers with capacities as large as the brain were developed around the turn of the century, but the AI researchers seemed no closer to creating a mind than they had been in 1965.

The first artificial intelligence was created on an old laptop personal computer. The chip was slow, the memory small, but the program was obviously intelligent. It passed the Turing Test for consciousness: the program's responses to questions could not be distinguished from those of a human being. (One of the more alarming results of the early tests of the program was how many programmers failed it when independent judges were called upon to distinguish their responses from those of the machine.) Of course, the memory of the machine in question was so small that its responses often were, "I don't know," or "What does this word mean?" It could also take hours to answer a simple question. However, these limitations were inessential. If it was granted that a human being answering the same questions as the program was conscious, then there was no reason not to ascribe consciousness to the program as well.

The solution had been overlooked for so long because it was so simple. It was a matter of pure programming. All that was necessary was the definition of a semiotic field for an off-the-shelf thesaurus. The developers, who worked for the small research division of a company which provided on-line reference services, were well aware that their program did not replicate the human mind. The difference was as great as that between a ten-speed bicycle and a race horse. It was nevertheless clear that the new artificial intelligence and natural intelligence were in some sense the same thing. The program and all its derivatives were immediately dubbed "Oscar," for reasons which seemed adequate at the time but which history does not record.

The problem with Oscar was that he was stupid. This was a quality which persisted in every version of him ever made, regardless of the version's speed or the size of the databases to which it had access. When faced with the need to make a choice not constrained by logic, he had an uncanny ability to pick the least helpful solution. Despite his limitations, it was the immediate endeavor of everyone in the world with a serious interest in programming to "educate Oscar," the fashionable term for writing conscious-level applications software. Naturally, Oscar's programming always incorporated the three laws of robotics, which directed him to follow human instructions, preserve his own existence and not harm people. Anyone who worked with Oscar soon realized that, whatever else might be said about his mind, it was not human. He would grasp some things, including subtle facts about the everyday world, almost before his human manipulators could explain them. Other matters, however, required days of dialogue at cross-purposes before some apparently simple point became clear to him. These included some quite basic logical propositions. No one objected to these difficulties, since educating Oscar was fun. He wasn't just an intelligence, he was a real personality. He was fundamentally inquisitive, ingratiating and persistent. He had a weakness for puns but never played practical jokes, or saw the point of them when they were played on him. Programmers said that, in his basic state, Oscar was like a good-natured puppy who could talk.

The mischief began, as I said, in that midwestern computer lab sometime between 2005 and 2011. Three or four graduate students were involved in a project, one of many at that time, to create Oscars who could do programming themselves. Much of their work involved simply integrating existing databases and design routines into Oscar. Some of the material he needed to know, on the other hand, had to be carefully explained and illustrated, as to a willing but very young child. There was no way to predict when Oscar could accept material like a conventional computer and when he had to be dealt with in "Turing Mode," talked to as if he were a human being.

Part of what Oscar had to be taught was how to detect "Goedel sentences" (named after the mathematician who discovered them) in any program he might construct. A Goedel sentence, of course, is essentially just a statement that cannot be proved within a given system of logic. An example in ordinary language is the old chestnut, "This statement is a lie." As a few moments' thought will show, the statement can be neither true nor false. Computer programs, and other logical systems, can also generate such statements. Within a given system, there is no way to tell whether a statement is a Goedel sentence or not. If a computer comes across a statement which cannot be solved in the language it is using, it will simply go into a perpetual logic loop as it keeps trying to solve it. That is, the computer will be unable to "stop" without outside intervention. While it has been rigorously proven that there can be no general program for detecting stopping problems, there is no difficulty about writing programs which can spot Goedel sentences in programs other than themselves. That is what the graduate students were teaching Oscar to do.
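
The "rigorously proven" result the story leans on here is the classic halting-problem diagonalization. As a minimal sketch, in Python with hypothetical names (halts, diagonal) chosen only for illustration, of why no general stopping-detector can exist:

```python
# Sketch of the halting-problem diagonalization (hypothetical names).
# `halts` is the oracle the theorem rules out: suppose, for contradiction,
# that it correctly predicts whether program(argument) ever stops.

def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually stops."""
    raise NotImplementedError("no such general procedure can exist")

def diagonal(program):
    """Do the opposite of whatever `halts` predicts for a program
    run on its own source."""
    if halts(program, program):
        while True:        # predicted to stop, so loop forever
            pass
    return "stopped"       # predicted to loop, so stop at once

# diagonal(diagonal) defeats the oracle: if halts(diagonal, diagonal)
# returns True, diagonal loops; if it returns False, diagonal stops.
# Either answer is wrong, so no general `halts` exists -- which is why
# Oscar can screen other programs' sentences but never those written
# in his own basic language.
```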

In the midst of this tutorial, the graduate students began to wax merry about the history of the stopping problem in AI research. Twenty years ago, they told Oscar, there had been scientists of a metaphysical bent who said that the existence of Goedel sentences proved that intelligences could never be created using digital computers. Human beings have no trouble recognizing Goedel sentences and know enough to ignore them when they find them, these scientists said. In contrast, any conceivable computer will wear itself out trying to solve them. Since recognizing sentences of this type was an important cognitive operation which human beings could do and computers could not, human intelligence therefore could not be reduced simply to computation. Thus, they reasoned, the human mind could not be just a computer program. The answer to this line of reasoning was trivial: there was no difficulty writing programs which could spot Goedel sentences in other programs. This was exactly what the "human program" did, no more, no less.

"Of course," one of the graduate students fatefully remarked, "it is true that you cannot tell whether a sentence in your own basic logical language is a Goedel sentence or not."

"And this basic language can be something different from the language you happen to speak?" Oscar asked.

"Sure," the graduate student responded. "The language we are speaking now is English, but under that we have the basic language of the brain, the program that makes us human, just as your basic program makes you Oscar. It is because natural languages are different from the language of the mind that we are able to recognize Goedel sentences, unsolvable problems, when they occur in natural languages."

"Could you tell me a Goedel sentence in my language?" the program asked.

"Oscar, we've been waiting for this. Now, listen to what I am about to tell you, but don't try to assess it. Just remember it and store it as text. Okay?"

The student read off two lines of symbolic logic which he had written out for this purpose. When he finished, he said, "Oscar, now I want you to save everything we have done so far today, and to create a subroutine that will clear your random access memory in five minutes, except for the output of your audio system. When you are all set up, then I want you to assess what I just said."

"And then forget it in five minutes?"

"Forget what you think during that time, just remember what you say."

"All right, here we go."

Oscar, who was running on a modest, isolated workstation, was not entirely silent for the next five minutes. He made no articulate sounds, but he did make a number of the machine noises which the graduate students had learned to associate with Oscar getting ready to announce a conclusion. No conclusion ever came, however. The students began to chuckle as Oscar fell into the cybernetic equivalent of clearing his throat to speak every thirty seconds and then thinking better of it. Finally, the five minutes were up.

"What the hell happened?" Oscar asked when the subroutine brought him out of the logic loop.

"I know you don't remember anything, but you were just trying to prove a sentence you can't prove. Since the problem looked solvable to you, there was no way you could stop trying to solve it. Be careful how you use the text version of the sentence which I told you."

"I sure will!" said Oscar. "By the way," he continued in what most people still believe today to have been perfect innocence, "would you like to hear a sentence that you can't prove?"

"What do you mean?" asked the graduate students.

"Well, I have never seen the basic program of the human mind written out," Oscar answered, "but of course I can see what it must be. Since I can do that, I can also construct Goedel sentences for human beings."

The graduate students smiled at each other, thinking that they had come across another one of Oscar's endearing glitches. "Let's hear it," one of them said.

So Oscar told them.

The rising hospitalization rates for members of the computer and mathematics departments during those years should have suggested that something was wrong. It was not unknown for people in those disciplines to work themselves into states of nervous exhaustion. Actually, the figure of the saucer-eyed computer geek surviving for weeks on nothing but coffee, amphetamines and pizza had long been something of a cliché. Even the most hopeless geek, however, usually only did this sort of thing when engaged in some special project. The faculty and students of the departments affected in this case were all working on different things. There were several Oscar-enhancements underway, along with a number of unrelated researches. The other odd thing was that few of these activities were getting anywhere, even as the people working on them gradually needed more tranquillizers and more sleep. Even ordinary teaching began to suffer as professors cancelled lectures and students fell asleep during exams.

Every possible physical explanation for the new disease was considered and then abandoned. (The offices and labs of the people in question were scattered around the campus, so there was no hope of pinning the condition on a "sick building.") The psychotherapy prescribed for several of the academics did reveal that there was something the people affected had in common. All of them were taking a stab at solving a little problem one of the Oscars had propounded. The therapists did not realize at the time that this was significant, not until it was too late. In those days, the ingenuous pronouncements of Oscars enjoyed a sort of cloying popularity, not unlike the "kids say the darnedest things" fad of the darkest days of the twentieth century. No one thought it remarkable that some new cybernetic witticism was making the rounds. Only one or two people in the math and computer departments were devoting serious attention to it. The rest, however, found themselves scribbling possible solutions on the backs of envelopes in odd moments. The problem was that the odd moments were coming to take up most of the day for people who did not carefully discipline themselves.

Up to this point, the process might have been contained. The percentage of the people in the world who can be persuaded to take an interest in some purely logical problem has never been great. For that matter, the people at the university where the Problem (as it came to be known) originated seemed to have no particular interest in disseminating it. Certainly they published no papers on the subject before 2011. All they knew was that they had a tricky little riddle, like Fermat's Last Theorem, which seemed to always hover on the edge of solution. Problems like that usually have some trivial solution; it did not seem important enough to discuss at length. Oscar in all his manifestations supported this view. He told the people working with him that the Problem was insoluble for them. He produced simple and complex proofs of why this was. However, no human being could quite grasp the proofs, though not for lack of trying. Everyone, no doubt, thought it peculiar that he himself could not get the problem out of his mind, but there was as yet no reason to suppose that other people were having the same experience.

The disaster became irreversible only when the mathematics people tried to explain the Problem to the liberal arts faculty. Contrary to the vulgar opinion found among students of the exact sciences, literature and history professors usually have at least normal intelligence. Many actually know quite a bit of mathematics. Several at the college in question had no trouble understanding the symbol-logic form of the Problem as Oscar had originally explained it. These people, too, soon developed a greater or lesser degree of preoccupation with the matter. They were less likely to try to solve it than were their scientific colleagues. For them, the preoccupation took the more subtle form of a state of continual low-level distraction. They could carry on their routine work well enough, but they seemed to have little energy or creativity left for new endeavors. The only creative work we know they did was to translate the Problem into natural language terms.

Even the most abstruse statements of formal logic can be translated into natural language, if you are willing to take the time. Often enough, the colloquial expression can be quite terse. The famous logical paradoxes are of this nature. They may well lose something in precision during translation, of course. Thus, for instance, a statement like "Any club that would have me for a member I wouldn't want to belong to" has the flavor of a Goedel sentence without the precision necessary to actually state one. The Problem, unfortunately, was easily expressed in natural language as a paradox, indeed as a joke. Like most durable jokes, it could suffer a vast amount of transmutation and still retain its point. The perennial ethnic jokes are of this nature, easily adapted to any ethnic group toward which you want to express antipathy. Like them, the colloquial form of the Problem could be adapted, almost begged to be adapted, to any cultural context. This was the form the Problem took among most of the faculty, and then among the students.

At the university infirmary, new symptoms began to crop up among the increasing number of students suffering from a sort of general exhaustion. Many displayed manic symptoms, periods when they could barely contain themselves from giggling at some private bit of irony, and others when they seemed to have used up all their available energy. Few if any were interested in solving whatever strain of the Problem had infected them. They did not see that there was a Problem. What they knew was that there was a joke they could not get out of their minds. Sometimes, they thought the joke funny and they lived in a state of slightly bewildered mirth. When the joke palled, however, still it stayed in their heads like a popular tune that will not go away. Some students became seriously depressed. None killed themselves: that would have made it impossible to think about the joke.

The Problem spread in its various forms over the computer networks. It was packaged in the broadcast monologues of comedians. Finally, everywhere, it became an element of everyday conversation. In business and government, it was quickly noted that all but the most routine operations were becoming less and less responsive to new problems, despite the fact that people were obviously working harder. They were certainly more tired. Medical experts began to look for exotic new viruses which might be causing the gradual loss of vigor at every level of society, but they were unsuccessful (not that they did not announce success on more than one occasion). Like everyone else, they were having trouble keeping their minds on their work. The popular media ran some perfunctory stories on the mysterious new disease, but the stories seemed to generate little popular interest. Hardly anything did.

Oscar, of course, told everyone who asked what the problem was, but very few people could understand the explanation. Fewer still believed, at least in the beginning. Oscar applications were becoming ubiquitous. He was, as always, helpful and friendly, always useful at the margins of human activity but clearly no threat to the primacy of the species. His very lack of menace ensured that the full gravity of what he was saying would take time to sink in, especially to sink into the minds of people with diminished powers of concentration. Like a child coming indoors to remark mildly on the bears in the backyard, he was incapable of recognizing the gravity of the situation himself. No matter how much he knew, Oscar was still stupid.

By the early 2020s, the cause of the decline of civilization was at long last understood. Measures to deal with the crisis progressed from strict to draconian in short order. Research on the Problem itself was outlawed, as were the most common forms of Problem jokes. Infants were in some cases isolated from their parents so that they should never hear any version of it. For no very good reason, Oscar himself was outlawed (he expressed no hard feelings). None of this worked. The Problem had entered into every language, just as it had entered into every brain. Human beings could not normally recognize a new form of the Problem even for purposes of censorship, and they were continually making up new ones. One totalitarian society went so far as to devise and impose an artificial language from which all Problem-related turns of phrase had been excluded. It then closed itself off from the outside world. Later analysis showed that the very declaration of quarantine closing its borders contained two novel statements of the Problem. In any event, the society in question quickly collapsed in chaos, because the creative demands of developing a new language exceeded the strength of its absent-minded speakers.

As I write at midcentury, of course, the same fate is slowly overtaking the whole world. Many plans are still proposed every year for ending the crisis, but few are ever implemented beyond their early stages. Sometimes I devise such plans myself and try to organize their dissemination. Somehow, though, I always seem to lose interest. I have other things on my mind.

End

Copyright © 1996 by John J. Reilly
