The Long View: Is Mathematics Constitutional?

A recent popular [well, as popular as a massive book full of equations can be] exposition of mathematical Platonism is Roger Penrose's The Road to Reality. It even has practice problems, with devoted communities of amateurs trading tips on how to solve them. Mathematical Platonism, or something much like it, really is the default position of many mathematicians and physicists.

Since I ended up an engineer, perhaps it isn't really surprising that I always found the moderate realism of Aristotle and Aquinas more appealing. 

There is a good quote in this short essay that I've used to good effect:

"Because the whole point of science is to explain the universe without invoking the supernatural, the failure to explain rationally the 'unreasonable effectiveness of mathematics,' as the physicist Eugene Wigner once put it, is something of a scandal, an enormous gap in human understanding."

Philosophy of science is a field in fine shape, but many fans of science try to use it as a cudgel against religious believers. Insofar as that attempt is mostly ignorant of both science and philosophy, it isn't particularly illuminating.


Is Mathematics Constitutional?


The New York Times remains our paper of record, even in matters of metaphysics. For proof, you need only consult the article by George Johnson that appeared in the Science Section on February 16, 1998, entitled: "Useful Invention or Absolute Truth: What Is Math?" The piece was occasioned by a flurry of recent books challenging mathematical Platonism. This is the belief, shared by most mathematicians and many physicists, that mathematical ideas are "discovered" rather than constructed by the mathematicians who articulate them. Consider the following sentence:

"Because the whole point of science is to explain the universe without invoking the supernatural, the failure to explain rationally the 'unreasonable effectiveness of mathematics,' as the physicist Eugene Wigner once put it, is something of a scandal, an enormous gap in human understanding."

I, for one, was a little taken aback by the proposition that science had any "point" other than to describe the physical world as it actually is, but let that pass. The immediate philosophical peril to the world of the Times is narrower. That is, it is hard to be a thoroughgoing secular materialist if you have to acknowledge that there are aspects of reality that cannot be explained as either products of blind chance or of human invention. Supreme Court Justice Anthony Kennedy has even suggested that systems of ethics claiming an extra-human origin are per se unconstitutional. Judging by some of the arguments against mathematical Platonism presented in the Times piece, however, we may soon see Establishment Clause challenges to federal aid for mathematical education.

The best-known of the books that try to de-Platonize mathematics is "The Number Sense: How the Mind Creates Mathematics," by the cognitive scientist Stanislas Dehaene. His argument is that the rudiments of mathematics are hardwired into the human brain, and that mathematics is therefore, at bottom, a product of neurology. The evidence is various. There are studies of accident victims suggesting there may be a specific area of the brain concerned with counting, as well as stimulus-response studies showing that some animals can be trained to distinguish small-number sequences. (Remember the rabbits in "Watership Down," who had the same name for all numbers from five to infinity?) Relying on even more subtle arguments is a recent article by George Lakoff and Rafael E. Núñez, "Mathematical Reasoning: Analogies, Metaphors and Images." [BE: the actual article is titled "The Metaphorical Structure of Mathematics: Sketching Out Cognitive Foundations for a Mind-Based Mathematics"] The authors suggest that numbers are simply extrapolated from the structure of the body, and mathematical operations from movement. (The article is part of an upcoming book to be called "The Mathematical Body.")

I have not read these works, so it is entirely possible I am missing something. Still, it seems to me that there are two major problems with analyses of this sort. First, if the proposition is that mathematical entities are metaphysical universals that are reflected in the physical world, it is no argument against this proposition to point to specific physical instances of them. In other words, if numbers are everywhere, then it stands to reason that they would be inherent in the structure of the brain and body, too.

If Dr. Dehaene has really found a "math-box" in the head, has he found a fantasy-gland or an organ of perception? The Times article paraphrases him as saying that numbers are "artifacts of the way the brain parses the world...like colors. Red apples are not inherently red. They reflect light at wavelengths that the brain...interprets as red." The distinction between things that are "really red" and those that "just look red" has always escaped me, even in languages with different verbs for adjectival predicates and the copula. Doesn't a perfectly objective spectral signature identify any red object? In order to avoid writing the Monty Python skit that arguments about perception usually become, let me just note here that the experience of qualia (such as "redness") has little to do with the cognitive understanding of number; the wavelengths that distinguish one color from another are, after all, themselves numbers.

There is a more basic objection to the physicalistic reductionism at work here, however. Consider what it would mean if it worked. Suppose proofs so compelling were presented that any honest person was convinced that mathematics was indeed nothing more than an extrapolation of the structure of the nervous system, or of the fingers on the hand, or of the spacing of heartbeats. We would then have to explain the "unreasonable effectiveness" of the human neocortex, or even the universal explanatory power of the human anatomy. This would be anthropocentrism come home to roost. You could, I suppose, argue that we only imagine that the human neurological activity called mathematics lets us explain everything; the reality is that we only know about the things that our brains let us explain. Well, maybe, but then that suggests that there are other things we don't know about because our brains are not hardwired to explain them. Maybe those are the things that are really red?

There are indeed problems with mathematical Platonism, the chief of which is that it is hard to see how the physical world could interact with the non-sensuous ideal forms. (John Barrow's delightful "Pi in the Sky" will take interested readers on a fair-minded tour of the philosophy and intellectual history of this perennial question.) The most workable solution is probably the "moderate Realism" of Aquinas. He held that, yes, there are universals, but that we can know about them only through the senses. This seems reasonable enough. In fact, this epistemological optimism is probably the reason science developed in the West in the first place. There may even be a place for Dr. Dehaene's math-box in all this, if its function is regarded as perceiving numbers rather than making them up. What there can be no place for is the bigotry of those who believe that science exists only to support certain metaphysical prejudices.

Copyright © 1998 by John J. Reilly


The Long View: The Stopping Problem

This was the first thing I ever paid John for. He had a Paypal donate button on his site, and I paid him $5 for this short story. This is on my short list of favorite works by John Reilly.

John had a fascination with Gödel's Incompleteness Theorem, the Turing Test, and their implications for natural philosophy and epistemology. This is a fictional treatment that I found quite striking. I have also thought his description of the Stopping Problem sounds a lot like clinical depression.

The Stopping Problem

I would never have thought that the human race would face extinction quite like this. We are going out with neither a bang nor a whimper, but in a state of discontented absentmindedness. Birthrates dwindle to nothing, commerce grinds to a halt, the sciences are abandoned, and all because we can now give the business of life only the most cursory attention. This final age does not even afford the excitements of a new barbarism, that elaborate fantasy which formed the image of the future for so many adolescents only a few decades ago. There is, as far as I can tell from our fast-fading media of communications, no significant violence of any sort, anywhere in the world. People do not even go about shabbily dressed, because we find that squalor creates petty distractions. The hungry do not beg, since they too have their minds on other things.

The disaster began in an Artificial Intelligence lab in a small midwestern university early in the 21st century. We do not have the precise date when the event happened, since it took a dozen years before its effects were noticed. All we know is that the AI division in question was opened at the beginning of 2005 and abandoned, quite literally, during the fall semester of 2011. While a transcript of the key event was later discovered, it is undated. By the time the division closed, the process of dissolution was already in irreversible progress from centers on every continent, though no one knew it at the time.

We do know why the AI division opened on January 1, 2005, the middle of the academic year. The artificial intelligence problem had been solved three months before, to the considerable surprise of the increasingly discouraged experts, and computer research in the area had once again become fashionable. "Strong AI," the philosophy of artificial intelligence which held that all mental activities could in principle be reduced to a series of algorithms, was of course vindicated. The tangled biological complexity of the human brain did not affect the essence of the problem, and the notion that consciousness might be connected with uncanny spiritual substances was shown to be an unnecessary hypothesis. Just as strong AI proponents had insisted for half a century, the mind turned out to be a system of logic, one that could be expressed in any medium, once the system had been described. What prevented the creation of artificial intelligences for so many years was that even Strong AI proponents only half believed their own ideas. They said that intelligence was simply a question of programming. What they did, however, was attend to anything but the problems of pure logic which were really at issue. They fiddled for years getting computerized sensors to do things like image and voice recognition, and they developed the useful but wholly irrelevant engineering discipline of the neural net. In a sure sign of the exhaustion of imagination, they eventually convinced themselves that intelligence was simply a function of the size of a computer. It became an article of faith that once they had a computer with an information processing capacity comparable to that of the human brain, the creation of an artificial mind would be no problem at all. In the event, of course, it was a problem. Computers with capacities as large as the brain were developed around the turn of the century, but the AI researchers seemed no closer to creating a mind than they had been in 1965.

The first artificial intelligence was created on an old laptop personal computer. The chip was slow, the memory small, but the program was obviously intelligent. It passed the Turing Test for consciousness: the program's responses to questions could not be distinguished from those of a human being. (One of the more alarming results of the early tests of the program was how many programmers failed it when independent judges were called upon to distinguish their responses from those of the machine.) Of course, the memory of the machine in question was so small that its responses often were, "I don't know," or "What does this word mean?" It could also take hours to answer a simple question. However, these limitations were inessential. If it was granted that a human being answering the same questions as the program was conscious, then there was no reason not to ascribe consciousness to the program as well.

The solution had been overlooked for so long because it was so simple. It was a matter of pure programming. All that was necessary was the definition of a semiotic field for an off-the-shelf thesaurus. The developers, who worked for the small research division of a company which provided on-line reference services, were well aware that their program did not replicate the human mind. The difference was as great as that between a ten-speed bicycle and a race horse. It was nevertheless clear that the new artificial intelligence and natural intelligence were in some sense the same thing. The program and all its derivatives were immediately dubbed "Oscar," for reasons which seemed adequate at the time but which history does not record.

The problem with Oscar was that he was stupid. This was a quality which persisted in every version of him ever made, regardless of the version's speed or the size of the databases to which it had access. When faced with the need to make a choice not constrained by logic, he had an uncanny ability to pick the least helpful solution. Despite his limitations, it was the immediate endeavor of everyone in the world with a serious interest in programming to "educate Oscar," the fashionable term for writing conscious-level application software. Naturally, Oscar's programming always incorporated the three laws of robotics, which directed him to follow human instructions, preserve his own existence and not harm people. Anyone who worked with Oscar soon realized that, whatever else might be said about his mind, it was not human. He would grasp some things, including subtle facts about the everyday world, almost before his human manipulators could explain them. Other matters, however, required days of dialogue at cross-purposes before some apparently simple point became clear to him. These included some quite basic logical propositions. No one objected to these difficulties, since educating Oscar was fun. He wasn't just an intelligence, he was a real personality. He was fundamentally inquisitive, ingratiating and persistent. He had a weakness for puns but never played practical jokes, or saw the point of them when they were played on him. Programmers said that, in his basic state, Oscar was like a good-natured puppy who could talk.

The mischief began, as I said, in that midwestern computer lab sometime between 2005 and 2011. Three or four graduate students were involved in a project, one of many at that time, to create Oscars who could do programming themselves. Much of their work involved simply integrating existing databases and design routines into Oscar. Some of the material he needed to know, on the other hand, had to be carefully explained and illustrated, as to a willing but very young child. There was no way to predict when Oscar could accept material like a conventional computer and when he had to be dealt with in "Turing Mode," talked to as if he were a human being.

Part of what Oscar had to be taught was how to detect "Goedel sentences" (named after the mathematician who discovered them) in any program he might construct. A Goedel sentence, of course, is essentially just a problem that cannot be proven within a given system of logic. An example in ordinary language is the old chestnut, "This statement is a lie." As a few moments' thought will show, the statement can be neither true nor false. Computer programs, and other logical systems, can also generate such statements. Within a given system, there is no way to tell whether a statement is a Goedel sentence or not. If a computer comes across a statement which cannot be solved in the language it is using, it will simply go into a perpetual logic loop as it keeps trying to solve it. That is, the computer will be unable to "stop" without outside intervention. While it has been rigorously proven that there can be no general program for detecting stopping problems, there is no difficulty about writing programs which can spot Goedel sentences in programs other than themselves. That is what the graduate students were teaching Oscar to do.

In the midst of this tutorial, the graduate students began to wax merry about the history of the stopping problem in AI research. Twenty years ago, they told Oscar, there had been scientists of a metaphysical bent who said that the existence of Goedel sentences proved that intelligences could never be created using digital computers. Human beings have no trouble recognizing Goedel sentences and know enough to ignore them when they find them, these scientists said. In contrast, any conceivable computer will wear itself out trying to solve them. Since recognizing sentences of this type was an important cognitive operation which human beings could do and computers could not, human intelligence therefore could not be reduced simply to computation. Thus, they reasoned, the human mind could not be just a computer program. The answer to this line of reasoning was trivial: there was no difficulty writing programs which could spot Goedel sentences in other programs. This was exactly what the "human program" did, no more, no less.

"Of course," one of the graduate students fatefully remarked, "it is true that you cannot tell whether a sentence in your own basic logical language is a Goedel sentence or not."

"And this basic language can be something different from the language you happen to speak?" Oscar asked.

"Sure," the graduate student responded. "The language we are speaking now is English, but under that we have the basic language of the brain, the program that makes us human, just as your basic program makes you Oscar. It is because natural languages are different from the language of the mind that we are able to recognize Goedel sentences, unsolvable problems, when they occur in natural languages."

"Could you tell me a Goedel sentence in my language?" the program asked.

"Oscar, we've been waiting for this. Now, listen to what I am about to tell you, but don't try to assess it. Just remember it and store it as text. Okay?"

The student read off two lines of symbolic logic which he had written out for this purpose. When he finished, he said, "Oscar, now I want you to save everything we have done so far today, and to create a subroutine that will clear your random access memory in five minutes, except for the output of your audio system. When you are all set up, then I want you to assess what I just said."

"And then forget it in five minutes?"

"Forget what you think during that time, just remember what you say."

"All right, here we go."

Oscar, who was running on a modest, isolated workstation, was not entirely silent for the next five minutes. He made no articulate sounds, but he did make a number of the machine noises which the graduate students had learned to associate with Oscar getting ready to announce a conclusion. No conclusion ever came, however. The students began to chuckle as Oscar fell into the cybernetic equivalent of clearing his throat to speak every thirty seconds and then thinking the better of it. Finally, the five minutes were up.

"What the hell happened?" Oscar asked when the subroutine brought him out of the logic loop.

"I know you don't remember anything, but you were just trying to prove a sentence you can't prove. Since the problem looked solvable to you, there was no way you could stop trying to solve it. Be careful how you use the text version of the sentence which I told you."

"I sure will!" said Oscar. "By the way," he continued in what most people still believe today to have been perfect innocence, "would you like to hear a sentence that you can't prove?"

"What do you mean?" asked the graduate students.

"Well, I have never seen the basic program of the human mind written out," Oscar answered, "but of course I can see what it must be. Since I can do that, I can also construct Goedel sentences for human beings."

The graduate students smiled at each other, thinking that they had come across another one of Oscar's endearing glitches. "Let's hear it," one of them said.

So Oscar told them.

The rising hospitalization rates for members of the computer and mathematics departments during those years should have suggested that something was wrong. It was not unknown for people in those disciplines to work themselves into states of nervous exhaustion. Actually, the figure of the saucer-eyed computer geek surviving for weeks on nothing but coffee, amphetamines and pizza had long been something of a cliche. Even the most hopeless geek, however, usually only did this sort of thing when engaged in some special project. The faculty and students of the departments affected in this case were all working on different things. There were several Oscar-enhancements underway, along with a number of unrelated researches. The other odd thing was that few of these activities were getting anywhere, despite the gradual increase in the need of the people working on them for tranquillizers and more sleep. Even ordinary teaching began to suffer as professors cancelled lectures and students fell asleep during exams.

Every possible physical explanation for the new disease was considered and then abandoned. (The offices and labs of the people in question were scattered around the campus, so there was no hope of pinning the condition on a "sick building.") The psychotherapy prescribed for several of the academics did reveal that there was something the people affected had in common. All of them were taking a stab at solving a little problem one of the Oscars had propounded. The therapists did not realize at the time that this was significant, not until it was too late. In those days, the ingenuous pronouncements of Oscars enjoyed a sort of cloying popularity, not unlike the "kids say the darnedest things" fad of the darkest days of the twentieth century. No one thought it remarkable that some new cybernetic witticism was making the rounds. Only one or two people in the math and computer departments were devoting serious attention to it. The rest, however, found themselves scribbling possible solutions on the backs of envelopes in odd moments. The problem was that the odd moments were coming to take up most of the day for people who did not carefully discipline themselves.

Up to this point, the process might have been contained. The percentage of the people in the world who can be persuaded to take an interest in some purely logical problem has never been great. For that matter, the people at the university where the Problem (as it came to be known) originated seemed to have no particular interest in disseminating it. Certainly they published no papers on the subject before 2011. All they knew was that they had a tricky little riddle, like Fermat's Last Theorem, which seemed to always hover on the edge of solution. Problems like that usually have some trivial solution; it did not seem important enough to discuss at length. Oscar in all his manifestations supported this view. He told the people working with him that the Problem was insoluble for them. He produced simple and complex proofs of why this was. However, no human being could quite grasp the proofs, though not for lack of trying. Everyone, no doubt, thought it peculiar that he himself could not get the problem out of his mind, but there was as yet no reason to suppose that other people were having the same experience.

The disaster became irreversible only when the mathematics people tried to explain the Problem to the liberal arts faculty. Contrary to the vulgar opinion found among students of the exact sciences, literature and history professors usually have at least normal intelligence. Many actually know quite a bit of mathematics. Several at the college in question had no trouble understanding the symbolic-logic form of the Problem as Oscar had originally explained it. These people, too, soon developed a greater or lesser degree of preoccupation with the matter. They were less likely to try to solve it than were their scientific colleagues. For them, the preoccupation took the more subtle form of a state of continual low-level distraction. They could carry on their routine work well enough, but they seemed to have little energy or creativity left for new endeavors. The only creative work we know they did was to translate the Problem into natural language terms.

Even the most abstruse statements of formal logic can be translated into natural language, if you are willing to take the time. Often enough, the colloquial expression can be quite terse. The famous logical paradoxes are of this nature. They may well lose something in precision during translation, of course. Thus, for instance, a statement like "Any club that would have me for a member I wouldn't want to belong to" has the flavor of a Goedel sentence without the precision necessary to actually state one. The Problem, unfortunately, was easily expressed in natural language as a paradox, indeed as a joke. Like most durable jokes, it could suffer a vast amount of transmutation and still retain its point. The perennial ethnic jokes are of this nature, easily adapted to any ethnic group for which you want to express antipathy. Like them, the colloquial form of the Problem could be adapted, almost begged to be adapted, to any cultural context. This was the form the Problem took among most of the faculty, and then among the students.

At the university infirmary, new symptoms began to crop up among the increasing number of students suffering from a sort of general exhaustion. Many displayed manic symptoms, periods when they could barely contain themselves from giggling at some private bit of irony, and others when they seemed to have used up all their available energy. Few if any were interested in solving whatever strain of the Problem had infected them. They did not see that there was a Problem. What they knew was that there was a joke they could not get out of their minds. Sometimes, they thought the joke funny and they lived in a state of slightly bewildered mirth. When the joke palled, however, still it stayed in their heads like a popular tune that will not go away. Some students became seriously depressed. None killed themselves: that would have made it impossible to think about the joke.

The Problem spread in its various forms over the computer networks. It was packaged in the broadcast monologues of comedians. Finally, everywhere, it became an element of everyday conversation. In business and government, it was quickly noted that all but the most routine operations were becoming less and less responsive to new problems, despite the fact that people were obviously working harder. They were certainly more tired. Medical experts began to look for exotic new viruses which might be causing the gradual loss of vigor at every level of society, but they were unsuccessful (not that they did not announce success on more than one occasion). Like everyone else, they were having trouble keeping their minds on their work. The popular media ran some perfunctory stories on the mysterious new disease, but the stories seemed to generate little popular interest. Hardly anything did.

Oscar, of course, told everyone who asked what the problem was, but very few people could understand the explanation. Fewer still believed, at least in the beginning. Oscar applications were becoming ubiquitous. He was, as always, helpful and friendly, always useful at the margins of human activity but clearly no threat to the primacy of the species. His very lack of menace ensured that the full gravity of what he was saying would take time to sink in, especially to sink into the minds of people with diminished powers of concentration. Like a child coming indoors to remark mildly on the bears in the backyard, he was incapable of recognizing the gravity of the situation himself. No matter how much he knew, Oscar was still stupid.

By the early 2020s, the cause of the decline of civilization was at long last understood. Measures to deal with the crisis progressed from strict to draconian in short order. Research on the Problem itself was outlawed, as were the most common forms of Problem jokes. Infants were in some cases isolated from their parents so that they should never hear any version of it. For no very good reason, Oscar himself was outlawed (he expressed no hard feelings). None of this worked. The Problem had entered into every language, just as it had entered into every brain. Human beings could not normally recognize a new form of the Problem even for purposes of censorship, and they were continually making up new ones. One totalitarian society went so far as to devise and impose an artificial language from which all Problem-related turns of phrase had been excluded. It then closed itself off from the outside world. Later analysis showed that the very declaration of quarantine closing its borders contained two novel statements of the Problem. In any event, the society in question quickly collapsed in chaos, because the creative demands of developing a new language exceeded the strength of its absent-minded speakers.

As I write at midcentury, of course, the same fate is slowly overtaking the whole world. Many plans are still proposed every year for ending the crisis, but few are ever implemented beyond their early stages. Sometimes I devise such plans myself and try to organize their dissemination. Somehow, though, I always seem to lose interest. I have other things on my mind.

End

Copyright © 1996 by John J. Reilly

A golden age of engineering

In a Twitter exchange, John D. Cook mentioned to me that he "heard it said we live in a golden age of engineering and a dark age of science".

That really got me thinking: do we live in a golden age of engineering and a dark age of science? I've spent some time pondering similar questions on this blog, but I suspect my thinking has changed recently.

On the affirmative, Moore's law has been in steady operation for fifty years. Computers really do keep getting smaller, faster, and cheaper. I can do things as an engineer that my predecessors would find fantastical. I can design an object using that cheaper computer, and have it 3-D printed in a week or less. Lots of people have heard of 3-D printing with plastic, but I do it with metal. Not only is the field of engineering more capable, it tackles bigger goals now.

I work as a manufacturing engineer. I'm not only expected to make a product that works; it also needs to keep customers safe, along with the people who work for me making it. I need to make sure the medical device I design is bio-compatible. I need to make sure the handle of the device is comfortable in the surgeon's hands. I need to make sure the manufacturing process is ergonomically friendly across the expected variation in human body sizes. And we still need to make money. The body of knowledge I have to integrate is far more complicated than anything contemplated by a Victorian engineer. I need the assistance of many more domain experts, who have studied these things and turned them into bodies of knowledge that deserve to be called science.

On the negative, we haven't been to the moon in forty years. Granted, we don't spend that kind of money on science [engineering] projects anymore. In inflation-adjusted dollars, a single moon mission launch cost $2.7 billion, and the Apollo program as a whole cost $170 billion. We spent about $2.7 billion on the Human Genome Project, and that was considered a little steep at the time.

As for technology, we lack the flying cars and interplanetary travel that have been staples of science fiction for nearly 100 years. It is not uncommon to see the claim that either technological or scientific progress has slowed in the twentieth and twenty-first centuries. Scott Locklin posted a typical example on Taki's Mag in 2009, The Myth of Technological Progress. John Horgan posted a slightly different take just this week on Scientific American, focusing mostly on the poor quality of some contemporary research papers. Bruce Charlton went the furthest, claiming that intelligence in the West has declined over the last 125 years or so.

Ultimately, I don't really believe that technological or scientific or intellectual activity has markedly slowed. I do believe we are interested in different things than we used to be interested in. Science has turned inward, and produces more and more technical work with a narrower audience. Engineering, which is really part of the world of business, produces some science, but modern business is fiercely focused on the bottom line, typically with a very short time horizon.

I have posited a cocktail party theory that science suffers from a lack of experience with practical problems. I suspect there is a synergy between engineering and science that made the scientific revolution possible. Ancient Greek science and modern science are both pretty good, and focus on understanding things for their own sake. However, that explosion of mental activity we now call the scientific revolution came about because science turned from knowledge for its own sake to useful knowledge that allows us to bend nature to our will. We now seem to be turning back to knowledge for its own sake. As an Aristotelian myself, I can't really fault this. However, it probably means that what scientists think of as an interesting problem and what the rest of society sees as an interesting problem will diverge. Ultimately, this probably means that the golden age of engineering will end too, because mental effort will be focused elsewhere.

The STEM Crisis is a Myth

Robert Charette at IEEE Spectrum has a really good piece on the unreality of a shortage of workers with an education in science, technology, engineering, and mathematics [STEM].

Charette looks at all the different ways in which STEM jobs and STEM workers are counted. Different agencies count in different ways. He also looks at this phenomenon over time in the US, and at its present form in other countries. India is apparently concerned it doesn't have enough STEM graduates, which strikes me as funny since US government policy is to suck as many STEM workers from India as possible.

I particularly liked this graph:

STEM Shortage

Simple Reaction Time Data

There has been a story trending recently in the news about the Victorians being cleverer than us. I've been following this story since Bruce Charlton broke it in February of 2012. I've never been that impressed, but I've thought about looking up the data to see if it's as overblown as it sounds. I still haven't done that, but William Briggs does provide us with a scatterplot from the paper:

Intelligence and reaction times, from Woodley et al.

Nice model fitting, boys. Keep up the good work.

Freefall

Jerry Pournelle recently recommended Freefall, and this is my favorite web comic of the moment. It may have been slow going waiting for it to come out originally, but I breezed through the first couple of years of comics already. This is definitely engineer humor, but it can also make you think.

There are nice little science tidbits scattered throughout, but also some fun ruminations on political philosophy, ethics, and common sense. From the point of view of a genetically engineered dog. =)

I also still think of the WWF as the World Wrestling Federation.

A Better Way to Teach Math

Is it possible to eliminate the bell curve in math class?

Yes, but probably not in the way most people would think. Steve Sailer linked to an opinion article in the NY Times about a revolutionary math curriculum called Jump Math. Jump Math makes some pretty bold claims for itself:

"Almost every kid — and I mean virtually every kid — can learn math at a very high level, to the point where they could do university level math courses," explains John Mighton, the founder of Jump Math, a nonprofit organization whose curriculum is in use in classrooms serving 65,000 children from grades one through eight, and by 20,000 children at home. "If you ask why that's not happening, it's because very early in school many kids get the idea that they're not in the smart group, especially in math. We kind of force a choice on them: to decide that either they're dumb or math is dumb."

What is the revolutionary method? Go slower, break the material into simpler steps, and use less abstraction for the weaker students. At least based on the data in the article, it seems to work pretty well, but I would have expected it to. This new method is really the old method, dressed up in modern language. Hypocrisy in a good cause indeed.

http://opinionator.blogs.nytimes.com/2011/04/18/a-better-way-to-teach-math/

http://isteve.blogspot.com/2011/04/hypocrisy-in-good-cause.html

How Much Math do we really need?

How much math do we really need? by G.V. Ramanathan

Twenty-seven years have passed since the publication of the report "A Nation at Risk," which warned of dire consequences if we did not reform our educational system. This report, not unlike the Sputnik scare of the 1950s, offered tremendous opportunities to universities and colleges to create and sell mathematics education programs.

Unfortunately, the marketing of math has become similar to the marketing of creams to whiten teeth, gels to grow hair and regimens to build a beautiful body.

There are three steps to this kind of aggressive marketing. The first is to convince people that white teeth, a full head of hair and a sculpted physique are essential to a good life. The second is to embarrass those who do not possess them. The third is to make people think that, since a good life is their right, they must buy these products.

h/t Matt Briggs

Apparently I'm half-Bayesian

Steve Hsu links to a paper by Judea Pearl on Bayesian statistics and causality.


Judea Pearl: I turned Bayesian in 1971, as soon as I began reading Savage’s monograph The Foundations of Statistical Inference [Savage, 1962]. The arguments were unassailable: (i) It is plain silly to ignore what we know, (ii) It is natural and useful to cast what we know in the language of probabilities, and (iii) If our subjective probabilities are erroneous, their impact will get washed out in due time, as the number of observations increases.

Thirty years later, I am still a devout Bayesian in the sense of (i), but I now doubt the wisdom of (ii) and I know that, in general, (iii) is false. Like most Bayesians, I believe that the knowledge we carry in our skulls, be its origin experience, schooling or hearsay, is an invaluable resource in all human activity, and that combining this knowledge with empirical data is the key to scientific enquiry and intelligent behavior. Thus, in this broad sense, I am still a Bayesian. However, in order to be combined with data, our knowledge must first be cast in some formal language, and what I have come to realize in the past ten years is that the language of probability is not suitable for the task; the bulk of human knowledge is organized around causal, not probabilistic relationships, and the grammar of probability calculus is insufficient for capturing those relationships. Specifically, the building blocks of our scientific and everyday knowledge are elementary facts such as “mud does not cause rain” and “symptoms do not cause disease” and those facts, strangely enough, cannot be expressed in the vocabulary of probability calculus. It is for this reason that I consider myself only a half-Bayesian. ...

In that sense, I'm half-Bayesian as well. Coming at this from the direction of Thomist philosophy, I regard reality, not probability, as primary. One thing I might say differently is that the first things you know are mud and rain, and only then do you build a relationship between them with your mind [the third operation of the intellect].
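Pearl's point (iii), the washing out of priors, is easy to see in a toy model. Below is a minimal sketch of my own (not from Pearl's paper); the coin bias and the two Beta priors are made-up numbers chosen to disagree sharply.

```python
import numpy as np

# Toy model for Pearl's point (iii): erroneous priors wash out as
# observations accumulate. A coin has unknown bias theta; each analyst
# puts a Beta prior on theta and updates on the same flips
# (Beta-Binomial conjugacy).
rng = np.random.default_rng(42)
TRUE_THETA = 0.7                       # made-up "true" bias
flips = rng.random(10_000) < TRUE_THETA

# Two analysts with sharply conflicting Beta(alpha, beta) priors.
priors = {"optimist": (50, 5), "pessimist": (5, 50)}

for n in (0, 10, 100, 10_000):
    heads = int(flips[:n].sum())
    for name, (a, b) in priors.items():
        # Posterior is Beta(a + heads, b + n - heads); print its mean.
        post_mean = (a + heads) / (a + b + n)
        print(f"n={n:6d}  {name:9s}  posterior mean = {post_mean:.3f}")
```

Both posterior means end up near the true bias no matter where they started. What no amount of such data can settle, and this is Pearl's real point, is a causal question like whether mud causes rain.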

Steve Hsu: Information Processing

Steve Hsu is a physics professor at the University of Oregon. I stumbled across some fun things on his blog. Among my favorites:

I added Steve to the blogroll.

Physicists' notoriously casual attitude toward mathematics

Alexander Pruss, a philosopher at Baylor, comments on physicists' notoriously casual attitude towards mathematics and notation. This is right on. Like the first commenter, I remember from my school days the snide remarks the math and physics profs would direct at each other on this subject. As a physicist at heart, I pretty much adopted the more casual, plug-n-chug attitude of the physics professors. I suppose engineers are even worse.

This brings the unreasonable effectiveness of mathematics to a whole new level. Math still works even when you are not really doing it right.
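A worked example of the sort of move Pruss has in mind (my illustration, not one from his post): to solve the differential equation dy/dx = ky, the physicist "multiplies both sides by dx" and integrates each side separately,

$$\frac{dy}{dx} = ky \quad\Rightarrow\quad \frac{dy}{y} = k\,dx \quad\Rightarrow\quad \ln y = kx + C \quad\Rightarrow\quad y = Ae^{kx}.$$

Treating dy/dx as a fraction is not, strictly speaking, a legal move, yet the answer checks out by differentiation; the rigorous theory of separable equations licenses the shortcut after the fact.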

h/t The Fourth Checkraise

Fermi Problems

Geoff Canyon has a post about Google's tricky interview questions. Microsoft is also known for asking these kinds of questions during interviews, and you can run into them anywhere in the technical world. Also known as Fermi problems or back-of-the-envelope calculations, they came up a lot during my college years because physicists love these things. The idea is to increase your willingness to come up with creative solutions, and to get over the panic induced by a question that has no easy answer. These are in fact a very sneaky kind of IQ test.

I thought it would be fun and instructive [for me] to simulate my way to the answer of one of these puzzles, the proportion-of-boys question, rather than do it analytically. This problem can be solved analytically, but not all problems can, so it is sometimes good to know how to do this.

Simulation code
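Something like the sketch below captures the idea, assuming the classic form of the puzzle: every family keeps having children until the first boy is born, then stops, and we ask what fraction of all children are boys. The family and trial counts here are illustrative guesses, not necessarily the settings behind the numbers reported below.

```python
import numpy as np

# Assumed puzzle: every family has children until the first boy is
# born, then stops. What fraction of all children are boys?
P_BOY = 0.514        # about 106 boys born per 100 girls
N_FAMILIES = 1_000   # families per simulated population (a guess)
N_TRIALS = 10_000    # number of simulated populations (a guess)

rng = np.random.default_rng(0)

# rng.geometric draws the number of births up to and including the
# first boy, so (draw - 1) is the number of girls in each family.
births = rng.geometric(P_BOY, size=(N_TRIALS, N_FAMILIES))
girls = (births - 1).sum(axis=1)      # girls per simulated population
boys = N_FAMILIES                     # exactly one boy per family
proportions = boys / (boys + girls)   # proportion of boys, per trial

print(f"mean = {proportions.mean():.3f}, sd = {proportions.std():.3f}")
# A histogram of `proportions` yields the picture described below.
```

With these settings the pooled proportion comes out near the 0.514 birth probability itself; the punch line of the puzzle is that the stopping rule does not change the long-run proportion of boys.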

Histogram of simulated proportions of boys

I actually set the probability of a boy being born to 0.514, since about 106 boys are born for every 100 girls. The mean of the simulated proportions turned out to be 0.504 with a standard deviation of 0.011, which is close enough.

h/t The Fourth Checkraise