Just whose side are you on, robo-traitor?

Robin Hanson asked his students what kind of robots they would like to see, and didn't like the answer.

On Tuesday I asked my law & econ undergrads what sort of future robots (AIs, computers, etc.) they would want, if they could have any sort they wanted. Most seemed to want weak, vulnerable robots that would stay lower in status, e.g., short, stupid, short-lived, easily killed, and without independent values.

This is clearly one of those prejudices that must be overcome by mandatory robot sensitivity training. The plebes just won't get with the program! Popular imagination, from Frankenstein's monster to the Terminator, seems to keep coming back to the idea that our creations may not be entirely happy to see us. Or that they may naturally see themselves as a distinct group and act accordingly.

Strong AI advocates and transhumanists press on regardless, because of course we are so good and smart that nothing bad like that could ever happen! I've said here before that I don't expect strong AI to work, because thinking is not computing. I do wonder, however, whether the failure of AI stems from a poor conception of both thinking and mathematics. Australian philosopher Jim Franklin has noted that mathematics deals most generally with relations, rather than number. This gives mathematics the ability to deal directly with formal aspects of reality, not only quantitative ones.

So the question is, can Turing machines access the formal aspects of mathematics through Boolean algebra? I don't know the answer to that question. One kind of formal relation, true/false, is pretty easily done, but what about others? Part/whole, same/different, and symmetry seem like good candidates as well. But the trick is getting the computer to be able to abstract those qualities from concrete particulars.
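
To make the worry concrete, here is a minimal Python sketch of my own (not anything from Hanson or Franklin) showing how a program handles these relations once somebody has already turned the particulars into symbols for it. The relation names and examples are invented for illustration.

```python
# Formal relations are easy to *check* once the particulars have already
# been abstracted into symbols. The examples below are illustrative only.

# True/false: plain Boolean algebra, which a Turing machine handles natively.
p, q = True, False
print(p and not q)          # -> True

# Part/whole: set inclusion, once "wheel" and "car" are just tokens.
car = {"wheel", "engine", "seat"}
print({"wheel"} <= car)     # -> True: the wheel is "part of" the whole

# Same/different: equality of symbols, not of the things themselves.
print("triangle" == "triangle", "triangle" == "square")

# Symmetry: a finite relation R is symmetric if (a, b) in R implies (b, a) in R.
def is_symmetric(relation):
    """Check symmetry of a finite relation given as a set of ordered pairs."""
    return all((b, a) in relation for (a, b) in relation)

sibling_of = {("alice", "bob"), ("bob", "alice")}
parent_of = {("alice", "carol")}
print(is_symmetric(sibling_of))  # -> True
print(is_symmetric(parent_of))   # -> False
```

Notice what the sketch never does: it never decides that this lump of metal counts as a "wheel," or that two drawings are "the same" triangle. The formal checks are trivial; the abstraction arrives pre-done by the programmer, which is exactly the step in question.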

Thinking, or to use its Scholastic name, intellection, is not really the same thing as intelligence, which is what most AI is trying to emulate. Intelligence is mostly processing speed, and is pretty well modeled by neural networks and whatnot. Intellection, on the other hand, is the ability to abstract immaterial forms from embodied particulars. This is the real stumbling block for strong AI, because a computer or robot can be as intelligent as you like, but until it can separate form from matter, it will not be able to think.

As for Hanson's other contention, that we needn't have good robots as long as we have law-abiding robots, I wish him luck with that. The basic problem with this Kantian republic approach is that justice and logic alike are cruel, and I suspect that a race of perfectly lawful robots would be more than we could bear. Machines are not built to comprehend a lie, even a pleasant one.

h/t Jimmy Akin