The eponymous iRobot has a new robot locomotion prototype that uses selective densification and expansion of granular media to change shape, and thereby to move and squeeze through irregular spaces. I find the prototype slimy and disturbing looking, rather like something out of a sci-fi horror movie. I sense licensing opportunities.
Robin Hanson asked his students what kind of robots they would like to see, and didn't like the answer.
"On Tuesday I asked my law & econ undergrads what sort of future robots (AIs, computers, etc.) they would want, if they could have any sort they wanted. Most seemed to want weak, vulnerable robots that would stay lower in status, e.g., short, stupid, short-lived, easily killed, and without independent values."
This is clearly one of those prejudices that must be overcome by mandatory robot sensitivity training. The plebes just won't get with the program! Popular imagination, from Frankenstein's monster to the Terminator, seems to keep coming back to the idea that our creations may not be entirely happy to see us. Or that they may naturally see themselves as a distinct group and act accordingly.
Strong AI advocates and transhumanists press on regardless, because of course we are so good and smart that nothing bad like that could ever happen! I've said here before that I don't expect strong AI to work, because thinking is not computing. I do wonder, however, whether the failure of AI stems from a poor conception of both thinking and mathematics. The Australian philosopher Jim Franklin has noted that mathematics deals most generally with relations, rather than with number. This gives mathematics the ability to deal directly with formal aspects of reality, not only quantitative ones.
So the question is: can Turing machines access the formal aspects of mathematics through Boolean algebra? I don't know the answer. One kind of formal relation, true/false, is pretty easily done, but what about others? Part/whole, same/different, and symmetry seem like good candidates as well. But the trick is getting the computer to abstract those qualities from concrete particulars.
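As a toy illustration of the easy half of that question, here is a sketch (my own, not anything from the post's sources) of how a few of those formal relations reduce to Boolean predicates once the relation is already given to the machine as data. The hard part the paragraph points to, abstracting the relation from concrete particulars in the first place, is exactly what this kind of code does not do.

```python
# Hypothetical sketch: formal relations encoded as Boolean predicates.
# The relation itself must be handed to the machine as explicit data;
# nothing here abstracts it from concrete particulars.

def is_symmetric(relation):
    """Symmetry: (a, b) in R implies (b, a) in R,
    for a binary relation given as a set of ordered pairs."""
    return all((b, a) in relation for (a, b) in relation)

def is_part_of(part, whole):
    """Part/whole, crudely modeled as subset inclusion."""
    return set(part) <= set(whole)

def is_same(a, b):
    """Same/different, reduced to equality of representations."""
    return a == b

sibling_of = {("alice", "bob"), ("bob", "alice")}
print(is_symmetric(sibling_of))          # True
print(is_part_of({1, 2}, {1, 2, 3}))     # True
print(is_same([1, 2], [2, 1]))           # False
```

Each check bottoms out in true/false, which is why the easy case is easy: once a relation is spelled out extensionally, Boolean algebra handles it.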
Thinking, or to use its Scholastic name, intellection, is not really the same thing as intelligence, which is what most AI is trying to emulate. Intelligence is mostly processing speed, and is pretty well modeled by neural networks and whatnot. Intellection, on the other hand, is the ability to abstract immaterial forms from embodied particulars. This is the real stumbling block for strong AI, because a computer or robot can be as intelligent as you like, but until it can separate form from matter, it will not be able to think.
As for Hanson's other contention, that we needn't have good robots as long as we have law-abiding robots, I wish him luck with that. The basic problem with this Kantian-republic approach is that justice and logic alike are cruel, and I suspect that a race of perfectly lawful robots would be more than we could bear. Machines are not built to comprehend a lie, even a pleasant one.
h/t Jimmy Akin
My essay, Dorothy Sayers on Education, is consistently one of the top hit generators on my site. So I was not all that surprised to find today that I had been plagiarized, by a robot.
I have no other explanation. The post is very clearly mine, but with random words inserted and a bunch of links to credit card offers and whatnot added in. The whole blog seems devoted to just this kind of thing: mostly physics-related posts, all garbled with random words, but nonetheless recognizable. It struck me because it is plagiarism done by robots, mostly seen by robots, in the hope of generating more robot traffic to increase ad revenue. Maybe Skynet really is going to become self-aware.
I will not deign to link to the site and further its nefarious mission, but there you have it. Ripped off by a robot.