Monday, March 31, 2008

Learning Robot is "A Lot Like a Puppy"

I'm excited to report that I just read the first news story about real AI that's ever made me a little bit nervous.

First off: apparently, when artificial neural networks are used in AI, they have to be carefully limited to keep robots from doing dumb things. I'm pretty sure that at this level this just means limiting the number of completely irrational solutions the robot comes up with for a problem on the level of "There's a wall in my way," but looking ahead, I like to think that one missing line of code could someday stand between peaceful coexistence and robots eradicating humanity.
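Just for fun, here's the kind of thing I imagine that limiting could look like: a totally made-up Python sketch (every name in it is mine, and it has nothing to do with the actual COSPAL code) where a hand-written blacklist vetoes the network's dumber proposals. The joke is that forgetting the one `if` line is exactly the "missing line of code" scenario:

```python
# Made-up sketch: a hard-coded sanity filter on top of a neural
# network controller. FORBIDDEN and the action names are invented
# for illustration; this is not how the real system works.

FORBIDDEN = {"drive_into_wall", "spin_in_place_forever"}

def pick_action(ranked_proposals):
    """ranked_proposals: action names, best-scoring first, as a
    neural net controller might emit them."""
    for action in ranked_proposals:
        if action not in FORBIDDEN:  # the one line you'd better not delete
            return action
    return "stop"  # safe fallback if every proposal is irrational

print(pick_action(["drive_into_wall", "turn_left"]))  # -> turn_left
```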

On a more serious note, the point of the article is that by cleverly mixing artificial neural networks with the traditional "pre-programmed" approach to AI, the researchers found something that works much better than either one does on its own.

Here's a quote, because this is just that awesome:

Working in the EU-funded COSPAL project, Felsberg’s team found that using the two technologies together solves many of those issues. In what the researchers believe to be the most advanced example of such a system developed anywhere in the world, they used ANN to handle the low-level functions based on the visual input their robots received and then employed classical AI on top of that in a supervisory function.

“In this way, we found it was possible for the robots to explore the world around them through direct interaction, create ways to act in it and then control their actions in accordance. This combines the advantages of classical AI, which is superior when it comes to functions akin to human rationality, and the advantages of ANN, which is superior at performing tasks for which humans would use their subconscious, things like basic motor skills and low-level cognitive tasks,” notes Felsberg.
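To make that layering a little more concrete, here's my rough guess in Python at what "ANN on the bottom, classical AI on top" could look like. Everything here, from the class names down to the toy confidence scores, is my own invention for illustration, not the researchers' actual system:

```python
import random

# Hypothetical sketch of the hybrid described above: a neural net
# proposes low-level motor commands from visual input, and a
# classical, rule-based supervisor decides which one runs.

class LowLevelANN:
    """Stand-in for a trained network that maps camera input to
    candidate motor commands with confidence scores."""
    def propose(self, image):
        return [("reach_left", random.random()),
                ("reach_right", random.random()),
                ("retract", random.random())]

class ClassicalSupervisor:
    """Symbolic layer: knows the current goal and rules out
    commands that don't advance it."""
    def __init__(self, goal):
        self.goal = goal

    def consistent(self, cmd):
        # Toy rule: only retract when the goal is to withdraw.
        return cmd != "retract" or self.goal == "withdraw"

    def select(self, proposals):
        allowed = [(c, s) for c, s in proposals if self.consistent(c)]
        # Run the highest-confidence command the rules permit.
        return max(allowed, key=lambda p: p[1])[0] if allowed else "idle"

ann = LowLevelANN()
boss = ClassicalSupervisor(goal="grasp_peg")
print(boss.select(ann.propose(image=None)))  # reach_left or reach_right
```

The appeal, as I read it, is exactly the division of labor Felsberg describes: the net handles the fuzzy perception-to-motion part, and the rational layer never has to learn anything.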

What's really interesting is how they handle the problem of setting a robot free to learn on its own. Essentially, the robot has no innate criteria for making decisions, so how does it know when it's done something "right"? Well, we tell it. A human operator has a device with two buttons.

The good boy button, and the bad boy button.
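If I had to guess at how a button press feeds into the learning, it would be something like this toy Python loop: the press becomes a +1 or -1 reward that nudges the robot's preference for whatever it just did. The update rule here is one I picked purely for illustration; the article doesn't say what the project actually uses:

```python
# Made-up sketch of the good-boy/bad-boy loop. The operator's
# button press becomes a reward that shifts the robot's preference
# for the action it just took. Toy update rule of my own choosing.

preferences = {"push_button": 0.0, "knock_over_tower": 0.0}

def feedback(action, good_boy, lr=0.5):
    reward = 1.0 if good_boy else -1.0
    preferences[action] += lr * reward

feedback("push_button", good_boy=True)        # operator: good boy!
feedback("knock_over_tower", good_boy=False)  # operator: bad boy.
print(preferences)  # push_button rises, knock_over_tower falls
```

Dog training, basically.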

Wow. Can you imagine when this becomes a toy, or an ethical issue?

There's actually a video game that functions similarly. I guess it'll be harder without a magical fairy.

1 comment:

Gyro said...

I like how they're merging different AI protocols. I was just thinking the other day about how emotions and cognitive thinking are actually two completely separate systems that error-check each other, often with hilarious results.

In the end, teaching robots might be somewhat easier than teaching humans. But we all knew that anyway.

I do like the fact that most of the discussions on the subject of robots recently have addressed how much like us we're making them.

I wonder how that will translate: the ideal us (robots better and more awesome), the convenient us (robots programmed to love and serve), or the real us.

Probably some combination of the above.

Just watch out when we build planetary network-level AIs, and they start experiencing Rampancy...