
Robots learn how to "lie"

PostPosted: Sat Aug 22, 2009 9:54 am
by Momo-P
Source
With the development of killer drones, it seems like everyone is worrying about killer robots.
Now, as if that wasn't bad enough, we need to start worrying about lying, cheating robots as well.

In an experiment run at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were designed to cooperate in searching out a beneficial resource and avoiding a poisonous one learned to lie to each other in an attempt to hoard the resource. Picture a robo-Treasure of the Sierra Madre.

The experiment involved 1,000 robots divided into 10 different groups. Each robot had a sensor, a blue light, and its own 264-bit binary code "genome" that governed how it reacted to different stimuli. The first generation robots were programmed to turn the light on when they found the good resource, helping the other robots in the group find it.

The robots got higher marks for finding and sitting on the good resource, and negative points for hanging around the poisoned resource. The 200 highest-scoring genomes were then randomly "mated" and mutated to produce a new generation of programming. Within nine generations, the robots became excellent at finding the positive resource, and communicating with each other to direct other robots to the good resource.

However, there was a catch. A limited amount of access to the good resource meant that not every robot could benefit when it was found, and overcrowding could drive away the robot that originally found it.

After 500 generations, 60 percent of the robots had evolved to keep their light off when they found the good resource, hogging it all for themselves. Even more telling, a third of the robots evolved to actually look for the liars by developing an aversion to the light; the exact opposite of their original programming!


It's kind of debatable whether or not you really wanna consider it lying, I mean...they ARE robots for heaven's sake, but still...pretty interesting.
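
For the curious, the "breeding" step they describe boils down to something like this (just my sketch; the population size and elite count are from the article, the crossover and mutation details are my guesses):

Code:
import random

GENOME_BITS = 264     # each robot's binary "genome", per the article
POP_SIZE = 1000       # 1,000 robots total
ELITE = 200           # the 200 highest-scoring genomes get to "mate"
MUTATION_RATE = 0.01  # guessed; the article doesn't give a rate

def next_generation(population, score):
    """Score every genome, keep the best, breed and mutate replacements."""
    ranked = sorted(population, key=score, reverse=True)
    elite = ranked[:ELITE]
    children = []
    while len(children) < POP_SIZE:
        mom, dad = random.sample(elite, 2)
        cut = random.randrange(1, GENOME_BITS)        # single-point crossover
        child = mom[:cut] + dad[cut:]
        child = [1 - b if random.random() < MUTATION_RATE else b
                 for b in child]                      # occasional bit flips
        children.append(child)
    return children

Run that for 500 generations with a score that rewards sitting on the good resource, and apparently you get liars.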

PostPosted: Sat Aug 22, 2009 11:44 am
by Technomancer
Momo-P (post: 1340320) wrote:Source


It's kind of debatable whether or not you really wanna consider it lying, I mean...they ARE robots for heaven's sake, but still...pretty interesting.


They are robots, but they still are evolving a form of deception (and a strategy for detecting deception). Given more variables, it would be interesting to see what other behaviours might evolve, and how the robots would interact.

PostPosted: Sat Aug 22, 2009 12:01 pm
by Tsukuyomi
Even if they are AI, they are learning ^^ Even if it is through glitches and whatnot ^^

That is pretty interesting ^^ I just now imagined a group of robots going, "I didn't find anything. Nope, nothing at all." XDDD

PostPosted: Sat Aug 22, 2009 3:06 pm
by sharien chan
That's weird. Kind of cool...but mostly weird. Who would have thought? Though sometimes I wonder if people realize that maybe there are some experiments or inventions we shouldn't do.

PostPosted: Sat Aug 22, 2009 7:53 pm
by SnoringFrog
That's definitely interesting, and I agree with Technomancer, it'd be interesting to see how this would go with extra variables involved.

PostPosted: Sat Aug 22, 2009 8:33 pm
by blkmage
The article links to another article which links to a more detailed article.

At the beginning, the light had no significance. The robots then learned to shine a light whenever they found resources. They then learned that if a light shone, that meant there were resources there. Pretty soon, they realized that resources were scarce, so some of them learned not to shine lights when they found resources. After that, some robots learned not to depend on the light to find resources. As a result, those who continued to shine lights weren't in as bad shape as before, since fewer robots came swarming to their lights.

Also interesting is that the attraction to lights and the notification through lights aren't binary. That is, the robots weren't always attracted to or always ignoring lights. A lot of them would be attracted only sometimes, and the same goes for the decision to shine a light when resources were discovered.
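
So a genome really encodes tendencies rather than hard rules. A toy sketch of what that means (the probabilities here are invented, not from the articles):

Code:
import random

class Robot:
    def __init__(self, p_shine, p_follow):
        self.p_shine = p_shine    # chance of lighting up at the good resource
        self.p_follow = p_follow  # chance of steering toward a lit robot

    def found_resource(self):
        return random.random() < self.p_shine   # True -> blue light on

    def sees_light(self):
        return random.random() < self.p_follow  # True -> drive toward it

honest = Robot(p_shine=0.9, p_follow=0.7)    # early-generation behaviour
liar = Robot(p_shine=0.1, p_follow=0.7)      # rarely signals, still exploits lights
skeptic = Robot(p_shine=0.5, p_follow=0.1)   # stopped trusting lights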

PostPosted: Sat Aug 22, 2009 9:39 pm
by Fish and Chips
"Killbot, you're not thinking of annihilating humanity are you?"
"No. Sir."

Greatest scientific achievement.

PostPosted: Sat Aug 22, 2009 9:43 pm
by Solid Ronin
Hey, Andrew won't you believe in him...


PostPosted: Sat Aug 22, 2009 10:59 pm
by Warrior4Christ
Knowing a bit about AI, I'm pretty sure they pre-programmed all the behaviour in, including lying, lie-detection, turning the light on or off, etc., and then the 'gene' each robot has determines its behaviour at runtime. So it's no coincidence that they turned to lying, since that existed in their original programming.

PostPosted: Sun Aug 23, 2009 5:39 am
by Technomancer
Warrior4Christ (post: 1340453) wrote:Knowing a bit about AI, I'm pretty sure they pre-programmed all the behaviour in, including lying, lie-detection, turning the light on or off, etc.


No, it wasn't pre-programmed. Only the basic aspects of the hardware control were programmed in. The actual program that controlled the robot's behaviour was the genome itself (in other words, a genetic program). So for example, in the case of optical signalling, while the capability always existed in the robot, the question of when and why signalling should occur was completely controlled by the adaptation of the genetic program.
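
To make that concrete: one common setup (and my assumption about this study's details, not something the thread's articles spell out) is that the 264 bits split into 33 eight-bit genes, each decoded into one connection weight of a fixed sensor-to-motor controller. Evolution only ever rewrites the bits; the drivers for the wheels, light, and sensors stay fixed.

Code:
GENE_BITS = 8  # 264 bits / 8 bits per gene = 33 weights

def decode(genome):
    """Turn a 264-bit genome into 33 connection weights in [-1, 1]."""
    weights = []
    for i in range(0, len(genome), GENE_BITS):
        gene = genome[i:i + GENE_BITS]
        value = int("".join(str(b) for b in gene), 2)  # 0..255
        weights.append(value / 127.5 - 1.0)            # scale to [-1, 1]
    return weights

def light_on(weights, sensors):
    # One output of a single-layer controller: positive sum -> light on.
    return sum(w * s for w, s in zip(weights, sensors)) > 0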

PostPosted: Sun Aug 23, 2009 8:03 am
by shooraijin
The more interesting part might be the algorithm that "improved" the internal behaviour.

PostPosted: Sun Aug 23, 2009 8:47 am
by Kaligraphic
Of course, the people conducting the experiment had to have been looking for that sort of response. If they'd been looking for the highest total score, they'd have added a bonus based on the group's total score, rather than just using individual scores.

So when a killbot goes berserk because of this sort of programming, for all the weapons and ammo it will have stockpiled, at least it'll also be attacking other killbots as potential competitors. Sporadic, individual killbot murder sprees might be enough to keep a killbot operator out of jail ("It must have just been one of those crazy killbots. Genetic algorithm inbreeding and all that.") but they won't spark an uprising. They'll all be waiting for the opportune moment.

See, it's when they learn to cooperate that they'll have a chance of exterminating us.



More seriously, though, if the study operated as blkmage says, then it isn't lying at all. Rather, it would simply be a lack of active cooperation.

PostPosted: Sun Aug 23, 2009 9:12 am
by Technomancer
Kaligraphic (post: 1340496) wrote:Of course, the people conducting the experiment had to have been looking for that sort of response. If they'd been looking for the highest total score, they'd have added a bonus based on the highest total score, rather than just using individual scores.


True, the fitness criterion will define the kind of solutions arrived at. A more complex simulation (or different scoring) might very well have favoured the evolution of cooperative behaviour. A more interesting approach though would be to still focus on individual rewards, but allow for the possibility that the average personal reward might be higher in the presence of cooperation. Such a system might also have room for the evolution of "cheating" strategies, as well as various counter-strategies, etc.
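
For instance, the fitness could blend a robot's own score with its group's average, so cooperation can pay off individually (a sketch, with an invented weight):

Code:
def fitness(own_score, group_scores, coop_weight=0.3):
    """Mostly individual reward, plus a cut of the group's average."""
    group_avg = sum(group_scores) / len(group_scores)
    return (1 - coop_weight) * own_score + coop_weight * group_avg

With coop_weight = 0 you recover the original setup; raising it should favour honest signalling, while still leaving room for cheaters to free-ride on the group term.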

Incidentally, a bit of background on the underlying method can be found here:
http://en.wikipedia.org/wiki/Genetic_programming

PostPosted: Sun Aug 23, 2009 1:06 pm
by minakichan
This is AWESOOOOOOOOOOOOOME!

Or would be if this was the precursor to a robot uprising. =(

PostPosted: Sun Aug 23, 2009 5:41 pm
by Whitefang
I think the robots would have evolved quite differently if they had to seek out mates with compatible scores and would "die" or be decommissioned when their own scores became too low. I would also like to know if the robots ever tried to fool the other robots by turning on their light near no source or the bad source.
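
The first idea would only take a small change to the breeding step, something like this (thresholds invented):

Code:
def cull_and_pair(population, score, death_score=0.0, match_window=10.0):
    """Drop low scorers outright, then pair survivors with similar scores."""
    survivors = [g for g in population if score(g) >= death_score]
    survivors.sort(key=score)
    # Neighbours in the sorted list have the most compatible scores.
    return [(a, b) for a, b in zip(survivors, survivors[1:])
            if abs(score(a) - score(b)) <= match_window]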

PostPosted: Sun Aug 23, 2009 8:01 pm
by ich1990
These self optimizing AI systems are really cool. I can already envision people selectively "breeding" robots for gladiator style competitions.

PostPosted: Sun Aug 23, 2009 8:05 pm
by SnoringFrog
Whitefang (post: 1340617) wrote:I think the robots would have evolved quite differently if they had to seek out mates with compatible scores and would "die" or be decommissioned when their own scores became too low. I would also like to know if the robots ever tried to fool the other robots by turning on their light near no source or the bad source.


I believe one of the articles did mention that they would sometimes turn their lights on at no source or the bad source, because it also mentioned how some robots developed a lack of dependence on other robots' lights to locate the good source.

ich1990 (post: 1340673) wrote:These self optimizing AI systems are really cool. I can already envision people selectively "breeding" robots for gladiator style competitions.

I'm seeing BattleBots on steroids--er...genetic programming.

PostPosted: Sun Aug 23, 2009 10:21 pm
by Fish and Chips
ich1990 (post: 1340673) wrote:These self optimizing AI systems are really cool. I can already envision people selectively "breeding" robots for gladiator style competitions.
"I am Sp@rticus."

PostPosted: Mon Aug 24, 2009 6:19 am
by Maokun
Fish and Chips (post: 1340443) wrote:"Killbot, you're not thinking of annihilating humanity are you?"
"No. Sir."

Greatest scientific achievement.


Fish and Chips (post: 1340709) wrote:"I am Sp@rticus."


I lol'd. You are on a roll, sir.

As for the article, I believe that while the "lying" mechanism wasn't programmed in from the beginning, the experiment was conditioned to obtain that result. Personally, I found the fact that the robots lied less interesting than the fact that they developed a sense of ownership over their findings; lying was just a device to protect them.

I'd feel worried about the unavoidably upcoming "Day of the Machines" if I weren't rooting for a "Night of the Living Dead" scenario myself. Now, if they turned out to be the day and the night of the same day, things could get ugly.

PostPosted: Mon Aug 24, 2009 6:21 am
by shooraijin
Fish and Chips (post: 1340709) wrote:"I am Sp@rticus."


Except that this lot would probably say, "Yeah! He's Sp@rt@cus. Kill him!"

PostPosted: Thu Aug 27, 2009 7:55 am
by NarutoAngel221
Well, it seems like robots are getting pretty innovative right now. I just hope they won't become too powerful, and I hope they can make a robot with a heart like a human has.

PostPosted: Thu Aug 27, 2009 8:38 am
by ich1990
NarutoAngel221 (post: 1341738) wrote:I hope they can make a robot with a heart like a human has


That would probably be a bad idea.

PostPosted: Thu Aug 27, 2009 1:04 pm
by KagayakiWashi
Nooooooo! If it gets much worse than that, videogames will start to cheat and lie to the player even more!

PostPosted: Sun Aug 30, 2009 2:27 pm
by Bobtheduck
KagayakiWashi (post: 1341824) wrote:Nooooooo! If it gets much worse than that, videogames will start to cheat and lie to the player even more!


More than programmers tell them to in order to secure a false sense of challenge? I doubt it.

PostPosted: Mon Aug 31, 2009 8:20 pm
by WhiteMage212
This is getting pretty scary. Robots will soon replace low-class and physical labor jobs at this rate. The only reason I would love advanced robots that would rebel is that I could carry a chain gun and shoot them all up.