Teaching robots to lie

Technology is so grand. With the research from this experiment, we might finally be able to replace the lowest form of humans - politicians - with robots. After all, if you can teach robots to lie, they can perform all of the same duties. (You could probably even use a Commodore 64 for the hardware, since the job clearly has no other skill requirements that demand complicated data processing.) Deception is causing someone to accept as true or valid what is false or invalid. But did they really teach a robot to be deceptive? If so, what are the ethical implications? Do you think humanity will be better or worse off with robots that are capable of deceiving? (By the way, yes, this research was funded by the US government - after all, who better to research lying than the experts?)


Researchers at the Georgia Institute of Technology may have made a terrible, terrible mistake: They've taught robots how to deceive. It probably seemed like a good idea at the time. Military robots capable of deception could trick battlefield foes who aren't expecting their adversaries to be as smart as a real soldier might be, for instance. But when machines rise up against humans and the robot apocalypse arrives, we're all going to be wishing that Ronald Arkin and Alan Wagner had kept their ideas to themselves...

"The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot." When asked about whether this was really a good idea, bearing in mind the events of Terminator 2, Arkin added: "We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects. We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems."
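The experiment behind the paper involved hide-and-seek: a robot would leave a false trail (knocked-over markers) pointing toward one hiding spot, then hide somewhere else, deceiving the seeker robot that followed the trail. As a rough illustration only - this is not the authors' actual algorithm, and the function names and the simple conflict/dependence test below are assumptions - the core decision might be sketched like this:

```python
# Illustrative sketch of a deception decision, loosely inspired by the
# interdependence-theory framing: deceive only when the situation involves
# both conflict (the agents' goals clash) and dependence (the outcome
# hinges on what the other agent believes). NOT the paper's algorithm.

def should_deceive(conflict: bool, dependence: bool) -> bool:
    """Deception is only warranted when goals conflict AND the outcome
    depends on the other agent's beliefs; otherwise honesty costs nothing."""
    return conflict and dependence

def pick_false_trail(hiding_spots, true_spot):
    """Leave a trail toward some spot other than the real hiding place."""
    decoys = [s for s in hiding_spots if s != true_spot]
    return decoys[0] if decoys else true_spot  # no decoy available: can't mislead

spots = ["left", "center", "right"]
true_spot = "right"

if should_deceive(conflict=True, dependence=True):
    trail = pick_false_trail(spots, true_spot)   # signal a decoy location
else:
    trail = true_spot                            # no reason to mislead
print(trail)  # the robot marks a trail toward "left" while hiding at "right"
```

The interesting part is the first function: the robot is not lying indiscriminately, but only in situations that structurally reward deception.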

Robots Taught How to Deceive

The actual research paper - Acting Deceptively: Providing Robots with the Capacity for Deception


Original posting by Braincrave Second Life staff on Sep 30, 2010 at http://www.braincrave.com/viewblog.php?id=338


About braincrave


We all admire beauty, but the mind ultimately must be stimulated for maximum arousal. Longevity in relationships cannot occur without a meeting of the minds. And that is what Braincrave is: a dating venue where minds meet. Learn about the thoughts of your potential match on deeper topics... topics that spawn your own insights around what you think, the choices you make, and the actions you take.

We are a community of men and women who seek beauty and stimulation through our minds. We find ideas, education, and self-improvement sexy. We think intelligence is hot. But Braincrave is more than brains and I.Q. alone. We are curious. We have common sense. We value and offer wisdom. We experiment. We have great imaginations. We devour literature. We are intellectually honest. We support and encourage each other to be better.

You might be lonely but you aren't alone.

Sep, 2017 update: Although Braincrave resulted in two confirmed marriages, the venture didn't meet financial targets. Rather than updating our outdated code base, we've removed all previous dating profiles and retained the articles that continue to generate interest. Moving to valME.io's platform supports dating profiles (which you are welcome to post) but won't allow typical date-matching functionality (e.g., location proximity, attribute similarity).

The Braincrave.com discussion group on Second Life held twice-daily intellectual group discussions, typically at 12:00 PM SLT (PST) and 7:00 PM SLT. The discussions took place in Second Life group chat but are no longer formally scheduled or managed. The daily articles were used to encourage the discussions.
