On Thursday, Google announced a new “Robot Constitution” that will govern the AI running its upcoming army of intelligent machines. Inspired by Isaac Asimov’s “Three Laws of Robotics,” these safety instructions are meant to steer the devices’ decision-making. First among them: “A robot may not injure a human being.”

The Robot Constitution comes from Google DeepMind, the tech giant’s main AI research wing. “Before robots can be integrated into our everyday lives, they need to be developed responsibly with robust research demonstrating their real-world safety,” the Google DeepMind robotics team wrote in a blog post.
Any fan of science fiction will tell you that you’re asking for trouble when you unleash a machine that, at least on some level, has a mind of its own. The whole point is that these robots are supposed to operate themselves. You have to set up some guidelines to make sure you can ask R2-D2 to move a box without worrying he might drop it on your friend Steve.
Google DeepMind’s AI-driven decision-making process guides robots through the physical world.
The Robot Constitution is part of a system of “layered safety protocols.” In addition to not hurting their human counterparts, Google’s robots aren’t supposed to accept tasks involving humans, animals, sharp objects, or electrical appliances. They’re also programmed to stop automatically if they detect that too much force is being applied to their joints, and, at least for now, the robots have been kept within sight of a human supervisor with a physical kill switch. There’s no word yet on whether a robot can feel love.