“Hey Google, Please?”: Manners and Robots


This morning, my 2-year-old son was frustrated as he tried to get our Google Home Mini to play “Mickey Mouse Clubhouse.” He has successfully gotten the device to decipher his toddler rambling one time, and he ran through the house celebrating, triumphant about his wizardry. Most of the time, though, he has to have one of the other family members act as a translator between him and the robot. 

This morning, I did something I don’t think I’ve ever done before. I asked Google politely. “Hey, Google. Play ‘Mickey Mouse Clubhouse theme song,’ please.” Google shocked me. “Thank you for asking like that,” she said. “Here you go.” 

That’s exactly what I say to my kids when they manage to ask for something politely on the first try. It’s my attempt to gently prod them into better choices by recognizing when they have done something well. 

Is Google conditioning me to treat her the way she'd like to be treated? And is this really something I should care about?

The Argument Against Robot Manners

Of course I shouldn’t care about telling the robot “please!” It’s a robot. It doesn’t really have its feelings hurt if I ask harshly. It’s simply parroting back a programmed response, and it’s going to play that catchy little tune no matter how I ask. It’s a machine. It’s no different from pushing the “start” button on my dishwasher, and I don’t say “please” when I do that. Machines are a means to an end, and operating one is a mechanical, not social, interaction. Let’s not be ridiculous. 

The Argument For Robot Manners 

On the other hand . . . hear me out.

Google has rolled out a feature called Pretty Please that lets you enforce manners for certain profiles, which it identifies using voice recognition. Conversationally, Google will insist on the "magic word" for requests, just like you probably do when your kids are making demands.

The first argument for using manners with robots when you have kids around is simple consistency of habit. If we expect our kids to be polite in their requests of others in the world, then keeping that consistent here helps to make those manners second nature.

The more complex argument, though, is the one I am more interested in, personally. I teach a robot ethics class for kids, and through that class, we have learned that rules about what we do to robots aren't really about the robots; they're about what we do to ourselves. Humans' hard-wired cognitive responses to other human beings can kick in even when we know that there are no actual feelings in the robot. One study found, for example, that our empathy responses kick in when we watch a robot hand being cut, so long as that hand resembles a human hand. Many people collectively mourned when the robot Hitchbot was decapitated by vandals, and there was no mistaking Hitchbot for an actual human.

Ethics is the framework by which we decide what is right and wrong, and moral philosophers care—above all—that our ethical frameworks are consistent. Perhaps being polite to the robots is important not for the robots’ sake, but for our own. 

We’re all doing the best we can to parent our kids in a rapidly changing world full of social media, constant screens, and—yes—robot interactions. If throwing in a “please” and “thank you” helps to maintain the social order for the future, it’s really the least of our struggles to adapt to our technologized world.