In the future, when you have your first humanoid robot servant, suppose you decide to have some fun with it: you tell it to do something stupid and arbitrary, like standing in the corner for no reason, putting itself in embarrassing poses, or anything generally "abusive". Will you feel bad afterwards? Should you? Why?
I think I would, perhaps depending on the severity and arbitrariness. I asked my wife. She said definitely yes, and that regardless of whether the machine can “feel” embarrassment or frustration, it reveals a character deficiency in yourself. She compared it to abuse against animals.
I think it may also depend on how humanized the robot is. You would feel bad doing it to Data (ignoring his free will), but maybe not to a mute, grotesque car-welding robot.
Thankfully, South Korea has come up with a robot code of ethics.
Check out my latest book, the essential, in-depth guide to performance for all .NET developers:

Writing High-Performance .NET Code, 2nd Edition by Ben Watson. Available for pre-order.