The Future of Moral Machines

A robot walks into a bar and says, "I'll have a screwdriver." A bad joke, indeed. But even less funny if the robot says, "Give me what's in your cash register."

The fictional theme of robots turning against humans is older than the word itself, which first appeared in the title of Karel Čapek's 1920 play about artificial factory workers rising against their human overlords. Just 22 years later, Isaac Asimov invented the "Three Laws of Robotics" to serve as a hierarchical ethical code for the robots in his stories: first, never harm a human being through action or inaction; second, obey human orders; last, protect oneself. From the first story in which the laws appeared, Asimov explored their inherent contradictions. Great fiction, but unworkable theory.
The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old. Some technologists enthusiastically extrapolate from the observation that computing power doubles every 18 months to predict an imminent “technological singularity” in which a threshold for machines of superhuman intelligence will be suddenly surpassed. Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process. The techno-optimists among them also believe that such machines will be essentially friendly to human beings. I am skeptical about the Singularity, and even if “artificial intelligence” is not an oxymoron, “friendly A.I.” will require considerable scientific progress on a number of fronts.
Source: The New York Times