Laws of Robotics are rules by which autonomous robots should operate. Such robots do not yet exist, but they have been widely anticipated by futurists, in novels, and in science fiction films; for those working in, or simply interested in, robotics and artificial intelligence research, such laws are important pointers to the future, if we are to avoid a Terminator- or Matrix-style apocalypse (apparently). The most famous proponent of laws for robots was Isaac Asimov.
He introduced them in 1942 in the short story “Runaround”, although others had alluded to such rules before this. The Three Laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.