Isaac Asimov: The Three Laws of Robotics

The Three Laws, quoted from the “Handbook of Robotics, 56th Edition, 2058 A.D.”, are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The original laws have been altered and elaborated on by Asimov and other authors. Asimov himself made slight modifications to the first three laws in various books and short stories to further develop how robots would interact with humans and each other. In later fiction, where robots had taken responsibility for the government of whole planets and human civilizations, Asimov added a fourth law, the Zeroth Law, to precede the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

The Three Laws, and the Zeroth, have pervaded science fiction and are referred to in many books, films, and other media.