\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{parskip}

\title{Writing 1}
\author{Matt Strapp}
\date{2021-02-12}

\begin{document}

\maketitle

\section*{Trust and Ethics for AVs}

One of the most important things in the study of ethics is its definition. Instead of using the definition supplied in the paper, I will define ethical behavior as follows:
\begin{quote}
    \emph{Ethical behavior is behavior that, while not always better for the individual, is beneficial for society as a whole.}
\end{quote}
While this definition is flawed, I will use it because I disagree with the one given and believe that utilitarianism would be better for AI purposes.

The paper supplies four definitions of ethics in the case of autonomous vehicles (AVs). All of them involve protecting the human cargo that vehicles contain should they ever fail. While oversimplified, this can also be extrapolated to protecting non-human cargo by replacing ``human being'' with something else. The first social norm (SN) is as follows:
\begin{quote}
    \emph{(SN-0) A robot (or AI or AV) will never harm a human being.}
\end{quote}
The paper correctly identifies SN-0 as practically impossible, as there are some instances where harm cannot be avoided. To deal with this, SN-1 directly supersedes SN-0.
\begin{quote}
    \emph{(SN-1) A robot will never \textbf{deliberately} harm a human being.}
\end{quote}
This law is incomplete without its corollary:
\begin{quote}
    \emph{(SN-2) In a given situation, a robot will be no more likely than a skilled and alert human to accidentally harm a human being.}
\end{quote}
SN-1 and SN-2 are alternate retellings of both the Hippocratic oath and Asimov's First Law of Robotics.
\begin{quote}
    \emph{(SN-3) A robot must learn to anticipate and avoid Deadly Dilemmas.}
\end{quote}

\end{document}