 .gitignore             |  1 +
 csci4511w/writing1.tex | 26 ++++++++++------
 2 files changed, 17 insertions(+), 10 deletions(-)
diff --git a/.gitignore b/.gitignore
index 89849ad..9a569f8 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,3 +2,4 @@ DEBUG_C
GPATH
GTAGS
GRTAGS
+*synctex*
\ No newline at end of file
diff --git a/csci4511w/writing1.tex b/csci4511w/writing1.tex
index f4ee136..9dbdcc3 100644
--- a/csci4511w/writing1.tex
+++ b/csci4511w/writing1.tex
@@ -8,28 +8,34 @@
\begin{document}
\maketitle
\section*{Trust and Ethics for AVs}
- One of the most important things in the study of ethics is its defintion. Instead of using the defintion supplied in the paper, I will define ethical behavior as the following:
+ One of the most important things in the study of ethics is its definition.
+ Instead of using the definition supplied in the paper, I will define ethical behavior as the following:
\begin{quote}
\emph{Ethical behavior is behavior that while not always for the better for the individual is beneficial for society as a whole.}
\end{quote}
- While this is a flawed defintion, I will use it as I disagree with the one given and believe that utilitarianism would be better for AI purposes. Everything after this will be based on that opinion.
+ While this is a flawed definition, I will use it because I disagree with the one given and believe that utilitarianism is better suited to AI purposes.
- The paper defines four different definitions for
+
+ The paper offers four different definitions of ethics for autonomous vehicles (AVs).
+ All of these involve protecting the human cargo that vehicles contain should they ever fail.
+ While oversimplified, this can also be extrapolated to protecting non-human cargo by replacing ``human being'' with something else.
+
+ The first social norm (SN) is as follows:
\begin{quote}
\emph{(SN-0) A robot (or AI or AV) will never harm a human being.}
\end{quote}
-
+ The paper correctly identifies SN-0 as practically impossible, as there are some instances where harm cannot be avoided.
+ To deal with this, SN-1 directly supersedes SN-0.
\begin{quote}
- \emph{(SN-1) A robot will never deliberately harm a human being.}
+ \emph{(SN-1) A robot will never \textbf{deliberately} harm a human being.}
\end{quote}
-
+ This law is incomplete without its corollary:
\begin{quote}
- \emph{(SN-2) In a given situation, a robot will be no more likely than a skilled and alert human
- to accidentally harm a human being.}
+ \emph{(SN-2) In a given situation, a robot will be no more likely than a skilled and alert human to accidentally harm a human being.}
\end{quote}
-
+ Both SN-1 and SN-2 are alternate tellings of the Hippocratic oath and Asimov's first law of robotics:
\begin{quote}
- \emph{(SN-3) A robot must learn to anticipate and avoid Deadly Dilemmas. }
+ \emph{(SN-3) A robot must learn to anticipate and avoid Deadly Dilemmas.}
\end{quote}