\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{parskip}

\title{Writing 1}
\author{Matt Strapp}
\date{2021-02-12}
\begin{document}
   \maketitle
   \section*{Trust and Ethics for AVs}
   One of the most important things in the study of ethics is its definition.
   Instead of using the definition supplied in the paper, I will define ethical behavior as follows:
   \begin{quote}
      \emph{Ethical behavior is behavior that, while not always better for the individual, is beneficial for society as a whole.}
   \end{quote}
   While this is a flawed definition, I will use it because I disagree with the one given in the paper and believe that utilitarianism is better suited for AI purposes.


   The paper presents four social norms that define ethics for autonomous vehicles (AVs).
   All of them involve protecting the human cargo that the vehicles carry should they ever fail.
   While oversimplified, this can also be extrapolated to protecting non-human cargo by replacing ``human being'' with something else.

   The first social norm (SN) is as follows:
   \begin{quote}
      \emph{(SN-0) A robot (or AI or AV) will never harm a human being.}
   \end{quote}
   The paper correctly identifies SN-0 as practically impossible, since there are some instances where harm cannot be avoided.
   To address this, SN-1 directly supersedes SN-0.
   \begin{quote}
      \emph{(SN-1) A robot will never \textbf{deliberately} harm a human being.}
   \end{quote}
   This law is incomplete without its corollary:
   \begin{quote}
      \emph{(SN-2) In a given situation, a robot will be no more likely than a skilled and alert human to accidentally harm a human being.}
   \end{quote}
   Both SN-1 and SN-2 are alternate tellings of the Hippocratic oath and Asimov's first law of robotics.
   
   \begin{quote}
      \emph{(SN-3) A robot must learn to anticipate and avoid Deadly Dilemmas.}
   \end{quote}
    

\end{document}