\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{parskip}

\title{Writing 1}
\author{Matt Strapp}
\date{2021-02-12}
\begin{document}
    \maketitle
    \section*{Trust and Ethics for AVs}
    One of the most important things in the study of ethics is how ethics itself is defined.
    Instead of using the definition supplied in the paper, I will define ethical behavior as follows:
    \begin{quote}
        \emph{Ethical behavior is behavior that, while not always better for the individual, is beneficial for society overall.}
    \end{quote}
    While this definition is flawed, I will use it because I disagree with the one given and believe that utilitarianism is better suited to AI.

    The paper proposes four social norms for ethical behavior in autonomous vehicles (AVs).
    These center on protecting the human cargo a vehicle carries should it ever fail.
    While oversimplified, they can also be extrapolated to protect non-human cargo by replacing ``human being'' with its equivalent.

    The first social norm (SN) is as follows:
    \begin{quote}
        \emph{(SN-0) A robot (or AI or AV) will never harm a human being.}
    \end{quote}
    The paper correctly identifies SN-0 as practically impossible, as there are instances where harm cannot be avoided.
    To deal with this, SN-1 directly supersedes SN-0:
    \begin{quote}
        \emph{(SN-1) A robot will never \textbf{deliberately} harm a human being.}
    \end{quote}
    This law is incomplete without its corollary, SN-2:
    \begin{quote}
        \emph{(SN-2) In a given situation, a robot will be no more likely than a skilled and alert human to accidentally harm a human being.}
    \end{quote}
    SN-1 adds intent, much as the American legal system separates crimes such as murder and manslaughter.
    Because SN-1 is actually possible to follow, it renders SN-0 obsolete.
    While still oversimplified, if only one law and its corollary were allowed for an AV's AI, it should be SN-1.
    The main problem with this oversimplification is what the paper calls a ``Deadly Dilemma'', with the trolley problem as its example.
    While I believe that sacrificing the few for the sake of the many is always the correct choice when there is no other alternative, SN-3 should be added on top of SN-1 to complete the laws for AVs:
    \begin{quote}
        \emph{(SN-3) A robot must learn to \textbf{anticipate} and avoid Deadly Dilemmas.}
    \end{quote}
    Avoiding ``Deadly Dilemmas'' will be difficult because perception is always limited.
    If it can be done, SN-3 should come before SN-1 in a priority list, as avoiding any potential harm is always preferable to the alternative; a sketch of this ordering follows.
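    To make the ordering concrete, here is a minimal sketch of the norms applied as a priority list.
    It is only an illustration: the predicates \texttt{anticipates\_dilemma}, \texttt{deliberately\_harms}, and \texttt{accident\_risk} are hypothetical stand-ins for perception and prediction modules, not anything specified in the paper.

\begin{verbatim}
# Hypothetical sketch: the SN priority list as action filtering.
# None of these predicates come from the paper; they stand in for
# the perception and prediction modules a real AV would need.

def anticipates_dilemma(action):
    # Stand-in: would this action steer the AV into a Deadly Dilemma?
    return action.get("dilemma", False)

def deliberately_harms(action):
    # Stand-in: does this action harm a human on purpose?
    return action.get("deliberate_harm", False)

def accident_risk(action):
    # Stand-in: estimated probability of accidental harm.
    return action.get("risk", 0.0)

def choose_action(candidates, human_baseline_risk=0.01):
    """Apply the norms in priority order: SN-3, then SN-1, then SN-2."""
    # SN-3: prefer actions that avoid Deadly Dilemmas entirely, if any exist.
    dilemma_free = [a for a in candidates if not anticipates_dilemma(a)]
    if dilemma_free:
        candidates = dilemma_free
    # SN-1: never choose an action that deliberately harms a human being.
    candidates = [a for a in candidates if not deliberately_harms(a)]
    # SN-2: accidental risk must not exceed a skilled, alert human's.
    candidates = [a for a in candidates
                  if accident_risk(a) <= human_baseline_risk]
    # Among what remains, take the least risky action (or none at all).
    return min(candidates, key=accident_risk, default=None)
\end{verbatim}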
    The paper uses these ethical principles as a way for an AI to build trust with society.
    Such trust is especially necessary for something as potentially deadly as vehicles, which are dangerous enough with humans alone behind the wheel.
    While trust can always be misplaced, increasing overall trust in AVs will allow their research and eventual adoption to proceed in the coming years.
    People will never put something they cannot trust in charge of something that can cause the deaths of others, but they will be more open to using something that has been shown not to cause bodily harm.

    Fail-safes that can always fall back to human input will be required for when the AI cannot pick the correct option, but this creates problems of its own.
    Humans will need to be taught that AVs still require guidance, as there are some things an AI cannot predict.
    This is especially true if an AV must share the road with human drivers.
    Since humans are fickle and unpredictable, an AI built entirely on prediction may be unable to predict that which cannot be predicted.
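    As one illustration of such a fail-safe, the sketch below hands control back to the human whenever the AI has no action it is sufficiently confident in.
    The \texttt{Decision} type and the confidence threshold are assumptions made for this example, not a design from the paper.

\begin{verbatim}
# Hypothetical fail-safe hand-off: revert to human input when the AI
# cannot pick an option confidently. The threshold and Decision type
# are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    action: str
    confidence: float  # in [0.0, 1.0]

def drive_step(decision: Optional[Decision],
               threshold: float = 0.9) -> str:
    # No viable action, or confidence too low: hand off to the human.
    if decision is None or decision.confidence < threshold:
        return "HANDOFF_TO_HUMAN"
    return decision.action
\end{verbatim}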
\end{document}