author    RossTheRoss <mstrapp@protonmail.com>  2021-02-11 21:45:46 -0600
committer RossTheRoss <mstrapp@protonmail.com>  2021-02-11 21:45:46 -0600
commit    c431dd8006770881e577db67141404927e6afc3c
tree      6182494be782944defbd1b23ccd4587ef5b11cd8
parent    add pdfs to gitignore
do hw
-rw-r--r--  .gitignore               3
-rw-r--r--  csci4511w/writing1.tex  38
2 files changed, 28 insertions, 13 deletions
diff --git a/.gitignore b/.gitignore
index 9a569f8..0292304 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,4 +2,5 @@ DEBUG_C
GPATH
GTAGS
GRTAGS
-*synctex*
\ No newline at end of file
+*synctex*
+*.pdf
\ No newline at end of file
diff --git a/csci4511w/writing1.tex b/csci4511w/writing1.tex
index e27d1d9..9c92680 100644
--- a/csci4511w/writing1.tex
+++ b/csci4511w/writing1.tex
@@ -8,17 +8,17 @@
\begin{document}
\maketitle
\section*{Trust and Ethics for AVs}
- One of the most important things in the study of ethics is its defintion.
- Instead of using the defintion supplied in the paper, I will define ethical behavior as the following:
+ One of the most important things in the study of ethics is its definition.
+ Instead of using the definition supplied in the paper, I will define ethical behavior as the following:
\begin{quote}
- \emph{Ethical behavior is behavior that while not always for the better for the individual is beneficial for society as a whole.}
+ \emph{Ethical behavior is behavior that, while not always better for the individual, is beneficial for society overall.}
\end{quote}
- While this is a flawed defintion, I will use it as I disagree with the one given and believe that utilitarianism would be better for AI purposes.
+ While this is a flawed definition, I will use it because I disagree with the one given and believe that utilitarianism would be better suited to AI purposes.
- The paper defines four different definitions for ethics in the case of autonomous vehicles (AVs).
- All of these involve protecting the human cargo that vehicles contain if they would ever fail.
- While oversimplified this can also be extrapolated to protecting non-human cargo by replacing ``human being'' with something else.
+ The paper gives four different definitions of ethics in the context of autonomous vehicles (AVs).
+ These involve protecting the human cargo that vehicles contain should they ever fail.
+ While oversimplified, this can also be extrapolated to protecting non-human cargo by replacing ``human being'' with an equivalent term.
The first social norm (SN) is as follows:
\begin{quote}
@@ -29,15 +29,29 @@
\begin{quote}
\emph{(SN-1) A robot will never \textbf{deliberately} harm a human being.}
\end{quote}
- This law is incomplete without its corollary:
+ This law is incomplete without its corollary, SN-2:
\begin{quote}
\emph{(SN-2) In a given situation, a robot will be no more likely than a skilled and alert human to accidentally harm a human being.}
\end{quote}
- Both SN-1 and SN-2 are alternate tallings of both the Hippocratic oath and Asimov's first law of robotics.
-
+ SN-1 adds intent, much as the American legal system separates crimes like murder and manslaughter.
+ This makes SN-0 obsolete, since unlike SN-0 it is actually possible to execute.
+ While this is still oversimplified, if only one law and corollary were allowed for AV AI, it should be SN-1.
+ The main problem with this oversimplification is described in the paper as a ``Deadly Dilemma'', with the trolley problem as the example.
+ While I believe that, if there is no other alternative, sacrificing the few for the sake of the many is always the correct choice, SN-3 should be added onto SN-1 to complete the laws about AVs.
\begin{quote}
- \emph{(SN-3) A robot must learn to anticipate and avoid Deadly Dilemmas.}
+ \emph{(SN-3) A robot must learn to \textbf{anticipate} and avoid Deadly Dilemmas.}
\end{quote}
-
+ Avoiding ``Deadly Dilemmas'' will be difficult because perception is always limited.
+ If it can be done, SN-3 should come before SN-1 in a priority list, as avoiding any potential harm is always preferable to the alternative.
+
+ These ethical principles are used in the paper as a way for an AI to increase society's trust in it.
+ This trust is especially necessary when dealing with something as potentially deadly as vehicles already are with humans alone at the wheel.
+ While trust always has the potential to be misplaced, increasing overall trust in AVs will allow their research and eventual use to take place in the coming years.
+ People will never allow something that cannot be trusted to be in charge of something that can potentially cause the demise of others, but they will be more open to using something that has been proven not to cause bodily harm.
+ Failsafes that can always fall back to human input will be required for when the AI cannot pick the correct option, but this also creates problems.
+ Humans will need to be taught that AVs will still need to be guided, as there are some things that AI cannot predict.
+ This is especially true if an AV also shares the road with human drivers.
+ Since humans are fickle and unpredictable, an AI based entirely on prediction may be unable to predict that which cannot be predicted.
+
\end{document}
\ No newline at end of file