Personal blog written from scratch using Node.js, Bootstrap, and MySQL. https://jrtechs.net


<youtube src="sOKEIE2puso" />
Although the movie *I Robot* has not aged well, it still raises
interesting ethical questions that we are still discussing
concerning self-driving cars. The protagonist, Detective Spooner, has
an almost unhealthy distrust of robots. In the movie, a
robot decided to save Spooner's life over that of a 12-year-old girl in a car
accident. This reignites the famous ethical debate of the trolley
problem, but now with artificial intelligence. The debate boils down
to this: are machines capable of making moral decisions? The surface-level
answer from the movie is presented as **no** when Spooner
recounts his car crash anecdote. This question parallels the discussion
we are currently having about self-driving cars. When a self-driving
car is presented with two options that both result in the loss of
life, which should it choose?
<iframe width="100%" height="315" src="https://www.youtube.com/embed/ixIoDYVfKA0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
When surveyed, most people say that they would prefer self-driving
cars to take the utilitarian approach to the trolley
problem. A utilitarian approach tries to minimize the total
amount of harm. MIT made a neat
[website](http://moralmachine.mit.edu/) that presents you with a
bunch of "trolley problems" where you have to decide who dies. At the
end of the survey, the website presents you with a list of the
preferences it observed when you decided whose life was more important to
save. The purpose of the trolley problem is merely to ponder what
decision a self-driving car should make when **all** of its
alternatives are exhausted.
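To make the utilitarian idea concrete, here is a minimal sketch of what such a "moral engine" might look like as code. Everything here is hypothetical and for illustration only: the option names, probabilities, and the `expected_harm` scoring function are assumptions, not how any real vehicle is programmed.

```python
# Hypothetical utilitarian decision rule: score each option by its
# expected harm and choose the option with the lowest score.

def expected_harm(option):
    """Sum of (probability of death * people at risk) over each group."""
    return sum(p_death * count for p_death, count in option["at_risk"])

def choose(options):
    """Pick the maneuver that minimizes total expected harm."""
    return min(options, key=expected_harm)

# Two stylized trolley-style outcomes with made-up numbers.
swerve   = {"name": "swerve",   "at_risk": [(0.9, 1)]}  # likely kills 1
straight = {"name": "straight", "at_risk": [(0.5, 5)]}  # half chance of killing 5

print(choose([swerve, straight])["name"])  # harm 0.9 vs 2.5 -> "swerve"
```

Note that the hard ethical questions are hidden inside the harm scores themselves: the moment you weight one person's life differently from another's, the code above quietly encodes that judgment.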
![Moral Machine](media/selfDrivingCars/moralmachine3.png)
We still need to question whether utilitarianism is the right moral
engine for self-driving cars. Would it be ethical for a car to take
into account your age, race, gender, and social status when deciding
whether you get to live? If self-driving cars could access personal
information such as criminal history or known friends, would it be
ethical to use that information? Would it be moral for someone to make
a car that favored the safety of its passengers above everyone
else's?
![Moral Machine](media/selfDrivingCars/moralMachine.png)
Even though most people want self-driving cars to use utilitarianism,
most people surveyed also responded that they would not buy a car
that did not have their own safety as its top priority. This raises a
serious social dilemma. If people want everyone else's car to be
utilitarian, yet want their own car to be greedy and favor their
safety, we would see none of the utilitarian improvements. This
presents us with a tragedy of the commons, since everyone
would favor their own safety and nobody would sacrifice it
for the public good. That raises yet another question: would it even be
fair to ask someone to sacrifice their safety in this way?
In most cases where a tragedy of the commons arises,
government intervention is the most practical solution. It might
be best to have the government mandate that all cars try to maximize
the number of lives saved when presented with the trolley
problem. Despite appearing to be a good solution, the flaw in this
approach does not become apparent until you use consequentialism to examine
the problem.
![Moral Machine](media/selfDrivingCars/moralMachine6.png)
Self-driving cars are expected to reduce car accidents by 90% by
eliminating human error. If people decide not to use self-driving cars
because of a utilitarian moral engine, we run the risk of actually
losing more lives. Some people have even argued that since
artificial intelligence is incapable of making moral decisions, it
should take no action at all in a situation that will
always result in the loss of life. In the frame of the trolley
problem, the artificial intelligence simply should not pull the
lever. I will argue that it is best for self-driving cars not to make
ethical decisions, because that would result in the highest adoption
rate of self-driving cars, which would end up saving the most lives in
the long run. Besides, the likelihood that a car is ever actually presented
with a trolley problem is pretty slim.
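A quick back-of-envelope calculation shows why adoption dominates the moral engine. All the numbers below are assumptions for illustration: a hypothetical baseline of 40,000 annual road deaths and the 90% accident reduction cited above, applied proportionally to whatever fraction of drivers actually adopt.

```python
# Back-of-envelope: expected deaths scale with adoption, since
# self-driving cars are expected to cut accidents by ~90%.
# BASELINE_DEATHS is a made-up illustrative figure.

BASELINE_DEATHS = 40_000   # hypothetical annual road deaths
RISK_REDUCTION  = 0.90     # assumed reduction from eliminating human error

def expected_deaths(adoption_rate):
    """Deaths per year if `adoption_rate` of drivers switch over."""
    return BASELINE_DEATHS * (1 - RISK_REDUCTION * adoption_rate)

print(round(expected_deaths(0.9)))  # 90% adoption -> 7600 deaths
print(round(expected_deaths(0.5)))  # 50% adoption -> 22000 deaths
```

Under these assumptions, a moral engine that scares away even a modest share of buyers costs far more lives than any trolley-problem tiebreaker could ever save.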
The discussion over the moral decisions a car has to make is almost
fruitless. It turns out that humans are not even good at making moral
decisions in emergency situations. When we make rash decisions
under anxiety, we are heavily influenced by prejudice and
self-interest. Our own shortcomings in decision
making do not mean that we cannot do better with self-driving
cars. However, we need to realize that it is the mass adoption of self-driving
cars that will save the most lives, not the moral engine
we program the cars with. We cannot let the moral engine of
self-driving cars get in the way of adoption.
The conclusion I reached parallels Spooner's problem with robots in
the movie *I Robot*. Spooner was so mad at the robot for saving his
life rather than the girl's that he never realized that, had it not been
for the robots, neither of them would have survived that car crash.
Does that mean we can't do better than not pulling the lever? Well...
not exactly. Near the end of the movie, a robot is presented with
another trolley problem, but this time it manages to find a way to
save both parties. Without reading into the movie too deeply, this
illustrates how the early adoption of artificial intelligence ended up
saving tons of lives like Spooner's. Only when the technology
fully develops can we start to avoid the trolley problem
completely.