Personal blog written from scratch using Node.js, Bootstrap, and MySQL. https://jrtechs.net
<iframe width="100%" height="315" src="https://www.youtube.com/embed/_MFGx8d1zl0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>

Although the movie *I, Robot* has not aged well, it still raises interesting ethical questions that we are still discussing in the context of self-driving cars. The protagonist, Detective Spooner, harbors an almost unhealthy distrust of robots. In the movie, a robot chose to save Spooner's life over that of a twelve-year-old girl in a car accident. This reignites the famous trolley problem, but now with artificial intelligence. The question boils down to this: are machines capable of making moral decisions? The surface-level answer from the movie is **no**, as presented through Spooner's car crash anecdote. This question parallels the discussion we are currently having about self-driving cars. When a self-driving car is presented with two options that both result in the loss of life, which should it choose?

<iframe width="100%" height="315" src="https://www.youtube.com/embed/ixIoDYVfKA0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>

When surveyed, most people say that they would prefer self-driving cars to take the utilitarian approach to the trolley problem: minimize the total amount of harm. MIT made a neat [website](http://moralmachine.mit.edu/) that presents you with a series of trolley problems where you have to decide who dies. At the end of the survey, the website shows you a summary of the preferences your choices revealed. The purpose of the trolley problem is merely to ponder what decision a self-driving car should make when **all** of its alternatives are exhausted.

![Moral Machine](media/selfDrivingCars/moralmachine3.png)

We still need to question whether utilitarianism is the right moral engine for self-driving cars. Would it be ethical for a car to take your age, race, gender, and social status into account when deciding whether you get to live? If self-driving cars could access personal information such as criminal history or known associates, would it be ethical to use that information? Would it be moral for someone to build a car that favored the safety of its own passengers above everyone else's?

![Moral Machine](media/selfDrivingCars/moralMachine.png)

Even though most people want self-driving cars to use utilitarianism, most people surveyed also responded that they would not buy a car that did not make their own safety its top priority. This creates a serious social dilemma: if people want everyone else's car to be utilitarian, yet want their own car to greedily favor their safety, we would see none of the utilitarian improvements. This is a tragedy of the commons, since everyone would favor their own safety and nobody would sacrifice it for the public good. It also raises yet another question: would it be fair to ask someone to sacrifice their safety in this way?

In most tragedy of the commons situations, government intervention is the most practical solution. It might be best to have the government mandate that all cars try to maximize the number of lives saved when presented with a trolley problem. Despite appearing to be a good solution, the flaw in this does not become apparent until you use consequentialism to examine the problem.

![Moral Machine](media/selfDrivingCars/moralMachine6.png)

Self-driving cars are expected to reduce car accidents by 90% by cutting out human error. If people decide not to use self-driving cars because of a utilitarian moral engine, we run the risk of actually losing more lives. Some people have argued that since artificial intelligence is incapable of making moral decisions, it should take no action at all in a situation that will always result in the loss of life. In the frame of the trolley problem, it is best for the artificial intelligence not to pull the lever. I will argue that it is best for self-driving cars not to make ethical decisions, because that would result in the highest adoption rate of self-driving cars, which in the long run would save the most lives. The likelihood that a car is ever actually presented with a trolley problem is pretty slim.

The debate over the moral decisions a car has to make is almost fruitless. It turns out that humans are not even good at making moral decisions in emergency situations: when we make rash decisions under anxiety, we are heavily influenced by prejudice and self-interest. Our own shortcomings in decision making do not mean that we cannot do better with self-driving cars. However, we need to realize that it is the mass adoption of self-driving cars that will save the most lives, not the moral engine we program the cars with. We cannot let the moral engine of the self-driving car get in the way of adoption.

The conclusion I reached parallels Spooner's problem with robots in *I, Robot*. Spooner was so mad at the robot for saving his life rather than the girl's that he never realized that, if it were not for the robots, neither of them would have survived the car crash. Does that mean we can't do better than not pulling the lever? Well... not exactly. Near the end of the movie, a robot is presented with another trolley problem, but this time it manages to find a way to save both parties. Without reading into the movie too deeply, this illustrates how the early adoption of robots ended up saving many lives, like Spooner's. Only as the technology fully develops can we start to avoid the trolley problem entirely.