Death by robot: Robin Henig addresses automation and morality

As robots and mechanized transactions become increasingly commonplace, questions about their abilities and their “humanness” become ever more urgent and complicated. Science writer Robin Marantz Henig explores some of the issues surrounding robots in life-and-death situations in this January 2014 article in the New York Times Magazine.

Read it here: Henig, "Death by Robot"

 

  1. Henig asserts that the decision-making processes of a driverless car in an emergency situation are fundamentally different from the processes of a human driver. What is the difference? Do you agree that the two situations really are different? Why or why not?
  2. Henig presents a lot of information in a straightforward journalistic manner. In what ways does she reveal her own position? Point to specific passages or statements. Is her position expressed effectively? Why or why not?
  3. Henig opens her article with the example of Sylvia, an injured elderly woman, and Fabulon, her robot caregiver. Henig sets up this fictional example so that readers will desire a specific outcome. What is the desired outcome? What would need to change in the robot’s programming to bring about that outcome?
  4. Henig states that the fundamental problem is that of mixing “automation with morality” and that “most people intuitively feel the two are at odds.” What do you say? Are automation and morality at odds? Are our moral decisions more than complex algorithms? Why or why not? Write an essay in which you address these questions, using Henig and/or any of her sources as your They Say.

79 thoughts on “Death by robot: Robin Henig addresses automation and morality”

  1. Kayla

    As far as the driverless car’s decision-making process being fundamentally different from ours, I disagree. If a human driver is forced into a situation where it’s either this car or that car, some of us would try to make the better decision. That is, some of us; most humans think of themselves first, though, so a plus side would be that the robot would, no matter what, make the better decision. The main thing is whether the robot can make these decisions in time to actually do what it is trying to do.


  2. Robin Henig states in her article that since robots and technology are becoming more and more common, people have to start questioning their abilities more now than ever. I agree with what Henig says, because robots’ abilities are a lot different from humans’ abilities, and they’re a lot riskier too. For example, let’s say that in a factory where robots are made, someone made a mistake building a robot, which in turn causes the robot to make a mistake later on while it’s serving its purpose; this could end up being a fatal mistake that causes someone to lose their life.


  3. Ashley

    The difference between a human driver and a driverless car would be the fact that humans are capable of assessing the full situation and making a choice based on that specific situation. A robot can’t assess the situation to the fullest; therefore there would be far more driving errors and deaths due to robots. I do agree that the two situations are different, because as a human you can make a decision that wouldn’t hurt anyone, while a robot, as mentioned, might look only at the least number of people being hurt by an accident. But the robot may not have had to make that choice; it may have had another. Due to its settings, it doesn’t know any option other than to choose the least number of people being hurt or killed.


  4. Melanie Almonte

    Henig is right about the decision-making processes of a driverless car differing from the processes of a human driver in an emergency situation. The difference is moral understanding and ethics. I agree that the two situations are really different from each other. A driverless car can only depend on the background information programmed into the vehicle in order to develop a quick solution, whereas a human driver depends on their morals and conscience to guide them in this abrupt situation. Ethics becomes a key factor here as well. It’s difficult for a human to determine what is right and what is wrong based on their experiences throughout life. Everyone is different. If the same opportunity is given to a robot, then how can we trust that it could make a “wise” decision based on human morals when it is only a program designed by another human? In conclusion, there is a major difference between the decision making of a driverless car and that of a human driver.


  5. Bryce

    I think that in the long run, developed robots will help to run and control society. I agree with Robin Henig when she says that “robots will make fewer mistakes than humans,” because robots are programmed for specific tasks, which makes it hard for them to make a mistake. This is unlike a human, who, through “ethical decision making,” has the ability to assess a situation and come up with the best possible option. Robots lack the ability to make such decisions, which can make them weak for certain tasks, such as deciding how to handle a car crash if one is inevitable. But scientists are working toward the goal of creating robots with this kind of ability in the near future. Once robots evolve to this point, I believe they will integrate into society. I believe the integration of robots into society will be a good thing, because if everything goes as planned they can do many tasks that a human can, but much better. Robots could take easy jobs such as picking up littered trash or washing cars, which could make them very useful in our everyday lives.


  6. Makaylah Keith

    3.
    When it comes to the topic of robots, most of us will readily agree that robots will improve the future. Where this agreement usually ends, however, is on the question of whether robots will harm humans. Whereas some are convinced that robots will become dominant and control society, others maintain that robots will be consistent when making decisions that will either help or harm society. Most people will expect the desired outcome of the fictional example to be the robot giving Sylvia the dose of painkiller to relieve her pain and relax her body. Henig states that the robot is required to keep the patient pain-free. The painkiller would resolve the problem; however, the robot is not supposed to give Sylvia her medicine without consulting the supervisor. The robot’s programming would need to include a manual of options for what to do if the supervisor cannot be reached. The manual would assist the robot in making decisions without the supervisor’s consent, and the manual programmed into the robot would allow the outcome to be achieved. Supervisors and consultants are so diverse in their views that it’s hard to generalize about them, but some are likely to object on the grounds that the patient may not need the painkiller; the patient could be asking for more medicine because she knows the supervisor cannot be reached. Also, the robot would not know whether the patient is in pain or becoming addicted to painkillers. The robot may not give the patient medicine because it is not following the rules, which will make the patient suffer; however, if the robot continues to give the patient unauthorized medicine, Sylvia could overdose.
    Airiel emphasizes that robots cannot deliver the desired outcome because robots do not have the capacity to make tough decisions. Airiel stated, “In my experience, my internet goes down frequently because of bad service, so a robot that needs to contact a supervisor will not be able to in a situation that could be life or death.” I agree with her, because someone may be in a life-or-death situation and may die if the robot cannot make decisions on its own. Alyssa’s theory about robots not being able to detect human emotions is extremely useful because it sheds light on the difficult problem of not making logical decisions. According to Alyssa, “Most autonomous robots are unable to emit or detect human emotions, and in extreme cases they wouldn’t be able to make a humanely logical decision.” Since a robot cannot detect human emotions, it would never know when the patient is in extreme pain and needs unauthorized medication.
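    The “manual of options” Makaylah describes is essentially a small decision procedure. A minimal Python sketch might look like the following; every name, threshold, and return value here is invented for illustration and does not come from Henig’s article:

```python
# Hypothetical fallback rules for a caregiver robot that normally asks a
# supervisor before dispensing a painkiller. When the supervisor is
# unreachable, it falls back to conservative, preprogrammed safety limits.
MIN_HOURS_BETWEEN_DOSES = 4
MAX_DOSES_PER_DAY = 6

def decide_dose(supervisor_reachable, supervisor_approves,
                hours_since_last_dose, doses_given_today):
    """Return 'dispense', 'withhold', or 'withhold_and_alert'."""
    if supervisor_reachable:
        # Normal path: the human supervisor decides.
        return "dispense" if supervisor_approves else "withhold"
    # Fallback manual: dispense only if it is clearly within safe limits.
    if (hours_since_last_dose >= MIN_HOURS_BETWEEN_DOSES
            and doses_given_today < MAX_DOSES_PER_DAY):
        return "dispense"
    # Otherwise refuse, and flag the case for human review.
    return "withhold_and_alert"

print(decide_dose(False, None, 5, 2))  # -> dispense
print(decide_dose(False, None, 1, 2))  # -> withhold_and_alert
```

    The sketch also illustrates Airiel’s and Alyssa’s objections in miniature: with the supervisor unreachable, the robot can only consult fixed limits; it has no way to distinguish genuine pain from drug-seeking.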


  7. Kaylin Graham

    Robin Henig of “Death by Robot” begins her article with the injured elderly woman and her robot caregiver so that the reader can also agree that more robots in the future will be able to handle making human-like decisions. According to Henig, the change that would need to occur is that we need to make more autonomous robots. Autonomous robots are highly intelligent robots that can perform tasks at a higher rate than regular robots. Although most humans are uncomfortable with the idea of interacting with a more human-like machine, Robin Henig states that “introducing more autonomous robots into our lives seems like a done deal.” The only way to improve is to create more robots with morals and advanced human-like tendencies. Compared to humans, robots are more efficient, even if they are made with more emotion. I agree somewhat with Svea in that robots are able to complete more without distraction and mistakes; however, creating more robots that think similarly to humans could shift that perception. I agree with Madelene in that the care and protection of humans is an upside to having robots around, though certain decisions are better made by humans. The driverless car, for instance, is beneficial in trying to keep the driver safe on the one hand, but on the other hand the car could malfunction on the road, causing more harm than good.


  8. Chris Perez

    Henig opens her article with the fictional example of Sylvia and her robot caregiver. Sylvia is in pain and requests more painkillers. The issue is that the robot can’t give her more medication without asking Sylvia’s caregiver, and the problem is that Sylvia’s internet is down, so the robot can’t ask the caregiver. The question presented is what would need to change to bring about the outcome Sylvia wants. The robot’s internal programming would need to be rewritten so that the robot makes its own decision about whether it should give Sylvia the medication. The programming would have to make the robot go through a series of steps to “think about” before making the decision, similar to what Henig said about the self-driving cars. Just as humans think through a series of thoughts about what the best outcome is, robots need to be programmed to do the same. Henig states that while robots can and do help people a lot, they should not be depended upon in certain situations as of now. The technology isn’t as advanced as we want it to be for people to be completely dependent upon robots.
    One blogger argues that Henig says driverless cars are not safe and we should not trust them. That same blogger agrees that we should not trust self-driving cars because they don’t have the same thought process as a human. Rachel, the blogger, says that in an ethical situation where the decision between hitting a pedestrian or another car has to be made, a robot cannot make that decision. Brian Veilleux agrees that robotics is good, but only to a certain degree. He is not in support of driverless cars. He believes that even advanced algorithms in a robot’s programming are still not enough. He agrees with Rachel that robots won’t be able to make the ethical decisions that humans can make. His example is that instead of choosing to swerve into another car or a pedestrian, humans can think of a third option: completely missing both and taking the impact in one’s own car. Here many robotics advocates would probably object that robots can be smarter than humans.
    I agree with the advocates. I believe that with the right technology and developments, robots will be smarter than humans. If engineers can figure out how to write code to make robots function in the first place, I believe there is nothing they cannot code into a computer to make it do anything the engineer wants. I think that Rachel and Brian are both wrong in their belief that we should not put complete trust in robots. If a robot is programmed to do a certain task, the chances of it malfunctioning are very low; that’s why I believe that robots are more likely to prevent accidents. Everyone has heard the saying “we are all human, we make mistakes.” Yes, we are, and one of those mistakes can be getting into a car crash by accident. A robot is not human and is made not to make mistakes.


  9. Craig J.

    Another blogger I analyzed was Sean. I agree when he said, “People, of course, may want to question whether or not it really is the human that is at fault. Yet is it always true that when a kid gets charged with a crime, the parents are at fault? Is it always the case, as I have explained, that the programmers are at fault? No, something can go wrong with the programming and the robot can end up hurting someone. Now is that the programmer’s or the robot’s fault? Did the robot mean to hurt that person, or did something just go wrong? And should we really have robots that are basically slaves? Everyone has their own opinion; mine is that we should give the robots and their programmers their chance.” We should give this robot thing a try; if something happens with the program, it could easily be fixed. You can’t use “what ifs.” I think they should have a chance to show how useful they could be.


  10. Craig J.

    Without a doubt, robots play a huge role in today’s society. Throughout the article “Death by Robot,” Marantz Henig pushes readers to understand that robots are not capable of performing in today’s society. Henig argues that the decision-making processes of a driverless car in an emergency situation are fundamentally different from the processes of a human driver. She asks why cars and other robots should be able to make the choice of which human’s life gets taken. On the one hand, I agree with Henig that a robot shouldn’t determine whether a person dies or not. But on the other hand, I still insist that we should use robots in our lives, because they could be useful and make things easier.
    One blogger I analyzed was Svea Cheng. She said, “Many fear that robots are detrimental to mankind, as seen in the science fiction films. Such ideas are in movies for a reason- they are fiction themselves. People must keep the greater picture in mind- no, robots are not here to replace humans. They are here to enhance lifestyles globally, and to make a difference in combating world problems.” I agree with this statement, because people have this terrible idea that robots are going to turn on the human race. That isn’t true. Robots can help us out a lot and are going to be part of our future.


  11. Dylan Schleigh

    In Henig’s article, she explains how she feels about robots in today’s society and in the near future. She gives examples and theoretical scenarios showing why robots may not be the best thing to replace humans. At the beginning of her article, she gives readers a scenario about a woman who has broken two of her ribs and has a personal robot to dispense her pills. In this predicament, it’s Sunday, the network connection in the elderly woman’s house is down, and the robotic pill dispenser can’t connect to the doctors. What should the robot do? It wasn’t programmed to work on its own; it needs supervision from a real doctor to dispense the pills but can’t receive confirmation. Henig’s point here is that robots are not the best thing to rely on in certain predicaments. She has a few more examples and scenarios showing how robots won’t be able to function well regardless of how well they are manufactured. She even claims that humans have a hard time functioning properly in life-or-death situations, so why should we let machines do it? Henig reveals her position on the topic by showing multiple ways machines can go wrong and fail to produce the amazing results most of us think they do. She doesn’t have a lot of positive things to say about robots in her article. I believe she is giving off more of a negative vibe toward robots, which I couldn’t agree more with. I’m not into the thought of robots taking over the world, or taking over humans’ jobs and a human’s right to make mistakes. Humans need to make mistakes and have setbacks and learn from them; that’s how humans continue to grow and learn. If not, we may all soon look like the characters from Wall-E, floating around in our robotic chairs.
    In response to Brooke Towns: I completely agree with you. Hands down. She took the words directly out of my mouth and pasted them into the blog. I think the most powerful statement she makes in her argument is “we are just playing God.” This statement sent a shiver down my spine. What she is saying is true: the more power and responsibility we place on the robots, the more they become like us. One day, we might have an epidemic on our hands. I’m not saying robots will take over the world like in the movie Terminator. But the more responsibility we give the machines, the more we will have to rely on them, and as a human being I will not have a machine take care of me.
    In another blogger’s view, Svea Cheng’s, robots are only here to help correct human error and make things more efficient. She goes on to say machines are a “matter of efficiency and increased success rates.” I agree with her that machines are here to make humanity a better place. While it is true that machines and robots are only here to help humans, it does not necessarily follow that machines have the right to replace humans and be used to fix our human “errors.”
    Machines are all around us; I’m using one right now to express my feelings toward them. I think machines deserve a purpose in this world, but I do not think they deserve to have the responsibility of a human. I have a smartphone, a tablet, computers, and many more machines that help me do the everyday tasks that I complete. Yes, they help me complete my tasks, but they don’t perform them for me. As I said before, if machines did everything for me, I’d look like a character from Wall-E, floating around and texting on my Galaxy S23. Machines are not the worst thing in the world; I enjoy the help they provide. I just feel more confident doing something myself than having a robot do it for me. If I were a doctor, I’d much rather see my patients face to face than have pills dispensed to them via robot. Robots have a place in this world; they just can’t have every place in this world.


  12. Adam Michalak

    In the article “Death by Robot,” Robin Henig gives a scenario and a few real-world examples of situations that could happen if we had robots working in our society or in our military. But the question we face is whether the decision-making processes of a self-driving car in an emergency situation differ from the processes of a human driver. Robin states that there is a clear difference between the processing power of a self-driving car and the processing power of a human. In an emergency situation, a self-driving car will look for the best course of action to save the most lives and take the least amount of damage possible. I, however, believe that a human would try to just save themselves and not worry about anyone else around them. Colin O’Bryan has a different take on the subject; he states that robots would be perfect for this job because they don’t fear death. In the scenario of a car crash, a human subject would most likely not be calm, whereas their robot counterpart would be in control of the situation at all times and would be able to safely bypass the accident. An example Colin uses is playing a video game like G.T.A.: the mortality of the driver, or in this case the player, is not a consideration, because they can just respawn, the same way a robot can re-upload. Now, in Gabriel Factora’s response to this article, she states that robots should not be given the task of self-driving cars because robots do not have ethics and can’t weigh certain situations the same way a human would. I disagree with Gabriel’s statement. While it is true, as Gabriel states, that “as humans, we are built to feel emotion which enable us to incorporate ethics,” what she doesn’t state is that we also learn these things from our parents, or “our creators.” So I maintain that if parents can teach us ethics, why can’t the creators of robots program their robots to learn ethics? I believe they can, based on Gabriel’s statement, “Robots are designed to follow certain algorithms to complete certain tasks.”


  13. Josh

    The topic of robots is both exciting and scary at the same time. The thought of a driverless car sounds great, but the thought of a real-life Terminator is very frightening. Robots can do a lot of good but can also very easily cause a lot of unintentional harm; take, for example, the scenario given at the beginning of the article. Robots aren’t as reliable as humans, and a lot more can go wrong with them. Humans have a sense of discernment that robots just don’t possess, at least not yet. However, the technology in the world of robotics is quickly advancing and shows no signs of slowing down. With today’s technology, who knows what task will next be handled by robotics.


  14. Hannah Schumacher

    The difference between a human and a machine is that with a machine, you lose emotions and feelings. Machines have no ethics or morals like humans do. Humans can react to situations with more empathy than machines do, and they can sometimes make a decision that may not have a ‘good’ likely outcome. In other words, humans can take a risk, whereas a robot knows no such feeling. And that is why there is a huge difference between robots and humans.


  15. Nancy

    Technology has changed a lot, especially in our lives. Humans do not have the same skills as machines do; we are unable to be perfect, which is sometimes needed. Robots allow humans to have help, so we do not have to work extra hard and we will get enough rest. Don’t get me wrong: humans are smarter than robots, and nothing will ever replace that, but getting help will always be welcome.


  16. Sterling

    To me, arguing about the morality and ethics that a robot can have is circular. A robot can only be as moral and ethical as its programmer. As in the hypothetical example of Fabulon, the robot, the mistake is not having a contingency plan for possible system failures. Autonomous robots have great potential to help and improve our lives, but it is our responsibility to ensure that there is a form of checks and balances.


  17. Julie Thomas

    When talking about robots, we should not forget that they are a human product. Everything in a robot is directly related to its creators’ decisions, so we cannot blame a robot for lacking a “sense of morality” without blaming its designers. Robots are machines that you can fill with whatever you want, good or bad; that is up to you, if you are their maker. Humans’ judgments and behaviors are based on society’s values, just as robots’ judgments and behaviors are based on their creators’ values. Building a robot that is expected to perform human activities should be the joint task of engineers and professionals in the human sciences, such as psychology, sociology, anthropology, etc. In conclusion, morality is a quality that belongs to humans, and it can be encoded as the algorithms that shape a machine’s brain.


  18. Gabriel

    Robin Henig suggests in her article “Death by Robot” that robots in today’s world are programmed to make moral decisions. As these robots make decisions, the robot will know when to ‘stop’ through a programmed guilt factor. But then Henig says that robots feel no emotion, and that is why robotics is being used in many situations. Last time I checked, guilt is an emotion. Robots might be able to weigh an outcome and pick the best option, but robots cannot feel guilt the way humans do.


  19. Ja'Lynn Crook

    Henig explains how having a driverless car would be a better option than having human drivers. She describes how human drivers tend to make mistakes and how it would be close to impossible for a driverless car to get in a wreck with another. She suggested that the only reason a driverless car would be in a wreck would be due to a pedestrian. Human drivers make mistakes such as texting while driving and not paying attention to the road in front of them. I agree that having a driverless car would indeed be better, because you then have a solution with fewer mistakes being made on the road. Although this is true, I do believe that many of us would actually miss driving on our own. Learning how to drive and experiencing it is a great responsibility in life, and many of us want to take that step.


  20. stephanie macfarlane

    I know that robots are important, and with new technology improving every day, it’s just a matter of time before you see them in so many different areas of life. Robotics in the medical field is already in use, and it improves all the time. I’m not real keen on the idea of robotics in car manufacturing, or of robots handling meds in a hospital. The robot is only as good as the person who programs it. I feel that there is still too much room for error, and in a life-or-death situation I would still want human assistance and compassion.


  21. Whitney Alexander

    In the article “Death by Robot,” the idea that robots can have morality and use human-like judgment is a little far-fetched for me. In situations that require split-second decisions or “choosing the lesser of two evils,” I don’t feel that you can truly imitate human emotion with algorithms. I worry that when a robot malfunctions, it cannot perform its intended purpose. This may cause a human to be hurt at the hands of a robot. There may be a few basic decisions that a robot can be programmed to make, but there’s a line that can’t be crossed. There’s a reason humans are at the top of the food chain and are the most intelligent species on earth.


  22. John

    I believe that error by robot is more likely to happen than human error. Robots are man-made machines that aren’t always perfect. For example, since we are talking about vehicles driving themselves, car manufacturers cannot create a perfect car today. What I mean by that is, every year new vehicles are made, and every manufacturer comes out with a recall on some particular vehicle. So my question is: if we can’t create a perfect car now, what makes you think we can create a perfect car, with no recalls, that drives itself? I feel that in order to proceed with technology, we need to fix the errors that exist now before we build more errors on top of the ones we already have.


  23. David Rowcroft

    When looking at the set of situations the article presents, it is important to view the robot’s choices not as an algorithm but as the decision of many minds. Yes, if a robot needs to make a choice, it would be done via programming, not emotion, but that is a good thing. It would be like having the ability to pause time in the middle of an incident to make sure the right decision is made. Experts can toil over tough decisions for hours so that they can give robots the same solution in an instant. The ethical concern with algorithmic decision making is not that the wrong choice will be made, because in theory that will not happen. The fear comes from the realization that a decision needs to be made; a random choice would not suffice. Impulse does not exist in machines, so every decision would be made with the best reasoning preloaded into the computer. If the argument were about whether ethical decisions should be made with expert opinion in mind, then the obvious answer would be yes. Allowing computers to make ethical decisions just means letting experts decide, and that is better than a random impulse decision.


  24. One of the questions I picked is question one. I believe that driverless cars in an emergency situation would be better, because not only will you see fewer accidents, but they will also get you to your location faster. When a person is at the wheel in an emergency, it can be bad, because they can rush, make things worse, and end up crashing into another car. A driverless car will not rush, because it’s in its programming where it needs to go and what is in front of it. That’s also what makes them different. I believe the two are really different, because a computer makes a decision faster than humans do. Reading the post from Svea Cheng, I agree that technology makes life better than before, but at the same time it can make it worse. If we always rely on technology, then what happens if we can’t rely on it anymore?


  25. Brandon Sonntag

    I would say the difference between a driverless car and a human driver is that a human driver might make more wrong decisions in an emergency situation than a driverless car would. One example: if you were driving and not watching the road and were about to hit a car, you might not be able to react quickly enough before it’s too late, whereas a driverless car would use its laser sensors and cameras to analyze and predict how fast your car is going, as well as the distance between you and the car ahead, to decide whether to stop or swerve out of the way. A second example: if you were driving without watching the road and were going to hit one of two different cars, each with a number of people inside, a human probably would not be able to decide what to do. A driverless car, on the other hand, could analyze and determine which car would take the least amount of damage without killing any of the passengers inside. I agree, because I think that driverless cars could make much better choices than humans in emergency driving situations.
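    The stop-or-swerve choice Brandon describes can be sketched as a toy calculation: compare the car’s stopping distance, v²/(2a) from constant-deceleration kinematics, with the measured gap to the obstacle. The deceleration figure and the decision rule below are illustrative assumptions, not how any real autonomous vehicle actually decides:

```python
# Toy brake-or-swerve decision from speed and sensor-measured gap.
# Stopping distance under constant deceleration a is v^2 / (2a).
def choose_maneuver(speed_mps, gap_m, max_decel=8.0, lane_clear=True):
    stopping_distance = speed_mps ** 2 / (2 * max_decel)
    if stopping_distance <= gap_m:
        return "brake"            # the car can stop in time
    if lane_clear:
        return "swerve"           # cannot stop, but the next lane is open
    return "brake_and_brace"      # no good option; minimize impact speed

print(choose_maneuver(20.0, 30.0))  # needs 25 m, has 30 m -> brake
print(choose_maneuver(30.0, 30.0))  # needs ~56 m, has 30 m -> swerve
print(choose_maneuver(30.0, 30.0, lane_clear=False))  # -> brake_and_brace
```

    Note how the hard ethical cases the thread debates show up only in the last branch: once braking cannot avoid impact and no lane is clear, the “decision” is just whatever rule the programmers wrote in advance.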


  26. Dan

    I do agree that automation in certain things is extremely beneficial but that we have to be very careful that it does not get out of hand. Things like that, I think, have the power to be corrupted quite easily.


  27. KJ

    Alannah’s theory of the hard-to-prevent potential killings is extremely useful because it sheds light on the difficult question of whether robots would hurt humans more than they would help them. She makes a valid point that a car would not know if it were about to hit an infant or an adult; it might swerve to avoid hitting another car and hit an infant anyway. This leads to the issue of who would be responsible for the killing if a robot were driving a car. Robin Marantz Henig addresses that issue at the beginning of the article, when discussing the large team of people who would be needed to help build a robot car. One idea that Alannah does not talk about is the idea of a car killing its owner. Henig explains that the robot car would be programmed to try to save the most people it can; so, for example, if the robot car, holding one passenger, is about to hit a car with six passengers, it will try to save the six, very possibly hurting its own passenger. Based on her examples and legitimate points, I agree with what Alannah is saying in terms of robot cars being more harmful than helpful.


  28. Jocelyn Milkovich

    Although technology is great and becoming more and more advanced every day, I don’t think robots could do everything that humans do, and do the tasks efficiently and well. I agree with the author that it might sound great to have robots drive cars around for you and take care of your loved ones when people can’t help or just aren’t around, but so many problems can evolve from the use of too much technology. It is a problem in my eyes because society already relies on machines way more than we should, and making a machine to run more machines just sounds like big problems. For example, Henig talks about robots driving cars. Robots don’t have feelings and don’t take human lives into consideration, so if there is an accident, they’re just going to do what they’re programmed to do, while a human might try to avoid or resolve the problem. Humans are unique in how we are “programmed” to handle things, and that is what makes us a unique species. Technology malfunctions and breaks all the time. If a robot were driving a car and just stopped working, it could cause major accidents, even disasters. Likewise, if it were helping take care of a human and stopped, the person would no longer have help and would have to find a way to get someone else to help them because the machine quit. I feel using robots would only make things worse, not better. It would become dangerous to let machines do tasks that humans are perfectly capable of doing already.

