Death by robot: Robin Henig addresses automation and morality

As robots and mechanized transactions become increasingly commonplace, questions about their abilities and their “humanness” become ever more urgent and complicated. Science writer Robin Marantz Henig explores some of the issues surrounding robots in life-and-death situations in this January 2015 article in the New York Times Magazine.

Read it here: Henig, "Death by Robot"


  1. Henig asserts that the decision-making processes of a driverless car in an emergency situation are fundamentally different from the processes of a human driver. What is the difference? Do you agree that the two situations really are different? Why or why not? 
  2. Henig presents a lot of information in a straightforward journalistic manner. In what ways does she reveal her own position? Point to specific passages or statements. Is her position expressed effectively? Why or why not?
  3. Henig opens her article with the example of Sylvia, an injured elderly woman, and Fabulon, her robot caregiver. Henig sets up this fictional example so that readers will desire a specific outcome. What is the desired outcome? What would need to change in the robot’s programming to bring about that outcome?
  4. Henig states that the fundamental problem is that of mixing “automation with morality” and that “most people intuitively feel the two are at odds.” What do you say? Are automation and morality at odds? Are our moral decisions more than complex algorithms? Why or why not? Write an essay in which you address these questions, using Henig and/or any of her sources as your They Say.

79 thoughts on “Death by robot: Robin Henig addresses automation and morality”

  1. maree

    The difference between human and robot error is that both kinds of crashes may cause death. No, I do not agree that the two are really different, because I believe both carry the possibility of human death. I believe robots would cause more deaths because machines break, and that's all robots are, machines. Her position is not expressed effectively because I could not find her specific position anywhere in the article.


  2. I feel robots can cause just as many errors as humans, or even more in the long term, due to the fact that machines do break and don't always do exactly what you want them to do. Humans obviously make errors when they are doing their work, but if a robot breaks in the middle of a job, then the rest of the work will be wrong unless someone checks on the machine.


  3. Meghan

    When looking at the whole situation, there are many differences between a robot and a human mind while driving. The first big one: the computer can process things a lot faster than a human can. For example, you are driving on the freeway and the person in the next lane is not paying attention and attempts to change lanes. You have only seconds to react and do something. The computer may be able to process the situation and avoid it. As humans, we only pay so much attention to things. We tend to get distracted by small things such as your cell phone, the radio, the person you're driving with, the city, or the things going on outside of the car. If a computer were driving, most of those things would be taken out of the danger column. The computer isn't worried about them. It is focused on doing what it was told to do, in this case drive.
    Yes, I do agree that the two things are very different. The computer is so much more complex than the human brain. There are only so many things we can do at once, and texting and driving is not one of them; there is nothing good about that. But we may be able to save so many more lives by having computers do it all for us. They can make decisions in a split second, and many people can be saved by that. People don't always think situations through, and this will help. All in all, there are many advantages that can come from having a non-human operate a vehicle.


  4. How well do the participants in these exchanges summarize one another's claims before making their own responses? How would you characterize the discussion? Is there a true meeting of the minds, or are the writers sometimes caricatured or treated as straw men? How do these online discussions compare with the face-to-face discussions you have in class? What advantages does each offer?


  5. Joshua Natividad

    A man drives a car and does not pay attention, leading to the death of two people. A robot drives a car and, with a slight error in its calculations, kills two people. The situation remains the same although with different drivers. Do the situations change? No, they remain the same because of the same end result of two deaths.
    One might say that the situations change because of the differences between a robot and a human, but the two do share similar characteristics. A robot uses pre-made rules that it must follow and achieve. A human goes through a childhood in which they create their own rule sets that they follow. Both a robot and a man process information through some sort of “brain” in their system. A robot uses a processor that serves as its brain. Both “brains” try to take actions that lead to the most favorable outcome; although the end result might be two deaths, that result does seem more favorable than the death of ten. With the various situations that occur when driving, a robot and a human both go through similar processes when taking the wheel.


  6. Alannah

    There have been movies showing what life would be like with robots, such as “I, Robot”. Robots could help humans, but they can do the opposite too. My stand is that robots should not be created to help humans. One, robots could malfunction. A good example is the elderly woman whose robot couldn't get her painkiller because the robot stopped working. A human wouldn't just stop working. Second, robots don't have ethics. For instance, a robotic car will try to avoid a collision using radar; if the car swerves, it could hit another car carrying an infant. No one in their right mind wants a baby to die, but a robot can't make those kinds of decisions. I don't believe we should have robots help us.


  7. Elena

    Robots contribute greatly to our society and often do their job better than humans. However, while I believe we should take advantage of our technological advancements and use robot power, I think a certain danger lies in relying on machines too heavily.
    As close as we may come to understanding morality and decision making, I think that robots will never be able to completely replicate a human and that to attempt to do this would cause unforeseen consequences. Too many unpredictable situations exist for a coder to accurately provide a robot with the correct response, compromising human safety. Robots are breakable and unable to improvise, two things that could make it difficult when faced with new, unexpected scenarios. Also, as Asaro brings up in the article, a machine “is not capable of considering the value of those human lives” that it is about to end. Replacing human morality with robots appears to me as taking away what makes us human. Without having to consider the rights and wrongs of human life, we will lose our ability to make these decisions. I feel it is unnatural for a machine to display the same characteristics as a human. While robots do open new possibilities to fix human error, and they should continue to be used in our society, we must limit ourselves so as not to let robots completely rule our lives.


  8. Hyun-Jun Lee

    Although Henig brings up a lot of great questions and arguments about robots, I disagree with her position. The ethical decision is made when the person chooses whether or not the robot needs human authorization to manage narcotics, or to shoot. Machines respond to their inputs; all ethical decisions are made by humans. The author supports her points with sources from roboticists and philosophers rather than engineers, which makes her evidence weak. Wallach, for example, talks of a “moral Turing test,” in which a robot's behavior will someday be indistinguishable from a human's, to show his optimism about robots' ethical prospects. However, Wallach never discusses what kind of human should be the standard for robots to follow. Also, in the car-accident situation, a person would not want to be sacrificed by the robot to protect a group of people who committed D.U.I. And during a war or other life-threatening situation, a person would not want his or her robot to refuse to shoot a wounded opponent who is attempting to kill him or her. The programming of robots should be about minimizing the chances of a collision during dangerous situations, rather than choosing which outcome would be beneficial.


  9. Robots are no doubt part of human life. Whether in the near future or the present, robots are beginning to be created. The question is whether it is ethical to allow robots to function in certain situations by giving them human characteristics. I agree that robots should be allowed to function, except in certain situations. Robots are machines, not humans; therefore it is impossible for a robot to fully understand the value of a human life. Henig gives the situation of a robotic car sacrificing deaths to avoid the most possible destruction. The robot does not know the value of the life that is going to be sacrificed; it only knows to follow the algorithm it has been given. It would be unfair for someone's life to be held by a robot that does not know life's value. At the same time, the benefits of a robot cannot be denied. Robots can function in situations that restrict humans, free of emotions such as fear and panic. Robots are also not bound by the same physical limits as humans, so they can perform certain tasks much more effectively than humans. Robots are undeniably a way to benefit the human race and should be allowed to function in certain situations; at the same time, as the human race, we must not allow robots to control our lives. We must never forget that robots are machines and not humans.


  10. Robots are beginning to take part in our society by accomplishing basic tasks that help humans with efficiency. In recent years, scientists have been trying to take the robot to a new level. They want to find a way to give robots human characteristics such as morality and the ability to make ethical decisions. I believe that robots should not have these human characteristics and should only be used to help humans with basic tasks.
    A robot is not alive; it is simply programmed to do tasks, and it does them. Even though new technology has given robots the ability to accomplish tasks under many different circumstances and to use a superficial sense of morality, a robot is still not human. Humans are able to use morality and make ethical decisions because they are alive. They have life experiences, complex emotions, and instincts. This is something that a robot could never achieve. If you were to have a robot driving a car, everything would run smoothly until a complex conflict requiring a sense of morality arises. A scientist could program the robot to use “moral math” and always choose the option that kills fewer humans, but shouldn't it be the human driver's choice to decide what to do in the moment? What if the more ethical option is not right due to the circumstances? Humans should only let the robot drive when there are no obstacles requiring higher-level decision-making. Overall, I believe that robots should never be made with human characteristics or the ability to make complex ethical decisions.


  11. Alexandra Ro

    Robots are integrating more and more into our modern society, whether in medicine or the military. This raises the issue of the extent to which a robot can be both ethical and effective. To a certain extent, I agree with Henig that robots should be kept out of real-world situations that involve morality and something greater than algorithms and computerized assessments.
    In daily civilian life, robots remain questionable. The lack of emotion and the use of digital calculations enable them to be efficient but not always ethical. As Wendell Wallach notes in the article, a driverless car would only be able to determine its actions by making “rapid probabilistic predictions based on what the observed objects have been doing”. Objectively, such cars are programmed to limit the amount of damage caused by a collision. If a human is driving the car, past experiences on the road will kick into action, and the human will also try to be least affected by the collision. The difference is that while robots, in their “moral math”, are set to handle certain situations in certain ways, humans have a wider ability to control and think. They develop from situations where creativity and intelligence are tested, such as the negotiation at a four-way stop sign over who goes first. In turn, humans learn to adapt and become more efficient in their decisions, minimizing human error. There are no limits to what the average human being can do under stress, pressure, or anxiety.
    In the military, however, robots prove to be useful in conducting offensive strikes on enemies in war without losing more human life. Many of these autonomous weapons systems, like drones and cruise missiles, are programmed to target certain areas, or are barred from targeting others because civilian activity there would cause more casualties than necessary. Morality proves not to be an issue here because of the constant development these robots undergo under the “international rules of war”.
    Overall, humans should rise to the occasion in situations where higher-level decisions are called for and leave the technical areas for robots to figure out. Robots may excel in efficiency, but humans are wired for anything and everything.


  12. Robots Are Not Humans
    3. Common sense seems to dictate that many people would want the outcome in which the robot decides to give her the medication so she will not be in pain anymore. However, what if that woman were an addict who was not really in pain but just wanted more medicine? The robot would not know if she really was in pain, and it would not know what to do, since it could not contact the supervisor. Robin Marantz Henig states, “The robot must do what its human asks it to do. The robot must not administer medication without first contacting its supervisor for permission.” In order for the outcome to be the desired one, the robot would need more advanced problem-solving skills and a greater intelligence to understand the difference between right and wrong. These robots would need to think exactly like a human, but without negative thoughts and feelings such as anger, fear, or jealousy, to be able to understand the right thing to do in those types of situations. Henig quotes Matthias Scheutz: “Human caregivers would have a choice and would be able to justify their actions to a supervisor after the fact. But these are not decisions, or explanations, that robots can make.” More important, because these robots cannot make these decisions, we cannot trust them to take care of a human being. In my experience, my internet goes down frequently because of bad service, so a robot that needs to contact a supervisor will not be able to in a situation that could be life or death. A robot cannot produce the desired outcome if it does not have the capacity to make those decisions.
    Do you think it is possible to program a robot to make the right decisions? And if it is possible, would you put your life in the hands of a robot?
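
    The deadlock this commenter describes falls straight out of the article's three rules once they are written as code. Below is a minimal, hypothetical Python sketch; the function name, inputs, and return strings are my own illustration, not Henig's text or any real caregiving system's API:

```python
# Hypothetical sketch of Fabulon's three rules as described in the article.
# Every name here is an illustrative assumption, not a real caregiving API.

def decide(human_requests_meds: bool, supervisor_reachable: bool) -> str:
    # Rule 1: the robot must not hurt its human (withholding pain relief arguably hurts her).
    # Rule 2: the robot must do what its human asks it to do.
    # Rule 3: the robot must not administer medication without supervisor permission.
    if not human_requests_meds:
        return "stand by"
    if supervisor_reachable:
        return "request permission, then administer if granted"
    # Rules 2 and 3 now conflict: obeying Sylvia violates Rule 3, and refusing
    # her violates Rules 1 and 2. No clause covers this case, so the robot
    # has no permitted action, which is exactly the deadlock described above.
    return "no permitted action"

print(decide(human_requests_meds=True, supervisor_reachable=False))
```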


  13. Joshua P

    Can a car decide the better option?
    1. It is often said that robotic cars are the future of the world. Being able to steer themselves, park themselves, even brake themselves is really appealing to the human brain. But will making these cars this advanced be a good idea? The cars that control themselves do not have the technology to decide on the better option. According to Henig, “Here's the difficulty, and it is something unique to a driverless car: If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own”. Basically, Henig is saying that the cars are not going to be smart enough to make the decision between killing the other two passengers and killing the robotic car's own passenger. If the car were human-operated, the human might be able to swerve out of the way, crashing in a way that wouldn't kill anyone. Another example Henig gives: “Or it might choose to hit a Volvo instead of a Mini Cooper because its occupants are more likely to survive a crash, which means choosing the vehicle that is more dangerous for its owner to plow into”. Here the car might decide these are the only two options it has. Perhaps there are other options that humans can think of but robots cannot.
    Could a robotic mind make better decisions than a human mind in a stressful situation?
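
    The “fewest people die” rule quoted in this comment can be stated in a few lines of code, which makes its side effects easy to see. Here is a minimal Python sketch; the options and casualty numbers are invented for illustration, and this is not an actual autonomous-vehicle algorithm:

```python
# Illustrative sketch of the "minimize expected deaths" rule quoted from Henig.
# The options and their numbers are invented for the example.

options = [
    {"action": "hit the car carrying two passengers", "expected_deaths": 2},
    {"action": "run off the road, risking the car's own passenger", "expected_deaths": 1},
]

# Always choosing the option with the fewest expected deaths...
choice = min(options, key=lambda option: option["expected_deaths"])

# ...selects the option that sacrifices the car's own passenger.
print(choice["action"])
```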


  14. In the article “Death by Robot” in The New York Times Magazine, Henig writes that “Driverless cars will no doubt be more consistently safe than cars are now, at least on the highway, where fewer decisions are made and where human drivers are often texting or changing lanes willy-nilly”. Still, the two situations are different, considering the differences between human drivers and a robot. If texting has really become the biggest risk in our society, then robots would be an improvement, apart from the fact that robots are not always perfect and bug-free. By demonstrating how robots are helpful, Henig presents a lot of information about different robot aids that help people with things we could do manually without a robot's help. I agree that they are aiming at people who would not be able to do things themselves, because my experience of facing a task too difficult to do alone, with no one around to help, confirms it. But creating these types of things will also cause people to become lazier and depend on an object to do everything for them. Improving the robots would be helpful for the elderly, since they are not able to take much care of themselves if they are injured; that is the desired outcome, and it would require making the robots more intelligent. According to Henig, “There's a term for this discomfort: the sense that when a robot starts to seem almost but not quite human, it is even more disturbing than if it were obviously a machine”. Automation and morality are at odds because developers want to create robots to the point where you cannot tell the difference between human and robot. Though I concede that robots can help the elderly or people with disabilities, I still insist that giving robots morality would create a bond between human and robot. Indeed, it is highly likely that some form of machine morality is possible. Nonetheless, that would be taking the evolution a bit too far: anyone familiar with modeling morality on human behavior should see that achieving fully lifelike judgment is a difficult task for today's technology. Henig quotes the prediction that “one day robots will be even more morally consistent than humans.” This evolution could let robots surpass humans in some ways and be faulty in others; as in the movies, robots may prove unsafe, or they may coexist with humans perfectly, all while creating problems with social interaction and with the health effects of depending on robots to do everything. Would robots ever get to the point of achieving human morality?


  15. Ruben

    Henig reduces the dissensions over robot morality to the hot-button claim that robotics and morality are at an impasse with seemingly no solution in sight. But the focus on robotics itself is off-kilter from the real issue at hand: the idea of adding morality to robots in general. There are a plethora of examples in films and novels where robots go bad because of their moral wiring. Just this past May, a whole movie, The Avengers: Age of Ultron, revolved around the plot of a robot questioning its morality and the orders it was programmed to follow, then going AWOL. The fact of the matter is that a robot is just wires and metal, and despite the wiring's similarities to humans' neural wiring, it is still an inanimate, lifeless object. To say a robot doesn't have intelligence, however, is blasphemous, as robots and electronics do things in milliseconds that would take us humans minutes. But to add a sense of morality to the object will only add a cluster of new issues.
    Henig also throws the word morality around without accurately defining which form of morality is right. Yes, Henig cites Scheutz, who defines “morality, broadly, as a factor that can come into play when choosing between contradictory paths” (Henig). But Lawrence Kohlberg split morality into three stages: preconventional, conventional, and postconventional. In each stage, a person feels that their moral judgment is right based on that stage's justification, whether it is self-interest or upholding the rules of society. Together the three stages depict how morality is not a concrete “algorithm” that can just be implanted into a robot or anyone; morality forms from growth and experience.


  16. Henig argues that the decision-making processes of a driverless car in an emergency situation are fundamentally different from the processes of a human driver. I would claim that driverless cars would have to adhere to a specific formula formatted for general situations, and the technology would not be able to make certain decisions regarding life-or-death situations. This is supported by the article “Death by Robot,” where Henig states, “If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own.” A human driver, although under this stressful situation, could possibly do something different that the robot wouldn't, saving themselves. I feel that human drivers differ from robots in that they would respond more quickly to problems that involve ethically moral solutions, but the question I raise is whether, in the future, it is possible for scientists to create robots with the moral reasoning of a human being.


  17. Brian Veilleux

    In the article “Death By Robot,” Marantz Henig discusses how a driverless car and a human driver would react in an emergency situation. I think the big difference here is the human factor. Henig mentions a choice between hitting an SUV or hitting a kid, and I think it's unethical and immoral to leave that to a machine: a human driver might be moral enough to swerve completely off the road and miss both the SUV and the kid, while the driverless car would hit either the kid or the SUV. Personally, I wouldn't like a robotic car making that decision. Henig then goes on to explain how a driverless car may choose a certain car to hit because it would know that everyone in it would survive. That sounds great, but I do not like the idea of a car deciding that mine is the safest car to hit, when my car is supposed to keep its passenger out of risk. I also would not like the fact that, in a driverless car, I could do nothing but wait to crash into something in an emergency. And the human factor plays in again: what if it was another driver who forced the driverless car into the accident in the first place? Henig states, “Here's the difficulty, and it is something unique to a driverless car: If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own. Or it might choose to hit a Volvo instead of a Mini Cooper because its occupants are more likely to survive a crash, which means choosing the vehicle that is more dangerous for its owner to plow into.” I support robotics, but robotic cars are just something that does not do it for me. I get that companies are trying to make cars safer and have fewer accidents, but you have to remember we're all human; accidents happen, and you cannot control that. I do understand that if a person is under the influence of drugs or alcohol, these cars would be suitable for the situation.
    Lastly, if a driverless car did make the decision to take out a kid, or any pedestrian, when as a driver you're taught to avoid pedestrians at all costs, then who is supposed to be held responsible? The car? The company that programmed it?


  18. Henig argues that driverless cars are not safe because they will not be able to make the decisions that even humans struggle over. Henig states, “If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own.” I agree with Henig because cars do not actually have a human mind and do not know right from wrong. If one of the sensors broke, your whole life would be at risk. Sensors on your car do not know which move to make in that split second before a collision. Wallach says, “Let's say the only way the car can avoid a collision with another car is by hitting a pedestrian. That's an ethical decision of what you do there, and it will vary each time it happens.” These cars put everyone around you at risk instead of just yourself. If you do not believe in these cars and you get hit by someone else who owns one, you will be very upset.
    How will a self-driving car work on a five-lane highway full of bumper-to-bumper traffic? Will missing your exit become more frequent?
    ~Rachel


  19. Sean

    In the article “Death by Robot,” Marantz Henig asks why cars and other robots should be able to choose which human's life gets taken. I think Henig is wrong, because she overlooks the fact that people have to make these choices too. Basically, Henig is saying that because they are not human, they should not be able to make the choice of taking a human's life. But what is the difference when we go hunting and choose to kill another animal? It's the same thing when Henig says the cars have to “choose the option in which the fewest people die.” Some may say that robots are just another animal in the kingdom; they may be man-made, but they are still controlled by humans, just like other animals locked up in zoos. Robots are programmed by humans, so they have no free will; therefore they are not the ones making the choice, it is the human that programmed them. Much as when Henig writes that “A robot's lack of emotion is precisely what makes many people uncomfortable with the idea” of giving it human characteristics, people, of course, may want to question whether or not it really is the human that is at fault. Yet is it always true that when a kid gets charged with a crime, the parents are at fault? Is it always the case, as I have explained, that the programmers are at fault? No: something can go wrong with the programming, and the robot can end up hurting someone. Now is that the programmer's fault or the robot's? Did the robot mean to hurt that person, or did something just go wrong? And should we really have robots that are basically slaves? Everyone has their own opinion; mine is that we should give the robots and their programmers their chance.


  20. Robin Henig argues that the decision-making processes of a driverless car in an emergency situation are fundamentally different from the processes of a human driver, and I have mixed feelings about it. On one hand, I agree that, weighing the risk factors and benefits of operating a driverless car, it upholds the importance of cautious driving and promotes safety for people both on and off the road. But on the other hand (and more prominently), driverless cars don't have the ability to make a decision based on reasoning and logic the way humans do, so the two really are different. Patrick Lin, director of the Ethics and Emerging Sciences Group at Cal Poly, said, “It evokes the classic Ethics 101 situation known as the trolley problem: deciding whether a conductor should flip a switch that will kill one person to avoid a crash in which five would otherwise die.” That is a decision that shouldn't be left up to a car. In that case, what could be an alternative method to prevent the deaths of pedestrians in an emergency situation without leaving the decision up to a car?


  21. Marc

    Although I agree with Henig, up to a point, that robots pose a serious question about what our near future is going to become, I cannot accept a future in which they take jobs away from our people. Robots may create jobs in the robotics and computer departments, but people who have pharmaceutical and medical jobs are going to lose them because there will be no need for them. We do not need robots; right now computers are sufficient. “The coders who built Fabulon have programmed it with a set of instructions: The robot must not hurt its human. The robot must do what its human asks it to do. The robot must not administer medication without first contacting its supervisor for permission. On most days, these rules work fine. On this Sunday, though, Fabulon cannot reach the supervisor because the wireless connection in Sylvia's house is down. Sylvia's voice is getting louder, and her requests for pain meds become more insistent.” Basically, Henig is saying that robots can have problems and are not always 100% reliable, even if the majority of the time they are. If there happens to be an emergency, a robot can't do what a human can. If a driverless car meets an emergency vehicle, the car is supposed to pull over, but it may fail at that. “If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own. Or it might choose to hit a Volvo instead of a Mini Cooper because its occupants are more likely to survive a crash, which means choosing the vehicle that is more dangerous for its owner to plow into.” On the other hand, these are really cool cars, and I hope to see them in the near future. But if the robot has a malfunction like this one did, then what? What is in store for our future?


  22. June Cera

    Throughout this article, Henig pushes readers to feel that robots are still not able to live in the same world as humans at their current stage of development. With all the problems of morality and emotion, and of how a robot should act in specific situations, it is very difficult to answer the question Henig poses. One example Henig brings up: “Here's the difficulty, and it is something unique to a driverless car: If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own. Or it might choose to hit a Volvo instead of a Mini Cooper because its occupants are more likely to survive a crash, which means choosing the vehicle that is more dangerous for its owner to plow into.” This quote shows how robots would have trouble with the moral decisions a human being makes in a split second. I do support the idea of robots helping humans in the future, although I believe it will be difficult for these robots to be accepted and integrated into society to the point where humans accept all the decisions the robots make. I do think it is possible, but humans might have to change their moral views to be able to accept these robots, and that, I think, will be the hardest part of this integration. What will happen in the future? Will humans be able to accept these ideas? I don't know. I believe we will have to introduce these robots into society and try to make them work before completely giving up on the idea of robots helping in society.


  23. Brooke Towns

    Employing robots to complete human tasks does have some positive results, but the negative consequences greatly outweigh them. Our human integrity, which has been tested over the past few years, will diminish in certain areas of life. As Henig points out in the article, war robots that need to decide whether to shoot or not shoot can be detrimental. Sure, soldiers' lives will be saved, but at the cost of our morality. If a literal killing machine were to shoot someone's innocent family member, would there be a trial? Of course not, because murder is committed by humans, and robots cannot be jailed. The family would simply have to accept the death. Humanized robots will reduce humans to objects, because no matter how well programmed a machine is, metal (or plastic, or anything a robot could be made out of) cannot feel emotion. In fact, by even trying to build such a robot, we are just playing God, which reduces our morality that much further.
    Although it may seem far-fetched, humans may end up like the ones in Pixar's “WALL-E” if robots advance far enough. Like the elderly woman in Henig's article, healthy people may eventually buy machines to do simple tasks for them, like getting a drink out of the fridge or making their bed. Although no lives are at stake in this analogy, humans' integrity is in question. What has become of society if people are too lazy to get their own drink out of the fridge? Do we really want to support such a society? In short, automation is not a question of morality in the sense of its uses, but rather of what it is not being used for.


  24. Madelene C

    Although robots benefit humans by means of care and protection, I oppose the implementation of robotic intervention. In driverless cars, robots use a vast database, created by humans, to make moral decisions in an emergency. But what constitutes moral? The self-driving car chooses an action based on the stability of surrounding cars and the number of people in each car. What would happen if this system caused an accident with a one-person car that carried a pregnant woman? Though the car had only one person in it, this calculated risk jeopardized two lives. This ethics system will only thrive with technological improvement, and the self-driving car would only work perfectly if there were no regular cars, only self-driving cars; that way everything could be automated. Over time, I fear, humans will rely too much on this “flawless” ethical plan, leading to a lack of determination and therefore a reliant society. In conclusion, I think the ethics of robots is flawed and will lead to reliance. If this technology continues and one day shorts out, we would not be able to conduct ourselves in a normal manner. Today our lives revolve around our phones; in fact, some people cannot live without them. If this dependency transfers to our transportation, we would not function as a society.


  25. Svea Cheng

    In the long run, our world as we know it would benefit tremendously from robot employment. Robots are practical, efficient, and precise. They are capable of performing tasks that surpass the abilities of humans, when programmed correctly. What humans can do, robots can execute even better. As humans, we must accept error, mistakes, and uncertainty; this is not the case for robots. Machines, inherently, are programmed to be accurate, meticulous, and calculating. Human minds are no match. Take, for instance, a simple calculator. Though the human mind is able to compute basic arithmetic, some minds more than others, it cannot rival the abilities of calculating devices. Not only does the human mind fall short in ability, it also lags behind in both time and efficiency. While an above-average math student can solve a problem in perhaps two minutes, a device can spit out an answer in a matter of seconds. Furthermore, while humans are limited in energy and may only compute mathematical problems for so long, calculators last for long periods of time. Now blow this idea up to a grander scale. This calculator can be a life-sustaining device. Lives depend on it; the well-being of individuals is jeopardized. In such a scenario, there is no room for human error, shortcomings, miscalculations, or lack of time. Instead, a machine designed to tend to such a crisis will be far more likely to restore the human to good health. In essence, robots are indeed the best solution. They inevitably loom on the horizon of mankind. At the rate technology is developing today, there is no doubt machines will be assigned to greater and more important tasks. It is simply a matter of efficiency and increased success rates. The implementation of robots will only lead to a more innovative and productive world. Industry will soar. There will be a health-care revolution.
    However, many oppose the use of robots, partially because they simply are not “human.” Many claim that they are incapable of human morals and cannot, or rather should not, make decisions as properly as humans. As humans, we reason through the dilemmas we face for the best solution. Many fear that robots cannot do this. They fear that robots are detrimental to mankind, as seen in science fiction films. Such ideas are in movies for a reason: they are fiction themselves. People must keep the greater picture in mind. No, robots are not here to replace humans. They are here to enhance lifestyles globally and to make a difference in combating world problems.


  26. Gabriel Factora

    We cannot deny the fact that as time goes on, our reliance on robots increases more and more. This is because robots are more efficient than humans. Thanks to advancements in technology, robots can get jobs done quicker and better than humans.
    Although this is the case, I believe that robots should not be allowed to perform tasks that require ethical and moral decisions, because unlike humans, robots are not designed to feel. Robots are designed to follow certain algorithms to complete certain tasks. This means that they cannot change the way they respond to situations; they respond however they are programmed to. As humans, we are built to feel emotions, which enables us to incorporate ethics into almost everything that we do. Before taking action, we consider whether what we are about to do is right or wrong. When put into dire situations, our creative minds can come up with possible solutions that could change the outcome of those situations. As of now, robots do not have the ability to do this. Robots are not able to feel emotions, which leaves them unable to weigh in on situations that might involve another human being's life. These factors show that it might be better for robots not to replace humans in such tasks.


  27. Matya Kaye

    Technology is a focus of the modern world. Inventors create new, futuristic products every day, but now they have focused on another idea: robots. Robots are becoming more prevalent in our society, and they can be found in many places, including factories and homes. But researchers have set their sights on a new level of robotics: they hope to create robots with a sense of morality. And underneath this idea lies the fear of artificial intelligence and what it could mean for mankind.
    Many of the robots in today's society lack many of the traits that we consider “human”. This causes a problem as we try to create robots to tend to sick patients or fight in wars, because human characteristics such as guilt and sympathy are required for the decision-making these jobs demand. Humans and robots think differently. Drivers make decisions every second as they drive, based on many details, but how can robots think the same way as humans?
    The answer? They cannot. In this article, Robin Marantz Henig discusses the ways scientists are trying to replicate the human thought process. She reports that they use mathematical equations that calculate the difference between the expected damage of a weapon and the actual damage. This shows how robots have to be programmed with math instead of emotion, and this lack of emotion will never allow robots to make moral decisions.
    The basic lack of self-awareness dismisses the possibility that robots could have morals. Robots can be programmed to believe what humans believe, but the lack of thought is a line that cannot be crossed.


  28. Sahar Kaleem

    While many find comfort in the idea that people have reduced morality to an equation, I do not believe that morality is that simple. Morality is a uniquely human characteristic and cannot simply be defined by an equation; it is shaped by the emotions tied to certain experiences. Even with the balance and guilt systems mentioned, there will always be a situation that is an exception to the program's reasoning. The human mind and morality are far too complex and variable for someone to program into a robot. And even if the morality equation worked, machines do not have emotions. While an action taken by a robot might be the most reasonable, humans might not take kindly to the choice. Take the driverless car, for instance: while crashing into one person is better than crashing into six, there could be larger repercussions for killing certain people over others. If the car decides to crash into the leader of some country instead of a car containing two people, it may seem right to the robot, but to humans it could be taken as a declaration of war or as souring relations between the two governments. While machines are very helpful to people, I think we should keep moral decisions for humans to make.


  29. Jarling

    Morality is more than an algorithm. No matter how much people try to stick to what they think is right, under certain conditions there is always an exception. People have agreed that lying is bad. But if Person B were to ask Person A about the surprise party for Person B, then the obvious course of action is for Person A to lie, and most people will not find it morally wrong. Or suppose there were two cars and one of them had to crash: five people in one car and three in the other. The better one to crash would be the one with three, because fewer people would be injured. However, if you knew that the car with three people was filled with murderers and the car with five was filled with innocent civilians, the head count would no longer feel like the real moral basis; and if the identities were reversed, most people would abandon the “injure fewer people” rule altogether. Morality is more than an algorithm.
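
    Jarling's two-car example can be made concrete. The hypothetical Python sketch below (the data is invented purely to illustrate his point) shows that a pure head-count rule looks only at occupant numbers, so any added fact about who is in each car cannot change its choice:

```python
# Hypothetical illustration of Jarling's point: a head-count rule
# ignores everything except the number of occupants.

car_a = {"occupants": 5, "note": "innocent civilians"}
car_b = {"occupants": 3, "note": "known murderers"}

# Naive "injure the fewest people" algorithm:
crash_target = min([car_a, car_b], key=lambda car: car["occupants"])
print(crash_target["note"])  # "known murderers", but only by coincidence

# Swap the notes (three innocents vs. five murderers) and the rule still
# crashes the three-person car, against most people's intuition. That gap
# is what "morality is more than an algorithm" means here.
```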


  30. Josh Haeker

    Robots only run on countless lines of code. Because humans are not omniscient, they cannot write code that gives the robot a response for every situation. When the robot detects a situation it is not coded with a solution for, it will stop running code. It will shut down, which is in general a very bad thing. Take the given situation of a car crash. Because the robot relies solely on its programmers' ability to predict every aspect of a given crash, when some facet of the particular situation becomes undecipherable for the robot, it has, in essence, no code to run, and it will simply stop. A car shutting down in the middle of a potential accident is possibly the worst decision the robot can “make”. In short, robots can never achieve human levels of intelligence, because humans will never be omniscient and therefore cannot code the robot to deal with situations their minds cannot even fabricate. There is no point in developing “moral” processors for robots that cannot even perform the original function, let alone decide the most “moral” choice.
    -just my thoughts


  31. Shaun Adams

    I definitely feel as if robotics could make our lives easier, but let's face it, we're not gonna go into some world where “I, Robot” (the movie) happens. I agree with the statements explaining how a machine could never have the emotions or decision making of a human, or the ability to consider the value of the lives of the humans around it. Yes, it would be great to have robots drive us around so we can text, or talk, or do whatever you would do with the car driving you around. But computers have a processing limit, and if too many things are going on, a computer could essentially stop working for a bit, just like a computer with its RAM completely used up.
    Plus, if we get to the point we see in movies, where we give robots AI (this is completely theoretical), we get the movies we watch, like the new Avengers movie or I, Robot, where the robots go on a rampage because they had a corrupted file or their coding was hacked and messed with. Just as much as robots could be a benefit, they could just as much be a burden. I personally think we should advance our technologies, yes, but not make robots that we would rely on every day. The prices would also be very high, and if we were to make them anything like the technology we have now, you'd be out of date with them every year or so, making you want to pay more for a newer unit. Making robots for everyday use and ease would be ridiculous, but for the military it would be a little more reasonable.


  32. Doug Vaughan

    The difference between a driverless car and a human driver is that the driverless car has a set path and strict rules it must follow, while a human driver has instinct, reaction time, and the ability to make the best decision in an instant or to adapt to the situation. I agree that the two situations are different because, even though the two drivers are doing the same action, how each performs the action is different. If you see a bunch of fallen branches on the road, the human avoids them by waiting for the road to be clear, then passing on the other side; but the robot can't leave its lane and might plow through the branches, which could damage the car, and even if it does avoid the branches, how does the car know when it is safe to pass on the other side? Joshua Natividad disagrees when he writes that “both a robot and a man process information through some sort of ‘brain’” and that both “try to take actions that lead to the most favorable outcome”. Yes, I agree that the robot does process information, but machines aren't perfect. Maree agrees when she says “robots would cause more deaths because machines break, and that's all robots are, machines”. Humans have a way of adapting: the human brain can adjust to whatever situation it is in and come up with solutions for how to overcome the problem it faces. A robot, however, does not adapt. It has set ideas about how to solve a problem and will only use those, and if they all fail, the robot will not know what to do and will shut down, and then the car won't be able to do anything (if the robot hasn't totaled your car already).


  33. Keith M.

    You would think that the idea of artificial intelligence is about having computers or robots perform natural, everyday human activities without flaws. The whole concept is to have no errors, because humans are “naturally programmed” to make mistakes. Henig asserts that the decision-making processes of a driverless car in an emergency situation are fundamentally different from the processes of a human driver. Here many readers and bloggers would probably object to Henig's claim for multiple reasons. A popular reason would most likely be that machines, gadgets, and technology altogether have malfunctions or glitches at some point in time. Another reason might be all of the science-fiction movies that tell viewers to simply stay away from the idea of artificial intelligence. I would agree with the naysayers in this particular case. No matter how many tests you run, there is bound to be some sort of malfunction, causing a monumental catastrophe.
    It would be nice to have a robot doing most of my daily routines for me, especially driving me from point A to point B. However, I am not sure that a machine programmed to run and understand a thousand emergency scenarios, specifically, for this topic, scenarios involving driving with other vehicles around it, is capable of making absolutely no errors at all. Blogger Joshua Natividad joined the conversation with this opening: “A man drives a car and does not pay attention, leading to the death of two people. A robot drives a car and, with a slight error in its calculations, kills two people. The situation remains the same although with different drivers.” In sum, both drivers made the same mistake with the same end result. How can that be possible if the machines are programmed to be flawless? The answer is simple: machines are not perfect. They are bound to run into some sort of malfunction. Aside from that, they are also embedded with mistakes. Many would object to this statement, but I would counter that all machines are made by humans, and it is in the nature of all humans to make mistakes. Thus the machines are flawed. The only way for a machine to be flawless would be for it to be made by another flawless machine. That kind of contradicts itself, right? If there is a flawless machine creating other flawless machines, who created that first flawless machine? In sum, machines naturally make the same mistakes as humans.


  34. Cara M.

    Henig discusses her views on how technology is taking over the world. Most specifically, she addresses a very controversial issue: driverless cars versus human drivers. Some view driverless cars to be just like human drivers, while others feel they are completely different. Nevertheless, both followers and critics of driverless cars will probably ask who is to blame if a robot kills someone or causes an accident. Henig sees these fancy driverless cars as having both pros and cons. Driverless cars take away “human error” like texting and driving and drunk driving. However, driverless cars aren't set up to react to every possible situation; for example, if there are nine different things going on at once, the robot might not be able to maneuver around all of them.
    Joshua Natividad addressed the naysayers by saying that if a robot or a human gets into an accident, they both mean the same thing: both were drivers who got into accidents. Joshua argues that “A robot uses pre-made rules that it must follow and achieve. A human goes through a childhood in which they create their own rule sets that they follow.” He is saying both a human and a robot have some kind of brain and rule system, both of which make mistakes, so they should be treated the same way. Elena takes a viewpoint similar to Joshua's. She believes that robots are great, but relying too much on machines can lead to errors just like the ones humans make, so why take the risk? Elena is right in saying that “robots will never be able to completely replicate a human and that to attempt to do this would cause unforeseen consequences.” Attempting to give robots the jobs humans have always done could cause problems we wouldn't even see coming.
    Henig, Joshua Natividad, and Elena all had one central idea in mind: computers are great, but they can mess up just like humans can. Driverless cars and human drivers both go out on the road not knowing what is to come. I agree that computers aren't as great as everyone thinks, because my experience with them is that computers crash and go crazy and you don't always know how to fix the problem. Humans make poor decisions and can do horrible things, but they never shut down unexpectedly. Overall, I think machines are trying to take over the world, and that isn't the best thing for our society.


  35. Ashley C

    Many bloggers here would think that robots would be a good change in our society, but a select few would probably object, because in reality a robot helping humans would not always be best in certain situations. In the article, you read about Sylvia, an injured elderly woman, and her caregiver robot, Fabulon. The desired outcome is for Fabulon, the robot caregiver, to do its job and bring Sylvia her painkillers, but the robot was not able to complete the job: Fabulon could not reach its supervisor because the wireless connection was down.
    With that being said, I agree with Alannah and Cayman, because they both say that although having a robot could be a good thing, it is also bad: robots can easily break down and stop right in the middle of a job, and then you will not get anything out of them. I agree when Alannah says a human “wouldn't just stop working”, because it is true. If you asked a human to go get the painkillers, they would not just randomly stop; they would get them and bring them back. But a robot can stop working and fail to finish its task, and that is what happened to Fabulon. I also feel that robots do not think the same as humans and are not able to make the same decisions; they only think about what they were made to do and nothing else.


  36. Abby L.

    Writing in The New York Times Magazine, Robin Henig discusses how we use technology every day and how that use keeps growing. The article, “Death by Robot,” opens with the example of Sylvia, an injured elderly woman, and Fabulon, her robot caregiver. Matthias Scheutz of the Human-Robot Interaction Laboratory at Tufts University explains the dilemma: “On the one hand, the robot is obliged to make the person pain free; on the other hand, it can’t make a move without its supervisor, who can’t be reached.” So Sylvia cannot be given her medication. The argument here is that human caregivers have morals and can make decisions, unlike robots, which have to keep to strict rules. As discussed throughout the article, computer scientists are teaming up with philosophers, linguists, lawyers, and human-rights experts to identify the decision points a robot would need to work through in order to reason about right and wrong the way humans do. According to Alannah, there have been movies, such as “I, Robot,” showing what life would be like with robots: they can help humans, but they can do the opposite as well. Her position is that robots should not be created to help humans, first because robots can malfunction, and second because robots don’t have ethics and will never be able to think and make decisions the way humans do. According to Elena, robots contribute greatly to our society and often do their jobs better than humans; however, she believes a certain danger lies in relying on machines, since there are too many situations for a coder to anticipate and supply the robot with the correct response for each one.
    My own view is that we should use technology and continue to improve it, but we should not put too much trust in machines to provide safety. Robots are used today for many things, but they should not get to the point of thinking like humans or taking the place of humans in certain jobs.
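
    To make the conflict Scheutz describes concrete, here is a minimal Python sketch of two hard rules deadlocking. It is purely illustrative: none of the code comes from the article, and every name in it is invented.

```python
# Hypothetical sketch of the Fabulon stalemate: the robot must keep the
# patient pain free, but may not dispense medication without supervisor
# approval, and the supervisor can be unreachable. All names invented.
from typing import Optional

def supervisor_approves(request: str, connected: bool) -> Optional[bool]:
    """Return the supervisor's decision, or None if unreachable."""
    if not connected:
        return None  # wireless link is down, as in Sylvia's case
    return True      # stand-in for a real approval workflow

def caregiver_decide(patient_in_pain: bool, connected: bool) -> str:
    # Rule 1: the robot is obliged to keep the patient pain free.
    if not patient_in_pain:
        return "monitor"
    # Rule 2: it may not dispense medication without approval.
    approval = supervisor_approves("dispense painkiller", connected)
    if approval is None:
        # Both rules apply and neither can be satisfied: the stalemate
        # the article opens with.
        return "stalemate: keep retrying the supervisor"
    return "dispense painkiller" if approval else "withhold"

print(caregiver_decide(patient_in_pain=True, connected=False))
```

    The “decision points” those researchers are mapping are exactly the places where rules like these collide.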

  37. Fatima T.

    When Henig introduces her argument with a short story about an elderly woman asking her robot for medication, she lays out the basic problem with having robots make decisions for a human: the results can be very negative, yet finding a way to solve the problem is a must.
    Nevertheless, both followers and critics of “Death by Robot” will probably argue that there is no way robots can be improved to the point that they can fully help the human race by making good decisions that could save a life. Shaun himself writes, “I definitely feel as if robotics could make our lives easier, but lets face it, we’re not gonna go into some world where ‘i robot’ happens.” Shaun’s point is that a robot that makes decisions entirely on its own is not realistic. However, Henig points out that Scheutz believes robots can develop better and more logical decision-making skills on their own; in Scheutz’s view, “one day robots will be even more morally consistent” than humans. The essence of Scheutz’s argument is that it is possible for robots to become even more advanced.
    I believe that robots cannot be improved to the point of making moral decisions on their own, because technology has not yet advanced to where robots can decide anything without some type of human-made programming. Whether future studies will show scientists overcoming that barrier, something previous studies have not addressed, remains to be seen.

  38. Alyssa

    Although using robots in place of human beings can advance technology, major consequences can occur. In the scenario with Sylvia and her helper robot Fabulon, Henig showcases the flaws of a robot bound by constraints, like a lost wireless connection, that a human would not have. Most of these autonomous robots are unable to express or detect human emotions, and in extreme cases they would be unable to make a humanly logical decision. For example, in agreement with Henig, driverless cars are rarely safe or capable in a town setting because of the various stoplights and human errors at intersections; at such a crossroads the robot would either make an extreme error or only rarely choose the most morally correct answer. In addition, most robots need repairs, which makes them even less reliable than humans; at any given time a good percentage of these machines would be stuck in a repair shop, unlike a human. In all reality, robots may be good for technological advancement, but they aren’t always healthy for our well-being.

  39. Max Z

    This whole debate about robots and humans can go back and forth for one main reason: it is natural for humans to make mistakes, and technology makes mistakes too; no man and no machine is born or created perfect. That said, there would still be the same number of accidents with a machine-driven car as with a human driving. In Henig’s example, Sylvia, an injured elderly woman with two broken ribs, is on bed rest while a robot takes care of her, giving her pain medication when told to. The robot has to decide whether to relieve Sylvia’s pain by giving her the painkillers or to wait for further notice; the connection is down, the robot cannot contact anyone for further instructions, and so it is left to think for itself. We humans were born to think and make decisions. After reading this article, I see that robots and humans are alike in some ways, in human error and robotic error, but not in the ability to think and feel. Robots will never fully understand what it means to be human, though robots could be the answer for a lot of things in the near future. Our human instincts and human morals are things we need to survive and to communicate with one another. In a crisis we rely on human instinct to kick in and save the day; robots do not and never will have human instincts, because they are machines.

  40. Sonia DeMaio

    Many Americans assume that when you write, you are just talking to yourself. But the book explains that every piece of writing is a response to something else, and if it weren’t for that something else, you wouldn’t have anything to say or write about. I personally think that is a very interesting way to look at it, and when you think about it, it’s true. When you write an essay in school, you are responding to the question or prompt you were given, and without it you would have nothing to write about. Even when you are writing for fun on your own, something you heard or saw gave you the ideas for what you are writing. When you realize that every idea comes from another idea, it makes you think and look at things a little differently.

  41. craigoryjarod

    In her article “Death by Robot,” Robin Marantz Henig discusses the rise of robotics in society and the importance robots will most likely hold in the near future. Having read the article, it’s evident that Henig supports the introduction of robotics into society, such as the self-driving car, robotic military weapons, and health-aid assistants. In the last paragraphs she quotes Wendell Wallach, chairman of the technology-and-ethics study group at Yale’s Interdisciplinary Center for Bioethics, as well as Matthias Scheutz of the Human-Robot Interaction Laboratory at Tufts University. Wallach believes that in the future a robot’s behavior will be “indistinguishable” from that of a human. Scheutz believes that robots will eventually become “more morally consistent than humans.” Although some may feel the discomfort the article calls the “uncanny valley” at the prospect of robots one day being more ethically consistent than humans, I agree with the experts’ optimistic view. I feel the same sort of optimism expressed by Wallach and Scheutz, and I lean toward Henig’s advocacy rather than the discomfort and disagreement of those who oppose advanced robots taking roles in society. I believe that technology advanced enough to approximate human qualities would be extremely beneficial to the further development of society and would provide immense technological and societal opportunity.

  42. Trevor P

    After reading Henig’s article “Death by Robot,” it is clear that she is addressing several key issues that would arise if robots became leading actors in every field: automated cars that would have to judge whether to hit a car or a person if the moment came, and, an idea that piqued my interest, military robots that would have to decide whether or not to shoot someone. That idea could very soon change the way warfare is seen altogether. These military robots would have to work through ethics and morality when faced with human interaction. Is this person injured? Are there civilians around? Is this person needed for interrogation? Such questions come to mind when considering an idea like this (a toy version of that checklist is sketched below). But what about an all-out war between robots? The future of warfare could very well be machines destroying one another far away from humans: a wasteland of scrap metal in the end, with the last robot standing deciding which side has won the war. A striking idea that could very well become a reality very soon.
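
    As a purely hypothetical sketch, that checklist can be written as a decision procedure. Every rule below is invented for illustration; none comes from the article or from any real rules of engagement.

```python
# Hypothetical checklist: a military robot working through yes/no
# decision points before it may engage. All rules invented.

def may_engage(target_armed: bool, target_injured: bool,
               civilians_nearby: bool, needed_for_interrogation: bool) -> bool:
    """Walk the decision points in order; any blocking condition stops fire."""
    if not target_armed:
        return False  # no hostile act or intent
    if target_injured:
        return False  # an injured fighter may not be targeted
    if civilians_nearby:
        return False  # unacceptable risk to bystanders
    if needed_for_interrogation:
        return False  # capture is preferred over kill
    return True

# Even the toy version shows the hard part: real situations do not
# arrive as clean booleans, so judging the inputs is the moral work.
print(may_engage(True, False, True, False))  # -> False
```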

  43. Carl S.

    “Death By Robot” by Robin Marantz Henig is an excellent article on nytimes.com. It sums up my opinion rather well as it discusses the moral dilemma of giving morals to robots. That sounds strange, but the real selling point of the article is the first paragraph, about a medical-care robot unable to decide whether to give someone medication, because it cannot complete the primary step in its programming without conflicting with other parts of that programming, producing a mental stalemate. The article dives deeper than that as well, going so far as to bring up the decision-making ability of Google’s autonomous car and how it should choose a course of action to limit damage. I agree with this article: I believe giving morals to robots could be a beneficial thing but, as the article confronts, it can also have drawbacks.

  44. Andrew Pehlke

    I feel the issue with robot morality is that robots and machines will always make decisions based on the algorithms they are programmed with. This can be a good thing, since it would eliminate the human emotions that are often at fault when humans are forced to make important decisions (panic, sluggishness, distraction). However, it would also eliminate a human’s ability to abstractly weigh the possibilities in a scenario. Blindly choosing an action based on predetermined data, as robots do, is dangerous when many situations do not have a clear-cut correct answer. This is where, in my opinion, our ability to think abstractly gives us the largest advantage over robots. Robots are told what the correct answer is in their set of algorithms; they then react in the way that makes that correct choice most likely. But what the correct choice is varies greatly by situation. It is, in fact, our ability to read each situation and determine the correct choice for that specific situation that, to me, makes it very difficult to program a robot to make the same kinds of decisions as a human.
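
    A minimal sketch of this contrast, with every situation, action, and name below invented for illustration: the robot’s “decision” is a lookup in a predetermined table, so any case the programmer never anticipated falls through to a blind default.

```python
# Hypothetical table-driven decision-making: the robot can only pick
# from responses someone programmed in advance. All entries invented.
DECISION_TABLE = {
    ("pedestrian_ahead", "can_brake"): "brake",
    ("pedestrian_ahead", "cannot_brake"): "swerve",
    ("obstacle_ahead", "can_brake"): "brake",
}

def robot_decide(situation: str, condition: str) -> str:
    """Look up the pre-programmed response; fall back blindly if unknown."""
    return DECISION_TABLE.get((situation, condition), "brake")  # blind default

# A case the programmer never anticipated: the table has no entry,
# so the robot "decides" without reading the situation at all.
print(robot_decide("obstacle_ahead", "cannot_brake"))  # -> brake
```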

  45. Colin O'Bryan

    A self-driving car and a car driven by a person would behave very differently in an emergency. A robot can be re-uploaded if it is destroyed in a car accident, so it would not freeze up; it would stay calm because it will not die, and that calm would let the robot-driven car react faster than a human-driven one. Of course, I haven’t crashed automated cars and human-driven cars to test the results, so an explanation for my thinking comes from video games. Why doesn’t a gamer slow down in a game like Grand Theft Auto? Because death would only result in a respawn, much like a re-upload. The mortality of the driver, or in this example the player, is not part of the situation; in other words, a person’s self-preservation reaction is never triggered.

  46. Laura Rowe

    Robots are a very handy tool for our society to have and may be able to work longer than a human ever could. But I think that if we keep progressing as fast as we are with our technology, they could end up taking over most of our jobs. Also, robots are simply not human and cannot think the way we do, which is very dangerous, as we have seen in movies like “I, Robot,” starring Will Smith. We need to be careful about how smart we make these robots before it’s too late!

  47. James Hynes

    In the article “Death by Robot,” Robin Henig discusses how robots may react correctly in a normal situation but may not be able to improvise when things don’t go according to plan. I believe we should not give robots the ability to think outside the box. To me this would be very dangerous; it reminds me of the movie “I, Robot.” While I do believe robots can be very helpful to people, I think they should have limited roles. I don’t really think a robot-army attack would happen, but I do think a handful will go haywire.

  48. Dustin Mattingly

    The prospect of employing robots in our daily lives is an interesting one. This article brings up a few good points regarding the morality of robots. One example concerns military robots and their ability to shoot and kill a human. From all of the accounts I have come across, killing someone can take a heavy emotional toll on a person; it is never something done lightly, and this article implies that the killer shows a form of respect in bearing such a powerful emotion after killing someone. However, I do not believe that a lack of this emotion is a sign of disrespect. Surely someone will mourn such a loss, and it doesn’t seem likely that the person killed cares much about who or what killed them. Besides, if military robots were used by both sides of a conflict, the result could ultimately be a decrease in the loss of human life.

  49. Nicole Williams

    I believe that scientists need to re-evaluate what they are trying to make robots do in today’s society. A robot is supposed to enhance simple things, like basic medical radiation scanning or technology that makes a medical examination easier. Even while providing a service, robots tend to lack flexibility: if you don’t word something properly, you will not get the same results that a person with the relevant knowledge could give you. For example, on a math website a robot can give you the answer to a problem, but it cannot break the problem down the way you may need in order to succeed with it. Robots are programmed to process the answer immediately and solve it with no trouble, but that is definitely not how the human brain works; we need a foundation to learn and acquire new skills and abilities. To me, robots can produce more errors than humans do. Neither is perfect, but I don’t think a self-driving car, or a robot doing a surgeon’s job, is better than a human. For one, jobs are going to be lost; and for another, it is scary to think that something programmed through code is in charge of your life, when something in the programming could go wrong or it could stop working altogether in the operating room.

  50. Cory Sadler

    As a former member of the armed forces, I find the prospect of using robots in a military setting especially thought-provoking. While the ethical dilemma of machines killing humans is apparent, there is also the issue of accountability: who would be responsible for a malfunction that resulted in a war crime?
