
Jennifer Ware: When bad stuff comes from technology


Via The New England Board of Higher Education (nebhe.org)

It’s an unpleasant reality, but also an inevitable one: Technology will cause harm.

And when it does, whom should we hold responsible? The person operating it at the time? The person who wrote the program or assembled the machine? The manager, board or CEO that decided to manufacture the machine? The marketer who presented the technology as safe and reliable? The politician who helped pass legislation making the technology available to consumers?

These questions reveal something important about what we do when bad things happen: we look for an individual—a particular person—to blame. It’s easier to make sense of how one person’s recklessness or conniving could result in disaster than to ascribe blame to an array of unseen forces and individuals. At the same time, identifying a “bad guy” preserves the idea that bad things happen because of rogue or reckless agents.

Sometimes—as when hackers create programs to steal data or drone operators send their unmanned aircraft into disaster zones and make conditions unsafe for emergency aircraft—it is clear who is responsible for the adverse outcome.

But other times, bad things are the result of decisions made by groups of people or circumstances that come about incrementally over time. Political scientist Dennis Thompson has called this "the problem of many hands." When systems or groups are at fault for causing harm, looking for a single person to blame may obfuscate serious issues and unfairly scapegoat the individual who is singled out.

Advanced technology is complex and collaborative in nature. That is why, in many cases, the right way to think about the harms caused by technology may be to appeal to collective responsibility. Collective responsibility is the idea that groups, as distinct from their individual members, are responsible for collective actions. For example, legislation is a collective action; only Congress as a whole can pass laws, while no individual member of Congress can exercise that power.

When we assign collective responsibility and recognize that a group or system is morally flawed, we can either try to alter it for the better or disband it if we believe it is beyond redemption.

But critics of collective responsibility worry that directing our blame at groups will cause individuals to feel that their personal choices do not matter all that much. Individuals involved in the development and proliferation of technology, for instance, might feel disconnected from the negative consequences of their contributions, seeing themselves as mere cogs in a much larger machine. Or they may adopt a fatalistic perspective, coming to see the trajectory of technological development as inevitable regardless of the harm it may cause, and thinking to themselves, “Why not? If I don’t, someone else will.”

Philosopher Bernard Williams argued against this sort of thinking in his “Critique of Utilitarianism” (1973). Williams presents a thought experiment in which “Jim” is told that if he doesn't kill someone, then 20 people will be killed—but if he chooses to take one life, the other 19 will be saved. Williams argues that, morally speaking, what someone else will or will not do as a result of Jim's choice is beside the point. To maintain personal integrity, Jim must not do something that is wrong—killing one person—despite the threat.

In the case of an individual who might help develop “bad” technology, Williams’ argument would suggest that it does not matter whether someone else would do the job in her place; to maintain her personal integrity, she must not contribute to something that is wrong.

Furthermore, at least some of the time our sense of what is inevitable may be overly pessimistic. Far from being excused for our participation in the production of harmful technologies that seem unavoidable, we may have a further obligation to fight their coming to be.

For many involved in the creation and proliferation of new technologies, there is a strong sense of personal and shared responsibility. For example, employees at powerful companies such as Microsoft and Google have made efforts to prevent their employers from developing technology for militarized agencies, such as the Department of Defense and Immigration and Customs Enforcement. Innovators who have expressed some remorse for the harmful applications of their inventions include Albert Einstein (who encouraged research that led to the atomic bomb), Kamran Loghman (the inventor of weapons-grade pepper spray), and Ethan Zuckerman (inventor of the pop-up ad), all of whom later engaged in activities intended to offset the damage caused by their innovations.

Others try to make a clear distinction between how they intended their innovations to be used and how those technologies have actually come to be used down the line—asserting that they are not responsible for those unintended downstream applications. Marc Raibert, the CEO of Boston Dynamics, tried to make that distinction after a video of the company’s agile robots went viral and inspired dystopian fears in many viewers. He stated in an interview: "Every technology you can imagine has multiple ways of using it. If there's a scary part, it's just that people are scary. I don't think the robots by themselves are scary."

Raibert’s purist approach suggests that the creation of technology, in and of itself, is morally neutral and only applications can be deemed good or bad. But when dangerous or unethical applications are so easy to foresee, this position seems naive or willfully ignorant.

Ultimately, our evaluations of responsibility must take into account a wide range of factors, including collective action, the relative power and knowledge of the individuals involved, and whether any efforts were made to alter or stop the wrongs in question.

The core ethical puzzles here are not new; these questions emerge in virtually all arenas of human action and interaction. But the expanding frontiers of innovation can make it harder to see how we should apply our existing moral frameworks in a new and complicated world.

Jennifer Ware is an editor at Waltham, Mass.-based MindEdge Learning who teaches philosophy at the City University of New York. 


Jennifer Ware: 'The Trolley Problem' and robotics


From the New England Board of Higher Education (nebhe.org)


NEBHE's Commission on Higher Education & Employability has thought hard over the past year about the increasing role of artificial intelligence and robotics in the future of life and work.

Many others are also waking up to this landscape, which not so long ago seemed like science fiction. Waltham, Mass.-based MindEdge Learning, for example, plans to devote regular blog posts to ethical questions in a world in which humans coexist with sophisticated—even humanoid—robots.

The first blog post by Jennifer Ware, a MindEdge editor who teaches philosophy at CUNY, begins by asking: What happens when robots do immoral things? Whom do we hold responsible? How do we navigate the fact that our own biases and prejudices will inevitably make their way into programs for the machines we build? Should we construct sophisticated, humanoid machines simply because we can? Is it wrong to treat robots in certain ways, or fail to treat them in other ways? How might our relationships with robots color our relationships with other humans? What happens if robots take over parts of the workforce? How might robots give us insight into what it means to be human? She writes:

Programming Machines to Make Moral Decisions: The Trolley Problem

"Machines have changed our lives in many ways. But the technological tools we use on a day-to-day basis are still largely dependent on our direction. I can set the alarm on my phone to remind me to pick up my dry cleaning tomorrow, but as of now, I don’t have a robot that will keep track of my dry cleaning schedule and decide, on its own, when to run the errand for me.

As technology evolves, we can expect that robots will become increasingly independent in their operations. And with their independence will come concerns about their decision-making. When robots are making decisions for themselves, we can expect that they’ll eventually have to make decisions that have moral ramifications–the sort of decisions that, if a person had made them, we would consider blameworthy or praiseworthy.

Perhaps the most talked-about scenario illustrating this type of moral decision-making involves self-driving cars and the “Trolley Problem.” The Trolley Problem, introduced by Philippa Foot in 1967, is a thought experiment intended to clarify the kinds of things that factor into our moral evaluations. Here’s the gist:

Imagine you’re driving a trolley, and ahead you see three people standing on the tracks. It’s too late to stop, and these people don’t see you coming and won’t have time to move. If you hit them, the impact will certainly kill them. But you do have the chance to save their lives! You can divert the trolley onto another track, but there’s one person in that path who will be killed if you choose to avoid the other three. What should you do?

Intuitions about what is right to do in this case tend to bring to light different moral considerations. For instance: Is doing something that causes harm (diverting the trolley) worse than doing nothing and allowing harm to happen (staying the course)? Folks who think you should divert the trolley, killing one person but saving three, tend to care more about minimizing bad consequences. By contrast, folks who say you shouldn’t divert the trolley tend to argue that you, as the trolley driver, shouldn’t get to decide who lives and dies.

The reality is that people usually don’t have time to deliberate when confronted with these kinds of decisions. But automated vehicles don’t panic, and they’ll do what we’ve told them to do. We get to decide, before the car ever faces such a situation, how it ought to respond. We can, for example, program the car to run onto the sidewalk if three people are standing in the crosswalk who would otherwise be hit–even if someone on the sidewalk is killed as a result.

This, it seems, is an advantage of automation. If we can articulate idealized moral rules and program them into a robot, then maybe we’ll all be better off. The machine, after all, will be more consistent than most people could ever hope to be.
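
To make that idea concrete, here is a minimal sketch, in Python, of what a rule like “minimize harm” might look like once it is written down as code. The Outcome class, the choose_action function and the casualty figures are illustrative assumptions made for this post, not a description of any real autonomous-driving system.

# A toy, purely illustrative model of the trolley-style choice described
# above; nothing here reflects a real driving system.
from dataclasses import dataclass

@dataclass
class Outcome:
    """One available action and the casualties we predict it would cause."""
    action: str
    expected_casualties: int

def choose_action(outcomes):
    """A crude consequentialist rule: pick whichever action is predicted
    to harm the fewest people (ties broken by list order)."""
    return min(outcomes, key=lambda o: o.expected_casualties)

# The scenario from the text: stay the course and hit three people in the
# crosswalk, or swerve onto the sidewalk where one bystander would be struck.
scenario = [
    Outcome("stay on course", expected_casualties=3),
    Outcome("swerve onto sidewalk", expected_casualties=1),
]

print(choose_action(scenario).action)  # prints "swerve onto sidewalk"

Written this way, the rule is at least perfectly consistent; whether it is the right rule is another question.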

But articulating a set of shared moral guidelines is not so easy. While there’s good reason to think most people are consequentialists–responding to these situations by minimizing pain and suffering–feelings about what should happen in Trolley cases are not unanimous. And additional factors can change or complicate people’s responses: What if the person who must be sacrificed is the driver? Or what if a child is involved? Making decisions about how to weigh people’s lives should make any ethically minded person feel uncomfortable. And programming those values into a machine that can act on them may itself be unethical, according to some moral theories.

Given the wide range of considerations that everyday people take into account when reaching moral judgments, how can a machine be programmed to act in ways that the average person would always see as moral? In cases where moral intuitions diverge, what would it mean to program a robot to be ethical? Which ethical code should it follow?
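
One way to see why that last question matters: the very same predicted outcomes can yield opposite decisions under different rules. The short sketch below is again purely illustrative; the two functions stand in, very roughly, for a consequentialist rule and for a constraint against actively redirecting harm onto a bystander.

# A self-contained toy comparison: the same scenario, evaluated under two
# different candidate "ethical codes", produces opposite decisions.
scenario = {"stay on course": 3, "swerve onto sidewalk": 1}  # action -> predicted casualties

def minimize_casualties(options):
    # Consequentialist rule: choose the action predicted to harm the fewest people.
    return min(options, key=options.get)

def never_redirect_harm(options):
    # Crude deontological constraint: never actively divert harm onto a
    # bystander, even when staying the course harms more people.
    return "stay on course" if "stay on course" in options else next(iter(options))

print(minimize_casualties(scenario))  # prints "swerve onto sidewalk"
print(never_redirect_harm(scenario))  # prints "stay on course"

Neither rule settles the philosophical question; the point is simply that whoever writes the code has to choose one, and that choice is itself a moral judgment.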

Finally, using the Trolley Problem to think about artificial intelligence assumes that the robots in question will recognize all the right factors in critical situations. After all, asking what an automated car should do in a Trolley Problem-like scenario is only meaningful if the automated car actually “sees” the pedestrians in the crosswalk. But at this early stage in the evolution of AI, these machines don’t always behave as expected. New technologies are being integrated into our lives before we can be sure that they’re foolproof, and that fact raises important moral questions about responsibility and risk.

As we push forward and discover all that we can do with technology, we must also include in our conversations questions about what we should do. Although those questions are undoubtedly complicated, they deserve careful consideration–because the stakes are so high."