Wicked problems

The following projects have been used for team projects at the Georgia Institute of Technology.

Designing the morality of things: Artificial intelligence in the workplace

In Moralizing Technology: Understanding and Designing the Morality of Things (2011), Peter-Paul Verbeek uses the example of ultrasound imaging to show that technologies often open up new spaces for ethically significant decisions, and that they shape ethical decision making itself. An ultrasound image constitutes the unborn as a person, independent of the body of its mother, and as a potential patient with “abnormalities,” but it also allows a kind of bonding among mother, father, and the unborn that was never possible before (pp. 24-27). “Moral decisions about pregnancy and abortion in many cases are shaped in interaction with the ways in which ultrasound imaging makes visible the unborn child. … Ultrasound imaging actively contributes to the coming about of moral actions and the moral considerations behind these actions” (p. 38). At the same time, there are other technologies that reduce the space for ethical decision making. If there is a speed bump in front of a school, you do not weigh the ethical question of whether you should slow down; you simply slow down. If you wear one of Amazon’s wristbands that tracks your hand movements and vibrates when there is not enough movement, there is no need for ethical motivation to work harder: you know that you are being monitored.

Technologies play a fundamental role in how we perceive the world and in how we act. This implies far-reaching moral responsibilities both for the designers of technologies and for their users.

Imagine your team is a task force in a large corporation that plans to invest heavily in the research and development of artificial intelligence in the workplace, focusing on tracking and controlling employees. The company is concerned about its reputation and wants to move forward in a way that takes ethical concerns into account. Your team is charged with developing guidelines that will determine the design of this new technology. What should these guidelines include? Take into account the broadest variety of possible proposals—based on a stakeholder analysis—and then develop what seems to be the most convincing set of guidelines.

(problem_id: 480)

Designing the morality of things: Robotic Caregivers for the Elderly

For the framing of this problem, see the discussion of Verbeek’s Moralizing Technology in “Designing the morality of things: Artificial intelligence in the workplace” above: technologies shape how we perceive the world and how we act, which implies far-reaching moral responsibilities for both designers and users.

Imagine your team is a task force in a large corporation that plans to invest heavily in the research and development of robotic caregivers for the elderly. The company is concerned about its reputation and wants to move forward in a way that takes ethical concerns into account. Your team is charged with developing guidelines that will determine the design of this new technology. What should these guidelines include? Take into account the broadest variety of possible proposals—based on a stakeholder analysis—and then develop what seems to be the most convincing set of guidelines.

(problem_id: 735)

Designing the morality of things: Designing robots and AI that counter human biases

For the framing of this problem, see the discussion of Verbeek’s Moralizing Technology in “Designing the morality of things: Artificial intelligence in the workplace” above: technologies shape how we perceive the world and how we act, which implies far-reaching moral responsibilities for both designers and users.

Imagine your team is a task force in a large corporation that plans to invest heavily in the research and development of robots and AI that counter human biases. The company is concerned about its reputation and wants to move forward in a way that takes ethical concerns into account. Your team is charged with developing guidelines that will determine the design of this new technology. What should these guidelines include? Take into account the broadest variety of possible proposals—based on a stakeholder analysis—and then develop what seems to be the most convincing set of guidelines.

(problem_id: 736)

Designing the morality of things: Robots that “nudge” people to be more ethical

For the framing of this problem, see the discussion of Verbeek’s Moralizing Technology in “Designing the morality of things: Artificial intelligence in the workplace” above: technologies shape how we perceive the world and how we act, which implies far-reaching moral responsibilities for both designers and users.

Imagine your team is a task force in a large corporation that plans to invest heavily in the research and development of robots that “nudge” people to be more ethical. The company is concerned about its reputation and wants to move forward in a way that takes ethical concerns into account. Your team is charged with developing guidelines that will determine the design of this new technology. What should these guidelines include? Take into account the broadest variety of possible proposals—based on a stakeholder analysis—and then develop what seems to be the most convincing set of guidelines.

(problem_id: 737)

Designing the morality of things: Using AI-based learning technologies in college education

For the framing of this problem, see the discussion of Verbeek’s Moralizing Technology in “Designing the morality of things: Artificial intelligence in the workplace” above: technologies shape how we perceive the world and how we act, which implies far-reaching moral responsibilities for both designers and users.

Imagine your team is a task force in a large corporation that plans to invest heavily in the research and development of AI-based learning technologies for college education. The company is concerned about its reputation and wants to move forward in a way that takes ethical concerns into account. Your team is charged with developing guidelines that will determine the design of this new technology. What should these guidelines include? Take into account the broadest variety of possible proposals—based on a stakeholder analysis—and then develop what seems to be the most convincing set of guidelines.

(problem_id: 738)

Brain-computer interfaces

Advances in brain-related science and technology open up new possibilities for bridging, in both directions, the boundary between what is inside our heads and what is outside.

On the one hand, there is remarkable progress in decoding neural activity and using that code to control external devices. Paralyzed persons are able to use a robotic arm; Facebook recently revealed plans to create a “silent speech” interface that would allow people to type at 100 words a minute straight from their brains; the Defense Advanced Research Projects Agency (DARPA) is developing similar technology to allow non-vocalized communication between soldiers on the battlefield; and many more such projects are under way.

On the other hand, technologies are being developed to intervene in brain activities, for example to treat such conditions as Parkinson’s disease or epilepsy, but also to provide vision for the blind. Visionaries imagine a world in which humans use these technologies to “upgrade” their capabilities, “to acquire new skills at will or to communicate telepathically with others” (Economist, Jan 6, 2018). Elon Musk talks about pumping “images from one person’s retina straight into the visual cortex of another; creating entirely new sensory abilities, from infrared eyesight to high-frequency hearing; and ultimately, melding together human and artificial intelligence” (ibid.).

It is obvious that the benefits of these technologies can be life changing. But they also raise serious ethical concerns. The ability to decode neural activity leads to questions such as: What kinds of information would the technology collect? What about private thoughts? Who would own the data? How would privacy be protected? How could the accuracy of the technology be ensured? And for technologies that intervene in brain activities: What are the societal consequences if only a few can afford substantial cognitive enhancement? What would intervening in brain activities mean for the possibility of autonomous decision making? Will brain interventions change the notion of responsibility for one’s actions? What about potential misuse of these technologies, for example to manipulate people? It might even become possible to enslave people without their realizing it. What about those who might enjoy electronic self-manipulation (like taking the blue pill in The Matrix)? What would be the societal consequences of that? How would brain-computer interfaces change what it means to be human?

Imagine your team is a task force in a large company that invests heavily in the research and development of brain-computer interfaces. The company is concerned about its reputation and wants to move forward in a way that takes ethical concerns into account. Your team is charged with developing guidelines that will determine the design of brain-computer interfaces that the company plans to produce in the future. What should these guidelines include? Take into account the broadest variety of possible proposals—based on a stakeholder analysis—and then develop what seems to be the most convincing proposal.

(problem_id: 481)

Facial Recognition Technology

Facial recognition technology (FRT) can identify people from their images; it can also be used to identify certain diseases, traits such as intelligence or sexual orientation, or moods such as a tendency to violence. Some companies want to use FRT to let people purchase goods simply by a nod or another preprogrammed response; others want to scan for known criminals and identify new ones; and still others want to allow people to better organize their personal life by keeping a highly searchable visual record of their activities. But all of this pivots on taking pictures of people—in public or private, with or without consent, online or offline—and comparing them with pictures in a database that also contains personal information. Somebody will know who you are, where you are, and what you are doing. Does FRT hold the key to a safer, more efficient, and smarter world? If so, at what cost? Is opting out even feasible for this type of technology? What kind of governance structures should be established for regulating FRT?

Imagine your team is a task force that is charged with drafting a law that regulates the design or use of facial recognition technology in public places. What should such a law include? Take into account the broadest variety of possible proposals—based on a stakeholder analysis—and then develop what seems to be the most convincing proposal.

Harari, Y. N. (2018). Why Technology Favors Tyranny. Artificial intelligence could erase many practical advantages of democracy, and erode the ideals of liberty and equality. It will further concentrate power among a small elite if we don’t take steps to stop it. The Atlantic, (Oct), 64-70. Retrieved from https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/

Economist. (2017, Sep 9). Nowhere to hide: What machines can tell from your face. Economist, 11. Retrieved from https://www.economist.com/leaders/2017/09/09/what-machines-can-tell-from-your-face

Economist. (2017, Sep 9). Facial technology: Advances in AI are used to spot signs of sexuality. Economist, 73. Retrieved from https://www.economist.com/science-and-technology/2017/09/09/advances-in-ai-are-used-to-spot-signs-of-sexuality

Economist. (2018, June 2). Apartheid with Chinese characteristics: China has turned Xinjiang into a police state like no other. Economist, 19-22. Retrieved from https://www.economist.com/briefing/2018/05/31/china-has-turned-xinjiang-into-a-police-state-like-no-other

Economist. (2018, June 2). Technology and surveillance: Does China’s digital police state have echoes in the West? Economist, 11. Retrieved from https://www.economist.com/leaders/2018/05/31/does-chinas-digital-police-state-have-echoes-in-the-west

(problem_id: 475)

Facial Recognition Technology (short)

The problem description and task are the same as in “Facial Recognition Technology” above, without the reading list.

(problem_id: 479)

Robotic Caregivers for the Elderly

Given that certain populations in the world are rapidly aging, especially in the United States, Western Europe, and Japan, there is a pressing need to address their health care needs. One potential option is to provide these individuals with robots that could assist them with their medications and other health-related tasks. Yet there are many ethical issues to examine relating to this “technological fix,” including whether it may decrease the amount of human contact that the elderly receive, and privacy issues arising from the fact that these robots will collect data about the people in their care. Furthermore, effects on employment are also relevant.

Imagine your team is a task force that is charged with drafting a law that regulates the design or use of robots for the elderly. What should such a law include? Take into account the broadest variety of possible proposals—based on a stakeholder analysis—and then develop what seems to be the most convincing proposal.

(problem_id: 476)

Micro-targeting in social media

Social media allows effortless communication with large numbers of people. People enjoy using Facebook, YouTube, Twitter, and other social media, and these platforms are useful for spreading information and for organizing things quickly, including politically, as in the civic uprisings of the Arab Spring or in Ukraine. In recent years, however, several concerns have been raised that might be serious enough—or have the potential to become serious enough—that social media should be regulated to a certain degree.

One of these concerns is that social media companies can measure our reactions to personal posts, news stories, pictures, and ads, and use these data for micro-targeted advertising. Political parties and interest groups (lobbyists, wealthy individuals, PACs, Super PACs, etc.) increasingly use similar data, algorithms, and artificial intelligence to test how people with specific personality traits (openness, conscientiousness, extroversion, agreeableness, and neuroticism) and sensational interests (militarism, the violent-occult, intellectual interests, paranormal credulousness, and wholesome activities) react to thousands of different versions of a political ad, so that they can tweak it to reach us almost on an individual level (Economist 2017a, 2017b). These technologies can substantially increase the efficacy and efficiency of manipulation and propaganda; they can reinforce people’s prejudices and biases; and they can contribute to societal fragmentation and polarization.

Imagine your group is an expert team that a think-tank convened to answer the question how societies should deal with the problems listed above. Should there be new laws or regulations? How could unintended consequences of such regulation be avoided? How could social media be returned to a position where it is actually benefiting humanity? Take into account the broadest variety of possible proposals—based on a stakeholder analysis—and then develop what seems to be the most convincing proposal.

References:

(problem_id: 36; also 477)

Controversial figures and events of the past: How to deal with them if eradicating them from history is not an option?

[Version for citizen participants]

Highly partisan debates over the removal of Confederate memorials tend to overlook a real problem that goes beyond the dichotomy of honoring the memory “of the most courageous and patriotic serviceman the world has ever known: the Confederate soldier” (Lochlainn Seabrook, 2018) and fighting the celebration of “those who defended slavery and tried to destroy the Union” (Stacey Abrams, the Democratic candidate for Georgia governor). The problem is that we always need to define who we are in relation to a past that cannot simply be ignored. What should we tell our children in school? How should we present ourselves in public spaces?

Communities all over America will, at some point in time, need to determine how they can deal with controversial figures and events from their past. In order to generate knowledge and experiences of how this can be achieved, we invite you to engage in a deliberative process that focuses on a particular memorial or street name in the City of Atlanta that is connected to the Confederacy.

Your first task is to determine on which memorial or street name you want to work over the coming weeks.

For the next step in this project, please follow the instructions that are provided in the Reflect! work plan.

(problem_id: )

[Version for student support team]

Highly partisan debates over the removal of Confederate memorials tend to overlook a real problem that goes beyond the dichotomy of honoring the memory “of the most courageous and patriotic serviceman the world has ever known: the Confederate soldier” (Lochlainn Seabrook, 2018) and fighting the celebration of “those who defended slavery and tried to destroy the Union” (Stacey Abrams, the Democratic candidate for Georgia governor). The problem is that we always need to define who we are in relation to a past that cannot simply be ignored. What should we tell our children in school? How should we present ourselves in public spaces? How should, for example, the city of Stone Mountain, Georgia, answer these questions, a city that not only neighbors America’s largest Confederate memorial and the birthplace of the second Ku Klux Klan but is also about 75% African American today? And how should the city of Cumming, Georgia, answer them, where the infamous “night riders” launched a coordinated campaign of arson and terror against black citizens in 1912 to “cleanse” the city, which was then kept “all white” well into the 1990s?

Communities all over America will, at some point in time, need to determine how they can deal with controversial figures and events from their past. In order to generate knowledge and experiences of how this can be achieved, the task of your team is to organize and facilitate a deliberative process among stakeholders in a community that is struggling with this challenge right now. Supported by the Atlanta History Center, and focusing on a real problem in a community, you will work with a small group of people who represent a variety of different worldviews, values, or interests. This group will need to meet about four times for 1-2 hours for deliberations that will be structured by the Reflect! work plan “Reflective Consensus Building on Wicked Problems. For deliberation among stakeholders,” which is available on the Reflect! platform.

(problem_id: 488)

Georgia, Florida and Alabama: Water wars

What could be done to resolve the dispute over water management in the Apalachicola-Chattahoochee-Flint (ACF) River Basin, in which Alabama, Georgia, and Florida have been involved for the last two decades?

(problem_id: 1)

Develop an interdisciplinary research proposal

Given that certain populations in the world are rapidly aging, especially in the United States, Western Europe, and Japan, there is a pressing need to address their health care needs. One potential option is to provide these individuals with robots that could assist them with their medications and other health-related tasks.

Imagine your team is preparing a grant proposal for a large interdisciplinary research project on “Robot Caregivers for the Elderly.” Since your proposal will succeed in getting funding only if all technical, organizational, financial, legal, ethical, and other issues are convincingly covered, your work on the research proposal should start with a stakeholder analysis: Who will be affected by such a fundamental change in providing care, and who will play a role in, or influence, any decision regarding robot care for the elderly? And what would these stakeholders propose should be done when it comes to using robots for care? Take the broadest variety of possible stakeholder proposals into account and then develop a list of components that should be essential for your research proposal.

Work through the Reflect! work plan to solve this problem. Your first task is to find agreement on the exact problem formulation. Feel free to replace the topic “Robot Caregivers for the Elderly” with any other topic of your choice. But make sure that your problem is formulated so that it requires a stakeholder analysis and a reflection on what the stakeholders would propose to solve the problem.

(problem_id: 441; also 478)
