THIS CASE WILL BE USED IN THE NYC/NJ & LONG ISLAND REGIONAL COMPETITIONS
Recent technological developments in artificial intelligence have enabled new techniques for manipulating images, audio, and video. Of particular concern among these is the ability to create and deploy AI-generated media, or “deepfakes.” Innovations in machine learning have greatly increased the availability and sophistication of fake audio and video clips, making it possible to realistically depict people saying or doing things they never actually said or did.
Deepfaking video footage for entertainment purposes may bring some interesting benefits. Recent examples have used deepfaking technology to create new fan-made content building upon the film industry’s CGI representations of older or deceased actors in some of its most popular films. A now-viral video builds upon footage from Lucasfilm’s 2016 Rogue One: A Star Wars Story and showcases lifelike and foreboding footage of franchise villain Grand Moff Tarkin (portrayed by Peter Cushing, who died in 1994). The video also shows a “de-aged” portrayal of a youthful Princess Leia Organa, one of the Star Wars franchise’s most beloved characters (portrayed by Carrie Fisher, who was in her late 50s at the time Rogue One was produced). Deepfake clips like these have delighted fans across the internet, and the YouTube creator who produced them has since been hired as a special effects artist at Lucasfilm.
Given that many citizens’ information environments are already complicated by frequent speculation, misinformation, and motivated reasoning, some researchers worry that deepfakes will accelerate “truth decay” by engaging citizens’ cognitive biases in ways that open both individuals and groups to “novel forms of exploitation, intimidation, and personal sabotage.”
Potential concerns about deepfakes range from questions about the accuracy of media portrayals to worries about the ability to convincingly put words into the (digital) mouths of high-profile public figures. In one recent case, documentary producer Morgan Neville admitted to commissioning a software company to create a synthetic audio voice for the documentary’s deceased subject, the late television star Anthony Bourdain. Neville did not disclose the presence of the AI-generated voice in the film, allowing viewers to believe that the voice was indeed Bourdain’s own.
Others worry about more profound social and political implications: for example, viral synthetic videos that falsely depict House Speaker Nancy Pelosi as visibly intoxicated during a press conference, or that put words never actually said into the mouth of former President Barack Obama. Many worry about an environment in which we can no longer trust what we see with our own eyes. As philosopher Regina Rini suggests, “we ought to think of images as more like testimony than perception. In other words, [we] should only trust a recording if [we] would trust the word of the person producing it.”
- How, if at all, should the use of AI-driven “deepfake” technology be constrained by policymakers?
- What skills and dispositions are needed for internet users to engage knowledgeably and deftly in an information world characterized by deepfakes?
- In what ways are deepfakes a new and distinctive threat to public discourse and understanding? In what ways are they not so different from other forms of misinformation?
This is case #13 from the 2021-2022 Regional HSEB Case packet, developed by the Parr Center for Ethics.
- Robert Chesney and Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review 107 (2019): 1753.