Navigating the AI-Driven Landscape of Media Manipulation
The rise of artificial intelligence (AI) has revolutionized many fields, but its application to media manipulation poses significant threats to public discourse and democracy. As AI-generated content becomes more sophisticated and harder to detect, the potential for disinformation and misinformation grows with it. These deceptive tactics can undermine trust in journalism and sway public opinion on critical issues. For example:
- AI-generated images have been used to spread disinformation and conspiracy theories related to major events, such as Hurricanes Milton [ref] and Helene [ref]
- AI-driven phishing attempts, including romance scams, have resulted in millions of dollars in losses [ref]
Beyond these examples, generative AI can amplify false narratives across many other domains, with far-reaching consequences.
Understanding AI-Manipulated Media
AI-manipulated media, such as deepfake videos, is content that has been altered or generated by AI to deceive or misinform the public. Such content can distort public perception and influence societal attitudes, potentially leading to voter manipulation, polarization, or declining trust in legitimate news sources. Manipulated media spans a variety of forms and modalities.
Modalities of Manipulated Media:
- Text – The rise of Large Language Models (LLMs) has enabled the rapid spread of misinformation through mechanisms such as fabricated scientific papers [ref], AI-generated news articles [ref], and bot-driven narratives [ref]. For example, this year the Justice Department seized two domain names and searched nearly a thousand social media accounts linked to bad actors who created an AI-enhanced bot farm to spread disinformation [ref, ref2]. (A simple text-screening heuristic is sketched after this list.)
- Images – Tools like Stable Diffusion and DALL-E can turn simple text prompts into hyper-realistic scenes that never existed, while techniques such as generative fill allow existing images to be edited in ways that misrepresent reality. These advancements raise an array of concerns, from the validity of photographic evidence in court cases [ref] to copyright disputes over artists’ personal styles [ref]. Furthermore, synthetic images can sway public sentiment, for example by falsely suggesting celebrity endorsements of political candidates [ref].
- Audio – Audio can also be generated or manipulated to create realistic voice clones of real people. This was exploited in January 2024, when an estimated tens of thousands of New Hampshire voters received a robocall in the cloned voice of President Biden instructing them not to vote in the upcoming primary [ref]. A recent UC Berkeley study found that people struggle to reliably identify AI-generated voices, mistaking them for real ones about 80% of the time [ref].
- Video – AI-driven video editing tools can create deepfake videos that misrepresent individuals’ statements or actions. Deepfake videos have been employed for digital scams and fraud [ref], as well as for creating misleading fake news broadcasts [ref].
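To make the text modality above concrete, the sketch below shows one widely used screening signal: perplexity under a reference language model. Machine-generated text tends to be more statistically predictable (lower perplexity) than human writing. This is a minimal illustrative heuristic, not one of the detectors cited above; GPT-2 serves here as a stand-in scoring model, and any decision threshold would need to be calibrated on labeled data.

```python
# Minimal sketch: score text by perplexity under GPT-2.
# Lower perplexity = more "model-like" text. Illustrative only;
# real detectors combine many signals and calibrated thresholds.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Mean per-token perplexity of `text` under the reference model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids yields the mean cross-entropy loss.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

sample = "The committee announced its findings at a press conference today."
print(f"perplexity = {perplexity(sample):.1f}")
```

Note that perplexity alone is noisy: short passages, quotations, and non-native writing can all skew scores, which is one reason production systems combine many signals rather than relying on a single statistic.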
The scale and sophistication of AI-manipulated media present substantial challenges. Manipulated media often exploits cognitive biases, making it easier for misinformation to spread and take root in the public consciousness, and the sheer volume of generated content can overwhelm efforts to detect and counter it.
Tools for Awareness and Prevention
Spotting AI-manipulated media can feel daunting in today’s dynamic, fast-paced information landscape, and studies, like the one by Nexcess, have shown that humans struggle to distinguish real from AI-generated content. There are, however, practical steps you can take to increase your awareness and vigilance. One is to prioritize rational analysis over emotional reactions. AI-generated media is often rife with telltale artifacts that become easier to spot with a little practice; for example, AI generators still struggle to render human faces, hands, and limbs accurately, and a careful inspection of an image or video may reveal such flaws. You can also look to resources, such as Northwestern University’s Kellogg School of Management’s Detect Fakes project or the MIT Media Lab’s DetectFakes Experiment, that provide training to help sharpen your detection skills. Education and media literacy programs can also help you navigate this evolving challenge: reading articles like “Navigating the Digital Era: Media Literacy in the Age of AI” and “10 Ways to Spot Fake Videos and Falsehoods on the Internet” will leave you better informed and better equipped to identify misinformation.
Fortunately, the responsibility does not rest entirely with the casual media consumer. U.S. agencies have funded research initiatives to help protect the public from misinformation campaigns. For example, the Defense Advanced Research Projects Agency (DARPA) Semantic Forensics (SemaFor) program created comprehensive forensic technologies to help mitigate online threats perpetrated via synthetic and manipulated media. Kitware, among other research teams from academia and industry, has contributed significantly to SemaFor by developing novel automated approaches across modalities. For example, the Kitware team has developed powerful detectors of AI-generated images and deepfake videos [ref, ref]. Our team has also built models that identify whether text-based media such as news stories [ref] are AI-generated, and if so by which LLM [ref], as well as models that detect privacy attacks in multi-agent LLM chat [ref]. Our defensive algorithms flag potentially generated content by leveraging subtle inconsistencies, for example, discrepancies between the audio and visuals in a video [ref] or mismatches between images and their text captions in online articles [ref, ref]; a simplified version of the latter idea is sketched below. These digital forensic technologies are crucial for restoring trust in the media and ensuring that citizens are well informed.
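As an illustration of the image-caption consistency cue mentioned above, here is a minimal sketch that scores how well a caption matches an image using the publicly available CLIP model. To be clear, this is a simplified stand-in, not Kitware’s SemaFor implementation, and the 0.2 threshold is a hypothetical value that would need tuning on held-out data.

```python
# Minimal sketch: flag possible image-caption mismatches with CLIP.
# A simplified stand-in for cross-modal consistency checking; not the
# production SemaFor algorithms described in this post.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def caption_consistency(image_path: str, caption: str) -> float:
    """Cosine similarity between image and caption embeddings, in [-1, 1]."""
    image = Image.open(image_path)
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # CLIP returns unit-normalized embeddings, so a dot product is cosine.
    return float((out.image_embeds @ out.text_embeds.T).item())

# Hypothetical threshold (0.2); in practice, calibrate on labeled pairs.
if caption_consistency("photo.jpg",
                       "Flood waters surround homes after the storm") < 0.2:
    print("Possible image-caption mismatch; route for human review")
```

In practice, a low similarity score would only route an article for closer review, since benign captions (abstract or editorial ones, for instance) can also score poorly.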
We believe that successfully addressing disinformation will require collaboration among stakeholders, including individuals, media organizations, and policymakers. By fostering technological advancements and raising public awareness, we can work toward a more informed society capable of navigating the complexities of the modern media landscape. To learn more, please contact our team.
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0123. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).