Platforms' policies on AI-manipulated and generated misinformation
From LINKS Community Center
Created: 29 September 2023
Last edited: 29 September 2023
Quick Facts

Publishing Organisation: EU disinfo lab
Year: 2023
Languages: English
Status: Published
Covers Thematic: Legal/Standards, Verification
Target audience: Businesses, Civil Society, Media, Policy Makers, Researchers
Audience experience level:
Disaster Management Phase:
Synopsis
EXECUTIVE SUMMARY
- The development of artificial intelligence (AI) technologies has long been a challenge for the disinformation field, allowing content to be easily manipulated and accelerating its distribution.
- Focusing on content: recent technical developments and the growing use of generative AI systems by end-users have exponentially increased these challenges, making it easier not just to modify but also to create fake texts, images, and audio pieces that can look real.
- Despite offering opportunities for legitimate purposes (e.g., art or satire), AI content is also widely generated and disseminated across the internet, causing – intentionally or not – harm and deception.
- In view of these rapid changes, it is crucial to understand how platforms face the challenge of moderating AI-manipulated and AI-generated content that may end up circulating as mis- or disinformation.
  - Are they able to distinguish legitimate uses from malign uses of such content?
  - Do they see the risks embedded in AI as an accessory to disinformation strategies or copyright infringements, or consider it a matter on its own that deserves specific policies?
  - Do they even mention AI in their moderation policies, and have they updated these policies since the emergence of generative AI to address this evolution?
- The present factsheet delves into how some of the main platforms – Facebook, Instagram, TikTok, X (formerly Twitter), and YouTube – approach AI-manipulated or AI-generated content in their terms of use, exploring how they address its potential risk of becoming mis- and disinformation.
- The analysis concluded that definitions are divergent, leaving users and regulators with diverse mitigation and resolution measures.
Linked to

- Technologies: None
- Use Cases: None