From LINKS Community Center
EU DisinfoLab
English
2023
Businesses, Civil Society, Media, Policy Makers, Researchers
Published
Legal/Standards, Verification
Advanced
https://www.disinfo.eu/wp-content/uploads/2023/09/20230928_platformpolicies-on-ai.pdf
Created: 29 September 2023
Last edited: 29 September 2023
Platforms' policies on AI-manipulated and generated misinformation
Quick Facts
Publishing Organisation:
EU DisinfoLab
Year:
2023
Languages:
English
Status:
Published
Covers Thematic
Legal/Standards: Legal Requirement means any federal, state, local, municipal, foreign or other law, statute, constitution, principle of common law, resolution, ordinance, code, edict, decree, rule, regulation, ruling or requirement issued, enacted, adopted, promulgated, implemented or otherwise put into effect by or under the authority of any Governmental Body.

Source: https://www.lawinsider.com/dictionary/legal-requirement

Standards are voluntary documents that set out specifications, procedures and guidelines that aim to ensure products, services, and systems are safe, consistent, and reliable. They cover a variety of subjects, including consumer products and services, the environment, construction, energy and water utilities, and more.

Source: https://www.standards.org.au/standards-development/what-is-standard
Verification: Verification is an extra or final bit of proof that establishes something is true. To verify something is to make sure it's correct or true, so verification is an action that establishes the truth of something.

Source: https://www.vocabulary.com/dictionary/verification
Target audience
Businesses: companies, local business networks, solution providers, suppliers of goods and services
Civil Society: Civil society is a target group in LINKS which comprises citizens, civil society organizations, educational institutions, vulnerable groups, social movement organizations
Media: The term media refers to any means of distribution, dissemination or interpersonal, mass or group communication of works, documents, or written, visual, audio or audiovisual messages (such as radio, television, cinema, Internet, press, telecommunications, etc.). Entities using multiple communication channels are often called Media.
Policy Makers: local, national, and European agencies and institutes, public authorities, standardization bodies
Researchers: research institutions and scientific communities
Disaster Management Phase
Synopsis
EXECUTIVE SUMMARY
- The development of artificial intelligence (AI) technologies has long been a challenge for the disinformation field, allowing content to be easily manipulated and accelerating its distribution.
- Recent technical developments and the growing use of generative AI systems by end-users have exponentially increased these content-related challenges, making it easier not just to modify but also to create fake texts, images, and audio pieces that can look real.
- Despite offering opportunities for legitimate purposes (e.g., art or satire), AI content is also widely generated and disseminated across the internet, causing – intentionally or not – harm and deception.
- In view of these rapid changes, it is crucial to understand how platforms face the challenge of moderating AI-manipulated and AI-generated content that may end up circulating as mis- or disinformation.
  - Are they able to distinguish legitimate uses from malign uses of such content?
  - Do they see the risks embedded in AI as an accessory to disinformation strategies or copyright infringements, or consider it a matter on its own that deserves specific policies?
  - Do they even mention AI in their moderation policies, and have they updated these policies since the emergence of generative AI to address this evolution?
- The present factsheet delves into how some of the main platforms – Facebook, Instagram, TikTok, X (formerly Twitter), and YouTube – approach AI-manipulated or AI-generated content in their terms of use, exploring how they address its potential risk of becoming mis- and disinformation.
- The analysis concluded that definitions are divergent, leaving users and regulators with diverse mitigation and resolution measures.