Synthetic and manipulated media policy
You may not deceptively share synthetic or manipulated media that are likely to cause harm. In addition, we may label Tweets containing synthetic and manipulated media to help people understand their authenticity and to provide additional context.
You should be able to find reliable information on Twitter. That means understanding whether the content you see is real or fabricated and having the ability to find more context about what you see on Twitter. Therefore, we may label Tweets that include media (videos, audio, and images) that have been deceptively altered or fabricated. In addition, you may not share deceptively altered media on Twitter in ways that mislead or deceive people about the media's authenticity where threats to physical safety or other serious harm may result.
We use the following criteria as we consider Tweets and media for labeling or removal under this policy as part of our ongoing work to enforce our rules and ensure healthy and safe conversation on Twitter (additional information is available below):
1. Is the content synthetic or manipulated?
In order for content to be labeled or removed under this policy, we must have reason to believe that media, or the context in which media are presented, are significantly and deceptively altered or manipulated. Synthetic and manipulated media take many different forms and people can employ a wide range of technologies to produce these media. In assessing whether media have been significantly and deceptively altered or fabricated, some of the factors we consider include:
- whether the content has been substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing;
- any visual or auditory information (such as new video frames, overdubbed audio, or modified subtitles) that has been added or removed; and
- whether media depicting a real person have been fabricated or simulated.
We are most likely to take action (either labeling or removal, as described below) on more significant forms of alteration, such as wholly synthetic audio or video or content that has been doctored (spliced and reordered, slowed down) to change its meaning. Subtler forms of manipulated media, such as isolative editing, omission of context, or presentation with false context, may be labeled or removed on a case-by-case basis.
We will not take action to label or remove media that have been edited in ways that do not fundamentally alter their meaning, such as retouched photos or color-corrected videos.
In order to determine if media have been significantly and deceptively altered or fabricated, we may use our own technology or receive reports through partnerships with third parties. In situations where we are unable to reliably determine if media have been altered or fabricated, we may not take action to label or remove them.
2. Is the content shared in a deceptive manner?
We also consider whether the context in which media are shared could result in confusion or misunderstanding or suggests a deliberate intent to deceive people about the nature or origin of the content, for example by falsely claiming that it depicts reality. We assess the context provided alongside media to see whether it makes clear that the media have been altered or fabricated. Some of the types of context we assess in order to make this determination include:
- The text of the Tweet accompanying or within media
- Metadata associated with media
- Information on the profile of the account sharing media
- Websites linked in the Tweet, or in the profile of the account sharing media
3. Is the content likely to impact public safety or cause serious harm?
Tweets that share synthetic and manipulated media are subject to removal under this policy if they are likely to cause serious harm. Some specific harms we consider include:
- Threats to the physical safety of a person or group
- Risk of mass violence or widespread civil unrest
- Threats to the privacy or ability of a person or group to freely express themselves or participate in civic events, such as:
  - Stalking or unwanted and obsessive attention
  - Targeted content that includes tropes, epithets, or material that aims to silence someone
  - Voter suppression or intimidation
While we have other rules also intended to address these forms of harm, including our policies on violent threats, election integrity, and hateful conduct, for Tweets that include synthetic or manipulated media we will err toward removal in borderline cases that might not otherwise violate our existing rules.
We also consider the time frame within which the content may be likely to impact public safety or cause serious harm, and are more likely to remove content under this policy if we find that immediate harms are likely to result from the content’s presence on Twitter.
Note: We may also take action on synthetic and manipulated content under our non-consensual nudity policy (such as pornographic media altered to insert the faces of people not actually involved) or other parts of the Twitter Rules.
Labeling and removal
In most cases, if we have reason to believe that media shared in a Tweet have been significantly and deceptively altered or fabricated, we will provide additional context on Tweets sharing the media where they appear on Twitter. This means we may:
- Apply a label to the content where it appears in the Twitter product;
- Show a warning to people before they share or like the content;
- Reduce the visibility of the content on Twitter and/or prevent it from being recommended; and/or
- Provide a link to additional explanations or clarifications, such as in a Twitter Moment or landing page.
In most cases, we will take all of the above actions on Tweets we label.
Media that meet all three of the criteria defined above (that is, media that are synthetic or manipulated, shared in a deceptive manner, and likely to cause harm) may not be shared on Twitter and are subject to removal. Accounts engaging in repeated or severe violations of this policy may be permanently suspended.
* Other parts of the Twitter Rules apply and may lead to the removal of the content, particularly where there is high likelihood of severe harm, such as a threat to someone’s life or physical safety.