1. Is the content synthetic or manipulated?
For content to be labeled or removed under this policy, we must have reason to believe that the media, or the context in which they are presented, have been significantly and deceptively altered or manipulated. Synthetic and manipulated media take many different forms, and people can employ a wide range of technologies to produce them. In assessing whether media have been significantly and deceptively altered or fabricated, the factors we consider include:
- whether the content has been substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing;
- whether any visual or auditory information (such as new video frames, overdubbed audio, or modified subtitles) has been added or removed; and
- whether media depicting a real person have been fabricated or simulated.
We are most likely to take action (either labeling or removal, as described below) on more significant forms of alteration, such as wholly synthetic audio or video, or content that has been doctored (e.g., spliced, reordered, or slowed down) to change its meaning. Subtler forms of manipulation, such as selective editing, omission of context, or presentation with false context, may be labeled or removed on a case-by-case basis.
We will not take action to label or remove media that have been edited in ways that do not fundamentally alter their meaning, such as retouched photos or color-corrected videos.
To determine whether media have been significantly and deceptively altered or fabricated, we may use our own technology or receive reports through partnerships with third parties. Where we are unable to reliably make that determination, we may not take action to label or remove the media.
2. Is the content shared in a deceptive manner?
We also consider whether the context in which media are shared could result in confusion or misunderstanding, or suggests a deliberate intent to deceive people about the nature or origin of the content, for example by falsely claiming that it depicts reality. We assess the context provided alongside media to see whether it makes clear that the media have been altered or fabricated. The types of context we assess in order to make this determination include:
- The text of the Tweet accompanying or within media
- Metadata associated with media
- Information on the profile of the account sharing media
- Websites linked in the Tweet, or in the profile of the account sharing media
3. Is the content likely to impact public safety or cause serious harm?
Tweets that share synthetic and manipulated media are subject to removal under this policy if they are likely to cause serious harm. Some specific harms we consider include:
- Threats to the physical safety of a person or group
- Risk of mass violence or widespread civil unrest
- Threats to the privacy or ability of a person or group to freely express themselves or participate in civic events, such as:
  - Stalking or unwanted and obsessive attention
  - Targeted content that includes tropes, epithets, or material that aims to silence someone
  - Voter suppression or intimidation
While we have other rules that also address these forms of harm, including our policies on violent threats, election integrity, and hateful conduct, for Tweets that include synthetic or manipulated media we will err toward removal in borderline cases that might not otherwise violate those existing rules.
We also consider the time frame within which the content is likely to impact public safety or cause serious harm, and we are more likely to remove content under this policy if immediate harms are likely to result from its presence on Twitter.