Donald Trump had to use an adviser’s Twitter to promise an orderly transition of power to Joe Biden after he was locked out of his own social media accounts.
Social media platforms have evolved rapidly in recent years, moving from a largely hands-off approach in their early days to stricter sanctions on users who break an ever-growing list of rules.
Platforms have faced the moral dilemma of balancing free speech against hate speech, and no world leader has tested the line more than Mr Trump.
Here, the PA news agency looks at how social network policies were shaped by the prolific tweeter during his tenure, leading up to his eventual suspension from the major platforms.
– Twitter

Mr Trump had already posted a number of controversial comments on Twitter before his rise to power and during his first year as president, generating debate about whether they should be censored.
North Korean Leader Kim Jong Un just stated that the “Nuclear Button is on his desk at all times.” Will someone from his depleted and food starved regime please inform him that I too have a Nuclear Button, but it is a much bigger & more powerful one than his, and my Button works!
— Donald J. Trump (@realDonaldTrump) January 3, 2018
One tweet, in January 2018, caused particular alarm, warning North Korean leader Kim Jong Un that Mr Trump had a “much bigger” and “more powerful” nuclear button.
It was left up by Twitter, prompting the firm to clarify its stance on world leaders.
At the time, the social network said: “Blocking a world leader from Twitter or removing their controversial tweets would hide important information people should be able to see and debate.
“It would also not silence that leader, but it would certainly hamper necessary discussion around their words and actions.”
A wider debate about disinformation was taking hold, leading Twitter to introduce fact-checking labels that notify users about unverified claims.
There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed. The Governor of California is sending Ballots to millions of people, anyone…..
— Donald J. Trump (@realDonaldTrump) May 26, 2020
In May last year, the firm placed such labels on tweets from his personal account for the first time, after he claimed postal votes would be “forged” and create a “rigged election”.
– Facebook

For many years after Mr Trump emerged as a political figure, Facebook said it would not take action against content from political leaders – even if it contained false claims and would otherwise break Facebook’s rules – because the public deserved to hear unfiltered statements from politicians.
Chief executive Mark Zuckerberg repeatedly said it was not Facebook’s role to be the “arbiter of truth”.
This stance held despite significant backlash, and ultimately several public apologies from Facebook, after the 2016 US presidential election, when waves of disinformation were allowed to spread on the platform by Trump-supporting internet users who pushed a number of conspiracy theories and were embraced by the president.
One such theory was the pizzagate conspiracy, which centred on fictitious claims of a child sex abuse ring based at a Washington DC pizzeria, falsely linked to high-ranking Democrats including Mr Trump’s then presidential rival Hillary Clinton.
The unfounded claims remained in circulation, in various forms, on Facebook and on discussion boards such as 4chan and 8chan all the way through to the 2020 election.
It was not until 2020 that Facebook finally changed its approach to political leaders, including Mr Trump, when it began adding warning labels to posts that violated its policies – long after other major platforms such as Twitter had started doing so.
– YouTube
YouTube has faced its own issues with disinformation and hate speech, though far fewer have been a direct result of content shared by Mr Trump himself.
It has, however, had to deal with the QAnon conspiracy theory, spread by other users, which claims Mr Trump is fighting a secret war against “deep-state enemies” and a cabal of child sex traffickers.
YouTube was forced to take action on the baseless claims, saying in October that it had removed tens of thousands of QAnon videos and terminated hundreds of channels under its existing content rules.
The Google-owned firm went a step further, prohibiting content that “targets an individual or group with conspiracy theories that have been used to justify real-world violence”.