I just published a short piece in the FT in the wake of legal threats against users who tweeted or retweeted a link to a BBC report of child abuse that turned out to be wrong. Here’s the full text –
Those who didn’t see the false child abuse accusations against Lord Alistair McAlpine in an ill-considered BBC documentary may have instead heard about them through social media. This week, London’s Metropolitan Police suggested they might file charges against those Twitter users who sullied the reputation of the retired Conservative politician by knowingly repeating the lie that he was a child abuser. But the police may be less fearsome to the average BBC-linking tweeter than Lord McAlpine himself.
His attorneys say they have identified 1,000 original libellous tweets and 9,000 more retweets. Under the UK’s plaintiff-friendly libel law, the conventional wisdom holds that even a retweet that simply echoes others’ content could be actionable, whether or not the user thought it to be false. In addition to a £185,000 settlement with the BBC, Lord McAlpine’s lawyers are inviting implicated tweeters with fewer than 500 followers to make a donation to charity, and those with more followers to agree to bespoke settlements. Such invitations are declined at one’s peril — at least for those who live in the UK or any other place with an agreement to enforce UK civil judgments.
Such a broad-based attack on individuals is unwise and uncalled for, even as the injury that inspires it is mortifying. The problem is that what appears to be a trivial, momentary action — retweeting something of interest — can now create or magnify a falsehood as powerfully as if it had aired on national television. If a television station can be held responsible for what it broadcasts, why not the individuals whose collective megaphone rivals that of the BBC?
The answer is that television stations can and should have fact checking and legal departments as part of the cost of responsible business. Individuals cannot be held to a similar practice, and a series of uneven threats that stills the speech of only the most lawyer-sensitive will unduly undermine the huge value of a service such as Twitter. There may be call to go after the most egregious malicious actors — those who intentionally seek to sow untrue and damaging information about a specific person — but the very identification of 10,000 uncoordinated tweets and retweets suggests something other than bad faith by all. Traditional media can remain vibrant precisely by upholding a higher standard and helping social media to sift truth from falsehood.
Nor would charging Twitter itself with the broadcaster or newspaper editor’s policing function help. Trying to force Twitter to prescreen material would likely result in the service simply refusing to display any tweets to users located in the UK. Expecting it to monitor all tweets to block a tiny proportion of bad ones is unrealistic. As US Justice Felix Frankfurter warned in 1957, striking down a Michigan law that banned the sale of books deemed harmful to minors, we should not burn the house to roast the pig.
It is dicey enough to deploy automated processes to take down identical copies of copyrighted music and movies on services like YouTube, where robots scan 100 years of video every day looking for alleged infringements. To seek to pressure intermediaries to judge the murkier areas of truth and falsehood, and then squelch tweets as they emerge, would require a level of intrusion that even China has not managed. Italy found out as much when, in 2010, prosecutors obtained a criminal conviction against top Google executives for allowing someone to upload a YouTube video depicting the bullying of an autistic boy. The video was a needle in the haystack that comprises 72 hours of footage uploaded every minute, and the convictions for not finding and dealing with it quickly enough satisfied no one with an interest in the dispute. Google had removed the video within two hours of being alerted to it by authorities, and the verdict remains under appeal.
There are ways to improve the status quo. Microblogging will look different 10 years from now. Services such as Twitter can, and will, hone ways for people not only to retract what they have said, but to relay a follow-up message through all those who repeated it. Those who willfully initiate a devastating lie can often be identified and shamed, and those who unwittingly repeat it can, if the technology makes it simple to do, assist in undoing its damage.
Lord McAlpine’s situation bears some resemblance to the unhappy 2005 discovery by RFK press aide John Seigenthaler that his Wikipedia entry had billed him, absurdly, as a conspirator in RFK’s assassination. Mr Seigenthaler lamented the site’s lack of fact-checking and toyed with litigation against the initially anonymous editor who created his entry.
The editor, who was eventually unmasked, apologised and resigned from his day job. Mr Seigenthaler urged the employer to show mercy. Meanwhile, Wikipedia tightened its rules and practices for the creation and editing of new articles, especially biographies of living persons, and over time it has tweaked its software to be able to undo many instances of vandalism with only one click. Wikipedia has chosen to do so despite enjoying broad immunity under US law for what happened.
Technologies that greatly empower people to communicate with one another are transformative enough to cause injury. Their sharp edges can best be sanded by enlisting people of good faith to help correct the wrongs they may have inadvertently amplified. We should rarely invoke litigation or prosecution, which can chill legitimate speech and cantonise the internet, as material will be withheld selectively from regulation-heavy jurisdictions.
The internet can help us to understand and own the ethical dimensions of what we do online, and to make morally informed, rather than legally compelled, choices about the information we absorb and refract onward.