Twitter relied on bots, foreign contractors and busybody users dubbed “tattles” to enforce its misinformation policies, the latest report on the company’s internal practices revealed Monday.
In a case study, the Free Press reporter David Zweig focused on an Aug. 11 tweet from user @KelleyKga, who describes herself in her Twitter biography as an “‘Internet rando’ and public health fact checker — Focused on Covid and kids, restoring normalcy, and Covid metrics in Georgia.”
On that day, Kelley took issue with another user who claimed COVID-19 had been the “leading cause of death from disease in children” since December 2021.
“What an excellent example of cherry picking!” @KelleyKga wrote back. “If you narrow it down to only the specific months you specify, which include the largest Covid wave (seen across the world) AND you ignore all non-disease deaths AND you ignore cancer, heart disease, SIDS, then Covid is ‘leading.’”
Kelley’s tweet was initially flagged by a bot built with machine learning and artificial intelligence, which Zweig acknowledged was an “impressive” feat of engineering.
However, the reporter added, the bot system was “too crude for such nuanced work” as sorting out analysis about a global pandemic.
In addition to the bot flag, Kelley’s tweet was reported by an unspecified number of users, dubbed “tattles” in Twitter’s internal system. That triggered a review by a human moderator, who labeled Kelley’s tweet “misleading” despite the fact that she had included data from the Centers for Disease Control and Prevention in her post.
“In my review of internal files, I found countless instances of tweets labeled as ‘misleading’ or taken down entirely, sometimes triggering account suspensions, simply because they veered from CDC guidance or differed from establishment views,” wrote Zweig, who also pointed out that moderation decisions were made by contractors in overseas places like the Philippines.
“They were given decision trees to aid in the process, but tasking nonexperts to adjudicate tweets on complex topics like myocarditis and mask efficacy data was destined for a significant error rate,” he wrote.
When her tweet was labeled “misleading,” Kelley said she inquired with Twitter about the matter and received a response that the social media giant wanted to “prioritize review and labeling of content that could lead to increased exposure or transmission” — with no regard for whether the information was correct.
“Twitter’s misinformation policy around Covid only goes one way,” he wrote. “It specifically allows tweets that exaggerate the risks of Covid, and only applies to tweets that they think ‘minimize’ Covid … Fear over facts is basically the official policy of Twitter’s censors when it comes to Covid.”