Twitter limits content-enforcement tools as US election looms

Twitter Inc, the social network being overhauled by new owner Elon Musk, has frozen some employee access to internal tools used for content moderation and other policy enforcement, curbing the staff’s ability to clamp down on misinformation ahead of a major US election.

Most people who work in Twitter’s Trust and Safety organization are currently unable to alter or penalize accounts that break rules around misleading information, offensive posts and hate speech, except for the most high-impact violations that would involve real-world harm, according to people familiar with the matter. Those posts were prioritized for manual enforcement, they said.

People who were on call to enforce Twitter’s policies during Brazil’s presidential election did get access to the internal tools on Sunday, though in a limited capacity, according to two of the people. The company is still using automated enforcement technology and third-party contractors, according to one person, though the highest-profile violations are typically reviewed by Twitter employees.

San Francisco-based Twitter declined to comment on new limits placed on its content-moderation tools.

Twitter staff use dashboards, known as agent tools, to carry out actions like banning or suspending an account deemed to have breached the company’s policies. Policy breaches can be flagged by other Twitter users or detected automatically, but taking action on them requires human input and access to the dashboard tools. Those tools have been suspended since last week, the people said.

This restriction is part of a broader plan to freeze Twitter’s software code to keep employees from pushing changes to the app during the transition to new ownership. This level of access is typically granted to a group of people numbering in the hundreds; last week it was initially reduced to about 15 people, according to two of the people, who asked not to be named discussing internal decisions. Musk completed his $44 billion deal to take the company private on October 27.

The scaled-back content moderation has raised concerns among employees on Twitter’s Trust and Safety team, who believe the company will be short-handed in enforcing policies in the run-up to the US midterm election on November 8. Trust and Safety employees are often tasked with enforcing Twitter’s misinformation and civic integrity policies — many of the same policies that former President Donald Trump routinely violated before and after the 2020 elections, the company said at the time.

Other employees said they were worried about Twitter rolling back its data access for researchers and academics, and about how it would deal with foreign influence operations under Musk’s leadership.

On Friday and Saturday, Bloomberg reported a surge in hate speech on Twitter. That included a 1,700% spike in the use of a racist slur on the platform, which at its peak appeared 215 times every five minutes, according to data from Dataminr, an official Twitter partner with access to the entire platform. The Trust and Safety team did not have access to the tools needed to enforce Twitter’s moderation policies during this time, two people said.

Yoel Roth, Twitter’s head of safety and integrity, posted a series of tweets on Monday addressing the increase in offensive posts, saying that very few people see the content in question. “Since Saturday, we’ve been focused on addressing the surge in hateful conduct on Twitter. We’ve made measurable progress, removing more than 1500 accounts and reducing impressions on this content to nearly zero,” Roth wrote. “We’re primarily dealing with a focused, short-term trolling campaign.”

Musk tweeted last week that he hadn’t made “any changes to Twitter’s content moderation policies” so far, though he has also said publicly that he believes the company’s rules are too restrictive and has called himself a free-speech absolutist.

Internally, employees say, Musk has raised questions about a number of the policies and has zeroed in on a few specific rules that he wants the team to review. The first is Twitter’s general misinformation policy, which penalizes posts that include falsehoods about topics like election outcomes and Covid-19. Musk wants the policy to be more specific, according to people familiar with the matter.

Musk has also asked the team to review Twitter’s hateful conduct policy, according to the people, specifically a section that says users can be penalized for “targeted misgendering or deadnaming of transgender individuals.”

In both cases, it is unclear if Musk wants the policies to be rewritten or the restrictions removed entirely.
