Bumble will start removing profiles that falsely report users for their identity



Bumble has announced a new policy that explicitly bans identity-based hate and commits the company to taking action against users who intentionally submit false reports targeting others for their identity. In its press release, the company noted that 90 per cent of the reports it received against gender-nonconforming people did not actually violate any terms and were eventually dismissed.

Such reports often contained language about the reported person’s gender, speculating that their profile might be fake. Taking a tough stance on users who file false reports like these, Bumble said it may even remove repeat offenders from the platform.

The dating app also says it will review each report and take appropriate action. The policy’s rollout includes implicit-bias training and discussion sessions with all safety moderators to examine how bias can affect moderation decisions. Azmina Dhrodia, Bumble’s safety policy lead, said in a statement, “We always want to lead with education and give our community a chance to learn and improve. However, we will not hesitate to permanently remove someone who consistently goes against our policies or guidelines.”

“The company defines identity-based hate as content, imagery or conduct that promotes or condones hate, dehumanisation, degradation, or contempt against marginalized or minoritised communities with the following protected attributes: race, ethnicity, national origin/nationality, immigration status, caste, sex, gender, gender identity or expression, sexual orientation, disability, serious health condition, or religion/belief,” according to the press statement.

“We want this policy to set the gold standard of how dating apps should think about and enforce rules around hateful content and behaviours. We were very intentional to tackle this complex societal issue with principles celebrating diversity and understanding how those with overlapping marginalized identities are disproportionately targeted with hate,” added Dhrodia.

Aside from human moderation, Bumble already uses automated measures to protect users against comments and images that could negatively affect them. The company says these safeguards have enabled it to detect 80 per cent of community-guideline violations before they are even reported, which it calls “part of the company’s commitment to reduce and prevent harm before it happens.”

Content on Bumble that isn’t automatically detected can be brought to moderators’ attention through the Block + Report feature, which allows users to report someone for identity-based hate either directly from that person’s profile or from within a chat.
