Sunday, November 26, 2023

Meta 'misled' the public through a campaign that downplayed the amount of harmful content on Instagram and Facebook, court documents show

New court documents argue that Meta misled the public and its users about the amount of harmful content on its platforms.
  • A newly unsealed complaint accused Meta of misleading the public about harmful content on its platforms.
  • It argued that Meta publicly touted low rates of harmful content while internal data revealed higher rates.
  • Meta used these reports to convince the public its platforms were safer than they actually were, per the complaint.

Meta may have significantly downplayed the rates of misinformation, hate speech, discrimination, and other harmful content on its platforms, according to a newly unsealed complaint against the company filed on behalf of 33 states.

The complaint accused Meta of producing quarterly reports, known as the Community Standards Enforcement Report (CSER), that touted low rates of community standards violations on its platforms but excluded key data from internal user-experience surveys showing much higher rates of user encounters with harmful content.

For example, Meta's CSER report, using data collected from July through September 2020, said that for every 10,000 content views on its platforms, only 10 or 11 would contain hate speech (about 0.10% to 0.11%). Meta defines hate speech per the CSER as "violent or dehumanizing speech, statements of inferiority, calls for exclusion or segregation based on protected characteristics or slurs."

But the complaint said an internal user survey report from Meta known as the Tracking Reach of Integrity Problems Survey (TRIPS) — which an internal memo at Instagram once called "our north star, ground-truth measurement" — reported significantly higher levels of hate speech just months earlier. An average of 19.3% of users on Instagram and 17.6% of users on Facebook reported witnessing hate speech or discrimination on the platforms according to a TRIPS report from May 2020, cited by the complaint. 

Likewise, an average of 12.2% of Instagram users and 16.6% of Facebook users reported seeing graphic violence on these platforms, and over 20% of users witnessed bullying and harassment, per the complaint's summary of the TRIPS report.

However, the company defines graphic violence as content that "glorifies violence or celebrates the suffering or humiliation of others on Facebook and Instagram" and also notes that bullying and harassment are "highly personal by nature" so "using technology to proactively detect these behaviors can be more challenging than other types of violations," per the CSER.

The complaint — which cited several other statistics on harmful content gathered from various internal reports — argued that Meta concealed these figures and used reports like the CSER to "create the net impression that harmful content is not 'prevalent' on its platforms."

The complaint, which was put together using "snippets from internal emails, employee chats and company presentations" according to The New York Times, did not delve into the methodology of internal user surveys like TRIPS, or another it cites called the Bad Experiences & Encounters Framework (BEEF). It noted only that both are "rigorous surveys" used to poll users about their interactions with harmful content such as suicide and self-harm, negative comparison, misinformation, bullying, unwanted sexual advances, hate speech, or discrimination. So it's unclear how much of the discrepancy between the figures in these internal reports and those in the CSER could be explained by the gray area between Meta's definition of content violations and users' definitions.

Meta did not respond to Business Insider's request for comment, but said in a statement to The New York Times on Saturday that the states' complaint "mischaracterizes our work using selective quotes and cherry-picked documents."

Read the original article on Business Insider


