(This article previously appeared in the Jewish Currents email newsletter; subscribe here!)
On September 22nd, the consulting company Business for Social Responsibility (BSR) published a report on how Meta—the social media company that owns Facebook, Instagram and WhatsApp—unfairly removed Palestinian social media posts during Israel’s assault on Gaza and its raids on the Al Aqsa Mosque last May, harming Palestinian freedom of expression.
The report underscored heavy-handed content moderation by Facebook and Instagram, which Palestinian social media users claim censors critics of Israeli repression. During the May 2021 escalation in violence, digital rights groups documented hundreds of cases of Palestinian user content being removed, hidden, or otherwise suppressed. These restrictions have undermined Palestinian users’ efforts to use social media to document Israeli human rights abuses. “The Palestinian cause hasn’t received fair coverage in mainstream media. Social media platforms are supposed to open the floor for activists and human rights defenders to share their narratives,” said Mona Shtaya, an advocacy advisor for 7amleh-The Arab Center for Social Media Advancement. Instead, social media companies have “shown a willingness to silence Palestinian voices if it means avoiding potential political controversy and pressure from the Israeli government,” according to a 2021 Foreign Policy article by internet policy experts Emerson Brooking and Eliza Campbell.
BSR’s report—commissioned by Meta following a recommendation made by Facebook’s Oversight Board last year—was notable for revealing the underlying causes of Meta’s content removal last May. According to the report, the company overenforces Palestinian Arabic-language content for three likely reasons. First, major Palestinian political factions, including Hamas—the main group that fought with Israel in May—are on Facebook’s blacklist of “Dangerous Individuals and Organizations” because they are considered “terror” organizations. This led to content removals for anyone deemed to “praise, support or represent” those groups. Moreover, Palestinian Arabic content wasn’t reviewed by speakers of the Palestinian dialect of Arabic. Lastly, the data used to train Meta’s Arabic speech algorithm relied on human reviewers whose “lack of linguistic and cultural competence” contributed to Palestinian Arabic posts being unfairly flagged as violating the company’s moderation policies. BSR contrasted Meta’s overenforcement of Palestinian social media posts with its underenforcement of Hebrew-language posts, which the report attributes to Meta installing an algorithmic “hostile speech classifier” for Arabic, but not for Hebrew.
Digital rights advocates welcomed the release of the report, which they initially feared might be suppressed or only released in summary form, like a recent Facebook-commissioned report on human rights in India. “The report validates the lived experiences of Palestinians,” said Marwa Fatafta, the Middle East and North Africa policy manager for digital rights group Access Now. “They cannot tell us anymore that this is a system glitch. Now they know the root causes.” At the same time, digital rights advocates told Jewish Currents that the BSR report was far from perfect. BSR’s report concluded that the company did not intentionally seek to harm Palestinian users’ expression, an assertion digital rights defenders took issue with. “These are all actions and inactions that the company took, so I don’t see how you can not see them as intentional,” said Deborah Brown, a senior researcher and advocate on digital rights at Human Rights Watch. In a joint response to the BSR report, dozens of human rights groups noted they’d been “calling Meta’s attention to the disproportionately negative impact of its content moderation on Palestinians for years,” so “even if the bias started out as unintentional, after knowing about the issues for years and not taking appropriate action, the unintentional became intentional.”
The BSR report also omitted discussion of another issue digital rights groups have long demanded clarity on: the role of the Israeli Cyber Unit in Meta’s suppression of Palestinian content. Founded within Israel’s Ministry of Justice in 2015, the unit issues thousands of requests to Facebook to remove content the government says violates Israeli law or the social media companies’ terms of service. According to a 2018 report by the Israeli State Attorney’s office, social media companies complied with the Cyber Unit’s removal requests 90% of the time. Digital rights groups have long called on Facebook to explain how it evaluates requests from the Cyber Unit. “We’ve wanted more clarity on this because Meta refuses to provide answers,” said Fatafta. “Users deserve transparency on whether their piece of content has been removed as a result of the Israeli government’s request.”
BSR’s report ends with 21 recommendations to Meta, ranging from improvements in assessing Arabic-language content to publishing more details on its content moderation policies so users can better understand them. Meta responded to the report by saying it will fully or partially commit to most of the recommendations, while assessing the feasibility of six of them. (There is one recommendation Meta said it will not consider: BSR’s suggestion that Meta fund research into how social media platforms interpret counter-terrorism laws.) Fatafta, however, said Meta’s response was “underwhelming and not very encouraging” because the company’s language on how it would implement the recommendations was far too vague. “We need a concrete action plan,” said Fatafta, “with clear timelines and with full transparency.”