May 19, 2024

Meta’s Oversight Board disagrees with the company’s decision to take down two videos related to the Israel-Hamas war: one showing hostages kidnapped from Israel by Hamas, and one showing the aftermath of the bombing of Gaza’s Al-Shifa Hospital by the Israeli army.

On December 7, the board opened an expedited review of the two videos. Meta had removed the posts for violating both its violence and incitement policy and its dangerous organizations and individuals policy, but shortly afterwards reinstated them. These were the board’s first expedited cases.

On December 19, the board said it disagreed with Meta’s initial removal decisions, finding that “restoring the content to the platform, with a ‘mark as disturbing’ warning screen, is consistent with Meta’s content policies, values and human rights responsibilities” in both cases.

Meta’s crisis response to the attack by Hamas, which is considered a terrorist organization in the US and elsewhere, was to lower the “removal threshold” for violent and graphic content, including videos in which hostages are identifiable. Adapting the policy is one problem, but its impersonal, automated implementation is a bigger one, the Oversight Board argued.

Board members say AI is to blame

Khaled Mansour, an author and human rights activist who has served on the Oversight Board since May 2022, said artificial intelligence (AI) tools were to blame for the misjudgment, as both the initial removal decision and the rejection of the user’s appeal were made automatically based on a classifier score, without any human review.

The Oversight Board said “inadequate human oversight of automated moderation during crisis response may result in the erroneous removal of speech that may be of substantial public interest,” according to a post by Mansour on X.
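To make the mechanism concrete, here is a minimal, purely illustrative sketch of score-based auto-moderation with a lowered crisis threshold and no human review. It is not Meta’s actual system; the function name, threshold values, and scores below are hypothetical.

# Illustrative only: a toy model of threshold-based auto-removal, not Meta's system.
# All names and numbers below are hypothetical.

NORMAL_THRESHOLD = 0.90  # hypothetical score above which content is auto-removed
CRISIS_THRESHOLD = 0.70  # hypothetical lowered threshold during a crisis response

def auto_moderate(classifier_score: float, crisis_mode: bool) -> str:
    """Decide removal purely from a classifier score, with no human review."""
    threshold = CRISIS_THRESHOLD if crisis_mode else NORMAL_THRESHOLD
    if classifier_score >= threshold:
        # Appeals are handled the same way, so a borderline news video
        # scoring between the two thresholds never reaches a human reviewer.
        return "remove"
    return "keep"

# A documentary-style war video scoring 0.75 is kept in normal times,
# but automatically removed once the crisis threshold takes effect.
print(auto_moderate(0.75, crisis_mode=False))  # keep
print(auto_moderate(0.75, crisis_mode=True))   # remove

The board’s point is that without a human check on scores in that borderline band, newsworthy footage is swept up along with genuinely violating content.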

Hamas features on the list of what Meta calls “dangerous organizations and individuals,” alongside the likes of ISIS, the Ku Klux Klan, and Hitler. However, the automatic filter is not good at distinguishing posts that discuss or even condemn Hamas from posts that support it. Meanwhile, Instagram’s translation tool inserted the word “terrorist” into translations of bios and captions that contained the word “Palestinian,” the Palestinian flag emoji, or the Arabic phrase for “praise be to God,” 404 Media reported in October. Palestinian activists, worried that Instagram’s algorithm will ban their posts, have adopted an algospeak spelling for Palestinians, P*les+in1ans, to bypass the automatic censorship.

The subject has exposed biases in Meta’s systems before. After the 2021 conflict between Israel and Palestine, a human rights due diligence report commissioned at the request of the Oversight Board found that “Arabic content has greater over-enforcement,” meaning it is more likely to be removed by Meta’s automated content moderation systems.

Quotable: Meta should be better prepared to moderate conflict-related content

“We see videos and photos, sometimes only of bystanders or journalists, that are removed and it is not clear why. We really advocate for more context-specific content moderation… [W]e’ve seen enough crises in the past decade to get some sense of some patterns and what kind of tools need to be in place.”

Marwa Fatafta, MENA policy and advocacy director at digital rights nonprofit Access Now

Israel-Hamas war content on Meta, by the numbers

12: Days it took the Oversight Board to issue a verdict on the two Israel-Hamas war-related posts. It is expected to issue expedited responses within 30 days.

500%: Rise in anti-Semitism and Islamophobia within 48 hours of the October 7 Hamas attack that sparked the war, according to the Global Project Against Hate and Extremism

795,000: Pieces of content in Hebrew and Arabic that Meta removed or flagged as disturbing in the three days after October 7 for violating policies such as Dangerous organizations and individuals, Violent and graphic content, Hate speech, Violence and incitement, Bullying and harassment, and Coordinating harm


