
“I think content moderation is more important today than it ever has been before because of the new risks that AI and ChatGPT-like platforms are posing in terms of validating toxic content,” said CyberWell CEO Tal-Or Montemayor. “And I think that this risk was even highlighted further in the largest hijacking of social media platforms. During the October 7 attack by Hamas… the terrorist organizations were able to exploit the social media platforms and it highlighted the need for more automation and content moderation than ever before.”

CyberWell is a non-profit dedicated to fighting antisemitism across major social media platforms. In the months following the October 7 attack on Israel by Hamas, there has been an explosion in antisemitic hate speech online that the organization is determined to help remove.

CyberWell CEO Tal-Or Montemayor (Photo: Hagar Bader)

To date, it claims to have recorded an 86% increase in antisemitism online and to have helped remove more than 50,000 pieces of content. According to various studies it has produced, October 7 denial, rape denial, and antisemitic tropes have all grown, with 61% of internet users openly calling for or justifying violence against Jews following Hamas’ attack.

Some users online praise the work of organizations like CyberWell, which are dedicated to fostering a healthy and safe online environment for Jews and other minority groups. However, growing factions in online communities are wary of content moderation schemes, labeling them a gateway to censorship and a means of limiting free speech at the request of government agencies or private companies with certain ideological leanings.

So who is to say what constitutes hate speech and what crosses into censorship overreach in our online discourse? “What happened on October 7 on social media platforms was the largest hijacking of our major applications by a terrorist group… Every single democracy that is at threat of a terrorist attack should be looking at social media platforms through the lens of national security,” Montemayor added.

Tell me a bit about CyberWell.

“CyberWell is a non-profit that launched in May 2022 as the first-ever open database of online antisemitic content. That database essentially acts as an engine of transparency, reflecting the state of online antisemitism and emerging antisemitic trends. The way that we implement that data collection is by essentially acting as the online antisemitism compliance officer for social media platforms.

“We are what’s called a ‘Trusted Partner’, a specific status granted to non-profits that work with major social media platforms. Today, we’re a Trusted Partner of Meta, covering the apps of Facebook, Instagram, and Threads, and also of TikTok. We also share data with the other major social media platforms we monitor at CyberWell, including YouTube and X.

“We monitor across major social media platforms in both English and Arabic, which is also very unique to the online antisemitism space because so much of the efforts are focused exclusively on English.”

What have you seen since October 7?

“We certainly know after October 7 that antisemitism is an international problem, one that has generated new energy and new violence, specifically in the Arabic-speaking world in terms of Arabic online antisemitism. Our first proof-of-concept period was when Kanye West had his public meltdown and attacked Jews online in October 2022. Our real road test was October 2023, following October 7, when we saw an incredible 86% increase in online antisemitism, nearly a doubling of the antisemitic content we typically see online. Following Hamas’s October 7 attacks, we were able to provide social media platforms with real-time alerts and guidance on emerging trends.

“So that’s not only actionable recommendations; we also give them the data points they need to go into their platforms independently and remove that content at scale.

“As a result, we were responsible for the removal of over 50,000 pieces of antisemitic content in a single month because of this real-time monitoring. So we’re taking a very tech- and data-based approach to this channel that social media platforms have for working with non-profits. And since online antisemitism is probably the most formidable and fastest-growing form of Jewish hatred today, I think our work has never been more necessary.”

You’ve definitely entered the scene at the right time. You mentioned Meta and TikTok – why is X not involved in that?

“We do share data and alerts actively with X, but X no longer has an official channel, program, or forum through which it convenes non-profits… They did have an official forum for non-profits and members of civil society, and it was disbanded once Musk took over in October 2022, right around the time of the wave of antisemitism inspired by Kanye West and the re-platforming of many known white supremacists and racists on X. So we do send alerts to X when we feel we can make a difference, but there is no official forum for them to work with non-profits.”

What is their response rate when you send them content?

“Our most recent data, which gives a snapshot of 2023, actually shows that X has significantly increased its rate of removal compared to the previous year. The average rate of removal we tracked in 2022 for content we had been sending to the platform was only 23.8%. That average increased to 32.1% in 2023, which is an improvement, but it still means that nearly 70% of the time, if you’re a Jewish user reporting online antisemitism, it’s going to be left up online. And I’ll say to X’s credit that almost 40% of the content that we reported to X was removed.

“In our last two reports, we focused on the issue of October 7 denial online, which can fairly be called the newest form of Holocaust denial today. We saw not only that X had the lowest rates of removal in the first week or two of reporting that content, but also that its recommendation algorithm is driving views of this content.”

What would you say to people who are skeptical of the whole so-called ‘Censorship Industrial Complex’?

“I think that the story of X gives you a good indication of why content moderation is so important. Let’s look at the industry itself: social media platform revenue comes from advertisers, and advertisers do not want their brand advertised next to garbage or hate speech. That is, among other things, why content moderation evolved. It came from a concern of the paying customers of the social media platforms; it is in their interest that the platforms are clean, or clean enough, that they’re not causing any kind of reputational damage. And we have seen that evolve into the industry best-practice standard.

“Until now, social media platforms have dealt with hate speech and toxic content in a self-regulating way. The exception is the Digital Services Act, passed in the EU in 2022, which enables the EU to fine social media platforms for not removing illegal hate speech. That’s the first time a government has stepped in and said, ‘You’re not doing a good enough job with this toxic content. We want more of it removed.’ So now, in addition to the financial incentive, we have a legislative incentive for content moderation.

“I think content moderation is more important today than it ever has been before because of the new risks that AI and ChatGPT-like platforms are posing in terms of validating toxic content. And I think this risk was highlighted even further in the largest hijacking of social media platforms. During the October 7 attack by Hamas, we had this outpouring of violent, hateful content that essentially perpetuated the attack far beyond its existence in space and time and terrorized entire populations. The terrorist organizations were able to exploit the social media platforms during the October 7 attacks, and it highlighted the need for more automation and content moderation than ever before.”
