Antisemitism in the era of social media and AI
Philip Nottingham: How antisemitic narratives are spread using online video on social media
In the digital age, everything is seen first through the prism of online video, shared primarily on social media. While evolution has primed us to trust what we see and hear above all else, the camera is not always truthful.
Much of what we see online is footage that has been manipulated, tweaked and “contextualised” to tell convenient antisemitic lies, then widely shared using distribution methods designed to win over younger audiences.
Rather than being the work of master editors, much of this dishonesty is achieved through simple adjustments to found footage, which is then actively promoted across TikTok, X, Instagram and YouTube by activists and state actors with malicious intent.
In this session, we’ll look at concrete examples of production techniques such as:
– Context Dropping
– Context Fabrication
– Cutting and Cropping
– Performed Fiction
– Baiting and Switching
– AI Editing
And distribution techniques such as:
– Algorithm manipulation
– Search engine optimisation
– Psychological triggers for virality
– Faking view counts
– Scaling faceless accounts
– Authority fabrication
– Targeted paid media
Together, these techniques reveal the means by which the oldest hatred is being spread within the new media landscape.
Katharina Soemer and Elisabeth Steffen: Potentials and limits of AI-supported analyses of antisemitism: two case studies on TikTok and Telegram
After October 7, 2023, global antisemitism surged, amplified by social media’s rapid spread of content. This highlights the need for AI-assisted analysis to detect patterns in large datasets. While supervised methods require costly labeled data and struggle with evolving antisemitic expressions, unsupervised approaches like BERTopic can analyze unlabeled data and adapt to dynamic contexts. In two case studies, we examine English TikTok content from a potentially antisemitic influencer and German Telegram posts from conspiracy theorists, addressing how 10/7 shaped discussions, content volume, emerging subtopics, and dominant antisemitic traits, while exploring AI’s role in antisemitism research.
The rise of global antisemitism after October 7, 2023 has been substantially amplified by social media, which enables the rapid dissemination of antisemitic content. This surge highlights the need for machine-assisted analysis of such material. Discovering patterns in large datasets is central to AI, and the methods for doing so are manifold. Given the vast volume of antisemitic posts, AI-supported detection of tropes and patterns is essential. However, such analysis requires researchers with field expertise, as current AI methods struggle with the unique challenges posed by antisemitic content, such as its latent, coded, and context-dependent nature.
How can AI analyze large volumes of potentially antisemitic data? Most research relies on supervised methods, which require manually labeled training data that is time-consuming and costly to create. Antisemitism datasets are scarce and often tied to the language and context of their collection time, limiting models’ ability to recognize new manifestations of antisemitism. Unsupervised methods like topic modeling can analyze large, unlabeled datasets and adapt to dynamic contexts, such as social media after 10/7. However, such methods require post-hoc interpretation of results by researchers with field knowledge, and do not allow for embedding this knowledge in the modeling process itself.
To examine developments on social media after 10/7, we apply an unsupervised AI approach (BERTopic) in two case studies: We analyze English-language TikTok videos and their comments by a potentially antisemitic influencer and German-language content from conspiracy theorist Telegram channels, and address the following questions: How did 10/7 and its aftermath shape discussions on these platforms, and how did the volume of related content change over time? What sub-topics emerged, and what antisemitic traits were dominant? How can unsupervised AI methods support the discovery of antisemitic content in large datasets? How can AI methods be applied in antisemitism research?
Vered Andre’ev: Denial on social media of the 7 October massacre: trends, narratives and “the cycle of violence”
Despite documentation of the violence committed against civilians being uploaded in real time to digital platforms by Hamas terrorists themselves, a malicious dis- and misinformation campaign spread quickly across social media the day after October 7, distorting and denying that the attacks took place. CyberWell dedicated its resources to monitoring and analyzing this discourse.
CyberWell’s initial analysis drew on a dataset of 313 pieces of content collected from Facebook, Instagram, TikTok, X and YouTube, which together gained over 25 million views. We identified three main sub-narratives: there was no rape; Israel orchestrated the violence; Israel and Jews profited from the massacre. Like Holocaust denial, these narratives are fundamentally antisemitic, as they erase or distort atrocities committed against Jews with the goal of justifying further hate and violent acts. CyberWell continues to detect the same cycle of violent event denial online in the wake of ongoing antisemitic attacks.
This report will detail these narratives, incorporate metrics from a dataset of 500 October 7-denying posts, emphasize the need for more effective action against digital antisemitism, particularly in the context of violent event denial and distortion, underscore the urgency of improving platform policies and enforcement, and offer recommendations to combat the spread of antisemitic content in our online spaces.
Felicity Izhari: Automating media bias detection with AI: research and application
During WWII in my hometown in Germany, media coverage of alleged Jewish aggression far away was weaponized to justify the eviction and expropriation of Jews. Today, a similar pattern emerges as anti-Israel bias in the media fuels hostility toward Jewish communities. Yet detecting this bias at scale remains a challenge. Building on the analytical framework of the BBC Report and the labor-intensive research behind it, this presentation explores how AI can make bias detection faster, scalable, and continuous. This work adapts the BBC Report methodology to develop a bias detection engine that automates large-scale analysis. The engine is applied to coverage from a self-regulated, left-leaning outlet, contrasting it with the regulated, centrist BBC. It focuses on omission patterns, which become clear at scale and can support evidence-based calls for balanced reporting. A case study comparing media coverage of antisemitic versus Islamophobic incidents investigates how equivalent topics might be used to approximate a bias index.
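One plausible way to approximate a bias index from equivalent topics, and this is purely an illustrative sketch, not the presenter's actual engine or methodology, is to compare what share of incidents of each type receives coverage and take the ratio. All names and numbers below are hypothetical placeholders.

```python
# Illustrative sketch only (not the presenter's bias detection engine):
# approximate a coverage-based bias index by comparing how often an
# outlet covers two equivalent incident types. All figures are invented.

def coverage_rate(articles_covering: int, incidents: int) -> float:
    """Share of incidents that received at least one article."""
    return articles_covering / incidents if incidents else 0.0

def bias_index(rate_a: float, rate_b: float) -> float:
    """Ratio of coverage rates; 1.0 would indicate balanced coverage."""
    return rate_a / rate_b if rate_b else float("inf")

# Hypothetical counts for two equivalent topics in one outlet
rate_antisemitic = coverage_rate(articles_covering=12, incidents=60)
rate_islamophobic = coverage_rate(articles_covering=30, incidents=60)

print(bias_index(rate_antisemitic, rate_islamophobic))  # 0.4
```

A value well below 1.0 on such an index would flag an omission pattern of the kind the talk describes, which only becomes visible once coverage is measured at scale rather than article by article.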
This talk examines AI-assisted monitoring of anti-Israel bias and how such a tool can deepen understanding of bias patterns and improve response strategies. As Philip Graham of the Washington Post said: “Journalism is the first rough draft of history.” If we fail to challenge bias where it begins, we risk watching history repeat itself.
