“06880” readers are bright people.
We know that our social media feeds are manipulated by algorithms. The stories, videos, images, ads and clickable links I see are different from yours.
We know we are getting a skewed view of the world — one that reinforces what we already believe, and separates us further from those who believe differently.
We know all that. But — as we scroll, click and scroll again, endlessly and mindlessly — we seldom think about what those seemingly ordinary posts mean to our lives.
We think of social media as a galaxy of free speech.
In reality, it’s a universe of hate speech.
Dr. Matthias Becker has spent years studying those ideas. He just finished a $3 million-plus research grant on antisemitism, and wrote a book about it.

Dr. Matthias Becker
In his new position at New York University as the Address Hate Research Scholar, he is exploring digital hate, implicit communication, and the social impact of AI-driven platforms.
He regularly advises governments and tech companies on ways to mitigate online hatred.
On April 21 (7 p.m., Westport Library), Dr. Becker brings his research and insights to Westport.
“Decoding Bias & Hate on Social Media” is the next in a series of Common Ground Initiative programs. CGI hosts positive conversations on how to encourage respectful, constructive dialogue, and tackle challenging issues.
Dr. Becker is an engaging, thoughtful speaker. His insights are relevant to anyone on social media — in other words, everyone.
But they’re especially important for young people, who gobble up social media constantly, and may be less cognizant of what they see and why. The hate speech they see online — not always identifiable as such — can have an especially pernicious effect on developing minds.

So as part of the April 21 event, the Common Ground Initiative is sponsoring a “Decode Hate Video Challenge.”
Students throughout Fairfield County are invited to meet with Dr. Becker at 6 p.m. Over pizza, they’ll learn about explicit and covert hate and bias online — from obvious slurs to hidden memes.
At 7, they’ll listen to his talk. Then, they’re challenged to make a 1- to 2-minute video, showing any kind of hate, bias or manipulation online.
It can be related to sports, music, movies, pop culture, race, religion, ethnicity, sexuality, misogyny — or anything else. The video should be personal, and include ideas on what people or platforms might do differently.
The deadline is May 15. On May 28 the top 5 videos will be judged by a VIP panel — for cash prizes of $1,000, $750 and $500.
“Hate doesn’t announce itself,” Dr. Becker says. “Neither does the AI that’s spreading it.

“Most of what circulates online doesn’t look like the crude hatred of decades past. It looks like irony, insinuation, strategic ambiguity — ideas traveling in plain sight, just below the threshold of what most people would call extreme.
“The distinction between free speech and hate speech matters enormously here. And it’s precisely this coded, ambiguous nature of modern hate that makes drawing that line so difficult, and so consequential.
“That also makes these expressions extraordinarily hard to detect, for humans and AI systems alike.”
Dr. Becker’s research addresses 3 elements of the problem: “coordinated bad actors who deliberately exploit divisive issues, and manufacture disinformation at scale”; platform algorithms that reward outrage and amplify emotionally charged content; and elements of online communication itself — anonymity, mutual reinforcement, constant exposure to extremity — that “turn ordinary users into unwitting amplifiers of hate.”
An even deeper problem, Dr. Becker says: “Most public debate about AI and hate focuses on what AI produces — offensive outputs, extremist content.
“That’s real. But it’s downstream of a harder issue: what AI absorbs.
“Every major model shows consistent bias toward hateful associations — not because engineers are hateful, but because models were trained on centuries of human text in which those associations are already embedded.
“You can add guardrails. The underlying associations remain.”
(“Decoding Bias & Hate on Social Media” is free. Click here for more information, and to register.)
(“06880” covers upcoming events, technology, cultural trends — and, like today, their intersection. If you appreciate stories like this, please click here to support our work. Thank you!)

Timely and important program. Many thanks to the organizers.
I recall how five years ago, when people were getting really tired of the pandemic, there was a spate of hate crimes against Asians. When the ADL predicted that the hate would turn antisemitic, I thought that was a stretch. Then a woman with whom I used to work put up a post on Facebook claiming that a certain billionaire Holocaust survivor had started the pandemic! I unfriended and blocked her.
That was just the start of it. The increase in antisemitic incidents was already well under way before the October 7, 2023, terrorist attack.
I am not Jewish, by the way. But I do not countenance antisemitism.