In Dublin, this founder is building digital immunity to disinformation
Angelika Sharygina's interactive games expose Irish students to small doses of disinformation to build resilience.
Meet Angelika Sharygina, an Afghan-Ukrainian PhD researcher, policy advisor, and one of Ireland’s 30 under 30 award recipients. Focused on disinformation, co-designing information literacy solutions, and preventing the spread of misinformation in war zones, she’s now intent on helping Irish students identify AI-generated media that blurs the line between what’s real and what’s false.
In 2024, she gave a TED Talk titled “Verify Before You Amplify,” in which she spoke about the “dangerous simplicity with which AI can fabricate news.” Her strategy? To treat misinformation as a virus, one that can be countered with the right inoculation. “Many are treating [AI] as an all-answer encyclopedia,” she explains. “Are we really prepared to counter the bad actors that are going to use it to harm children?”
That’s why her startup is betting on a tactic she calls smart exposure methodology, in which you expose individuals to small doses of misinformation in a controlled environment to help them identify fake information in real life. Mathematically, treating misinformation like a virus is pretty accurate: it spreads much the same way as an epidemic, especially in high-risk environments.
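The analogy holds up in the modelling literature, too: rumour-spread models borrow directly from epidemiology. As a rough, illustrative sketch only (a toy SIR-style model with made-up parameters, not the model behind Angelika’s platform), here is how “inoculating” even a modest share of readers blunts the peak of an outbreak of a false claim:

```python
# A minimal sketch, assuming a classic SIR-style process for a false claim:
# "susceptible" readers can be exposed, "infected" readers share the claim,
# "recovered" readers have been inoculated (prebunked) and no longer spread it.
# All parameters are illustrative assumptions, not measured values.

def simulate(days=90, population=10_000, beta=0.35, gamma=0.1, inoculated=0.0):
    """Run a daily-step SIR simulation; return the peak share of active spreaders."""
    s = population * (1 - inoculated)   # susceptible readers
    i = 1.0                             # one initial spreader
    r = population * inoculated         # already-inoculated readers
    peak = 0.0
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i / population)
    return peak

# Prebunking a share of readers lowers the peak of the "outbreak".
print(f"no prebunking: peak spreaders = {simulate(inoculated=0.0):.1%}")
print(f"30% prebunked: peak spreaders = {simulate(inoculated=0.3):.1%}")
```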
When we talk, Angelika points out that many of the same methods used for years to spark violence in war zones and highly fraught regions of the world are now spreading to regions that were once considered stable democracies, such as Ireland, the UK, and the U.S. “I would say one of my biggest concerns is the lack of literacy on technological advancements,” she explains.
By that, Angelika means the use of generative AI to rapidly and convincingly create deepfakes, voice fakes, misleading information, and content designed specifically to incite violence. AI misinformation can affect everything from the stock market to medicine, and overall, humans aren’t great at spotting it. (According to MIT Technology Review, misinformation may even be more believable when it’s machine-generated than when it’s written by people.)
So she decided to do something about it. Drawing on her expertise in war-zone disinformation and its viral spread, she began designing interactive games in partnership with neuroscience experts, creating a low-stakes environment that safely exposes students to common scenarios in which they might encounter fake, misleading, or otherwise manipulative media.
Digital bots, for instance, might pose as friends and send messages asking for emergency funds, while social media posts coax children into handing over personal data without realizing the risk. With the rise of deepfakes, it can be deeply challenging to know for sure what’s real and what’s not. For younger students especially, Angelika explains, “if someone messages them online and says ‘this is your cousin, this is your friend’ and they ask for their location using a voice fake, they need to be prepared.”
After all, even though we often think of children and teens as more digitally native, more fluent with technology, and more skeptical than adults, that’s not necessarily the case. In fact, they’re deeply at risk of falling for misleading content. As we talk, Angelika cites a study from the Commission on Fake News and the Teaching of Critical Literacy Skills in Schools that puts the share of young people who can accurately flag fake news at a mere 2%. She pauses, then puts it this way: “Kids are willing to believe a lot of what they see online.”
The tricky part of misinformation, and of AI more broadly, is that it’s difficult to shut Pandora’s box once it’s been opened. Artificial intelligence is changing the white-collar workforce, and “you can’t just forbid or ban AI outright,” Angelika cautions. “Or else the kids are behind.” As Derek Thompson writes in The Atlantic, AI may already be creating an entry-level skills gap; one theory is that AI is “competing” with graduates for key first jobs. Navigating AI at work is rapidly becoming a skill you can’t escape, not if you want to keep pace in an AI-skilled workforce.
Young users aren’t the only ones who could benefit from a platform of this sort. Elderly adults who live alone are routinely targeted in financial scams and inadvertently absorb AI-generated political content and news through Facebook feeds and adverts. Vulnerable populations are at greater risk from disinformation campaigns. And those facing a diagnosis of Alzheimer’s or another cognitive disorder could benefit from additional protection against scams designed to play on confusion and loneliness. In all of these cases, individuals would be better served if they were equipped to recognize disinformation online.
But at the moment, Angelika’s mission is extremely focused: to create a platform that prevents the spread of misinformation while remaining accessible to students. Her first pilot launched in the United Kingdom. Now, co-designing the program in partnership with Digital Hub Ireland, she’s working with Irish 10- and 11-year-olds to scale the second. In Dublin, she wants to start working with more students, parents, teachers, and school boards, integrating AI coaches into the platform to create tight feedback loops and help children quickly build resilience.
Why not write another paper? She laughs. As part of her policy work, she’s already written plenty of AI guidelines and long documents for national governments that kids will never read. “It’s already very difficult to capture their attention,” she points out. So she thinks it’s time for something more practical and pertinent: a platform that exposes students to situations they might face in real life and then helps them avoid falling for the same tricks in the future.
“Basically like a vaccine,” she explains. “So they can be immune to the larger virus.”🧬
— Elise Leise, Editorial @ Nova