AI Pays If You Talk Like a Man
EPISODE SUMMARY
This week, Remoy shows us another side of AI with the help of grad researcher Maria Teleki. How is it that AI and podcasts come together to reinforce patriarchy…and shape our socioeconomic realities? Let’s get into it!
EPISODE NOTES
In this week’s episode, Remoy takes us to another side of AI. Grad computer science researcher Maria Teleki lends her findings to shed light on how AI feeds our existing patriarchal notions and structures.
What is a large language model? We’re sure you’ve heard about this term, also known as an LLM, that’s becoming more and more relevant to our everyday lives.
Remoy gives us the definition.
Maria follows up with how this technology learns about us and how we speak. Are there speech differences between genders? Maria’s research searches for answers to that question in…podcasts.
A particular word tends to emerge in women’s speech patterns.
But it’s not just that one word. Maria and her team’s research uncovers other common words and patterns in the ways people speak to their own gender.
So what does that mean for LLMs? Maria explains.
LLMs learn these gendered speech patterns, and the patterns baked into their training shape their responses. If that training stems from patriarchy, the responses will be patriarchal. The bias thrives.
Maria continues, breaking down how users get better results when LLMs read their speech as men’s, especially in more profitable sectors.
The bias doesn’t stop at gender. Race and sexuality biases in LLMs also create gaps and inequalities in technology usage.
There is hope! Maria gives us a silver lining.
Referenced on this episode:
That paper that Maria and her colleagues wrote that was the basis for this episode? You can go more into those technomasculine AI weeds here: Masculine Defaults via Gendered Discourse in Podcasts and Large Language Models
And you can see, at scale, the work Maria is continuing to dig into on her GitHub page.
COMPANION PIECES:
Some folks are uncovering AI bias in topics other than speech at the Algorithmic Justice League
More on how AI biases impact folks’ bottom line based on gender
The UN has more to say about AI, gender bias, and development
Our Guest This Week
Maria Teleki
Maria Teleki is a fourth-year PhD Student in Computer Science at Texas A&M University.
Her research rethinks spoken language understanding by modeling disfluency, spontaneity, and variability as fundamental features of human communication. She aims to develop next-generation conversational AI that thrives under real-world conditions — systems that generalize across speakers, domains, and contexts to power scalable, speech-centric applications in information access, recommendation, and decision support.