Google DeepMind has hired Hume AI’s CEO and top engineers to enhance voice AI, reflecting its growing importance in user interactions.

Financial terms were not disclosed, but Hume AI says it will continue sharing its technology with other AI labs. The agreement underscores a broader industry bet that voice interfaces, and the ability to read emotion through voice, will be central to how customers interact with AI.
Hume AI projects $100 million in revenue in 2026 from partnerships with AI labs on improved voice assistants. John Beadle, a cofounder of AEGIS Ventures, which invested in Hume AI, says the company has raised $74 million to date.
Alan Cowen, Hume AI's CEO, who holds a PhD in psychology, will join Google DeepMind along with roughly seven other engineers to help integrate voice and emotional understanding into Google's latest AI models.
Hume AI has invested heavily in voice systems that can detect emotion, training them on real conversations to pick up emotional cues. At Google, Cowen and his team will continue advancing voice and emotion technology.
Andrew Ettinger, Hume AI's new CEO, believes voice will soon be the primary way people interact with AI, and says the company will unveil its new models shortly.
Beadle argues that AI capable of detecting a user's emotional state and adjusting its responses accordingly will matter greatly to customers, and he sees substantial room to improve how well AI picks up on users' emotions.
The Hume AI deal could strengthen Google's position against OpenAI's ChatGPT, which already offers a natural-sounding voice mode. Google has also partnered with Apple on a new version of Siri powered by Gemini.
The deal illustrates how big tech companies can acquire top talent without a conventional buyout. The Federal Trade Commission is now scrutinizing these "acqui-hire" arrangements more closely.
Google DeepMind previously paid $3 billion to license technology from Character.ai, a maker of lifelike chatbots, and companies such as Microsoft and Amazon have struck similar deals to hire top talent.