
Voice To Become The Next Interface For AI: ElevenLabs CEO Mati Staniszewski


Voice will emerge as the dominant interface for artificial intelligence, ElevenLabs co-founder and CEO Mati Staniszewski has said.

People will increasingly interact with machines and AI models by speaking, moving beyond text and screens, added Staniszewski, who will appear at NDTV’s Ind.ai Summit 2026 on Wednesday at ITC Maurya in New Delhi.

Earlier, speaking at the Web Summit in Doha, Staniszewski said that voice models, such as those created by ElevenLabs, have recently progressed beyond merely imitating human speech. Their output now carries emotion and intonation, working alongside the reasoning capabilities of large language models. The result, he argued, is a change in how people use technology.

Talking about the future, Staniszewski said, “Hopefully all our phones will go back in our pockets, and we can immerse ourselves in the real world around us, with voice as the mechanism that controls technology.”

This idea helped ElevenLabs raise $500 million this week at a valuation of $11 billion. The AI sector as a whole also appears to be shifting its focus to voice models. Apple is reportedly developing speech-adjacent, always-on technologies through acquisitions such as Q.ai, while OpenAI and Google have both made voice a key component of their next-generation strategies.


Voice is widely seen as the next stage of AI interfaces: as AI permeates wearables, automobiles and other new hardware, control is shifting from tapping displays to speaking.

According to Staniszewski, future voice systems will rely more on context and on memory accumulated over time, rather than requiring users to repeat every command. This will make interactions feel more natural and demand less effort from users.

While high-quality audio models have largely lived in the cloud, Staniszewski said ElevenLabs is working towards a hybrid approach that combines cloud and on-device processing. The move is intended to support new hardware, such as headphones and other wearables, where voice becomes a constant companion rather than a feature users switch on occasionally.

ElevenLabs and Meta are already partnering to integrate ElevenLabs’ voice technology into products like Instagram and the company’s virtual reality platform, Horizon Worlds.

Staniszewski added that he would be open to collaborating with Meta on its Ray-Ban smart glasses as voice-driven interfaces spread to other form factors.

At NDTV’s Ind.ai Summit 2026, Staniszewski will discuss AI voice technology that turns synthetic speech into hyper-realistic audio, and how it will transform content creation, entertainment and human-machine interaction.

VoM News Desk

VoM News is an online news portal in Jammu and Kashmir offering regional, national and global news.
