
‘Phones had their time’: Meta UK chief Matras pushes smart glasses future

Matras: 'Phones had their time'. Image Credit: Advertising Week Europe 2024

“Phones had their time for the last few decades. The next form factor is going to be smart glasses.”

Derya Matras, Meta’s VP, UK, Northern Europe, Middle East and Africa, told a packed crowd at Advertising Week Europe in London today that the tech company believes that in the near future, people will want “different categories of devices” to interact with online platforms.

That includes not just virtual-reality headsets like Meta’s Quest or the Apple Vision Pro, but also smart glasses, like those Meta makes in partnership with Ray-Ban.

“At Meta, AI and the metaverse are intrinsically linked,” Matras said, adding that the industry “might have underestimated” how quickly AI technology is converging with what Meta calls “the metaverse.”

Multimodal learning can unlock AI development

As Meta develops its large language model Llama 3, it is continuing to add multimodal capabilities to the product, meaning the AI will be able to respond not just to text, but also to audio and visual information.

Last month, the company announced it would roll out integration of Llama 3 within its apps, including Facebook and Instagram, allowing users to access its generative AI assistant without needing to exit the Meta experience.

Can smart glasses be a key tool for creators?

But Matras appeared most interested in the future possibilities of integrating Llama 3 into the company’s smart glasses, which let wearers press a button to ask an AI assistant to, for example, help pick out an outfit based on clothes it can see through the glasses’ cameras.

“It’s seamless, like you have an assistant with you, always,” said Matras. She added that the possibilities are particularly great for accessibility, such as giving smart glasses wearers a set of AI “eyes” that can describe physical environments and provide localised information.

The presentation underscored the rapid development of new use cases for increasingly sophisticated generative AI models, and came on the back of a presentation earlier this week by competitor AI company OpenAI, which demonstrated the multimodal capabilities of its own updated AI model, GPT-4o.

Matras further suggested that giving generative AI the capacity for multimodal learning will be key to developing the technology in the direction of artificial general intelligence (that is, AI that can perceive the world and reason on its own, rather than simply reproduce knowledge learned by a large language model), which she argued is still a long way off.

Importance for advertisers and open-source developers

During her presentation, Matras walked through additional developments Meta has made in artificial intelligence, including not just its efforts in generative AI with Llama and its integration across its suite of apps, but also the benefits of Meta’s AI tools to advertisers through its Advantage Plus programme.

Earlier this month, Meta launched enhancements to its generative AI tools for advertisers, including capabilities to aid in creative image and text generation.

Matras highlighted that the tools can help marketers “build the language […] and consistent tone of voice for your brand,” in a “seamlessly integrated” way moving forward.


Matras also stressed the importance of Meta’s stance toward open-sourcing its AI, arguing that open-source developers can help “make it safer” by having “more eyes” on AI products as they develop. She added that by open-sourcing its AI model, companies “downstream the value chain” will be able to benefit directly from AI without having to invest in building such models themselves.

She explained: “We are empowering the downstream ecosystem to work with these models to make amazing apps that everyone can use.”
