Meta, the tech giant, is rolling out a series of innovative features, including voice mode and image editing capabilities within its chat function.
Key Points
- Meta, the tech giant, is rolling out a series of innovative features, including voice mode and image editing capabilities within its chat function
- Rowan Cheung, the founder of AI newsletter The Rundown AI, shared the news on X
- Meta is testing 'Imagined for You' AI-generated content that will show up on users' Facebook and Instagram feeds
Rowan Cheung, the founder of AI newsletter The Rundown AI, shared the news on X.
Per his post, Meta AI now enables users to share photos and receive replies directly within the chat interface, similar to ChatGPT. Meta goes a step further, however, by letting users edit these images — removing objects, adding accessories like hats, or changing backgrounds — all within the conversation flow. This feature is currently available only in the United States.
Moreover, Meta is introducing experimental AI features for Reels, including automatic video dubbing and lip-syncing across various languages.
AI-Powered Meta Innovations: New Models, On-Device Capabilities, and Enhanced AR Glasses
Meta is also testing ‘Imagined for You’ AI-generated content that will appear on users’ Facebook and Instagram feeds. Users can interact with this content by tapping a post to take it in new directions. They can also swipe to see more real-time, tailored suggestions. Cheung applauded the move and said this feature “is coming way sooner than I expected.”
In addition, Meta is releasing Llama 3.2 models, including medium-sized vision-language models (11B and 90B) that are competitive with Claude 3 Haiku and GPT-4o mini on image-recognition tasks.
Along with these, two lightweight, text-only models (1B and 3B) are also being introduced, designed to fit onto edge and mobile devices. They support a 128K-token context length and are state-of-the-art for many on-device use cases.
According to Meta, running these models locally ensures prompt responses and enhances data privacy by keeping user information on the device.
Furthermore, Meta is upgrading its Ray-Ban Meta glasses with new AI improvements. These enhancements include the ability to remember things seen and set reminders, scan QR codes, view real-time video, and offer live language translation.
The company has also announced Orion, new AR glasses that integrate augmented reality and artificial intelligence into everyday use, positioning them as a natural platform for AI wearables.
The voice mode feature is slated to roll out in the United States, Canada, Australia, and New Zealand over the next month, while other features are being introduced gradually across different platforms. These innovations mark a significant leap forward in Meta’s commitment to enhancing user experience through cutting-edge AI technology.
