Rifat Ahmed | রিফাত আহমেদ

Meta Connect 2023: More AI, smart Ray Bans and mixed reality

Catch up on the exciting new Meta AI, Quest 3, and Smart Glasses announcements from Meta Connect 2023 in today’s The Business Standard

You can read it in today’s newspaper (30 September 2023) or online: Meta Connect 2023: More AI, smart Ray Bans and mixed reality.

 

Read it online at TBS

Also shared on Instagram and Threads (@rifat5670), and on Tumblr: https://www.tumblr.com/rifat5670/729859661602422785/via-meta-connect-2023-more-ai-smart-ray-bans

Read the unedited original article below: 

Meta Connect 2023 — Connecting virtuality and reality using Meta AI, Quest 3 and Smart Glasses

The two-day Meta Connect is nothing short of a celebration for developers and VR enthusiasts across the world. With announcements of new hardware and of features for the Meta suite of apps, Mark Zuckerberg and his team never fail to grab the attention of everyday Facebook users and dedicated Quest fans alike.
Apart from the excitement of exploring new dimensions in VR, the community had even higher expectations for this year's Connect, given Meta's AI-related developments throughout the year, and it was not disappointed.
The first and most anticipated announcement was the introduction of Quest 3, Meta's new mainstream mixed-reality device. After the success and widespread popularity of Quest 2, this was an announcement VR enthusiasts and developers had been awaiting for years.
On first impression, Quest 3 is a worthy successor to the unanimously praised Quest 2, which had already changed the landscape of virtual reality and its use in personal multimedia consumption. The new version promises even more with its comfortable, wire-free and, more importantly, much thinner form factor.
The new pancake lenses and the redesigned body make Quest 3 40% slimmer than its predecessor. Besides the lenses, the display itself got a significant upgrade, with notably more pixels than the last gen, 4K resolution and improved color accuracy. Enhanced 3D audio rounds out an even more immersive experience.
On the specs side of things, the new Quest 3 is now equipped with a more capable Snapdragon XR2 Gen 2 chip, designed specifically to power VR, MR, and XR devices.
Even though it still comes with a pair of upgraded, streamlined controllers, the powerful chip allows for more accurate hand tracking, making Quest 3 an almost controller-free experience, much like the Apple Vision Pro.
But Quest 3's claim to fame is its excellent mapping ability, using the built-in cameras and sensors. Unlike the older Quest devices, Meta is actually marketing Quest 3 as a mixed-reality device, ditching its VR persona. This marketing shift seems logical and in tune with how the device actually works.
Where Quest 2 and the original Quest were showcased as VR gear that lets users slide into a virtual world, Quest 3 takes a somewhat opposite approach. Instead of placing the user entirely within a virtual reality, it augments virtuality within our three-dimensional reality, creating an accessible bridge between the virtual and the real.
Thanks to the mapping capabilities of the dedicated camera and depth sensor on the headset, Quest 3 not only offers a fully immersive virtual reality to dive into but also allows digital objects to be augmented within the real world, creating a mixed reality where users can see and interact with both the digital and the physical.
Once a room is mapped, its dimensions and the belongings inside it function as surfaces for digital objects to interact with.
This interaction can take either an anchoring form, where users simply drop a digital object, be it a screen or an ‘augment’, onto a surface, where it persists, or an interacting form, where the digital object bounces off or slides down the physical surface.
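To make the two interaction forms concrete, here is a minimal Python sketch; the Surface and Augment types and the place function are invented for illustration and are not part of any Meta SDK.

```python
from dataclasses import dataclass

@dataclass
class Surface:
    name: str        # e.g. "desk" or "wall", discovered during room mapping
    height_m: float  # height above the floor, in metres

@dataclass
class Augment:
    label: str
    anchored: bool   # True: pinned to a surface; False: physics-driven

def place(augment: Augment, surface: Surface) -> str:
    """Describe what happens when a digital object meets a mapped surface."""
    if augment.anchored:
        # Anchoring form: the object is dropped onto the surface and persists there.
        return f"{augment.label} is anchored to the {surface.name} and stays put."
    # Interacting form: the object treats the surface as a physical boundary.
    return f"{augment.label} bounces off or slides down the {surface.name}."

print(place(Augment("virtual screen", anchored=True), Surface("wall", 1.5)))
print(place(Augment("bouncing ball", anchored=False), Surface("desk", 0.75)))
```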
This blended reality also makes gaming on Quest much more immersive and expressive. Instead of playing in the screen-like atmosphere of Quest or Quest 2, gamers can now take advantage of their actual surroundings and drop a giant screen in the middle of the room, offering a pseudo-console gaming experience with friends in real life.
This screen augmentation, coupled with twice the GPU power in the new Snapdragon chip, makes gaming on Quest 3 a smooth and rich experience, with more detailed titles coming through the Xbox Cloud Gaming integration.
But Quest 3 is meant to be something people use every day in various aspects of their lives, including work, which is why Meta has announced integrations with platforms like Microsoft 365 and Adobe Substance 3D Modeler.
Zuckerberg and his team then went on to show the AI developments they had made over the last few months.
Besides building and sharing the open-source language model Llama 2 with the world, Mark’s team spent a significant amount of time showcasing their generative AIs.
They announced their own chatbot, aptly named Meta AI, which can answer basic questions and handle requests like other AI or smartphone assistants on the market. But since Meta AI is powered by both Meta's models and Microsoft's Bing search, it can pull information from the internet in real time, making it more up to date than OpenAI's ChatGPT.
Once available worldwide, Meta AI can be summoned across Meta products like Messenger, WhatsApp and Instagram Direct.
To make the AI chatting experience even more efficient and customized, Meta is also releasing a few variations of it as standalone personas, like Max the sous chef or Lily the personal editor. Meta is also giving these AI personas voices in the coming months, making interactions more immersive.
Emu is another AI project that the Meta team presented during the keynote. Expressive Media Universe, or Emu, is an image-generation AI model that can create images from text prompts, like DALL-E or Midjourney.
But unlike those generators, Emu is much faster: on average, it takes around five seconds to output an image from a prompt, making it faster than almost all current image-generation models.
This image-generation AI, along with the Meta AI chatbot, will also be accessible from any personal or group chat across Meta platforms. Simply typing ‘@Meta AI’ triggers the chatbot within a conversation, and ‘@Meta AI /imagine’ followed by a text prompt creates an image using Emu. The image generator can also be used to create one-of-a-kind stickers within chats.
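As an illustration of how these chat triggers could be routed, here is a minimal Python sketch. The trigger strings come from the announcement; the routing function and everything around it are assumptions, not Meta's actual implementation.

```python
# Hypothetical router for the in-chat triggers described above.
IMAGINE_PREFIX = "@Meta AI /imagine"
CHAT_PREFIX = "@Meta AI"

def route_message(text: str) -> str:
    text = text.strip()
    # Check the longer /imagine trigger first so it isn't swallowed
    # by the plain chatbot trigger.
    if text.startswith(IMAGINE_PREFIX):
        prompt = text[len(IMAGINE_PREFIX):].strip()
        return f"Emu: generate an image for prompt '{prompt}'"
    if text.startswith(CHAT_PREFIX):
        query = text[len(CHAT_PREFIX):].strip()
        return f"Meta AI chatbot: answer '{query}'"
    return "regular message; no AI involved"

print(route_message("@Meta AI /imagine a red panda on a skateboard"))
print(route_message("@Meta AI what's the weather in Dhaka?"))
print(route_message("see you at 8"))
```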
AI is also coming to Meta apps like Instagram.
Initially, AI will appear in the app's photo-editing section, where it will be able to quickly change the background of a photo or even restyle the subject from a text prompt.
All these AI-powered capabilities and years of experience developing Quest devices are coming together in another exciting Meta product: the new Ray-Ban Meta Smart Glasses.
The new Smart Glasses are the result of Meta's collaboration with EssilorLuxottica, the maker of Ray-Ban.
This pair of ordinary-looking glasses houses a 12 MP ultra-wide camera and the Snapdragon AR1 Gen 1 chip, which together enable it to record video at 1080p. To store the videos and photos, there is 32 GB of internal storage built in.
Taking advantage of the camera's first-person point of view, Meta designed the glasses to record videos and take photos that are meant to be shared directly to Meta's suite of products. That is why Meta baked live streaming from the Smart Glasses into apps like Instagram.
To enhance live-streaming quality, the pair is equipped with a five-microphone array and custom-designed speakers with 50% more maximum volume and twice the bass of the last-gen glasses.
On top of that, the Ray-Ban Meta Smart Glasses are the first device with Meta AI built in. That means all the features of Meta AI, including querying the chatbot and making action requests, come baked into the pair.
This integration will allow users to both live the moment and capture it to savor later, connecting virtuality and reality together seamlessly.
Simply speaking a request like ‘Hey Meta, take a photo and send it to mom’ triggers a sequence of actions: taking a photo from the user's point of view using the front camera and sending it to the recipient through the chat platform that the sender and receiver use most.
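The sequence behind such a request might look like the following hypothetical Python sketch; take_photo, most_used_platform and handle_request are stand-ins invented for illustration, not a real Meta Glasses API.

```python
def take_photo() -> bytes:
    # Stand-in for capturing a frame from the glasses' front camera.
    return b"<jpeg bytes from the wearer's point of view>"

def most_used_platform(sender: str, recipient: str) -> str:
    # The article says the photo goes out via the chat platform the
    # sender-recipient pair uses most; assume a lookup like this one.
    return "WhatsApp"

def handle_request(utterance: str, sender: str = "me") -> None:
    if "take a photo" in utterance and "send it to" in utterance:
        recipient = utterance.rsplit("send it to", 1)[1].strip()
        photo = take_photo()
        platform = most_used_platform(sender, recipient)
        print(f"Sending {len(photo)}-byte photo to {recipient} via {platform}")

handle_request("Hey Meta, take a photo and send it to mom")
```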
Next year, Meta is also making the glasses multimodal by allowing the integrated assistant to see through the camera.
This will open the door to even more helpful queries like ‘What am I seeing?’. Such a prompt will let Meta AI view what the wearer sees through the camera and cross-match it against objects in its training data or images from the internet.
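In other words, such a query reduces to a capture-then-identify flow. Below is a hypothetical Python sketch, with capture_frame and identify_object as invented stand-ins for a pipeline Meta has not published.

```python
def capture_frame() -> bytes:
    # Stand-in for grabbing the current frame from the glasses' camera.
    return b"<jpeg bytes of whatever the wearer is looking at>"

def identify_object(frame: bytes) -> str:
    # Stand-in for cross-matching the frame against the model's training
    # data or images from the internet, as described above.
    return "a tabby cat sitting on a windowsill"

def answer_query(query: str) -> str:
    if query.strip().lower() == "what am i seeing?":
        return f"You are looking at {identify_object(capture_frame())}."
    return "This sketch only handles the 'What am I seeing?' query."

print(answer_query("What am I seeing?"))
```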
As expected, these AI integrations in Meta's messaging platforms and Smart Glasses could be used to create dangerous, unethical or even illegal outputs that harm users in many ways. That is why Meta is rolling these features out slowly across its products and allowing enough time to iron out mishaps and misuse through red-teaming with experts.
Meta has also announced AI Studio, a platform for using the capabilities of Meta AI to build new experiences. However, since these AI integrations are expected to roll out more slowly than Meta's other announced products because of the associated safety concerns, it will be a while before developers can take full advantage of them.

I'd love to know your thoughts...
