Enhancing virtual reality accessibility

Evolving the next generation of human connection in Meta's Reality Labs

Meta's Reality Labs team is at the forefront of building next-generation technologies that bridge the physical and digital worlds. Focused on virtual reality (VR), augmented reality (AR), and wearable technologies, Reality Labs aims to transform how people connect, work, and play in the metaverse.
As a product designer on the XR Insights team, I worked across a range of projects aimed at improving user accessibility in VR experiences, including:
+ Improved device localization by designing a more intuitive guided user experience
+ Enhanced third-party developer capabilities by designing XR Simulator, allowing developers to build and test their virtual experiences without a headset
+ Designed new features and system components and improved the UI/UX for internal AR/VR training tools on the MetaSim platform
+ Ideated user experience use cases during a product exploration for an AI-powered hearable device
+ Led the UI/UX for the internal Codec Avatar research & development platform used for testing and debugging generative 3D models

Unifying the approach to guided user localization

Localization is the ability of the VR headset to track its position and orientation in a physical space. Using a mix of sensors and cameras and a technique known as SLAM (simultaneous localization and mapping), the headset can determine where it is relative to its starting point.
In the current system, SLAM gets 5 seconds to localize the device. If it fails, users are presented with a Guardian Not Found modal to initiate the UX flow for creating a new room map.
Although SLAM's localization success rate improves with time (84% after 5 seconds vs. 92% after 15 seconds), users opt to draw a new boundary rather than wait.
By creating a new guided localization UX, we can offer a low-friction experience that maximizes localization success.
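
To make the tradeoff concrete, here is a minimal sketch of the timeout-and-fallback flow described above. The slam and ui interfaces, method names, and polling details are hypothetical placeholders, not the actual Quest localization code; only the 5-second and 15-second windows come from the text.

```python
import time

# Hypothetical sketch of the localization flow; names and APIs are
# illustrative placeholders, not actual Quest system interfaces.

LOCALIZATION_TIMEOUT_S = 5.0   # current system: SLAM gets 5 seconds
EXTENDED_WINDOW_S = 10.0       # success rises from ~84% at 5 s to ~92% at 15 s


def try_localize(slam, deadline_s: float) -> bool:
    """Poll SLAM until it relocalizes against the stored room map or times out."""
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        if slam.is_localized():      # assumed query on the SLAM subsystem
            return True
        time.sleep(0.1)
    return False


def on_headset_donned(slam, ui):
    # Current behavior: give SLAM a short window, then fall back to the
    # Guardian Not Found modal and a full room-map redraw.
    if try_localize(slam, LOCALIZATION_TIMEOUT_S):
        ui.resume_session()
        return

    # Proposed guided localization: keep SLAM running while guiding the user
    # (nodes or paint chips) instead of immediately asking for a redraw.
    ui.start_guided_localization()
    if try_localize(slam, EXTENDED_WINDOW_S):
        ui.resume_session()
    else:
        ui.show_guardian_not_found_modal()   # last resort: draw a new boundary
```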

Concept 1

Nodes

Nodes are placed around the user's environment to help their device build the map. Users are drawn to the first node using spatial audio, with a line flowing to the next node once it is successfully identified.

Concept 2

Paint Chips

The user's environment is shrouded in a colored veil, indicating the areas of the map that have yet to be filled in. As the user looks around, geometric shapes, or paint chips, fade off and give way to a full color view of their space.

Shipped Experience

Room Capture Realignment

Available on the Quest Pro, Quest 3, and Quest 3S, device localization now occurs automatically when a user dons their headset or decides to draw a new Guardian boundary.
Leveraging the headset's high-resolution external cameras and depth-sensing algorithms, users experience persistent SLAM during their VR sessions as their device identifies object positions in real time.

Empowering third-party app development with XR Simulator

The Meta XR Simulator is a lightweight OpenXR runtime developed by Meta to assist developers in creating and testing virtual reality (VR) applications. It simulates Meta VR devices and their features at the API level, enabling developers to test and debug applications without constantly needing to wear a headset. This streamlines the development process and facilitates automation by simplifying testing procedures. The simulator is built on Immediate Mode GUI (IMGUI) for better integration with Unity and Unreal Engine.
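
As a rough illustration of how an application can be pointed at a simulated device rather than a physical headset, the sketch below overrides the OpenXR loader's active runtime via the standard XR_RUNTIME_JSON environment variable. The manifest path and app executable are placeholders, not the simulator's documented setup; consult the Meta XR Simulator documentation for the actual workflow.

```python
import os
import subprocess

# Placeholder path to the simulator's OpenXR runtime manifest (hypothetical).
SIMULATOR_MANIFEST = r"C:\path\to\meta_xr_simulator\simulator_runtime.json"

# XR_RUNTIME_JSON is the standard OpenXR loader override for the active runtime.
env = dict(os.environ, XR_RUNTIME_JSON=SIMULATOR_MANIFEST)

# Launch the app build (e.g., from Unity or Unreal) against the simulated device.
subprocess.run(["./MyVRApp"], env=env, check=True)
```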

Understand datasets and train VR models with the MetaSim platform

As more robust and complex datasets are developed to train the Quest devices to improve VR experiences, new internal tools used to monitor and analyze this data are needed. These tools also require periodic updates to ensure efficient use for XFN engineering teams.
+ 3D Asset Store: An internal Digital Asset Management (DAM) system available to all of Reality Labs
+ DAMIT: A framework aiming to boost the productivity of ML work streams by cataloging datasets and connected models
+ Gaia: Stores and accesses data in a secure, privacy-aware way within Meta's internal systems
+ JEF: An end-to-end framework for efficiently developing and executing large-scale pipelines, integrated with and built on top of Reality Labs infrastructure
+ Metro Pipeline: Exports Gaia files to Halo for annotation and writes Halo annotation results back to Gaia annotation storage
+ VRS Web Player: Plays back video streams in VRS files

A centralized place for end-to-end data visibility and management

As the MetaSim Connected Experiences workflows continue to evolve and new features and UI/UX paradigms are introduced, a tool-centric approach to using the platform becomes increasingly obsolete.
By designing each tool to comply with the XDS design language, a workflow-based approach to using these tools can take effect, making their functions and integrations more seamless.

Exploring a new kind of hearable, powered by AI

Wearables are deeply personal. We need to build different form factors and styles to fit the needs of everyone and reach scale. While we are building a suite of glasses and EMG watches with cameras, there are people who may prefer not to wear glasses or who are in situations where glasses may not work well.

We explored a new kind of hearable that can be carried all day, quickly popped on, and that understands the world around you, making for a helpful, co-present AI that's always available.

Concept C01

Headphones +AI

Designed to replace a user's existing headphones, delivering a high-quality listening experience coupled tightly with contextual AI.
This product builds upon the standards for headphones, excelling at audio streaming, hands-free calling, and noise cancellation, while delivering a set of helpful AI experiences. The product system consists of two in-ear headphones and a smart charging case.

Concept C02

AI Hearable

A single hearable device purpose-built for access to contextual AI throughout the day. The device is focused on voice interactions, leveraging cameras and microphones to analyze the nearby environment.
It does not displace your headphones, though it can fill momentary gaps in audio streaming from your phone. You can still comfortably wear in-ear headphones simultaneously for a proper listen.

The takeaways

In designing AI-driven experiences, we found latency to be the biggest friction point, especially in multimodal interactions where text-based solutions often sufficed. Wakeword-less, multi-turn interactions felt more natural and engaging, while continuous camera sensing unlocked new possibilities but required optimized connectivity and LLM infrastructure. LLMs performed well with overlapping images, though precise targeting, like pointing at objects, improved recognition. Finally, even light personalization of AI’s personality significantly enhanced user engagement, creating a stronger connection and more intuitive experiences.

Defining the future of telepresence with Codec Avatars

Ultra-realistic virtual representations that replicate users' facial expressions and movements with remarkable precision, Codec Avatars utilize advanced machine learning techniques and sophisticated hardware, including VR headsets equipped with eye- and face-tracking sensors. These avatars aim to revolutionize digital communication by enabling immersive telepresence experiences.
Supporting the Codec Avatar R&D team, I worked alongside engineers to design and develop the internal debugging platform, Workspace.

Review and compare generative avatars

The Avatars tab shows a gallery of all Codec Avatars a user has generated. Selecting a Codec Avatar sets the main avatar used in Workspace experiences and displays detailed information about its encoder and decoder.

Debug and playtest Avatars in Workspace Missions

Missions allow users to engage with the newest innovations in Codec Avatar technology and offer their perspectives and opinions in a research environment.
Each mission contains details regarding what is expected of the user, mission requirements, and expected completion time.
The Mission screen shows users their Codec Avatar alongside a survey form that presents a series of requests that users are asked to evaluate.

Designing in-call accessibility features

The action bar is pinned to the user's wrist in hand-gesture mode, and the controller's side trigger reveals or hides the call bar when using controllers.

Maturing the technology behind telepresence communication

As more users engage with Codec Avatars, Workspace will be essential in advancing the technology required to create a digital twin that offers a realistic expression of the user.

Let's build something great together.

If you like what you see and want to collaborate, get in touch!