Wearables are deeply personal: to reach scale, we need to build different form factors and styles that fit everyone's needs. While we are building a suite of glasses and camera-equipped EMG watches, some people may prefer not to wear glasses, or find themselves in situations where glasses do not work well.
We explored a new kind of hearable that can be carried all day, quickly popped on, and that understands the world around you, making a helpful, co-present AI always available.
In designing AI-driven experiences, we found latency to be the biggest friction point, especially in multimodal interactions; text-based interactions, by contrast, often sufficed. Wakeword-less, multi-turn interactions felt more natural and engaging, while continuous camera sensing unlocked new possibilities but demanded optimized connectivity and LLM infrastructure. LLMs handled overlapping images well, though precise targeting, such as pointing at an object, improved recognition. Finally, even light personalization of the AI's personality significantly enhanced user engagement, creating a stronger connection and more intuitive experiences.