Linda Wen, a design researcher at Microsoft Research, delivered an open studio session in which she shared insights from her work on designing AI systems that prioritize human agency. Drawing on her background in international relations and a professional trajectory that led her into design and AI, Linda emphasized the importance of building AI tools that empower users, especially those from marginalized communities.
Her talk centered on two projects developed at Microsoft: Find My Things, an AI-powered accessibility feature, and Muse, a new generative AI model designed to support game creators.
Find My Things is part of the Seeing AI app, which is used by over 100,000 blind and low-vision people. The feature lets users locate personal objects (keys, hair ties, a guide cane) through personalized object recognition. It follows a “teachable AI” paradigm: users train the model on their own items through a series of guided video recordings. This personalization addresses a key gap: existing foundation models often perform poorly on images taken by blind and low-vision users because of blur, poor exposure, and non-standard framing.
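To make the teachable paradigm concrete, the sketch below shows one common way such personalization can be implemented: embed a handful of user-supplied photos of an item with a pretrained vision backbone, then match live camera frames against the stored embedding. This is an illustrative assumption about the approach, not the actual Find My Things pipeline; the function names, backbone choice, and threshold are all hypothetical.

```python
# Illustrative sketch of a "teachable" recognizer (NOT the Find My Things
# implementation). A pretrained backbone embeds a few user-supplied photos
# of a personal item; new camera frames are matched by cosine similarity.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained feature extractor with the classifier head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(image: Image.Image) -> torch.Tensor:
    x = preprocess(image).unsqueeze(0).to(device)
    return torch.nn.functional.normalize(backbone(x), dim=-1).squeeze(0)

def teach(frames: list[Image.Image]) -> torch.Tensor:
    # "Teaching" step: frames could be extracted from a guided video
    # recording; we store the normalized mean embedding as a prototype.
    embeddings = torch.stack([embed(f) for f in frames])
    return torch.nn.functional.normalize(embeddings.mean(0), dim=-1)

def is_my_item(frame: Image.Image, prototype: torch.Tensor,
               threshold: float = 0.8) -> bool:
    # Lookup step: compare a live frame against the taught prototype.
    return float(embed(frame) @ prototype) >= threshold
```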
Two major learnings emerged from this work. First, evaluating AI performance goes beyond aggregate accuracy. For real-time tools, inference speed and device compatibility can matter more than marginal gains in precision: in testing, a smaller model that returned results quickly delivered a better user experience than a more accurate but slower one.
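That tradeoff is straightforward to make measurable. Below is a minimal, hypothetical evaluation harness (not the team's actual code) that reports per-frame latency percentiles alongside accuracy, so that speed and precision can be weighed in the same report.

```python
# Illustrative evaluation harness: report per-frame latency alongside
# accuracy, since for a real-time assistive tool a fast "good enough"
# model can beat a slower, marginally more accurate one.
import time
import statistics

def evaluate(model, dataset):
    """dataset yields (frame, label) pairs; model(frame) returns a label."""
    latencies, correct = [], 0
    for frame, label in dataset:
        start = time.perf_counter()
        prediction = model(frame)
        latencies.append(time.perf_counter() - start)
        correct += int(prediction == label)
    return {
        "accuracy": correct / len(latencies),
        "median_latency_ms": 1000 * statistics.median(latencies),
        "p95_latency_ms": 1000 * sorted(latencies)[int(0.95 * len(latencies))],
    }
```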
Second, how users interact with an AI model is just as critical as the model itself. Early users often held incorrect mental models of how the camera worked, especially those with no prior experience of digital cameras. To address this, Linda’s team developed onboarding tutorials built on tactile and auditory metaphors (e.g., treating the phone as a pair of eyes, or using sound the way a metal detector does). Co-design was central throughout: from data collection to interface design, the project was developed in close partnership with blind and low-vision citizen designers.
In the second half of the talk, Linda introduced Muse, the first AI model designed specifically for gameplay ideation. Muse generates both game environments and human character actions, letting users explore interactive game sequences. The research team ran workshops with 27 game creators from diverse backgrounds, including AAA and indie studios as well as blind designers, to understand how Muse could augment the creative process.
Across both projects, empowering users to collaborate meaningfully with AI systems was central. Rather than designing AI to replace human input, both Find My Things and Muse demonstrate approaches that enhance human agency through co-creation, personalization, and inclusive design practices.
For those interested, the Find My Things paper is available through the ASSETS conference, and the Muse model is publicly released via Hugging Face and Azure AI.
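For readers who want to experiment with the released weights, the snippet below uses the standard huggingface_hub client to fetch them locally. The repo id is an assumption on my part and should be verified against the official model card.

```python
# Fetch the publicly released Muse weights from Hugging Face.
# The repo id "microsoft/wham" is an assumption; verify it against
# the official model card before use.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="microsoft/wham")
print(f"Model files downloaded to: {local_dir}")
```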
About Linda Wen
Linda Wen is a design researcher on the Game Intelligence team at Microsoft Research Redmond. She is passionate about creating safe and responsible AI technology that is accessible to everyone, regardless of ability, socioeconomic status, race or ethnicity, gender, or country of origin.
Currently, Linda leads UX research on an interdisciplinary team developing generative AI models for gaming. She is excited about AI’s potential to empower game creators worldwide to tell their unique stories. Previously, she worked on the Teachable AI Experiences team, where she designed AI-powered, personalized accessibility solutions for blind and low-vision users.
Linda holds a Master of Science in design engineering from Imperial College London and a Master of Arts in global innovation design from the Royal College of Art. She completed her undergraduate degree in international relations and linguistics at Pomona College in Claremont, California.