Exploring Context-Aware Digital Assistants

COMPANY: QUALCOMM COMPUTER VISION TEAM

ROLE: DESIGN STRATEGY, USER RESEARCH, CONCEPT DESIGN

TEAM: ANNE KONERTZ, CHAD WILKIE

TIME: 4 MONTHS

The Challenge

If a digital assistant had visual capabilities, what could it do for you? Qualcomm UX was approached by the computer vision engineering team to explore meaningful, human-centered applications for computer vision-enabled digital assistants on devices.

An Ethnographic Approach

We began with a competitive analysis of existing digital assistants to identify gaps and opportunities for improvement. Next, we framed our research challenge around how people use mobile phones and home cameras. After this preliminary research, we recruited participants from two main user groups:

  1. “Super socials”: These are power users of camera-enabled social media applications (e.g., Snapchat, Instagram) aged between 16 and 22.

  2. “Smart home enthusiasts”: This group consists of individuals who own multiple smart cameras and other digital devices in their homes.

Through in-depth user interviews and home visits, we uncovered key trends in both groups related to their behavior patterns, preferences, unmet needs, and privacy concerns. For instance, we found that while most users initially purchased smart cameras for security reasons, these devices also contributed to family social and emotional connections. One man, who frequently traveled for work, used his Nest Cam to say goodnight to his young sons during his business trips.

Facilitating Brainstorming Sessions

After analyzing and synthesizing the raw data from our interviews and field observations, we organized a group brainstorming session with 20 to 30 engineers from the computer vision, audio, and machine learning teams. The goal was to tap into their technical expertise, build a deeper understanding of users' problems and desires, and generate a wide range of ideas.

In the first half of the session, we presented our user research findings. Following this presentation, we divided into four teams of five and ran a brainstorming session, with each team centered on one of the personas we had just introduced. To inspire new ideas grounded in our research, we created "scenario cards" that paired user scenarios for each persona with photos of relevant events or activities.

At the end of the session, we allocated time for a rapid physical prototyping activity, in which participants built rough prototypes of their best solutions and bodystormed them in realistic usage scenarios.

We made scenario cards from our user research findings to spur our brainstorming session.

Printing out posters of research personas and scenario cards served as a tangible reminder to focus on user needs and contexts throughout the brainstorming session.

From Synthesis to Storytelling

After the group session, we sifted through the ideas and conducted several additional UX-focused brainstorming sessions. This process allowed us to identify the twenty strongest use cases across the different user groups. We then refined these use cases into design concepts by storyboarding the user experience.

I sorted, ranked, and expanded on concepts generated by our group brainstorm.

I refined our initial findings into experience concepts through sketching and storyboarding.

We iterated on our storyboards and presented them in a deck highlighting how Qualcomm technologies could enhance those experiences.

Refining Our Concepts Through Video Prototyping

We collaborated with the engineering team to identify the best ideas, refining those storyboards into video experience prototypes. These videos helped us pinpoint areas where the Qualcomm computer vision team could innovate.

The photos above are stills from two separate video prototypes illustrating our best computer vision use cases for mobile phones and smart home cameras.

Impact

Our research led to a series of illustrated human-centered use cases and video prototypes for mobile phones and smart home cameras. We also identified new edge computing capabilities that Qualcomm could develop to improve the functionality and user experience of computer vision-enabled digital assistants.

"Sharon did an absolutely amazing job. Presenting qualitative user findings can be tricky. Sharon set them in context to make them relevant for the team and presented them in a very professional and engaging way. It was fantastic!"

— Anne Konertz, UX Lead
