Some senior engineers at Bose found a way to run machine learning (ML) models directly on a pair of custom Bose NC700 headphones, enabling a variety of listening experiences when used with a forked version of the Bose Music iOS application. This was an initial, technical proof-of-concept project.
The Marketing and Product groups were eager to try the new modes out to help determine which, if any, would resonate with customers and warrant further development, in hopes of differentiating our product offerings from the competition. Engineering was excited as well. Design had not yet been engaged.
Initially there were twelve listening modes, some named after locations, others after suggested activities.
A mode is a shortcut to a particular listening experience. For example, a conversational mode might make it easier to hear yourself (self-voice, or voice-based transparency) and those around you, remove unwanted environmental noise, or process your own voice to make it clearer. Modes can use digital signal processing, machine learning, specific microphone configurations, and so on to deliver these experiences.
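As a rough illustration of the concept (not the actual Bose implementation), a mode can be thought of as a small data structure carrying its name, description, icon, and tweakable parameters. The Swift sketch below is hypothetical; every field name and value is an assumption.

```swift
import Foundation

// Hypothetical sketch only - field names and values are illustrative,
// not the actual Bose data model.
struct ListeningMode: Identifiable {
    let id = UUID()
    let title: String              // activity-based name, e.g. "Conversation"
    let summary: String            // plain-language description of what the mode does
    let iconName: String           // asset name for the identifying icon
    var isInCarousel: Bool         // whether the on-cup toggle can reach this mode
    var settings: [String: Float]  // per-mode tweakable parameters
}

// Example: a conversational mode that raises self-voice transparency
// and suppresses environmental noise.
let conversation = ListeningMode(
    title: "Conversation",
    summary: "Hear yourself and those around you clearly; reduce background noise.",
    iconName: "mode.conversation",
    isInCarousel: true,
    settings: ["selfVoiceLevel": 0.7, "noiseSuppression": 0.9]
)
```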
Because the ML models run on the headphones themselves (with no need for the more powerful hardware of an Android or iPhone in the loop), overall system latency is much lower, allowing quicker processing and less obvious lag when the headphones respond to sound events and situations. This makes possible pleasant experiences that would not have been enjoyable before.
The initial group of modes was reduced to nine for the sake of an initial internal test. Some of the modes were similar enough to warrant consolidation.
When the Marketing and Product Managers took the units home to test, they encountered numerous issues that centered on a few key aspects of the experiences.
Example simplified journey map
"Put a designer on this."
- "There are too many modes."
- "I need to use my phone every time I want to switch experiences?"
- "What do the modes actually do?"
- "These should be easy to use."
- "They are annoying and confusing."
- "I can't remember these."
The modes existed as a simple list of titles with single-select checkboxes beside them. You could activate one at a time, which was perfectly acceptable, but their actual function was a guess because they had no descriptions, and some of them sounded quite similar. It was difficult to remember which mode you were currently using, and there were no settings for any of the modes. There was a lot of guesswork, too many modes to consider, and not enough information to make repeatable, enjoyable experiences possible; in its current state, the feature was untestable.
The modes were implemented by engineers for engineers - but even they were having a difficult time. User-centered design was needed.
I was invited to join the team to tackle the design challenges: to improve the overall experience and to let us test not only the modes themselves, but also the experience design and interface direction that would carry the project toward new, shippable products. Products that would help differentiate our line and deliver better experiences, both for existing customers and for those considering their first Bose purchase.
In an important meeting before the design work began, the team agreed on a process timeline for how best to approach this user-centered design challenge.
After looking at the current list of available modes, I thought there were too many. I found whitepapers backing the finding that people most easily retain five to seven items; this is why automobile radios have six presets per band. My hypothesis was that concentrating on six modes would improve usability, with half that number active in a carousel at any one time.
I checked with the project's firmware engineer about repurposing a button on the headphones' left ear cup. It was currently used to invoke an optionally assigned voice assistant (i.e., Siri), so after clearing the change with the team, it could be used to cycle through modes. After each press, we would use TTS (text to speech) to recite the mode's name so the user knew which mode they were in; a press-and-hold could toggle mode use on and off.
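A minimal sketch of that interaction, assuming the firmware forwards single-press and press-and-hold events to an app-side controller. The class, its method names, and the use of AVSpeechSynthesizer as the TTS engine are all assumptions; ListeningMode is the illustrative type from the earlier sketch.

```swift
import AVFoundation

// Hypothetical sketch; assumes the firmware forwards button events to this
// controller, and reuses the illustrative ListeningMode type from above.
final class ModeToggleController {
    private let synthesizer = AVSpeechSynthesizer()
    private let carousel: [ListeningMode]
    private var currentIndex = 0
    private var modesEnabled = true

    init(carousel: [ListeningMode]) {
        self.carousel = carousel
    }

    // Single press: advance to the next mode in the carousel and speak its
    // name so the user knows where they are without looking at their phone.
    func handlePress() {
        guard modesEnabled, !carousel.isEmpty else { return }
        currentIndex = (currentIndex + 1) % carousel.count
        announce(carousel[currentIndex].title)
    }

    // Press and hold: toggle mode use on or off, announcing the new state.
    func handlePressAndHold() {
        modesEnabled.toggle()
        announce(modesEnabled ? "Modes on" : "Modes off")
    }

    private func announce(_ text: String) {
        synthesizer.speak(AVSpeechUtterance(string: text))
    }
}
```

Announcing the mode name by voice keeps the interaction eyes-free, which was the point of moving mode switching off the phone in the first place.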
Upon first launch of the prototype application (the forked Bose Music app), we would step the user through the modes, giving each a more appropriate title (based on activity), an identifiable icon, and a description. The user can skip the onboarding - the information remains available on demand later - but it lets them become familiar with what each mode does before trying it.
The user can open a settings sheet per mode to tweak that mode's settings. In the settings sheet, the mode again explains its functionality, reminding the user of its intended purpose.
If the user wants to change which modes are available in the carousel toggle, a dedicated interface allows them to reorder the modes and set the active carousel using simple drag-and-drop operations.
Putting the modes list into edit mode turns each mode's settings icon into a gripper handle, letting the user drag modes to reorder them and swap which ones occupy the carousel positions.
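A sketch of how that reordering might be wired up, assuming a UITableView-backed list and the illustrative ListeningMode type from earlier. The data-source methods are standard UIKit hooks, but the class itself and the carousel-membership rule are assumptions.

```swift
import UIKit

// Hypothetical sketch; assumes the modes list is a UITableView backed by
// the illustrative ListeningMode type from above.
final class ModesListDataSource: NSObject, UITableViewDataSource {
    var modes: [ListeningMode] = []
    let carouselSize = 3  // assumption: first three positions feed the on-cup toggle

    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        modes.count
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "ModeCell", for: indexPath)
        cell.textLabel?.text = modes[indexPath.row].title
        return cell
    }

    // In edit mode, every row shows a gripper handle and can be dragged.
    func tableView(_ tableView: UITableView, canMoveRowAt indexPath: IndexPath) -> Bool {
        true
    }

    // Dropping a row updates both the list order and carousel membership.
    func tableView(_ tableView: UITableView, moveRowAt sourceIndexPath: IndexPath, to destinationIndexPath: IndexPath) {
        let moved = modes.remove(at: sourceIndexPath.row)
        modes.insert(moved, at: destinationIndexPath.row)
        for index in modes.indices {
            modes[index].isInCarousel = index < carouselSize
        }
    }
}
```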
I called weekly design meetings to review the current design. These were sometimes attended in person, sometimes over Teams video conferencing while sharing designs in Miro (which some in Visioneering & Validation preferred). I would also hold quick, impromptu critiques with one of User Research's designers to get their thoughts.
We farmed out the iOS application work to MindTree, a team in India already involved in integrating the iOS-to-headphone communication layer. I documented the designs and delivered that documentation, along with the assets, so they could complete the work planned in our weekly agile scrums.
E-Ink would have provided an energy-efficient means for the battery-powered headphones to display the currently selected mode (which would also be shown in the application) as well as simple battery state. This could be useful for a quick check before donning the headphones, but it was ruled out because it would have added significant development time before we were ready to test on device.
My hope is that it could be paper-prototyped in a different set of tests in the future. The display could sit on the toggle button itself, or beside it if we could source a matching membrane. In either case: black background with white text on Midnight headphones; silver background with black text on Silver headphones.
A virtual voice assistant was considered. We already have Google and Siri integration and have tinkered with custom VPAs in the past. A user could certainly request a listening mode by voice and, if there was a match, have the headphones switch to that mode. The phrasing would most likely be, "Hey headphones, turn on xyz mode," since "Bose" sounds a lot like "bows," after all. This was deemed out of scope for this round of testing, but as a backup means of switching modes it could be a useful addition one day.
We set up eleven units for team members to take home and use. This let us iterate on the design, the language used, and the setting details for each mode, and refine the naming conventions.
Daily dscout surveys were completed by eleven participants from Bentley College. Each was given a Get Started booklet I printed for them and an email covering the dscout survey, their reward for participation, and the hour-long interview that would take place when they returned their hardware to us.
100% of the participants completed testing and came to rely on mode use during their week. They enjoyed the offered modes, the experience of being introduced to them, and liked how easy they were to activate and manage.