
Airbiquity Car Console
Usability Testing


Overview


Our team performed usability testing on a new car infotainment console. In our test sessions, first-time users selected and navigated streaming music apps while we observed and recorded. Our analysis surfaced several significant design opportunities.


My Contributions


Scope Definition, Participant Recruitment, Test Design, Test Tasks & Script, Test Kit, User Testing, Session Coding, Analysis, Reporting, Design Recommendations


What's Interesting?

 

Car Audio Equipment

The interactive console we were given to test was mounted in a sort of inventor's electronics box covered in switches, lights, and plugs. To eliminate these distractions, we constructed a dashboard mask from black foam core and a photocopied photo of a car console. With the mask in place, users were able to focus entirely on the consumer controls.

For our test sessions, we positioned a video camera to capture screen images as well as user facial expressions, gestures, and physical interactions. We were only able to simulate a parked-vehicle interaction; an expanded scope could incorporate the visual distractions of driving, such as a large-screen driving simulator.


Project Scope

 

Stakeholder Position

Each interface designed by Airbiquity must be heavily tailored to each automobile OEM's (original equipment manufacturer's) specifications. Certain elements, such as physical knobs, labeled buttons, and some on-screen controls, are set by the OEM. Our user tests would need to execute tasks that mixed Airbiquity and OEM controls.

Scope Definition

Together with the client, we selected an active project that would bring real value to Airbiquity. We scoped the work so that we could plan, execute, analyze, and report within a three-month timeline, defining success criteria, deliverables, and practical logistics.

Equipment Shakeout

Our final scope was solidified only after we had a chance for our own hands-on shakeout of the equipment and interfaces. Certain tasks would remain out of scope, including booting the test equipment and syncing with a mobile phone, so that we could focus on the apps themselves. 

Research Questions


How successfully can first-time users find music apps using the system?


How successfully can first-time users operate Pandora through the system?


Do users understand the functions of the menu options and controls?

 

Tasks

Find and Launch Pandora
How easily can a user find Pandora and other music apps?

Pandora Basics
Once in Pandora, do they understand the basic functions and icons?

Pandora Sort & Search
Can the user sort and search for stations with the console?

Create New Pandora Station
How effectively can the user create a new station?

Switch to iHeartRadio
How easily can they switch from Pandora to another app?

Switch Back to Pandora
Once in iHeartRadio, can they easily switch back to Pandora?

 

Participants

We tested with six participants, as well as one pilot to work out the kinks prior to proper testing. Given the focus on streaming music applications, we limited our participant pool to people who regularly use services like Pandora and Spotify on their smartphone.

We included a mix of frequent and occasional drivers, as frequency of driving may influence the amount of interaction a person has with a car infotainment system. We did not test for age differences (a larger sample would have been required), so we had a mix of participants in two wide age bands. Similarly, we did not test for gender differences and included an equal number of female and male participants.

TECH

Smartphone (6)

Music Apps (6)

DRIVES / WEEK

0-2 days (2)

3+ days (4)


AGE

21-35 (3)

36-50 (3)

GENDER

Female (3)

Male (3)

 

Test Kit

Test Scripts 

Participant

Presents one task or question at a time, asking the participant to think aloud or respond on a numeric scale.

Facilitator

Includes prompting scripts and secondary probing questions, with branching logic and notes on speaking tone.

Note Taker

Provides room for notes within the script, with fields and checkboxes prepared for anticipated outcomes.

 

Other Test Kit Items

Questionnaires
Screening, Pre-Test, and Post-Test (SUS) questionnaires

Checklists
Day-of-Test Checklist and Equipment Setup Checklist

Greetings & Forms
Greeting & Welcome Script, Consent Form, and Nondisclosure Agreement
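The post-test SUS questionnaire mentioned above is scored with the standard System Usability Scale formula: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch (the sample responses are illustrative, not actual participant data):

```python
def sus_score(responses):
    """Score a 10-item SUS questionnaire of 1-5 Likert responses.

    Odd items (1st, 3rd, ...) contribute (response - 1);
    even items contribute (5 - response); the total is scaled
    by 2.5 to produce a 0-100 usability score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Illustrative responses only -- not data from this study.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```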

 

Analysis

Videos & Notes

We watched session videos and reviewed notes, listing observations and quotes in a spreadsheet.

Codes

Prior to review, we identified simple codes for common notations, as well as a task completion scale. 

Key Issues Map

Certain common issues became evident across multiple users and were mapped in a trending chart.
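The coding-and-tally approach above can be sketched as a small script: each logged observation carries a participant ID and an issue code, and counting distinct participants per code produces the "N of 6" frequencies used in the findings. The codes and rows below are hypothetical examples, not the study's actual coding scheme:

```python
from collections import defaultdict

# Hypothetical coded observations: (participant, issue_code).
observations = [
    ("P1", "EXPECTED_MEDIA_APPS"), ("P2", "EXPECTED_MEDIA_APPS"),
    ("P3", "EXPECTED_MEDIA_APPS"), ("P3", "CONFUSED_VIA_MOBILE"),
    ("P4", "CONFUSED_VIA_MOBILE"), ("P4", "CONFUSED_VIA_MOBILE"),
    ("P5", "EXPECTED_MEDIA_APPS"), ("P6", "NEEDED_COACHING"),
]

def issue_frequency(rows, total_participants):
    """Count distinct participants who exhibited each coded issue."""
    seen = defaultdict(set)
    for participant, code in rows:
        seen[code].add(participant)  # sets de-duplicate repeat notes
    return {code: f"{len(people)} of {total_participants}"
            for code, people in sorted(seen.items())}

for code, freq in issue_frequency(observations, 6).items():
    print(code, "-", freq)
```

Because each code maps to a set of participants, repeated notes about the same participant are counted once, which matches how per-user issue frequencies are normally reported.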

 

Sample Findings

Users Struggled to Find Music Apps


MEDIA:
should lead directly to apps on connected phone

MORE [+]:
discontinue use for apps

VIA MOBILE:
rename or eliminate

“MEDIA will most likely pop up however many apps I have, like Pandora, Spotify, and anything else like that.”

Path 1: Media Button


6 of 6 believed MEDIA would obviously lead to a list of apps.

3 of 6 thought you could not access music apps via the SOURCE button.

4 of 6 could not understand what VIA MOBILE would do.

2 of 6 never made it to the apps screen, and none thought it was easy.

Path 2: More (+) Button

6 of 6 thought MORE [+] would NOT include apps.

4 of 6 did not understand why VIA MOBILE was here.


6 of 6 required coaching to pursue this path.

“The MORE button might as well say THINGS. Who knows what’s behind there?”

Reporting & Recommendations

We created both a high-level presentation deck and a more detailed final written report for this project. Both were structured around the tested tasks, with key observations from our analysis supported by quotations, videos, illustrations, and frequency metrics. We made sure to feature the positive findings as well as the issues and opportunities.

With such clearly illustrated issues, we could make strong high-level design recommendations. Interestingly, some of the issues were rooted in OEM-mandated functionality, and solutions would require negotiating those boundaries. Airbiquity was glad to have our reports to support those negotiations, and expressed interest in more immediately modifying other areas that were already fully within their control.


 

Reflections


Work Intake & Scope Definition: 

  • A key to our success was the alignment we built with our clients in our first meetings.
  • With a thorough and clear outline, we guided our first meeting toward a clear understanding and agreement of scope and success criteria.
  • We had a wide range of projects to choose from, but we helped the client understand our process, constraints, and timeline so that they could make the best value decisions.

Audio Console Equipment:

  • Having our own dedicated audio console which we could study offsite was very important.
  • We had far more flexibility to study the unit and perfect our scope and scripts prior to testing.
  • One area of opportunity: we were unaware that the unit could receive over-the-air software updates, which led to unexpected changes in the experience.
  • Our equipment checklist exposed this issue, and fortunately we were able to update our scripts in the hour prior to three user tests.

Participants:

  • The client had very few requirements here, and would basically allow anyone who drives.
  • With a small sample size, we were able to get a well rounded response with a mix of age ranges, genders, and driving frequencies. All were required to have smart phones and music app experience.

Test Kit & Scripts: 

  • Preparing the test kit and scripts took a great deal of time, but allowed more seamless test sessions.
  • Of particular value was the detailed Facilitator / Notetaker script, with its IF/THEN branching logic and checkboxes for anticipated selections and paths.
  • Our tests were designed for at least two team member roles, with a facilitator and note taker, but the kit allowed for a single person to execute if needed.

Analysis: 

  • Video recorded sessions enabled us to find exact quotes and clips that supported our findings.
  • The team was able to divide workload because we first calibrated our coding strategies.
  • Key findings within individual tasks became clear after just a few users, with our six users providing more than enough repetition of issues and insights.

Reporting: 

  • The mix of media presented really served to support our claims and recommendations.
  • A well structured document focused around tasks and issues allowed us to easily index to specific areas of audience interest.
  • Video clips and quotes really elicited emotional responses from some clients, seeming to quickly resolve long-term design disagreements between team members.

Files and Deliverables:

 
 

Copyright © Paul Townsend