Insight

Supporting a Remote Mobile User-Testing Process

Katy Listwa, Content Strategist/UX

Overview

Many of our website-based projects involve some level of participation in the user testing process. Whether we are simply providing input, producing the testing materials, or designing and conducting the tests ourselves, there is always an understanding that everyone’s experience is valuable and that an open mind is key to getting the most informative results.

Recently, one of our clients in the public health sector tasked us with co-planning mobile user testing and, ultimately, delivering a highly polished prototype for it.

Our first consideration was that the tight deadline limited our strategic options.

Our schedule allowed for only one round of testing, using a mobile-only prototype (based on the analytics from our client’s similar sites, they predicted that more than 80% of their users would be accessing the site via a mobile device). Since we only had time for one round, the testing prototype would include a full visual design with nearly all types of pages on the site represented in some capacity.

Based on positive past experience, our client chose usertesting.com both as the source of test subjects and as the testing platform. We opted for moderated rather than self-guided testing sessions, a decision we were glad of in hindsight: when users got stuck on particular elements, the moderators were able to dig into the reasons for the confusion with follow-up questions and help them past the hang-up, so we could still get their feedback on the rest of the task.

But before we got that far, we had to find test subjects. For this type of in-depth moderated testing, about 5-7 good test subjects will identify the vast majority of usability problems and insights; increasing the number of subjects beyond that yields diminishing returns. It turned out that the project’s target geographic and age-range requirements really limited our pool of testing subjects. We ended up finding 4 users who fit all of our specs and 3 others who fit all but one.

In the meantime, the prototype was being produced. Our client was instrumental in the decision that no dummy text or wireframe-level material be included. In their experience, unrealistic or unfinished-looking elements would “throw off” users, who would focus on those flaws instead of the functionality. This added quite a bit to our workload, as we wouldn’t normally create a full visual prototype. But the benefit of having users experience something so close to the final product was very important in this case, especially for the more complex functionality that needed to be highly intuitive.

Also, all content needed to be real or at least “realistic” since the planned tasks would be very content reliant (e.g. “find an article about [a particular subject]”). For minor pages that weren’t yet built, we included a fully designed “page under construction” page with a big “go back” button.

And so testing proceeded with a well-fleshed-out, pixel-perfect prototype and 7 test subjects, and as always, the findings proved worth the effort.

There were many small but impactful changes we made to some of the more complex functionality. Sometimes it was just a matter of adjusting a word or two for clarity, and other times our approach had to be entirely reframed. For instance, a searchable database on the site was initially presented pre-populated with a full list of unfiltered results. Many users skipped over the instructions and assumed that the results had somehow already been filtered based on their location data, resulting in confusion. We resolved this by removing the pre-populated list altogether to clarify that the user had to enter their own data.

It is always difficult as “content experts” to put ourselves in the shoes of the typical user, who doesn’t have the same familiarity with the terminology and other elements of an interface.

It was also interesting to see how users’ understanding of content we’d been working with for many years could shift through a different lens, shaped by the ever-evolving landscape of the internet. Some of the content items in the articles section of the site had been newly labeled as “testimonials” for health-related services. A number of test subjects instinctively scrolled to the bottom of the services page when asked to find these testimonials, as if looking for an Amazon-style star review. The content hadn’t changed, but trends and expectations had.