We learned that omitting an onboarding process from the prototype genuinely confused people. The wide spread in responses to many of the usability questions points to the problems some users faced: some had an easy time while others struggled. The goal, ideally, would be for all users to rate the experience 10/10 on ease of use and conveyance. Thankfully, all responses on the need for technical assistance were on the low end, between 1 and 5. Using this data, we can attach a concrete value to our users' struggles, giving us a more solid justification for design changes.
Interestingly, the tests conducted in person produced much lower ratings on many of these questions. One participant rated the app as generally difficult to use and expressed a need for assistance in navigating it, giving the ratings 1, 8, 5, 8, 5, and 3 respectively. Conversely, another in-person participant gave fairly high ratings (7, 10, 8, 2, 6, and 8 respectively) and did not express a need for assistance; this user did have a hard time navigating, but rated everything else higher. This dichotomy between the tests is a real problem that needs to be addressed: some of our users are struggling while others are merely scraping by.
One participant commented the following:
"I didn't know what the app was for until I saw the survey above say 'police data'. I just saw graphs about heinous crimes and thought, 'why would someone want an app about tracking violence?'"
This feedback, along with much more like it, greatly informed the final prototype. We found many small moments of friction in the user flow and ironed them out in the next milestone.