Case Study
High Speed Research
Research is only as valuable as its ability to keep pace with other teams and deliver actionable insights.
This becomes even more relevant with only two researchers serving hundreds of product and engineering employees in an organization. To be most effective, a templatized Research Operations (ReOps) practice with just enough flexibility is critical.
The Goals:
1) Every two months, deliver benchmark metrics, pain points, and insights to senior leadership.
2) Support the research needs of four development teams (150+ people) working to meet 3,000+ usability and functional requirements.
The Plan:
1) Use three methodologies that inform each other to best triangulate the data.
2) Overlap one method’s read-out with the next method’s planning.
3) Outsource recruiting to a partner with a background in Human-Centered Design.
4) Templatize the data capture and analysis for each methodology.
5) Use the templates so designers on the team could help with data capture.
Methodologies
Surveys:
Why: To get a broad understanding of user needs and expectations for topics including navigation, data type/visualization, help tools, login method, and device usage for the system being developed.
How: The surveys were distributed to users, with responses collected over one month. Microsoft Excel was used to organize and analyze the data with multiple regressions, ratios, correlation matrices, radar charts, and histograms. The analytics and resulting insights were presented to a cohort of 100+ stakeholders for comments and questions before a report was written and the findings were demonstrated to the development teams.
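As a rough sketch of one analysis step described above, a correlation matrix across survey ratings can be computed like this. The column topics and the ratings themselves are illustrative placeholders, not the actual survey items or data:

```python
import numpy as np

# Rows = respondents; columns = hypothetical 1-5 Likert ratings for
# navigation, data visualization, help tools, and login method.
ratings = np.array([
    [4, 5, 3, 4],
    [3, 4, 2, 5],
    [5, 5, 4, 4],
    [2, 3, 1, 3],
    [4, 4, 3, 5],
])

# np.corrcoef treats each row as a variable, so transpose to
# correlate the survey topics (columns) against one another.
corr = np.corrcoef(ratings.T)
print(np.round(corr, 2))
```

Excel's CORREL/Data Analysis ToolPak produces the same matrix; the point is only to show the shape of the computation.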
Focus Groups:
Why: To collect feedback on current and alternative designs for components of the COTS product being deployed including navigation, data visualization, help tools, and login screen.
How: Each session presented three partial prototypes of possible system configurations, counterbalanced to control for order effects and participant fatigue. Between two and four users participated in each session, with upwards of 30 sessions scheduled in each of the four one-week rounds of testing. Analysis consisted of affinity mapping the comments and counting votes for each partial prototype.
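The vote-counting step above amounts to a simple tally. A minimal sketch, with made-up prototype labels and vote counts standing in for the real session data:

```python
from collections import Counter

# One entry per participant vote collected across sessions
# (labels and counts are hypothetical).
votes = ["Prototype A", "Prototype C", "Prototype A", "Prototype B",
         "Prototype A", "Prototype C", "Prototype B", "Prototype A"]

tally = Counter(votes)
for prototype, count in tally.most_common():
    print(f"{prototype}: {count}")
```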
Usability Tests:
Why: To test the current system’s usability and identify user pain points needing remediation.
How: 30-minute sessions were conducted with individual users and consisted of three counterbalanced tasks. Upwards of 35 users participated in each of the three one-week rounds of testing. Metrics such as clicks, time on task, task success rate, System Usability Scale (SUS) score, and Single Ease Question (SEQ) ratings were manually tracked and later analyzed in Microsoft Excel.
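The SUS metric mentioned above follows a standard scoring rule: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 to yield a 0-100 score. A sketch with hypothetical responses (the original analysis was done in Excel):

```python
def sus_score(responses):
    """Score a single participant's ten SUS responses (1-5 each)."""
    assert len(responses) == 10
    total = sum(
        # i is 0-based, so an even index is an odd-numbered SUS item.
        (r - 1) if i % 2 == 0 else (5 - r)
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical participant responses to the ten SUS items.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # → 80.0
```

A score of 68 is commonly cited as the benchmark average, which makes SUS useful for the bimonthly benchmark deliverable described in the goals.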