Over two decades of practicing user interface architecture in one form or another, I have sought to bring consistency and method to measuring the success of new user interfaces compared to those they replace.
In recent years, standards such as NIST CISU-R (NISTIR 7432) have provided much-needed guidance on usability requirements and testing, but I still find the approach described here very effective, especially when dealing with large, complex interfaces.
The universal usability matrix presented here provides a method for estimating the impact of a current user interface on clients and end users, and for comparing a proposed replacement against it. In place of the typical anecdotal, qualitative assessments that a current UI is 'bad', 'inefficient', or 'not user friendly', the matrix affords:
- Consistent, objective and quantitative evaluation of a current and new UI
- Identification and prioritization of the most problematic tasks in the current UI from a usability perspective
- Comparison of the effectiveness of the new UI as early as the wireframing phase
- Quantitative support for demonstrating improvements in the new UI to stakeholders and customers, which typically translates into reduced total cost of ownership (TCO): reduced staffing, reduced training time, higher productivity, etc.
The matrix typically includes three sections:
- UI Load – An inventory of all interface elements the end user encounters during interaction. Yes, just count the windows, dialogs, buttons, pull-downs, and other objects the user sees, including labels. These impact the visual load and the cognitive effort required to digest the flow: the higher the count, the more complex the interface. When variations in the flow of the same task exist (typical in globally deployed software with a single localized/customized UI), count the major variations, since their UI loads will differ.
- Task Priority – Based on frequency (occurrence x repetition) and efficiency; higher-ranking tasks are more critical. High-priority tasks with a high UI Load are the most critical.
- Task Impact – Adds context by combining the usability profile weight with the task priority.
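The three sections can be sketched in code. This is a minimal, hypothetical sketch, assuming the relationship stated above (Task Impact combines the profile weight with the task priority by multiplication); the names `Task`, `ui_load`, `priority`, and `task_impact` are illustrative, not from the original matrix.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    ui_load: int   # count of windows, dialogs, buttons, labels, etc.
    priority: int  # task priority score from the scoring template

def task_impact(task: Task, profile_weight: int) -> int:
    """Task Impact: usability profile weight combined with task priority.
    Multiplication is an assumption; the article does not fix the operator."""
    return profile_weight * task.priority

# A high-priority task evaluated under a usability profile with weight 3:
enroll = Task("Enroll student", ui_load=42, priority=24)
print(task_impact(enroll, profile_weight=3))  # 72
```

In a spreadsheet, each row would be a task and each profile a column of impact scores, which makes the most problematic tasks per profile easy to sort and rank.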
An important aspect of the usability matrix is its ability to provide context for different customers. This is especially relevant to enterprise software UIs, where very small and very large sites must both be supported, and where globally deployed software faces large variations in workflows and definitions:
Usability profiles provide the context for the universal usability matrix and are used in evaluating the Task Impact score. Each task has an associated 'cost': the time it takes to complete one instance of the task (efficiency), the time it takes to perform the task repeatedly, as required to process all students (productivity), and the associated usability profile. Profiles have the properties and attributes most relevant to the project.
There is no limit to the number of profiles used, or to the number of properties associated with each profile. However, I have found it best to limit the profiles to those that reflect the most typical clients; the process of developing the matrix is most successful when kept practical. Too many profiles yield diminishing returns, since most attention is typically focused on the extremes.
Here is a basic and, in my opinion, generic and reusable template for developing a task priority score. I use Excel or a FileMaker database to document the actual interfaces:
Property: Occurrence
Attributes: Annually, Monthly, Weekly, Daily
Attribute Weight: 1, 2, 3, 4
Note: The higher the occurrence, the higher the weight.

Property: Repetition
Attributes: 1-10, 11-30, Over 30
Attribute Weight: 1, 2, 3
Note: The number of times the task is performed during the occurrence period. The higher the repetition, the higher the weight. To isolate task frequency, use Occurrence x Repetition.

Property: Efficiency
Attributes: Less than 1 minute, 1-2 minutes, 3-5 minutes, 5-10 minutes, 10-15 minutes, Over 15 minutes
Attribute Weight: 1, 2, 3, 4, 5, 6
Note: Estimated time to perform a single task instance without exceptions. The lower the efficiency, the higher the weight.

Property: Productivity
Attributes: = Efficiency x Repetition x Occurrence

Property: Task Priority Score
Attributes: = Productivity
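The template above translates directly into a small calculation. This is a minimal sketch: the attribute bands and weights come from the template, while the lookup-table and function names are my own illustrative choices.

```python
# Attribute-to-weight lookups, taken from the scoring template above.
OCCURRENCE = {"Annually": 1, "Monthly": 2, "Weekly": 3, "Daily": 4}
REPETITION = {"1-10": 1, "11-30": 2, "Over 30": 3}
EFFICIENCY = {
    "Less than 1 minute": 1, "1-2 minutes": 2, "3-5 minutes": 3,
    "5-10 minutes": 4, "10-15 minutes": 5, "Over 15 minutes": 6,
}

def task_priority_score(occurrence: str, repetition: str, efficiency: str) -> int:
    """Productivity = Efficiency x Repetition x Occurrence;
    the Task Priority Score equals Productivity."""
    return EFFICIENCY[efficiency] * REPETITION[repetition] * OCCURRENCE[occurrence]

# A daily task repeated over 30 times, taking 3-5 minutes per instance:
print(task_priority_score("Daily", "Over 30", "3-5 minutes"))  # 3 * 3 * 4 = 36
```

The same formula is trivial to express as a spreadsheet column, which is how I typically maintain it alongside the UI Load inventory.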
Finally, I am sure that someone, somewhere has developed a similar idea - I would love to know. Otherwise, feel free to use and extend this framework.