WG 2: Comparing UEMs: Strategies and Implementation

Coordinator: Gilbert Cockton - University of Sunderland (UK)

Interested partners: Jean Vanderdonckt, Oscar Pastor, Ebba Hvannberg, Asbjørn Følstad, Mark Springett, Kasper Hornbæk, Erik Frøkjær, Simos Retalis, Jan Gulliksen, Timo Jokela, Leena Norros, Quentin Limbourg, Dominique Scapin, Christian Jetter, Matic Pipan, Effie Law, Marco Winckler, Laila Dybkjær


The primary objective of this Activity is to identify effective strategies for comparing UEMs. Whereas Activity 1 addresses the horizontal dimension depicted in Table 1, Activity 2 addresses the vertical dimension. Of particular importance is how to define the key parameter “effectiveness” reliably and validly; Activity 1 may yield some preliminary results on this definitional task. Furthermore, the exact conditions and contexts in which UEMs are applied should be documented with great care. In line with suggestions from previous research, the range of application contexts should be as broad as possible. It is also important to observe closely how evaluators build consensus when confronted with discrepancies in the usability problems (UPs) identified and in the severity ratings assigned to individual UPs.

In this Activity, empirical, analytic and model-based UEMs will be compared. Their major distinctions lie in whether end-users are involved and whether prototypes (low/high fidelity) or models (task-/user-based) are employed. In addition, the relationships between internal, external and in-use metrics can be examined by investigating these different types of UEMs. Nonetheless, the UEMs to be compared depend on the research foci of the individual project teams. A possible, but not exhaustive, list includes:

  • User Tests (thinking aloud/observation techniques)
  • Heuristic Evaluation (various lists of heuristics/principles)
  • Cognitive/Heuristic Walkthrough
  • Critical Incident Inquiry
  • GOMS (Goals, Operators, Methods and Selection) analysis
  • Systematic Usability Evaluation (SUE)
  • CASSM: Concept-based Analysis of Surface and Structural Misfits (previously known as “Ontological Sketch Modelling”)
  • Programmable User Modelling, etc.
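To illustrate what a model-based UEM in this list computes, the sketch below implements the Keystroke-Level Model (KLM), the simplest member of the GOMS family: predicted expert task time is the sum of primitive operator times. The operator values are the commonly cited estimates from the KLM literature; the task sequence is a hypothetical example, not data from this Activity.

```python
# Minimal KLM (Keystroke-Level Model) sketch: predicted expert task time
# is the sum of primitive operator times. Values (in seconds) follow the
# commonly cited KLM estimates; treat them as illustrative defaults.
OPERATOR_TIMES = {
    "K": 0.28,  # keystroke or button press (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_task_time(operators: str) -> float:
    """Sum operator times for an encoded sequence such as 'MHPKK'."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical task: mentally prepare, move hand to mouse,
# point at a control, then double-click.
print(round(predict_task_time("MHPKK"), 2))  # → 3.41
```

Such predictions let an analytic, model-based UEM be run without end-users, which is precisely the distinction drawn above between empirical and model-based methods.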

Table 1: Two-dimensional Investigations of UEMs

|       | Theoretical Background | Scope of Application | Cost-effectiveness | User/Evaluator Effect | Limits of Techniques Involved | Level of Acceptance | Possible Extension | ... |
| UEM 1 |                        |                      |                    |                       |                               |                     |                    |     |
| UEM 2 |                        |                      |                    |                       |                               |                     |                    |     |
| UEM 3 |                        |                      |                    |                       |                               |                     |                    |     |
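The “User/Evaluator Effect” column and the consensus-building observations above both hinge on quantifying how much evaluators' UP sets disagree. A standard statistic for this is any-two agreement (Hertzum and Jacobsen's evaluator-effect measure): the average Jaccard overlap between every pair of evaluators' problem sets. A minimal sketch, using hypothetical UP identifiers rather than data from this Activity:

```python
from itertools import combinations

def any_two_agreement(problem_sets):
    """Average Jaccard overlap |Pi ∩ Pj| / |Pi ∪ Pj| over all pairs of
    evaluators' usability-problem sets (any-two agreement)."""
    pairs = list(combinations(problem_sets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Hypothetical UP sets from three evaluators inspecting the same system.
evaluators = [
    {"UP1", "UP2", "UP3"},
    {"UP2", "UP3", "UP4"},
    {"UP1", "UP3"},
]
print(round(any_two_agreement(evaluators), 2))  # → 0.47
```

Low agreement values would flag exactly the discrepancies in identified UPs whose resolution through consensus-building this Activity proposes to observe.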

Expected Outcomes: Recommendations on deploying a particular UEM under given conditions and contexts, together with its expected effectiveness, may be offered to usability practitioners. Moreover, some insights into the consensus-building process among evaluators can be gained. More importantly, criteria for defining real usability problems can be derived through meticulous observation and analysis of the findings acquired in this Activity.