Usability Evaluation Methods Explained

The lecture covers key concepts in evaluation, including different evaluation methods and their importance in usability testing. It distinguishes between evaluation and testing, emphasizing that evaluation assesses user needs and interface effectiveness, while testing focuses on identifying bugs. Various evaluation types and techniques, such as heuristic evaluations and usability testing, are discussed to inform design decisions and improve user experience.

Structure of this lecture

- Understanding of key concepts and terms used in evaluation
- Introduction to different types of evaluation methods and how they are used
- Understanding of usability and its importance
- Overview of the typical usability testing process and tools
Learning Outcomes

- Evaluation techniques are explored, linking to the learning outcome "Understand the issues involved in developing and evaluating interfaces to interactive applications".
What is user experience?

- "User experience encompasses all aspects of the end-user's interaction with the company, its services, and its products. The first requirement for an exemplary user experience is to meet the exact needs of the customer... Next comes simplicity and elegance that produce products that are a joy to own, a joy to use. True user experience goes far beyond giving customers what they say they want, or providing checklist features..." (Nielsen Norman Group)

- "User experience and interface design in the context of creating software represents an approach that puts the user, rather than the system, at the center of the process. This philosophy, called user-centered design, incorporates user concerns and advocacy from the beginning of the design process and dictates the needs of the user should be foremost in any design decisions." (Microsoft)
Evaluation is how you assess you got it right...

- "Evaluation" is closely related to design issues and techniques.
- However carefully we follow HCI guidelines and standards when designing systems, we need to step back from the developing system and evaluate it. Otherwise we may create a monster...
- Watch me
... but, evaluation is NOT testing!

- Evaluation is not the same as "testing": testing is designed to find bugs, whereas evaluation has a different focus.
- A system may be totally bug-free and yet still have a poor interface.
- Evaluation is either predictive testing (which does not involve users and is quick and relatively inexpensive) or usability testing (which does involve users).
Why, what, how and when to evaluate...

Evaluation is a continuous process that examines:

- Why: to check users' requirements, that they can use the product, and that they like it
- What: a conceptual model, early (often paper) prototypes of a new system and, later, more complete prototypes
- How: in natural and laboratory settings
- When: throughout design; finished products can be evaluated to collect information to inform new products
When to evaluate...

- Formative evaluations: done during design and development to check that a product continues to meet users' needs. Results are then fed back into the design activity.
- Summative evaluations: done to assess the success or quality of a finished product. Summative assessment is of particular concern to standards bodies such as the National Institute of Standards and Technology (NIST), the International Organization for Standardization (ISO) and the British Standards Institution (BSI).
EVALUATION TYPES

Types of evaluation are...

- Settings not involving users: e.g. to predict, analyze and model aspects of the interface
- Controlled settings involving users: e.g. usability testing and experiments in lab-based settings
- Natural settings involving users: e.g. field studies and 'in the wild' studies to see how the product is used in the real world
Settings with NO users...

- Settings where the researcher has to imagine or model how an interface is likely to be used
- Inspection methods are commonly employed:
  - Heuristic evaluation
  - Cognitive walkthroughs
  - Analytics, e.g. web analytics
  - Models, e.g. the Keystroke-Level Model, Fitts' Law
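Predictive models like these can be expressed directly in code. The Python sketch below shows Fitts' Law (Shannon formulation) and a simple Keystroke-Level Model estimate. The Fitts constants `a` and `b` are illustrative placeholders (real values must be fitted per input device); the KLM operator times are the standard averages from Card, Moran and Newell.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predict pointing time (seconds) with Fitts' Law, Shannon form.

    a and b are device-dependent regression constants; the defaults
    here are illustrative placeholders, not measured values.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

def klm_estimate(operators):
    """Sum standard Keystroke-Level Model operator times (seconds)."""
    times = {
        "K": 0.20,  # keystroke (average skilled typist)
        "P": 1.10,  # point at a target with the mouse
        "B": 0.10,  # press or release a mouse button
        "H": 0.40,  # home hands between keyboard and mouse
        "M": 1.35,  # mental preparation
    }
    return sum(times[op] for op in operators)

# Example: mentally prepare, point and click a menu item (M P B B),
# then type a 4-letter word (K K K K)
total = klm_estimate("MPBB" + "KKKK")
```

Such models are useful precisely because they need no users: two candidate layouts can be compared analytically before any prototype exists.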
Heuristic evaluation

- A heuristic is a guideline, general principle or rule of thumb, e.g. "Students who do all the tutorial exercises do better in examinations." There may be exceptions, but on the whole it's true.
- It is a discount usability inspection method, and the focus is on the interface. It is undertaken by HCI experts.
- It is a user-centred, highly practical approach; natural behaviour in the environment or lab.
Nielsen's 10 Usability Heuristics

1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation
Shneiderman's Eight Golden Rules

1. Strive for consistency.
2. Enable frequent users to use shortcuts.
3. Offer informative feedback.
4. Design dialog to yield closure.
5. Offer simple error handling.
6. Permit easy reversal of actions.
7. Support internal locus of control.
8. Reduce short-term memory load.
How to do a heuristic evaluation...

- Briefing session: description of what the expert evaluators are supposed to do
- Evaluation period: a 1-2 hour period during which evaluators independently inspect the product
  - Usually two passes are done: the first pass gives an overall feel for the flow of the interface; the second pass focuses on specific interface elements
  - Experts often choose specific tasks to focus the evaluation
- Debriefing session: experts come together to discuss their findings, to prioritize problems, and to suggest solutions
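The debriefing step is often supported by severity ratings: each evaluator scores each problem on Nielsen's 0-4 severity scale (0 = not a problem, 4 = usability catastrophe), and the independent ratings are merged into a prioritised list. A minimal Python sketch, assuming a simple `(problem_id, evaluator, severity)` tuple format of my own invention rather than any standard:

```python
from collections import defaultdict
from statistics import mean

def prioritise_findings(findings):
    """Merge duplicate problems found by independent evaluators and
    rank them by mean severity on Nielsen's 0-4 scale.

    findings: list of (problem_id, evaluator, severity) tuples --
    an illustrative assumption, not a standard reporting format.
    Returns (problem_id, mean_severity, n_evaluators) tuples,
    worst problems first.
    """
    by_problem = defaultdict(list)
    for problem_id, _evaluator, severity in findings:
        by_problem[problem_id].append(severity)
    ranked = sorted(
        ((mean(sevs), len(sevs), pid) for pid, sevs in by_problem.items()),
        reverse=True,
    )
    return [(pid, round(avg, 2), n) for avg, n, pid in ranked]

findings = [
    ("no-undo", "expert-1", 4), ("no-undo", "expert-2", 3),
    ("tiny-font", "expert-1", 2),
    ("jargon-labels", "expert-2", 3), ("jargon-labels", "expert-3", 3),
]
ranked = prioritise_findings(findings)
```

Because evaluators inspect independently, merging and averaging like this keeps one expert's pet issue from dominating the fix list.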
Cognitive walkthroughs

- Focus on ease of learning.
- The designer presents an aspect of the design and usage scenarios.
- The expert is told the assumptions about the user population, context of use and task details.
- One or more experts walk through the design prototype with the scenario.
The 3 questions

Experts are guided by 3 questions:

1. Will the correct action be sufficiently evident to the user?
2. Will the user notice that the correct action is available?
3. Will the user associate and interpret the response from the action correctly?

As the experts work through the scenario they note problems.
Controlled settings with users...

- ...enable evaluators to control what users do, when they do it, and for how long
- ...enable evaluators to reduce outside influences and distractions
- Usability testing: a typical example of controlled evaluation
  - generally done in lab-based settings
  - primary goal is to determine whether an interface is usable by the intended user population
  - can be supplemented by observation, interviews and experiments
What is usability testing...

"Extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." (ISO 9241)

- Effectiveness: accuracy and completeness; does it do what users need?
- Efficiency: resources expended; how quickly can users perform tasks?
- Satisfaction: comfort and acceptability of the system; how pleasant is it to use? (Koivunen and May, 2002; Nielsen, 2012)
How do I test usability?

- Goals and questions focus on how well users perform tasks with the product
- Data are collected by video, interaction logging and thinking aloud
- User satisfaction questionnaires and interviews provide data about users' opinions (subjective measures)
- Comparison of products or prototypes is common

Testing is central!
What type of data should I collect (metrics)?

The focus is on time to complete a task AND the number and type of errors (Wixon and Wilson, 1997):

- Time to complete a task
- Time to complete a task after a specified time away from the product
- Number of users successfully completing a task
- Number of users making an error
- Number and type of errors per task
- Number of errors per unit of time
- Number of times online help and manuals are accessed
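Metrics like these fall out of session logs with very little computation. A Python sketch, assuming a hypothetical per-session record shape (`completed`, `seconds`, `errors`) rather than any standard logging format:

```python
def summarise_sessions(sessions):
    """Compute core usability-test metrics from session records.

    Each record is a dict with 'completed' (bool), 'seconds' (float)
    and 'errors' (int) -- an illustrative assumption about the log
    format, not a standard one.
    """
    n = len(sessions)
    completed = [s for s in sessions if s["completed"]]
    return {
        "completion_rate": len(completed) / n,
        "mean_time_completed": (
            sum(s["seconds"] for s in completed) / len(completed)
            if completed else None
        ),
        "errors_per_session": sum(s["errors"] for s in sessions) / n,
        "users_with_errors": sum(1 for s in sessions if s["errors"] > 0),
    }

# Four hypothetical sessions for one task
sessions = [
    {"completed": True,  "seconds": 95.0,  "errors": 1},
    {"completed": True,  "seconds": 120.0, "errors": 0},
    {"completed": False, "seconds": 300.0, "errors": 4},
    {"completed": True,  "seconds": 85.0,  "errors": 0},
]
stats = summarise_sessions(sessions)
```

Note that mean completion time is computed only over successful sessions; mixing in abandoned attempts would conflate the time and effectiveness measures.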
How many participants is enough?

- The number is a practical issue
- Depends on:
  - the schedule for testing;
  - the availability of participants;
  - the cost of running tests
- It is acceptable to have 5-12 users (Dumas and Redish, 1999); Nielsen (2000) recommends 5 users
- Some experts argue that testing should continue until no new insights are gained
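Nielsen's 5-user recommendation rests on the problem-discovery model of Nielsen and Landauer (1993): the proportion of problems found by n users is 1 - (1 - L)^n, where L is the average per-user discovery rate (about 0.31 in their data). A quick Python check of the curve:

```python
def problems_found(n_users, discovery_rate=0.31):
    """Expected proportion of usability problems uncovered by n users,
    per the 1 - (1 - L)^n model of Nielsen and Landauer (1993).

    L = 0.31 is their reported cross-project average; real values
    vary by product and task, so treat this as a rough guide.
    """
    return 1 - (1 - discovery_rate) ** n_users

# Five users already uncover roughly 84% of problems; each additional
# user adds less, which is why small iterative tests pay off.
five_user_yield = problems_found(5)
```

The diminishing returns visible here are the argument for several small tests across design iterations rather than one large test at the end.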
The "Thinking Aloud" Method

- We need to know what users are thinking, not just what they are doing
- Ask users to talk while performing tasks:
  - tell us what they are thinking
  - tell us what they are trying to do
  - tell us questions that arise as they work
  - tell us things they read
... or natural settings with users

- Field studies evaluate people in their natural settings and
  - help identify opportunities for new technology
  - establish requirements for a new design
  - facilitate the introduction of technology, or inform deployment of existing technology in new contexts
- The goal is to be unobtrusive and not to affect what people do during the evaluation
- Methods typically used are observation, interviews and logging
- Example paper
A USABILITY TESTING CASE STUDY

Usability testing the iPad!

- 7 participants with 3+ months' experience with iPhones
- Participants signed an informed consent form explaining:
  - what the participant would be asked to do;
  - the length of time needed for the study;
  - a promise that the person's identity would not be disclosed; and
  - an agreement that the data collected would be confidential and would be available only to the evaluators
- Then they were asked to explore the iPad
Problems and actions...

- Problems detected:
  - Accessing the Web was difficult
  - Lack of affordance and feedback
  - Getting lost
  - Knowing where to tap
- Actions by evaluators:
  - Reported to developers
- Accessibility for all users is important

Adapted from Preece, Sharp and Rogers (2015)


Developing an evaluation plan...

- Set evaluation (usability) goals
- Select the tools and techniques you will use
- Establish the membership of the evaluation team

"Produce a set of heuristics derived from relevant standards and guidelines; choose the method and the evaluators; run the test (storyboard or prototype); collect the comments; analyse the results; identify areas where the design needs to be improved."
Some key points

- Evaluation and design are very closely integrated.
- Different evaluation methods are used for different purposes at different stages of the design process and in different contexts of use.
- Evaluators mix and modify methods to meet the demands of evaluating novel systems.
- Some of the data gathering methods (see lecture notes) used in evaluation are the same as those used for establishing requirements and identifying users' needs, e.g. observation, interviews and questionnaires.
- Evaluations can be done in controlled settings such as laboratories, in less controlled field settings, or where users are not present.
