SUS System Usability Scale

Explanation

The System Usability Scale (SUS) is a method for giving a "grade" to the UX of a product or service.

Created in 1986 by John Brooke as a quick UX test for “green-screen” terminals, SUS has become an industry standard for evaluating UX. Research has shown it can reliably measure overall user satisfaction with statistically valid data from as few as 30 users – as long as they are well selected and willing to participate. The result is a score between 0 and 100 (68 is average), which companies can compare to industry benchmarks. SUS's capabilities end where one wants to see how users actually perform when completing critical tasks on a website.

Examples

10 questions, 100 points, 5 grades

It’s a questionnaire with 10 questions about using a product or service, resulting in a single figure that can be translated into a grade.

The scoring can be translated into grades

How-to Guide

SUS already works with 30 motivated, objective participants – some even say 5 are enough. The first step is to find these testers and have them use the product or service (if they haven't already).

In a recent analysis of an internal SUS survey at SAP, adding 20–30 respondents to the existing 30 still left a margin of error of less than 5%. 160 respondents or more reduce the margin of error to about 2.5%, where it more or less stays even if hundreds more are added. This example shows that 30 well-chosen respondents deliver a usable result.
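As a rough sketch of why the margin of error flattens out: for a mean score, the 95% confidence half-width shrinks with the square root of the sample size. The standard deviation of 15 points used below is my own illustrative assumption, not the SAP figure.

```python
import math

def margin_of_error(sd, n, z=1.96):
    """95% confidence half-width for a mean SUS score (0-100 scale)."""
    return z * sd / math.sqrt(n)

# Assumed standard deviation of 15 points, chosen for illustration only.
for n in (30, 60, 160, 500):
    print(n, round(margin_of_error(15, n), 1))
```

Under that assumption, 30 respondents land near a 5-point margin and 160 near 2.3 points, which matches the shape of the result described above: past a certain sample size, extra respondents barely move the needle.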

“I usually feel like I tested too many users, not too few. Unusable tasks reveal themselves pretty early on,” says Dean Barker, director of UX at Sage CRM. MeasuringU finds only an 11% difference.

The participants get the 10 questions. Digitally or on paper is both fine, but the order of the questions can't be changed (it affects the scoring).

The survey page from the original test by John Brooke from 1986. (download it or create your own)

My recommendation: add one question on how long the tester has been using the app/site, and separate the results of those who have experience with it from those who have little to none.

The difference in rating can be as much as 30%, according to MeasuringU.

Take the odd questions (1, 3, 5, 7, 9) and subtract 1 from each answer.

The “positive attitudes”

| The question | User selected | Formula to apply | Result to note |
|---|---|---|---|
| I think that I would like to use this system frequently. | 4 | 4 – 1 = 3 | 3 |
| I thought the system was easy to use. | 5 | 5 – 1 = 4 | 4 |
| I found the various functions in this system were well integrated. | 1 | 1 – 1 = 0 | 0 |
| I would imagine that most people would learn to use this system very quickly. | 2 | 2 – 1 = 1 | 1 |
| I felt very confident using the system. | 3 | 3 – 1 = 2 | 2 |

Take the even questions (2, 4, 6, 8, 10); here you use 5 as the start number and subtract whatever the user selected.

The “negative attitudes”

| The question | User selected | Formula to apply | Result to note |
|---|---|---|---|
| I found the system unnecessarily complex. | 5 | 5 – 5 = 0 | 0 |
| I think that I would need the support of a technical person to be able to use this feature. | 4 | 5 – 4 = 1 | 1 |
| I thought there was too much inconsistency in this feature. | 2 | 5 – 2 = 3 | 3 |
| I found the feature very cumbersome to use. | 3 | 5 – 3 = 2 | 2 |
| I needed to learn a lot of things before I could get going with this feature. | 1 | 5 – 1 = 4 | 4 |

Sum up all results

| Odd questions | Even questions | Formula to apply | Sum to note |
|---|---|---|---|
| 10 | 10 | 10 + 10 | 20 |

Multiply by 2.5

| Sum | Formula to apply | Result to note |
|---|---|---|---|
| 20 | 20 × 2.5 | 50 |
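The whole scoring procedure above can be sketched as a small function. The answer set at the bottom is a made-up example for illustration, not a real survey result.

```python
def sus_score(responses):
    """Compute a SUS score from the 10 answers (each 1-5, in question order)."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected 10 answers between 1 and 5")
    # Odd questions (1, 3, 5, 7, 9): subtract 1 from each answer.
    odd = sum(responses[i] - 1 for i in range(0, 10, 2))
    # Even questions (2, 4, 6, 8, 10): subtract each answer from 5.
    even = sum(5 - responses[i] for i in range(1, 10, 2))
    # Sum both parts and multiply by 2.5 to land on the 0-100 scale.
    return (odd + even) * 2.5

# A made-up answer set: 4 on every odd question, 2 on every even one.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```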

Check the results against the SUS grade system where a "school" grade from A (best) to F (worst) is associated with a result range.

The average is 68. Results can vary by up to 30% depending on a tester's prior exposure to the app or site. If that is a concern, see step 2 for adding an experience question and sorting the results.

Alternative: Use questions 4 and 10 separately to check the learnability of the app/site versus its usability, which is tracked by all other questions.
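That split can be sketched as follows, assuming the common two-factor reading in which questions 4 and 10 form the learnability scale and the other eight the usability scale; the rescaling factors are simply what stretches each part to 0–100.

```python
def sus_subscores(responses):
    """Split the 10 SUS answers into learnability (Q4, Q10) and
    usability (the other eight questions), each rescaled to 0-100."""
    # Per-question contribution: odd questions score answer-1, even 5-answer.
    contrib = [(r - 1) if i % 2 == 0 else (5 - r)
               for i, r in enumerate(responses)]
    learnability = contrib[3] + contrib[9]      # Q4 and Q10
    usability = sum(contrib) - learnability     # the remaining 8 questions
    # 2 questions x 4 points max = 8 -> x12.5; 8 x 4 = 32 -> x3.125.
    return learnability * 12.5, usability * 3.125
```

For example, `sus_subscores([4, 2, 4, 2, 4, 2, 4, 2, 4, 2])` returns `(75.0, 75.0)`: a respondent who answers uniformly gets the same learnability and usability subscore.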

Combination

SUS + NPS

Many UX designers combine SUS with NPS into one questionnaire. It’s hardly any more effort and generates two popular results.

Combination of NPS with SUS, chart by Measuring U

Evaluation

SUS is fast, cheap, and highly addictive

Most UX designers recommend using the grade as a starting point for a real UX analysis that also looks at what exactly goes wrong and where, not just at how much people like it.

It’s not diagnostics

1.
SUS can only show how good or bad the usability of a website is. It can’t show why that is.

Saying versus doing

2.
As with NPS, there is a discrepancy between what users say and what they actually do. They might say they will recommend the site to others but never do it in reality.

Everyone is not equal

3.
The opinion of one tester might not be as valuable as that of another.

No real comparison

4.
Comparisons to other companies don’t yield any insights. SUS doesn’t tell you what you have to do differently to reach a competitor's higher score, or why the competitor has that high score in the first place.

Addictive mind game

5.
Like NPS, it is addictive and people often spend more time tweaking numbers than actively working on their UX.

Being used to bad UX

6.
Users with prior experience will rate an app or site as more usable – by as much as 30%.

6: Statistic and article on prior exposure by Measuring U

Do you agree? Have you used this method? Did it work? 
