- Accusations of ‘user error’ are common, but in reality, these are just errors in design.
- Frameworks like the Seven Stages of Action can help you to evaluate design, and understand potential usability issues.
- Prototypes offer a safe, closed environment for you to test how these errors might arise.
We’ve all been accused of it, and probably admitted to it ourselves at some point: the dreaded user error. The PICNIC (problem in chair, not in computer), the IBM (idiot behind machine)… take your pick.
Essentially the accusation that you, as the person responsible for operating the thing, have done something that has broken the thing and it’s entirely your fault.
But is there such a thing as a user error? Is the problem ever in the chair, or is it always in the computer?
For an answer to those questions, it’s helpful to look back to the end of World War II and the work of a relatively unknown psychologist by the name of Paul Fitts (see the oddly brief Wikipedia page here) on the causes of aircraft accidents.
The beginning of the end for “user error”
The USAF’s B-17 bomber was, by all accounts, rather a good aircraft, but it suffered from the pretty major problem of being quite “crashy”. As described in Kuang and Fabricant’s 2019 book User Friendly, accidents involving the hastily developed and deployed bomber “slid back and forth on a scale of tragedy to tragicomic: pilots who slammed their planes into the ground after misreading a dial; pilots who fell from the sky never knowing which direction was up; the pilots of B-17s who came in for smooth landings and yet somehow never deployed their landing gear.”
Fitts’ data was the crash reports from these incidents, which almost exclusively listed the cause as “pilot error”. His genius was the realisation that, if pilot error really were the cause, there would be a degree of unpredictability about the nature of the crashes. Instead, the same kinds of crashes kept happening: where others saw a series of individual mistakes, Fitts saw a pattern.
Fitts and his colleagues decided it might be a good idea to actually go and look at the aircraft in question and discover for themselves what was going on. You may recognise this idea if you’re familiar with The Toyota Way and the principle of Genchi Genbutsu, which essentially means: “go and have a look for yourself”. What they found, again from User Friendly, was not “evidence of poor training. [They] saw, instead, the impossibility of flying these planes at all.”
I think, therefore I fail
A quick diversion. Generally speaking, we use consciousness for activities that are high in novelty, high in complexity and high in ill-definedness, farming out all other related activities to our subconscious mind. Imagine avoiding something that’s just fallen off the back of a lorry in front of you on the motorway. Moving your foot to the brake and the action of twirling your arms on a steering wheel are not novel, complex or ill-defined once you are well-practised; the movement of an obstacle bouncing randomly in your path always is – it’s never going to be exactly the same. Outsourcing the physical actions to the subconscious mind allows you to focus fully on reacting to the changing nature of the situation you’re dealing with. One of the reasons a more experienced driver is less likely to hit the bouncing thing is that less brain power is dedicated to ‘the doing’, freeing up more bandwidth to deal with ‘the reacting’.
Landing an almost-certainly badly damaged aircraft, after a life-threatening sortie, barely into your 20s during a world war scores pretty high on novel, complex and ill-defined. Therefore the environment to facilitate that landing must be subconsciously effortless – the lowest possible levels of novelty, complexity and ill-definedness.
During their observations Fitts’ team found that this was absolutely not the case.
Aside from badly designed instrumentation, one of their key findings was that the separate levers which controlled the aircraft’s flaps and landing gear were identical, and right next to each other. Pilots focused on landing who were reaching for the landing gear lever were instead grasping the flaps lever, slowing their speed and piling the plane into the runway with the landing gear still stowed safely in the fuselage. A pilot error. Fitts’ colleague Alphonse Chapanis reframed it: this was a design error.
His beautifully simple solution, which is still used today, was to make the levers different shapes: a practice now known as shape coding.
The Seven Stages of Action
Shape coding should not just be the preserve of aircraft cockpits and other complex, high-stakes environments. For a couple of examples of when proper shape coding would have made your day effortlessly easier and less embarrassing, think of the last time you pulled a push door because it had a handle instead of a plate, or found yourself waving hopefully at a tap in a public toilet. As designers of things (whether that’s a process, a product or anything else) we should always take care to make interaction with those things as straightforward as possible: make it as close to impossible as we can for the user to fail to get the result they want.
In The Design of Everyday Things, Donald Norman defines a useful model for thinking about how this can be done: The Seven Stages of Action. His model says that there are two parts to any action: executing the action and evaluating the results, which are made up of seven stages:
- Goal (form the goal)
- Plan (the action)
- Specify (an action sequence)
- Perform (the action sequence)
- Perceive (the state of the world)
- Interpret (make sense of the change)
- Compare (the change you perceived with the goal you formed)
Stages 2-4 are defined as the bridge of execution – the actions taken to test your goal against the world. Stages 5-7 are defined as the bridge of evaluation – the actions taken to assess the effectiveness of the actions you took, in order to decide whether the action has been satisfactorily completed, or to facilitate a revised plan. How did the world respond to what I did to achieve my goal?
We can use these stages as tests against which we can model our users’ interactions. What will the user want to achieve when they interact with our design? What will they want to do in order to achieve that goal, how will they formulate that action and what will the action look like? And just as importantly, what feedback will our design give the user in order that they can effectively alter their approach if it doesn’t work?
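The stages above can double as a simple design-review checklist. The sketch below is purely illustrative – the function and the question wordings are my own, not Norman’s – but it shows the idea: for each stage, ask whether the design gives the user a clear answer, and treat every gap as a candidate “user error” waiting to happen.

```python
# Illustrative only: the stage questions and function name below are
# assumptions, not taken from Norman's book.

SEVEN_STAGES = {
    "Goal": "What does the user want to achieve?",
    "Plan": "What action will they choose to achieve it?",
    "Specify": "What exact sequence of steps will they formulate?",
    "Perform": "Can they physically carry out that sequence?",
    "Perceive": "What change in the world will they notice afterwards?",
    "Interpret": "How will they make sense of that change?",
    "Compare": "Can they tell whether their goal has been met?",
}

def walkthrough(answers):
    """Return the stages the design leaves unanswered -- each gap is a
    candidate 'user error' waiting to happen."""
    return [stage for stage in SEVEN_STAGES if not answers.get(stage)]

# Worked example: the push door with a handle.
gaps = walkthrough({
    "Goal": "pass through the doorway",
    "Plan": "open the door",
    "Specify": None,   # the handle signifies 'pull' on a door that must be pushed
    "Perform": "pull the handle",
    "Perceive": "the door does not move",
    "Interpret": None, # is it locked, stuck, or am I doing it wrong?
    "Compare": "goal not met",
})
print(gaps)  # → ['Specify', 'Interpret']
```

The two flagged stages tell you exactly where the door’s design (not its user) is failing: the signifier suggests the wrong action, and the feedback doesn’t explain what went wrong.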
Affordances and signifiers
Any interaction with something in the world is defined by affordances and signifiers, and the nature of those things defines the nature of the interaction a user is likely to have. In the case of the door, the door itself is the affordance – it is something the environment offers the user: an opportunity to move from one room to the next; to reduce noise; to limit access; to telegraph the nature of a meeting; and so on and so forth. Signifiers serve the purpose of making clear the nature of the affordance and the correct way in which to interact with it. It is by paying close attention to those signifiers and what they might tell a user that we can really start to eliminate user error from our designs. In the case of the door they may be as explicit as a “PUSH” sticker, as suggestive as a plate in place of a handle, or as implicit and subtle as the design of the hinges.
It’s worth remembering that the absence of a signifier is also powerful: no visible keyhole on a door might remove “it might be locked” from the list of possible reasons for a goal/world misalignment. That might be positive or negative, depending on how you want a user to interact with your door. Additionally, there are cultural and technological signifiers outside of your control that may need to be considered. At the Volkshotel in Amsterdam you exit the lift on your floor and there is a set of doors that require you to scan your keycard in order to gain access to the corridor where your room is. There is a sign on the door that says “Scan your card on the wall on the left” – I’m so used to cleverly integrated technology that I assumed it literally meant that: “scan your keycard on the wall on the left”. A couple of failed attempts later, I realised that there was a keycard scanner box a bit further back, just outside the natural radius of the look you do when you’re told “on the left”.
Some tips for prototyping
Aside from just being aware of human factors, what are the practical steps you can take to integrate them into your design thinking? A beginner’s mindset is always a good thing to adopt, especially in the design of complex products and systems. As is rigorous user observation, along with all the other standard design-thinking practices.
I’d like to focus a bit more specifically on prototyping though, and a few good practices for more effective prototype testing with users.
The first and most important rule is: do not explain your prototype. Present it to the tester and ask them to interact with it. Do not correct them, but closely observe the way they interact and, if they struggle, don’t help them: instead ask them why they decided on the action they took. “Why did you pull that door?” “Because it has a handle.” If there are dead-ends, which will be especially prevalent in Wizard of Oz or (in some cases) Fake Door prototypes, ask the user what they would expect to happen after that dead-end. Then try to add that, and test it with your next set of users.
The second rule is, wherever possible, don’t collect insights yourself. Have another person act as scribe while you engage the user in conversation. The scribe categorises things the user says under the headings “Likes”, “Dislikes”, “Questions” and “Ideas”, and you use these to iterate and retest your prototype. Quiz the user and ask why they did the things they did, without directing their interaction with the prototype.
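As a minimal sketch of what the scribe’s grid might look like in practice (the helper names here are hypothetical – sticky notes under four headings work just as well), the idea is simply to file every verbatim user quote under exactly one heading:

```python
# Hypothetical helpers; any note-taking tool achieves the same thing.

HEADINGS = ("Likes", "Dislikes", "Questions", "Ideas")

def new_grid():
    """One empty list of verbatim quotes per heading."""
    return {heading: [] for heading in HEADINGS}

def record(grid, heading, quote):
    """File a user quote under one of the four headings."""
    if heading not in grid:
        raise ValueError(f"Unknown heading: {heading}")
    grid[heading].append(quote)

session = new_grid()
record(session, "Dislikes", "I pulled the door because it has a handle.")
record(session, "Questions", "What happens after I scan my card?")
record(session, "Ideas", "Put the scanner where I naturally look.")

for heading in HEADINGS:
    print(f"{heading}: {len(session[heading])} note(s)")
```

Keeping the quotes verbatim matters: the “Questions” column tells you where your signifiers are unclear, and the “Ideas” column feeds directly into the next iteration of the prototype.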
A third good rule is to be careful about what you include in the prototype, and don’t get hung up on “resolution”: a prototype doesn’t have to look amazing. If there’s a button on your prototype, why is it there? What do you want from the test user? If it’s feedback about position or size, don’t make the button bright red, since the user will almost certainly offer you their thoughts on the colour instead. Conversely, don’t be afraid to exaggerate the property you do want feedback on: if it is colour, make it a disgusting brown, purple, bright red and green rainbow nightmare.
Finally, and most importantly, never explain away negative user feedback by saying “they just didn’t get it”. Make no mistake, if they didn’t: that’s your fault, not theirs.
The &us team have worked with leading organisations like HP, Novartis and the Post Office on prototyping and designing products to ensure users have a smooth experience. If you’d like to find out how we can help you, get in touch with us right here.