HCI Course Part 4 - Heuristic Evaluations

This is part four in my series about the Human-Computer Interaction course I took through Coursera. Read all my posts for the full story.

An important part of this course is the peer review and self-evaluation process. If you take the process seriously, you can learn a huge amount by evaluating the work of other students as well as your own. Each assignment has a grading rubric to help you figure out what kind of feedback to give and how to score.

One neat thing they do is show you the entire grading rubric in advance of completing the assignment. That way you can see what your peers will be looking for when reviewing your work. Usually these line up with what you would want to accomplish when completing a similar task on the job.

Additionally, when it comes time to do the peer reviews, you do some practice reviews first. After each practice review you are shown the actual scores and the reasoning behind them. This process lets people from all different backgrounds fine-tune their reviewing skills.

The peer review and self-evaluation process for Assignment 3: Wireframing also gave us a chance to practice a common UX technique: heuristic evaluations.

What are heuristic evaluations?

The process of heuristic evaluation was developed by Jakob Nielsen as an effective, easy, quick, and inexpensive way to analyze a user interface design for usability issues. A small group of people evaluates the design against a set of usability principles called "heuristics."

10 Design Heuristics (from 10 Usability Heuristics for User Interface Design)

  1. Visibility of system status
  2. Match between system and the real world
  3. User control and freedom
  4. Consistency and standards
  5. Error prevention
  6. Recognition rather than recall
  7. Flexibility and efficiency of use
  8. Aesthetic and minimalist design
  9. Help users recognize, diagnose and recover from errors
  10. Help and documentation

For each issue found, the evaluator lists the heuristic broken, the location where the issue was found, and the severity of the issue (i.e. is it just annoying, or does it make the program unusable?).

Each evaluator finds a slightly different set of issues. All issues found are compiled for analysis and prioritization: how quickly can each one be fixed, and how important is it to fix?
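To make the record-keeping concrete, here is a minimal sketch in Python of how the findings might be captured and prioritized. The issue fields (heuristic, location, severity) come straight from the process above; the `Issue` class name and the example data are my own illustration, and I'm assuming Nielsen's common 0–4 severity scale, where 0 is "not a problem" and 4 is a usability catastrophe.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    heuristic: str   # which of the 10 heuristics is broken
    location: str    # where in the interface the issue was found
    severity: int    # 0 (not a problem) .. 4 (usability catastrophe)

# Hypothetical findings compiled from several evaluators
findings = [
    Issue("User control and freedom", "Daily Plan: tasks cannot be deleted", 4),
    Issue("Error prevention", "Buy Tickets: trash button sits next to complete button", 1),
    Issue("Consistency and standards", "'That's correct' / 'Ignore' buttons swap positions", 2),
]

# Prioritize: most severe issues first
for issue in sorted(findings, key=lambda i: i.severity, reverse=True):
    print(f"[{issue.severity}] {issue.heuristic}: {issue.location}")
```

In practice the "how fast can it be fixed" half of prioritization would come from the development team, so a real triage would sort on more than severity alone.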

Here are some issues I found while doing the peer reviews and self-evaluations for Assignment 3. Obviously these are a bit out of context without seeing the wireframes themselves but at least you can get a feel for how this works.

Heuristic Broken | Description / Location | Severity

User control | No way to delete tasks that are already listed on the Daily Plan | 4

Error prevention | Trash button is very close to the complete button on the buy tickets page. It could be easy for people to accidentally delete a task when they meant to complete it. | 1

Consistency | "That's correct" and "Ignore" buttons occupy the same part of the screen as each other but on sequential pages. | 2

Minimalist design | Task overview page could be too much visual information. However, the information presented falls within the "recognition rather than recall" heuristic. | 0

So next time you find yourself frustrated with an app or website, stop and consider which heuristic has been broken. At the very least, it will take your mind off the task at hand for a moment of reflection.