Pilot Usability Study (Group)

Due: Tuesday, November 20, 2001

Goals
The goal of this assignment is to learn how to perform a simple usability test and to incorporate the results of the test into design changes in your prototype. In practice, this "pilot" study would be used to redesign your experiment before running the study with a larger pool of participants.

Prototypes
You will perform this test using the code you produced for the last assignment.

Participants
You will find three participants (i.e., volunteers who are not in this class) to work through your benchmark tasks. Remember, participation must be voluntary. You should have each participant sign an informed consent form and collect basic demographic information (e.g., age, sex, education level, major, and experience with your type of tasks and application).

Benchmark Tasks
Your test will use the tasks that you have been working on all semester. They should include one easy task, one medium task, and one difficult task. These tasks should give good coverage of your interface; if they don't, redesign them in advance.

Measures and Observations
Although it will be hard to get statistically significant bottom-line data with only three participants and a rough prototype, you should measure some important dependent variables (e.g., task time, number of errors) to get a feel for how it is done.

You will concentrate on process data. For example, you should instruct your participants to think aloud, and you should keep a log of critical incidents (both positive and negative events) -- for example, the user makes a mistake that you notice, or sees something they like and says "cool". Set up a clock that only the observers can see (one or more of you should observe), and whenever a critical incident occurs, record the time and what happened.
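If you prefer to keep this log on a laptop, a small script can timestamp entries for you. The sketch below is only an illustration, assuming a Python environment; the file name, entry format, and "+/-" tags are our own choices, and a paper log with a visible clock works just as well.

    # Minimal critical-incident logger (a sketch, not a required tool).
    # Each entry records the elapsed time since the task started, a +/- tag
    # for a positive or negative incident, and a free-form note.
    import time

    def run_log(participant, task, filename="incidents.txt"):
        start = time.time()
        print("Type '+ note' or '- note' for each incident; blank line to stop.")
        with open(filename, "a") as log:
            while True:
                entry = input("> ").strip()
                if not entry:
                    break
                elapsed = int(time.time() - start)
                log.write("P%s  %-9s  %02d:%02d  %s\n"
                          % (participant, task, elapsed // 60, elapsed % 60, entry))

    if __name__ == "__main__":
        run_log(participant=1, task="easy")

Run it once per task per participant so the logs stay separate and are easy to merge later.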

If you happen to have access to a video camera, it is fine to use it -- note the time that you start taping so that you can find your critical incidents later on tape. You may wish to use a tape recorder if you don't have a video camera.

Procedure
You will give each participant a short demo of the system. Do not show them exactly how to perform your tasks; just show how the system works in general, using a specific example that is different enough from your benchmark tasks. You should write up a script of your demo and follow the same script with each participant.

The participant will then be given directions for the first task that tell them what they are trying to achieve, not how to do it. When they finish, you will give them the directions for the next task, and so on. Each participant will perform all three tasks. You will want to keep the data separate for each task and each participant.

Location
Because the computer labs are noisy, it is preferable to conduct your experiment in an empty room or in a space with only a few other people around.

Results
You must report your results (values of the dependent variables, summaries of those values, and summaries of the process data) and, in the "Discussion" section, draw some conclusions about your interface prototype. You should also say how your system should change if those results were to hold for a larger user population. This should be the most important part of the write-up: we want to understand how you would fix your system as a result of what you observed.
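As an illustration of the kind of summary we mean, the sketch below (again assuming Python; a spreadsheet works just as well) computes the mean and range of task times per task. The numbers shown are placeholders, not real data -- substitute your own measurements.

    # Sketch: summarizing one dependent variable (task time, in seconds).
    # All values below are placeholders -- replace them with your measurements.
    from statistics import mean

    task_times = {  # task -> [participant 1, participant 2, participant 3]
        "easy":      [95, 120, 80],
        "medium":    [210, 260, 190],
        "difficult": [340, 300, 410],
    }

    for task, secs in task_times.items():
        print("%-10s  mean %5.1f s   range %d-%d s"
              % (task, mean(secs), min(secs), max(secs)))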

Write-up
Your write-up, turned in on paper and on the web, should follow the outline below, with separate sections for the top-level items (page counts per section are approximate). It should be about 4 pages, plus appendices and any sketches that illustrate what you are describing:

1. Introduction
        Introduce the system being evaluated (1 paragraph)
        State the purpose and rationale of the experiment (1 paragraph)
2. Method
        Participants (who they were -- demographics -- and how they were selected) (1/2 page)
        Apparatus (describe the equipment you used and where) (1 paragraph)
        Tasks (1/2 page) [you should have this already... fix it up if we have commented on it]
                describe each task and what you looked for when those tasks were performed
        Procedure (1/2 page)
                describe what you did and how
3. Test Measures (1/4 page)
        describe what you measured and why
4. Results (1 page)
        Results of the tests
5. Discussion (1 page)
        what you learned from the pilot run
        what you might change for the "real" experiment
        what you might change in your interface from these results alone
6. Appendices
        Materials (everything you read aloud -- demo script, instructions -- or handed to the participant -- task instructions)
        Raw data (i.e., entire merged critical incident logs)