References
Jaime Ruiz, Yang Li, and Edward Lank. User-Defined Motion Gestures for Mobile Interaction. In CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems.
Author Bios
- Jaime Ruiz is currently a fifth-year doctoral student in the HCI Lab in the Cheriton School of Computer Science at the University of Waterloo.
- Yang Li is currently a Senior Research Scientist working for Google. He spent time at the University of Washington as a research associate in computer science and engineering. He holds a PhD in Computer Science from the Chinese Academy of Sciences.
- Edward Lank holds a Ph.D. in Computer Science from Queen's University. He is currently an Assistant Professor in the David R. Cheriton School of Computer Science at the University of Waterloo.
Hypothesis
- Although modern smartphones contain sensors that can detect three-dimensional motion, there is a need for a better understanding of best practices in motion-gesture design.
Methods
- 20 participants were asked to design and perform a motion gesture with a smartphone that could be used to execute a task on the device. These gestures were analyzed, and several were selected for inclusion in the rest of the study. In the following experiment, participants were given a set of tasks and a set of motion gestures; they performed each gesture and rated how well it matched the task and how easy it was to perform.
Results
- Participants tended to design gestures that mimicked everyday motions, such as putting the phone to their ear. They also related interacting with the phone to interacting with a physical object, such as turning the phone upside down to hang up a call, much like setting the receiver of an older phone back on its hook. Tasks that were considered opposites always resulted in similar gestures performed in opposite directions. A diagram of the resulting gesture set is shown at the top of this post.
Contents
- The paper begins with an experiment to determine how participants feel gestures ought to be mapped to tasks so that they are easy and intuitive. The results were fairly consistent, though some were unexpected. The paper then turns to a second question: which parameters participants manipulated when designing their gestures. The authors identify two classes of taxonomy dimensions: gesture mapping and physical characteristics. Gesture mapping can be further characterized as metaphorical, physical, symbolic, or abstract.
I felt that this was a fascinating read and a good example of progress in the mobile interaction arena. The hypothesis and testing were much more open-ended than in many of the previous papers, but I feel this approach lent itself to a better understanding overall. I think the authors achieved their research goals, but I would also be interested to see follow-up studies that verify their results.