CHI 436
Tuesday, November 15, 2011
Paper Reading #32: Taking advice from intelligent systems: the double-edged sword of explanations
I did Paper Reading #16, so I chose to skip this one.
Paper Reading #31: Identifying emotional states using keystroke dynamics
References
Identifying emotional states using keystroke dynamics by Clayton Epp, Michael Lippold, and Regan L. Mandryk. Presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.
Author Bios
- Clayton Epp is currently a software engineer for a private consulting company and holds a master's degree in CHI from the University of Saskatchewan.
- Michael Lippold is currently a master's student at the University of Saskatchewan.
- Regan L. Mandryk is an Assistant Professor in the Interaction Lab in the Department of Computer Science at the University of Saskatchewan.
Summary
- Hypothesis
- It is possible to determine a person's emotional state based on their keystrokes.
- Methods
- The researchers used a software program to collect keystroke patterns of the participants. Based on the user's level of activity, the program prompted the user with an emotional state questionnaire and another short piece of text to type. Users with fewer than 50 responses were eliminated on the basis that they didn't provide enough variance in response to be useful. The raw data collected included key press and release events, codes for each key, and a timestamp for each key event. A rough sketch of how timing features might be derived from such events appears after this summary.
- Results
- The researchers used undersampling on many of the models to help make the data more meaningful in terms of detectable levels of emotion. They found that two of their "tired" models performed most accurately with the most consistency, and that models utilizing the undersampling performed better overall.
- Contents
- In this paper the researchers describe how minute measurements of a person's keystrokes can be calibrated to give a reasonably accurate representation of their current emotional state. They discuss some of the related work in human emotion and computing, and go on to describe their experiment process in detail. They also describe some of the ways that this paper might be expanded upon in the future.
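To make the keystroke-dynamics idea more concrete, here is a minimal sketch of deriving common timing features (key dwell time and flight time) from the kind of raw log described in the Methods. The event format and feature names are my own assumptions, not the authors' actual feature set or code.

```python
from statistics import mean

def keystroke_features(events):
    """events: list of (timestamp_ms, key_code, kind) tuples, kind in {'press', 'release'}."""
    presses = {}        # key_code -> time of the most recent press
    dwell_times = []    # press-to-release duration for the same key
    flight_times = []   # release of one key to press of the next key
    last_release = None

    for t, key, kind in sorted(events):
        if kind == 'press':
            presses[key] = t
            if last_release is not None:
                flight_times.append(t - last_release)
        elif kind == 'release' and key in presses:
            dwell_times.append(t - presses.pop(key))
            last_release = t

    return {
        'mean_dwell_ms': mean(dwell_times) if dwell_times else 0.0,
        'mean_flight_ms': mean(flight_times) if flight_times else 0.0,
        'num_keystrokes': len(dwell_times),
    }

# Example: typing "hi" as two press/release pairs
sample = [(0, 'H', 'press'), (90, 'H', 'release'),
          (150, 'I', 'press'), (230, 'I', 'release')]
print(keystroke_features(sample))
```

A real classifier like the one in the paper would compute many more features (digraph and trigraph timings, per-emotion models), but the dwell/flight idea above is the core of keystroke dynamics.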
Discussion
I found this paper extremely interesting and was pleased to see that they were able to come up with a fairly consistent model for their research. They mentioned that their results might be improved upon in a laboratory setting with elicited emotions, and I agree. Their data was a little bit weak simply due to the nature of the study. However, I feel that they accomplished their overall goal and I am quite convinced of the results.
Paper Reading #30: Life "modes" in social media
References
Life "modes" in social media by Fatih Kursat Ozenc and Shelly D. Farnham. Presented at CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.
Author Bios
- Fatih Kursat Ozenc is at Carnegie Mellon University and holds a PhD in Interaction Design.
- Shelly D. Farnham is currently a researcher at Microsoft Research and holds a PhD from the University of Washington
Summary
- Hypothesis
- People organize their social worlds based on life 'modes' and social sites have not sufficiently addressed how to help users improve their experiences in this area.
- Methods
- The researchers recruited 16 participants after an extensive screening process and asked them to map out their lives as they are at present, focusing on how they spend their time and with whom they spend it. The participants then went through their maps with different colored markers, noting how they communicate with the people at each node.
- Results
- The majority of participants drew their life maps as social meme maps, while a few others focused more on a timeline style. The researchers found that participants chose communication channels based on closeness and different areas of their lives. Specifically, the closer they were to someone, the more they used a mix of multiple communication channels. Additionally, the amount of segmentation that participants wished to maintain between certain facets of their lives varied greatly with age, personality, and cultural differences.
- Contents
- This paper seeks to explore how we manage and compartmentalize the different social circles in our lives. By looking at how people classify different levels of interaction and comparing it to the various social channels used to maintain communication, the researchers hoped to gain a better view of how social networking in general can be adapted and improved to better cater to the structure of our social lives.
Discussion
I found this paper fascinating and highly relevant. I feel the authors were convincing with their research findings, but I would have also liked to see the results presented in a more measurable way. Perhaps the ambiguity was simply a necessary part of the research, particularly given the topic of study and the sheer variability between participants and social norms.
Paper Reading #29: Usable gestures for blind people: understanding preference and performance
References
Usable gestures for blind people: understanding preference and performance by Shaun K. Kane, Jacob O. Wobbrock, and Richard E. Ladner
Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.
Author Bios
- Shaun K. Kane is currently an Assistant Professor at the University of Maryland and holds a PhD from the University of Washington.
- Jacob O. Wobbrock is currently an Associate Professor at the University of Washington.
- Richard E. Ladner is currently a Professor at the University of Washington and holds a PhD in Mathematics from the University of California, Berkeley.
Summary
- Hypothesis
- Blind people have different needs and preferences for touch-based gestures than sighted people do. This paper aims to explore exactly what these preferences may be.
- Methods
- In the first study, both blind and sighted people were asked to invent gestures of their own that might be used to carry out standard tasks on a computing device. Because the visual results of commands would not be visible to all participants, the experimenter read a description of the action and result of each command. Each participant invented two gestures for each command and then assessed them based on usability, appropriateness, etc.
- The second study was more focused on determining whether blind people simply perform gestures differently or actually prefer to use different gestures. In this study all participants performed the same set of standardized gestures. The experimenter described the gesture and its intended purpose, and the participants tried to replicate it based on his instruction.
- Results
- In the first study the experimenters found that, on average, a blind person's gesture contains more strokes than a sighted person's. Additionally, blind people were also slightly more likely to make use of the edge of the tablet when positioning their gestures, as well as being more likely to use multi-touch gestures.
- In the second study, there was no significant difference in perceived easiness between blind and sighted people. It was noted that blind people tended to make significantly larger gestures than sighted people, although the aspect ratio appeared consistent between the two groups. Additionally, blind participants took about twice as long to perform the gestures, and their lines were often more "wavy" than those of sighted participants. A rough sketch of how such gesture measures might be computed appears after this summary.
- Contents
- This paper takes steps toward bridging the touch-screen accessibility gap for blind people. After discussing some of the previous work done in the field, the authors describe their two experiments designed to measure exactly how blind people's gesture preferences differ from sighted people's. The findings are, overall, reasonably predictable in terms of differences in gesture size and performance speed, and the results suggest new design guidelines for more accessible touch interfaces.
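As an illustration of the second study's measures, here is a rough sketch of how gesture size, aspect ratio, stroke count, and duration could be computed from recorded touch points. The input format (a gesture as a list of strokes, each a list of (x, y, t) samples) is an assumption of mine, not the authors' tooling.

```python
def gesture_measures(strokes):
    """strokes: list of strokes, each a list of (x, y, t) touch samples."""
    points = [p for stroke in strokes for p in stroke]
    xs = [x for x, _, _ in points]
    ys = [y for _, y, _ in points]
    ts = [t for _, _, t in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return {
        'num_strokes': len(strokes),
        'width': width,
        'height': height,
        'aspect_ratio': width / height if height else float('inf'),
        'duration': max(ts) - min(ts),
    }

# Example: a single L-shaped stroke drawn over 0.9 seconds
gesture = [[(0, 0, 0.0), (0, 100, 0.4), (60, 100, 0.9)]]
print(gesture_measures(gesture))
```

Comparing these values between the blind and sighted groups is how differences like "larger gestures" and "twice as long to perform" can be quantified.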
Discussion
I thought that this was a very interesting study and I think the authors did a great job of achieving their stated goals. They performed well-thought-out tests and presented their findings in a very convincing and organized manner. I would like to see them expand on their work with accessibility in interfaces, perhaps focusing on other disabilities such as general motor impairment.
Paper Reading #28: Experimental analysis of touch-screen gesture designs in mobile environments
References
Experimental analysis of touch-screen gesture designs in mobile environments by Andrew Bragdon, Eugene Nelson, Yang Li, and Ken Hinckley. Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.
Author Bios
- Andrew Bragdon is currently a PhD student at Brown University.
- Eugene Nelson is currently a PhD student at Brown University.
- Yang Li is a researcher at Google and holds a PhD from the Chinese Academy of Sciences.
- Ken Hinckley is a Principal Researcher at Microsoft Research and has a PhD from the University of Virginia.
Summary
- Hypothesis
- Bezel-initiated and mark-based gestures can offer faster, more accurate performance for mobile touch-screen interaction that is less demanding of user attention.
- Methods
- 15 participants performed a series of tasks designed to model varying levels of distraction and measure their interaction with the mobile device. They studied two major motor activities, sitting and walking, and paired them with three levels of distraction, ranging from no distraction at all to attention-saturating distraction. The participants were given a pre-study questionnaire and instruction on how to complete the tasks in addition to a demonstration.
- Results
- Bezel marks had the lowest mean completion time, though there was no significant difference between the mean completion times of soft-button and hard-button marks. There was also no significant difference between soft-button paths and bezel paths, but there was a noticeable increase in mean completion time between bezel paths and hard-button paths. Bezel marks and soft buttons performed similarly when the user's attention was focused, and under the various distraction conditions bezel marks significantly and consistently outperformed soft buttons. A toy sketch of how such per-condition completion times might be aggregated appears after this summary.
- Contents
- This paper examines the user interaction with soft buttons, hard buttons, and gestures and observes how distractions affect these interactions. The results of their experiments indicate that direct touch gestures can produce performance and accuracy that is comparable with soft buttons when the user's attention is focused, and actually improve performance in the presence of distractions. They found that bezel-initiated gestures were the fastest and most preferred by users, and that mark-based gestures were faster and more accurate to perform than free-form path gestures.
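To illustrate how results like these are typically summarized, below is a hypothetical sketch that aggregates trial logs into mean completion times per technique and distraction condition. The condition names, log format, and numbers are illustrative assumptions, not the study's data.

```python
from collections import defaultdict
from statistics import mean

# (technique, distraction, completion_time_s) -- invented placeholder trials
trials = [
    ('bezel_mark', 'none', 1.1), ('bezel_mark', 'attention_saturating', 1.3),
    ('soft_button', 'none', 1.2), ('soft_button', 'attention_saturating', 1.9),
    ('hard_button_mark', 'none', 1.2), ('hard_button_mark', 'attention_saturating', 1.6),
]

by_condition = defaultdict(list)
for technique, distraction, seconds in trials:
    by_condition[(technique, distraction)].append(seconds)

for (technique, distraction), times in sorted(by_condition.items()):
    print(f'{technique:16s} {distraction:22s} mean = {mean(times):.2f}s')
```

The actual study would then test whether differences between these means are statistically significant, which is where the bezel-versus-soft-button comparisons above come from.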
Discussion
I believe the authors accomplished their goal of understanding how distractions can play a role in how users prefer to interact with their devices, and I think that they did a good job of covering all of the bases and exploring a wide avenue of possibilities. I think that their methodology was thorough and sound, and I have nothing to criticize about this paper.
Paper Reading #27: Sensing cognitive multitasking for a brain-based adaptive user interface
References
Sensing cognitive multitasking for a brain-based adaptive user interface by Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, Angelo Sassaroli, Sergio Fantini, Paul Schermerhorn, Audrey Girouard, and Robert J.K. Jacob. Presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.
Author Bios
- Erin Treacy Solovey is a postdoctoral fellow in the Humans and Automation Lab (HAL) at MIT.
- Francine Lalooses is a PhD candidate at Tufts University and has bachelor's and master's degrees from Boston University.
- Krysta Chauncey is a postdoctoral researcher at Tufts University.
- Douglas Weaver has a doctorate from Tufts University.
- Margarita Parasi is working on a master's degree at Tufts University.
- Angelo Sassaroli is a research assistant professor at Tufts University and has a PhD from the University of Electro-Communications.
- Sergio Fantini is a professor at Tufts University in the Biomedical Engineering Department.
- Paul Schermerhorn is a postdoctoral researcher at Tufts University and has studied at Indiana University.
- Audrey Girouard is an assistant professor at Queen's University and has a PhD from Tufts University.
- Robert J.K. Jacob is a professor at Tufts University.

Summary
- Hypothesis
- Cognitive multitasking is a common element of daily life, and the researchers' human-robot system can be useful in recognizing these multitasking states and assisting with their execution.
- Methods
- The first experiment was designed to highlight three conditions: delay, dual-task, and branching. The participants interacted with a simulation of a robot on Mars, sorting rocks. Based on the pattern and order of rock classification, the researchers measured data related to each of the three conditions listed above.
- The second experiment was used to determine whether they could distinguish specific variations of the branching task. Branching was divided into two categories, random branching and predictive branching, and the experiment followed the same basic procedure as the first; however, here there were only two experimental conditions.
- Results
- In the first experiment, statistical analysis was performed and all variables were tested for normal distribution. There were statistically significant differences in response time between delay and dual-task and between delay and branching, but not between dual-task and branching. Correlations between accuracy and response time were not significant, and the researchers did not find a learning effect.
- As in the first experiment, the second experiment also collected data about response time and accuracy, and statistical analysis was performed. There was no statistically significant difference in response time between random and predictive branching, nor was there a significant difference in accuracy. Additionally, there was no correlation between accuracy and response time for random branching, but there was a correlation under predictive branching. A toy sketch of this kind of pairwise comparison appears after this summary.
- Contents
- This paper describes a study done to assess cognitive multitasking, and how the human-robot system can have an effect on this process. It describes some of the related work that has been done in the field and explains how this paper expands on some of the pre-existing work. It then goes on to describe the experiments carried out to test the effectiveness of the hypothesis.
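As a toy illustration of the pairwise response-time comparisons described in the results, the sketch below runs paired t-tests between the three conditions. The per-participant numbers are invented placeholders, and a paired t-test is only one possible analysis; this is not a reconstruction of the authors' statistics.

```python
from itertools import combinations
from scipy import stats

# Per-participant mean response times in seconds (hypothetical placeholder values)
response_times = {
    'delay':     [1.8, 2.1, 1.9, 2.4, 2.0],
    'dual-task': [2.6, 2.9, 2.7, 3.1, 2.8],
    'branching': [2.5, 3.0, 2.8, 3.2, 2.9],
}

# Compare each pair of conditions on response time
for a, b in combinations(response_times, 2):
    t_stat, p_value = stats.ttest_rel(response_times[a], response_times[b])
    print(f'{a} vs {b}: t = {t_stat:.2f}, p = {p_value:.3f}')
```

This mirrors the structure of the reported findings: a significant difference shows up for the pairs involving the delay condition, while closely matched conditions do not separate.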
Discussion
While I feel that this research was not as resoundingly successful as the researchers had hoped, it does provide a solid stepping stone toward further research. To that end, I think that the authors did achieve their goal. They were very thorough in their research, but they might have gotten more solid results with a larger pool of test subjects.
Paper Reading #26: Embodiment in brain-computer interaction
References
Embodiment in brain-computer interaction by Kenton O’Hara, Abigail Sellen, and Richard Harper. Presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.
Author Bios
- Kenton O’Hara is a Senior Researcher at Microsoft Research and works in the Socio Digital Systems Group.
- Abigail Sellen is a Principal Researcher at Microsoft Research and holds a PhD from the University of California, San Diego.
- Richard Harper is a Principal Researcher at Microsoft Research and holds a PhD from the University of Manchester.
Summary
- Hypothesis
- There is a need to better understand the potential of brain-computer interaction, and the authors assert that studying whole-body interaction is important, rather than focusing on the brain alone.
- Methods
- The study made use of the MindFlex game, a device that uses EEG technology to measure electrical signals in the brain. As brain activity increases the fan blows more strongly, and as brain activity decreases, so does the fan. The participants took the game home for a week to play in a relaxed setting and were asked to record their gameplay. The videos were analyzed by the researchers with a focus on the physical manifestation of behavior around the game, looking at bodily action, gestures, and utterances. The aim was to describe the embodied nature of the interactions and collaborations and how they were coordinated. A toy sketch of the attention-to-fan mapping appears after this summary.
- Results
- Body position was found to play a large role in game play, with participants orienting themselves based on the task they were attempting. For example, when concentrating harder, they might scrunch their faces or clench their fists; when not concentrating as hard, the gestures relaxed. The researchers also noticed narratives arising when players gave one another instructions, which went beyond what the game required. Finally, they noticed a certain level of "performance" that went along with the activity.
- Contents
- This paper begins by describing the need for better understanding of how the entire body works to support the brain's goals. It then describes an experiment in which participants are asked to behave naturally while interacting with the 'mind-reading' technology, and their actions and gestures are closely analyzed. The researchers found several behavioral patterns that were consistent between players.
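For a sense of the game mechanic at the center of the study, here is a toy sketch that maps a single normalized "attention" reading from an EEG headset to fan strength, so that more measured brain activity lifts the ball higher. The scaling and value range are assumptions of mine; the device's actual mapping is not described in the paper.

```python
def fan_power(attention, min_power=0.2, max_power=1.0):
    """Map a normalized attention level in [0, 1] to a fan power level."""
    attention = max(0.0, min(1.0, attention))  # clamp noisy readings
    return min_power + attention * (max_power - min_power)

# Example: low, medium, and high concentration readings
for level in (0.0, 0.5, 1.0):
    print(f'attention {level:.1f} -> fan power {fan_power(level):.2f}')
```

The paper's point is that even this simple brain-to-actuator loop pulls in the whole body, as the results on posture, gesture, and "performance" show.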
Discussion
As a sci-fi/fantasy fiction enthusiast, this sort of technology is particularly appealing to me because it uses technology to mimic mystical or unexplainable powers. I had the opportunity to play a MindFlex game once on campus during a fair, and I found it very intuitive. To be honest, I would have been a terrible subject for this experiment because I didn't have any kind of 'flair' in my behavior. I think the authors did a very good job, however, and I look forward to seeing more of this technology.