Tuesday, October 25, 2011

Paper Reading #23: User-defined Motion Gestures for Mobile Interaction



References
User-defined Motion Gestures for Mobile Interaction by Jaime Ruiz, Yang Li, and Edward Lank.  Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios
  • Jaime Ruiz is currently a fifth-year doctoral student in the HCI Lab in the Cheriton School of Computer Science at the University of Waterloo.
  • Yang Li is currently a Senior Research Scientist working for Google.  He spent time at the University of Washington as a research associate in computer science and engineering.  He holds a PhD in Computer Science from the Chinese Academy of Sciences.
  • Edward Lank holds a PhD in Computer Science from Queen's University.  He is currently an Assistant Professor in the David R. Cheriton School of Computer Science at the University of Waterloo.
Summary 
  • Hypothesis
    • Although modern smartphones contain sensors to detect three-dimensional motion, there is a need for better understanding of best practices in motion-gesture design.  
  • Methods
    • 20 participants were asked to design and perform a motion gesture with a smartphone device that could be used to execute a task on the smartphone.  These gestures were then analyzed, and several were selected for inclusion in the rest of the study.  In the following experiment, the participants were given a set of tasks and a set of motion gestures.  The participants performed each gesture and rated it based on how well the gesture matched the task and how easy it was to perform.  
  • Results
    • Participants tended to design gestures that mimicked normal motions, such as putting the phone to their ear.  They also related interacting with the phone to interacting with a physical object, such as turning the phone face down to hang up, as with older handsets.  Tasks that were considered opposites always resulted in similar gestures performed in opposite directions (a sketch of the agreement statistic used in studies like this appears after this summary).  A diagram of the resulting gesture set is shown at the top of this post.
  • Contents
    • The paper begins with an experiment to determine how participants feel gestures ought to be mapped to make them easy and intuitive.  The results were fairly consistent, though some were unexpected.  The paper then turns to a second question: the set of parameters manipulated by the participants.  The authors determine that there are two classes of taxonomy dimensions: gesture mapping and physical characteristics.  Gesture mapping can be further broken down by the nature of the gesture: metaphor, physical, symbolic, or abstract.
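Elicitation studies in this style typically quantify consensus with an agreement score in the manner of Wobbrock et al.; below is a minimal Python sketch of that statistic, with toy gesture labels standing in for the paper's actual data.

```python
from collections import Counter

def agreement(proposals):
    """Per-task agreement: sum over identical-gesture groups of (|group| / |proposals|)^2."""
    n = len(proposals)
    return sum((count / n) ** 2 for count in Counter(proposals).values())

# 20 hypothetical participants proposing a gesture for "answer call"
proposals = ["raise_to_ear"] * 14 + ["shake"] * 4 + ["flip"] * 2
print(agreement(proposals))  # 0.54; closer to 1.0 means stronger consensus
```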
Discussion
I felt that this was a fascinating read and a good example of progress in the mobile interaction arena.  The hypothesis and testing were a lot more open-ended than in many of the previous papers, but I feel that this approach lent itself to a better understanding overall.  I think the authors achieved their research goals, but I would also be interested to see follow-up studies to verify their results.  

Paper Reading #22: Mid-air pan-and-zoom on wall-sized displays



References
Mid-air pan-and-zoom on wall-sized displays by Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, and Wendy Mackay.  Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios
  • Mathieu Nancel is currently a PhD student in HCI at Université Paris-Sud XI under the supervision of Michel Beaudouin-Lafon and Emmanuel Pietriga.
  • Julie Wagner is a PhD student in the InSitu lab in Paris, working on new tangible interfaces and new interaction paradigms for large public displays.
  • Emmanuel Pietriga is currently a full-time research scientist working for INRIA Saclay - Île-de-France.  He is also the interim leader of INRIA team In Situ.
  • Olivier Chapuis is a Research Scientist at LRI.  He is also a member and co-head of the InSitu research team.
  • Wendy Mackay is a Research Director with INRIA Saclay in France, though currently on sabbatical at Stanford University.  She is in charge of the InSitu research group.
Summary 
  • Hypothesis
    • The main hypothesis of the paper is that there is a need for more research on complex tasks performed on high-resolution wall-sized displays.  The authors also made seven smaller hypotheses about how people best interact with the candidate techniques:
      1. Two hands are faster than one
      2. Two-handed gestures should be more accurate and easier to use.
      3. Linear gestures should map better to the zooming component, but should ultimately be slower because of clutching.
      4. Users will prefer clutch-free circular gestures (see the sketch at the end of this summary).
      5. Techniques using fingers should be faster than those requiring larger muscle groups.
      6. 1D path gestures should be faster and overshoot less than techniques with less haptic feedback.
      7. 3D gestures will be more tiring.
  • Methods
    • They conducted an experiment with 12 participants based on three primary factors: handedness, gesture, and guidance.  They controlled for potential distance effects by introducing the distance between two consecutive targets as a secondary factor.  The pan-zoom task involved navigating between two groups of concentric circles: participants started at a high zoom level, zoomed out until the neighboring group became visible, then panned and zoomed until they reached the target group.
  • Results
    • The data from the pan-zoom task strongly supported the first hypothesis, as well as hypotheses 5 and 6.  They were surprised to find that the third hypothesis was not supported: linear gestures turned out to be faster than circular ones.  Finally, hypothesis 7 held as expected, with 1D path guidance the least tiring and free 3D gestures the most tiring.
  • Contents
    • The paper discusses how users might best interact with a large screen by studying several different motions and commands.  They wanted to observe ease of use, causes of fatigue, and how simple or complex the interactions might be.  They proposed several ideas at the beginning to guide their research, and then performed an experiment to highlight the key points.  They found that while most of their hypotheses were supported, they had judged one or two points inaccurately.
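To make hypotheses 3 and 4 concrete: a circular zoom gesture is clutch-free because rotation can continue indefinitely, with each angular increment multiplying the zoom factor.  Here is a minimal sketch of such a mapping; the exponential form and gain constant are assumptions for illustration, not the authors' actual transfer function.

```python
import math

def zoom_from_rotation(zoom, angle_delta_rad, gain=0.25):
    """One full turn scales the view by exp(2*pi*gain), about 4.8x."""
    return zoom * math.exp(gain * angle_delta_rad)

zoom = 1.0
for _ in range(8):                  # eight 45-degree increments = one full turn
    zoom = zoom_from_rotation(zoom, math.pi / 4)
print(round(zoom, 2))               # ~4.81, reached without any clutching
```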
Discussion
    I think this was a very well put together paper.  I feel that the authors did a good job defining their parameters and goals, and then delivering the results thoroughly.  I think it would have been interesting if they had done a little bit more experimentation, maybe trying a few different approaches to get the information they were looking for.  However, overall I was impressed by the research and findings.

Wednesday, October 19, 2011

Paper Reading #21: Human model evaluation in interactive supervised learning

References
Human model evaluation in interactive supervised learning by Rebecca Fiebrink, Perry R. Cook, and Daniel Trueman.  Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios
Rebecca Fiebrink is currently an assistant professor in Computer Science at Princeton University. She holds a PhD from Princeton and was a postdoc for most of 2011 at the University of Washington.
Perry R. Cook is a professor emeritus at Princeton University in Computer Science and the Department of Music.   He is no longer teaching, but still researches, lectures, and makes music.
Daniel Trueman is a musician, primarily with the fiddle and the laptop.   He currently teaches composition at Princeton University.


Summary
  • Hypothesis
    • Because model evaluation plays a special role in interactive machine learning systems, it is important to develop a better understanding of what model criteria are most important to users.
  • Methods
    • The authors performed three studies of people applying supervised learning in their work.  In the first study they led a design process with seven composers to focus on refining the Wekinator.   Participants met regularly to discuss the software in relation to their work and suggest improvements.  In the second study, students were told to use the Wekinator in an assignment focused on supervised learning in interactive music performance systems.   Specifically, they were asked to use an input device to create two gesturally controlled music performance systems.  The third study was a case study completed with a professional musician to build a gesture recognition system for a sensor-equipped cello bow.  The goal of this study was to build a set of gesture classifiers to capture data from the bow and produce musically appropriate labels.
  • Results
    • In the first study participants found that the algorithms used to control the sound were difficult to control in a musically satisfying way using either a GUI or an explicitly controlled sequence.  Unlike the first study, the second and third both made some use of cross-validation.  Users in the second study indicated that they considered high cross-validation accuracy to be indicative of good performance, and made use of it as such (see the cross-validation sketch after this summary).  In the third study, however, it was used more as a quick sanity check.  Participants in all three studies used direct validation much more frequently than cross-validation.  The direct validation criteria broke down into six categories: correctness, cost, decision boundary shape, label confidence and posterior shape, complexity, and unexpectedness.
  • Contents
    • The researchers present work studying how users evaluate and interact with supervised learning systems.  They examine what sorts of criteria are used in the evaluation and present observations of different techniques, such as cross-validation and direct validation.  These evaluations serve both to judge algorithm performance and to improve the trained models, in addition to guiding the creation of more effective training data.
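For a rough feel of the cross-validation accuracy that users in the second study leaned on, here is a generic scikit-learn sketch; the classifier choice and the synthetic stand-ins for gesture features are my assumptions, not the Wekinator's own implementation.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# synthetic stand-ins for gesture feature vectors and their sound labels
X, y = make_classification(n_samples=120, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=10)
print(scores.mean())  # the single number users often read as "model quality"
```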
Discussion
     I think this paper did a very good job of presenting findings and being thorough with the research and methodology.  By my estimation the researchers did accomplish their goal of gathering useful data regarding evaluation of supervised learning systems, and I think that this work will be very beneficial in the future.  I did not find any gaping faults with the paper itself; it presents its purpose, carries out the research and gives the findings, and even discusses potential benefits and uses.

Paper Reading #20 : The Aligned Rank Transform

References
The Aligned Rank Transform for Nonparametric Factorial Analyses Using Only ANOVA Procedures by Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins.  Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios
  • Jacob O. Wobbrock is currently an Associate Professor in the Information School and an Adjunct Associate Professor in the Department of Computer Science & Engineering at the University of Washington. 
  • Leah Findlater is a postdoctoral researcher in The Information School, working with Dr. Jacob Wobbrock. She holds a PhD from the University of British Columbia.
  • Darren Gergle  is an Associate Professor at Northwestern University and has a PhD from Carnegie Mellon University.
  • James J. Higgins is currently a Professor at Kansas State University and holds a PhD from the University of Missouri-Columbia.
Summary
Hypothesis
     The Aligned Rank Transform (ART) is a useful and easily accessible tool for pre-processing data so that they can be examined and manipulated in ways that current nonparametric tests do not allow.

Methods
     The ART procedure consists of 5 steps: 
  1. Computing residuals: for each raw response Y, compute residual = Y - cell mean.
  2. Computing estimated effects for all main and interaction effects: the estimated main effect for factor A at level i is the mean response over rows where A is at level i, minus the grand mean; the estimated A×B interaction effect is the cell mean minus both main-effect means plus the grand mean; and so on for higher-order interactions.
  3. Computing the aligned response Y' = residual + estimated effect.
  4. Assigning averaged ranks Y'' to the aligned responses Y'.
  5. Performing a full-factorial ANOVA on Y'', interpreting only the effect for which the data were aligned.
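As a concrete illustration, here is a minimal pandas sketch of the alignment and ranking for the A×B interaction in a two-factor design; the column names and toy data are mine, and the authors' tooling also covers repeated measures and higher-order designs.

```python
import pandas as pd

def align_and_rank_interaction(df, y, a, b):
    """Align Y for the A x B interaction, then assign averaged ranks."""
    grand = df[y].mean()
    cell = df.groupby([a, b])[y].transform("mean")
    mean_a = df.groupby(a)[y].transform("mean")
    mean_b = df.groupby(b)[y].transform("mean")
    residual = df[y] - cell                  # step 1: residuals
    effect = cell - mean_a - mean_b + grand  # step 2: estimated A x B effect
    aligned = residual + effect              # step 3: aligned response Y'
    return aligned.rank()                    # step 4: averaged ranks Y''

df = pd.DataFrame({"A": ["a1", "a1", "a2", "a2"] * 3,
                   "B": ["b1", "b2", "b1", "b2"] * 3,
                   "Y": [3, 5, 2, 9, 4, 6, 1, 8, 3, 7, 2, 9]})
df["Y2"] = align_and_rank_interaction(df, "Y", "A", "B")
# step 5: run a full-factorial ANOVA on Y2, interpreting only the A x B effect
```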
Results
     The paper re-examined three different studies using the ART procedure to demonstrate its usefulness.  The first case showed how the ART can uncover interaction effects that may not be seen with Friedman tests.  The second case showed how the ART can free analysts from the distributional assumptions of ANOVA.  The last case demonstrated the nonparametric testing of repeated measures data.  
Contents
     The authors present their Aligned Rank Transform tool, which is useful for the nonparametric analysis of factorial experiments and makes use of the familiar F-test.  They discuss the exact process in detail, then go on to show three examples of where it could prove useful and effective with real data.


Discussion
I am not particularly qualified to comment much on the usefulness or ingenuity of this project, but it seems to me like the authors did a fine job of creating a tool or technique to handle data that was previously more cumbersome and less obvious.  I cannot find any faults with the paper, and I think that they did a good job of selecting several different test cases to highlight particular areas of usefulness with the ART.  

Paper Reading #19 : Reflexivity in Digital Anthropology

References
     Reflexivity in Digital Anthropology by Jennifer A. Rode.  Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bio
     Jennifer Rode is currently an Assistant Professor at Drexel's School of Information in Pennsylvania.  She is also a fellow in Digital Anthropology at University College London.  She holds her PhD from the University of California, Irvine.

Summary
  • Hypothesis - Rode believes that digital anthropologists can contribute to the field of HCI by writing reflexive ethnographies, an approach distinct from other, more positivist ones.
  • Methods  -  This paper was more of a discussion than an actual research and development project, so the author did not have any methods to present.   
  • Results  -   The author posits that digital anthropologists are not studying technology, but rather studying in the context of technology.  She also clarifies some of the traditional approaches to ethnographic study, namely Positivist versus Reflexive.  In addition to the approaches, she describes some of the writing styles, specifically Realistic, Confessional, and Impressionistic. 
    • Positivist: Data is collected, studied, and tested with the aim of producing an unambiguous result.  
    • Reflexivity:  According to Burawoy, reflexivity embraces intervention as an opportunity to gather data, it aims to understand how the data gathering impacts the data itself, and reflexive practitioners look for patterns and attempt to draw out theories. 
    • Realistic: characterized by experiential author(ity), its typical forms, the native's point of view, and interpretive omnipotence.
    • Confessional: broadly provides a written form for the ethnographer to engage with the nagging doubts surrounding the study and discuss them textually, with the aim of demystifying the fieldwork process.
    • Impressionistic: based on dramatic recall and a well told story.
  • Content  -  The author shows how ethnography has various forms and orientations, and how reflexivity can contribute to design and theory in HCI.  She also describes three forms of anthropological writing and the key elements of each.  Finally, she describes how ethnography is actually used in the design process of computer-human interaction.
Discussion
This paper, while surely valuable in its field, is extremely hard to read.  There is a lot of information and it seems to me that it could have all been summarized much more succinctly.  I don't know who she expects to actually read the entire paper all the way through and understand everything that she is going on about, because it is simply too much.  The paper serves as a good overall synopsis of a lot of different approaches in the ethnographic realm, but I feel it has virtually no relevance outside of that.

Monday, October 10, 2011

Paper Reading #18: Biofeedback Game Design



References
Biofeedback Game Design: Using Direct and Indirect Physiological Control to Enhance Game Interaction by Lennart E. Nacke, Michael Kalyn, Calvin Lough, and Regan L. Mandryk.  Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios

  • Lennart E. Nacke is currently an Assistant Professor for HCI and Game Science at the Faculty of Business and Information Technology at UOIT.  He holds a PhD in game development.
  • Michael Kalyn is currently a graduate student in Computer Engineering at the University of Saskatchewan.  He spent the summer working for Dr. Mandryk in areas related to interfacing sensors and affective feedback.
  • Calvin Lough is currently a student at the University of Saskatchewan.
  • Regan L. Mandryk is currently an Assistant Professor in the Interaction Lab in the Department of Computer Science at the University of Saskatchewan.  

Summary
Hypothesis
  • The authors propose a system of direct and indirect physiological sensor input to augment game control.

Methods

  • The researchers wanted to answer two main questions: (1) How do users respond when physiological sensors are used to augment rather than replace game controllers?  (2) Which types of physiological sensors (indirect versus direct) work best for which in-game tasks?  They designed a shooter game that uses a traditional game controller as the primary input and augmented it with physiological sensors.  In the actual study, participants played with three combinations of physiological and traditional input.  Two of the game conditions mapped two direct and two indirect sensors to the four game mechanics, while the third condition used no physiological input.  The sensors used for direct control were respiration (RESP), EMG on the leg, and skin temperature (TEMP); the indirectly controlled sensors were galvanic skin response (GSR) and EKG.  All participants played all conditions and filled out a questionnaire, and they were given instructions on how to control the physiological sensors.  (A toy sketch of one such sensor-to-mechanic mapping follows.)
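As an illustration of indirect physiological control in this spirit, the sketch below maps a smoothed GSR reading onto flamethrower length; the sensor range, smoothing constant, and linear mapping are all assumptions for illustration, not the authors' calibration.

```python
def smooth(samples, alpha=0.1):
    """Exponential moving average to damp sensor noise."""
    value = samples[0]
    for s in samples[1:]:
        value = alpha * s + (1 - alpha) * value
    return value

def gsr_to_flame_length(gsr_us, lo=2.0, hi=12.0, min_len=1.0, max_len=5.0):
    """Linearly map arousal (GSR, in microsiemens) onto flame length."""
    t = max(0.0, min(1.0, (gsr_us - lo) / (hi - lo)))  # normalize and clamp
    return min_len + t * (max_len - min_len)

readings = [4.2, 4.5, 5.1, 6.0, 6.4]  # fake GSR samples in microsiemens
print(gsr_to_flame_length(smooth(readings)))
```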


Results


  • The participants seemed to prefer controls that matched a natural input, such as flexing the legs for more jumping power.  Overall, the subjects seemed to appreciate the added level of involvement, but there was some concern that it made gameplay more complicated.  When asked about the novelty, users agreed that it was a very novel idea and that some of the controls had a bit of a learning curve; once the curve was conquered, however, the overall experience was more rewarding.  Regarding preferred sensors: for target size increases and flamethrower length, players preferred RESP to GSR; for speed and jump height, they preferred EMG to EKG; and for controlling the weather and the speed of the yeti, players preferred TEMP to EKG. 


Contents

  • This article delves into an area of gaming that has plenty of room to be explored, namely physiological interaction.  The research focuses on learning how people react to different types of sensors and which kinds are preferable in given situations.  It also explores the gap between traditional controls and learning to adapt to the new sensing controls.  The overall feedback was positive, but some areas proved a little unintuitive or difficult to pick up.

Discussion
I am very excited about this direction in the world of gaming and I think there will be a great market for it once some of the details are hammered out.  As for the paper itself, I think it did a reasonably good job of laying some foundation for future work, but I think they could have gone a little bit further.  For example, I would have liked to have seen a broader variety of sensors and perhaps a more diverse test group, although technically their target audience would probably (initially) be similar to the actual participants.

Wednesday, October 5, 2011

Paper Reading #17 : Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment

References
Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment by Andrew Raij, Santosh Kumar, Animikh Ghosh, and Mani Srivastava.   Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.


Author Bios

  • Andrew Raij is a Post-Doc Fellow in the Wireless Sensors and Mobile Ad Hoc Networks Lab at the University of Memphis.
  • Santosh Kumar is currently an associate professor at the University of Memphis and leads the WiSe MANet Lab.
  • Animikh Ghosh is currently a Junior Research Associate at Infosys Labs in India and spent time as a researcher in the WiSeMANet Lab at the University of Memphis.
  • Mani Srivastava is currently a professor in the Electrical Engineering Dept. and Computer Science Dept. at UCLA. 

Summary
Hypothesis
With wearable sensors becoming more popular, there is increasing concern that information about potentially private behaviors will become more accessible and more easily abused.
Methods
The researchers divided participants into two groups.  One group was monitored for several days and had some basic information recorded; the other acted as a control group with no such monitoring.  Both groups filled out a survey before the study began to indicate their feelings on certain aspects of potentially private behavior.  The monitored group was then shown the results of the observation period, along with some of the conclusions drawn from those results, and asked to fill out another survey with their new perspective.
Results
The researchers found that people were least concerned about privacy when the data was not directly and obviously their own.  The group that was not monitored expressed a lower level of concern than the group that was monitored.  Also, the group that was monitored expressed an increased level of concern after the observation period had ended and they were able to see the results.  The researchers also noted that knowing who would have access to the data made a significant difference in the amount that people would care about privacy.  People tended to be more worried about data being available to a larger number of people, or the public in general.  Participants also were more concerned when a timeline or schedule of behavior was established.  Overall, the two most important areas of concern to people were those involving stress and conversation periods.
Contents
After establishing that there is increasing need for privacy awareness, the authors of the paper performed an experiment to find out how much people actually care about what sort of information they might be providing, even when the information was collected just through basic sensors.  


Discussion
This paper is a little bit different from the others we have been reading as it is primarily a research project about people's reactions to a current issue rather than a specific technology.  I think that the researchers did manage to achieve the goals that they had outlined in the beginning, but there is a lot of room for follow up in this area.  The fact is, as technology continues to advance so will the capacity for its abuse.  I think what we will find in the future is that people simply must maintain constant vigilance if they really want to protect their privacy.

Paper Reading #16: Classroom-Based Assistive Technology

References
Classroom-Based Assistive Technology:  Collective Use of Interactive Visual Schedules by Students with Autism by Meg Cramer, Sen H. Hirano, Monica Tentori, Michael T. Yeganyan, and Gillian R. Hayes.  Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems.
Author Bios
  • Meg Cramer and Sen Hirano are both currently graduate students in Informatics at UC Irvine in the School of Information and Computer Science.
  • Monica Tentori is currently an assistant professor in computer science at UABC in Mexico, and is a post-doc scholar at UC Irvine.
  • Michael T. Yeganyan is an Informatics STAR Group researcher at UC Irvine and holds an MS in Informatics. 
  • Gillian R. Hayes is currently an assistant professor in Informatics in the School of Information and Computer Science at UC Irvine.  She also directs the STAR group.
Summary
Hypothesis
The vSked system can offer an improved interactive and collaborative level of assistance over current technologies aimed at aiding students with autism.
Methods
The testing and observations were spread over three deployments of vSked.  The teachers and aides were interviewed and asked for comments regarding the system.  The students were observed but not directly interviewed, as they all demonstrated little to no verbal communication skill.  Assessments were based on several points of interest, such as the level of consistency and predictability in the schedule, student anxiety, and teacher awareness of behavior.  All field notes, interviews, images, and videos were inspected using a mixed-methods approach.  Researchers analyzed the data for evidence that vSked was supporting student and teacher needs, and then examined the data in detail for emergent themes.
Results
Teachers noted that even during the first few days of use students would progress through activities with much less need for prompting from the instructor.  Teachers noted that the addition of photorealistic images seemed to help the students understand, and that the kids also seemed to be much more comfortable with the new calendar system.  The schedules are automatically updated and show the entire day's activities, which helped the students focus and reduced distractions.  The vSked system also seemed to help facilitate students' ability to demonstrate their knowledge, as one teacher noted surprise at how well a student was able to answer questions that had been assumed to be beyond understanding.  Despite the push for independence, the teachers were comfortable with the prompting given by the system.  Overall, feedback from teachers and aides was extremely positive.
Contents
The paper asserts that the vSked system has the potential to offer an unprecedented level of cooperative learning for autistic students.  It goes on to describe the results of several experimental deployments over a total of about 5 weeks in a classroom.  The teachers had very positive feedback, and the students seemed to enjoy it as well.  The authors note that it may have a few shortcomings, such as being somewhat inflexible when it comes to ad hoc changes, but that there is a lot of potential for new features and developments.
Discussion
I am very impressed with this paper.  Instead of discussing a brand new technology, it focuses instead on applying already-known techniques and tools to improve quality of life.  It is really more about how a new technology can fit into people's lives rather than a simple statement of "here is this cool thing you can do, but I don't know if people will ever really use it".  It seems to me that this contributes more on a humanitarian and human interaction level than as a clever new technology, but the interaction between human and machine is, of course, very much in need of exploration and improvement.  

Monday, October 3, 2011

Paper Reading #15: Madgets: Actuating Widgets on Interactive Tabletops

References
Madgets: Actuating Widgets on Interactive Tabletops by Malte Weiss, Florian Schwarz, Simon Jakubowski, and Jan Borchers.  Published in UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.


Author Bios

  • Malte Weiss is currently a PhD student at the Media Computing Group of RWTH Aachen University.  His research focuses on interactive surfaces and tangible user interfaces.
  • Florian Schwarz is currently an assistant professor of linguistics at the University of Pennsylvania.  He holds a PhD in Linguistics from the University of Massachusetts.
  • Simon Jakubowski is currently a Research Scientist at AlphaFix and spent time as a research scientist at the University of Texas Medical School in Houston.  
  • Jan Borchers is currently a professor of computer science at RWTH Aachen University.  He holds a PhD in Computer Science from Darmstadt University of Technology. 

Summary
Hypothesis

  • The interactive tabletop that uses "Madgets" improves on existing technologies because it enables interaction with and actuation of complex physical controls while remaining easy to control and low-cost, and it requires no built-in electronics or power sources for actuation or tracking.

Methods

  • The main design goals for Madgets were flexibility, light weight, and ease of construction.  The display is a 24" monitor, and actuation is controlled by an array of electromagnets and an Arduino board.  Sensing is done through visual tracking, and the widget controls are made of transparent acrylic.  The algorithm that controls the movement, orientation, and configuration of complex controls is aware of the physical properties of each control and computes the path leading from its current position to the target position (a simplified sketch of this idea follows).
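Here is a heavily simplified sketch of that idea: step a widget along the straight line to its target while energizing the electromagnet nearest to it.  The magnet grid, coordinates, and one-magnet-at-a-time policy are illustrative assumptions, not the authors' actual algorithm.

```python
import math

GRID = [(x, y) for x in range(10) for y in range(10)]  # magnet center positions

def nearest_magnet(pos):
    return min(GRID, key=lambda m: math.dist(m, pos))

def step_toward(pos, target, step=0.5):
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return target
    return (pos[0] + step * dx / dist, pos[1] + step * dy / dist)

widget, target = (1.0, 1.0), (7.0, 4.0)
while widget != target:
    widget = step_toward(widget, target)
    print("energize magnet at", nearest_magnet(widget))
```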

Results

  • The system provides a platform for rapid prototyping of medium-fidelity prototypes.  Creating a new widget can take less than an hour, tracking and actuation are enabled by simply gluing the respective markers to the control, and registering new controls takes about two hours.  The authors suggest performing more iterations on medium-fidelity prototypes, which makes hardware design cheaper and easier.

Contents

  • After a brief explanation of why this approach is beneficial and unique compared to existing technology, the paper launches into a full description of the ins and outs of the actual system.  It describes the physical makeup, the sensing controls, the motion algorithm, and the widgets' actuation and interaction with their environment.



Discussion
I am convinced that the researchers put together an effective system, but it seems like they fell just a little short of making it a truly rapid prototyping kit.  After all, it takes several hours to get a new widget fully created and integrated into the system.  I was also rather disappointed by the lack of user study and feedback that has been so characteristic of the other papers we've read thus far.  I hope that in the future, in addition to the improvements outlined in the paper, they will focus a little more on how the average user might react.

Sunday, October 2, 2011

Paper Reading #14 : TeslaTouch: Electrovibration for Touch Surfaces

References
TeslaTouch: Electrovibration for Touch Surfaces by Olivier Bau, Ivan Poupyrev, Ali Israr, and Chris Harrison.  Published in UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.


Author Bios


  • Olivier Bau holds a PhD in Computer Science and is currently a PostDoctoral Research Scientist at Disney Research in Pittsburgh.
  • Ivan Poupyrev is also currently a Research Scientist at Walt Disney Research, and holds his PhD from Hiroshima University in Japan.
  • Ali Israr is currently part of the Interaction Design team at Disney Research, and his research primarily focuses on haptics.  He holds a PhD in Mechanics from Purdue.
  • Chris Harrison is a fifth-year PhD student in the HCI Institute at Carnegie Mellon University.  He is also a Microsoft Research PhD Fellow.


Summary
Hypothesis
The principle of electrovibration offers an advantageous alternative to current tactile interfaces for touch surfaces.
Methods
In the subjective evaluation, ten participants were exposed to four TeslaTouch textures.  For each texture, participants answered questions and described the sensations.  To determine detection and discrimination thresholds, ten participants spent about 15 minutes each on the detection-threshold task, and seven participants spent about 10 minutes each on the frequency/amplitude discrimination task.  The participants were exposed to varying frequencies and amplitudes of the driving signal and asked to identify when they were detectable.  (A sketch of one common threshold-estimation procedure follows.)
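For a flavor of how such thresholds are often estimated, here is a sketch of a generic 1-up/2-down adaptive staircase; I cannot confirm that the authors used exactly this procedure, so treat it as one common approach rather than their method.

```python
def staircase(respond, start=50.0, step=5.0, reversals_needed=8):
    """respond(level) -> True if the stimulus was detected at that level."""
    level, streak, last_dir, reversals = start, 0, None, []
    while len(reversals) < reversals_needed:
        if respond(level):
            streak += 1
            if streak == 2:                  # two detections in a row -> harder
                streak = 0
                if last_dir == "up":
                    reversals.append(level)  # direction change: record level
                last_dir = "down"
                level -= step
        else:                                # a miss -> easier
            streak = 0
            if last_dir == "down":
                reversals.append(level)
            last_dir = "up"
            level += step
    return sum(reversals) / len(reversals)   # threshold estimate

# toy observer with a true threshold at 30 (arbitrary amplitude units)
print(staircase(lambda level: level >= 30))
```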
Results
From the subjective evaluation, higher-frequency stimuli were perceived as smoother than lower frequencies, with descriptions like "paper" versus "wood".  They found that the effect of amplitude depended on the underlying frequency: increasing the amplitude at high frequencies increased perceived smoothness, whereas at low frequencies it increased perceived stickiness.  In determining the detection and amplitude discrimination thresholds, they found that frequency had a very significant effect on the detection threshold, but seemed to have little effect on the just-noticeable difference (JND) in amplitude.
Contents
This paper discusses the development, testing, and application of a different approach to generating tactile feedback on touch interfaces.  The authors discuss the use of electrovibration, which essentially uses very low levels of current to stimulate the fingertips.  They tested different combinations of amplitude and frequency to identify the specific sensations people associate with them, and also tested exactly where the cutoff levels for detectability lie.  Following the testing, they discussed the results and gave suggestions as to how this might be applied, such as through tactile information layers.
Discussion
I was impressed with the level of completeness provided in this paper.  They discuss every part of the process from beginning to end, from potential benefits to potential uses.  I feel that this shows a high level of awareness of the paper's place in the tech ring.  As for the topic itself, this sort of tactile response is almost certain to become more prevalent in the next few years and this particular approach does seem to show a number of advantages over what is currently available.  I don't have any complaints about the paper, and I think it did a pretty good job of covering all major points of interest.