Title: Evaluating the design of inclusive interfaces by simulation
Authors: Pradipta Biswas and Peter Robinson
Venue: IUI '10: Proceedings of the 15th International Conference on Intelligent User Interfaces
Comments
http://detentionblockaa32.blogspot.com/2011/04/paper-reading-23-natural-language.html
http://vincehci.blogspot.com/2011/04/paper-reading-23-automatic-warning-cues.html
Summary
This paper describes a proposed simulator for the design of assistive interfaces. The simulator can help predict likely interaction patterns when a task is undertaken with different input devices under constraining conditions, such as physical disabilities. While the exact prediction method is not discussed in detail in the paper, the authors describe the results of an experiment conducted with 7 participants. Elements from figures 1 and 2 were isolated and displayed to each participant, who was then asked to click on the same icon when it was displayed within a group.
The average relative error in predicted response time was 16% with a standard deviation of 54%. In 10% of the trials the relative error exceeded 100%; removing these outliers reduced the average relative error to 6% with a standard deviation of 42%.
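To make the error analysis concrete, here is a minimal Python sketch of how such statistics might be computed. The predicted and observed response times below are made-up sample data for illustration, not values from the paper, and the 100% cutoff simply mirrors the outlier criterion described above.

# Hypothetical sketch of the error analysis described above.
# predicted and observed response times (in ms) are made-up sample data.
predicted = [410, 520, 980, 300, 450, 700, 615]
observed  = [400, 610, 430, 310, 500, 690, 600]

# Relative error of each prediction against the observed response time.
rel_errors = [abs(p - o) / o for p, o in zip(predicted, observed)]

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"mean relative error: {mean(rel_errors):.0%}, std: {std(rel_errors):.0%}")

# Drop trials whose relative error exceeds 100%, as the paper does,
# then recompute the statistics on the remaining trials.
kept = [e for e in rel_errors if e <= 1.0]
print(f"after outlier removal: mean {mean(kept):.0%}, std {std(kept):.0%}")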
Discussion
I didn't like this paper very much. The general concept is interesting and somewhat unprecedented, as the authors are essentially attempting to quantify interface design, but in my opinion the results were very poor. The sample size was too small to draw meaningful conclusions, and the paper states that all participants were trained for the experiment. The authors also seemed to consider it acceptable to remove the 10% of data points that constituted outliers from the analysis. If their system were used in interface design, a 10% chance that a prediction simply fails altogether is extremely high and not something that can be discounted.
I agree that the authors did a poor job on this paper. It feels like so much more work should be done on this.