Tuesday, April 13, 2010

Automatic Evaluation of Assistive Interfaces: IUI 2008

Comments
not yet...

Summary
In Automatic Evaluation of Assistive Interfaces, the researchers took existing HCI user modeling programs and extended them to simulate the actions of disabled users. User modeling programs are used to evaluate interfaces by providing a simulated "user" that performs optimally in the interface, but no existing model simulates a disabled user of a system. The researchers presented a new model that simulates a disabled user as well as a typical user, so that an "assistive interface" (one that helps disabled users work with the system) can be evaluated without having to recruit lots of disabled people to test it.

The system was built to simulate many different things:
  • Simulating the Practice Phase: Assuming a user who had no idea how to use the system and could not read the buttons but knew where they were (like a blind user tabbing through controls), the model could try out different options and learn from the feedback.
  • Visual Simulation: Using keyboard actions and mouse positions, the model tracks the "visual location" of the user's eyes, so the interface can change its interaction to assist the user.
  • Motor Simulation: Using a variety of specified disabilities, the model simulates the time it would take a user to select an option or use the mouse, allowing the interface to be evaluated and new interaction paradigms to be developed.
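The motor-simulation idea above can be sketched with Fitts's law, which predicts pointing time from target distance and size. The user profiles and coefficient values below are my own illustrative assumptions, not numbers from the paper:

```python
# Sketch of a motor simulation: estimate pointing time with Fitts's law,
# scaling the coefficients per (hypothetical) user profile. The profile
# names and a/b values are made up for illustration.
import math

PROFILES = {
    "able-bodied": {"a": 0.1, "b": 0.15},
    "motor-impaired": {"a": 0.3, "b": 0.45},  # slower, less precise movements
}

def pointing_time(distance, width, profile):
    """Fitts's law: T = a + b * log2(D / W + 1)."""
    p = PROFILES[profile]
    return p["a"] + p["b"] * math.log2(distance / width + 1)

# A distant, small button penalizes the motor-impaired profile the most,
# which is exactly the kind of difference an evaluator wants surfaced.
for who in PROFILES:
    t = pointing_time(distance=600, width=20, profile=who)
    print(f"{who}: {t:.2f} s")
```

Running many simulated selections like this against two interface layouts would let a developer compare their predicted difficulty for different users without a participant pool.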

In general, the model the researchers developed was able to accurately simulate a variety of disabilities in order to evaluate assistive interfaces without the need for lots of participants.

Discussion
While this seems useful for developers who want to save time developing applications, it seems that we would still need to do user studies with actual disabled participants and get their input on how they would like an interface to work. In general, models are approximations of reality, and it doesn't seem that you could produce a model accurate enough to exactly simulate a disabled person (except perhaps a model that navigates only through audible feedback).

Wednesday, April 7, 2010

Opening Skinner's Box: Great Psychological Experiments of the Twentieth Century


Comments
Jill


Summary
In Opening Skinner's Box: Great Psychological Experiments of the Twentieth Century, Lauren Slater details ten of the greatest psychological experiments of this past century that have shaped the way we think about the human mind and human behavior. She narrates the stories of these researchers and their work as if we are reading a story or watching a movie. The researchers she describes are:
  • B. F. Skinner: Skinner experimented with rats and conditioning, and found that the mind is extremely receptive to rewards, which strengthen conditioning, and that the mind is not as receptive to punishment, which weakens conditioning.
  • Stanley Milgram: Milgram designed an experiment in which the participant was told to shock another person with increasing voltage, up to a supposedly lethal level, and found that 65% of participants went all the way. His experiments taught us a lot about humans' obedience to authority.
  • David Rosenhan: Rosenhan and some helpers admitted themselves into mental hospitals claiming they heard a voice that said "thud". They found that even though they were perfectly sane, and said so after being admitted, they were still kept in the hospital for a long time, and psychiatrists would swear they were psychotic. This showed the subjectivity of psychiatric diagnosis.
  • John Darley and Bibb Latane: Darley and Latane found through their experiments that when a crisis happened and someone needed help, a bystander who perceived that lots of other people were present would not help; such bystanders would wait a long time and never truly decide whether to help. But when they thought they were the only person there to help, they would help almost immediately.
  • Leon Festinger: Festinger studied the way that people will change their ideas and beliefs based on their actions, primarily studying the way that cult members reacted whenever the "day of judgement" and the end of the world did not come as they had predicted.
  • Harry Harlow: Harlow studied how infant monkeys came to be attached to a fake "mother" that had soft cloths on it, versus a mother that was hard and metal but provided food. He found that love does not have to do with providing resources and food, but instead has an aspect of touch as well as motion.
  • Bruce Alexander: To study the nature of addiction, Alexander placed some rats in a pleasant, clean environment and others in solitary, confined conditions, and offered every rat both morphine-laced water and regular water. The rats in the bad conditions preferred the morphine, while the rats in the nice environment avoided it, suggesting that addiction is not a physical dependency but instead a result of circumstance.
  • Elizabeth Loftus: Loftus showed that memories of our past quickly degrade and cannot be trusted. She helped participants in her experiments "remember" fake memories of being lost in a mall, and the participants became almost completely sure that the fake memory was real.
  • Eric Kandel: Kandel showed that memory is strengthened by increasing the strength of connections between neurons, and that a specific protein called CREB helps that strengthening.
  • Antonio Moniz: Moniz pioneered psychosurgery, specifically the lobotomy, to treat patients with depression or psychosis. Though much refined since, versions of his techniques are used today.
In all, Slater presented some great examples of important psychological experiments that have shaped and changed the field as practiced today.

Discussion
I actually really liked this book because of the way that Slater wrote. She turned all of these potentially dry experiments into interesting stories that give great insight into the human mind. Especially interesting is the fact that while many of these discoveries are of grave import, and should have changed the field completely, psychologists today still widely discredit them and continue to believe otherwise. One such example is the addiction experiment, as kids today are still taught that drug addiction is physical.

Sunday, March 28, 2010

The Inmates are Running the Asylum: Chapters 8 - 14


Comments

Summary
In the second half of The Inmates are Running the Asylum, Alan Cooper starts giving examples of ways to fix the interaction design problems in software development that he presented in the first half of the book. His most important points are below:
  • Persona Design: Interaction designers focus on developing not for all possible users but instead for a few specific "personas", or model users. These personas are specified as much as possible; each one has a name, background, occupation, and reasons why they would be using the software. Designing for personas allows the programmer to only develop features needed and make the program easier to use and address the users' needs.
  • Designing for Goals: Programs should not be designed solely to allow a certain task to be performed; they should be designed to meet a user's practical goals. This means goals that they want to accomplish on a daily basis (while ignoring the edge cases), and also personal goals that they want followed when using the program (for example, not being made to feel stupid).
  • Interaction Design First: Interaction design must happen before programming begins, not be tacked on at the end. And this does not mean just simple user interface design; interaction design tackles the deeper issues of how the program interacts with the user and the choices the user has to make.
  • Give the Designer the Responsibility: The interaction designer must have "skin in the game" and be given all the responsibility of the program. They design the program, create the specifications, and give it to the programmers. Since the programmers aren't responsible for the interaction with the user and the success or failure of the product, they will follow the design document much more closely.
In general, Cooper says that design must happen first, and must be given as much time and resources as needed. If you do this, then the time and money spent during development and programming of the product will be reduced, and your product will be much more successful. He illustrated these concepts using many examples from his design experience with his own consulting firm.

Discussion
I liked the second half of the book much more because he stopped accusing programmers of being so horrible and actually gave sensible ideas on how to fix the interaction design problem. I especially liked the idea of designing for a specific persona - it definitely seems that you will have a much more successful product if you design for and completely satisfy one type of person, and it follows that you will also make people similar to them happy as well. I liked the examples he gave from his design experience, because they accurately illustrated his concepts, but sometimes it got to be a little much and sounded like he was trying to convince us to call him up to come solve our design problems. Again, this book was written in 1999, and a lot of the issues (with Microsoft especially) have been fixed; today we face many different interaction problems with new technology such as touch.

Wednesday, March 10, 2010

Towards More Paper-like Input: Flexible Input Devices for Foldable Interaction Styles

Comments
Kerry

Summary
In Towards More Paper-like Input: Flexible Input Devices for Foldable Interaction Styles, the researchers (from Queen's University) present a new form of input: foldable interactive sheets that the user deforms to produce input to a device. Deformability of the interactive sheet can directly mirror the actions available in the interface, and it mimics the physics of real sheets of paper. This Foldable Input Device (FID) looks and behaves like a mouse pad, and consists of cheap sheets of cardstock and paper with IR retro-reflectors, tracked through an IR webcam. The computer connected to the sensor camera runs C++ and OpenGL software so that deformations of the FID can drive real-time graphics manipulations on screen.
Interactions with the FID include swiping your thumb across the device, scooping, folding down the top corner, folding down the middle, squeezing, shaking, or leafing (like leafing through the pages of a book). These are shown in the picture below:

The applications for this type of input system are numerous. You could navigate around a desktop by sliding the FID around on a table, select items by hovering over them and then making a scooping motion with the sheet, leaf to browse through a list of items, shake to re-sort a list, or zoom in and out by bringing the FID closer to or farther from a display. On the screen, a transparent graphic hovers over the application, representing the current shape and size of the FID so that the user can see what their actions do in relation to the application.
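To make the tracking step concrete, here is a minimal sketch (my own assumption, not the authors' code) of how corner-reflector positions from the IR camera might be turned into a deformation event. The four-marker layout and the threshold are illustrative:

```python
# Classify a sheet's state from the tracked positions of four corner
# IR reflectors. "squeeze" is detected when the corners have been pulled
# together horizontally; the 0.6 threshold is an arbitrary assumption.
def classify(markers, flat_width=200.0):
    """markers: dict of corner name -> (x, y) in camera pixels."""
    xs = [p[0] for p in markers.values()]
    width = max(xs) - min(xs)
    if width < 0.6 * flat_width:
        return "squeeze"          # corners pulled together
    return "flat"

flat = {"tl": (0, 0), "tr": (200, 0), "bl": (0, 150), "br": (200, 150)}
bent = {"tl": (40, 10), "tr": (110, 5), "bl": (45, 140), "br": (115, 145)}
print(classify(flat), classify(bent))  # → flat squeeze
```

A real FID recognizer would track many more reflectors and distinguish scoops, folds, and leafing, but the pipeline shape is the same: marker positions in, a discrete deformation event out.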

Discussion
This interface seems to follow the endless list of new input devices that completely rethink the way we interface with computers and devices. The problem is that I fail to see how easy or intuitive it would be to move a sheet of paper around and use it to select things on the screen, when I can just as easily use a mouse to select an area. The main problem with these new devices is that the general public must learn to not rely on physical feedback (such as physical clicks and the noise associated), and learn to use these other devices. I know that touch-screen interfaces are probably the future of computer interaction, but as of right now, I do not find it intuitive because I am not clicking and using a physical device. It will take some getting used to if we move all interaction to touch-screen.

Tuesday, March 9, 2010

OctoPocus: A Dynamic Guide for Learning Gesture-Based Command Sets

Comments
Nate

Summary
In OctoPocus: A Dynamic Guide for Learning Gesture-Based Command Sets, Olivier Bau and Wendy E. Mackay present OctoPocus, a dynamic guide that combines on-screen feedback to help users learn new sets of gestures for controlling their devices. It can be applied to many different single-stroke gesture sets, and helps users learn a set smoothly with as little effort and time as possible. Their goal was not to make the gesture-recognition algorithm better, but instead to teach users to perform the gestures correctly so that any recognition algorithm can be used.

The program focuses on "dynamic guides" to help the user learn the gestures. These consist of feedforward information, which shows the user the current set of options and what each gesture should look like if completed correctly, and feedback, which shows how well the current gesture has been recognized. OctoPocus only appears if the user hesitates, so that expert users can continue to work without being interrupted by an explanation interface. When a user hesitates, the system reveals each gesture's intended path, along with feedback on how well the user has performed the gesture so far. If more than one gesture shares the same initial stroke, the system shows all the possible options from the current point in the gesture, as shown in the picture below.
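The feedforward idea can be illustrated with a toy prefix matcher (a deliberate simplification of whatever matching OctoPocus actually uses): score each gesture template against the stroke drawn so far, and keep showing only the guides that still plausibly match. The gesture names, templates, and threshold below are made up:

```python
# Toy feedforward: which gesture guides should stay on screen, given the
# partial stroke the user has drawn? Templates are short point lists.
import math

TEMPLATES = {
    "cut":   [(0, 0), (1, 1), (2, 2), (3, 3)],   # diagonal stroke
    "copy":  [(0, 0), (1, 0), (2, 0), (3, 0)],   # straight right
    "paste": [(0, 0), (0, 1), (0, 2), (0, 3)],   # straight down
}

def prefix_error(stroke, template):
    """Mean distance between the stroke and the template's prefix."""
    n = min(len(stroke), len(template))
    return sum(math.dist(stroke[i], template[i]) for i in range(n)) / n

def candidates(stroke, threshold=0.5):
    """Gestures whose guides would remain visible."""
    return [name for name, t in TEMPLATES.items()
            if prefix_error(stroke, t) <= threshold]

# After moving mostly rightward, only "copy" survives; the guides for
# "cut" and "paste" would fade out.
print(candidates([(0, 0), (1, 0.1), (2, 0.1)]))  # → ['copy']
```

The real system additionally renders each surviving template's remaining path from the pen's current position, which is the part that teaches the user where to go next.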

After doing user studies with the system, the researchers found that users liked the system and were able to learn and execute gesture-based commands much more quickly than other help systems.

Discussion

I thought this seemed a little out of place, because I can't think of any current gesture-based systems where you make gestures to choose options. While touch-screen phones have gestures to zoom in or out or move menus around, they usually have dedicated on-screen buttons to execute commands. But since input systems seem to be heading toward touch in the future, we may well need help systems like this for learning complicated gestures. I especially liked that experts can perform the gestures quickly without being affected by the help system, because I know that if it always showed up I would get tired of it fast!

Edge-Respecting Brushes

Comments

Summary
In Edge-Respecting Brushes, Dan R. Olsen Jr. and Mitchell K. Harris outline a new form of paint brush for painting programs (such as Adobe Photoshop) that is edge-respecting; that is, it identifies and paints around or along the edges of an existing image. The algorithm combines intelligent selection with applying effects to an image, allowing the user to paint around or over objects. For example, in the picture below, the user can use the edge-respecting brush to paint only the skin of a person in an image, or paint the background of a portrait another color, without having to go through the tedious process of selecting only the region they want to paint.


This brush works more effectively than the traditional paint-can fill, which fills right up to pixel boundaries and can leak into other parts of the picture through small breaks in an outline. The edge-respecting brush tolerates breaks in lines and fills only the part the user wants to paint. This lets users quickly fill areas in a cartoon image, for example, or change the color of an object in a photo quickly and easily. The brushes also extend to non-painting tools, such as brushes that blur, lighten, or darken an area. When the user clicks the mouse, the algorithm fills outward using Dijkstra's algorithm (much like a normal flood fill), but stops expanding to a neighboring pixel whenever the accumulated color difference exceeds a given threshold. A normal flood fill, by contrast, terminates at the first pixel that differs from the clicked pixel, even by an unnoticeable amount of color. When users tried the new brush, they felt it was much easier to use, quicker, less tedious, and more intuitive than the traditional select-then-fill workflow.
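A rough sketch of that thresholded fill, as I understand it (my own simplification, not the authors' implementation): expand outward from the click, accumulating color difference along the cheapest path, Dijkstra-style, and stop where the accumulated cost passes the threshold, so faint noise stays inside the region while strong edges block the fill:

```python
# Edge-respecting fill on a grayscale image: Dijkstra expansion where
# the edge weight between adjacent pixels is their color difference.
import heapq

def edge_respecting_fill(img, start, threshold):
    """img: 2D list of grayscale values; returns the set of filled pixels."""
    h, w = len(img), len(img[0])
    best = {start: 0.0}                 # cheapest accumulated cost per pixel
    heap = [(0.0, start)]
    painted = set()
    while heap:
        cost, (r, c) = heapq.heappop(heap)
        if cost > best.get((r, c), float("inf")):
            continue                    # stale heap entry
        painted.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                ncost = cost + abs(img[nr][nc] - img[r][c])
                if ncost <= threshold and ncost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ncost
                    heapq.heappush(heap, (ncost, (nr, nc)))
    return painted

# Slight gray noise (5) stays inside the fill; the dark edge (200) blocks it.
img = [[0, 5, 200, 0],
       [5, 0, 200, 0],
       [0, 5, 200, 5]]
region = edge_respecting_fill(img, (0, 0), threshold=30)
print(sorted(region))  # → [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)]
```

Because cost accumulates along the path rather than being compared pixel-by-pixel against the seed, small variations inside an area don't stop the fill, yet a strong edge does, even if it has one-pixel gaps on an expensive route around it.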

Discussion
This better come to Photoshop soon. This is possibly the most important innovation for computer painting systems because sometimes when editing pictures that are lower quality, fill areas may have smaller variations in color (like not all white, but some light grays as well). Usually I have had to paint the area manually zoomed in really close because the flood fill tools left out most of the fill area (because of the small color variations). This brush would make everything so much easier, and make photo editing much faster. Users that do not know all of the tools of Photoshop or other photo editing programs would find this more accessible and easier to use, especially for the newest novices.

Fixing the Program My Computer Learned: Barriers for End Users, Challenges for the Machine

Comments

Summary
In Fixing the Program My Computer Learned: Barriers for End Users, Challenges for the Machine, the researchers focused on machine-learned programs, in which the computer takes measurements of user interaction and learns from them in order to better suit the user. But since the machine can learn the wrong things, the program may not work as well as it is supposed to. The researchers wanted to provide a way for users to debug and fix errors that have been "learned" into the knowledge base of a machine-learned program, and to make this debugging process as easy and stress-free as possible.

The goal of this paper was to discover the issues faced when debugging learning systems, including the gender differences in debugging, and common issues faced by users in the process. They produced a prototype email application that learned from the way users sorted emails, and would archive and sort emails into folders based on past user actions. The debugging process in this prototype made sure to explain the logic behind the sorting of emails, so that the user can directly change and debug the logic of the program. The program allowed the user to ask the system questions such as "Why was this email filed in Personal?", or "Why is this email undecided?", and in this way they can directly change the logic.
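A hypothetical miniature of that "why?" feature might look like the following: a keyword-weight classifier that can report which learned words drove a filing decision, giving the user something concrete to correct. The folders, words, and weights are invented for illustration and are not from the paper's prototype:

```python
# Tiny explainable email filer: score folders by learned keyword
# weights, and return the words responsible for the chosen folder so
# the user can inspect (and fix) the logic directly.
WEIGHTS = {
    "Personal": {"mom": 2.0, "dinner": 1.5},
    "Work":     {"meeting": 2.0, "deadline": 1.8},
}

def file_email(text):
    words = text.lower().split()
    scores = {f: sum(w.get(t, 0.0) for t in words) for f, w in WEIGHTS.items()}
    folder = max(scores, key=scores.get)
    reasons = [t for t in words if t in WEIGHTS[folder]]
    return folder, reasons

folder, why = file_email("Dinner with Mom on Friday")
print(f"Filed in {folder} because of: {why}")
```

Answering "Why was this email filed in Personal?" is then just showing `why`, and letting the user delete or reweight one of those words is a direct edit to the learned logic rather than an opaque retraining step.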

In a study with the system (half male and half female participants), the researchers found that the most commonly encountered barrier was the "selection" barrier: participants had difficulty choosing the right words or messages to modify when giving feedback to the system. They also encountered "coordination" barriers, meaning they did not know how their feedback would change the computer's learning process. Females in general encountered more barriers than males, although males were more vocal in talking through their barriers.

Discussion
I thought this was an interesting way to debug a program. It would be really nice if I could ask my program what happened when it crashed with errors, and have it answer in plain English and explain the error to me. Users who are not from a technical background especially could still tweak the performance of their programs and make them do exactly what they want, instead of having recommendation programs that continually recommend movies or music they don't really want to experience.