- Visibility: Make all aspects of the device visible and easy to see. The user can look at the device and know all parts of it instantly.
- Good Mappings: The visible parts of the objects easily map to their purpose and the way they can be used. For example, light switches are placed in close proximity to the light, or moving a handle forward moves the robot forward as well.
- Feedback: After a user does an action with the device, there is immediate feedback about the result of the action so they know if their action was successful or not.
- No Arbitrary Actions: Make it obvious why the user has to do an action; if they do not understand why they are doing the action, then it seems arbitrary and is extremely hard to remember.
- Use Affordances and Constraints: Objects have affordances in their design which easily and naturally explain how they should be used. For example, a button "affords" to be pushed. Constraints in design restrict the ways the device should be used. For example, if the battery is not supposed to be taken out, you should design the device so that those actions are constrained and not possible.
- Knowledge in the World: Put information about the device in the world, and do not require the user to memorize all aspects of the device to be able to use it.
- Reversible Actions: Any action that may harm the device or allow the user to delete or lose all of their work or data should be reversible, or there should be considerable warning before the user can complete the dangerous operation.
- Design for Error: Think like you are the user, and take precautions in the design to eliminate errors or allow for them to be easily reversed.
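The "Reversible Actions" and "Design for Error" principles map directly onto the undo stack found in most software. Here is a minimal sketch in Python (the `Editor` class and its methods are invented for illustration, not from Norman's book): destructive operations are recorded so they can be undone, and the most dangerous one demands explicit confirmation first.

```python
class Editor:
    """Toy document editor illustrating "reversible actions":
    every state-changing operation is recorded so it can be undone."""

    def __init__(self, text=""):
        self.text = text
        self._history = []          # stack of previous states

    def replace(self, old, new):
        self._history.append(self.text)   # save state before changing it
        self.text = self.text.replace(old, new)

    def delete_all(self, confirmed=False):
        # "Design for error": the dangerous operation requires explicit
        # confirmation before it can run, and it is still undoable after.
        if not confirmed:
            raise ValueError("delete_all is destructive; pass confirmed=True")
        self._history.append(self.text)
        self.text = ""

    def undo(self):
        if self._history:
            self.text = self._history.pop()

ed = Editor("hello world")
ed.replace("world", "there")
ed.undo()
print(ed.text)  # back to "hello world"
```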
Sunday, January 31, 2010
The Design of Everyday Things, by Donald A. Norman
Saturday, January 30, 2010
Ethnography Idea
Friday, January 22, 2010
Disappearing Mobile Devices, UIST 2009
In Disappearing Mobile Devices, Tao Ni and Patrick Baudisch sought to determine what qualities “ultra-small” devices would have. These devices would be so small that the input and output methods of existing mobile devices would be obsolete. They would most likely have to be mounted onto the body in some way, such as on the earlobe or on the tip of a finger. At that scale, an LCD or LED screen would be indistinguishable from a single pixel, making it useless to the human eye, and could amount to just one possible “button” or touch panel. These devices will eventually transcend human restrictions on device size (i.e. finger size and eye capabilities) and also physical restrictions on size (i.e. camera size, light requirements). Without a screen or a useful button system, the only available form of input would be through a touch panel or motion sensor. In this paper, the researchers sought to determine whether input into a device with just a single motion sensor would be intuitive and still useful as a consumer mobile device. The main research and user studies were concerned with gesture recognition over the motion sensor using gesture languages that already exist: EdgeWrite and Graffiti (a version of the language developed by Palm for their PDA series). They used both a small LED sensor attached to the arm of a test subject and a wireless optical mouse turned upside down that had been modified for gesture recognition. After having test subjects input letters using these gesture languages, they found that these devices could be used for text input with reasonable accuracy. They commented on the potential applications, including commands for a portable music player (navigation), controls for an AC system (i.e. the letter ‘u’ for up in temperature), and easy text entry for text messaging or emails.
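The paper does not include code, but EdgeWrite-style recognition can be sketched as matching the sequence of square corners a stroke visits against per-letter templates. The corner numbering and the letter templates below are invented for illustration; they are not the real EdgeWrite alphabet.

```python
# Illustrative EdgeWrite-style recognizer: a stroke is classified by the
# sequence of square corners it passes through. Corner IDs and the letter
# templates are hypothetical, not the actual EdgeWrite definitions.
CORNERS = {0: (0, 1), 1: (1, 1), 2: (0, 0), 3: (1, 0)}  # TL, TR, BL, BR

TEMPLATES = {              # hypothetical corner sequences per letter
    "l": [0, 2, 3],        # down the left edge, then across the bottom
    "z": [0, 1, 2, 3],
    "u": [0, 2, 3, 1],
}

def nearest_corner(point):
    x, y = point
    return min(CORNERS, key=lambda c: (CORNERS[c][0] - x) ** 2 + (CORNERS[c][1] - y) ** 2)

def corner_sequence(stroke):
    seq = []
    for p in stroke:
        c = nearest_corner(p)
        if not seq or seq[-1] != c:   # collapse consecutive repeats
            seq.append(c)
    return seq

def recognize(stroke):
    seq = corner_sequence(stroke)
    for letter, template in TEMPLATES.items():
        if seq == template:
            return letter
    return None               # unrecognized gesture

# A stroke sampled from the sensor: down the left edge, then to the right
print(recognize([(0, 1), (0, 0.5), (0, 0), (0.5, 0), (1, 0)]))  # "l"
```

Snapping noisy sensor points to a handful of corners is what makes this kind of recognition plausible on a device with almost no input resolution.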
Discussion
I thought this sounded super interesting! It would be really awesome to be able to text a friend using gestures across the arm, and get back a message through a receiver that would play near my ear (or through my ear bone) and read the message to me. But the model seems to break down there. There are so many applications that small mobile devices like the iPhone support that a small device without a screen could not. And so while it may be fun for a while to type using gestures, control your music library, and send text messages to friends (or even answer calls!), it would not be functional for viewing pictures, watching videos online, formatting documents, or reading the daily news. Plus, it will take industry a while to catch up to these researchers’ ideas and actually shrink devices small enough to fit on just one pixel of a screen!
Integrated Videos and Maps for Driving Directions, UIST 2009
Summary
This paper, Integrated Videos and Maps for Driving Directions, introduced a new way to merge map information (primarily drawings and symbols) with a visual image of the route being driven. The researchers explained that when driving a route for the first time, a person must look at the map more often to make sure they are going the correct way. But after the person has driven the route a couple of times, they have a visual image of important landmarks along the route (for example, a church building at the corner where they must turn), which helps them remember the route. The purpose of this innovation was to give those visual cues to a driver right before they make the drive, in an attempt to help them with an unfamiliar route. The program works as follows: in the background, the normal map (like the Google Maps screen) is shown, and the user can pan and move the viewing point as usual. Along the route, thumbnails of the video of the route are shown, and the user can click on them to play the video section for that part of the route. The videos are constructed using a couple of simple velocity and direction algorithms: if there is a long straightaway, then the video will fast-forward through that section; but if there is an important landmark at a turn, the picture will widen to show the landmark and freeze, giving the user time to remember that visual cue. The picture below shows an example of one of these turns. Then the video continues on its way. The researchers hoped that this sort of navigation aid would help those driving routes they have never driven before.
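The velocity heuristic described above (fast-forward straightaways, freeze on landmark turns) might be sketched like this. The function, its segment fields, and all thresholds and speed values are invented for illustration; the paper's actual algorithm is more involved.

```python
def playback_speed(segment):
    """Return a playback-rate multiplier for one route segment.

    segment: dict with 'turn_angle' (degrees) and 'has_landmark' (bool).
    The heuristic mirrors the paper's description: speed through straight
    sections, pause on landmark turns. All thresholds are illustrative.
    """
    if segment["has_landmark"] and abs(segment["turn_angle"]) > 30:
        return 0.0          # freeze-frame: give the viewer time to memorize
    if abs(segment["turn_angle"]) < 10:
        return 4.0          # long straightaway: fast-forward
    return 1.0              # normal speed near ordinary turns

route = [
    {"turn_angle": 2,  "has_landmark": False},   # highway stretch
    {"turn_angle": 85, "has_landmark": True},    # church on the corner
    {"turn_angle": 15, "has_landmark": False},   # gentle, unremarkable turn
]
print([playback_speed(s) for s in route])  # [4.0, 0.0, 1.0]
```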
Discussion
I thought this idea seemed very useful, until I noticed that it was meant to be viewed BEFORE you make the drive. I feel like I sometimes have a bad memory, and if I were to view the video of landmarks along the route, I would just forget all of them and the system would be worthless to me. If they could find a way to integrate this system into the GPS devices that are included in almost all new cars now, it might be really useful. But, as the researchers commented, the existence of GPS devices in cars that already give turn-by-turn directions makes this research a little obsolete, unless you are someone who does not own one. All in all, it seemed like a pretty cool mashup of Google Maps and Google Street View, but it didn’t seem too practical in a society where everyone has gadgets with GPS capabilities.
Wednesday, January 20, 2010
TapSongs: Tapping Rhythm-Based Passwords on a Single Binary Sensor
Discussion
SemFeel: A User Interface with Semantic Tactile Feedback for Mobile Touch-screen Devices, from UIST 2009
Discussion
I thought this technology seemed pretty interesting, because having your phone vibrate from top to bottom to top when it’s ringing would be pretty cool! But I don’t really agree with the researchers’ reason for developing the prototype: that users want to know what they are pressing when they aren’t looking at the screen at all. One example they gave was a calendar program in which the user would tap the top of the screen for the morning, the middle for the afternoon, and the bottom for the evening, and the phone would vibrate in the corresponding area with a vibration strength proportional to how busy the user is. For me, a calendar is supposed to be a list of things you have to do and the times you have to do them at. How can you possibly get any important information about your schedule without even looking at the phone? The only application the researchers listed that made sense was Braille on a touch-screen. I thought it was pretty cool that almost 90% of blind people without any training on the touch-screen system could recognize the “vibration Braille” with only their previous knowledge of Braille on paper. So maybe this would be useful in the future, but I really don’t know if many blind people are buying touch-screen phones.
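SemFeel drives multiple vibration motors around the device, and the calendar example above amounts to a simple mapping from tap position to a motor location and an intensity. A sketch of that mapping, with the function, the motor names, and the three-way screen split all invented for illustration:

```python
def calendar_feedback(tap_y, busyness):
    """Map a tap position to a vibration location and strength.

    tap_y: vertical tap position, 0.0 (top of screen) to 1.0 (bottom).
    busyness: 0.0 (free) to 1.0 (fully booked) for that part of the day.
    Returns (motor_name, intensity). The motor names and the three-way
    split are hypothetical; SemFeel's prototype used several motors on
    the back of the device.
    """
    if tap_y < 1 / 3:
        motor = "top"       # morning
    elif tap_y < 2 / 3:
        motor = "middle"    # afternoon
    else:
        motor = "bottom"    # evening
    intensity = round(busyness, 2)   # stronger vibration = busier
    return motor, intensity

print(calendar_feedback(0.2, 0.9))   # ('top', 0.9): a busy morning
```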