Sunday, January 31, 2010

The Design of Everyday Things, by Donald A. Norman

Summary

The Design of Everyday Things, by Donald A. Norman, is a book about good design. In it, he takes examples of bad design from everyday life, breaks each situation apart into the components that went wrong, and explains both how to fix the object and how to design everyday objects that are easier to use. He adopts a user-centered view of the world and stresses that designers should put themselves in the shoes of the user and make design choices based on what would help the user most. Below are the design principles he discussed that I thought were most important:
  • Visibility: Make all aspects of the device visible and easy to see. The user should be able to look at the device and immediately understand its parts.
  • Good Mappings: The visible parts of the object map naturally to their purpose and the way they are used. For example, light switches are placed close to the lights they control, and moving a handle forward moves the robot forward as well.
  • Feedback: After the user performs an action with the device, there is immediate feedback about the result, so they know whether the action was successful.
  • No Arbitrary Actions: Make it obvious why the user has to perform an action; if they do not understand why they are doing the action, it seems arbitrary and is extremely hard to remember.
  • Use Affordances and Constraints: Objects have affordances in their design that naturally explain how they should be used; for example, a button "affords" being pushed. Constraints restrict the ways the device can be used; for example, if the battery is not supposed to be removed, design the device so that removing it is constrained or impossible.
  • Knowledge in the World: Put information about the device in the world, and do not require the user to memorize all aspects of the device to be able to use it.
  • Reversible Actions: Any action that may harm the device or allow the user to delete or lose all of their work or data should be reversible, or there should be considerable warning before the user can complete the dangerous operation.
  • Design for Error: Think like the user, and take precautions in the design to eliminate errors or make them easy to reverse (a small software sketch of these last two principles follows this summary).
In all, the main point Norman was trying to get across is that a well-designed device needs no explanation or instruction manual. The way the device is supposed to be used should be completely apparent from exploration, or even from just looking at it. He urged designers to think like the user, put the user's needs first, and not focus entirely on aesthetics and winning design awards.
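To make the "reversible actions" and "design for error" principles concrete in software terms, here is a tiny Python sketch of my own (not from the book): a destructive delete operation that gives immediate feedback and keeps an undo trail instead of expecting the user to be perfect.

    class FileList:
        """A tiny illustration (mine, not Norman's) of 'design for error': the
        destructive delete gives immediate feedback and stays reversible."""

        def __init__(self, files):
            self.files = list(files)
            self._undo = []                        # stack of (description, restore_fn)

        def delete(self, name):
            if name not in self.files:
                raise ValueError(f"{name!r} is not in the list")   # immediate feedback
            index = self.files.index(name)
            self.files.remove(name)
            self._undo.append((f"delete {name}",
                               lambda: self.files.insert(index, name)))

        def undo(self):
            if not self._undo:
                return "nothing to undo"
            description, restore = self._undo.pop()
            restore()
            return "undid: " + description

    files = FileList(["notes.txt", "hw1.py"])
    files.delete("hw1.py")
    print(files.undo())    # -> undid: delete hw1.py
    print(files.files)     # -> ['notes.txt', 'hw1.py']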

Discussion

Reading the beginning of this book, I was immediately drawn in by the insightful comments about how poorly designed the everyday objects we use all the time can be. It opened my eyes, and I started to realize how much time I spent misusing devices and working around bad design when I should have been completing tasks with them. I agree with all of Norman's points, the most important being constant feedback about the results of your actions. But I disagreed with his criticism of designers and the design process in general. Designing a product that is efficient, cheap, easy to use, nice-looking, and marketable is an extremely hard process. While it is always important to keep the user in mind, it is also a huge task for the people doing the low-level work (mechanical designers, computer programmers) to visualize how the device will be used day to day instead of focusing on their next feature deadline. The process takes not only programmers and designers, but also managers who focus on the user.

While the book may have been relevant to readers back in the 80s when it was written, it is not as applicable now. Many products, especially computer systems and PCs, have changed so much since then and are now far more user-friendly and accessible. The problems he had with telephone systems are mostly gone (although I'm sure he would have plenty to say today about cell phones). Designers today seem to be starting to understand, hiring quality assurance testers, usability testers, and user-interface experts to make sure their products are usable before they go on sale. But not all of them do. As technology advances, designers feel they have to add more and more features, and that comes at a price: only the most advanced and younger users, who grew up with mobile devices and innovations like the ones being released today, can truly enjoy them and use them correctly. Overall, it was a nice read, even though he really was extremely angry at computer programmers and systems designers!

Saturday, January 30, 2010

Ethnography Idea

Our ethnography idea has to do with the peer teaching program in the computer science department at TAMU. As peer teachers ourselves, Mike and I have noticed that only a few students are willing to ask the TA or the peer teacher questions in person, while others will only email them. We want to find out what the qualities are of those who ask questions versus those who just sit and work on their own. The point of this ethnography is to find ways to make everyone comfortable asking a peer teacher for help, especially students who are too shy or embarrassed to ask questions.

We're going to measure students' reactions and demographic qualities under three different styles of peer teaching:

1. We walk around the room actively asking the students if they need help.
2. We sit at a computer for the whole period and wait until students raise their hands or ask us before we get up to help them.
3. A combination of the two, where we walk around only every now and then.

We will also observe how many students look in the textbook or online for answers before asking a peer teacher. Both Mike and I teach several sections of lower-level computer science classes, so we'll have plenty of opportunities to observe. Hopefully these observations will lead us to an idea or innovation that helps these students.

Idea Name: "Peer Teaching Analysis"
Members: Aaron Loveall, Mike Chenault

Friday, January 22, 2010

Disappearing Mobile Devices, UIST 2009

Summary

In Disappearing Mobile Devices, Tao Ni and Patrick Baudisch sought to determine what qualities “ultra-small” devices would have. These devices would be so small that the input and output methods of existing mobile devices would become obsolete. They would most likely have to be mounted on the body in some way, such as on the earlobe or on the tip of a finger. At that scale, an LCD or LED screen would be indistinguishable from a single pixel, making it useless to the human eye, and the whole surface could amount to just one possible “button” or touch panel. Such devices will eventually transcend the human limits on device size (finger size, eye capabilities) as well as the physical ones (camera size, light requirements). Without a screen or a useful button system, the only available form of input would be a touch panel or motion sensor.

In this paper, the researchers sought to determine whether input through just a single motion sensor would be intuitive and still useful for a consumer mobile device. The main research and user studies were concerned with gesture recognition over the motion sensor using gesture languages that already exist: EdgeWrite and Graffiti (the language developed by Palm for its PDA series). They used both a small LED sensor attached to a test subject's arm and a wireless optical mouse turned upside down and modified for gesture recognition. After having test subjects enter letters into the device using these gesture languages, they found that such devices could be used for text input with reasonable accuracy. They commented on potential applications, including commands for a portable music player (navigation), controls for an AC system (e.g. the letter ‘u’ for raising the temperature), and simple text entry for text messages or emails.
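To get a feel for what single-sensor gesture input involves, here is a rough Python sketch of a toy unistroke recognizer. It is my own simplification in the spirit of Graffiti/EdgeWrite, not the authors' actual implementation, and the letter templates below are made up for illustration.

    import math

    # Toy unistroke recognizer (my own simplification, not the authors' code).
    # A stroke is quantized into 8 compass directions and matched to letter
    # templates by edit distance. Coordinates assume y increases upward.

    DIRS = "E NE N NW W SW S SE".split()

    def direction(p, q):
        """Quantize the segment p -> q into one of 8 compass directions."""
        angle = math.atan2(q[1] - p[1], q[0] - p[0])
        return DIRS[int(round(angle / (math.pi / 4))) % 8]

    def stroke_to_dirs(points, min_len=5.0):
        """Turn a stroke (list of (x, y) points) into a direction sequence."""
        dirs, last = [], points[0]
        for p in points[1:]:
            if math.dist(last, p) >= min_len:      # ignore tiny jittery movements
                d = direction(last, p)
                if not dirs or dirs[-1] != d:      # collapse repeated directions
                    dirs.append(d)
                last = p
        return dirs

    def edit_distance(a, b):
        """Levenshtein distance between two direction sequences."""
        dp = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, y in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
        return dp[-1]

    # Made-up templates for a few letters, just for illustration.
    TEMPLATES = {"L": ["S", "E"], "V": ["SE", "NE"], "N": ["N", "SE", "N"]}

    def recognize(points):
        dirs = stroke_to_dirs(points)
        return min(TEMPLATES, key=lambda ch: edit_distance(dirs, TEMPLATES[ch]))

    stroke = [(0, 100), (0, 50), (0, 0), (50, 0), (100, 0)]   # down, then right
    print(recognize(stroke))                                   # -> L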

Discussion

I thought this sounded super interesting! It would be really awesome to be able to text a friend using gestures across my arm and get back a message through a receiver near or on my ear bone that would read the message to me. But the model seems to break down there. There are so many things that small mobile devices like the iPhone can do that a small device without a screen could not. So while it might be fun for a while to type using gestures, control your music library, and send text messages to friends (or even answer calls!), it would not work for viewing pictures, watching videos online, formatting documents, or reading the daily news. Plus, it will take industry a while to catch up to these researchers' ideas and actually shrink devices down to the size of a single pixel of a screen!

Integrated Videos and Maps for Driving Directions, UIST 2009

Summary

This paper, Integrated Videos and Maps for Driving Directions, introduced a new way to merge map information (primarily drawings and symbols) with a visual image of the route being driven. The researchers explained that when driving a route for the first time, a person has to look at the map frequently to make sure they are going the right way. But after driving the route a few times, they build up a visual memory of important landmarks along it (for example, a church at the corner where they must turn), which helps them remember the route. The purpose of this innovation is to give those visual cues to a driver right before the drive, in an attempt to help with an unfamiliar route. The program works as follows: in the background, the normal map (like the Google Maps screen) is shown, and the user can pan and move the viewpoint as usual. Along the route, thumbnails of video of the route are shown, and the user can click on them to play the video for that section. The videos are constructed using a couple of simple velocity and direction rules: if there is a long straightaway, the video fast-forwards through that section; but if there is an important landmark at a turn, the picture widens to show the landmark and freezes, giving the user time to remember that visual cue, before the video continues on its way. The picture below shows an example of one of these turns. The researchers hoped that this sort of navigation aid would help drivers on routes they have never driven before.
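Here is a small Python sketch of how that variable-speed playback rule could work. The function, thresholds, and parameter names are my own guesses for illustration, not the authors' actual algorithm.

    # A toy sketch (my own guess at the rule, not the authors' code): fast-forward
    # long straightaways and slow to a freeze near a turn so the landmark sinks in.

    def playback_speed(dist_to_next_turn_m, cruise=8.0, approach=1.0, pause_zone_m=30.0):
        """Return a playback-rate multiplier given the distance to the next turn (meters)."""
        if dist_to_next_turn_m <= pause_zone_m:
            return 0.0                      # freeze the frame at the landmark
        if dist_to_next_turn_m >= 500.0:
            return cruise                   # fast-forward the featureless straightaway
        # ease linearly from cruise speed down to approach speed as the turn nears
        t = (dist_to_next_turn_m - pause_zone_m) / (500.0 - pause_zone_m)
        return approach + t * (cruise - approach)

    for d in (800, 300, 120, 25):
        print(d, round(playback_speed(d), 1))   # -> 8.0, 5.0, 2.3, 0.0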

Discussion

I thought this idea seemed very useful, until I noticed that it was meant to be viewed BEFORE you drive the route. I sometimes have a bad memory, and if I were to watch the video of landmarks along the route, I would just forget all of them and the system would be worthless to me. If they could find a way to integrate this system into the GPS devices included in almost all new cars now, it might be really useful. But, as the researchers noted, the existence of in-car GPS devices that already give turn-by-turn directions makes this research a little obsolete, unless you are someone who does not own one. In all, it seemed like a pretty cool mashup of Google Maps and Google Street View, but it didn't seem too practical in a society where everyone has gadgets with GPS capabilities.

Wednesday, January 20, 2010

TapSongs: Tapping Rhythm-Based Passwords on a Single Binary Sensor

Summary
The article TapSongs: Tapping Rhythm-Based Passwords on a Single Binary Sensor, by Jacob O. Wobbrock, discusses a new way to log in to a mobile device by tapping a specific rhythm. Using any sort of binary sensor (one with two states: tap down and tap up), the system records the times and durations of the user's taps and compares them against a previously enrolled tapping rhythm. The user creates a TapSong by tapping a rhythm, perhaps from a song or another familiar source, and repeating it about 15 times so the system can compute the average time between notes and the average note length, and also reduce the standard deviation of the timing differences. Every human being can innately recognize and reproduce a rhythm, but everyone plays the same rhythm a little differently. When someone tries to log in, the system time-warps the tapped rhythm linearly (i.e. stretches the sequence to fit the duration of the master TapSong) and measures the relative time differences between notes to determine whether the tapped rhythm is the correct login code. It also measures how long the user holds down each note, which corresponds to the musical range between staccato and legato styles. In user studies, Wobbrock found that other users who watched and even heard the rhythm could log in as an imposter only about 10% of the time. This was attributed mostly to the fact that every person reproduces a rhythm a little differently from everyone else, but consistently with their own performances. The diagram below shows the possible standard deviations for the timing of each note in “Shave and a Haircut, Two Bits.”
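Here is a rough Python sketch of the matching idea as I understand it (not Wobbrock's actual code). The acceptance threshold k and the 20 ms tolerance floor are my own assumptions, and the real system also compares how long each tap is held down.

    from statistics import mean, stdev

    def intervals(tap_times):
        """Inter-tap intervals for one performance, given tap-down times in seconds."""
        return [b - a for a, b in zip(tap_times, tap_times[1:])]

    def enroll(performances):
        """Build a template (per-interval mean and std dev) from ~15 enrollment taps."""
        ivs = [intervals(p) for p in performances]
        n = len(ivs[0])
        assert all(len(v) == n for v in ivs), "all performances need the same tap count"
        means = [mean(v[i] for v in ivs) for i in range(n)]
        devs = [stdev(v[i] for v in ivs) for i in range(n)]
        return means, devs

    def matches(template, attempt_times, k=2.5):
        """Accept if every linearly time-warped interval is within k std devs of the template."""
        means, devs = template
        attempt = intervals(attempt_times)
        if len(attempt) != len(means):
            return False
        scale = sum(means) / sum(attempt)          # linear time warp to the template length
        warped = [iv * scale for iv in attempt]
        return all(abs(w - m) <= k * max(d, 0.02)  # 20 ms floor on the tolerance
                   for w, m, d in zip(warped, means, devs))

    enrolled = enroll([[0.0, 0.20, 0.40, 0.90],
                       [0.0, 0.22, 0.43, 0.95],
                       [0.0, 0.19, 0.38, 0.88]])      # normally ~15 performances
    print(matches(enrolled, [0.0, 0.21, 0.41, 0.93]))  # -> True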


Discussion
I thought this seemed like a very promising idea. I have played music all my life (both piano and saxophone), and I have learned through the years that everyone has their own style. If two musicians look at the same sheet music and try to play it exactly the same, their performances will still be completely different: there will be differences in articulation (staccato vs. legato), note length, hesitation at rests, etc. So the fact that the TapSong system can differentiate between staccato and legato notes by measuring how long the user keeps their finger down is great. Since everyone has their own style (even non-musicians), even if an imposter has the rhythm in front of them and it is a recognizable song, they will still have trouble logging in because they will not be able to imitate the owner's individual style. And this technology could easily be implemented on existing hardware like touch-screen devices and even the new headphone remote that Apple produces (which was discussed in the article). So I would not be surprised if we are logging into our cell phones and iPods and even computers using our favorite song within the next year.

SemFeel: A User Interface with Semantic Tactile Feedback for Mobile Touch-screen Devices, from UIST 2009

Summary
This paper, SemFeel: A User Interface with Semantic Tactile Feedback for Mobile Touch-screen Devices, by Yatani and Truong, presented a prototype of a new kind of vibration system for a touch-screen device like an iPod or a phone. The purpose is to improve on previous vibration systems that only vibrated the whole phone, or only one specific location, by presenting a “moving vibration” system whose different patterns users can distinguish more easily. With touch-screen keyboards that don't have any vibration feedback, it can be really hard to tell what you're typing without constantly looking at the screen. SemFeel is designed to let users operate their device without always being forced to watch the screen.

SemFeel uses five vibrators, each tunable to three vibration strengths, to produce different patterns, such as top to bottom or a clockwise circle. The picture below shows the prototype and the locations of the vibrators. After an extensive user study, the researchers found that users could distinguish between the different patterns 83.3–93.3% of the time. The applications are far-reaching, including Braille on touch screens for the blind and improved response time when typing on a touch-screen keyboard.
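Here is a hypothetical Python sketch of how such patterns might be encoded and played. The motor names, intensity levels, timings, and drive_motor callback are invented for illustration; they are not the authors' actual API or hardware driver.

    import time

    MOTORS = ["top", "bottom", "left", "right", "center"]
    INTENSITY = {"low": 1, "medium": 2, "high": 3}

    PATTERNS = {
        # a "flow" from top to bottom, e.g. for an incoming call
        "top_to_bottom": [("top", "medium"), ("center", "medium"), ("bottom", "medium")],
        # clockwise circular motion around the edge of the device
        "clockwise": [("top", "high"), ("right", "high"), ("bottom", "high"), ("left", "high")],
    }

    def play(pattern, step_s=0.15, drive_motor=lambda motor, level: None):
        """Pulse one motor at a time; drive_motor is whatever the hardware layer provides."""
        for motor, level in PATTERNS[pattern]:
            drive_motor(motor, INTENSITY[level])   # switch the motor on at the given level
            time.sleep(step_s)
            drive_motor(motor, 0)                  # switch it off before the next step

    play("clockwise")   # with a real driver, this would sweep a pulse around the edge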

Discussion

I thought this technology seemed pretty interesting, because having your phone vibrate from top to bottom and back when it's ringing would be pretty cool! But I don't really agree with the researchers' reason for developing the prototype: that users want to know what they are pressing when they aren't looking at the screen at all. One example they gave was a calendar program in which the user taps the top of the screen for the morning, the middle for the afternoon, and the bottom for the evening, and the phone vibrates in the corresponding area with a strength proportional to how busy the user is. To me, a calendar is supposed to be a list of things you have to do and the times you have to do them. How can you possibly get any important information about your schedule without even looking at the phone? The only application the researchers listed that made sense was Braille on a touch screen. I thought it was pretty cool that almost 90% of the blind users, without any training on the touch-screen system, could recognize the "vibration Braille" using only their previous knowledge of Braille on paper. So maybe this will be useful in the future, but I really don't know how many blind people are buying touch-screen phones.

Tuesday, January 19, 2010

First Post

Hi, I'm Aaron Loveall, and this is my blog for CSCE 436, Computer-Human Interaction, at Texas A&M University. This blog will mostly consist of postings about UIST articles and different books that I will read during the class... Thanks and Gig 'Em!