Sunday, April 25, 2010

Measuring Trust in Wi-Fi Hotspots

Comments
not yet...

Summary
In Measuring Trust in Wi-Fi Hotspots, the researchers wanted to study how the appearance of a website affects users' trust in that website or service. For their experiment, they set up at a couple of different locations (restaurants that provided food, drinks, and free Wi-Fi internet) and equipped each location with a "fake" Wi-Fi hotspot that patrons of the restaurant could connect to. When a user tried to connect to the hotspot, a homepage was shown with an image. Users were asked to enter their mobile phone number (even though the service was free), and were then given a keycode to get access to the internet. After entering the keycode correctly, they were sent to a webpage with a debriefing message that explained what the experiment was about and assured them that their phone number would not be misused. The main question in this experiment was: does a location-specific image on the homepage of a wireless internet hotspot affect the likelihood that a person will accept the hotspot as being secure and safe? The participants were divided into two groups:

  • Those that connected to the website and saw an image of the restaurant that they were at or a picture of the city they were in.
  • Those that connected to the website and saw a random image that did not correlate with the location they were in.
The researchers found that people were much more likely to give out their mobile phone number to a website that showed a picture of the location they were in, even though it was an unfamiliar website asking for information (the phone number) that didn't seem necessary. This is interesting data about how trusting internet users are and how easy it is to fake, or "phish," a website or email from a fake provider.



Discussion
This seemed a little scary because I have traveled to different areas and connected to Wi-Fi networks that I had no proof were safe. If these researchers could easily add their own hotspot, then a hacker that wants to steal all of my information can do the same. This paper has definitely led me to reconsider how easily I trust sources on the internet or emails that are sent to me, and especially reconsider how easily I trust wireless access points.

Thursday, April 15, 2010

Obedience to Authority

Comments
not yet...

Summary
Obedience to Authority is Stanley Milgram's account of his famous and controversial experiments into the origins of obedience. The basic procedure of his experiment was as follows. A volunteer was brought into the lab at Yale University to participate in a study on "memory and learning." The volunteer was given the role of the "teacher," while another "volunteer," who was actually a confederate working with the experimenter, was given the role of the "learner." The teacher was instructed to read a list of word pairs to the learner, who was hooked up to an electric shock machine in the other room. Then, the teacher would ask a few questions about the word pairs, and if the learner got any of the answers wrong, the teacher was to shock him using a generator machine, increasing the voltage for each wrong answer. The voltage levels on the machine were labeled from "mild shock" to "severe shock" to "extreme shock" to "XXX," implying death. As the shock levels rose higher, the learner started to scream in pain and demand to be let out of the experiment, but the experimenter continually stressed that the experiment must go on and that the shocks did no permanent tissue damage. In reality, the learner was not actually being shocked, but was instead acting in order to convince the teacher that they were actually causing him harm.


Milgram found that about 65% of the participants were willing to shock the victim all the way to the last level of voltage, 450 volts, or supposed death. He made a few observations about the situation and explained the results in these ways:

  • When an autonomous person is put into a situation with a perceived authority, they automatically give up their autonomy and follow the experimenter.
  • Regardless of the fact that they felt what they were doing was morally wrong, the subject is still unable to break out of the perceived obedience and power hierarchy.
  • Even when in close proximity to the victim, the subject would still follow the orders of the experimenter, suggesting that they had lowered the status of the victim in their mind.
  • Whenever an ordinary man gave the orders to shock and the experimenter was the "victim," the subject would listen to the experimenter's demands to stop the shocking, showing that the authority must have some sort of credibility (or at least claim to have some).
  • When the experimenter was not in the room, the subject did not feel obligated to shock at the high levels, showing that when the structure of obedience and authority is broken, the person is free.
  • The person does not feel responsibility for their actions in an authority structure because "they were just following orders" and the authority is the one with the responsibility.
  • Lastly, whenever the subject was given a choice of what shock level to use, they continued to use smaller shock levels, showing that the act of shocking at the higher levels was not an outlet for the subject to get out their aggression but instead an act of obedience to the experimenter.
While his experiments did not completely sum up the ideas of obedience, he did show (in a controversial way) the extreme effects that situations have on an individual's propensity to act in horrible ways, and that such acts do not stem only from the horrible character and nature of the subjects.



Discussion
This book was extremely interesting - it was definitely one of the best chapters of Opening Skinner's Box. While I would like to say that I would not have shocked all the way to the last level, I also know the nature of how I was brought up (and how many other people in America are brought up) to respect authority and always listen. Even though through my childhood I hated authority and would always talk bad about and resent those who had authority but did not deserve it, I never had the courage to actually stand up to that authority and do what I wanted to do. So, I can definitely see how these participants came to trust and respect the experimenter and were scared to go against him because of his position of authority. Especially interesting was the fact that the subjects that had access to the phone would tell the experimenter they were shocking at higher levels when they would actually shock at a lower level. To me, this showed that they were scared to defy the experimenter to his face but would do it if he wouldn't find out. Sounds kinda like doing something your parents told you not to do when they're not home! In all, this book was fascinating, and impressive in the detail and experiment configurations that Milgram went through to thoroughly prove the nature of obedience to authority.

Tuesday, April 13, 2010

Seeing is retrieving: building information context from what the user sees

Comments
not yet...

Summary
In Seeing is retrieving: building information context from what the user sees, the researchers present a new system called "SeeTrieve" that helps classify documents based on the context they are used in, and not just the actual contents of the file. Instead of indexing the file contents, the SeeTrieve system stores the text that appears on screen around the document - that is, the text visible in the surrounding application windows and controls while the document is in use. It stores these text snippets and maps them to the documents that were open while these snippets were on the screen. As SeeTrieve only captures text the user views, it is more accurate for content recollection than a system that indexes large amounts of HTML data not seen by the user or many pages of a PDF that were never viewed. The picture below shows the way that text snippets are linked to files using a term index:


SeeTrieve records user actions as a stream of events, each with a timestamp. Any time a file is opened and later closed in an application, all of the text snippets that occur during the life of that file are associated with the file. It acquires the text snippets primarily through the accessibility functionality of most applications. Whenever any window changes visibility, a text snippet is made of the window's contents and inserted into the trace of events; another snapshot is made every 3 seconds to catch cases where the text has changed but the visibility of the window has not.
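As a rough illustration of how this association might work, here is a minimal sketch in Python. The event names and data structures are my own assumptions for illustration, not the paper's actual implementation:

```python
from collections import defaultdict

def associate_snippets(events):
    """Associate on-screen text snippets with the files open while they appeared.

    `events` is a time-ordered list of (timestamp, kind, payload) tuples, where
    kind is "open", "close", or "snippet" (assumed event names). Returns an
    inverted index mapping each term to the set of files whose on-screen
    context contained it.
    """
    open_files = set()
    term_index = defaultdict(set)
    for timestamp, kind, payload in events:
        if kind == "open":
            open_files.add(payload)          # payload: file path
        elif kind == "close":
            open_files.discard(payload)
        elif kind == "snippet":              # payload: text visible in a window
            for term in payload.lower().split():
                for path in open_files:
                    term_index[term].add(path)
    return term_index

def search(term_index, query):
    """Return files whose on-screen context contained every query term."""
    sets = [term_index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*sets) if sets else set()
```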

Evaluation of this retrieval system showed that it was much more successful for finding content than a traditional content-based search engine such as Google Desktop. One example was that in a search for the name "James Gleick," SeeTrieve found the file "log1.jpg" because the text "James Gleick" was shown on the screen while that image was being viewed, but Google Desktop did not find that file, because the name was not anywhere in the content of the file. This showed that context-based searching can be much more effective than searching by content alone.

Discussion
I thought this system sounded amazing, and would not be surprised if we saw it included on most systems in the future. I have always thought that the one drawback of using traditional search systems whether used on a local computer or on the internet was that it only could search the file names, description, or text inside (if it's a text file), and could lose some semantic information based on the way it is used, or the way it is stored. While Google Images does this in a way (because it also searches for text on the webpage that an image is found on and not just the name and description of the image), it is not as thorough and useful as this system.

An Interactive Game Design Assistant

Comments
Jill

Summary
In An Interactive Game-Design Assistant, Mark J. Nelson and Michael Mateas present a new tool that will help users with a low budget or no development knowledge design a game. Current tools on the market today help the user with the programming and graphics side of the development process, allowing for easy libraries and visual tools to let these novices develop their own video games. But, there are currently no tools that help the user with the design process - mapping game rules to rules within the system, helping the user make their topic of interest most prominent while exploiting the useful characteristics of games. The system that the researchers created is a game-design assistant that provides suggestions or automates the process of game design, helping the developers define the space of their game in real-world and common-sense terms.

This system would work in two ways:
  • Giving feedback to the user on the state of their current design.
  • Suggesting modifications and additions in an intelligent way.
It would also include example rules, layouts, story designs, and themes that the user could put together to form an abstract view of their game, allowing more detailed work in a later stage.

The user can input design constraints to the actions that the user can do in a certain situation, and the system can present a prototype of that game or give extra suggestions in that way. The picture below shows an example of specifying the relationship between an "attacker" and an "avoider."


Discussion
It seems pretty interesting that, in addition to the existing software that helps users with the programming part of the process, a normal user could now get help with the design part and create an interactive game. While the game is not likely to be very intuitive or look very great, allowing a novice user to try out game development with "training wheels" could inspire them to create their own games that are more complex and do not use these helpful recommendation systems. This is similar to "beginner" musical instruments, where the user can move on to more difficult instruments that have a greater musical range if they are interested.

An Intelligent Fitting Room Using Multi-Camera Perception

Comments
not yet...


Summary
In An Intelligent Fitting Room Using Multi-Camera Perception, the researchers present a new way for evaluating which clothes to buy in a physical shopping situation. The system is made to be placed right outside of fitting rooms (not in the area where you actually change clothes, but in the lobby). The system consists of multiple cameras that can capture the user from multiple angles, as well as three screens. The center screen is for showing the current image of the user (just like a mirror). The left screen is for showing the past image of the user wearing the last thing that they tried on. This image will animate and change in real time with the movements of the user, so that if they turn around to see the back of their body, they can see what their back looked like with the past clothing as well. And lastly, the right screen is for showing social networking information based on the clothing being tried on. The cameras can identify the clothing, pull up information from a social fashion network, and show images of other people wearing similar clothing in order to see if the clothes are in style or not. The image below shows the construction of the system:




And the image below shows the prototype used to test the system:





The recognition system recognized user poses that were hard coded into the system, and recognized clothes based on texture, color, sleeves, collar, and other things.




Discussion
This system seems like it would be really awesome and really help the shopping experience go by a lot quicker. And, the idea that advertisements could be played on the screen while there is no one using it would definitely appeal to stores. But, it seems like it would be an extremely expensive addition to current stores, and probably could only be afforded by expensive boutique stores - the kind that look almost empty and only have 4 or 5 outfits to choose from, making it useless anyway.

Automatic Evaluation of Assistive Interfaces: IUI 2008

Comments
not yet...

Summary
In Automatic Evaluation of Assistive Interfaces, the researchers wanted to take existing HCI user modeling programs and extend them to simulate the actions of disabled users. HCI modelling programs are used to evaluate interfaces by providing a simulated "user" that performs optimally in the interface. But, there is no current model that simulates a disabled user of the system. The researchers wanted to present a new model that would simulate the disabled user as well as a normal user in order to evaluate an "assistive interface" (one that helps disabled users with the system) without having to find lots of disabled people to test the system.

The system was built to simulate many different things:
  • Simulating the Practice Phase: Assuming that the user had no idea how to use the system and could not read the buttons but knew where they were (like a blind user navigating with the 'tab' key), the model could try out different options and learn from the feedback.
  • Visual Simulation: Using the actions of the keyboard and positions of the mouse, the interface can track the "visual location" of the eyes in order to change interaction and assist the user.
  • Motor Simulation: Using a variety of specified disabilities, the model was able to simulate the time it would take for the user to select an option or use the mouse, allowing the interface to be evaluated and new interaction paradigms developed.

In general, the model that the researchers developed was able to accurately simulate a variety of disabilities in order to evaluate assistive interfaces without the need for lots of participants.
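As a rough illustration of the kind of motor simulation described above, here is a minimal Python sketch based on Fitts's law, with an assumed slowdown multiplier standing in for a simulated motor impairment. The coefficients and the multiplier are placeholder values, not the ones used in the paper:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15, motor_slowdown=1.0):
    """Estimate pointing time in seconds with Fitts's law: T = a + b * log2(D/W + 1).

    `motor_slowdown` is a hypothetical multiplier for a simulated motor
    impairment (1.0 = typical user); a and b are placeholder coefficients.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return (a + b * index_of_difficulty) * motor_slowdown

# Compare a simulated typical user with a simulated user whose impairment
# roughly doubles movement time for the same 20-pixel-wide target 400 pixels away.
print(fitts_time(400, 20))                      # typical user
print(fitts_time(400, 20, motor_slowdown=2.0))  # simulated impairment
```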

Discussion
While this seems useful for developers who want to save time developing applications, it seems that we would still need to do user studies with actual participants that are disabled and get their input into how they would like an interface to work. In general, models are an approximation to reality, and it doesn't seem that you could produce a model that exactly simulates a disabled person (except possibly a model that only navigates through audible feedback).

Wednesday, April 7, 2010

Opening Skinner's Box: Great Psychological Experiments of the Twentieth Century


Comments
Jill


Summary
In Opening Skinner's Box: Great Psychological Experiments of the Twentieth Century, Lauren Slater details ten of the greatest psychological experiments of this past century that have shaped the way we think about the human mind and human behavior. She narrates the stories of these researchers and their work as if we are reading a story or watching a movie. The researchers she describes are:
  • B. F. Skinner: Skinner experimented with rats and conditioning, and found that the mind is extremely receptive to rewards, which strengthen conditioning, and that the mind is not as receptive to punishment, which weakens conditioning.
  • Stanley Milgram: Milgram designed an experiment where the participant was told to shock another participant up until the point where the shock would deliver death, and found that 65% of participants shocked up until death. His experiments taught us a lot about humans' obedience to authority.
  • David Rosenhan: Rosenhan and some helpers admitted themselves into mental hospitals saying that they were hearing a voice that said "thud". They found that even though they were perfectly sane, and said so after being admitted, they still would be kept in the hospital for a long time, and psychologists would swear that they were psychotic. This showed the subjectiveness of psychiatric diagnosis.
  • John Darley and Bibb Latane: Darley and Latane, through their experiments, found that when a crisis happened and someone needed help, a bystander who perceived that lots of other people were present would not help. These bystanders would wait a long time and never truly decide whether to help or not. But, when they thought they were the only person there to help, they would help almost immediately.
  • Leon Festinger: Festinger studied the way that people will change their ideas and beliefs based on their actions, primarily studying the way that cult members reacted whenever the "day of judgement" and the end of the world did not come as they had predicted.
  • Harry Harlow: Harlow studied how infant monkeys came to be attached to a fake "mother" that had soft cloths on it, versus a mother that was hard and metal but provided food. He found that love does not have to do with providing resources and food, but instead has an aspect of touch as well as motion.
  • Bruce Alexander: In order to study the nature of addiction, Alexander placed some rats in a nice, clean environment and others in a solitary, confined environment, and gave all of the rats both water laced with morphine and regular water. He found that the rats in the bad conditions preferred the morphine, while the rats in the nice environment didn't, suggesting that addiction is not a physical dependency but instead a result of situation.
  • Elizabeth Loftus: Loftus showed that memories of our past quickly disintegrate and that we can never trust them. She helped participants in her experiments "remember" fake memories of being lost in the mall, and the participants were almost 100% sure that they had remembered this fake memory.
  • Eric Kandel: Kandel showed that memory is strengthened by increasing the strength of connections between neurons, and that a specific substance called CREB helps that strengthening.
  • Antonio Moniz: Moniz pioneered the practice of psychosurgery, specifically the lobotomy, in order to treat patients that had depression or psychosis. While further refined, some of his techniques are still used today.
In all, Slater presented some great examples of important psychological experiments that have shaped and changed the field as practiced today.

Discussion
I actually really liked this book because of the way that Slater wrote. She turned all of these dry experiments into interesting stories that really give great insight into the human mind. Especially interesting is the fact that while many of these discoveries are of grave import, and should change the field completely, psychologists today still widely discredit them and continue to believe otherwise. One such example is the addiction example, as kids today are still taught that drug addiction is purely physical.

Sunday, March 28, 2010

The Inmates are Running the Asylum: Chapters 8 - 14


Comments

Summary
In the second half of The Inmates are Running the Asylum, by Alan Cooper, he starts giving examples of ways to fix the interaction design problems faced in software development that he presented in the first half of the book. His most important points are below:
  • Persona Design: Interaction designers focus on developing not for all possible users but instead for a few specific "personas", or model users. These personas are specified as much as possible; each one has a name, background, occupation, and reasons why they would be using the software. Designing for personas allows the programmer to only develop features needed and make the program easier to use and address the users' needs.
  • Designing for Goals: Programs should not be designed solely to allow a certain task to be performed; they should be designed to meet a user's practical goals. This means goals that they want to accomplish on a daily basis (while ignoring the edge cases), and also personal goals that they want followed when using the program (for example, not being made to feel stupid).
  • Interaction Design First: You must let interaction design happen first before the programming happens, and not be tacked on at the end. And this does not mean just simple user interface design; interaction design tackles the deeper issues of how the program interacts with the user and the choices that the user has to make.
  • Give the Designer the Responsibility: The interaction designer must have "skin in the game" and be given all the responsibility of the program. They design the program, create the specifications, and give it to the programmers. Since the programmers aren't responsible for the interaction with the user and the success or failure of the product, they will follow the design document much more closely.
In general, Cooper says that design must happen first, and must be given as much time and resources as needed. If you do this, then the time and money spent during development and programming of the product will be reduced, and your product will be much more successful. He illustrated these concepts using many examples from his design experience with his own consulting firm.

Discussion
I liked the second half of the book much more because of the fact that he stopped accusing programmers of being so horrible and actually gave sensible ideas on how to fix the interaction design problem. I especially liked the idea of designing for a specific persona - it definitely seems that you will have a much more successful product if you design for and completely satisfy one type of person, and it follows that you will also make people sort of similar to them happy as well. I liked the examples that he gave of his design experience, because they accurately illustrated his concepts, but sometimes it got a little too much and sounded like he was trying to convince us to call him up to come solve our design problems. Again, this book was written in 1999 and a lot of the issues (with Microsoft especially) have been fixed and we're facing many different interaction problems today with new technology such as touch.

Wednesday, March 10, 2010

Towards More Paper-like Input: Flexible Input Devices for Foldable Interaction Styles

Comments
Kerry

Summary
In Towards More Paper-like Input: Flexible Input Devices for Foldable Interaction Styles, the researchers (from Queen's University) present a new form of input with foldable interactive sheets, in which the user deforms the sheet to produce input to a device. Deformability of the interactive sheet can directly mirror the actions available in the interface, and mimics the physics of real sheets of paper. This Foldable Input Device (FID) looks and behaves like a mouse pad, and consists of cheap sheets of cardstock and paper with IR retro-reflectors, tracked through an IR webcam. The computer connected to the sensor camera runs C++ and OpenGL software so that the deformations of the FID can drive real-time graphics manipulations on the computer.
Interactions with the FID include swiping your thumb across the device, scooping, folding down the top corner, folding down the middle, squeezing, shaking, or leafing (like leafing through the pages of a book). These are shown in the picture below:

The potential applications for this type of input system are numerous. You could navigate around a desktop by sliding the FID around on a table, select items by hovering over them and then making a scooping motion with the sheet, leaf to browse through a list of items, shake to re-sort a list, or zoom in and out by bringing the FID closer to or farther from a display. On the screen, there is a transparent graphic that hovers over the application and represents the current shape and size of the FID, so that the user can see what their actions do in relation to the application.

Discussion
This interface seems to follow the endless list of new input devices that completely rethink the way we interface with computers and devices. The problem is that I fail to see how easy or intuitive it would be to move a sheet of paper around and use it to select things on the screen, when I can just as easily use a mouse to select an area. The main problem with these new devices is that the general public must learn to not rely on physical feedback (such as physical clicks and the noise associated), and learn to use these other devices. I know that touch-screen interfaces are probably the future of computer interaction, but as of right now, I do not find it intuitive because I am not clicking and using a physical device. It will take some getting used to if we move all interaction to touch-screen.

Tuesday, March 9, 2010

OctoPocus: A Dynamic Guide for Learning Gesture-Based Command Sets

Comments
Nate

Summary
In OctoPocus: A Dynamic Guide for Learning Gesture-Based Command Sets, Olivier Bau and Wendy E. Mackay present a new dynamic guide that combines on-screen feedforward and feedback to help users learn new sets of gestures to control their devices: OctoPocus. It can be applied to lots of different single-stroke gestures, and helps users smoothly learn the set with as little effort and time as possible. Their goal was not to make the gesture-recognition algorithm better, but instead to teach users to perform the gestures correctly so that they can use any algorithm.

The program focuses on using "dynamic guides" to help the user learn the gestures. This consists of feedforward information, which shows the user the current set of options and what each gesture should look like if completed correctly, and feedback, which tells the user how well the current gesture has been recognized. OctoPocus only appears if the user has hesitated, so that expert users can continue to work without being interrupted by an annoying explanation interface. When a user hesitates, the system reveals each gesture's intended path, along with feedback on how well the user has performed the gesture thus far. If more than one gesture has the same initial stroke, the system will show the possible options from that current point in the gesture, as shown in the picture below.
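A minimal sketch of the hesitation-triggered guide idea is below. The dwell threshold, distance measure, and data structures are my own assumptions, not OctoPocus's actual recognizer:

```python
import time

HESITATION_SECONDS = 0.5   # assumed dwell time before the guide appears

def prefix_distance(stroke, template):
    """Average distance between the stroke so far and the matching prefix of a template.

    Both arguments are lists of (x, y) points; templates are assumed to be
    resampled so that corresponding indices are comparable.
    """
    n = min(len(stroke), len(template))
    if n == 0:
        return float("inf")
    return sum(((sx - tx) ** 2 + (sy - ty) ** 2) ** 0.5
               for (sx, sy), (tx, ty) in zip(stroke[:n], template[:n])) / n

def candidates_to_display(stroke, templates, last_move_time, max_distance=40.0):
    """Return (command, remaining path, score) for gestures still consistent with
    the partial stroke, but only once the user has hesitated."""
    if time.time() - last_move_time < HESITATION_SECONDS:
        return []                          # expert still in motion: show nothing
    results = []
    for command, template in templates.items():
        d = prefix_distance(stroke, template)
        if d <= max_distance:
            results.append((command, template[len(stroke):], d))
    return sorted(results, key=lambda r: r[2])
```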





After doing user studies with the system, the researchers found that users liked the system and were able to learn and execute gesture-based commands much more quickly than other help systems.

Discussion

I thought this seemed a little out of place, because I can't think of any current gesture-based systems where you make gestures to choose options. While touch-screen phones have gestures to zoom in or out, or move menus around, they usually have dedicated touch-screen buttons to execute commands. But, as I know that all input systems seem to be going towards touch-screen in the future, then we may possibly need these help systems for learning complicated gestures. I especially liked the idea that experts can go through the gestures quickly without being affected by the help system, because I know that if it always showed up then I would get tired fast!

Edge-Respecting Brushes

Comments

Summary
In Edge-Respecting Brushes, Dan R. Olsen Jr. and Mitchell K. Harris outline a new form of paint brush for painting programs (such as Adobe Photoshop) that is edge-detecting; that is, it identifies and paints around or along edges of an existing image. This edge-respecting algorithm combines intelligent selection with applying effects to an image, allowing the user to paint around or over objects. For example, in the picture below, the user can use the edge-respecting brush to paint only the skin of a person in an image, or paint the background of a portrait another color, without having to go through the tedious process of selecting only the section that they want to paint.


This brush works more effectively than the traditional paint-can fill, which fills out to exact pixel boundaries and can leak into other parts of the picture. This brush tolerates breaks in lines and fills in only the part the user wants to paint. This lets users quickly fill in areas in a cartoon image, for example, or change the color of an object in a picture quickly and easily. The brushes also extend to other non-painting tools, such as brushes that apply blur to the picture or lighten or darken an area. When the user clicks the mouse, the algorithm fills outward using Dijkstra's algorithm (like normal flood fill), but stops expanding whenever a neighboring pixel's accumulated difference exceeds some given threshold. In normal flood fill, the algorithm terminates at the first pixel that is different from the clicked pixel, even by an unnoticeable amount of color difference. When users tried out this new brush algorithm, they felt it was much easier to use, quicker, less tedious, and more intuitive than the traditional select-then-fill workflow.
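Here is a minimal sketch of the threshold-terminated fill idea on a grayscale image. The cost accumulation and threshold are simplified assumptions on my part; the paper's actual brush is more sophisticated:

```python
import heapq

def edge_respecting_fill(image, seed, threshold):
    """Fill outward from `seed`, stopping where the accumulated color difference
    from the clicked pixel exceeds `threshold`.

    `image` is a 2D list of grayscale values and `seed` is a (row, col) pair.
    Accumulating cost along paths (Dijkstra-style) means small gaps in an edge
    do not let the fill leak through, unlike plain flood fill.
    """
    h, w = len(image), len(image[0])
    sy, sx = seed
    seed_value = image[sy][sx]
    filled = set()
    best = {seed: 0.0}
    frontier = [(0.0, seed)]               # (accumulated cost, pixel)
    while frontier:
        cost, (y, x) = heapq.heappop(frontier)
        if (y, x) in filled:
            continue
        filled.add((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in filled:
                new_cost = cost + abs(image[ny][nx] - seed_value)
                if new_cost <= threshold and new_cost < best.get((ny, nx), float("inf")):
                    best[(ny, nx)] = new_cost
                    heapq.heappush(frontier, (new_cost, (ny, nx)))
    return filled
```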

Discussion
This better come to Photoshop soon. This is possibly the most important innovation for computer painting systems, because when editing lower-quality pictures, fill areas may have small variations in color (like not all white, but some light grays as well). Usually I have had to paint the area manually, zoomed in really close, because the flood fill tools left out most of the fill area (because of the small color variations). This brush would make everything so much easier, and make photo editing much faster. Users that do not know all of the tools of Photoshop or other photo editing programs would find this more accessible and easier to use, especially the newest novices.

Fixing the Program My Computer Learned: Barriers for End Users, Challenges for the Machine

Comments

Summary
In Fixing the Program My Computer Learned: Barriers for End Users, Challenges for the Machine, the researchers focused on the field of machine-learned programs, in which the computer takes measurements of user interaction and learns things from them in order to make the program better suit the user. But, as the machine could learn the wrong things and have errors, the program may not work as well as it is supposed to. The researchers in this paper wanted to provide a way for users to debug and fix errors that have been "learned" into the knowledge base of a machine-learned program, and make this debugging process as easy and stress-free as possible.

The goal of this paper was to discover the issues faced when debugging learning systems, including gender differences in debugging and common issues faced by users in the process. They produced a prototype email application that learned from the way users sorted emails, and would archive and sort emails into folders based on past user actions. The debugging process in this prototype made sure to explain the logic behind the sorting of emails, so that the user could directly change and debug the logic of the program. The program allowed the user to ask the system questions such as "Why was this email filed in Personal?" or "Why is this email undecided?", and then adjust the logic behind the answer.
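To make the idea concrete, here is a toy sketch of a folder sorter that can explain and let the user adjust its learned logic. The keyword-weight scheme is my own stand-in for illustration; the paper does not describe its classifier this way:

```python
from collections import defaultdict

class ExplainableFolderSorter:
    """Toy keyword-weight classifier whose folder choices can be inspected and edited."""

    def __init__(self):
        self.weights = defaultdict(lambda: defaultdict(float))  # folder -> word -> weight

    def train(self, folder, message):
        for word in message.lower().split():
            self.weights[folder][word] += 1.0

    def classify(self, message):
        words = message.lower().split()
        scores = {f: sum(w[word] for word in words) for f, w in self.weights.items()}
        return max(scores, key=scores.get) if scores else None

    def explain(self, message, folder):
        """Answer 'Why was this email filed in <folder>?' with the top contributing words."""
        words = set(message.lower().split())
        contributions = {w: self.weights[folder][w] for w in words if self.weights[folder][w] > 0}
        return sorted(contributions.items(), key=lambda kv: -kv[1])[:5]

    def adjust(self, folder, word, delta):
        """Let the user directly change the logic the system has learned."""
        self.weights[folder][word] += delta
```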

In a study with the system (half male and half female participants), the researchers found that the most common barrier was the "selection" barrier, meaning participants had difficulty choosing the right words or messages to modify in order to give feedback to the system; they also encountered "coordination" barriers, meaning they didn't know how their feedback would change the learning process of the computer. Females in general encountered more barriers than males, although males were more vocal in talking through their barriers.

Discussion
I thought this was an interesting way to debug a program. It would be really nice if I could ask my program when it crashed with errors what happened, and have it answer in English directly and explain to me what the error was. Especially for users that are not from a technical background, they can still tweak the performance of their programs and make them do exactly what they want to do, instead of having recommendation programs that continually recommend a movie or music that you don't really want to experience.

Sunday, February 28, 2010

Hand Gesture Recognition and Virtual Game Control Based on 3D Accelerometer and EMG Sensors

Comments

Summary
In Hand Gesture Recognition and Virtual Game Control Based on 3D Accelerometer and EMG Sensors, the researchers propose a new form of input for virtual games and other applications using the motion of the hand. While previous forms of input using the hand would use only the accelerometer or the EMG sensors, this project takes both forms of input and combines them together for a more accurate and reproducible input for games.

The accelerometer is used to determine the direction of large-scale gestures, and is not well suited to small gestures. It can also sense rotation as well as large movements. The EMG (electromyography) sensor senses the electrical signals that muscles produce as they move within the body, thus telling the system when a user has moved. The problem with EMG is that it is hard to determine the exact movement that was made, but it can sense when even the tiniest of movements occurs. The accelerometer (ACC) and the EMG inputs are processed in real time and converted to recognized gestures and inputs to control the game.

To test this new innovation, the researchers built a virtual Rubik's Cube game and allowed those in the study to manipulate the cube in real time in 3D space using the gesture input system. After being outfitted with EMG sensors along their wrist and an accelerometer on the top of their hand, participants were able to manipulate the cube, and the system recognized their gestures 91.7% of the time. Future work would include making the system more robust, extending applications to mobile device control, and designing smaller sensors that could fit in a glove, for example.
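As a very rough sketch of how the two sensor streams might be combined, the EMG signal can say when a gesture is happening while the accelerometer says which direction it went. The thresholds and the axis-based classification below are my own simplifications, not the paper's recognizer:

```python
def gesture_in_progress(emg_samples, energy_threshold=0.2):
    """Return True while muscle activity suggests a gesture is underway.

    `emg_samples` is a list of recent rectified EMG amplitudes; the threshold
    is a placeholder value.
    """
    mean_energy = sum(abs(s) for s in emg_samples) / max(len(emg_samples), 1)
    return mean_energy > energy_threshold

def classify_direction(acc_samples):
    """Pick the dominant axis of motion from accelerometer samples (x, y, z tuples)."""
    sums = [sum(abs(sample[i]) for sample in acc_samples) for i in range(3)]
    axis = sums.index(max(sums))
    return ("left/right", "forward/back", "up/down")[axis]

def recognize(emg_samples, acc_samples):
    """Combine both sensors: EMG detects that a gesture occurred, ACC labels it."""
    if not gesture_in_progress(emg_samples):
        return None
    return classify_direction(acc_samples)
```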


Discussion
I thought this seemed pretty interesting because unlike newer game control input systems (like Project Natal, by Microsoft, which uses a camera to detect user movement, gestures, and motions without a controller), they are getting the exact movement from the user. It seemed similar to the accelerometer that Nintendo uses in their Wii controllers, but it would have a higher sensitivity because it actually senses the electrical impulses to the muscles in the body. My question would be whether this system is actually more useful than the Wii motion controller system, or just a novel way to control input using the whole hand and fingers instead of an external controller.

Parakeet: A Continuous Speech Recognition System for Mobile Touch-Screen Devices

Comments

Summary
In Parakeet: A Continuous Speech Recognition System for Mobile Touch-Screen Devices, the authors present a new voice-recognition application for mobile touch-screen devices that allows the user to input text into the device. Using the voice recognition, the user can speak the text they want to enter, and then use a simple touch-based interface for making corrections and changing words. The features are as follows:

  • Voice recognition software converts the spoken word into text.
  • When the recognition is done, a beep lets the user know.
  • The user can then scan the sentence and easily make changes to words. The words in the sentence are ordered at the top of the screen, and below in columns are examples of other possible words that sound like the word spoken, along with a delete button at the bottom. The user can either replace the word that was recognized, or delete the word altogether.
  • If the software is completely unable to recognize the word, it will default to a keyboard entry with predictive text as seen in current mobile devices.
After doing a user study, they found that users were able to enter up to 18 words per minute while sitting still, and 13 words per minute while walking around. While that may seem slow, it is comparable to the 10 - 20 words per minute that users achieved while entering text using the traditional T9 text prediction system (using a keypad).
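A small sketch of the correction interface's data model: each recognized word carries a column of similar-sounding alternatives plus a delete option, and a tap either replaces or removes the word. The class and function names here are hypothetical, purely for illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WordSlot:
    """One recognized word plus the alternatives shown in its column."""
    best: str
    alternatives: List[str] = field(default_factory=list)

def apply_correction(slots, index, choice):
    """Replace the word at `index` with a tapped alternative, or delete it.

    `choice` is one of the listed alternatives, or "DELETE" as a stand-in for
    the delete button at the bottom of the column.
    """
    if choice == "DELETE":
        return slots[:index] + slots[index + 1:]
    slots[index].best = choice
    return slots

# Hypothetical recognition of "send the text now" with two misrecognized words.
sentence = [WordSlot("sent", ["send", "spend"]),
            WordSlot("the", ["a"]),
            WordSlot("test", ["text", "tech"]),
            WordSlot("now", ["know"])]
sentence = apply_correction(sentence, 0, "send")
sentence = apply_correction(sentence, 2, "text")
print(" ".join(s.best for s in sentence))   # send the text now
```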


Discussion
If implemented correctly and included on all new cell phones, this entry system would revolutionize the text messaging system. I know lots of people that are older than me that don't see the point of text messaging...but if they could just speak what they want to send, just like they're having a conversation over the phone, and send that as a text message, then they could be more comfortable with the system and send messages to users that use traditional typing methods. But, the words per minute would have to be a lot higher before I would use it because I already text almost as fast as I type on the computer. There would be a lot of idiots walking around talking into their phone like the people with bluetooth headsets though....

Friday, February 26, 2010

Emotional Design



Comments

Summary
In Norman's first book, The Design of Everyday Things, he stressed the necessity of usability in products of all forms. With a first glance at a device or product, the user should instantly know how the product is supposed to be used, without question; they should not have to read a lengthy manual to understand the inner workings of the product. In Emotional Design, Norman departs from this notion partially and instead focuses on the emotional appeal of objects. His main point (quoting William Morris) is as follows:

Have nothing in your houses that you do not know to be useful, or believe to be beautiful.

Norman's description of emotional design includes the following points:
  • Users report that they are more happy with objects that are aesthetically pleasing, even if they do nothing or do not work as they are supposed to.
  • Appeal to the three levels of emotional response:
        Visceral: the basic appearance and the visual / aural / haptic responses to the object
        Behavioral: the pleasure and effectiveness of use (does it work correctly?)
        Reflective: how the object appeals to the user at a high level, such as self-image, personal reflection, or past memories
  • Make the object have its own unique personality.
  • The object should be not only beautiful but 'fun' as well. Beauty, fun, and pleasure all work together to form a sense of enjoyment of the product or object.
  • Products should go beyond the initial expectations, produce an instinctive reactionary response, and deliver surprising novelty.
  • Communication between product and user should make the user feel important and respected.
  • Machines or robots that are created should have emotional responses so as to react with humans appropriately.
  • Allow the user to have a personal relationship with the product, either through customization, years of use, or personal touches.
In all, Norman wants us to take his past argument (that all products should be usable, even if they're ugly) and change the weights he had given before. He now believes products should be beautiful and emotionally pleasing as well as be usable and work correctly, for successful products have both of these qualities.

Discussion
I thought this book was a nice calm change from the style of The Design of Everyday Things. In that previous book, he seemed to be angry at everyone, continually telling designers how their products are wrong because they are hard to understand or use, and that they focus too much on aesthetics instead of usability. What was nice about Emotional Design was that he instead focuses on instructing designers on great ways to make their objects and products more emotionally appealing, and gave examples of great emotional designs, instead of criticizing everyone and giving examples of poorly designed products. It was a much more satisfying read than the previous book.

I completely agreed with Norman on the necessity of designing with emotional and aesthetic appeal. Apple laptops and PCs are not successful because they are more powerful than any other PC (because they're not), but because they are beautiful to look at and to use. I have always wanted an Apple computer (though I can't afford one) even though I know that as a computer programmer, I would not have as many resources for developing applications and doing my daily work as I would on a PC. This shows that the emotional design of the computers can far surpass the functional requirements. I know that I definitely buy products based on aesthetic qualities as much as functional qualities.

Wednesday, February 24, 2010

Data Driven Exploration of Musical Chord Sequences

Comments

Summary
In Data Driven Exploration of Musical Chord Sequences, Basu, Nichols, and Morris present a new way of classifying different genres of music based on inherent chord progression styles, and an interface for both music novices and experts alike to modify styles of chord progressions to form their own unique genre of music, or blend two artists or styles together.

The polygon slider component of the interface consists of a polygon in which each vertex represents a style of music or an artist (all of them user-defined). The current point inside the polygon can also be set by the user and slid around inside the widget in real time. Using a distance algorithm, the system finds the distance of the current point from each vertex of the polygon (each style) and assigns a percentage weight to each style being blended. The system will then take properties of the chord progressions from each of the styles (based on the percentage given) and create a new chord progression style that the user has defined. In this way, users can create their own style of music (as chord progressions are inherent to the differences in musical styles) and put the progression behind a melody to create music. The possible choices for inputs include styles such as Country, Rock, Classical, or Pop, and artists such as AC/DC, the Beatles, and Lenny Kravitz. Chord progression styles for each genre of music are obtained by mining chord usage statistics from a large database of songs in that genre. Progressions for artists are averaged in a similar way from a database of songs by that artist.
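A minimal sketch of how the polygon slider's blending might work, using inverse-distance weighting from the handle to each vertex and then mixing per-style chord-transition probabilities. The exact distance scheme and data layout are my assumptions, not necessarily the paper's:

```python
import math

def style_weights(handle, vertices):
    """Convert the handle's position inside the polygon into per-style percentages.

    `handle` is an (x, y) point and `vertices` maps style name -> (x, y).
    Uses simple inverse-distance weighting.
    """
    inverse = {}
    for style, (vx, vy) in vertices.items():
        d = math.hypot(handle[0] - vx, handle[1] - vy)
        if d == 0:
            return {style: 1.0}            # handle sits exactly on one style
        inverse[style] = 1.0 / d
    total = sum(inverse.values())
    return {style: w / total for style, w in inverse.items()}

def blend_transitions(tables, weights):
    """Blend per-style chord-transition probabilities, e.g. P(next chord | current chord)."""
    blended = {}
    for style, table in tables.items():
        for transition, p in table.items():
            blended[transition] = blended.get(transition, 0.0) + weights.get(style, 0.0) * p
    return blended

# Hypothetical: a handle position blending mostly Country and Rock, a little Pop.
weights = style_weights((0.3, 0.7), {"Country": (0, 1), "Rock": (1, 1), "Pop": (0.5, 0)})
```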



Discussion
Being a musician, I thought it was really cool that genres of music could be exactly specified by types of chord progressions. I mean, everyone knows that some types of music have chord styles that are always seen, for example the Blues scale and chords in Jazz and Blues, and minor chords in Alternative Rock. But to think that a specific chord progression such as C->F->G constitutes a Country song is pretty interesting. Now I can tell all my family and friends that I hate Country music because I was born to hate their exact chord progressions! Not my fault, I was just born that way. And the interface to be able to create your own chords (even as a novice) is great. Many people wish they knew how to play instruments, and seeing how it is an extremely hard process, this interface would give many people a way to be creative without taking the time and effort to learn an instrument (or compose music by writing scores of notes).

TrailBlazer: Enabling Blind Users to Blaze Trails Through the Web

Comments

Summary
TrailBlazer: Enabling Blind Users to Blaze Trails Through the Web, by Bigham, Lau, and Nichols, details a new way for blind users of computers to navigate the web more quickly. Current accessibility programs for the blind consist of only a linear listing of all the text and options on the screen, whether it is read aloud or presented on a refreshable Braille display. TrailBlazer allows the user to access scripts written for a website (a script consists of a possible path through the website, and websites with scripts written will include the most important and popular actions on the site) and easily jump to the options they need, significantly reducing the time it takes to navigate a website. Scripts that are already written for websites are listed in a large database called "CoScripter", but TrailBlazer also allows users to create their own scripts. These scripts are a kind of instruction for the system, detailing actions the user wants to make on the website in a pseudocode-like language. Each time a new user creates a script, that script is added to the CoScripter database, so as users use TrailBlazer, the database continues to grow and more websites become accessible to the blind.


Additionally, the system saves past users' actions and uses them to predict the next most likely action (picking the script most likely to be used). In this way, the system continues to improve each time it is used. After doing an extensive user study with blind users, all of the users said that TrailBlazer significantly reduced the amount of time it took to navigate a website, but they were not as happy with the steep learning curve they had to overcome to use the software. Another problem with TrailBlazer was its inability to navigate dynamic web content such as Flash. Because most of that content is not textual and is contained in its own application, the system cannot navigate its buttons, etc. But in general, most of the users were very happy with the interface.
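The next-action prediction could be as simple as counting what usually follows the current action in past sessions; here is a minimal sketch of that idea (TrailBlazer's real suggestion ranking is certainly more involved):

```python
from collections import Counter, defaultdict

class NextActionPredictor:
    """Predict the most likely next script from past action sequences."""

    def __init__(self):
        self.follows = defaultdict(Counter)   # previous action -> Counter of next actions

    def record_session(self, actions):
        for prev, nxt in zip(actions, actions[1:]):
            self.follows[prev][nxt] += 1

    def suggest(self, last_action, k=3):
        """Return up to k candidate next actions, most frequent first."""
        return [action for action, _ in self.follows[last_action].most_common(k)]

predictor = NextActionPredictor()
predictor.record_session(["open inbox", "read newest message", "reply"])
predictor.record_session(["open inbox", "read newest message", "archive"])
print(predictor.suggest("read newest message"))   # e.g. ['reply', 'archive']
```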

Discussion
I thought that if this technology made it into all of the current web browsers, and it was accessible to all blind users, then it would be super successful. I can't imagine how long and frustrating it is to do even a task as simple as checking your email when you're blind. So, a system that records the actions you do most often and predicts (with 75% accuracy) the action that you want to do would let you check email, etc. much faster. The main problem would be whenever you want to go to a new website or navigate a Flash interface, where the system works much more slowly (or not at all, in the case of the Flash interface).

Sunday, February 14, 2010

The Inmates are Running the Asylum (Chapters 1 - 7)



Comments

Summary
The Inmates are Running the Asylum, by Alan Cooper, is a book that promotes "interaction design" for technological devices and software so that the user is not confused or made to feel stupid by the complexity of the technology. He makes a few important points (just Chapters 1 - 7):
  1. Any device combined with a computer is still a computer; the complexity and the anti-human behavior of the computer will dominate.
  2. Because of this, we need to promote the "partnering of interaction design with programming". Interaction design is different than interface design; it is not just the design of an interface that is used to communicate with the computer, but the way that the user and computer interact with each other.
  3. Cognitive Friction is the resistance humans feel toward the complex, ever-changing systems of rules that computers and technology follow. Interaction design aims to reduce this cognitive friction so that everyday users feel more comfortable.
  4. A clear distinction is made between user and programmer. The programmer is more apt to design the system to make their job of programming easier and in a way that they feel is easy to interact with. What they fail to see is that users do not think the way that they do, and the programmers are incapable of putting themselves in the shoes of the user.
  5. Interaction design and programming have to be done by different people; it is nearly impossible to design for the irrational and emotional world of humans and simultaneously design for the deterministic and rational world of computers.
In these first seven chapters, Cooper presented the problem of software that is too hard to use, and started to explain the solution of interaction design. His main point was that interaction design is necessary before programming and will save lots of money and time in software that is accepted by users and is successful in the real world. Programmers need to give up control so that their ideas and efforts in products will not be wasted.

Discussion
I thought Cooper presented some interesting ideas in the first seven chapters of this book. I agree with his ideas that interaction design should be done before programming; without a clear vision of how the user will react to and interact with the software, it's hard to keep low-level implementation details from filtering up into the interface and confusing the user. But, I don't feel that it is mostly the programmers' fault, or that all programmers have their heads stuck staring at their monitors so that they can't understand the interaction of everyday users. I think the problem sits with management, marketing, and economics more. The managers demand a certain functionality as soon as possible, and as the biggest time expense is the base programming, the engineers have to hit the ground running. If given time (and if more managers realized that design is not a waste of time or budget but instead a way to make sure your product is received well by the masses), then programmers could easily help make design decisions and produce a program that users like. I just feel that the time constraints put functionality first instead of usability. Interaction designers are important even after the main design phase, though, because they can work concurrently with the programmers to make sure that the project is going along at the correct rate and is still usable. Now that Cooper has laid down the problem, I am looking forward to the rest of the book to see what his solution is.

Wednesday, February 10, 2010

Autism Online: A Comparison of Word Usage in Bloggers With and Without Autism Spectrum Disorders

Comments

Summary
In Autism Online: A Comparison of Word Usage in Bloggers With and Without Autism Spectrum Disorders, the researchers present a study in which they analyze the language in blogs written by those with Autism Spectrum Disorders and compare it to blogs written by people without the disorder. People with autism and other related disorders primarily have problems with social interaction, specifically with communication and face-to-face interaction. The web is a place where those with these disorders can feel comfortable and free of the pressures of real-time personal interaction, and can spend time thinking about how they want to communicate. It gives them a place to be themselves and interact without fear of being judged. The researchers wanted to see if there was a significant communication difference in text on the internet between those with autism and those without the disorder, similar to the difference seen in face-to-face interaction. After analyzing 50 blogs from those who have autism (along with other requirements such as age, etc.) and 50 from those who did not have the disorder, the researchers found that there was not a large difference between the blogs in any category (social, melancholy, ranty, work, metaphysical). The only difference was that the variability between blogs in the "social" category was much higher, which can probably be explained by the lack of social interest that is common in those with Autism Spectrum Disorders. The conclusion was that because of the lack of difference in communication patterns, blogs and the web in general are an appropriate place for those with these disorders to interact socially and feel comfortable, mostly because the requirement for fast mental processing of social cues is taken away.

Discussion
It seemed really great to me that those who live with social and communication disorders can have a place to interact with others without feeling the pressure of failure. It means that those with the disorder have a place to vent the feelings of anger and frustration that often come with autism, and it can possibly be a way for them to interact with the public and live a more independent life, having jobs at home that require internet communication and working through email and blogging. Hopefully the interaction they have online can directly help them in their physical social interactions as well.

(Perceived) Interactivity: Does Interactivity Increase Enjoyment and Creative Identity in Artistic Spaces?

Comments
William

Summary
In (Perceived) Interactivity: Does Interactivity Increase Enjoyment and Creative Identity in Artistic Spaces?, by researchers at Cornell University, a study was presented to see how the interactivity of an art exhibit and the relative enjoyment of the viewer are related, and whether interactivity makes a person see themselves as more creative. Interactive art, in general, is an exhibit in which the viewer's actions can affect the way things are displayed, played audibly, or moved around, giving the viewer the sense that they are in control of the art and creating something themselves. The researchers performed an experiment that compared enjoyment of an artistic experience between groups that interacted with the system and groups that did not. The study took place in a small music studio with a system that was set up to receive interactions from viewers using a Wii remote. One group (the no-interaction group) only listened to a pre-recorded session of music, and the other group (the interaction group) used the Wii remotes to change the way the music and sound effects were played. After the study, the researchers asked each participant: Did you enjoy the exhibit? Was the exhibit interactive? Did you feel more creative after interacting with this exhibit?

Their basic findings were that interactive art is more enjoyable for the viewer than art in which you have no control. But, the enjoyment of the exhibit depended on the perceived interactivity of the user, as many of the users in the no-interaction group rated the exhibit as being interactive (they were not told any information about how the music was produced). In this way, their first hypothesis was confirmed: interactive exhibits increase the enjoyment of the user. But, their second hypothesis was not confirmed: users that interact with an art exhibit do not have any changes in the perceived creativity of themselves.


Discussion
I thought this was an interesting study, even though the results seemed a little obvious. The exhibits at art museums, or museums in general, are always more fun if there are controls and things you can do to alter what is being shown. People in general are very hands-on. But, I feel like interaction in a pure "art" exhibit is a little useless. Art museums are usually meant to showcase the creativity and vision of the artist, and if the audience can interact with and change this art, then who is to say that it is the artist's creation anymore? I guess the idea of the interactive design is original to the artist, but I don't see this interactivity replacing traditional sculpting, painting, or music, because those forms of expression are meant to let the artist show their feelings and ideas.

Thursday, February 4, 2010

Learning from IKEA Hacking: “Iʼm Not One to Decoupage a Tabletop and Call It a Day.”

Comments

Summary
Learning from IKEA Hacking: "Iʼm Not One to Decoupage a Tabletop and Call It a Day" is about a culture of Do-It-Yourself (DIY) supporters, specifically focused on "IKEA Hacking", where people take existing IKEA products and furniture and modify them to fit their needs. The process is all about creativity and expression: a way for the user to change the product to be more personal to them instead of a mass-manufactured piece of furniture that half the nation already owns. Many liken the process to code hacking: you change a piece of code or exploit its weaknesses to achieve some goal that you may have. The IKEA hacker takes the weaknesses in these products and forms a new product that is theirs alone.

The article was specifically about how online websites and services strengthen the activities of the DIY community, and more specifically of IKEA hackers. These people receive recognition online for their work and feel better about themselves and their creativity. Many of the creations are deeply personal to the creator; for example, one man says his creations are inspired by his children and remind him of them. The online DIY videos on websites such as Instructables.com continue to strengthen the community and provide access points for those new to the practice. The researchers concluded that the internet will become more closely integrated with the material world as time goes by, and the online activities of IKEA hackers support this claim.

Discussion
I thought this was an interesting idea: take something that has been mass-manufactured and represents the loss of individuality, exploit its weaknesses (and easily-constructed parts), and form something new that is not only functional for you but also an expression of your creativity. I thought it was cool when they made the analogy to "an anarchic event"; in some way, hacking this IKEA furniture made the hacker feel like they were "sticking it to the man" or something. It was definitely interesting, and I'll give props to anyone who would take a normal chair and turn it into a chair for a gynecologist's office (the GYNEA chair).

Team Analytics: Understanding Teams in the Global Workplace

Comments

Summary
In Team Analytics: Understanding Teams in the Global Workplace, the authors address the issue of communication problems between members of a distributed team. When team members work at other companies, or at other locations of the same company, it is often hard to form a visual image of who you are talking to, or to keep time zones, etiquette, contact information, and calendars all in your head. Existing systems for looking up directory information only allow viewing one directory entry at a time, so you cannot look at your whole team at once, and it is also impossible to see the team's structure from this information. Team Analytics is an online web application that allows all members of a distributed team, no matter their location, to connect and view information about the team. The application has a few important parts:
  1. A picture gallery of all team members, so each member can know who they are talking to.
  2. An organization chart that shows visually the relationships between each of the members. This information can help a member determine what the communication style is within the group, whether formal or informal.
  3. An attribute pie chart that shows the relative numbers of people in each division within a group.
  4. A "timezone pain" chart that shows the location of each person in the group, and what times are good times to call another group member based on the current time in their location.
  5. A "bizcard" section, which shows more personal information about each member, along with contact information and their picture.
After user studies with the product, which was tested in a large global corporation, users said it really helped them visualize who they were speaking to. It helped them coordinate conference calls between members in different countries, and the pictures in particular helped group members keep track of each person they were talking to. In all, the program was a success.
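
The paper does not spell out how the "timezone pain" chart is computed, but the underlying idea (finding hours that fall inside everyone's working day) is easy to sketch. Below is a minimal, assumed version in Python using the standard zoneinfo module; the team members, time zones, and 9-to-5 working hours are placeholders of mine, not data from Team Analytics.

# Sketch of the "timezone pain" idea: for each hour of a given UTC day, check
# whether it lands inside every member's assumed 9:00-17:00 local workday.
# The team list and working hours are placeholders, not data from the paper.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

team = {
    "Alice": ZoneInfo("America/New_York"),
    "Bob": ZoneInfo("Europe/London"),
    "Chen": ZoneInfo("Asia/Shanghai"),
}

def good_call_hours(day_utc, team, start_hour=9, end_hour=17):
    """Return the UTC hours on day_utc that fall in everyone's workday."""
    good = []
    for hour in range(24):
        slot = day_utc.replace(hour=hour, minute=0, second=0, microsecond=0)
        local_times = [slot.astimezone(tz) for tz in team.values()]
        if all(start_hour <= t.hour < end_hour for t in local_times):
            good.append(slot)
    return good

if __name__ == "__main__":
    slots = good_call_hours(datetime(2010, 2, 4, tzinfo=timezone.utc), team)
    if not slots:
        # No overlap at all: exactly the "pain" the chart is meant to show.
        print("no hour falls inside everyone's working day")
    for slot in slots:
        print(slot.strftime("%H:%M UTC"))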


Discussion
I thought this seemed like a pretty good program that hopefully will be on the web soon for companies to use. I know that I have a visual memory, and I remember faces very well. If I had to be in constant contact with many group members overseas, I would have a hard time telling all the voices apart. Having the pictures and the information about each person would definitely help me form mental pictures of each member, so this would help me out a lot! And the timezone feature is really important. I had never thought about how hard it would be to have a conference call between members in many different countries without one person being on the phone at a time when they're usually asleep! Hopefully this tool would help them coordinate the call so everyone joins at a comfortable time.

Wednesday, February 3, 2010

An Enhanced Musical Experience for the Deaf: Design and Evaluation of a Music Display and a Haptic Chair

Comments
coming soon...

Summary
In An Enhanced Musical Experience for the Deaf: Design and Evaluation of a Music Display and a Haptic Chair, the researchers explained a new way for the deaf to have a fulfilling musical experience. Music is not experienced only through the audible sounds; it also manifests itself through vibration of the floor and walls, visual effects, and the movements of the artists (in live settings). While the deaf cannot hear the actual sounds of the music, they look to these other channels to enjoy it. The researchers set out to develop a system in which the deaf can enjoy the vibrations and visual phenomena of music and get as close to actually hearing the music as they can. They found that about half of deaf people have never had any sort of musical experience before, and so they hoped to introduce these people to a new form of entertainment and musical fulfillment.

The system consists of two parts: a visual system that translates pitch, tone, tempo, and other qualities into visual pictures, and a haptic chair that vibrates with the beat and intensity of the music. The visual system took the MIDI representation of the music and converted it to visual form using XML and Flash (ActionScript 3.0). Different instruments were shown in different colors, and as they played notes, different visual cues appeared on the screen. The haptic chair was a chair from IKEA with two contact speakers attached to it. The user would sit in the chair with almost all of their body in contact with it, and as the music played, they would feel the vibrations throughout the chair.
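
The actual display was built in Flash, but the core mapping from MIDI events to visual cues is simple enough to sketch in any language. Here is a small, assumed version of that mapping in Python; the color table and layout rules are my own guesses for illustration, not the authors' actual design.

# Minimal sketch of a MIDI-note-to-visual-cue mapping in the spirit of the
# paper's display: each instrument (MIDI channel) gets a color, pitch sets
# vertical position, and velocity sets the size of the cue. The palette and
# layout rules are assumptions, not the authors' design.

# Assumed palette: MIDI channel -> display color (channel 9 is percussion).
CHANNEL_COLORS = {0: "blue", 1: "red", 2: "green", 9: "gray"}

def note_to_cue(channel, note, velocity, screen_height=600):
    """Turn one note-on event into a description of a visual cue."""
    color = CHANNEL_COLORS.get(channel, "white")
    # Higher pitches appear higher on the screen (MIDI notes run 0-127).
    y = screen_height - int(note / 127 * screen_height)
    # Louder notes draw bigger circles.
    radius = 5 + velocity // 8
    return {"color": color, "y": y, "radius": radius}

if __name__ == "__main__":
    # A few hand-made (channel, note, velocity) events standing in for a
    # parsed MIDI stream.
    for channel, note, velocity in [(0, 60, 80), (1, 72, 120), (9, 38, 100)]:
        print(note_to_cue(channel, note, velocity))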

After user studies with different types of music, most of the deaf participants said the experience felt no different from how they imagined actually listening to the music would feel. Many said that if they could hook their own music up to it, change the visual style of the display, and use a hearing aid in conjunction with it, they would very much like to have the system in their home.


Discussion
I thought this was really interesting because I have always felt there is so much emotion in music. Lots of emotion comes from the performer: their facial expressions, their body movements, the reaction of the audience... all of that is visual. And having a visualizer that shows the frequencies and notes, along with a chair that vibrates in place? That seems pretty close, or at least as close as you can get, to actually listening to music. This technology is really promising if they can develop it and manufacture it cheaply, and I wouldn't be surprised if it was on the market soon, targeted not only towards the deaf but also towards those who can hear but want to experience their music in a different way.

PenLight: Combining a Mobile Projector and a Digital Pen for Dynamic Visual Overlay

Comments
coming soon...

Summary
PenLight: Combining a Mobile Projector and a Digital Pen for Dynamic Visual Overlay, a collaboration between Autodesk Research, Cornell University, and the University of Maryland, describes a digital pen with lots of extra functionality. The researchers were looking to resolve one of the important design points made by Norman in The Design of Everyday Things: the importance of feedback. Current digital pens can capture your writing using motion sensing and tiny cameras when you write on special paper, and the strokes can later be converted to text or saved on the computer for easy use. But besides the physical ink the pen produces, there is little useful visual feedback about what you have written, and there is almost no feedback at all (except for some audio) when navigating through the pen's menus, which can be frustrating when you're trying to make sure your work gets saved. The researchers came up with a prototype idea for the "PenLight", which would use a mobile projector (the kind currently being added to cell phones and small devices) to project useful images of the menus, and of your drawings and writings, directly onto the paper. The small projector would give constant feedback so you always knew what you were doing.

The main ideas for the PenLight are:
  1. Three different layers in 3D space that can be written on: the surface, hovering right above the page, and higher (the spatial layer). Different menus can be shown for the different layers, and the distance from the pen to the paper is used to determine the current layer that is being viewed.
  2. The user can immediately see the menu options and choose them with appropriate visual feedback.
  3. Moving the pen around vertically and horizontally in 2D space parallel with the plane of the paper will show you different parts of the image stored in the pen (as if viewing through a movable window).
  4. Easy input with lots of applications.
The prototype they made consisted of an existing digital pen, a magnetic 3D tracking device attached to the pen, and (as mobile projectors are yet to be completely developed) an overhead projector simulating the projection from the pen. This system was able to completely emulate the PenLight's features as explained above.
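
The paper describes the layer selection (idea 1) and the movable-window behavior (idea 3) only at a high level, so here is a small sketch of how those two mappings might work, assuming a tracker that reports the pen's (x, y, z) position in millimeters. The thresholds and window size are made-up values for illustration, not numbers from the prototype.

# Hypothetical sketch of two PenLight behaviors: choosing the interaction
# layer from the pen's height above the paper, and panning a virtual "window"
# over a larger stored image from the pen's (x, y) position. All thresholds
# and sizes are made-up values, not taken from the paper.

def layer_from_height(z_mm):
    """Pick the interaction layer from the pen's height above the paper."""
    if z_mm < 2:
        return "surface"   # pen touching / writing on the paper
    elif z_mm < 60:
        return "hover"     # hovering just above the page
    else:
        return "spatial"   # held higher, in the spatial layer

def viewport(x_mm, y_mm, window=(200, 150), document=(800, 600)):
    """Map the pen's position on the page to a window into the stored image."""
    # Clamp so the window never runs off the edge of the stored document.
    left = max(0, min(int(x_mm), document[0] - window[0]))
    top = max(0, min(int(y_mm), document[1] - window[1]))
    return (left, top, left + window[0], top + window[1])

if __name__ == "__main__":
    for x, y, z in [(10, 20, 0.5), (300, 200, 25), (700, 500, 120)]:
        print(layer_from_height(z), viewport(x, y))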

The main application that the researchers gave was in the architectural design field, as architects still primarily work with physical schematics and drawings. Features for this program included:
  1. Virtual Ink: you can write on layers within the pen's memory (directly overlaid on the paper) virtually without writing on the paper physically.
  2. You can trace over an object on the paper and then move the object to other locations.
  3. The pen can project a virtual guide for tracing to help with drawings.
  4. The pen can overlay different content depending on the object on the paper; for example, it could overlay the electrical wires and water pipes over the drawing of a building.
  5. You can overlay computations of distance, etc. on top of the schematics.
  6. You can initiate a 3D walkthrough of the building by drawing a 2D path.
  7. Copy/Paste from the physical drawing.
  8. Searching through the physical drawing for an object or number.
  9. Displaying the 3D building and cutting out 2D cross sections.
The prototype they created can already do all of the above and more.


Discussion
I thought this was one of the coolest papers I have ever read. The ability to see the menus projected on the paper in front of you is already pretty cool, but also being able to draw a walkthrough path on the paper and have the system take you through the building in 3D? That's pretty amazing. Just the ability to leave extra notes about the content in separate layers without marking all over the paper is an amazing feature, and being able to wirelessly sync your new annotations to another architect's desk is great. I think this device (if the mobile projector can be developed and mounted on a pen soon) will reinvent the modeling and computational architecture field, and make designing buildings a much easier task.

Sunday, January 31, 2010

The Design of Everyday Things, by Donald A. Norman

Comments

Summary

The Design of Everyday Things, by Donald A. Norman, is a book focused on good design. In it, he takes examples of bad design from the everyday world, breaks each situation apart into the components that went wrong, and explains how to fix the object and how to design everyday objects that are easier to use. He adopts a user-centric view of the world and stresses that designers should put themselves in the shoes of the user and make design choices based on what would help them the most. Below are the major points that I thought were the most important design principles he discussed:
  • Visibility: Make all aspects of the device visible and easy to see, so the user can look at the device and immediately understand what each part does.
  • Good Mappings: The visible parts of the object should map naturally to their purpose and the way they can be used. For example, light switches are placed in close proximity to the light, or moving a handle forward moves the robot forward as well.
  • Feedback: After a user does an action with the device, there is immediate feedback about the result of the action so they know if their action was successful or not.
  • No Arbitrary Actions: Make it obvious why the user has to do an action; if they do not understand why they are doing the action, then it seems arbitrary and is extremely hard to remember.
  • Use Affordances and Constraints: Objects have affordances in their design which easily and naturally explain how they should be used. For example, a button "affords" pushing. Constraints in design restrict the ways the device can be used. For example, if the battery is not supposed to be taken out, you should design the device so that removing it is constrained and not possible.
  • Knowledge in the World: Put information about the device in the world, and do not require the user to memorize all aspects of the device to be able to use it.
  • Reversible Actions: Any action that may harm the device or allow the user to delete or lose all of their work or data should be reversible, or there should be considerable warning before the user can complete the dangerous operation.
  • Design for Error: Think like you are the user, and take precautions in the design to eliminate errors or allow for them to be easily reversed.
In all, the main point that Norman was trying to get across is that a device with good design needs no explanation or instruction manual. The way the device is supposed to be used should be completely apparent from exploration, or even from just looking at it. He urged designers to think like the user, put the user's needs first, and not focus completely on aesthetics and winning design awards.
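
Norman's "Reversible Actions" principle maps directly onto a familiar programming idea: keep enough state around that a destructive action can be undone, instead of only warning the user about it. As a toy illustration (my own example, not one from the book), here is a minimal undo stack in Python:

# Toy illustration of the "Reversible Actions" principle: every destructive
# edit is recorded so it can be undone, rather than being unrecoverable.
# This is my own example, not something from the book.

class TextEditor:
    def __init__(self, text=""):
        self.text = text
        self._history = []  # snapshots taken before each change

    def delete_range(self, start, end):
        """Destructive action: remove text, but remember how to undo it."""
        self._history.append(self.text)
        self.text = self.text[:start] + self.text[end:]

    def undo(self):
        """Reverse the most recent change, if there is one."""
        if self._history:
            self.text = self._history.pop()

if __name__ == "__main__":
    editor = TextEditor("design for error")
    editor.delete_range(0, 7)   # oops, deleted "design "
    print(editor.text)          # -> "for error"
    editor.undo()
    print(editor.text)          # -> "design for error"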

Discussion

Reading the beginning of this book, I was immediately drawn in by the insightful comments about how these everyday objects that we use all the time are designed poorly. It opened my eyes, and I started to realize how much time I spent misusing devices and dealing with bad design when I should have been completing tasks with the device. I agree with all of Norman's points, the most important being constant feedback about the results of your actions. But I disagreed with his criticism of designers and the design process in general. Designing a product that is efficient, cheap, easy to use, nice-looking, and marketable is an extremely hard process. While it is important to always have the user in mind, it is also a giant task for the people doing the low-level work (mechanical designers, computer programmers) to visualize how the device will be used on an everyday basis instead of focusing on their next deadline for feature delivery. The process takes not only programmers and designers, but also managers who focus on the user.

While the book may have been relevant to readers back in the 80s when it was written, it is not as applicable now. Many products, especially computer systems and PCs, have changed so much since that time and are now much more user-friendly and accessible. The problems he had with telephone systems are mostly gone (although I'm sure he would have lots to say today about cell phones). It seems that designers in today's world are starting to understand this, and hire quality assurance testers, usability testers, and user-interface experts to make sure their products are usable before they start to sell them. But not all designers do that. With increasing technology, designers feel like they have to add more and more features, and it comes at a price: only the most advanced users, and the younger users who grew up with mobile devices and innovations like the ones released today, can truly enjoy them and use them correctly. Overall, it was a nice read, even though he really was extremely angry at computer programmers and systems designers!