Project 2 – The Button & Do What I Mean


Timeline

The Button presentations in-class February 14th

Documentation due online February 17th

Do What I Mean presentations in-class March 2nd

Documentation due online March 3rd


Abstract

The button is a relatively modern innovation, with its consumer roots deeply seated in visions of convenience and ease. As we saw in several of the Masterworks in Project 1, the button was seen as a way to initiate myriad actions on our behalf: preparing food, cleaning the home, and much more. The pushbutton phone heralded a new age of contemporary telephony, freeing us from the tedium of dial phones. To this day, buttons are touted as highly convenient, with modern cars, as an example, using them to enter the vehicle and, more recently, to free us from having to put a key in the ignition switch and turn it.

Those earlier buttons tended to co-locate the controller and the controlled object, and even in instances where that was less the case, the button was nearby and typically wired to the device. Not only was this a technical necessity, it also created a situation where the button was usually viewed as an inseparable part of the thing being controlled and integral to the experience. There are exceptions, like some of our entertainment systems with their remotes (both those supplied with the device and aftermarket universal ones), but most other items in the home included a built-in UI of some sort. The washing machine has its UI, the dryer has another, the microwave has a UI, the espresso machine has a UI, the toaster oven has a UI, and so on through the bathroom heater, the electric toothbrush, the water heater, and the door. Highly restricted by cost concerns, most of the UIs on these devices are not very sophisticated. But is there a missed opportunity here? Are there lots of missed opportunities?

This situation where interactions happen on a device, associated with the thing being controlled (and obviously a branded experience), is less and less the norm. We can issue commands at a distance, and a button can trigger a whole cascade of actions. This is a very powerful development, but it can also mean that designers may have much less control over the overall experience. We may turn lights on and off with our mobile phones, or adjust thermostats from miles away, using a handheld device never intended for this purpose, or by using a vocal command to an external control, maybe using software and networks from other providers, with little attention paid to the experience.

This is particularly true with the Internet of Things, and may well continue far into the future, but does this need to be the case? If a designer crafts more parts of the user experience, designs new forms of control around emerging paradigms (voice and gesture come to mind from Project 1 examples), or even pays attention to how a command plays out, there’s opportunity to create a richer and more fulfilling experience of issuing commands, or to orchestrate a multi-sensory response for the users.

Beyond the case of explicitly invoking actions, what other opportunities are present when we relax traditional notions of how things should behave? What might your home or things within it do if an occupant is known to be arriving? What about your studio, or an office, or a coffee shop, or a store, or a traffic light? What if someone’s car or bike has entered the driveway, or the train has made the station? Opening the door, entering a room, within sight, within reading distance, immediately adjacent, touching … do any or all of these easily-sensed events (both of people and of other technologies) represent nascent user-experience opportunities?

This two-part assignment will engage us in ways of re-thinking how our technologies configure our lives, and how our lives configure our technologies. We’ll be thinking about ways that the world could be vastly different, and prototyping new interactions with these reconfigured worlds.


Activity

The first bite of this project will be to really concentrate on the act of invoking a command. What is a meaningful thing that you want to make happen in response? What should this FEEL like? How should it LOOK or SOUND? Is there an emotional component? This may seem like a very constrained activity, but use that to your advantage. We will be using a variety of simple technologies to trigger actions in the world … your project could use wireless buttons, voice, or even your phone to create a change in the external world (sometimes invoking external things with Zapier or IfThisThenThat (IFTTT) recipes). Don’t worry, we’ll work through the technicalities together.
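To make the trigger mechanics a little more concrete, here is a minimal sketch of the kind of thing we mean, written in Processing (which comes up again in the second phase). It fires an IFTTT Webhooks request whenever a key is pressed; the event name and Webhooks key below are placeholders, and your own project will very likely route through different hardware and services entirely.

```
// A rough sketch, not a finished design: pressing any key in this window
// fires an IFTTT Webhooks event over HTTP. The event name and key are
// placeholders to be replaced with your own from the Webhooks service.

String eventName = "button_pressed";      // hypothetical event name
String webhooksKey = "YOUR_WEBHOOKS_KEY"; // personal key from the IFTTT Webhooks page

void setup() {
  size(200, 200);
  background(40);
}

void draw() {
  // nothing to animate; the sketch just waits for keyPressed()
}

void keyPressed() {
  String url = "https://maker.ifttt.com/trigger/" + eventName + "/with/key/" + webhooksKey;
  String[] response = loadStrings(url);   // loadStrings() performs a simple HTTP GET
  if (response != null) {
    println(join(response, "\n"));        // IFTTT replies with a short confirmation
  }
  background(0, 180, 0);                  // crude on-screen feedback that the command went out
}
```

Nothing about this is prescriptive: the same press could just as easily come from a dedicated wireless button or a voice assistant, and the HTTP call could go to Zapier instead. The point is simply that one small gesture can set a whole recipe in motion, and the interesting design questions live around that moment.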

So, think about what it means to issue a command. What are the components? Think about causes and effects. What action do we take, or what gets sensed? Do we know that something will happen? Can we anticipate what will happen? How do we invoke the command? How do we make that consistent with a brand? What kind of feedback do we give to whoever issued a command? Do they need anything more than the effect actually happening? What if they make a mistake? What if the wrong thing happens?

Start simple … there are lots of moving parts here, and plenty to do. The second part of this assignment will take your work to another level (with more inputs and different feedback modes and effects, if that makes sense for your domain).

The look and feel of the user experience you create is critical to conveying both how someone should proceed in the interaction and the story of the brand. When I use my iPhone to control a drone, what’s that like? When I use an Amazon Echo to control a Netflix or Pandora stream, do I know what’s happening behind the scenes? Do I feel like I’m using an anonymous app, or does it look and feel like part of the entertainment system somehow? Does it matter? Think about how your work differentiates itself from other products and brands, and look for opportunities to push buttons (so to speak).

Finally, what is all this FOR, and why should we care? So much of what happens in the technology sector these days is about making life more convenient and fast, but is that what life is supposed to be about? See if you can’t, explicitly or implicitly, mix in more meaningful reasons to do things. Try to serve an alternative master. Do things for the right reasons, in places that really matter, for people who are deserving of some care and attention. That probably means thinking beyond yourselves, talking to people who aren’t in your immediate circles, and leaving your everyday comfort zones. The final deliverables for Part One will be the following:

1. A compelling pitch for your developed idea. What did you choose to work on? Does it address an existing pain point? Does it give us something we’d never thought of? Is it new, or are you piggybacking on an existing product or service? Tell us why the modality of your command is appropriate for your UX and brand. Who would its target market be? (persona) What is its brand story? (mood board) Why is it necessary? (pitch) Why is it appropriate?

2. A storyboard detailing the interaction. How does this play out in somebody’s life? Exactly what does it do?

3. You will need to demonstrate, in as high-fidelity a form as you can manage (this will vary by individual and by the chosen technology suite), how your experience will work. In some cases this will be a fully operational system, and in others it will involve some Wizard-of-Oz work behind the scenes, but it should not just be handwaving and a story about how you want it to work.

4. All of your presentation materials should be on-brand and presented in an appropriate format/size for a group review (great visuals, sound, tactile elements, lighting – pay some attention to the details).

The second phase of the project will push your focus further, into an ecosystem. Sticking with the same (or a closely related) general concept, you’ll go further into giving the user control, or venture into the realm of predicting appropriate actions (and automating them). You’ll have more freedom about how and where to intervene, but freedom of this sort can make things harder, because the space of opportunity is bigger and you’ll have to be able to defend your choices. Having an excuse about control modalities can be a shelter from having to explain why your various touchpoints don’t feel perfectly brand-consistent or why your graphics or audio are somehow less delightful than they could be; limited interaction modalities could be a blessing compared to the expectations some folks now have when there’s a touchscreen display in front of them. No matter how you choose to proceed, you can often buy your way into making things easier, with specific controllers, or an Echo/Home, or a hub of some sort. We’ll show some examples of various possibilities, and you all know how (and/or can work with tutorials, the Hybrid Lab, and your more technology-savvy peers) to make things happen with Processing and Arduinos, or Keynote and AirDisplay, or whatever is sufficiently convincing for your project’s goals. You are encouraged to adapt anything and everything to enable a wide array of inputs and outputs. It is up to you to get some semantics into your project, and to marry inputs and feedback in both parts of the assignment such that the interaction design has simplicity, coherence, consistency and elegance.
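As one illustration of marrying a physical input to immediate feedback, here is a small Processing sketch that listens to an Arduino over USB serial. It assumes, purely for the sake of the example, that the Arduino prints the line "PRESSED" at 9600 baud whenever its button is pushed, and that it shows up as the first serial port on your machine; adapt both ends to whatever hardware you actually use.

```
import processing.serial.*;

// A hypothetical pairing: an Arduino prints "PRESSED" over serial when its
// button is pushed, and this sketch answers with simple visual feedback.

Serial port;
int lastPress = -10000;   // millis() timestamp of the most recent press

void setup() {
  size(400, 400);
  printArray(Serial.list());                        // list the ports so you can pick yours
  port = new Serial(this, Serial.list()[0], 9600);  // assumes the Arduino is the first port
}

void draw() {
  while (port.available() > 0) {
    String line = port.readStringUntil('\n');
    if (line != null && line.trim().equals("PRESSED")) {
      lastPress = millis();
      // this is the moment to trigger the real effect: a lamp, a sound, a webhook ...
    }
  }
  // flash the window for half a second after each press, then settle back down
  if (millis() - lastPress < 500) {
    background(255, 200, 0);
  } else {
    background(30);
  }
}
```

Swap the serial line for a network message, or the flash for a sound or a motor, and you have the skeleton of most of the demos you might build; the hard part of this assignment is everything around that skeleton.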

Remember, novel forms of user interface and interaction design are an integral part of the user experience and the overall brand. A recognizable mode of getting something done (like a touch wheel for volume and scrolling) can telegraph what you’re using, who was responsible for it, and how to work with it. If you can develop effective ways of interacting with devices of this sort, you can end up with a sort of “branded physical interaction” that can speed your users along the curve of working things out and can also help your new interface design spread to more devices and settings. As you expand your efforts, be prepared to defend your choices. There are no absolutes here, no rights and wrongs, but you’re responsible for the big story that you’re creating. The final deliverables for this second part will be the following:

1. Real hardware, actually performing portions of your new control interaction at a reasonable level of fidelity.

2. The deliverables of the first phase, modified to suit this more complete scenario, your new device(s), and your expanded thinking. You’ll want to select your technologies such that you can illustrate how things would actually work.


Presentation

We will be presenting some initial thoughts and sketches about what you’re thinking of pursuing next Tuesday. Choose three domains to work up as possibilities, and we’ll discuss your ideas. You will be presenting your final work for the first half of the assignment in class on February 14th.

The second half will be brainstormed in sketches on February 16th and presented in final form on March 2nd – these are healthy chunks of time, so make sure you do a thorough and stellar job!

At all presentations, be prepared to share the libraries and tools you’ve discovered with your classmates and instructor. (Don’t be secretive! We should cooperate on technology and compete on content. After all, that’s the philosophy that led to all these resources being available to us in the first place.)


Documentation

We will be posting your progress and projects as part of the IFTF. Your documentation should include links to any code you write, lists of parts and pieces, and YouTube or Vimeo videos. Use Google Classroom to post your brainstorming and all of the presentation materials you use.


Evaluation

Evaluation on this project will be based on the ingenuity, creativity and beauty of your solutions. Oh, and generosity towards your peers.
