We work for our apps. That's backwards.

We design apps wrong. They’re supposed to help us, right? Make our lives easier, complete tasks for us, simplify things. But that’s not quite how they operate. They require us — users — to help them — our apps — complete the tasks we ask them to do.

Morning ritual

Consider your morning commute. Do you drive? Walk? Bike? Whatever mode of transportation, the routine is likely fairly consistent. You leave your home at roughly the same time each morning. You follow the same path each morning. You arrive at work at pretty much the same time each morning.

My morning commute includes a walk through parts of Boston, ending up in Back Bay. And every morning, five days a week, I stop at Starbucks before getting to work. I stop at the same Starbucks, at roughly the same time, and order the same drink. On my walk I place a mobile order via the Starbucks app so my drink is waiting for me.

These are the steps I take to place my order:

  1. Open the Starbucks app.

  2. Press the floating action button.

  3. Press the “Order” button.

  4. Select my saved drink choice.

  5. Press the store selector.

  6. Choose a different saved Starbucks location.

  7. Press the “Review Order” button.

  8. Press the Continue button.

  9. Press the “Order” button.

I documented these steps and screens in a Medium Series here.

9 steps. I must complete 9 steps for the Starbucks app to order me the same drink I order every day, from the same store, at the same time. Seems like I’m doing a lot of work to help the app help me.

We work for our apps. That’s backwards. They should work for us.

A better experience

What should happen? A thoughtfully designed app, meant to truly help users, would recognize behaviors and adapt to support them. As I approach the Starbucks from which I order every morning, the app should prompt me. It should proactively ask me if I want to get my usual. Something like this:

“Ken, I noticed you are approaching your regular Starbucks. Should I place your drink order so it’s ready for you?”

Wouldn't that be a way better experience? The app helping the user, automating repetitive behavior, instead of requiring the user to spell out what they want in exact detail, as if they hadn't done so 1,000 times before.

Make it happen

So how do we make that real? I suspect the app would need to know 4 things. And once all criteria are met, the app could prompt you magically.

It would need to know:

Location. The app would need to know where you are so that it can identify when you’re close to a Starbucks. Luckily all smartphones come with GPS.

Drink order. The app lets you save a favorite drink. Even better, it keeps a record of every drink you've ordered, so it can figure this out without even being told.

Preferred Starbucks location. Like your drink order, the app allows you to select a favorite store for convenience. And again, the app already records the location of every order you place. It can already discern your behavior.

Time of day. If you regularly order a drink at a particular time, it needs to recognize that. And again, there’s a record of every order you’ve ever placed.

Hooray! What the app needs to know — it already knows!
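To make the idea concrete, here is a minimal sketch of that four-part check, written in Kotlin and assuming nothing more than the order history and GPS access described above. Every name, threshold, and data shape below is a hypothetical illustration, not the Starbucks app's actual code.

```kotlin
import java.time.LocalTime
import kotlin.math.*

// Hypothetical data model: one row per past order, which the app already records.
data class PastOrder(val storeId: String, val drink: String, val placedAt: LocalTime)

// Haversine distance in meters between two latitude/longitude points.
fun distanceMeters(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
    val r = 6_371_000.0
    val dLat = Math.toRadians(lat2 - lat1)
    val dLon = Math.toRadians(lon2 - lon1)
    val a = sin(dLat / 2).pow(2) +
            cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) * sin(dLon / 2).pow(2)
    return 2 * r * asin(sqrt(a))
}

// Returns the drink to offer if all four criteria are met, or null if the app
// should stay quiet. The thresholds are illustrative guesses, not product decisions.
fun usualOrderToSuggest(
    history: List<PastOrder>,             // drink order, preferred store, time of day
    userLat: Double, userLon: Double,     // current location from GPS
    storeLat: Double, storeLon: Double,   // location of the most-ordered store
    now: LocalTime,
    minOrders: Int = 10,                  // enough history to call it a habit
    radiusMeters: Double = 400.0,         // "approaching" the store
    windowMinutes: Int = 30               // close enough to the usual ordering time
): String? {
    if (history.size < minOrders) return null

    // Preferred store and drink: simply the ones that appear most often in the history.
    val usualStore = history.groupingBy { it.storeId }.eachCount()
        .entries.maxByOrNull { it.value }!!.key
    val storeOrders = history.filter { it.storeId == usualStore }
    val usualDrink = storeOrders.groupingBy { it.drink }.eachCount()
        .entries.maxByOrNull { it.value }!!.key

    // Time of day: is "now" within the window around the median ordering time?
    val usualMinute = storeOrders.map { it.placedAt.toSecondOfDay() / 60 }
        .sorted().let { it[it.size / 2] }
    val inWindow = abs(now.toSecondOfDay() / 60 - usualMinute) <= windowMinutes

    // Location: is the user within walking distance of that store?
    val nearStore = distanceMeters(userLat, userLon, storeLat, storeLon) <= radiusMeters

    return if (inWindow && nearStore) usualDrink else null
}
```

If the function returns a drink, the app fires the one-tap prompt; otherwise it stays silent.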

This proactive approach can greatly simplify the ordering process. What takes 9 steps today can be reduced to just one:

  1. Would you like me to order your drink — Yes or No?

Much simpler.

Let’s all do better

I don't mean to single out Starbucks. They make a pretty good app. Their mobile ordering feature saves me about 10 minutes on my commute: the drink is ready when I arrive instead of my having to wait in line, pay, and then wait again for the drink. As I've said, I use the app every day.

But we can do better in how we design apps. Let's craft them so they truly support and aid the user, rather than remaining dumb tools that the user has to manage every step of the way.

UX portfolios suck

They are poor representations of what UX designers do

Moving from speech recognition to voice recognition

"Hello, computer."

"Hello, computer."

Conversational Design has (almost) arrived. Voice commands as an input method are everywhere. People use them to control their smartphones. Amazon Echo is a success. Speaking offers a speed of task completion that beats typing.

And right now it is easily undone by a toddler.

That toddler is my two-year-old daughter, Avery. She is a typical, boisterous, full-of-energy toddler. When I attempt to use voice commands – to draft a text message, search for a Doc McStuffins YouTube video through Apple TV, whatever – she invariably talks over me. Loudly. And then whatever device I'm using fails miserably.

Speech recognition software has gotten much better recently, and so we're starting to realize the benefits of speaking to our devices. We can all speak faster than we can type. This creates efficiency in how long it takes us to complete a task. But for this to work, it requires a quiet space.

I've been thinking a lot lately about how it will look (er, I mean sound) when we are all talking to our devices. It will get quite loud. The workplace will need to change. We will need more privacy to work - to speak with and to our computers. This may or may not be practical (workspace being a pricey expense). The open floor plans of most offices will be rejiggered out of necessity to accommodate conversational interfaces.

Out of the office - commuting, traveling, at home, wherever - I see two continuing problems with interacting with our devices. The first is societal. We frown on people talking in public, whether it be on the train, in an elevator, or waiting in line. We consider it rude when the individual vocally intrudes on the group.

Working on this post while commuting to work. Yes, I get to ride a ferry every day.

The second continuing problem is technology. Our devices need to identify not just the words being spoken, but who is speaking them. Until they can lock in on the user's voice to the exclusion of all other voices, the interface will continue to fail us. It needs to understand that I'm issuing commands and not be distracted by Avery practicing the Happy Birthday song.
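In software terms, that means putting a speaker-verification gate in front of the speech recognizer. The sketch below is purely hypothetical: the interfaces stand in for whatever recognition engines a device actually ships with, and none of the names correspond to a real API.

```kotlin
// Hypothetical components; real engines would implement these interfaces.
interface SpeechRecognizer {
    fun transcribe(audio: FloatArray): String             // speech recognition: the words only
}

interface SpeakerVerifier {
    fun embed(audio: FloatArray): FloatArray              // a voice "fingerprint" of a clip
    fun similarity(a: FloatArray, b: FloatArray): Double  // 0.0 (different) .. 1.0 (same voice)
}

class VoiceGatedAssistant(
    private val recognizer: SpeechRecognizer,
    private val verifier: SpeakerVerifier,
    private val ownerVoiceprint: FloatArray,   // enrolled once, during device setup
    private val threshold: Double = 0.85       // illustrative cutoff
) {
    // Only act on audio that sounds like the enrolled owner; ignore everyone else,
    // including a toddler singing Happy Birthday over the command.
    fun handle(audio: FloatArray): String? {
        val score = verifier.similarity(verifier.embed(audio), ownerVoiceprint)
        if (score < threshold) return null     // not the owner's voice: drop it
        return recognizer.transcribe(audio)    // owner's voice: now recognize the words
    }
}
```

The hard part is the verifier itself, but the design point is simple: the device should decide whose voice it is hearing before it decides what the words mean.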

That should be the goal. Getting devices to go from speech recognition to actual voice recognition. When that happens, maybe we'll have something.