Before I opine, I want to note that the organizers did a great job running the conference. It was a terrific experience and I'm already looking forward to next year.
In sessions I attended and during side conversations, several themes recurred as challenges to the discipline, with no clear solutions. Among them were:
Fault lines within the field. Several well-known experts called for the pendulum to swing back from "quick and dirty" usability studies to testing with statistically significant sample sizes. I was not surprised to find that these folks all work for large software companies with big budgets and long release cycles. Try working in an Agile development environment and you'll find little patience for these rigorous methodologies.
However, if you're a practitioner, you can't just dismiss these concerns. At first your business superiors may not question your methodology. But if you recommend changes based on a six-person usability test, and then find even worse results afterwards, it's too late to bring up statistical validity.
This divide goes deeper than just disagreement over best practices. Many of the newer generation of practitioners lack formal training in statistics, and either don't know what they're missing or don't consider it relevant. The more rigorous faction often considers their web-raised peers to be lightweights, not fit to work alongside them on "real" projects. I think it's best to be versed in the statistics, so that even if you decide to run quick-and-dirty tests, you can present them as such and provide the appropriate caveats.
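To make the sample-size tension concrete, here's a minimal sketch (my illustration, not from the post) of the widely cited problem-discovery model, P = 1 - (1 - p)^n, where p is the chance a single participant encounters a given usability problem and n is the number of participants. The specific p values below are assumptions chosen for illustration (0.31 is the average discovery rate often quoted from Nielsen and Landauer's studies); the point is that a six-person test reliably surfaces common problems but has little power against rare ones, which is the statistical caveat worth stating up front.

```python
def discovery_probability(p: float, n: int) -> float:
    """Chance that at least one of n test participants encounters a
    problem that each participant hits independently with probability p."""
    return 1 - (1 - p) ** n

# Compare a "quick and dirty" 6-person test against a 20-person test
# for common (p=0.31), occasional (p=0.10), and rare (p=0.01) problems.
for p in (0.31, 0.10, 0.01):
    print(f"p={p:.2f}: 6 users -> {discovery_probability(p, 6):.0%}, "
          f"20 users -> {discovery_probability(p, 20):.0%}")
```

Under these assumed rates, six users catch a common problem almost every time but catch a 1-in-100 problem only rarely, which is exactly the gap the "rigorous methodology" camp is worried about.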
Good design is the elephant in the corner. If you want to make a usability expert twitchy and uncomfortable, ask her how companies like Apple can create acclaimed, wildly successful, and usable products using top-down design without usability testing. For extra discomfort, follow up by asking why there's no evidence that user-centered design works.
An objective look at the most successful products and companies suggests there's huge value in good design. Good design is intuitively attractive to people -- they'll like it and they'll buy it, and brands that produce it acquire a positive aura that turns into money. We should be spending much more time talking about design processes and enabling great design. Yes, you can define good design. It's measured in sales!
Meanwhile, while good design can change markets, usability is incremental. It's difficult to see how good usability can truly result in innovative products and new categories. In an entrepreneurial age, usability practitioners risk being left behind.
Productizing usability. Okay, this wasn't a theme for most people, but I see it cropping up often. Web analytics vendors continue to push software to replace usability. At the conference, a slick Keynote Systems presenter efficiently marketed his company's remote unmoderated user testing application. It was the most polished effort I've seen yet to position a product as a replacement for usability experts. Unless practitioners can clearly and consistently articulate the value of the many tools in an expert's personal usability toolbox, they'll start to get squeezed by these persuasive sales pitches.
Taking all these problems together, I believe usability has peaked. The future for practitioners is to keep pushing on the broadest possible definition of user experience, encompassing all end-user touchpoints. We must keep expanding our skills and learning related areas -- marketing, CRM, and so on. We have to keep aware of the cutting-edge technologies under development. Soon classic usability will be only a small piece of what practitioners need to master.
With regard to "usability," it's important not to throw the baby out with the bathwater.
(I'm never quite sure what "usability" means in any given context... here I think you mean "can the user accomplish their task" while leaving out all the touchy-feely aspects of design)
Trying to trace the success of any venture back to "usability" is like trying to trace a tornado in Kansas back to a low pressure area in California 3 days earlier. Did it cause the tornado? Would the tornado have happened without it? There are so many factors involved with the success of a product that it's hard to tease any of them out... for example Apple is a marketing juggernaut. How much of the iPod's success had nothing to do with design and instead had something to do with Apple's ability to convince people that whatever they produce MEANS good design? I don't know. Apple doesn't know either, I'm sure.
Some statements that I think we can say with confidence:
1. Usability, by itself, is rarely enough to result in a successful product.
2. Products can be successful without being usable by any reasonable definition.
3. Having a usable product will increase its chances of success... but no one knows by how much in any given situation.
So what can we conclude? Being usable is better than being unusable. Just like being well-designed is better than being poorly designed. And being functionally stable is better than being buggy.
What really matters is return on investment. How much does it cost to improve usability versus improving the other factors in a product's success? And I don't think we're particularly close to being able to answer that question.
Posted by: Terry Bleizeffer | May 30, 2008 at 02:21 PM
Thanks for your thoughtful response, Terry. In the context of this post, I define usability to include methodological practice of user research and user-centered design techniques including usability testing, but feel free to provide your own definition. In fact, "why can't we all agree on what the definition of usability is" was another theme raised at this conference.
I think it's pretty damning that after thirty years, user-centered design methodology doesn't have any big tech successes to point to. Less damning but still significant is that statistically valid usability testing can be a lengthy and expensive process, and it's difficult to conceive how usability techniques can provide the same revolutionary ROI that great design can.
Posted by: Joshua Ledwell | May 30, 2008 at 04:19 PM
Interesting post...
I don't buy Jared Spool's claim that UCD has not been shown to work -- I agree with Terry's points above about it being just one part of the process. Surely one can point to many problems that were caught during usability testing of a product. How is that not success?
Regarding Apple, isn't the difference simply that they stand in for their users during testing? So they're still testing usability but not in the formal sense with external users. It works because they're demanding users and their target customers are people like themselves.
Posted by: Kevin Arthur | May 30, 2008 at 08:59 PM
Let's talk specifically about usability testing.
Has anyone "proven" that functional testing works? Most products ship with lots of unfixed functional defects. No product is perfect, and if a product ships with no known unfixed defects, it usually means the test team is so small that it doesn't know about the defects that do exist. Actually, let's make it simpler - regardless of whether a team KNOWS about its functional defects, most products ship with unfixed functional defects.
The question (and the art) is: At what point does finding and fixing more defects become less valuable than getting the product into the marketplace? Obviously there isn't one answer to that question. Consider launching an "agile" SaaS product as a beta (thereby using your customers as your testers), where a crash is no big deal and shipping updates is simple, versus updating a huge legacy banking application that millions of people use every day for mission-critical work. The answer to the question is very different in those two cases.
Now prove that functional testing works. It's a useless question -- just as useless as asking whether usability testing works. The interesting question isn't whether usability testing works; it's at what point fixing more design problems stops being worth the effort. And just like the legacy banking app scenario, the answer is different for every project.
Sometimes I think we're too smart and we try to make things harder than they are.
That said, people have been struggling with the answer to that functional test question for decades, so it's not like we're the only ones with this problem.
Posted by: Terry Bleizeffer | June 02, 2008 at 11:35 AM
Kevin: Catching problems isn't success on the scale that matters here. A top-down design process, including the key factors that Jared cites, has been shown to deliver revolutionary, commercially successful, great design. UCD has no successes to point to on a similar scale. Those key factors, BTW, are 1) a corporate culture that values design, 2) strong design vision from the top, and 3) site visits to customers.
Terry: I think our whole field needs to be less associated with usability testing, because it's expensive and time-consuming to do "right" (that is, get statistically significant results) and it's too incremental and reactive. I have a hard time imagining this equation working in a timely fashion:
mediocre design + good usability testing and iteration = great product release
On the other hand, this equation clearly has been effective:
good design + no usability testing = great product release
Absolutely it's true that updating a legacy enterprise app is a much different design and usability challenge than launching a new software-as-a-service app.
Posted by: Joshua Ledwell | June 02, 2008 at 05:04 PM
I'm a strange person to defend usability testing. As I've posted on my blog on several occasions (example: http://uxsoapbox.blogspot.com/2007/06/who-cares-about-finding-new-usability.html), I'm not a big fan of usability testing. I definitely agree with you that our entire field needs to be less associated with usability testing. I think spending time on user research and design is far better ROI than usability testing.
I just think this idea that no one has ever proven that product success can be traced to usability testing and therefore usability does not affect success is silly. Spool is presenting a false choice - should we focus on "well understood shared vision, frequent user feedback and a robust tool box of design tricks & techniques" or "design dogma, methodology and formal process"? He says the former is better than the latter. But that's like saying, "Mustard is better than ketchup, so we should choose mustard over ketchup."
Posted by: Terry Bleizeffer | June 03, 2008 at 11:39 AM