Familiarity testing: planning for pet peeves and the possibility of follow-up

When we become familiar with our surroundings, two seemingly contrary things begin to happen: we start ignoring certain annoyances – and we uncover new ones.

It happens in our everyday lives – we ignore the creaky floorboards of our house, but suddenly find fault in the loose cupboard doors; we become accustomed to one co-worker’s babbling just as we develop a first-rate annoyance with another’s.

It happens with our content management systems, too. The question comes down to, “How do we handle the immediate usability problems while still planning for future ones?”

Easy: we test.

And then, we test again.

Why We Test

It seems simple enough to say “we test so things aren’t broken.” But it’s more than that – it comes down to emulating the everyday tasks of those who will be using the CMS and anticipating the problems they’ll have.

Deane recently wrote a great blog post about how, thanks to the law of first impressions, an editor’s first encounter with a CMS helps to form an opinion – good or bad – that will influence the relationship between user and tool for the life of the system.

If we take that one step further, we – as CMS implementers, user advocates and champions for usability – must consider what an editor is going to let go (little issues, akin to the dust in the corners that, regardless of CMS, we are forced to work with) and what they will slowly begin to despise one or two years down the road.

A current project of ours involves four stages of testing – including client-side usability testing. At first, testing is devoted to securing the basics. As we move forward, we adapt the process toward present usability.

What comes from this is a familiarity with the project, and – with rep after rep after rep – patterns of annoyance. These are the issues of future usability that will pop up once a user has overcome the CMS’s learning curve.

Because, sure, inserting a link makes sense at this point, but what happens when the user has to enter twenty links? And has to do it three times a week?

Will it work? Can we anticipate any anger? Can we save the user from stabbing a ballpoint into his eye and calling it a day?

The Long-Term Annoyances

What’s difficult to accept is that, no matter how much we test, things will come up.

The simple fact is, every user is different. And every CMS is different. Even in the best case there are problems we haven’t discovered yet, because everyone will use the CMS in a unique way. Unforeseen issues are always in danger of bubbling to the top, leading a user to:

  • Become frustrated by extra steps
  • Develop bootstrapped shortcuts to overcome complicated processes
  • Shy away from sections of the system that possess friction
  • Change his or her normal (and efficient) workflow to adapt to an unintuitive interface
  • Grumble, shake, and, in the worst case, cry uncontrollably

We test so we can make sure the site is sufficiently functional. We test again to ferret out as many of a user’s potential annoyances as possible. But what we can’t test is which long-term annoyances will pop up six months down the line. It’s impossible. We don’t know the users, and we can’t anticipate the changes in technology that would cause these annoyances.

I’d suggest adding a new metric: actual editorial experience after months of use. In other words, making use of the most useful set of data: an editor’s real-life interaction and history with the very product we’re trying to test.

And we need to ask these questions long after we think we’ve finished the project.

So What Can We Do?

In a perfect world, testing for future usability problems would take a three-stage approach.

Stage One: Basic Testing

These are the simple things: does everything work, is it easy to understand, and will it leave the editor with a positive first impression?

Stage Two: Familiarity Testing

Once we’ve tested to the point that we ourselves are mind-numbingly familiar with the CMS’s usability, we begin taking note of the things that mildly annoy us. Anything that causes the slightest hitch for the testing crew will ultimately snag an editor much earlier and much harder.

Smooth these out as best as possible within the framework of the CMS, and help develop workarounds or best practices for an editor to use for issues that come up due to a CMS’s limitations. In essence, give the editor the bootstrapped shortcuts before they’re needed.

Stage Three: Follow Up

I know, I know. No one wants to keep tweaking a Web site for usability on a constant basis post-launch. It’s expensive for the client and it’s frustrating for the development shop.

There could be value in inserting a line item in the budget for a follow-up usability session with a group of editors at a set period after the CMS project has been delivered. As long as guidelines and expectations are set long in advance (hopefully preventing a laundry list of desired additional features and unnecessary tweaks), we’ll be given a valuable asset: the actual opinion of the person who’s using the site.

The process is simple: we would ask a select group of stakeholders – namely, those who have intimate knowledge of the CMS and use it on a frequent basis – what frustrates them about the way the CMS is set up.

They get satisfaction and an increased sense of customer service, we get further feedback on how we’re developing Web sites, and both sides come away happy.

Is it a dream?

It’s worth testing. As Web professionals, we often shy away from contact post-launch because the desire to tinker is so great. A site doesn’t need to “ship” – it’s always available for updating, and we’re the ones who will be asked to tinker. What we’re missing by shying away is the invaluable feedback – feedback that can often turn into frustration – that comes with constant use and familiarity.

With a strict set of rules, a focus on usability updates, and a pared-back group of stakeholders, both sides will benefit – which will only help to strengthen the relationship between clients and developers.

And if it’s worth testing, as we’ve seen so far, it’s certainly worth testing again. And again.

(Originally posted at the Blend Interactive blog.)