
10 steps to great UX testing


Evaluating the success of user experience strategies in a project requires a deep understanding of why decisions were made and what core goals motivated the project in the first place. Purpose is what provides context to data. Without that context, the data you review is always going to be clouded with your own personal bias and assumptions. 

It's why all of those impressive unauthorised redesigns of websites like Facebook that make the rounds every year mean absolutely nothing. Making something better is often based on opinion. I once worked with a user experience expert (self-proclaimed, of course) who would frequently try to guide major decisions with opinions rather than quantifiable data ('I don't like using drop-downs in navigation').

User experience strategy is selfless. It's not about you or me. It's not even about the stakeholder in the department asking you to improve something. It's about the organisation's clients. The customers that spend their hard-earned money on the company's products or services. 

User experience is about understanding their needs and assessing the best way to accomplish those goals with the resources currently available. You will never have enough time, enough budget or enough teammates with just the right experience to do something the way you may want to. So, how do we start this journey? Good. We're glad you asked.

01. Determine the purpose of the tests

[Image: Don't let trends dictate the success of your website – pay more attention to the business goals]

One of the most difficult aspects of effectively testing anything is understanding the purpose behind the request. Why are you currently being asked to test anything at all? In many cases the organisation only recently began to find value in user experience, likely driven by a past failure, but doesn't know where to begin. 

Failure is a powerful instigator. Within the framework of testing it's inherently positive and one of the most important aspects of what we're talking about here. It defines what needs to be fixed. It adds context and guidance. It's the bridge toward the success we're seeking. Failure can be recognised in its various forms: lost revenue, cancelled membership accounts, high bounce rates on landing pages, abandoned shopping carts, expensive marketing campaigns that don't convert or maybe a recent redesign that looks incredible but no longer generates as many leads.

We're going to let you in on a secret. It's going to make your job as a fledgling test facilitator better the moment you read it and accept it. Ready? The purpose of a test is not for you to determine. That's not your job. It doesn't matter what you want to test, nor does it matter how much better you think you can make that one thing. Being proactive is great but you need to rely on your team lead, manager or stakeholder on a project to set the purpose of a test. 

If one of those people cannot provide adequate guidance, here's a quick set of questions you may ask them:

  • What is the motivation behind wanting to conduct this test?
  • Who is the customer you are trying to reach (age, gender, likes and dislikes)?
  • What would you define as a successful result for this testing (any specific KPIs)?
  • Is there a set date for when the success needs to be realised? 

Once they can answer those questions you will have what you need to start.

02. Explore your tools

Chances are you will be using Google Analytics to capture and review data as it's currently installed on over half of all websites. In larger organisations you may run into New Relic, Quantcast, Clicky, Mixpanel or Adobe Marketing Cloud, among others. The platform doesn't matter as much as the strategy behind what you're doing.

If the organisation already has one of those solutions installed, you can jump in and look at data immediately. If not, it's worth installing one and collecting data for 30 days. If the website or application has hundreds or thousands of visitors each day, it's possible to start analysing data earlier than 30 days. Use your best judgement.
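If the organisation doesn't have one installed yet, the basic bootstrap is small. Below is a minimal sketch, in TypeScript, of the standard Google Analytics (gtag.js) set-up; the measurement ID shown is a placeholder, and the same principle applies to whichever platform you choose.

```typescript
// Minimal sketch: bootstrapping Google Analytics (gtag.js) on a page you control.
// "G-XXXXXXX" is a placeholder measurement ID – substitute your own property's ID.
export function installAnalytics(measurementId: string): void {
  // Load the gtag.js library asynchronously.
  const script = document.createElement("script");
  script.async = true;
  script.src = `https://www.googletagmanager.com/gtag/js?id=${measurementId}`;
  document.head.appendChild(script);

  // Standard gtag bootstrap: commands are pushed onto window.dataLayer,
  // which the library processes once it has loaded.
  const w = window as unknown as { dataLayer: IArguments[] };
  w.dataLayer = w.dataLayer || [];
  function gtag(..._args: unknown[]): void {
    // gtag.js expects the Arguments object itself rather than a copied array.
    w.dataLayer.push(arguments);
  }
  gtag("js", new Date());
  gtag("config", measurementId);
}

// Usage: installAnalytics("G-XXXXXXX");
```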

03. Establish your benchmark

Although the data adds contextual clues as to what is happening, remember that you only have one side of the story. You have the result: visitor came from site X, went to page Y, viewed product Z and then left after 36 seconds. But wait. What does that mean? Why did that visitor behave that way? You'll never know with this solution and that's okay. That's not the goal. We want to capture and report quantifiable data. 

Let's walk through an example. Imagine you have a landing page with the sole purpose of lead generation (your KPI). The landing page requires a visitor to fill out a form and click submit. It could be to receive a free resource, sign up for a webinar or join an email marketing list. 

When the form submits, send them to a confirmation page (or trigger an event for the more technically inclined). Track the number of visitors that go to the landing page and how many go to the confirmation page. That's your conversion rate. The first time you track that information it becomes your benchmark. This will be used to gauge the success of all future changes to UX strategies on this page. 
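For the 'trigger an event' route, here is a rough sketch of what that might look like with GA4's gtag.js. It assumes the analytics snippet is already installed; the form id 'lead-form' is a placeholder, and 'generate_lead' is one of GA4's recommended event names.

```typescript
// Sketch only: fire an analytics event when the lead-generation form is submitted.
// Assumes the gtag.js snippet is already installed; '#lead-form' is a placeholder id
// and 'generate_lead' is one of GA4's recommended event names.
declare function gtag(
  command: "event",
  eventName: string,
  params?: Record<string, unknown>
): void;

const leadForm = document.querySelector<HTMLFormElement>("#lead-form");
leadForm?.addEventListener("submit", () => {
  gtag("event", "generate_lead", { form_id: "lead-form" });
});
```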

If you change copy on the landing page to better address the needs of your customer and the conversion goes up, then you know that your change was a positive one. Easy, right? Everything you need to get started with benchmarking, regardless of the platform, should stem from that core model. It can be filtered by domain, geography or ad campaign. It's incredibly powerful and very easy to showcase change.
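The arithmetic behind that comparison is deliberately simple. Here's a rough sketch, with invented numbers, of how the benchmark and a follow-up measurement might be compared:

```typescript
// Conversion rate = confirmation-page visits / landing-page visits.
function conversionRate(landingVisits: number, confirmationVisits: number): number {
  return landingVisits === 0 ? 0 : confirmationVisits / landingVisits;
}

// Hypothetical figures: the first measurement becomes the benchmark.
const benchmark = conversionRate(4200, 168);        // 4.0%
const afterCopyChange = conversionRate(3900, 187);  // roughly 4.8%

const relativeLift = (afterCopyChange - benchmark) / benchmark;
console.log(`Benchmark: ${(benchmark * 100).toFixed(1)}%`);
console.log(`After change: ${(afterCopyChange * 100).toFixed(1)}%`);
console.log(`Relative lift: ${(relativeLift * 100).toFixed(1)}%`);
```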

The most important aspect of showing success in UX strategies is the change in benchmarked data after you implement change. Data is the currency with which UX work is funded, whether you're a freelancer, an agency or a department within an organisation. Prove the value of what you're doing.

04. Start testing with a clear use case

[Image: Participants probably won't be familiar with your organisation, product or goals]

One of the most difficult aspects of setting up successful testing solutions is to not fall victim to the villain of personal bias. It can creep in when you write a testing plan or when you review the data. So how do you craft an effective testing plan? It's all about writing simple tasks with a clear focus.

Test participants likely aren't familiar with your organisation, product or goals. Switch to storytelling mode for a moment and set the expectation and motivation as to why they are on the website. Don't start with questions or tasks right away.

Remember that the testing participants are roleplaying, in some ways. They may meet the demographics of your customer but they may not actually be a customer. Asking them to go find this product on your site becomes purely a 'click it until I find it' experience. There won't be any comprehension or investment, which makes your data questionable at best. 

Explaining to them that they're an avid mountain biker who has trouble riding in the rain because they don't have disc brakes, and that their goal is to find a set of disc brakes that can be installed on their 26" bike, adds purpose. It adds context.

05. Aim for 10 test participants 

Jakob Nielsen of Nielsen Norman Group, a well-respected evidence-based user-experience consultancy, suggests that elaborate usability tests are a waste of resources and that the best results come from testing no more than five users. 

The problem with this logic is that five participants only show trending data when you can compare it to a larger group size. In nearly every test I've run, valuable feedback has surfaced after the initial five participants were recorded. Even when nothing new arose, having another 10 participants to provide supporting data made the final reports that much stronger.

If you are given an option, try 10 participants for each test and platform, such as mobile, tablet and desktop. It may be valuable to separate by parameters like gender or age as well. Any important customer segment should be treated as a new test to keep the data focused.

06. Include screener options

Screener options are also important to keep in mind in order to qualify your participants. I recently conducted a test with 45 participants to run a competitive analysis on performance disc brake manufacturers. The focus was placed on mountain bikers that had previous knowledge of performance parts. Some of the questions had fake answers and improbable options, and if a participant selected one at any time they were removed from testing. 

Because participants are paid, there's a certain level of potential dishonesty that comes into play as people try to get accepted into the testing. Keep them honest or you will get useless data that may send you in the wrong direction and jeopardise your position.
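One way to keep that filtering repeatable is to treat the screener as data. The sketch below disqualifies any participant who selected a fake answer; all option values and participant records are invented for illustration.

```typescript
// Sketch: disqualify participants who selected a fake or improbable screener option.
// All option values and participant data here are invented for illustration.
interface Participant {
  id: string;
  screenerAnswers: string[];
}

const FAKE_OPTIONS = new Set<string>([
  "hydro-carbon rim discs",                    // not a real product category
  "I replace my brake pads after every ride",  // improbable behaviour
]);

function qualifies(p: Participant): boolean {
  return p.screenerAnswers.every(answer => !FAKE_OPTIONS.has(answer));
}

const applicants: Participant[] = [
  { id: "p01", screenerAnswers: ["Shimano", "rides weekly"] },
  { id: "p02", screenerAnswers: ["hydro-carbon rim discs", "rides weekly"] },
];

const accepted = applicants.filter(qualifies);  // p02 is screened out
console.log(accepted.map(p => p.id));           // ["p01"]
```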

07. Focus on quantitative data

[Image: Know what you need and keep questions simple]

Although it's easy to capture emotional feedback (qualitative) by asking what people feel about something or what they liked/disliked, you're working with the biased perspective of a very small group. Emotional response is dictated by personal experience, and that won't typically speak for your customer base at large. You run the risk of receiving a lot of 'I don't like these images' or 'it's too dark of a design' contrasted against 'I really like these images' and 'this dark design is great'. While entertaining at times, it doesn't provide much value.

When you ask specific questions with structured responses, such as a yes or no answer, whether a participant was able to complete a task successfully (also yes or no) or even the much-debated net promoter score (NPS), you receive a value that can be compared across all other tests, in the same way the benchmark data from earlier can be compared to itself over time. It's important to remove as many variables for bias as possible.
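As an illustration, NPS reduces a 0-10 'how likely are you to recommend us?' answer to one comparable number: the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A quick sketch with made-up responses:

```typescript
// Net promoter score: % promoters (scores of 9-10) minus % detractors (scores of 0-6).
function netPromoterScore(scores: number[]): number {
  const promoters = scores.filter(s => s >= 9).length;
  const detractors = scores.filter(s => s <= 6).length;
  return ((promoters - detractors) / scores.length) * 100;
}

// Made-up responses from ten participants.
const responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10];
console.log(netPromoterScore(responses)); // 5 promoters, 2 detractors -> NPS of 30
```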

08. Avoid leading questions

Behavioural psychology dictates that many of us simply mimic others. It's human nature and often entirely subconscious. Next time you're sitting at a table with someone, take note of how often you mimic each other's posture. When it comes to the goal of extracting data from others, we must be careful not to guide them toward an answer.

A question may be leading if you suggest they take an action or respond a certain way. For example, 'Visit this website. What do you like about it?' You're leading them into providing very specific feedback. Unless you're testing for how much people like something it's better to reframe the question: 'Visit this website. What is your first reaction?' It's open-ended to enable the participant to answer about anything that stands out first to them. 

While this is generally qualitative, it can also be counted to give direct quantitative insight into what people notice first: X participants noticed the logo first, Y participants noticed the promotional box first. You can then break each of those counts down by participant demographic or by their sentiment toward it, as in the sketch below.
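A rough sketch of that kind of tally, using invented response data:

```typescript
// Sketch: count what participants said they noticed first, then break the counts
// down by a demographic field. All response data here is invented for illustration.
interface FirstReaction {
  participantId: string;
  noticedFirst: "logo" | "promo box" | "navigation";
  ageGroup: "18-34" | "35-54" | "55+";
}

function tallyByDemographic(reactions: FirstReaction[]): Map<string, Map<string, number>> {
  const counts = new Map<string, Map<string, number>>();
  for (const r of reactions) {
    const byAge = counts.get(r.noticedFirst) ?? new Map<string, number>();
    byAge.set(r.ageGroup, (byAge.get(r.ageGroup) ?? 0) + 1);
    counts.set(r.noticedFirst, byAge);
  }
  return counts;
}

const reactions: FirstReaction[] = [
  { participantId: "p01", noticedFirst: "logo", ageGroup: "18-34" },
  { participantId: "p02", noticedFirst: "promo box", ageGroup: "35-54" },
  { participantId: "p03", noticedFirst: "logo", ageGroup: "35-54" },
];

console.log(tallyByDemographic(reactions));
// logo -> { "18-34": 1, "35-54": 1 }, promo box -> { "35-54": 1 }
```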

Asking non-suggestive questions like: 'Visit website X. Where would you expect to find Y?' lets the participant explore a website or application in a way that speaks to their own learned behaviour. The goal is to get out of their way and remove any potential nudge into a certain response or behaviour. It takes practice.

09. Curate your collected data

The purpose of UX testing is to generate potential solutions that will have a very real, positive impact on an organisation, and to collect the data that validates or invalidates those potential solutions.

The first step is to curate this information into a document and basic presentation. Trust me, you don't want to send a 40-page report without a few presentation slides or a one-page overview of bullet points. As impactful as this data may be, it's rare that someone will read through all the details. It's the same problem I've run into with web design in general. People's attention is fractured. It's why images, graphs, videos and even memes are so popular. They convey a lot in very little. 

Graph the benchmarks. Graph the scores. Bullet point the key takeaways. Create an issue priority chart. Quick and easy wins versus longer-term solutions. If you're familiar with SWOT (strengths, weaknesses, opportunities, threats) that can be used here as well.

If you've used a solution that recorded video or audio, create a highlight reel three to five minutes long and play it at the start of your meeting or presentation. It's always humbling for a team to hear unfiltered criticism and sets the stage for improvement. It instigates change.

10. Implement changes

[Image: Focus on the purpose of your website and the value it can provide others]

As long as you have the data to support it, make recommendations based on what you've found. That's what the client is looking for in the end. They trust that you've done your job and followed these high-level steps to come out in the end with data-backed solutions. Once some of your suggestions have been implemented, always compare the results against the benchmarks.

This article was originally published in net, the world's best-selling magazine for web designers and developers. Buy issue 310 or subscribe.
