Learn what data-driven design is and how to implement it
Data-driven. It’s a term you’ve likely heard applied to many different areas, from marketing to sales to customer success. Data helps us make better decisions based on past results, and we can apply this philosophy to website design as well.
The concept of “data” may seem like just numbers in a spreadsheet. But at its core, data-driven design is all about building empathy and understanding with your online visitors and customers. This approach helps separate effective designs from ineffective ones, ushering in a more ideal user experience, more conversions, and more growth.
Our partners at HubSpot unpack data-driven website design: what it means to take this approach, why it’s valuable to any online business, and how to integrate this methodology with your own design process.
Data-driven design is an approach to web design that is informed primarily by user data. The purpose of data-driven design is to understand and prioritize users’ needs through observable tests. This helps create a pleasing user experience (UX) while leading to more website traffic and online conversions.
Data-driven web designers place empirical evidence — evidence that can be directly observed through tests — at the forefront of the design process. While previous experience and innate design sense may also play a role in the process, these are secondary to insights taken from user data.
The best web design strikes a balance between an engaging user experience and an intuitive user experience. Unfortunately, it’s easy to neglect the latter in pursuit of the former if you rely solely on instincts.
Remember that, as a designer, you are not the same as the users you design for. Your design preferences, and the preferences of your team or organization, rarely match those of your target user base.
Many website owners default to their own feelings in the design process. This is a result of a common psychological phenomenon called the false-consensus effect: according to the Nielsen Norman Group, it is the tendency "to assume that others share their beliefs and will behave similarly in a given context." Designers operating on untested assumptions will often miss the mark with users.
A powerful solution to this gap in understanding? Data-driven design.
By allowing data from user interactions and feedback to drive your design decisions, you mitigate your own biases and preconceptions in the design process. A data-driven approach helps you craft an experience for users, informed by users.
Numerous case studies and research show that companies employing data-driven techniques see faster growth in conversion rates and sales — indicators of higher engagement and better UX.
It’s important to note that no design process, data-driven or not, is 100% objective. Your personal preferences will somehow find a way in. A data-driven approach gets you much closer to the optimal UX than relying on instincts and personal opinions alone.
You might also be skeptical of data-driven design because of its potential to stifle creativity. But, if you ultimately want to grow your website, you must prioritize ease-of-use and performance first. There’s still room for creativity in a data-driven approach. You’ll just need to find a compromise between your tastes and empirical results.
Data-driven design takes more time and effort to learn and master, but any designer can experience its benefits with a bit of practice. This framework will guide you through the entire process, from finding a topic to forming conclusions.
The first step in our data-driven design method is to find the aspect of your website you aim to create or change.
If you run an established site, start by reviewing your website metrics. Are there certain pages or CTAs that are underperforming? Are users following the expected path? Are they engaged, or are they bouncing? Is one traffic segment more engaged than another, and why? Are you receiving many support requests around anything in particular?
Tools like Google Analytics, a reporting plugin, or a native CMS/CRM reporting dashboard can help answer some of these questions. You could also ask existing users/customers with survey forms, send questionnaires by email, or review past research to identify opportunities for improvement in your UX.
Newer websites without many past insights might instead compare their metrics to the averages of their respective industries. If a metric of yours — like CTA clicks, email signups, or time-on-page — falls below average, target it in your redesign.
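As a simple sketch of that benchmark comparison (all of the metric names and numbers below are hypothetical, not real industry figures), you could flag any metric that falls short of its industry average as a redesign candidate:

```python
# Hypothetical metrics for your site, expressed as rates (e.g., 0.021 = 2.1%).
site_metrics = {"cta_click_rate": 0.021, "email_signup_rate": 0.009}

# Hypothetical industry averages for the same metrics.
industry_benchmarks = {"cta_click_rate": 0.035, "email_signup_rate": 0.012}

# Any metric below its benchmark is a candidate to target in a redesign.
redesign_targets = [
    metric
    for metric, value in site_metrics.items()
    if value < industry_benchmarks[metric]
]
```

Here both rates sit below their benchmarks, so both would land in `redesign_targets`; with real data, you'd pull the site-side numbers from your analytics tool.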
After homing in on an area to optimize, set an objective you seek to accomplish in your design process.
Your first draft of this objective might resemble something like “increase website conversions” or “decrease page bounce rate.” While valid goals, these represent larger, long-term challenges that consist of smaller goals built up over time, and can be hard to achieve in a single test cycle.
Additionally, it’s difficult to extract insights from site-wide metrics, as there are simply too many factors that might contribute to “more conversions” or “better engagement” than can be observed at once. So, try getting more specific with your goal, as each data-driven test represents a notch of progress.
When you specify your goal to something like "increase conversions from X CTA by at least Y%" or "lower bounce rate on X page by Y% among mobile users," you'll more easily determine what types of data you should collect and how to collect it.
A hypothesis is a formalized statement of your goal for your new design. It should clearly outline the objective of the project for you and your team.
Like your goal, your hypothesis should be specific enough to gain insight from a few KPIs. It should also:
Indicate what constitutes a successful test. This is essentially your goal restated.
Explain the reasoning behind your test — why you believe your design change will result in a benefit.
Specify which website visitor segment you’re targeting. For example, successful results can look different between first-time visitors and returning visitors, between mobile users and desktop users, or between organic and social traffic sources.
An example of a fully formed hypothesis for a CTA design would be: "Adding an image of the ebook cover to our ebook CTA button will increase CTA interactions among first-time website visitors. This is because an image makes the CTA more eye-catching, and it's more apparent to visitors what content they're receiving."
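If it helps your team keep hypotheses consistent across experiments, the three components above can be captured in a small record. This structure and its example values are purely illustrative, not part of any particular tool:

```python
from dataclasses import dataclass

@dataclass
class DesignHypothesis:
    success_criterion: str  # what constitutes a successful test (your goal restated)
    reasoning: str          # why you believe the design change will help
    segment: str            # which visitor segment you're targeting

# A hypothetical hypothesis for the ebook CTA example:
hypothesis = DesignHypothesis(
    success_criterion="Cover image on the ebook CTA increases CTA interactions",
    reasoning="An image makes the CTA more eye-catching and the offer clearer",
    segment="first-time visitors",
)
```

Writing the segment down explicitly makes it harder to accidentally mix, say, returning-visitor data into a first-time-visitor experiment.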
Think about how much time, energy, and money will be needed to prove your hypothesis. Changing the appearance of a CTA and tracking its performance is a relatively low-cost endeavor, whereas evaluating a full page or user journey tends to require more data collection and analysis and demands more resources.
Figure out which data you’ll actually be collecting, and how to acquire it. Your data should be measurable and directly relate your design change to your hypothesis.
There are two forms of data to consider at this stage: quantitative and qualitative. Let's define both data types and list some common ways to gather each:
Quantitative data is numerical — it’s what most people think of when they picture “data” as it applies to technology. Examples include traffic levels, bounce rates, clicks, share of traffic by device type or geographic location, etc.
Quantitative results are objective indicators of performance, and you can use them to see whether your goals are reached. Methods to collect quantitative data include:
Site analytics: Your website reporting tool of choice can track any relevant data point.
A/B testing: A/B testing is an experiment for testing the performance of a specific design change, like a color change or placement of a page element. In an A/B test, you create two versions of a design, A and B, and randomly assign half of your visitors to view design A and the other half to design B. Then, track the performance of both versions.
Multivariate testing: This method is similar to A/B testing. But whereas A/B tests work well for changing one design feature, multivariate tests evaluate combinations of multiple design changes to an element or a page.
Surveys: You can place links to brief surveys on your website at various stages in the user’s journey, such as after a purchase, after signing up for an account, or after a set amount of time spent on your site. A survey can ask customers to rate their experience on a scale or how easily they were able to complete their desired action.
Heat maps: Heat maps indicate visually where users are engaging on your web pages — “hot” sections (colored red) attract the most attention.
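The random assignment behind an A/B test can be sketched in a few lines. This is a minimal illustration, not the API of any particular testing tool; the experiment name and visitor IDs are hypothetical. Hashing the visitor ID, rather than flipping a coin on every visit, keeps returning visitors in the same variant:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "cta-image") -> str:
    """Deterministically bucket a visitor into variant A or B.

    The same visitor always hashes to the same variant, so their
    experience stays consistent across visits.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Over many visitors, roughly half land in each bucket:
counts = {"A": 0, "B": 0}
for i in range(1000):
    counts[assign_variant(f"user-{i}")] += 1
```

In practice, your analytics or testing platform handles this bucketing for you; the point is that assignment is random across visitors but stable per visitor.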
You might think quantitative data is enough to get by, and it might be. But if you want to understand why users take the actions they do, you need to incorporate qualitative data into your research as well.
Qualitative data is anything you can't directly measure with numbers. It tells you what users are thinking, and why they feel a certain way while using your site. Methods to collect qualitative data include:
Open-ended surveys: Surveys come in handy for open-ended responses. After completing a process on your site, many users will have formed opinions about it. Use surveys to capture those feelings while they're fresh.
Interviews: Interviews can take many forms, from structured or semi-structured conversations, to focus groups, to activities like card sorting and contextual inquiry. All of these produce a more thorough understanding of the typical user's mindset and thought process.
User flows: A user flow is the series of steps and pages users must follow to complete a task on your site. If you have flows represented by flowcharts in your documentation, compare them to your visitors' actual interactions. Does your design reduce confusion and streamline the task?
This knowledge can be just as important as the accompanying quantitative information. Imagine that adding an image to your CTA drives up clicks — why might this be? Do users feel more comfortable clicking after seeing the preview? Did the image make the CTA more eye-catching in the first place? These insights are invaluable for informing future designs and tests.
If you run an existing website, you can gather opinions on the live site or on a staging copy. If you're building a new website, you'll need to allocate time for creating prototypes, digital and/or paper.
You’ve planned and planned, and it’s finally time to conduct your tests and collect the data you need.
If you haven't yet considered the number of participants in your user tests, now is a good time to do so. The ideal sample size will differ based on your time and budget. For example, a sample of 10 people isn't enough to draw solid conclusions from an A/B or multivariate test. For qualitative interviews, on the other hand, 10 participants is plenty — you can gain rich insights from just a handful of conversations.
As a general rule of thumb, the larger your sample size, the better. Large samples provide a clearer picture of your user base, so you can be more confident in your conclusions.
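To see why a handful of visitors isn't enough for an A/B test, consider the standard two-proportion sample-size calculation. This sketch uses the common normal-approximation formula with conventional defaults (95% confidence, 80% power); the baseline and target conversion rates are hypothetical:

```python
import math

def required_sample_size(p1: float, p2: float) -> int:
    """Per-variant sample size needed to detect a change in conversion
    rate from p1 to p2, via the normal approximation for two proportions.

    Uses z = 1.96 (two-sided 95% confidence) and z = 0.84 (80% power).
    """
    z_alpha, z_beta = 1.96, 0.84
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return math.ceil(n)

# Detecting a lift from a 2% to a 3% conversion rate (hypothetical)
# requires thousands of visitors per variant, not ten:
n = required_sample_size(0.02, 0.03)
```

Small effects on rare events need large samples; a big, obvious change on a high-traffic page needs far fewer visitors than a subtle tweak on a quiet one.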
Compare the findings from your test with your hypothesis: Did your quantitative results achieve the predicted outcome? Did your qualitative results reveal why or why not? Even null results provide value — they show that the change was ineffective or inconsequential, and can shape future experiments.
Plot your findings. A visual representation of an improvement (or lack thereof) packs a greater punch than a plain number table with the same information.
Consider running tests of statistical significance. These will tell you whether a result is likely attributable to your design changes, or whether it could easily be due to chance.
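One common significance check for a conversion-rate experiment is a two-proportion z-test. The sketch below uses only the standard library, and the visitor and conversion counts are hypothetical; a dedicated stats package or your testing tool's built-in report would give you the same answer:

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for the difference between two
    conversion rates, using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical result: variant B converted 260 of 4,000 visitors,
# versus variant A's 200 of 4,000.
p_value = two_proportion_z_test(200, 4000, 260, 4000)
significant = p_value < 0.05
```

A p-value below the conventional 0.05 threshold suggests the lift is unlikely to be chance alone; a p-value above it means you can't rule chance out yet, which often just means you need more data.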
You might opt to save any conclusions for a set point in time, say, after a month of running tests. Or, you might prefer to carry on testing indefinitely and evaluate findings on a rolling basis. However, it’s a good idea to check your metrics regularly throughout your testing, in case a design change causes a fast and detrimental result that you need to revert as soon as possible.
Each small design change is a step towards the larger goal of growing your website. Therefore, data-driven design is not a one-and-done checklist. It’s an iterative process, where each experiment informs the next.
There's no such thing as a perfectly optimized website. Things will change and force you to recalibrate your designs. Your product will change and your branding will change. Your design preferences and your users' preferences will change. Even entire technologies will change, as we've seen from search algorithm updates and the rise of mobile browsing.
As long as your business, your users, and the internet evolve, you’ll be able to pull solutions from a data-driven approach. So, keep hypothesizing, tweaking, testing, and improving — the results will be worth it.