What Is Conversion Rate Optimization?

Conversion rate optimization (CRO) gives you a framework to incrementally improve every aspect of your funnel over time. By analyzing the results of benchmark tests at different touchpoints, CRO offers methods to cap your customer acquisition costs, lift retention, improve your ROAS, and more. It might seem obvious, but this represents a cultural shift for many companies. If you don’t understand what’s possible with CRO, then it’s easy to see every new feature, site update, or campaign tweak as a self-contained project. Yet when companies view CRO as a vital aspect of digital transformation, they can embrace a culture of experimentation with the goal of continuous improvement.

Solutions like A/B Tasty, Optimizely, and VWO allow you to go beyond feeling confident in your choices. With tools like these, you can attach statistical significance to a wide range of decisions across your website, product, and audience. And perhaps even more importantly, you can recognize which tests fail to significantly improve the metrics you care about. This is the essence of a digital transformation, because if you can’t understand what your data is telling you, or worse, you ignore it, you’ll never see any of the benefits.

Maybe you’re already applying CRO to landing pages in your growth marketing campaigns. Or perhaps you’re thinking of using A/B testing in your product analytics practice to improve checkout conversion rates. Or you might be experimenting with different versions of your messaging within your lifecycle marketing campaigns. No matter how you approach this discipline, remember that conversion rate optimization involves far more than just the tools involved. For many teams, CRO represents a whole new perspective on their data and how to achieve their goals.

In this article, we’ll look at where CRO makes sense (and where it doesn’t), the elements of an effective conversion rate optimization program, and how you can introduce CRO to your team. In many cases, CRO is the bridge between the strategic planning required for full-scale digital transformation and its day-to-day practical application. Take the guesswork out of decisions that can impact new user conversion, acquisition, and retention. It’s time to get serious about conversion rate optimization.


Where Conversion Rate Optimization Makes Perfect Sense (And Where To Avoid It)

Conversion rate optimization and A/B testing can help remove a lot of doubt and uncertainty from the decisions your team makes every single day. But CRO and A/B testing aren’t magic, and they’re not the answer to all your questions. You’ll see the greatest benefit from CRO only if you know when to use it, when not to use it, and how to interpret the results when you apply it to an appropriate situation.

The whole point of CRO is to improve your top KPIs. CRO works well when:

  • You have a strong value proposition for your product with good product-market fit.
  • You’ve clarified which metrics matter, and how they align with your business goals.
  • These metrics are backed up by sufficient, accurate data.
  • You need to make systematic improvements in your sales funnel to improve unit economics.
  • Your team understands your users, understands how to design and implement changes to your funnel, and knows how to run meaningful A/B tests.
  • You have the right technology in place to run A/B tests at high velocity.

Conversion rate optimization is ideal for improving metrics related to tactical decisions, especially when it comes to growth marketing and product management initiatives. It can even be used to measure the impact of more strategic projects, for example, with global hold-out groups for email marketing and geo-testing for advertising campaigns. CRO works well across a wide range of initiatives when you’re interested in measuring a series of incremental changes over time. However, there are a few cases where CRO and A/B testing won’t cut it:

  1. Think of your favorite online shopping site, the one that’s been around for more than a decade. The people responsible for optimizing that homepage have spent years and years running experiments to improve conversion rates, boost average order size, ramp up repeat order frequency, and so on. By this point, it’s unlikely they could find any small change that would have a significant impact on the metrics they care about. 

    Now, there might be a completely different version of their site that could perform better, one that looks nothing like the current site. Call the current site A and the new version B. If they were to A/B test these two versions, which version would win? It might seem surprising, but there’s a very good chance A would come out on top. This is because B hasn’t been subjected to the same years of iterative optimization as A. Even if B could eventually outperform A given the same treatment, in a head-to-head contest today A would most likely win.
  2. The other example is branding. This is related to our previous example, except in this case we’re dramatically changing the look and feel of an entire company instead of just one page. When a company rebrands itself, it’s a strategic initiative built on months or even years of research into customer behavior, market trends, the company’s long-term goals, and many other factors.

    Besides, can you imagine what would happen if a company decided to A/B test their rebrand on two different populations? Whoever saw the rebrand would immediately share it on social media, which would pollute the results and defeat the purpose of the test. In addition, a user’s familiarity with the brand affects their behavior, which means any rebrand will probably produce short-term effects that fade as users become accustomed to the new look. Companies embark on rebranding initiatives with an eye on the long term, which individual A/B tests simply cannot capture.

While A/B testing won’t work in these two cases, the exceptions illustrate just how powerful it can be everywhere else. For every major strategic initiative, there are thousands of tactical decisions that must be made every day, and many of these can benefit from conversion rate optimization. And the results of these A/B tests, when compounded over time, can inform all kinds of long-term projects. But in order to get to this point, first you need to nail the basics.

Elements of an Effective Conversion Rate Optimization Strategy

If you build a strong foundation for your A/B testing program, you’ll be able to run the right kinds of experiments more often, and you’ll be more confident with the results of those tests no matter what those results might tell you. But before we discuss the elements of an effective conversion rate optimization plan, it’s worthwhile to think about what underlies this foundation. Because even if you ace all the points we’re about to discuss, it won’t matter if your team doesn’t trust your data in the first place.

Data Governance

Every digital transformation initiative can benefit from investments in data governance. If you’re just getting started with CRO, this is one of the best ways to ensure accurate results. A lot of companies get so caught up in the pursuit of continuous improvement that they forget data governance has to come first. If your team doesn’t trust the integrity of your data, it will be impossible for you to run meaningful experiments.

Neglecting data governance is a slippery slope. Fortunately, you can correct course with clear guidelines and a commitment to transparency. The payoff for this kind of initiative is immense, since better data governance leads to fewer errors and greater efficiency in every area of your company, including conversion rate optimization. Refer to this straightforward approach to data governance for recommendations on how to get started.

Whether you’re building a new data governance plan from scratch or updating your existing policies, the following factors are equally vital to an effective conversion rate optimization program.

Planning + Organizational Buy-In

We mentioned it earlier, and it will come up again and again: A/B testing is about far more than which tool you select. It takes careful planning and buy-in from your leadership team to build an A/B testing program that will align with your business goals. Of course, the root of good planning is good data governance. But much of the success of your A/B testing program comes back to the principles of effective change management:

  • Be transparent about your motivations for pursuing conversion rate optimization and A/B testing. Show stakeholders in leadership roles how this new approach will benefit your team and the company as a whole.
  • Engage all end-users in the discussion, and show them how they can use A/B testing to make better decisions throughout their workflow.
  • Offer training resources, so everyone feels comfortable experimenting with your chosen solution, interpreting the results of A/B tests, and implementing changes as a result of those tests (where appropriate).

Before we proceed any further, can we be blunt? If you can’t get buy-in for an A/B testing program that involves multiple teams, then don’t do it! Throughout our work with almost 900 clients, we’ve been called on time and time again to fix poor implementations of a wide range of data infrastructure and growth marketing systems, including conversion rate optimization solutions. It’s never just the technology. Demonstrate to your executive team how your new conversion rate optimization and A/B testing program will change people’s behavior, not simply what experiments you’ll be able to run. Do this, and you’ll already be well ahead of most other companies.

And while this article cannot address every aspect of change management, it’s important to keep these steps in mind as we walk through the elements of an effective A/B testing strategy. Hint: good data governance always comes first.

Proper Tool Implementation

Let’s consider some of the mechanics of CRO for a moment, because they can reveal a lot about what makes an experiment successful (or not). In the following examples we describe CRO and A/B tests as they apply to experiments conducted on webpages, though these points also apply to your digital products and apps.

  • Experiment Flicker - Every time you run a test, you show a specific user either the control version of your webpage or some variation. If you manage A/B tests on the client side, flicker can corrupt your experiments and play havoc with the results. Experiment flicker happens when the webpage in question loads slowly, so users see the control and then watch it flip to the variant, or vice versa. Running too many tests at once increases load time and makes the problem even worse. Not only does flicker render the experiment invalid, it also makes your site harder to navigate. However good your intentions, if you let experiment flicker persist it can torch your conversion rates as users get frustrated and bounce. (A minimal anti-flicker sketch follows this list.)
  • Identity Resolution - A/B testing tools like A/B Tasty, Optimizely, and VWO often assign their own identification numbers to users who interact with experiments on that platform. If your CRO plan involves keeping this data siloed, and not correlating it with user actions on other parts of your site, then these unique IDs shouldn’t pose too much of a problem. 

    Too bad this is one of the biggest obstacles companies encounter when setting up their new CRO platform.

    If you were reading the previous paragraph and nodding your head, you fell into the same trap as everyone else. By definition, real digital transformation is impossible with siloed data. And failing to address identity resolution is one of the biggest challenges for data governance, and in turn, CRO. If you neglect to correlate the user IDs assigned by your CRO platform with the same users’ IDs throughout the rest of your tech stack, you risk misinterpreting all your experiments. There are several ways to solve this problem, and just like addressing data governance in general, the effects will be felt far beyond conversion rate optimization. (A minimal sketch of one approach follows this list.) If you need help with this, make some time to chat with one of our experts.
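
Here is the anti-flicker sketch mentioned above, written in TypeScript. It assumes a client-side testing snippet that exposes a hypothetical onVariantApplied callback; your vendor's actual API will differ. The idea is simply to hide the page before first paint and reveal it as soon as the variant is rendered, with a short timeout so a slow script never leaves visitors staring at a blank screen.

```typescript
// Anti-flicker sketch: hide the page until the experiment variant is applied.
// `experimentClient` is a hypothetical stand-in for whatever client-side SDK you use.
declare const experimentClient: { onVariantApplied(cb: () => void): void };

const ANTI_FLICKER_TIMEOUT_MS = 1500; // fail open: never hide the page longer than this

// Inject a style tag as early as possible (ideally inline in the <head>).
const hide = document.createElement("style");
hide.id = "anti-flicker";
hide.textContent = "body { opacity: 0 !important; }";
document.head.appendChild(hide);

function reveal(): void {
  document.getElementById("anti-flicker")?.remove();
}

window.setTimeout(reveal, ANTI_FLICKER_TIMEOUT_MS); // timeout guard
experimentClient.onVariantApplied(reveal);          // reveal once the variant is rendered
```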
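
And here is a minimal identity resolution sketch, again in TypeScript. Both SDK objects are hypothetical placeholders rather than any vendor's real API: one reports the experiment assignments the testing tool made, the other is an analytics client keyed on your own user ID. Forwarding each assignment as an event under your internal user ID is one common way to keep experiment data out of a silo.

```typescript
// Identity resolution sketch: forward experiment assignments under YOUR user ID.
// Both objects below are hypothetical stand-ins for the tools in your stack.
declare const abTestingSdk: {
  getAssignments(): Array<{ experimentId: string; variantId: string }>;
};
declare const analytics: {
  track(userId: string, event: string, properties: Record<string, string>): void;
};

export function syncExperimentAssignments(internalUserId: string): void {
  for (const { experimentId, variantId } of abTestingSdk.getAssignments()) {
    analytics.track(internalUserId, "Experiment Viewed", {
      experiment_id: experimentId,
      variant_id: variantId,
    });
  }
}

// With assignments tied to your own user IDs, downstream funnel and retention
// reports can be segmented by variant instead of living only inside the CRO tool.
```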

Designing the Right Tests for the Best Results

How do you choose a goal for your CRO tests? Test design requires a deep understanding of your audience, funnel, and key business metrics. You need to understand all the details of the user action you’re attempting to change. Get clear on your current conversion rate, and also ask yourself: 

  • What actions do users take before they arrive on that page? 
  • Once they arrive on that page, what actions do they take if they don’t purchase?
  • Where are you testing your hypothesis? 

This last point is especially important, as we’ve seen teams attempt to improve checkout conversion rates by changing something much farther up the funnel. In this case, there are too many factors at play between the variation and the goal of the test for the results to be meaningful.

Remember that A/B tests are best suited to short-term effects from tactical decisions. Questions like the ones we’ve shared here offer important guardrails for any test you decide to perform. Just make sure you’re testing a variation which can impact the metric you care about. In fact, anytime you plan different CRO experiments, take care that the scope of your ambitions matches the nature of the test.

Recognizing Biases and Statistical Significance

We included a vital caveat while explaining the benefits of A/B testing above. Did you catch it? 

Here it is again:

“If you build a strong foundation for your A/B testing program, you’ll be able to run the right kinds of experiments more often, and you’ll be more confident with the results of those tests no matter what those results might tell you.”

A/B tests aren’t meant to confirm your biases. They’re supposed to cut through your biases to reveal the likelihood that your audience will respond to specific changes in your product, website, or messaging in ways that will improve your target KPIs.

Just because you run an experiment and don’t get the results you expect, that doesn’t mean the experiment was a failure. On the contrary, you’ve learned something new that goes beyond gut feelings. And of course, recognizing insights like this becomes much easier if your executive team encourages a culture of curiosity while embracing the value of conversion rate optimization. 

But how do you set up an A/B test to deliver reliable results? And how do you interpret those results once the test is complete?

To put it another way, how do you run A/B tests that meet the threshold for statistical significance?

There are entire courses dedicated to statistical significance, so we won’t delve too deeply into the topic in this article. (For a good intro to statistical significance as it relates to A/B testing, start here). In short, the statistical significance of any A/B test indicates how confident you can be that the outcome of the test is legitimate, and not a result of random chance.
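
As a rough illustration of what a significance check does under the hood, here is a two-proportion z-test in TypeScript with made-up numbers. A real CRO platform will use its own (often sequential) statistics, so treat this as a sketch of the concept rather than a drop-in replacement.

```typescript
// Two-proportion z-test: how far apart are the control and variant conversion rates,
// measured in standard errors? Illustrative numbers only.
function twoProportionZ(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pB - pA) / standardError;
}

// 4.0% vs 4.6% conversion on 10,000 users per arm.
const z = twoProportionZ(400, 10_000, 460, 10_000);
console.log(z.toFixed(2)); // ≈ 2.09, just past the ~1.96 threshold for 95% (two-sided)
```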

How do you increase the statistical significance of your A/B tests? To answer this question, let’s walk through an example with Optimizely’s A/B test sample size calculator. Feel free to tweak the values in the linked calculator as we explain each of the options below.

  • Baseline Conversion Rate - Before you change some element of your product to improve the conversion rate, you need to understand what your conversion rate is today.
  • Minimum Detectable Effect - In this field, you can decide how large a change needs to be before you deem it significant. This is one of the most important trade-offs that you make when setting up an A/B test. If you prioritize speed, you won’t need as large of a sample, but that means you won’t be able to detect small effects. If you want greater precision, that will require a much larger sample size, and potentially a much lengthier test.
  • Statistical Significance - How certain do you want to be that the results of your test are due to the changes you made, and not random chance?
  • Sample size per variation - This is the output of the A/B test sample size calculator. It’s important to note that Minimum Detectable Effect and Sample size per variation are inversely related: if you’re testing for very small changes to your Baseline Conversion Rate, you’ll need a larger sample size. Similarly, demanding a higher Statistical Significance requires a larger Sample size per variation. (A rough sketch of these relationships follows this list.)
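
To make those trade-offs concrete, here is a rough TypeScript sketch of the classic fixed-horizon sample size formula, assuming a two-sided test at 95% significance and 80% power. Optimizely's calculator uses its own sequential statistics, so the exact numbers will differ, but the relationships between the inputs hold.

```typescript
// Rough fixed-horizon sample size per variation, assuming 95% significance and 80% power.
const Z_ALPHA = 1.96; // two-sided, 95% statistical significance
const Z_BETA = 0.84;  // 80% power

// baseline: current conversion rate (e.g. 0.04); mde: relative minimum detectable effect (0.10 = 10%)
function sampleSizePerVariation(baseline: number, mde: number): number {
  const variant = baseline * (1 + mde);
  const pBar = (baseline + variant) / 2;
  const variance = 2 * pBar * (1 - pBar);
  const delta = variant - baseline;
  return Math.ceil(((Z_ALPHA + Z_BETA) ** 2 * variance) / (delta * delta));
}

console.log(sampleSizePerVariation(0.04, 0.10)); // ≈ 39,000 users per variation
console.log(sampleSizePerVariation(0.04, 0.05)); // ≈ 154,000: halving the MDE roughly quadruples the sample
```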

You can apply the lessons from this calculator to any A/B test that you decide to perform on any A/B testing platform. However, there are a few crucial factors to consider that don’t show up on the calculator above:

  • Match the duration of the test to the size of the test audience - Once you determine how large of a sample you need to achieve the required statistical significance, you’re only part of the way there. When testing changes to your website, app, or product, you need to consider how much time it will take to capture an audience of that size. For smaller companies with lower traffic volumes, longer tests are often required to achieve meaningful results. This is why smaller companies can often only test for bigger changes to their site or product early on: those big changes are the only types of variations that will deliver the statistical significance they require with a smaller audience. (A back-of-the-envelope duration calculation follows this list.)
  • No peeking! - This is about the only case we can think of where “set it and forget it” is a valid tactic. That’s because every time you check the results of an A/B test that’s still running, you're inadvertently stopping the clock to conduct a different experiment. Many conversion rate optimization solutions now rely on sequential testing to counteract this effect. These systems set a higher initial threshold for statistical significance while also changing the threshold over the course of the experiment, based on the expectation that you’ll peek at the results early. Learn more about sequential testing here.
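
Building on the sample size sketch above, here is the back-of-the-envelope duration calculation referenced in the list, assuming steady traffic and that every eligible visitor is enrolled in the test.

```typescript
// Test duration in days = total users required across all variations / eligible daily visitors.
function estimatedTestDurationDays(
  sampleSizePerVariation: number,
  numVariations: number,        // control + variants
  eligibleDailyVisitors: number
): number {
  return Math.ceil((sampleSizePerVariation * numVariations) / eligibleDailyVisitors);
}

// ~39,000 users per variation, one control plus one variant:
console.log(estimatedTestDurationDays(39_000, 2, 5_000)); // 16 days at 5,000 visitors/day
console.log(estimatedTestDurationDays(39_000, 2, 500));   // 156 days at 500 visitors/day
```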

We would be remiss if we didn’t also discuss statistical significance as it applies to A/B testing your email campaigns. In this case, you’re not waiting multiple days for enough traffic to accumulate before you can check the results of your A/B test. When you run A/B tests on your email messaging, over 70% of the people who will interact with the variation do so within the first day, and the vast majority of text messages are opened within minutes of receipt. So make sure you’ve mapped out your A/B testing plan before you hit send, because the results arrive much more quickly than they would for a website change.

There are many different factors to consider when you run A/B testing experiments within your lifecycle marketing program, and all of them will affect your baseline conversion rate and your sample size. As we mentioned earlier, when running different experiments and A/B tests, you need to make sure the scope of your ambitions matches the nature of the test. But that doesn’t mean you can’t aim high. See what’s possible when you enhance your lifecycle marketing program with crystal clear A/B testing, and learn how to adapt your culture of curiosity to measure the impact of your email marketing.

Now that we’ve shared an introduction to statistical significance, set all that aside for a moment. One of the best ways to ensure that your A/B tests are meaningful is to check that the test’s potential impact is large enough to make it worth running in the first place. For example, if you want to run an A/B test that affects multiple departments, what will be the practical implications of updating your product to reflect the results of the test? If implementing the results of the test throughout your company has the potential to create an extra $10k in revenue per month, but it will pull three of your best team members off their current projects for a week, is that worth it? And even before you plan a new test, think about whether building the test at all will be justified by the potential ROI. Considering A/B testing in this way can help you avoid misguided decisions. And it’s one more reason to get buy-in from your executive team for an A/B testing program that involves multiple departments. 
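
As a back-of-the-envelope sketch of that kind of opportunity sizing, the arithmetic might look like the following. The cost figure is purely an assumption for illustration, not a benchmark.

```typescript
// Opportunity sizing sketch with illustrative, assumed numbers.
const expectedMonthlyLift = 10_000;     // extra revenue per month if the winning variant ships
const peoplePulledOff = 3;              // team members diverted to build and ship the test
const weeksOfWork = 1;
const assumedCostPerPersonWeek = 4_000; // assumption for illustration only

const buildCost = peoplePulledOff * weeksOfWork * assumedCostPerPersonWeek; // $12,000
const monthsToBreakEven = buildCost / expectedMonthlyLift;                  // ≈ 1.2 months

console.log({ buildCost, monthsToBreakEven });
```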

Tying it All Together: Embedding CRO Tests Into Full-Funnel Management

Conversion rate optimization (CRO) gives you a framework to incrementally improve every aspect of your funnel over time. We’ve seen evidence of this over and over again on a wide range of projects with many different clients. But it’s what we found on the other side of this assertion that should give everyone pause:

Our intuition is often pretty terrible.

This sobering fact actually points the way forward, towards continuous improvement. The solution? Combine A/B tests with product analytics for a well-rounded, quantitative check on our intuition. If you conduct an A/B test on a certain metric with all the right criteria in place beforehand, you’ll be able to tell whether the result was statistically significant. But if you want to see how the user journey differed between the control and the variant, or how users interacted with a specific feature on those two paths, you need product analytics.

Want a complete view of your user journey? Combine the quantitative results of CRO & product analytics with qualitative data from heatmap solutions, session recording tools, and user interviews. If you use any of these tools in isolation, that’s like trying to take a photograph of a majestic landscape and only focusing on the tree 20 feet in front of you, or the blade of grass six inches away. In these cases, it’s easier to see only what you want to see. If you want to see everything, combine CRO with your product analytics strategy.

For a deeper perspective on the relationship between CRO, A/B testing, and product analytics, please enjoy this podcast hosted by VWO. Keep in mind as you listen that it’s never just about the technology. The most effective CRO strategies elevate change management and data governance while recognizing the nuances of each individual solution.

How Do You Run A Successful Conversion Rate Optimization Program At Your Company?

By now it should be clear: spinning up a winning CRO program doesn’t happen overnight. There are many potential roadblocks on the way from implementation to adoption, because even before you identify use cases and research different platforms, you need to get buy-in from your executive team.

For organizations that want to update or improve their current A/B testing programs, many of these same obstacles still apply. A lack of preparation in cross-functional collaboration, data governance, or any of the other elements of an effective CRO program is more than enough justification for a reset. For many companies that want to reinvigorate their existing A/B testing strategy, a ground-up rework is often the best way to proceed.

Best case scenario: your data governance library is clear, straightforward, and accurate. You understand how a new or revamped A/B testing platform will affect stakeholders throughout the company. And you know which questions you want to A/B test to improve big KPIs through a series of tactical changes. Even within this best case scenario, how do you get started? 

To guide companies through this process quickly and efficiently, Mammoth Growth developed our Analytics Roadmap program. The objective of this 6-week project is a complete audit of a company’s tech stack, leading to a roadmap for improvements to its CRO and A/B testing strategies. The program gives us a deeper understanding of a company’s existing data governance, reporting, and CRO pain points, so that together with our clients we can develop a customized plan to address their A/B testing goals.

When approaching CRO projects, Mammoth Growth follows these steps within our Analytics Roadmap program:

  • Before you consider which business question you’re trying to answer or how big your sample audience needs to be, you need good data. Every meaningful A/B test begins with strong data governance, so we start by verifying the company’s data governance plan and how customer data is managed throughout their tech stack.
  • Next, we clarify how the team currently manages their customer data, analyzes audience insights, and formulates new CRO experiments.
  • We then combine these findings with insights into how they use the other tools in their tech stack.
  • Finally, we prioritize the gaps in A/B test creation, execution, and reporting where improvements could deliver major results.

The outcome of this process is a roadmap for the client’s conversion rate optimization strategy: how to plan it, what results they’re aiming for, and what benchmarks define success. 

For every company, there’s a unique combination of CRO strategy, technologies, and process improvements that can streamline digital transformation while allowing for more consistent, targeted decision making. Here at Mammoth Growth, as we move through the steps of mapping your Analytics Roadmap and defining a new A/B testing strategy, we adopt an agile approach to deliver business value as quickly as possible. Contact one of our experts today, and let’s talk about your conversion rate optimization goals.

“Mammoth Growth helped push us beyond A/B testing ideation. They helped us systematically size the opportunity of each testing idea, defending each hypothesis with current metrics. This really helped our team to align against our core goals, then narrow and prioritize our testing ideas.”


Mark Chan

Product Manager

