6 Key Conversion Rate Optimization Lessons

6. “Conversion Optimization” Isn’t About Green Or Red Buttons

[Image: a red “Click here” CTA button on a grey background]

People aren’t magically drawn to a button because it’s red, nor are they automatically predisposed to sign up just because you added the word “Free” to your headline. Optimization isn’t about seeing which tests are “better”, but rather about studying your visitor’s behavior and creating designs & copy that engage in a dialogue with your visitor’s inner voice.

It starts with making a good first impression, clearly stating your value proposition, and communicating how you’re different – even if what you sell isn’t unique.

You want to carry the momentum from first click to final conversion by maintaining scent and telling a story of how their world gets better as you guide them through the experience.

This isn’t guesswork, and it’s never really “over”. It’s the combination of hard data, qualitative feedback & a deep understanding of what persuades your visitors and the different segments therein that will provide the insights necessary for an educated test hypothesis.

Questions like “What color converts best?” are a complete waste of time. Instead, you should always be seeking answers to questions such as:

  • Where do most people get stuck in the buying process?
  • What are common traits among our paying customers?
  • What hesitations do our leads have that prevent them from buying?

The smartest thing Peep taught me is to start closest to the money and work backwards from there.

Find out where your site is leaking money, then create a testing plan moving forward. If you aim to understand your visitors’ real motivations & hesitations, you’ll start running tests with substance that can turn into even more insight.
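As a rough illustration of working backwards from the money, the sketch below walks a hypothetical funnel from the purchase end toward the top and reports the drop-off at each step. The step names and counts are made up for illustration; substitute your own analytics numbers.

```python
# A minimal sketch of "start closest to the money and work backwards":
# review funnel drop-off from the purchase end first.
# All step names and counts below are hypothetical.
funnel = [
    ("Landing page", 20000),
    ("Product page", 9000),
    ("Add to cart", 2100),
    ("Checkout started", 1400),
    ("Purchase", 700),
]

steps = list(zip(funnel, funnel[1:]))
for (prev_step, prev_n), (step, n) in reversed(steps):
    leak = 1 - n / prev_n
    print(f"{prev_step} -> {step}: {leak:.0%} of visitors leak out")

# Output starts at "Checkout started -> Purchase" (50% leak) and works back
# toward the top of the funnel, so the leaks nearest the money get
# investigated -- and tested -- first.
```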

5. There Is No Failure, Only Learning

[Image: analytics screw-up]

This was a huge shift in mindset for me. It’s all too easy to get emotionally involved with a test only to find that your challenger made no difference, or worse, decreased conversions.

But there are many reasons a test might not “win” that aren’t as cut and dried as the challenger page simply being worse. More often than not, a test that “fails” on paper comes down to inexperience and a misunderstanding of what “failure” really is.

But, if I’m being honest, the real reason my early tests failed is that they were informed by my ego, not by real data.

That’s why most A/B tests fail. It was only once I started getting comfortable with the data that I stopped taking “failure” so personally. So I misinterpreted a bit of feedback, or my colleague’s variation did better on this testing cycle… so what?

What did we learn?

Maybe “Free” wasn’t the thing they cared about. Maybe it was, but the offer still wasn’t entirely clear. Maybe an international celebrity died, and the internet had better things to do while they mourned. Maybe our visitors don’t appreciate pre-Christmas sales starting in November.

Try not to take things so personally. Do your homework, follow the data, be smart about your tests & see what happens. If it doesn’t work, move on. There are lots of other things to test.

4. Incremental Gains Are Far More Realistic

[Image: first-run experience]

On a similar note, try not to get discouraged if the only gains you’re seeing are small, like 5%.

Blog coverage of conversion rate optimization skews towards double-, triple- & quadruple-digit gains, but that’s only because it’s sexier to cover.

Much like media coverage of war casualties overstates their prevalence even though we’re statistically seeing a decline, CRO coverage focuses on huge wins because they make for much more compelling reading. But they’re far from the “norm.”

Grigoriy Kogan talks about “The Problem With A/B Test Success Stories” and how blog coverage creates unrealistic expectations for companies that are beginning to experiment with split testing.

The trouble, he states, is that when blogs report 30-50x increases, it creates a “survivorship bias” that minimizes the impact of more realistic wins, as well as learning from neutral & failed tests. As a result, companies tend to dismiss 5% gains as insignificant & not worth paying attention to or fully implementing.

But when compounded, a 5% monthly increase in checkout completions, for example, could increase checkouts by 60% by the end of the year.

What would a 60% increase look like for you?
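To make the compounding concrete, here’s a quick back-of-the-envelope calculation; the 5% lift and the assumption of ten winning months out of twelve are purely illustrative:

```python
# Back-of-the-envelope compounding of small monthly wins.
# The 5% lift and the ten winning months are hypothetical assumptions.
monthly_lift = 0.05
winning_months = 10

cumulative = (1 + monthly_lift) ** winning_months - 1
print(f"Cumulative lift after {winning_months} winning months: {cumulative:.0%}")

# ~63% -- in the same ballpark as the ~60% annual increase described above.
# Twelve straight 5% wins would compound to roughly 80%.
```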

3. Case Studies Are Often Big Fat Liars

[Image: testimonials]

At the very least, they need to be scrutinized with extreme care.

I’ve read a lot of studies that report significant lifts in conversions, but I also know from my own experience that a lift today can be neutral tomorrow.

The case study is only as good as the person managing it.

If they’ve set up their analytics improperly, or are calling wins too soon, they’re reporting “wins” that may not be entirely valid.

As a result, I’ve become extremely skeptical of other people’s success stories. With many of the case studies I’ve published here, I research the author to see whether, in a broader context, they really appear to know what they’re talking about.

When you’re looking at case studies or behavioral research, there are several things you also need to take into account:

  • The amount of traffic included in the test
  • The segment of traffic being tested
  • The length of the testing period
  • The number of absolute conversions
  • The number of relative conversions
  • The actual impact on revenue
  • The total impact on customer lifetime value

These, among many other variables, play a huge role in whether a test was actually successful. Many case studies report on a false positive – the test appears to be a winner now, but in reality the “win” is a temporary increase & everything returns to a neutral state a month later.
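One quick way to scrutinize a reported lift is to re-run the significance math yourself. The sketch below applies a standard two-proportion z-test to hypothetical numbers of the kind a case study might report; the counts are assumptions for illustration, not figures from any real study.

```python
# A rough sanity check for a reported A/B "win": a two-proportion z-test
# on hypothetical numbers pulled from a case study. All figures below are
# made up for illustration -- plug in the study's own counts.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return relative lift and a two-sided p-value for B vs. A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return (p_b - p_a) / p_a, p_value

lift, p = z_test(conv_a=48, n_a=1000, conv_b=62, n_b=1000)
print(f"Relative lift: {lift:.1%}, p-value: {p:.3f}")

# A ~29% "lift" on ~1,000 visitors per variation is not significant here
# (p > 0.05) -- exactly the kind of result that gets reported as a win.
```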

Via conversionxl.com
