In a recent post here, I described the Biglight Mobile Benchmark – a simple quadrant that allows any retailer to understand their own mobile conversion rate performance compared with the wider retail market. It has proven very useful as a visual and immediate way to answer two important questions: “How big is our mobile optimisation challenge?” and “How urgently do we need to start tackling it?”
For many retailers it has also proven a useful way to secure the budget required to invest in their mobile optimisation programmes.
In many ways, of course, that is the easy bit. Actually improving conversion rates on mobile can be much more challenging – and, for lots of ecommerce people I have spoken to, the next question is “Where do we start?”.
That is where journey mapping and micro-conversion benchmarking are so important – they underpin a focused process that ensures every penny invested in optimisation is spent where it is most likely to make a difference. What’s more, focusing on the ‘big bets’ enables rapid deployment, so the return on investment is realised sooner too.
Here’s how it works – and we know it works, because this is a process we’ve been through with a number of clients this year.
Understand the journey
It is a statement of the obvious to say that it is hard to optimise the mobile experience for your customers if you don’t understand user needs and the user journey.
As it happens, shopper behaviour on mobile is quite specific. Shoppers are pretty single-minded about getting to the product quickly.
Some of the insights from a recent, large-scale study we did at Biglight back that up:
- People choose the easiest route – to get to product quickly
- They bypass or ignore content they consider to be irrelevant
- Users invest considerable time filtering to refine their selections
- There is strong interest in interacting with the image gallery
- Product descriptions are ignored if they are overwhelming
- Once into the checkout, there is genuine intent to complete the purchase.
That combination of single-mindedness and an obvious preference for native device functionality – mobile users want to pinch, swipe and so on – has implications for the way retailers should assess the mobile experience they are delivering.
It’s actually quite a simple journey – a journey of two halves, each with its own thread of quite distinct characteristics. We refer to these as the ‘browse-to-basket’ and ‘basket conversion’ journeys.
Browse-to-basket journey
This part of the journey covers all browsing activity and has as its key success metric the proportion of users entering the site who go on to view the basket page (with something in it), which we call the “browse-to-basket ratio”.
We’ve found this is a reliable indicator of how well the site is performing in its core roles of helping users find products they are interested in, engaging them and motivating them towards purchase.
It also represents the total available pool from which overall site conversions are derived. Clearly, if the browse-to-basket ratio is 3%, total site conversion will only ever be a proportion of this, never more.
Finally, it’s reasonably simple to track on most websites, so it facilitates comparison from site to site. Right now, ‘good’ is a ratio of close to 10% and anything below 5% is a worry. But there are other important steps, or micro-conversions, along the way.
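As a rough illustration of the arithmetic (the session counts below are hypothetical examples, not Biglight data), the ratio is simply basket viewers divided by site entries:

```python
# Hypothetical month of mobile traffic – illustrative numbers only.
mobile_sessions = 250_000          # sessions that entered the site on mobile
basket_view_sessions = 17_500      # of those, sessions that viewed a non-empty basket

browse_to_basket_ratio = basket_view_sessions / mobile_sessions
print(f"Browse-to-basket ratio: {browse_to_basket_ratio:.1%}")  # 7.0%

# Rough read against the benchmarks mentioned above:
# close to 10% is 'good', anything below 5% is a worry.
if browse_to_basket_ratio >= 0.10:
    print("Strong browse-to-basket performance")
elif browse_to_basket_ratio < 0.05:
    print("Browse-to-basket is a priority optimisation area")
else:
    print("Room for improvement in the browse journey")
```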
In simple terms, the journey here runs from site entry, through product listings and product details pages, to adding an item to the basket.
The principal challenges in this part of the journey are to focus navigation, merchandising, content and search efforts on getting users to a product listings page (PLP) with a minimum of fuss and then, once there, ensuring that filtering is simple, relevant and fast, so that users can progress to product details pages (PDPs).
Once users get to the PDPs, they’ll spend time there. They’re keen to interact with images and reviews and will consume other relevant content if it’s brief and easy to engage with – in fact engagement with relevant content increases conversion and average order value (AOV).
Overall, in the browse-to-basket journey there is a clear, direct correlation between time on site and micro-conversions. In other words, the challenge is to keep users engaged by getting them to product quickly and then making sure the product pages really deliver.
Basket conversion journey
In the second part of the journey, we’re interested in the proportion of users who viewed a basket and subsequently went on to complete a purchase – we call this the “basket conversion ratio”.
Unlike the browse-to-basket journey, which is about engagement and motivation, the basket conversion journey is largely about retention or completion, so success is measured in terms of users who progress from step to step.
We measure this part of the journey from the basket, rather than from the start of the checkout process, to allow us to make meaningful comparisons between sites with different checkout structures and those that provide different experiences for users who are logged in. Broadly speaking, though, a typical journey runs from the basket, through the welcome page, and on through the individual checkout steps to order completion.
Typically, there is an initial drop-off between the Basket and Welcome page, as those using the basket for other purposes (such as in preparation for a store visit), or who have decided not to buy, leave the site. As you might expect, this can be significant, but it varies considerably between retailers and presents opportunities for optimisation.
Once users enter the checkout process proper, though, there is evidence of genuine intent to complete it, and extremely high micro-conversions of 95%+ between the individual steps in the journey are possible – provided the flow is intuitive and easy to use.
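To make the step-to-step measurement concrete, here is a minimal sketch – the step names and session counts are made up for illustration – that computes the micro-conversion between adjacent checkout steps and flags anything well below the 95%+ level described above:

```python
# Hypothetical counts of sessions reaching each checkout step (illustrative only).
# Note the basket-to-welcome drop includes users who never intended to buy,
# so it is expected to sit well below the in-checkout steps.
checkout_steps = [
    ("Basket", 10_000),
    ("Welcome", 6_200),
    ("Delivery details", 5_900),
    ("Payment", 5_500),
    ("Order confirmation", 5_300),
]

for (step, users), (next_step, next_users) in zip(checkout_steps, checkout_steps[1:]):
    micro_conversion = next_users / users
    flag = "" if micro_conversion >= 0.95 else "  <-- potential struggle point"
    print(f"{step} -> {next_step}: {micro_conversion:.0%}{flag}")

# Overall basket conversion ratio: completed orders as a share of basket views.
basket_conversion_ratio = checkout_steps[-1][1] / checkout_steps[0][1]
print(f"Basket conversion ratio: {basket_conversion_ratio:.0%}")
```

Applied to real analytics data, the same calculation quickly shows which steps sit below that 95% mark – and therefore where usability testing should focus.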
In this part of the journey there is an inverse correlation between time and conversion – at the risk of stating the obvious, make it quick and easy, and you’ll convert more. Indeed, unsuccessful steps – where users do not convert – take up to twice as long as successful ones, which provides clear evidence of user struggle that can be addressed.
Even so, high conversion rates are not always realised, as the benchmarking data I’ve included in the next section demonstrates – across the eight retailers included for illustration, the average basket conversion ratio is less than 40%, and individual rates range from 26% to 61%.
It may be obvious, then, but retailers still have significant opportunities to improve their mobile checkout experiences.
Benchmarking mobile journey micro-conversions
Understanding how the journey works is one thing, but the crucial thing here is to see clearly how your own site is performing – to see where the micro-conversion issues are and identify the big optimisation opportunities.
That’s where the more detailed picture behind the Biglight Mobile Benchmark comes in. As the table below demonstrates, the data enables comparison at pretty much every step of the journey – the overall browse-to-conversion journey expressed in terms of micro-conversions, or the proportion of users who move from one step to the next.
Like any benchmark, it’s a really useful starting point – a way to zero in on where the problems and opportunities are. For instance, in the case of retailer 6 (below), a 10% browse-to-basket ratio is followed up by a disappointing 29% basket conversion ratio. No prizes for guessing where the priority optimisation jobs are…
Retailer 5, meanwhile, has the opposite problem.
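To see how the two ratios interact, the sketch below simply multiplies them to approximate overall mobile conversion. Retailer 6’s figures are the two quoted above; retailer 5’s are placeholders consistent with the ‘opposite problem’ described, not the actual benchmark data:

```python
# Illustrative ratios only – retailer 6's come from the example above,
# retailer 5's are placeholders standing in for the real benchmark figures.
benchmark = {
    "Retailer 5": {"browse_to_basket": 0.04, "basket_conversion": 0.55},
    "Retailer 6": {"browse_to_basket": 0.10, "basket_conversion": 0.29},
}

for name, r in benchmark.items():
    overall = r["browse_to_basket"] * r["basket_conversion"]
    print(
        f"{name}: {r['browse_to_basket']:.0%} browse-to-basket x "
        f"{r['basket_conversion']:.0%} basket conversion "
        f"= ~{overall:.1%} overall mobile conversion"
    )
```

Because overall conversion is the product of the two ratios, the bigger lever is usually the weaker half of the journey – which is exactly what the benchmark makes visible.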
OK, but what next?
This kind of benchmarking may only be the start, but it is a positive one. It offers a really simple, useful answer to the “Where do we start?” question, and ensures that the ‘what next?’ – the optimisation programme – delivers results fast.
It does that by ensuring the starting point is a clear focus on big opportunity areas, a focus that can be further honed through usability testing – centred on high-priority steps in the journey – to ensure the ‘why’ behind the data is understood and can be acted on.
Crucially, that means that the final stage – A/B testing alternative approaches to each of the priority areas – is sure-footed and confident, based on real insight, not guesswork. It also means that bigger, more extensive alternatives in each area (based on best-practice prototypes) can be tested with confidence that they will succeed.
That in turn enables rapid deployment – through an experimentation roadmap that links parallel optimisation streams to specific, measurable goals, and moves each from preparation to results in weeks and months, rather than requiring long development cycles. In a market where mobile is taking over, that mix of certainty and pace could make the difference between success and failure.