If you want to improve the customer experience, you first need to understand your customers and their experiences; to do this, I typically recommend three approaches, all of which go hand in hand:
Critical to improving the customer experience is listening to customers and incorporating their data and feedback into your transformation strategy. Data-driven decisions are key to a successful customer experience transformation.
There are many different customer listening posts and just as many sources of customer data. Let’s start with some examples of listening posts, which provide not only performance data but also demographic, psychographic, diagnostic, and competitive benchmark data:
Boeing’s widening woes are a warning to every communicator tasked with creating or sharing company purpose.
The headline in today’s New York Times says it all: Cascading Crisis Reveals ‘Sick’ Culture at Boeing. Recently revealed internal documents show employees regularly cutting corners, dissing one another and insulting customers, feeling remorse for having deluded regulators and, above all, obsessing about meeting deadlines and budgets.
A still life painting was supposed to capture a moment in time, something that we’d photograph if only the camera had been invented.
And a sauna was a Nordic way to simulate a warm afternoon at the beach.
But an artistic photograph isn’t supposed to simply be a snapshot. It has to add more than that.
And a veggie burger is not simply a pale imitation of a meat burger. It can be something better.
The problem with faux is that it’s not enough.
Why is the new Dolittle movie so bad? Savaged by critics and viewers, it had:
I think the best way to understand why it failed is to look at the reasons above. Ironically, it’s these assets and lack of constraints that created the circumstances that allowed the movie to become a turkey.
Too many meetings.
Too many self-important voices around the table.
And most of all: No one who cared enough or was bold enough to stand up and say, “no.”
That would have been enough. If at three or four critical moments in the development of the project, someone had stopped the assembly line until the work was good enough to proceed, everything would have been better.
Sometimes, the investments we put in place to avoid mediocrity are the very things that cause it.
There’s always someone who is more willing to play the short-term game than you are.
Someone who is willing to cut more corners, send a more urgent text, borrow against the future, ignore the side effects, abuse trust and corrupt the system–somehow justifying that short-term hustle with a rationalization (usually a selfish one) about how urgent it is.
On the other hand…
There’s plenty of room to win as someone who takes a longer view than the others.
Here are some updates from a busy week:
The Real Skills Conference happens today at 1 pm Eastern. It’s a two-hour virtual conference. Registration closes today at 10 am. Check it out here.
My post about Google’s broken promo folder received more than 1,000 responses, sharing details and insight about the hassle it’s causing. The team at Gmail has access to the doc… not sure if they’ve responded to anyone though.
The Marketing Seminar has less than a week before enrollment closes.
This week, Blinkist launched a series of two-minute-long podcasts I recorded for them.
And a new video, three things we’ve learned from the altMBA.
All too often, checking your credit score becomes one of those chores you put on the back burner. You know the ones. Like cleaning the back of the fridge and dusting ceiling fans, it’s just not something you remember to do regularly.
But there’s a lot more riding on your score than some bad leftovers you’d rather forget. Here are three reasons why you want to check your credit.
Checking your credit may protect your credit score from the effects of identity theft.
How? Well, a quick peek at your report will show you any account that gets reported to a credit bureau.
Typically, it includes things like mortgages, student loans, or credit cards — all the usual suspects. But if you’re a victim of fraud and identity theft, it may reveal a personal loan or line of credit you didn’t apply for!
Given the chance, a thief will max out these accounts, and you can count on them skipping the bill.
This may lead to bad entries filling your report and dragging down your score.
Something fishy in your file doesn’t always mean your personal information has been hacked. It could just mean there’s been an error when a financial institution reports your account to a bureau.
It may be a simple filing mistake caused by human error, or a mix-up in an automated step of the reporting process.
Checking in with your consumer file often will help you catch these errors before they impact your history.
When checking for errors, keep your eyes peeled for the following issues:
Financial institutions may check your consumer file before granting you a personal loan or line of credit. What they see usually helps them determine whether they’ll lend you money and at what rates and terms.
Checking your file before a big purchase when you know you’ll likely need to borrow money is a good idea. It lets you know what financial institutions will see, giving you a good idea of your chances of being approved.
If you have a high score, you’ll likely have a greater selection of products. You’ll also have a better chance of accessing the best rates and terms.
A lower score may shrink your options and make the available rates and terms more expensive.
By knowing which of these two categories you fall into, you can save a lot of time and frustration during the borrowing process. You’ll know which options you have, so you aren’t applying for things you can’t get.
Checking your consumer file may serve as a much-needed wake-up call. If it’s worse than you thought, you may postpone applying for financing until you can get a handle on things.
You may not have that luxury in an emergency when you need help fast. Luckily, you may find a personal line of credit for bad credit in these situations.
Checking in on your report is an empowering move. It lets you catch errors and inaccuracies that may be impacting your score unfairly.
Request a free check right now. You get three free checks each year!
You may have heard that A/B testing isn’t a viable approach for sites that don’t have enough traffic. Or that to reach statistical significance in a timely manner, you need to have thousands of visitors a day.
I’m here to tell you that those days of A/B testing FOMO for your low-traffic site are over! Having low traffic is no longer a barrier to optimizing your site or learning about your users.
In this article, we discuss different tactics for running A/B tests on your site and reaching statistical significance without waiting hundreds of days. We’ll also look at some alternative strategies that can achieve the same thing as A/B testing: a better-optimized website.
Note: These tactics also apply to low-traffic pages. Maybe your site overall gets a ton of traffic, but there’s a specific page you’d like to optimize that doesn’t get nearly as much.
First, let’s define the term “low-traffic.” If your site or page gets a few hundred visits a day or fewer, it qualifies as “low-traffic.” Alright, now we can jump in.
The higher the MDE (Minimum Detectable Effect) of your proposed variation, the less time that will be needed to reach significance. The MDE, in simple terms, is how impactful you think the variation will be on the conversion rate.
Most A/B test duration calculators will include this field when deciding how long to run your test. See the screenshot below from VWO’s A/B Split & Multivariate Test Duration Calculator as an example.
This metric is essentially asking you, “How much of a difference do you anticipate between the variation’s results and the original’s?” The more the variation differs from the original, the higher this number should be.
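To make the relationship concrete, here is a rough sketch (in Python, using hypothetical traffic and conversion numbers) of the normal-approximation sample-size math that duration calculators like VWO’s are built on. The exact formula any given calculator uses may differ; treat this as an illustration of why a higher MDE shortens a test:

```python
# Rough A/B test duration sketch: normal-approximation sample size
# for comparing two proportions. All traffic numbers are hypothetical.
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_cr, mde, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative lift of `mde`."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + mde)          # conversion rate if the lift happens
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

daily_visitors = 200   # hypothetical low-traffic site
baseline_cr = 0.03     # hypothetical 3% existing conversion rate

for mde in (0.10, 0.25, 0.50):  # 10%, 25%, 50% anticipated relative lift
    n = sample_size_per_variant(baseline_cr, mde)
    days = ceil(2 * n / daily_visitors)  # control and variant split the traffic
    print(f"MDE {mde:.0%}: {n} visitors per variant, ~{days} days")
```

With these made-up numbers, moving the MDE from 10% to 50% cuts the required sample per variant by more than an order of magnitude, which is the whole argument for testing big changes on low-traffic sites.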
This is one of the most common and impactful tactics for testing on low-traffic sites: creating a variation that is vastly different from the original. On high-traffic sites, you can test CTA button copy or small elements one at a time. With low-traffic sites, you don’t have that luxury.
You’ll want to make big changes to the page, or changes that you anticipate will make a big difference in user behavior. Alternatively, you can make a bunch of small changes. The more design or copy changes you make to the page, the higher you can expect that MDE to be.
Jay Lee, Experimentation Program Lead & Web Analytics Consultant at Microsoft, outlines some “big changes” you can apply to your experience:
“For low-traffic sites, it’s all about making larger changes that will allow you to observe a difference. For example – changing up pricing, offers, etc. Those make big differences.”
VWO explains how to determine what a “big change” looks like for your site:
“Understand their [your user’s] concerns. Know what primary factors they consider before taking an action on your site. For a funky clothing site which targets teenage and college students, pricing and free shipping can be very important. For a luxury-clothing brand that focuses on high-end celebrities, 1-day shipping guarantee or exclusive collection section on the site might be high-impact.”
Some examples of “big changes” include:
Michael Wiegand, Director of Analytics at Portent, gives a great guideline for determining what a “big change” looks like:
“What I’d focus on would be things that capture the eye in the first split seconds a user sees the page: Headlines, Hero Images, CTA Buttons. If you can’t squint your eyes and notice the thing you’re testing, even with few variations, it probably isn’t going to be effective on a low-traffic site.”
Alternatively, if you aren’t sure what “big change” to make, you can make a large number of small changes. In this case, you can be narrower in your optimization ideas. The only drawback here, as with any A/B test where you make more than one change, is that you won’t know which change led to any performance difference.
You will always be balancing “optimizing” and “learning” when it comes to Conversion Rate Optimization. But there are other ways for low traffic sites to get more of those “learnings” without running an A/B test, which we outline later in the article under “Recommended Alternatives to A/B Testing.”
If you have a high-traffic website, we recommend always testing more than one variation against the original. Unfortunately, for low-traffic sites, we recommend the opposite.
It’s tempting to test all the different optimization ideas you have to solve a single problem. But with every variation you add to your test, the time to reach statistical significance also increases.
As we will touch on later, you can utilize usability testing to determine which variation to move forward with. While this might cost you some extra resources, it will save you much more time compared to running the test with all of the variations.
These are also known as “micro conversions.”
One factor when plugging those metrics into A/B testing duration calculators is the existing conversion rate. The higher this conversion rate is, the less time you will need to reach significance.
The lower in the funnel you go, the lower conversion rates get. Conversely, as you go higher in the funnel, those conversion rates increase. Therefore, by running tests that target higher-funnel KPIs, you are more likely to reach significance in a timely manner.
The best example here is for e-commerce. Beyond “add-to-cart” and “checkout complete” KPIs, look at the micro conversions that naturally come before that. This can include things like product searches, product detail page views, or other engagement metrics that happen well before an end purchase or conversion.
If you can determine the current conversion rates for these higher-funnel KPIs and their impact on the end conversion, you can still calculate the potential impact of your tests on the end conversion.
For example, if you know the conversion rate of users who view a product page and you also know the rate of people who view a product page then make a purchase, then you can directly anticipate the impact your test will have on the bottom line. In this case, the end conversion would be a secondary KPI you’ll want to measure.
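As a hypothetical worked example (every rate below is made up), the funnel math looks like this. Note it assumes the view-to-purchase rate holds steady, which is exactly why the article recommends measuring the end conversion as a secondary KPI:

```python
# Hypothetical funnel numbers illustrating how a lift in a higher-funnel
# KPI translates into an estimated bottom-line impact.
visit_to_product_view = 0.20   # 20% of visitors view a product page
view_to_purchase = 0.10        # 10% of product-page viewers buy

# Overall purchase rate is the product of the funnel steps: 2%.
baseline_purchase_rate = visit_to_product_view * view_to_purchase

# Suppose the test lifts product-page views by a relative 15%.
observed_lift = 0.15
projected_purchase_rate = (
    visit_to_product_view * (1 + observed_lift) * view_to_purchase
)

print(f"Baseline purchase rate:  {baseline_purchase_rate:.2%}")
print(f"Projected purchase rate: {projected_purchase_rate:.2%}")
```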
Reaching statistical significance isn’t the only signal of your variation outperforming the original, especially when you are looking to reach that 99% mark.
When you don’t have the luxury of large sample sizes to get there, you will have to treat these methods more as guidelines than hard-and-fast rules.
For example, Optimizely gives you the option to lower the statistical significance level at which it declares a winner. So if you aimed for about 70% or 80% significance, you would need a much smaller sample size than going for 99%.
Bhavik Patel, Head of Conversion at Teletext Holidays and Founder of CRAP Talks, describes the type of approach you should take in regards to A/B testing on low-traffic sites:
“You’ve got to level the playing field by taking risks on tests which are bolder, loosen the reins on the statistical rigor of your analysis and not be afraid to cut your tests short (you need the traffic and don’t have time to “wait and see”).”
For the more statistically-savvy testers, you can use a completely different statistical measurement method when calculating significance for small sample sizes. There are arguments for different methods, but the ones we recommend are below (I’ve linked to calculators for each one for you):
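The original calculator links aren’t reproduced here, but as one illustration of a small-sample method, Fisher’s exact test can be computed directly from a 2x2 table of conversions using only the standard library. This is a sketch of one commonly suggested small-sample approach, not necessarily one of the methods the linked calculators implement:

```python
# Fisher's exact test (two-sided) for a 2x2 table, from scratch.
# Exact tests like this avoid the normal approximation, which is
# unreliable when conversion counts are small.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """p-value for the table [[a, b], [c, d]], e.g.
    [[control conversions, control non-conversions],
     [variant conversions, variant non-conversions]]."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x):  # hypergeometric probability of x conversions in row 1
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Sum the probabilities of all tables at least as extreme as observed.
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Hypothetical small sample: 12/100 conversions vs 25/100 conversions.
p = fisher_exact_two_sided(12, 88, 25, 75)
print(f"two-sided p-value: {p:.4f}")
```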
Another tactic to get around statistical significance is to use confidence intervals to determine a winner.
Per Optimizely, a confidence interval (or “difference” interval) shows you the range of values that likely contains the actual (absolute) difference between the conversion rates that would show up if you were to implement that variation.
The article goes on to explain, “A useful and easy risk analysis that you can do with a difference interval is to report best case, worst case, and middle ground estimates of predicted lift by reading the upper endpoint, lower endpoint, and center of the difference interval, respectively.”
Below is a screenshot from Google Optimize of a test we ran for a client’s landing page headline. The highlighted region represents the confidence intervals. None of these variations has reached 99% significance, but you can see the range for the last three variations’ confidence intervals is much higher than the original’s. Even though we didn’t reach significance, we could potentially call this test in favor of those variations based on these confidence intervals.
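If your testing tool doesn’t surface difference intervals, a normal-approximation version can be sketched in a few lines. The visitor and conversion counts below are hypothetical:

```python
# Normal-approximation confidence interval for the difference between
# two conversion rates: the "worst case, middle ground, best case"
# reading described above.
from math import sqrt
from statistics import NormalDist

def difference_interval(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """Returns (worst, middle, best) estimates of rate_b - rate_a."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    diff = p_b - p_a
    return diff - z * se, diff, diff + z * se

# Hypothetical test: 40/1000 conversions vs 60/1000 conversions.
low, diff, high = difference_interval(40, 1000, 60, 1000)
print(f"worst case {low:+.2%}, middle {diff:+.2%}, best case {high:+.2%}")
```

Here the entire interval sits above zero, so even without a formal significance call, the worst plausible case is still a small improvement, which is the kind of risk analysis the Optimizely article describes.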
Another way to get around waiting on statistical significance is to measure your test by sessions and not by users. The majority of the A/B testing calculators and tools out there (e.g., Optimizely and VWO) measure significance based on unique visitors. This means that the test treats each person as a participant in the sample group.
If you open this metric up to sessions, it will include instances where the same user visits the experiment two or more times. This strategy increases your sample size and reduces the time to reach statistical significance.
This approach is more appropriate for experiments that would only impact a user’s behavior within a single session versus their entire experience on the site. Therefore, some of those big changes we referred to before may not be applicable in this scenario.
These recommendations can benefit all conversion rate optimizers out there, regardless of how much traffic comes to your site. They just happen to be great alternatives if you don’t have the traffic to test a specific optimization idea you have.
Moderated and unmoderated user tests are great ways to understand the “why” behind a person’s decision making. Have them navigate the experience you want to optimize and narrate their thoughts as they go through it. You only need five to eight participants to start to identify any patterns that would help improve on-site experience.
Design surveys are another beneficial approach to learning how you can improve your site’s experience. Simply present your page to a panel of users and ask them probing questions such as, “What’s missing? What’s distracting? Is there anything we could add here that would help you make a decision?”
Design preference tests are similar to design surveys, except you are showing them multiple options and asking which one they prefer. This is where you can decide which variation to use if you are stuck on choosing from a group of them.
Some tools we use at Portent for user research include:
User research has proven to be one of the most powerful tactics when coming up with ideas to optimize website experiences.
In 2019, Portent’s A/B tests that were aiming to solve a problem that was discovered through user research had a winning rate of 80.4%. For comparison, the average winning rate for all A/B tests was 60%. This highlights the idea that tests derived from user research projects have a higher chance of improving your experience, versus only using other methods in isolation, such as data or heuristic analyses.
A heuristic analysis is when someone reviews a page and makes recommendations on how to improve the experience through user experience (UX) and conversion rate optimization (CRO) principles or best practices.
But you don’t need to be an expert to do this.
Alex Abell, Conversion Optimization Expert and Founder of Lunchpool, provides two different approaches you can follow to perform your own heuristic evaluation:
“For low traffic websites, one of the most useful techniques I’ve found is to employ the use of heuristic analysis to make sweeping changes to the site. We call this a “radical redesign.” Two of the most popular heuristics to use are that of MarketingExperiments.com (C = 4M + 3V + 2(I – F) – 2A ) and WiderFunnel’s LIFT methodology. Although these formulas look complex, they are really just a mental shortcut that allows you to systematically look at your website through the eyes of your potential customer.”
Having low traffic is no longer a barrier to optimizing your site or learning how your users engage with it. Some tactics for successful A/B testing on low-traffic sites include:
There are also some great alternatives to A/B testing, which can benefit all conversion rate optimizers out there regardless of how much traffic you get to your site. You can try things such as:
The two goals of A/B testing are to optimize your website’s performance and learn more about your users.
Referencing the tactics in this article will empower your (or your client’s) low-traffic site to achieve both of these goals. No more FOMO for low traffic sites. Get out there and start testing!
If you find yourself stranded in the desert with nothing but an endless supply of chips, you’re going to die within a week.
The same thing could happen to you if you had nothing but water to live on. Hunger and thirst are similar, easily confused but very different.
Our culture of corporate consumption tries to persuade us that being hungry is all we need. Hungry to earn more, buy more, save more, spend more. It celebrates the hustler who doesn’t know how to stop, asserting that this person is getting all the fancy prizes because they’re contributing so much. Status is awarded to the unsated hungry person.
But they might still be thirsty. Thirsty for meaning and connection. Thirsty for the satisfaction of creating beauty. More hustle won’t satisfy those needs.