A/B Testing and Growth Marketing

Ayman Samir
4 min read · Sep 24, 2021

I have always wondered how companies create tests, what they test, and how they lift their KPIs. This week I took a deeper dive into A/B testing with CXL.

Proper A/B testing can make your decisions more valid, eliminate assumptions, and improve your performance metrics. But there are some pitfalls that can occur when conducting A/B tests.

Here are some common pitfalls you might encounter:

1. Running A/B tests without having enough data.
2. Not using the right statistics.
3. Not knowing whether the experiment is valid.
4. Not knowing when to call a winner.
5. Not understanding what you are actually testing.

At the end of the day, it’s all about Key Performance Indicators: how much did you accomplish? Did you achieve your goal? Which of your campaigns worked, and why? Answering these questions can be frustrating and requires a lot of effort, which is why agencies tend to avoid them while a campaign is still running.
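Pitfalls 2 and 4 in particular can be tamed with a basic significance check. As a rough sketch (the conversion counts below are made-up numbers, purely for illustration), a two-proportion z-test in Python looks like this:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: control converts 100/1000, variant converts 130/1000
z, p = two_proportion_z_test(100, 1000, 130, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Here the p-value comes out below 0.05, so you could call the variant a winner at the conventional significance level. The key discipline is deciding the sample size up front and not "peeking" at the p-value early, which is exactly how winners get called too soon.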

But which Key Performance Indicators should be optimized? There are five levels of Key Performance Indicators that you can optimize. These levels, from least to most important, are clicks, revenue, transactions, revenue per user, and, at the top of the pyramid, potential lifetime value. Clicks don’t really need A/B testing, because you can usually optimize them without much change or testing.

Revenue is the second least important, after clicks. At a low transaction volume, optimizing clicks and revenue might be the right goal, but once you have a healthy transaction volume, it becomes a waste of time to work on clicks and revenue while ignoring transactions.

How does A/B testing differ between mature and immature companies? Mature businesses focus on the top levels, i.e. transactions, revenue per user, and potential lifetime value, while immature businesses work on clicks and revenue.

One of the most important elements of A/B testing is data. But how can you get data, what kind of data should you aim for, and on what scientific basis does the process begin? There is a scientific method you can follow to keep the whole process rigorous. First, you start with an observation; then you ask interesting questions and move on to formulating a hypothesis. Based on the hypothesis, you develop a testable prediction, then collect data, and finally develop general theories.

Another important element is the 6V conversion framework: Value, Versus, View, Validated, Verified, and Voice. Starting with Value: knowing your company’s mission, strategy, and long- and short-term goals is the core of value. Without that core knowledge of your business, you cannot achieve results. Second is Versus, which means looking at your competitors and studying the market. Know what your competitors are focused on, try to figure out their key performance indicators, and study how they communicate. View is about analytics: where are your visitors coming from, where do they start on the site, how do they behave on the pages, and what is the flow of those visitors?

Any kind of data you can capture on the website is useful for A/B testing and hypothesizing. Moving on to Validated and Verified: these mean knowing which findings have been validated in previous testing and what kind of research still needs to be done. Finally, Voice means listening to your customers, conducting surveys and questionnaires to learn more about them. You can set up online forums to get feedback, use social monitoring tools, or track hashtags on social networks to capture the voice of your customers.

How can you prioritize your A/B testing? There are 4 main elements to consider: potential, impact, power, and simplicity. It’s all about asking questions and finding the answers that will get you to the right test. Potential: what is the chance that the hypothesis is true? Impact: where would this hypothesis have a big effect? Power: what is the chance of finding a significant result? Simplicity: how easy is it to test and implement? Answering all of these questions can lead to more efficient testing and a simpler process when going through A/B testing, from hypothesis to implementation to outcome.
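The "power" question above can be made concrete before a test even starts by estimating how many visitors you need. Here is a minimal sketch using the standard two-proportion sample-size formula; the baseline and target conversion rates, the 5% significance level, and the 80% power are all illustrative assumptions:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_base, p_target, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a lift from p_base to p_target."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for a two-sided 5% test
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return ceil((z_alpha + z_power) ** 2 * variance / (p_base - p_target) ** 2)

# Hypothetical goal: lift conversion from 10% to 12%
n = sample_size_per_variant(0.10, 0.12)
print(n)  # a few thousand visitors per variant
```

If the answer is more traffic than your site gets in months, the hypothesis scores low on power no matter how promising it looks, which is exactly the kind of trade-off this prioritization framework is meant to surface.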

Ultimately, A/B testing is about eliminating your assumptions and improving your performance metrics. It can be used to improve business goals and advertising objectives, and many roles can benefit from it. A product manager can use it while working on a product to figure out which version works best for the client, and digital agencies can use it to optimize their performance throughout the process. How do digital agencies work, and how can that work be improved? Usually it all starts with the client giving the account manager a brief about the campaign or content.

The account manager then passes that brief on to the copywriters, who start writing ideas for the campaign and begin the first step of the content calendar. Once the copywriter has completed the ideas, it is the designer’s job to draft them and turn them into visuals; this is the second part of the calendar. The calendar is then shared with the client for approval before being sent to the community manager to post on their channels on the specified dates. After the posts are shared, the media planner starts boosting the content, and the community manager creates a report on the performance, which is the last part. How can this process be improved with A/B testing?

The process can be improved by, first, drawing on what has been learned over the last few months rather than just relying on the brief from the client. Second, the copywriter can create two ad sets based on a hypothesis to test which one works best for the client. They can even reach out to the designers to create two sets of visuals, to see which one resonates with consumers and gets more engagement. Finally, the media buyer can test two sets of audiences when boosting, to see which one works best for the account. All of these parts can be implemented at the same time or separately, depending on the hypothesis you set up.
