Experimentation and validation are necessary to de-risk your innovation process – but the methods are not bulletproof. After collaborating with and running validation for multiple global business leaders, we’ve put together a list of the most common experimentation mistakes and how to avoid them.
Preventing unsuccessful experiments
1. Think measurables
When outlining an experiment, it’s good to focus on the ‘what’, but it’s just as vital to define how you’ll measure the outcome. A North Star Metric is a single metric you set at the beginning of your experiment; it acts as your guide when gauging outcomes against what you wish to achieve.
WHY WE DO THIS
You want to know where you’re going before you start walking. A descriptive set of milestones can help you sketch what this metric will look like. You can use our Experiment Card to outline a North Star Metric based on your own validation goals.
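To make this concrete, here’s a minimal sketch of what writing a North Star Metric and its milestones down can look like. The ExperimentCard structure, field names, and values below are our own illustration for this post, not the Experiment Card template itself, so treat them as placeholders for your own validation goals.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentCard:
    """A minimal experiment outline: one guiding metric plus the
    milestones that describe what progress towards it looks like."""
    hypothesis: str
    north_star_metric: str              # the single metric the experiment is judged on
    milestones: list[str] = field(default_factory=list)

# Illustrative values only -- swap in your own validation goals.
card = ExperimentCard(
    hypothesis="Small-business owners will sign up for a paid invoicing add-on",
    north_star_metric="sign-up conversion rate on the landing page",
    milestones=[
        "Landing page live and receiving traffic",
        "200 unique visitors reached",
        "Conversion rate stable enough to read a trend",
    ],
)
print(card.north_star_metric)
```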
2. Define clear success criteria
Setting clear success criteria is another vital step some innovation teams leave behind. Essentially, this step is about agreeing on the values your North Star Metric must reach for the experiment to count as a success or a failure. To set it in motion, you’ll also need the willingness to kill or pivot your validation methods.
WHY WE DO THIS
Setting up the right criteria throughout all phases of experimentation will help you determine if, and why, an experiment is considered a success. Make sure to design your North Star Metric with all team members to bring clarity to your process.
Bonus tip! Most information can be reused. Data from past experiments often serves as a benchmark for future validation processes. Don’t despair if you’re working towards your first experiment: a Google search can often surface existing benchmarks you can adopt as your own.
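To make the kill-or-pivot decision less abstract, here’s a minimal sketch of success criteria written down as code. The 2% and 5% thresholds are purely illustrative; in practice they’d come from your own benchmarks, or from the borrowed ones mentioned in the bonus tip above.

```python
def judge_experiment(conversion_rate: float,
                     kill_below: float = 0.02,
                     success_above: float = 0.05) -> str:
    """Compare the North Star Metric against success criteria that were
    agreed on *before* the experiment started (thresholds are illustrative)."""
    if conversion_rate >= success_above:
        return "success: double down on this direction"
    if conversion_rate < kill_below:
        return "kill: the signal is too weak to justify further investment"
    return "pivot: partial signal, rework the offer or audience and re-test"

# Example: 3.1% of visitors converted during the test window.
print(judge_experiment(0.031))  # -> pivot
```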
3. Don’t rush into it
Control is one of the many advantages of digital validation experiments: they can be conducted quickly, enabling teams to gather consumer or end-user data in a matter of days. But like all good things, there’s a downside. More often than not, teams hurry towards the experimentation phase believing speed is the only factor to take into account. Although digital experiments usually deliver faster outcomes, rushing into them mostly results in inconclusive insights. You might’ve already noticed the connection between the North Star Metric, clear success criteria, and a thinking-before-doing approach: these first three steps to avoid experimentation malfunction are usually set in motion hand in hand.
WHY WE DO THIS
To avoid putting effort into results we can’t use, we always keep a basic data-modelling principle in mind: garbage in = garbage out. You should too. To avoid hasty mistakes and irrelevant information, our mindset is to always choose data quality over fast execution. Reverse engineering from the results you want is a good starting point for setting up these types of experiments.
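As one illustration of reverse engineering from desired results, here’s a minimal sketch that works backwards from the lift you want to detect to the sample size you’d need, assuming a simple two-proportion comparison. The 3% baseline and 5% target below are made-up numbers.

```python
from math import ceil
from scipy.stats import norm

def required_sample_size(p_baseline: float, p_target: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variant to detect a lift from p_baseline to
    p_target, using the standard two-proportion sample-size formula."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance
    z_beta = norm.ppf(power)            # desired statistical power
    variance = p_baseline * (1 - p_baseline) + p_target * (1 - p_target)
    n = (z_alpha + z_beta) ** 2 * variance / (p_target - p_baseline) ** 2
    return ceil(n)

# Example: detecting a lift from a 3% to a 5% conversion rate needs roughly
# 1,500 visitors per variant -- so a rushed two-day test with a few hundred
# visitors in total cannot give a conclusive answer.
print(required_sample_size(0.03, 0.05))
```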
4. Avoid short-term learning memory
Experiments provide consumer insights based on real market data, showing which route to take next in your project. But information saturation is never helpful. Conducting more experiments than you need can tangle the results up with other quantitative and qualitative research sources, diluting the value of the insights. Having clear takeaways and revisiting compiled data helps you avoid the common pitfall of not making the most of the information that’s already available.
WHY WE DO THIS
We avoid wasting assets, and make the most of the data we already have. It is usually not single-use.
Tool tip! On one project, we used Miro to create an experiment war room the whole team could access easily. Everyone participated in outlining experiment setups, goals, results overviews, learnings, and next steps. This gave all team members quick access to past experiment results that would come in handy later.
5. Don’t fall in love with your idea
Successful validation experiments depend on unbiased teams. What’s the point of running a test in the first place otherwise? When conducting trials, it’s important to look out for confirmation bias: the human tendency to seek out information that confirms one’s existing beliefs. A good way to keep track of assumptions is to list them and design an experiment around them. However, you have to be willing to be proved wrong! Experiments aren’t there to confirm preferences or reflect your team’s presumptions; they’re meant to give you a glimpse of what end-users and consumers actually want.
WHY WE DO THIS
Avoiding biases creates a learning-prone mindset, much needed in innovation teams. Everyone involved in these processes should be prepared to have their ideas debunked, to learn from unexpected results, to pivot based on testing outcomes, and to redefine their success criteria as often as it takes.
6. Draw a line between business rationale and desirability metrics
You might’ve heard of soft and hard key performance indicators (KPIs). When setting up validation experiments, we like to split them up. Soft metrics measure values such as impressions, reach, engagement, and sometimes even click-through rate. Despite being hard numbers, these are often called vanity metrics. Why? They make you feel good, but they don’t reflect conversion. Although these metrics do show desirability insights, they’re not where the business is at. Conversion rate, cost per sale, cost per qualified lead, and acquired customers: these are the hard metrics that reflect direct value, so they need to take the spotlight.
WHY WE DO THIS
It’s not one or the other: you can pay attention to all kinds of metrics while focusing on the ones that reflect conversion. Map out the metrics that show real end-user value at each stage of the project; pay attention to hard metrics in the viability stage, and to soft metrics when analyzing desirability. The goal is a robust analytics structure that uses all of these values the right way.
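As a rough sketch of how the same funnel splits into soft and hard metrics, the example below computes both from one set of entirely made-up campaign numbers; the function name and figures are ours, for illustration only.

```python
def funnel_metrics(impressions: int, clicks: int, sign_ups: int,
                   paying_customers: int, ad_spend: float) -> dict:
    """Split one funnel into soft (desirability) and hard (viability) metrics."""
    return {
        # Soft / vanity metrics: useful for desirability, easy to feel good about.
        "click_through_rate": clicks / impressions,
        "sign_up_rate": sign_ups / clicks,
        # Hard metrics: what the business case is actually judged on.
        "conversion_rate": paying_customers / clicks,
        "cost_per_acquired_customer": ad_spend / paying_customers,
    }

# Example: 50,000 impressions look impressive, but only the last two
# numbers say anything about viability.
print(funnel_metrics(impressions=50_000, clicks=1_200, sign_ups=240,
                     paying_customers=30, ad_spend=1_500.0))
```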
7. Avoid force-fitting tools
Tools aren’t just hype. An elaborate toolbox facilitates effective validation experiments. UsabilityHub, Phantombuster, and Umso are a few examples of tools with pretty awesome functionality for experimentation. Each tool comes with its own strengths and weaknesses, which define the scenarios it’s best suited for. That’s why it’s important to choose the tool that best fits your specific learning goals, and not the other way around.
WHY WE DO THIS
We want our capital and time investment to be as efficient as possible, and avoiding tools we don’t have a specific need for helps us get there. By creating an overview of your toolbox, your team will always be mindful of the right time to use each tool. You can also take a look at our experiment picker flowchart, a nifty tool that helps you set up the right experiments for the validation you need and makes it easier to pick your tools accordingly.
It’s go time
We experiment because validating ideas and assumptions through experimentation is what increases your product’s market fit. Take these tips into account to avoid an experiment malfunction, and you’ll have a safety net around your validation processes and a better chance of success.
Source: https://www.boardofinnovation.com/blog/why-your-innovation-experiments-fail/