You run an experiment every time you release a new feature, run a marketing campaign or try a new sales method.
Running experiments is a key activity for continuous innovation. But simply running a bunch of experiments is not enough.
This is how a lot of people approach experiments:
- Formulate a hypothesis.
- Run an experiment.
- Learn/refine/optimize.
But this is too simplistic: a formula for endlessly running one experiment after another without ever piercing the ceiling of achievement.
Why? Because a hypothesis is a guess. And the quality of your results directly depends on the quality of your guesses.
Garbage in = Garbage out.
This raises the question: How do you formulate better guesses? That’s the topic of today’s issue.
Here are the steps:
- Adopt a Discovery Before Experiments Mindset
- Shortlist Your Most Promising Validation Campaign(s)
- Test Your Most Promising Validation Campaign Using Small, Fast Additive Experiments
1. Adopt a Discovery Before Experiments Mindset
The challenge with rushing to formulate a hypothesis is twofold.
First, for any given problem, it’s pretty easy to guess a multitude of possible solutions.
For instance, low product retention could be due to:
- Poor UX
- Poor marketing
- Poor features or performance, etc.
As testing each solution costs time and effort, how do you prioritize? Therein lies the second challenge.
We often prioritize solutions that align with our pre-existing worldviews (or biases):
- Developers want to build more
- Designers want to design better
- Marketers want to market better
This is the curse of specialization.
A better way is to hold off on prioritizing solutions until you better understand the underlying problem, i.e., until you’ve gone beyond surface problems to root causes.
This requires embracing a discovery before experiments mindset.
Taking the requisite time for discovery helps shortlist the right type of campaign to run.
In the example above, this may involve:
- Conducting customer usability interviews
- Analyzing metrics
- Reviewing customer support tickets
2. Shortlist Your Most Promising Validation Campaign(s)
Once you better understand the problem, shortlisting your most promising solutions becomes easier.
In the example above, if you determine that the real reason people aren’t using the product is a lack of product knowledge, then instead of building more features or improving the design, adding product walkthrough videos may be better aligned with solving the root problem.
Using a hammer on a screw doesn’t solve the problem and could actually make matters worse.
You then test your most promising solution as a validation campaign.
3. Test Your Most Promising Validation Campaign Using Small, Fast Additive Experiments
How fast is fast? I recommend using 2-week sprints.
Two weeks typically isn’t long enough to test an entire campaign, but you can break any campaign into a series of smaller experiments.
The key to doing this is recognizing that the goal of every campaign is to increase traction. And traction can be deconstructed into 5 macro steps:
- Acquisition
- Activation
- Retention
- Revenue
- Referral
You can often test one or more of these steps in a 2-week timebox. This is when you formulate falsifiable hypotheses.
Examples:
- At least 100 people/week will start watching the first product walkthrough video (Acquisition)
- At least 60 people/week will watch at least 75% of the first product walkthrough video (Activation)
- At least 40 people/week will watch more than one product walkthrough video (Retention)
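
To make these pass/fail criteria concrete, here’s a minimal sketch in Python (the metric names and weekly numbers are hypothetical, not from any real analytics tool) showing how you might check one week of observed data against the thresholds above:

```python
# Minimal sketch: checking a week of (hypothetical) video analytics
# against the falsifiable hypothesis thresholds above.

# Each hypothesis: (funnel step, description, metric key, weekly threshold)
HYPOTHESES = [
    ("Acquisition", "people who start watching video 1", "started_video_1", 100),
    ("Activation", "people who watch >= 75% of video 1", "watched_75pct_video_1", 60),
    ("Retention", "people who watch more than one video", "watched_multiple_videos", 40),
]

def evaluate_week(metrics: dict) -> None:
    """Print pass/fail for each hypothesis given one week of observed counts."""
    for step, description, key, threshold in HYPOTHESES:
        observed = metrics.get(key, 0)
        verdict = "PASS" if observed >= threshold else "FAIL"
        print(f"{verdict} [{step}] {description}: {observed}/week (needed >= {threshold})")

# Example: one week of observed data (numbers are made up)
evaluate_week({
    "started_video_1": 130,        # Acquisition passes
    "watched_75pct_video_1": 45,   # Activation fails: fix video 1 first
    "watched_multiple_videos": 28, # Retention fails downstream of Activation
})
```

The ordering matters: a failing step early in the funnel tells you where to focus the next 2-week experiment before investing further downstream.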
The power of using small timeboxes is that you get faster feedback on the overall campaign. If enough people are not finding or watching the first video, adding more videos will not help!
D-AARRR-T of Experiment Design
To help you remember these steps, I devised a simple mnemonic aid: the D-AARRR-T (The Art) of Experiment Design.
Good validation campaigns should start with problems before solutions, or discovery (D) before traction (T). And since increasing traction (T) is the ultimate goal of every campaign, it shouldn’t come as a surprise that all campaigns should tie their results back to one or more customer factory metrics (AARRR).
When designing any campaign, it helps to consider these seven questions:
- Discovery: Is there an underlying problem worth solving?
- Acquisition: Are enough people interested/impacted?
- Activation: Does it deliver value?
- Retention: Do people come back?
- Revenue: What’s the impact (on revenue or some other meaningful metric)?
- Referral: Do people tell others?
- Traction: Did traction go up?
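
If you like to operationalize checklists, here’s an illustrative sketch (the function and field names are my own invention, not a prescribed tool) that encodes the seven questions and flags whichever ones a campaign design hasn’t answered yet:

```python
# Illustrative sketch: the D-AARRR-T questions as a pre-launch checklist.
# A campaign design is a dict of answers; unanswered questions get flagged.

DAARRRT_QUESTIONS = {
    "Discovery": "Is there an underlying problem worth solving?",
    "Acquisition": "Are enough people interested/impacted?",
    "Activation": "Does it deliver value?",
    "Retention": "Do people come back?",
    "Revenue": "What's the impact (on revenue or some other meaningful metric)?",
    "Referral": "Do people tell others?",
    "Traction": "Did traction go up?",
}

def review_campaign(answers: dict) -> list:
    """Return the D-AARRR-T questions a campaign design hasn't answered yet."""
    return [
        f"{step}: {question}"
        for step, question in DAARRRT_QUESTIONS.items()
        if not answers.get(step)
    ]

# Example: a walkthrough-video campaign with only the first three answered
gaps = review_campaign({
    "Discovery": "Interviews show retention drops from lack of product knowledge",
    "Acquisition": ">= 100 people/week start video 1",
    "Activation": ">= 60 people/week watch >= 75% of video 1",
})
for gap in gaps:
    print("Unanswered ->", gap)
```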