Does your team want to run more experiments but isn’t sure how to design them or where to start?
Let me explain a format for documenting and running experiments.
This is something that I researched and implemented in my last workplace (Trade Me).
This format is essentially a series of core questions that your team should answer:
- What’s the problem that you’re trying to solve?
- What’s your hypothesis?
- What results do you expect?
- What results did you actually get?
- What product decision has been made based on the result?
Let’s explore each of these questions in more detail.
1. What’s the problem that you’re trying to solve?
Every good experiment should be trying to solve a user or business problem.
You might realise that you don’t actually know what problem you’re solving. Or that you’re not even solving any problems. If so, then go back and find a good problem to solve first.
2. What’s your hypothesis?
Here is an effective way to write your hypothesis:
We bet that <target users> will <use this solution> because <of this reason>.
I have to admit, this template is not my original idea. Unfortunately, I can’t credit the original source either as I can’t remember where I found it!
With that disclaimer out of the way: the idea is that every experiment you run should be framed as a “bet”. That keeps you open-minded to the possibility that you could be wrong.
By using the above template for your hypothesis, you’re clarifying the idea for your experiment, who your target users are, and why you’ve come up with this hypothesis.
The ‘why’ is just as important as the hypothesis itself. You better have a damn good reason for thinking that your hypothesis has a chance of being a success.
If you have no evidence, research, data, or anything else to back up your hypothesis, you should rethink your hypothesis or do some more digging.
3. What results do you expect?
This one’s a simple concept but hard to define well. These are essentially your success metrics.
Ask yourself, what would need to happen for this experiment to be a success?
Be as specific as possible. E.g., a statistically significant increase in the click-through rate of the ‘Apply’ button, or 500 people completing the new application form.
4. What results did you actually get?
In my experience, quite often you don’t get the results that you want or expect.
I’ve been proven wrong so many times through experiments.
During my time at Trade Me, our data analyst pointed out that we probably get a statistically significant difference once in every 7-8 experiments. Microsoft found that they get a positive statistically significant difference in a third of their experiments, which I think is super high!
Be honest, don’t fudge any numbers. Record the exact, actual results you found through your experiment.
5. What product decision was made?
The whole experiment is pointless without this part.
With every experiment, the team should have learned something and made a decision on what should be done next.
The product decision usually comes down to:
- This was a success so we’re going to implement the change permanently.
- This was a fail and we’re going to tweak the experiment and try again.
- This was a fail and there’s nothing to salvage from this experiment. We’re going to pivot and do something completely different or pursue a completely different opportunity.
And from there the cycle of experimentation continues.
How to record your experiments
Your team can record your experiments however you want to. What matters is that they’re visible and accessible to everyone.
I personally prefer recording them on Confluence. But I’ve been in other teams where they’ve visualised it on a physical board.
Whichever medium you use to record your experiments, I recommend that you use a table format. This makes it easy to track the experiments. E.g., in the header row of the table, have “What’s the problem?” in column 1, “What’s the hypothesis?” in column 2, etc.
In addition to the basic columns (the core questions I’ve mentioned above), you can add any other columns that will help your team to track your experiments. E.g., experiment name, project name, experiment date (start date and end date), status (live, closed, coming soon), etc.
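To make this concrete, here’s a sketch of what such a table could look like. The experiment name, dates, and cell contents below are hypothetical examples I’ve made up for illustration, not real data:

```
| Experiment     | Status | Problem                  | Hypothesis                                           | Expected results                   | Actual results | Decision |
|----------------|--------|--------------------------|------------------------------------------------------|------------------------------------|----------------|----------|
| Shorter form   | Live   | Applicants abandon the   | We bet that first-time applicants will complete a    | Stat. sig. increase in form        | TBC            | TBC      |
| (1–14 Jun)     |        | long application form    | shorter form because fewer fields means less friction | completion rate                    |                |          |
```

The exact columns matter less than consistency: once every experiment answers the same core questions in the same place, it’s easy to scan the history and compare results.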
Recording our experiments has saved my butt so many times. It’s easy to forget what the results of an experiment were, why you made certain decisions, when you even ran the experiments, etc. So it’s handy to have everything on record.
I hope this is a great starting point to get your team up and running with experiments!
I’d love to hear how you go, so please keep me posted in the comments below!