Hypotheses are *Not* Just for Scientists
We subconsciously use hypotheses every day. A hypothesis is simply the identification of a particular assumption and the prediction of the expected outcome. Sound complicated? It isn’t.
All types of decisions have implicit assumptions and hypotheses behind them.
- Wanna get a degree in Art? Why not just go get that degree and find out later if it was a good idea? (Nothing against Art degrees)
- Have a new app idea? Why not just go build it?
- Have a marketing idea? Why not just go execute it?
Because those things cost time and money… And you don’t know what the outcome will be.
The goal of forming (and testing) a hypothesis is to figure out if your assumptions are valid, and to confirm that you’re doing things you should be doing (and to stop you from doing things you shouldn’t be doing). In other words, to make better decisions! That sounds like something everyone needs, not just scientists.
- Step 1: Identify your assumptions.
- Step 2: State your hypothesis.
- Step 3: Figure out how to test your hypothesis.
- Step 4: Know the risks of your test.
- Step 5: Learn, Pivot, Repeat
That’s it, and it’s really not as complicated as it sounds. Let me break it down for you.
Step 1: Identify Your Assumptions
Let’s say you want to build an app (and actually make money). What are your assumptions? For starters, you assume that people actually want your app, and that people will pay for your app.
But we have to go a bit deeper than that. It’s not enough to know whether or not people want your app; you need some idea of the actual number of people. You have to know where you’re going to find them, and how much it will cost to acquire those customers. It’s not enough that people will pay for your app; you need to know how much they are willing to spend.
You have to go deep. You have to identify all of the assumptions of your assumptions.
Step 2: State Your Hypothesis
A hypothesis is just an assumption with a prediction thrown in.
Usually they come in the form of:
If [X] happens, then the result will be [Y].
Let’s return to the app example. We want to figure out whether people are actually interested in our app. One way to gauge public interest is to build a simple landing page that takes email registrations. The landing page markets the app, and if people are interested, they’ll give out their email so they can be notified when the app is ready to purchase. Now the goal becomes 1) get people to your landing page and 2) once they’re there, get their email.

One hypothesis might be: If we advertise on Twitter, then 10% (or more) of the people who view the marketing message will visit our website. Another hypothesis might be: If people reach our landing page, then 20% (or more) of them will register. Now, keep in mind these are simple examples that are full of assumptions.
It’s also critical to state why you think the result will occur. You might actually get the result you predicted, but if you can’t explain why it happened, or if you can’t validate that it happened because of the way you thought it happened, then you can’t reasonably expect to repeat your success.
(If you want to get even more technical, you should actually determine the ‘Null Hypothesis’, and test for statistical significance (if possible), but that’s another blog post in itself. Don’t let that stop you from running tests though.)
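If you’re curious what testing for significance might look like, here’s a minimal sketch (not from the post): a one-sided test of the “20% or more will register” hypothesis using the normal approximation to the binomial. The function name and the visitor/registration numbers are made up for illustration.

```python
import math

def one_sided_binomial_test(successes, n, p0):
    """Normal-approximation z-test: is the true rate greater than p0?

    Null hypothesis: the registration rate is p0 (here, 0.20).
    Returns the z statistic and an approximate one-sided p-value.
    """
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null
    z = (p_hat - p0) / se
    # One-sided p-value via the normal CDF (math.erf is in the stdlib)
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical data: 260 registrations out of 1,000 visitors, vs. the 20% target
z, p = one_sided_binomial_test(260, 1000, 0.20)
print(f"z = {z:.2f}, p = {p:.2g}")  # a small p-value suggests the rate really beats 20%
```

A small p-value (conventionally under 0.05) means a result like 26% would be very unlikely if the true rate were only 20%.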
Step 3: Test Your Hypothesis
How do you test your hypothesis? Unfortunately, that just depends.
But you can start by determining the smallest useful version of your overall goal. For example, if you think your marketing plan will cost $100K and result in 100K registrations on your website, try to design an experiment that is a smaller version of that. How small? The smallest useful version: the least expensive and quickest version that still produces enough data to give you confidence in the results. So you might spend $10K and verify that it produces more than 10K registrants, for example.
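As a back-of-the-envelope sketch of that scaling (all numbers come from the hypothetical example above):

```python
# Scale the $100K / 100K-registration plan down to a pilot experiment
full_budget, full_target = 100_000, 100_000
fraction = 0.10  # run the smallest useful version at 10% scale

pilot_budget = full_budget * fraction                # $10,000
pilot_target = full_target * fraction                # 10,000 registrations
cost_per_registration = pilot_budget / pilot_target  # $1.00 per registration

# If the pilot's cost-per-registration is much worse than this,
# the full plan's assumptions are already invalidated.
print(pilot_budget, pilot_target, cost_per_registration)
```

The point of the pilot is that the cost-per-registration it reveals must also hold at full scale for the original plan to make sense.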
The goal, in most cases, is to validate/invalidate your assumptions as quickly and inexpensively as possible.
Get creative. If you have a new product idea, for example, crowdfunding platforms like KickStarter or IndieGoGo are a great way not only to see how many people will actually buy your product, but also to raise money at the same time, which lowers your risk because you don’t have to front the cash to build the product.
Step 4: Know the Risks of Your Test
Before we even run the tests, we should identify what the test *isn’t* testing.
For example, let’s say 100,000 people register their email. Sounds great, right? But that only tells you (in general) how many people are ‘interested’ in your app. It doesn’t tell you how many of those people are actually going to use the app. If it’s a free app, you probably have a good idea. If it’s a 99-cent app, it’s probably less accurate (but to what degree?). If it’s a $99 app, you still might not have a clue.
Step 5: Learn, Pivot, Repeat
Now it’s time to analyze the data. Does the data align with your expectations? Is there enough data for the experiment to be valid?
If 10 people visited your website and 4 people signed up, you might have doubled your goal of 20%, but that’s just not enough data to be useful.
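To see why 4 out of 10 isn’t enough, a rough confidence interval makes the point. This is a minimal sketch (the helper function and numbers are hypothetical), using the normal approximation for a proportion:

```python
import math

def approx_conf_interval(successes, n, z=1.96):
    """Rough 95% confidence interval for a conversion rate
    (normal approximation; a hypothetical helper, not from the post)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# 4 sign-ups out of 10 visitors vs. 400 out of 1,000 -- the same 40% rate
print(approx_conf_interval(4, 10))      # very wide: the true rate could easily be below 20%
print(approx_conf_interval(400, 1000))  # narrow: 40% is a trustworthy estimate
```

Same 40% conversion rate in both cases, but with 10 visitors the plausible range is so wide that the experiment tells you almost nothing.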
Keep in mind that if you do validate an assumption, that doesn’t mean that the assumption will always be valid. Things change. People change. Similarly, if your assumption is invalidated, it doesn’t mean you should give up. If not as many people registered on your website as you expected, for example, then reach out to people and find out why. Perhaps they didn’t understand what you were talking about. Perhaps you targeted the wrong people. Who knows? Find out.
Learn from the experiment, try to determine how you can change things (pivot), and then retry. Continually optimize.
Continually validate/invalidate assumptions.
You don’t have to be a scientist to get the benefit of using hypotheses. We use them every day. By knowing and using the process, however, we can be more explicit, proactive, and confident about our choices. And we can make data-driven decisions.
Written by Shane Kercheval.