According to the dictionary, ‘null’ is defined as a non-existent or empty value or as something that has no force or meaning.

Null is nothing.  Well, almost nothing, for in science null is really something.

It is really something, for it reflects a core element of both scientific method and scientific values.

You might have heard people talk about the null hypothesis in scientific research.  This is the starting contention that the independent variable under investigation – the medication, the treatment, the program, the course – makes no difference or has no effect.  There is always an alternative hypothesis but this plays a secondary role.

But the null hypothesis is much more than a starting point.

The challenge of research is to design and execute a study such that the null hypothesis can be validly and reliably rejected or retained.  You make a decision about the null hypothesis only.

It is more than semantics to realise that the alternative hypothesis is never accepted directly – support for it is simply the logical consequence of rejecting the null hypothesis.  The scientific approach is to reject the contention that there is no effect.  This distinction is very important, as a focus on the alternative hypothesis implies a search for a positive outcome.  Can you foresee the potential implications of this stance?

The null hypothesis is rejected or retained on the basis of probabilities, not a clear, decisive yes or no.  A measured difference between the treatment group and the control group is assessed against the probability of observing a difference at least that large if chance alone were operating – that is, if the null hypothesis were true.  The traditional convention is to reject the null hypothesis if this probability falls below 5%.
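To make that decision rule concrete, here is a minimal sketch in Python.  The data are invented and the independent-samples t-test from SciPy is simply an illustrative choice – the discussion above does not prescribe a particular test.

```python
# Minimal sketch of the 5% decision rule with invented treatment/control
# scores and an independent-samples t-test (illustrative choices only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=50.0, scale=10.0, size=30)    # no treatment effect
treatment = rng.normal(loc=56.0, scale=10.0, size=30)  # hypothetical +6 effect

alpha = 0.05  # the conventional 5% threshold
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis of no effect.")
else:
    print("Fail to reject the null hypothesis.")
```

Notice that the decision in the sketch is expressed entirely in terms of the null hypothesis – the alternative is never ‘accepted’.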

You can increase the standard of evidence required by dropping this 5% to, say, 1% – a difference that extreme would be expected from chance alone only about once in 100 such studies.  Even then, for a range of reasons, chance could still be the cause, which is why replication studies are important.

Conversely, it is possible to relax the standard of proof by easing it to a number higher than 5%, perhaps 10%, or by pursuing what is known as one-tailed testing.  The latter assumes that a measured effect can only exist in one direction and is something that is seldom defensible.  In this case, you are saying that the independent variable can only make things better, never worse.  This is often a statement of belief or motivation rather than evidence or science.
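The effect of one-tailed testing can be seen by running the same comparison both ways.  The sketch below again uses invented data and assumes SciPy 1.6 or later, which added the alternative argument to ttest_ind.

```python
# Two-tailed vs one-tailed testing on the same invented data
# (requires SciPy >= 1.6 for the `alternative` argument).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(loc=50.0, scale=10.0, size=25)
treatment = rng.normal(loc=54.0, scale=10.0, size=25)

two_tailed = stats.ttest_ind(treatment, control, alternative="two-sided")
one_tailed = stats.ttest_ind(treatment, control, alternative="greater")

# The one-tailed p-value is about half the two-tailed one, so a result can
# clear the 5% bar one-tailed while failing it two-tailed -- the relaxed
# standard of proof described above.
print(f"two-tailed p = {two_tailed.pvalue:.4f}")
print(f"one-tailed p = {one_tailed.pvalue:.4f}")
```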

Can you see how both approaches can really make things messy, leading to possibly invalid conclusions?  This is where method meets values in science.  And why the principles of science we have discussed before are of paramount importance.

You can be convinced of the effectiveness of your treatment or program, but you cannot let that conviction influence objective scientific assessment and its rigorous, reasoned standards of proof.

Finally, rejection (or retention) of the null hypothesis is constrained to the particular sample studied, which introduces the concept of statistical power.  This measure reflects ‘detectability’ – how able is your study to detect a true effect, given that one exists?  The most obvious influence on statistical power is sample size – as sample size decreases, the effect size must increase in order to be detected with the same degree of confidence.  Confusing, isn’t it!
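The trade-off between sample size and detectable effect size can be illustrated with a simple power calculation.  This sketch uses the two-sample t-test power calculator from statsmodels and Cohen’s d as the effect-size measure – illustrative assumptions, since no particular design is specified above.

```python
# Rough illustration of the sample-size / effect-size trade-off using
# statsmodels' power calculator for a two-sample t-test (effect sizes
# are Cohen's d; 80% power and alpha = 0.05 are conventional choices).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for n in (20, 50, 100, 400):
    # Leaving effect_size unspecified asks solve_power to find the smallest
    # effect detectable with the given sample size, alpha and power.
    d = analysis.solve_power(nobs1=n, alpha=0.05, power=0.80)
    print(f"n = {n:4d} per group -> minimum detectable effect d = {d:.2f}")
```

Smaller samples require larger effects to reach the same power, which is exactly the relationship described above.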

Strong and explicit research designs, adequate statistical power, refined measurement technologies, dedicated researchers and collegial collaboration are the safeguards against shortcuts that dilute the value of ‘null’.

Null is really something, something fundamentally important!
