Statistics is a subject built around studies and experiments. To carry out a study or survey, one first needs adequate data from which a suitable sample can be drawn. In many cases, statistical hypotheses are formed. Recall that a statistical hypothesis is a statement that is assumed to be true until tested. The null hypothesis is the statement that is retained until the evidence shows it to be wrong.
When it is found wrong, it is rejected and the alternative hypothesis is adopted. Usually, the alternative hypothesis is the opposite of the null hypothesis. To assess our hypotheses, we perform tests of significance, known as statistical significance tests. They fall into two main categories: one-sided tests and two-sided tests. Whenever a statistical significance test is conducted (an ANOVA, a correlation, a regression, or any other type of test), it produces a p-value. The p-value is then compared with the chosen significance level. If the result is statistically significant (the p-value falls below the significance level), the null hypothesis is rejected and the alternative hypothesis is accepted; otherwise, the null hypothesis is retained. Usually, when departures from a benchmark value matter equally on both sides, a two-sided test is used; when only departures in one direction matter, a one-sided test is adopted. In this page, we shall learn about the one-sided test: its definition, when to use it, and more.
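The decision rule described above can be expressed in a few lines of code. This is a minimal sketch; the function name `decide` and the example p-values are illustrative, not taken from any particular library.

```python
def decide(p_value, alpha=0.05):
    """Reject the null hypothesis when p < alpha, otherwise retain it."""
    if p_value < alpha:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"

print(decide(0.03))  # 0.03 < 0.05: prints "reject the null hypothesis"
print(decide(0.20))  # 0.20 >= 0.05: prints "fail to reject the null hypothesis"
```

Note that failing to reject the null hypothesis is not the same as proving it true; it only means the data did not provide enough evidence against it.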
A one-sided test is better known by the term "one-tailed test". It is a significance test that examines whether there is a relationship between the variables in one specified direction only, which is why it is also known as a "directional test". One-sided tests are useful when one has good subject knowledge and a difference in a particular direction is expected among the given variables.
We can say that a one-sided test should be used only if a large difference in the untested direction would lead to the same action as no difference at all. Merely expecting a difference in one direction is not an adequate justification, because statistical studies sometimes lead to results that are not expected at all.

Explanation
If the significance level (α) is chosen to be 0.05, or 5% (as it commonly is), then a one-sided test allocates the whole of it to testing for significance in one particular direction only, i.e. the full 0.05 significance level is used in testing just one tail of the distribution. Have a look at the following diagram:
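The effect of putting all of α in one tail can be seen in the critical values of the standard normal distribution. The following sketch uses Python's standard-library `statistics.NormalDist`; the numbers shown are the familiar textbook values.

```python
from statistics import NormalDist

alpha = 0.05
z = NormalDist()  # standard normal distribution

# One-sided test: the whole alpha sits in a single tail.
one_sided_crit = z.inv_cdf(1 - alpha)      # ≈ 1.645

# Two-sided test: alpha is split evenly between the two tails.
two_sided_crit = z.inv_cdf(1 - alpha / 2)  # ≈ 1.960

print(round(one_sided_crit, 3))  # 1.645
print(round(two_sided_crit, 3))  # 1.96
```

Because the one-sided critical value (about 1.645) is smaller than the two-sided one (about 1.96), a one-sided test rejects the null hypothesis more easily, but only for departures in the chosen direction.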
This type of test completely ignores any possibility of a relationship in the unspecified direction; it focuses only on the direction of interest.
One-sided tests are generally used for distributions that are not symmetric (asymmetric) about a benchmark value. Such distributions effectively have only one tail; a common example is the chi-squared distribution used to measure goodness of fit. One-sided tests can also be applied to distributions having two tails, with only one tail considered.
For example, the normal distribution is often used for the estimation of location when a particular direction of departure is specified.
Some examples of one-sided tests are given below:
1) To test whether a coin is biased towards heads (or tails) in a coin-tossing experiment, a one-sided test is used. If the test looks for a bias towards heads, obtaining "all heads" in the data would be considered highly significant, while obtaining "all tails" would not be significant at all, since the test only examines departures in the heads direction.
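The coin example can be sketched as a one-sided binomial test using only the standard library. The one-sided p-value for a bias towards heads is P(X ≥ k) under the fair-coin null hypothesis; the sample size of 10 tosses is illustrative.

```python
from math import comb

def p_heads_at_least(k, n, p=0.5):
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p) under H0 (fair coin)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 10 tosses, all heads: extremely unlikely under a fair coin.
p_all_heads = p_heads_at_least(10, 10)  # 1/1024 ≈ 0.00098, highly significant

# 10 tosses, all tails (0 heads): P(X >= 0) = 1, not significant at all
# for this direction, exactly as described above.
p_all_tails = p_heads_at_least(0, 10)

print(p_all_heads, p_all_tails)
```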
2) In a factory, a manufacturer must ensure that the rate of label errors stays below 1%. Instead of checking every label (which is expensive and time-consuming), random samples are chosen. These samples are tested to see whether the error rate is greater than 1% at the selected significance level. This is an implementation of a one-sided test.
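The label-checking scenario can be sketched as a one-sided z-test for a proportion. The sample numbers below (18 mislabeled items out of 1000 inspected) are hypothetical, and the normal approximation to the binomial is assumed to be adequate at this sample size.

```python
from math import sqrt
from statistics import NormalDist

def upper_tail_p(defects, n, p0=0.01):
    """One-sided z-test of H0: error rate = p0 against H1: error rate > p0."""
    p_hat = defects / n                     # observed error rate
    se = sqrt(p0 * (1 - p0) / n)            # standard error under H0
    z = (p_hat - p0) / se
    return 1 - NormalDist().cdf(z)          # upper-tail p-value

# Hypothetical sample: 18 mislabeled out of 1000 inspected labels.
p = upper_tail_p(18, 1000)
print(p)  # well below 0.05, so the 1% target appears to be exceeded
```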
3) Suppose the mean of a given sample is to be compared with some given value (say x) using a one-sided test. The null hypothesis would be "the mean is equal to x". With a one-sided test, we test whether the mean is significantly greater than x or significantly smaller than x, but never both at once.
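This third example can be sketched as a one-sample, one-sided z-test. The sketch assumes the population standard deviation is known (a z-test rather than a t-test), and the data values and σ = 2.0 are hypothetical.

```python
from math import sqrt
from statistics import NormalDist, mean

def one_sided_p(sample, x, sigma, direction="greater"):
    """One-sided z-test of H0: mu = x against mu > x (or mu < x).

    Assumes the population standard deviation sigma is known.
    """
    z = (mean(sample) - x) / (sigma / sqrt(len(sample)))
    nd = NormalDist()
    return 1 - nd.cdf(z) if direction == "greater" else nd.cdf(z)

# Hypothetical measurements; is the mean significantly greater than x = 50?
data = [52.1, 53.4, 51.8, 54.0, 52.7, 53.1, 52.5, 53.8, 51.9]
p = one_sided_p(data, x=50, sigma=2.0)
print(p < 0.05)  # True: reject H0 in favour of mu > 50
```

Note that the `direction` argument commits the test to one tail in advance: the same data tested with `direction="less"` would give a large p-value, not a significant one.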