9 Hypothesis testing (Part 2)
Single sample procedures
9.1 Introduction
In this chapter we will discuss specific applications of hypothesis testing where
we have a single sample of data and wish to test hypotheses regarding the value
of a population mean parameter.
We focus our main discussion on the scenario in which the random sample is
from a N(µ, σ2) distribution with µ unknown and σ2 known. The ideas are then
extended to develop hypothesis tests for (i) the mean of a normal distribution
with unknown variance, (ii) the mean of a non-normal distribution, and (iii) a
population proportion p. In cases (ii) and (iii) it is not possible to calculate the
exact distribution of the test statistic under the null hypothesis; however, we can appeal to the central limit theorem to find an approximate normal distribution.
9.2 Inference about the mean of a normal distribution when the
variance is known
Let X1, . . . , Xn be a random sample from N(µ, σ2), where the value of µ is unknown but the value of σ2 is known. We would like to use the data to make
inferences about the value of µ and, in particular, we wish to test the following
hypotheses:
H0 : µ = µ0 vs H1 : µ > µ0 .
The null hypothesis H0 posits that the data are sampled from N(µ0, σ2). In contrast, the alternative hypothesis H1 posits that the data arise from N(µ1, σ2), where µ1 > µ0 is an unspecified value of µ. This is a one-sided test.
We know that the sample mean, X̄, is an unbiased estimator of µ. Hence, if
the true value of µ is µ0, then E[X̄ − µ0] = µ0 − µ0 = 0. In contrast, if H1 is
true, we would have that E[X̄ − µ0] = µ− µ0 > 0. This suggests that we should
reject H0 in favour of H1 if X̄ is ‘significantly’ larger than µ0, i.e. if X̄ > k, for
some k > µ0. The question is, how much greater than µ0 should x̄ be before we
reject H0? In other words, what value should we choose for k?
One way to decide this is to fix the probability of rejecting H0 if H0 is true,
i.e. the probability of making a Type I error; the critical value k can then be
determined on this basis. This is equivalent to fixing the significance level of the
test. Suppose that we do indeed use X̄ as the test statistic, with rejection region
C = {x̄ > k} ,
and suppose we wish to find k > µ0 to ensure that
P(type I error) = P(reject H0 | H0 true) = α .
Hence we have that
α = P(reject H0 | H0 true) = P(X̄ > k | H0 true)
= P( (X̄ − µ0)/(σ/√n) > (k − µ0)/(σ/√n) )
= P( Z > (k − µ0)/(σ/√n) ) ,
where Z = (X̄ − µ0)/(σ/√n) ∼ N(0, 1) under H0. Let zα denote the upper α point of N(0, 1), i.e. P(Z > zα) = α. From this we see that zα = (k − µ0)/(σ/√n) and so

k = µ0 + zα σ/√n .
Thus, H0 is rejected in favour of H1 if the sample mean is greater than µ0 by zα
standard errors.
Equivalently, we reject H0 in favour of H1 at the 100α% significance level if
Z = (X̄ − µ0)/(σ/√n) > zα .
The standardized version of X̄ given by Z is the most frequently used form of
the test statistic in this scenario. The critical value zα can be obtained from
standard normal tables. In hypothesis testing it is common to use α = 0.05, and
in this case z0.05 = 1.645.
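For reference, the critical values quoted here can also be obtained from software rather than tables. The following minimal Python sketch (not part of the original notes; it assumes the scipy library is available) reproduces z0.05 and z0.025 using the standard normal quantile function.

```python
from scipy.stats import norm

alpha = 0.05
z_upper = norm.ppf(1 - alpha)          # upper 5% point of N(0, 1): about 1.645
z_two_sided = norm.ppf(1 - alpha / 2)  # upper 2.5% point: about 1.960, used later for two-sided tests
print(z_upper, z_two_sided)
```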
Suppose now that we wish to use our sample to test the hypotheses
H0 : µ = µ0 vs H1 : µ < µ0 .

This is again a one-sided test. In this case we will reject H0 in favour of H1 if X̄ < k where k < µ0. Using analogous arguments to those used above, we will reject H0 in favour of H1 at the 100α% significance level if

X̄ < µ0 − zα σ/√n ,

or, equivalently, if

Z = (X̄ − µ0)/(σ/√n) < −zα .

For a test having a 5% significance level the critical value is −z0.05 = −1.645.

If in fact our interest is in testing

H0 : µ = µ0 vs H1 : µ ≠ µ0 ,

then we now have a two-sided test. We will reject H0 in favour of H1 if X̄ is either significantly greater or significantly less than µ0, i.e. if

X̄ < k1 or X̄ > k2 .
The critical values k1 < µ0 and k2 > µ0 are chosen so that the significance level
is equal to α, i.e.
α = P(X̄ < k1 or X̄ > k2 | H0 true)
= P(X̄ < k1 |H0) + P(X̄ > k2 |H0) .
It seems natural to choose the values of k1 and k2 so that the probability of
rejecting H0 is split equally between the upper and lower parts of the rejection
region. In other words, we choose k1 and k2 such that
P(X̄ < k1 |H0) = P(X̄ > k2 |H0) = α/2 .
For illustration, see the figure below, which shows the p.d.f. of X̄ together with the rejection region.
[Figure: illustration of the two-tailed test. The p.d.f. of X̄ under H0 is centred at µ0; the rejection regions X̄ < k1 and X̄ > k2 each have probability α/2, and the central 'do not reject H0' region has probability 1 − α.]
We now find appropriate values of k1 and k2 satisfying this property. We begin
with k2. Note that
α/2 = P(X̄ > k2 | H0 true) = P( (X̄ − µ0)/(σ/√n) > (k2 − µ0)/(σ/√n) )
= P( Z > (k2 − µ0)/(σ/√n) ) , with Z ∼ N(0, 1) .
However, we know that zα/2 satisfies P(Z > zα/2) = α/2. Hence,
(k2 − µ0)/(σ/√n) = zα/2 ,

and so we have that

k2 = µ0 + zα/2 σ/√n .
For k1, observe that
α/2 = P(X̄ < k1 | H0 true) = P( (X̄ − µ0)/(σ/√n) < (k1 − µ0)/(σ/√n) )
= P( Z < (k1 − µ0)/(σ/√n) ) , with Z ∼ N(0, 1) .
We know that P(Z < −zα/2) = α/2 and so (k1 − µ0)/(σ/√n) = −zα/2. Hence

k1 = µ0 − zα/2 σ/√n .
To summarize the two-tailed test here, we reject H0 at significance level α if
X̄ > µ0 + zα/2 σ/√n  or  X̄ < µ0 − zα/2 σ/√n .

Equivalently, we reject H0 at significance level α if

Z = (X̄ − µ0)/(σ/√n) > zα/2  or  Z = (X̄ − µ0)/(σ/√n) < −zα/2 .
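As an illustrative sketch only (the function name and its `alternative` argument are our own choices, not notation from the notes), the three rejection rules derived in this section can be collected into a single Python function, assuming scipy is available.

```python
from math import sqrt
from scipy.stats import norm

def z_test(xbar, mu0, sigma, n, alpha=0.05, alternative="two-sided"):
    """One-sample z-test for the mean of N(mu, sigma^2) when sigma is known.

    alternative is one of "greater", "less" or "two-sided".
    Returns the standardized test statistic and the reject/do-not-reject decision.
    """
    z = (xbar - mu0) / (sigma / sqrt(n))
    if alternative == "greater":
        reject = z > norm.ppf(1 - alpha)           # reject if Z > z_alpha
    elif alternative == "less":
        reject = z < -norm.ppf(1 - alpha)          # reject if Z < -z_alpha
    else:
        reject = abs(z) > norm.ppf(1 - alpha / 2)  # reject if |Z| > z_{alpha/2}
    return z, reject
```

For instance, `z_test(18.2, 20, 4, 25, alternative="less")` reproduces the calculation asked for in Example 9.1(i) below.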
9.2.1 Connection between the two-tailed test and a confidence interval for the mean when the variance is known
Let X1, . . . , Xn be a random sample from N(µ, σ2) with µ unknown and σ2 known. Recall from Chapter 7 that a 100(1 − α)% confidence interval for µ is given by

[ X̄ − zα/2 σ/√n , X̄ + zα/2 σ/√n ] .
From the preceding discussion, if we are testing the hypotheses
H0 : µ = µ0
H1 : µ ≠ µ0 ,
then we will ‘accept’ H0 at the 100α% significance level if
µ0 − zα/2 σ/√n ≤ X̄ ≤ µ0 + zα/2 σ/√n ,
or, equivalently, if
X̄ − zα/2 σ/√n ≤ µ0 ≤ X̄ + zα/2 σ/√n .
Thus, the values of µ in the confidence interval correspond to values of µ0 for
which the corresponding null hypothesis H0 would not be rejected. In other
words, informally, the 100(1 − α)% confidence interval is a set of values of µ
which would ‘pass a hypothesis test at significance level α’. It is in this sense
that we can regard the confidence interval as a set of plausible values of µ given
the data.
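The duality between the interval and the test can be checked numerically. The sketch below uses made-up illustrative numbers (they do not come from the notes) and verifies that µ0 lies outside the 95% confidence interval exactly when the two-sided test rejects H0.

```python
from math import sqrt
from scipy.stats import norm

xbar, sigma, n, mu0, alpha = 103.2, 15.0, 36, 100.0, 0.05   # illustrative values only
se = sigma / sqrt(n)
z_crit = norm.ppf(1 - alpha / 2)

ci = (xbar - z_crit * se, xbar + z_crit * se)   # 100(1 - alpha)% confidence interval for mu
z = (xbar - mu0) / se                           # two-sided test statistic
reject = abs(z) > z_crit

# mu0 falls outside the interval exactly when the two-sided test rejects H0
assert (mu0 < ci[0] or mu0 > ci[1]) == reject
print(ci, z, reject)
```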
Example 9.1. (i) A random sample of n = 25 observations is taken from a
normal distribution with unknown mean but known variance σ2 = 16. The
sample mean is found to be x̄ = 18.2. Test H0 : µ = 20 vs H1 : µ < 20 at
the 5% significance level.
(ii) Find the probability that we reject H0 using this testing procedure when
the true value of the mean µ is 19.0.
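A possible numerical check of Example 9.1 (our own sketch, assuming scipy; the notes leave space for a hand calculation) is:

```python
from math import sqrt
from scipy.stats import norm

n, sigma2, xbar, mu0, alpha = 25, 16.0, 18.2, 20.0, 0.05
se = sqrt(sigma2) / sqrt(n)                # standard error of the sample mean: 0.8

# (i) one-sided test of H0: mu = 20 against H1: mu < 20
z = (xbar - mu0) / se
z_crit = norm.ppf(alpha)                   # lower 5% point of N(0, 1): about -1.645
print("z =", z, "reject H0:", z < z_crit)

# (ii) probability of rejecting H0 when the true mean is 19.0
k = mu0 + z_crit * se                      # rejection region is {xbar < k}
power = norm.cdf((k - 19.0) / se)          # P(Xbar < k) when mu = 19
print("power =", power)
```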
Example 9.2. Suppose now that we have a random sample of n = 50 observations from a normal distribution with unknown mean and known variance
σ2 = 36. It is found that x̄ = 30.8.
(i) Test H0 : µ = 30 vs H1 : µ 6= 30 at the 5% significance level.
(ii) Find the probability that we reject H0 when the true value of the mean µ
is 31.0.
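A corresponding check of Example 9.2 (again an illustrative sketch, not the notes' worked solution):

```python
from math import sqrt
from scipy.stats import norm

n, sigma2, xbar, mu0, alpha = 50, 36.0, 30.8, 30.0, 0.05
se = sqrt(sigma2) / sqrt(n)

# (i) two-sided test of H0: mu = 30 against H1: mu != 30
z = (xbar - mu0) / se
z_crit = norm.ppf(1 - alpha / 2)           # upper 2.5% point: about 1.96
print("z =", z, "reject H0:", abs(z) > z_crit)

# (ii) probability of rejecting H0 when the true mean is 31.0
k1, k2 = mu0 - z_crit * se, mu0 + z_crit * se   # rejection region {xbar < k1} or {xbar > k2}
power = norm.cdf((k1 - 31.0) / se) + norm.sf((k2 - 31.0) / se)
print("power =", power)
```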
9.3 Inference about the mean of a normal distribution when the
variance is unknown
Let X1, . . . , Xn be a random sample from the N(µ, σ2) distribution, where both the value of µ and the value of σ2 are unknown. We want to test the following hypotheses:
H0 : µ = µ0
H1 : µ > µ0
at significance level α. Based on the discussion in the previous section, an
appropriate test statistic which measures the discrepancy between µ0 and the
sample estimator X̄ is given by
T = (X̄ − µ0)/(S/√n) ,
where S is the sample standard deviation. This is an estimate of the standardized
difference between X̄ and µ0. As we have discussed previously, because the
statistic T involves the random quantities X̄ and S, its sampling distribution
is no longer N(0, 1). We have seen in Chapter 7 that T ∼ t(n − 1), under the
assumption that H0 is true, i.e. T has a Student t-distribution with n−1 degrees
of freedom.
Assuming that the significance level of the test is α, we use one of the following rejection regions, depending on the alternative hypothesis:
• For the one-sided alternative hypothesis H1 : µ > µ0,
reject H0 if T > tα ,
where tα is the upper α point of a t(n−1) distribution, i.e. P(T > tα) = α.
• For the one-sided alternative hypothesis H1 : µ < µ0,

reject H0 if T < −tα .

• For the two-sided alternative hypothesis H1 : µ ≠ µ0,

reject H0 if T < −tα/2 or T > tα/2 .
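Critical values of the t(n − 1) distribution can likewise be read from software instead of tables; a minimal sketch (assuming scipy) is:

```python
from scipy.stats import t

n, alpha = 21, 0.05
t_upper = t.ppf(1 - alpha, df=n - 1)          # upper 5% point of t(20): about 1.725
t_two_sided = t.ppf(1 - alpha / 2, df=n - 1)  # upper 2.5% point of t(20): about 2.086
print(t_upper, t_two_sided)
```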
Example 9.3. The drug 6-mP is used to treat leukaemia. A random sample of 21 patients using 6-mP was found to have an average remission time of x̄ = 17.1 weeks with a sample standard deviation of s = 10.00 weeks. A previously used
drug treatment had a known mean remission time of µ0 = 12.5 weeks. Assuming
that the remission times of patients taking 6-mP are normally distributed with
both the mean µ and variance σ2 being unknown, test at the 5% significance
level whether the mean remission time of patients taking 6-mP is greater than
µ0 = 12.5 weeks.
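One way the calculation for Example 9.3 might be checked numerically (an illustrative sketch, assuming scipy):

```python
from math import sqrt
from scipy.stats import t

n, xbar, s, mu0, alpha = 21, 17.1, 10.0, 12.5, 0.05

# one-sided t-test of H0: mu = 12.5 against H1: mu > 12.5
T = (xbar - mu0) / (s / sqrt(n))
t_crit = t.ppf(1 - alpha, df=n - 1)   # upper 5% point of t(20)
print("T =", T, "critical value =", t_crit, "reject H0:", T > t_crit)
```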
9.4 Using the central limit theorem
(i) Inference about the mean of a non-normal distribution.
Let X1, . . . , Xn be a random sample from a non-normal distribution, where
the value of the mean µ is unknown and that of the variance σ2 is also
unknown. We want to test the following hypotheses:
H0 : µ = µ0
H1 : µ > µ0
at significance level α. We can again use the test statistic
Y = (X̄ − µ0)/(S/√n)
defined above, which, by asymptotic (large n) results, has an approximate N(0, 1) distribution when H0 is true (as a rule of thumb, n ≥ 30). Aside from the choice of test
statistic, the rejection regions for the various versions of H1 are otherwise
identical to those defined in the case of normal data with a known variance.
(ii) Inference about the population proportion p.
Let X1, . . . , Xn be a random sample of Bi(1, p) random variables, where
the value of p is unknown. We want to test the following hypotheses:
H0 : p = p0
H1 : p > p0
at significance level α. As we have seen earlier in this module, an unbiased
sample estimator of the parameter p is given by
p̂ = (X1 + · · · + Xn)/n = X̄ .
By the central limit theorem, p̂ ∼ N(p, p(1−p)/n) approximately for large
n. As a rule of thumb, n ≥ 9 max{p/(1 − p), (1 − p)/p} guarantees this
approximation has a good degree of accuracy. A suitable test statistic is
Y = (p̂ − p0) / √( p0(1 − p0)/n ) .
Here we have estimated the standard error of p̂ by √( p0(1 − p0)/n ), which uses the value of p specified under H0. If H0 is true then Y has an approximate N(0, 1) distribution for large n. Thus, to achieve an approximate significance level of α, we reject H0 in favour of the above H1 if Y > zα.

• For the one-sided alternative hypothesis H1 : p < p0, to achieve an approximate significance level of α, we reject H0 if Y < −zα.

• For the two-sided alternative hypothesis H1 : p ≠ p0, to achieve an approximate significance level of α, we reject H0 if Y < −zα/2 or Y > zα/2 .
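A sketch of the approximate large-sample test for a proportion described above (the function name and `alternative` argument are our own, not notation from the notes):

```python
from math import sqrt
from scipy.stats import norm

def prop_z_test(x, n, p0, alpha=0.05, alternative="greater"):
    """Approximate z-test for a population proportion.

    x: number of successes, n: sample size, p0: value of p under H0.
    alternative is one of "greater", "less" or "two-sided".
    """
    p_hat = x / n
    y = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)   # standard error evaluated under H0
    if alternative == "greater":
        reject = y > norm.ppf(1 - alpha)
    elif alternative == "less":
        reject = y < -norm.ppf(1 - alpha)
    else:
        reject = abs(y) > norm.ppf(1 - alpha / 2)
    return y, reject
```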
Example 9.4. A team of eye surgeons has developed a new technique for
an eye operation to restore the sight of patients blinded by a particular
disease. It is known that 30% of patients who undergo an operation using
the old method recover their eyesight.
A total of 225 operations are performed by surgeons in various hospitals
using the new method and it is found that 88 of them are successful in
that the patients recover their sight. Can we justify the claim that the
new method is better than the old one? (Use a 1% level of significance).
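A possible numerical check of Example 9.4 (an illustrative sketch, assuming scipy; the notes leave the working to the reader):

```python
from math import sqrt
from scipy.stats import norm

n, x, p0, alpha = 225, 88, 0.30, 0.01
p_hat = x / n                                  # observed success proportion, about 0.391

# one-sided test of H0: p = 0.30 against H1: p > 0.30
y = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
z_crit = norm.ppf(1 - alpha)                   # upper 1% point of N(0, 1): about 2.326
print("y =", y, "reject H0:", y > z_crit)
```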