Answer:
We begin by noting that we wish to test
[tex]$$
\begin{aligned}
H_0&:\mu_1 = \mu_2, \\
H_a&:\mu_1 < \mu_2,
\end{aligned}
$$[/tex]
with a significance level of [tex]$\alpha=0.01$[/tex]. Two independent samples of data were collected. Their sizes, means, and sample variances (computed with the usual [tex]$n-1$[/tex] divisor) are as follows:
- For Sample \#1 with [tex]$n_1=56$[/tex], the sample mean is [tex]$\bar{x}_1 \approx 77.1768$[/tex] and the sample variance is [tex]$s_1^2 \approx 144.9905$[/tex].
- For Sample \#2 with [tex]$n_2=57$[/tex], the sample mean is [tex]$\bar{x}_2 \approx 77.7333$[/tex] and the sample variance is [tex]$s_2^2 \approx 189.4001$[/tex].
Because the population variances are not assumed to be equal, we use the two-sample [tex]$t$[/tex]-test for unequal variances (Welch’s [tex]$t$[/tex]-test). The test statistic is calculated using
[tex]$$
t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}.
$$[/tex]
Substituting the numbers into the formula gives
[tex]$$
t \approx \frac{77.1768 - 77.7333}{\sqrt{\frac{144.9905}{56} + \frac{189.4001}{57}}} \approx -0.2289.
$$[/tex]
Rounded to three decimal places, this yields
[tex]$$
t \approx -0.229.
$$[/tex]
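As a quick sanity check, here is a minimal Python sketch that recomputes this statistic from the summary values above (the variable names are ours, introduced only for illustration):

```python
import math

# Summary statistics quoted in the problem: size, mean, sample variance
n1, xbar1, s1_sq = 56, 77.1768, 144.9905   # Sample #1
n2, xbar2, s2_sq = 57, 77.7333, 189.4001   # Sample #2

# Welch's t statistic: difference in means over its estimated standard error
se = math.sqrt(s1_sq / n1 + s2_sq / n2)
t_stat = (xbar1 - xbar2) / se
print(round(t_stat, 3))                    # -0.229
```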
Next, we determine the degrees of freedom (using the Welch–Satterthwaite approximation)
[tex]$$
df = \frac{\left(\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}\right)^2}{\frac{\left(\dfrac{s_1^2}{n_1}\right)^2}{n_1-1} + \frac{\left(\dfrac{s_2^2}{n_2}\right)^2}{n_2-1}}.
$$[/tex]
The computed value is approximately [tex]$df \approx 109.5487$[/tex]. (When the one-tailed [tex]$p$[/tex]-value is computed with technology, this unrounded value of the degrees of freedom is used.)
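The same summary values can be plugged directly into the Welch–Satterthwaite formula; a short sketch (inputs repeated so it runs on its own):

```python
# Welch–Satterthwaite degrees of freedom from the summary statistics
n1, s1_sq = 56, 144.9905
n2, s2_sq = 57, 189.4001

a, b = s1_sq / n1, s2_sq / n2              # per-sample variance-of-the-mean terms
df = (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))
print(round(df, 4))                        # 109.5487
```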
Because the alternative hypothesis is [tex]$H_a: \mu_1 < \mu_2$[/tex], we need the one-tailed (left-tailed) [tex]$p$[/tex]-value corresponding to the test statistic, which is the cumulative distribution function value
[tex]$$
p\text{-value} = P(T \le t),
$$[/tex]
where [tex]$T$[/tex] follows a [tex]$t$[/tex]-distribution with approximately [tex]$109.5487$[/tex] degrees of freedom. For [tex]$t \approx -0.2289$[/tex], the one-tailed [tex]$p$[/tex]-value comes out to
[tex]$$
p\text{-value} \approx 0.4097.
$$[/tex]
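Assuming SciPy is available, this CDF evaluation can be sketched as follows (using the unrounded values obtained above):

```python
from scipy import stats

t_stat, df = -0.2289, 109.5487             # test statistic and Welch df from above
p_value = stats.t.cdf(t_stat, df)          # P(T <= t) for the left-tailed test
print(round(p_value, 4))                   # 0.4097
```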
Thus, our final answers are:
[tex]$$
\text{Test statistic} = -0.229 \quad \text{(to three decimal places)},
$$[/tex]
[tex]$$
\text{p-value} = 0.4097 \quad \text{(to four decimal places)}.
$$[/tex]
Since the [tex]$p$[/tex]-value [tex]$(\approx 0.4097)$[/tex] is much larger than the significance level [tex]$\alpha=0.01$[/tex], we fail to reject the null hypothesis; the data do not provide sufficient evidence to conclude that [tex]$\mu_1 < \mu_2$[/tex].
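For completeness, the whole Welch test can be reproduced from the summary statistics in a single call, assuming a SciPy version recent enough (1.6+) to accept the alternative keyword of scipy.stats.ttest_ind_from_stats:

```python
import math
from scipy import stats

# Welch's t-test directly from summary statistics (standard deviations, not variances)
res = stats.ttest_ind_from_stats(
    mean1=77.1768, std1=math.sqrt(144.9905), nobs1=56,
    mean2=77.7333, std2=math.sqrt(189.4001), nobs2=57,
    equal_var=False,            # unequal-variance (Welch) version
    alternative="less",         # H_a: mu_1 < mu_2
)
print(round(res.statistic, 3), round(res.pvalue, 4))   # -0.229  0.4097
```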