A computer manufacturer is about to unveil a new, faster personal computer. The new machine is clearly faster, but initial tests indicate there is more variation in the processing time. The processing time depends on the particular program being run, the amount of input data, and the amount of output. A sample of 16 computer runs, covering a range of production jobs, showed that the standard deviation of the processing time was 22 (hundredths of a second) for the new machine and 12 (hundredths of a second) for the current machine. At the .05 significance level, can we conclude that there is more variation in the processing time of the new machine?

Thanks


You may want to check this with a statistics text, but here is one way you might approach this problem.

This looks like a hypothesis test involving two variances (the standard deviation is the square root of the variance). The null hypothesis would be that the ratio of the new machine's variance to the current machine's variance is less than or equal to 1. The alternative hypothesis would be that the ratio is greater than 1.
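In symbols, with subscript 1 for the new machine and subscript 2 for the current machine, that is:

    H0: σ1²/σ2² ≤ 1
    H1: σ1²/σ2² > 1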

Test statistic: F = 22^2 / 12^2 = ?

Finish the calculation. Then, using an F-distribution table at the .05 level of significance, determine the critical or cutoff value for rejecting the null. You will need the degrees of freedom: if both machines were measured over 16 runs, as the problem suggests, the numerator and denominator degrees of freedom are each 16 − 1 = 15. If the test statistic exceeds the critical value from the table, the null is rejected in favor of the alternative hypothesis; if it does not, the null is not rejected.
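If you want to double-check your arithmetic, here is a minimal Python sketch of the same test. It assumes 16 runs for each machine (so 15 degrees of freedom in both the numerator and the denominator) and uses scipy's F distribution in place of a printed table:

    from scipy.stats import f

    s_new, s_cur = 22.0, 12.0              # sample standard deviations (hundredths of a second)
    df_new = df_cur = 16 - 1               # assumed: 16 runs per machine -> 15 df each

    F = s_new**2 / s_cur**2                # test statistic: ratio of sample variances, larger on top
    critical = f.ppf(1 - 0.05, df_new, df_cur)   # upper-tail cutoff at the .05 level
    p_value = f.sf(F, df_new, df_cur)      # one-tailed p-value

    print(F, critical, p_value)            # roughly 3.36, 2.40, and 0.012
    print("reject H0" if F > critical else "do not reject H0")

Under those assumptions the test statistic (about 3.36) exceeds the table cutoff (about 2.40), which would support the claim of more variation in the new machine's processing time.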
Thank you.