I have been working on a Type I error and Type II error problem, but now I cannot figure out the power of the test.

A manufacturer of computer monitors receives shipments of LCD panels from a supplier overseas. It is not cost effective to inspect each LCD panel for defects, so a sample is taken from each shipment. A significance test is conducted to determine whether the proportion of defective LCD panels is greater than the acceptable limit of 1%. If it is, the shipment will be sent back to the supplier. The hypotheses for this test are Ho: p = 0.01 and Ha: p > 0.01, where p is the true proportion of defective panels in the shipment.
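For concreteness, here is a minimal sketch of how such a one-sided proportion test might be carried out, using a hypothetical sample of 500 panels with 9 found defective (both numbers are illustrative and not given in the problem):

```python
import math
from scipy.stats import norm

# Hypothetical sample (not part of the original problem): 500 panels, 9 defective
n, defects = 500, 9
p0 = 0.01                    # acceptable defect proportion under Ho
p_hat = defects / n          # observed sample proportion of defective panels

# One-sided z-test of Ho: p = 0.01 vs Ha: p > 0.01 (normal approximation)
se = math.sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / se
p_value = norm.sf(z)         # upper-tail probability P(Z > z)

print(f"p_hat = {p_hat:.3f}, z = {z:.2f}, p-value = {p_value:.3f}")
```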
If a Type I error were committed, we would conclude that there are more than 1% defective panels when there really are not. The shipment would be returned when that was not warranted. If a Type II error were committed, we would conclude that there are no more than 1% defective panels when there really are more. Then defective panels would be accepted from the supplier.
The supplier would consider a Type I error more serious, because it would mean getting back LCD panels that work fine. The computer monitor manufacturer would consider a Type II error more serious, because it would mean accepting panels of poor quality.

What would the Power of the test be?

1 answer

The probability of making a Type II error is beta. A Type II error is failing to reject the null hypothesis (Ho) when it is false. The power of the test is 1 - beta: the probability of correctly rejecting the null when it is false. The alpha level directly affects power; the higher the alpha level, the more powerful the test. Sample size also affects power, with larger samples giving more power. As the alpha level gets smaller, the probability of a Type II error increases and power decreases. A test with high power is one that is very likely to reject the null when it is truly false. Note that you cannot compute a single numerical value for power here without knowing the sample size, the significance level, and an assumed true defect rate; with those specified, the calculation is straightforward.
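Because power depends on those quantities, it can only be computed under assumed values. Here is a minimal sketch using the normal approximation, with n = 500, alpha = 0.05, and a true defect rate of 3%, all chosen purely for illustration:

```python
import math
from scipy.stats import norm

# Illustrative assumptions (not given in the problem)
n = 500          # panels sampled from the shipment
alpha = 0.05     # significance level of the test
p0 = 0.01        # defect proportion under Ho
p1 = 0.03        # assumed true defect proportion under Ha

# Rejection threshold for the sample proportion, computed under Ho
se0 = math.sqrt(p0 * (1 - p0) / n)
crit = p0 + norm.ppf(1 - alpha) * se0

# Power = P(reject Ho | true p = p1) = 1 - beta
se1 = math.sqrt(p1 * (1 - p1) / n)
power = norm.sf((crit - p1) / se1)
beta = 1 - power

print(f"critical p_hat = {crit:.4f}, beta = {beta:.3f}, power = {power:.3f}")
```

With these particular assumptions the power comes out around 0.95, but different choices of n, alpha, or the true defect rate would give a different answer.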

I hope this will help.