Suppose that a New Jersey testing service has created a standardized test for assessing the reading and computing skills of 8th graders. The test consists of a reading part, which is structured to have a mean of 200 and a standard deviation of 50. It also has a mathematics part, which is structured to have a mean of 100 and a standard deviation of 20. Students take both parts of the test and are given a final score that is the sum of both parts together. What would be the standard deviation of the final student scores?

2 answers

The mean combined score is 200 + 100 = 300, and the standard deviation of the combined score is sqrt[(50)^2 + (20)^2] ≈ 53.9.
My previous answer assumed the two score distributions are uncorrelated. In practice, this may not be true, because students who score low on one part tend to score low on the other, and likewise for high scorers; in other words, the two distributions are positively correlated. The actual standard deviation is therefore probably somewhere between the root-sum-of-squares (RSS) value of 53.9 given previously (no correlation) and the sum of the separate standard deviations, 70 (perfect correlation).
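The dependence on the correlation can be made explicit. For two parts with standard deviations sd_r and sd_m and correlation rho, the variance of the sum is sd_r^2 + sd_m^2 + 2*rho*sd_r*sd_m. A sketch using the numbers from this problem (the function name is just illustrative):

```python
import math

sd_r, sd_m = 50.0, 20.0  # reading and mathematics part sds

def combined_sd(rho):
    """Sd of the summed score when the parts have correlation rho."""
    return math.sqrt(sd_r**2 + sd_m**2 + 2 * rho * sd_r * sd_m)

for rho in (0.0, 0.5, 1.0):
    print(f"rho = {rho:.1f}: sd = {combined_sd(rho):.1f}")
```

At rho = 0 this recovers the RSS value of about 53.9, and at rho = 1 it gives exactly 50 + 20 = 70, the two endpoints mentioned above.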