Let $\hat{\Theta}$ be an estimator of a random variable $\Theta$, and let $\tilde{\Theta} = \hat{\Theta} - \Theta$ be the estimation error.

a) In this part of the problem, let $\hat{\Theta}$ be specifically the LMS estimator of $\Theta$. We have seen that for the case of the LMS estimator, $E[\tilde{\Theta} \mid X = x] = 0$ for every $x$. Is it also true that $E[\tilde{\Theta} \mid \Theta = \theta] = 0$ for all $\theta$? Equivalently, is it true that $E[\hat{\Theta} \mid \Theta = \theta] = \theta$ for all $\theta$?

b) In this part of the problem, $\hat{\Theta}$ is no longer necessarily the LMS estimator of $\Theta$. Is the property $\operatorname{var}(\Theta) = \operatorname{var}(\hat{\Theta}) + \operatorname{var}(\tilde{\Theta})$ true for every estimator $\hat{\Theta}$?

Answer:

a) No, it is not true that $E[\tilde{\Theta} \mid \Theta = \theta] = 0$ for all $\theta$. The property $E[\tilde{\Theta} \mid X = x] = 0$ holds because the LMS estimator is itself the conditional expectation, $\hat{\Theta} = E[\Theta \mid X]$, so the error averages to zero once the observation is fixed. Conditioning on $\Theta = \theta$ is a different operation: with the true value fixed, there is no reason for the error to average to zero over the possible observations. In other words, while $E[\tilde{\Theta} \mid X = x] = 0$ for every $x$, it is not necessarily true that $E[\tilde{\Theta} \mid \Theta = \theta] = 0$ for all $\theta$.
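As a minimal worked counterexample (a standard construction with an uninformative observation, not part of the original problem statement): if $X$ is independent of $\Theta$, then the LMS estimator is the constant $\hat{\Theta} = E[\Theta \mid X] = E[\Theta]$, and

$$
E[\tilde{\Theta} \mid \Theta = \theta] = E\big[\hat{\Theta} - \Theta \,\big|\, \Theta = \theta\big] = E[\Theta] - \theta,
$$

which is nonzero for every $\theta \neq E[\Theta]$.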

b) No, the property $\operatorname{var}(\Theta) = \operatorname{var}(\hat{\Theta}) + \operatorname{var}(\tilde{\Theta})$ is not true for every estimator $\hat{\Theta}$. It is a special property of the LMS estimator: since $E[\tilde{\Theta} \mid X] = 0$ and $\hat{\Theta}$ is a function of $X$, the error $\tilde{\Theta}$ is uncorrelated with $\hat{\Theta}$, so the cross term in the variance decomposition vanishes. For a general estimator that covariance need not be zero, and the identity can fail.
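As a sketch of where the identity comes from and how it can fail (the specific counterexample $\hat{\Theta} = 2X$ below is an illustration of my own, not from the source): writing $\Theta = \hat{\Theta} - \tilde{\Theta}$ gives

$$
\operatorname{var}(\Theta) = \operatorname{var}(\hat{\Theta}) + \operatorname{var}(\tilde{\Theta}) - 2\operatorname{cov}(\hat{\Theta}, \tilde{\Theta}),
$$

so the identity holds exactly when $\operatorname{cov}(\hat{\Theta}, \tilde{\Theta}) = 0$, as it does for the LMS estimator. Now suppose $X = \Theta$ with $\operatorname{var}(\Theta) > 0$, and take the (deliberately bad) estimator $\hat{\Theta} = 2X = 2\Theta$. Then $\tilde{\Theta} = \Theta$, and

$$
\operatorname{var}(\hat{\Theta}) + \operatorname{var}(\tilde{\Theta}) = 4\operatorname{var}(\Theta) + \operatorname{var}(\Theta) = 5\operatorname{var}(\Theta) \neq \operatorname{var}(\Theta).
$$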