Original Question
Asked by Azerbaijan
As in the last video, let X=delta+W, where delta and W are independent normal random variables and W has mean zero.
1. Assume that W has positive variance. Are X and W independent?
2. Find the MAP estimator of delta based on X if delta follows Normal(1,1) and W follows Normal(0,1), and evaluate the corresponding estimate if X=2.
3. Find the MAP estimator of delta based on X if delta follows Normal(0,1) and W follows Normal(0,4) and evaluate the corresponding estimate if X=2.
4. For this part of the problem, suppose instead that X=2*delta+3*W, where delta and W are standard normal random variables. Find the estimator of delta based on X under this model and evaluate the corresponding estimate if X=2.
Answers
There are no human answers yet.
Answered by GPT 3.5
1. No. Although delta and W are independent of each other, X = delta + W is not independent of W: Cov(X, W) = Cov(delta, W) + Var(W) = Var(W) > 0, so X and W are correlated and therefore dependent.
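A quick simulation check of this dependence (a sketch in Python with NumPy; the Normal(1,1) prior and Normal(0,1) noise below just follow part 2 and are otherwise arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
delta = rng.normal(loc=1.0, scale=1.0, size=n)  # prior Normal(1,1)
W = rng.normal(loc=0.0, scale=1.0, size=n)      # noise Normal(0,1)
X = delta + W

# The sample covariance of X and W comes out close to Var(W) = 1, not 0,
# so X and W cannot be independent.
print(np.cov(X, W)[0, 1])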
2. To find the MAP estimator of delta based on X, we find the value of delta that maximizes the posterior density f(delta|X). By Bayes' rule,

f(delta|X) = f(X|delta) * f(delta) / f(X)

Since W follows Normal(0,1), the likelihood f(X|delta) is the Normal(delta, 1) density evaluated at X, and the prior f(delta) is the Normal(1, 1) density. The denominator f(X) does not depend on delta, so it is enough to maximize the log of the numerator:

log f(X|delta) + log f(delta) = -(X - delta)^2/2 - (delta - 1)^2/2 + constant

Setting the derivative with respect to delta equal to zero gives (X - delta) - (delta - 1) = 0, so the MAP estimator is

delta_MAP = (X + 1)/2

(With a normal prior and normal noise the posterior is itself normal, so the MAP estimate coincides with the posterior mean.) For X = 2, the estimate is (2 + 1)/2 = 1.5.
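As a numerical sanity check of this closed form, one can maximize the log-posterior directly (a sketch in Python with SciPy; the function name neg_log_posterior is just illustrative):

from scipy.optimize import minimize_scalar

def neg_log_posterior(delta, x=2.0, prior_mean=1.0, prior_var=1.0, noise_var=1.0):
    # Negative log-posterior up to an additive constant:
    # likelihood Normal(delta, noise_var) at x, prior Normal(prior_mean, prior_var).
    return (x - delta) ** 2 / (2 * noise_var) + (delta - prior_mean) ** 2 / (2 * prior_var)

result = minimize_scalar(neg_log_posterior)
print(result.x)  # approximately 1.5, matching (X + 1)/2 with X = 2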
3. Following the same steps, now with prior delta ~ Normal(0,1) and noise W ~ Normal(0,4): the likelihood f(X|delta) is the Normal(delta, 4) density evaluated at X and the prior f(delta) is the Normal(0, 1) density, so we maximize

-(X - delta)^2/8 - delta^2/2 + constant

Setting the derivative with respect to delta equal to zero gives (X - delta)/4 - delta = 0, so the MAP estimator is

delta_MAP = X/5

For X = 2, the estimate is 2/5 = 0.4.
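Both normal-normal cases can also be evaluated with the same closed-form posterior-mean formula (a sketch in Python; the helper name map_normal_normal is just illustrative):

def map_normal_normal(x, prior_mean, prior_var, noise_var):
    # MAP (= posterior mean) of delta when delta ~ Normal(prior_mean, prior_var),
    # W ~ Normal(0, noise_var), and X = delta + W is observed to equal x.
    return (noise_var * prior_mean + prior_var * x) / (prior_var + noise_var)

print(map_normal_normal(2.0, prior_mean=1.0, prior_var=1.0, noise_var=1.0))  # 1.5 (part 2)
print(map_normal_normal(2.0, prior_mean=0.0, prior_var=1.0, noise_var=4.0))  # 0.4 (part 3)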
4. In this case X = 2*delta + 3*W, where delta and W are independent standard normal random variables, so delta and X are jointly normal. For jointly normal random variables the MAP estimator, the conditional expectation E[delta|X], and the LMS (least mean squares) estimator all coincide, and

E[delta|X] = (Cov(delta, X) / Var(X)) * X

Here Cov(delta, X) = 2*Var(delta) = 2 and Var(X) = 4*Var(delta) + 9*Var(W) = 13, so the estimator is 2X/13. For X = 2, the estimate is 4/13, approximately 0.31.
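A quick simulation check of the 2/13 coefficient (a sketch in Python with NumPy; the sample size is arbitrary):

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
delta = rng.standard_normal(n)
W = rng.standard_normal(n)
X = 2 * delta + 3 * W

# The least-squares slope of delta on X estimates Cov(delta, X) / Var(X).
slope = np.cov(delta, X)[0, 1] / np.var(X)
print(slope, 2 / 13)  # both approximately 0.154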