a) No, X and W are not independent. Since X = \Theta + W, knowing W shifts the distribution of X; formally, Cov(X, W) = Cov(\Theta, W) + Var(W) = Var(W) > 0, so X and W are correlated and hence dependent.
b) The MAP estimator of \Theta based on X is given by:
\hat\theta = argmax_\theta f_{\Theta|X}(\theta|X)
= argmax_\theta f_{X|\Theta}(X|\theta) f_\Theta(\theta)
Since X = \Theta + W, with W \sim N(0,1) independent of \Theta, conditioning on \Theta = \theta gives X|\Theta=\theta \sim N(\theta, 1). The conditional density f_{X|\Theta}(X|\theta), viewed as a function of \theta, is the likelihood.
Likelihood function: L(\theta|X) = f_{X|\Theta}(X|\theta) = \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(X-\theta)^2}
The prior of \Theta is N(1,1), with density f_\Theta(\theta) = \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(\theta-1)^2}.
We can now substitute these values into the MAP estimator:
\hat\theta = argmax \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(X-\theta)^2} \cdot \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(\theta-1)^2}
To find the maximum, we take the logarithm of the expression and set the derivative equal to zero:
\frac{d}{d\theta} [\ln(\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(X-\theta)^2} \cdot \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(\theta-1)^2})] = 0
Dropping constants, this reduces to (X-\theta) - (\theta-1) = 0, i.e. X + 1 = 2\theta. Solving for \theta, we get:
\hat\theta = \frac{X+1}{2}
Evaluating the estimate for X=2:
\hat\theta = \frac{2+1}{2} = 1.5
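As a quick numerical sanity check (not part of the original solution; NumPy is assumed available), the MAP value for part b can be confirmed by maximizing the log-posterior over a fine grid:

```python
import numpy as np

# Part (b): prior Theta ~ N(1, 1), noise W ~ N(0, 1), observation X = 2.
# The log-posterior (up to additive constants) is the sum of the
# log-likelihood and the log-prior; grid-search its maximizer.
X = 2.0
theta = np.linspace(-5, 5, 200001)
log_post = -0.5 * (X - theta) ** 2 - 0.5 * (theta - 1) ** 2
theta_hat = theta[np.argmax(log_post)]
print(theta_hat)  # agrees with (X + 1) / 2 = 1.5
```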
c) Following the same steps as in part b, but now with \Theta \sim N(0,1) and W \sim N(0,4), so that X|\Theta=\theta \sim N(\theta, 4):
\hat\theta = argmax \frac{1}{2\sqrt{2\pi}}e^{-\frac{1}{8}(X-\theta)^2} \cdot \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}\theta^2}
To find the maximum, we take the logarithm of the expression, drop the constants, and set the derivative equal to zero:
\frac{d}{d\theta} \left[-\frac{1}{8}(X-\theta)^2 - \frac{1}{2}\theta^2\right] = \frac{1}{4}(X-\theta) - \theta = 0
Simplifying (X - \theta = 4\theta) and solving for \theta, we get:
\hat\theta = \frac{X}{5}
Evaluating the estimate for X=2:
\hat\theta = \frac{2}{5}
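The same grid-search sanity check (again assuming NumPy; not part of the original solution) applies to part c, with the likelihood variance now 4:

```python
import numpy as np

# Part (c): prior Theta ~ N(0, 1), noise W ~ N(0, 4), observation X = 2.
# Log-posterior up to constants: -(X - theta)^2 / (2*4) - theta^2 / 2.
X = 2.0
theta = np.linspace(-5, 5, 200001)
log_post = -(X - theta) ** 2 / 8.0 - 0.5 * theta ** 2
theta_hat = theta[np.argmax(log_post)]
print(theta_hat)  # agrees with X / 5 = 0.4
```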
d) Following the same steps as in parts b and c, with
X = 2\Theta + 3W, where \Theta and W are independent standard normal, so that X|\Theta=\theta \sim N(2\theta, 9):
\hat\theta = argmax \frac{1}{3\sqrt{2\pi}}e^{-\frac{1}{18}(X-2\theta)^2} \cdot \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}\theta^2}
To find the maximum, we take the logarithm of the expression, drop the constants, and set the derivative equal to zero:
\frac{d}{d\theta} \left[-\frac{1}{18}(X-2\theta)^2 - \frac{1}{2}\theta^2\right] = \frac{2}{9}(X-2\theta) - \theta = 0
Simplifying (2X - 4\theta = 9\theta) and solving for \theta, we get:
\hat\theta = \frac{2X}{13}
Evaluating the estimate for X=2:
\hat\theta = \frac{4}{13}
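Part d can be sanity-checked the same way (a quick numerical sketch assuming NumPy; not part of the original solution), now with X|\Theta=\theta \sim N(2\theta, 9):

```python
import numpy as np

# Part (d): X = 2*Theta + 3*W with Theta, W ~ N(0, 1) independent,
# so X | Theta = theta ~ N(2*theta, 9); observation X = 2.
# Log-posterior up to constants: -(X - 2*theta)^2 / (2*9) - theta^2 / 2.
X = 2.0
theta = np.linspace(-5, 5, 200001)
log_post = -(X - 2 * theta) ** 2 / 18.0 - 0.5 * theta ** 2
theta_hat = theta[np.argmax(log_post)]
print(theta_hat)  # agrees with 2*X / 13 = 4/13 (about 0.3077)
```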
As in the last video, let X=\Theta +W, where \Theta and W are independent normal random variables and W has mean zero.
a) Assume that W has positive variance. Are X and W independent?
yes or no
b) Find the MAP estimator of \Theta based on X if \Theta \sim N(1,1) and W\sim N(0,1), and evaluate the corresponding estimate if X=2.
\hat\theta =\,
c) Find the MAP estimator of \Theta based on X if \Theta \sim N(0,1) and W\sim N(0,4), and evaluate the corresponding estimate if X=2.
\hat\theta =\,
d) For this part of the problem, suppose instead that X=2\Theta +3W, where \Theta and W are standard normal random variables. Find the MAP estimator of \Theta based on X under this model and evaluate the corresponding estimate if X=2.
\hat\theta =\,