To find the maximum a posteriori (MAP) estimate of \(\Theta\), we need to calculate the posterior probability density function (PDF) of \(\Theta\) given the observed values \(X_1 = -1\) and \(X_2 = 1\).
First, since \(W_1\) and \(W_2\) are independent, \(X_1\) and \(X_2\) are conditionally independent given \(\Theta\), so the joint likelihood factors:

\(f(x_1, x_2 | \theta) = f(x_1 | \theta) \cdot f(x_2 | \theta)\)
Given \(\Theta = \theta\), the relations \(X_1 = \Theta + W_1\) and \(X_2 = 2\Theta + W_2\), with \(W_1\) and \(W_2\) independent standard normal, make the conditional distributions normal with unit variance:

\(X_1 \mid \Theta = \theta \sim N(\theta, 1), \qquad X_2 \mid \Theta = \theta \sim N(2\theta, 1)\)

so the conditional densities are

\(f(x_1 | \theta) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{(x_1 - \theta)^2}{2}\right), \qquad f(x_2 | \theta) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{(x_2 - 2\theta)^2}{2}\right)\)
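As a quick sketch (the function and variable names here are illustrative, not part of the problem statement), the two conditional densities can be written out and evaluated at the observed values \(x_1 = -1\) and \(x_2 = 1\) as functions of \(\theta\):

```python
import math

def normal_pdf(x, mean, var=1.0):
    # Density of N(mean, var) evaluated at x
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def lik_x1(theta):
    # f(x1 = -1 | theta): N(theta, 1) evaluated at the observed value -1
    return normal_pdf(-1.0, theta)

def lik_x2(theta):
    # f(x2 = 1 | theta): N(2*theta, 1) evaluated at the observed value 1
    return normal_pdf(1.0, 2 * theta)
```

These two functions of \(\theta\) are exactly the factors that enter the posterior numerator below.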
Next, the prior distribution of \(\Theta\). Since \(\Theta\) is a standard normal random variable, the prior is \(N(\mu, \sigma^2)\) with \(\mu = 0\) and \(\sigma^2 = 1\):

\(f(\theta) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{\theta^2}{2}\right)\)
The posterior probability density function (PDF) of \(\Theta\) given the observed values \(X_1 = -1\) and \(X_2 = 1\) is given by Bayes' theorem as follows:
\(f(\Theta | x_1, x_2) = \frac{{f(x_1, x_2 | \Theta) \cdot f(\Theta)}}{{f(x_1, x_2)}}\)
The denominator \(f(x_1, x_2)\) is obtained by integrating the numerator over all possible values of \(\theta\):

\(f(x_1, x_2) = \int_{-\infty}^{\infty} f(x_1, x_2 | \theta) \cdot f(\theta)\, d\theta\)

Crucially, this integral does not depend on \(\theta\), so for finding the MAP estimate we can ignore it and maximize the numerator alone:

\(f(\theta | x_1, x_2) \propto f(x_1 | \theta) \cdot f(x_2 | \theta) \cdot f(\theta) \propto \exp\!\left(-\frac{(x_1 - \theta)^2}{2}\right) \exp\!\left(-\frac{(x_2 - 2\theta)^2}{2}\right) \exp\!\left(-\frac{\theta^2}{2}\right)\)

Taking logarithms (and dropping constants), the MAP estimate maximizes

\(\ell(\theta) = -\frac{1}{2}\left[(x_1 - \theta)^2 + (x_2 - 2\theta)^2 + \theta^2\right]\)

Setting the derivative to zero:

\(\ell'(\theta) = (x_1 - \theta) + 2(x_2 - 2\theta) - \theta = 0\)

\(x_1 + 2x_2 - 6\theta = 0 \quad\Rightarrow\quad \hat{\theta}_{\text{MAP}} = \frac{x_1 + 2x_2}{6}\)

Substituting the observed values \(x_1 = -1\) and \(x_2 = 1\):

\(\hat{\theta}_{\text{MAP}} = \frac{-1 + 2 \cdot 1}{6} = \frac{1}{6}\)

Therefore, the MAP estimate of \(\Theta\) is \(\frac{1}{6}\).
Suppose that \(X_1 = \Theta + W_1\) and \(X_2 = 2\Theta + W_2\), where \(\Theta, W_1, W_2\) are independent standard normal random variables. If the values that we observe happen to be \(X_1 = -1\) and \(X_2 = 1\), then what is the MAP estimate of \(\Theta\)?