In this problem, we will explore the intersection of Bayesian and frequentist inference. Let X _1, X _2, \cdots, X _{n} \stackrel{\text {i.i.d}}{\sim } \textsf{N}(0, \theta ), for some unknown positive number \theta, which is our parameter of interest. Suppose that we are unable to come up with a prior distribution for \theta.
Let's take a Bayesian approach here to arrive at an estimator.
Perform the following steps:
Compute the Jeffreys prior.
Use Bayes' formula to compute the posterior distribution.
From the posterior distribution, compute the Bayesian estimator of \theta. Recall that this is defined in lecture to be the mean of the posterior distribution.
What is the Bayesian estimator \hat{\theta }^{\text {Bayes}}?
(Enter Sigma_i(X_i) for \displaystyle \sum _{i=1}^{n} X_ i and Sigma_i(X_i^2) for \displaystyle \sum _{i=1}^{n} X_ i^2. Do not worry if the parser does not render properly; the grader works independently. If you wish to have proper rendering, enclose Sigma_i(X_i) and Sigma_i(X_i^2) in brackets.)
\hat{\theta }^{\text {Bayes}}=
In this Bayesian problem, which, if any, of the prior or the posterior, is proper?
The prior only.
The posterior only.
Both the prior and the posterior.
Neither the prior nor the posterior.
1. Jeffreys' prior is \(\pi(\theta) \propto \sqrt{I(\theta)}\). For \(\textsf{N}(0, \theta)\) with unknown variance \(\theta\), the per-observation Fisher information is \(I(\theta) = \frac{1}{2\theta^2}\), so \(\pi(\theta) \propto \frac{1}{\theta}\).
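The Fisher-information computation behind the Jeffreys prior can be checked symbolically; a minimal sketch with sympy (the symbol names are my own choices):

```python
import sympy as sp

x, theta = sp.symbols('x theta', positive=True)

# Per-observation log-likelihood of N(0, theta), with theta the variance
loglik = -sp.Rational(1, 2) * sp.log(2 * sp.pi * theta) - x**2 / (2 * theta)

# Fisher information: I(theta) = -E[d^2 loglik / d theta^2],
# using E[x^2] = theta under N(0, theta)
d2 = sp.diff(loglik, theta, 2)
fisher = sp.simplify(-d2.subs(x**2, theta))
print(fisher)  # 1/(2*theta**2)

# Jeffreys prior is proportional to sqrt(I(theta)), i.e. to 1/theta
print(sp.simplify(sp.sqrt(fisher)))
```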
2. Using Bayes' formula, the posterior distribution satisfies:
\[\pi(\theta \mid x_1, \ldots, x_n) \propto p(x_1, \ldots, x_n \mid \theta) \cdot \pi(\theta)\]
The likelihood of the sample is
\[p(x_1, \ldots, x_n \mid \theta) = (2\pi\theta)^{-n/2} \exp\left(-\frac{1}{2\theta}\sum_{i=1}^{n} x_i^2\right),\]
so multiplying by the prior \(\frac{1}{\theta}\) gives
\[\pi(\theta \mid x_1, \ldots, x_n) \propto \theta^{-\frac{n}{2}-1} \exp\left(-\frac{1}{2\theta}\sum_{i=1}^{n} x_i^2\right),\]
which is the kernel of an \(\textsf{InvGamma}\left(\frac{n}{2}, \frac{1}{2}\sum_{i=1}^{n} x_i^2\right)\) distribution.
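As a numerical sanity check, the unnormalized posterior (likelihood times \(\frac{1}{\theta}\)) can be compared against the inverse-gamma density with shape \(\frac{n}{2}\) and scale \(\frac{1}{2}\sum_{i=1}^{n} x_i^2\); a sketch using scipy, with made-up data and an arbitrary seed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, true_theta = 20, 4.0
x = rng.normal(0.0, np.sqrt(true_theta), size=n)
s2 = np.sum(x**2)

# Unnormalized posterior: Gaussian likelihood times the Jeffreys prior 1/theta
def unnorm_post(theta):
    return theta**(-n / 2) * np.exp(-s2 / (2 * theta)) / theta

grid = np.linspace(0.5, 20.0, 400)
u = unnorm_post(grid)

# Claimed closed form: inverse-gamma with shape n/2 and scale s2/2
ig = stats.invgamma.pdf(grid, a=n / 2, scale=s2 / 2)

# If the two shapes match, the ratio is constant across the grid
ratio = u / ig
print(ratio.max() / ratio.min())  # ~1.0
```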
3. The Bayesian estimator \(\hat{\theta}^{\text{Bayes}}\) is the mean of the posterior distribution. The mean of an \(\textsf{InvGamma}(\alpha, \beta)\) distribution is \(\frac{\beta}{\alpha - 1}\) (provided \(\alpha > 1\)), so with \(\alpha = \frac{n}{2}\) and \(\beta = \frac{1}{2}\sum_{i=1}^{n} X_i^2\),
\[\hat{\theta}^{\text{Bayes}} = \frac{\sum_{i=1}^{n} X_i^2}{n-2},\]
valid for \(n > 2\). In the parser's notation: Sigma_i(X_i^2)/(n - 2).
As for propriety: the prior is improper, since \(\int_0^\infty \frac{1}{\theta}\, d\theta\) diverges, while the posterior is a genuine inverse-gamma distribution with positive shape and scale parameters, hence proper. The correct choice is therefore "The posterior only."
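The closed-form posterior mean can also be checked by Monte Carlo; a sketch with scipy (data, seed, and draw count are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, true_theta = 50, 3.0
x = rng.normal(0.0, np.sqrt(true_theta), size=n)
s2 = np.sum(x**2)

# Closed-form posterior mean under the Jeffreys prior (requires n > 2)
theta_bayes = s2 / (n - 2)

# Empirical mean of draws from the posterior Inverse-Gamma(n/2, s2/2)
draws = stats.invgamma.rvs(a=n / 2, scale=s2 / 2, size=200_000,
                           random_state=rng)
print(theta_bayes, draws.mean())  # the two values should nearly agree
```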