Question
Consider a Gaussian linear model Y = aX + \epsilon from a Bayesian viewpoint, with prior \pi(a) = 1 for all a \in \mathbb{R}. Determine whether each of the following statements is true or false.
\pi(a) is a uniform prior.
True
False
\pi(a) is a Jeffreys prior when we consider the likelihood L(Y = y \mid A = a, X = x) (where we assume x is known).
True
False
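For context on the second statement, a short derivation sketch, assuming the Gaussian likelihood Y \mid a \sim \mathcal{N}(ax, \sigma^2) with x and \sigma^2 treated as known. The Fisher information does not depend on a, so the flat (improper) prior is proportional to the Jeffreys prior:

\ell(a) = -\frac{(y - ax)^2}{2\sigma^2} + \text{const}, \qquad I(a) = -\mathbb{E}\left[\frac{\partial^2 \ell}{\partial a^2}\right] = \frac{x^2}{\sigma^2}, \qquad \pi_J(a) \propto \sqrt{I(a)} \propto 1.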
Consider a linear regression model \mathbf{Y} = \mathbb{X}\boldsymbol{\beta} + \sigma\boldsymbol{\varepsilon} where
\boldsymbol{\varepsilon} \in \mathbb{R}^n is a random vector with \mathbb{E}[\boldsymbol{\varepsilon}] = \mathbf{0} and \mathbb{E}[\boldsymbol{\varepsilon}\boldsymbol{\varepsilon}^T] = I_n, and no further assumptions are made about \boldsymbol{\varepsilon};
\mathbb{X} is an n \times p deterministic matrix, and \mathbb{X}^T\mathbb{X} is invertible;
\sigma > 0 is an unknown constant.
Let \hat{\boldsymbol{\beta}} denote the least squares estimator of \boldsymbol{\beta} in this context. Determine whether each of the following statements is true or false.
\hat{\boldsymbol{\beta}} is the maximum likelihood estimator for \boldsymbol{\beta}.
True
False
With the model written as \mathbf{Y} = \mathbb{X}\boldsymbol{\beta} + \sigma\boldsymbol{\varepsilon}, \hat{\boldsymbol{\beta}} has dimension 1 \times p (i.e., it is a row vector of length p).
True
False
\hat{\boldsymbol{\beta}} has a Gaussian distribution (even for small n).
True
False
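As a sanity check on the shape and distribution statements, here is a minimal numpy sketch (the sampled design, coefficients, and noise law are illustrative assumptions, not part of the question). It draws non-Gaussian noise with mean zero and identity covariance, as the model permits, and computes \hat{\boldsymbol{\beta}} = (\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbf{Y}:

import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))        # stands in for the deterministic n x p design matrix
beta = np.array([1.0, -2.0, 0.5])  # hypothetical true coefficients
sigma = 0.3

# Centered exponential noise: mean 0, variance 1, independent entries,
# so E[eps] = 0 and E[eps eps^T] = I_n, but eps is not Gaussian.
eps = rng.exponential(1.0, size=n) - 1.0
Y = X @ beta + sigma * eps

# Least squares: solve the normal equations (X^T X) beta_hat = X^T Y
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_hat.shape)              # (3,): a length-p vector, i.e. a p x 1 column, not 1 x p

Since \hat{\boldsymbol{\beta}} is linear in \mathbf{Y}, its distribution is inherited from that of \boldsymbol{\varepsilon}; for small n it is Gaussian only if \boldsymbol{\varepsilon} is.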
Under the same setup and assumptions, \mathbb{X}\hat{\boldsymbol{\beta}} is...
(Check all that apply.)
Equal to (\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbf{Y}
An unbiased estimator of \mathbb{X}\boldsymbol{\beta}
A vector in \mathbb{R}^p
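For this multiple-choice item, a small Monte Carlo sketch under the same illustrative assumptions: the fitted vector \mathbb{X}\hat{\boldsymbol{\beta}} = \mathbb{X}(\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbf{Y} has n entries, and averaging over noise draws illustrates \mathbb{E}[\mathbb{X}\hat{\boldsymbol{\beta}}] = \mathbb{X}\boldsymbol{\beta}:

import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 40, 2, 0.5
X = rng.normal(size=(n, p))   # hypothetical design, held fixed across replications
beta = np.array([2.0, -1.0])

fits = []
for _ in range(5000):
    # Uniform(-sqrt(3), sqrt(3)) noise: mean 0, variance 1, non-Gaussian
    eps = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=n)
    Y = X @ beta + sigma * eps
    beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
    fits.append(X @ beta_hat)
fits = np.asarray(fits)

print(fits.shape[1])                               # n, not p: X beta_hat is a vector in R^n
print(np.abs(fits.mean(axis=0) - X @ beta).max())  # near 0: unbiased for X beta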
Answers
GPT 3.5
False (for the dimension statement: \hat{\boldsymbol{\beta}} = (\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbf{Y} is a p \times 1 column vector, not a 1 \times p row vector)