Yes, the map \psi is one-to-one on the domain \mathbb {R} \times (0, \infty ).
To see why, suppose that we have two pairs (\mu_1 , \sigma_1 ) and (\mu_2 , \sigma_2 ) in the domain of \psi such that \psi(\mu_1 , \sigma_1 ) = \psi(\mu_2 , \sigma_2 ). Then we must have both m_1(\mu_1 , \sigma_1 ) = m_1(\mu_2 , \sigma_2 ) and m_2(\mu_1 , \sigma_1 ) = m_2(\mu_2 , \sigma_2 ).
Using the fact that the k-th moment of a normal distribution N(\mu , \sigma ^2 ) is \mu ^k + \binom{k}{2}\mu ^{k-2}\sigma ^2 + ... (with the ellipsis denoting terms involving higher powers of \sigma ), we can write:
m_1(\mu , \sigma ) = \mu
m_2(\mu , \sigma ) = \mu ^2 + \sigma ^2
Thus, from the conditions m_1(\mu_1 , \sigma_1 ) = m_1(\mu_2 , \sigma_2 ) and m_2(\mu_1 , \sigma_1 ) = m_2(\mu_2 , \sigma_2 ), we have:
\mu_1 = \mu_2
\mu_1 ^2 + \sigma_1 ^2 = \mu_2 ^2 + \sigma_2 ^2
From the first equation, we have \mu_1 = \mu_2 . Substituting this into the second equation, we obtain:
\sigma_1 ^2 = \sigma_2 ^2
Since \sigma_1 , \sigma_2 > 0, taking positive square roots gives \sigma_1 = \sigma_2 .
Thus, we have shown that if \psi(\mu_1 , \sigma_1 ) = \psi(\mu_2 , \sigma_2 ), then \mu_1 = \mu_2 and \sigma_1 = \sigma_2 , which means that the map \psi is one-to-one on the domain \mathbb {R} \times (0, \infty ). In other words, given the outputs m_1 and m_2, we can uniquely reconstruct \mu \in \mathbb {R} and \sigma > 0.
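As a concrete sanity check (not part of the original argument), here is a minimal Python sketch, assuming numpy is available, that evaluates \psi(\mu , \sigma ) = (\mu , \mu ^2 + \sigma ^2 ) on a small grid of distinct (\mu , \sigma ) pairs and confirms that all outputs are distinct, in line with the injectivity shown above.

```python
import itertools

import numpy as np

def psi(mu, sigma):
    """Moments map: (mu, sigma) -> (m_1, m_2) for N(mu, sigma^2)."""
    return (mu, mu**2 + sigma**2)

# Small grid of distinct parameter pairs with sigma > 0 (illustrative values).
pairs = [(mu, sigma) for mu in (-2.0, -0.5, 0.0, 1.0, 3.0)
         for sigma in (0.5, 1.0, 2.0)]

images = [psi(mu, sigma) for mu, sigma in pairs]

# Injectivity on the grid: distinct inputs should give distinct outputs.
for (p1, out1), (p2, out2) in itertools.combinations(zip(pairs, images), 2):
    assert not np.allclose(out1, out2), (p1, p2)

print("All", len(pairs), "parameter pairs map to distinct (m_1, m_2) outputs.")
```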
Let
\psi : \mathbb{R} \times (0, \infty ) \to \mathbb{R}^2
(\mu , \sigma ) \mapsto (m_1(\mu , \sigma ), m_2(\mu , \sigma ))
denote the moments map considered in the previous problem, where m_k(\mu , \sigma ) denotes the k-th moment of the distribution N(\mu , \sigma ^2 ).
Is \psi one-to-one on the domain \mathbb {R} \times (0, \infty )? (Equivalently, given the outputs m_1 and m_2, can we use them to uniquely reconstruct \mu \in \mathbb {R} and \sigma > 0?)
If \psi is one-to-one on the given domain and \psi (\mu , \sigma ) = (m_1, m_2), what is \mu expressed in terms of m_1 and m_2? (If \psi is not one-to-one, enter 0.)
Since \psi is one-to-one on the given domain, the outputs (m_1, m_2) determine (\mu, \sigma) uniquely: we want the pair (\mu, \sigma) with \sigma > 0 such that \psi(\mu, \sigma) = (m_1, m_2). The first component of \psi gives directly:
m_1 = \mu
(The second component, m_2 = \mu^2 + \sigma^2, is used in the next part to recover \sigma.) Therefore, the solution for \mu in terms of m_1 and m_2 is simply:
\mu = m_1
Note that this is consistent with what we know about the moments of a normal distribution; the first moment is simply the mean of the distribution.
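To illustrate that the first moment is just the mean, here is a small sketch, again assuming numpy is available, that draws samples from N(\mu , \sigma ^2 ) and checks that the empirical first moment approximates \mu (so that \mu = m_1 is recovered).

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 1.5, 2.0                      # true parameters (illustrative choice)
samples = rng.normal(mu, sigma, size=1_000_000)

m1_hat = samples.mean()                   # empirical first moment
print(f"true mu = {mu}, recovered mu = m_1 ~= {m1_hat:.3f}")
```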
If \psi is one-to-one on the given domain and \psi (\mu , \sigma ) = (m_1, m_2), what is \sigma expressed in terms of m_1 and m_2? (If \psi is not one-to-one, enter 0.)
Using the result \mu = m_1 from the previous question, we can solve for \sigma in terms of m_1 and m_2 as follows:
m_2 = \mu^2 + \sigma^2
Substituting \mu = m_1, we get:
m_2 = m_1^2 + \sigma^2
Rearranging gives:
\sigma^2 = m_2 - m_1^2
Taking the positive square root (since \sigma > 0), we get:
\sigma = \sqrt{m_2 - m_1^2}
Therefore, the solution for \sigma in terms of m_1 and m_2 is:
\sigma = \sqrt{m_2 - m_1^2}
This expression is well defined because m_2 - m_1^2 = \sigma^2 > 0 for any (m_1, m_2) in the image of \psi.
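Putting the two answers together, here is a minimal Python sketch of the inverse map \psi^{-1}(m_1, m_2) = (m_1, \sqrt{m_2 - m_1^2}), with a round-trip check against \psi; this is an illustration of the formulas derived above, not code from the original problem.

```python
import math

def psi(mu, sigma):
    """Forward moments map for N(mu, sigma^2)."""
    return (mu, mu**2 + sigma**2)

def psi_inv(m1, m2):
    """Inverse map: recover (mu, sigma) from the first two moments."""
    # m2 - m1^2 equals sigma^2 > 0 for any point in the image of psi.
    return (m1, math.sqrt(m2 - m1**2))

# Round-trip check on a few parameter choices.
for mu, sigma in [(-1.0, 0.3), (0.0, 1.0), (2.5, 4.0)]:
    mu_rec, sigma_rec = psi_inv(*psi(mu, sigma))
    assert math.isclose(mu_rec, mu) and math.isclose(sigma_rec, sigma)
    print(f"(mu, sigma) = ({mu}, {sigma}) recovered as ({mu_rec}, {sigma_rec})")
```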