To analyze how different frequentist methods of estimation handle transformations, we can consider each method you mentioned: maximum likelihood estimation (MLE), the method of moments, and M-estimation.
Maximum Likelihood Estimation (MLE):
- MLE maximizes the likelihood function based on the observed data. It has the well-known invariance (equivariance) property: if \( \hat{\theta} \) is the MLE of \( \theta \) and you reparametrize as \( \theta' = g(\theta) \), then the MLE of \( \theta' \) is simply \( g(\hat{\theta}) \) — maximizing the likelihood in the new parametrization gives the same answer as transforming the original MLE. What this does not guarantee is that the estimate \( g(\hat{\theta}) \) equals the true transformed parameter \( \theta' \); it is still an estimate, subject to sampling error. A numerical sketch of the invariance property follows.
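As an illustration, here is a minimal numerical sketch of the invariance property, assuming NumPy and SciPy are available and using the exponential distribution, where the rate \( \lambda \) and the mean \( \mu = 1/\lambda \) are two parametrizations of the same model (all names and values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)   # true mean 2.0, true rate 0.5

# Negative log-likelihood in the rate parametrization lambda ...
def nll_rate(lam):
    return -(len(x) * np.log(lam) - lam * x.sum())

# ... and in the mean parametrization mu = g(lambda) = 1 / lambda.
def nll_mean(mu):
    return nll_rate(1.0 / mu)

lam_hat = minimize_scalar(nll_rate, bounds=(1e-6, 10.0), method="bounded").x
mu_hat = minimize_scalar(nll_mean, bounds=(1e-6, 10.0), method="bounded").x

# Equivariance: the MLE of mu is (numerically) 1 / lam_hat,
# even though neither estimate equals the true parameter value exactly.
print(mu_hat, 1.0 / lam_hat)
```

The two printed values agree up to optimizer tolerance, while both differ from the true mean of 2.0; that is exactly the distinction between equivariance of the estimator and equality with the true parameter.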
Method of Moments:
- The method of moments relies on equating sample moments to population moments. If you keep the same moment equations and merely relabel the parameter as \( \theta' = g(\theta) \), solving them gives \( g(\hat{\theta}) \). But the method offers no canonical choice of moment conditions, and the new parametrization often suggests different "natural" moments (for example, estimating \( \sigma \) from the mean absolute deviation versus estimating \( \sigma^2 \) from the second central moment); the resulting estimates are then not related by \( g \). Hence this method does not, by itself, guarantee that the relationship holds after transformation, and in any case the estimate does not equal the true transformed parameter. The sketch below illustrates this with the normal scale parameter.
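A small sketch of this, assuming NumPy and using normally distributed data (the moment conditions and sample size are illustrative): matching the second central moment estimates \( \sigma^2 \), while matching the mean absolute deviation estimates \( \sigma \) directly, and the two results are not linked by a square root.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=3.0, size=1000)   # true sigma = 3.0

# Moment estimate of sigma^2: match the second central sample moment.
sigma2_hat = np.mean((x - x.mean()) ** 2)

# Moment estimate of sigma itself via a different moment condition:
# for a normal distribution, E|X - mu| = sigma * sqrt(2 / pi).
sigma_hat = np.mean(np.abs(x - x.mean())) * np.sqrt(np.pi / 2)

# The two estimates are not related by the square-root transformation,
# and neither equals the true value exactly.
print(np.sqrt(sigma2_hat), sigma_hat)
```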
M-estimation:
- M-estimation is a more general framework that includes maximum likelihood estimation and covers estimators defined by minimizing (or maximizing) a criterion function. For a bijective reparametrization \( \theta' = g(\theta) \), rewriting the same criterion in terms of \( \theta' \) merely relabels the argmin, so the M-estimate of \( \theta' \) is \( g(\hat{\theta}) \), just as with MLE. But if you change the criterion, or apply the estimator to transformed data rather than a transformed parameter, the result need not be \( g(\hat{\theta}) \), and in neither case does it equal the true \( \theta' \). A sketch with a Huber-type location estimator follows.
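As a sketch (assuming NumPy and SciPy; the Huber criterion, the log/exp reparametrization, and the contaminated sample are all illustrative choices), reparametrizing the criterion reproduces \( g(\hat{\theta}) \), whereas running the same M-estimator on transformed data does not:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(1.0, 1.0, 200), [15.0, 20.0]])  # location data with outliers

def huber(r, k=1.345):
    # Huber rho function: quadratic near zero, linear in the tails.
    return np.where(np.abs(r) <= k, 0.5 * r ** 2, k * (np.abs(r) - 0.5 * k))

# M-estimate of the location theta: minimize the summed Huber criterion.
def crit_theta(theta):
    return huber(x - theta).sum()

theta_hat = minimize_scalar(crit_theta).x

# Reparametrize as theta' = exp(theta): rewriting the *same* criterion in the
# new parameter only relabels the argmin, so its minimizer is exp(theta_hat).
def crit_theta_prime(tp):
    return crit_theta(np.log(tp))

theta_prime_hat = minimize_scalar(crit_theta_prime, bounds=(1e-6, 50.0), method="bounded").x
print(theta_prime_hat, np.exp(theta_hat))   # numerically equal

# Applying the same M-estimator to the transformed data exp(x) is a
# different procedure and need not return exp(theta_hat).
other = minimize_scalar(lambda t: huber(np.exp(x) - t).sum()).x
print(other, np.exp(theta_hat))             # generally different
```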
In summary, none of the three methods can guarantee that the estimate of the transformed parameter equals its true value \( \theta' = g(\theta) \): estimates carry sampling error in any parametrization. What can be guaranteed is equivariance, i.e. that the estimator of \( \theta' \) is \( g(\hat{\theta}) \): MLE always has this property, M-estimators have it when the same criterion is reparametrized consistently, and the method of moments has it only if the same moment conditions are retained. Linearity of \( g \) matters for a different property: unbiasedness is preserved under linear transformations \( g(\theta) = a\theta + b \), since \( E[a\hat{\theta} + b] = aE[\hat{\theta}] + b \), but generally not under nonlinear ones.
Thus, the answer is that none of the three methods guarantees \( g(\hat{\theta}) = \theta' \) after transformation; each only produces a correspondingly transformed estimate, and how that estimate relates to the original one depends on the nature of the transformation and the specific method of estimation used.