Here is an evaluation of each statement:
- True - Multiple Linear Regression Models (MLRMs) allow partial analysis: each coefficient measures the contribution of one explanatory variable while the others are held fixed.
- False - Normality of the error term does not imply that the dependent variable is normally distributed overall. It implies only that Y is normally distributed conditional on the X's; the marginal distribution of Y also depends on how the regressors are distributed.
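A small simulation makes the point (all numbers below are illustrative; a binary regressor is chosen to make the effect visible): the errors are exactly normal, yet the marginal distribution of y is bimodal.

```python
# Sketch: errors ~ N(0, 1), but y is a mixture of two normal components
# because x is binary, so y is bimodal rather than normal overall.
import random

random.seed(0)
x = [random.choice([0, 1]) for _ in range(10_000)]
y = [1.0 + 5.0 * xi + random.gauss(0, 1) for xi in x]

# Share of mass below the midpoint between the two cluster centers (1 and 6):
low = sum(1 for yi in y if yi < 3.5) / len(y)
print(round(low, 2))  # roughly 0.5: half the mass near 1, half near 6
```

Here y given x is exactly normal, but no single normal distribution has half its mass in each of two well-separated clusters.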
- False - "Control variable" is not an alternative name for the dependent variable. In regression terminology, the dependent variable is the outcome (or response) variable, while control variables are additional explanatory variables included to account for their effects.
- True - In MLRMs, it is indeed best to include explanatory variables that are correlated with the dependent variable but not too highly correlated with the other X's, to avoid multicollinearity.
- True - Including too many variables can lead to overfitting, which may reduce the model's predictive power and decrease the precision of the estimates.
- True - The null hypothesis of the F-test states that the restricted model (with fewer predictors) is as good as the full model in explaining the variation in the dependent variable.
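As a sketch of the mechanics (all figures below are made up for illustration), the test statistic compares the residual sums of squares of the two models:

```python
# Hypothetical example of the F-test of q exclusion restrictions.
# rss_restricted / rss_full are the residual sums of squares of the
# restricted and full models, n is the sample size, and k is the
# number of regressors in the full model (excluding the intercept).

def f_statistic(rss_restricted, rss_full, q, n, k):
    return ((rss_restricted - rss_full) / q) / (rss_full / (n - k - 1))

# n = 100 observations, k = 5 regressors, q = 2 restrictions:
F = f_statistic(rss_restricted=420.0, rss_full=380.0, q=2, n=100, k=5)
print(round(F, 2))  # ~4.95: dropping the two variables costs real fit
```

A large F (here above the 5% critical value of roughly 3.1 for F(2, 94)) leads to rejecting the null that the restricted model suffices.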
- False - If the calculated F is large (p < 0.05), it indicates that the model is significant, not insignificant.
- False - For qualitative (categorical) variables, it is standard practice to omit one category (the reference category) from the model, both to avoid the "dummy variable trap" (perfect collinearity with the intercept) and to provide a baseline for comparison.
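A minimal sketch of the trap, using hypothetical region data: with an intercept, dummies for all categories sum to the constant column, so the design matrix is perfectly collinear; dropping one category as the reference removes the dependence.

```python
# Hypothetical categorical data; "north" serves as the reference category.
regions = ["north", "south", "south", "west", "north"]
categories = ["north", "south", "west"]

# Intercept plus a dummy for EVERY category -> dummy variable trap:
full = [[1.0] + [1.0 if r == c else 0.0 for c in categories] for r in regions]
print(all(row[0] == sum(row[1:]) for row in full))  # True: dummies sum to the intercept

# Dropping the reference category breaks the exact linear dependence:
dropped = [[1.0] + [1.0 if r == c else 0.0 for c in categories[1:]] for r in regions]
```

In the second design, each coefficient on a remaining dummy is then read as a shift relative to the omitted "north" baseline.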
- True - Perfect multicollinearity means that two or more independent variables are linked by an exact linear relationship (one can be expressed as a linear combination of the others).
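A short numerical sketch (made-up data) of what "expressed as a linear combination" does to the design matrix:

```python
# x3 = 2*x1 + x2 exactly, so the design matrix loses full column rank
# and X'X is singular: OLS has no unique solution.
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0])
x3 = 2 * x1 + x2  # exact linear combination of x1 and x2

X = np.column_stack([np.ones(4), x1, x2, x3])
rank = int(np.linalg.matrix_rank(X))
print(rank, X.shape[1])  # rank 3 < 4 columns -> rank deficient
```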
- False - The probit and logit models estimate the same success/failure probability but rest on different distributional assumptions (standard normal for probit, logistic for logit). Neither systematically yields larger probabilities than the other; they model the same quantity through different link functions.
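A quick sketch comparing the two link functions on the same linear index z makes the point: the curves cross, so neither gives uniformly larger probabilities.

```python
import math

def probit(z):  # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def logit(z):   # logistic CDF
    return 1 / (1 + math.exp(-z))

for z in (-2.0, 0.0, 2.0):
    print(z, round(probit(z), 3), round(logit(z), 3))
# Probit lies below logit in the left tail, equals it at z = 0, and lies
# above it in the right tail: the ordering reverses because the logistic
# distribution simply has heavier tails than the normal.
```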
- False - A variance inflation factor (VIF) greater than 10 signals strong multicollinearity, but not perfect multicollinearity. Perfect multicollinearity means an exact linear relationship among the regressors, in which case the VIF would be infinite (the auxiliary R² equals 1).
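The relationship can be sketched directly from the VIF formula (the R² values below are illustrative):

```python
# VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing X_j on the
# other explanatory variables.

def vif(r_squared):
    return 1 / (1 - r_squared)

print(round(vif(0.50), 1))  # 2.0   -> mild correlation
print(round(vif(0.90), 1))  # 10.0  -> the conventional warning threshold
print(round(vif(0.99), 1))  # 100.0 -> severe, yet still finite
# Only as R_j^2 -> 1 (perfect multicollinearity) does VIF -> infinity.
```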