
ISI MStat PSB 2013 Problem 4 | Linear Regression

This is a sample problem from ISI MStat PSB 2013 Problem 4. It is based on the simple linear regression model: finding the estimates and their MSEs. Do think over the "Food for Thought"; any kind of discussion will be appreciated. Give it a try!

Problem- ISI MStat PSB 2013 Problem 4


Consider n independent observations { (x_i, y_i) : 1 \le i \le n } from the model

Y= \alpha + \beta x + \epsilon ,

where \epsilon is normal with mean 0 and variance \sigma^2. Let \hat{\alpha}, \hat{\beta} and \hat{\sigma}^2 be the maximum likelihood estimators of \alpha, \beta and \sigma^2, respectively. Let v_{11}, v_{22} and v_{12} be the estimated values of Var(\hat{\alpha}), Var(\hat{\beta}) and Cov(\hat{\alpha}, \hat{\beta}), respectively.

(a) What is the estimated mean of Y when x = x_o? Estimate the mean squared error of this estimator.

(b) What is the predicted value of Y when x = x_o? Estimate the mean squared error of this predictor.

Prerequisites


Linear Regression

Method of Least Squares

Maximum likelihood Estimators.

Mean Squared Error.

Solution:

For the given model, the random errors are \epsilon \sim N(0, \sigma^2), and the maximum likelihood estimators (MLEs) of the model parameters are \hat{\alpha}, \hat{\beta} and \hat{\sigma}^2. The interesting thing about this model is that, since the random errors \epsilon are Gaussian random variables, the ordinary least squares (OLS) estimates of the model parameters \alpha, \beta and \sigma^2 are identical to their maximum likelihood estimators (which are already given!). How? Verify it yourself once, and remember it henceforth.

So, here \hat{\alpha}, \hat{\beta} and \hat{\sigma}^2 are also the OLS estimates of the respective model parameters.

By the Gauss-Markov theorem, the OLS estimates \hat{\alpha} and \hat{\beta} are the BLUEs (Best Linear Unbiased Estimators) of \alpha and \beta; in particular, they are unbiased. (A word of caution: the MLE \hat{\sigma}^2, which divides the residual sum of squares by n, is actually biased; the unbiased version divides by n-2. Only the unbiasedness of \hat{\alpha} and \hat{\beta} is needed below.)
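The MLE = OLS coincidence can be checked numerically. The sketch below uses simulated data (the sample size, true parameters and seed are illustrative assumptions, not part of the problem): it computes the closed-form OLS estimates and the MLE \hat{\sigma}^2 = RSS/n, and confirms that a generic least-squares routine returns the same \hat{\alpha} and \hat{\beta}.

```python
import numpy as np

# Simulate the model Y = alpha + beta*x + eps with Gaussian errors
# (illustrative parameters, not from the problem statement).
rng = np.random.default_rng(0)
n, alpha, beta, sigma = 200, 2.0, 3.0, 1.5
x = rng.uniform(0, 10, n)
y = alpha + beta * x + rng.normal(0, sigma, n)

x_bar, y_bar = x.mean(), y.mean()
Sxx = np.sum((x - x_bar) ** 2)
Sxy = np.sum((x - x_bar) * (y - y_bar))

beta_hat = Sxy / Sxx                  # OLS slope = MLE of beta
alpha_hat = y_bar - beta_hat * x_bar  # OLS intercept = MLE of alpha
resid = y - (alpha_hat + beta_hat * x)
sigma2_hat = np.sum(resid ** 2) / n   # MLE of sigma^2 (divides by n, not n-2)

# A generic least-squares fit agrees with the closed forms.
b_ls, a_ls = np.polyfit(x, y, 1)
assert np.allclose([a_ls, b_ls], [alpha_hat, beta_hat])
```

The assertion at the end is the whole point: minimizing the sum of squares and maximizing the Gaussian likelihood produce the same \hat{\alpha} and \hat{\beta}.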

(a) Now we need to find the estimated mean of Y given x = x_o.

\hat{ E( Y| x=x_o)}= \hat{\alpha} + \hat{\beta} x_o is the estimated mean of Y given x=x_o.

Now, since the given MLEs (OLS estimates) of \alpha and \beta are unbiased for their respective parameters,

MSE( \hat{ E( Y| x=x_o)})=MSE(\hat{\alpha} + \hat{\beta} x_o)=E(\hat{\alpha} + \hat{\beta} x_o-(\alpha + \beta x_o))^2

=E(\hat{\alpha} + \hat{\beta} x_o-E(\hat{\alpha} + \hat{\beta} x_o))^2

=Var( \hat{\alpha} + \hat{\beta} x_o)

= Var(\hat{\alpha}) + 2x_o Cov(\hat{\alpha}, \hat{\beta}) + {x_o}^2 Var(\hat{\beta})

So, MSE( \hat{ E( Y| x=x_o)}) = v_{11} + 2x_o v_{12} + {x_o}^2 v_{22}.
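As a sanity check, one can plug the usual closed forms for the estimated variances and covariance (with \sigma^2 replaced by \hat{\sigma}^2) into v_{11} + 2x_o v_{12} + {x_o}^2 v_{22} and verify that it collapses to the familiar \hat{\sigma}^2 ( 1/n + (x_o - \bar{x})^2 / S_{xx} ). The data below are simulated purely for illustration.

```python
import numpy as np

# Illustrative simulated data (assumed, not from the problem).
rng = np.random.default_rng(1)
n = 100
x = rng.uniform(0, 5, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, n)

x_bar = x.mean()
Sxx = np.sum((x - x_bar) ** 2)
beta_hat = np.sum((x - x_bar) * (y - y.mean())) / Sxx
alpha_hat = y.mean() - beta_hat * x_bar
sigma2_hat = np.sum((y - alpha_hat - beta_hat * x) ** 2) / n  # MLE

# Closed-form estimated variances/covariance of the estimators.
v11 = sigma2_hat * (1 / n + x_bar**2 / Sxx)  # est. Var(alpha_hat)
v22 = sigma2_hat / Sxx                       # est. Var(beta_hat)
v12 = -sigma2_hat * x_bar / Sxx              # est. Cov(alpha_hat, beta_hat)

x_o = 3.0
mse_mean = v11 + 2 * x_o * v12 + x_o**2 * v22
# Algebraically identical to sigma2_hat * (1/n + (x_o - x_bar)^2 / Sxx):
assert np.isclose(mse_mean, sigma2_hat * (1 / n + (x_o - x_bar) ** 2 / Sxx))
```

Note how the quadratic in x_o completes the square: the estimated MSE of the fitted mean is smallest at x_o = \bar{x} and grows as x_o moves away from the centre of the data.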

(b) Similarly, when x = x_o, the quantity to be predicted is a new observation Y = \alpha + \beta x_o + \epsilon, where \epsilon is a fresh error term, and the predictor is

\hat{Y} = \hat{\alpha} + \hat{\beta} x_o.

The fresh error \epsilon in the new Y is what distinguishes prediction from estimating the mean.

Using arguments similar to those in (a), together with the independence of the fresh error \epsilon from (\hat{\alpha}, \hat{\beta}) (since \epsilon belongs to a new observation), verify that

MSE(\hat{Y}) = v_{11} + 2x_o v_{12} + {x_o}^2 v_{22} + \hat{\sigma}^2. Hence we are done!
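A small Monte Carlo sketch (with assumed illustrative parameters) can confirm the extra term: the prediction MSE exceeds the MSE of the estimated mean by roughly \sigma^2, the variance of the fresh error attached to the new observation.

```python
import numpy as np

# Monte Carlo check of the prediction MSE (illustrative parameters).
rng = np.random.default_rng(2)
n, alpha, beta, sigma = 50, 1.0, 2.0, 1.0
x = np.linspace(0, 5, n)  # fixed design, held constant across replications
x_bar = x.mean()
Sxx = np.sum((x - x_bar) ** 2)
x_o = 4.0

sq_err = []
for _ in range(20000):
    y = alpha + beta * x + rng.normal(0, sigma, n)
    b = np.sum((x - x_bar) * (y - y.mean())) / Sxx   # OLS/MLE slope
    a = y.mean() - b * x_bar                         # OLS/MLE intercept
    y_new = alpha + beta * x_o + rng.normal(0, sigma)  # fresh observation
    sq_err.append((a + b * x_o - y_new) ** 2)

empirical = np.mean(sq_err)
# Theoretical prediction MSE: the "+1" inside is the fresh-error variance.
theoretical = sigma**2 * (1 + 1 / n + (x_o - x_bar) ** 2 / Sxx)
```

Up to Monte Carlo noise, `empirical` matches `theoretical`; dropping the leading 1 recovers the MSE of the estimated mean from part (a).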


Food For Thought

Now, can you explain why the maximum likelihood estimators and the ordinary least squares estimates are identical when the model assumes Gaussian errors?

Wait!! Not done yet. The main course is served below !!

In a game of darts, a thrower throws a dart randomly and uniformly into a unit circle. Let \theta be the angle between the horizontal axis and the line segment joining the dart to the center, and let Z be a random variable: when the thrower is left-handed, Z = -1, and when the thrower is right-handed, Z = 1. Assume that a left-handed and a right-handed thrower are equally likely (is it really equally likely in a real scenario?). Can you construct a regression model for regressing \theta on Z?

Think over it, if you want to discuss, we can do that too !!


Similar Problems and Solutions



ISI MStat PSB 2008 Problem 10

