
Restricted Maximum Likelihood Estimator | ISI MStat PSB 2012 Problem 9

This is a beautiful sample problem from ISI MStat PSB 2012 Problem 9. It is about restricted MLEs and how they differ from unrestricted ones; if you miss the subtleties, you may miss the differences too. Try it, but be careful!

Problem- ISI MStat PSB 2012 Problem 9


Suppose X_1 and X_2 are i.i.d. Bernoulli random variables with parameter p, where it is known that \frac{1}{3} \le p \le \frac{2}{3}. Find the maximum likelihood estimator \hat{p} of p based on X_1 and X_2.

Prerequisites


Bernoulli trials

Restricted Maximum Likelihood Estimators

Real Analysis

Solution:

This problem seems quite simple, and it is, provided one observes the subtle details. Let us first think about the unrestricted MLE of p.

Let the unrestricted MLE of p (i.e., when 0 \le p \le 1) based on X_1 and X_2 be p_{MLE}. Then p_{MLE}=\frac{X_1+X_2}{2}. (How? See the derivation just after the log-likelihood below.)

Now let us see the contradictions that may occur if we do not modify p_{MLE} to \hat{p} (as the problem asks).

Observe that if our sample comes out such that X_1=X_2=0 or X_1=X_2=1, then p_{MLE} will be 0 or 1 respectively, whereas the actual parameter p can take neither the value 0 nor the value 1! So p_{MLE} needs serious improvement!

To modify p_{MLE}, let us look at the log-likelihood function of the Bernoulli distribution based on the two samples:

\log L(p|x_1,x_2)=(x_1+x_2)\log p +(2-x_1-x_2)\log (1-p)
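For completeness (answering the "How?" above): differentiating the log-likelihood with respect to p and setting the derivative to zero gives

\frac{d}{dp}\log L(p|x_1,x_2)=\frac{x_1+x_2}{p}-\frac{2-x_1-x_2}{1-p}=0 \implies p=\frac{x_1+x_2}{2},

and since the log-likelihood is concave in p, this critical point (or the appropriate boundary point when x_1+x_2 equals 0 or 2) is indeed the unrestricted maximizer.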

Now make two observations. When X_1=X_2=0 (i.e., p_{MLE}=0), we have \log L(p|x_1,x_2)=2\log (1-p), which decreases as p increases. Hence, under the given restriction, the log-likelihood is maximized when p is least, i.e., \hat{p}=\frac{1}{3}.

Similarly, when p_{MLE}=1 (i.e., when X_1=X_2=1), the log-likelihood is maximized when p is largest, i.e., \hat{p}=\frac{2}{3}.

So, to modify p_{MLE} into \hat{p}, we can set up a linear relationship between p_{MLE} and \hat{p}. (Linear, because the relationship between p and p_{MLE} is linear. As a check on the remaining case: when X_1+X_2=1, p_{MLE}=\frac{1}{2} already lies inside [\frac{1}{3},\frac{2}{3}], so there \hat{p}=\frac{1}{2}, and the line below passes through this point as well.) Thus (p_{MLE}, \hat{p}) lies on the line joining the points (0,\frac{1}{3}) (when p_{MLE}=0, \hat{p}=\frac{1}{3}) and (1,\frac{2}{3}) (when p_{MLE}=1, \hat{p}=\frac{2}{3}). Hence the line is

\frac{\hat{p}-\frac{1}{3}}{p_{MLE}-0}=\frac{\frac{2}{3}-\frac{1}{3}}{1-0}

Solving, \hat{p}=\frac{1}{3}+\frac{p_{MLE}}{3}=\frac{2+X_1+X_2}{6} is the required restricted MLE.
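As a quick numerical sanity check (a minimal Python sketch, not part of the original solution), we can maximize the restricted log-likelihood over a fine grid of p \in [\frac{1}{3},\frac{2}{3}] for each possible sample and compare the result with the closed form \hat{p}=\frac{2+X_1+X_2}{6}:

```python
import numpy as np

def log_likelihood(p, x1, x2):
    # Bernoulli log-likelihood for two observations:
    # log L(p | x1, x2) = (x1 + x2) log p + (2 - x1 - x2) log(1 - p)
    s = x1 + x2
    return s * np.log(p) + (2 - s) * np.log(1 - p)

# Fine grid over the restricted parameter space [1/3, 2/3]
grid = np.linspace(1/3, 2/3, 100001)

for x1 in (0, 1):
    for x2 in (0, 1):
        p_grid = grid[np.argmax(log_likelihood(grid, x1, x2))]
        p_closed = (2 + x1 + x2) / 6  # closed-form restricted MLE
        print(f"sample ({x1},{x2}): grid argmax = {p_grid:.4f}, "
              f"closed form = {p_closed:.4f}")
```

For every sample the grid argmax should agree with the closed form up to the grid resolution, confirming that pulling p_{MLE} into [\frac{1}{3},\frac{2}{3}] this way is correct.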

This concludes the solution.


Food For Thought

Can you find the conditions under which Maximum Likelihood Estimators are also unbiased estimators of the parameter? For which distributions do you think this condition holds true? Are they also Minimum Variance Unbiased Estimators?

Can you give some examples where MLEs are not unbiased? Even if they are not unbiased, are they sufficient?


Similar Problems and Solutions



ISI MStat PSB 2008 Problem 10

