
Restricted Maximum Likelihood Estimator | ISI MStat PSB 2012 Problem 9

This is a very beautiful sample problem from ISI MStat PSB 2012 Problem 9. It is about restricted MLEs and how they differ from unrestricted ones; if you miss the delicacies, you may miss the differences too. Try it! But be careful.

Problem– ISI MStat PSB 2012 Problem 9


Suppose \(X_1\) and \(X_2\) are i.i.d. Bernoulli random variables with parameter \(p\), where it is known that \(\frac{1}{3} \le p \le \frac{2}{3} \). Find the maximum likelihood estimator \(\hat{p}\) of \(p\) based on \(X_1\) and \(X_2\).

Prerequisites


Bernoulli trials

Restricted Maximum Likelihood Estimators

Real Analysis

Solution :

This problem seems quite simple, and it is simple, but only if one observes the subtle details. Let's first think about the unrestricted MLE of \(p\).

Let the unrestricted MLE of \(p\) (i.e. when \(0\le p \le 1\)) based on \(X_1\) and \(X_2\) be \(p_{MLE}\); then \( p_{MLE}=\frac{X_1+X_2}{2}\) (How??)

Now let's see the contradictions which may occur if we don't modify \(p_{MLE}\) to \(\hat{p}\) (as asked).

See that if our sample comes out such that \(X_1=X_2=0\) or \(X_1=X_2=1\), then \(p_{MLE}\) will be 0 and 1 respectively, whereas the actual parameter \(p\) can take neither the value 0 nor 1, since \(\frac{1}{3} \le p \le \frac{2}{3}\)!! So, \(p_{MLE}\) needs serious improvement!

To modify \(p_{MLE}\), let's observe the log-likelihood function of the Bernoulli based on the two observations,

\( \log L(p|x_1,x_2)=(x_1+x_2)\log p +(2-x_1-x_2)\log (1-p) \)

Now, make two observations. When \(X_1=X_2=0\) (i.e. \(p_{MLE}=0\)), then \(\log L(p|x_1,x_2)=2\log (1-p)\); see that \(\log L(p|x_1,x_2)\) decreases as \(p\) increases, hence under the given restriction the log-likelihood is maximum when \(p\) is least, i.e. \(\hat{p}=\frac{1}{3}\).

Similarly, when \(p_{MLE}=1\) (i.e. when \( X_1=X_2=1\)), for the log-likelihood function to be maximum, \(p\) has to be as large as possible, i.e. \(\hat{p}=\frac{2}{3}\).

So, to modify \(p_{MLE}\) to \(\hat{p}\), we develop a linear relationship between \(p_{MLE}\) and \(\hat{p}\) (linear because the relationship between \(p\) and \(p_{MLE}\) is linear). So, the point \((p_{MLE},\hat{p})\) lies on the line joining \((0,\frac{1}{3})\) (when \(p_{MLE}=0\), \(\hat{p}=\frac{1}{3}\)) and \((1,\frac{2}{3})\) (when \(p_{MLE}=1\), \(\hat{p}=\frac{2}{3}\)). Hence the line is,

\(\frac{\hat{p}-\frac{1}{3}}{p_{MLE}-0}=\frac{\frac{2}{3}-\frac{1}{3}}{1-0}\)

\(\hat{p}=\frac{1}{3}+\frac{p_{MLE}}{3}=\frac{2+X_1+X_2}{6}\) is the required restricted MLE.
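If you want to double-check this numerically, here is a small Python sketch (my own addition, not part of the exam solution): it maximizes the restricted log-likelihood over a fine grid on \([\frac{1}{3},\frac{2}{3}]\) and compares the result with the closed form above. The function name and grid size are arbitrary choices.

```python
import numpy as np

def restricted_mle(x1, x2, grid_size=100001):
    # grid over the restricted parameter space [1/3, 2/3]
    p = np.linspace(1/3, 2/3, grid_size)
    s = x1 + x2
    log_lik = s * np.log(p) + (2 - s) * np.log(1 - p)
    return p[np.argmax(log_lik)]       # grid maximizer of the log-likelihood

for x1 in (0, 1):
    for x2 in (0, 1):
        closed_form = (2 + x1 + x2) / 6
        print((x1, x2), round(restricted_mle(x1, x2), 4), round(closed_form, 4))
```

For every possible sample the grid maximizer and \(\frac{2+X_1+X_2}{6}\) should agree up to grid resolution.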

Hence the solution concludes.


Food For Thought

Can you find out the conditions under which Maximum Likelihood Estimators are also unbiased estimators of the parameter? For which distributions do you think this condition holds true? Are they also Minimum Variance Unbiased Estimators?!

Can you give some examples where the MLEs are not unbiased? Even if they are not unbiased, are they sufficient??



ISI MStat PSB 2010 Problem 10 | Uniform Modified

This is a very elegant sample problem from ISI MStat PSB 2010 Problem 10. It's mostly based on properties of the uniform distribution and its behaviour when modified. Try it!

Problem– ISI MStat PSB 2010 Problem 10


Let \(X\) be a random variable uniformly distributed over \((0,2\theta )\), \(\theta>0\), and \(Y=\max(X,2\theta -X)\).

(a) Find \(\mu =E(Y)\).

(b) Let \(X_1,X_2,…..X_n\) be a random sample from the above distribution with unknown \(\theta\). Find two distinct unbiased estimators of \(\mu\), as defined in (a), based on the entire sample.

Prerequisites


Uniform Distribution

Law of Total Expectation

Unbiased Estimators

Solution :

Well, this is a very straightforward problem, where we just need to be aware of the way \(Y\) is defined.

As we need \(E(Y)\), and by the definition of \(Y\), we clearly see that \(Y\) depends on \(X\), where \(X \sim Unif(0, 2\theta)\).

So, using Law of Total Expectation,

\(E(Y)= E(X|X>2\theta-X)P(X>2\theta-X)+E(2\theta-X|X \le 2\theta-X)P(X \le 2\theta-X).\)

Observe that \(P(X \le 2\theta-X)=P(X \le \theta)=\frac{1}{2}\), why??

Also, conditional pdf of \(X|X>\theta\) is,

\(f_{X|X>\theta}(x)=\frac{f_X(x)}{P(X>\theta)}=\frac{1}{\theta}, \ \theta< x \le 2\theta \), [where \(f_X\) is the pdf of \(X\)].

The other conditional pdf is the same due to symmetry. (Verify!!)

So, \(E(Y)=E(X|X\sim Unif(\theta,2\theta))\frac{1}{2}+E(2\theta-X|X\sim Unif(0,\theta))\frac{1}{2}=\frac{1}{2}(\frac{3\theta}{2}+2\theta-\frac{\theta}{2})=\frac{3\theta}{2} \).

hence, \(\mu=\frac{3\theta}{2}\).
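A quick Monte Carlo sanity check of this value may be reassuring; the following Python sketch (my own addition, with an arbitrary choice of \(\theta\)) simulates \(Y=\max(X,2\theta-X)\) and compares the sample mean with \(\frac{3\theta}{2}\).

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.5                                  # any theta > 0 works here
x = rng.uniform(0, 2 * theta, size=1_000_000)
y = np.maximum(x, 2 * theta - x)             # Y = max(X, 2*theta - X)
print(y.mean(), 3 * theta / 2)               # the two numbers should be close
```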

Now, for the next part, one trivial unbiased estimator of \(\theta\) is \(T_n=\frac{1}{n}\sum_{i=1}^n X_i \) (based on the given sample). So,

\(\frac{3T_n}{2}=\frac{3}{2n}\sum_{i=1}^n X_i \) is an obvious unbiased estimator of \(\mu\).

For another, we need to move away from the conventional way of looking at things and turn to the order statistics, since we know that \(X_{(n)}\) is sufficient for \(\theta\). (Don't know?? Look up the Factorization Theorem.)

So, verify that \(E(X_{(n)})=\frac{2n}{n+1}\theta\).

Hence, \(\frac{n+1}{2n}X_{(n)} \) is another unbiased estimator of \(\theta\). So, \(\frac{3(n+1)}{4n}X_{(n)}\) is another unbiased estimator of \(\mu\) as defined in (a).
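If you like, you can also check the unbiasedness of both estimators by simulation; the sketch below (my own, with arbitrary \(\theta\), \(n\) and replication count) averages each estimator over many samples.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 20, 200_000
x = rng.uniform(0, 2 * theta, size=(reps, n))
est1 = 3 / (2 * n) * x.sum(axis=1)            # based on the sample mean
est2 = 3 * (n + 1) / (4 * n) * x.max(axis=1)  # based on the maximum X_(n)
print(est1.mean(), est2.mean(), 3 * theta / 2)  # all approximately equal
```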

Hence the solution concludes.


Food For Thought

Let us think about an unpopular but very beautiful relationship for discrete random variables, beside the Universality of the Uniform. Let \(X\) be a discrete random variable with cdf \(F_X(x)\) and define the random variable \(Y=F_X(X)\).

Can you verify that \(Y\) is stochastically greater than a Uniform(0,1) random variable \(U\)? i.e.

\(P(Y>y) \ge P(U>y)=1-y\) for all \(y\), \(0<y<1\),

\(P(Y>y) > P(U>y) =1-y \), for some \(y\), \(0<y<1\).

Hint: Draw a typical picture of a discrete cdf and observe the jump points! You may then jump to the solution!! Think it over.
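If a small numerical illustration helps, here is a sketch (my own, using \(X \sim\) Bernoulli(0.3) as an arbitrary example) comparing \(P(Y>y)\) with \(1-y\) on a grid of \(y\) values.

```python
import numpy as np

# For X ~ Bernoulli(0.3): F_X(0) = 0.7 and F_X(1) = 1, so Y = F_X(X) takes the
# value 0.7 with probability 0.7 and the value 1 with probability 0.3.
p = 0.3
y_grid = np.linspace(0.01, 0.99, 99)
# P(Y > y): both support points exceed y, or only the point 1 does.
prob_Y_greater = np.where(y_grid < 0.7, 1.0, p)
prob_U_greater = 1 - y_grid                        # P(U > y) for U ~ Unif(0,1)
print(np.all(prob_Y_greater >= prob_U_greater))    # True: stochastically larger
print(np.any(prob_Y_greater > prob_U_greater))     # True: strictly so somewhere
```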



ISI MStat PSB 2012 Problem 10 | MVUE Revisited

This is a very simple sample problem from ISI MStat PSB 2012 Problem 10. It's a very basic but very important and regular problem for statistics students, using one of the most beautiful theorems in Point Estimation. Try it!

Problem– ISI MStat PSB 2012 Problem 10


Let \(X_1,X_2,…..X_{10}\) be i.i.d. Poisson random variables with unknown parameter \(\lambda >0\). Find the minimum variance unbiased estimator of exp{\(-2\lambda \)}.

Prerequisites


Poisson Distribution

Minimum Variance Unbiased Estimators

Lehmann-Scheffé's Theorem

Completeness and Sufficiency

Solution :

Well, this is a very straightforward problem, where we just need to verify certain conditions of sufficiency and completeness.

If one is aware of the nature of the Poisson distribution, one knows that for the given sample \(X_1,X_2,…..,X_{10}\), the sufficient statistic for the unknown parameter \(\lambda>0\) is \(\sum_{i=1}^{10} X_i \); moreover, \(\sum_{i}X_i\) is also complete for \(\lambda\) (How??).

So, now first let us construct an unbiased estimator of \(e^{-2\lambda}\). Here, we need to observe patterns as usual. Let us define an Indicator Random variable,

\(I_X(x) = \begin{cases} 1 & \text{if } X_1=0 \text{ and } X_2=0 \\ 0 & \text{otherwise} \end{cases}\),

So, \(E(I_X(x))=P(X_1=0, X_2=0)=e^{-2\lambda}\), hence \(I_X(x)\) is an unbiased estimator of \(e^{-2\lambda}\). But is it a Minimum Variance ??

Well, Lehmann-Scheffé answers that. Since we know that \(\sum X_i\) is complete and sufficient for \(\lambda \), by Lehmann-Scheffé's theorem,

\(E(I_X(x)|\sum X_i=t)\), viewed as a function of \(t\), is the minimum variance unbiased estimator of \(e^{-2\lambda }\). So, we need to find the following,

\(E(I_X(x)|\sum_{i=1}^{10}X_i=t)= \frac{P(X_1=0,X_2=0, \sum_{i=3}^{10}X_i=t)}{P(\sum_{i=1}^{10}X_i=t)}=\frac{e^{-2\lambda}e^{-8\lambda}\frac{(8\lambda)^t}{t!}}{e^{-10\lambda}\frac{(10\lambda)^t}{t!}}=(\frac{8}{10})^t\).

So, the Minimum Variance Unbiased Estimator of exp{\(-2\lambda\)} is \((\frac{8}{10})^{\sum_{i=1}^{10}X_i}\).
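As a sanity check, one can simulate this; the Python sketch below (my own, with an arbitrary \(\lambda\)) compares the average of \((\frac{8}{10})^{\sum X_i}\) and of the indicator \(I_X(x)\) with \(e^{-2\lambda}\), and also compares their variances.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, reps = 0.8, 500_000
x = rng.poisson(lam, size=(reps, 10))
mvue = (8 / 10) ** x.sum(axis=1)                      # the MVUE found above
indicator = ((x[:, 0] == 0) & (x[:, 1] == 0)).astype(float)  # crude unbiased estimator
print(mvue.mean(), indicator.mean(), np.exp(-2 * lam))  # all approximately equal
print(mvue.var(), indicator.var())                      # MVUE has the smaller variance
```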

Now can you generalize this for a sample of size \(n\)? Again, what if I define \(I_X(x)\) as

\(I_X(x) = \begin{cases} 1 & \text{if } X_i=0 \text{ and } X_j=0 \\ 0 & \text{otherwise} \end{cases}\), for some \(i \neq j\),

would it affect the end result?? What do you think?


Food For Thought

Let's not end our concern for the Poisson here, and think further: for the given sample, let the sample mean be \(\bar{X}\) and the sample variance be \(S^2\). Can you show that \(E(S^2|\bar{X})=\bar{X}\)? Further, can you extend your deductions to show \( Var(S^2) > Var(\bar{X}) \)??

Finally, can you generalize the above result?? Give it some thought to deepen your insights on MVUE.
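A quick simulation may help build intuition for the variance comparison; the sketch below (my own, with arbitrary \(\lambda\) and \(n\)) estimates \(Var(\bar{X})\) and \(Var(S^2)\) for Poisson samples.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n, reps = 1.5, 10, 200_000
x = rng.poisson(lam, size=(reps, n))
xbar = x.mean(axis=1)                # sample mean
s2 = x.var(axis=1, ddof=1)           # usual unbiased sample variance
print(xbar.mean(), s2.mean())        # both estimate lambda (unbiased)
print(xbar.var(), s2.var())          # Var(S^2) exceeds Var(Xbar)
```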



ISI MStat PSB 2006 Problem 9 | Consistency and MVUE

This is a very simple sample problem from ISI MStat PSB 2006 Problem 9. It's based on point estimation: finding a consistent estimator and a minimum variance unbiased estimator, and recognizing the subtle relation between the two. Go for it!

Problem– ISI MStat PSB 2006 Problem 9


Let \(X_1,X_2,……\) be i.i.d. random variables with density \(f_{\theta}(x), \ x \in \mathbb{R}, \ \theta \in (0,1) \), being the unknown parameter. Suppose that there exists an unbiased estimator \(T\) of \(\theta\) based on sample size 1, i.e. \(E_{\theta}(T(X_1))=\theta \). Assume that \(Var(T(X_1))< \infty \).

(a) Find an estimator \(V_n\) for \(\theta\) based on \(X_1,X_2,……,X_n\) such that \(V_n\) is consistent for \(\theta \) .

(b) Let \(S_n\) be the MVUE( minimum variance unbiased estimator ) of \(\theta \) based on \(X_1,X_2,….,X_n\). Show that \(\lim_{n\to\infty}Var(S_n)=0\).

Prerequisites


Consistent estimators

Minimum Variance Unbiased Estimators

Rao-Blackwell Theorem

Solution :

Often, problems on estimation seem a bit complicated and we feel directionless, but in most cases it is beneficial to just go with the flow.

Here, it is given that \(T\) is an unbiased estimator of \(\theta \) based on one observation, and we are to find a consistent estimator of \(\theta \) based on a sample of size \(n\). Now first, we should consider: what conditions guarantee that an estimator is consistent?

  • The required estimator \(V_n\) has to be unbiased for \(\theta \) as \( n \uparrow \infty \), i.e. \(\lim_{n \uparrow \infty} E_{\theta}(V_n)=\theta \).
  • The variance of the would-be consistent estimator must converge to 0 as \(n\) grows large, i.e. \(\lim_{n \uparrow \infty}Var_{\theta}(V_n)=0 \).

First things first, let us fulfill the unbiasedness criterion for \(V_n\). From each observation of the sample \(X_1,X_2,…..,X_n\) of size \(n\), we get a set of \(n\) unbiased estimators of \(\theta \): \( T(X_1), T(X_2), ….., T(X_n)\). So, can we write \(V_n=\frac{1}{n} \sum_{i=1}^n(T(X_i)+a)\), where \(a\) is a constant (kept for generality)? Can you verify that \(V_n\) satisfies the first requirement of being a consistent estimator?

Now, proceeding towards the final requirement, that the variance of \(V_n\) converges to 0 as \(n \uparrow \infty\): since we have defined \(V_n\) based on \(T\), it is given that \(Var(T(X_i)) \) exists for \( i \in \mathbb{N}\), and \(X_1,X_2,…,X_n\) are i.i.d. (which is a very important realization here), we are led to

\(Var(V_n)= \frac{Var(T(X_1))}{n}\) (why??). So, clearly, \(Var(V_n) \downarrow 0\) as \( n \uparrow \infty\), fulfilling both required conditions for being a consistent estimator. So, \(V_n= \frac{1}{n}\sum_{i=1}^n T(X_i)\) (i.e. taking \(a=0\), as the unbiasedness requirement forces) is a consistent estimator for \(\theta \).
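To see the two conditions in action, here is a small Python sketch (my own example, not from the problem): it takes \(X_i \sim\) Bernoulli(\(\theta\)) with \(T(X)=X\) unbiased for \(\theta\), and watches \(Var(V_n)\) shrink like \(\frac{Var(T(X_1))}{n}\).

```python
import numpy as np

rng = np.random.default_rng(4)
theta, reps = 0.6, 100_000
for n in (5, 50, 500):
    x = rng.binomial(1, theta, size=(reps, n))   # Bernoulli(theta) samples
    v_n = x.mean(axis=1)                         # V_n = (1/n) * sum T(X_i)
    # empirical mean, empirical variance, and the theoretical Var(T(X_1))/n
    print(n, v_n.mean(), v_n.var(), theta * (1 - theta) / n)
```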

(b) For this part one may also use the Rao-Blackwell theorem, but I always prefer using as few formulas and theorems as possible, and in this case we can settle the problem using the previous part. Since \(S_n\) is given to be the MVUE of \(\theta \) and we found that \(V_n\) is an unbiased estimator of \(\theta \), by the nature of the MVUE,

\(Var(S_n) \le Var(V_n) \), so as \(n\) gets bigger, \( \lim_{ n \to \infty} Var(S_n) \le \lim_{n \to \infty} Var(V_n)=0 \Rightarrow \lim_{n \to \infty}Var(S_n) \le 0\).

again, \(Var(S_n) \ge 0\), so, \(\lim_{n \to \infty }Var(S_n)= 0\). Hence, we conclude.


Food For Thought

Let's extend this problem a little bit just to increase the fun!!

Let \(X_1,….,X_n\) be independent but not identically distributed, with \(T(X_1),T(X_2),…..,T(X_n)\) still unbiased for \(\theta\), \(Var(T(X_i))= {\sigma_i}^2 \), and

\( Cov(T(X_i),T(X_j))=0\) if \( i \neq j \).

Can you show that among all estimators of the form \( \sum a_iT(X_i)\), where the \(a_i\)'s are constants and \(E_{\theta}(\sum a_i T(X_i))=\theta\), the estimator

\(T^*= \frac{\sum \frac{T(X_i)}{{\sigma_i}^2}}{\sum\frac{1}{{\sigma_i}^2}} \) has minimum variance?

Can you find the variance ? Think it over !!
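Here is a tiny numerical check (my own sketch, with arbitrary \({\sigma_i}^2\) values) that the inverse-variance weighted estimator has variance \(\frac{1}{\sum 1/{\sigma_i}^2}\), smaller than, say, the plain average.

```python
import numpy as np

sigma2 = np.array([1.0, 4.0, 9.0, 16.0])       # arbitrary example variances
w_opt = (1 / sigma2) / np.sum(1 / sigma2)      # weights of T* (they sum to 1, so unbiased)
w_plain = np.full(4, 1 / 4)                    # equal weights, also unbiased
var_opt = np.sum(w_opt**2 * sigma2)            # should equal 1 / sum(1/sigma_i^2)
var_plain = np.sum(w_plain**2 * sigma2)
print(var_opt, 1 / np.sum(1 / sigma2), var_plain)   # var_opt is the smallest
```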



ISI MStat PSB 2008 Problem 8 | Bivariate Normal Distribution

This is a very beautiful sample problem from ISI MStat PSB 2008 Problem 8. It's a very simple problem, based on the bivariate normal distribution, which again teaches us that observing the right thing makes a seemingly laborious problem beautiful. Fun to think about, go for it!!

Problem– ISI MStat PSB 2008 Problem 8


Let \( \vec{Y} = (Y_1,Y_2)' \) have the bivariate normal distribution \( N_2( \vec{0}, \Sigma ) \),

where \(\Sigma= \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_2\sigma_1 & \sigma_2^2 \end{pmatrix} \).

Obtain the mean and variance of \( U= \vec{Y}' {\Sigma}^{-1}\vec{Y} - \frac{Y_1^2}{\sigma_1^2} \).

Prerequisites


Bivariate Normal

Conditional Distribution of the Normal

Chi-Squared Distribution

Solution :

This is a very simple and cute problem; all the labour reduces once you see what you need to see!

Remember the pdf of \(N_2( \vec{0}, \Sigma)\)?

Isn't \( \vec{Y}'\Sigma^{-1}\vec{Y}\) (up to the factor \(-\frac{1}{2}\)) the exponent of \(e\) in the pdf of the bivariate normal?

So, we can say \(\vec{Y}'\Sigma^{-1}\vec{Y} \sim {\chi_2}^2 \). Can we?? Verify it!!

Also, clearly \( \frac{Y_1^2}{\sigma_1^2} \sim {\chi_1}^2 \), since \(Y_1 \sim N(0,\sigma_1^2)\) follows a univariate normal.

So, the expectation is easy to find by accumulating the above deductions; I'm leaving it as an exercise.

Calculating the variance may seem a laborious job at first, but now let's imagine the pdf of the conditional distribution of \( Y_2 |Y_1=y_1 \): what is the exponent of \(e\) in this pdf (again up to the factor \(-\frac{1}{2}\))? It is \( U = \vec{Y}' {\Sigma}^{-1}\vec{Y} - \frac{Y_1^2}{\sigma_1^2} \), right!!

And so, \( U \sim \chi_1^2 \). Now for the last piece of subtle deduction: \(U\) and \( \frac{Y_1^2}{\sigma_1^2} \) are independently distributed. Can you argue why?? Go ahead. So, \( U+ \frac{Y_1^2}{\sigma_1^2} \sim \chi_2^2 \).

So, \( Var( U + \frac{Y_1^2}{\sigma_1^2})= Var( U) + Var( \frac{Y_1^2}{\sigma_1^2}) \)

\( \Rightarrow Var(U)= 4-2=2 \), [since the variance of an R.V. following \(\chi_n^2\) is \(2n\)].
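If you want to verify the whole chain of deductions numerically, the following simulation sketch (my own, with arbitrary \(\sigma_1,\sigma_2,\rho\)) draws from \(N_2(\vec{0},\Sigma)\), forms \(U\), and checks that its mean and variance are about 1 and 2.

```python
import numpy as np

rng = np.random.default_rng(5)
s1, s2, rho = 1.5, 0.8, 0.6
Sigma = np.array([[s1**2, rho * s1 * s2],
                  [rho * s1 * s2, s2**2]])
Sigma_inv = np.linalg.inv(Sigma)
y = rng.multivariate_normal([0.0, 0.0], Sigma, size=500_000)
quad = np.einsum('ij,jk,ik->i', y, Sigma_inv, y)   # Y' Sigma^{-1} Y for each draw
u = quad - y[:, 0]**2 / s1**2                      # U as defined in the problem
print(u.mean(), u.var())                           # approximately 1 and 2
```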

Hence the solution concludes.


Food For Thought

Before leaving, let's broaden our minds and deal with the multivariate normal!

Let \(\vec{X}\) be a 1×4 random vector such that \( \vec{X} \sim N_4(\vec{\mu}, \Sigma ) \), where \(\Sigma\) is a positive definite matrix. Can you show that

\( P( f_{\vec{X}}(\vec{X}) \ge c) = \begin{cases} 0 & c \ge \frac{1}{4\pi^2\sqrt{|\Sigma|}} \\ 1-(\frac{k+2}{2})e^{-\frac{k}{2}} & c < \frac{1}{4\pi^2\sqrt{|\Sigma|}} \end{cases}\)

where \( k=-2\ln(4\pi^2c \sqrt{|\Sigma|}) \).
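A rough Monte Carlo check of this formula is possible before you prove it; the sketch below (my own, assuming numpy and scipy are available, with an arbitrary positive definite \(\Sigma\)) compares the simulated probability with the closed form.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
mu = np.zeros(4)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + 4 * np.eye(4)                 # an arbitrary positive definite Sigma
dens = multivariate_normal(mean=mu, cov=Sigma)
max_density = 1 / (4 * np.pi**2 * np.sqrt(np.linalg.det(Sigma)))
c = 0.5 * max_density                           # some c below the maximum density
k = -2 * np.log(4 * np.pi**2 * c * np.sqrt(np.linalg.det(Sigma)))
x = dens.rvs(size=400_000, random_state=rng)
mc_prob = np.mean(dens.pdf(x) >= c)             # Monte Carlo estimate of P(f(X) >= c)
formula = 1 - ((k + 2) / 2) * np.exp(-k / 2)    # the claimed closed form
print(mc_prob, formula)                         # the two should be close
```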

Keep your thoughts alive!!



ISI MStat PSB 2004 Problem 6 | Minimum Variance Unbiased Estimators

This is a very beautiful sample problem from ISI MStat PSB 2004 Problem 6. It's a very simple problem, and its simplicity is its beauty. Fun to think about, go for it!!

Problem– ISI MStat PSB 2004 Problem 6


Let \(Y_1,Y_2,Y_3\), and \(Y_4\) be four uncorrelated random variables with

\(E(Y_i) =i\theta \), \( Var(Y_i)= i^2 {\sigma}^2 \), \(i=1,2,3,4\),

where \(\theta\) and \(\sigma\) (>0) are unknown parameters. Find the values of \(c_1,c_2,c_3,\) and \(c_4\) for which \(\sum_{i=1}^4{c_i Y_i}\) is unbiased for \( \theta \) and has least variance.

Prerequisites


Unbiased estimators

Minimum-Variance estimators

Cauchy-Schwarz inequality

Solution :

This is a very simple and cute problem; just do as it says…

For \(\sum_{i=1}^4{c_i Y_i} \) to be an unbiased estimator of \(\theta\), it must satisfy

\(E(\sum_{i=1}^4{c_i Y_i} )= \theta \Rightarrow \sum_{i=1}^4{c_i E(Y_i)}= \theta \Rightarrow \sum_{i=1}^4{c_i i \theta} = \theta \)

so, \( \sum_{i=1}^4 {ic_i}=1 . \) ………………….(1)

So, we have to find \(c_1,c_2,c_3,\) and \(c_4\) such that (1) is satisfied. But hold on, there is another condition too.

Again, since the given estimator also has to have minimum variance, let's calculate the variance of \(\sum_{i=1}^4{c_i Y_i}\),

\( Var(\sum_{i=1}^4{c_i Y_i})= \sum_{i=1}^4{c_i}^2Var( Y_i)=\sum_{i=1}^4{i^2 {c_i}^2 {\sigma}^2 }.\)………………………………………..(2)

So, for minimum variance, \(\sum_{i=1}^4{i^2{c_i}^2 }\) must be minimum in (2).

So, we must find \(c_1,c_2,c_3,\) and \(c_4\), such that (1), is satisfied and \(\sum_{i=1}^4{i^2{c_i}^2 }\) in (2) is minimum.

So, we minimize \(\sum_{i=1}^4{i^2{c_i}^2 }\) subject to \( \sum_{i=1}^4 {ic_i}=1 \).

What do you think, what should be our technique of minimizing \(\sum_{i=1}^4{i^2{c_i}^2 }\) ???

For me, the beauty of the problem is hidden in this part of minimizing the variance. Can't we think of the Cauchy-Schwarz inequality to find the minimum of \(\sum_{i=1}^4{i^2{c_i}^2 }\)??

So, using CS- inequality, we have,

\( (\sum_{i=1}^4{ic_i})^2 \le 4 \sum_{i=1}^4{i^2{c_i}^2} \Rightarrow \sum_{i=1}^4 {i^2{c_i}^2} \ge \frac{1}{4}. \) ………..(3) [since \(\sum_{i=1}^4 {ic_i}=1\)].

Now, the minimum of \(\sum_{i=1}^4{i^2{c_i}^2 }\) is attained when equality in (3) holds, i.e. \(\sum_{i=1}^4{i^2{c_i}^2 }=\frac{1}{4}\),

and we know the equality condition of the CS inequality is \( \frac{1c_1}{1}=\frac{2c_2}{1}=\frac{3c_3}{1}=\frac{4c_4}{1}=k \) (say),

then \(c_i= \frac{k}{i}\) for \(i=1,2,3,4\), where \(k\) is some constant.

Again, since \( \sum_{i=1}^4{ic_i} =1 \Rightarrow 4k=1 \Rightarrow k= \frac{1}{4} \), we get \(c_i=\frac{1}{4i}\), \(i=1,2,3,4\). Hence the solution concludes.
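As a numerical cross-check (my own sketch), one can confirm that \(c_i=\frac{1}{4i}\) attains the bound \(\frac{1}{4}\) and that random feasible choices of \(c_i\) never do better.

```python
import numpy as np

rng = np.random.default_rng(7)
i = np.arange(1, 5)
c_star = 1 / (4 * i)                          # the solution found above
best = np.sum(i**2 * c_star**2)               # its objective value, = 1/4
for _ in range(100_000):
    c = rng.normal(size=4)
    c = c / np.sum(i * c)                     # rescale so that sum(i * c_i) = 1
    best = min(best, np.sum(i**2 * c**2))
print(np.sum(i**2 * c_star**2), best)         # random search never beats 1/4
```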


Food For Thought

Let's deal with some more inequalities and behave Normal!

Using Chebyshev's inequality we can find a trivial upper bound for \( P(|Z| \ge t)\), where \( Z \sim N(0,1)\) and \(t>0\) (really!! what's the bound?). But what about some non-trivial bounds, sharper ones perhaps!! Can you show the following:

\( \sqrt{\frac{2}{\pi}}\frac{t}{1+t^2}e^{-\frac{t^2}{2}} \le P(|Z|\ge t) \le \sqrt{\frac{2}{\pi}}\frac{e^{-\frac{t^2}{2}}}{t} \) for all t>0.

Also, verify that this upper bound is sharper than the trivial upper bound that one can obtain.
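If you want to see the bounds in action before proving them, here is a small numerical sketch (my own, assuming scipy is available) comparing both bounds with the exact tail probability \(2(1-\Phi(t))\).

```python
import numpy as np
from scipy.stats import norm

t = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
exact = 2 * norm.sf(t)                                       # P(|Z| >= t)
lower = np.sqrt(2 / np.pi) * t / (1 + t**2) * np.exp(-t**2 / 2)
upper = np.sqrt(2 / np.pi) * np.exp(-t**2 / 2) / t
print(np.all(lower <= exact), np.all(exact <= upper))        # True, True
print(np.round(np.vstack([lower, exact, upper]), 6))         # bounds tighten as t grows
```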

