
ISI MStat PSB 2012 Problem 10 | MVUE Revisited

This is a simple sample problem from ISI MStat PSB 2012 Problem 10. It is a basic but important and frequently appearing problem for statistics students, using one of the most beautiful theorems in point estimation. Try it!

Problem– ISI MStat PSB 2012 Problem 10


Let \(X_1,X_2,\ldots,X_{10}\) be i.i.d. Poisson random variables with unknown parameter \(\lambda >0\). Find the minimum variance unbiased estimator of \(\exp\{-2\lambda\}\).

Prerequisites


Poisson Distribution

Minimum Variance Unbiased Estimators

Lehmann-Scheffé's Theorem

Completeness and Sufficiency

Solution :

Well, this is a very straightforward problem, where we just need to verify certain conditions of sufficiency and completeness.

If one is aware of the nature of the Poisson distribution, one knows that for a given sample \(X_1,X_2,\ldots,X_{10}\), a sufficient statistic for the unknown parameter \(\lambda>0\) is \(\sum_{i=1}^{10} X_i\); by extension, \(\sum_{i}X_i\) is also complete for \(\lambda\) (How??).

So, now first let us construct an unbiased estimator of \(e^{-2\lambda}\). Here, we need to observe patterns as usual. Let us define an indicator random variable,

\(I_X(x) = \begin{cases} 1 & \text{if } X_1=0 \text{ and } X_2=0 \\ 0 & \text{otherwise} \end{cases}\),

So, \(E(I_X(x))=P(X_1=0, X_2=0)=e^{-2\lambda}\), hence \(I_X(x)\) is an unbiased estimator of \(e^{-2\lambda}\). But is it of minimum variance??

Well, Lehmann-Scheffé answers that. Since we know that \(\sum X_i\) is complete and sufficient for \(\lambda\), by Lehmann-Scheffé's theorem,

\(E(I_X(x)\mid\sum X_i=t)\), viewed as a function of the sufficient statistic, is the minimum variance unbiased estimator of \(e^{-2\lambda}\). So, we need to find the following,

\(E(I_X(x)\mid\sum_{i=1}^{10}X_i=t)= \frac{P(X_1=0,X_2=0, \sum_{i=3}^{10}X_i=t)}{P(\sum_{i=1}^{10}X_i=t)}=\frac{e^{-2\lambda}\,e^{-8\lambda}\frac{(8\lambda)^t}{t!}}{e^{-10\lambda}\frac{(10\lambda)^t}{t!}}=\left(\frac{8}{10}\right)^t\).

So, the minimum variance unbiased estimator of \(\exp\{-2\lambda\}\) is \(\left(\frac{8}{10}\right)^{\sum_{i=1}^{10}X_i}\).
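If you like to double-check such results numerically, here is a minimal Monte Carlo sketch (not part of the official solution). The value of \(\lambda\), the random seed, and the number of replications are arbitrary illustrative choices; both estimators should average close to \(e^{-2\lambda}\), with the Lehmann-Scheffé estimator showing the smaller variance.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 0.7, 10, 200_000              # illustrative values, chosen arbitrarily

X = rng.poisson(lam, size=(reps, n))         # reps samples of size 10 from Poisson(lam)
S = X.sum(axis=1)                            # the complete sufficient statistic

crude = ((X[:, 0] == 0) & (X[:, 1] == 0)).astype(float)   # the indicator I_X(x): unbiased but crude
mvue = (8 / 10) ** S                                       # the Lehmann-Scheffé estimator

print("target exp(-2*lam)   :", np.exp(-2 * lam))
print("mean / var of I_X(x) :", crude.mean(), crude.var())
print("mean / var of MVUE   :", mvue.mean(), mvue.var())
```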

Now, can you generalize this for a sample of size \(n\)? Again, what if I defined \(I_X(x)\) as,

\(I_X(x) = \begin{cases} 1 & \text{if } X_i=0 \text{ and } X_j=0 \\ 0 & \text{otherwise} \end{cases}\), for some \(i \neq j\),

would it affect the end result? What do you think?


Food For Thought

Let's not end our concern for the Poisson here; think further: for the given sample, let the sample mean be \(\bar{X}\) and the sample variance be \(S^2\). Can you show that \(E(S^2\mid\bar{X})=\bar{X}\)? And further, can you extend your deductions to show that \(Var(S^2) > Var(\bar{X})\)?

Finally, can you generalize the above result? Give it some thought to deepen your insight into the MVUE.
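Before attempting the proof, one can build intuition with a quick simulation. This is only a rough sketch: \(\lambda=2\) and \(n=10\) are chosen arbitrarily, and the conditional expectation is checked crudely by averaging \(S^2\) over samples with a few fixed values of \(\bar{X}\).

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 2.0, 10, 200_000              # illustrative choices

X = rng.poisson(lam, size=(reps, n))
xbar = X.mean(axis=1)                        # sample mean
s2 = X.var(axis=1, ddof=1)                   # sample variance S^2

print("Var(xbar):", xbar.var(), "   Var(S^2):", s2.var())   # expect Var(S^2) > Var(xbar)

# crude check of E(S^2 | xbar) ≈ xbar: average S^2 over samples with a given xbar
for t in [1.5, 2.0, 2.5]:
    mask = np.isclose(xbar, t)
    if mask.any():
        print("given xbar =", t, ": mean of S^2 =", s2[mask].mean())
```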





ISI MStat PSB 2006 Problem 9 | Consistency and MVUE

This is a simple sample problem from ISI MStat PSB 2006 Problem 9. It is based on point estimation: finding a consistent estimator and a minimum variance unbiased estimator, and recognizing the subtle relation between the two. Go for it!

Problem– ISI MStat PSB 2006 Problem 9


Let \(X_1,X_2,\ldots\) be i.i.d. random variables with density \(f_{\theta}(x), \ x \in \mathbb{R}\), where \(\theta \in (0,1)\) is the unknown parameter. Suppose that there exists an unbiased estimator \(T\) of \(\theta\) based on a sample of size 1, i.e. \(E_{\theta}(T(X_1))=\theta\). Assume that \(Var(T(X_1))< \infty\).

(a) Find an estimator \(V_n\) for \(\theta\) based on \(X_1,X_2,\ldots,X_n\) such that \(V_n\) is consistent for \(\theta\).

(b) Let \(S_n\) be the MVUE (minimum variance unbiased estimator) of \(\theta\) based on \(X_1,X_2,\ldots,X_n\). Show that \(\lim_{n\to\infty}Var(S_n)=0\).

Prerequisites


Consistent estimators

Minimum Variance Unbiased Estimators

Rao-Blackwell Theorem

Solution :

Often, problems on estimation seem a bit complicated and we feel directionless, but in most cases it is beneficial to just go with the flow.

Here, it is given that \(T\) is an unbiased estimator of \(\theta\) based on one observation, and we are to find a consistent estimator for \(\theta\) based on a sample of size \(n\). Now, first, we should ask: what does an estimator need in order to be consistent?

  • The required estimator \(V_n\) has to be unbiased for \(\theta\) in the limit as \(n \uparrow \infty\), i.e. \(\lim_{n \uparrow \infty} E_{\theta}(V_n)=\theta\).
  • The variance of the would-be consistent estimator must converge to 0 as \(n\) grows large, i.e. \(\lim_{n \uparrow \infty}Var_{\theta}(V_n)=0\).

First things first, let us fulfill the unbiasedness criterion for \(V_n\). From each observation of the sample \(X_1,X_2,\ldots,X_n\) of size \(n\), we get a set of \(n\) unbiased estimators of \(\theta\): \(T(X_1), T(X_2), \ldots, T(X_n)\). So, can we write \(V_n=\frac{1}{n}\left(\sum_{i=1}^n T(X_i)+a\right)\), where \(a\) is a constant (kept for generality)? Can you verify that \(V_n\) satisfies the first requirement of being a consistent estimator?

Now, proceed towards fulfilling the final requirement, namely that the variance of \(V_n\) converges to 0 as \(n \uparrow \infty\). Since we have defined \(V_n\) in terms of \(T\), it is given that \(Var(T(X_i))\) exists for each \(i \in \mathbb{N}\), and \(X_1,X_2,\ldots,X_n\) are i.i.d. (which is a very important realization here), this leads us to

\(Var(V_n)= \frac{Var(T(X_1))}{n}\) (why??). So, clearly, \(Var(V_n) \downarrow 0\) as \(n \uparrow \infty\), fulfilling both conditions required for consistency. So, \(V_n= \frac{1}{n}\left(\sum_{i=1}^n T(X_i)+a\right)\) is a consistent estimator for \(\theta\).
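Here is a minimal simulation sketch of this shrinking variance. The problem leaves \(f_{\theta}\) completely abstract, so the concrete model below is purely an illustrative assumption: take \(X_i\) exponential with mean \(\theta\), so that \(T(X)=X\) is unbiased for \(\theta\) and \(Var(T(X_1))=\theta^2\), and watch \(Var(V_n)\) track \(Var(T(X_1))/n\).

```python
import numpy as np

rng = np.random.default_rng(2)
theta, reps = 0.4, 50_000          # theta in (0,1); the model below is only an illustrative choice

# Assume, purely for illustration, X_i ~ Exponential with mean theta, so T(X) = X is
# unbiased for theta and Var(T(X_1)) = theta^2. The problem itself leaves f_theta abstract.
for n in [5, 50, 500]:
    X = rng.exponential(theta, size=(reps, n))
    Vn = X.mean(axis=1)                        # V_n with T(X) = X and a = 0
    print(f"n={n:4d}  mean(V_n)={Vn.mean():.4f}  var(V_n)={Vn.var():.6f}"
          f"  theory Var(T)/n={theta**2 / n:.6f}")
```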

(b) For this part one may also use the Rao-Blackwell theorem, but I always prefer using as few formulas and theorems as possible, and in this case we can reuse the previous part. Take \(a=0\), so that \(V_n=\frac{1}{n}\sum_{i=1}^n T(X_i)\) is exactly unbiased for \(\theta\) (and, as we saw, consistent). Since \(S_n\) is the MVUE of \(\theta\), by the very nature of the MVUE,

\(Var(S_n) \le Var(V_n)\), so as \(n\) gets bigger, \(\lim_{n \to \infty} Var(S_n) \le \lim_{n \to \infty} Var(V_n) \Rightarrow \lim_{n \to \infty}Var(S_n) \le 0\),

again, \(Var(S_n) \ge 0\), so, \(\lim_{n \to \infty }Var(S_n)= 0\). Hence, we conclude.


Food For Thought

Let's extend this problem a little bit, just to increase the fun!

Let \(X_1,\ldots,X_n\) be independent but not identically distributed, yet \(T(X_1),T(X_2),\ldots,T(X_n)\) remain unbiased for \(\theta\), with \(Var(T(X_i))= {\sigma_i}^2\), and

\( Cov(T(X_i),T(X_j))=0\) if \( i \neq j \).

Can you show that among all estimators of the form \(\sum a_iT(X_i)\), where the \(a_i\)'s are constants and \(E_{\theta}(\sum a_i T(X_i))=\theta\), the estimator

\(T^*= \frac{\sum \frac{T(X_i)}{{\sigma_i}^2}}{\sum\frac{1}{{\sigma_i}^2}}\) has minimum variance?

Can you find its variance? Think it over!
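If you would like to experiment numerically first, here is a rough sketch with made-up \(\sigma_i\) values (everything in it is an illustrative assumption, not part of the problem). The line for \(T^*\) should match the variance \(\left(\sum_i \sigma_i^{-2}\right)^{-1}\) that the algebra is expected to give, and beat the plain average.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 1.0
sigma = np.array([0.5, 1.0, 2.0, 4.0])      # made-up sigma_i values, purely illustrative
reps = 200_000

# simulate T(X_i) directly as theta plus independent noise with sd sigma_i (unbiased, uncorrelated)
T = theta + rng.normal(0.0, sigma, size=(reps, sigma.size))

w = (1 / sigma**2) / (1 / sigma**2).sum()    # inverse-variance weights; they sum to 1, so the combination stays unbiased
T_star = T @ w                               # the estimator T* from the problem
T_avg = T.mean(axis=1)                       # equal weights: another unbiased competitor

print("Var(T*)     :", T_star.var(), "  conjectured 1/sum(sigma_i^-2) =", 1 / (1 / sigma**2).sum())
print("Var(average):", T_avg.var(),  "  theory sum(sigma_i^2)/16      =", (sigma**2).sum() / 16)
```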





ISI MStat PSB 2004 Problem 6 | Minimum Variance Unbiased Estimators

This is a very beautiful sample problem from ISI MStat PSB 2004 Problem 6. It is a very simple problem, and its simplicity is its beauty. Fun to think about; go for it!

Problem– ISI MStat PSB 2004 Problem 6


Let \(Y_1,Y_2,Y_3\), and \(Y_4\) be four uncorrelated random variables with

\(E(Y_i) = i\theta, \quad Var(Y_i)= i^2 {\sigma}^2, \quad i=1,2,3,4,\)

where \(\theta\) and \(\sigma\) (>0) are unknown parameters. Find the values of \(c_1,c_2,c_3,\) and \(c_4\) for which \(\sum_{i=1}^4{c_i Y_i}\) is unbiased for \( \theta \) and has least variance.

Prerequisites


Unbiased estimators

Minimum-Variance estimators

Cauchy-Schwarz inequality

Solution :

This is a very simple and cute problem; just do as it says…

For \(\sum_{i=1}^4{c_i Y_i}\) to be an unbiased estimator of \(\theta\), it must satisfy

\(E(\sum_{i=1}^4{c_i Y_i} )= \theta \Rightarrow \sum_{i=1}^4{c_i E(Y_i)}= \theta \Rightarrow \sum_{i=1}^4{c_i i \theta} = \theta \)

so, \(\sum_{i=1}^4 {ic_i}=1\). ……(1)

So, we have to find \(c_1,c_2,c_3,\) and \(c_4\) such that (1) is satisfied. But hold on, there is another condition too.

Again, since the given estimator also has to have minimum variance, let's calculate the variance of \(\sum_{i=1}^4{c_i Y_i}\),

\(Var(\sum_{i=1}^4{c_i Y_i})= \sum_{i=1}^4{c_i}^2Var(Y_i)=\sum_{i=1}^4{i^2 {c_i}^2 {\sigma}^2}\), since the \(Y_i\) are uncorrelated. ……(2)

So, for minimum variance, \(\sum_{i=1}^4{i^2{c_i}^2 }\) must be minimum in (2).

So, we must find \(c_1,c_2,c_3,\) and \(c_4\), such that (1), is satisfied and \(\sum_{i=1}^4{i^2{c_i}^2 }\) in (2) is minimum.

so, we are minimizing \(\sum_{i=1}^4{i^2{c_i}^2}\) subject to the constraint \(\sum_{i=1}^4 {ic_i}=1\).

What do you think: what should our technique be for minimizing \(\sum_{i=1}^4{i^2{c_i}^2}\)?

For me, the beauty of the problem is hidden in this part, the minimization of the variance. Can't we think of the Cauchy-Schwarz inequality to find the minimum of \(\sum_{i=1}^4{i^2{c_i}^2}\)?

So, using the CS inequality, we have,

\(\left(\sum_{i=1}^4{ic_i}\cdot 1\right)^2 \le \left(\sum_{i=1}^4{i^2{c_i}^2}\right)\left(\sum_{i=1}^4 1^2\right)=4\sum_{i=1}^4{i^2{c_i}^2} \Rightarrow \sum_{i=1}^4 {i^2{c_i}^2} \ge \frac{1}{4}\). ……(3) [since \(\sum_{i=1}^4 {ic_i}=1\)].

So the smallest possible value of \(\sum_{i=1}^4{i^2{c_i}^2}\) is \(\frac{1}{4}\), and it is attained exactly when equality holds in (3),

and we know that the equality condition of the CS inequality is \(\frac{1c_1}{1}=\frac{2c_2}{1}=\frac{3c_3}{1}=\frac{4c_4}{1}=k\) (say),

then \(c_i= \frac{k}{i}\) for \(i=1,2,3,4\), where \(k\) is some constant.

Again, since \(\sum_{i=1}^4{ic_i} =1 \Rightarrow 4k=1 \Rightarrow k= \frac{1}{4}\), we get \(c_i=\frac{1}{4i}\) for \(i=1,2,3,4\). Hence the solution concludes.
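A quick numerical sanity check of the answer (nothing here is needed for the solution; the alternative weightings are arbitrary examples that also satisfy the unbiasedness constraint \(\sum i c_i = 1\)):

```python
import numpy as np

i = np.arange(1, 5)

def check(c):
    # returns (sum i*c_i, sum i^2*c_i^2): the unbiasedness constraint and the variance factor (times sigma^2)
    return float((i * c).sum()), float((i**2 * c**2).sum())

c_opt = 1 / (4 * i)                               # the Cauchy-Schwarz answer, c_i = 1/(4i)
print("optimal c_i = 1/(4i) :", check(c_opt))     # -> (1.0, 0.25)
print("use Y_1 only         :", check(np.array([1.0, 0, 0, 0])))   # -> (1.0, 1.0)
print("equal weights 1/10   :", check(np.full(4, 0.1)))            # -> (1.0, 0.30)
```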


Food For Thought

Let's deal with some more inequalities and behave Normal!

Using Chebyshev's inequality, we can find a trivial upper bound for \(P(|Z| \ge t)\), where \(Z \sim N(0,1)\) and \(t>0\) (really!! what's the bound?). But what about some non-trivial bounds, sharper ones perhaps? Can you show the following:

\( \sqrt{\frac{2}{\pi}}\frac{t}{1+t^2}e^{-\frac{t^2}{2}} \le P(|Z|\ge t) \le \sqrt{\frac{2}{\pi}}\frac{e^{-\frac{t^2}{2}}}{t} \) for all t>0.

Also, verify that this upper bound is sharper than the trivial upper bound one can obtain from Chebyshev's inequality.
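Here is a small numerical check of both bounds (a sketch, assuming SciPy is available; the values of \(t\) are arbitrary):

```python
import numpy as np
from scipy.stats import norm

for t in [1.0, 2.0, 3.0, 4.0]:                    # arbitrary values of t
    exact = 2 * norm.sf(t)                        # P(|Z| >= t) for Z ~ N(0,1)
    lower = np.sqrt(2 / np.pi) * t / (1 + t**2) * np.exp(-t**2 / 2)
    upper = np.sqrt(2 / np.pi) * np.exp(-t**2 / 2) / t
    cheby = 1 / t**2                              # trivial Chebyshev bound
    print(f"t={t}: {lower:.6f} <= {exact:.6f} <= {upper:.6f}   (Chebyshev: {cheby:.6f})")
```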

