# ISI MStat PSB 2006 Problem 8 | Bernoullian Beauty

This is a very beautiful sample problem from ISI MStat PSB 2006 Problem 8. It is based on the basic idea of Maximum Likelihood Estimators, but needs a bit of thinking. Give it a thought !

## Problem- ISI MStat PSB 2006 Problem 8

Let $$(X_1,Y_1),......,(X_n,Y_n)$$ be a random sample from the discrete distributions with joint probability

$$f_{X,Y}(x,y) = \begin{cases} \frac{\theta}{4} & (x,y)=(0,0) \ or \ (1,1) \\ \frac{2-\theta}{4} & (x,y)=(0,1) \ or \ (1,0) \end{cases}$$

with $$0 \le \theta \le 2$$. Find the maximum likelihood estimator of $$\theta$$.

### Prerequisites

Maximum Likelihood Estimators

Indicator Random Variables

Bernoulli Trials

## Solution :

This is a very beautiful problem; not very difficult, but its beauty is hidden in its simplicity. Let's explore !!

Observe that the given pmf, on its own, doesn't take us anywhere directly, so we should think out of the box. But before going out of the box, let's collect what's in the box !

So, from the given pmf we get, $$P(\text{getting a pair of the form } (1,1) \text{ or } (0,0))=2\times \frac{\theta}{4}=\frac{\theta}{2}$$,

Similarly, $$P(\text{getting a pair of the form } (0,1) \text{ or } (1,0))=2\times \frac{2-\theta}{4}=\frac{2-\theta}{2}=1-P(\text{getting a pair of the form } (1,1) \text{ or } (0,0))$$.

So, clearly it is giving us a push towards involving Bernoulli trials, isn't it !!

So, let's treat the pairs with a match, i.e. $$x=y$$, as our success, and the other possibilities as failure; then our success probability is $$\frac{\theta}{2}$$, where $$0\le \theta \le 2$$. So, if $$S$$ is the number of successful pairs in our given sample of size $$n$$, then it is evident that $$S \sim Binomial(n, \frac{\theta}{2})$$.

So, now it is simplified by all means, and we know that the MLE of the success probability in a binomial is the proportion of successes in the sample.

Hence, $$\frac{\hat{\theta_{MLE}}}{2}= \frac{s}{n}$$, where $$s$$ is the number of those pairs in our sample where $$X_i=Y_i$$.

So, $$\hat{\theta_{MLE}}=\frac{2\times(\text{number of pairs in the sample of the form } (0,0) \text{ or } (1,1))}{n}$$.

Hence, we are done !!
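As a quick sanity check, here is a minimal simulation sketch (assuming `numpy` is available; the seed, sample size and the true value $$\theta=1.3$$ are arbitrary illustrative choices) that draws pairs from the given pmf and confirms that $$\hat{\theta}_{MLE}=\frac{2s}{n}$$ recovers $$\theta$$:

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed

def sample_pairs(theta, n):
    """Draw n pairs (X, Y) from the given joint pmf."""
    support = [(0, 0), (1, 1), (0, 1), (1, 0)]
    probs = [theta / 4, theta / 4, (2 - theta) / 4, (2 - theta) / 4]
    idx = rng.choice(4, size=n, p=probs)
    return [support[i] for i in idx]

def theta_mle(pairs):
    """MLE: twice the proportion of matched pairs (x == y)."""
    s = sum(x == y for x, y in pairs)
    return 2 * s / len(pairs)

print(theta_mle(sample_pairs(theta=1.3, n=100000)))  # should be close to 1.3
```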

## Food For Thought

Say, $$X$$ and $$Y$$ are two independent exponential random variables with means $$\mu$$ and $$\lambda$$ respectively. But you observe two other variables, $$Z$$ and $$W$$, such that $$Z=\min(X,Y)$$ and $$W$$ takes the value $$1$$ when $$Z=X$$ and $$0$$ otherwise. Can you find the MLEs of the parameters ?

Give it a try !!

# ISI MStat PSB 2009 Problem 8 | How big is the Mean?

This is a very simple and regular sample problem from ISI MStat PSB 2009 Problem 8. It is based on testing the nature of the mean of the Exponential distribution. Give it a try !

## Problem- ISI MStat PSB 2009 Problem 8

Let $$X_1,.....,X_n$$ be i.i.d. observation from the density,

$$f(x)=\frac{1}{\mu}exp(-\frac{x}{\mu}) , x>0$$

where $$\mu >0$$ is an unknown parameter.

Consider the problem of testing the hypothesis $$H_o : \mu \le \mu_o$$ against $$H_1 : \mu > \mu_o$$.

(a) Show that the test with critical region $$[\bar{X} \ge \mu_o {\chi^2}_{2n,1-\alpha}/2n]$$, where $${\chi^2}_{2n,1-\alpha}$$ is the $$(1-\alpha)$$th quantile of the $${\chi^2}_{2n}$$ distribution, has size $$\alpha$$.

(b) Give an expression for the power in terms of the c.d.f. of the $${\chi^2}_{2n}$$ distribution.

### Prerequisites

Likelihood Ratio Test

Exponential Distribution

Chi-squared Distribution

## Solution :

This problem is quite regular and simple. From the given form of the hypotheses, it is almost clear that using the Neyman-Pearson lemma directly can land you in trouble, since both hypotheses are composite. So, let's go for something more general, that is, Likelihood Ratio Testing.

Hence, the likelihood function of $$\mu$$ for the given sample is,

$$L(\mu | \vec{X})=(\frac{1}{\mu})^n exp(-\frac{\sum_{i=1}^n X_i}{\mu}) , \mu>0$$; also observe that the sample mean $$\bar{X}$$ is the (unrestricted) MLE of $$\mu$$.

So, the Likelihood Ratio statistic is,

$$\lambda(\vec{x})=\frac{\sup_{\mu \le \mu_o}L(\mu |\vec{x})}{\sup_\mu L(\mu |\vec{x})} \\ =\begin{cases} 1 & \mu_o \ge \bar{X} \\ \frac{L(\mu_o|\vec{x})}{L(\bar{X}|\vec{x})} & \mu_o < \bar{X} \end{cases}$$

So, our test function is ,

$$\phi(\vec{x})=\begin{cases} 1 & \lambda(\vec{x})<k \\ 0 & otherwise \end{cases}$$.

We reject $$H_o$$ at size $$\alpha$$ when $$\phi(\vec{x})=1$$, where $$k$$ is chosen so that $$E_{H_o}(\phi) \le \alpha$$.

Hence, $$\lambda(\vec{x}) < k \\ \Rightarrow L(\mu_o|\vec{x})<kL(\bar{X}|\vec{x}) \\ \Rightarrow -n\ln \mu_o-\frac{n\bar{X}}{\mu_o} < \ln k -n \ln \bar{X} -n \\ \Rightarrow n \ln \bar{X}-\frac{n\bar{X}}{\mu_o} < K^*,$$

for some constant $$K^*$$ (which absorbs $$\ln k$$, $$n$$ and $$n\ln \mu_o$$).

Let $$g(\bar{x})=n\ln \bar{x} -\frac{n\bar{x}}{\mu_o}$$, and observe that $$g'(\bar{x})=\frac{n}{\bar{x}}-\frac{n}{\mu_o}<0$$ whenever $$\bar{x}>\mu_o$$,

so $$g$$ is a decreasing function of $$\bar{x}$$ for $$\bar{x} \ge \mu_o$$.

Hence, there exists a $$c$$ such that for all $$\bar{x} \ge c$$, we have $$g(\bar{x}) < K^*$$.

So, the critical region of the test is of form $$\bar{X} \ge c$$, for some $$c$$ such that,

$$P_{H_o}(\bar{X} \ge c)=\alpha$$, where $$\alpha \in (0,1)$$ is the size of the test. (Since $$P_{\mu}(\bar{X} \ge c)$$ is increasing in $$\mu$$, the supremum over $$\mu \le \mu_o$$ is attained at $$\mu=\mu_o$$, so it suffices to work under $$\mu=\mu_o$$.)

Now, our task is to find $$c$$, and for that observe that if $$X \sim Exponential(\theta)$$ (with mean $$\theta$$), then $$\frac{2X}{\theta} \sim {\chi^2}_2$$.

Hence, in this problem, since the $$X_i$$'s follow $$Exponential(\mu)$$, we have $$\frac{2n\bar{X}}{\mu} \sim {\chi^2}_{2n}$$. So,

$$P_{H_o}(\bar{X} \ge c)=\alpha \\ \Rightarrow P_{H_o}(\frac{2n\bar{X}}{\mu_o} \ge \frac{2nc}{\mu_o})=\alpha \\ \Rightarrow P({\chi^2}_{2n} \ge \frac{2nc}{\mu_o})=\alpha,$$

which gives $$c=\frac{\mu_o {\chi^2}_{2n;1-\alpha}}{2n}$$,

Hence, the rejection region is indeed $$[\bar{X} \ge \frac{\mu_o {\chi^2}_{2n;1-\alpha}}{2n}]$$.

Hence Proved !

(b) Now, we know that the power of the test is,

$$\beta(\mu)= E_{\mu}(\phi) = P_{\mu}(\bar{X} \ge \frac{\mu_o {\chi^2}_{2n;1-\alpha}}{2n}) = P_{\mu}(\frac{2n\bar{X}}{\mu} \ge \frac{\mu_o}{\mu}{\chi^2}_{2n;1-\alpha}) = P({\chi^2}_{2n} \ge \frac{\mu_o}{\mu}{\chi^2}_{2n;1-\alpha})$$.

Hence, the power of the test is $$\beta(\mu)=1-F_{{\chi^2}_{2n}}(\frac{\mu_o}{\mu}{\chi^2}_{2n;1-\alpha})$$, where $$F_{{\chi^2}_{2n}}$$ is the c.d.f. of the $${\chi^2}_{2n}$$ distribution.
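Here is a minimal sketch of this power function (assuming `scipy` is available; the helper name `power` and the chosen values of $$n$$, $$\mu_o$$ and $$\alpha$$ are illustrative). At $$\mu=\mu_o$$ it returns exactly $$\alpha$$, confirming the size, and it increases for $$\mu>\mu_o$$:

```python
from scipy import stats

def power(mu, mu0, n, alpha=0.05):
    """Power of the size-alpha test at true mean mu."""
    crit = stats.chi2.ppf(1 - alpha, df=2 * n)  # chi^2_{2n; 1-alpha}
    # reject iff Xbar >= mu0 * crit / (2n); under mu, 2n*Xbar/mu ~ chi^2_{2n}
    return stats.chi2.sf(mu0 / mu * crit, df=2 * n)

print(power(mu=1.0, mu0=1.0, n=10))  # 0.05: power equals alpha at the boundary
print(power(mu=2.0, mu0=1.0, n=10))  # larger for mu > mu0
```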

## Food For Thought

Can you use any other testing procedure to conduct this test ?

# ISI MStat PSB 2009 Problem 4 | Polarized to Normal

This is a very beautiful sample problem from ISI MStat PSB 2009 Problem 4. It is based on the idea of Polar Transformations, but needs a good deal of observation to realize that. Give it a try !

## Problem- ISI MStat PSB 2009 Problem 4

Let $$R$$ and $$\theta$$ be independent and non-negative random variables such that $$R^2 \sim {\chi_2}^2$$ and $$\theta \sim U(0,2\pi)$$. Fix $$\theta_o \in (0,2\pi)$$. Find the distribution of $$R\sin(\theta+\theta_o)$$.

### Prerequisites

Convolution

Polar Transformation

Normal Distribution

## Solution :

This problem may get nasty if one tries to find the required distribution by the so-called CDF method. It's better to observe a bit before moving forward !! Recall how we derive the probability distribution of the sample variance of a sample from a normal population ??

Yes, you are thinking right, we need to use Polar Transformation !!

But, before transforming, let's make some modifications to reduce future complications.

Given, $$\theta \sim U(0,2\pi)$$ and $$\theta_o$$ is some fixed number in $$(0,2\pi)$$, so, let $$Z=\theta+\theta_o \sim U(\theta_o,2\pi +\theta_o)$$.

Hence, we need to find the distribution of $$R\sin Z$$. Now, from the given and modified information, the joint pdf of $$R$$ and $$Z$$ is,

$$f_{R,Z}(r,z)=\frac{r}{2\pi}exp(-\frac{r^2}{2}) \ \ r>0, \ \theta_o \le z \le 2\pi +\theta_o$$

(since $$R^2 \sim {\chi_2}^2$$ gives $$f_R(r)=r\,exp(-\frac{r^2}{2})$$ for $$r>0$$, and $$R$$ and $$Z$$ are independent).

Now, let the transformation be $$(R,Z) \to (X,Y)$$,

$$X=R\cos Z \\ Y=R\sin Z$$, Also, here $$X,Y \in \mathbb{R}$$

Hence, $$R^2=X^2+Y^2 \\ Z= \tan^{-1} (\frac{Y}{X})$$

Hence, verify the Jacobian of the transformation $$J(\frac{r,z}{x,y})=\frac{1}{r}$$.

Hence, the joint pdf of $$X$$ and $$Y$$ is,

$$f_{X,Y}(x,y)=f_{R,Z}(\sqrt{x^2+y^2}, \tan^{-1}(\frac{y}{x})) J(\frac{r,z}{x,y}) \\ =\frac{1}{2\pi}exp(-\frac{x^2+y^2}{2})$$ , $$x,y \in \mathbb{R}$$.

Yeah, Now it is looking familiar right !!

Since we need the distribution of $$Y=R\sin Z=R\sin(\theta+\theta_o)$$, we integrate $$f_{X,Y}$$ with respect to $$X$$ over the real line, and we end up with the conclusion that,

$$R\sin(\theta+\theta_o) \sim N(0,1)$$. Hence, we are done !!
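A small simulation sketch (assuming `numpy` and `scipy`; the fixed angle $$\theta_o=1$$ and the seed are arbitrary choices) that supports this conclusion empirically:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100000
theta0 = 1.0  # any fixed angle in (0, 2*pi)

r = np.sqrt(rng.chisquare(df=2, size=n))    # R^2 ~ chi^2_2
theta = rng.uniform(0, 2 * np.pi, size=n)   # theta ~ U(0, 2*pi)
y = r * np.sin(theta + theta0)

print(y.mean(), y.std())        # close to 0 and 1
print(stats.kstest(y, "norm"))  # large p-value: consistent with N(0,1)
```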

## Food For Thought

From the above solution, the distribution of $$R\cos(\theta+\theta_o)$$ is also determinable, right !! Can you go further and investigate the distribution of $$\tan(\theta+\theta_o)$$ ?? Here $$R$$ and $$\theta$$ are the same variables as defined in the question.

Give it a try !!

# ISI MStat PSB 2008 Problem 7 | Finding the Distribution of a Random Variable

This is a very beautiful sample problem from ISI MStat PSB 2008 Problem 7 based on finding the distribution of a random variable . Let's give it a try !!

## Problem- ISI MStat PSB 2008 Problem 7

Let $$X$$ and $$Y$$ be exponential random variables with parameters 1 and 2 respectively. Another random variable $$Z$$ is defined as follows.

A coin, with probability $$p$$ of Heads (and probability $$1-p$$ of Tails), is tossed. Define $$Z$$ by $$Z=\begin{cases} X & , \text { if the coin turns Heads } \\ Y & , \text { if the coin turns Tails } \end{cases}$$

Find $$P(1 \leq Z \leq 2)$$.

### Prerequisites

Cumulative Distribution Function

Exponential Distribution

## Solution :

Let $$F_{i}$$ be the CDF of $$i$$, for $$i=X, Y, Z$$. Then we have,

$$F_{Z}(z) = P(Z \le z) = P( Z \le z \mid \text{coin turns Heads})P(\text{coin turns Heads}) + P( Z \le z \mid \text{coin turns Tails}) P(\text{coin turns Tails})$$

$$= P( X \le z)p + P(Y \le z ) (1-p) = F_{X}(z)p+F_{Y}(z) (1-p)$$

Therefore the pdf of $$Z$$ is given by $$f_{Z}(z)= pf_{X}(z)+(1-p)f_{Y}(z)$$, where $$f_{X}$$ and $$f_{Y}$$ are the pdfs of $$X$$ and $$Y$$ respectively.

So , $$P(1 \leq Z \leq 2) = \int_{1}^{2} \{pe^{-z} + (1-p) 2e^{-2z}\} dz = p \frac{e-1}{e^2} +(1-p) \frac{e^2-1}{e^4}$$
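A Monte Carlo sketch of the same probability (assuming `numpy`; the value $$p=0.3$$, the helper name and the sample size are illustrative choices) agrees with the closed form:

```python
import numpy as np

def prob_z_in_12(p, n=10**6, seed=0):
    """Monte Carlo estimate of P(1 <= Z <= 2) for the coin mixture Z."""
    rng = np.random.default_rng(seed)
    heads = rng.random(n) < p
    x = rng.exponential(scale=1.0, size=n)  # X: rate 1, so scale 1
    y = rng.exponential(scale=0.5, size=n)  # Y: rate 2, so scale 1/2
    z = np.where(heads, x, y)
    return np.mean((1 <= z) & (z <= 2))

p = 0.3
exact = p * (np.e - 1) / np.e**2 + (1 - p) * (np.e**2 - 1) / np.e**4
print(prob_z_in_12(p), exact)  # the two should agree to ~3 decimals
```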

## Food For Thought

Find the distribution function of $$K=\frac{X}{Y}$$ and then find $$\lim_{K \to \infty} P(K >1 )$$

# ISI MStat PSB 2008 Problem 2 | Definite integral as the limit of the Riemann sum

This is a very beautiful sample problem from ISI MStat PSB 2008 Problem 2 based on definite integral as the limit of the Riemann sum . Let's give it a try !!

## Problem- ISI MStat PSB 2008 Problem 2

For $$k \geq 1,$$ let $$a_{k}=\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{m=1}^{kn} \exp \left(-\frac{1}{2} \frac{m^{2}}{n^{2}}\right)$$

Find $$\lim_{k \rightarrow \infty} a_{k}$$ .

### Prerequisites

Integration

Gamma function

Definite integral as the limit of the Riemann sum

## Solution :

$$a_{k}=\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{m=1}^{kn} \exp \left(-\frac{1}{2} \frac{m^{2}}{n^{2}}\right) = \int_{0}^{k} e^{\frac{-y^2}{2}} dy$$ , since the sum is precisely a Riemann sum for this integral over $$[0,k]$$ with mesh $$\frac{1}{n}$$; you may see the details in Definite integral as the limit of the Riemann sum .

Therefore , $$\lim_{k \to \infty} a_{k}= \int_{0}^{ \infty} e^{\frac{-y^2}{2}} dy$$ ----(1) . Let $$\frac{y^2}{2}=z \Rightarrow dy= \frac{dz}{\sqrt{2z}}$$

Substituting we get , $$\int_{0}^{ \infty} z^{\frac{1}{2} -1} e^{-z} \frac{1}{\sqrt{2}} dz =\frac{ \Gamma(\frac{1}{2}) }{\sqrt{2}} = \sqrt{\frac{\pi}{2}}$$ , using $$\Gamma(\frac{1}{2})=\sqrt{\pi}$$ .
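As a numeric sanity check (a sketch assuming `numpy`; truncating the limit at $$n=10^5$$ and taking $$k=10$$ are illustrative choices), the Riemann sum is already close to $$\sqrt{\pi/2}$$:

```python
import numpy as np

def a_k(k, n=10**5):
    """Riemann-sum approximation of a_k for a large fixed n."""
    m = np.arange(1, k * n + 1)
    return np.sum(np.exp(-0.5 * (m / n) ** 2)) / n

print(a_k(10))             # already close to the limit
print(np.sqrt(np.pi / 2))  # sqrt(pi/2) ~ 1.2533
```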

### Statistical Insight

Let $$X \sim N(0,1)$$, i.e. $$X$$ is a standard normal random variable. Then,

$$Y=|X|$$, called the folded Normal, has pdf $$f_{Y}(y)= \begin{cases} \frac{2}{\sqrt{2 \pi }} e^{\frac{-y^2}{2}} & , y>0 \\ 0 &, otherwise \end{cases}$$ . (Verify!)

So, from (1) we can say that $$\int_{0}^{ \infty} e^{\frac{-y^2}{2}} dy = \frac{\sqrt{2 \pi }}{2} \int_{0}^{ \infty}\frac{2}{\sqrt{2 \pi }} e^{\frac{-y^2}{2}} dy = \frac{\sqrt{2 \pi }}{2} \int_{0}^{ \infty} f_{Y}(y) dy$$

$$=\frac{\sqrt{2 \pi }}{2} \times 1$$ ( as the integrand is the pdf of the folded Normal distribution, which integrates to 1 ) .

## Food For Thought

Find the same when $$a_{k}=\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{m=1}^{kn} {(\frac{m}{n})}^{5} \exp \left(-\frac{1}{2} \frac{m}{n}\right)$$.

# ISI MStat PSB 2008 Problem 3 | Functional equation

This is a very beautiful sample problem from ISI MStat PSB 2008 Problem 3 based on Functional equation . Let's give it a try !!

## Problem- ISI MStat PSB 2008 Problem 3

Let $$g$$ be a continuous function with $$g(1)=1$$ such that $$g(x+y)=5 g(x) g(y)$$ for all $$x, y$$. Find $$g(x)$$.

### Prerequisites

Continuity & Differentiability

Differential equation

Cauchy's functional equation

## Solution :

We are given that $$g$$ is a continuous function such that $$g(x+y)=5 g(x) g(y)$$ for all $$x, y$$, and $$g(1)=1$$.

Now putting x=y=0 , we get $$g(0)=5{g(0)}^2 \Rightarrow g(0)=0$$ or , $$g(0)= \frac{1}{5}$$ .

If $$g(0)=0$$ , then $$g(x)=g(x+0)=5g(x)g(0)=0$$ for all $$x$$, but we are given that $$g(1)=1$$ . Hence, a contradiction .

So, $$g(0)=\frac{1}{5}$$ .

Now , we can write $$g'(x)= \lim_{h \to 0} \frac{g(x+h)-g(x)}{h} = \lim_{h \to 0} \frac{5g(x)g(h)-g(x)}{h}$$

$$= 5g(x) \lim_{h \to 0} \frac{g(h)- \frac{1}{5} }{ h} = 5g(x) \lim_{h \to 0} \frac{g(h)- g(0) }{ h} = 5g(x)g'(0)$$ (by definition)

Therefore , $$g'(x)=5g'(0)g(x)= kg(x)$$ , where $$k=5g'(0)$$ is some constant .

Now we will solve the differential equation , let y=g(x) then we have from above

$$\frac{dy}{dx} = ky \Rightarrow \frac{dy}{y}=k{dx}$$ . Integrating both sides we get ,

$$\ln(y)=kx+c$$ , where $$c$$ is the constant of integration . So , we get $$y=e^{kx+c} \Rightarrow g(x)=e^{kx+c}$$

Solve the equations $$g(0)=\frac{1}{5}$$ and $$g(1)=1$$ to get the values of $$k$$ and $$c$$ . Finally we will get , $$g(x)=\frac{1}{5} e^{(\ln 5) x} =5^{x-1}$$.

But there is a little mistake in this solution .

What's the mistake ?

Ans- Here we assumed that $$g$$ is differentiable at $$x=0$$ , which may not be true .

The correct solution comes here !

We are given that $$g(x+y)=5 g(x) g(y)$$ for all $$x, y$$. First note that $$g(x)=g(\frac{x}{2}+\frac{x}{2})=5{g(\frac{x}{2})}^2 \ge 0$$, and if $$g(x_o)=0$$ for some $$x_o$$ then $$g(x)=5g(x-x_o)g(x_o)=0$$ for all $$x$$, contradicting $$g(1)=1$$. So $$g>0$$ everywhere, and we may take logarithms on both sides to get,

$$\log(g(x+y))=\log 5+\log(g(x))+\log(g(y)) \Rightarrow \log_5 (g(x+y))=1+\log_5 (g(x))+\log_5 (g(y))$$

$$\Rightarrow \log_5 (g(x+y)) +1= (\log_5 (g(x))+1)+(\log_5 (g(y)) +1) \Rightarrow \phi(x+y)=\phi(x)+\phi(y)$$ , where $$\phi(x)=1+\log_5 (g(x))$$

This is Cauchy's functional equation, and $$\phi$$ is continuous . Hence , $$\phi(x)=cx$$ for some constant $$c$$ $$\Rightarrow 1+\log_5 (g(x))=cx \Rightarrow g(x)=5^{cx-1}$$.

Now $$g(1)=1 \Rightarrow 5^{c-1}=1 \Rightarrow c=1$$.

Therefore , $$g(x)=5^{x-1}$$
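A tiny numeric check (a sketch; the random test points and the tolerance are arbitrary choices) that $$g(x)=5^{x-1}$$ indeed satisfies both conditions:

```python
import random

def g(x):
    return 5 ** (x - 1)  # the solution found above

for _ in range(5):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    # g(x + y) = 5 g(x) g(y) should hold up to floating-point error
    assert abs(g(x + y) - 5 * g(x) * g(y)) <= 1e-9 * g(x + y)
print("g(1) =", g(1))  # 1, as required
```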

## Food For Thought

Let $$f:\mathbb{R} \to \mathbb{R}$$ be a non-constant , 3 times differentiable function . If $$f(1+ \frac{1}{n})=1$$ for all integers $$n$$, then find $$f''(1)$$ .

# ISI MStat PSB 2009 Problem 6 | abNormal MLE of Normal

This is a very beautiful sample problem from ISI MStat PSB 2009 Problem 6. It is based on the idea of Restricted Maximum Likelihood Estimators and Mean Squared Errors. Give it a try !

## Problem-ISI MStat PSB 2009 Problem 6

Suppose $$X_1,.....,X_n$$ are i.i.d. $$N(\theta,1)$$, $$\theta_o \le \theta \le \theta_1$$, where $$\theta_o < \theta_1$$ are two specified numbers. Find the MLE of $$\theta$$ and show that it is better than the sample mean $$\bar{X}$$ in the sense of having smaller mean squared error.

### Prerequisites

Maximum Likelihood Estimators

Normal Distribution

Mean Squared Error

## Solution :

This is a very interesting problem ! We all know that if the condition "$$\theta_o \le \theta \le \theta_1$$, for some specified numbers $$\theta_o < \theta_1$$" had not been given, then the MLE would have been simply $$\bar{X}=\frac{1}{n}\sum_{k=1}^n X_k$$, the sample mean of the given sample. But due to the restriction over $$\theta$$, things get interestingly complicated.

So, to simplify a bit, let's write the likelihood function of $$\theta$$ given the sample $$\vec{X}=(X_1,....,X_n)'$$,

$$L(\theta |\vec{X})={(\frac{1}{\sqrt{2\pi}})}^n exp(-\frac{1}{2}\sum_{k=1}^n(X_k-\theta)^2)$$, when $$\theta_o \le \theta \le \theta_1$$. Now taking the natural log on both sides and differentiating, we find that,

$$\frac{d\ln L(\theta|\vec{X})}{d\theta}= \sum_{k=1}^n (X_k-\theta)$$.

Now, verify that if $$\bar{X} < \theta_o$$, then $$L(\theta |\vec{X})$$ is a decreasing function of $$\theta$$ over $$[\theta_o, \theta_1]$$, hence the maximum likelihood is attained at $$\theta_o$$ itself. Similarly, when $$\theta_o \le \bar{X} \le \theta_1$$, the maximum likelihood is attained at $$\bar{X}$$. Lastly, when $$\bar{X} > \theta_1$$, the likelihood function is increasing over $$[\theta_o, \theta_1]$$, hence the maximum likelihood is found at $$\theta_1$$.

Hence, the Restricted Maximum Likelihood Estimator of $$\theta$$, say

$$\hat{\theta_{RML}} = \begin{cases} \theta_o & \bar{X} < \theta_o \\ \bar{X} & \theta_o\le \bar{X} \le \theta_1 \\ \theta_1 & \bar{X} > \theta_1 \end{cases}$$

Now, we check that $$\hat{\theta_{RML}}$$ is a better estimator than $$\bar{X}$$ in terms of Mean Squared Error (MSE).

Now, $$MSE_{\theta}(\bar{X})=E_{\theta}(\bar{X}-\theta)^2=\int_{-\infty}^{\infty} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}$$, where $$f_{\bar{X}}$$ is the density of $$\bar{X} \sim N(\theta, \frac{1}{n})$$,

$$=\int_{-\infty}^{\theta_o} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}+\int_{\theta_o}^{\theta_1} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}+\int_{\theta_1}^{\infty} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}$$

$$\ge \int_{-\infty}^{\theta_o} (\theta_o-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}+\int_{\theta_o}^{\theta_1} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}+\int_{\theta_1}^{\infty} (\theta_1-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}$$

(on the first region $$\bar{x}<\theta_o\le\theta$$, so $$(\bar{x}-\theta)^2 \ge (\theta_o-\theta)^2$$, and similarly on the third region)

$$=E_{\theta}(\hat{\theta_{RML}}-\theta)^2=MSE_{\theta}(\hat{\theta_{RML}})$$.

Hence proved !!
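A quick Monte Carlo sketch of this comparison (assuming `numpy`; the particular $$\theta$$, $$\theta_o$$, $$\theta_1$$, $$n$$, seed and replication count are illustrative choices), using the fact that $$\bar{X}\sim N(\theta, \frac{1}{n})$$:

```python
import numpy as np

rng = np.random.default_rng(1)

def mse_comparison(theta, theta0, theta1, n, reps=200000):
    """Monte Carlo MSEs of the sample mean vs. the restricted MLE."""
    xbar = rng.normal(theta, 1 / np.sqrt(n), size=reps)  # Xbar ~ N(theta, 1/n)
    rml = np.clip(xbar, theta0, theta1)                  # the restricted MLE
    return np.mean((xbar - theta) ** 2), np.mean((rml - theta) ** 2)

print(mse_comparison(theta=0.2, theta0=0.0, theta1=1.0, n=5))
# the second (restricted) MSE should come out smaller
```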

## Food For Thought

Now, can you find an unbiased estimator for $$\theta^2$$ ?? Okay !! Now it's quite easy, right !! But is the estimator you are thinking about the best unbiased estimator ?? Calculate the variance and also check whether the variance attains the Cramér-Rao lower bound.

Give it a try !! You may need the help of Stein's Identity.

# ISI MStat PSB 2009 Problem 3 | Gamma is not abNormal

This is a very simple but beautiful sample problem from ISI MStat PSB 2009 Problem 3. It is based on recognizing a density function and then using the CLT. Try it !

## Problem- ISI MStat PSB 2009 Problem 3

Using an appropriate probability distribution, or otherwise, show that,

$$\lim\limits_{n\to\infty}\int^n_0 \frac{exp(-x)x^{n-1}}{(n-1)!}\,dx =\frac{1}{2}$$.

### Prerequisites

Gamma Distribution

Central Limit Theorem

Normal Distribution

## Solution :

Here all we need is to recognize the structure of the integrand. Look, the integrand is integrated over $$[0,n]$$, a part of the non-negative real line. Now, even though it is not mentioned explicitly that $$x$$ is a random variable, we can assume $$x$$ to be some value taken by a random variable $$X$$. After all, we can find randomness anywhere and everywhere !!

Now observe that the integrand has a structure which is identical to the density function of a gamma random variable with rate parameter $$1$$ and shape parameter $$n$$. So, if we assume that $$X \sim Gamma(1, n)$$, then our limiting integral transforms to,

$$\lim\limits_{n\to\infty}P(X \le n)$$.

Now, we know that if $$X \sim Gamma(1,n)$$, then its mean and variance both are $$n$$.

So, since $$X$$ is distributed as the sum of $$n$$ i.i.d. $$Exponential(1)$$ variables, as $$n \uparrow \infty$$, $$\frac{X-n}{\sqrt{n}} \to N(0,1)$$ in distribution, by the Central Limit Theorem.

Hence, $$\lim\limits_{n\to\infty}P(X \le n)=\lim\limits_{n\to\infty}P(\frac{X-n}{\sqrt{n}} \le 0)=\Phi (0)=\frac{1}{2}$$. [ Here $$\Phi(z)$$ is the cdf of the standard Normal at $$z$$. ]

Hence proved !!
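As a numeric illustration (a sketch assuming `scipy`; the chosen values of $$n$$ are arbitrary), one can watch $$P(X \le n)$$ drift down to $$\frac{1}{2}$$ as $$n$$ grows:

```python
from scipy import stats

# P(X <= n) for X ~ Gamma(shape = n, rate = 1) approaches 1/2 from above
for n in (10, 100, 1000, 10000):
    print(n, stats.gamma.cdf(n, a=n))
```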

## Food For Thought

Can you do the proof under the "Otherwise" condition !!

Give it a try !!

# ISI MStat PSB 2009 Problem 1 | Nilpotent Matrices

This is a very simple sample problem from ISI MStat PSB 2009 Problem 1. It is based on basic properties of Nilpotent Matrices and Skew-symmetric Matrices. Try it !

## Problem- ISI MStat PSB 2009 Problem 1

(a) Let $$A$$ be an $$n \times n$$ matrix such that $$(I+A)^4=O$$ where $$I$$ denotes the identity matrix. Show that $$A$$ is non-singular.

(b) Give an example of a non-zero $$2 \times 2$$ real matrix $$A$$ such that $$\vec{x'}A \vec{x}=0$$ for all real vectors $$\vec{x}$$.

### Prerequisites

Nilpotent Matrix

Eigenvalues

Skew-symmetric Matrix

## Solution :

The first part of the problem is quite easy,

It is given that for an $$n \times n$$ matrix $$A$$, we have $$(I+A)^4=O$$, so $$I+A$$ is a nilpotent matrix, right !

And we know that all the eigenvalues of a nilpotent matrix are $$0$$. Hence all the eigenvalues of $$I+A$$ are 0.

Now let $$\lambda_1, \lambda_2,......,\lambda_n$$ be the eigenvalues of the matrix $$A$$. So, the eigenvalues of the nilpotent matrix $$I+A$$ are of the form $$1+\lambda_k$$, where $$k=1,2.....,n$$. Now since $$1+\lambda_k=0$$, we have $$\lambda_k=-1$$, for $$k=1,2,...,n$$.

Since all the eigenvalues of $$A$$ are non-zero, $$A$$ is non-singular; in fact, $$|A|$$, being the product of the eigenvalues, equals $$(-1)^n$$. Hence our required proposition.

(b) Now this one is quite interesting,

For any $$2\times 2$$ matrix, the quadratic form of that matrix with respect to a vector $$\vec{x}=(x_1,x_2)^T$$ is of the form,

$$a{x_1}^2+ bx_1x_2+cx_2x_1+d{x_2}^2$$, where $$a,b,c$$ and $$d$$ are the elements of the matrix. Now if we equate this with $$0$$ for all $$\vec{x}$$, what condition does it impose on $$a, b, c$$ and $$d$$ !! I leave it as an exercise for you to complete; one candidate is checked below. Also try to generalize it; you will end up with a nice result.
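If you work the exercise out, the Prerequisites already hint where it lands: a non-zero skew-symmetric matrix does the job. Here is a minimal numeric check (assuming `numpy`; the candidate matrix, seed and the random test vectors are illustrative choices):

```python
import numpy as np

# one non-zero candidate: a 2x2 skew-symmetric matrix
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

rng = np.random.default_rng(7)
for _ in range(5):
    x = rng.normal(size=2)
    print(x @ A @ x)  # x'Ax = x1*x2 - x2*x1 = 0 for every x
```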

## Food For Thought

Now, extending the first part of the question, $$A$$ is invertible, right !! So, can you prove that for any two vectors $$\vec{x}$$ and $$\vec{y}$$ from $$\mathbb{R}^n$$, the necessary and sufficient condition for the invertibility of the matrix $$A+\vec{x}\vec{y'}$$ is " $$\vec{y'} A^{-1} \vec{x}$$ must be different from $$-1$$ " !!

This is a very important result for Statistics Students !! Keep thinking !!

# ISI MStat PSB 2006 Problem 2 | Cauchy & Schwarz come to rescue

This is a very subtle sample problem from ISI MStat PSB 2006 Problem 2. After seeing this problem, one may think of using Lagrange Multipliers, but one can find an easier and more beautiful way, if one is really keen to find it. Can you ?

## Problem- ISI MStat PSB 2006 Problem 2

Maximize $$x+y$$ subject to the condition that $$2x^2+3y^2 \le 1$$.

### Prerequisites

Cauchy-Schwarz Inequality

Tangent-Normal

Conic section

## Solution :

This is a beautiful problem, but only if one notices the trick, or else things get ugly.

Now we need to find the maximum of $$x+y$$ when it is given that $$2x^2+3y^2 \le 1$$. Seeing the given condition, we always think of using Lagrange Multipliers, but I find that very nasty, and always find ways to avoid it.

So let's recall the famous Cauchy-Schwarz Inequality, $$(ab+cd)^2 \le (a^2+c^2)(b^2+d^2)$$.

Now, let's take $$a=\sqrt{2}x ; b=\frac{1}{\sqrt{2}} ; c= \sqrt{3}y ; d= \frac{1}{\sqrt{3}}$$, and observe that our inequality reduces to,

$$(x+y)^2 \le (2x^2+3y^2)(\frac{1}{2}+\frac{1}{3}) \le \frac{1}{2}+\frac{1}{3}=\frac{5}{6} \Rightarrow x+y \le \sqrt{\frac{5}{6}}$$. Hence the maximum of $$x+y$$ subject to the given condition $$2x^2+3y^2 \le 1$$ is $$\sqrt{\frac{5}{6}}$$, attained when equality holds in Cauchy-Schwarz, i.e. when $$2x=3y$$ and $$2x^2+3y^2=1$$. Hence we got what we want without even doing any nasty calculations.

Another nice approach for doing this problem is looking through pictures. The given condition $$2x^2+3y^2 \le 1$$ represents a disc whose shape is elliptical, and $$x+y=k$$ is a family of parallel straight lines passing through that disc.

Hence the line with the maximum intercept among all the lines passing through the given disc gives the maximized value of $$x+y$$. So, basically, if a line of the form $$x+y=k_o$$ (say) is a tangent to the disc, then it represents the line with maximum intercept from the mentioned family of lines. So, we just need to find the point on the boundary of the disc where a line of the form $$x+y=k_o$$ touches as a tangent. Can you finish the rest and verify whether the maximum intercept is $$k_o= \sqrt{\frac{5}{6}}$$ or not ?
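A brute-force check of the geometric claim (a sketch assuming `numpy`; the parametrization $$x=\frac{\cos t}{\sqrt{2}}, y=\frac{\sin t}{\sqrt{3}}$$ traces the boundary ellipse $$2x^2+3y^2=1$$):

```python
import numpy as np

# maximize x + y over the boundary 2x^2 + 3y^2 = 1
t = np.linspace(0, 2 * np.pi, 10**6)
x = np.cos(t) / np.sqrt(2)
y = np.sin(t) / np.sqrt(3)
print((x + y).max(), np.sqrt(5 / 6))  # both ~0.91287
```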

## Food For Thought

Can you show another alternate solution to this problem ? No Lagrange Multipliers, please !! How would you like to find the point of tangency if the disc were circular ? Show us your solution; we will post it in the comments.

Keep thinking !!