
ISI MStat PSB 2006 Problem 8 | Bernoullian Beauty

This is a very beautiful sample problem from ISI MStat PSB 2006 Problem 8. It is based on the basic idea of Maximum Likelihood Estimators, but needs a bit of thinking. Give it a thought !

Problem– ISI MStat PSB 2006 Problem 8


Let \((X_1,Y_1),\ldots,(X_n,Y_n)\) be a random sample from the discrete distribution with joint probability mass function

\(f_{X,Y}(x,y) = \begin{cases} \frac{\theta}{4} & (x,y)=(0,0) \text{ or } (1,1) \\ \frac{2-\theta}{4} & (x,y)=(0,1) \text{ or } (1,0) \end{cases}\)

with \(0 \le \theta \le 2\). Find the maximum likelihood estimator of \(\theta\).

Prerequisites


Maximum Likelihood Estimators

Indicator Random Variables

Bernoulli Trials

Solution :

This is a very beautiful problem, not very difficult, but its beauty is hidden in its simplicity. Let's explore !!

Observe that the given pmf, taken at face value, does not take us anywhere by itself, so we should think outside the box; but before going outside the box, let's collect what's in the box !

So, from the given pmf we get \(P(\text{getting a pair of the form } (1,1) \text{ or } (0,0))=2\times \frac{\theta}{4}=\frac{\theta}{2}\),

Similarly, \(P(\text{getting a pair of the form } (0,1) \text{ or } (1,0))=2\times \frac{2-\theta}{4}=\frac{2-\theta}{2}=1-P(\text{getting a pair of the form } (1,1) \text{ or } (0,0))\).

So, this clearly pushes us towards Bernoulli trials, doesn't it !!

So, let's treat the pairs with a match, i.e. \(x=y\), as our success, and the other possibilities as failure; then our success probability is \(\frac{\theta}{2}\), where \(0\le \theta \le 2\). So, if \(S\) is the number of successful pairs in our given sample of size \(n\), then it is evident that \(S \sim \text{Binomial}(n, \frac{\theta}{2})\).

So, now it's simplified by all means, and we know the MLE of the success probability in a binomial is the proportion of successes in the sample,

Hence, \(\frac{\hat{\theta}_{MLE}}{2}= \frac{s}{n}\), where \(s\) is the number of those pairs in our sample where \(X_i=Y_i\).

So, \(\hat{\theta}_{MLE}=\frac{2(\text{number of pairs in the sample of the form } (0,0) \text{ or } (1,1))}{n}\).

Hence, we are done !!
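If you want to see the estimator in action, here is a minimal simulation sketch (not part of the original solution, and assuming numpy is available): it draws pairs from the given pmf for a chosen true \(\theta\) and checks that \(\hat{\theta}_{MLE}=2s/n\) lands close to it.

```python
import numpy as np

rng = np.random.default_rng(42)
theta, n = 1.2, 10_000   # arbitrary illustrative choices

# Each pair is a Bernoulli trial: "success" = a match (0,0) or (1,1),
# which happens with probability theta/2, as derived above.
matches = rng.random(n) < theta / 2
s = matches.sum()                 # number of matching pairs in the sample
theta_mle = 2 * s / n             # the MLE: 2 * (proportion of successes)

print(f"true theta = {theta}, MLE = {theta_mle:.4f}")
```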


Food For Thought

Say \(X\) and \(Y\) are two independent exponential random variables with means \(\mu\) and \(\lambda\) respectively. But you observe two other variables, \(Z\) and \(W\), such that \(Z=\min(X,Y)\) and \(W\) takes the value \(1\) when \(Z=X\) and \(0\) otherwise. Can you find the MLEs of the parameters ?

Give it a try !!



ISI MStat PSB 2009 Problem 8 | How big is the Mean?

This is a very simple and regular sample problem from ISI MStat PSB 2009 Problem 8. It is based on testing the nature of the mean of the Exponential distribution. Give it a try !

Problem– ISI MStat PSB 2009 Problem 8


Let \(X_1,\ldots,X_n\) be i.i.d. observations from the density,

\(f(x)=\frac{1}{\mu}\exp(-\frac{x}{\mu}) , \ x>0\)

where \(\mu >0\) is an unknown parameter.

Consider the problem of testing the hypothesis \(H_o : \mu \le \mu_o\) against \(H_1 : \mu > \mu_o\).

(a) Show that the test with critical region \([\bar{X} \ge \mu_o {\chi^2}_{2n,1-\alpha}/2n]\), where \( {\chi^2}_{2n,1-\alpha} \) is the \((1-\alpha)\)th quantile of the \({\chi^2}_{2n}\) distribution, has size \(\alpha\).

(b) Give an expression of the power in terms of the c.d.f. of the \({\chi^2}_{2n}\) distribution.

Prerequisites


Likelihood Ratio Test

Exponential Distribution

Chi-squared Distribution

Solution :

This problem is quite regular and simple. From the given form of the hypotheses, it is almost clear that using Neyman-Pearson directly can land you in trouble. So, let's go for something more general, that is, the Likelihood Ratio Test.

Hence, the Likelihood function of the \(\mu\) for the given sample is ,

\(L(\mu | \vec{X})=(\frac{1}{\mu})^n \exp(-\frac{\sum_{i=1}^n X_i}{\mu}) , \ \mu>0\); also observe that the sample mean \(\bar{X}\) is the MLE of \(\mu\).

So, the Likelihood Ratio statistic is,

\(\lambda(\vec{x})=\frac{\sup_{\mu \le \mu_o}L(\mu |\vec{x})}{\sup_\mu L(\mu |\vec{x})} \\ =\begin{cases} 1 & \mu_o \ge \bar{X} \\ \frac{L(\mu_o|\vec{x})}{L(\bar{X}|\vec{x})} & \mu_o < \bar{X} \end{cases} \)

So, our test function is ,

\(\phi(\vec{x})=\begin{cases} 1 & \lambda(\vec{x})<k \\ 0 & \text{otherwise} \end{cases}\).

We reject \(H_o\) at size \(\alpha\) when \(\phi(\vec{x})=1\), where \(k\) is chosen so that \(E_{H_o}(\phi) \le \alpha\).

Hence, \(\lambda(\vec{x}) < k \Rightarrow L(\mu_o|\vec{x})<kL(\bar{X}|\vec{x}) \Rightarrow -n\ln \mu_o -\frac{n\bar{X}}{\mu_o} < \ln k -n \ln \bar{X} -n \Rightarrow n \ln \bar{X}-\frac{n\bar{X}}{\mu_o} < K^* \),

for some constant \(K^*\) (which absorbs \(\ln k\), \(n\) and \(n\ln \mu_o\)).

Here \(K^*\) and \(\mu_o\) are fixed quantities. Let \(g(\bar{x})=n\ln \bar{x} -\frac{n\bar{x}}{\mu_o}\), and observe that \(g\) is a decreasing function of \(\bar{x}\) for \(\bar{x} \ge \mu_o\).

Hence, there exists a \(c\) such that for \(\bar{x} \ge c\), we have \(g(\bar{x}) < K^*\).

So, the critical region of the test is of form \(\bar{X} \ge c\), for some \(c\) such that,

\(P_{H_o}(\bar{X} \ge c)=\alpha \), where \(\alpha\) is the size of the test.

Now, our task is to find \(c\), and for that observe, if \(X \sim Exponential(\theta)\), then \(\frac{2X}{\theta} \sim {\chi^2}_2\),

Hence, in this problem, since the \(X_i\)'s follow \(Exponential(\mu)\), we have \(\frac{2n\bar{X}}{\mu} \sim {\chi^2}_{2n}\). Thus,

\(P_{H_o}(\bar{X} \ge c)=\alpha \Rightarrow P_{H_o}(\frac{2n\bar{X}}{\mu_o} \ge \frac{2nc}{\mu_o})=\alpha \Rightarrow P({\chi^2}_{2n} \ge \frac{2nc}{\mu_o})=\alpha \),

which gives \(c=\frac{\mu_o {\chi^2}_{2n;1-\alpha}}{2n}\),

Hence, the rejection region is indeed \([\bar{X} \ge \frac{\mu_o {\chi^2}_{2n;1-\alpha}}{2n}]\).

Hence Proved !

(b) Now, we know that the power of the test is,

\(\beta(\mu)= E_{\mu}(\phi) = P_{\mu}(\lambda(\vec{x})<k)=P_{\mu}(\bar{X} \ge \frac{\mu_o {\chi^2}_{2n;1-\alpha}}{2n}) = P_{\mu}({\chi^2}_{2n} \ge \frac{\mu_o}{\mu}{\chi^2}_{2n;1-\alpha}) \).

Hence, the power of the test is \(\beta(\mu)=1-F_{2n}(\frac{\mu_o}{\mu}{\chi^2}_{2n;1-\alpha})\), where \(F_{2n}\) is the c.d.f. of the \({\chi^2}_{2n}\) distribution.
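As a sanity check, here is a rough simulation sketch (assuming numpy and scipy are available; the values of \(n\), \(\mu_o\), \(\mu\) and \(\alpha\) below are arbitrary illustrative choices) that estimates the size of the test empirically and evaluates the power expression from part (b).

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, mu_o, alpha, reps = 20, 2.0, 0.05, 100_000

# Critical value from part (a): c = mu_o * chi2_{2n; 1-alpha} / (2n).
c = mu_o * chi2.ppf(1 - alpha, df=2 * n) / (2 * n)

# Simulate under H_o (mu = mu_o): the rejection rate should be ~ alpha.
xbar = rng.exponential(scale=mu_o, size=(reps, n)).mean(axis=1)
print("empirical size:", (xbar >= c).mean())

# Power at some mu > mu_o, using the expression from part (b).
mu = 3.0
power = chi2.sf((mu_o / mu) * chi2.ppf(1 - alpha, df=2 * n), df=2 * n)
print("power at mu = 3:", power)
```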


Food For Thought

Can you use any other testing procedure to conduct this test ?

Think about it !!



ISI MStat PSB 2009 Problem 4 | Polarized to Normal

This is a very beautiful sample problem from ISI MStat PSB 2009 Problem 4. It is based on the idea of Polar Transformations, but needs a good deal of observation to realize that. Give it a try !

Problem– ISI MStat PSB 2009 Problem 4


Let \(R\) and \(\theta\) be independent and non-negative random variables such that \(R^2 \sim {\chi_2}^2 \) and \(\theta \sim U(0,2\pi)\). Fix \(\theta_o \in (0,2\pi)\). Find the distribution of \(R\sin(\theta+\theta_o)\).

Prerequisites


Convolution

Polar Transformation

Normal Distribution

Solution :

This problem may get nasty if one tries to find the required distribution by the so-called CDF method. It's better to observe a bit before moving forward !! Recall how we derive the probability distribution of the sample variance of a sample from a normal population ??

Yes, you are thinking right, we need to use Polar Transformation !!

But before transforming, let's make some modifications to reduce future complications.

Given, \(\theta \sim U(0,2\pi)\) and \(\theta_o \) is some fixed number in \((0,2\pi)\), so, let \(Z=\theta+\theta_o \sim U(\theta_o,2\pi +\theta_o)\).

Hence, we need to find the distribution of \(R\sin Z\). Now, from the given and modified information, the joint pdf of \(R\) and \(Z\) is,

\(f_{R,Z}(r,z)=\frac{r}{2\pi}\exp(-\frac{r^2}{2}), \ \ r>0, \ \theta_o \le z \le 2\pi +\theta_o \)

Now, let the transformation be \((R,Z) \to (X,Y)\),

\(X=R\cos Z \\ Y=R\sin Z\), Also, here \(X,Y \in \mathbb{R}\)

Hence, \(R^2=X^2+Y^2 \\ Z= \tan^{-1} (\frac{Y}{X}) \)

Now verify that the Jacobian of the transformation is \(J(\frac{r,z}{x,y})=\frac{1}{r}\).

Hence, the joint pdf of \(X\) and \(Y\) is,

\(f_{X,Y}(x,y)=f_{R,Z}(\sqrt{x^2+y^2}, \tan^{-1}(\frac{y}{x}))\, J(\frac{r,z}{x,y}) =\frac{1}{2\pi}\exp(-\frac{x^2+y^2}{2})\) , \(x,y \in \mathbb{R}\).

Yeah, Now it is looking familiar right !!

Since we need the distribution of \(Y=R\sin Z=R\sin(\theta+\theta_o)\), we integrate \(f_{X,Y}\) w.r.t. \(x\) over the real line, and we end up with the conclusion that,

\(R\sin(\theta+\theta_o) \sim N(0,1)\). Hence, we are done !!
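A quick simulation sketch (assuming numpy and scipy are available; the value of \(\theta_o\) is an arbitrary choice) to convince yourself of the conclusion:

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(7)
n, theta_o = 100_000, 1.0

R = np.sqrt(rng.chisquare(df=2, size=n))      # so that R^2 ~ chi^2_2
theta = rng.uniform(0, 2 * np.pi, size=n)     # theta ~ U(0, 2*pi)
Y = R * np.sin(theta + theta_o)

# Kolmogorov-Smirnov test against N(0,1): a large p-value is
# consistent with the claim that Y is standard normal.
print(kstest(Y, "norm"))
```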


Food For Thought

From the above solution, the distribution of \(R\cos(\theta+\theta_o)\) is also determinable, right !! Can you go further and investigate the distribution of \(\tan(\theta+\theta_o)\) ?? Here \(R\) and \(\theta\) are the same variables as defined in the question.

Give it a try !!



ISI MStat PSB 2009 Problem 6 | abNormal MLE of Normal

This is a very beautiful sample problem from ISI MStat PSB 2009 Problem 6. It is based on the idea of Restricted Maximum Likelihood Estimators and Mean Squared Errors. Give it a try !

Problem-ISI MStat PSB 2009 Problem 6


Suppose \(X_1,…..,X_n\) are i.i.d. \(N(\theta,1)\), \(\theta_o \le \theta \le \theta_1\), where \(\theta_o < \theta_1\) are two specified numbers. Find the MLE of \(\theta\) and show that it is better than the sample mean \(\bar{X}\) in the sense of having smaller mean squared error.

Prerequisites


Maximum Likelihood Estimators

Normal Distribution

Mean Squared Error

Solution :

This is a very interesting problem ! We all know that if the condition "\(\theta_o \le \theta \le \theta_1\), for some specified numbers \(\theta_o < \theta_1\)" had not been given, then the MLE would have been simply \(\bar{X}=\frac{1}{n}\sum_{k=1}^n X_k\), the sample mean of the given sample. But due to the restriction over \(\theta\), things get interestingly complicated.

So, to simplify a bit, let's write the likelihood function of \(\theta\) given the sample \(\vec{X}=(X_1,\ldots,X_n)'\),

\(L(\theta |\vec{X})=(\frac{1}{\sqrt{2\pi}})^n\exp(-\frac{1}{2}\sum_{k=1}^n(X_k-\theta)^2)\), when \(\theta_o \le \theta \le \theta_1\). Now, taking natural log on both sides and differentiating, we find that,

\(\frac{d\ln L(\theta|\vec{X})}{d\theta}= \sum_{k=1}^n (X_k-\theta) \).

Now, verify that if \(\bar{X} < \theta_o\), then \(L(\theta |\vec{X})\) is always a decreasing function of \(\theta\) [where \(\theta_o \le \theta \le \theta_1\)], hence the maximum likelihood is attained at \(\theta_o\) itself. Similarly, when \(\theta_o \le \bar{X} \le \theta_1\), the maximum likelihood is attained at \(\bar{X}\); lastly, when \(\bar{X} > \theta_1\), the likelihood function is increasing, hence the maximum likelihood is attained at \(\theta_1\).

Hence, the Restricted Maximum Likelihood Estimator of \(\theta\), say \(\hat{\theta}_{RML}\), is

\(\hat{\theta}_{RML} = \begin{cases} \theta_o & \bar{X} < \theta_o \\ \bar{X} & \theta_o\le \bar{X} \le \theta_1 \\ \theta_1 & \bar{X} > \theta_1 \end{cases}\)

Now we check that \(\hat{\theta}_{RML}\) is a better estimator than \(\bar{X}\) in terms of Mean Squared Error (MSE).

Now, writing \(f_{\bar{X}}\) for the density of \(\bar{X}\),

\(MSE_{\theta}(\bar{X})=E_{\theta}(\bar{X}-\theta)^2=\int_{-\infty}^{\infty} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}\)

\(=\int_{-\infty}^{\theta_o} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}+\int_{\theta_o}^{\theta_1} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}+\int_{\theta_1}^{\infty} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}\)

\(\ge \int_{-\infty}^{\theta_o} (\theta_o-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}+\int_{\theta_o}^{\theta_1} (\bar{x}-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}+\int_{\theta_1}^{\infty} (\theta_1-\theta)^2f_{\bar{X}}(\bar{x})\,d\bar{x}\)

\(=E_{\theta}(\hat{\theta}_{RML}-\theta)^2=MSE_{\theta}(\hat{\theta}_{RML})\),

since for \(\bar{x} < \theta_o \le \theta\) we have \((\bar{x}-\theta)^2 \ge (\theta_o-\theta)^2\), and similarly \((\bar{x}-\theta)^2 \ge (\theta_1-\theta)^2\) for \(\bar{x} > \theta_1 \ge \theta\).

Hence proved !!
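Here is a small Monte Carlo sketch (assuming numpy; the values of \(\theta_o\), \(\theta_1\), the true \(\theta\) and \(n\) are arbitrary illustrative choices) comparing the two MSEs; note that np.clip implements exactly the case definition of \(\hat{\theta}_{RML}\) above.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_o, theta_1, theta, n, reps = -0.5, 0.5, 0.3, 10, 200_000

# Sample means of N(theta, 1) samples of size n.
xbar = rng.normal(loc=theta, scale=1.0, size=(reps, n)).mean(axis=1)

# np.clip is the case definition of the restricted MLE.
restricted = np.clip(xbar, theta_o, theta_1)

print("MSE of sample mean    :", np.mean((xbar - theta) ** 2))
print("MSE of restricted MLE :", np.mean((restricted - theta) ** 2))
```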


Food For Thought

Now, can you find an unbiased estimator for \(\theta^2\) ?? Okay !! Now it's quite easy, right !! But is the estimator you are thinking about the best unbiased estimator !! Calculate the variance and also check whether the variance attains the Cramér-Rao Lower Bound.

Give it a try !! You may need the help of Stein’s Identity.



ISI MStat PSB 2009 Problem 3 | Gamma is not abNormal

This is a very simple but beautiful sample problem from ISI MStat PSB 2009 Problem 3. It is based on recognizing a density function and then using the CLT. Try it !

Problem– ISI MStat PSB 2009 Problem 3


Using an appropriate probability distribution, or otherwise, show that,

\( \lim\limits_{n\to\infty}\int_0^n \frac{e^{-x}x^{n-1}}{(n-1)!}\,dx =\frac{1}{2}\).

Prerequisites


Gamma Distribution

Central Limit Theorem

Normal Distribution

Solution :

Here all we need is to recognize the structure of the integrand. Look, the integrand is a function supported on the non-negative real numbers. Now, even though it is not mentioned explicitly that \(x\) is a random variable, we can assume \(x\) to be some value taken by a random variable \(X\). After all, we can find randomness anywhere and everywhere !!

Now observe that the integrand is identical to the density function of a gamma random variable with rate \(1\) and shape \(n\). So, if we assume that \(X \sim Gamma(1, n)\), then our limiting integral transforms to,

\(\lim\limits_{n\to\infty}P(X \le n)\).

Now, we know that if \(X \sim Gamma(1,n)\), then its mean and variance both are \(n\).

Also, \(X\) is a sum of \(n\) i.i.d. \(Exponential(1)\) variables, so as \(n \uparrow \infty\), \(\frac{X-n}{\sqrt{n}} \to N(0,1)\) in distribution, by the Central Limit Theorem.

Hence, \(\lim\limits_{n\to\infty}P(X \le n)=\lim\limits_{n\to\infty}P(\frac{X-n}{\sqrt{n}} \le 0)=\Phi (0)=\frac{1}{2}\). [Here \(\Phi(z)\) is the cdf of the standard normal at \(z\).]

Hence proved !!
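A numerical sanity check, assuming scipy is available: the integral is \(P(X \le n)\) for \(X \sim Gamma(1,n)\), and it indeed approaches \(\frac{1}{2}\) as \(n\) grows.

```python
from scipy.stats import gamma

# gamma.cdf(n, a=n) with the default scale 1 is exactly the integral of
# x^(n-1) e^(-x) / (n-1)! from 0 to n.
for n in (1, 10, 100, 1000, 10_000):
    print(n, gamma.cdf(n, a=n))
```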


Food For Thought

Can you do the proof under the "otherwise" condition !!

Give it a try !!



ISI MStat PSB 2009 Problem 1 | Nilpotent Matrices

This is a very simple sample problem from ISI MStat PSB 2009 Problem 1. It is based on basic properties of Nilpotent Matrices and Skew-symmetric Matrices. Try it !

Problem– ISI MStat PSB 2009 Problem 1


(a) Let \(A\) be an \(n \times n\) matrix such that \((I+A)^4=O\) where \(I\) denotes the identity matrix. Show that \(A\) is non-singular.

(b) Give an example of a non-zero \(2 \times 2\) real matrix \(A\) such that \( \vec{x’}A \vec{x}=0\) for all real vectors \(\vec{x}\).

Prerequisites


Nilpotent Matrix

Eigenvalues

Skew-symmetric Matrix

Solution :

The first part of the problem is quite easy,

It is given that for an \(n \times n\) matrix \(A\), we have \((I+A)^4=O\), so \(I+A\) is a nilpotent matrix, right !

And we know that all the eigenvalues of a nilpotent matrix are \(0\). Hence all the eigenvalues of \(I+A\) are 0.

Now let \(\lambda_1, \lambda_2,\ldots,\lambda_n\) be the eigenvalues of the matrix \(A\). Then the eigenvalues of the nilpotent matrix \(I+A\) are of the form \(1+\lambda_k\), where \(k=1,2,\ldots,n\). Now, since \(1+\lambda_k=0\), we have \(\lambda_k=-1\), for \(k=1,2,\ldots,n\).

Since all the eigenvalues of \(A\) are non-zero, \(A\) is non-singular; in fact \(|A|=(-1)^n \). Hence our required proposition.

(b) Now this one is quite interesting,

For any \(2\times 2\) matrix, the quadratic form of that matrix with respect to a vector \(\vec{x}=(x_1,x_2)^T\) is of the form

\(a{x_1}^2+ bx_1x_2+cx_2x_1+d{x_2}^2\), where \(a,b,c\) and \(d\) are the elements of the matrix. Now, if we equate this with \(0\) for all vectors \(\vec{x}\), what condition does it impose on \(a, b, c\) and \(d\) !! I leave it as an exercise for you to complete (a numerical check follows below). Also, try to generalize it; you will end up with a nice result.
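(Spoiler alert !!) If you want to verify a candidate answer numerically, here is a tiny sketch, assuming numpy is available; the matrix used below is one valid answer, a non-zero skew-symmetric matrix.

```python
import numpy as np

# Spoiler: a non-zero matrix with a = d = 0 and c = -b (skew-symmetric).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

rng = np.random.default_rng(3)
for _ in range(5):
    x = rng.normal(size=2)
    print(x @ A @ x)   # 0 for every x (up to floating-point error)
```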


Food For Thought

Now, extending the first part of the question: \(A\) is invertible, right !! So, can you prove that for any two vectors \(\vec{x}\) and \(\vec{y}\) in \(\mathbb{R}^n\), the necessary and sufficient condition for the invertibility of the matrix \(A+\vec{x}\vec{y'}\) is that "\(\vec{y'} A^{-1} \vec{x}\) must be different from \(-1\)" !!

This is a very important result for Statistics Students !! Keep thinking !!



ISI MStat PSB 2006 Problem 2 | Cauchy & Schwarz come to rescue

This is a very subtle sample problem from ISI MStat PSB 2006 Problem 2. After seeing this problem, one may think of using Lagrange Multipliers, but one can find an easier and more beautiful way, if one is really keen to find it. Can you ?

Problem– ISI MStat PSB 2006 Problem 2


Maximize \(x+y\) subject to the condition that \(2x^2+3y^2 \le 1\).

Prerequisites


Cauchy-Schwarz Inequality

Tangent-Normal

Conic section

Solution :

This is a beautiful problem, but only if one notices the trick, or else things get ugly.

Now we need to find the maximum of \(x+y\) when it is given that \(2x^2+3y^2 \le 1\). Seeing the given condition, one immediately thinks of Lagrange Multipliers, but I find that very nasty and always look for ways to avoid it.

So let’s recall the famous Cauchy-Schwarz Inequality, \((ab+cd)^2 \le (a^2+c^2)(b^2+d^2)\).

Now, let's take \(a=\sqrt{2}x ; b=\frac{1}{\sqrt{2}} ; c= \sqrt{3}y ; d= \frac{1}{\sqrt{3}} \), and observe that our inequality reduces to,

\((x+y)^2 \le (2x^2+3y^2)(\frac{1}{2}+\frac{1}{3}) \le \frac{1}{2}+\frac{1}{3}=\frac{5}{6} \Rightarrow x+y \le \sqrt{\frac{5}{6}}\). Equality in Cauchy-Schwarz is attainable here (when \(2x=3y\) with \(2x^2+3y^2=1\)), so the maximum of \(x+y\) subject to the given condition \(2x^2+3y^2 \le 1\) is \(\sqrt{\frac{5}{6}}\). Hence we got what we wanted without doing any nasty calculations.

Another nice approach to this problem is looking through pictures. The given condition \(2x^2+3y^2 \le 1\) represents a disc whose shape is elliptical, and \(x+y=k\) is a family of parallel straight lines passing through that disc.

The disc and the line with maximum intercept.

Hence the line with the maximum intercept among all the lines passing through the given disc gives the maximized value of \(x+y\). So, basically, if a line of the form \(x+y=k_o\) (say) is a tangent to the disc, then it represents the line with maximum intercept from the mentioned family of lines. So, we just need to find the point on the boundary of the disc where a line of the form \(x+y=k_o\) touches as a tangent. Can you finish the rest and verify whether the maximum intercept is \(k_o= \sqrt{\frac{5}{6}}\) or not ?
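For the skeptics, here is a numerical cross-check of the maximum, a sketch assuming scipy is available (this is exactly the constrained optimization the Lagrange route would set up):

```python
import numpy as np
from scipy.optimize import minimize

# Maximize x + y subject to 2x^2 + 3y^2 <= 1 (scipy minimizes,
# so we minimize the negative of the objective).
res = minimize(
    lambda p: -(p[0] + p[1]),
    x0=[0.1, 0.1],
    constraints=[{"type": "ineq",
                  "fun": lambda p: 1 - 2 * p[0] ** 2 - 3 * p[1] ** 2}],
)

print("numerical max:", -res.fun)          # ~ 0.9129
print("sqrt(5/6)    :", np.sqrt(5 / 6))    # ~ 0.9129
```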


Food For Thought

Can you find another alternative solution to this problem ? No Lagrange Multipliers, please !! How would you find the point of tangency if the disc were circular ? Show us your solution and we will post it in the comments.

Keep thinking !!



Problem on Inequality | ISI – MSQMS – B, 2018 | Problem 2a

Try this problem from ISI-MSQMS 2018 which involves the concept of Inequality.

INEQUALITY | ISI 2018| MSQMS | PART B | PROBLEM 2a


(a) Prove that if $x>0, y>0$ and $x+y=1,$ then $\left(1+\frac{1}{x}\right)\left(1+\frac{1}{y}\right) \geq 9$

Key Concepts


Algebra

Inequality

Numbers

Check The Answer


But Try the Problem First…

Answer: $xy \leq \frac{1}{4}$

Source: ISI – MSQMS – B, 2018, Problem 2A

Suggested Reading: "Inequalities: An Approach Through Problems" by B.J. Venkatachala

Try with Hints


First hint

We have to show that

$(1+\frac{1}{x})(1+\frac{1}{y}) \geq 9$

i.e. $1+ \frac{1}{x} + \frac{1}{y} +\frac{1}{xy} \geq 9$

Since $x+y =1$, we have $\frac{1}{x}+\frac{1}{y}=\frac{x+y}{xy}=\frac{1}{xy}$.

Therefore the above inequality becomes $1+\frac{2}{xy} \geq 9$, i.e. $\frac{2}{xy} \geq 8$,

i.e. $xy \leq \frac{1}{4}$

Now, with this reduced form of the inequality, why don't you give it a try yourself? I am sure you can do it.

Second hint

Applying AM $\geq$ GM on $x,y$

So you are just one step away from solving your problem, go on!

Final Step

Therefore, $\frac{x+y}{2} \geq (xy)^\frac{1}{2}$

$\Rightarrow \frac{1}{2} \geq (xy)^\frac{1}{2}$

Squaring both sides we get, $xy \leq \frac{1}{4}$

Hence the result follows.
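A quick numerical sanity check of the original inequality along the constraint $x+y=1$ (illustrative only; the proof above stands on its own):

```python
# Check (1 + 1/x)(1 + 1/y) >= 9 at a few points with x + y = 1;
# equality should occur at x = y = 1/2.
for x in (0.1, 0.25, 0.5, 0.75, 0.9):
    y = 1 - x
    print(x, (1 + 1 / x) * (1 + 1 / y))
```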


Data, Determinant and Simplex

This is a beautiful problem connecting linear algebra, geometry and data. Go ahead and delve into the glorious connection.

Problem

Given a matrix \( \begin{bmatrix}a & b \\c & d \end{bmatrix} \) with the constraint \( 1 \geq a, b, c, d \geq 0; a + b + c + d = 1\), find the matrix with the largest determinant.

Is there any statistical significance behind this result?

Prerequisites

Solution ( Geometrical )

Step 1

Take two vectors \( v = (a,c) \) and \( w = (b,d) \); the constraint says their sum \(v+w\) lies on the line \(x + y = 1\). Now, we need to find a pair of vectors \(\{v, w\}\) such that the area of the parallelogram formed by these two vectors is maximum (this area is exactly \(|ad-bc|\), the absolute value of the determinant).

Triangles and vectors

Step 2

Rotate the parallelogram so that CF lies on the X-axis.

Now, observe that this new parallelogram has the same area as the initial one. Can you find a new parallelogram with a larger area?

Step 3

Just extend the vertices to the edge of the simplex OAB. Observe that the new parallelogram has a larger area than the initial parallelogram. Is there anything larger?

Triangles and Parallelograms

Step 4

Now, extend it to a rectangle. Voila! It has a larger area. Therefore, given any non-rectangular parallelogram, we can find a rectangle with a larger area. So, let's search in the region of rectangles. What do you guess is the answer?

Triangle and rectangle

Step 5

A Square!

Triangle and square

Let the rectangle have sides \(x, y\) and area \(xy\). Now, observe that \(xy\) is maximized subject to \(x+y = 1\) when \(x = y = \frac{1}{2}\). [Use the AM-GM Inequality.]

So, \(v = (0,\frac{1}{2}) \) and \( w = (\frac{1}{2},0) \) maximize the absolute determinant, giving \(|ad-bc|=\frac{1}{4}\).
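Here is a brute-force sketch (assuming numpy; the grid resolution is an arbitrary choice) over the constrained set, confirming that \(|ad-bc|\) peaks at \(\frac{1}{4}\) with two entries equal to \(\frac{1}{2}\) on one diagonal.

```python
import itertools
import numpy as np

# Grid over the simplex a + b + c + d = 1 with a, b, c, d >= 0.
grid = np.linspace(0, 1, 51)
best = max(
    ((a * d - b * c, (a, b, c, d))
     for a, b, c in itertools.product(grid, repeat=3)
     for d in [1 - a - b - c] if d >= 0),
    key=lambda t: abs(t[0]),
)
print(best)   # |det| ~ 0.25, attained with two 1/2's on one diagonal
```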

Challenge 1

Prove it using algebraic methods borrowed from this geometrical thinking. Your solution will be posted here.

Challenge 2

Can you generalize this result to \( n \times n \) matrices? If yes, prove it: just turn the geometric steps into algebra.

Statistical Significance

Lung Cancer and Smoker Data


Observe that if we divide everything by 1000, we get a matrix of the above form.

So, the question is about the association of Smoking and Lung Cancer. Given these 1000 individuals, let's see how the distribution of the numbers determines the odds ratio.

For the categorical table \( \begin{bmatrix}a & b \\c & d \end{bmatrix} \), the odds ratio is defined as \(\frac{ad}{bc} = \frac{\det \begin{bmatrix}a & b \\c & d \end{bmatrix}}{bc} + 1\).

The log odds ratio is defined as \( \log(ad) - \log(bc)\).


Observe from the data that the log odds ratio behaves almost like the determinant. When the distribution of \(Y\) is the same whether \(X = 1\) or \(X = 0\), i.e. when \(X\) and \(Y\) are independent, no information about dependence is revealed. Hence, the log odds ratio is \(0\) and so is the determinant.

Try to understand why the log odds ratio behaves the same way as the determinant.

\( \log(x)\) is increasing, hence \(\log(ad) - \log(bc)\) must have the same sign as \(ad - bc\).
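A small illustration (assuming numpy; the tables below are made-up examples, not the smoking data above) of how the determinant and the log odds ratio move together:

```python
import numpy as np

tables = [
    np.array([[0.25, 0.25], [0.25, 0.25]]),   # independence: both are 0
    np.array([[0.40, 0.10], [0.10, 0.40]]),   # positive association
    np.array([[0.10, 0.40], [0.40, 0.10]]),   # negative association
]

for t in tables:
    (a, b), (c, d) = t
    det = a * d - b * c                       # the determinant
    log_or = np.log(a * d) - np.log(b * c)    # the log odds ratio
    print(f"det = {det:+.3f}, log odds ratio = {log_or:+.3f}")
```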

Share your ideas here. I will write in more detail about this phenomenon.

Stay Tuned! Stay Blessed!


Problem on Integral Inequality | ISI – MSQMS – B, 2015

Try this problem from ISI-MSQMS 2015 which involves the concept of Integral Inequality.

INTEGRAL INEQUALITY | ISI 2015 | MSQMS | PART B | PROBLEM 7b


Show that $1<\int_{0}^{1} e^{x^{2}} d x<e$

Key Concepts


Real Analysis

Inequality

Numbers

Check The Answer


But Try the Problem First…

Source: ISI – MSQMS – B, 2015, Problem 7b

Suggested Reading: "Inequalities: An Approach Through Problems" by B.J. Venkatachala

Try with Hints


First hint

We have to show that

$1<\int_{0}^{1} e^{x^{2}} \, dx<e$.

For $0< x <1$, we have $0 < x^2 <1$.

Now, with this reduced form, why don't you give it a try yourself? I am sure you can do it.

Second hint

Thus, $ e^0 < e^{x^2} <e^1 $

i.e. $1 < e^{x^2} < e$ for $0 < x < 1$.

So you are just one step away from solving your problem, go on!

Final hint

Therefore, integrating the (strict) inequality from $0$ to $1$, we get $\int\limits_0^1 \mathrm dx < \int\limits_0^1 e^{x^2} \mathrm dx < \int\limits_0^1 e \,\mathrm dx$, i.e. $1 < \int\limits_0^1 e^{x^2} \mathrm dx < e$.
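A numerical confirmation, assuming scipy is available: the integral is about $1.4627$, which indeed lies strictly between $1$ and $e$.

```python
import numpy as np
from scipy.integrate import quad

# Numerically integrate e^(x^2) over [0, 1] and check the bounds.
value, _ = quad(lambda x: np.exp(x ** 2), 0, 1)
print(value, 1 < value < np.e)   # ~1.4627  True
```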
