
## ISI MStat PSB 2007 Problem 3 | Application of L’hospital Rule

This is a very beautiful sample problem from ISI MStat PSB 2007 Problem 3, based on the use of L’hospital rule. Let’s give it a try !!

## Problem– ISI MStat PSB 2007 Problem 3

Let $f$ be a function such that $f(0)=0$ and $f$ has derivatives of all orders. Show that $\lim _{h \to 0} \frac{f(h)+f(-h)}{h^{2}}=f''(0)$,
where $f''(0)$ is the second derivative of $f$ at 0.

### Prerequisites

Differentiability

Continuity

L’hospital rule

## Solution :

Let L= $\lim _{h \to 0} \frac{f(h)+f(-h)}{h^{2}}$. This is a $\frac{0}{0}$ form, since $f(0)=0$ makes the numerator tend to $f(0)+f(0)=0$.

So, here we can use L’hospital rule, as $f$ is differentiable.

We get L= $\lim _{h \to 0} \frac{f'(h)-f'(-h)}{2h} = \lim _{h \to 0} \frac{(f'(h)-f'(0)) -(f'(-h)-f'(0))}{2h}$

= $\lim _{h \to 0} \frac{f'(h)-f'(0)}{2h} + \lim _{k \to 0} \frac{f'(k)-f'(0)}{2k}$ , taking -h=k .

= $\frac{f''(0)}{2} + \frac{f''(0)}{2}$ = $f''(0)$. Hence done!
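As a quick numerical sanity check (not part of the proof), one can evaluate the ratio for a sample function, say $f(x) = \sin x + x^2$, which satisfies $f(0)=0$ and has $f''(0)=2$:

```python
import math

# Numerical sanity check of the limit (illustrative only): for
# f(x) = sin(x) + x^2 we have f(0) = 0 and f''(0) = 2.
def f(x):
    return math.sin(x) + x**2

h = 1e-4
ratio = (f(h) + f(-h)) / h**2   # should approach f''(0) = 2
print(ratio)
```

The odd part of $f$ cancels in $f(h)+f(-h)$, which is exactly why the limit picks out the second derivative.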

## Food For Thought

Let $f:[0,1] \rightarrow[0,1]$ be a continuous function, and write $f^{(n)} := \underbrace{f \circ f \circ \cdots \circ f}_{n \text{ times}}$ for the $n$-fold composition of $f$. Assume that there exists a positive integer $m$ such that $f^{(m)}(x)=x$ for all $x \in[0,1]$. Prove that $f(x)=x$ for all $x \in[0,1]$.


## ISI MStat PSB 2005 Problem 1 | The Inductive Matrix

This is a very beautiful sample problem from ISI MStat PSB 2005 Problem 1. It is based on some basic properties of upper triangular and diagonal matrices, which work out nicely if you use them carefully. Give it a thought!

## Problem– ISI MStat PSB 2005 Problem 1

Let $A$ be a $n \times n$ upper triangular matrix such that $AA^T=A^TA$. Show that $A$ is a diagonal matrix.

### Prerequisites

Upper Triangular Matrix

Diagonal Matrix

Mathematical Induction

## Solution :

This is a very beautiful problem, since it deals with some very elegant aspects of matrices. Let us write the matrix $A$ in partitioned form as,

$A=\begin{bmatrix} a_{11} & \vec{a_1}^T \\ \vec{0_{n-1}} & A_{n-1} \end{bmatrix}$.

Here, $A_{n-1}$ is a partition matrix of $A$, which is also an upper triangular matrix of order $n-1$, and $\vec{0_{n-1}}$ is a null column vector of order $n-1$.

So, $AA^T= \begin{bmatrix} {a_{11}}^2+\vec{a_1}^T\vec{a_1} & \vec{a_1}^T{A_{n-1}}^T \\ A_{n-1}\vec{a_1} & A_{n-1}{A_{n-1}}^T \end{bmatrix}$ .

Again, $A^TA= \begin{bmatrix} {a_{11}}^2 & a_{11}\vec{a_1}^T \\ a_{11}\vec{a_1} & \vec{a_1}\vec{a_1}^T+{A_{n-1}}^TA_{n-1} \end{bmatrix}$.

We prove the claim by induction on $n$. The base case $n=1$ is trivial, since every $1 \times 1$ matrix is diagonal. So assume the statement is true for order $n-1$, i.e., if an $(n-1)\times(n-1)$ upper triangular matrix $B$ satisfies $BB^T=B^TB$, then $B$ is diagonal. Now, equating the corresponding blocks of $AA^T$ and $A^TA$, the $(1,1)$ entries give ${a_{11}}^2+\vec{a_1}^T\vec{a_1}={a_{11}}^2$, so

$\vec{a_1}^T\vec{a_1}=0 \Rightarrow \vec{a_1}=\vec{0}$ , and also $A_{n-1}{A_{n-1}}^T={A_{n-1}}^TA_{n-1}$.

Now observe that $A_{n-1}$ is an $(n-1)\times(n-1)$ upper triangular matrix satisfying the same condition, so by the induction hypothesis $A_{n-1}$ is diagonal. Together with $\vec{a_1}=\vec{0}$, this makes $A$ itself diagonal, completing the induction !!
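A small numerical illustration (not a proof; the matrices below are arbitrary choices): for an upper triangular matrix, the condition $AA^T = A^TA$ fails as soon as any off-diagonal entry is nonzero.

```python
# Illustration: an upper triangular matrix satisfies A A^T = A^T A
# only when it is diagonal. Plain-list 3x3 matrix helpers suffice here.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def is_normal(A):
    # checks whether A commutes with its transpose
    At = transpose(A)
    return matmul(A, At) == matmul(At, A)

U = [[1, 2, 0], [0, 3, 4], [0, 0, 5]]   # upper triangular, not diagonal
D = [[1, 0, 0], [0, 3, 0], [0, 0, 5]]   # diagonal

print(is_normal(U), is_normal(D))   # prints: False True
```

The non-diagonal example fails the condition precisely because its $(1,1)$ entries of $AA^T$ and $A^TA$ differ by $\vec{a_1}^T\vec{a_1} > 0$, mirroring the key step of the proof.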

## Food For Thought

What if I change the given relation to $AA^*=A^*A$, where $A^*$ is the conjugate transpose of $A$, the rest of the conditions remaining the same? Can you investigate whether $A$ is a diagonal matrix or not ?

Keep thinking !!


## ISI MStat PSB 2014 Problem 4 | The Machine’s Failure

This is a very simple sample problem from ISI MStat PSB 2014 Problem 4. It is based on order statistics, but one often misses the subtleties due to unfamiliarity with order statistics. Be careful !

## Problem– ISI MStat PSB 2014 Problem 4

Consider a machine with three components whose times to failure are independently distributed as exponential random variables with mean $\lambda$. The machine continues to work as long as at least two components work. Find the expected time to failure of the machine.

### Prerequisites

Exponential Distribution

Order statistics

Basic counting

## Solution :

In the problem, as said, let the three components of the machine be A, B and C respectively, where $X_A, X_B$ and $X_C$ are the survival times of the respective parts. Now it is also told that $X_A, X_B$ and $X_C$ follow $exponential(\lambda)$ (with mean $\lambda$), and clearly these random variables are also i.i.d.

Now, here comes the trick ! It is told that the machine stops as soon as two (or all) of its parts stop working. Here we sometimes get confused and start thinking combinatorially, forgetting that the basic counting of combinatorics lies in ordering ! Suppose we order the lifetimes of the individual components, i.e. among $X_A, X_B$ and $X_C$ there exists an ordering, and writing them in order we have $X_{(1)} \le X_{(2)} \le X_{(3)}$.

Now observe that, after $X_{(2)}$ units of time, the machine will stop !! (Are you sure ?? think it over ).

So, the expected time till the machine stops is just $E(X_{(2)})$, but to find this we need to know the distribution of $X_{(2)}$.

We have the pdf of $X_{(2)}$ as, $f_{(2)}(x)= \frac{3!}{(2-1)!(3-2)!} [P(X \le x)]^{2-1}[P(X>x)]^{3-2}f_X(x)$.

where $f_X(x)$ is the pdf of the exponential distribution with mean $\lambda$.

So, $E(X_{(2)})= \int^{\infty}_0 xf_{(2)}(x)dx$, which will turn out to be $\frac{5\lambda}{6}$; I leave the verification to the reader, hence concluding my solution.
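A Monte Carlo cross-check of the answer (illustrative only; $\lambda = 1$ is an arbitrary choice): the machine's failure time is the middle order statistic of three i.i.d. exponential lifetimes, and its sample mean should come out close to $\frac{5\lambda}{6}$.

```python
import random

# Monte Carlo estimate of E(X_(2)) for three i.i.d. Exp(mean = lam) lifetimes.
random.seed(0)
lam = 1.0            # mean of each exponential lifetime (assumed value)
n_trials = 200_000

total = 0.0
for _ in range(n_trials):
    # expovariate takes the rate, which is 1/mean
    times = sorted(random.expovariate(1 / lam) for _ in range(3))
    total += times[1]    # X_(2): the machine fails when the 2nd part dies

estimate = total / n_trials
print(estimate, 5 * lam / 6)   # estimate should be close to 5/6
```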

## Food For Thought

Now suppose you want to install an alarm system which will notify you some time before the machine wears out !! So, what do you think your strategy should be ? Given that you have a strategy, you now replace the worn-out part of the machine within the time between the alarm ringing and the machine stopping, to continue uninterrupted working. What is the expected time within which you must act ?

Keep the machine running !!


## ISI MStat PSB 2009 Problem 2 | Linear Difference Equation

This is a very beautiful sample problem from ISI MStat PSB 2009 Problem 2 based on Convergence of a sequence. Let’s give it a try !!

## Problem– ISI MStat PSB 2009 Problem 2

Let $\{x_{n}: n \geq 0\}$ be a sequence of real numbers such that
$x_{n+1}=\lambda x_{n}+(1-\lambda) x_{n-1}, n \geq 1,$ for some $0<\lambda<1$
(a) Show that $x_{n}=x_{0}+(x_{1}-x_{0}) \sum_{k=0}^{n-1}(\lambda-1)^{k}$
(b) Hence, or otherwise, show that $x_{n}$ converges and find the limit.

### Prerequisites

Limit

Sequence

Linear Difference Equation

## Solution :

(a) We are given that $x_{n+1}=\lambda x_{n}+(1-\lambda) x_{n-1}, n \geq 1,$ for some $0<\lambda<1$

So, $x_{n+1} - x_{n} = -(1- \lambda)( x_n-x_{n-1})$ —- (1)

Again using (1) we have $( x_n-x_{n-1})= -(1- \lambda)( x_{n-1}-x_{n-2})$ .

Now putting this in (1) we have , $x_{n+1} - x_{n} = {(-(1- \lambda))}^2 ( x_{n-1}-x_{n-2})$ .

So, proceeding like this we have $x_{n+1} - x_{n} = {(-(1- \lambda))}^n ( x_{1}-x_{0})$ for all $n \geq 1$ and for some $0<\lambda<1$ —- (2)

So, from (2) we have $x_{n} - x_{n-1} = {(-(1- \lambda))}^{n-1} ( x_{1}-x_{0})$ , $\cdots , (x_2-x_1)=(\lambda-1)(x_1-x_{0})$ and $x_1-x_{0}=x_{1}-x_{0}$. Note that $-(1-\lambda)=\lambda-1$.

Adding all the above $n$ equations we have $x_{n}-x_{0}=(x_{1}-x_{0}) \sum_{k=0}^{n-1} {(\lambda-1)}^{k}$

Hence , $x_{n}=x_{0}+(x_{1}-x_{0}) \sum_{k=0}^{n-1}(\lambda-1)^{k}$ (proved ) .

(b) As we now have an explicit form of $x_{n}=x_{0}+(x_{1}-x_{0}) \times \frac{1-{( \lambda -1)}^n}{1-(\lambda -1)}$ —-(3)

From (3), note that $0<\lambda<1$ gives $-1 < \lambda-1 < 0$, so $\lim_{n\to\infty} {(\lambda-1)}^{n} = 0$. (In particular, $x_{n}$ need not be monotonic, since $(\lambda-1)^n$ alternates in sign, but the limit below exists directly.)

Taking $\lim_{n\to\infty}$ on both sides of (3), we get $\lim_{n\to\infty} x_{n} = x_{0}+(x_{1}-x_{0}) \times \frac{1}{2-\lambda}$, so $x_{n}$ converges to this value.
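A quick numerical check of part (b) (the values of $\lambda$, $x_0$, $x_1$ below are arbitrary choices): iterating the recurrence should reproduce the closed-form limit $x_0 + \frac{x_1 - x_0}{2-\lambda}$.

```python
# Iterate x_{n+1} = lam * x_n + (1 - lam) * x_{n-1} and compare with the
# closed-form limit from part (b). All starting values are illustrative.
lam = 0.3
x_prev, x_curr = 1.0, 4.0        # x0, x1
for _ in range(200):
    x_prev, x_curr = x_curr, lam * x_curr + (1 - lam) * x_prev

limit = 1.0 + (4.0 - 1.0) / (2 - lam)
print(x_curr, limit)             # the two values should agree closely
```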

## Food For Thought

Suppose $m$ and $k$ are two natural numbers, and $a_{1}, a_{2}, \ldots, a_{m}$ and $b_{1}, b_{2}, \ldots, b_{k}$ are two sets of positive real numbers such that $a_{1}^{\frac{1}{n}}+a_{2}^{\frac{1}{n}}+\cdots+a_{m}^{\frac{1}{n}}$ = $b_{1}^{\frac{1}{n}}+\cdots+b_{k}^{\frac{1}{n}}$

for all natural numbers $n$. Then prove that $m=k$ and $a_{1} a_{2} \cdots a_{m}=b_{1} b_{2} \cdots b_{k}$ .


## ISI MStat PSB 2012 Problem 6 | Tossing a biased coin

This is a very beautiful sample problem from ISI MStat PSB 2012 Problem 6 based on Conditional probability . Let’s give it a try !!

## Problem– ISI MStat PSB 2012 Problem 6

There are two biased coins – one which has probability $1 / 4$ of showing
heads and $3 / 4$ of showing tails, while the other has probability $3 / 4$ of showing heads and $1 / 4$ of showing tails when tossed. One of the two coins is chosen at random and is then tossed 8 times.

(a) Given that the first toss shows heads, what is the probability that in the next 7 tosses there will be exactly 6 heads and 1 tail?
(b) Given that the first toss shows heads and the second toss shows tail, what is the probability that the next 6 tosses all show heads?

### Prerequisites

Basic Counting Principle

## Solution :

Let $A_1$ : the coin with probability of head $1/4$ and tail $3/4$ is chosen,
$A_2$ : the coin with probability of head $3/4$ and tail $1/4$ is chosen,
$B$ : the first toss shows heads and in the next 7 tosses there are exactly 6 heads and 1 tail,
$C$ : the first toss shows heads, the second shows tails and the next 6 tosses all show heads.

(a) $P(B)=P(B|A_1)P(A_1) + P(B|A_2)P(A_2)$

Now , $P(B|A_1)= \frac{1}{4} \times {7 \choose 1} \times (\frac{1}{4})^{6} \times \frac{3}{4}$

Since the first toss is a head, it occurs with coin 1 with probability $\frac{1}{4}$, and out of the next 7 tosses we choose the 6 positions where heads come (equivalently, the 1 position for the tail), which occurs with probability ${7 \choose 1} \times (\frac{1}{4})^{6} \times \frac{3}{4}$.

Similarly we can calculate $P(B|A_2)$, and $P(A_1)=P(A_2)= \frac{1}{2}$, the probability of choosing either of the 2 coins.

Therefore , $P(B)=P(B|A_1)P(A_1) + P(B|A_2)P(A_2)$

= $\frac{1}{4} \times {7 \choose 1} \times\left(\frac{1}{4}\right)^{6} \times \frac{3}{4} \times\frac{1}{2}+\frac{3}{4} \times {7 \choose 1} \times\left(\frac{3}{4}\right)^{6} \times \frac{1}{4} \times \frac{1}{2}$

Finally, since the question asks for a probability conditional on the first toss showing heads, divide by $P(\text{first toss heads}) = \frac{1}{2} \times \frac{1}{4}+\frac{1}{2} \times \frac{3}{4} = \frac{1}{2}$, so the required answer is $2P(B)$.

(b) Similarly like (a) we get ,

$P(C)= \frac{1}{4} \times \frac{3}{4} \times \left(\frac{1}{4}\right)^{6} \times \frac{1}{2}+\frac{3}{4} \times \frac{1}{4} \times\left(\frac{3}{4}\right)^{6} \times \frac{1}{2}$ .

Here we don’t need to choose anything, as all the outcomes of the tosses are specified; we just account for the two different coins. As in (a), the required conditional probability is $\frac{P(C)}{P(\text{first head, second tail})}$, where $P(\text{first head, second tail}) = \frac{1}{2} \times \frac{1}{4} \times \frac{3}{4}+\frac{1}{2} \times \frac{3}{4} \times \frac{1}{4} = \frac{3}{16}$.
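A Monte Carlo sanity check of the expression for $P(B)$ (illustrative only): pick a coin uniformly, toss it 8 times, and count the runs where the first toss is a head and the next 7 tosses contain exactly 6 heads.

```python
import random
from math import comb

# Simulate the two-coin experiment and estimate
# P(first toss heads AND exactly 6 heads in the next 7 tosses).
random.seed(1)
n_trials = 400_000
hits = 0
for _ in range(n_trials):
    p = random.choice([0.25, 0.75])                 # P(heads) of the chosen coin
    tosses = [random.random() < p for _ in range(8)]
    if tosses[0] and sum(tosses[1:]) == 6:
        hits += 1

estimate = hits / n_trials
# the closed-form value from the solution, summed over the two coins
exact = 0.5 * sum(p * comb(7, 6) * p**6 * (1 - p) for p in (0.25, 0.75))
print(estimate, exact)
```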

## Food For Thought

There are 10 boxes, each containing 6 white and 7 red balls. Two different boxes are chosen at random, one ball is drawn simultaneously at random from each and transferred to the other box. Now a box is again chosen from the 10 boxes and a ball is drawn from it. Find the probability that the ball is white.


## ISI MStat PSB 2013 Problem 3 | Number of distinct integers

This is a very beautiful sample problem from ISI MStat PSB 2013 Problem 3 based on Counting principle . Let’s give it a try !!

## Problem– ISI MStat PSB 2013 Problem 3

Suppose integers are formed by taking one or more digits from the following: $2,2,3,3,4,5,5,5,6,7$. For example, 355 is a possible choice while 44 is not. Find the number of distinct integers that can be formed in which

(a) the digits are non-decreasing;

(b) the digits are strictly increasing.

### Prerequisites

Basic Counting Principle

## Solution :

(a) To find the number of integers with non-decreasing digits, we argue as follows.

First, note that the relative order of the given digits is forced: to form a non-decreasing integer, all we can really choose is the number of times each particular digit occurs.

For example, 223556 is an integer with non-decreasing digits: here 2 occurs 2 times, 3 occurs 1 time, 4 doesn’t occur at all, 5 occurs 2 times, 6 occurs 1 time, and finally 7 doesn’t occur at all.
So, here we have (0,1,2) possible choices for the number of occurrences of 2, (0,1,2) possible choices for 3, (0,1) for 4, (0,1,2,3) for 5, (0,1) for 6 and (0,1) for 7.

Hence, the number of such integers $=3 \times 3 \times 2 \times 4 \times 2 \times 2-1 = 287$.

We subtract 1 to exclude the case where no digit occurs at all.

(b) To find the number of integers with strictly increasing digits, we can argue as above. Here the restriction is that each digit must be strictly greater than the preceding one, so no digit can repeat. Hence each of the six distinct digits $2,3,4,5,6,7$ has two choices (0,1) of occurrence. Therefore the number of such integers is $2^{6}-1 = 63$.

Again we subtract 1 to exclude the case where no digit occurs at all.
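Both counts can be verified by brute force (a cross-check, not the intended argument): every distinct non-empty sub-multiset of the digits, once sorted, gives exactly one non-decreasing integer.

```python
from itertools import combinations

# Enumerate every non-empty selection of positions from the digit multiset;
# sorting a selection yields its unique non-decreasing integer.
digits = [2, 2, 3, 3, 4, 5, 5, 5, 6, 7]

selections = set()
for r in range(1, len(digits) + 1):
    for combo in combinations(digits, r):
        selections.add(tuple(sorted(combo)))

non_decreasing = len(selections)
# strictly increasing <=> all chosen digits are distinct
strictly_increasing = sum(1 for s in selections if len(set(s)) == len(s))
print(non_decreasing, strictly_increasing)   # expected: 287 and 63
```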

## Food For Thought

A number is chosen randomly from all the 5-digit numbers. Find the probability that its digits form a non-decreasing sequence.

Hint 1 : Find what is invariant here .

Hint 2 : Use the formula for the number of non-negative integer solutions of an equation.


## ISI MStat PSB 2013 Problem 8 | Finding the Distribution of a Random Variable

This is a very beautiful sample problem from ISI MStat PSB 2013 Problem 8 based on finding the distribution of a random variable . Let’s give it a try !!

## Problem– ISI MStat PSB 2013 Problem 8

Suppose $X_{1}$ is a standard normal random variable. Define
$X_{2}= \begin{cases} -X_{1} & \text{if } |X_{1}|<1 \\ X_{1} & \text{otherwise} \end{cases}$
(a) Show that $X_{2}$ is also a standard normal random variable.
(b) Obtain the cumulative distribution function of $X_{1}+X_{2}$ in terms of the cumulative distribution function of a standard normal random
variable.

### Prerequisites

Cumulative Distribution Function

Normal Distribution

## Solution :

(a) Let $F_{X_{2}}(x)$ be the distribution function of $X_{2}$. Then we can say that,

$F_{X_{2}}(x) = P( X_{2} \le x) = P( X_{2} \le x | |X_{1}| < 1) P( |X_{1}| <1) + P( X_{2} \le x | |X_{1}| > 1 ) P( |X_{1}| >1)$

= $P( -X_{1} \le x \mid |X_{1}| < 1)P( |X_{1}| <1) + P( X_{1} \le x \mid |X_{1}| > 1 ) P( |X_{1}| >1)$

= $P( -X_{1}\le x , |X_{1}| < 1 ) + P( X_{1} \le x , |X_{1}| > 1 )$

= $P( X_{1}\le x , |-X_{1}| < 1 ) + P( X_{1} \le x , |X_{1}| > 1 )$

Since $X_{1} \sim N(0,1)$, its distribution is symmetric about 0. So, $X_{1}$ and $-X_{1}$ are identically distributed.

Therefore , $F_{X_{2}}(x) = P( X_{1}\le x , |X_{1}| < 1 ) + P( X_{1} \le x , |X_{1}| > 1 )$

=$P(X_{1} \le x ) = \Phi(x)$

Hence , $X_{2}$ is also a standard normal random variable.

(b) Let , $Y= X_{1} + X_{2} = \begin{cases} 0 & \text{if } |X_{1}|<1 \\ 2X_{1} & \text{ otherwise } \end{cases}$

Distribution function $F_{Y}(y) = P(Y \le y)$

=$P(Y \le y \mid |X_{1}| < 1) P(|X_{1}| <1) + P( Y\le y \mid |X_{1}| >1)P(|X_{1}|>1)$

= $P( 0 \le y , -1 \le X_{1} \le 1 ) + P( 2X_{1} \le y , ( X_{1} >1 \cup X_{1}<-1))$

= $P(0 \le y , -1 \le X_{1} \le 1 ) + P( X_{1} \le \frac{y}{2} , X_{1} > 1) + P( X_{1} \le \frac{y}{2} , X_{1} < -1)$

= $P(0 \le y , -1 \le X_{1} \le 1 ) + P( 1< X_{1} \le \frac{y}{2}) + P( X_{1} \le \min \{ \frac{y}{2} , -1 \} )$

= $\begin{cases} P( -1 \le X_{1} \le 1 ) + P( 1< X_{1} \le \frac{y}{2}) + P( X_{1} \le -1) & y \ge 2 \\ P( -1 \le X_{1} \le 1 ) + P( X_{1} \le -1) & 0 \le y < 2 \\ P( X_{1} \le -1) & -2 \le y < 0 \\ P( X_{1} \le \frac{y}{2}) & y<-2 \end{cases}$

= $\begin{cases} \Phi( \frac{y}{2} ) & y<-2 \\ \Phi(-1) & -2 \le y < 0 \\ \Phi(1) & 0 \le y <2 \\ \Phi(\frac{y}{2} ) & y \ge 2 \end{cases}$ .
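A simulation check of both parts (illustrative only): $X_2$ built from a simulated $X_1$ should behave like a standard normal, and $Y = X_1 + X_2$ should satisfy $P(Y=0) = P(|X_1|<1)$ and $F_Y(1) = \Phi(1)$.

```python
import random
from statistics import NormalDist

# Simulate X1 ~ N(0,1), build X2 by the problem's rule, and check the
# distribution of Y = X1 + X2 against the derived CDF.
random.seed(2)
n = 100_000
ys = []
for _ in range(n):
    x1 = random.gauss(0, 1)
    x2 = -x1 if abs(x1) < 1 else x1
    ys.append(x1 + x2)

frac_zero = sum(1 for y in ys if y == 0) / n      # P(Y = 0)
target = 2 * NormalDist().cdf(1) - 1              # P(|X1| < 1)
frac_le_1 = sum(1 for y in ys if y <= 1) / n      # empirical F_Y(1)
print(frac_zero, target, frac_le_1, NormalDist().cdf(1))
```

Note that $Y$ has an atom at 0 of mass $P(|X_1|<1) \approx 0.6827$, which is exactly the flat piece of the CDF between $-2$ and $2$.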

## Food For Thought

Find the distribution function of $2X_{1}-X_{2}$ in terms of the cumulative distribution function of a standard normal random variable.


## ISI MStat PSB 2009 Problem 5 | Finding the Distribution of a Random Variable

This is a very beautiful sample problem from ISI MStat PSB 2009 Problem 5 based on finding the distribution of a random variable . Let’s give it a try !!

## Problem– ISI MStat PSB 2009 Problem 5

Suppose $F$ and $G$ are continuous and strictly increasing distribution
functions. Let $X$ have distribution function $F$ and $Y=G^{-1}( F(X))$
(a) Find the distribution function of Y.
(b) Hence, or otherwise, show that the joint distribution function of $(X, Y),$ denoted by $H(x, y),$ is given by $H(x, y)=\min (F(x), G(y))$.

### Prerequisites

Cumulative Distribution Function

Inverse of a function

Minimum of two function

## Solution :

(a) Let $F_{Y}(y)$ be the cumulative distribution function of $Y=G^{-1}(F(X))$.
Then, $F_{Y}(y)=P(Y \le y) =P(G^{-1}(F(X)) \le y)$
=$P(F(X) \le G(y))$

[ applying $G$ on both sides; since $G$ is a strictly increasing function, the inequality doesn't change ]

= $P(X \le F^{-1}(G(y)))$

[ applying $F^{-1}$ on both sides; since $F$ is a strictly increasing distribution function, its inverse exists and the inequality doesn't change ]

=$F(F^{-1}(G(y)))$ [ since $F$ is the distribution function of $X$ ]
=$G(y)$

Therefore, the cumulative distribution function of $Y=G^{-1}(F(X))$ is $G$.
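A numerical illustration of part (a) (the choices $F = $ the $Exp(1)$ distribution function and $G = \Phi$ are assumptions for the demo): $Y = G^{-1}(F(X))$ should then look standard normal.

```python
import math
import random
from statistics import NormalDist, fmean, stdev

# Probability integral transform demo: X ~ Exp(1), so F(X) = 1 - e^{-X}
# is Uniform(0,1), and G^{-1}(F(X)) should follow G (standard normal here).
random.seed(3)
nd = NormalDist()
n = 100_000

ys = []
for _ in range(n):
    x = random.expovariate(1.0)     # X with F(x) = 1 - e^{-x}
    u = 1 - math.exp(-x)            # F(X)
    ys.append(nd.inv_cdf(u))        # Y = G^{-1}(F(X))

print(fmean(ys), stdev(ys))         # should be close to 0 and 1
```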

(b) Let $H(x,y)$ be the joint distribution function of $(X, Y)$. Then we have,

$H(x,y)=P(X \leq x, Y \leq y) =P(X \leq x, G^{-1}(F(X)) \leq y) =P(X \leq x, F(X) \leq G(y))$

=$P(F(X) \leq F(x), F(X) \leq G(y))$

[ since $X \le x$ holds if and only if $F(X) \le F(x)$, as $F$ is a strictly increasing distribution function ]

= $P(F(X) \leq \min \{F(x), G(y)\}) =P(X \leq F^{-1}(\min \{F(x), G(y)\}))$

=$F(F^{-1}(\min \{F(x), G(y)\})) =\min \{F(x), G(y)\}$ [ since $F$ is the CDF of $X$ ]

Therefore , the joint distribution function of $(X, Y),$ denoted by $H(x, y),$ is given by $H(x, y)=\min (F(x), G(y))$

## Food For Thought

Find the distribution function of $Y=G^{-1}( F(X))$, where $G$ is a continuous and strictly decreasing function.


## ISI MStat PSB 2018 Problem 9 | Regression Analysis

This is a very simple sample problem from ISI MStat PSB 2018 Problem 9. It is mainly based on estimation of ordinary least square estimates and Likelihood estimates of regression parameters. Try it!

## Problem – ISI MStat PSB 2018 Problem 9

Suppose $(y_i,x_i)$ satisfies the regression model,

$y_i= \alpha + \beta x_i + \epsilon_i$ for $i=1,2,\ldots,n,$

where $\{ x_i : 1 \le i \le n \}$ are fixed constants and $\{ \epsilon_i : 1 \le i \le n \}$ are i.i.d. $N(0, \sigma^2)$ errors, where $\alpha, \beta$ and $\sigma^2 (>0)$ are unknown parameters.

(a) Let $\tilde{\alpha}$ denote the least squares estimate of $\alpha$ obtained assuming $\beta=5$. Find the mean squared error (MSE) of $\tilde{\alpha}$ in terms of model parameters.

(b) Obtain the maximum likelihood estimator of this MSE.

### Prerequisites

Normal Distribution

Ordinary Least Square Estimates

Maximum Likelihood Estimates

## Solution :

This problem is simple enough.

For the given model, $y_i= \alpha + \beta x_i + \epsilon_i$ for $i=1,\ldots,n$.

The scenario is even simpler here since, it is given that $\beta=5$ , so our model reduces to,

$y_i= \alpha + 5x_i + \epsilon_i$, where $\epsilon_i \sim N(0, \sigma^2)$ and $\epsilon_i$’s are i.i.d.

Now we know that the Ordinary Least Square (OLS) estimate of $\alpha$ is

$\tilde{\alpha} = \bar{y} - \tilde{\beta}\bar{x}$ (How ??) where $\tilde{\beta}$ is generally the OLS estimate of $\beta$; but here $\beta=5$ is known, so,

$\tilde{\alpha}= \bar{y} - 5\bar{x}$. Again,

$E(\tilde{\alpha})=E( \bar{y}-5\bar{x})=\alpha+(\beta-5)\bar{x}$, hence $\tilde{\alpha}$ is a biased estimator for $\alpha$ with $Bias_{\alpha}(\tilde{\alpha})= (\beta-5)\bar{x}$.

So, the Mean Squared Error, MSE of $\tilde{\alpha}$ is,

$MSE_{\alpha}(\tilde{\alpha})= E(\tilde{\alpha} - \alpha)^2=Var(\tilde{\alpha}) + {Bias^2}_{\alpha}(\tilde{\alpha})$

$= \frac{\sigma^2}{n}+ \bar{x}^2(\beta-5)^2$

[ as, it follows clearly from the model, $y_i \sim N( \alpha +\beta x_i , \sigma^2)$ and $x_i$’s are non-stochastic ] .
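A Monte Carlo check of the MSE formula (the values of $\alpha$, $\beta$, $\sigma$ and the covariates below are illustrative assumptions, not part of the problem):

```python
import random

# Repeatedly simulate the model, compute alpha_tilde = ybar - 5*xbar each
# time, and compare the empirical MSE with sigma^2/n + xbar^2 (beta - 5)^2.
random.seed(4)
alpha, beta, sigma = 2.0, 6.0, 1.0            # assumed parameter values
x = [float(i) for i in range(10)]             # fixed covariates
n = len(x)
x_bar = sum(x) / n

reps = 20_000
sq_err = 0.0
for _ in range(reps):
    y = [alpha + beta * xi + random.gauss(0, sigma) for xi in x]
    alpha_tilde = sum(y) / n - 5 * x_bar      # OLS estimate assuming beta = 5
    sq_err += (alpha_tilde - alpha) ** 2

mse_hat = sq_err / reps
mse_formula = sigma**2 / n + x_bar**2 * (beta - 5) ** 2
print(mse_hat, mse_formula)
```

Since $\beta \ne 5$ here, the bias term dominates the MSE, which the simulation makes visible.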

(b) The last part follows directly from the note I provided at the end of part (a):

that is, $y_i \sim N( \alpha + \beta x_i , \sigma^2 )$, and we have to find the maximum likelihood estimators of $\sigma^2$ and $\beta$ and then use the invariance property of the MLE (in the MSE obtained in (a)). I leave it as an exercise !! Finish it yourself !

## Food For Thought

Suppose you don’t even know the value of $\beta$. What will be the MSE of $\tilde{\alpha}$ in that case ?

Also, find the OLS estimate of $\beta$; you have already done it for $\alpha$, so now find the MLEs of both $\alpha$ and $\beta$. Are the OLS estimates identical to the MLEs you obtained ? Which assumption induces this coincidence ?? What do you think !!


## ISI MStat PSB 2004 Problem 7 | Finding the Distribution of a Random Variable

This is a very beautiful sample problem from ISI MStat PSB 2004 Problem 7 based on finding the distribution of a random variable . Let’s give it a try !!

## Problem– ISI MStat PSB 2004 Problem 7

Suppose $X$ has a normal distribution with mean 0 and variance 25. Let $Y$ be an independent random variable taking values $-1$ and $1$ with equal probability. Define $S=X Y+\frac{X}{Y}$ and $T=X Y-\frac{X}{Y}$.
(a) Find the probability distribution of $S$.
(b) Find the probability distribution of $(\frac{S+T}{10})^{2}$.

### Prerequisites

Cumulative Distribution Function

Normal distribution

## Solution :

(a) We can write $S = \begin{cases} 2X & \text{if } Y=1 \\ -2X & \text{if } Y=-1 \end{cases}$

Let Cumulative distribution function of S be denoted by $F_{S}(s)$ . Then ,

$F_{S}(s) = P(S \le s) = P(S \le s | Y=1)P(Y=1) + P(S \le s| Y=-1)P(Y=-1) = P(2X \le s) \times \frac{1}{2} + P(-2X \le s) \times \frac{1}{2}$ —-(1)

Here it is given that $Y$ takes the values 1 and $-1$ with equal probabilities, so $P(Y=1)=P(Y=-1)= \frac{1}{2}$.

Now as $X \sim N(0, 5^2)$, the distribution of $X$ is symmetric about 0. Thus $X$ and $-X$ are identically distributed.

Thus from (1) we get $F_{S}(s) = P(X \le s/2 ) \times \frac{1}{2} + P(-X \le s/2) \times \frac{1}{2} = P(X \le s/2 ) \times \frac{1}{2} + P(X \le s/2) \times \frac{1}{2}$=$P( X \le s/2) = P(\frac{X-0}{5} \le \frac{s}{10} ) = \Phi(\frac{s}{10})$

Hence $S \sim N(0,{10}^2)$.

(b) Let $K=(\frac{S+T}{10})^{2}$. Since $S+T = (XY+\frac{X}{Y})+(XY-\frac{X}{Y}) = 2XY$ and $Y^2=1$, we get $K = \frac{(2XY)^2}{100} = \frac{X^2}{25} = (\frac{X}{5})^2$.

Now $X \sim N(0,5^2)$, so $\frac{X}{5} \sim N(0,1)$ and hence $K \sim \chi^2_1$, the chi-square distribution with 1 degree of freedom. Explicitly, for $k \ge 0$ the C.D.F. of $K$ is

$F_{K}(k) = P( \frac{X^2}{25} \le k ) = P( -5\sqrt{k} \le X \le 5\sqrt{k} ) = \Phi(\sqrt{k}) - \Phi(-\sqrt{k}) = 2\Phi(\sqrt{k})-1$,

and $F_{K}(k)=0$ for $k<0$. (Note that $K$ does not depend on $Y$, so no conditioning on $Y$ is needed.)
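A simulation cross-check computed directly from the definitions of $S$ and $T$ (illustrative only): the sample standard deviation of $S$ should be close to 10, and the sample mean of $K=(\frac{S+T}{10})^2$ should be close to 1, the mean of a $\chi^2_1$ random variable.

```python
import random
from statistics import fmean, stdev

# Simulate (X, Y), build S and T from their definitions, and check the
# derived distributions: S ~ N(0, 10^2), K = ((S+T)/10)^2 ~ chi-square(1).
random.seed(5)
n = 100_000
s_vals, k_vals = [], []
for _ in range(n):
    x = random.gauss(0, 5)            # X ~ N(0, 25)
    y = random.choice([-1, 1])        # Y = +-1 with equal probability
    s = x * y + x / y
    t = x * y - x / y
    s_vals.append(s)
    k_vals.append(((s + t) / 10) ** 2)

print(stdev(s_vals), fmean(k_vals))   # should be close to 10 and 1
```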

## Food For Thought

Find the distribution of T .