ISI MStat PSB 2014 Problem 9 | Hypothesis Testing

This is another beautiful sample problem from ISI MStat PSB 2014 Problem 9. It is based on testing a simple hypothesis, but it reveals and uses a very cute property of the Geometric distribution, which I prefer calling a sister of the loss of memory property. Give it a try!

Problem- ISI MStat PSB 2014 Problem 9


Let \( X_1 \sim Geo(p_1)\) and \( X_2 \sim Geo(p_2)\) be independent random variables, where \(Geo(p)\) refers to the Geometric distribution whose p.m.f. \(f\) is given by,

\(f(k)=p(1-p)^k, \quad k=0,1,\ldots\)

We are interested in testing the null hypothesis \(H_o : p_1=p_2\) against the alternative \( H_1: p_1<p_2\). Intuitively it is clear that we should reject if \(X_1\) is large, but unfortunately, we cannot compute the cut-off because the distribution of \(X_1\) under \(H_o\) depends on the unknown (common) value of \(p_1\) and \(p_2\).

(a) Let \(Y= X_1 +X_2\). Find the conditional distribution of \( X_1|Y=y\) when \(p_1=p_2\).

(b) Based on the result obtained in (a), derive a level 0.05 test for \(H_o\) against \(H_1\) that rejects when \(X_1\) is large.

Prerequisites


Geometric Distribution.

Negative binomial distribution.

Discrete Uniform distribution.

Conditional Distribution.

Simple Hypothesis Testing.

Solution :

Well, Part (a) is quite easy, but interesting and elegant, so I'm leaving it as an exercise for you to enjoy. Hint: verify whether the required distribution is Discrete Uniform or not! Once you are done, proceed.

Now, part (b) is even more interesting, because here we will not analyze the distributions of \(X_1\) and \(X_2\) in the conventional way; instead we will concentrate on the conditional distribution of \(X_1 \mid Y=y\)! But why?

There are two reasons behind this change of strategy. One of them is already given in the question itself, but the other is more interesting to observe: if you are done with (a), then by now you have found that the conditional distribution of \(X_1 \mid Y=y\) is free of any parameter (i.e. the distribution of \(X_1\) loses all information about the parameter \(p_1\) once we condition on \(Y=y\), with \(p_1=p_2\) being a necessary condition for this), and this parameter-free conditional distribution is nothing but the Discrete Uniform on \(\{0,1,\ldots,y\}\), where \(y\) is the observed sum of \(X_1\) and \(X_2\).

So, under \(H_o: p_1=p_2\), the distribution of \(X_1 \mid Y=y\) does not depend on the common value of \(p_1\) and \(p_2\). And clearly, as stated in the problem itself, it is intuitively understandable that a large value of \(X_1\) exhibits evidence against \(H_o\): if a large value of \(X_1\) is realized, the success does not come very often, i.e. \(p_1\) is smaller.

So, there will be strong evidence against \(H_o\) if \(X_1 > c\) for some constant \(c \le y\), where \(y\) is the observed value of the sum \(X_1+X_2\).

So, for a level 0.05 test, the test will reject \(H_o\) for values of \(X_1\) larger than a cut-off \(c\) chosen such that,

\( P_{H_o}( X_1 > c| Y=y)=0.05 \Rightarrow \frac{y-c}{y+1} = 0.05 \Rightarrow c= 0.95 y - 0.05 .\)

So, we reject \(H_o\) at level 0.05 when we observe \( X_1 > 0.95y - 0.05 \), where \(y\) is the observed value of \(X_1+X_2\). That's it!
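If you would like a quick numerical sanity check of both parts, here is a minimal simulation sketch in Python (in the same spirit as the simulation used later in this post). The common success probability \(p = 0.1\) and the conditioning value \(y_0 = 19\) are illustrative choices only; \(y_0 = 19\) is convenient because \(y_0 + 1 = 20\), so the 0.05 cut-off is exact there.

import numpy as np

rng = np.random.default_rng(0)
p, n_sim = 0.1, 500000           # illustrative common value of p1 = p2 under H_o

# Geo(p) on {0, 1, 2, ...}: number of failures before the first success
x1 = rng.geometric(p, n_sim) - 1
x2 = rng.geometric(p, n_sim) - 1
y = x1 + x2

y0 = 19                          # y0 + 1 = 20, so a 1/20 = 0.05 cut-off is exact
cond = x1[y == y0]

# Part (a): X1 | Y = y0 should be Uniform{0, ..., y0}, each mass close to 1/20
print(np.bincount(cond, minlength=y0 + 1) / len(cond))

# Part (b): reject H_o when X1 > 0.95*y0 - 0.05 = 18; conditional size close to 0.05
print(np.mean(cond > 0.95 * y0 - 0.05))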


Food For Thought

Can you show that for this same \(X_1 \) and \( X_2\) ,

\(P(X_1 \le n)- P( X_1+X_2 \le n)= \frac{1-p}{p}P(X_1+X_2= n) \)

considering \(p_1=p_2=p\), where \(n=0,1,\ldots\). What about the converse? Does it hold? Find out!
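Here is a quick numerical check of the stated identity (not a proof, of course). The values of \(p\) and \(n\) below are arbitrary illustrative choices, and \(X_1+X_2\) is treated as a Negative Binomial with \(r=2\).

def pmf_sum(k, p):               # P(X1 + X2 = k): negative binomial with r = 2
  return (k + 1) * p**2 * (1 - p)**k

def cdf_geo(n, p):               # P(X1 <= n) for Geo(p) on {0, 1, 2, ...}
  return 1 - (1 - p)**(n + 1)

def cdf_sum(n, p):               # P(X1 + X2 <= n)
  return sum(pmf_sum(k, p) for k in range(n + 1))

for p in (0.2, 0.5, 0.8):
  for n in (0, 3, 10):
    lhs = cdf_geo(n, p) - cdf_sum(n, p)
    rhs = (1 - p) / p * pmf_sum(n, p)
    print(p, n, round(lhs, 12), round(rhs, 12))   # the two columns agree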

But avoid losing memory; its beauty is exclusive to the Geometric (and Exponential) distributions!




Useless Data, Conditional Probability, and Independence | Cheenta Probability Series

This concept of independence, conditional probability and information contained always fascinated me. I have thus shared some thoughts upon this.

When do you think some data is useless?

Some data/ information is useless if it has no role in understanding the hypothesis we are interested in.

We are interested in understanding the following problem.

\(X\) is some event. \(Y\) is another event. How much information do \(Y\) and \(X\) give about each other?

We can model an event by a random variable. So, let's reframe the problem as follows.

\(X\) and \(Y\) are two random variables. How much information do \(Y\) and \(X\) give about each other?

There is something called entropy, but I will not go into that; rather, I will give a probabilistic view only. Conditional probability marches in here. We have to use the idea that we have used the information of \(Y\), i.e. conditioned on \(Y\). Hence, we will see how \(X \mid Y\) behaves.

How does \( X \mid Y\) behave? If \(Y\) has any effect on \(X\), then \(X \mid Y\) would have changed right?

But, if \(Y\) has no effect on \(X\), then \(X \mid Y\) will not change and will remain the same as \(X\). Mathematically, it means

\( X \mid Y \sim X \iff X \perp \!\!\! \perp Y \)

We cannot distinguish between the initial and the final even after conditioning on \(Y\).

Theorem

\(X\) and \(Y\) are independent \( \iff \) \( f(x,y) = P(X =x \mid Y = y) \) is only a function of \(x\).

Proof

\( \Rightarrow\)

\(X\) and \(Y\) are independent \( \Rightarrow \) \( f(x,y) = P(X =x \mid Y = y) = P(X = x)\) is only a function of \(x\).

\( \Leftarrow \)

Let \( \Omega \) be the support of \(Y\).

\( P(X =x \mid Y = y) = g(x) \Rightarrow \)

\( P(X=x) = \int_{\Omega} P(X =x \mid Y = y) \cdot P(Y = y)\, dy \)

\(= g(x) \int_{\Omega} P(Y = y)\, dy = g(x) = P(X =x \mid Y = y) \)

(Here the integral over the support is to be read as a sum when \(Y\) is discrete.)
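A small numerical illustration of the theorem may help (the joint p.m.f. below is made up purely for illustration): if every column of the conditional table \(P(X = x \mid Y = y)\) is the same function of \(x\), then the joint distribution factorizes.

import numpy as np

g = np.array([0.2, 0.5, 0.3])          # a function of x only: g(x)
py = np.array([0.1, 0.6, 0.3])         # marginal of Y
joint = np.outer(g, py)                # P(X = x, Y = y) = g(x) * P(Y = y)

cond = joint / joint.sum(axis=0)       # column y holds P(X = x | Y = y)
print(cond)                            # every column equals g, a function of x alone

marg_x = joint.sum(axis=1)
marg_y = joint.sum(axis=0)
print(np.allclose(joint, np.outer(marg_x, marg_y)))   # True: X and Y are independent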

Exercises

  1. \((X,Y)\) is a bivariate standard normal with \( \rho = 0.5\) then \( 2X - Y \perp \!\!\! \perp Y\).
  2. \(X, Y, V, W\) are independent standard normal, then \( \frac{VX + WY}{\sqrt{V^2+W^2}} \perp \!\!\! \perp (V,W) \).

Random Thoughts (?)

How to quantify the amount of information contained by a random variable in another random variable?

Information contained in \(X\) = Entropy of the random variable, \(H(X)\), defined by \( H(X) = E(-\log P(X)) \).

Now define the information of \(Y\) contained in \(X\) as \(\mid H(X) - H(X|Y) \mid\).

Thus, it turns out that \(H(X) - H(X|Y) = E_{(X,Y)} \left(\log\frac{P(X \mid Y)}{P(X)}\right) = H(Y) - H(Y|X) = D(X,Y)\).

\(D(X,Y)\) = Amount of information contained in \(X\) and \(Y\) about each other.
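As a concrete (made-up) example, the following sketch computes \(H(X)\), \(H(X \mid Y)\) and the two differences for a small joint p.m.f., using the chain rule \(H(X,Y) = H(Y) + H(X \mid Y)\); both differences come out equal, as claimed.

import numpy as np

joint = np.array([[0.10, 0.20, 0.05],
                  [0.15, 0.30, 0.20]])    # rows: values of X, columns: values of Y

px, py = joint.sum(axis=1), joint.sum(axis=0)

def entropy(p):
  p = p[p > 0]
  return -np.sum(p * np.log(p))

H_X, H_Y, H_XY = entropy(px), entropy(py), entropy(joint.ravel())
H_X_given_Y = H_XY - H_Y                  # chain rule: H(X, Y) = H(Y) + H(X | Y)
H_Y_given_X = H_XY - H_X

print(H_X - H_X_given_Y, H_Y - H_Y_given_X)   # equal: this common value is D(X, Y)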

Exercise

Note: This is just a mental construction I did, and I am not sure whether this measure of information exists in the literature. But I hope I have been able to share some statistical wisdom with you, and I believe this is a natural construction, given the properties it satisfies. It will be helpful if you get hold of some existing literature and share it with me in the comments.


ISI MStat PSA 2019 Problem 18 | Probability and Digits

This problem is a very easy and cute problem of probability from ISI MStat PSA 2019 Problem 18.

Probability and Digits - ISI MStat Year 2019 PSA Problem 18


Draw one observation \(N\) at random from the set \(\{1,2, \ldots, 100\}\). What is the probability that the last digit of \(N^{2}\) is \(1\)?

  • \(\frac{1}{20}\)
  • \(\frac{1}{50}\)
  • \(\frac{1}{10}\)
  • \(\frac{1}{5}\)

Prerequisites


Last Digit of Natural Numbers

Basic Probability Theory

Combinatorics

Check the Answer


Answer: \(\frac{1}{5}\)

ISI MStat 2019 PSA Problem Number 18

A First Course in Probability by Sheldon Ross

Try with Hints


Try to formulate the sample space. Observe that the sample space does not depend on the number itself, but only on the last digit of the number \(N\).

Also, observe that the integers in \(\{1,2, \ldots, 100\}\) are uniformly distributed over the ten possible last digits (each last digit occurs exactly 10 times). So the sample space can be taken as \(\{0,1,2, \ldots, 9\}\), and the number of elements in the sample space is \(10\).

See the Food for Thought!

This step is easy.

Find out the cases for which \(N^2\) gives 1 as the last digit. Use the reduced last digit sample space.

  • 1 x 1 (valid: both factors have the same last digit)
  • 3 x 7 (ruled out: for \(N^2\) both factors must have the same last digit)
  • 7 x 3 (ruled out for the same reason)
  • 9 x 9 (valid)

So, only the last digits 1 and 9 work, i.e. there are 2 possible cases out of 10.

Therefore the probability = \( \frac{2}{10} = \frac{1}{5}\).
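If you want to double-check without the reduction, a brute force over \(\{1, 2, \ldots, 100\}\) also gives \(\frac{1}{5}\):

count = sum(1 for n in range(1, 101) if (n * n) % 10 == 1)
print(count, count / 100)   # 20 and 0.2, i.e. probability 1/5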

  • Observe that there is a little bit of handwaving in the First Step. Please make it precise, using the ideas of probability, why it is okay to use the reduced sample space rather than \(\{1,2, \ldots, 100\}\).
  • Generalize the problem for \(\{1,2, \ldots, n\}\).
  • Generalize the problems for \(N^k\) for selecting an observation from \(\{1,2, \ldots, n\}\).
  • Generalize the problems for \(N^k\) for selecting an observation from \(\{1,2, \ldots, n\}\) for each of the digits from \(\{0,1,2, \ldots, 9\}\).


Size, Power, and Condition | ISI MStat 2019 PSB Problem 9

This is a problem from the ISI MStat Entrance Examination, 2019. It primarily tests one's familiarity with the size and power of a test, and one's ability to condition on an event properly.

The Problem:

Let Z be a random variable with probability density function

\( f(z)=\frac{1}{2} e^{-|z- \mu|} , z \in \mathbb{R} \) with parameter \( \mu \in \mathbb{R} \). Suppose, we observe \(X = \) max \( (0,Z) \).

(a) Find the constant \(c\) such that the test that "rejects when \( X>c \)" has size 0.05 for the null hypothesis \(H_0 : \mu=0 \).

(b) Find the power of this test against the alternative hypothesis \(H_1: \mu =2 \).

Prerequisites:

And believe me, as Joe Blitzstein says: "Conditioning is the soul of statistics".

Solution:

(a) If you know what the size of a test means, then you can easily write down the condition mentioned in part (a) in mathematical terms.

It simply means \( P_{H_0}(X>c)=0.05 \)

Now, under \( H_0 \), \( \mu=0 \).

So, we have the pdf of Z as \( f(z)=\frac{1}{2} e^{-|z|} \)

As the support of \(Z\) is \( \mathbb{R} \), we can partition it into \( \{Z \ge 0\}\) and \(\{Z <0 \} \).

Now, let's condition based on this partition. So, we have:

\( P_{H_0}(X > c)=P_{H_0}(X>c , Z \ge 0)+ P_{H_0}(X>c, Z<0) =P_{H_0}(X>c , Z \ge 0) =P_{H_0}(Z > c) \)

Do you understand the last equality? (Try to convince yourself why: if \(Z < 0\), then \(X = \max(0, Z) = 0\), so \(X > c\) is impossible for \(c > 0\).)

So, \( P_{H_0}(X >c)=P_{H_0}(Z > c)=\int_{c}^{\infty} \frac{1}{2} e^{-|z|} dz = \frac{1}{2}e^{-c} \)

Equating \(\frac{1}{2}e^{-c} \) with 0.05, we get \( c= \ln{10} \)

(b) The second part is just a calculation, given that you already know the value of \(c\).

Power of test against \(H_1 \) is given by:

\(P_{H_1}(X>\ln{10})=P_{H_1}(Z > \ln{10})=\int_{\ln{10}}^{\infty} \frac{1}{2} e^{-|z-2|} dz = \frac{e^2}{20} \)
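A quick Monte Carlo sanity check of both parts is sketched below (the sample size is an arbitrary illustrative choice); it uses the fact that the given density is the Laplace density with location \(\mu\) and scale 1.

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
c = np.log(10)

# Under H0: mu = 0, X = max(0, Z); the size P(X > c) should be close to 0.05
z0 = rng.laplace(loc=0.0, scale=1.0, size=n)
print(np.mean(np.maximum(0, z0) > c))

# Under H1: mu = 2; the power P(X > c) should be close to e^2 / 20
z1 = rng.laplace(loc=2.0, scale=1.0, size=n)
print(np.mean(np.maximum(0, z1) > c), np.exp(2) / 20)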

Try out this one:

The pdf occurring in this problem is an example of a Laplace distribution. Look it up on the internet if you are not aware of it, and go through its properties.

Suppose you have a random variable V which follows Exponential Distribution with mean 1.

Let I be a Bernoulli(\(\frac{1}{2} \)) random variable. It is given that I,V are independent.

Can you find a function h (which is also a random variable), \(h=h(I,V) \) ( a continuous function of I and V) such that h has the standard Laplace distribution?

Conditions and Chance | ISI MStat 2018 PSB Problem 5

This problem is a cute application of joint distributions and conditional probability. This is Problem 5 from ISI MStat 2018 PSB.

Problem

Suppose \(X_{1}\) and \(X_{2}\) are identically distributed random variables, not necessarily independent, taking values in \(\{1,2\}\). If \(\mathrm{E}\left(X_{1} X_{2}\right)= \frac{7}{3} \) and \(\mathrm{E}\left(X_{1}\right) = \frac{3}{2},\) obtain the joint distribution of \(\left(X_{1}, X_{2}\right)\).

Prerequisites

Solution

This problem is mainly about crunching the algebra of the given conditions to get some good equations that let you easily trail your path to the solution.

Usually, we go forward from the distributions of \(X_1\) and \(X_2\) to the distribution of \((X_1, X_2)\). But here we will go backward, from the distribution of \((X_1, X_2)\) to \(X_1\), \(X_2\) and \(X_1X_2\), with the help of conditional probability.

Write \(p_{ij} = P(X_1 = i, X_2 = j)\) for \(i, j \in \{1, 2\}\); these four numbers describe the joint distribution, and conditional probability will help us pin them down.

Now, observe \(p_{21} = p_{12}\) because \(X_1\) and \(X_2\) are identically distributed.

Let's calculate the following:

\(P(X_1 = 1 )= p_{11} + p_{12} = P(X_2 = 1)\)

\(P(X_1 = 2) = p_{12} + p_{22} = P(X_2 = 2)\)

\(E(X_1) = p_{11} + 3p_{12} + 2p_{22} = \frac{3}{2}\)

Now, \(X_1X_2\) can take values {\(1, 2, 4\)}.

\(X_1 = 1, X_2 = 1 \iff X_1X_2 = 1\) \( \Rightarrow P(X_1X_2 = 1) = p_{11}\).

\(X_1 = 2, X_2 = 2 \iff X_1X_2 = 4\) \( \Rightarrow P(X_1X_2 = 4) = p_{22}\).

\(X_1 = 1, X_2 = 2\) or \(X_1 = 2, X_2 = 1 \iff X_1X_2 = 2\) \( \Rightarrow P(X_1X_2 = 2) = 2p_{12}\).

\(E(X_1X_2) = p_{11} + 4p_{12} + 4p_{22} = \frac{7}{3}\).

Now, we need another condition; do you see it? The total probability must be 1:

\(p_{11} + 2p_{12} + p_{22} = 1\).

Now, you can solve it easily to get the solutions \( p_{11} = \frac{1}{3}, p_{12} = \frac{1}{6}, p_{22} =\frac{1}{3} \).
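For the record, the three conditions form a small linear system in \((p_{11}, p_{12}, p_{22})\) that can be solved mechanically; here is a minimal sketch.

import numpy as np

# E(X1):        p11 + 3*p12 + 2*p22 = 3/2
# E(X1*X2):     p11 + 4*p12 + 4*p22 = 7/3
# total mass:   p11 + 2*p12 +   p22 = 1
A = np.array([[1, 3, 2],
              [1, 4, 4],
              [1, 2, 1]], dtype=float)
b = np.array([3/2, 7/3, 1])
p11, p12, p22 = np.linalg.solve(A, b)
print(p11, p12, p22)   # 1/3, 1/6, 1/3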

Food for Thought

Now, what do you think: how many expectation values will be required if \(X_1\) and \(X_2\) take values in \(\{1, 2, 3\}\)?

What if \(X_1\) and \(X_2\) take values in \(\{1, 2, 3, 4, \ldots, n\}\)?

What if there are \(X_1, X_2, ...., X_n\) taking values in {\(1, 2, 3, 4, ..., m\)}?

This is just another beautiful counting problem.

Enjoy and Stay Tuned!

Application of Cauchy Functional Equations | ISI MStat 2019 PSB Problem 4

This problem is a beautiful application of probability theory and Cauchy functional equations. It is from ISI MStat 2019 PSB Problem 4.

Problem - Application of Cauchy Functional Equations

Let \(X\) and \(Y\) be independent and identically distributed random variables with mean \(\mu>0\) and taking values in {\(0,1,2, \ldots\)}. Suppose, for all \(m \geq 0\)
$$
\mathrm{P}(X=k | X+Y=m)=\frac{1}{m+1}, \quad k=0,1, \ldots, m
$$
Find the distribution of \(X\) in terms of \(\mu\).

Prerequisites

Solution

Let \( P(X =i) = p_i\), where \(\sum_{i=0}^{\infty} p_i = 1\). Now, let's calculate \(P(X+Y = m)\).

$$P(X+Y = m) = \sum_{i=0}^{m} P(X+Y = m, X = i) = \sum_{i=0}^{m} P(Y = m-i, X = i) = \sum_{i=0}^{m} p_ip_{m-i}$$.

$$P( X = k|X+Y = m) = \frac{P( X = k, X+Y = m)}{P(X+Y = m)} = \frac{P( X = k, Y = m-k)}{\sum_{i=0}^{m} p_ip_{m-i}} = \frac{p_kp_{m-k}}{\sum_{i=0}^{m} p_ip_{m-i}} = \frac{1}{m+1}$$.

Hence,$$ \forall m \geq 0, p_0p_m =p_1p_{m-1} = \dots = p_mp_0$$.

Thus, we get the following set of equations.

$$ p_0p_2 = p_1^2$$ $$ p_0p_3 = p_1p_2$$ Hence, by the third prerequisite, \(p_0, p_1, p_2, p_3\) are in geometric progression.

Observe that as a result we get \( p_1p_3 =p_2^2 \). Next in line is:

$$ p_1p_4 = p_2p_3.$$ Thus, in the same way, we get that \(p_1, p_2, p_3, p_4\) are in geometric progression.

Hence, by induction, we will get that \(p_k; k \geq 0\) form a geometric progression.

This is only possible if \(X, Y\) ~ Geom(\( p\)). We need to find \(p\) now, but here \(X\) counts the number of failures, and \(p\) is the probability of success.

So, \(E(X) = \frac{1-p}{p} = \mu \Rightarrow p = \frac{1}{\mu +1}\).
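As a quick sanity check (with illustrative values of \(p\) and \(m\)), the geometric p.m.f. \(p_k = p(1-p)^k\) indeed makes the conditional distribution uniform on \(\{0, \ldots, m\}\), and \(p = \frac{1}{\mu + 1}\) recovers the success probability from the mean.

p, m = 0.25, 6                      # illustrative choices
pk = lambda k: p * (1 - p)**k       # Geo(p) pmf on {0, 1, 2, ...}

denom = sum(pk(i) * pk(m - i) for i in range(m + 1))
print([round(pk(k) * pk(m - k) / denom, 6) for k in range(m + 1)])   # all equal 1/(m+1)

mu = (1 - p) / p
print(1 / (mu + 1))                 # recovers p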

Challenge Problem

So, can you guess what its continuous version will be?

It will be the exponential distribution. Prove it. But, what exactly is governing this exponential structure? What is the intuition behind it?

The Underlying Mathematics and Intuition

Observe that the obtained condition

$$ \forall m \geq 0, p_0p_m =p_1p_{m-1} = \dots = p_mp_0.$$ can be written as follows

Find all functions \(f: \mathbb{N}_0 \to [0, \infty)\) such that \(f(m)f(n) = f(m+n)\), together with the restriction that the corresponding probabilities sum to 1.

The only solution to this is the geometric progression structure. This is a variant of the Cauchy functional equation. For the continuous case, it gives the exponential distribution.

Essentially, this is the functional equation that arises, if you march along to prove that the Geometric Random Variable is the only discrete distribution with the memoryless property.

Stay Tuned!

Elchanan Mossel's Dice Paradox | ISI MStat 2018 PSB Problem 6

This problem from ISI MStat 2018 PSB (Problem 6) is known as Elchanan Mossel's Dice Paradox. The problem has a paradoxical nature, but there is always a way out.

Problem

A fair 6-sided die is rolled repeatedly until a 6 is obtained. Find the expected number of rolls conditioned on the event that none of the rolls yielded an odd number.

Prerequisites

Solution

The Wrong Solution

Let \(X_{1}, X_{2}, \cdots\) be the throws of a die. Let
$$
T=\min\{n: X_{n}=6\}
$$

Then \(T\) ~ Geo(\(p =\frac{1}{6}\))

But, here it is given that none of the rolls are odd numbers. So,

$$
T=\min\{n: X_{n}=6 \mid X_n \text{ even}\} = \min\{n: X_{n}=6 \mid X_n \in \{2, 4, 6\}\}
$$

Then \(T\) ~ Geo(\(p =\frac{1}{3}\)), since there are three possibilities in the reduced (conditional) sample space.

So, \(E(T) =3\).

Obviously, this is false. But you are not getting why it is false right? Scroll Down!

Where did it go wrong?

It went wrong in reading the given condition of the problem. Observe that the condition is that none of the rolls is odd up to the roll at which you got the 6; it is a condition on the entire (random-length) run of rolls, not a restriction imposed on each roll separately.

So, $$
{T}=\min\{{n: X_{n}=6} | X_n = \text{even}, n \leq T\} = \min\{{n: X_{n}=6} | X_n = \{2, 4, 6\}, n \leq T\}
$$

So, essentially, the sample space doesn't get reduced for every roll; it gets reduced only up to that point of the roll. This is where the paradox marches in.

We are tempted to think of the experiment as rolling a die that shows only \( \{ 2, 4, 6\} \). No!

The Elegant One Liner Solution

The idea, as with every elegant solution, is to think from a different perspective. Let's reconstruct the experiment as follows. Remember, we need to exclude the odd numbers, so just throw them away and start anew.

Idea

If you get an outcome in \(\{1, 3, 5\}\), start counting the number of rolls again from the beginning, and stop when you get a 6. This is exactly the waiting time to get a 6 without any odd number appearing before it. The key observation below is that the relevant stopping time is the wait for an outcome in \(\{1, 3, 5, 6\}\), i.e. anything outside \(\{2, 4\}\).

Mathematical Form

Let \(\tau\) be the time required to get an outcome outside \(\{2,4\}\). Then \(E(\tau \mid X_{\tau}=j)\) is the same for every \(j \in \{1,3,5,6\}\), because the identity of the first outcome outside \(\{2,4\}\) is equally likely to be any of \(1,3,5,6\), independently of how long it took to appear. Thus, by the smoothing property, \(E\left(\tau \mid X_{\tau}=j\right)=E(\tau)\). In particular, the quantity we want is \(E(T \mid \text{all rolls up to } T \text{ are even}) = E(\tau \mid X_{\tau}=6) = E(\tau)\).

Observe, \(\tau\) ~ Geo( \( p = \frac{4}{6}\)). Hence, \( E(\tau) = \frac{3}{2}\).

The Bigger Bash Solution

\(T=\min \{n: X_{n}=6\}\)
We need to calculate \( \mathbb{E}(T | X_{1}, \cdots, X_T \in \{2,4,6\})\).

For that we need to find the conditional probabilities \(\mathrm{P}\left(T=k \mid X_{1}, \cdots, X_{T} \in \{2,4,6\}\right)\), and that is given by
$$
\frac{\mathrm{P}\left(\mathrm{T}=\mathrm{k} \cap\left(\mathrm{X}_{1}, \cdots, \mathrm{X}_{\mathrm{T}} \in \{2,4,6\}\right)\right)}{\mathrm{P}\left(\mathrm{X}_{1}, \cdots, \mathrm{X}_{\mathrm{T}} \in \{2,4,6\} \right)}=\frac{\mathrm{P}\left(X_{\mathrm{k}}=6, \mathrm{X}_{1}, \cdots, \mathrm{X}_{\mathrm{k}-1} \in \{2,4\} \right)}{\mathrm{P}\left(\mathrm{X}_{1}, \cdots, \mathrm{X}_{\mathrm{T}} \in \{2,4,6\} \right)}=\frac{1}{6}\left(\frac{1}{3}\right)^{\mathrm{k}-1} \frac{1}{\alpha}
$$
where \(\alpha=\mathrm{P}\left(\mathrm{X}_{1}, \cdots, \mathrm{X}_{\mathrm{T}} \in \{2,4,6\} \right)\) . Thus \(\mathrm{T} |\left(\mathrm{X}_{1}, \cdots, \mathrm{X}_{\mathrm{T}} \in \{2,4,6\} \right)\) follows a geometric distribution with parameter \(\frac{2}{3}\) and consequently its expectation is \(\frac{3}{2}\).
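Before the Monte Carlo simulation below, here is a short exact-ish check of this computation: truncating the geometric series at a large K (the tail is negligible), \(\alpha\) comes out as \(\frac{1}{4}\) and the conditional expectation as \(\frac{3}{2}\).

K = 200                                             # truncation point; the tail is negligible
terms = [(1 / 6) * (1 / 3)**(k - 1) for k in range(1, K + 1)]
alpha = sum(terms)                                  # P(X_1, ..., X_T all in {2, 4, 6})
print(alpha)                                        # approx 1/4
print(sum(k * t for k, t in zip(range(1, K + 1), terms)) / alpha)   # approx 3/2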

Stay Tuned! Stay Blessed!

Click here for Detailed Discussion

Simulation in Python

import random

times = 0 #number of times a successful (all-even) sequence was rolled
rolls = 0 #total of all number of rolls it took to get a 6, on successful sequences
curr = 0 #rolls (even, non-6) since the last 6 in the current sequence
alleven = True #whether the current sequence has avoided odd rolls so far

for x in range(0, 100000):

  num = random.randint(1,6)
  if num % 2 != 0:
    alleven = False
  else:
    if num == 6:
      if alleven:
        times += 1
        rolls += curr + 1
      curr = 0
      alleven = True
    else:
      curr += 1

print(rolls * 1.0 / times)
#1.51506456241

Source: Mathematics Stack Exchange

Stay Tuned! Stay Blessed!

Intertwined Conditional Probability | ISI MStat 2016 PSB Problem 4

This is an interesting problem intertwining conditional probability and Bernoulli random variables, which gives a sweet and sour taste to Problem 4 of ISI MStat 2016 PSB.

Problem

Let \(X, Y,\) and \(Z\) be three Bernoulli \(\left(\frac{1}{2}\right)\) random variables such that \(X\) and \(Y\) are independent, \(Y\) and \(Z\) are independent, and \(Z\) and \(X\) are independent.
(a) Show that \(\mathrm{P}(X Y Z=0) \geq \frac{3}{4}\).
(b) Show that if equality holds in (a), then $$
Z=
\begin{cases}
1 & \text { if } X=Y, \\
0 & \text { if } X \neq Y\\
\end{cases}
$$

Prerequisites

Solution

(a)

\( P(XYZ = 0) = P( \{ X = 0\} \cup \{Y = 0\} \cup \{Z = 0\}) \)

$$= P(X = 0) + P(Y = 0) + P(Z= 0) - P({ X = 0} \cap {Y = 0}) - P({Y = 0} \cap {Z= 0}) - P({X = 0} \cap {Z= 0}) + P({X = 0} \cap {Y = 0} \cap {Z= 0}). $$

We use the fact that \(X\) and \(Y\) are independent, \(Y\) and \(Z\) are independent, and \(Z\) and \(X\) are independent.

$$= P(X = 0) + P(Y = 0) + P(Z= 0) - P({ X = 0})P({Y = 0}) - P({Y = 0})P({Z= 0}) - P({X = 0})P({Z= 0}) + P({X = 0},{Y = 0},{Z= 0})$$.

\(X, Y,\) and \(Z\) be three Bernoulli \(\left(\frac{1}{2}\right)\) random variables. Hence,

\( P(XYZ = 0) = \frac{3}{4} + P({X = 0},{Y = 0},{Z= 0}) \geq \frac{3}{4}\).

(b)

\( P(XYZ = 0) = \frac{3}{4} \iff P({X = 0},{Y = 0},{Z= 0}) = 0 \).

Now, this is just a logical game with conditional probability.

\( P({X = 0} |{Y = 0},{Z= 0}) = 0 \Rightarrow P({Z= 0} |{Y = 0},{X = 1}) = 1\), since \(P(X=1, Y=0, Z=0) = P(Y=0, Z=0) - P(X=0, Y=0, Z=0) = \frac{1}{4}\) and \(P(X=1, Y=0) = \frac{1}{4}\) by pairwise independence. The implications below follow in exactly the same way.

\( P({Y = 0} |{X = 0},{Z= 0}) = 0 \Rightarrow P({Z= 0} |{X = 0},{Y = 1}) = 1\).

\( P({Z = 0} |{X = 0},{Y= 0}) = 0 \Rightarrow P({Z = 1} |{X = 0},{Y= 0}) = 1\).

\( P( Z = 0) = P({X = 1},{Y = 0},{Z= 0}) + P({X = 0},{Y = 1},{Z= 0}) + P({X = 1},{Y = 1},{Z= 0}) + P({X = 0},{Y = 0},{Z= 0})\)

\( = \frac{1}{4} + \frac{1}{4} + P({X = 1},{Y = 1},{Z= 0}) \).

Now, \(Z\) is a Bernoulli \(\left(\frac{1}{2}\right)\) random variable. So, \(P(Z = 0) =\frac{1}{2}\) \( \Rightarrow P({X = 1},{Y = 1},{Z= 0}) = 0 \Rightarrow P({Z = 0} | {Y = 1},{X= 1}) = 0 \).

\( P({Z= 0} |{Y = 0},{X = 1}) = 1\).

\(P({Z= 0} |{X = 0},{Y = 1}) = 1\).

\(P({Z = 1} |{X = 0},{Y= 0}) = 1\).

\( P({Z = 1} | {Y = 1},{X= 1}) = 1\).

Hence, $$
Z=
\begin{cases}
1 & \text { if } X=Y, \\
0 & \text { if } X \neq Y\\
\end{cases}
$$.
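A brute-force check over the four equally likely values of \((X, Y)\) confirms that the construction \(Z = 1\) if \(X = Y\) and \(0\) otherwise is Bernoulli\(\left(\frac{1}{2}\right)\), is pairwise independent of \(X\) and of \(Y\), and attains \(P(XYZ = 0) = \frac{3}{4}\); a minimal sketch:

from itertools import product

samples = [(x, y, int(x == y)) for x, y in product((0, 1), repeat=2)]   # each point has prob 1/4

def P(event):
  return sum(event(x, y, z) for x, y, z in samples) / len(samples)

print(P(lambda x, y, z: z == 1))                                        # 0.5, so Z ~ Bernoulli(1/2)
print(P(lambda x, y, z: x == 1 and z == 1), P(lambda x, y, z: x == 1) * P(lambda x, y, z: z == 1))   # 0.25 0.25
print(P(lambda x, y, z: y == 0 and z == 0), P(lambda x, y, z: y == 0) * P(lambda x, y, z: z == 0))   # 0.25 0.25
print(P(lambda x, y, z: x * y * z == 0))                                # 0.75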