This problem is a beautiful example of a case where the maximum likelihood estimator coincides with the method of moments estimator. In fact, it suggests a more general question: when exactly are they equal? This is from ISI MStat 2016 PSB Problem 7. Stay tuned.

Problem

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be independent and identically distributed random variables, each distributed as \(X\), with probability mass function
$$
f(x ; \theta)=\frac{x \theta^{x}}{h(\theta)} \quad \text { for } x=1,2,3, \dots
$$
where \(0<\theta<1\) is an unknown parameter and \(h(\theta)\) is a function of \(\theta\). Show that the maximum likelihood estimator of \(\theta\) is also a method of moments estimator.

Solution

This \(h(\theta)\) looks really irritating.

So, let us first find \( h(\theta) \).

\( \sum_{x = 1}^{\infty} f(x ; \theta) = \sum_{x = 1}^{\infty} \frac{x \theta^{x}}{h(\theta)} = 1 \)

\( \Rightarrow h(\theta) = \sum_{x = 1}^{\infty} {x \theta^{x}} \)

\( \Rightarrow (1 - \theta) \times h(\theta) = \sum_{x = 1}^{\infty} {\theta^{x}} = \frac{\theta}{1 - \theta} \Rightarrow h(\theta) = \frac{\theta}{(1 - \theta)^2}\).
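
In case that step feels too quick, one way to see it is to subtract the two series term by term and let them telescope:

$$
(1 - \theta) h(\theta) = \sum_{x = 1}^{\infty} x \theta^{x} - \sum_{x = 1}^{\infty} x \theta^{x+1} = \theta + \sum_{x = 2}^{\infty} \big( x - (x-1) \big) \theta^{x} = \sum_{x = 1}^{\infty} \theta^{x}.
$$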

Maximum Likelihood Estimator of \(\theta\)

\( L(\theta)=\prod_{i=1}^{n} f\left(x_{i} | \theta\right) \)

\( l(\theta) = log(L(\theta)) = \sum_{i=1}^{n} \log \left(f\left(x_{i} | \theta\right)\right) \)

Note: All terms not involving \( \theta \) are absorbed into a constant (\(c\)).

\( \Rightarrow l(\theta) = c + n\bar{X}\log(\theta) - n\log(h(\theta)) \)
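
Spelled out, with \( f(x_{i} | \theta) = \frac{x_{i} \theta^{x_{i}}}{h(\theta)} \), this is just

$$
l(\theta) = \sum_{i=1}^{n} \left[ \log x_{i} + x_{i} \log \theta - \log h(\theta) \right], \quad \text{with } c = \sum_{i=1}^{n} \log x_{i}.
$$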

\( l^{\prime}(\theta) = 0 \overset{Check!}{\Rightarrow} \hat{\theta}_{mle} = \frac{\bar{X} -1}{\bar{X} +1}\)
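
Here is that check, sketched out: since \( h(\theta) = \frac{\theta}{(1-\theta)^{2}} \), we have \( \log h(\theta) = \log\theta - 2\log(1-\theta) \), so

$$
l^{\prime}(\theta) = \frac{n\bar{X}}{\theta} - n\left( \frac{1}{\theta} + \frac{2}{1-\theta} \right) = 0 \;\Rightarrow\; \frac{\bar{X} - 1}{\theta} = \frac{2}{1-\theta} \;\Rightarrow\; \hat{\theta}_{mle} = \frac{\bar{X} - 1}{\bar{X} + 1}.
$$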

Method of Moments Estimator

We need to know \( E(X) \).

\( E(X) = \sum_{x = 1}^{\infty} xf(x ; \theta) = \sum_{x = 1}^{\infty} \frac{x^2 \theta^{x}}{h(\theta)} \).

\( E(X)(1 - \theta) = \sum_{x = 1}^{\infty} \frac{(2x-1)\theta^{x}}{h(\theta)} \).

\( E(X)\theta(1 - \theta) = \sum_{x = 1}^{\infty} \frac{(2x-1)\theta^{x+1}}{h(\theta)} \)

Subtracting the second of these from the first,

\( E(X)\big((1 - \theta) - \theta(1 - \theta)\big) = E(X)(1 - \theta)^{2} = \frac{2\sum_{x = 1}^{\infty} \theta^{x} - \theta}{h(\theta)} = \frac{\theta(1 + \theta)}{(1 - \theta)h(\theta)}\).

\( \Rightarrow E(X) = \frac{\theta(1 + \theta)}{(1 - \theta)^3 h(\theta)} = \frac{1+\theta}{1-\theta}.\)
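
As a quick cross-check, note that \( \theta h^{\prime}(\theta) = \sum_{x = 1}^{\infty} x^{2} \theta^{x} \), so the mean can also be written as

$$
E(X) = \frac{\theta h^{\prime}(\theta)}{h(\theta)} = \theta \left( \frac{1}{\theta} + \frac{2}{1-\theta} \right) = \frac{1+\theta}{1-\theta},
$$

which is exactly the expression that showed up in the likelihood equation. That is a hint as to why the two estimators end up agreeing here.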

\( E(X) = \bar{X} \Rightarrow \frac{1+\hat{\theta}_{mom}}{1-\hat{\theta}_{mom}} = \bar{X} \Rightarrow \hat{\theta}_{mom} = \frac{\bar{X} - 1}{\bar{X} + 1}\)

Food For Thought and Research Problem

The normal (with unknown mean and variance), exponential, and Poisson families all have sufficient statistics that are moments, and their MLEs and method of moments estimators coincide (not strictly true for cases like the Poisson, where there is more than one method of moments estimator).
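
As a minimal illustration, take the exponential distribution parameterized by its mean \( \theta \): the log-likelihood and the first moment lead to the same estimating equation, so

$$
l(\theta) = -n\log\theta - \frac{n\bar{X}}{\theta} \;\Rightarrow\; \hat{\theta}_{mle} = \bar{X}, \qquad E(X) = \theta \;\Rightarrow\; \hat{\theta}_{mom} = \bar{X}.
$$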

So, when do you think the Method of Moments Estimator equals the Maximum Likelihood Estimator?

The Pitman-Koopman Lemma tells us that the underlying family must be an exponential family.

Also, you can prove that only a specific form of the exponential family has this property.

Stay tuned for more such exciting stuff!