
Convergence in Distribution

Many notions of convergence of random variables are studied in probability theory. Among them, convergence in distribution is the weakest form: it is implied by the other common modes of convergence. Even so, convergence in distribution is used very frequently in practice, because it describes how the probability distribution of the outcomes of a sequence of random experiments settles down to a limiting distribution.



Definition
Let $X_{1}, X_{2}, X_{3}, ...$ be a sequence of random variables. This sequence is said to converge in distribution to a random variable X when the following condition is satisfied:

$\lim_{n \rightarrow \infty} F_{n}(x) = F(x)$

for every real number x at which F is continuous,
where $F_{n}$ is the cumulative distribution function of $X_{n}$ and F is the cumulative distribution function of X.
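The pointwise condition above can be checked numerically. As an illustrative sketch (this example is not from the article), take $X_{n} = X + \frac{1}{n}$ with X standard normal, so that $F_{n}(x) = \Phi(x - \frac{1}{n})$, which converges to $F(x) = \Phi(x)$ at every x:

```python
# Illustrative sketch (assumption: X_n = X + 1/n with X standard normal,
# so the CDF of X_n is F_n(x) = Phi(x - 1/n)).
from statistics import NormalDist

phi = NormalDist()  # standard normal; F(x) = phi.cdf(x)

def F_n(x, n):
    # CDF of X_n = X + 1/n: a shifted standard normal CDF.
    return phi.cdf(x - 1.0 / n)

x = 0.5
for n in (1, 10, 100, 1000):
    print(n, F_n(x, n))   # approaches F(0.5) = phi.cdf(0.5)

print(phi.cdf(x))
```

As n grows, $F_{n}(0.5)$ gets arbitrarily close to $F(0.5)$, exactly as the definition requires.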


Central Limit Theorem
The theorem most closely related to convergence in distribution is the central limit theorem. It states that, under certain conditions, the suitably standardized sum of a large number of independent random variables, each with finite variance, is approximately normally distributed, irrespective of the underlying distribution.

Let us suppose that $X_{1}, X_{2}, X_{3}, ...$ is an independent and identically distributed sequence of random variables, each with mean $\mu$ and variance $\sigma^{2}$. Define

$Z_{n} = \frac{\sum_{i=1}^{n} X_{i} - n \mu}{\sigma \sqrt{n}}$

Then, we have

$Z_{n} \rightarrow Z$ in distribution,

where Z is a standard normal random variable.
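The theorem can be seen empirically by simulation. The sketch below standardizes sums of i.i.d. Uniform(0, 1) variables (the uniform distribution, the sample size n = 30, the number of trials, and the seed are illustrative choices, not from the article) and compares the empirical probability $P(Z_{n} \leq z)$ with the standard normal CDF:

```python
# Empirical check of the CLT: standardize sums of i.i.d. Uniform(0, 1)
# variables and compare P(Z_n <= z) with the standard normal CDF.
import random
from statistics import NormalDist

random.seed(0)
phi = NormalDist()

mu, sigma = 0.5, (1 / 12) ** 0.5   # mean and std. dev. of Uniform(0, 1)

def z_n(n):
    # One realization of Z_n = (sum of n uniforms - n*mu) / (sigma * sqrt(n)).
    s = sum(random.random() for _ in range(n))
    return (s - n * mu) / (sigma * n ** 0.5)

trials = 20_000
n = 30
z = 1.0
frac = sum(z_n(n) <= z for _ in range(trials)) / trials
print(frac, phi.cdf(z))   # the two values should be close
```

Even at n = 30 the empirical fraction agrees with $\Phi(1) \approx 0.841$ to within sampling error, despite the underlying distribution being uniform rather than normal.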

Probability Density Function

In terms of probability density functions, convergence in distribution can be characterized as follows.

First, let f be a probability density function and $f_{1}, f_{2}, f_{3}, \ldots$ a sequence of probability density functions of continuous distributions, all defined on a common set $C \subseteq \mathbb{R}$. The distribution of $f_{n}$ is said to converge to the distribution of f if:

$f_{n}(x) \rightarrow f(x)$ as n approaches infinity, for all x $\in$ C.

Similarly, let f be a probability mass function and $f_{1}, f_{2}, f_{3}, \ldots$ a sequence of probability mass functions of discrete distributions, all defined on a countable set C. The distribution of $f_{n}$ is said to converge to the distribution of f when:

$f_{n}(x) \rightarrow f(x)$ as n tends to infinity, for each x $\in$ C.
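A classical discrete instance of this pointwise criterion (not taken from the article, but a standard example) is that Binomial(n, $\lambda$/n) converges in distribution to Poisson($\lambda$): the probability mass functions converge at every x in $C = \{0, 1, 2, \ldots\}$. The choice $\lambda = 2$ below is illustrative:

```python
# Binomial(n, lam/n) pmf converging pointwise to the Poisson(lam) pmf.
from math import comb, exp, factorial

lam = 2.0

def binom_pmf(x, n):
    p = lam / n
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

def poisson_pmf(x):
    return exp(-lam) * lam ** x / factorial(x)

for n in (10, 100, 1000):
    print(n, binom_pmf(3, n))   # approaches poisson_pmf(3)

print(poisson_pmf(3))
```

For discrete distributions this pointwise pmf convergence is equivalent to convergence in distribution.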

Relation to Convergence in Probability

The notion of convergence in probability is strictly stronger than convergence in distribution: convergence in probability implies convergence in distribution, but the converse does not hold in general. A sequence $X_{1}, X_{2}, X_{3}, ...$ is said to converge to X in probability if

$\lim_{n \rightarrow \infty} P(|X_{n} - X| \geq \epsilon) = 0$, for every $\epsilon$ > 0.
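As an illustrative sketch of this criterion (an assumed example, not from the article), take $X_{n} = X + E_{n}$ with noise $E_{n} \sim$ Normal(0, 1/n). Then $P(|X_{n} - X| \geq \epsilon) = 2\,(1 - \Phi(\epsilon \sqrt{n}))$, which tends to 0 as n grows:

```python
# P(|X_n - X| >= eps) for X_n = X + E_n, E_n ~ Normal(0, 1/n):
# this equals 2 * (1 - Phi(eps * sqrt(n))) and tends to 0.
from statistics import NormalDist

phi = NormalDist()
eps = 0.1

def tail(n):
    # Two-sided Gaussian tail probability beyond eps for variance 1/n.
    return 2.0 * (1.0 - phi.cdf(eps * n ** 0.5))

for n in (1, 100, 10_000):
    print(n, tail(n))   # decreases toward 0
```

The tail probability shrinks for every fixed $\epsilon > 0$, which is exactly convergence in probability.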


Solved Example
Let us have a look at an example discussed below.

Example : Let $X_{n} = 1 + \frac{1}{n}$ be a sequence of constant random variables. The distribution function $F_{n}$ of $X_{n}$ is:

$F_{n} (x) = \left\{\begin{matrix}0\ if\ x < 1 + \frac{1}{n}\\1\ if\ x \geq 1 + \frac{1}{n}\end{matrix}\right.$

Show that $X_{n}$ converges in distribution to the constant random variable X = 1.

Solution : Fix a real number x. If $x \leq 1$, then $x < 1 + \frac{1}{n}$ for every n, so $F_{n}(x) = 0$ for all n. If $x > 1$, then $x \geq 1 + \frac{1}{n}$ for all sufficiently large n, so $F_{n}(x) \rightarrow 1$. Therefore

$\lim_{n \rightarrow \infty} F_{n}(x) = \left\{\begin{matrix} 0\ if\ x \leq 1\\ 1\ if\ x > 1 \end{matrix}\right.$

The distribution function of X = 1 is F(x) = 0 for x < 1 and F(x) = 1 for $x \geq 1$. The limit above agrees with F(x) at every point except x = 1, which is exactly the point where F is discontinuous. Since convergence is required only at the continuity points of F, we have $\lim_{n \rightarrow \infty} F_{n}(x) = F(x)$ at every continuity point of F, and hence $X_{n} \rightarrow X$ in distribution.
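A quick numeric check of this example, writing the CDF of the constant $1 + \frac{1}{n}$ in its standard form $F_{n}(x) = 1$ if $x \geq 1 + \frac{1}{n}$ and 0 otherwise:

```python
# CDF of the constant random variable X_n = 1 + 1/n.
def F_n(x, n):
    return 1.0 if x >= 1.0 + 1.0 / n else 0.0

for n in (1, 10, 1000):
    # x = 1.5 exceeds 1 + 1/n once n >= 2, so F_n(1.5) settles at 1.
    print(n, F_n(1.5, n))

# At x = 1 the limit of F_n(1) is 0, while F(1) = 1 for the constant X = 1;
# this mismatch is allowed because x = 1 is a discontinuity point of F.
print(F_n(1.0, 1000))
```

The check confirms that $F_{n}(x)$ settles at the limiting values away from x = 1, and that the single disagreement occurs precisely at the discontinuity point of F, where the definition does not require convergence.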