This section discusses a theoretical topic that you may want to skip if you are a new student of probability.
Basic Theory
Stable distributions are an important general class of probability distributions on $\mathbb{R}$ that are defined in terms of location-scale transformations. Stable distributions occur as limits (in distribution) of scaled and centered sums of independent, identically distributed variables. Such limits generalize the central limit theorem, and so stable distributions generalize the normal distribution in a sense. The pioneering work on stable distributions was done by Paul Lévy.
Definition
In this section, we consider real-valued random variables whose distributions are not degenerate (that is, not concentrated at a single value). After all, a random variable with a degenerate distribution is not really random, and so is not of much interest.
Random variable $X$ has a stable distribution if the following condition holds: if $n \in \mathbb{N}_+$ and $(X_1, X_2, \ldots, X_n)$ is a sequence of independent copies of $X$, then $X_1 + X_2 + \cdots + X_n$ has the same distribution as $a_n + b_n X$ for some $a_n \in \mathbb{R}$ and $b_n \in (0, \infty)$. If $a_n = 0$ for every $n \in \mathbb{N}_+$ then the distribution of $X$ is strictly stable.
- The parameters $a_n$ for $n \in \mathbb{N}_+$ are the centering parameters.
- The parameters $b_n$ for $n \in \mathbb{N}_+$ are the norming parameters.
Details:
Since the distribution of $X$ is not a point mass, note that if the distribution of $a + b X$ is the same as the distribution of $c + d X$ for some $a, c \in \mathbb{R}$ and $b, d \in (0, \infty)$, then $a = c$ and $b = d$. Thus, the centering parameters $a_n$ and the norming parameters $b_n$ are uniquely defined for $n \in \mathbb{N}_+$.
Recall that two distributions on $\mathbb{R}$ that are related by a location-scale transformation are said to be of the same type, and that being of the same type defines an equivalence relation on the class of distributions on $\mathbb{R}$. With this terminology, the definition of stability has a more elegant expression: $X$ has a stable distribution if the sum of a finite number of independent copies of $X$ is of the same type as $X$. As we will see, the norming parameters are more important than the centering parameters, and in fact, only certain norming parameters can occur.
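For a quick numerical illustration of the definition, here is a simulation sketch in Python (the distribution, sample size, and seed are arbitrary choices). It uses the standard Cauchy distribution, which is shown later in this section to be strictly stable with $a_n = 0$ and $b_n = n$, so the sum of $n$ independent copies should have roughly the same sample quantiles as $n$ times a single copy.

    import numpy as np

    # Simulation sketch: compare the distribution of X_1 + ... + X_n with that of
    # n * X when X has the standard Cauchy distribution (strictly stable, b_n = n).
    rng = np.random.default_rng(0)
    n, reps = 10, 100_000

    sums = rng.standard_cauchy((reps, n)).sum(axis=1)   # X_1 + ... + X_n
    scaled = n * rng.standard_cauchy(reps)              # a_n + b_n X with a_n = 0, b_n = n

    probs = [0.1, 0.25, 0.5, 0.75, 0.9]
    print(np.quantile(sums, probs))
    print(np.quantile(scaled, probs))                   # the two sets of quantiles should roughly agree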
Basic Properties
We start with some very simple results that follow easily from the definition, before moving on to the deeper results.
Suppose that $X$ has a stable distribution with mean $\mu \in \mathbb{R}$ and finite variance. Then the norming parameters are $b_n = \sqrt{n}$ and the centering parameters are $a_n = (n - \sqrt{n}) \mu$ for $n \in \mathbb{N}_+$.
Details:
As usual, let $a_n$ and $b_n$ denote the centering and norming parameters of $X$ for $n \in \mathbb{N}_+$, and let $\sigma^2 \in (0, \infty)$ denote the (finite) variance of $X$. Suppose that $n \in \mathbb{N}_+$ and that $(X_1, X_2, \ldots, X_n)$ is a sequence of independent copies of $X$. Then $X_1 + X_2 + \cdots + X_n$ has the same distribution as $a_n + b_n X$. Taking variances gives $n \sigma^2 = b_n^2 \sigma^2$ and hence $b_n = \sqrt{n}$. Taking expected values now gives $n \mu = a_n + \sqrt{n} \mu$, so $a_n = (n - \sqrt{n}) \mu$.
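As a concrete check, suppose that $X$ has the normal distribution with mean $\mu$ and variance $\sigma^2$ (the stability of the normal distribution is verified below). Then $X_1 + X_2 + \cdots + X_n$ has the normal distribution with mean $n \mu$ and variance $n \sigma^2$, while $a_n + b_n X$ has the normal distribution with mean $a_n + b_n \mu$ and variance $b_n^2 \sigma^2$. Matching variances gives $b_n = \sqrt{n}$, and matching means then gives $a_n = n \mu - \sqrt{n} \mu = (n - \sqrt{n}) \mu$, in agreement with the general result.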
It will turn out that the only stable distribution with finite variance is the normal distribution, but the result above is useful as an intermediate step. Next, it seems fairly clear from the definition that the collection of stable distributions is closed under location-scale transformations.
Suppose that the distribution of $X$ is stable, with centering parameters $a_n$ and norming parameters $b_n$ for $n \in \mathbb{N}_+$. If $c \in \mathbb{R}$ and $d \in (0, \infty)$, then the distribution of $Y = c + d X$ is also stable, with centering parameters $d a_n + c (n - b_n)$ and norming parameters $b_n$ for $n \in \mathbb{N}_+$.
Details:
Suppose that $n \in \mathbb{N}_+$ and that $(Y_1, Y_2, \ldots, Y_n)$ is a sequence of independent copies of $Y$. Then $Y_1 + Y_2 + \cdots + Y_n$ has the same distribution as $n c + d (X_1 + X_2 + \cdots + X_n)$, where $(X_1, X_2, \ldots, X_n)$ is a sequence of independent copies of $X$. By stability, $X_1 + X_2 + \cdots + X_n$ has the same distribution as $a_n + b_n X$. Hence $Y_1 + Y_2 + \cdots + Y_n$ has the same distribution as $n c + d (a_n + b_n X)$, which in turn has the same distribution as $d a_n + c (n - b_n) + b_n Y$.
An important point is that the norming parameters are unchanged under a location-scale transformation.
Suppose that the distribution of $X$ is stable, with centering parameters $a_n$ and norming parameters $b_n$ for $n \in \mathbb{N}_+$. Then the distribution of $-X$ is stable, with centering parameters $-a_n$ and norming parameters $b_n$ for $n \in \mathbb{N}_+$.
Details:
If $n \in \mathbb{N}_+$ and $(X_1, X_2, \ldots, X_n)$ is a sequence of independent copies of $X$, then $(-X_1, -X_2, \ldots, -X_n)$ is a sequence of independent copies of $-X$. By stability, $-(X_1 + X_2 + \cdots + X_n)$ has the same distribution as $-(a_n + b_n X) = -a_n + b_n (-X)$.
From [3] and [4], if $X$ has a stable distribution, then so does $c + d X$, with the same norming parameters, for every $c, d \in \mathbb{R}$ with $d \ne 0$. Stable distributions are also closed under convolution (corresponding to sums of independent variables) if the norming parameters are the same.
Suppose that $X$ and $Y$ are independent variables. Assume also that $X$ has a stable distribution with centering parameters $a_n$ and norming parameters $b_n$ for $n \in \mathbb{N}_+$, and that $Y$ has a stable distribution with centering parameters $c_n$ and the same norming parameters $b_n$ for $n \in \mathbb{N}_+$. Then $X + Y$ has a stable distribution with centering parameters $a_n + c_n$ and norming parameters $b_n$ for $n \in \mathbb{N}_+$.
Details:
Suppose that $n \in \mathbb{N}_+$ and that $(Z_1, Z_2, \ldots, Z_n)$ is a sequence of independent copies of $Z = X + Y$. Then $Z_1 + Z_2 + \cdots + Z_n$ has the same distribution as $(X_1 + X_2 + \cdots + X_n) + (Y_1 + Y_2 + \cdots + Y_n)$, where $(X_1, X_2, \ldots, X_n)$ is a sequence of independent copies of $X$, $(Y_1, Y_2, \ldots, Y_n)$ is a sequence of independent copies of $Y$, and the two sequences are independent. By stability, this is the same as the distribution of $(a_n + c_n) + b_n (X + Y)$.
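For a numerical illustration of this closure property, here is a simulation sketch (the particular parameters are arbitrary choices). As verified in the special cases below, the Cauchy distribution with location $a$ and scale $b$ is stable with norming parameters $b_n = n$ regardless of $a$ and $b$, so two independent Cauchy variables satisfy the hypotheses; their sum should be Cauchy with the locations and scales added. For a Cauchy distribution, the quartiles are the location plus and minus the scale, which gives a simple check.

    import numpy as np

    # Simulation sketch: the sum of independent Cauchy variables (same norming
    # parameters b_n = n) should again be Cauchy, with locations and scales adding.
    rng = np.random.default_rng(1)
    reps = 200_000
    x = 1 + 2 * rng.standard_cauchy(reps)    # Cauchy, location 1, scale 2
    y = -3 + rng.standard_cauchy(reps)       # Cauchy, location -3, scale 1
    q1, med, q3 = np.quantile(x + y, [0.25, 0.5, 0.75])
    print(med)             # approximately -2, the location of the sum
    print((q3 - q1) / 2)   # approximately 3, the scale of the sum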
We can now give another characterization of stability that just involves two independent copies of $X$.
Random variable $X$ has a stable distribution if and only if the following condition holds: if $X_1$ and $X_2$ are independent copies of $X$ and $c_1, c_2 \in (0, \infty)$, then $c_1 X_1 + c_2 X_2$ has the same distribution as $a + b X$ for some $a \in \mathbb{R}$ and $b \in (0, \infty)$.
Details:
Clearly the condition in definition [1] implies the condition here. Conversely, suppose that the condition here holds. We will show by induction that the condition in definition [1] holds. For $n = 2$, definition [1] is a special case of the condition in this theorem, with $c_1 = c_2 = 1$. Suppose that the condition in [1] holds for a given $n \in \{2, 3, \ldots\}$, and suppose that $(X_1, X_2, \ldots, X_{n+1})$ is a sequence of independent copies of $X$. By the induction hypothesis, $X_1 + X_2 + \cdots + X_n$ has the same distribution as $a_n + b_n X$ for some $a_n \in \mathbb{R}$ and $b_n \in (0, \infty)$. By independence, $X_1 + X_2 + \cdots + X_{n+1}$ has the same distribution as $a_n + b_n X_1 + X_2$, where $X_1$ and $X_2$ are independent copies of $X$. By another application of the condition above, $b_n X_1 + X_2$ has the same distribution as $a + b X$ for some $a \in \mathbb{R}$ and $b \in (0, \infty)$. But then $X_1 + X_2 + \cdots + X_{n+1}$ has the same distribution as $(a_n + a) + b X$.
As a corollary of [4] and [5], we have the following:
Suppose that $X_1$ and $X_2$ are independent, with the same stable distribution. Then the distribution of $X_1 - X_2$ is stable, with the same norming parameters.
Note that the distribution of $X_1 - X_2$ is symmetric (about 0). The last result is useful because it allows us to get rid of the centering parameters when proving facts about the norming parameters. Here is the most important of those facts:
Suppose that $X$ has a stable distribution. Then the norming parameters have the form $b_n = n^{1/\alpha}$ for $n \in \mathbb{N}_+$, for some $\alpha \in (0, 2]$. The parameter $\alpha$ is known as the index or characteristic exponent of the distribution.
Details:
The proof is in several steps, and is based on the proof in An Introduction to Probability Theory and Its Applications, Volume II, by William Feller. The proof uses the basic trick of writing a sum of independent copies of $X$ in different ways in order to obtain relationships between the norming constants $b_n$.
First we can assume from [7] that the distribution of $X$ is symmetric and strictly stable, since [7] preserves the norming parameters, and a non-degenerate symmetric stable distribution is automatically strictly stable (it can be symmetric about only one point). Let $(X_1, X_2, \ldots)$ be a sequence of independent copies of $X$, and let $Y_n = X_1 + X_2 + \cdots + X_n$ for $n \in \mathbb{N}_+$. Now let $m, n \in \mathbb{N}_+$ and consider $Y_{m n}$. Directly from stability, $Y_{m n}$ has the same distribution as $b_{m n} X$. On the other hand, $Y_{m n}$ can be thought of as a sum of $m$ blocks, where each block is a sum of $n$ independent copies of $X$. Each block has the same distribution as $b_n X$, and since the blocks are independent, it follows that $Y_{m n}$ has the same distribution as
\[ b_n (X_1 + X_2 + \cdots + X_m) \]
But by another application of stability, the random variable on the right has the same distribution as $b_n b_m X$. It then follows that $b_{m n} = b_m b_n$ for all $m, n \in \mathbb{N}_+$, which in turn leads to $b_{n^k} = b_n^k$ for all $n, k \in \mathbb{N}_+$.
We use the same trick again, this time with a sum of $m + n$ terms. Let $m, n \in \mathbb{N}_+$ and consider $Y_{m+n}$. Directly from stability, $Y_{m+n}$ has the same distribution as $b_{m+n} X$. On the other hand, $Y_{m+n}$ can be thought of as the sum of two blocks. The first is the sum of $m$ independent copies of $X$ and hence has the same distribution as $b_m X$, while the second is the sum of $n$ independent copies of $X$ and hence has the same distribution as $b_n X$. Since the blocks are independent, it follows that $b_{m+n} X$ has the same distribution as $b_m X_1 + b_n X_2$, or equivalently, $X$ has the same distribution as
\[ \frac{b_m}{b_{m+n}} X_1 + \frac{b_n}{b_{m+n}} X_2 \]
Next note that for $x \in (0, \infty)$,
\[ \left\{ \frac{b_m}{b_{m+n}} X_1 > x \right\} \cap \left\{ \frac{b_n}{b_{m+n}} X_2 \ge 0 \right\} \subseteq \left\{ \frac{b_m}{b_{m+n}} X_1 + \frac{b_n}{b_{m+n}} X_2 > x \right\} \]
and so by independence,
\[ \mathbb{P}\left( \frac{b_m}{b_{m+n}} X_1 > x \right) \mathbb{P}(X_2 \ge 0) \le \mathbb{P}\left( \frac{b_m}{b_{m+n}} X_1 + \frac{b_n}{b_{m+n}} X_2 > x \right) \]
But by symmetry, $\mathbb{P}(X_2 \ge 0) \ge \frac{1}{2}$. Also, $X_1$ and $\frac{b_m}{b_{m+n}} X_1 + \frac{b_n}{b_{m+n}} X_2$ both have the same distribution as $X$, so we conclude that
\[ \frac{1}{2} \mathbb{P}\left( X > \frac{b_{m+n}}{b_m} x \right) \le \mathbb{P}(X > x), \quad x \in (0, \infty) \]
It follows that the ratios $b_m / b_{m+n}$ are bounded for $m, n \in \mathbb{N}_+$. If that were not the case, we could find sequences $(m_k)$ and $(n_k)$ in $\mathbb{N}_+$ with $b_{m_k + n_k} / b_{m_k} \to 0$ as $k \to \infty$, in which case the displayed inequality above would give $\mathbb{P}(X > x) \ge \frac{1}{2} \mathbb{P}(X > 0) > 0$ for all $x \in (0, \infty)$, a contradiction since $\mathbb{P}(X > x) \to 0$ as $x \to \infty$ (note that $\mathbb{P}(X > 0) > 0$ because $X$ is symmetric and not degenerate). Restating, the ratios $b_m / b_n$ are bounded for $m, n \in \mathbb{N}_+$ with $m \le n$.
Fix $n \in \{2, 3, \ldots\}$. There exists a unique $\alpha \in (0, \infty)$ with $b_n = n^{1/\alpha}$. It then follows from step 1 above that $b_{n^k} = b_n^k = \left(n^k\right)^{1/\alpha}$ for every $k \in \mathbb{N}_+$. Similarly, if $m \in \{2, 3, \ldots\}$, there exists a unique $\beta \in (0, \infty)$ with $b_m = m^{1/\beta}$, and then $b_{m^j} = \left(m^j\right)^{1/\beta}$ for every $j \in \mathbb{N}_+$. For our next step, we show that $\beta = \alpha$, and it then follows that $b_n = n^{1/\alpha}$ for every $n \in \mathbb{N}_+$. Towards that end, note that if $j \in \mathbb{N}_+$ with $m^j \ge n$, there exists $k \in \mathbb{N}_+$ with $n^k \le m^j < n^{k+1}$. Hence, letting $M$ denote a bound on the ratios from the previous step,
\[ \left(m^j\right)^{1/\beta} = b_{m^j} \le M b_{n^{k+1}} = M \left(n^{k+1}\right)^{1/\alpha} = M n^{1/\alpha} \left(n^k\right)^{1/\alpha} \le M n^{1/\alpha} \left(m^j\right)^{1/\alpha} \]
Therefore
\[ \left(m^j\right)^{1/\beta - 1/\alpha} \le M n^{1/\alpha} \]
Since the coefficients $\left(m^j\right)^{1/\beta - 1/\alpha}$ would be unbounded in $j$ if $1/\beta > 1/\alpha$, while the right side does not depend on $j$, the last inequality implies that $1/\beta \le 1/\alpha$, that is, $\alpha \le \beta$. Reversing the roles of $m$ and $n$ then gives $\beta \le \alpha$, and hence $\alpha = \beta$.
All that remains is to show that $\alpha \le 2$. We will do this by showing that if $\alpha > 2$ then $X$ must have finite variance, in which case the finite variance property in [2] leads to the contradiction $\alpha = 2$. Since $X^2$ is nonnegative,
\[ \mathbb{E}\left(X^2\right) = \int_0^\infty \mathbb{P}\left(X^2 > x\right) \, dx = \sum_{k=0}^\infty \int_k^{k+1} \mathbb{P}\left(X^2 > x\right) \, dx \]
So the idea is to find bounds on the integrals on the right so that the sum converges. Towards that end, note that for $x \in (0, \infty)$ and $n \in \mathbb{N}_+$,
\[ \mathbb{P}(|X| > x) = \mathbb{P}(|Y_n| > b_n x) = \mathbb{P}\left(|Y_n| > n^{1/\alpha} x\right) \]
Hence we can choose $x_0 \in (0, \infty)$ so that $\mathbb{P}(|X| > x_0) < \frac{1}{2}$. On the other hand, using a special inequality for symmetric distributions,
\[ \mathbb{P}\left(|Y_n| > n^{1/\alpha} x_0\right) \ge \frac{1}{2} \left[1 - \exp\left(-n \mathbb{P}\left(|X| > n^{1/\alpha} x_0\right)\right)\right] \]
This implies that $n \mathbb{P}\left(|X| > n^{1/\alpha} x_0\right)$ is bounded in $n \in \mathbb{N}_+$, for otherwise the two inequalities together would lead to $\mathbb{P}(|X| > x_0) \ge \frac{1}{2}$. Substituting $x = n^{1/\alpha} x_0$, and using the fact that $\mathbb{P}(|X| > x)$ is decreasing in $x$, leads to $\mathbb{P}(|X| > x) \le C x^{-\alpha}$ for some constant $C \in (0, \infty)$ and all sufficiently large $x$. It then follows that
\[ \int_k^{k+1} \mathbb{P}\left(X^2 > x\right) \, dx \le \mathbb{P}\left(X^2 > k\right) = \mathbb{P}\left(|X| > \sqrt{k}\right) \le C k^{-\alpha/2} \]
for all sufficiently large $k$. If $\alpha > 2$, the series with the terms on the right converges, and we have $\mathbb{E}\left(X^2\right) < \infty$.
Every stable distribution is continuous.
Details:
As in the proof of [8], suppose that $X$ has a symmetric stable distribution with norming parameters $b_n = n^{1/\alpha}$ for $n \in \mathbb{N}_+$. As a special case of the last proof, for $m, n \in \mathbb{N}_+$, $X$ has the same distribution as
\[ \frac{b_m}{b_{m+n}} X_1 + \frac{b_n}{b_{m+n}} X_2 \]
where $X_1$ and $X_2$ are independent and also have this distribution. Suppose now that $\mathbb{P}(X = x) = p$ for some $x \ne 0$, where $p > 0$. Then
\[ \mathbb{P}\left( X = \frac{b_m + b_n}{b_{m+n}} x \right) \ge \mathbb{P}(X_1 = x) \mathbb{P}(X_2 = x) = p^2 \]
If the index $\alpha \ne 1$, the points
\[ \frac{b_m + b_n}{b_{m+n}} x = \frac{m^{1/\alpha} + n^{1/\alpha}}{(m + n)^{1/\alpha}} x, \quad m, n \in \mathbb{N}_+ \]
include infinitely many distinct values, which gives us infinitely many atoms, each with probability at least $p^2$, clearly a contradiction.
Next, suppose that the only atom is $x = 0$ and that $\mathbb{P}(X = 0) = p$ where $p > 0$. Then $X_1 + X_2$ has the same distribution as $b_2 X = 2^{1/\alpha} X$. But $\mathbb{P}(X_1 + X_2 = 0) = p^2$ (since 0 is the only atom, the event requires $X_1 = X_2 = 0$) while $\mathbb{P}\left(2^{1/\alpha} X = 0\right) = p$, another contradiction: $p^2 = p$ would force $p = 1$, and the distribution is not degenerate.
The next result is a precise statement of the limit theorem alluded to in the introductory paragraph.
Suppose that $(X_1, X_2, \ldots)$ is a sequence of independent, identically distributed random variables, and let $Y_n = X_1 + X_2 + \cdots + X_n$ for $n \in \mathbb{N}_+$. If there exist constants $a_n \in \mathbb{R}$ and $b_n \in (0, \infty)$ for $n \in \mathbb{N}_+$ such that $(Y_n - a_n) / b_n$ has a (non-degenerate) limiting distribution as $n \to \infty$, then the limiting distribution is stable.
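Here is a simulation sketch of this limit theorem (the tail index, normalization, and sample sizes below are illustrative choices, not part of the statement). The summands are symmetric with Pareto tails $\mathbb{P}(|X| > x) = x^{-3/2}$ for $x \ge 1$, so the variance is infinite and the classical central limit theorem does not apply. With the norming constants $b_n = n^{2/3}$ and no centering (by symmetry), the distribution of $Y_n / b_n$ appears to settle down as $n$ grows; by the theorem, the limit must be a stable distribution, here with index $\frac{3}{2}$.

    import numpy as np

    # Simulation sketch of the limit theorem with heavy-tailed summands:
    # P(|X| > x) = x^(-3/2) for x >= 1, so the variance is infinite.
    rng = np.random.default_rng(2)
    reps, alpha = 10_000, 1.5

    def normalized_sum(n):
        mags = rng.random((reps, n)) ** (-1.0 / alpha)    # Pareto(3/2) magnitudes
        signs = rng.choice([-1.0, 1.0], size=(reps, n))   # symmetrize
        return (signs * mags).sum(axis=1) / n ** (1.0 / alpha)

    probs = [0.1, 0.25, 0.5, 0.75, 0.9]
    print(np.quantile(normalized_sum(100), probs))
    print(np.quantile(normalized_sum(1000), probs))   # similar quantiles suggest a limiting distribution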
The following theorem completely characterizes stable distributions in terms of the characteristic function.
Suppose that $X$ has a stable distribution. The characteristic function of $X$ has the following form, for some $\alpha \in (0, 2]$, $\beta \in [-1, 1]$, $c \in \mathbb{R}$, and $d \in (0, \infty)$:
\[ \chi(t) = \exp\left( i t c - d^\alpha |t|^\alpha \left[1 + i \beta \operatorname{sgn}(t) u_\alpha(t)\right] \right), \quad t \in \mathbb{R} \]
where $\operatorname{sgn}$ is the usual sign function, and where
\[ u_\alpha(t) = \begin{cases} -\tan\left(\frac{\pi \alpha}{2}\right), & \alpha \ne 1 \\ \frac{2}{\pi} \ln |t|, & \alpha = 1 \end{cases} \]
- The parameter $\alpha$ is the index, as before.
- The parameter $\beta$ is the skewness parameter.
- The parameter $c$ is the location parameter.
- The parameter $d$ is the scale parameter.
Thus, the family of stable distributions is a 4-parameter family. The index parameter $\alpha$ and the skewness parameter $\beta$ can be considered shape parameters. When the location parameter $c = 0$ and the scale parameter $d = 1$, we get the standard form of the stable distributions, with characteristic function
\[ \chi_0(t) = \exp\left( -|t|^\alpha \left[1 + i \beta \operatorname{sgn}(t) u_\alpha(t)\right] \right), \quad t \in \mathbb{R} \]
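The following Python sketch implements the standard-form characteristic function exactly as written above, so the sign conventions are those of this parameterization (other references use slightly different conventions). The printed cases anticipate the normal and Cauchy special cases below.

    import numpy as np

    # Sketch of the standard-form characteristic function chi_0 in the
    # parameterization above; u implements the function u_alpha.
    def u(alpha, t):
        t = np.asarray(t, dtype=float)
        if alpha == 1:
            return (2 / np.pi) * np.log(np.abs(t))
        return -np.tan(np.pi * alpha / 2) * np.ones_like(t)

    def chi0(t, alpha, beta=0.0):
        t = np.asarray(t, dtype=float)
        return np.exp(-np.abs(t) ** alpha * (1 + 1j * beta * np.sign(t) * u(alpha, t)))

    t = np.array([-2.0, -0.5, 0.5, 2.0])
    print(chi0(t, alpha=2))             # exp(-t^2), the normal distribution with mean 0 and variance 2
    print(chi0(t, alpha=2, beta=0.7))   # still approximately exp(-t^2): the skewness drops out when alpha = 2
    print(chi0(t, alpha=1))             # exp(-|t|), the standard Cauchy characteristic function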
The characteristic function gives another proof that stable distributions are closed under convolution (corresponding to sums of independent variables), if the index is fixed.
Suppose that $X$ and $Y$ are independent random variables, and that $X$ and $Y$ have stable distributions with common index $\alpha$, skewness parameters $\beta_1$ and $\beta_2$, location parameters $c_1$ and $c_2$, and scale parameters $d_1$ and $d_2$, respectively. Then $X + Y$ has the stable distribution with index $\alpha$, location parameter $c_1 + c_2$, scale parameter $\left(d_1^\alpha + d_2^\alpha\right)^{1/\alpha}$, and skewness parameter
\[ \frac{\beta_1 d_1^\alpha + \beta_2 d_2^\alpha}{d_1^\alpha + d_2^\alpha} \]
Details:
Let $\chi_1$ and $\chi_2$ denote the characteristic functions of $X$ and $Y$, respectively. Then $X + Y$ has characteristic function $\chi_1 \chi_2$. The result follows from using the form of the characteristic function in [11] and some algebra.
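Sketch of the algebra, in the parameterization above: multiplying the two characteristic functions gives
\[ \chi_1(t) \chi_2(t) = \exp\left( i t (c_1 + c_2) - \left(d_1^\alpha + d_2^\alpha\right) |t|^\alpha \left[1 + i \, \frac{\beta_1 d_1^\alpha + \beta_2 d_2^\alpha}{d_1^\alpha + d_2^\alpha} \operatorname{sgn}(t) u_\alpha(t)\right] \right) \]
which is the characteristic function in [11] with index $\alpha$, location parameter $c_1 + c_2$, scale parameter $\left(d_1^\alpha + d_2^\alpha\right)^{1/\alpha}$, and the skewness parameter given in the theorem. Note that the argument works for $\alpha = 1$ as well, since $u_\alpha$ does not depend on the scale parameters.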
Special Cases
Three special parametric families of distributions studied in this chapter are stable. In the proofs in this subsection, we use the definition of stability and various important properties of the distributions. These properties, in turn, are verified in the sections devoted to the distributions. We also give proofs based on the characteristic function, which allows us to identify the skewness parameter.
The normal distribution is stable with index $\alpha = 2$. There is no skewness parameter.
Details:
Suppose that $Z$ has the standard normal distribution. If $n \in \mathbb{N}_+$ and $(Z_1, Z_2, \ldots, Z_n)$ is a sequence of independent copies of $Z$, then $Z_1 + Z_2 + \cdots + Z_n$ has the normal distribution with mean 0 and variance $n$. But this is also the distribution of $\sqrt{n} Z = n^{1/2} Z$. Hence the standard normal distribution is strictly stable, with index $\alpha = 2$. The normal distribution with mean $\mu$ and standard deviation $\sigma$ is the distribution of $\mu + \sigma Z$. From our basic properties above, this distribution is stable with index $\alpha = 2$ and centering parameters $(n - \sqrt{n}) \mu$ for $n \in \mathbb{N}_+$.
In terms of the characteristic function, note that if $\alpha = 2$ then $u_\alpha(t) = -\tan(\pi) = 0$, so the skewness parameter drops out completely. The characteristic function in standard form is $\chi_0(t) = e^{-t^2}$ for $t \in \mathbb{R}$, which is the characteristic function of the normal distribution with mean 0 and variance 2.
Of course, the normal distribution has finite variance, so once we know that it is stable, it follows from the finite variance property [2] that the index must be 2. Moreover, the characteristic function shows that the normal distribution is the only stable distribution with index 2, and hence the only stable distribution with finite variance.
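A quick simulation sketch of the normal case (the sample sizes are arbitrary choices): the sum of $n$ independent standard normal variables should match $\sqrt{n} Z$ in distribution.

    import numpy as np

    # Simulation sketch: sum of n standard normals versus sqrt(n) times one standard normal.
    rng = np.random.default_rng(3)
    n, reps = 5, 100_000
    sums = rng.standard_normal((reps, n)).sum(axis=1)
    scaled = np.sqrt(n) * rng.standard_normal(reps)
    print(sums.std(), scaled.std())    # both approximately sqrt(5)
    probs = [0.05, 0.5, 0.95]
    print(np.quantile(sums, probs))
    print(np.quantile(scaled, probs))  # the quantiles should roughly agree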
Open the special distribution simulator and select the normal distribution. Vary the parameters and note the shape and location of the probability density function. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
The Cauchy distribution is stable with index $\alpha = 1$ and skewness parameter $\beta = 0$.
Details:
Suppose that $Z$ has the standard Cauchy distribution. If $n \in \mathbb{N}_+$ and $(Z_1, Z_2, \ldots, Z_n)$ is a sequence of independent copies of $Z$, then $Z_1 + Z_2 + \cdots + Z_n$ has the Cauchy distribution with scale parameter $n$. By definition, this is the same as the distribution of $n Z$. Hence the standard Cauchy distribution is strictly stable, with index $\alpha = 1$. The Cauchy distribution with location parameter $a$ and scale parameter $b$ is the distribution of $a + b Z$. From our basic properties above, this distribution is strictly stable with index $\alpha = 1$, since the centering parameters are $a (n - n) = 0$.
When $\alpha = 1$ and $\beta = 0$, the characteristic function in standard form is $\chi_0(t) = e^{-|t|}$ for $t \in \mathbb{R}$, which is the characteristic function of the standard Cauchy distribution.
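A simulation sketch of a striking consequence of strict stability with index 1 (the sample sizes are arbitrary choices): the average of $n$ standard Cauchy variables is again standard Cauchy, so sample means never settle down, no matter how large $n$ is.

    import numpy as np

    # Simulation sketch: the average of n standard Cauchy variables is again standard Cauchy.
    rng = np.random.default_rng(4)
    reps = 10_000
    for n in (10, 1000):
        means = rng.standard_cauchy((reps, n)).mean(axis=1)
        print(n, np.quantile(means, [0.25, 0.5, 0.75]))   # quartiles stay near (-1, 0, 1) for every n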
Open the special distribution simulator and select the Cauchy distribution. Vary the parameters and note the shape and location of the probability density function. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
The Lévy distribution is stable with index $\alpha = \frac{1}{2}$ and skewness parameter $\beta = 1$.
Details:
If $n \in \mathbb{N}_+$ and $(Z_1, Z_2, \ldots, Z_n)$ is a sequence of independent variables, each with the standard Lévy distribution, then $Z_1 + Z_2 + \cdots + Z_n$ has the Lévy distribution with scale parameter $n^2$. By definition, this is the same as the distribution of $n^2 Z$ where $Z$ has the standard Lévy distribution. Hence the standard Lévy distribution is strictly stable, with index $\alpha = \frac{1}{2}$. The Lévy distribution with location parameter $a$ and scale parameter $b$ is the distribution of $a + b Z$. From our basic properties above, this distribution is stable with index $\alpha = \frac{1}{2}$ and centering parameters $a \left(n - n^2\right)$ for $n \in \mathbb{N}_+$.
When $\alpha = \frac{1}{2}$, note that $u_\alpha(t) = -\tan\left(\frac{\pi}{4}\right) = -1$. So the characteristic function in standard form with $\alpha = \frac{1}{2}$ and $\beta = 1$ is
\[ \chi_0(t) = \exp\left( -|t|^{1/2} \left[1 - i \operatorname{sgn}(t)\right] \right), \quad t \in \mathbb{R} \]
which is the characteristic function of the standard Lévy distribution.
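A simulation sketch for the Lévy case (using the standard fact that $1 / Z^2$ has the standard Lévy distribution when $Z$ is standard normal; the sample sizes are arbitrary choices): the sum of $n$ independent standard Lévy variables should match $n^2$ times a single one, reflecting the index $\alpha = \frac{1}{2}$.

    import numpy as np

    # Simulation sketch: sum of n standard Levy variables versus n^2 times one,
    # generating standard Levy variables as 1 / Z^2 with Z standard normal.
    rng = np.random.default_rng(5)
    n, reps = 4, 100_000
    sums = (1.0 / rng.standard_normal((reps, n)) ** 2).sum(axis=1)
    scaled = n ** 2 / rng.standard_normal(reps) ** 2
    probs = [0.25, 0.5, 0.75, 0.9]
    print(np.quantile(sums, probs))
    print(np.quantile(scaled, probs))   # the quantiles should roughly agree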
Open the special distribution simulator and select the Lévy distribution. Vary the parameters and note the shape and location of the probability density function. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
The normal, Cauchy, and Lévy distributions are the only stable distributions for which the probability density function is known in closed form in terms of elementary functions.