Basic Theory
In this section, we will study a number of interesting inequalities associated with martingales and their sub-martingale and super-martingale cousins. These turn out to be very important for both theoretical reasons and for applications. You may need to review infimums and supremums.
Basic Assumptions
As in the introduction, we start with a stochastic process $\boldsymbol{X} = \{X_t: t \in T\}$ on an underlying probability space $(\Omega, \mathscr{F}, \mathbb{P})$, having state space $\mathbb{R}$, and where the index set $T$ (representing time) is either $\mathbb{N}$ (discrete time) or $[0, \infty)$ (continuous time). Next, we have a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$, and we assume that $\boldsymbol{X}$ is adapted to $\mathfrak{F}$. So $\mathfrak{F}$ is an increasing family of sub $\sigma$-algebras of $\mathscr{F}$ and $X_t$ is measurable with respect to $\mathscr{F}_t$ for $t \in T$. We think of $\mathscr{F}_t$ as the collection of events up to time $t \in T$. We assume that $\mathbb{E}(|X_t|) < \infty$, so that the mean of $X_t$ exists as a real number, for each $t \in T$. Finally, in continuous time where $T = [0, \infty)$, we make the standard assumptions that $\boldsymbol{X}$ is right continuous and has left limits, and that the filtration $\mathfrak{F}$ is right continuous and complete.
Maximal Inequalities
For motivation, let's review a modified version of Markov's inequality, named for Andrei Markov.
If $X$ is a real-valued random variable then
$$\mathbb{P}(X \ge x) \le \frac{1}{x}\mathbb{E}(X; X \ge x), \quad x \in (0, \infty)$$
Details:
The modified version has essentially the same elegant proof as the original. Clearly
$$x \mathbf{1}(X \ge x) \le X \mathbf{1}(X \ge x)$$
Taking expected values through the inequality gives $x \mathbb{P}(X \ge x) \le \mathbb{E}(X; X \ge x)$. Dividing both sides by $x$ gives the result (and it is at this point that we need $x > 0$).
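As a quick numerical illustration (our own sketch, not part of the formal development), the following Python fragment estimates both sides of the modified inequality; the exponential distribution and the value of $x$ are arbitrary choices.

```python
import random

random.seed(42)

# Estimate both sides of the modified Markov inequality
# P(X >= x) <= (1/x) E(X; X >= x), here for X exponential with rate 1.
n, x = 100_000, 2.0
sample = [random.expovariate(1.0) for _ in range(n)]

prob = sum(1 for v in sample if v >= x) / n          # P(X >= x)
partial_mean = sum(v for v in sample if v >= x) / n  # E(X; X >= x)

print(f"P(X >= x)          ~ {prob:.4f}")
print(f"(1/x) E(X; X >= x) ~ {partial_mean / x:.4f}")
```

For this distribution the exact values are $e^{-2} \approx 0.135$ and $\frac{3}{2}e^{-2} \approx 0.203$, so the inequality holds with room to spare.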
So Markov's inequality gives an upper bound on the probability that $X$ exceeds a given positive value $x$, in terms of a moment of $X$. Now let's return to our stochastic process $\boldsymbol{X} = \{X_t: t \in T\}$. To simplify the notation, let $T_t = \{s \in T: s \le t\}$ for $t \in T$. Here is the main definition:
For the process $\boldsymbol{X} = \{X_t: t \in T\}$, define the corresponding maximal process $\boldsymbol{U} = \{U_t: t \in T\}$ by
$$U_t = \sup\{X_s: s \in T_t\}, \quad t \in T$$
Clearly, the maximal process $\boldsymbol{U}$ is increasing, so that if $s, t \in T$ with $s \le t$ then $U_s \le U_t$. A trivial application of Markov's inequality above would give
$$\mathbb{P}(U_t \ge x) \le \frac{1}{x}\mathbb{E}(U_t; U_t \ge x), \quad x \in (0, \infty)$$
But when $\boldsymbol{X}$ is a sub-martingale, the following theorem gives a much stronger result by replacing the first occurrence of $U_t$ on the right with $X_t$. The theorem is known as Doob's sub-martingale maximal inequality (or more simply as Doob's inequality), named once again for Joseph Doob, who did much of the pioneering work on martingales. A sub-martingale has an increasing property of sorts, in the sense that if $s, t \in T$ with $s \le t$ then $\mathbb{E}(X_s) \le \mathbb{E}(X_t)$, so it's perhaps not entirely surprising that such a bound is possible.
Suppose that $\boldsymbol{X} = \{X_t: t \in T\}$ is a sub-martingale. For $t \in T$, let $U_t = \sup\{X_s: s \in T_t\}$. Then
$$\mathbb{P}(U_t \ge x) \le \frac{1}{x}\mathbb{E}(X_t; U_t \ge x), \quad x \in (0, \infty)$$
Details:
Proof in discrete time: So $T = \mathbb{N}$ and the maximal process is given by $U_n = \max\{X_k: k \in \{0, 1, \ldots, n\}\}$ for $n \in \mathbb{N}$. Let $x \in (0, \infty)$, and define $\tau_x = \inf\{n \in \mathbb{N}: X_n \ge x\}$ where as usual, $\inf(\emptyset) = \infty$. The random time $\tau_x$ is a stopping time relative to $\mathfrak{F}$. Moreover, the processes $\{U_n: n \in \mathbb{N}\}$ and $\{\tau_x: x \in (0, \infty)\}$ are inverses in the sense that for $n \in \mathbb{N}$ and $x \in (0, \infty)$,
$$\tau_x \le n \text{ if and only if } U_n \ge x$$
We have seen this type of duality before, in the Poisson process and more generally in renewal processes. Let $n \in \mathbb{N}$. First note that
$$\mathbb{E}(X_{n \wedge \tau_x}) = \mathbb{E}(X_{n \wedge \tau_x}; U_n \ge x) + \mathbb{E}(X_{n \wedge \tau_x}; U_n < x)$$
If $U_n \ge x$ then $\tau_x \le n$ and so $X_{n \wedge \tau_x} = X_{\tau_x} \ge x$. On the other hand if $U_n < x$ then $\tau_x > n$ and so $X_{n \wedge \tau_x} = X_n$. So we have
$$\mathbb{E}(X_{n \wedge \tau_x}) \ge x \mathbb{P}(U_n \ge x) + \mathbb{E}(X_n; U_n < x)$$
Similarly,
$$\mathbb{E}(X_n) = \mathbb{E}(X_n; U_n \ge x) + \mathbb{E}(X_n; U_n < x)$$
But by the optional stopping theorem, $\mathbb{E}(X_{n \wedge \tau_x}) \le \mathbb{E}(X_n)$. Hence we have
$$x \mathbb{P}(U_n \ge x) + \mathbb{E}(X_n; U_n < x) \le \mathbb{E}(X_n; U_n \ge x) + \mathbb{E}(X_n; U_n < x)$$
Subtracting the common term $\mathbb{E}(X_n; U_n < x)$ and then dividing both sides by $x$ gives the result.
Proof in continuous time: For $n \in \mathbb{N}$, let $\mathbb{D}_n = \{k / 2^n: k \in \mathbb{N}\}$ denote the set of nonnegative dyadic rationals (or binary rationals) of rank $n$ or less. For $t \in (0, \infty)$ let $D_n^t = (\mathbb{D}_n \cap [0, t)) \cup \{t\}$, so that $D_n^t$ is the finite set of such dyadic rationals that are less than $t$, with $t$ added to the set. Note that $D_n^t$ has an ordered enumeration, so $\{X_s: s \in D_n^t\}$ is a discrete-time sub-martingale for each $n \in \mathbb{N}$. Let $U_n^t = \max\{X_s: s \in D_n^t\}$ for $n \in \mathbb{N}$. Note that $\mathbb{D}_n \subseteq \mathbb{D}_{n+1}$ for $n \in \mathbb{N}$ and hence $D_n^t \subseteq D_{n+1}^t$ for $n \in \mathbb{N}$, and therefore $U_n^t$ is increasing in $n \in \mathbb{N}$. It follows that for $x \in (0, \infty)$,
$$\{U_n^t > x\} \uparrow \bigcup_{m \in \mathbb{N}} \{U_m^t > x\} \text{ as } n \to \infty$$
The set $\mathbb{D} = \bigcup_{n \in \mathbb{N}} \mathbb{D}_n$ of all nonnegative dyadic rationals is dense in $[0, \infty)$ and so, since $\boldsymbol{X}$ is right continuous and has left limits, it follows that if $U_t > x$ then $U_n^t > x$ for some $n \in \mathbb{N}$. That is, we have
$$\{U_t > x\} = \bigcup_{n \in \mathbb{N}} \{U_n^t > x\}$$
The maximal inequality applies to the discrete-time sub-martingale $\{X_s: s \in D_n^t\}$ and so
$$\mathbb{P}(U_n^t > x) \le \frac{1}{x}\mathbb{E}(X_t; U_n^t > x)$$
for each $n \in \mathbb{N}$. By the monotone convergence theorem, the left side converges to $\mathbb{P}(U_t > x)$ as $n \to \infty$ and the right side converges to $\frac{1}{x}\mathbb{E}(X_t; U_t > x)$ as $n \to \infty$. Finally, applying this inequality with $x - \epsilon$ in place of $x$ and letting $\epsilon \downarrow 0$ gives the stated result for the event $\{U_t \ge x\}$.
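The inequality is easy to explore by simulation. Here is a minimal Python sketch, with an arbitrarily chosen sub-martingale: $X_n = S_n^2$, where $\boldsymbol{S}$ is a simple symmetric random walk. It estimates both sides of Doob's inequality at a fixed time and level.

```python
import random

random.seed(1)

# Monte Carlo check of P(U_n >= x) <= (1/x) E(X_n; U_n >= x) for the
# sub-martingale X_n = S_n^2, with S a simple symmetric random walk.
runs, n, x = 100_000, 50, 100.0
lhs_count, rhs_sum = 0, 0.0
for _ in range(runs):
    s, u = 0, 0.0
    for _ in range(n):
        s += random.choice((-1, 1))
        u = max(u, s * s)          # running maximum of X_k = S_k^2
    if u >= x:
        lhs_count += 1
        rhs_sum += s * s           # X_n on the event {U_n >= x}
print(f"P(U_n >= x)            ~ {lhs_count / runs:.4f}")
print(f"(1/x) E(X_n; U_n >= x) ~ {rhs_sum / (x * runs):.4f}")
```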
There are a number of simple corollaries of the maximal inequality. For the first, recall that the positive part of the real number $u$ is $u^+ = \max\{u, 0\}$, so that $u^+ = u$ if $u \ge 0$ and $u^+ = 0$ if $u < 0$.
Suppose that $\boldsymbol{X} = \{X_t: t \in T\}$ is a sub-martingale. For $t \in T$, let $U_t = \sup\{X_s^+: s \in T_t\}$. Then
$$\mathbb{P}(U_t \ge x) \le \frac{1}{x}\mathbb{E}(X_t^+; U_t \ge x), \quad x \in (0, \infty)$$
Details:
Recall that since $\boldsymbol{X}$ is a sub-martingale and the function $u \mapsto u^+$ is increasing and convex, $\boldsymbol{X}^+ = \{X_t^+: t \in T\}$ is also a sub-martingale. Hence the result follows from the general maximal inequality for sub-martingales.
As a further simple corollary, note that
$$\mathbb{P}(U_t \ge x) \le \frac{1}{x}\mathbb{E}(X_t^+), \quad x \in (0, \infty)$$
This is sometimes how the maximal inequality is given in the literature.
Suppose that $\boldsymbol{X} = \{X_t: t \in T\}$ is a martingale. For $t \in T$, let $U_t = \sup\{|X_s|: s \in T_t\}$. Then
$$\mathbb{P}(U_t \ge x) \le \frac{1}{x}\mathbb{E}(|X_t|; U_t \ge x), \quad x \in (0, \infty)$$
Details:
Recall from the basic properties that since $\boldsymbol{X}$ is a martingale, and $u \mapsto |u|$ is convex, $|\boldsymbol{X}| = \{|X_t|: t \in T\}$ is a sub-martingale. Hence the result follows from the general maximal inequality for sub-martingales.
Once again, a further simple corollary is
$$\mathbb{P}(U_t \ge x) \le \frac{1}{x}\mathbb{E}(|X_t|), \quad x \in (0, \infty)$$
Next recall that for $k \in [1, \infty)$, the $k$-norm of a real-valued random variable $X$ is $\|X\|_k = \left[\mathbb{E}(|X|^k)\right]^{1/k}$, and the vector space $\mathscr{L}_k$ consists of all real-valued random variables for which this norm is finite. The following theorem is the norm version of Doob's maximal inequality.
Suppose again that $\boldsymbol{X} = \{X_t: t \in T\}$ is a martingale. For $t \in T$, let $U_t = \sup\{|X_s|: s \in T_t\}$. Then for $k \in (1, \infty)$,
$$\|U_t\|_k \le \frac{k}{k - 1}\|X_t\|_k$$
Details:
Fix $t \in T$ and $k \in (1, \infty)$. If $\mathbb{E}(|X_t|^k) = \infty$, the inequality trivially holds, so assume that $\mathbb{E}(|X_t|^k) < \infty$, and thus that $X_t \in \mathscr{L}_k$. The proof relies fundamentally on Hölder's inequality, and for that inequality to work, we need to truncate the variable $U_t$ and consider instead the bounded random variable $U_t \wedge c$ where $c \in (0, \infty)$. First we need to show that
$$\mathbb{P}(U_t \wedge c \ge x) \le \frac{1}{x}\mathbb{E}(|X_t|; U_t \wedge c \ge x), \quad x \in (0, \infty)$$
If $c < x$, both sides are 0. If $c \ge x$ then $\{U_t \wedge c \ge x\} = \{U_t \ge x\}$,
and so from the maximal inequality [5],
$$\mathbb{P}(U_t \wedge c \ge x) = \mathbb{P}(U_t \ge x) \le \frac{1}{x}\mathbb{E}(|X_t|; U_t \ge x) = \frac{1}{x}\mathbb{E}(|X_t|; U_t \wedge c \ge x)$$
Next recall from the properties that
$$\mathbb{E}\left[(U_t \wedge c)^k\right] = \int_0^\infty k x^{k-1} \mathbb{P}(U_t \wedge c \ge x) \, dx$$
Applying the inequality gives
$$\mathbb{E}\left[(U_t \wedge c)^k\right] \le \int_0^\infty k x^{k-2} \mathbb{E}\left[|X_t| \mathbf{1}(U_t \wedge c \ge x)\right] dx$$
By Fubini's theorem we can interchange the expected value and the integral, which gives
$$\mathbb{E}\left[(U_t \wedge c)^k\right] \le \mathbb{E}\left[|X_t| \int_0^{U_t \wedge c} k x^{k-2} \, dx\right] = \frac{k}{k - 1}\mathbb{E}\left[|X_t| (U_t \wedge c)^{k-1}\right]$$
But $X_t \in \mathscr{L}_k$ and $(U_t \wedge c)^{k-1} \in \mathscr{L}_j$ where $j = k / (k - 1)$ is the exponent conjugate to $k$. So an application of Hölder's inequality gives
$$\mathbb{E}\left[(U_t \wedge c)^k\right] \le \frac{k}{k - 1}\|X_t\|_k \left[\mathbb{E}\left((U_t \wedge c)^k\right)\right]^{(k-1)/k}$$
where we have used the simple fact that $j (k - 1) = k$. Dividing both sides by the last factor (which is finite, since $U_t \wedge c$ is bounded) gives
$$\left[\mathbb{E}\left((U_t \wedge c)^k\right)\right]^{1/k} \le \frac{k}{k - 1}\|X_t\|_k$$
Finally, $\mathbb{E}\left[(U_t \wedge c)^k\right] \to \mathbb{E}(U_t^k)$ as $c \to \infty$ by the monotone convergence theorem. So letting $c \to \infty$ in the last displayed equation gives the result.
Once again, $\boldsymbol{U} = \{U_t: t \in T\}$ is the maximal process associated with $\{|X_t|: t \in T\}$. As noted in the proof, $j = k / (k - 1)$ is the exponent conjugate to $k$, satisfying $1/j + 1/k = 1$. So this version of the maximal inequality states that the $k$-norm of the maximum of the martingale $\boldsymbol{X}$ on $T_t$ is bounded by $j$ times the $k$-norm of $X_t$, where $j$ and $k$ are conjugate exponents. Stated just in terms of expected value, rather than norms, the maximal inequality is
$$\mathbb{E}(U_t^k) \le \left(\frac{k}{k - 1}\right)^k \mathbb{E}(|X_t|^k)$$
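As a numerical sketch of the norm inequality with $k = 2$ (so that $j = 2$), the following Python fragment uses the arbitrarily chosen martingale given by a simple symmetric random walk; all parameters are our own choices.

```python
import random

random.seed(2)

# Monte Carlo check of ||U_n||_2 <= 2 ||X_n||_2 for the martingale given
# by a simple symmetric random walk, with U_n = max_{i <= n} |S_i|.
runs, n, k = 50_000, 100, 2
u_moment, x_moment = 0.0, 0.0
for _ in range(runs):
    s, u = 0, 0
    for _ in range(n):
        s += random.choice((-1, 1))
        u = max(u, abs(s))
    u_moment += u ** k
    x_moment += abs(s) ** k
lhs = (u_moment / runs) ** (1 / k)
rhs = (k / (k - 1)) * (x_moment / runs) ** (1 / k)
print(f"||U_n||_2     ~ {lhs:.3f}")
print(f"2 * ||X_n||_2 ~ {rhs:.3f}")
```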
Our final result in this discussion is a variation of the maximal inequality for super-martingales.
Suppose that $\boldsymbol{X} = \{X_t: t \in T\}$ is a nonnegative super-martingale, and let $U_\infty = \sup\{X_t: t \in T\}$. Then
$$\mathbb{P}(U_\infty \ge x) \le \frac{1}{x}\mathbb{E}(X_0), \quad x \in (0, \infty)$$
Details:
Let $U_t = \sup\{X_s: s \in T_t\}$ for $t \in T$, the maximal process of $\boldsymbol{X}$. Let $x \in (0, \infty)$ and, as in the proof of the maximal inequality for sub-martingales, let $\tau_x = \inf\{s \in T: X_s \ge x\}$, a stopping time relative to $\mathfrak{F}$. Since $\boldsymbol{X}$ is right continuous, $X_{t \wedge \tau_x} \ge x$ on the event $\{\tau_x \le t\}$, and $\{U_t > x\} \subseteq \{\tau_x \le t\}$ for $t \in T$. Since $\boldsymbol{X}$ is nonnegative, it follows that $X_{t \wedge \tau_x} \ge x \mathbf{1}(U_t > x)$ for $t \in T$. By the optional stopping theorem, and since $\boldsymbol{X}$ is a super-martingale, we have for $t \in T$,
$$x \mathbb{P}(U_t > x) \le \mathbb{E}(X_{t \wedge \tau_x}) \le \mathbb{E}(X_0)$$
Next note that $U_t \uparrow U_\infty$ as $t \to \infty$. Let $x \in (0, \infty)$ and $\epsilon \in (0, x)$. If $U_\infty \ge x$ then $U_t > x - \epsilon$ for sufficiently large $t \in T$. Hence
$$\{U_\infty \ge x\} \subseteq \bigcup_{t \in T} \{U_t > x - \epsilon\}$$
Using the continuity theorem for increasing events, and our result above, we have
$$\mathbb{P}(U_\infty \ge x) \le \lim_{t \to \infty} \mathbb{P}(U_t > x - \epsilon) \le \frac{1}{x - \epsilon}\mathbb{E}(X_0)$$
Since this holds for all $\epsilon \in (0, x)$, it follows that $\mathbb{P}(U_\infty \ge x) \le \frac{1}{x}\mathbb{E}(X_0)$.
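For a numerical illustration, the following Python sketch uses an arbitrarily chosen nonnegative super-martingale: a multiplicative random walk whose factors have mean $0.95 \le 1$. The infinite-time supremum is approximated by a long but finite horizon, so the estimate slightly understates the true probability.

```python
import random

random.seed(3)

# Monte Carlo check of P(sup_t X_t >= x) <= E(X_0) / x for the nonnegative
# super-martingale X_0 = 1, X_{n+1} = X_n * M_n, with i.i.d. factors M_n
# uniform on {0.5, 1.4}, so that E(M_n) = 0.95 <= 1.
runs, horizon, x = 20_000, 200, 4.0
hits = 0
for _ in range(runs):
    value, peak = 1.0, 1.0
    for _ in range(horizon):
        value *= random.choice((0.5, 1.4))
        peak = max(peak, value)
    if peak >= x:
        hits += 1
print(f"P(sup X >= x) ~ {hits / runs:.4f}")
print(f"E(X_0) / x    = {1.0 / x:.4f}")
```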
The Up-Crossing Inequality
The up-crossing inequality gives a bound on how much a sub-martingale (or super-martingale) can oscillate, and is the main tool in the martingale convergence theorems that will be studied in the next section. It should come as no surprise by now that the inequality is due to Joseph Doob. We start with the discrete-time case.
Suppose that $\boldsymbol{x} = (x_n: n \in \mathbb{N})$ is a sequence of real numbers, and that $a, b \in \mathbb{R}$ with $a < b$. Define $t_1 = \inf\{n \in \mathbb{N}: x_n \le a\}$ and then recursively define
$$s_k = \inf\{n > t_k: x_n \ge b\}, \; t_{k+1} = \inf\{n > s_k: x_n \le a\}, \quad k \in \mathbb{N}_+$$
- The number of up-crossings of the interval $[a, b]$ by the sequence $\boldsymbol{x}$ up to time $n \in \mathbb{N}$ is $u_n = \#\{k \in \mathbb{N}_+: s_k \le n\}$.
- The total number of up-crossings of the interval $[a, b]$ by the sequence $\boldsymbol{x}$ is $u_\infty = \#\{k \in \mathbb{N}_+: s_k < \infty\}$.
Details:
As usual, we define $\inf(\emptyset) = \infty$. Note that if $s_k < \infty$ for $k \in \mathbb{N}_+$, then $(x_{t_k}, x_{t_k + 1}, \ldots, x_{s_k})$ is the $k$th up-crossing of the interval $[a, b]$ by the sequence $\boldsymbol{x}$.
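In computational terms, the definition is a single scan of the sequence. The following Python sketch (the names are our own) counts up-crossings exactly as in the definition: wait until the sequence falls to a value at or below $a$, then count a completed up-crossing when it next rises to a value at or above $b$.

```python
def count_upcrossings(x, a, b):
    """Number of up-crossings of [a, b] by the finite sequence x."""
    count, below = 0, False
    for value in x:
        if not below and value <= a:
            below = True               # a time t_k has occurred
        elif below and value >= b:
            count += 1                 # a time s_k completes an up-crossing
            below = False
    return count

# A sequence that oscillates: 1/2, 2, 1/3, 3, ..., dipping below 1
# and rising above 3/2 on each pair of terms.
seq = []
for n in range(2, 8):
    seq += [1 / n, n]
print(count_upcrossings(seq, 1.0, 1.5))  # 6
```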
So informally, as the name suggests, $u_n$ is the number of times that the sequence $(x_0, x_1, \ldots, x_n)$ goes from a value below $a$ to one above $b$, and $u_\infty$ is the number of times that the entire sequence $\boldsymbol{x}$ goes from a value below $a$ to one above $b$. Here are a few simple properties:
Suppose again that $\boldsymbol{x} = (x_n: n \in \mathbb{N})$ is a sequence of real numbers and that $a, b \in \mathbb{R}$ with $a < b$. We write $u_n(a, b)$ and $u_\infty(a, b)$ when the interval needs to be explicit.
- $u_n$ is increasing in $n \in \mathbb{N}$.
- $u_n \to u_\infty$ as $n \to \infty$.
- If $c, d \in \mathbb{R}$ with $c \le a < b \le d$ then $u_n(c, d) \le u_n(a, b)$ for $n \in \mathbb{N}$, and $u_\infty(c, d) \le u_\infty(a, b)$.
Details:
- Note that $\{k \in \mathbb{N}_+: s_k \le n\} \subseteq \{k \in \mathbb{N}_+: s_k \le n + 1\}$ for $n \in \mathbb{N}$.
- Note that $\{k \in \mathbb{N}_+: s_k < \infty\} = \bigcup_{n \in \mathbb{N}} \{k \in \mathbb{N}_+: s_k \le n\}$.
- Every up-crossing of $[c, d]$ is also an up-crossing of $[a, b]$.
The importance of the definitions is found in the following theorem. Recall that $\mathbb{R}^* = \mathbb{R} \cup \{-\infty, \infty\}$ is the set of extended real numbers, and $\mathbb{Q}$ is the set of rational real numbers.
Suppose again that $\boldsymbol{x} = (x_n: n \in \mathbb{N})$ is a sequence of real numbers. Then $\lim_{n \to \infty} x_n$ exists in $\mathbb{R}^*$ if and only if $u_\infty(a, b) < \infty$ for every $a, b \in \mathbb{Q}$ with $a < b$.
Details:
We prove the contrapositive. Note that the following statements are equivalent:
- $\lim_{n \to \infty} x_n$ does not exist in $\mathbb{R}^*$.
- $\liminf_{n \to \infty} x_n < \limsup_{n \to \infty} x_n$.
- There exist $a, b \in \mathbb{Q}$ with $a < b$ and with $x_n < a$ for infinitely many $n \in \mathbb{N}$ and $x_n > b$ for infinitely many $n \in \mathbb{N}$.
- There exist $a, b \in \mathbb{Q}$ with $a < b$ and $u_\infty(a, b) = \infty$.
Clearly [10] is true with $\mathbb{Q}$ replaced with $\mathbb{R}$, but the countability of $\mathbb{Q}$ will be important in the martingale convergence theorem. As a simple corollary, if $\boldsymbol{x}$ is bounded and $u_\infty(a, b) < \infty$ for every $a, b \in \mathbb{Q}$ with $a < b$, then $\boldsymbol{x}$ converges in $\mathbb{R}$. The up-crossing inequality for a discrete-time martingale $\boldsymbol{X}$ gives an upper bound on the expected number of up-crossings of $\boldsymbol{X}$ up to time $n \in \mathbb{N}$ in terms of a moment of $X_n$.
Suppose that $\boldsymbol{X} = \{X_n: n \in \mathbb{N}\}$ satisfies the basic assumptions with respect to the filtration $\mathfrak{F} = \{\mathscr{F}_n: n \in \mathbb{N}\}$, and let $a, b \in \mathbb{R}$ with $a < b$. Let $U_n = u_n(a, b)$, the random number of up-crossings of $[a, b]$ by $\boldsymbol{X}$ up to time $n \in \mathbb{N}$.
- If $\boldsymbol{X}$ is a super-martingale relative to $\mathfrak{F}$ then
$$\mathbb{E}(U_n) \le \frac{1}{b - a}\mathbb{E}\left[(X_n - a)^-\right] \le \frac{1}{b - a}\left[\mathbb{E}(|X_n|) + |a|\right]$$
- If $\boldsymbol{X}$ is a sub-martingale relative to $\mathfrak{F}$ then
$$\mathbb{E}(U_n) \le \frac{1}{b - a}\mathbb{E}\left[(X_n - a)^+\right] \le \frac{1}{b - a}\left[\mathbb{E}(|X_n|) + |a|\right]$$
Details:
In the context of definition [8], let $t_k$ and $s_k$ for $k \in \mathbb{N}_+$ be the random times obtained by applying the definition to the process $\boldsymbol{X}$. These are the random times that define the up-crossings of $[a, b]$ by $\boldsymbol{X}$. Let $n \in \mathbb{N}_+$ and then define
$$Y_n = \sum_{k=1}^{n} \left(X_{s_k \wedge n} - X_{t_k \wedge n}\right)$$
To understand the sum, let's take cases for the $k$th term $X_{s_k \wedge n} - X_{t_k \wedge n}$:
- If $s_k \le n$ then $X_{s_k \wedge n} - X_{t_k \wedge n} = X_{s_k} - X_{t_k} \ge b - a$. By definition, the first $U_n$ terms are of this form.
- If $t_k \le n < s_k$ then $X_{s_k \wedge n} - X_{t_k \wedge n} = X_n - X_{t_k} \ge X_n - a$. There is at most one such term, with index $U_n + 1$.
- If $n < t_k$ then $X_{s_k \wedge n} - X_{t_k \wedge n} = X_n - X_n = 0$.
Hence $Y_n \ge (b - a) U_n + (X_n - a) \mathbf{1}(t_{U_n + 1} \le n)$ and so
$$Y_n \ge (b - a) U_n - (X_n - a)^-$$
Next note that $t_k \wedge n$ and $s_k \wedge n$ are bounded stopping times and of course $t_k \wedge n \le s_k \wedge n$.
- If $\boldsymbol{X}$ is a super-martingale, it follows from the optional stopping theorem that
$$\mathbb{E}\left(X_{s_k \wedge n}\right) \le \mathbb{E}\left(X_{t_k \wedge n}\right), \quad k \in \mathbb{N}_+$$
and therefore $\mathbb{E}(Y_n) \le 0$. Finally, $(b - a) U_n - (X_n - a)^- \le Y_n$. Taking expected values gives
$$(b - a)\mathbb{E}(U_n) - \mathbb{E}\left[(X_n - a)^-\right] \le 0$$
- If $\boldsymbol{X}$ is a sub-martingale, apply the argument above to the process $\{(X_n - a)^+: n \in \mathbb{N}\}$, which is also a sub-martingale since $u \mapsto (u - a)^+$ is increasing and convex. The up-crossings of $[a, b]$ by $\boldsymbol{X}$ are the same as the up-crossings of $[0, b - a]$ by this nonnegative process. Writing the total change of the process on $\{0, 1, \ldots, n\}$ as the sum of the changes over the up-crossing intervals and the changes over the remaining intervals, and noting that the latter have nonnegative expected value by the optional stopping theorem, we get $(b - a)\mathbb{E}(U_n) \le \mathbb{E}\left[(X_n - a)^+\right]$.
The remaining parts of the inequality follow since $(u - a)^- \le |u| + |a|$ and $(u - a)^+ \le |u| + |a|$ for $u \in \mathbb{R}$.
The process $\boldsymbol{Y} = \{Y_n: n \in \mathbb{N}_+\}$ in the proof can be viewed as a transform of $\boldsymbol{X}$ by a predictable process. Specifically, for $j \in \mathbb{N}_+$, let $I_j = 1$ if $t_k < j \le s_k$ for some $k \in \mathbb{N}_+$, and let $I_j = 0$ otherwise. Since $t_k$ and $s_k$ are stopping times, note that $\{I_j = 1\} \in \mathscr{F}_{j-1}$ for $j \in \mathbb{N}_+$. Hence the process $\boldsymbol{I} = \{I_j: j \in \mathbb{N}_+\}$ is predictable with respect to $\mathfrak{F}$. Moreover, the transform of $\boldsymbol{X}$ by $\boldsymbol{I}$ is
$$(\boldsymbol{I} \cdot \boldsymbol{X})_n = \sum_{j=1}^{n} I_j \left(X_j - X_{j-1}\right) = Y_n, \quad n \in \mathbb{N}_+$$
Since $\boldsymbol{I}$ is a nonnegative process, if $\boldsymbol{X}$ is a martingale (sub-martingale, super-martingale), then $\boldsymbol{I} \cdot \boldsymbol{X}$ is also a martingale (sub-martingale, super-martingale).
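The up-crossing inequality is also easy to probe numerically. The following self-contained Python sketch uses a random walk with downward drift (steps $+1$ with probability $0.45$ and $-1$ with probability $0.55$), which is a super-martingale; the interval and horizon are arbitrary choices.

```python
import random

random.seed(4)

# Monte Carlo check of E(U_n) <= E[(X_n - a)^-] / (b - a) for the
# super-martingale given by a random walk with downward drift.
runs, n, a, b = 20_000, 100, -2.0, 2.0
total_up, total_neg_part = 0.0, 0.0
for _ in range(runs):
    x, below, crossings = 0, False, 0
    for _ in range(n):
        x += 1 if random.random() < 0.45 else -1
        if not below and x <= a:
            below = True
        elif below and x >= b:
            crossings += 1
            below = False
    total_up += crossings
    total_neg_part += max(a - x, 0.0)   # (X_n - a)^-
print(f"E(U_n)                   ~ {total_up / runs:.4f}")
print(f"E[(X_n - a)^-] / (b - a) ~ {total_neg_part / (runs * (b - a)):.4f}")
```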
Of course if $\boldsymbol{X}$ is a martingale with respect to $\mathfrak{F}$ then both inequalities apply. In continuous time, as usual, the concepts are more complicated and technical.
Suppose that $f: [0, \infty) \to \mathbb{R}$, that $D \subseteq [0, \infty)$, and that $a, b \in \mathbb{R}$ with $a < b$.
- If $D$ is finite, define $t_1 = \inf\{s \in D: f(s) \le a\}$ and then recursively define
$$s_k = \inf\{s \in D: s > t_k, f(s) \ge b\}, \; t_{k+1} = \inf\{s \in D: s > s_k, f(s) \le a\}, \quad k \in \mathbb{N}_+$$
The number of up-crossings of the interval $[a, b]$ by the function $f$ restricted to $D$ is $u_D = \#\{k \in \mathbb{N}_+: s_k < \infty\}$.
- If $D$ is infinite, the number of up-crossings of the interval $[a, b]$ by $f$ restricted to $D$ is
$$u_D = \sup\{u_F: F \text{ is a finite subset of } D\}$$
To simplify the notation, we will let $u_t = u_{[0, t]}$, the number of up-crossings of $[a, b]$ by $f$ on $[0, t]$, and $u_\infty = u_{[0, \infty)}$, the total number of up-crossings of $[a, b]$ by $f$. In continuous time, the definition of up-crossings is built out of finite subsets of $[0, \infty)$ for measurability concerns, which arise when we replace the deterministic function $f$ with a stochastic process $\boldsymbol{X}$. Here are the simple properties that are analogous to our previous ones.
Suppose again that $f: [0, \infty) \to \mathbb{R}$ and that $a, b \in \mathbb{R}$ with $a < b$.
- If $C, D \subseteq [0, \infty)$ with $C \subseteq D$, then $u_C \le u_D$.
- If $(D_1, D_2, \ldots)$ is an increasing sequence of subsets of $[0, \infty)$ and $D = \bigcup_{n=1}^\infty D_n$, then $u_{D_n} \to u_D$ as $n \to \infty$.
- If $c, d \in \mathbb{R}$ with $c \le a < b \le d$, and $D \subseteq [0, \infty)$, then the number of up-crossings of $[c, d]$ by $f$ restricted to $D$ is no more than the number of up-crossings of $[a, b]$: $u_D(c, d) \le u_D(a, b)$.
Details:
- The result follows easily from the definitions if $C$ is finite (and $D$ either finite or infinite). If $C$ is infinite (and hence so is $D$), note that
$$u_C = \sup\{u_F: F \text{ is a finite subset of } C\} \le \sup\{u_F: F \text{ is a finite subset of } D\} = u_D$$
- Since $u_D$ is increasing in $D$ (in the subset partial order), note that if $F \subseteq [0, \infty)$ is finite, then $F \subseteq D$ if and only if $F \subseteq D_n$ for some $n \in \mathbb{N}_+$. Hence $u_{D_n} \to u_D$ as $n \to \infty$.
- Every up-crossing of $[c, d]$ is an up-crossing of $[a, b]$.
The following result is the reason for studying up-crossings in the first place. Note that the definition built from finite sets is sufficient.
Suppose that $f: [0, \infty) \to \mathbb{R}$. Then $\lim_{t \to \infty} f(t)$ exists in $\mathbb{R}^*$ if and only if $u_\infty(a, b) < \infty$ for every $a, b \in \mathbb{Q}$ with $a < b$.
Details:
As in the discrete-time case, we prove the contrapositive. The proof is almost the same: the following statements are equivalent:
- $\lim_{t \to \infty} f(t)$ does not exist in $\mathbb{R}^*$.
- $\liminf_{t \to \infty} f(t) < \limsup_{t \to \infty} f(t)$.
- There exist $a, b \in \mathbb{Q}$ with $a < b$, and there exist $s_n, t_n \in [0, \infty)$ for $n \in \mathbb{N}$ with $s_n \to \infty$ and $t_n \to \infty$, such that $f(s_n) < a$ for $n \in \mathbb{N}$ and $f(t_n) > b$ for $n \in \mathbb{N}$.
- There exist $a, b \in \mathbb{Q}$ with $a < b$ and $u_\infty(a, b) = \infty$.
Finally, here is the up-crossing inequality for martingales in continuous time. Once again, the inequality gives a bound on the expected number of up-crossings.
Suppose that $\boldsymbol{X} = \{X_t: t \in [0, \infty)\}$ satisfies the basic assumptions with respect to the filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$, and let $a, b \in \mathbb{R}$ with $a < b$. Let $U_t$ denote the random number of up-crossings of $[a, b]$ by $\boldsymbol{X}$ up to time $t \in [0, \infty)$.
- If $\boldsymbol{X}$ is a super-martingale relative to $\mathfrak{F}$ then
$$\mathbb{E}(U_t) \le \frac{1}{b - a}\mathbb{E}\left[(X_t - a)^-\right] \le \frac{1}{b - a}\left[\mathbb{E}(|X_t|) + |a|\right]$$
- If $\boldsymbol{X}$ is a sub-martingale relative to $\mathfrak{F}$ then
$$\mathbb{E}(U_t) \le \frac{1}{b - a}\mathbb{E}\left[(X_t - a)^+\right] \le \frac{1}{b - a}\left[\mathbb{E}(|X_t|) + |a|\right]$$
Details:
Suppose that $\boldsymbol{X}$ is a sub-martingale; the proof for a super-martingale is analogous. Fix $t \in (0, \infty)$ and $a, b \in \mathbb{R}$ with $a < b$. For $D \subseteq [0, t]$ let $U_D$ denote the number of up-crossings of $[a, b]$ by $\boldsymbol{X}$ restricted to $D$. Suppose that $D$ is finite and that $t$ is the maximum of $D$. Since $\boldsymbol{X}$ restricted to $D$ is also a sub-martingale, the discrete-time up-crossing theorem applies and so
$$\mathbb{E}(U_D) \le \frac{1}{b - a}\mathbb{E}\left[(X_t - a)^+\right]$$
Since $U_t = \sup\{U_D: D \subseteq [0, t] \text{ is finite}\}$, there exists finite $D_n \subseteq [0, t]$ for $n \in \mathbb{N}$ with $U_{D_n} \to U_t$ as $n \to \infty$. In particular, $U_t$ is measurable. By property (a) in [13], there exists such a sequence with $D_n$ increasing in $n \in \mathbb{N}$ and $t \in D_n$ for each $n \in \mathbb{N}$. By the monotone convergence theorem, $\mathbb{E}(U_{D_n}) \to \mathbb{E}(U_t)$ as $n \to \infty$. So by the displayed equation above,
$$\mathbb{E}(U_t) \le \frac{1}{b - a}\mathbb{E}\left[(X_t - a)^+\right]$$
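The role of the finite sets and refining grids in the proof can be visualized with a deterministic example. The following Python sketch counts up-crossings of the cosine function over dyadic grids of increasing rank; the counts increase to the true value 4, just as $U_{D_n} \uparrow U_t$ in the proof. The function, interval, and time window are our own arbitrary choices.

```python
import math

def upcrossings_on_grid(f, grid, a, b):
    """Up-crossings of [a, b] by f restricted to the finite set grid."""
    count, below = 0, False
    for t in grid:
        v = f(t)
        if not below and v <= a:
            below = True
        elif below and v >= b:
            count += 1
            below = False
    return count

# Up-crossings of [-0.95, 0.95] by cos on [0, 8*pi], over dyadic grids
# of rank 0, 1, 2, ...; coarse grids can miss crossings.
f, a, b, t_max = math.cos, -0.95, 0.95, 8 * math.pi
for rank in range(6):
    step = 1 / 2 ** rank
    grid = [k * step for k in range(int(t_max / step) + 1)] + [t_max]
    print(rank, upcrossings_on_grid(f, grid, a, b))
```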
Examples and Applications
Kolmogorov's Inequality
Suppose that $\boldsymbol{X} = (X_1, X_2, \ldots)$ is a sequence of independent variables with $\mathbb{E}(X_i) = 0$ and $\mathrm{var}(X_i) < \infty$ for $i \in \mathbb{N}_+$. Let $\boldsymbol{Y} = \{Y_n: n \in \mathbb{N}\}$ be the partial sum process associated with $\boldsymbol{X}$, so that
$$Y_n = \sum_{i=1}^{n} X_i, \quad n \in \mathbb{N}$$
From the introduction we know that $\boldsymbol{Y}$ is a martingale. A simple application of the maximal inequality gives the following result, which is known as Kolmogorov's inequality, named for Andrei Kolmogorov.
For $n \in \mathbb{N}_+$, let $U_n = \max\{|Y_i|: i \in \{1, 2, \ldots, n\}\}$. Then
$$\mathbb{P}(U_n \ge x) \le \frac{1}{x^2}\mathrm{var}(Y_n) = \frac{1}{x^2}\sum_{i=1}^{n}\mathrm{var}(X_i), \quad x \in (0, \infty)$$
Details:
As noted above, $\boldsymbol{Y}$ is a martingale. Since the function $u \mapsto u^2$ on $\mathbb{R}$ is convex, $\boldsymbol{Y}^2 = \{Y_n^2: n \in \mathbb{N}\}$ is a sub-martingale. Let $V_n = \max\{Y_i^2: i \in \{1, 2, \ldots, n\}\}$ for $n \in \mathbb{N}_+$, and let $x \in (0, \infty)$. Applying the maximal inequality for sub-martingales we have
$$\mathbb{P}(U_n \ge x) = \mathbb{P}(V_n \ge x^2) \le \frac{1}{x^2}\mathbb{E}(Y_n^2) = \frac{1}{x^2}\mathrm{var}(Y_n)$$
Finally, since $\boldsymbol{X}$ is an independent sequence, $\mathrm{var}(Y_n) = \sum_{i=1}^{n} \mathrm{var}(X_i)$.
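Here is a quick Monte Carlo check of Kolmogorov's inequality, using steps uniform on $\{-1, 1\}$ (an arbitrary choice with mean 0 and variance 1):

```python
import random

random.seed(5)

# Monte Carlo check of P(max_{k <= n} |Y_k| >= x) <= var(Y_n) / x^2
# for i.i.d. steps uniform on {-1, 1}, so that var(Y_n) = n.
runs, n, x = 50_000, 100, 25.0
hits = 0
for _ in range(runs):
    y, peak = 0, 0
    for _ in range(n):
        y += random.choice((-1, 1))
        peak = max(peak, abs(y))
    if peak >= x:
        hits += 1
print(f"P(U_n >= x)  ~ {hits / runs:.5f}")
print(f"var(Y_n)/x^2 = {n / x ** 2:.5f}")
```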
Red and Black
In the game of red and black, a gambler plays a sequence of Bernoulli games with success parameter $p \in (0, 1)$ at even stakes. The gambler starts with an initial fortune $x$ and plays until either she is ruined or reaches a specified target fortune $a$, where $x, a \in (0, \infty)$ with $x < a$. When $p \le \frac{1}{2}$, so that the games are fair or unfair, an optimal strategy is bold play: on each game, the gambler bets her entire fortune or just what is needed to reach the target, whichever is smaller. In the section on bold play we showed that when $p = \frac{1}{2}$, so that the games are fair, the probability of winning (that is, reaching the target $a$ starting with $x$) is $x / a$. We can use the maximal inequality for super-martingales to show that indeed, one cannot do better.
To set up the notation and review various concepts, let $X_0$ denote the gambler's initial fortune and let $I_n$ denote the outcome of game $n \in \mathbb{N}_+$, where 1 denotes a win and $-1$ a loss. So $\boldsymbol{I} = (I_1, I_2, \ldots)$ is a sequence of independent variables with $\mathbb{P}(I_n = 1) = p$ and $\mathbb{P}(I_n = -1) = 1 - p$ for $n \in \mathbb{N}_+$. (The initial fortune $X_0$ has an unspecified distribution on $(0, \infty)$.) The gambler is at a casino after all, so of course $p \le \frac{1}{2}$. Let
$$X_n = X_0 + \sum_{i=1}^{n} I_i, \quad n \in \mathbb{N}$$
so that $\boldsymbol{X} = \{X_n: n \in \mathbb{N}\}$ is the partial sum process associated with $(X_0, I_1, I_2, \ldots)$. Recall that $\boldsymbol{X}$ is also known as the simple random walk with parameter $p$, and since $p \le \frac{1}{2}$, $\boldsymbol{X}$ is a super-martingale. The process $\boldsymbol{I}$ is the difference sequence associated with $\boldsymbol{X}$. Next let $B_n$ denote the amount that the gambler bets on game $n \in \mathbb{N}_+$. The process $\boldsymbol{B} = (B_1, B_2, \ldots)$ is predictable with respect to $\boldsymbol{X}$, so that $B_n$ is measurable with respect to $\mathscr{F}_{n-1} = \sigma\{X_0, X_1, \ldots, X_{n-1}\}$ for $n \in \mathbb{N}_+$. So the gambler's fortune after $n$ games is
$$Z_n = X_0 + \sum_{i=1}^{n} B_i I_i, \quad n \in \mathbb{N}$$
Recall that $\boldsymbol{Z} = \{Z_n: n \in \mathbb{N}\}$ is the transform of $\boldsymbol{X}$ with $\boldsymbol{B}$, denoted $\boldsymbol{B} \cdot \boldsymbol{X}$. The gambler is not allowed to go into debt and so we must have $B_n \le Z_{n-1}$ for $n \in \mathbb{N}_+$: the gambler's bet on game $n$ cannot exceed her fortune after game $n - 1$. What's the probability that the gambler can ever reach or exceed the target $a$ starting with fortune $x$?
Let $Z_\infty = \sup\{Z_n: n \in \mathbb{N}\}$. Suppose that $x, a \in (0, \infty)$ with $x \le a$ and that $X_0 = x$. Then
$$\mathbb{P}(Z_\infty \ge a) \le \frac{x}{a}$$
Details:
Since $\boldsymbol{X}$ is a super-martingale and $\boldsymbol{B}$ is nonnegative and predictable, the transform $\boldsymbol{Z} = \boldsymbol{B} \cdot \boldsymbol{X}$ is also a super-martingale, and $\boldsymbol{Z}$ is nonnegative since the gambler cannot go into debt. By the maximal inequality for nonnegative super-martingales [5]:
$$\mathbb{P}(Z_\infty \ge a) \le \frac{1}{a}\mathbb{E}(Z_0) = \frac{x}{a}$$
Note that the only assumptions made on the gambler's sequence of bets $\boldsymbol{B}$ are that the sequence is predictable, so that the gambler cannot see into the future, and that the gambler cannot go into debt. Under these basic assumptions, no strategy can do any better than bold play. However, there are strategies that do as well as bold play; these are variations on bold play.
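For a numerical illustration of the bound, the following Python sketch simulates bold play (one particular predictable, no-debt strategy) with an unfavorable success parameter; the specific values of $x$, $a$, and $p$ are arbitrary choices.

```python
import random

random.seed(6)

# Estimate the probability of reaching the target under bold play, and
# compare with the bound x / a. Any predictable, no-debt strategy obeys
# the same bound when p <= 1/2.
runs, x, a, p = 100_000, 5.0, 16.0, 0.47
wins = 0
for _ in range(runs):
    fortune = x
    while 0 < fortune < a:
        bet = min(fortune, a - fortune)   # bold play
        fortune += bet if random.random() < p else -bet
    if fortune >= a:
        wins += 1
print(f"P(reach target) ~ {wins / runs:.4f}")
print(f"x / a           = {x / a:.4f}")
```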
Open the simulation of bold play and set $p = \frac{1}{2}$. Play the game with various values of the initial and target fortunes.