RemNote Community

Stochastic process - Specific Process Types

Understand martingale fundamentals, Lévy process characteristics, and major stochastic process types and their applications.


Summary

Martingales, Lévy Processes, and Stochastic Processes: A Comprehensive Guide

Martingales: The Foundation of Fair Games

Understanding Martingales

A martingale is a stochastic process that captures the intuitive notion of a fair game. In a fair game, your expected wealth at any future time, given all information available today, should equal your current wealth. This is the central idea behind martingales.

Formally, let $\{M_t\}$ be a stochastic process and take times $s < t$. The process is a martingale with respect to a filtration $\{\mathcal{F}_t\}$ (which represents all information available up to time $t$) if:

$$E[M_t \mid \mathcal{F}_s] = M_s$$

This equation says: if you know the entire history of the process up to time $s$, your best prediction of the value at the later time $t$ is simply the current value $M_s$.

Why this matters: this property immediately tells us that the process has zero expected change in the future, making it fundamentally different from processes that systematically increase or decrease.

Building Martingales from Random Variables

One powerful way to construct martingales is from independent random variables. For example, consider a sequence of independent, identically distributed random variables $X_1, X_2, X_3, \ldots$ with zero mean (so $E[X_i] = 0$), and define:

$$M_n = X_1 + X_2 + \cdots + X_n$$

Then $M_n$ is a martingale. To see why:

$$E[M_{n+1} \mid \mathcal{F}_n] = E[M_n + X_{n+1} \mid \mathcal{F}_n] = M_n + E[X_{n+1}] = M_n + 0 = M_n$$

The second equality holds because $M_n$ is known given the information up to time $n$, and the third equality uses that $X_{n+1}$ is independent of all prior information and has expectation zero.

In continuous time, the Wiener process (Brownian motion) can be used as a building block.
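The sum construction can be checked empirically. The sketch below (function name and parameters are illustrative, not from the source) simulates many paths of $M_n$ with $X_i = \pm 1$ and averages $M_n$ across paths at a few checkpoints; the sample means stay near zero, reflecting the zero-expected-change property.

```python
import random

def martingale_mean_drift(n_paths=20000, n_steps=50, seed=0):
    """Estimate E[M_n] for M_n = X_1 + ... + X_n with X_i = +1 or -1.

    Each step is +1 or -1 with probability 1/2, so E[X_i] = 0 and
    M_n is a martingale started at 0: its mean should stay at 0
    for every n, even though individual paths wander widely.
    """
    rng = random.Random(seed)
    checkpoints = {10: 0.0, 25: 0.0, 50: 0.0}
    for _ in range(n_paths):
        m = 0
        for n in range(1, n_steps + 1):
            m += 1 if rng.random() < 0.5 else -1
            if n in checkpoints:
                checkpoints[n] += m
    return {n: total / n_paths for n, total in checkpoints.items()}

drift = martingale_mean_drift()  # sample means of M_10, M_25, M_50
```

With 20,000 paths the standard error at $n = 50$ is about $\sqrt{50/20000} \approx 0.05$, so each checkpoint mean should sit well within a quarter of a unit of zero.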
Suitable transformations of the Wiener process produce other martingales, which makes Brownian motion a fundamental tool in the theory.

Convergence of Martingales

One of the most powerful theoretical results about martingales is that, under suitable moment conditions, they must converge. The Martingale Convergence Theorem states that a martingale whose expected absolute values stay bounded (one bounded in $L^1$) converges almost surely; if the martingale is additionally uniformly integrable, it also converges in $L^1$.

What does this mean practically? If you have a bounded martingale, or one satisfying such a growth condition, the sequence $M_1, M_2, M_3, \ldots$ is guaranteed to converge to some limit with probability one. This is surprisingly powerful because it often allows you to solve problems without explicitly calculating probabilities: you show a process is a martingale, verify the convergence conditions, and immediately conclude that it converges.

Why Martingales Matter in Problem Solving

Many probability problems can be solved elegantly through martingale methods. The general approach is:

1. Identify a martingale hidden within the problem.
2. Analyze its behavior using the properties discussed above.
3. Draw conclusions about the original problem.

For instance, optional stopping problems (finding the expected value when stopping at a random time) and maximal inequalities (bounding the probability that a process ever gets very large) both rely fundamentally on martingale theory.

<extrainfo> Applications in Finance and Economics

Beyond pure probability, martingales appear extensively in financial mathematics. When an asset price is modeled as a martingale under a risk-neutral measure, this captures the idea of no arbitrage: you cannot consistently make a riskless profit. This application is fundamental to modern derivatives pricing, though it is primarily a specialized topic in mathematical finance rather than core probability theory.
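A classic bounded martingale, not discussed above but standard in textbooks, illustrates the convergence theorem: the red-ball fraction in a Pólya urn. The sketch below (names and parameters are illustrative) shows the fraction settling down, as the theorem guarantees for a martingale confined to $[0, 1]$.

```python
import random

def polya_urn_fractions(n_draws=5000, seed=1):
    """Track the red-ball fraction in a Polya urn (start: 1 red, 1 black).

    Each draw: pick a ball uniformly at random, put it back together
    with one new ball of the same colour. The red fraction is a
    martingale confined to [0, 1], so the Martingale Convergence
    Theorem guarantees it settles to a limit almost surely.
    """
    rng = random.Random(seed)
    red, total = 1, 2
    fractions = []
    for _ in range(n_draws):
        if rng.random() < red / total:
            red += 1
        total += 1
        fractions.append(red / total)
    return fractions

fracs = polya_urn_fractions()
# late in the run the fraction barely moves any more
late_spread = max(fracs[-1000:]) - min(fracs[-1000:])
```

Note that the limit is itself random (it differs from run to run); what the theorem promises is only that each individual path stops fluctuating.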
</extrainfo>

Lévy Processes: Processes with Stationary, Independent Increments

Defining Lévy Processes

A Lévy process is a stochastic process $\{X_t\}_{t \geq 0}$ with two crucial properties:

Stationary increments: for any $s < t$, the increment $X_t - X_s$ has a distribution that depends only on the time difference $t - s$, not on the absolute times.

Independent increments: if $0 \leq t_1 < t_2 < \cdots < t_n$, then the increments $X_{t_2} - X_{t_1}$, $X_{t_3} - X_{t_2}$, and so on are all independent of each other.

These properties make Lévy processes the natural continuous-time analogues of simple random walks and renewal processes. They are memoryless in a sense: the future evolution depends only on the current position, not on how you got there.

Why these properties matter: stationarity means the "rules of the game" don't change over time, and independence means different time intervals contribute independently to the overall motion. Together, they make the process mathematically tractable.

Key Examples: Brownian Motion and Poisson Processes

Wiener Process (Brownian Motion)

The Wiener process, also called Brownian motion, is the canonical example of a continuous Lévy process. Its increments are normally distributed: the increment over a time interval of length $t$ is $\mathcal{N}(0, t)$. This process is fundamental in probability theory and appears throughout applications in physics, finance, and biology.

Poisson Process

The homogeneous Poisson process is the canonical example of a jump Lévy process. Rather than moving continuously, this process jumps at random times: the number of jumps in any time interval of length $t$ follows a Poisson distribution with parameter $\lambda t$ (where $\lambda$ is the constant jump rate). Unlike Brownian motion, which has continuous paths, the Poisson process has discontinuous paths with jumps at random times. Yet both are Lévy processes, because both have independent, stationary increments.
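Both canonical examples can be simulated directly from their increment distributions. A minimal sketch (function name and parameter values are illustrative): Wiener increments over a window of length `dt` should show sample variance near `dt`, and Poisson jump counts over the same window should average `lam * dt`.

```python
import math
import random

def levy_increment_stats(n=100000, dt=0.5, lam=3.0, seed=2):
    """Sample increments of the two canonical Levy processes.

    Wiener process: an increment over a window of length dt is
    N(0, dt), so its sample variance should be close to dt = 0.5.
    Poisson process with rate lam: the jump count over a window of
    length dt is Poisson(lam * dt), so its sample mean should be
    close to lam * dt = 1.5.
    """
    rng = random.Random(seed)
    var_bm = sum(rng.gauss(0.0, math.sqrt(dt)) ** 2 for _ in range(n)) / n
    total_jumps = 0
    for _ in range(n):
        # count how many Exponential(lam) inter-arrival times fit into dt
        t = rng.expovariate(lam)
        while t <= dt:
            total_jumps += 1
            t += rng.expovariate(lam)
    return var_bm, total_jumps / n

var_bm, mean_jumps = levy_increment_stats()
```

The Poisson counts are generated from exponential inter-arrival times, a standard construction that also makes the independent-increments property visible: each window is counted from fresh exponential draws.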
Point Processes: Random Collections in Space

What is a Point Process?

A point process is simply a random collection of points scattered throughout some space, typically the real line, the positive reals (for time), or higher-dimensional Euclidean space.

Rather than thinking of a point process as producing individual points one at a time (which can be cumbersome), it is often easier to think of it as a random counting measure. This measure counts how many points fall into any given region of space: for any measurable set $A$, the counting measure $N(A)$ gives the (random) number of points in that set.

Why this perspective matters: working with the counting measure lets you use tools from measure theory and makes calculations more systematic.

Relationship to Other Processes

The connection between point processes and other stochastic processes is important:

Renewal processes track the cumulative times between events (like customer arrivals at a store). They are a special type of point process on the positive real line.

Counting processes literally count how many events have occurred by time $t$. They are the dual perspective on point processes.

Many practical applications, from earthquake times to website traffic to medical events, are naturally modeled as point processes.

Important Families of Stochastic Processes

Bernoulli Processes

The Bernoulli process is the simplest model of repetition: a sequence of independent trials, each with the same probability $p$ of success. The classic example is repeated coin flips, even with a biased coin. Formally, let $X_1, X_2, X_3, \ldots$ be independent random variables where each $X_i$ equals 1 (success) with probability $p$ and 0 (failure) with probability $1-p$. This simple structure is the foundation for more complex processes: summing Bernoulli trials gives binomial distributions, and studying their long-run behavior leads to the law of large numbers.
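The Bernoulli process is simple enough to simulate in a few lines. The sketch below (names and the value of $p$ are illustrative) flips a biased coin many times; the running success frequency approaching $p$ is the law of large numbers in action.

```python
import random

def bernoulli_process(p=0.3, n=100000, seed=3):
    """Simulate a Bernoulli process: n independent trials with success
    probability p (a biased coin flipped n times).

    The partial sums are Binomial(k, p) counts, and by the law of
    large numbers the overall success frequency approaches p.
    """
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

trials = bernoulli_process()
frequency = sum(trials) / len(trials)  # long-run frequency, near p = 0.3
```

With $n = 100{,}000$ trials the standard error of the frequency is about $\sqrt{p(1-p)/n} \approx 0.0014$, so the estimate lands very close to 0.3.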
Random Walks: A Fundamental Model

A random walk is a process where at each step you move in some direction (often left or right on a line) determined by randomness. The simplest version is the simple symmetric random walk: at each step, move +1 or −1 with equal probability.

The Gambler's Ruin Problem

One classic application is the gambler's ruin problem: you start with some initial amount, and at each step win or lose $1 with equal probability. What is the probability you eventually reach a target amount before losing all your money? This is a random walk with absorbing barriers (once you hit $0 or your target, you stop). For a fair game the answer is simply the ratio of your starting amount to the target: starting with $i$ and aiming for $N$, the probability of reaching the target before ruin is $i/N$.

<extrainfo> Pólya's Return Theorem

A remarkable result, proved by George Pólya in 1919–1921, characterizes random-walk recurrence:

In 1D and 2D: a symmetric random walk returns to its starting point infinitely often with probability 1.

In 3D or higher: a symmetric random walk returns to its starting point infinitely often with probability 0 (indeed, the probability of ever returning is strictly less than 1); it "escapes to infinity."

This is a beautiful example of how dimension dramatically affects probabilistic behavior. The intuition: in high dimensions there is so much space to wander that you are unlikely to stumble back to the origin. </extrainfo>

Markov Processes and Chains: The Memoryless Property

A Markov process (or Markov chain in discrete time) is a process with a special property: the future depends only on the current state, not on how you arrived there. Formally, given the present state, the future is independent of the past.
This memoryless property is expressed as:

$$P(X_{t+1} = j \mid X_t = i, X_{t-1}, X_{t-2}, \ldots) = P(X_{t+1} = j \mid X_t = i)$$

Historical Development and Key Figures

Andrey Markov introduced Markov chains in 1906, proving that under certain conditions (roughly, when the chain can reach all states and does not cycle deterministically), the distribution of the chain converges to a stationary distribution: a fixed probability distribution that does not change under the chain's transition rules.

Early important examples include:

Ehrenfest dog-flea model (1907): fleas distributed between two dogs jump randomly from one to the other. This models heat exchange and was used to illustrate ideas from ergodic theory.

Galton–Watson branching process (1873): each organism produces a random number of offspring, and the process tracks the population size across generations. A famous result: the population dies out with probability 1 unless the average number of offspring exceeds 1 (excluding the degenerate case where every individual has exactly one offspring).

Continuous-Time Extension

Andrey Kolmogorov in 1931 developed continuous-time Markov processes (diffusions), introducing the Kolmogorov equations:

The forward equation (also called the Fokker–Planck equation) describes how the probability density evolves forward in time.

The backward equation describes the expected value of a future payoff in terms of the current state.

Chapman–Kolmogorov Equation

Sydney Chapman in 1928 derived a fundamental relationship. For transition probabilities $p_{ij}(t)$ (the probability of going from state $i$ to state $j$ in time $t$), the Chapman–Kolmogorov equation states:

$$p_{ij}(s+t) = \sum_k p_{ik}(s)\, p_{kj}(t)$$

This elegant equation says: to go from $i$ to $j$ in time $s+t$, you must go from $i$ to some intermediate state $k$ in time $s$, then from $k$ to $j$ in time $t$; sum over all possible intermediate states.
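In discrete time, the Chapman–Kolmogorov equation is exactly matrix multiplication: the $(m+n)$-step transition matrix is the product of the $m$-step and $n$-step matrices. A minimal sketch with an arbitrary illustrative 3-state chain (the matrix values are made up for the demo):

```python
def mat_mul(a, b):
    """Multiply two square matrices stored as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# An arbitrary 3-state transition matrix for illustration (each row is
# a probability distribution over next states).
P = [[0.9, 0.1, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.3, 0.7]]

P2 = mat_mul(P, P)        # two-step transition probabilities p_ij(2)
P3 = mat_mul(P2, P)       # three-step probabilities p_ij(3)
five_a = mat_mul(P2, P3)  # p_ij(2 + 3): sum over the state after 2 steps
five_b = mat_mul(P3, P2)  # p_ij(3 + 2): same matrix, as the equation demands
```

The inner sum in `mat_mul` is precisely the $\sum_k p_{ik}(s)\, p_{kj}(t)$ of the equation: the intermediate index $k$ ranges over the state the chain occupies at the splitting time.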
Visual Intuition: Random Fields and Spatial Point Processes

Random fields extend the idea of a random variable to multi-dimensional space: rather than a random value at a single location or time, you have a random value assigned to every point in a region. (Image: a random field on a spherical surface; imagine a random height or temperature assigned to each point on Earth.)

Point processes in space look similar but record presence or absence rather than continuous values. (Image: a spatial point process, with random points scattered in 3D space; this is the kind of model used for spatial clustering in ecology, astronomy, or materials science.)

Conclusion: The Interconnected Framework

These tools form an interconnected toolkit for modeling randomness:

Martingales provide a theoretical language for fair games and appear throughout probability theory as a proof technique.

Lévy processes and Markov processes are the two main classes of continuous-time stochastic processes.

Random walks and Bernoulli processes are the discrete-time analogues and often provide intuition for continuous-time behavior.

Point processes model discrete random events scattered in space or time.

Understanding how these concepts relate and when to apply each one is key to mastering stochastic processes.
Flashcards
How is a martingale defined in terms of its conditional expected future value?
The conditional expected future value equals the present value.
What is the formal conditional expectation formula for a martingale $M_t$ at times $s < t$?
$E[M_t \mid \mathcal{F}_s] = M_s$
From what type of random variables can martingales be constructed via transformations?
Independent random variables.
What process is typically used to build continuous-time martingales?
The Wiener process (Brownian motion).
In financial mathematics, what do martingales represent under risk-neutral measures?
Price processes.
What are the two key characteristics of the increments in a Lévy process?
Stationary and independent.
On what does the distribution of an increment $X_{t_{i+1}} - X_{t_i}$ depend in a Lévy process?
The time lag.
What is the index set for a Lévy process?
The set of non-negative real numbers ($[0, \infty)$).
Which Lévy process is characterized by having Gaussian increments?
The Wiener process (Brownian motion).
Which Lévy process is characterized by having Poisson-distributed jumps?
The homogeneous Poisson process.
By what are the random variables in a random field indexed?
Points in a multi-dimensional Euclidean space or manifold.
What is the definition of a point process?
A random collection of points located in a mathematical space.
How can a point process be interpreted in terms of measures?
As a random counting measure.
Which types of processes are studied as special cases within point-process theory?
Renewal processes and counting processes.
What does a Bernoulli process model?
A sequence of independent Bernoulli trials (e.g., repeated biased coin flips).
What type of stochastic process models the classic gambler’s ruin problem?
A one-dimensional simple random walk with absorbing barriers.
In which dimensions does a symmetric random walk return to its start infinitely often with probability one?
One and two dimensions.
What is the probability that a symmetric random walk returns to its start infinitely often in three or more dimensions?
Zero.
What does a Markov chain's distribution converge to under certain conditions?
A stationary distribution (stationary vector).
Which equations did Andrey Kolmogorov introduce for the continuous-time theory of Markov processes?
Kolmogorov forward and backward equations (diffusion equations).
What equation describes transition probabilities over intermediate times in Markov processes?
The Chapman–Kolmogorov equation.

Key Concepts
Stochastic Processes
Martingale
Lévy process
Random field
Point process
Bernoulli process
Random walk
Markov chain
Wiener process
Poisson process
Galton–Watson process