Dynkin system

Family closed under complements and countable disjoint unions

A Dynkin system,[1] named after Eugene Dynkin, is a collection of subsets of a universal set $\Omega$ satisfying a set of axioms weaker than those of a 𝜎-algebra. Dynkin systems are sometimes referred to as 𝜆-systems (Dynkin himself used this term) or d-systems.[2] These set families have applications in measure theory and probability.

A major application of 𝜆-systems is the π-𝜆 theorem, discussed below.

Definition

Let $\Omega$ be a nonempty set, and let $D$ be a collection of subsets of $\Omega$ (that is, $D$ is a subset of the power set of $\Omega$). Then $D$ is a Dynkin system if

  1. $\Omega \in D$;
  2. $D$ is closed under complements of subsets in supersets: if $A, B \in D$ and $A \subseteq B$, then $B \setminus A \in D$;
  3. $D$ is closed under countable increasing unions: if $A_1 \subseteq A_2 \subseteq A_3 \subseteq \cdots$ is an increasing sequence[note 1] of sets in $D$, then $\bigcup_{n=1}^{\infty} A_n \in D$.

It is easy to check[proof 1] that any Dynkin system $D$ satisfies:

  4. $\varnothing \in D$;
  5. $D$ is closed under complements in $\Omega$: if $A \in D$, then $\Omega \setminus A \in D$;
    • Taking $A := \Omega$ shows that $\varnothing \in D$.
  6. $D$ is closed under countable unions of pairwise disjoint sets: if $A_1, A_2, A_3, \ldots$ is a sequence of pairwise disjoint sets in $D$ (meaning that $A_i \cap A_j = \varnothing$ for all $i \neq j$), then $\bigcup_{n=1}^{\infty} A_n \in D$.
    • To be clear, this property also holds for finite sequences $A_1, \ldots, A_n$ of pairwise disjoint sets (by letting $A_i := \varnothing$ for all $i > n$).

Conversely, it is easy to check that a family of sets satisfying conditions 4-6 is a Dynkin system.[proof 2] For this reason, some authors adopt conditions 4-6 as an equivalent definition of a Dynkin system.
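
Both characterizations are easy to test mechanically over a finite universe, where the countable conditions reduce to finite ones. The following Python sketch (the helper names is_dynkin_123 and is_dynkin_456 are illustrative, not standard) checks conditions 1-3 and conditions 4-6 for a family of subsets of a small $\Omega$:

```python
from itertools import combinations

def is_dynkin_123(omega, family):
    """Check conditions (1)-(3) for a family of subsets of a *finite* omega.

    Over a finite universe, condition (3) (closure under increasing unions)
    is automatic, because an increasing sequence of members stabilises at
    its largest term; so only (1) and (2) need explicit checking.
    """
    fam = {frozenset(s) for s in family}
    if frozenset(omega) not in fam:                 # (1) Omega belongs to D
        return False
    return all(b - a in fam                         # (2) proper differences
               for a in fam for b in fam if a <= b)

def is_dynkin_456(omega, family):
    """Check conditions (4)-(6) for a family of subsets of a *finite* omega.

    Countable disjoint unions reduce to finite ones here, and by induction
    it suffices to test the union of each disjoint pair.
    """
    fam = {frozenset(s) for s in family}
    om = frozenset(omega)
    if frozenset() not in fam:                      # (4) empty set belongs to D
        return False
    if any(om - a not in fam for a in fam):         # (5) complements in Omega
        return False
    return all((a | b) in fam                       # (6) disjoint unions
               for a, b in combinations(fam, 2) if not (a & b))

# The example discussed below: {empty set, {1}, {2,3,4}, Omega} over Omega = {1,2,3,4}.
omega = {1, 2, 3, 4}
family = [set(), {1}, {2, 3, 4}, {1, 2, 3, 4}]
print(is_dynkin_123(omega, family), is_dynkin_456(omega, family))  # True True
```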

An important fact is that any Dynkin system that is also a π-system (that is, closed under finite intersections) is a 𝜎-algebra. This can be verified by noting that closure under complements in $\Omega$ (which follows from conditions 1 and 2) together with closure under finite intersections implies closure under finite unions, and condition 3 then upgrades finite unions to countable unions.
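
A minimal sketch of the two steps in that argument, written with the properties named above:

```latex
% Complements in Omega together with finite intersections give finite unions:
A \cup B \;=\; \Omega \setminus \bigl( (\Omega \setminus A) \cap (\Omega \setminus B) \bigr) \in D .
% The finite unions increase with n, so condition 3 gives countable unions:
\bigcup_{n=1}^{\infty} A_n \;=\; \bigcup_{n=1}^{\infty} \bigl( A_1 \cup \cdots \cup A_n \bigr) \in D .
```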

Given any collection $\mathcal{J}$ of subsets of $\Omega$, there exists a unique Dynkin system, denoted $D\{\mathcal{J}\}$, which is minimal with respect to containing $\mathcal{J}$. That is, if $\tilde{D}$ is any Dynkin system containing $\mathcal{J}$, then $D\{\mathcal{J}\} \subseteq \tilde{D}$. $D\{\mathcal{J}\}$ is called the Dynkin system generated by $\mathcal{J}$. For instance, $D\{\varnothing\} = \{\varnothing, \Omega\}$. For another example, let $\Omega = \{1, 2, 3, 4\}$ and $\mathcal{J} = \{\{1\}\}$; then $D\{\mathcal{J}\} = \{\varnothing, \{1\}, \{2, 3, 4\}, \Omega\}$.
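
For a finite $\Omega$, the Dynkin system generated by a collection can be computed by repeatedly closing $\{\Omega\} \cup \mathcal{J}$ under the defining operations until nothing new appears. The Python sketch below (the helper name generate_dynkin is illustrative) reproduces the second example:

```python
def generate_dynkin(omega, generators):
    """Smallest Dynkin system on a finite omega containing the generators.

    Repeatedly adds proper differences (B minus A for A a subset of B) and
    unions of disjoint pairs until the family stops growing; over a finite
    universe this closure equals the generated Dynkin system D{J}.
    """
    fam = {frozenset(omega)} | {frozenset(g) for g in generators}
    while True:
        new = set(fam)
        for a in fam:
            for b in fam:
                if a <= b:
                    new.add(b - a)      # closure under proper differences
                if not (a & b):
                    new.add(a | b)      # closure under disjoint unions
        if new == fam:
            return fam
        fam = new

dynkin = generate_dynkin({1, 2, 3, 4}, [{1}])
print(sorted((sorted(s) for s in dynkin), key=lambda t: (len(t), t)))
# [[], [1], [2, 3, 4], [1, 2, 3, 4]]
```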

Sierpiński–Dynkin's π-λ theorem

Sierpiński–Dynkin's π-𝜆 theorem:[3] If $P$ is a π-system and $D$ is a Dynkin system with $P \subseteq D$, then $\sigma\{P\} \subseteq D$.

In other words, the 𝜎-algebra generated by $P$ is contained in $D$. Thus a Dynkin system contains a π-system if and only if it contains the 𝜎-algebra generated by that π-system.

One application of the Sierpiński–Dynkin π-𝜆 theorem is the uniqueness of the measure that assigns to each interval its length (the Lebesgue measure):

Let $(\Omega, \mathcal{B}, \ell)$ be the unit interval $[0,1]$ with the Lebesgue measure on Borel sets. Let $m$ be another measure on $\Omega$ satisfying $m[(a,b)] = b - a$, and let $D$ be the family of sets $S$ such that $m[S] = \ell[S]$. Let $I := \{(a,b), [a,b), (a,b], [a,b] : 0 < a \leq b < 1\}$, and observe that $I$ is closed under finite intersections, that $I \subseteq D$, and that $\mathcal{B}$ is the 𝜎-algebra generated by $I$. It may be shown that $D$ satisfies the above conditions for a Dynkin system. From the Sierpiński–Dynkin π-𝜆 theorem it follows that $D$ in fact includes all of $\mathcal{B}$, which is equivalent to showing that the Lebesgue measure is unique on $\mathcal{B}$.
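
The verification that $D$ is a Dynkin system is a short exercise in basic measure properties. A sketch, under the additional (implicit) assumption that $m(\Omega) = 1 = \ell(\Omega)$, so that every value below is finite:

```latex
% (1)  m(\Omega) = 1 = \ell(\Omega), hence \Omega \in D.
% (2)  If A \subseteq B with A, B \in D, then additivity gives
m(B \setminus A) \;=\; m(B) - m(A) \;=\; \ell(B) - \ell(A) \;=\; \ell(B \setminus A),
%      hence B \setminus A \in D.
% (3)  If A_1 \subseteq A_2 \subseteq \cdots lie in D, continuity from below gives
m\Bigl( \bigcup_{n=1}^{\infty} A_n \Bigr) \;=\; \lim_{n \to \infty} m(A_n)
  \;=\; \lim_{n \to \infty} \ell(A_n) \;=\; \ell\Bigl( \bigcup_{n=1}^{\infty} A_n \Bigr).
```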

Application to probability distributions


The π-𝜆 theorem motivates the common definition of the probability distribution of a random variable $X : (\Omega, \mathcal{F}, \operatorname{P}) \to \mathbb{R}$ in terms of its cumulative distribution function. Recall that the cumulative distribution function of a random variable is defined as

$F_X(a) = \operatorname{P}[X \leq a], \qquad a \in \mathbb{R},$
whereas the seemingly more general law of the variable is the probability measure
$\mathcal{L}_X(B) = \operatorname{P}\left[X^{-1}(B)\right] \quad \text{for all } B \in \mathcal{B}(\mathbb{R}),$
where $\mathcal{B}(\mathbb{R})$ is the Borel 𝜎-algebra on $\mathbb{R}$. The random variables $X : (\Omega, \mathcal{F}, \operatorname{P}) \to \mathbb{R}$ and $Y : (\tilde{\Omega}, \tilde{\mathcal{F}}, \tilde{\operatorname{P}}) \to \mathbb{R}$ (defined on two possibly different probability spaces) are equal in distribution (or law), denoted $X \stackrel{\mathcal{D}}{=} Y$, if they have the same cumulative distribution function, that is, if $F_X = F_Y$. The motivation for the definition stems from the observation that if $F_X = F_Y$, then $\mathcal{L}_X$ and $\mathcal{L}_Y$ agree on the π-system $\{(-\infty, a] : a \in \mathbb{R}\}$, which generates $\mathcal{B}(\mathbb{R})$; hence, by the same argument as in the Lebesgue-measure example above, $\mathcal{L}_X = \mathcal{L}_Y$.
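
Indeed, the collection of half-lines is a π-system because it is closed under pairwise intersections, and it generates the Borel 𝜎-algebra because, for example, half-open intervals arise as differences of half-lines:

```latex
(-\infty, a] \cap (-\infty, b] \;=\; (-\infty, \min\{a, b\}],
\qquad
(a, b] \;=\; (-\infty, b] \setminus (-\infty, a] \quad (a \le b).
```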

A similar result holds for the joint distribution of a random vector. For example, suppose $X$ and $Y$ are two random variables defined on the same probability space $(\Omega, \mathcal{F}, \operatorname{P})$, with respectively generated π-systems $\mathcal{I}_X$ and $\mathcal{I}_Y$. The joint cumulative distribution function of $(X, Y)$ is

$F_{X,Y}(a,b) = \operatorname{P}[X \leq a, Y \leq b] = \operatorname{P}\left[X^{-1}((-\infty,a]) \cap Y^{-1}((-\infty,b])\right], \quad \text{for all } a, b \in \mathbb{R}.$

However, $A = X^{-1}((-\infty,a]) \in \mathcal{I}_X$ and $B = Y^{-1}((-\infty,b]) \in \mathcal{I}_Y$. Because

$\mathcal{I}_{X,Y} = \left\{A \cap B : A \in \mathcal{I}_X \text{ and } B \in \mathcal{I}_Y\right\}$
is a π-system generated by the random pair $(X, Y)$, the π-𝜆 theorem is used to show that the joint cumulative distribution function suffices to determine the joint law of $(X, Y)$. In other words, $(X, Y)$ and $(W, Z)$ have the same distribution if and only if they have the same joint cumulative distribution function.
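
That $\mathcal{I}_{X,Y}$ is itself a π-system is immediate, since $\mathcal{I}_X$ and $\mathcal{I}_Y$ are closed under intersections:

```latex
(A_1 \cap B_1) \cap (A_2 \cap B_2) \;=\; (A_1 \cap A_2) \cap (B_1 \cap B_2) \in \mathcal{I}_{X,Y},
\qquad A_1, A_2 \in \mathcal{I}_X, \; B_1, B_2 \in \mathcal{I}_Y .
```

Equality of the joint cumulative distribution functions is precisely the statement that the two joint laws agree on this π-system.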

In the theory of stochastic processes, two processes $(X_t)_{t \in T}$ and $(Y_t)_{t \in T}$ are known to be equal in distribution if and only if they agree on all finite-dimensional distributions; that is, for all $t_1, \ldots, t_n \in T$, $n \in \mathbb{N}$,

$\left(X_{t_1}, \ldots, X_{t_n}\right) \stackrel{\mathcal{D}}{=} \left(Y_{t_1}, \ldots, Y_{t_n}\right).$

The proof of this is another application of the π-𝜆 theorem.[4]

See also

  • Algebra of sets – Identities and relationships involving sets
  • δ-ring – Ring closed under countable intersections
  • Field of sets – Algebraic concept in measure theory, also referred to as an algebra of sets
  • Monotone class – Family of sets closed under increasing unions and decreasing intersections
  • π-system – Family of sets closed under intersection
  • Ring of sets – Family closed under unions and relative complements
  • σ-algebra – Algebraic structure of set algebra
  • 𝜎-ideal – Family closed under subsets and countable unions
  • 𝜎-ring – Ring closed under countable unions

Notes

  1. ^ A sequence of sets $A_1, A_2, A_3, \ldots$ is called increasing if $A_n \subseteq A_{n+1}$ for all $n \geq 1$.

Proofs

  1. ^ Assume $\mathcal{D}$ satisfies (1), (2), and (3). Proof of (5): Property (5) follows from (1) and (2) by taking $B := \Omega$. The following lemma is used to prove (6). Lemma: If $A, B \in \mathcal{D}$ are disjoint, then $A \cup B \in \mathcal{D}$. Proof of the lemma: $A \cap B = \varnothing$ implies $B \subseteq \Omega \setminus A$, where $\Omega \setminus A \in \mathcal{D}$ by (5). Now (2) implies that $\mathcal{D}$ contains $(\Omega \setminus A) \setminus B = \Omega \setminus (A \cup B)$, so that (5) guarantees $A \cup B \in \mathcal{D}$, which proves the lemma. Proof of (6): Assume that $A_1, A_2, A_3, \ldots$ are pairwise disjoint sets in $\mathcal{D}$. For every integer $n > 0$, the lemma (applied inductively) implies that $D_n := A_1 \cup \cdots \cup A_n \in \mathcal{D}$. Because $D_1 \subseteq D_2 \subseteq D_3 \subseteq \cdots$ is increasing, (3) guarantees that $\mathcal{D}$ contains the union $D_1 \cup D_2 \cup \cdots = A_1 \cup A_2 \cup \cdots$, as desired. $\blacksquare$
  2. ^ Assume $\mathcal{D}$ satisfies (4), (5), and (6). Property (1) holds because $\Omega = \Omega \setminus \varnothing \in \mathcal{D}$ by (4) and (5). Proof of (2): If $A, B \in \mathcal{D}$ satisfy $A \subseteq B$, then (5) implies $\Omega \setminus B \in \mathcal{D}$, and since $(\Omega \setminus B) \cap A = \varnothing$, (6) implies that $\mathcal{D}$ contains $(\Omega \setminus B) \cup A = \Omega \setminus (B \setminus A)$, so that (5) finally guarantees that $\Omega \setminus (\Omega \setminus (B \setminus A)) = B \setminus A$ is in $\mathcal{D}$. Proof of (3): Assume $A_1 \subseteq A_2 \subseteq \cdots$ is an increasing sequence of sets in $\mathcal{D}$, let $D_1 = A_1$, and let $D_i = A_i \setminus A_{i-1}$ for every $i > 1$, where (2) guarantees that $D_2, D_3, \ldots$ all belong to $\mathcal{D}$. Since $D_1, D_2, D_3, \ldots$ are pairwise disjoint, (6) guarantees that their union $D_1 \cup D_2 \cup D_3 \cup \cdots = A_1 \cup A_2 \cup A_3 \cup \cdots$ belongs to $\mathcal{D}$, which proves (3). $\blacksquare$
  1. ^ Dynkin, E. (1959). Foundations of the Theory of Markov Processes. Moscow.
  2. ^ Aliprantis, Charalambos; Border, Kim C. (2006). Infinite Dimensional Analysis: A Hitchhiker's Guide (3rd ed.). Springer. Retrieved August 23, 2010.
  3. ^ Sengupta. "Lectures on Measure Theory, Lecture 6: The Dynkin π–λ Theorem" (PDF). Math.lsu. Retrieved January 3, 2023.
  4. ^ Kallenberg. Foundations of Modern Probability. p. 48.


This article incorporates material from Dynkin system on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.

Families $\mathcal{F}$ of sets over $\Omega$
The table records which properties are necessarily true of a family $\mathcal{F}$ of each type, or equivalently whether $\mathcal{F}$ is closed under the indicated operation. Column meanings: "Directed by $\supseteq$" means directed downward; $A \cap B$ and $A \cup B$ denote finite intersections and finite unions; $B \setminus A$ relative complements; $\Omega \setminus A$ complements in $\Omega$; $A_1 \cap A_2 \cap \cdots$ and $A_1 \cup A_2 \cup \cdots$ countable intersections and countable unions; the last three columns record whether $\Omega \in \mathcal{F}$, whether $\varnothing \in \mathcal{F}$, and whether $\mathcal{F}$ has the finite intersection property (F.I.P.).

| Family | Directed by $\supseteq$ | $A \cap B$ | $A \cup B$ | $B \setminus A$ | $\Omega \setminus A$ | $A_1 \cap A_2 \cap \cdots$ | $A_1 \cup A_2 \cup \cdots$ | $\Omega \in \mathcal{F}$ | $\varnothing \in \mathcal{F}$ | F.I.P. |
|---|---|---|---|---|---|---|---|---|---|---|
| π-system | Yes | Yes | No | No | No | No | No | No | No | No |
| Semiring | Yes | Yes | No | No | No | No | No | No | Yes | Never |
| Semialgebra (Semifield) | Yes | Yes | No | No | No | No | No | No | Yes | Never |
| Monotone class | No | No | No | No | No | only if $A_i \searrow$ | only if $A_i \nearrow$ | No | No | No |
| 𝜆-system (Dynkin system) | Yes | No | No | only if $A \subseteq B$ | Yes | No | only if $A_i \nearrow$ or they are disjoint | Yes | Yes | Never |
| Ring (order theory) | Yes | Yes | Yes | No | No | No | No | No | No | No |
| Ring (measure theory) | Yes | Yes | Yes | Yes | No | No | No | No | Yes | Never |
| δ-Ring | Yes | Yes | Yes | Yes | No | Yes | No | No | Yes | Never |
| 𝜎-Ring | Yes | Yes | Yes | Yes | No | Yes | Yes | No | Yes | Never |
| Algebra (Field) | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Never |
| 𝜎-Algebra (𝜎-Field) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Never |
| Dual ideal | Yes | Yes | Yes | No | No | No | Yes | Yes | No | No |
| Filter | Yes | Yes | Yes | Never | Never | No | Yes | Yes | $\varnothing \notin \mathcal{F}$ | Yes |
| Prefilter (Filter base) | Yes | No | No | Never | Never | No | No | No | $\varnothing \notin \mathcal{F}$ | Yes |
| Filter subbase | No | No | No | Never | Never | No | No | No | $\varnothing \notin \mathcal{F}$ | Yes |
| Open topology | Yes | Yes | Yes | No | No | No | Yes (even arbitrary unions) | Yes | Yes | Never |
| Closed topology | Yes | Yes | Yes | No | No | Yes (even arbitrary intersections) | No | Yes | Yes | Never |

Additionally, a semiring is a π-system where every complement $B \setminus A$ is equal to a finite disjoint union of sets in $\mathcal{F}$.
A semialgebra is a semiring where every complement $\Omega \setminus A$ is equal to a finite disjoint union of sets in $\mathcal{F}$.
$A, B, A_1, A_2, \ldots$ are arbitrary elements of $\mathcal{F}$, and it is assumed that $\mathcal{F} \neq \varnothing$.
