5 Independence

Published

December 8, 2025

5.1 Information and measurability

Let \mathcal G \subset \mathcal H be a \sigma-algebra.

We say we have observed the information in \mathcal G if we know the value of \mathbb 1_A(\omega) for each set A \in \mathcal G. The larger (finer) a sigma algebra, the more information it represents.

Observing the information in \mathcal G lets us know the exact value of every random variable X(\omega) measurable with respect to \mathcal G. For real-valued RVs, this is relatively easy to visualise: every real-valued measurable function is a limit of measurable simple functions, and we know the values of those since we know all the indicators.
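For instance, for positive X one standard construction uses only indicators of sets in \mathcal G (general X is handled via X = X^+ - X^-):

X_n = \sum_{k=1}^{n2^n} \frac{k-1}{2^n}\, \mathbb 1_{\{(k-1)/2^n \le X < k/2^n\}} + n\, \mathbb 1_{\{X \ge n\}}, \qquad X_n \uparrow X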

Observing the exact value of a random variable lets us make inferences about which sets in \mathcal H the random seed \omega falls in. That information is represented by the sigma algebra generated by the random variable.
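Concretely, if X takes values in a measurable space (E, \mathcal E), that sigma algebra is

\sigma X = X^{-1}(\mathcal E) = \{X^{-1}(A) : A \in \mathcal E\}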

5.2 Independence

5.2.1 Definition

Let \mathcal F_1, \dots, \mathcal F_n be a collection of sigma algebras with each \mathcal F_i \subset \mathcal H. Then \{\mathcal F_1, \dots, \mathcal F_n\} is an independency if for all positive random variables V_1 \in \mathcal F_1, \dots, V_n \in \mathcal F_n (writing V \in \mathcal F for V measurable with respect to \mathcal F):

\mathbb E\left[\prod_{i=1}^n V_i \right] = \prod_{i=1}^n \mathbb E\left[ V_i \right]

An arbitrary collection \{\mathcal F_t: t \in T\} is an independency iff each finite subset \{\mathcal F_{t_1}, \dots, \mathcal F_{t_n}\}, with t_1, \dots, t_n \in T distinct and n < \infty, is an independency.
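As a quick sanity check, here is a minimal numerical illustration (the space and the variables are my own choices, not from the notes): on the four-point space of two independent fair coin flips, any positive V_1 measurable with respect to the first flip and V_2 measurable with respect to the second satisfy the product rule exactly.

```python
from itertools import product

# Probability space: two independent fair coin flips,
# Omega = {0,1}^2, each outcome has probability 1/4.
omega = list(product([0, 1], repeat=2))
prob = {w: 0.25 for w in omega}

def E(f):
    """Expectation of f under the measure above."""
    return sum(prob[w] * f(w) for w in omega)

# V1 depends only on the first coordinate, so it is measurable
# wrt the sigma algebra of the first flip; V2 only on the second.
V1 = lambda w: 3.0 if w[0] == 1 else 1.0   # arbitrary positive RV
V2 = lambda w: 5.0 if w[1] == 1 else 2.0   # arbitrary positive RV

lhs = E(lambda w: V1(w) * V2(w))
rhs = E(V1) * E(V2)
print(lhs, rhs)                            # 7.0 7.0
assert abs(lhs - rhs) < 1e-12
```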

5.2.2 Alternative definition

Let \mathcal F_1, \dots, \mathcal F_n \subset \mathcal H be sigma algebras and for each i let \mathcal C_i be a p-system containing \Omega that generates \mathcal F_i. Then \mathcal F_1, \dots, \mathcal F_n are independent if and only if, for each A_1 \in \mathcal C_1, \dots, A_n \in \mathcal C_n:

\mathbb P\left(\cap_{i=1}^n A_i \right) = \prod_{i=1}^n \mathbb P(A_i)

The requirement \Omega \in \mathcal C_i is what lets us drop any subset of the events (take A_i = \Omega for the omitted indices), and a Dynkin class argument extends the product rule from the p-systems to the generated sigma algebras.
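A useful special case: for real random variables X and Y, the half-lines \{X \le x\}, x \in \mathbb R, together with \Omega form a p-system generating \sigma X, so independence follows as soon as the joint distribution function factorises:

\mathbb P(X \le x, Y \le y) = \mathbb P(X \le x)\, \mathbb P(Y \le y) \quad \text{for all } x, y \in \mathbb R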

5.2.3 Subgrouping

Let \{\mathcal F_t: t\in T\} be an independency. Define \tilde{\mathcal F}_s = \sigma \left(\cup_{t\in T_s} \mathcal F_t \right). If \{T_s: s \in S\} is a subpartition of T, i.e. T_s \cap T_{s'} = \emptyset for s \neq s' and \cup_{s\in S} T_s \subset T, then \{\tilde{\mathcal F}_s : s \in S\} is an independency.
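A minimal sketch of subgrouping (the setup is mine): take four independent fair coin flips, group \tilde{\mathcal F}_1 = \sigma(\mathcal F_1 \cup \mathcal F_2) and \tilde{\mathcal F}_2 = \sigma(\mathcal F_3 \cup \mathcal F_4), and check the product rule on events built from each half.

```python
from itertools import product

# Four independent fair coin flips: Omega = {0,1}^4, uniform measure.
omega = list(product([0, 1], repeat=4))
P = lambda event: sum(1 for w in omega if event(w)) / len(omega)

# A is built from flips 1-2 only, so A lies in sigma(F1 u F2);
# B is built from flips 3-4 only, so B lies in sigma(F3 u F4).
A = lambda w: w[0] == w[1]        # first two flips agree
B = lambda w: w[2] + w[3] >= 1    # at least one head in the last two

lhs = P(lambda w: A(w) and B(w))
rhs = P(A) * P(B)
print(lhs, rhs)                   # 0.375 0.375
assert abs(lhs - rhs) < 1e-12
```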

5.2.4 Coarsening

\{\mathcal F_t: t\in T\} is an independency if and only if \{\mathcal G_t: t\in T\} is an independency for every collection of sigma algebras with \mathcal G_t \subset \mathcal F_t for each t \in T.

5.2.5 Bootstrapping mutual independence of a countable collection from pairwise independence

\mathcal F_1, \mathcal F_2, \dots \subset \mathcal H are independent if and only if for all n \geq 1, \mathcal G_n = \sigma \left( \cup_{i=1}^n \mathcal F_i \right) and \mathcal F_{n+1} are independent. Note the pairs here are (\mathcal G_n, \mathcal F_{n+1}): each new sigma algebra must be independent of the whole history so far, which is stronger than pairwise independence of the \mathcal F_i alone.

5.2.6 Independence of random variables

A collection of random variables \{X_t : t \in T\} is mutually independent if the collection of generated sigma algebras \{\sigma X_t : t \in T\} is an independency.

For a finite collection of random variables X_1, \dots, X_n taking values in measurable spaces (E_1, \mathcal E_1), \dots, (E_n, \mathcal E_n), independence holds iff for all f_1 \in \mathcal E_1, \dots, f_n \in \mathcal E_n (again writing f \in \mathcal E for f positive and \mathcal E-measurable):

\mathbb E\left[ \prod_{i=1}^n f_i \circ X_i \right] = \prod_{i=1}^n \mathbb E[ f_i \circ X_i]

A more familiar characterisation: X_1, \dots, X_n are independent iff the joint distribution is the product of the marginals, i.e. for all A_1 \in \mathcal E_1, \dots, A_n \in \mathcal E_n:

\mathbb P\left( \cap_{i=1}^n X_i^{-1}(A_i) \right) = \prod_{i=1}^n \mathbb P\circ X_i^{-1} (A_i)

Also, measurable functions of independent random variables are independent.
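To see the product-of-marginals condition and the stability under measurable maps concretely, here is a small check on two independent fair dice (an illustration of mine, not from the notes):

```python
from itertools import product
from fractions import Fraction

# Two independent fair dice: the product measure on {1,...,6}^2.
omega = list(product(range(1, 7), repeat=2))
P = lambda event: Fraction(sum(1 for w in omega if event(w)), len(omega))

X = lambda w: w[0]
Y = lambda w: w[1]

# Joint distribution factorises into the marginals:
A, B = {1, 2}, {3, 4, 5}   # arbitrary measurable (here finite) sets
assert P(lambda w: X(w) in A and Y(w) in B) \
    == P(lambda w: X(w) in A) * P(lambda w: Y(w) in B)

# Measurable functions of independent RVs stay independent:
f = lambda x: x % 2        # parity of the first die
g = lambda y: y >= 4       # second die is "large"
assert P(lambda w: f(X(w)) == 1 and g(Y(w))) \
    == P(lambda w: f(X(w)) == 1) * P(lambda w: g(Y(w)))
```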

5.2.7 Intersections of independencies

If a set A belongs to two (or more) independent sigma algebras, either \mathbb P(A) = 0 or \mathbb P(A) = 1. The converse is not true in general.

Proof

Let \mathcal C, \mathcal D be independent sigma algebras. For any sets C \in \mathcal C, D \in \mathcal D, \mathbb 1_C and \mathbb 1_D are random variables measurable wrt \mathcal C, \mathcal D respectively, so:

\begin{align*}
\mathbb E [\mathbb 1_C \mathbb 1_D] &= \mathbb E [\mathbb 1_C]\, \mathbb E [\mathbb 1_D] \\
\mathbb E [\mathbb 1_{C \cap D}] = \mathbb P(C \cap D) &= \mathbb P(C)\, \mathbb P(D)
\end{align*}

Then, taking C = D = A:

\mathbb P(A \cap A) = \mathbb P(A) = \mathbb P(A)^2

which forces \mathbb P(A) \in \{0, 1\}.

Counterexample for converse

Even if the intersection of two sigma algebras only contains sets with probability 0 or 1, the two are not necessarily independent.

Consider \Omega = \{a, b, c\}, the sigma algebra 2^\Omega, and the uniform probability measure. Consider the random variables X = \mathbb 1_{\{a, b\}} and Y = \mathbb 1_{\{b, c\}}.

The sigma algebras generated are \sigma X = \{\emptyset, \Omega, \{c\}, \{a, b\}\} and \sigma Y = \{\emptyset, \Omega, \{a\}, \{b, c\}\}. The intersection is \{\emptyset, \Omega\}, but \mathbb P(X = 1, Y = 1) = \mathbb P\{b\} = 1/3 \neq \mathbb P(X = 1)\, \mathbb P(Y = 1) = \mathbb P\{a, b\}\, \mathbb P\{b, c\} = 4/9.

This is because having the ability to evaluate \mathbb 1_{\{c\}} \in \sigma X does tell us something about the value of \mathbb 1_{\{b, c\}} \in \sigma Y: if \mathbb 1_{\{c\}}(\omega) = 1 then \mathbb 1_{\{b, c\}}(\omega) = 1.
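The counterexample is easy to verify mechanically (a quick check, using the same space and variables as above):

```python
from fractions import Fraction

# Omega = {a, b, c} with the uniform measure.
omega = ['a', 'b', 'c']
P = lambda event: Fraction(sum(1 for w in omega if event(w)), len(omega))

X = lambda w: 1 if w in {'a', 'b'} else 0   # indicator of {a, b}
Y = lambda w: 1 if w in {'b', 'c'} else 0   # indicator of {b, c}

joint = P(lambda w: X(w) == 1 and Y(w) == 1)              # P{b} = 1/3
prod = P(lambda w: X(w) == 1) * P(lambda w: Y(w) == 1)    # 2/3 * 2/3 = 4/9
print(joint, prod)    # 1/3 4/9
assert joint != prod  # X and Y are not independent
```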