Conditional probability measures

One of the most important goals of modeling random phenomena is to account for partial information. We often discover something about the outcome of an experiment before we know the outcome exactly. For example, when we flip a fair coin twice, we see the result of the first flip before we see the result of the second flip, and we would like to define a new probability measure which reflects this intermediate knowledge. We call this a conditional probability measure.

Suppose we observe that the first of two flips is a tail. Then all of the \omega's which are incompatible with this observation should receive a probability of zero under our conditional probability measure. Since we have no new information about the remaining \omega's, it makes sense to keep their probabilities in the same proportions as in the original probability measure.

Consider the event E that the first flip is a tail. The conditional probability mass function \omega \mapsto m(\omega | E) given E assigns probability mass \frac{1}{2} to each of the \omega's in E.

These two observations are sufficient to fully determine the conditional probability measure. In other words, to condition on an event E, we set the masses at elements of E^\mathsf{c} to 0 and multiply the amount of mass at each point in E by 1/\mathbb{P}(E), bringing the total mass up to 1 without changing the proportions:

Definition
Given a probability space (\Omega, \mathbb{P}) and an event E \subset \Omega whose probability is positive, the conditional probability mass function given E, written \omega \mapsto m(\omega | E), is defined by

\begin{align*}m(\omega | E) = \begin{cases} \frac{m(\omega)}{\mathbb{P}(E)} & \text{if }\omega \in E \\ 0 & \text{otherwise}. \end{cases}\end{align*}

The conditional probability measure given E is the measure associated to \omega\mapsto m(\omega | E): for all events F, we have

\begin{align*}\mathbb{P}(F | E) = \frac{\mathbb{P}(F \cap E)}{\mathbb{P}(E)}.\end{align*}
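To make these formulas concrete, here is a minimal Python sketch of conditioning a probability mass function stored as a dictionary; the function name condition and the dictionary representation are our own choices, not part of the text.

```python
def condition(m, E):
    """Return the conditional pmf given the event E (a set of outcomes).

    m : dict mapping each outcome to its probability mass
    E : set of outcomes with positive total mass
    """
    total = sum(m[w] for w in E)          # this is P(E)
    if total == 0:
        raise ValueError("cannot condition on an event of probability zero")
    # outcomes outside E get mass 0; outcomes in E are rescaled by 1/P(E)
    return {w: (m[w] / total if w in E else 0.0) for w in m}

# two fair coin flips; condition on "first flip is a tail"
m = {("T", "T"): 0.25, ("T", "H"): 0.25, ("H", "T"): 0.25, ("H", "H"): 0.25}
E = {("T", "T"), ("T", "H")}
print(condition(m, E))   # each outcome in E gets mass 0.5, the rest get 0
```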

Exercise
Two objects are submerged in a deep and murky body of water. The objects are chosen so that both are positively buoyant with probability \frac{1}{4}, both are negatively buoyant with probability \frac{1}{4}, and with probability \frac{1}{2} they have opposite buoyancy. The objects, if they float, rise in the water at different rates, but they are visually indistinguishable.

After the objects are released, an observer sees one of them emerge at the water's surface. What is the conditional probability, given the observed information, that the second object will emerge?

Solution. Let's use the sample space:

\begin{align*}\Omega = \{\text{both positive}, \text{opposite buoyancy}, \text{both negative}\}\end{align*}

The emergence of the object tells us precisely that the event

\begin{align*}E = \{\text{both positive}, \text{opposite buoyancy}\}\end{align*}

occurs. The conditional probability of the event \{\text{both positive}\} given E is

\begin{align*}\frac{\mathbb{P}(\{\text{both positive}\}\cap E)}{\mathbb{P}(E)} = \frac{\frac{1}{4}}{\frac{1}{4} + \frac{1}{2}} = \frac{1}{3}.\end{align*}
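As a quick numerical check of this answer, here is a short standalone Python sketch; the outcome labels match the sample space above.

```python
# probability masses on the three outcomes in the sample space
m = {"both positive": 0.25, "opposite buoyancy": 0.5, "both negative": 0.25}
E = {"both positive", "opposite buoyancy"}   # the event that one object emerges

p_E = sum(m[w] for w in E)                   # P(E) = 3/4
print(m["both positive"] / p_E)              # 0.3333... = 1/3
```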

One reason that conditional probabilities play such an important role in the study of probability is that in many scenarios they are more fundamental than the probability measure on \Omega.

Example
Consider the following experiment: we roll a die, and if it shows 2 or less we select Urn A, and otherwise we select Urn B. Next, we draw a ball uniformly at random from the selected urn. Urn A contains one red and one blue ball, while Urn B contains three blue balls and one red ball.

Find a probability space \Omega which models this experiment, and find a pair of events E and F such that \mathbb{P}(E | F) = \frac{3}{4}.

Solution. The four possible outcomes of this experiment are (A, blue), (A, red), (B, blue), and (B, red). So we let our probability space \Omega consist of those four outcomes.

The probability of the outcome (A, blue) is equal to the probability that Urn A is selected times the conditional probability of selecting a blue ball given that Urn A was selected. Urn A is selected when the die shows 2 or less, which happens with probability \frac{2}{6} = \frac{1}{3}, and we interpret the information that Urn A contains an equal number of blue and red balls as a statement that the conditional probability of drawing a blue ball should be \frac{1}{2}. Therefore, we assign the probability \frac{1}{3} \cdot \frac{1}{2} = \frac{1}{6} to the outcome (A, blue).

Likewise, the probabilities we assign to the other three outcomes (A, red), (B, blue), and (B, red) are \frac{1}{6}, \frac{1}{2}, and \frac{1}{6}, respectively.

With probabilities thus assigned to the outcomes in \Omega, we should have \mathbb{P}(E | F) = \frac{3}{4} where E is the event that we select a blue ball and F is the event that Urn B was selected. Let us check that this is indeed the case:

\begin{align*}\frac{\mathbb{P}(E \cap F)}{\mathbb{P}(F)} = \frac{\frac{1}{2}}{\frac{2}{3}} = \frac{3}{4}.\end{align*}

We have arrived at an important insight: a probability space may alternatively be specified via a tree diagram showing conditional probabilities, or via the probability space \Omega consisting of the endpoints of the tree diagram. We can translate back and forth between these two representations by multiplying along branches to get from the tree's conditional probabilities to \Omega's outcome probabilities, or by calculating conditional probabilities to go from \Omega to the tree diagram.
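Here is a small sketch of this back-and-forth translation for the urn experiment above; the variable names are our own.

```python
# tree representation: P(urn) and P(color | urn)
p_urn = {"A": 1/3, "B": 2/3}
p_color_given_urn = {"A": {"blue": 1/2, "red": 1/2},
                     "B": {"blue": 3/4, "red": 1/4}}

# multiply along branches to get the probability of each endpoint of the tree
p_outcome = {(u, c): p_urn[u] * p_color_given_urn[u][c]
             for u in p_urn for c in ("blue", "red")}
print(p_outcome)   # (A, blue) and (A, red) get 1/6, (B, blue) gets 1/2, (B, red) gets 1/6

# and divide to go back: recover P(blue | B) from the outcome probabilities
p_B = sum(p for (u, _), p in p_outcome.items() if u == "B")
print(p_outcome[("B", "blue")] / p_B)   # 0.75
```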

Exercise
Consider three random variables X_1, X_2, and X_3, each of which is equal to 1 with probability 0.6 and to 0 with probability 0.4. These random variables are not necessarily independent.

  • Find the greatest possible value of \mathbb{P}(X_1 + X_2 + X_3 = 0).
  • Find the least possible value of \mathbb{P}(X_1 + X_2 + X_3 = 0).

Solution.

  • By monotonicity, we have

\begin{align*}\mathbb{P}(X_1 + X_2 +X_3 = 0) = \mathbb{P}(X_1 = X_2 = X_3 = 0) \leq \mathbb{P}(X_1 = 0) = 0.4.\end{align*}

We note that this maximum can be attained by setting X_1 = X_2 = X_3.

  • The least possible value is zero. This minimum can be attained, for example, if we take \Omega = \{\omega_1, \omega_2, \omega_3\} with probability masses 0.4, 0.2, and 0.4, respectively, and set X_1(\omega_1) = 1, X_1(\omega_2) = 1, X_1(\omega_3) = 0, X_2(\omega_1) = 0, X_2(\omega_2) = 1, X_2(\omega_3) = 1, and X_3 = X_1.
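This construction can be checked directly. The following Python sketch (with the outcomes labeled 1, 2, 3) verifies the marginal distributions and the fact that the sum is never zero:

```python
# probability masses on omega_1, omega_2, omega_3
mass = {1: 0.4, 2: 0.2, 3: 0.4}

# the three random variables, represented as maps from outcomes to values
X1 = {1: 1, 2: 1, 3: 0}
X2 = {1: 0, 2: 1, 3: 1}
X3 = X1                     # X3 is defined to coincide with X1

# each variable equals 1 with probability 0.6 (up to floating-point rounding)
for X in (X1, X2, X3):
    print(sum(mass[w] for w in mass if X[w] == 1))

# total mass of the event {X1 + X2 + X3 = 0}: prints 0
print(sum(mass[w] for w in mass if X1[w] + X2[w] + X3[w] == 0))
```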

Bayes' Theorem

Bayes' theorem tells us how to update beliefs in light of new evidence. It relates the conditional probabilities \mathbb{P}(A | E) and \mathbb{P}(E | A):

\begin{align*}\mathbb{P}(A | E) = \frac{\mathbb{P}(E | A)\mathbb{P}(A)}{\mathbb{P}(E)} = \frac{\mathbb{P}(E | A)\mathbb{P}(A)}{\mathbb{P}(E | A)\mathbb{P}(A) + \mathbb{P}(E | A^{\mathsf{c}})\mathbb{P}(A^{\mathsf{c}})}.\end{align*}

The last step follows from writing out \mathbb{P}(E) as \mathbb{P}(E \cap A) + \mathbb{P}(E \cap A^\mathsf{c}).
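In code, Bayes' theorem is a one-line rearrangement. Here is a minimal Python sketch; the function name bayes is our own.

```python
def bayes(p_E_given_A, p_A, p_E_given_Ac):
    """Return P(A | E) given P(E | A), P(A), and P(E | A^c)."""
    p_E = p_E_given_A * p_A + p_E_given_Ac * (1 - p_A)   # law of total probability
    return p_E_given_A * p_A / p_E
```

For instance, bayes(0.9, 0.001, 0.1) returns approximately 0.0089, matching the disease-testing exercise below.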

Bayes' theorem has many applications to everyday life, some intuitive and others counterintuitive.

Example
Suppose you're 90% sure that your package was delivered today and 75% sure that if it was delivered it would be on your doorstep rather than tucked away in your mailbox. When you arrive at home and do not see your package right away, what is the conditional probability, given the observed information, that you'll find it in your mailbox?

Solution. Since a delivered package that is not visible on the doorstep must be in the mailbox, the desired conditional probability is \mathbb{P}(\text{delivered} | \text{invisible}). A delivered package is invisible (that is, tucked away in the mailbox) with probability 1 - 0.75 = 0.25, while an undelivered package is invisible with probability 1. By Bayes' theorem, the answer is

\begin{align*}\frac{\mathbb{P}(\text{invisible} | \text{delivered}) \mathbb{P}(\text{delivered})}{\mathbb{P}(\text{invisible} | \text{delivered}) \mathbb{P}(\text{delivered})+\mathbb{P}(\text{invisible} | \text{undelivered}) \mathbb{P}(\text{undelivered})}\end{align*}

\begin{align*}= \frac{(0.25)(0.9)}{(0.25)(0.9)+(1)(0.1)} \approx 0.692.\end{align*}
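Plugging in the numbers as a quick standalone check (the variable names are our own):

```python
p_delivered = 0.9
p_invisible_given_delivered = 0.25      # delivered but tucked away in the mailbox
p_invisible_given_undelivered = 1.0     # an undelivered package is never visible

numerator = p_invisible_given_delivered * p_delivered
denominator = numerator + p_invisible_given_undelivered * (1 - p_delivered)
print(numerator / denominator)          # 0.6923...
```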

Exercise
Suppose a disease has 0.1% prevalence in the population and has a test with 90% reliability. A randomly selected person is tested for the disease and tests positive. What is the conditional probability that the person has the disease, given the positive test result?

Solution. Let D be the event that a person has the disease and P be the event that a person tests positive. We would like to find \mathbb{P}(D | P) given that \mathbb{P}(D) = 0.001, \mathbb{P}(P | D) = 0.9 and \mathbb{P}(P | D^\mathsf{c}) = 0.1. By Bayes' theorem,

\begin{align*}\mathbb{P}(D | P) &= \frac{\mathbb{P}(P | D) \cdot \mathbb{P}(D)}{\mathbb{P}(P | D) \cdot \mathbb{P}(D) + \mathbb{P}(P | D^\mathsf{c}) \cdot \mathbb{P}(D^\mathsf{c})} \\ &= \frac{0.9 \times 0.001}{0.9 \times 0.001 + 0.1 \times 0.999} \\ &\approx 0.0089.\end{align*}

(Note: the fact that \mathbb{P}(P | D^\mathsf{c}) = 0.1 = 1 - \mathbb{P}(P | D) reflects the assumption that the test is unreliable 10% of the time regardless of whether the person has the disease. In general, it is not true that \mathbb{P}(A | B) = 1 - \mathbb{P}(A | B^\mathsf{c}) for arbitrary events A and B.)
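The small size of this answer, driven by the low prevalence, can also be seen by simulation. Here is a Monte Carlo sketch; the sample size and seed are arbitrary choices.

```python
import random

random.seed(0)
n = 10**6
true_positives = false_positives = 0
for _ in range(n):
    diseased = random.random() < 0.001            # 0.1% prevalence
    if diseased:
        if random.random() < 0.9:                 # 90% detection rate
            true_positives += 1
    else:
        if random.random() < 0.1:                 # 10% false positive rate
            false_positives += 1

# fraction of positive tests that come from actual cases
print(true_positives / (true_positives + false_positives))   # close to 0.0089
```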
