Independence in Discrete Probability
Have you ever noticed that some events do not affect each other? For example, flipping a coin and rolling a die are separate actions: the result of the coin flip does not change the die roll, and the die roll does not change the coin flip 🎲🪙. That idea is called independence, and it is one of the most important ideas in discrete probability.
In this lesson, you will learn how to:
- explain what independence means in probability,
- use probability notation correctly,
- test whether events are independent,
- connect independence to conditional probability,
- and see how independence helps solve real counting and probability problems.
Understanding independence is useful because it helps you tell when probabilities can be simplified. Instead of tracking complicated interactions, you can often multiply probabilities when events are independent. That makes many problems much easier to solve.
What Independence Means
In discrete probability, two events are independent if the occurrence of one event does not change the probability of the other event happening. In other words, knowing that one event happened gives you no new information about the other event.
Suppose $A$ and $B$ are events. If $A$ and $B$ are independent, then
$$P(A \mid B) = P(A)$$
and also
$$P(B \mid A) = P(B).$$
This means the probability of $A$ stays the same even after you know $B$ occurred.
A very common way to test independence is using the multiplication rule. Two events $A$ and $B$ are independent exactly when
$$P(A \cap B) = P(A)P(B).$$
Here, $A \cap B$ means both events happen.
Example: Coin and Die
Let $A$ be the event “the coin shows heads,” and let $B$ be the event “the die shows a 6.”
Then
$$P(A) = \frac{1}{2}, \quad P(B) = \frac{1}{6}.$$
Because the coin flip and die roll do not influence each other,
$$P(A \cap B) = \frac{1}{2} \cdot \frac{1}{6} = \frac{1}{12}.$$
So $A$ and $B$ are independent. If you know that the coin landed heads, the chance of rolling a 6 is still $\frac{1}{6}$.
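As a quick check, here is a short Python sketch (not part of the lesson's required work) that enumerates the twelve equally likely (coin, die) outcomes and verifies the multiplication rule with exact fractions:

```python
from fractions import Fraction

# Enumerate the 12 equally likely (coin, die) outcomes.
outcomes = [(coin, die) for coin in ["H", "T"] for die in range(1, 7)]
total = len(outcomes)

p_A = Fraction(sum(1 for c, d in outcomes if c == "H"), total)   # coin shows heads
p_B = Fraction(sum(1 for c, d in outcomes if d == 6), total)     # die shows a 6
p_AB = Fraction(sum(1 for c, d in outcomes
                    if c == "H" and d == 6), total)              # both happen

print(p_A, p_B, p_AB)     # 1/2 1/6 1/12
print(p_AB == p_A * p_B)  # True
```

Because every outcome is equally likely, counting outcomes and multiplying probabilities give the same answer, which is exactly what independence predicts.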
Independence and Conditional Probability
Conditional probability tells us the probability of an event when we already know something else happened. The formula is
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$$
as long as $P(B) > 0$.
This formula helps explain independence very clearly. If $A$ and $B$ are independent, then the knowledge that $B$ happened should not change the chance of $A$. So the conditional probability becomes
$$P(A \mid B) = P(A).$$
Using the conditional probability formula, this means
$$\frac{P(A \cap B)}{P(B)} = P(A),$$
which leads to
$$P(A \cap B) = P(A)P(B).$$
So independence and conditional probability are tightly connected.
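The derivation above can be packaged as a tiny helper function. The name `cond_prob` is just an illustration, not a standard library function; it computes $P(A \mid B)$ from the joint and marginal probabilities:

```python
from fractions import Fraction

def cond_prob(p_joint, p_given):
    """Return P(A | B) = P(A ∩ B) / P(B); requires P(B) > 0."""
    if p_given == 0:
        raise ValueError("P(B) must be positive")
    return Fraction(p_joint) / Fraction(p_given)

# Coin-and-die example: P(A ∩ B) = 1/12 and P(B) = 1/6,
# so P(A | B) = 1/2, which equals P(A): the events are independent.
print(cond_prob(Fraction(1, 12), Fraction(1, 6)))  # 1/2
```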
Example: Drawing a Card Without Replacement
Suppose a standard deck of 52 cards is used. Let $A$ be the event “the first card drawn is an ace,” and let $B$ be the event “the second card drawn is an ace.”
If the first card is drawn and not replaced, then the probability of drawing an ace on the second draw depends on what happened first. If the first card was an ace, there are only $3$ aces left in $51$ cards. If the first card was not an ace, there are still $4$ aces in $51$ cards.
So
$$P(B \mid A) = \frac{3}{51}$$
but
$$P(B) = \frac{4}{52} = \frac{1}{13}.$$
Since
$$P(B \mid A) \neq P(B),$$
these events are not independent.
This is a great example of how dependence appears when one outcome changes the sample space. Students should notice that “without replacement” often creates dependence.
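The dependence can be verified by brute force. The sketch below models the deck simply as 4 aces and 48 other cards (a simplifying assumption; suits and ranks other than “ace” do not matter here) and enumerates all ordered two-card draws:

```python
from fractions import Fraction
from itertools import permutations

# Simplified deck: 4 aces and 48 non-aces.
deck = ["ace"] * 4 + ["other"] * 48

# All 52 * 51 equally likely ordered draws without replacement.
draws = list(permutations(range(52), 2))
total = len(draws)

p_A = Fraction(4, 52)  # first card is an ace
p_B = Fraction(sum(1 for i, j in draws if deck[j] == "ace"), total)
p_AB = Fraction(sum(1 for i, j in draws
                    if deck[i] == "ace" and deck[j] == "ace"), total)
p_B_given_A = p_AB / p_A

print(p_B)                 # 1/13
print(p_B_given_A)         # 1/17  (the same as 3/51)
print(p_B_given_A == p_B)  # False: the events are dependent
```

Note that $P(B) = \frac{1}{13}$ even without replacement, by symmetry; it is the conditional probability $P(B \mid A)$ that shifts to $\frac{3}{51}$.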
How to Test Independence
There are three common ways to check whether events are independent.
1. Compare conditional probability and ordinary probability
Check whether
$$P(A \mid B) = P(A)$$
or
$$P(B \mid A) = P(B).$$
If either one fails, the events are not independent.
2. Use the multiplication rule
Check whether
$$P(A \cap B) = P(A)P(B).$$
If this equation is true, then the events are independent.
3. Use a probability table or tree diagram
In many problems, you can organize outcomes in a table or tree. Then compare the actual joint probability with the product of the separate probabilities.
Example: Table Check
Imagine a survey of $100$ students.
- $40$ play a sport.
- $30$ are in a music club.
- $12$ do both.
Let $A$ be “plays a sport,” and let $B$ be “is in a music club.”
Then
$$P(A) = \frac{40}{100} = 0.4,$$
$$P(B) = \frac{30}{100} = 0.3,$$
and
$$P(A \cap B) = \frac{12}{100} = 0.12.$$
Now compute
$$P(A)P(B) = 0.4 \cdot 0.3 = 0.12.$$
Since
$$P(A \cap B) = P(A)P(B),$$
the events are independent in this dataset.
This does not mean that playing a sport and being in the music club are unrelated for every possible group of students. It only means that this particular data set satisfies the mathematical condition for independence.
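The table check is short enough to script. Here is a sketch that recomputes the survey example with exact fractions:

```python
from fractions import Fraction

# Survey counts from the example above.
n_total, n_sport, n_music, n_both = 100, 40, 30, 12

p_A = Fraction(n_sport, n_total)   # plays a sport
p_B = Fraction(n_music, n_total)   # is in a music club
p_AB = Fraction(n_both, n_total)   # does both

print(p_AB == p_A * p_B)  # True: the data satisfies the independence condition
```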
Independence in Chains of Events
Independence becomes especially useful when multiple events happen in sequence. If the events are mutually independent, you can multiply their probabilities to find the probability that all of them happen.
For example, if $A$, $B$, and $C$ are mutually independent events, then
$$P(A \cap B \cap C) = P(A)P(B)P(C).$$
But students should be careful: pairwise independence among the events does not automatically make the whole collection mutually independent. That is an important distinction in discrete mathematics.
Pairwise Independence vs. Mutual Independence
- Pairwise independence means every pair of events is independent.
- Mutual independence means the product rule holds for the whole group and for every subset of it; for three events, this adds the requirement $P(A \cap B \cap C) = P(A)P(B)P(C)$.
It is possible for three events to be pairwise independent but not mutually independent.
Example: Three Coin Flip Events
Flip a fair coin twice.
Let $A$ be the event “the first flip is heads,”
let $B$ be the event “the second flip is heads,”
and let $C$ be the event “the two flips match,” meaning both heads or both tails.
We have
$$P(A) = \frac{1}{2}, \quad P(B) = \frac{1}{2}, \quad P(C) = \frac{1}{2}.$$
Also,
$$P(A \cap B) = \frac{1}{4} = P(A)P(B),$$
so $A$ and $B$ are independent.
Likewise, $A$ and $C$ are independent, and $B$ and $C$ are independent. But all three together are not mutually independent, because
$$P(A \cap B \cap C) = P(A \cap B) = \frac{1}{4},$$
while
$$P(A)P(B)P(C) = \frac{1}{8}.$$
Since these are not equal, the three events are not mutually independent.
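This classic counterexample can be checked by enumerating the four equally likely two-flip outcomes:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", repeat=2))  # 4 equally likely two-flip outcomes
total = len(outcomes)

A = {o for o in outcomes if o[0] == "H"}   # first flip heads
B = {o for o in outcomes if o[1] == "H"}   # second flip heads
C = {o for o in outcomes if o[0] == o[1]}  # the two flips match

def p(event):
    return Fraction(len(event), total)

# Every pair satisfies the product rule...
assert p(A & B) == p(A) * p(B)
assert p(A & C) == p(A) * p(C)
assert p(B & C) == p(B) * p(C)

# ...but the triple does not: P(A ∩ B ∩ C) = 1/4 while P(A)P(B)P(C) = 1/8.
print(p(A & B & C), p(A) * p(B) * p(C))
```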
Independence and Real-World Reasoning
Independence is not just a formula. It is a way of thinking about whether events are connected.
A few examples help show this clearly:
- Two separate coin flips are independent 🪙🪙.
- Rolling a die and choosing a random card are independent 🎲🃏.
- Drawing two cards without replacement is usually dependent, because the first draw changes the second draw.
- A person being left-handed and owning a pet may not be independent in a data set, because one characteristic might be associated with the other.
When solving problems, students should ask:
- Does one event change the sample space of the other?
- Does knowing one event make the other more or less likely?
- Can I verify the condition $P(A \cap B) = P(A)P(B)$?
If the answer is yes to the first or second question, the events are probably not independent.
A Common Mistake
A frequent mistake is assuming that two events are independent just because they both involve chance. That is not enough. For independence, the events must not affect each other.
For example, if a bag contains $5$ red and $5$ blue marbles and one marble is drawn without replacement, then the color of the first marble affects the probability of the second. The events are not independent.
But if the marble is replaced after each draw, the probabilities reset, and the two draws are independent.
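The contrast between the two sampling schemes can be written out directly. The probabilities below follow from counting the marbles left in the bag in each case:

```python
from fractions import Fraction

# 5 red and 5 blue marbles; A = "first draw is red", B = "second draw is red".

# Without replacement: the first draw changes the pool for the second.
p_B_given_A_no_replace = Fraction(4, 9)   # 4 reds remain among 9 marbles
p_B_no_replace = Fraction(5, 10)          # unconditionally still 1/2, by symmetry
print(p_B_given_A_no_replace == p_B_no_replace)  # False: dependent

# With replacement: the pool resets, so the draws are independent.
p_B_given_A_replace = Fraction(5, 10)
p_B_replace = Fraction(5, 10)
print(p_B_given_A_replace == p_B_replace)        # True: independent
```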
Conclusion
Independence is one of the core ideas in discrete probability. It tells us when one event does not affect another. The key formulas are
$$P(A \mid B) = P(A)$$
and
$$P(A \cap B) = P(A)P(B).$$
These formulas help students check whether events are independent, solve probability problems more efficiently, and understand how separate random processes interact. Independence also connects directly to conditional probability and to broader probability reasoning in tables, tree diagrams, and repeated trials.
When you see a probability problem, ask whether the events influence each other. If they do not, independence may let you simplify the solution quickly and accurately ✅.
Study Notes
- Independence means one event does not change the probability of another event.
- For independent events $A$ and $B$,
$$P(A \mid B) = P(A)$$
and
$$P(A \cap B) = P(A)P(B).$$
- If events are independent, knowing one event happened gives no new information about the other.
- Coin flips, separate dice rolls, and other separate random processes are often independent.
- Drawing without replacement often creates dependence because the sample space changes.
- To test independence, compare $P(A \cap B)$ with $P(A)P(B)$ or compare $P(A \mid B)$ with $P(A)$.
- Pairwise independence is not always the same as mutual independence.
- Independence is a major tool in discrete probability because it simplifies multi-step probability calculations.
