11. Discrete Probability

Conditional Probability 📘

Students, imagine you are drawing a card, spinning a wheel, or choosing a student at random from a class. In many real situations, you do not just want the chance of one event happening by itself. You want the chance of one event happening after you already know something else happened. That is the core idea of conditional probability.

In this lesson, you will learn how conditional probability works, how to compute it, and why it matters in discrete mathematics and everyday reasoning. By the end, you should be able to explain the meaning of conditional probability, use the standard formula, and connect it to sample spaces and independence.

What Conditional Probability Means

Conditional probability is the probability of an event happening given that another event has already happened.

We write this as $P(A\mid B)$, which is read as “the probability of $A$ given $B$.” Here:

  • $A$ is the event we care about,
  • $B$ is the event we already know occurred.

This changes the sample space. Instead of looking at all possible outcomes, we now only look at the outcomes inside $B$. Then we ask: among those outcomes, how many also belong to $A$?

A helpful way to think about it is like narrowing your search 🔍. If you know a card is red, you no longer consider black cards. If you know a student is on the soccer team, you only look at the soccer players when finding the chance they also play another sport.

The standard formula is:

$$P(A\mid B)=\frac{P(A\cap B)}{P(B)}$$

as long as $P(B)>0$.

Here, $A\cap B$ means the outcomes where both $A$ and $B$ happen.

Why the formula makes sense

The numerator $P(A\cap B)$ counts the part where both events happen. The denominator $P(B)$ restricts attention to the world where $B$ has already happened. So conditional probability is a “within the group $B$” probability.

For discrete probability, this often means counting outcomes:

$$P(A\mid B)=\frac{\text{number of outcomes in }A\cap B}{\text{number of outcomes in }B}$$

when all outcomes in the sample space are equally likely.
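The counting version of the formula is easy to turn into code. Below is a minimal Python sketch (the helper name `conditional` is ours, not standard); it uses Python's exact `Fraction` type so no rounding occurs:

```python
from fractions import Fraction

def conditional(a, b):
    """P(A | B) = |A ∩ B| / |B| for equally likely outcomes.

    `a` and `b` are sets of outcomes; an empty `b` mirrors the
    forbidden case P(B) = 0, so we raise an error.
    """
    if not b:
        raise ValueError("cannot condition on an impossible event")
    return Fraction(len(a & b), len(b))

# Even result given "greater than 3" on a fair die:
print(conditional({2, 4, 6}, {4, 5, 6}))  # 2/3
```

Passing sets rather than probabilities keeps the restriction of the sample space explicit: the denominator is literally the size of $B$.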

Building Conditional Probability from Sample Spaces

To use conditional probability correctly, you need to understand the sample space. A sample space is the set of all possible outcomes of an experiment.

Suppose you roll a fair six-sided die. The sample space is:

$$S=\{1,2,3,4,5,6\}$$

Let $A$ be the event “the result is even,” so

$$A=\{2,4,6\}$$

Let $B$ be the event “the result is greater than $3$,” so

$$B=\{4,5,6\}$$

Then

$$A\cap B=\{4,6\}$$

Now compute:

$$P(A\mid B)=\frac{P(A\cap B)}{P(B)}=\frac{2/6}{3/6}=\frac{2}{3}$$

This does not mean the probability of being even changed magically. It means that once you know the number is greater than $3$, the sample space has shrunk to $\{4,5,6\}$. In that reduced space, 2 of the 3 outcomes are even.
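This die example can be verified by direct enumeration. Here is a short Python sketch in which the set names mirror the events defined above:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}            # sample space of a fair die
A = {n for n in S if n % 2 == 0}  # even: {2, 4, 6}
B = {n for n in S if n > 3}       # greater than 3: {4, 5, 6}

# Restrict attention to B, then count how much of it lies in A
p = Fraction(len(A & B), len(B))
print(p)  # 2/3
```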

Example with a deck of cards

A standard deck has $52$ cards. Let $A$ be the event “the card is an ace,” and let $B$ be the event “the card is a spade.”

There are $4$ aces total, $13$ spades total, and exactly $1$ card that is both an ace and a spade: the ace of spades.

So:

$$P(A\mid B)=\frac{P(A\cap B)}{P(B)}=\frac{1/52}{13/52}=\frac{1}{13}$$

This says: if you already know the card is a spade, the chance it is an ace is $\frac{1}{13}$.
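The same counting works for the deck. A sketch that builds all $52$ cards as (rank, suit) pairs and counts within the spades:

```python
from fractions import Fraction

ranks = list("A23456789") + ["10", "J", "Q", "K"]
suits = ["spades", "hearts", "diamonds", "clubs"]
deck = [(rank, suit) for rank in ranks for suit in suits]  # 52 cards

aces = {card for card in deck if card[0] == "A"}           # 4 aces
spades = {card for card in deck if card[1] == "spades"}    # 13 spades

p = Fraction(len(aces & spades), len(spades))
print(p)  # 1/13
```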

Conditional Probability and Counting Reasoning

Discrete mathematics often uses counting instead of listing every probability value. That makes conditional probability especially important in finite, equally likely sample spaces.

If there are $|B|$ outcomes in event $B$, and $|A\cap B|$ of them also satisfy $A$, then:

$$P(A\mid B)=\frac{|A\cap B|}{|B|}$$

provided all outcomes are equally likely.

Real-world example: students and clubs

Suppose a school has $30$ students in a math club. Of those, $12$ are also in the science club. If a student is chosen at random from the math club, what is the probability that the student is also in the science club?

Let $A$ be “student is in science club” and $B$ be “student is in math club.”

We are choosing from the $30$ math club students, so the condition is $B$. The favorable outcomes are the $12$ students in both clubs.

$$P(A\mid B)=\frac{12}{30}=\frac{2}{5}$$

This is different from asking for $P(A)$, the probability a randomly chosen student from the whole school is in science club. Conditional probability always depends on the information given.
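The club example reduces to one division, since the condition fixes the denominator. A tiny Python check:

```python
from fractions import Fraction

math_club = 30   # size of the restricted sample space, |B|
both_clubs = 12  # students in both clubs, |A ∩ B|

p = Fraction(both_clubs, math_club)
print(p)  # 2/5
```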

A quick comparison

  • $P(A)$ asks about event $A$ in the full sample space.
  • $P(A\mid B)$ asks about event $A$ inside the restricted sample space determined by $B$.

That restriction is what makes conditional probability so useful in data analysis, game theory, and decision-making 📊.

Independence and Conditional Probability

Conditional probability is closely connected to independence.

Two events $A$ and $B$ are independent if knowing that $B$ happened does not change the probability of $A$. In symbols, this means:

$$P(A\mid B)=P(A)$$

when $P(B)>0$.

Using the conditional probability formula, this is equivalent to:

$$P(A\cap B)=P(A)P(B)$$

This equation is often used as the test for independence.

Example of independence

Suppose you flip a fair coin and roll a fair die.

Let $A$ be “the coin shows heads,” and let $B$ be “the die shows $6$.”

These are independent because the coin flip does not affect the die roll.

We have:

$$P(A)=\frac{1}{2}, \quad P(B)=\frac{1}{6}$$

and

$$P(A\cap B)=\frac{1}{12}$$

So:

$$P(A)P(B)=\frac{1}{2}\cdot\frac{1}{6}=\frac{1}{12}$$

Since the values match, the events are independent.
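The independence test can be run mechanically on the product sample space of (coin, die) pairs. A sketch, assuming labels `"H"`/`"T"` for the coin:

```python
from fractions import Fraction
from itertools import product

# All 12 equally likely (coin, die) outcomes
outcomes = list(product(["H", "T"], range(1, 7)))

A = {o for o in outcomes if o[0] == "H"}  # coin shows heads
B = {o for o in outcomes if o[1] == 6}    # die shows 6

p_a = Fraction(len(A), len(outcomes))       # 1/2
p_b = Fraction(len(B), len(outcomes))       # 1/6
p_ab = Fraction(len(A & B), len(outcomes))  # 1/12

print(p_ab == p_a * p_b)  # True -> independent
```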

Example of dependence

Now suppose a bag has $3$ red marbles and $2$ blue marbles. You draw one marble without replacement, then draw a second marble. Let $A$ be “the second marble is red,” and let $B$ be “the first marble is red.”

If the first marble is red, the makeup of the bag changes. That means the second draw depends on the first.

Initially, the chance of drawing red is $\frac{3}{5}$. But given that the first marble was red, only $2$ of the $4$ remaining marbles are red, so the probability that the second marble is red becomes:

$$P(A\mid B)=\frac{2}{4}=\frac{1}{2}$$

Because $\frac{1}{2}\neq\frac{3}{5}$, the events are not independent.
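Enumerating ordered draws confirms this. Labeling the marbles makes every ordered pair of draws equally likely, so the counting formula applies:

```python
from fractions import Fraction
from itertools import permutations

# Label the marbles so every ordered draw of two is equally likely
marbles = ["R1", "R2", "R3", "B1", "B2"]
draws = list(permutations(marbles, 2))  # 20 ordered pairs

B = {d for d in draws if d[0][0] == "R"}  # first marble red
A = {d for d in draws if d[1][0] == "R"}  # second marble red

p_a = Fraction(len(A), len(draws))          # 3/5 (unconditional)
p_a_given_b = Fraction(len(A & B), len(B))  # 1/2 (after a red draw)

print(p_a, p_a_given_b)  # 3/5 1/2
```

Since $P(A\mid B)\neq P(A)$, the enumeration shows dependence directly.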

This is a powerful idea in discrete probability: if conditioning changes the probability, then the events are dependent.

Common Mistakes and How to Avoid Them

A frequent mistake is confusing $P(A\mid B)$ with $P(B\mid A)$. These are usually not the same.

For example, in the deck of cards example:

$$P(\text{ace}\mid \text{spade})=\frac{1}{13}$$

but

$$P(\text{spade}\mid \text{ace})=\frac{1}{4}$$

Why are they different? Because the conditioning event restricts the sample space differently in each case: conditioning on "spade" leaves $13$ cards, while conditioning on "ace" leaves only $4$.
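Enumerating the deck makes the asymmetry concrete: the two conditionals divide the same intersection by different denominators.

```python
from fractions import Fraction

ranks = list("A23456789") + ["10", "J", "Q", "K"]
suits = ["spades", "hearts", "diamonds", "clubs"]
deck = [(rank, suit) for rank in ranks for suit in suits]

aces = {c for c in deck if c[0] == "A"}
spades = {c for c in deck if c[1] == "spades"}
both = aces & spades  # just the ace of spades

p_ace_given_spade = Fraction(len(both), len(spades))
p_spade_given_ace = Fraction(len(both), len(aces))

print(p_ace_given_spade)  # 1/13
print(p_spade_given_ace)  # 1/4
```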

Another mistake is forgetting that $P(A\mid B)$ only makes sense when $P(B)>0$. If event $B$ cannot happen, then there is no valid conditional probability based on $B$.

A final mistake is using total sample space counts instead of the restricted sample space counts. Always ask:

  1. What event is given?
  2. What outcomes are now allowed?
  3. How many of those outcomes also satisfy the target event?

That checklist helps prevent errors ✅.

Conclusion

Conditional probability is one of the most important ideas in discrete probability because it lets you update probabilities when new information is known. The key formula is:

$$P(A\mid B)=\frac{P(A\cap B)}{P(B)}$$

It connects directly to sample spaces, counting methods, and independence. When events are independent, knowing one event does not change the probability of the other. When events are dependent, conditioning changes the probability.

Students, if you can identify the new sample space created by the condition, you can solve many conditional probability problems correctly. This skill is essential in discrete mathematics and in real-world reasoning.

Study Notes

  • Conditional probability means the probability of $A$ given that $B$ has happened.
  • The notation is $P(A\mid B)$.
  • The main formula is $P(A\mid B)=\frac{P(A\cap B)}{P(B)}$, with $P(B)>0$.
  • For equally likely discrete outcomes, $P(A\mid B)=\frac{|A\cap B|}{|B|}$.
  • Conditioning changes the sample space by restricting attention to event $B$.
  • Independence means $P(A\mid B)=P(A)$, which is equivalent to $P(A\cap B)=P(A)P(B)$.
  • $P(A\mid B)$ and $P(B\mid A)$ are usually different.
  • Conditional probability is used in cards, dice, student groups, and many other discrete settings.
  • Understanding conditional probability helps connect sample spaces, counting, and independence in discrete mathematics.
