4. Statistics and Probability

Standardisation of Normal Variables

Welcome, students 🎯 In this lesson, you will learn how to turn any normal random variable into a standard normal variable. This is one of the most useful tools in statistics because it lets you compare values from different normal distributions using a single table or calculator method. By the end of the lesson, you should be able to explain what standardisation means, use the formula for a standard normal variable, and solve probability questions with confidence.

What is standardisation?

Standardisation is the process of converting a value from a normal distribution into a value on the standard normal distribution. The standard normal distribution is the special normal curve with mean $0$ and standard deviation $1$.

If a random variable $X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, we write this as $X \sim N(\mu,\sigma^2)$. To standardise a value $x$, we calculate

$$z=\frac{x-\mu}{\sigma}$$

The value $z$ tells us how many standard deviations $x$ is above or below the mean. If $z$ is positive, the value is above the mean. If $z$ is negative, the value is below the mean. This is a key idea in probability and statistics because it allows different normal distributions to be compared on the same scale.

For example, if a class test has mean $70$ and standard deviation $8$, then a score of $86$ gives

$$z=\frac{86-70}{8}=2$$

This means the score is $2$ standard deviations above the mean. If another test has a different mean and spread, the same score would not necessarily have the same meaning. Standardisation helps us compare performance fairly 📊
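The calculation above is a one-line function in Python. A minimal sketch (the name `z_score` is my own label for illustration, not standard notation):

```python
def z_score(x: float, mu: float, sigma: float) -> float:
    """How many standard deviations x lies above (positive) or below (negative) the mean."""
    return (x - mu) / sigma

# Test score example from the text: mean 70, standard deviation 8
print(z_score(86, 70, 8))  # → 2.0
```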

Why standardisation matters in IB Mathematics

In IB Mathematics: Analysis and Approaches HL, standardisation appears in the topic of statistics and probability because it links normal distributions, tables, and probability calculations. Many real-world situations follow a normal distribution closely enough for standard normal methods to be useful. Examples include heights, test scores, measurement error, and many biological data sets.

Standardisation helps with three important tasks:

  1. finding probabilities for values in a normal distribution,
  2. comparing scores from different distributions,
  3. using the symmetry and properties of the normal curve.

Suppose a company measures the lifetime of batteries and finds that battery life is normally distributed. If a battery lasts much longer than usual, standardisation can tell us how unusual that result is. In statistics, this is valuable because unusual values can indicate strong performance, faulty data, or important patterns.

The standard normal variable is usually written as $Z$. If $X \sim N(\mu,\sigma^2)$, then

$$Z=\frac{X-\mu}{\sigma} \sim N(0,1)$$

This is more than a formula to memorise. It is a transformation that changes the location and scale of the distribution while keeping the same normal shape.

How to standardise a value

To standardise a value, follow these steps:

  1. identify the mean $\mu$,
  2. identify the standard deviation $\sigma$,
  3. substitute the value $x$ into $z=\frac{x-\mu}{\sigma}$,
  4. interpret the result.

Let’s use a simple example. Suppose exam scores are normally distributed with mean $60$ and standard deviation $10$.

What is the standardised value of a score of $75$?

$$z=\frac{75-60}{10}=1.5$$

So the score of $75$ is $1.5$ standard deviations above the mean.

Now try a score of $45$:

$$z=\frac{45-60}{10}=-1.5$$

This means the score is $1.5$ standard deviations below the mean. Notice something important: values equally far above and below the mean have opposite $z$-scores. This matches the symmetry of the normal distribution.

A common mistake is to forget the order in the numerator. The correct formula is always $z=\frac{x-\mu}{\sigma}$, not $z=\frac{\mu-x}{\sigma}$. Also, the denominator must be the standard deviation, not the variance. If a question gives variance $\sigma^2$, you must first find $\sigma$ by taking the square root.
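The steps above, including the variance pitfall, can be sketched as a small function. The name `standardise` and the keyword arguments are my own choices for illustration; the point is that if a question supplies $\sigma^2$, the square root must be taken first:

```python
from math import sqrt

def standardise(x, mu, sigma=None, variance=None):
    """Return z = (x - mu) / sigma, accepting either sigma or variance.

    Exactly one of sigma or variance must be supplied; a variance is
    converted to a standard deviation with a square root.
    """
    if (sigma is None) == (variance is None):
        raise ValueError("give exactly one of sigma or variance")
    if variance is not None:
        sigma = sqrt(variance)
    return (x - mu) / sigma

# Exam example from the text: mean 60, standard deviation 10
print(standardise(75, 60, sigma=10))      # → 1.5
print(standardise(45, 60, variance=100))  # → -1.5 (variance 100 means sigma = 10)
```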

Using standardisation to find probabilities

The main purpose of standardisation is often to find probabilities. Probability questions about normal variables usually ask for the chance that a value is below, above, or between certain points.

For a normal random variable $X \sim N(\mu,\sigma^2)$, the probability $P(X<x)$ can be converted into a standard normal probability using $Z=\frac{X-\mu}{\sigma}$. This gives

$$P(X<x)=P\left(Z<\frac{x-\mu}{\sigma}\right)$$

This is powerful because many calculators and tables are built around the standard normal distribution.

Example: Suppose $X \sim N(100,15^2)$. Find $P(X<130)$.

First standardise:

$$z=\frac{130-100}{15}=2$$

So

$$P(X<130)=P(Z<2)$$

Using a calculator or standard normal table, this probability is approximately $0.9772$. That means about $97.72\%$ of values are below $130$.

Now consider $P(X>130)$.

Because total probability is $1$,

$$P(X>130)=1-P(X<130)=1-0.9772=0.0228$$

So only about $2.28\%$ of values are above $130$. This is a good example of how standardisation helps identify rare values 🔍
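These probabilities can be checked without a table, because the standard normal CDF can be written in terms of the error function, $\Phi(z)=\tfrac{1}{2}\left(1+\operatorname{erf}\!\left(z/\sqrt{2}\right)\right)$, which Python's standard library provides. A sketch (the helper name `phi` is my own):

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF, Phi(z), via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 100, 15
z = (130 - mu) / sigma        # = 2.0
p_below = phi(z)              # P(X < 130) = P(Z < 2)
print(round(p_below, 4))      # → 0.9772
print(round(1 - p_below, 4))  # → 0.0228
```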

Finding probabilities between two values

A very common type of question asks for the probability that a value lies between two numbers.

If $X \sim N(\mu,\sigma^2)$, then

$$P(a<X<b)=P\left(\frac{a-\mu}{\sigma}<Z<\frac{b-\mu}{\sigma}\right)$$

Example: Let $X \sim N(50,6^2)$. Find $P(44<X<62)$.

Standardise both values:

$$z_1=\frac{44-50}{6}=-1$$

$$z_2=\frac{62-50}{6}=2$$

So

$$P(44<X<62)=P(-1<Z<2)$$

Using standard normal probabilities,

$$P(Z<2)\approx 0.9772$$

and

$$P(Z<-1)\approx 0.1587$$

Therefore,

$$P(-1<Z<2)=0.9772-0.1587=0.8185$$

So the probability is about $0.8185$, or $81.85\%$.
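The interval calculation follows the same pattern in code: standardise both endpoints, then subtract the CDF values. A sketch using the error-function form of $\Phi$ (the helper name `phi` is my own). Note that working at full precision gives $0.8186$ when rounded to four places; the $0.8185$ in the text comes from subtracting values that were already rounded.

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF, Phi(z), via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 50, 6
z1 = (44 - mu) / sigma   # = -1.0
z2 = (62 - mu) / sigma   # =  2.0

p = phi(z2) - phi(z1)    # P(-1 < Z < 2)
print(round(p, 4))       # → 0.8186 (full precision; rounded table values give 0.8185)
```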

This type of question shows why the standard normal distribution is so useful. Rather than dealing with every normal distribution separately, we translate the problem into one familiar distribution.

Interpreting $z$-scores in context

A $z$-score is not just a number from a formula. It tells a story about position relative to the mean.

  • $z=0$ means the value is exactly at the mean.
  • $z=1$ means the value is one standard deviation above the mean.
  • $z=-2$ means the value is two standard deviations below the mean.

In real life, this helps with comparison. For example, if a student gets $85$ on a test where the class average is $80$ with standard deviation $5$, the score has

$$z=\frac{85-80}{5}=1$$

If another student gets $72$ on a different test with mean $68$ and standard deviation $2$, the score has

$$z=\frac{72-68}{2}=2$$

Even though the second score is numerically smaller, it is more impressive relative to its class because it is further above the mean. Standardisation makes this comparison possible.
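The comparison can be reproduced in a few lines (again, `z_score` is just an illustrative name):

```python
def z_score(x, mu, sigma):
    """How many standard deviations x lies above (or below) the mean."""
    return (x - mu) / sigma

student_a = z_score(85, 80, 5)  # first test: mean 80, sd 5
student_b = z_score(72, 68, 2)  # second test: mean 68, sd 2
print(student_a, student_b)     # → 1.0 2.0
```

Despite the lower raw mark, the second student's score sits further above its own mean, which is exactly what the $z$-scores reveal.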

This idea is also used in quality control, where machines produce parts that must stay within a certain tolerance. A part with a large absolute $z$-score may be more unusual and may require inspection.

Connection to the broader topic of statistics and probability

Standardisation connects several major ideas in this topic area. It depends on the description of data using mean and standard deviation, which are core measures of location and spread. It also supports probability calculations for continuous random variables, especially the normal distribution.

In statistics, standardisation is related to comparing data from different scales. In probability, it allows us to turn any normal variable into the standard normal variable $Z$, which has well-known probabilities. In modelling, it is useful when judging whether an observation is typical or unusual.

This lesson also connects with later work in inferential statistics. Standardised values help describe sampling distributions, test statistics, and confidence intervals. Although those ideas go beyond this lesson, the same logic appears again: transform a quantity into a standard form so probabilities can be found more easily.

Conclusion

Standardisation of normal variables is a simple but powerful idea. The formula $z=\frac{x-\mu}{\sigma}$ converts any normal value into a standard normal value, showing how far it is from the mean in standard deviation units. This makes it possible to compare values across different normal distributions and to calculate probabilities using the standard normal distribution. Students, if you understand how to standardise and interpret $z$-scores, you have a strong foundation for many probability problems in IB Mathematics: Analysis and Approaches HL ✅

Study Notes

  • Standardisation converts a value from $X \sim N(\mu,\sigma^2)$ into a value from the standard normal distribution $Z \sim N(0,1)$.
  • The standardisation formula is $z=\frac{x-\mu}{\sigma}$.
  • A $z$-score tells how many standard deviations a value is above or below the mean.
  • Positive $z$-scores are above the mean; negative $z$-scores are below the mean.
  • Standardisation is used to find probabilities such as $P(X<x)$, $P(X>x)$, and $P(a<X<b)$.
  • For interval probabilities, standardise both endpoints and subtract probabilities if needed.
  • The standard normal distribution is the same for all normal problems, so it provides one common scale.
  • Standardisation is important in statistics, probability, comparisons, and interpretation of unusual values.
  • Always use the standard deviation $\sigma$, not the variance $\sigma^2$, in the formula.
  • Standardisation is a key skill for IB Mathematics: Analysis and Approaches HL because it supports normal distribution calculations and later statistical methods.

Standardisation Of Normal Variables β€” IB Mathematics Analysis And Approaches HL | A-Warded