6. HL Extension — Resource Management

Deadlock in Resource Management

Imagine a busy school hallway where two students each need to pass through a door, but each one is holding the other’s backpack strap. Neither can move forward, and neither can let go. The hallway stops. In computer systems, a very similar problem can happen when processes or threads each wait for resources held by others. This is called deadlock ⚠️.

In this lesson you will learn what deadlock is, why it matters in IB Computer Science HL, and how operating systems try to prevent, avoid, detect, or recover from it. By the end, you should be able to explain the main ideas, use correct terminology, and connect deadlock to the wider HL Extension — Resource Management topic.

What Deadlock Means

A deadlock happens when two or more processes are stuck because each one is waiting for a resource held by another process in the same group. Since none of them can continue, the system reaches a standstill.

A process is an active program in memory, and a resource can be anything a process needs to run, such as a printer, a file, a memory block, or a locked data item. In modern systems, resources are often shared by many tasks at once. That sharing is useful, but it creates the possibility of conflict.

Deadlock is not the same as a simple delay. A delay means a process is waiting, but it can still continue later. In deadlock, the waiting forms a cycle that cannot break on its own.

A common real-world example is traffic at a four-way intersection. If four cars enter at the same time and each car blocks the next one, all cars may remain stuck. In computing, the “cars” are processes and the “intersection” is the shared system resource.

The Four Necessary Conditions

Deadlock does not happen randomly. For deadlock to occur, all four of the following conditions must be true at the same time:

  1. Mutual exclusion: At least one resource must be held in a way that only one process can use it at a time.
  2. Hold and wait: A process is holding one resource while waiting for another.
  3. No preemption: Resources cannot be forcibly taken away; they must be released voluntarily.
  4. Circular wait: A circular chain exists where each process waits for a resource held by the next process.

These four conditions are important because if an operating system prevents even one of them, deadlock cannot occur.

For example, suppose Process A holds Printer 1 and waits for Scanner 1, while Process B holds Scanner 1 and waits for Printer 1. If neither resource can be taken away automatically, both processes are blocked. This is a classic deadlock situation.

A helpful way to remember this is to think of a chain of waiting. If each person in the chain is waiting for the next person to give up something, and nobody can move, the system freezes ❄️.

Resource Allocation and System Control

Deadlock belongs to the HL Extension — Resource Management because it shows what can go wrong when a computer system shares resources. An operating system must decide how to allocate resources efficiently while also keeping the system safe and responsive.

Resource allocation is the act of giving resources to processes. System control means managing that allocation so processes do not interfere with each other in harmful ways. If the OS gives resources too freely, deadlock may occur. If it is too strict, performance may drop because tasks spend too much time waiting.

This is a balance problem. The operating system must decide whether to allow a process to request multiple resources, whether to make it wait, or whether to reorder requests to reduce risk. Deadlock therefore connects directly to scheduling, concurrency, and memory and processor management.

Concurrency means multiple processes are making progress at the same time. That improves speed and responsiveness, but it also increases the chance that two processes will compete for the same resource. Deadlock is one of the main risks of concurrency.

A Simple Example with Two Resources

Let’s use a clear example.

Process A needs Resource $R_1$ and Resource $R_2$.

Process B also needs Resource $R_1$ and Resource $R_2$.

A dangerous sequence might be:

  • Process A gets $R_1$.
  • Process B gets $R_2$.
  • Process A waits for $R_2$.
  • Process B waits for $R_1$.

Now both processes are blocked. Neither can complete because each is holding one resource and waiting for the other. This creates a circular wait.
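The dangerous sequence above can be reproduced as a small Python threading sketch. This is our own illustration, not a standard algorithm: the names (`worker`, `result`) are invented, and a `Barrier` plus an acquire timeout are used so the demonstration terminates instead of hanging forever, which a real deadlock would do.

```python
import threading

r1, r2 = threading.Lock(), threading.Lock()   # Resource R1 and Resource R2
both_hold = threading.Barrier(2)   # both threads hold their first lock before continuing
both_done = threading.Barrier(2)   # neither releases until both have tried the second lock
result = {}

def worker(name, first, second):
    with first:                        # hold one resource...
        both_hold.wait()               # ...until the other thread holds its resource too
        # In a real deadlock this acquire would block forever; the timeout
        # is only here so the demonstration halts and reports what happened.
        got = second.acquire(timeout=0.2)
        result[name] = got
        if got:
            second.release()
        both_done.wait()               # keep holding the first lock until both have tried

a = threading.Thread(target=worker, args=("A", r1, r2))
b = threading.Thread(target=worker, args=("B", r2, r1))
a.start(); b.start()
a.join(); b.join()
print(sorted(result.items()))   # [('A', False), ('B', False)] -- each wait timed out
```

All four conditions appear here: the locks are mutually exclusive, each thread holds one lock while waiting for the other (hold and wait), Python never takes a lock away (no preemption), and A-waits-for-B-waits-for-A is the circular wait.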

In diagrams, this is often shown with a resource allocation graph. Processes are circles, resources are squares, and arrows show requests and assignments. A cycle in such a graph may indicate deadlock. If each resource has only one instance, a cycle means deadlock has occurred.
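The cycle check described above can be sketched in code. The sketch below assumes the graph has already been reduced to a "wait-for" form (process → processes it waits on), which is valid when each resource has a single instance; the function name and dictionary format are our own choices.

```python
def has_deadlock(wait_for):
    """Return True if the wait-for graph contains a cycle.

    wait_for maps each process name to the set of processes it is waiting on.
    With single-instance resources, a cycle means deadlock has occurred.
    """
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / finished
    colour = {p: WHITE for p in wait_for}

    def dfs(p):
        colour[p] = GREY
        for q in wait_for.get(p, ()):
            if colour.get(q, WHITE) == GREY:              # back edge: cycle found
                return True
            if colour.get(q, WHITE) == WHITE and dfs(q):
                return True
        colour[p] = BLACK
        return False

    return any(colour[p] == WHITE and dfs(p) for p in wait_for)

# A waits on B and B waits on A: the circular wait from the example
print(has_deadlock({"A": {"B"}, "B": {"A"}}))   # True
print(has_deadlock({"A": {"B"}, "B": set()}))   # False -- B can finish, then A
```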

This type of reasoning is useful in exam questions because it shows not just memorization, but understanding of process interaction.

Preventing Deadlock

One way to deal with deadlock is to stop it from happening in the first place. This is called prevention.

Prevention works by making sure at least one of the four necessary conditions can never happen. For example:

  • Break mutual exclusion by making a resource shareable, where possible.
  • Break hold and wait by requiring a process to request all needed resources at once.
  • Break no preemption by allowing the OS to take back resources in some cases.
  • Break circular wait by forcing resources to be requested in a fixed order.

A fixed ordering rule is common and practical. For example, if every process must request Resource $R_1$ before Resource $R_2$ before Resource $R_3$, then circular wait is much harder to create.
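The fixed-ordering rule can be sketched in Python. This is a minimal illustration, not a standard library feature: the numbered `resources` table and the `use_resources` helper are hypothetical names we chose for the example.

```python
import threading

# Hypothetical resources, numbered to define one global acquisition order.
resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def use_resources(*numbers):
    """Acquire the requested resources in ascending number order,
    do some work, then release them in reverse order."""
    held = []
    for n in sorted(numbers):          # fixed rule: R1 before R2 before R3
        resources[n].acquire()
        held.append(n)
    try:
        return f"worked with {sorted(numbers)}"
    finally:
        for n in reversed(held):
            resources[n].release()

# Even if two tasks ask for the same pair in opposite textual order,
# both actually lock in ascending order, so no circular wait can form.
print(use_resources(2, 1))   # worked with [1, 2]
```

Because every process climbs the same numbered ladder, no process can ever hold a high-numbered resource while waiting for a lower-numbered one, which is exactly what a circular wait would require.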

However, prevention can reduce efficiency. If a process must request all resources at once, it may hold resources longer than necessary, which can waste system time. IB Computer Science HL often asks you to consider these trade-offs.

Avoiding Deadlock

Another strategy is deadlock avoidance. In avoidance, the system examines each resource request and decides whether granting it might lead the system into an unsafe state.

An unsafe state is a state where deadlock could eventually happen, even if it has not happened yet. A safe state is one where the OS can still find a way for all processes to finish.

A well-known example is the Banker’s algorithm. The idea is similar to a bank deciding whether it can safely lend money to customers and still meet all future demands. The operating system checks whether it can still satisfy every process’s maximum possible resource needs after giving out a request.
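The Banker's idea can be sketched as a safe-state check. The version below is a deliberate simplification with a single resource type (the full algorithm handles several resource types with vectors); the function name and parameters are our own.

```python
def is_safe(available, max_need, allocated):
    """Banker's-style safe-state check for one resource type (simplified).

    available: free units right now
    max_need / allocated: per-process maximum demand and current holding
    Returns True if some order exists in which every process can finish.
    """
    need = [m - a for m, a in zip(max_need, allocated)]   # what each may still ask for
    finished = [False] * len(max_need)
    free = available
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finished):
            # If a process's worst-case remaining need fits in the free pool,
            # pretend it runs to completion and returns its allocation.
            if not done and need[i] <= free:
                free += allocated[i]
                finished[i] = True
                progressed = True
    return all(finished)

# 3 free units: P1 (needs 2 more) can finish, freeing enough for P0, then P2
print(is_safe(3, max_need=[7, 4, 9], allocated=[3, 2, 2]))   # True  (safe state)
print(is_safe(0, max_need=[5, 5], allocated=[2, 2]))         # False (unsafe state)
```

The OS would run a check like this before granting a request: if granting it would leave the system unsafe, the request is delayed even though the resources are physically available.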

This method is more flexible than prevention, but it requires extra information, such as the maximum resources each process may need. It also adds overhead because the OS must perform extra calculations. That makes it a good example of resource management trade-offs: better safety, but more system work.

Detecting and Recovering from Deadlock

Some systems allow deadlock to happen and then deal with it later. This is called detection and recovery.

In detection, the operating system checks whether deadlock exists, often by examining resource allocation patterns or waiting chains. If a cycle is found, the OS may conclude that deadlock has occurred.

Once deadlock is detected, recovery methods may include:

  • terminating one or more processes
  • preempting resources from a process
  • rolling back a process to a previous safe state

These actions can fix the deadlock, but they may also lose work or harm user experience. For example, if a word processor process is killed to recover from deadlock, unsaved work may be lost. So detection and recovery are useful, but they are not ideal if the goal is to avoid interruption.

In real systems, designers choose strategies based on the type of system. A database server may need strong avoidance or careful locking rules because data correctness is critical. A simpler system may accept rare deadlocks and recover from them when needed.

Deadlock and Broader HL Resource Management

Deadlock is only one part of resource management, but it connects to several other HL ideas.

It links to scheduling because the order in which processes are given CPU time can affect whether they reach a resource request at the same time. It links to concurrency because more simultaneous activity means more resource conflict. It links to memory and processor management because locks, buffers, and memory areas are shared resources.

For example, two threads in a program may each need access to the same data structure in memory. If both threads lock different parts of the system and wait for the other lock, deadlock may occur. That is why programmers use careful synchronization methods.

This shows that deadlock is not just an abstract theory. It is a practical issue in real systems such as databases, operating systems, network servers, and embedded devices.

Conclusion

Deadlock is a situation where processes are permanently blocked because each one waits for resources held by another. It is central to HL Extension — Resource Management because it shows the difficulties of sharing resources safely and efficiently. You should remember the four necessary conditions: mutual exclusion, hold and wait, no preemption, and circular wait. You should also know the main OS responses: prevention, avoidance, detection, and recovery.

For IB Computer Science HL, the key skill is not just naming deadlock, but explaining how it happens and evaluating methods to deal with it. When you can trace a chain of waiting and explain why no process can continue, you are applying strong HL reasoning ✅.

Study Notes

  • Deadlock is when processes are stuck waiting for each other’s resources.
  • It happens only when all four conditions are present: mutual exclusion, hold and wait, no preemption, and circular wait.
  • Deadlock is a major issue in concurrency and resource allocation.
  • A resource allocation graph can help show circular waiting.
  • Prevention stops deadlock by breaking one of the required conditions.
  • Avoidance uses safe-state checking, such as the Banker’s algorithm.
  • Detection lets deadlock happen, then the OS finds and fixes it.
  • Recovery may involve ending processes, taking resources back, or rolling back work.
  • Deadlock connects to scheduling, memory management, and processor/resource control.
  • In exams, always explain the cause, the condition(s) involved, and the OS response.
