Concurrency in Resource Management
In this lesson you will learn how computers handle more than one task at the same time, or make it seem like they do. Concurrency is a key idea in HL Extension — Resource Management because modern systems must share limited resources like the CPU, memory, and storage while still responding quickly to users 💻⚡. By the end of this lesson, you should be able to explain the main ideas and terminology behind concurrency, apply IB Computer Science HL reasoning to examples, and connect concurrency to system optimization, scheduling, and control.
Learning objectives
- Explain what concurrency means and why it matters.
- Distinguish concurrency from parallelism.
- Describe how scheduling helps manage competing processes.
- Explain why shared resources can create problems such as race conditions.
- Connect concurrency to memory and processor management in real systems.
What Concurrency Means
Concurrency is the ability of a computer system to manage multiple tasks during the same time period. Those tasks might be running on one CPU core or across several cores. The important idea is that the system is coordinating work so that different programs, threads, or processes can make progress without one task completely blocking the others.
A useful real-world example is a busy restaurant 🍽️. One chef can prepare several meals by switching attention between tasks: chopping vegetables, boiling pasta, checking the oven, and plating dishes. The chef is not literally doing every task at exactly the same instant, but the kitchen still handles many orders at once. That is similar to concurrency.
In computing, this helps systems stay responsive. For example, while a music app is playing audio, your phone can still let you send a message or load a web page. The operating system uses concurrency to coordinate those tasks so they share the processor fairly.
Important terms include:
- Process: a running program with its own memory space.
- Thread: a smaller unit of execution inside a process.
- Task: a general piece of work that needs to be completed.
- State: the current condition of a process or thread, such as running, ready, or blocked.
Concurrency is not just about speed. It is also about fairness, responsiveness, and efficient use of hardware resources.
Concurrency and Parallelism
Concurrency and parallelism are related, but they are not the same.
- Concurrency means handling multiple tasks in overlapping time periods.
- Parallelism means doing multiple tasks at the exact same time.
A single-core CPU can support concurrency by switching quickly between tasks. This is called time slicing. Each task gets a small time period on the CPU, then the scheduler pauses it and gives another task a turn. To the user, it may look like several things are happening at once.
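Time slicing on one core can be modelled with a toy round-robin scheduler. This is only a teaching sketch in Python, not how a real operating system works: each "task" is a generator that yields after one slice of work, and a loop plays the role of the scheduler, pausing one task and giving another a turn.

```python
# Toy round-robin scheduler: each task is a generator that yields
# control after a small slice of work (cooperative time slicing).
def task(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"

ready = [task("A", 2), task("B", 3)]   # the ready queue
log = []
while ready:
    t = ready.pop(0)                   # pick the next task in line
    try:
        log.append(next(t))            # let it run for one time slice
        ready.append(t)                # "preempted": back of the queue
    except StopIteration:
        pass                           # task finished; drop it

print(log)
# → ['A step 0', 'B step 0', 'A step 1', 'B step 1', 'B step 2']
```

Notice that neither task ever runs twice in a row while the other is waiting: both make progress in overlapping time periods, even though only one runs at any instant.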
A multi-core CPU can support both concurrency and parallelism. For example, a laptop with $4$ cores may run one game, one browser tab, and a file download concurrently, while also running some parts of those tasks in parallel on different cores.
Why does this matter for IB Computer Science HL? Because many systems are designed to balance limited resources. If a program needs all the CPU time, other programs may become slow. Concurrency helps reduce that problem by sharing the processor in a controlled way.
A simple analogy is a single cashier at a shop versus several cashiers. One cashier can still serve many customers by calling the next person in line quickly, which is concurrency. Several cashiers actually serving different customers at the same time is parallelism.
Scheduling and the Operating System
Concurrency depends on the operating system, especially the scheduler. The scheduler decides which task gets CPU time and for how long. This is central to resource allocation and control because the CPU is one of the most valuable shared resources in the system.
Common scheduling ideas include:
- Preemptive scheduling: the operating system can stop a task and switch to another one.
- Non-preemptive scheduling: a task keeps the CPU until it finishes or waits for something.
- Time slicing: each task gets a short time slot.
- Context switching: the system saves the state of one task and loads the state of another.
Context switching is essential, but it is not free. Saving and restoring registers, program counters, and other state takes time. If there are too many switches, the system can waste time on management instead of useful work. This is one reason why resource management must be carefully balanced.
For example, imagine a student laptop running a video call, a word processor, and an antivirus scan. The OS may give the video call more frequent CPU access because real-time audio and video need low delay. The antivirus scan can run in the background with lower priority. This scheduling choice improves overall user experience.
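The laptop example above can be sketched as a priority queue: the scheduler repeatedly picks the highest-priority ready task. The task names and priority numbers here are hypothetical, and real schedulers are far more sophisticated, but the core idea is the same.

```python
import heapq

# Toy priority scheduling: a lower number means a higher priority.
# These task names and priorities are made up for illustration.
ready = [(0, "video call"), (2, "antivirus scan"), (1, "word processor")]
heapq.heapify(ready)        # turn the list into a min-heap

order = []
while ready:
    priority, name = heapq.heappop(ready)  # always the most urgent task
    order.append(name)

print(order)
# → ['video call', 'word processor', 'antivirus scan']
```

The real-time video call is served first, while the background scan is deferred until nothing more urgent is waiting.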
Problems Caused by Shared Resources
Concurrency becomes difficult when multiple tasks need the same resource at the same time. Shared resources may include memory, files, printers, databases, or variables in a program.
A common problem is a race condition. This happens when the result depends on the exact timing of events. If two threads update the same value without proper coordination, the final result may be wrong.
For example, suppose two threads both try to add $1$ to a shared counter. Each thread may read the old value, calculate a new value, and write it back. If both read the value before either writes, one update can be lost. The final result may be smaller than expected.
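The lost-update scenario above can be reproduced in Python. This sketch makes the read-modify-write steps explicit and inserts an artificial pause so that both threads are guaranteed to read the old value before either writes; real race conditions depend on timing and are much harder to trigger reliably.

```python
import threading
import time

counter = 0

def unsafe_increment():
    global counter
    # Read-modify-write with no coordination: each thread reads
    # the old value, pauses (simulating a context switch), then
    # writes back -- so one of the two updates is lost.
    old = counter
    time.sleep(0.1)          # force the other thread to interleave
    counter = old + 1

threads = [threading.Thread(target=unsafe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # → 1, not 2 -- one increment was lost
```

Both threads read 0, so both write 1; the second write silently overwrites the first.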
This is why concurrency needs control mechanisms such as:
- Mutual exclusion: only one task can access a critical resource at a time.
- Critical section: the part of a program that accesses shared data.
- Lock: a tool that prevents other threads from entering a critical section.
- Semaphore: a control variable used to manage access to resources.
These tools protect data integrity. Without them, systems might crash, produce incorrect results, or show inconsistent behavior. For instance, in online shopping, two customers cannot both be assigned the same last item in stock if the inventory system is properly synchronized.
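A lock fixes the lost-update problem by enforcing mutual exclusion around the critical section. This sketch repeats the same deliberately slow read-modify-write as the race-condition example, but now only one thread at a time may hold the lock.

```python
import threading
import time

counter = 0
lock = threading.Lock()

def safe_increment():
    global counter
    with lock:               # mutual exclusion: one thread at a time
        old = counter        # critical section starts here
        time.sleep(0.05)     # same artificial pause as before
        counter = old + 1    # critical section ends here

threads = [threading.Thread(target=safe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # → 2 -- both updates survive
```

The second thread must wait until the first has finished its read-modify-write, so it always sees the updated value.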
Concurrency and Memory Management
Concurrency also affects memory management. When many processes and threads are active, the system must decide how to store their data efficiently and safely.
Each process usually has its own address space, which helps protect it from interference by other processes. Threads within the same process often share memory, which makes communication faster but also increases the risk of errors if access is not controlled.
If memory is limited, the operating system may need to swap data in and out of main memory. Too many active tasks can increase memory pressure and slow the system down. This is part of resource management because the OS must balance processor time with memory use.
For example, a web browser can open many tabs. Each tab may use separate processes or threads. This improves stability because one crashed tab may not destroy the whole browser. However, it also increases memory use. The system must manage this trade-off carefully.
Concurrency can also improve cache efficiency in some cases, but it can create new issues too. If several threads on different cores repeatedly write to nearby memory locations, they may invalidate one another's cached data, a problem known as false sharing. Good system design tries to reduce unnecessary contention.
Why Concurrency Matters in Real Systems
Concurrency is essential in many real-world systems because users expect fast responses and reliable service. Examples include:
- mobile phones switching between apps, notifications, and network activity
- servers handling thousands of web requests
- games updating graphics, sound, and controls
- databases processing many users at once
In a web server, concurrency allows many clients to connect without making each person wait for all others to finish. The server may use multiple threads or asynchronous tasks to handle requests efficiently. This is a direct example of resource allocation and control, because the server must divide CPU time, memory, and network access among many users.
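A very simplified model of this idea uses a thread pool: a small number of worker threads is shared among many "clients". The request handler here is hypothetical and just sleeps to stand in for real network or database work.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(client_id):
    # Stand-in for real work: while this worker waits on "the
    # network", the pool's other workers serve other clients.
    time.sleep(0.1)
    return f"response for client {client_id}"

# Four worker threads shared among eight clients: no client has to
# wait for all the others to finish before being served.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results[0])  # → response for client 0
```

With four workers, eight 0.1-second requests finish in roughly 0.2 seconds instead of 0.8, because requests are handled concurrently rather than one after another.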
Concurrency also supports system optimization. A system that handles tasks concurrently can reduce idle CPU time. If one task waits for input from disk or network, another task can use the processor instead of leaving it unused. This improves throughput, which is the amount of work completed in a given time.
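The throughput benefit of overlapping waits can be measured directly. In this sketch, two tasks each "wait on I/O" for 0.2 seconds; run one after the other they would take about 0.4 seconds, but run concurrently their waits overlap.

```python
import threading
import time

def io_task():
    time.sleep(0.2)   # pretend this is a disk or network wait

start = time.time()
threads = [threading.Thread(target=io_task) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# Both waits overlap, so the total time is close to one wait (0.2 s),
# not the 0.4 s that running them one after another would cost.
print(round(elapsed, 1))
```

While one task is blocked waiting, the CPU is free to progress the other, which is exactly the throughput improvement described above.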
However, more concurrency is not always better. Too many threads can increase overhead, make debugging harder, and introduce synchronization problems. The best solution depends on the workload and the hardware.
Conclusion
Concurrency is a central idea in HL Extension — Resource Management because it shows how a computer system shares limited resources among many tasks. You should now understand that concurrency means managing overlapping tasks, not necessarily doing everything at the same instant. You should also be able to explain the difference between concurrency and parallelism, describe the role of scheduling, and identify problems like race conditions. In real systems, concurrency helps improve responsiveness, fairness, and efficient use of processor and memory resources. It is one of the main ways operating systems keep modern computers fast, stable, and useful 🚀.
Study Notes
- Concurrency means handling multiple tasks during the same time period.
- Parallelism means performing multiple tasks at exactly the same time.
- The operating system scheduler controls which task gets CPU time.
- Time slicing helps one CPU support concurrency.
- Context switching lets the OS move between tasks by saving and restoring state.
- Shared resources can cause race conditions if access is not controlled.
- Critical sections need mutual exclusion to protect data.
- Locks and semaphores are common synchronization tools.
- Threads in the same process may share memory, which is efficient but risky.
- Concurrency is important for responsiveness, throughput, and resource management.
- Too much concurrency can create overhead and synchronization problems.
- Concurrency connects directly to processor management, memory management, and system optimization.
