Computer Architecture Revisited
This lesson revisits how a computer is built and how its parts work together to carry out instructions 🧠💻. You will connect the big ideas of computer architecture to the fetch-execute cycle, memory, and performance, which are all important in IB Computer Science HL. By the end of this lesson, you should be able to explain key terms, describe how a program moves through the CPU, and relate architecture choices to real-world computing devices like phones, laptops, and game consoles.
Lesson objectives
- Explain the main ideas and terminology behind computer architecture.
- Apply IB Computer Science HL reasoning to architecture-related questions.
- Connect computer architecture to the broader topic of computer organization.
- Summarize how architecture affects performance and system design.
- Use examples and evidence to support explanations of how computers execute programs.
What computer architecture means
Computer architecture describes the design of a computer system at a level that affects how it works for the user and the programmer. In simple terms, it is the blueprint of the computer’s main parts and how those parts communicate. A useful way to think about it is like a city 🏙️: the streets are the pathways for data, the buildings are components such as the CPU and memory, and traffic rules decide how information moves around.
In IB Computer Science, computer architecture is usually linked to the idea of a stored program system, where instructions and data are kept in main memory and the processor reads them as needed. This is one of the most important ideas in computing because it explains how a general-purpose machine can run many different programs without being rebuilt each time.
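The stored-program idea can be sketched in a few lines. In this illustrative Python model (the instruction strings and addresses are invented for the example, not a real instruction set), a single memory array holds both instructions and data, so changing the program only means changing the contents of memory:

```python
# A sketch of the stored-program concept: one memory array holds both
# instructions (here, simple strings) and data (plain numbers).
# The hardware does not distinguish them; the program's addresses do.

memory = [
    "LOAD 4",    # address 0: instruction - load the value at address 4
    "ADD 5",     # address 1: instruction - add the value at address 5
    "STORE 6",   # address 2: instruction - store the result at address 6
    "HALT",      # address 3: instruction - stop
    7,           # address 4: data
    5,           # address 5: data
    0,           # address 6: data (the result will go here)
]

# Replacing the contents of memory changes the program - the machine
# itself does not need to be rebuilt to run something different.
print(memory[0])  # the CPU fetches this like any other memory read
```

This is why the same phone can run a calculator one moment and a game the next: only the contents of memory change.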
The main components are:
- CPU: the part that processes instructions.
- Main memory: stores programs and data currently in use.
- Secondary storage: keeps data long-term, such as SSDs or hard drives.
- Input and output devices: allow communication with the outside world.
- Buses: pathways that transfer data, addresses, and control signals.
These parts work together so that instructions can be fetched, decoded, and executed. Without this structure, a computer would not be able to perform tasks like opening a game, calculating a spreadsheet, or streaming a video.
The role of the CPU
The CPU is often called the “brain” of the computer, but that idea should be used carefully. A brain thinks and reasons; a CPU follows instructions very quickly. The CPU does not decide goals by itself. Instead, it processes machine instructions supplied by software.
The CPU usually contains these parts:
- Control Unit (CU): directs the fetch-execute cycle and sends control signals.
- Arithmetic Logic Unit (ALU): carries out arithmetic and logic operations.
- Registers: very small, very fast storage locations inside the CPU.
Examples of registers include:
- Program Counter (PC): stores the address of the next instruction to fetch.
- Current Instruction Register (CIR): stores the instruction currently being decoded or executed.
- Memory Address Register (MAR): stores the address in memory that the CPU wants to access.
- Memory Data Register (MDR): stores the data being transferred to or from memory.
Registers matter because they are much faster than main memory. If the CPU had to read every value directly from RAM all the time, it would waste time waiting. The small size of registers is a trade-off for speed.
For example, when a student opens a calculator app on a phone, the CPU repeatedly uses registers to hold the next instruction, the memory address to access, and the result of a calculation. This is how the processor keeps work organized at high speed ⚡.
Fetch-execute cycle in detail
The fetch-execute cycle is the repeating process the CPU uses to run instructions. It is a central idea in computer organization because it shows how the architecture turns stored instructions into actions.
A common sequence is:
- The address in the $PC$ is copied to the $MAR$.
- The instruction at that address is fetched from memory into the $MDR$.
- The instruction is copied from the $MDR$ into the $CIR$.
- The $PC$ is updated so it points to the next instruction.
- The instruction in the $CIR$ is decoded by the Control Unit.
- The instruction is executed using the $ALU$, registers, memory, or input/output devices.
This cycle repeats continuously while a program runs.
For example, suppose a program contains an instruction to add two numbers. During execution, the CPU may load the numbers from registers or memory into the $ALU$, perform the addition, and store the result back in a register or memory. If the instruction is a branch or jump, the $PC$ may be changed to a different address, which makes the program follow a new path.
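The steps above can be simulated directly. The sketch below uses a made-up one-address instruction set (LOAD, ADD, STORE, JMP, HALT) and mirrors the register names from the cycle; it is an illustration of the idea, not any real CPU's instruction format:

```python
# A minimal fetch-decode-execute simulator with the registers named above.

def run(memory):
    pc = 0          # Program Counter: address of the next instruction
    acc = 0         # accumulator register used by the ALU
    while True:
        mar = pc                 # address in the PC is copied to the MAR
        mdr = memory[mar]        # instruction is fetched into the MDR
        cir = mdr                # instruction is copied into the CIR
        pc += 1                  # PC updated to point at the next instruction
        op, *arg = cir.split()   # Control Unit decodes the instruction
        if op == "LOAD":
            acc = memory[int(arg[0])]
        elif op == "ADD":
            acc = acc + memory[int(arg[0])]   # the ALU performs the addition
        elif op == "STORE":
            memory[int(arg[0])] = acc
        elif op == "JMP":
            pc = int(arg[0])     # a jump changes the PC to a new address
        elif op == "HALT":
            return memory

mem = ["LOAD 4", "ADD 5", "STORE 6", "HALT", 7, 5, 0]
run(mem)
print(mem[6])  # → 12
```

Notice that the jump instruction simply overwrites the PC, which is exactly why a branch makes the program follow a new path.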
It is important to know that modern CPUs often make this process more complex with techniques such as pipelining and caching. However, the basic fetch-decode-execute idea still explains the foundation of program execution.
Memory, buses, and data movement
A computer is not just about processing; it is also about moving information. The architecture of the buses and memory has a big effect on performance.
There are three main types of buses:
- Data bus: transfers actual data.
- Address bus: carries memory addresses.
- Control bus: carries control signals such as read and write.
If the $CPU$ wants to read data from memory, it puts the address on the address bus, sends a read signal on the control bus, and receives the data on the data bus. This is like handing a librarian a shelf location and then receiving the book from that shelf 📚.
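The read just described can be sketched as three separate bus roles. The `SimpleBus` class and its signal names are invented for this illustration:

```python
# An illustrative model of a single memory read split into the three buses.

class SimpleBus:
    def __init__(self, memory):
        self.memory = memory

    def cpu_read(self, address):
        address_bus = address    # CPU places the address on the address bus
        control_bus = "READ"     # CPU sends a read signal on the control bus
        if control_bus == "READ":
            # memory responds by placing the value on the data bus
            data_bus = self.memory[address_bus]
        return data_bus          # CPU receives the value from the data bus

bus = SimpleBus(memory=[10, 20, 30])
print(bus.cpu_read(2))  # → 30
```

Splitting the roles this way makes it clear why address bus width limits how much memory can be addressed, while data bus width limits how much is transferred per read.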
Main memory, usually RAM, is fast compared with secondary storage, but it is volatile. That means its contents are lost when power is turned off. Secondary storage is non-volatile, so it keeps files even without power. This difference matters when analyzing how a computer stores a document while it is being edited versus when it is saved.
A simple example is a photo editor. When a student opens a picture, the image file is loaded from secondary storage into main memory so the CPU can work on it quickly. If the image were edited directly from a slow hard drive, the program would feel much slower.
Performance, architecture choices, and real-world impact
Computer architecture affects how fast and efficient a system feels. Several factors shape performance, including clock speed, number of cores, cache memory, and bus width.
- Clock speed: the number of cycles per second. A higher clock speed can mean more operations per second, though it is not the only factor.
- Multi-core processors: multiple CPU cores can handle different tasks at the same time.
- Cache memory: very fast memory located close to the CPU that stores frequently used instructions and data.
- Bus width: a wider bus can transfer more data at once.
Architecture choices involve trade-offs. For example, a very fast CPU is not helpful if memory access is slow. Similarly, a large amount of cache can improve speed, but it takes more chip space and costs more to manufacture.
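The cache trade-off can be made concrete with the standard average-memory-access-time formula, AMAT = hit time + miss rate × miss penalty. The timing numbers below are illustrative, not measurements from real hardware:

```python
# AMAT = hit_time + miss_rate * miss_penalty
# A cache miss forces a slow trip to main memory, so even a small
# change in miss rate has a large effect on the average.

def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

# Smaller cache (more misses) vs larger cache (fewer misses),
# with the same 1 ns hit time and 100 ns penalty for going to RAM:
small_cache = amat(hit_time_ns=1, miss_rate=0.10, miss_penalty_ns=100)
large_cache = amat(hit_time_ns=1, miss_rate=0.02, miss_penalty_ns=100)

print(small_cache)  # → 11.0 ns per access on average
print(large_cache)  # → 3.0 ns per access on average
```

This is also why a very fast CPU paired with slow memory underperforms: the miss penalty dominates the average no matter how quickly the processor itself runs.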
Real-world devices show different architectural priorities. A smartphone may use a processor designed to save battery life, while a gaming computer may focus more on high performance. A server may have many cores and lots of memory because it must handle many users at once. This shows that there is no single “best” architecture for every situation.
In IB terms, you should be able to explain that performance is not determined by only one factor. It is the relationship among the CPU, memory, storage, and the way data flows through the system that shapes the final result.
Connecting architecture to software and low-level understanding
Computer architecture is closely tied to software because software must be translated into machine-level actions. A high-level language such as Python or Java is easier for humans to read, but the CPU cannot directly understand that code. The program must ultimately become machine instructions that match the hardware architecture.
This is why architecture matters to programmers and system designers. For example, some instructions are designed to complete a task in fewer steps. If a program uses many memory accesses, it may run more slowly than a program that reuses values already stored in registers or cache. Understanding this helps explain why efficient programs are often designed with hardware behavior in mind.
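The point about reusing values can be sketched as follows. Both functions compute the same result, but the second fetches each value once and keeps intermediate work in local variables, which a compiler or interpreter can keep in registers; the function names are invented for the example:

```python
# Two ways to compute the same total. The first re-reads data on every
# iteration; the second reuses values already held locally.

def repeated_access(data):
    total = 0
    for i in range(len(data)):
        # len(data) and data[i] are looked up on every iteration
        total = total + data[i] * len(data)
    return total

def reused_values(data):
    n = len(data)          # length fetched once and reused
    total = 0
    for value in data:     # each element is read exactly once
        total += value * n
    return total

nums = [1, 2, 3, 4]
assert repeated_access(nums) == reused_values(nums)  # same answer
```

The measurable difference in a real program depends on the language and hardware, but the principle is the same at every level: work kept close to the processor is cheaper than work fetched again from memory.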
The low-level view also helps with debugging and performance analysis. If a program is slow, the issue may not be the algorithm alone; it could be caused by too many memory accesses, poor cache use, or frequent input/output delays. Reasoning about how software interacts with hardware is therefore a core skill in computer science.
One useful exam-style idea is to compare two systems. If one has a faster CPU but less cache, and another has a slower CPU but more cache, the faster CPU is not automatically the better choice. The best answer depends on the type of work being done. For example, video editing, gaming, and data processing all stress a system in different ways.
Conclusion
Revisiting computer architecture is about understanding the structure of a computer and how that structure supports program execution. The CPU, memory, buses, and registers work together through the fetch-execute cycle to carry out instructions. Architecture choices influence speed, power use, and cost, which is why different devices are designed in different ways.
For IB Computer Science HL, students should be able to explain these ideas clearly, use the correct terminology, and connect architecture to real-world examples. This topic is important because it shows how abstract instructions become actual actions inside a machine 🤖.
Study Notes
- Computer architecture is the design of the computer’s main parts and how they communicate.
- The main components are the $CPU$, main memory, secondary storage, buses, and input/output devices.
- The $CPU$ contains the Control Unit, the $ALU$, and registers.
- Important registers include the $PC$, $CIR$, $MAR$, and $MDR$.
- The fetch-execute cycle repeats while a program runs.
- In the cycle, the instruction address in the $PC$ is copied to the $MAR$, the instruction is fetched into the $MDR$, and then copied into the $CIR$.
- The Control Unit decodes the instruction and the CPU executes it.
- The data bus carries data, the address bus carries addresses, and the control bus carries signals.
- RAM is volatile; secondary storage is non-volatile.
- Cache memory can improve speed because it stores frequently used data closer to the $CPU$.
- Performance depends on several factors, not just clock speed.
- Different devices use different architectures depending on their purpose, such as battery life, speed, or multi-user workload.
- Understanding architecture helps explain how software runs on hardware and why programs perform differently on different systems.
