Computer Architecture Revisited
Students, this lesson revisits computer architecture so you can see how the main parts of a computer work together to process data quickly and reliably 💻⚡ By the end of this lesson, you should be able to explain key terms, describe how the processor communicates with memory and input/output devices, and connect these ideas to the broader topic of computer organization. You will also see how the fetch-execute cycle depends on computer architecture, which is a central idea in IB Computer Science SL.
Why computer architecture matters
Computer architecture is the design of a computer system from the point of view of how its parts are arranged and how they work together. It focuses on the internal structure of the machine, especially the processor, memory, and buses that carry data around. In simple terms, it answers questions like: How does the CPU get instructions? Where is data stored while it is being used? How do devices such as keyboards and screens send and receive information?
A real-world example helps. When a student opens a game or types a document, the keyboard or mouse sends input to the computer. The program instructions and data are stored in main memory, and the processor repeatedly fetches and carries out instructions. The computer feels fast when these parts are designed to communicate efficiently. If any part is slow, the whole system can be slowed down.
In IB Computer Science SL, this topic sits inside computer organization, which is the study of how hardware components work together. Computer architecture is important because it helps explain why computers behave the way they do and why some designs are better for certain tasks than others.
Core components of a computer system
A basic computer system includes the CPU, main memory, and input/output devices. The CPU, or central processing unit, is often called the brain of the computer. It performs calculations, makes decisions, and controls the flow of data. Main memory, usually RAM, stores the programs and data that are currently being used. Input devices include things like a keyboard, mouse, and microphone. Output devices include a monitor, speakers, and printer.
Inside the CPU, there are several important parts. The control unit directs the operations of the processor. It tells other parts what to do and when to do it. The arithmetic logic unit, or ALU, performs mathematical operations and logical comparisons such as $5 > 3$ or $8 + 2$. The registers are very small, very fast storage locations inside the CPU. They temporarily hold instructions, addresses, and data during processing.
Some key register names are often used in the fetch-execute cycle. The Program Counter, written as $PC$, stores the address of the next instruction to be fetched. The Memory Address Register, $MAR$, holds the address of the memory location being accessed. The Memory Data Register, $MDR$, holds the data being transferred to or from memory. The Current Instruction Register, $CIR$, stores the instruction currently being decoded and executed.
These terms are important because they explain how the processor handles information one step at a time. Even though modern computers are extremely fast, they still work through a careful sequence of operations.
The fetch-execute cycle in action
The fetch-execute cycle is the repeated process the CPU uses to run instructions. It is one of the most important ideas in computer architecture and in the whole topic of computer organization 🔁
First, the CPU fetches an instruction from main memory. The address of the next instruction is copied from the $PC$ to the $MAR$. The memory at that address is read, and the instruction is placed in the $MDR$. Then the instruction is copied from the $MDR$ into the $CIR$. At the same time, the $PC$ is usually increased so it points to the following instruction.
Next, the instruction is decoded. The control unit examines the instruction in the $CIR$ and figures out what action is required. For example, it may discover that it must add two numbers, compare two values, or move data from one place to another.
Then the instruction is executed. The CPU carries out the required action. If the instruction is arithmetic, the $ALU$ may be used. If the instruction needs data from memory, the CPU may move information through the buses.
Finally, the result is stored if needed, and the cycle begins again. This happens many millions or billions of times per second in a modern computer.
For example, suppose a program asks the computer to calculate $7 + 4$ and display the answer. The processor fetches the instruction to add, decodes it, uses the $ALU$ to compute $11$, stores the result in a register or memory, and then fetches the next instruction, such as a display command. This is how even a simple action depends on a structured internal process.
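The cycle described above can be sketched as a short simulation. The instruction format, the accumulator register, and the opcode names below are simplified illustrations invented for this sketch, not a real instruction set; the point is only to show how $PC$, $MAR$, $MDR$, and $CIR$ pass an instruction along, one step at a time.

```python
# Toy fetch-decode-execute simulation (simplified and illustrative only).
# "Memory" holds instructions as (opcode, operand1, operand2) tuples.
memory = {
    0: ("ADD", 7, 4),          # add two literal values
    1: ("PRINT", None, None),  # display the last result
    2: ("HALT", None, None),   # stop the cycle
}

# ACC is an accumulator, an assumed extra register for holding results.
registers = {"PC": 0, "MAR": 0, "MDR": None, "CIR": None, "ACC": 0}

running = True
while running:
    # Fetch: copy the next-instruction address from PC into MAR,
    # read memory at that address into MDR, then move it into CIR.
    registers["MAR"] = registers["PC"]
    registers["MDR"] = memory[registers["MAR"]]
    registers["CIR"] = registers["MDR"]
    registers["PC"] += 1  # PC now points at the following instruction

    # Decode: the control unit inspects the opcode held in CIR.
    opcode, a, b = registers["CIR"]

    # Execute: carry out the action (the ALU handles the arithmetic).
    if opcode == "ADD":
        registers["ACC"] = a + b    # ALU computes 7 + 4 = 11
    elif opcode == "PRINT":
        print(registers["ACC"])     # output device shows the result
    elif opcode == "HALT":
        running = False
```

Running the sketch prints `11`, matching the worked example: the add instruction is fetched, decoded, executed by the ALU, and then the next instruction (the display command) is fetched in turn.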
Data movement and the role of buses
Computer architecture also includes the way data travels between components. This is usually done using buses. A bus is a set of electrical pathways that carry signals between parts of the computer.
There are three main types of bus in the traditional model. The data bus carries the actual data being transferred. The address bus carries the memory address of where the data should come from or go to. The control bus carries control signals such as read, write, and clock signals.
A useful way to imagine this is like a delivery system 🚚 The address bus is like the address on a package, the data bus is the package itself, and the control bus is the instruction telling the system whether to send or receive the package. For instance, if the CPU wants to read a value stored in memory, it places the address on the address bus and sends a read signal on the control bus. Memory then places the value on the data bus.
Bus width matters. A wider data bus can transfer more bits at once, which can improve performance. If a computer has a $32$-bit data bus, it can transfer $32$ bits in one operation. Similarly, a wider address bus allows access to more memory locations: an $n$-bit address bus can address $2^n$ distinct locations. This helps explain why architecture affects both speed and capacity.
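The relationship between bus width and capacity can be checked with quick arithmetic. The sketch below uses the $32$-bit figures from the text and assumes byte-addressable memory, which is the usual convention:

```python
# An n-bit address bus can select 2**n distinct memory locations.
address_bus_width = 32
addressable_locations = 2 ** address_bus_width
print(addressable_locations)   # 4294967296 locations (4 GiB if byte-addressed)

# A wider data bus moves more bits in a single transfer.
data_bus_width = 32
bytes_per_transfer = data_bus_width // 8
print(bytes_per_transfer)      # 4 bytes moved in one operation
```

This is why older $32$-bit systems topped out around $4$ GiB of addressable RAM: the limit comes directly from the width of the address bus, not from the speed of the memory itself.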
Memory hierarchy and performance
Not all memory in a computer is the same. Computer architecture includes a memory hierarchy, which is the arrangement of storage from fastest and smallest to slowest and largest. Near the top are CPU registers and cache memory, which are very fast. Main memory is larger but slower than cache. Secondary storage, such as SSDs and hard drives, is much larger and cheaper per bit, but also much slower.
Cache memory is especially important. It stores frequently used data and instructions so the CPU can access them quickly without always going to RAM. This improves performance because the processor spends less time waiting. For example, if a student opens the same app many times, the system may keep useful data in cache so it loads more quickly.
This hierarchy shows a common trade-off in computer design: faster memory is usually smaller and more expensive, while larger memory is usually slower. Understanding this trade-off is useful for IB questions that ask you to explain why a system is designed in a certain way.
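The cache's "check the fast store first" behaviour can be sketched with a dictionary. Real caches are fixed-size hardware structures with replacement policies, so the unbounded dictionary and the hit/miss counters below are illustrative assumptions only:

```python
# Minimal cache sketch: consult the small fast store before "RAM".
ram = {addr: addr * 10 for addr in range(1000)}  # pretend main memory
cache = {}            # the small, fast store (unbounded here for simplicity)
hits = misses = 0

def read(addr):
    """Return the value at addr, preferring the cache over RAM."""
    global hits, misses
    if addr in cache:             # cache hit: fast path, no RAM access
        hits += 1
        return cache[addr]
    misses += 1                   # cache miss: fall back to slower RAM
    value = ram[addr]
    cache[addr] = value           # keep the value for next time
    return value

for addr in [5, 5, 7, 5, 7]:      # repeated accesses benefit from caching
    read(addr)
print(hits, misses)               # 3 hits, 2 misses
```

Notice that only the first access to each address goes to "RAM"; every repeat is served from the cache, which is exactly why repeated use of the same data feels faster.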
Architecture and the bigger picture of computer organization
Computer architecture is not separate from computer organization; it is part of it. Computer organization is the broader topic that explains how hardware components work together, including the CPU, memory, buses, and input/output. Computer architecture helps us understand the design choices behind those components.
For example, when studying a processor, students may compare how instructions are represented, how registers are used, and how data flows between units. When studying memory, students may explain why RAM is used for active programs while secondary storage keeps files long term. When studying input/output, students may describe how peripherals communicate with the CPU using controllers and interrupts.
An interrupt is a signal that tells the CPU something needs immediate attention. For example, if a keyboard key is pressed or a printer finishes a task, the CPU can be interrupted and made aware of the event. This prevents the processor from wasting time constantly checking every device. Interrupts are another example of architecture supporting efficient operation.
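The advantage of interrupts over constant checking can be sketched as a dispatch table. The event names and handler functions below are invented for illustration; real interrupt handling happens in hardware and the operating system, but the idea is the same: the CPU does nothing for a device until that device raises a signal.

```python
# Interrupt-style dispatch sketch (event names are illustrative).
handlers = {}

def register_handler(event, fn):
    """Tell the 'CPU' which routine services each kind of interrupt."""
    handlers[event] = fn

def raise_interrupt(event, data):
    """A device signals the CPU, which jumps to the matching handler."""
    return handlers[event](data)

log = []
register_handler("KEY_PRESS", lambda key: log.append(f"key {key} pressed"))
register_handler("PRINT_DONE", lambda job: log.append(f"job {job} finished"))

# Devices raise interrupts only when something actually happens,
# so the CPU never wastes time polling idle devices.
raise_interrupt("KEY_PRESS", "A")
raise_interrupt("PRINT_DONE", 42)
print(log)
```

Contrast this with polling, where the processor would loop over every device asking "anything yet?" thousands of times per second even when nothing has changed.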
IB Computer Science SL often expects students to describe systems clearly and use correct terms. A strong answer might explain that the CPU fetches instructions from RAM using the $PC$, processes them through the $ALU$ and control unit, and communicates using buses. That kind of explanation shows understanding of both architecture and organization.
Conclusion
Computer architecture revisited means looking again at the main internal design of a computer system and understanding how the pieces work together. Students, the most important ideas are the CPU, memory, registers, buses, and the fetch-execute cycle. These ideas explain how instructions move through a computer and why different designs affect speed, capacity, and efficiency.
This topic connects directly to the broader study of computer organization because it shows how hardware is arranged and controlled. It also supports later understanding of performance, memory management, and input/output systems. If you can explain how the $PC$, $MAR$, $MDR$, and $CIR$ work in the fetch-execute cycle, you have a strong foundation for IB Computer Science SL.
Study Notes
- Computer architecture studies the internal design of a computer and how its parts work together.
- The main hardware components are the CPU, main memory, and input/output devices.
- The CPU contains the control unit, the $ALU$, and registers.
- The $PC$ stores the address of the next instruction.
- The $MAR$ stores the address currently being accessed in memory.
- The $MDR$ stores data being transferred to or from memory.
- The $CIR$ stores the current instruction being decoded and executed.
- The fetch-execute cycle has three main stages: fetch, decode, and execute.
- Buses move information between components.
- The data bus carries data, the address bus carries addresses, and the control bus carries signals.
- Memory hierarchy places cache and registers near the CPU for speed, with RAM and secondary storage further away.
- Faster memory is usually smaller and more expensive.
- Interrupts allow the CPU to respond quickly to important events.
- Computer architecture is part of the wider topic of computer organization.
- Good exam answers use correct terminology and clear step-by-step explanations.
