5. HL Extension — Abstract Data Structures

Linked Lists

Students, imagine you are organizing a line of students for a school event 🎟️. Each student holds the hand of the next student, but they are not assigned to numbered desks or fixed seats. If someone leaves, the line can change without moving everyone else. That is the core idea behind a linked list: a flexible way to store data where each item points to the next one.

In this lesson, you will learn the main ideas and vocabulary of linked lists, how they work in memory, why they matter in computer science, and how they compare with arrays and other abstract data structures. By the end, you should be able to explain linked lists clearly, trace their behavior, and connect them to the broader HL topic of abstraction and data organization. 🎯

What a Linked List Is

A linked list is a linear data structure made of nodes. Each node stores data and a reference to another node. The reference is often called a pointer. In a singly linked list, each node points only to the next node. The last node points to $\text{null}$, which means there is no next node.

A simple linked list might look like this:

$$
\text{Head} \rightarrow [10 \mid \text{next}] \rightarrow [25 \mid \text{next}] \rightarrow [40 \mid \text{next}] \rightarrow \text{null}
$$

The first node is called the head. The last node is called the tail. The list is traversed by starting at the head and following each next reference until $\text{null}$ is reached.
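This structure can be sketched in a few lines of Python. The class and variable names below are illustrative choices, not part of the IB syllabus; the point is only that a node is data plus a reference, and `None` plays the role of the null reference.

```python
# A minimal singly linked list node (illustrative sketch).
class Node:
    def __init__(self, data):
        self.data = data   # the data field
        self.next = None   # reference to the next node; None marks the end

# Build Head -> 10 -> 25 -> 40 -> null by linking nodes by hand.
head = Node(10)
head.next = Node(25)
head.next.next = Node(40)

print(head.data, head.next.data, head.next.next.data)  # 10 25 40
print(head.next.next.next)                             # None (the null reference)
```

Notice that nothing here records a position or index: the only way to reach 40 is to follow two `next` references from the head.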

This structure is abstract because you think about how the data is connected, not just where it is stored physically in memory. That fits the HL idea of abstract data structures: the important part is the organization and behavior of the data, not only the raw storage. 🧠

Key Terminology and Concepts

To understand linked lists well, you need to know the main terms.

  • Node: one unit in the linked list that stores data and a reference.
  • Data field: the value stored in the node, such as a number, name, or record.
  • Next reference: the link to the next node in the list.
  • Head: the first node in the list.
  • Tail: the last node in the list.
  • Null reference: a reference that points to nothing, showing the end of the list.
  • Traversal: moving through the list node by node.
  • Insertion: adding a node into the list.
  • Deletion: removing a node from the list.
  • Search: finding a value or node in the list.

A linked list is different from an array because array elements are stored in consecutive memory locations, while linked list nodes can be stored anywhere in memory as long as each node has the correct reference to the next one. This makes linked lists useful when data changes often. 📌

How Linked Lists Work in Memory

In a linked list, memory is allocated for each node separately. That means a node can be placed wherever there is available space. The next reference tells the computer how to move from one node to another.

This is useful because a linked list does not need a large continuous block of memory like an array often does. If a program frequently adds and removes items, linked lists can be practical because they can grow and shrink dynamically.

For example, think about a music app queue 🎵. If someone adds a song to the middle of the queue or removes one from near the start, a linked list can update connections between nodes without shifting every later item in memory.

However, linked lists also have a cost. To reach a node near the end, the computer must follow references from the head one at a time. This means access is not direct. If you want the $50^{\text{th}}$ node, you usually must visit the first $49$ nodes first. So linked lists are not ideal for fast random access.
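The cost of reaching a node by position can be made concrete with a short sketch. The helper name `nth` below is an illustrative assumption, not a standard operation:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

# Build a list holding the values 1..50 (prepending 50, 49, ..., 1).
head = None
for value in range(50, 0, -1):
    head = Node(value, head)

def nth(head, index):
    """Return the node at a 0-based position by walking from the head."""
    current = head
    for _ in range(index):
        current = current.next  # each step follows exactly one next reference
    return current

print(nth(head, 49).data)  # 50 — reaching the 50th node visits the 49 before it
```

Compare this with an array, where `A[49]` is a single direct lookup.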

Common Operations and Their Reasoning

Traversal

Traversal starts at the head and follows each next reference until the end.

Example: suppose the list stores $5$, $12$, and $20$.

$$
\text{Head} \rightarrow [5] \rightarrow [12] \rightarrow [20] \rightarrow \text{null}
$$

To print all values, a program visits the nodes in order and outputs each data value.
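A traversal like this can be sketched as a simple loop that stops at the null reference (`None` in Python); the names are illustrative:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

head = Node(5, Node(12, Node(20)))  # Head -> 5 -> 12 -> 20 -> null

current = head
while current is not None:   # stop when the null reference is reached
    print(current.data)      # prints 5, then 12, then 20
    current = current.next   # follow the next reference
```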

Insertion

Insertion is often a strength of linked lists. If you know where to insert a new node, you only need to change a few references.

Example: inserting $8$ between $5$ and $12$.

Before:

$$
\text{Head} \rightarrow [5] \rightarrow [12] \rightarrow [20] \rightarrow \text{null}
$$

After:

$$
\text{Head} \rightarrow [5] \rightarrow [8] \rightarrow [12] \rightarrow [20] \rightarrow \text{null}
$$

The node containing $5$ now points to the node containing $8$, and $8$ points to $12$.
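In code, this insertion is exactly two reference changes, regardless of how long the rest of the list is. A sketch, with illustrative names:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

head = Node(5, Node(12, Node(20)))  # Head -> 5 -> 12 -> 20 -> null

# Insert 8 after the node containing 5: only two references change.
new_node = Node(8)
new_node.next = head.next   # 8 now points to 12
head.next = new_node        # 5 now points to 8

values = []
current = head
while current:
    values.append(current.data)
    current = current.next
print(values)  # [5, 8, 12, 20]
```

No other node is touched, which is why insertion at a known position is cheap.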

Deletion

Deletion means bypassing a node. If the node containing $12$ is removed, then the node containing $5$ can point directly to $20$.

Before:

$$
\text{Head} \rightarrow [5] \rightarrow [12] \rightarrow [20] \rightarrow \text{null}
$$

After:

$$
\text{Head} \rightarrow [5] \rightarrow [20] \rightarrow \text{null}
$$

The removed node can then be cleared from memory.
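The bypass is a single reference change. A sketch (in Python the bypassed node is reclaimed automatically by the garbage collector once nothing references it):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

head = Node(5, Node(12, Node(20)))  # Head -> 5 -> 12 -> 20 -> null

# Delete the node containing 12 by bypassing it: 5 points straight to 20.
head.next = head.next.next

values = []
current = head
while current:
    values.append(current.data)
    current = current.next
print(values)  # [5, 20]
```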

Search

To find a value, the list is usually searched from the head. This is called linear search. If the value is near the end, the process takes longer. The time taken is proportional to the number of nodes checked.
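A linear search can be sketched so that it also counts how many nodes were checked, making the "near the end takes longer" point visible. The function name and return convention are illustrative:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

head = Node(5, Node(12, Node(20)))  # Head -> 5 -> 12 -> 20 -> null

def search(head, target):
    """Linear search from the head; returns the number of nodes checked."""
    current = head
    steps = 0
    while current is not None:
        steps += 1
        if current.data == target:
            return steps            # found after checking this many nodes
        current = current.next
    return -1                       # every node inspected, no match

print(search(head, 20))  # 3 — the last value needs the most checks
print(search(head, 99))  # -1 — a missing value forces a full traversal
```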

This is why linked lists are efficient for some updates but not for quick searching. ✅

Types of Linked Lists and Why They Matter

The most common version is the singly linked list, where each node points forward only. This is simple and uses less memory per node.

A doubly linked list stores two references in each node: one to the next node and one to the previous node. This allows movement in both directions. It is useful when data must be traversed forwards and backwards, such as in browser history or undo-redo systems.

A circular linked list connects the last node back to the head instead of using $\text{null}$. This can be useful in round-robin scheduling, where tasks are repeated in a cycle.

For IB Computer Science HL, the key idea is not memorizing every possible variation, but understanding how references define the structure and how those references affect operations and efficiency.

Efficiency and Comparison with Arrays

Efficiency is a major part of this topic. Linked lists and arrays solve similar problems, but they do so in different ways.

Arrays are good when you need fast direct access by index, such as $A[3]$ or $A[10]$. Linked lists are better when the data changes often and insertions or deletions happen near the front or middle.

A useful comparison is:

  • Access: arrays are faster because they support direct indexing; linked lists require traversal.
  • Insertion and deletion: linked lists can be faster if the position is already known, because only references need updating.
  • Memory use: linked lists need extra memory for references.
  • Flexibility: linked lists grow and shrink easily.

For example, if a classroom seating chart rarely changes, an array-like structure works well. If students frequently join or leave a line, a linked list is more natural. 🎓

In algorithm analysis, the time for traversal, search, or access in a singly linked list is generally $O(n)$ in the worst case, because the program may need to inspect every node. Insertion at the head can be $O(1)$, because only a few references change.

Linked Lists in the Bigger Picture of Abstract Data Structures

Linked lists are important because they show the difference between the logical view of data and its physical implementation. That is one of the central ideas of abstract data structures.

A stack or queue can be built using a linked list. For example:

  • A stack can use the head as the top, making push and pop operations efficient.
  • A queue can use the head for removal and the tail for insertion, depending on the design.
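The first of these ideas can be sketched directly: a stack whose top is the head of a singly linked list, so push and pop each change only one reference. The class and method names are illustrative:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

class Stack:
    """A stack built on a singly linked list; the head is the top."""
    def __init__(self):
        self.head = None

    def push(self, data):
        self.head = Node(data, self.head)  # new node becomes the head: O(1)

    def pop(self):
        data = self.head.data
        self.head = self.head.next         # head moves down one node: O(1)
        return data

s = Stack()
s.push(1)
s.push(2)
s.push(3)
print(s.pop(), s.pop(), s.pop())  # 3 2 1 — last in, first out
```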

This shows how a linked list can act as a building block for other structures. It is not just a list of data; it is a flexible foundation for problem solving.

Linked lists also appear in real systems where frequent changes matter, such as memory management, playlists, undo features, and task scheduling. Understanding them helps you reason about when a structure is chosen and why a programmer might accept slower search in exchange for easier modification.

Conclusion

Linked lists are a classic HL abstract data structure because they combine data and references to create flexible, dynamic storage. Students, you should now understand the main vocabulary, the role of the head, tail, and null reference, and how traversal, insertion, deletion, and search work. You should also be able to compare linked lists with arrays and explain why linked lists are useful when data changes often.

Most importantly, linked lists show the power of abstraction in computer science: the same information can be organized in different ways depending on the task. That is exactly the kind of reasoning expected in IB Computer Science HL. 🚀

Study Notes

  • A linked list is a linear data structure made of nodes.
  • Each node contains data and a reference to the next node.
  • The first node is called the head, and the last node is called the tail.
  • The end of a singly linked list is marked by $\text{null}$.
  • Traversal means moving from node to node using references.
  • Search in a singly linked list is usually $O(n)$.
  • Insertion and deletion can be efficient if the position is known.
  • Linked lists do not require consecutive memory locations.
  • Arrays allow fast direct access; linked lists require traversal.
  • Linked lists are useful for dynamic data that changes often.
  • Doubly linked lists include references to both next and previous nodes.
  • Circular linked lists connect the last node back to the head.
  • Linked lists support the HL idea of abstract data structures by separating logical organization from physical storage.
