5. Decision Mathematics

Linear Programming

Formulate linear programming problems, identify feasible regions, optimize objective functions, and use simplex ideas.

Hey there students! 👋 Ready to dive into one of the most practical areas of mathematics? Linear programming is like being a master puzzle solver - you'll learn how to find the best possible solution when you're working with limited resources and specific goals. By the end of this lesson, you'll understand how to set up linear programming problems, identify feasible regions graphically, optimize objective functions, and grasp the fundamental ideas behind the simplex method. Let's turn you into an optimization expert! 🎯

Understanding Linear Programming Fundamentals

Linear programming is a mathematical method used to find the best outcome (maximum or minimum) in situations where you have limited resources and specific constraints. Think of it like planning the perfect party with a limited budget - you want to maximize fun while staying within your spending limits! 🎉

The term "linear" comes from the fact that all relationships in these problems can be expressed as straight lines when graphed. This makes them much easier to solve than curved, non-linear relationships. Linear programming was first developed during World War II to help allocate military resources efficiently, and today it's used everywhere from airline scheduling to factory production planning.

Every linear programming problem has three essential components:

Decision Variables represent the quantities you can control or change. These are typically denoted as $x$, $y$, or $x_1, x_2, x_3$, etc. For example, if you're a bakery owner deciding how many chocolate cakes ($x$) and vanilla cakes ($y$) to make each day, these quantities are your decision variables.

The Objective Function is what you're trying to optimize - either maximize (like profit) or minimize (like cost). It's always a linear combination of your decision variables. If chocolate cakes give you £15 profit and vanilla cakes give you £12 profit, your objective function would be: Maximize $P = 15x + 12y$.

Constraints are the limitations or restrictions you must work within. These create boundaries for your problem. Maybe you only have 8 hours (480 minutes) of baking time per day, or you can only store 50 cakes total. These become mathematical inequalities like $2x + 3y ≤ 480$ (a time constraint, with baking times per cake measured in minutes) and $x + y ≤ 50$ (storage constraint).

Formulating Linear Programming Problems

Let's work through a real-world example to see how we transform word problems into mathematical models. Imagine you're managing a small electronics factory that produces smartphones and tablets. 📱

Here's the scenario: Smartphones generate £200 profit each, while tablets generate £150 profit each. Your factory has two main constraints - you have 1000 hours of assembly time available per week, and 800 hours of testing time. Each smartphone requires 2 hours of assembly and 1 hour of testing, while each tablet needs 1 hour of assembly and 2 hours of testing.

Step 1: Define Decision Variables

  • Let $x$ = number of smartphones produced per week
  • Let $y$ = number of tablets produced per week

Step 2: Write the Objective Function

We want to maximize profit: $P = 200x + 150y$

Step 3: Identify Constraints

  • Assembly time: $2x + y ≤ 1000$
  • Testing time: $x + 2y ≤ 800$
  • Non-negativity: $x ≥ 0, y ≥ 0$ (you can't produce negative quantities!)

This systematic approach works for any linear programming problem. The key is carefully reading the problem, identifying what you can control (decision variables), what you want to achieve (objective function), and what limits you face (constraints).
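The three formulation steps above can be captured directly as plain data plus two small helper functions. This is a minimal sketch (the names `is_feasible` and `profit` are illustrative, not part of any standard library):

```python
# The factory model as plain Python data.
PROFIT = (200, 150)          # £ per smartphone, £ per tablet
CONSTRAINTS = [              # (coeff_x, coeff_y, limit) meaning a*x + b*y <= limit
    (2, 1, 1000),            # assembly hours: 2x + y <= 1000
    (1, 2, 800),             # testing hours:  x + 2y <= 800
]

def is_feasible(x, y):
    """True if (x, y) satisfies every constraint plus non-negativity."""
    if x < 0 or y < 0:
        return False
    return all(a * x + b * y <= limit for a, b, limit in CONSTRAINTS)

def profit(x, y):
    """Objective function P = 200x + 150y."""
    return PROFIT[0] * x + PROFIT[1] * y

print(is_feasible(400, 200), profit(400, 200))  # True 110000
```

Encoding the problem this way separates the data (coefficients and limits) from the logic, so the same helpers work for any two-variable problem you formulate.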

Feasible Regions and Graphical Solutions

The feasible region is like a playground where all your possible solutions live - it's the area on a graph where all constraints are satisfied simultaneously. 🎪 Understanding this concept is crucial because the optimal solution will always be found at one of the corners (vertices) of this region!

Let's continue with our electronics factory example. When we graph our constraints on a coordinate system with $x$ on the horizontal axis and $y$ on the vertical axis, each constraint creates a boundary line:

The assembly constraint $2x + y ≤ 1000$ creates a line where $2x + y = 1000$. Points below and to the left of this line satisfy the constraint. Similarly, the testing constraint $x + 2y ≤ 800$ creates another boundary line.

The non-negativity constraints $x ≥ 0$ and $y ≥ 0$ mean we only consider the first quadrant of the graph. When we combine all these constraints, we get a polygon-shaped feasible region.

Here's the amazing part - the Fundamental Theorem of Linear Programming tells us that if an optimal solution exists, it will always be at a corner point (vertex) of the feasible region! This means instead of checking infinite points within the region, we only need to test a few corner points.

To find corner points, we solve systems of equations where constraint lines intersect:

  • Origin: $(0, 0)$
  • Assembly line meets the y-axis at $(0, 1000)$ - but this fails the testing constraint ($0 + 2(1000) = 2000 > 800$), so it is infeasible
  • Testing line meets the x-axis at $(800, 0)$ - but this fails the assembly constraint ($2(800) + 0 = 1600 > 1000$), so it is also infeasible
  • The feasible axis intercepts are $(500, 0)$, where the assembly line meets the x-axis, and $(0, 400)$, where the testing line meets the y-axis
  • Assembly and testing lines intersect: solve $2x + y = 1000$ and $x + 2y = 800$ simultaneously

The intersection gives us $x = 400$ and $y = 200$. We evaluate our objective function $P = 200x + 150y$ at each feasible corner point to find the maximum profit.
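Solving that pair of simultaneous equations is a standard 2×2 linear system, which Cramer's rule handles in a few lines. A small sketch (the function name `intersect` is illustrative):

```python
# Find the intersection of a1*x + b1*y = c1 and a2*x + b2*y = c2
# using Cramer's rule: x = (c1*b2 - c2*b1)/det, y = (a1*c2 - a2*c1)/det.
def intersect(a1, b1, c1, a2, b2, c2):
    """Return (x, y) where the two lines cross, or None if they are parallel."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel (or identical) lines: no unique intersection
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Assembly line 2x + y = 1000 meets testing line x + 2y = 800:
print(intersect(2, 1, 1000, 1, 2, 800))  # (400.0, 200.0)
```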

Objective Function Optimization Techniques

Optimization in linear programming is like being a detective - you systematically check each suspect (corner point) to find the culprit (optimal solution)! 🕵️

There are two main graphical approaches to find the optimal solution:

Corner Point Method: Calculate the objective function value at each vertex of the feasible region. The vertex giving the highest value (for maximization) or lowest value (for minimization) is your optimal solution. This method is foolproof but can be tedious with many constraints.

Iso-profit Line Method: Draw lines representing constant values of the objective function (like contour lines on a map showing equal elevation). These lines are parallel to each other. Move these lines in the direction of increasing objective function value until you reach the last point that still touches the feasible region - that's your optimal point!

For our factory example, evaluating $P = 200x + 150y$ at each feasible corner point:

  • At $(0, 0)$: $P = 0$
  • At $(500, 0)$: $P = 100,000$
  • At $(0, 400)$: $P = 60,000$
  • At $(400, 200)$: $P = 110,000$

The maximum profit is therefore £110,000, achieved by producing 400 smartphones and 200 tablets per week. Remember to always check that candidate corner points actually satisfy ALL constraints: the intercepts $(0, 1000)$ and $(800, 0)$ lie outside the feasible region, so they are never evaluated.
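The whole corner point method can be automated: intersect every pair of boundary lines (including the axes), discard infeasible points, and keep the best objective value. A sketch for the factory problem, with variable names of my own choosing:

```python
from itertools import combinations

# Boundary lines written as (a, b, c) meaning a*x + b*y = c; axes included.
lines = [
    (2, 1, 1000),  # assembly
    (1, 2, 800),   # testing
    (1, 0, 0),     # x = 0 (y-axis)
    (0, 1, 0),     # y = 0 (x-axis)
]

def feasible(x, y):
    """All constraints of the factory problem, including non-negativity."""
    return x >= 0 and y >= 0 and 2*x + y <= 1000 and x + 2*y <= 800

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1*b2 - a2*b1
    if det == 0:
        continue  # parallel lines never meet
    x = (c1*b2 - c2*b1) / det
    y = (a1*c2 - a2*c1) / det
    if feasible(x, y):                # discard intersections outside the region
        p = 200*x + 150*y
        if best is None or p > best[0]:
            best = (p, x, y)

print(best)  # (110000.0, 400.0, 200.0)
```

Note that the infeasible intersections $(0, 1000)$ and $(800, 0)$ are generated but filtered out by the `feasible` check, exactly as the text warns.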

Introduction to Simplex Method Ideas

While graphical methods work perfectly for problems with two variables, real-world problems often involve dozens or hundreds of variables - imagine trying to draw a graph in 50 dimensions! 🤯 This is where the simplex method comes to the rescue.

The simplex method, developed by George Dantzig in 1947, is an algebraic procedure that systematically moves from one corner point of the feasible region to an adjacent corner point with a better objective function value. It's like having a GPS that always knows the most efficient route to your destination!

The key insight is that the simplex method mimics what we do graphically but uses algebra instead of pictures. It starts at one corner point (usually the origin) and asks: "Can I move to a neighboring corner point and improve my objective function?" If yes, it moves there. If no neighboring point offers improvement, you've found the optimal solution.

The method requires converting the problem into standard form:

  • All constraints become equations (using slack variables)
  • All variables are non-negative
  • The objective function is written in a single standard direction (a maximization problem can be converted to minimization by negating its coefficients, and vice versa)

For example, the constraint $2x + y ≤ 1000$ becomes $2x + y + s_1 = 1000$ where $s_1 ≥ 0$ is a slack variable representing unused assembly time.
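A slack variable is just the unused capacity of its constraint, so it can be computed directly. A tiny sketch (the helper name `slack` is my own):

```python
# For a constraint a*x + b*y <= limit, the slack is s = limit - (a*x + b*y) >= 0.
def slack(a, b, limit, x, y):
    """Unused capacity of the constraint a*x + b*y <= limit at the point (x, y)."""
    return limit - (a * x + b * y)

# At the optimum (400, 200) both constraints bind, so both slacks are zero:
print(slack(2, 1, 1000, 400, 200))  # 0  (no spare assembly time)
print(slack(1, 2, 800, 400, 200))   # 0  (no spare testing time)

# At an interior point like (300, 100) there is spare capacity in both:
print(slack(2, 1, 1000, 300, 100))  # 300
print(slack(1, 2, 800, 300, 100))   # 300
```

Zero slack at the optimum tells you which resources are fully used, which is often as valuable as the optimal plan itself.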

The simplex method then uses matrix operations and pivot operations to systematically explore corner points. While the detailed calculations are beyond A-level scope, understanding that it's an efficient, systematic way to solve large linear programming problems is crucial for appreciating its power and widespread use in industry.
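In practice you hand such problems to a solver library rather than pivoting by hand. A sketch using SciPy (assuming SciPy is installed); `scipy.optimize.linprog` minimizes by convention, so we negate the profit coefficients to maximize $P = 200x + 150y$:

```python
from scipy.optimize import linprog

res = linprog(
    c=[-200, -150],                  # minimize -P, i.e. maximize P = 200x + 150y
    A_ub=[[2, 1], [1, 2]],           # assembly and testing constraint rows
    b_ub=[1000, 800],                # hours available per week
    bounds=[(0, None), (0, None)],   # non-negativity: x >= 0, y >= 0
    method="highs",                  # modern simplex / interior-point backend
)
print(res.x, -res.fun)               # optimal plan (x, y) and maximum profit
```

The solver recovers the same answer as the graphical method: 400 smartphones, 200 tablets, £110,000 profit.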

Conclusion

Linear programming is a powerful mathematical tool that helps us make optimal decisions when faced with limited resources and competing objectives. We've learned how to formulate problems by identifying decision variables, objective functions, and constraints, how to visualize solutions using feasible regions, and how to find optimal solutions through systematic evaluation of corner points. The simplex method extends these concepts to handle complex, multi-variable problems that arise in real-world applications. Whether you're optimizing production schedules, minimizing transportation costs, or maximizing investment returns, linear programming provides a structured approach to finding the best possible solution within given constraints.

Study Notes

• Linear Programming Components: Decision variables (what you control), objective function (what you optimize), constraints (your limitations)

• Standard Form: Maximize/minimize $Z = c_1x_1 + c_2x_2 + ... + c_nx_n$ subject to linear constraints and non-negativity conditions

• Feasible Region: The set of points where all constraints are satisfied simultaneously; in 2D problems it is a convex polygonal region (which may be unbounded, or empty if the constraints conflict)

• Fundamental Theorem: If an optimal solution exists, it occurs at a corner point (vertex) of the feasible region

• Corner Point Method: Evaluate objective function at each vertex; highest value = optimal solution for maximization problems

• Iso-profit/Iso-cost Lines: Parallel lines representing constant objective function values; move in direction of improvement to find optimum

• Non-negativity Constraints: $x ≥ 0, y ≥ 0$ - you cannot have negative quantities in real-world problems

• Slack Variables: Convert inequalities to equations; $ax + by ≤ c$ becomes $ax + by + s = c$ where $s ≥ 0$

• Simplex Method: Algebraic technique for solving multi-variable linear programming problems by moving between adjacent corner points

• Optimization Direction: For maximization, move iso-lines away from the origin; for minimization, move toward the origin (assuming positive objective coefficients)
