How to recognize the different types of solutions in linear optimization using the Simplex algorithm
In a linear optimization problem, there is not always a single optimal solution. Depending on the geometry of the feasible region and the direction of the objective function, several situations may arise: a unique optimal solution, multiple optimal solutions, an unbounded optimal solution, or even problems with no feasible solution.
In the post Graphical resolution as a starting point in linear optimization, we analyzed these situations from a geometric perspective by representing both the feasible region and the objective function graphically. This viewpoint helps build an intuitive understanding of what is happening in the problem.
However, when the number of variables increases, graphical interpretation is no longer practical. This is where the Simplex Algorithm becomes essential: it moves from vertex to vertex of the feasible polytope and allows us to identify algebraically the type of solution that the problem has.
In this post we will see how the Simplex Algorithm can be used to recognize the five types of solutions that may arise in a linear optimization problem by solving the same examples presented in the post Graphical resolution as a starting point in linear optimization.
Unique optimal solution
The most common situation is when the problem has a unique optimal solution.
Geometrically, this occurs when the objective function line (or hyperplane) touches the feasible region at a single vertex.
Consider the following linear optimization problem:
\[\begin{aligned} \max\;\; & z = 6x_1 + 3x_2 \\ \textrm{s.t.:}\;\; & 2x_1 + 4x_2 \leqslant 8 \\ & -x_1 + 4x_2 \leqslant 4 \\ & x_1 - x_2 \leqslant 2 \\ & x_1, x_2 \geqslant 0 \end{aligned}\]Its standard form is
\[\begin{aligned} \max\;\; & z = 6x_1 + 3x_2 \\ \textrm{s.t.:}\;\; & 2x_1 + 4x_2 + x_3^s = 8 \\ & -x_1 + 4x_2 + x_4^s = 4 \\ & x_1 - x_2 + x_5^s = 2 \\ & x_1, x_2, x_3^s, x_4^s, x_5^s \geqslant 0 \end{aligned}\]The final Simplex tableau is:
\[\begin{array}{cc|rrrrr|r} & & {\color{gray}6} & {\color{gray}3} & {\color{gray}0} & {\color{gray}0} & {\color{gray}0} & \\ & & x_1 & x_2 & x_3^s & x_4^s & x_5^s & \\ \hline & z_j-c_j & 0 & 0 & 3/2 & 0 & 3 & 18 \\ \hline {\color{gray}3} & x_2 & 0 & 1 & 1/6 & 0 & -1/3 & 2/3 \\ {\color{gray}0} & x_4^s & 0 & 0 & -1/2 & 1 & 2 & 4 \\ {\color{gray}6} & x_1 & 1 & 0 & 1/6 & 0 & 2/3 & 8/3 \\ \hline \end{array}\]From this tableau we see that the conditions of the first Simplex theorem are satisfied:
$z_j-c_j > 0$ (for a maximization problem) for every nonbasic variable. This means that no nonbasic variable can improve the value of the objective function. Therefore, the problem has a unique optimal solution:
\[x_1^* = 8/3, \quad x_2^* = 2/3, \qquad z^* = 18\]
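This result can be checked numerically without a solver. The sketch below (plain Python with exact fractions) verifies that the point read from the final tableau, $x_1^* = 8/3$, $x_2^* = 2/3$, is feasible for the original problem and attains $z^* = 18$:

```python
# Sanity check of the optimum read from the final Simplex tableau.
# Exact rational arithmetic avoids floating-point rounding issues.
from fractions import Fraction as F

x1, x2 = F(8, 3), F(2, 3)

# Feasibility: every constraint of the original problem must hold.
assert 2*x1 + 4*x2 <= 8
assert -x1 + 4*x2 <= 4
assert x1 - x2 <= 2
assert x1 >= 0 and x2 >= 0

# Objective value matches the right-hand side of the z_j - c_j row.
assert 6*x1 + 3*x2 == 18

# The binding constraints (slacks x3^s = x5^s = 0 in the tableau)
# identify the optimal vertex as the intersection of these two lines.
assert 2*x1 + 4*x2 == 8 and x1 - x2 == 2
```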
Multiple optimal solutions
When multiple optimal solutions exist, two different situations may occur:
- The optimal solutions lie in a bounded convex region.
- The optimal solutions lie in an unbounded region.
Both cases are analyzed below.
Multiple optimal solutions in a bounded region
When optimal solutions lie in a bounded convex region, they are given by convex combinations of all extreme points that are optimal.
In the particular case of $\mathbb{R}^2$, the only possibility is that the optimal solutions form a line segment, whose endpoints are two optimal extreme points.
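This geometric fact is easy to verify numerically. As an illustration (plain Python with exact fractions, no solver), the sketch below takes the two optimal vertices that will be obtained for the example solved next, $(3, 5)$ and $(14/3, 10/3)$, and checks that every convex combination of them is feasible and attains the same objective value $z^* = 8$:

```python
# Convex combinations of two optimal vertices are again optimal,
# because both the constraints and the objective are linear.
from fractions import Fraction as F

v1 = (F(3), F(5))          # first optimal vertex
v2 = (F(14, 3), F(10, 3))  # second optimal vertex

def feasible(x1, x2):
    # Constraints of the example solved below.
    return (x1 + x2 <= 8 and -4*x1 + 4*x2 <= 8
            and 2*x1 - x2 <= 6 and x1 >= 0 and x2 >= 0)

for k in range(11):                      # lambda = 0, 1/10, ..., 1
    lam = F(k, 10)
    x1 = lam*v1[0] + (1 - lam)*v2[0]
    x2 = lam*v1[1] + (1 - lam)*v2[1]
    assert feasible(x1, x2)
    assert x1 + x2 == 8                  # objective value is constant
```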
Consider the following linear optimization problem:
\[\begin{aligned} \max\;\; & z = x_1 + x_2 \\ \textrm{s.t.:}\;\; & x_1 + x_2 \leqslant 8 \\ & -4x_1 + 4x_2 \leqslant 8 \\ & 2x_1 - x_2 \leqslant 6 \\ & x_1, x_2 \geqslant 0 \end{aligned}\]whose standard form is
\[\begin{aligned} \max\;\; & z = x_1 + x_2 \\ \textrm{s.t.:}\;\; & x_1 + x_2 + x_3^s = 8 \\ & -4x_1 + 4x_2 + x_4^s = 8 \\ & 2x_1 - x_2 + x_5^s = 6 \\ & x_1, x_2, x_3^s, x_4^s, x_5^s \geqslant 0 \end{aligned}\]The final Simplex tableau is:
\[\begin{array}{cc|rrrrr|r} & & {\color{gray}1} & {\color{gray}1} & {\color{gray}0} & {\color{gray}0} & {\color{gray}0} & \\ & & x_1 & x_2 & x_3^s & x_4^s & x_5^s & \\ \hline & z_j-c_j & 0 & 0 & 1 & 0 & 0 & 8 \\ \hline {\color{gray}1} & x_2 & 0 & 1 & 2/3 & 0 & -1/3 & 10/3 \\ {\color{gray}0} & x_4^s & 0 & 0 & -4/3 & 1 & {\color{#2698ba}8/3} & 40/3 \\ {\color{gray}1} & x_1 & 1 & 0 & 1/3 & 0 & 1/3 & 14/3 \\ \hline \end{array}\]where we observe that the conditions of the first Simplex theorem are satisfied, $z_j-c_j \geqslant 0$ (since this is a maximization problem) for every nonbasic variable. However, for the nonbasic variable $x_5^s$, $z_5-c_5=0$, which indicates that the solution is not unique. If the variable $x_5^s$ is introduced into the basis, the following Simplex tableau is obtained:
\[\begin{array}{cc|rrrrr|r} & & {\color{gray}1} & {\color{gray}1} & {\color{gray}0} & {\color{gray}0} & {\color{gray}0} & \\ & & x_1 & x_2 & x_3^s & x_4^s & x_5^s & \\ \hline & z_j-c_j & 0 & 0 & 1 & 0 & 0 & 8 \\ \hline {\color{gray}1} & x_2 & 0 & 1 & 1/2 & 1/8 & 0 & 5 \\ {\color{gray}0} & x_5^s & 0 & 0 & -1/2 & 3/8 & 1 & 5 \\ {\color{gray}1} & x_1 & 1 & 0 & 1/2 & -1/8 & 0 & 3 \\ \hline \end{array}\]which yields another extreme point. Since we are in $\mathbb{R}^2$, the set of optimal solutions lies on the segment connecting the two extreme points obtained. Therefore, the set of optimal solutions of the problem can be expressed as
\[\lambda \mathbf{x}_1 + (1-\lambda)\mathbf{x}_2 = \lambda \left( \begin{array}{c} 3 \\ 5 \\ 0 \\ 0 \\ 5\end{array} \right) + (1-\lambda) \left(\begin{array}{c} 14/3 \\ 10/3 \\ 0 \\ 40/3 \\ 0\end{array}\right),\;\; \lambda \in [0,1] \qquad z^* = 8\]
Multiple optimal solutions in an unbounded region
This case is mathematically interesting, but it is uncommon in real-world contexts, since variables are typically bounded by limits imposed by reality itself.
Consider the following linear optimization problem:
\[\begin{aligned} \min\;\; & z = x_1 \\ \textrm{s.t.:}\;\; & x_1 - x_2 \leqslant 2 \\ & 2x_1 + 2x_2 \geqslant 1 \\ & x_1, x_2 \geqslant 0 \end{aligned}\]whose standard form is:
\[\begin{aligned} \min\;\; & z = x_1 \\ \textrm{s.t.:}\;\; & x_1 - x_2 + x_3^s = 2 \\ & 2x_1 + 2x_2 - x_4^s + x_5^a = 1 \\ & x_1, x_2, x_3^s, x_4^s, x_5^a \geqslant 0 \end{aligned}\]where an artificial variable has been introduced. Therefore, either the two-phase method or the penalty (Big-M) method can be used.
Using the two-phase method, the first phase is given by the following optimization problem:
\[\begin{aligned} \min\;\; & w = x_5^a \\ \textrm{s.t.:}\;\; & x_1 - x_2 + x_3^s = 2 \\ & 2x_1 + 2x_2 - x_4^s + x_5^a = 1 \\ & x_1, x_2, x_3^s, x_4^s, x_5^a \geqslant 0 \end{aligned}\]The following final tableau for phase one is obtained:
\[\begin{array}{cc|rrrrr|r} & & {\color{gray}0} & {\color{gray}0} & {\color{gray}0} & {\color{gray}0} & {\color{gray}1} & \\ & & x_1 & x_2 & x_3^s & x_4^s & x_5^a & \\ \hline & w_j-c_j & 0 & 0 & 0 & 0 & -1 & 0 \\ \hline {\color{gray}0} & x_3^s & 2 & 0 & 1 & -1/2 & 1/2 & 5/2 \\ {\color{gray}0} & x_2 & 1 & 1 & 0 & -1/2 & 1/2 & 1/2 \\ \hline \end{array}\]Since $w=0$ is obtained, we can proceed to the second phase by updating the $z_j-c_j$ row and removing the column corresponding to the artificial variable $x_5^a$, obtaining the following tableau:
\[\begin{array}{cc|rrrr|r} & & {\color{gray}1} & {\color{gray}0} & {\color{gray}0} & {\color{gray}0} & \\ & & x_1 & x_2 & x_3^s & x_4^s & \\ \hline & z_j-c_j & -1 & 0 & 0 & 0 & 0 \\ \hline {\color{gray}0} & x_3^s & 2 & 0 & 1 & -1/2 & 5/2 \\ {\color{gray}0} & x_2 & 1 & 1 & 0 & -1/2 & 1/2 \\ \hline \end{array}\]which turns out to be the final tableau. We observe that the nonbasic variable $x_4^s$ has $z_4-c_4 = 0$. Therefore, the problem has multiple optimal solutions. However, when attempting to force the entry of $x_4^s$ into the basis, the leaving variable criterion cannot be applied since $\mathbf{y}_4 \leqslant \mathbf{0}$. Therefore, an extreme direction has been detected:
\[(\mathbf{d}^1)^\intercal = (0, 1/2, 1/2, 1)\]Thus, the optimal solutions of the problem are given by:
\[\mathbf{x}_1^* + \mu \mathbf{d}^1 = \left( \begin{array}{c} 0 \\ 1/2 \\ 5/2 \\ 0\end{array} \right) + \mu \left( \begin{array}{c} 0 \\ 1/2 \\ 1/2 \\ 1 \end{array} \right) \;\; \mu \geqslant 0 \qquad z^*=0\]Using the penalty (Big-M) method, the optimization problem to be solved is:
\[\begin{aligned} \min\;\; & z = x_1 + Mx_5^a \\ \textrm{s.t.:}\;\; & x_1 - x_2 + x_3^s = 2 \\ & 2x_1 + 2x_2 - x_4^s + x_5^a = 1 \\ & x_1, x_2, x_3^s, x_4^s, x_5^a \geqslant 0 \end{aligned}\]The final tableau is given by:
\[\begin{array}{cc|rrrrr|r} & & {\color{gray}1} & {\color{gray}0} & {\color{gray}0} & {\color{gray}0} & {\color{gray}M} & \\ & & x_1 & x_2 & x_3^s & x_4^s & x_5^a & \\ \hline & z_j-c_j & -1 & 0 & 0 & 0 & -M & 0 \\ \hline {\color{gray}0} & x_3^s & 2 & 0 & 1 & -1/2 & 1/2 & 5/2 \\ {\color{gray}0} & x_2 & 1 & 1 & 0 & -1/2 & 1/2 & 1/2 \\ \hline \end{array}\]from which we observe that the nonbasic variable $x_4^s$ has $z_4-c_4 = 0$, just as when using the two-phase method, leading to the same conclusion.
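As a numerical sanity check of this conclusion, the following sketch (plain Python with exact fractions) walks along the detected extreme direction in the original variables, $(x_1, x_2) = (0, 1/2) + \mu\,(0, 1/2)$ with $\mu \geqslant 0$, and verifies that every such point is feasible and keeps the objective at its minimum value $z^* = 0$:

```python
# Points along the extreme direction remain feasible and optimal.
from fractions import Fraction as F

for mu in [F(0), F(1), F(10), F(1000)]:
    x1 = F(0)                      # first component of the direction is 0
    x2 = F(1, 2) + mu * F(1, 2)    # x2 grows without bound along the ray
    assert x1 - x2 <= 2            # first constraint
    assert 2*x1 + 2*x2 >= 1        # second constraint
    assert x1 >= 0 and x2 >= 0
    assert x1 == 0                 # objective z = x1 stays at its minimum 0
```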
Unbounded optimal solution
When a problem has an unbounded optimal solution, in practice this usually indicates that the model is missing a constraint that would bound the feasible region. Nevertheless, the case is of theoretical interest: it appears in duality theory and in decomposition methods that allow more complex optimization problems to be solved.
Consider the following linear optimization problem:
\[\begin{aligned} \max\;\; & z = x_1 + 2x_2 \\ \textrm{s.t.:}\;\; & x_1 + 2x_2 \geqslant 2 \\ & -2x_1 + x_2 \leqslant 4 \\ & x_1, x_2 \geqslant 0 \end{aligned}\]whose standard form is:
\[\begin{aligned} \max\;\; & z = x_1 + 2x_2 \\ \textrm{s.t.:}\;\; & x_1 + 2x_2 - x_3^s + x_5^a = 2 \\ & -2x_1 + x_2 + x_4^s = 4 \\ & x_1, x_2, x_3^s, x_4^s, x_5^a \geqslant 0 \end{aligned}\]where an artificial variable has been introduced. Therefore, either the two-phase method or the penalty (Big-M) method can be used.
Using the two-phase method, the first phase is given by the following optimization problem:
\[\begin{aligned} \min\;\; & w = x_5^a \\ \textrm{s.t.:}\;\; & x_1 + 2x_2 - x_3^s + x_5^a = 2 \\ & -2x_1 + x_2 + x_4^s = 4 \\ & x_1, x_2, x_3^s, x_4^s, x_5^a \geqslant 0 \end{aligned}\]The following final tableau for the first phase is obtained:
\[\begin{array}{cc|rrrrr|r} & & {\color{gray}0} & {\color{gray}0} & {\color{gray}0} & {\color{gray}0} & {\color{gray}1} & \\ & & x_1 & x_2 & x_3^s & x_4^s & x_5^a & \\ \hline & w_j - c_j & 0 & 0 & 0 & 0 & -1 & 0 \\ \hline {\color{gray}0} & x_2 & 1/2 & 1 & -1/2 & 0 & 1/2 & 1 \\ {\color{gray}0} & x_4^s & -5/2 & 0 & 1/2 & 1 & -1/2 & 3 \\ \hline \end{array}\]Since a basic solution with $w=0$ has been obtained, we can proceed to the second phase by updating the $z_j-c_j$ row and removing the column corresponding to the artificial variable $x_5^a$.
The final tableau of the second phase is:
\[\begin{array}{cc|rrrr|r} & & \textcolor{gray}{1} & \textcolor{gray}{2} & \textcolor{gray}{0} & \textcolor{gray}{0} & \\ & & x_1 & x_2 & x_3^s & x_4^s & \\ \hline & z_j - c_j & -5 & 0 & 0 & 2 & 8 \\ \hline \textcolor{gray}{2} & x_2 & -2 & 1 & 0 & 1 & 4 \\ \textcolor{gray}{0} & x_3^s & -5 & 0 & 1 & 2 & 6 \\ \hline \end{array}\]where we observe that $x_1$ is a candidate to enter the basis, but no leaving variable exists since $\mathbf{y}_1 \leqslant \mathbf{0}$. Therefore, the conditions of the second Simplex theorem are satisfied and the problem has an unbounded optimal solution.
An unbounded ray can be obtained from the extreme point in the final tableau and the extreme direction associated with the entering variable $x_1$:
\[R=\left\{\left(\begin{array}{r} 0 \\ 4\\ 6 \\ 0\end{array} \right) + \mu\left(\begin{array}{r} 1 \\ 2 \\ 5 \\ 0\end{array} \right): \mu \geqslant 0\right\}\]Using the penalty (Big-M) method, the optimization problem to be solved is:
\[\begin{aligned} \max\;\; & z = x_1 + 2x_2 -M x_5^a \\ \textrm{s.t.:}\;\; & x_1 + 2x_2 - x_3^s + x_5^a = 2 \\ & -2x_1 + x_2 + x_4^s = 4 \\ & x_1, x_2, x_3^s, x_4^s, x_5^a \geqslant 0 \end{aligned}\]The final tableau is:
\[\begin{array}{cc|rrrrr|r} & & {\color{gray}1} & {\color{gray}{2}} & {\color{gray}0} & {\color{gray}0} & {\color{gray}{-M}} & \\ & & x_1 & x_2 & x_3^s & x_4^s & x_5^a & \\ \hline & z_j - c_j & -5 & 0 & 0 & 2 & M & 8 \\ \hline {\color{gray}2} & x_2 & -2 & 1 & 0 & 1 & 0 & 4 \\ {\color{gray}0} & x_3^s & -5 & 0 & 1 & 2 & -1 & 6 \\ \hline \end{array}\]where we observe that the variable $x_1$ is a candidate to enter the basis, but no variable can leave since $\mathbf{y}_1 \leqslant \mathbf{0}$. Therefore, the problem has an unbounded optimal solution. Since the same extreme point is reached as with the two-phase method, the unbounded ray obtained is the same.
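The unbounded ray can likewise be checked numerically. The sketch below (plain Python with exact fractions) follows the ray in the original variables, $(x_1, x_2) = (0, 4) + \mu\,(1, 2)$ with $\mu \geqslant 0$, confirming that every point is feasible while the objective $z = x_1 + 2x_2$ grows without bound:

```python
# Along the unbounded ray the points stay feasible while z increases.
from fractions import Fraction as F

z_prev = None
for mu in [F(0), F(1), F(10), F(1000)]:
    x1 = mu              # 0 + mu * 1
    x2 = 4 + 2*mu        # 4 + mu * 2
    assert x1 + 2*x2 >= 2          # first constraint
    assert -2*x1 + x2 <= 4         # second constraint (holds with equality)
    assert x1 >= 0 and x2 >= 0
    z = x1 + 2*x2                  # objective: z = 8 + 5*mu
    assert z_prev is None or z > z_prev  # strictly improving along the ray
    z_prev = z
```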
Infeasible problem
Infeasibility is usually caused by an excess of mutually incompatible constraints, which prevents the existence of any solution to the problem under study.
Consider the linear optimization problem:
\[\begin{aligned} \min\;\; & z = x_1 + x_2 \\ \textrm{s.t.:}\;\; & 2x_1 + x_2 \leqslant 5 \\ & x_1 - x_2 \geqslant 4 \\ & x_1, x_2 \geqslant 0 \end{aligned}\]whose standard form is:
\[\begin{aligned} \min\;\; & z = x_1 + x_2 \\ \textrm{s.t.:}\;\; & 2x_1 + x_2 + x_3^s = 5 \\ & x_1 - x_2 - x_4^s + x_5^a = 4 \\ & x_1, x_2, x_3^s, x_4^s, x_5^a \geqslant 0 \end{aligned}\]where an artificial variable has been introduced. Therefore, either the two-phase method or the penalty (Big-M) method can be used.
Using the two-phase method, the first phase is given by the following optimization problem:
\[\begin{aligned} \min\;\; & w = x_5^a \\ \textrm{s.t.:}\;\; & 2x_1 + x_2 + x_3^s = 5 \\ & x_1 - x_2 - x_4^s + x_5^a = 4 \\ & x_1, x_2, x_3^s, x_4^s, x_5^a \geqslant 0 \end{aligned}\]The following final tableau for the first phase is obtained:
\[\begin{array}{cc|rrrrr|r} & & {\color{gray}0} & {\color{gray}0} & {\color{gray}0} & {\color{gray}0} & {\color{gray}1} & \\ & & x_1 & x_2 & x_3^s & x_4^s & x_5^a & \\ \hline & w_j - c_j & 0 & -3/2 & -1/2 & -1 & 0 & 3/2 \\ \hline {\color{gray}0} & x_1 & 1 & 1/2 & 1/2 & 0 & 0 & 5/2 \\ {\color{gray}1} & x_5^a & 0 & -3/2 & -1/2 & -1 & 1 & 3/2 \\ \hline \end{array}\]It can be observed that $w \neq 0$ and the variable $x_5^a$ is in the basis with $x_5^a \neq 0$. Therefore, the original problem is infeasible.
Using the penalty (Big-M) method, the optimization problem to be solved is:
\[\begin{aligned} \min\;\; & z = x_1 + x_2 + Mx_5^a \\ \textrm{s.t.:}\;\; & 2x_1 + x_2 + x_3^s = 5 \\ & x_1 - x_2 - x_4^s + x_5^a = 4 \\ & x_1, x_2, x_3^s, x_4^s, x_5^a \geqslant 0 \end{aligned}\]The final tableau is:
\[\begin{array}{cc|rrrrr|r} & & {\color{gray}1} & {\color{gray}{1}} & {\color{gray}0} & {\color{gray}0} & {\color{gray}{M}} & \\ & & x_1 & x_2 & x_3^s & x_4^s & x_5^a & \\ \hline & z_j-c_j & 0 & -3M/2-1/2 & -M/2+1/2 & -M & 0 & 3M/2 + 5/2 \\ \hline {\color{gray}{1}} & x_1 & 1 & 1/2 & 1/2 & 0 & 0 & 5/2 \\ {\color{gray}{M}} & x_5^a & 0 & -3/2 & -1/2 & -1 & 1 & 3/2 \\ \hline \end{array}\]where it can be observed that the variable $x_5^a$ remains in the basis with $x_5^a = 3/2 \neq 0$, and therefore the original problem is infeasible.
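The infeasibility detected by the Simplex tableaux can also be seen directly from the constraints: $2x_1 + x_2 \leqslant 5$ with $x_2 \geqslant 0$ forces $x_1 \leqslant 5/2$, while $x_1 - x_2 \geqslant 4$ with $x_2 \geqslant 0$ forces $x_1 \geqslant 4$. A minimal check of this contradiction in plain Python:

```python
# The two constraints impose incompatible bounds on x1, so the
# feasible region is empty.
from fractions import Fraction as F

# Largest x1 allowed by 2*x1 + x2 <= 5 (attained at x2 = 0).
upper = F(5, 2)
# Smallest x1 allowed by x1 - x2 >= 4 (attained at x2 = 0; any x2 > 0
# only pushes the required x1 higher).
lower = F(4)

assert lower > upper   # contradiction: no x1 can satisfy both
```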
Conclusion
Graphical analysis helps us understand intuitively what happens in a linear optimization problem. However, the Simplex Algorithm provides a systematic way to detect exactly those same situations using algebraic operations.
Understanding this connection between geometry and algorithm is essential to correctly interpret the behavior of the Simplex method and the results it produces.
In practice, recognizing these patterns in the Simplex tableaux allows us to quickly diagnose the type of problem we are dealing with and understand the meaning of the solution obtained.
If you found this useful, please cite this as:
Martín-Campo, F. Javier (Feb 2026). How to recognize the different types of solutions in linear optimization using the Simplex algorithm. https://www.fjmartincampo.com/blog/2026/simplexexamples/.
or as a BibTeX entry:
@misc{martín-campo2026how-to-recognize-the-different-types-of-solutions-in-linear-optimization-using-the-simplex-algorithm,
title = {How to recognize the different types of solutions in linear optimization using the Simplex algorithm},
author = {Martín-Campo, F. Javier},
year = {2026},
month = {Feb},
url = {https://www.fjmartincampo.com/blog/2026/simplexexamples/}
}