An Improved Subgradient Optimization Technique for Solving IPs with Lagrangean Relaxation
Keywords: Relaxation, subgradient, optimization, IP

Abstract
The objective of this paper is to improve the subgradient optimization method, which is used to solve the non-differentiable optimization problems that arise in Lagrangian duality. One of the main drawbacks of the subgradient method is the tuning process required to determine the sequence of step-lengths that update successive iterates. In this paper, we propose a modified subgradient optimization method with various step-size rules that computes a tuning-free subgradient step-length, geometrically motivated and algebraically deduced. It is well known that the dual function is concave over its domain (regardless of the structure of the cost and constraints of the primal problem), but not necessarily differentiable. We solve the dual problem whenever it is easier to solve than the primal problem and there is no duality gap. However, even when a duality gap exists, the solution of the dual problem provides a lower bound on the primal optimum, which can be useful in combinatorial optimization. Numerical examples are presented to demonstrate the method.
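For context, the classical projected subgradient ascent on a Lagrangian dual that the paper sets out to improve can be sketched as follows. This is a minimal illustration under assumed data (a single covering constraint relaxed from a 0-1 minimization problem and a diminishing step-size rule t_k = t0/(k+1)); the function name and instance are hypothetical, and the sketch is not the tuning-free step-length proposed in the article.

```python
# Minimal sketch of classical projected subgradient ascent on a Lagrangian dual.
# Problem data, the box relaxation 0 <= x <= 1, and the diminishing step rule
# t_k = t0/(k+1) are illustrative assumptions, not taken from the article.
import numpy as np

def lagrangian_dual_subgradient(c, A, b, n_iter=100, t0=1.0):
    """Maximize L(lam) = min_{0<=x<=1} c@x + lam@(b - A@x) over lam >= 0."""
    m, n = A.shape
    lam = np.zeros(m)
    best_lower_bound = -np.inf
    for k in range(n_iter):
        # The inner minimization over the box 0 <= x <= 1 is separable:
        # set x_j = 1 exactly when its reduced cost (c - A^T lam)_j is negative.
        reduced_cost = c - A.T @ lam
        x = (reduced_cost < 0).astype(float)
        dual_value = reduced_cost @ x + lam @ b
        best_lower_bound = max(best_lower_bound, dual_value)
        # A subgradient of the concave dual at lam is the constraint residual.
        g = b - A @ x
        # Classical diminishing step length; choosing t0 is the kind of tuning
        # the paper's geometrically motivated step rule aims to avoid.
        t = t0 / (k + 1)
        lam = np.maximum(0.0, lam + t * g)  # projection onto lam >= 0
    return lam, best_lower_bound

# Tiny illustrative instance: min c@x s.t. A@x >= b (relaxed), 0 <= x <= 1.
c = np.array([3.0, 2.0, 4.0])
A = np.array([[1.0, 1.0, 2.0]])
b = np.array([2.0])
lam, lb = lagrangian_dual_subgradient(c, A, b)
print(lam, lb)
```

The best dual value recorded during the iterations is the lower bound on the primal optimum mentioned in the abstract; it is useful even when a duality gap remains.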