Introducing Slack and Surplus Variables on the Lagrangian Objective Function with Penalty in Optimization Problems


Abstract

The process of finding the best solution among all feasible solutions of a problem is known as optimization. There are many methods for solving optimization problems; the augmented Lagrangian method is one of them, in which an algorithm is constructed for finding the best solution.

In this work we add a penalty function to the augmented Lagrangian function. To construct the augmented Lagrangian objective function, we convert the inequality constraints into equality constraints with the help of surplus and slack variables, and we also transform the optimization problem into a dual problem.

Under some conditions we find the saddle point of the augmented Lagrangian penalty function with slack and surplus variables, which satisfies the first-order Karush-Kuhn-Tucker (KKT) conditions.

These conditions hold only for convex problems. We find a global solution of the optimization problem by constructing an algorithm. The convergence of the solution is also proved under some conditions.

Introduction

The augmented Lagrangian method is a process of finding the optimal solution of an optimization problem through an algorithm.

If a penalty function is added to the augmented Lagrangian function, the method approaches the optimization problem more effectively. To find the optimal solution of the problem, the main idea is to convert the constrained optimization problem into an unconstrained one.
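This conversion can be illustrated with a minimal quadratic-penalty sketch; the toy problem and all names below are our own illustration, not the formulation studied in this paper:

```python
# Minimal quadratic-penalty sketch: the constrained problem
#   min (x1 - 2)^2 + (x2 - 1)^2   s.t.   x1 + x2 - 1 = 0
# is replaced by unconstrained surrogates f(x) + rho * h(x)^2;
# as rho grows, the minimizers approach the constrained optimum (1, 0).
from scipy.optimize import minimize

def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def h(x):
    return x[0] + x[1] - 1.0   # equality constraint h(x) = 0

x = [0.0, 0.0]
for rho in (1.0, 10.0, 1000.0):
    res = minimize(lambda v, r=rho: f(v) + r * h(v) ** 2, x)
    x = res.x                  # warm-start the next, harder solve
    print(f"rho={rho:7.1f}  x={x}  violation={h(x):+.2e}")
```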

In the research of Du et al. [1], an algorithm for the Lagrangian penalty function was introduced together with some theory. Researchers try to find better Lagrangian penalty functions so that the optimal solution can be found more easily. Many papers, both theoretical and practical, have been published on augmented Lagrangian functions.

The main concerns of these works are the zero duality gap of the dual problem, the existence of saddle points, exactness, the construction of algorithms, and so on [2]-[8].

An augmented Lagrangian function has two parts: a Lagrangian term with Lagrangian parameters and a penalty term with penalty parameters [2]-[8]. The principal purposes of the augmented Lagrangian function are duality and the saddle point. If the problem is convex, then the zero duality gap of the Lagrangian dual holds. Sequential algorithms that find the optimal solution of the optimization problem by taking different Lagrangian and penalty parameters are formulated in [2]-[5].

Solutions of exact augmented Lagrangian functions are pointed out by Lucidi [6] and Di Pillo et al. [7], but numerical results were not reported. A primal-dual convergence result for an augmented penalty method with penalty-parameter updates is proved for general optimization problems by S. Burachik and C.Y. Kaya [8]. If there are many Lagrangian or penalty parameters, we need to solve the dual optimization problem under some conditions, and those conditions must rest on meaningful and logical reasons.

Many researchers have published work based on objective penalty parameters in [9]-[16]. A general type of penalty function is discussed by Burke [12]. The sequential unconstrained minimization technique is introduced in the research work of Fiacco and McCormick [13]. Mauricio and Maculan [14] used a Boolean penalty method for zero-one nonlinear programming. A general objective penalty function method is discussed by Meng et al. [15], who also studied the duality and saddle-point properties of the augmented Lagrangian function.

In our work the main goal is to find the optimal solution by converting the optimization problem into a dual problem. We use slack and surplus variables to turn the inequality constraints into equalities. An augmented Lagrangian objective penalty function with slack and surplus variables is constructed, which helps to locate the saddle point. The saddle point also satisfies the first-order Karush-Kuhn-Tucker (KKT) conditions. A sequential algorithm for finding the global solution is also constructed, and the convergence of the solution is proved in this work.

Augmented Lagrangian function

Optimization is the process of finding the best solution among the set of feasible solutions. An optimization problem consists of an objective function and constraints (equalities or inequalities). Here we consider an optimization problem with both equality and inequality constraints.

The problem is

$$(P)\qquad \min\; f(x) \quad \text{s.t.}\quad g_i(x) \le 0,\; i = 1,\dots,m, \qquad k_r(x) \ge 0,\; r = 1,\dots,q, \qquad h_j(x) = 0,\; j = 1,\dots,l.$$

Now, introducing the slack and surplus variables into the inequality constraints, we get

$$G_i(x,s) = g_i(x) + s_i^2 = 0, \qquad K_r(x,t) = k_r(x) - t_r^2 = 0.$$

The equality constraints are functions of $x$ only, so we can write them as $H_j(x) = h_j(x) = 0$, where $f, g_i, k_r, h_j : \mathbb{R}^n \to \mathbb{R}$, $i = 1,\dots,m$, $j = 1,\dots,l$, $r = 1,\dots,q$. The feasible set of (P) is denoted by

$$S = \{\, x \in \mathbb{R}^n : g_i(x) \le 0,\; k_r(x) \ge 0,\; h_j(x) = 0 \,\}.$$

Let us consider functions $Q, P : \mathbb{R} \to \mathbb{R}$, with $Q$ monotonically increasing, which satisfy the following conditions: $Q(u) = 0$ if $u \le 0$ and $Q(u) > 0$ if $u > 0$; $P(u) = 0$ if $u = 0$ and $P(u) > 0$ if $u \ne 0$, respectively. For example, $Q(u) = (\max\{0, u\})^2$ and $P(u) = u^2$ obey the conditions.
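A short sketch of the mechanics of this conversion, on hypothetical placeholder constraints (the g, k, h below are illustrative, not the paper's data):

```python
# Slack and surplus variables turn the constraints of (P) into equalities:
#   g(x) <= 0  ->  G(x, s) = g(x) + s^2 = 0   (slack)
#   k(x) >= 0  ->  K(x, t) = k(x) - t^2 = 0   (surplus)
#   h(x)  = 0  ->  H(x)    = h(x)             (unchanged)
import numpy as np

g = lambda x: np.array([x[0] + x[1] - 1.0])   # hypothetical: x1 + x2 <= 1
k = lambda x: np.array([x[0]])                # hypothetical: x1 >= 0
h = lambda x: np.array([x[1] - 0.25])         # hypothetical: x2 = 0.25

G = lambda x, s: g(x) + s ** 2
K = lambda x, t: k(x) - t ** 2
H = lambda x: h(x)

# at a feasible x the equalities hold with s = sqrt(-g(x)), t = sqrt(k(x))
x = np.array([0.5, 0.25])
s, t = np.sqrt(-g(x)), np.sqrt(k(x))
print(G(x, s), H(x), K(x, t))                 # all (numerically) zero
```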

The augmented Lagrangian objective penalty function is

$$L(z, \lambda, \mu, \nu, M, \rho) = Q(f(x) - M) + \sum_{i=1}^{m} \lambda_i P(G_i(x,s)) + \sum_{j=1}^{l} \mu_j P(H_j(x)) + \sum_{r=1}^{q} \nu_r P(K_r(x,t)) + \rho\Big(\sum_{i=1}^{m} G_i(x,s)^2 + \sum_{j=1}^{l} H_j(x)^2 + \sum_{r=1}^{q} K_r(x,t)^2\Big), \qquad (1)$$

where $M$ is the objective parameter, $\lambda = (\lambda_1,\dots,\lambda_m)$, $\mu = (\mu_1,\dots,\mu_l)$ and $\nu = (\nu_1,\dots,\nu_q)$ are the Lagrangian multipliers, $\rho > 0$ is the penalty parameter, and $P$ and $Q$ are the penalty functions defined above. Here $\lambda, \mu, \nu \ge 0$, and $G$, $H$ and $K$ are functions of $x$ together with the slack and surplus variables, so we can collect the variables as $z = (x, s, t)$; thus we get

$$L = L(z, \lambda, \mu, \nu, M, \rho). \qquad (2)$$

When $P$ and $Q$ are smooth, we can easily say that $L$ is a smooth function. We can define the dual function

$$d(\lambda, \mu, \nu, M, \rho) = \inf_{z} L(z, \lambda, \mu, \nu, M, \rho). \qquad (3)$$

Now we can convert the optimization problem into the augmented Lagrangian dual problem

$$(DP)\qquad \max_{\lambda, \mu, \nu \ge 0}\; d(\lambda, \mu, \nu, M, \rho).$$

When $x$ is feasible for (P) and the slack and surplus variables are chosen so that $G_i(x,s) = H_j(x) = K_r(x,t) = 0$, every constraint term of (1) vanishes. By using (3) we have, for such $z = (x, s, t)$,

$$d(\lambda, \mu, \nu, M, \rho) \le L(z, \lambda, \mu, \nu, M, \rho) = Q(f(x) - M). \qquad (4)$$

From (1) we have $L(z, \lambda, \mu, \nu, M, \rho) \ge 0$ for $\lambda, \mu, \nu \ge 0$, since every term of (1) is nonnegative; hence $d(\lambda, \mu, \nu, M, \rho) \ge 0$. Thus from (4) and (1) we get

$$0 \le d(\lambda, \mu, \nu, M, \rho) \le Q(f(x) - M) \quad \text{for every feasible } x. \qquad (5)$$
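Under the reconstruction above, (1) can be coded directly. The sketch below assumes the illustrative choices $Q(u) = (\max\{0,u\})^2$ and $P(u) = u^2$, packs $z = (x, s, t)$ into one vector, and takes the constraint maps G, H, K as user-supplied callables; all names are our own:

```python
# Sketch of the augmented Lagrangian objective penalty function (1),
# with the illustrative choices Q(u) = max(0, u)^2 and P(u) = u^2.
# G, H, K map the packed variable z = (x, s, t) to the equality-form
# constraint residuals; lam, mu, nu are the multiplier vectors.
import numpy as np

Q = lambda u: max(0.0, u) ** 2   # Q(u) = 0 for u <= 0, > 0 otherwise
P = lambda u: u ** 2             # P(u) = 0 iff u = 0 (elementwise)

def aug_lagrangian(z, lam, mu, nu, M, rho, f, G, H, K):
    Gz, Hz, Kz = G(z), H(z), K(z)            # constraint residuals at z
    return (Q(f(z) - M)                      # objective-parameter term
            + lam @ P(Gz) + mu @ P(Hz) + nu @ P(Kz)   # multiplier terms
            + rho * (Gz @ Gz + Hz @ Hz + Kz @ Kz))    # penalty term
```

At a feasible $z$ all residuals vanish, so the value reduces to $Q(f(x) - M)$; this is exactly the fact used in (4) and (5).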

Theorem 2.1: Let $\bar{x}$ be a feasible solution to (P) and $(\lambda, \mu, \nu)$, with $\lambda, \mu, \nu \ge 0$, be a feasible solution to (DP). Then

$$d(\lambda, \mu, \nu, M, \rho) \le Q(f(\bar{x}) - M). \qquad (6)$$

Proof: According to the construction above, at $\bar{z} = (\bar{x}, \bar{s}, \bar{t})$ all constraint terms of (1) vanish, so $L(\bar{z}, \lambda, \mu, \nu, M, \rho) = Q(f(\bar{x}) - M)$, and by (3) we have $d(\lambda, \mu, \nu, M, \rho) \le L(\bar{z}, \lambda, \mu, \nu, M, \rho)$. Combining the two gives (6).

Theorem 2.2: Let $\lambda^\ast, \mu^\ast, \nu^\ast \ge 0$. Then $(z^\ast, \lambda^\ast, \mu^\ast, \nu^\ast)$ is a saddle point of $L$ if and only if $z^\ast$ is an optimal solution of (P) and $(\lambda^\ast, \mu^\ast, \nu^\ast)$ is an optimal solution to (DP) with $d(\lambda^\ast, \mu^\ast, \nu^\ast, M, \rho) = Q(f(x^\ast) - M)$.

Proof: Let $z^\ast$ be an optimal solution to (P) and $(\lambda^\ast, \mu^\ast, \nu^\ast)$ be an optimal solution to (DP). Then we have to prove that $(z^\ast, \lambda^\ast, \mu^\ast, \nu^\ast)$ is a saddle point of $L$. Since $z^\ast$ is an optimal solution of (P) and $(\lambda^\ast, \mu^\ast, \nu^\ast)$ is an optimal solution of (DP), we can write

$$d(\lambda^\ast, \mu^\ast, \nu^\ast, M, \rho) = Q(f(x^\ast) - M). \qquad (7)$$

By (5), if $z^\ast$ is an optimal solution of $L(\cdot, \lambda^\ast, \mu^\ast, \nu^\ast, M, \rho)$ then $x^\ast$ is an optimal solution of (P), and we can write

$$L(z^\ast, \lambda^\ast, \mu^\ast, \nu^\ast, M, \rho) \le L(z, \lambda^\ast, \mu^\ast, \nu^\ast, M, \rho) \quad \text{for all } z. \qquad (8)$$

Now $(\lambda^\ast, \mu^\ast, \nu^\ast)$ is an optimal solution of (DP), so for all $\lambda, \mu, \nu \ge 0$ we have

$$L(z^\ast, \lambda, \mu, \nu, M, \rho) \le L(z^\ast, \lambda^\ast, \mu^\ast, \nu^\ast, M, \rho). \qquad (9)$$

According to (8) and (9), we can say that $(z^\ast, \lambda^\ast, \mu^\ast, \nu^\ast)$ is a saddle point of $L$. Conversely, let $(z^\ast, \lambda^\ast, \mu^\ast, \nu^\ast)$ be a saddle point of $L$. We have to prove that $z^\ast$ is an optimal solution of (P) and $(\lambda^\ast, \mu^\ast, \nu^\ast)$ is an optimal solution to (DP). For this it is enough to show that $d(\lambda^\ast, \mu^\ast, \nu^\ast, M, \rho) = Q(f(x^\ast) - M)$. A saddle point of $L$ is defined by

$$L(z^\ast, \lambda, \mu, \nu, M, \rho) \le L(z^\ast, \lambda^\ast, \mu^\ast, \nu^\ast, M, \rho) \le L(z, \lambda^\ast, \mu^\ast, \nu^\ast, M, \rho) \quad \text{for all } z \text{ and all } \lambda, \mu, \nu \ge 0. \qquad (10)$$

Using (10) we observe that the saddle point gives the relationship between (P) and (DP): the optimal solution of (P) can be obtained from the optimal solution of (DP), and the zero duality gap is established in Theorem 2.2. Theorems 2.3 and 2.4 are proved under the convexity condition; there, saddle points are equivalent to the optimality conditions of (P). From (10) the corresponding optimality relations can be written directly.

Hence the following theorem can be proved.

Theorem 2.4: Let $f$, $G$, $H$ and $K$ be convex and differentiable. Let $\lambda^\ast, \mu^\ast, \nu^\ast \ge 0$ and $M \le f(x^\ast)$. If $z^\ast$ satisfies the first-order Karush-Kuhn-Tucker (KKT) conditions, then $(z^\ast, \lambda^\ast, \mu^\ast, \nu^\ast)$ is a saddle point of $L$ for any $\rho > 0$.

Proof: Let any $\lambda, \mu, \nu \ge 0$. According to our consideration, $L$ is convex and differentiable in $z$. From (1) we have

$$\nabla_z L(z^\ast, \lambda^\ast, \mu^\ast, \nu^\ast, M, \rho) = 0 \quad \Longrightarrow \quad L(z^\ast, \lambda^\ast, \mu^\ast, \nu^\ast, M, \rho) \le L(z, \lambda^\ast, \mu^\ast, \nu^\ast, M, \rho) \quad \text{for all } z,$$

and we also have, when $z^\ast$ satisfies the first-order Karush-Kuhn-Tucker (KKT) conditions, that $G_i(z^\ast) = 0$, $H_j(z^\ast) = 0$ and $K_r(z^\ast) = 0$. By the definition of the penalty function, we can write that $P(G_i(z^\ast)) = P(H_j(z^\ast)) = P(K_r(z^\ast)) = 0$. So for any $\lambda, \mu, \nu \ge 0$ we have

$$L(z^\ast, \lambda, \mu, \nu, M, \rho) = Q(f(x^\ast) - M) = L(z^\ast, \lambda^\ast, \mu^\ast, \nu^\ast, M, \rho),$$

which together with the inequality above gives (10).

Example 2.1: Consider a problem of the form (P), with inequality constraints $g(x) \le 0$, $k(x) \ge 0$ and an equality constraint $h(x) = 0$. Now, introducing the slack and surplus variables, we get $G(x,s) = g(x) + s^2 = 0$ and $K(x,t) = k(x) - t^2 = 0$. When $\rho > 0$, the augmented Lagrangian objective penalty function is given by (1).

The optimal solution to the problem is $x^\ast$, attained for $M = f(x^\ast)$ and suitable multipliers $\lambda^\ast, \mu^\ast, \nu^\ast \ge 0$. For such a choice it is clear that the saddle-point inequality (10) holds.

Then $(z^\ast, \lambda^\ast, \mu^\ast, \nu^\ast)$ is a saddle point of $L$.

The example shows that the augmented Lagrangian objective penalty function is the same, in terms of exactness, as the exact penalty function.
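The numerical data of Example 2.1 did not survive, so the following stand-in (a one-dimensional convex problem of our own choosing, with hypothetical names) illustrates the same exactness behaviour: once the objective parameter $M$ equals the optimal value, a minimizer of (1) is the constrained optimum for any $\rho > 0$.

```python
# Exactness check on a hypothetical stand-in problem (not Example 2.1):
#   min x^2  s.t.  k(x) = x - 1 >= 0,
# rewritten with a surplus variable as K(x, t) = x - 1 - t^2 = 0.
# The constrained optimum is x* = 1 with f(x*) = 1; taking M = f(x*)
# makes the minimum of the function (1) equal to 0 at the optimum.
import numpy as np
from scipy.optimize import minimize

f = lambda z: z[0] ** 2                  # objective in z = (x, t)
K = lambda z: z[0] - 1.0 - z[1] ** 2     # surplus-form constraint
Q = lambda u: max(0.0, u) ** 2
M = 1.0                                  # objective parameter = f(x*)

for rho in (1.0, 100.0):
    L = lambda z, r=rho: Q(f(z) - M) + r * K(z) ** 2
    best = min((minimize(L, z0) for z0 in ([0.0, 0.0], [2.0, 1.0])),
               key=lambda res: res.fun)
    print(f"rho={rho:6.1f}  x={best.x[0]:.4f}  L={best.fun:.2e}")
```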

For any given $M$, define the following problem:

$$(P_M)\qquad \min\; Q(f(x) - M) \quad \text{subject to:}\quad G_i(x,s) = 0,\; H_j(x) = 0,\; K_r(x,t) = 0.$$

In the example, $x^\ast$ is an optimal solution of $(P_M)$ when $M = f(x^\ast)$. Now we construct an algorithm to find a global solution of (P), which is the same as the algorithm of [15]. The algorithm solves problems of this form sequentially.

Algorithm:

  • Step 1: Choose $\epsilon > 0$, $N > 1$, $\rho_1 > 0$, $M_1$, $\lambda^1 \ge 0$, $\mu^1 \ge 0$, $\nu^1 \ge 0$, and set $k = 1$.
  • Step 2: Solve $\min_z L(z, \lambda^k, \mu^k, \nu^k, M_k, \rho_k)$. Let $z_k = (x_k, s_k, t_k)$ be a global minimizer.
  • Step 3: If $x_k$ is not a feasible solution of (P), let $\rho_{k+1} = N\rho_k$, $M_{k+1} = M_k$, $\lambda^{k+1} = \lambda^k$, $\mu^{k+1} = \mu^k$, $\nu^{k+1} = \nu^k$, $k = k + 1$, and go to Step 2.

Otherwise, stop: $x_k$ is an approximate solution to (P). The convergence of the algorithm is proved in Theorem 2.5.
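A runnable sketch of this algorithm, under the reconstruction above, is given below. Step 2's global minimization is approximated by a local solver from random starts, and the multipliers and $M$ are held fixed while only $\rho$ is enlarged; all names are our own.

```python
# Runnable sketch of the sequential algorithm (multipliers and M fixed;
# Step 2's global minimization approximated by random multistart).
import numpy as np
from scipy.optimize import minimize

def algorithm(f, constraints, dim, M, rho=1.0, N=10.0,
              eps=1e-5, max_outer=25, starts=5, seed=0):
    rng = np.random.default_rng(seed)
    Q = lambda u: max(0.0, u) ** 2
    z = np.zeros(dim)
    for _ in range(max_outer):
        def L(zz, r=rho):                     # penalty form of (1)
            v = np.concatenate([c(zz) for c in constraints])
            return Q(f(zz) - M) + r * (v @ v)
        best = min((minimize(L, rng.standard_normal(dim))
                    for _ in range(starts)), key=lambda res: res.fun)
        z = best.x                            # Step 2: minimizer z_k
        viol = max(np.max(np.abs(c(z))) for c in constraints)
        if viol <= eps:                       # Step 3: feasible -> stop
            return z
        rho *= N                              # infeasible -> enlarge rho
    return z

# usage on a toy problem: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 = 1,
# whose optimum is (1, 0) with objective value 2 (so we set M = 2).
f = lambda z: (z[0] - 2.0) ** 2 + (z[1] - 1.0) ** 2
h = lambda z: np.array([z[0] + z[1] - 1.0])
print(algorithm(f, [h], dim=2, M=2.0))        # approaches [1.0, 0.0]
```

In the usage lines, $M$ is set to the known optimal value of the toy problem; updating $M$ itself, as in [15], is the part of the scheme this sketch omits.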

The set $S_Q(M, c) = \{\, x \in \mathbb{R}^n : Q(f(x) - M) \le c \,\}$, $c > 0$, is called a Q-level set. The Q-level set is bounded if, for any given $c > 0$ and any convergent sequence $\{M_k\}$, the union $\bigcup_k S_Q(M_k, c)$ is bounded.

Theorem 2.5: Let an optimal solution to (P) exist. Suppose that $f$, $G$, $H$ and $K$ are continuous, and the Q-level set is bounded. Let $\{z_k\}$ be the sequence generated by the algorithm. If $\{z_k\}$ is an infinite sequence, so that $\rho_k \to +\infty$, then $\{x_k\}$ is bounded and any limit point of it is an optimal solution to (P).

Proof: Firstly we have to prove that the sequence $\{x_k\}$ is bounded. As $z_k$ is an optimal solution to the subproblem of Step 2,

$$L(z_k, \lambda^k, \mu^k, \nu^k, M_k, \rho_k) \le L(\bar{z}, \lambda^k, \mu^k, \nu^k, M_k, \rho_k) = Q(f(\bar{x}) - M_k)$$

for any feasible $\bar{z} = (\bar{x}, \bar{s}, \bar{t})$, because the constraint terms of (1) vanish at feasible points.

Since every term of (1) is nonnegative, there is a bound on the sequence $\{Q(f(x_k) - M_k)\}$, and, because $\rho_k \to +\infty$ while the penalty term stays bounded, the constraint violation at $z_k$ tends to zero. It is concluded that there is some $c > 0$ such that $x_k \in S_Q(M_k, c)$ for all $k$.

Since the Q-level set is bounded, the sequence $\{x_k\}$ is bounded; without loss of generality, we consider $x_k \to x^\ast$. Let $\bar{x}$ be an optimal solution to (P). So

$$Q(f(x_k) - M_k) \le Q(f(\bar{x}) - M_k).$$

If $k \to \infty$ in the above inequality, we get that

$$Q(f(x^\ast) - M) \le Q(f(\bar{x}) - M).$$

Since $Q$ is monotonically increasing, that indicates $f(x^\ast) \le f(\bar{x})$, and since the constraint violation vanishes in the limit, $x^\ast$ is feasible. Therefore $x^\ast$ is an optimal solution to (P).

Theorem 2.5 establishes the global convergence of the algorithm in theory. When the penalty parameter $\rho_k$ is taken big enough, an approximate solution of (P) is obtained by the algorithm.

Conclusion

In this work we convert the inequality constraints of the optimization problem into equality constraints by introducing slack and surplus variables. The optimization problem is transformed into a dual problem, and a new augmented Lagrangian objective function is constructed based on the constraints and a penalty function. We discuss the properties of the dual problem and the algorithm for the augmented Lagrangian function. The duality and the zero duality gap of the dual problem are also established.

Under some conditions on the dual problem, the saddle point of the Lagrangian function is found, and it satisfies the first-order Karush-Kuhn-Tucker (KKT) conditions. We also construct an algorithm for finding the global solution of the optimization problem. The convergence of the solution is also proved under continuity, boundedness, limit-point and some other conditions.

References

  1. Du, D.Z., Pardalos, P.M. and Wu, W. (2001) Mathematical Theory of Optimization. Kluwer Academic Publishers.
  2. Miele, A., Cragg, E.E., Iyer, R.R. and Levy, A.V. (1971) Use of the Augmented Penalty Function in Mathematical Programming. Journal of Optimization Theory and Applications.
  3. Miele, A., Moseley, P., Levy, A.V. and Coggins, G.H. (1972) On the Method of Multipliers for Mathematical Programming Problems. Journal of Optimization Theory and Applications.
  4. Pillo, G.D. and Grippo, L. (1982) A New Augmented Lagrangian Function for Inequality Constraints in Nonlinear Programming Problems. Journal of Optimization Theory and Applications.
  5. Goldfarb, D., Polyak, R., Scheinberg, K. and Yuzefovich, I. (1999) A Modified Barrier-Augmented Lagrangian Method for Constrained Minimization. Computational Optimization and Applications.
  6. Lucidi, S. (1990) Recursive Quadratic Programming Algorithm That Uses an Exact Augmented Lagrangian Function. Journal of Optimization Theory and Applications.
  7. Pillo, G.D., Liuzzi, G., Lucidi, S. and Palagi, L. (2003) An Exact Augmented Lagrangian Function for Nonlinear Programming with Two-Sided Constraints. Computational Optimization and Applications.
  8. Burachik, R.S. and Kaya, C.Y. (2012) An Augmented Penalty Function Method with Penalty Parameter Updates for Nonconvex Optimization. Nonlinear Analysis.
  9. Morrison, D.D. (1968) Optimization by Least Squares. SIAM Journal on Numerical Analysis.
  10. Fletcher, R. (1981) Practical Methods of Optimization. Wiley-Interscience, New York.
  11. Fletcher, R. (1983) Penalty Functions. In: Bachem, A., Grotschel, M. and Korte, B., Eds., Mathematical Programming: The State of the Art, Springer, Berlin.
  12. Burke, J.V. (1991) An Exact Penalization Viewpoint of Constrained Optimization. SIAM Journal on Control and Optimization.
  13. Fiacco, A.V. and McCormick, G.P. (1968) Nonlinear Programming: Sequential Unconstrained Minimization Techniques. Wiley, New York.
  14. Mauricio, D. and Maculan, N. (2000) A Boolean Penalty Method for Zero-One Nonlinear Programming. Journal of Global Optimization.
  15. Meng, Z.Q., Hu, Q.Y. and Dang, C.Y. (2009) A Penalty Function Algorithm with Objective Parameters for Nonlinear Mathematical Programming. Journal of Industrial and Management Optimization.
  16. Meng, Z.Q., Shen, R., Dang, C.Y. and Jiang, M. (2015) Augmented Lagrangian Objective Penalty Function. Numerical Functional Analysis and Optimization.