Engineering System Optimization: Non-linear Programming and SQP Method


Introduction

Non-linear programming (NLP) is crucial in solving various optimization problems in engineering, economics, and applied sciences. It involves minimizing or maximizing a non-linear objective function, subject to equality and inequality constraints. This essay delves into the mathematical foundations of NLP and introduces Sequential Quadratic Programming (SQP), a robust method for solving NLP problems.

Non-linear Programming Problem Formulation

A general non-linear programming problem can be expressed as:

  • Objective: Minimize fobjective(x)
  • Subject to:
    • Equality constraints: eq_i(x) = 0, for i = 1, 2, ..., neq
    • Inequality constraints: iq_j(x) ≥ 0, for j = 1, 2, ..., niq

Here, fobjective(x) is the objective function to be minimized, eq_i(x) are the equality constraints, and iq_j(x) are the inequality constraints.
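As an illustration, a small hypothetical instance of this standard form can be written directly as Python callables (the specific objective and constraints below are invented for illustration, not taken from the text):

```python
# Hypothetical example in the standard NLP form above:
# minimize f(x) = (x0 - 1)^2 + (x1 - 2.5)^2
# subject to eq(x) = x0 + x1 - 3 = 0  and  iq(x) = x0 >= 0.

def f_objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

def eq(x):
    # equality constraint, required to equal zero
    return x[0] + x[1] - 3.0

def iq(x):
    # inequality constraint, required to be non-negative
    return x[0]

# Eliminating x1 = 3 - x0 and setting the derivative to zero gives
# the minimizer x* = (0.75, 2.25), which satisfies both constraints.
x_star = [0.75, 2.25]
```

Any candidate point can be checked for feasibility by evaluating eq(x) and iq(x) before comparing objective values.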

Mathematical Prerequisites: The Lagrangian and KKT Conditions

The Lagrangian function combines all the information about the problem into a single function, using Lagrange multipliers \lambda for the equality constraints and \mu for the inequality constraints:

L(x, \lambda, \mu) = fobjective(x) + \sum_{i=1}^{neq} \lambda_i \, eq_i(x) + \sum_{j=1}^{niq} \mu_j \, iq_j(x)
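As a sketch, the Lagrangian can be evaluated numerically for arbitrary callables (the function names and the tiny check below are illustrative, not from the text):

```python
def lagrangian(x, lam, mu, f, eqs, iqs):
    """L(x, lam, mu) = f(x) + sum_i lam_i*eq_i(x) + sum_j mu_j*iq_j(x)."""
    penalty_eq = sum(l * e(x) for l, e in zip(lam, eqs))
    penalty_iq = sum(m * g(x) for m, g in zip(mu, iqs))
    return f(x) + penalty_eq + penalty_iq

# Tiny check: f(x) = x^2 with one equality constraint x - 1 = 0,
# evaluated at x = 2 with multiplier lambda = 3: 4 + 3*(2-1) = 7.
value = lagrangian([2.0], [3.0], [], lambda x: x[0] ** 2,
                   [lambda x: x[0] - 1.0], [])
```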


A single function can be optimized by finding critical points where its gradient is zero. This approach now includes \lambda and \mu as variables (which are vectors for multi-constraint NLP). The system formed by setting this gradient to zero is known as the KKT conditions.

Taking the gradient of the Lagrangian with respect to x, \lambda, and \mu gives (here h denotes the equality constraints eq and g the inequality constraints iq):

\nabla L =\begin{bmatrix} \frac{dL}{dx} \\ \frac{dL}{d\lambda} \\ \frac{dL}{d\mu} \end{bmatrix} = \begin{bmatrix} \nabla f + \lambda \nabla h + \mu \nabla g^* \\ h \\ g^* \end{bmatrix} =0

The second KKT condition is simply feasibility: eq(x) was constrained to zero in the original NLP.


The third KKT condition is somewhat trickier in that only the set of active inequality constraints, denoted g^*, needs to satisfy this equality. Inequality constraints far from the optimal solution are irrelevant, but constraints that actively participate in determining the optimum will be at their bound of zero, and so the third KKT condition holds. Finally, the Lagrange multipliers describe the change in the objective function with respect to a change in a constraint, so \mu is zero for inactive constraints; those inactive constraints can effectively be removed from the Lagrangian before the gradient is even taken.
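The conditions can be verified numerically. The sketch below checks stationarity and feasibility for a hypothetical equality-constrained problem (minimize x0^2 + x1^2 subject to x0 + x1 = 2), whose solution is x* = (1, 1) with multiplier lambda* = -2:

```python
import numpy as np

def grad_L(x, lam):
    # first KKT condition: nabla f + lambda * nabla h = 0
    grad_f = np.array([2.0 * x[0], 2.0 * x[1]])
    grad_h = np.array([1.0, 1.0])
    return grad_f + lam * grad_h

def h(x):
    # second KKT condition: h(x) = 0 (feasibility)
    return x[0] + x[1] - 2.0

x_star, lam_star = np.array([1.0, 1.0]), -2.0
```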

Newton's Method

The principal idea behind Newton's Method is to improve a guess in proportion to how rapidly the function is changing at that guess, and in inverse proportion to how the function's rate of change is itself changing (its curvature) there. Walking through a couple of extreme situations makes this approach more intuitive: a long, steep slope in a function will not be near a critical point, so the improvement should be large, while a shallow slope that is flattening out quickly is probably close to a critical point, so the improvement should be small. The iterations converge to critical points of any function f with update steps of the form below:

x_{k+1} = x_k - (\nabla^2 f)^{-1} \nabla f

The negative sign is significant. Near minima, a positive gradient should decrease the estimate and vice versa, and the second derivative is positive; near maxima, a positive gradient should increase the estimate and vice versa, and the second derivative is negative. This sign convention also keeps the algorithm from escaping a single convex or concave region: the improvement will reverse direction if it overshoots. This is an important consideration in non-convex problems with multiple local maxima and minima, because Newton's method will find the critical point nearest the initial guess. Incorporating Newton's Method into the active set method turns the iteration above into a matrix equation.
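A minimal one-dimensional sketch of this iteration (the example function f(x) = x^4 - 3x^2 is invented for illustration):

```python
def newton_critical_point(x, grad, hess, tol=1e-12, max_iter=50):
    # iterate x <- x - f'(x)/f''(x) until the step is negligible
    for _ in range(max_iter):
        step = grad(x) / hess(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x^4 - 3x^2 has critical points at 0 and +/- sqrt(1.5)
grad = lambda x: 4 * x ** 3 - 6 * x
hess = lambda x: 12 * x ** 2 - 6
```

Starting from x = 1.0 the iteration converges to sqrt(1.5); starting from x = 0.1 it converges to 0, the critical point nearest the initial guess.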

SQP Method

Critical points of the objective function are also critical points of the Lagrangian function, and vice versa, because the Lagrangian is equal to the objective at a KKT point: all constraints are either equal to zero or inactive. The algorithm is therefore essentially iterating Newton's method to find critical points of the Lagrangian. Since the Lagrange multipliers are additional variables, the iteration takes the form of a system:

\begin{bmatrix} x_{k+1} \\ \lambda_{k+1} \\ \mu_{k+1} \end{bmatrix} = \begin{bmatrix} x_{k} \\ \lambda_{k} \\ \mu_{k} \end{bmatrix} - (\nabla^2 L_k)^{-1} \nabla L_k

Recall: \nabla L =\begin{bmatrix} \frac{dL}{dx} \\ \frac{dL}{d\lambda} \\ \frac{dL}{d\mu} \end{bmatrix} = \begin{bmatrix} \nabla f + \lambda \nabla h + \mu \nabla g^* \\ h \\ g^* \end{bmatrix}

Then \nabla^2 L = \begin{bmatrix} \nabla_{xx}^2 L & \nabla h^T & \nabla g^{*T} \\ \nabla h & 0 & 0 \\ \nabla g^* & 0 & 0 \end{bmatrix}
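For a hypothetical equality-constrained quadratic problem, this block system can be assembled and solved directly; with a quadratic objective and a linear constraint, a single Newton step lands exactly on the KKT point:

```python
import numpy as np

# Problem (invented for illustration): minimize x0^2 + x1^2
# subject to h(x) = x0 + x1 - 2 = 0.  Solution: x* = (1, 1), lam* = -2.

def sqp_newton_step(x, lam):
    Q = np.array([[2.0, 0.0], [0.0, 2.0]])      # nabla_xx^2 L
    A = np.array([[1.0, 1.0]])                  # nabla h (constraint Jacobian)
    # block KKT matrix with the structure shown above
    KKT = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
    grad_L = np.concatenate([Q @ x + lam * A[0], [x[0] + x[1] - 2.0]])
    z = np.concatenate([x, [lam]]) - np.linalg.solve(KKT, grad_L)
    return z[:2], z[2]
```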

In contrast to the active set method, the need to solve a system of non-linear equations has been completely eliminated in SQP, no matter how non-linear the objective and constraints. Theoretically, if the derivative expressions above can be derived analytically and then coded, the program could iterate very rapidly, because the structure of the system does not change. In practice, however, it is likely that the Hessian will not be an invertible matrix, since variables are likely to be linearly bounded from above and below. The improvement direction p for the Newton iterations is therefore often found in a more indirect manner: with a quadratic minimization sub-problem that is solved using quadratic programming algorithms. The sub-problem is derived as follows:

\nabla^2 L \, p = -\nabla L \quad\Longleftrightarrow\quad \nabla L + \nabla^2 L \, p = 0

Since p is an incremental change to the variables, this condition resembles a two-term Taylor series for the derivative of the objective function, which shows that a Taylor expansion with the increment p as the variable is equivalent to a Newton iteration. Expanding the various equations inside this system and halving the second-order term to match Taylor series conventions, a minimization sub-problem can be obtained. This sub-problem is quadratic and must therefore still be solved with non-linear methods, which admittedly reintroduces the need to solve a non-linear problem into the algorithm; however, this quadratic sub-problem in the single variable p is much easier to handle than the parent problem.

minimize f_k + \nabla f_k^T p + \frac{1}{2}p^T\nabla_{xx}^2L_k p

subjected to \nabla h_k p + h_k = 0

and \nabla g_k^* p + g_k^* = 0

An approximate sub-problem is solved at each step, and in this way descent is achieved at every iteration. This procedure is repeated until the Karush–Kuhn–Tucker (KKT) conditions are satisfied.
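Putting the pieces together, a minimal SQP loop for a problem with a genuinely non-linear equality constraint might look as follows. The problem (minimize x0^2 + x1^2 subject to x0^2 + x1 = 1) is invented for illustration; a production solver would add an inequality active set, a merit function, and other safeguards:

```python
import numpy as np

def sqp(x, lam, tol=1e-12, max_iter=25):
    # minimize x0^2 + x1^2  subject to  h(x) = x0^2 + x1 - 1 = 0
    for _ in range(max_iter):
        grad_f = 2.0 * x
        grad_h = np.array([2.0 * x[0], 1.0])
        hess_L = np.array([[2.0 + 2.0 * lam, 0.0], [0.0, 2.0]])
        h = x[0] ** 2 + x[1] - 1.0
        # assemble and solve the linearized KKT system for the step
        KKT = np.block([[hess_L, grad_h[:, None]],
                        [grad_h[None, :], np.zeros((1, 1))]])
        rhs = -np.concatenate([grad_f + lam * grad_h, [h]])
        step = np.linalg.solve(KKT, rhs)
        x, lam = x + step[:2], lam + step[2]
        if np.linalg.norm(step) < tol:
            break
    return x, lam
```

From the starting point (1, 0) with lambda = 0, the loop converges to x* = (sqrt(0.5), 0.5) with lambda* = -1, which satisfies the KKT conditions of the original non-linear problem.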

Limitations and Challenges

SQP requires first- and second-derivative information for the objective and constraints; when these cannot be derived analytically, they must be approximated, which adds cost and error. The Hessian of the Lagrangian may also be singular or indefinite, in which case the Newton system cannot be solved directly and the quadratic sub-problem formulation must be used instead. Finally, like Newton's method itself, SQP converges to the critical point nearest the initial guess, so in non-convex problems with multiple local minima a good starting point is essential.

Conclusion and Future Work

Non-linear programming provides a general framework for optimizing engineering systems subject to equality and inequality constraints. The SQP method combines the Lagrangian formulation and the KKT conditions with Newton's method, reducing a general NLP to a sequence of quadratic sub-problems that can be solved efficiently. Future work could examine approximations of the Hessian and globalization strategies that improve the method's reliability on non-convex problems.

Updated: Feb 22, 2024

Engineering System Optimization: Non-linear Programming and SQP Method. (2024, Feb 22). Retrieved from https://studymoose.com/document/engineering-system-optimization-non-linear-programming-and-sqp-method
