Mathematical Aspects

The mathematical complexity inherent in this synthesis strategy is related to the nondifferentiable, discontinuous, and nonconvex nature of the resulting MINLP problems. However, new optimization techniques developed in the last decade, together with great advances in computational resources, have made this approach to process synthesis much more feasible (Barnicki and Siirola, 2004), although the difficulty persists for very large and complex synthesis problems.

One of the concepts related to the solution of MINLP-type problems for process synthesis is shown in Figure 2.1; this strategy has been used, for instance, by Floudas (1995). After defining the superstructure containing all the possible process units for the transformation of a given feedstock into a specific end product (including all the possible connections among these units), an optimization loop employing mixed-integer linear programming (MILP) tools is started. In this loop, the streams connecting the process units of the first configuration to be evaluated are chosen by assigning values to the integer (discrete) variables (for example, 1 if a stream connects two units, 0 if the stream does not exist). The selected technological configuration represents the model of the process, which then enters a nonlinear programming (NLP) loop. In this loop, the optimal values of the continuous process variables (temperatures, pressures, flowrates, compositions, etc.) are determined, completing the optimization of the evaluated process model. Once the optimization of this first configuration by NLP is complete, a second configuration of the technological scheme is defined by varying the values of the discrete variables, i.e., by choosing new connection streams among the process units, and the procedure is repeated. In this way, the different configurations of the process flowsheet are selected in an outer loop represented by the MILP algorithm, whereas the optimal values of the operating parameters are found in the inner loop based on NLP. By repeatedly executing these two loops, the technological configuration of the process that maximizes or minimizes the employed objective function is identified, along with the optimal values of its operating parameters.

[Figure 2.1: Two-level MILP/NLP strategy for solving MINLP process synthesis problems.]
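
The two-level logic can be illustrated with the following minimal Python sketch (the two candidate units, cost models, bounds, and numbers are invented for the example; a real implementation would use an MILP master problem rather than plain enumeration). The outer loop fixes the discrete configuration, and the inner loop optimizes the continuous variables with SciPy's SLSQP routine, an SQP-type NLP solver:

    # Minimal sketch of the two-level idea behind Figure 2.1.
    # Hypothetical toy problem: route the feed through one of two
    # candidate units (binary choice y) and optimize the continuous
    # operating variables x = (flowrate, temperature) by NLP.
    from scipy.optimize import minimize

    def flowsheet_cost(x, y):
        """Operating cost for discrete choice y (invented cost models)."""
        flow, temp = x
        if y == (1, 0):   # route through unit 1
            return 2.0 * flow + 0.01 * (temp - 350.0) ** 2
        else:             # route through unit 2
            return 1.5 * flow + 0.02 * (temp - 400.0) ** 2

    best = None
    for y in [(1, 0), (0, 1)]:           # outer loop: discrete configurations
        res = minimize(                   # inner loop: NLP by SQP (SLSQP)
            flowsheet_cost, x0=[10.0, 360.0], args=(y,),
            method="SLSQP",
            bounds=[(5.0, 20.0), (300.0, 500.0)],
            constraints=[{"type": "ineq",            # demand: flow >= 8
                          "fun": lambda x: x[0] - 8.0}],
        )
        if best is None or res.fun < best[0]:
            best = (res.fun, y, res.x)

    print("best configuration:", best[1],
          "operating variables:", best[2], "cost:", best[0])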

The solution of NLP problems is usually based either on successive quadratic programming (SQP) or on reduced gradient methods. Among the main solution methods for MINLP problems are the branch-and-bound method (Gupta and Ravindran, 1985), generalized Benders decomposition (Geoffrion, 1972), and outer approximation (Duran and Grossmann, 1986). A newer trend for solving MINLP problems is generalized disjunctive programming (GDP), whose formulation requires that exactly one term of each disjunction be satisfied (Lee and Grossmann, 2005; Raman and Grossmann, 1991). A GDP formulation comprises three types of constraints: global inequalities that hold independently of the discrete decisions, disjunctions (conditional constraints linked by the OR operator), and pure logic constraints involving Boolean variables (Grossmann et al., 2000).
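
Schematically, following the standard form used in the GDP literature (a textbook form reproduced here for orientation, not an equation appearing in the source), such a problem can be written as

\[
\begin{aligned}
\min\; & f(x) \\
\text{s.t.}\; & g(x) \le 0 && \text{(global inequalities)} \\
& \bigvee_{j \in J_k} \begin{bmatrix} Y_{jk} \\ h_{jk}(x) \le 0 \end{bmatrix}, \quad k \in K && \text{(disjunctions)} \\
& \Omega(Y) = \text{True} && \text{(logic constraints)} \\
& x \in \mathbb{R}^{n}, \quad Y_{jk} \in \{\text{True}, \text{False}\},
\end{aligned}
\]

where exactly one Boolean variable Y_jk is true in each disjunction k, activating the corresponding constraints h_jk(x) <= 0.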

The formulation and solution of the main types of mathematical programming problems can be accomplished effectively using specialized computer programs such as GAMS (General Algebraic Modeling System; GAMS Development Corporation, 2007). This system requires that the models and the formulation of the optimization problem be explicitly introduced in algebraic form; it then automatically creates an interface with the solution codes (solvers) for the various types of problems. This offers a great advantage because the main effort goes into formulating the problem itself rather than into developing methods for solving it. GAMS has a powerful set of solvers for different optimization problems (LP, NLP, MILP, and MINLP, among others). Moreover, it allows the input of indexed equations, which is very useful in the case of large-sized models.
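
To make the idea of an algebraic, indexed formulation concrete, the short sketch below uses the open-source Pyomo library in Python rather than GAMS itself (a substitution made purely for illustration; the two-unit cost data are invented). A single symbolic constraint declared over a set expands automatically into one equation per set element, which is what makes large indexed models compact to write:

    # Illustrative algebraic model with indexed equations (Pyomo).
    from pyomo.environ import (ConcreteModel, Set, Param, Var, Constraint,
                               Objective, NonNegativeReals, minimize)

    m = ConcreteModel()
    m.I = Set(initialize=["unit1", "unit2"])                 # process units
    m.cost = Param(m.I, initialize={"unit1": 2.0, "unit2": 3.0})
    m.demand = Param(m.I, initialize={"unit1": 8.0, "unit2": 5.0})
    m.x = Var(m.I, domain=NonNegativeReals)                  # throughputs

    # One indexed equation: expands into one constraint per element of I.
    def demand_rule(m, i):
        return m.x[i] >= m.demand[i]
    m.meet_demand = Constraint(m.I, rule=demand_rule)

    m.obj = Objective(expr=sum(m.cost[i] * m.x[i] for i in m.I),
                      sense=minimize)

    # Solving requires an installed solver, e.g.:
    #   from pyomo.environ import SolverFactory
    #   SolverFactory("glpk").solve(m)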

The NLP methods guarantee that the global optimum is found only when both the objective function and the constraints are convex (Floudas, 1995). If this is not the case, locating the global solution cannot be guaranteed. The rigorous or deterministic methods for global optimization ensure an arbitrarily close approximation to the global optimum and, in addition, verify that this approximation has been attained. These methods include the branch-and-bound method, methods based on interval arithmetic (Byrne and Bogle, 1996; Stadtherr, 1997), generalized disjunctive programming (Lee and Grossmann, 2005), and procedures with multiple starting points, in which a local optimizer is invoked from each of these points (Edgar et al., 2001). In the past few years, significant advances in the methods of rigorous global optimization have been achieved. These methods assume that special structures are present in these types of problems, such as bilinear, linear fractional, and separable concave functions. It has been demonstrated that algebraic models can always be reduced to these simpler structures provided they do not involve trigonometric functions (Grossmann et al., 2000).

Floudas (2005) points out that the most important advances in deterministic global optimization belong to the following categories: convex envelopes and convex underestimators, twice continuously differentiable constrained nonlinear optimization problems, mixed-integer nonlinear optimization problems, bilevel nonlinear optimization problems, optimization problems with differential-algebraic equations, grey-box and factorable models, and enclosure of all solutions.
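
As a concrete example of a convex envelope for one of these special structures (added here for illustration; not part of the source text), consider a bilinear term w = xy with bounds x^L <= x <= x^U and y^L <= y <= y^U. The well-known McCormick relaxation replaces the nonconvex product with four linear inequalities:

\[
\begin{aligned}
w &\ge x^{L}y + x\,y^{L} - x^{L}y^{L}, & \qquad w &\ge x^{U}y + x\,y^{U} - x^{U}y^{U}, \\
w &\le x^{U}y + x\,y^{L} - x^{U}y^{L}, & \qquad w &\le x^{L}y + x\,y^{U} - x^{L}y^{U}.
\end{aligned}
\]

These inequalities constitute the convex and concave envelopes of the bilinear term, and the resulting linear relaxation provides the rigorous bounds on the global optimum that branch-and-bound schemes require.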

Other types of techniques for global optimization correspond to nonrigorous or heuristic search methods. These methods can find global optima, but they do not guarantee it and generally cannot prove that the global solution has been found even when it has. Nevertheless, these procedures often find good solutions and can be successfully applied to MINLP problems. In such cases, the heuristic method begins with a starting solution and explores all the solutions in a certain vicinity of that starting point, looking for a better one; the procedure is repeated every time a better solution is found. Metaheuristic methods direct and improve this search with a heuristic algorithm; tabu search, scatter search, simulated annealing, and genetic algorithms belong to this category (Edgar et al., 2001).

In particular, stochastic methods such as simulated annealing (Kirkpatrick et al., 1983) and genetic algorithms (Goldberg, 1989) have gained increasing popularity. They make no assumptions about the form of the functions, require some type of discretization, and handle constraint violations through penalization functions (Grossmann et al., 2000).
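
As a minimal illustration of how a stochastic method handles constraints through penalization, the sketch below applies simulated annealing to a toy one-dimensional problem; the objective, constraint, penalty weight, and cooling schedule are all invented for the example:

    # Simulated annealing with a penalty function for constraint handling.
    # Toy problem: minimize (x - 3)^2 subject to x <= 2; the violation of
    # the constraint is penalized quadratically in the objective.
    import math
    import random

    def penalized_objective(x, weight=100.0):
        violation = max(0.0, x - 2.0)        # amount by which x <= 2 fails
        return (x - 3.0) ** 2 + weight * violation ** 2

    random.seed(0)
    x = 0.0                                  # starting solution
    f_x = penalized_objective(x)
    best_x, best_f = x, f_x
    temperature = 1.0

    for _ in range(5000):
        candidate = x + random.uniform(-0.5, 0.5)    # explore the vicinity
        f_cand = penalized_objective(candidate)
        delta = f_cand - f_x
        # Accept improvements always; accept worse moves with a probability
        # that shrinks as the temperature is lowered.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x, f_x = candidate, f_cand
            if f_x < best_f:
                best_x, best_f = x, f_x
        temperature *= 0.999                          # cooling schedule

    print(f"best x = {best_x:.3f}, penalized objective = {best_f:.4f}")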