We consider nonconvex vector optimization problems with variable ordering structures in Banach spaces. Under certain boundedness and continuity assumptions, we present necessary conditions for approximate solutions of these problems. Using a generic approach to subdifferentials, we derive necessary conditions for approximate minimizers and approximately minimal solutions of vector optimization problems with variable ordering structures, applying nonlinear separating functionals and Ekeland's variational principle.
We analyze a sequential decision-making process in which, at each step, the decision is made in two stages. In the first stage a partially optimal action is chosen, which allows the decision maker to learn how to improve it under the new environment. We show how inertia (the cost of changing) may lead the process to converge to a routine where no further changes are made. We illustrate our scheme with some economic models.
We study the support sets of sub-topical functions and investigate their maximal elements in order to establish a necessary and sufficient condition for the global minimum of the difference of two sub-topical functions.
We consider a linear programming problem in general form and suppose that all coefficients may vary in prescribed intervals. Unlike classical models, where each parameter can attain any value from its interval domain independently, we study problems with linear dependencies between the parameters. We present a class of problems that are easily solved by reduction to the classical case. In contrast, we also exhibit a class of problems with very simple dependencies that turn out to be hard to deal with. We also point out some interesting open problems.
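The role of dependencies can be sketched with a toy example (hypothetical, not taken from the paper): when the same parameter a occurs in two constraints, letting each occurrence vary independently over its interval overestimates the range of attainable optimal values. A minimal pure-Python check, where the optima of the toy LP are known in closed form:

```python
# Toy interval LP (hypothetical illustration):
#   max x1 + x2  s.t.  x1 <= a,  x2 <= 1 - a,  x1, x2 >= 0,  a in [0, 1].
# For fixed parameter values the optimum is attained at the bounds.

def optimum(a1, a2):
    """Optimal value when the two occurrences of the parameter
    take (possibly different) values a1 and a2."""
    return max(a1, 0.0) + max(1.0 - a2, 0.0)

grid = [i / 4 for i in range(5)]  # exact binary fractions: 0, 0.25, ..., 1

# Dependent model: both occurrences share the same value a.
dep_values = [optimum(a, a) for a in grid]

# Independent (classical interval) model: occurrences vary freely.
ind_values = [optimum(a1, a2) for a1 in grid for a2 in grid]

print(min(dep_values), max(dep_values))  # 1.0 1.0
print(min(ind_values), max(ind_values))  # 0.0 2.0
```

With the dependency respected, the optimal value is constantly 1; ignoring it (the classical independent model) inflates the apparent range of optimal values to [0, 2].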
We present an improved version of a full Nesterov-Todd step infeasible interior-point method for linear complementarity problems over symmetric cones (Bull. Iranian Math. Soc., 40(3), 541-564, 2014). In the earlier version, each iteration consisted of one so-called feasibility step and a few (at most three) centering steps. Here, each iteration consists of only a feasibility step. Thus, the new algorithm requires less work per iteration and admits a simpler analysis of its complexity bound. The complexity result coincides with the best-known iteration bound for infeasible interior-point methods.
Here, we develop a new algorithm for solving a multiobjective linear programming problem. The algorithm seeks a solution that approximately meets the decision maker's preferences. We prove that the proposed algorithm always converges to a weakly efficient solution and, in some cases, to an efficient solution. Numerical examples and a simulation study illustrate the performance of the proposed algorithm.
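The distinction between efficient and weakly efficient solutions invoked above can be illustrated with a toy finite outcome set (hypothetical, not the paper's algorithm): a point is efficient if no other point is at least as good in every objective and better in one, and weakly efficient if no other point is strictly better in every objective.

```python
# Hypothetical illustration: efficient vs. weakly efficient points
# for componentwise minimization in R^2 over a finite outcome set.

def dominates(y, z):
    """y dominates z: y <= z componentwise and y != z."""
    return all(yi <= zi for yi, zi in zip(y, z)) and y != z

def strictly_dominates(y, z):
    """y < z in every component."""
    return all(yi < zi for yi, zi in zip(y, z))

def efficient(z, Y):
    return not any(dominates(y, z) for y in Y)

def weakly_efficient(z, Y):
    return not any(strictly_dominates(y, z) for y in Y)

Y = [(0, 0), (0, 1), (1, 1)]
for z in Y:
    print(z, efficient(z, Y), weakly_efficient(z, Y))
# (0, 0): efficient (hence weakly efficient)
# (0, 1): weakly efficient but not efficient
# (1, 1): neither
```

Here (0, 1) is only weakly efficient: it is dominated by (0, 0), but no point beats it strictly in both objectives.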
A common approach to determining efficient solutions of a multiple objective optimization problem is to reformulate it as a parameter-dependent scalar optimization problem; this reformulation is called a scalarization approach. Here, the well-known Pascoletti-Serafini scalarization is considered. First, some difficulties of this scalarization are discussed and then removed by restricting the parameter set. A method is presented to convert a space ordered by a specific ordering cone into an equivalent space ordered by the natural ordering cone. Using this conversion, all established results and theorems for multiple objective optimization problems ordered by the natural ordering cone can be extended to multiple objective optimization problems ordered by specific ordering cones.
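For reference, the Pascoletti-Serafini scalar problem mentioned above has the following standard form (a sketch, with $f$ the vector objective, $\Omega$ the feasible set, and $K$ the ordering cone):

```latex
\min_{t \in \mathbb{R},\; x \in \Omega} \; t
\quad \text{subject to} \quad
a + t\,r - f(x) \in K,
```

where $a \in \mathbb{R}^m$ and $r \in K \setminus \{0\}$ are the scalarization parameters; the difficulties referred to in the abstract concern the choice of these parameters.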