42 research outputs found

    Optimization Models Using Fuzzy Sets and Possibility Theory

    Get PDF
    Optimization is of central concern to a number of disciplines. Operations Research and Decision Theory are often considered to be identical with optimization. But in other areas, too, such as engineering design, regional policy, and logistics, the search for optimal solutions is one of the prime goals. The methods and models which have been used in these areas over the last decades have primarily been "hard" or "crisp", i.e. solutions were considered to be either feasible or infeasible, either above a certain aspiration level or below it. This dichotomous structure of methods very often forced the modeler to approximate real problem situations of the more-or-less type by yes-or-no-type models, whose solutions might turn out not to be solutions to the real problems. This is particularly true if the problem under consideration includes vaguely defined relationships, human evaluations, or uncertainty due to inconsistent or incomplete evidence, if natural language has to be modeled, or if state variables can only be described approximately. Until recently, everything which was not known with certainty, i.e. which was not known to be either true or false, or which was not known either to happen with certainty or to be impossible, was modeled by means of probabilities. This holds in particular for uncertainties concerning the occurrence of events. Probability theory was used irrespective of whether its assumptions (such as, for instance, those underlying the law of large numbers) were satisfied, or whether the "events" could really be described unequivocally and crisply. In the meantime it has become clear that uncertainties concerning the occurrence as well as the description of events ought to be modeled in a much more differentiated way. New concepts and theories have been developed to do this: the theory of evidence, possibility theory, and the theory of fuzzy sets have been advanced to a stage of remarkable maturity and have already been applied successfully in numerous cases and in many areas. Unfortunately, progress in these areas has been so fast in recent years that it has not been documented in a way which makes the results easily accessible and understandable for newcomers: textbooks have not been able to keep up with the speed of new developments, and edited volumes have been published which are very useful for specialists but of very little use to nonspecialists, because they assume too much background in fuzzy set theory. To a certain degree the same is true of the existing professional journals in the area of fuzzy set theory. Altogether, this volume is a very important and valuable contribution to the literature on fuzzy set theory.
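    The contrast the abstract draws between crisp and fuzzy feasibility can be made concrete with a small sketch. The following is a minimal, hypothetical example (not taken from the volume): a crisp budget constraint versus a fuzzy one, combined with a fuzzy goal via the classic Bellman-Zadeh max-min approach to fuzzy decision making. The bounds 100, 120, 50 and 80 are illustrative values only.

    def crisp_budget(cost):
        # Crisp constraint: feasible (1.0) iff cost <= 100, else infeasible (0.0).
        return 1.0 if cost <= 100 else 0.0

    def fuzzy_budget(cost):
        # Fuzzy constraint "cost should be essentially below 100": fully
        # satisfied up to 100, then linearly less acceptable up to 120.
        if cost <= 100:
            return 1.0
        if cost >= 120:
            return 0.0
        return (120.0 - cost) / 20.0

    def fuzzy_goal(profit):
        # Fuzzy goal "profit should be substantially above 50".
        if profit >= 80:
            return 1.0
        if profit <= 50:
            return 0.0
        return (profit - 50.0) / 30.0

    # Bellman-Zadeh: the fuzzy decision is the intersection (min) of goal and
    # constraint memberships; the best alternative maximizes that minimum.
    alternatives = [(95, 60), (105, 75), (115, 82)]  # hypothetical (cost, profit)
    best = max(alternatives, key=lambda a: min(fuzzy_budget(a[0]), fuzzy_goal(a[1])))
    print(best)  # (105, 75): slightly over budget, but the best overall compromise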

    Preferences: Optimization, Importance Learning and Strategic Behaviors

    Get PDF
    Preferences are fundamental to decision making and play an important role in artificial intelligence. Our research focuses on three groups of problems based on the preference formalism Answer Set Optimization (ASO): preference aggregation problems such as computing optimal (or near-optimal) solutions, strategic behaviors in preference representation, and learning ranks (weights) for preferences. In the first group of problems, of interest are optimal outcomes, that is, outcomes that are optimal with respect to the preorder defined by the preference rules. In this work, we consider computational problems concerning optimal outcomes. We propose, implement and study methods to compute an optimal outcome; to compute another optimal outcome once the first one is found; to compute an optimal outcome that is similar to (or dissimilar from) a given candidate outcome; and to compute a set of optimal answer sets, each significantly different from the others. For the decision versions of several of these problems we establish their computational complexity. For the second topic, strategic behaviors such as manipulation and bribery have received much attention from the social choice community. We study these concepts for preference formalisms that identify a set of optimal outcomes rather than a single winning outcome, the case common in social choice. Such preference formalisms are of interest in the context of combinatorial domains, where preference representations are only approximations to true preferences, and seeking a single optimal outcome runs the risk of missing the one which is optimal with respect to the actual preferences. In this work, we assume that preferences may be ranked (differ in importance), and we use the Pareto principle, adjusted to the case of ranked preferences, as the preference aggregation rule. For two important classes of preferences, representing the extreme ends of the spectrum, we characterize the situations in which manipulation and bribery are possible, and establish the complexity of deciding whether they are. Finally, we study the problem of learning the importance of individual preferences in preference profiles aggregated by the ranked Pareto rule or by positional scoring rules. We provide a polynomial-time algorithm that finds a ranking of preferences under which the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples is NP-hard. We obtain similar results for the case of weighted profiles.
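    To illustrate the notion of a set of optimal outcomes, here is a minimal sketch of plain (unranked) Pareto optimality over satisfaction degrees. The outcome names and degrees are hypothetical, and the ASO preorder and the ranked variant studied in the thesis are richer than this.

    def dominates(u, v):
        # Pareto dominance: u is at least as good as v on every preference
        # and strictly better on at least one (larger degree = more satisfied).
        return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

    def pareto_optimal(outcomes):
        # Return the outcomes not dominated by any other outcome. `outcomes`
        # maps an outcome name to its vector of satisfaction degrees, one
        # entry per preference rule.
        return [o for o, u in outcomes.items()
                if not any(dominates(v, u) for p, v in outcomes.items() if p != o)]

    # Three candidate outcomes scored against three preference rules
    # (hypothetical degrees standing in for ASO satisfaction degrees).
    outcomes = {"o1": (2, 1, 3), "o2": (1, 2, 3), "o3": (1, 1, 2)}
    print(pareto_optimal(outcomes))  # ['o1', 'o2'] -- o3 is dominated by both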

    Algorithms for Scheduling Problems

    Get PDF
    This edited book presents new results in the area of algorithm development for different types of scheduling problems. Its eleven chapters give algorithms for single-machine problems, flow-shop and job-shop scheduling problems (including their hybrid (flexible) variants), the resource-constrained project scheduling problem, scheduling problems in complex manufacturing systems and supply chains, and workflow scheduling problems. The chapters address such subjects as insertion heuristics for energy-efficient scheduling, the re-scheduling of train traffic in real time, control algorithms for short-term scheduling in manufacturing systems, bi-objective optimization of tortilla production, scheduling problems with uncertain (interval) processing times, and workflow scheduling for digital signal processor (DSP) clusters, among others.
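    As a taste of the single-machine problem area the book covers, the following sketch implements the classical shortest-processing-time (SPT) rule, which minimizes total completion time on a single machine; it illustrates the problem class and is not code from any chapter.

    def spt_schedule(processing_times):
        # Shortest processing time first (Smith's rule with unit weights):
        # provably minimizes the sum of job completion times on one machine.
        order = sorted(range(len(processing_times)), key=lambda j: processing_times[j])
        completions, t = {}, 0
        for j in order:
            t += processing_times[j]
            completions[j] = t
        return order, completions

    order, completions = spt_schedule([4, 1, 3])
    print(order)                      # [1, 2, 0]
    print(sum(completions.values()))  # 13 = 1 + (1 + 3) + (1 + 3 + 4)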

    Contributions to nonlinear system modelling and controller synthesis via convex structures

    Full text link
    This thesis discusses different modelling methodologies to extract better performance and stability results than conventional sector-nonlinearity Takagi-Sugeno (also known as quasi-LPV) modelling techniques are able to yield. Indeed, even if LMIs can prove various performance and stability bounds (decay rate, $\mathcal{H}_\infty$, etc.) for polytopic systems, it is well known that the proven performance depends on the chosen model and, given a nonlinear dynamic system, the polytopic embeddings available for it are not unique. Thus, explorations of how to obtain the model which is least detrimental to the chosen performance measure are presented. As a last contribution, extending the polytopic Takagi-Sugeno setup to a gain-scheduled quasi-convex difference inclusion framework improves on the results obtainable with polytopic models. Indeed, the non-scheduled convex difference inclusion framework was proposed by a research team at the University of Seville (Fiacchini, Alamo, Camacho) as a generalised modelling methodology which includes the polytopic one; this thesis proposes a further generalised, gain-scheduled version of some of these results.
    Robles Ruiz, R. (2018). Contributions to nonlinear system modelling and controller synthesis via convex structures [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/100848
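    For background on the sector-nonlinearity construction the thesis starts from, the sketch below shows how a bounded scalar nonlinearity is rewritten exactly as a convex combination of its extremes, which is what produces the vertex (Takagi-Sugeno/polytopic) models. The choice z(x) = sin(x) with bounds [-1, 1] is a hypothetical illustration.

    import math

    # Sector-nonlinearity modelling: a nonlinear term z(x) known to stay inside
    # [z_min, z_max] over the operating region is rewritten exactly as
    #     z(x) = w0(x) * z_min + w1(x) * z_max,
    # with convex weights w0 + w1 = 1, w0, w1 >= 0. A Takagi-Sugeno/polytopic
    # model then uses one linear vertex model per extreme, blended by these weights.

    def sector_weights(z, z_min, z_max):
        # Convex weights locating z inside the sector [z_min, z_max].
        w1 = (z - z_min) / (z_max - z_min)
        return 1.0 - w1, w1

    # Hypothetical scalar example: z(x) = sin(x), globally bounded in [-1, 1].
    x = 0.7
    z = math.sin(x)
    w0, w1 = sector_weights(z, -1.0, 1.0)
    assert abs(w0 * -1.0 + w1 * 1.0 - z) < 1e-12  # exact reconstruction
    print(w0, w1)  # both in [0, 1], summing to 1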

    Exploiting Global Constraints for Search and Propagation

    Get PDF
    This thesis focuses on Constraint Programming (CP), an emergent paradigm for solving complex combinatorial optimization problems. The main contributions revolve around constraint filtering and search, the two main components of CP. On one side, constraint filtering reduces the size of the search space; on the other, search defines how this space is explored. Advances on these topics are crucial to broaden the applicability of CP to real-life problems. Concerning constraint filtering, the contribution is twofold. First, we propose an improvement on an existing algorithm for the relaxed version of a constraint that frequently appears in assignment problems (soft gcc). The proposed algorithm outperforms the previously known one in time complexity, both for the consistency check and for the filtering, and in ease of implementation. Second, we introduce a new constraint (in both hard and soft versions) and associated filtering algorithms for a recurrent sub-structure that occurs in assignment problems with heterogeneous resources (hierarchical gcc). We show promising results when compared to an equivalent decomposition based on gcc. Concerning search, we introduce algorithms to count the number of solutions of two important families of constraints: occurrence counting constraints, such as alldifferent, symmetric alldifferent and gcc, and sequencing constraints, such as regular. These algorithms are the building blocks of a new family of search heuristics, called constraint-centered counting-based heuristics. They extract information about the number of solutions the individual constraints admit, to guide search towards parts of the search space that are likely to contain many solutions. Experimental results on eight different problems show impressive performance compared to generic state-of-the-art heuristics. Finally, we experiment with an already known strong form of constraint filtering that is heuristically guided by the search (quick shaving). This technique gives mixed results when applied blindly to any problem. We introduce a simple yet very effective estimator to dynamically enable or disable quick shaving, and experimentally show very promising results.
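    The counting-based idea can be sketched as follows (with brute-force counting for illustration only; the thesis develops efficient counting algorithms for alldifferent, gcc, regular, etc.): estimate, for each variable-value pair, the fraction of a constraint's solutions containing it (its solution density), and branch on the densest pair.

    from itertools import product

    def alldifferent_solutions(domains):
        # Brute-force enumeration of alldifferent solutions (illustration only;
        # the thesis counts these in polynomial time instead).
        return [t for t in product(*domains) if len(set(t)) == len(t)]

    def max_solution_density(domains):
        # Pick the (variable, value) pair appearing in the largest fraction of
        # solutions -- the counting-based "maximum solution density" branching choice.
        sols = alldifferent_solutions(domains)
        if not sols:
            return None, 0.0
        best, best_density = None, -1.0
        for i, dom in enumerate(domains):
            for v in dom:
                density = sum(1 for s in sols if s[i] == v) / len(sols)
                if density > best_density:
                    best, best_density = (i, v), density
        return best, best_density

    domains = [{1, 2}, {1, 2, 3}, {2, 3}]
    print(max_solution_density(domains))  # ((0, 1), 0.666...): x0 = 1 in 2 of 3 solutions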

    Advances and Novel Approaches in Discrete Optimization

    Get PDF
    Discrete optimization is an important area of Applied Mathematics with a broad spectrum of applications in many fields. This book results from a Special Issue of the journal Mathematics entitled 'Advances and Novel Approaches in Discrete Optimization'. It contains 17 articles, selected from 43 submitted papers after a thorough refereeing process, covering a wide range of subjects. Among other topics, it includes seven articles dealing with scheduling problems, e.g., online scheduling, batching, dual and inverse scheduling problems, and uncertain scheduling problems. Other subjects are graphs and applications, evacuation planning, the max-cut problem, capacitated lot-sizing, and packing algorithms.
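    As a concrete instance of one of the topics listed, here is a minimal 1-flip local-search sketch for the max-cut problem (illustrative only, not from any of the articles); such a local optimum is guaranteed to cut at least half of the edges.

    def local_search_maxcut(n, edges):
        # 1-flip local search for max-cut: move any vertex whose side-flip
        # increases the number of cut edges, until no single flip improves.
        side = [i % 2 for i in range(n)]  # arbitrary starting partition
        improved = True
        while improved:
            improved = False
            for v in range(n):
                # gain of flipping v = (uncut incident edges) - (cut incident edges)
                gain = sum(1 if side[u] == side[v] else -1
                           for a, b in edges for u in (a, b)
                           if v in (a, b) and u != v)
                if gain > 0:
                    side[v] = 1 - side[v]
                    improved = True
        cut = sum(1 for a, b in edges if side[a] != side[b])
        return side, cut

    # 5-cycle: the optimum cuts 4 of the 5 edges.
    print(local_search_maxcut(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))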

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications

    Get PDF

    Efficient Maximum A-Posteriori Inference in Markov Logic and Application in Description Logics

    Full text link
    The maximum a-posteriori (MAP) query in statistical relational models computes the most probable world given evidence and further knowledge about the domain. It is arguably one of the most important types of computational problems, since it is also used as a subroutine in weight learning algorithms. In this thesis, we discuss an improved inference algorithm and an application for MAP queries. We focus on Markov logic (ML) as the statistical relational formalism. Markov logic combines Markov networks with first-order logic by attaching weights to first-order formulas. For inference, we improve on existing work which translates MAP queries to integer linear programs (ILPs). The motivation is that existing ILP solvers are very stable and fast and are able to precisely estimate the quality of an intermediate solution. In our work, we focus on improving the translation process so that the resulting ILPs have fewer variables and fewer constraints. Our main contribution is the Cutting Plane Aggregation (CPA) approach, which leverages symmetries in ML networks and parallelizes MAP inference. Additionally, we integrate the cutting plane inference algorithm (Riedel 2008), which significantly reduces the number of groundings by solving multiple smaller ILPs instead of one large ILP. We present the new Markov logic engine RockIt, which outperforms state-of-the-art engines on standard Markov logic benchmarks. Afterwards, we apply the MAP query to description logics. Description logics (DLs) are knowledge representation formalisms whose expressivity is higher than propositional logic but lower than first-order logic. The most popular DLs have been standardized in the ontology language OWL and are an elementary component of the Semantic Web. We combine Markov logic, which essentially follows the semantics of a log-linear model, with description logics into log-linear description logics, in which weights can be attached to any description logic axiom. Furthermore, we introduce a new query type which computes the most probable 'coherent' world. Possible applications of log-linear description logics are mainly located in the areas of ontology learning and data integration. With our novel log-linear description logic reasoner ELog, we experimentally show that more expressivity increases quality, and that optimal solving strategies yield higher-quality solutions than approximate solving strategies.
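    For background on the MAP-to-ILP translation the thesis improves, the sketch below encodes a toy ground Markov logic network as an ILP using the PuLP modelling library: one binary variable per ground atom, one auxiliary indicator per (positively weighted) ground clause. The three clauses and their weights are hypothetical, and RockIt's actual encoding, with cutting plane inference and aggregation, is considerably more involved.

    from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

    # Toy ground network: weighted ground clauses over atoms a, b, c.
    # A clause is (weight, positive_literals, negative_literals); this simple
    # indicator encoding is valid for positive weights.
    clauses = [
        (2.0, ["a"], []),      # 2.0 : a
        (1.5, ["b"], ["a"]),   # 1.5 : a => b  (i.e. ~a v b)
        (0.5, ["c"], ["b"]),   # 0.5 : b => c
    ]

    prob = LpProblem("map_query", LpMaximize)
    atoms = {x: LpVariable(x, cat=LpBinary) for x in "abc"}

    # One indicator per clause; z can be 1 only if some literal satisfies the
    # clause, and the objective rewards satisfied clauses by their weight.
    zs = []
    for i, (w, pos, neg) in enumerate(clauses):
        z = LpVariable(f"z{i}", cat=LpBinary)
        prob += z <= lpSum(atoms[x] for x in pos) + lpSum(1 - atoms[x] for x in neg)
        zs.append((w, z))

    prob += lpSum(w * z for w, z in zs)  # maximize total satisfied weight
    prob.solve()
    print({x: int(value(v)) for x, v in atoms.items()})  # {'a': 1, 'b': 1, 'c': 1}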