12 research outputs found

    Calculation of chemical and phase equilibria

    Bibliography: pages 167-169. The computation of chemical and phase equilibria is an essential aspect of chemical engineering design and development. Important applications range from flash calculations to distillation and pyrometallurgy. Despite the firm theoretical foundations on which the theory of chemical equilibrium is based, two major difficulties prevent the equilibrium state from being determined accurately. The first of these hindrances is the inaccuracy or total absence of pertinent thermodynamic data. The second is the complexity of the required calculation. It is the latter consideration that is the sole concern of this dissertation.
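    The "required calculation" mentioned above has a standard concrete form that is worth keeping in mind. The sketch below is purely illustrative and is not the dissertation's method: it finds the equilibrium composition of a small ideal-gas reacting system by minimising the total Gibbs energy subject to element balances. The species set, the dimensionless standard chemical potentials mu0 and the feed are placeholder values chosen only to make the example run.

        # A minimal sketch, not the dissertation's method: ideal-gas chemical
        # equilibrium by direct Gibbs-energy minimisation subject to element
        # balances.  Species, mu0 values and the feed are illustrative placeholders.
        import numpy as np
        from scipy.optimize import minimize

        species = ["CO", "H2O", "CO2", "H2"]        # water-gas shift system (example)
        mu0 = np.array([-47.0, -95.7, -94.6, 0.0])  # dimensionless mu0/RT, placeholder values
        # element balance matrix A[e, i] = atoms of element e (C, O, H) in species i
        A = np.array([[1, 0, 1, 0],
                      [1, 1, 2, 0],
                      [0, 2, 0, 2]], dtype=float)
        b = A @ np.array([1.0, 1.0, 0.0, 0.0])      # feed: 1 mol CO + 1 mol H2O

        def gibbs(n):
            n = np.maximum(n, 1e-12)                # keep the logarithms defined
            return float(n @ (mu0 + np.log(n / n.sum())))

        res = minimize(gibbs, x0=np.full(4, 0.5), method="SLSQP",
                       bounds=[(1e-12, None)] * 4,
                       constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
        print(dict(zip(species, res.x.round(4))))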

    Analysis of large scale linear programming problems with embedded network structures: Detection and solution algorithms

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Linear programming (LP) models that contain a (substantial) network structure frequently arise in many real-life applications. In this thesis, we investigate two main questions: (i) how an embedded network structure can be detected, and (ii) how the network structure can be exploited to create improved sparse simplex solution algorithms. In order to extract an embedded pure network structure from a general LP problem we develop two new heuristics. The first heuristic is an alternative multi-stage generalised upper bounds (GUB) based approach which finds as many GUB subsets as possible. In order to identify a GUB subset, two different approaches are introduced; the first is based on the notion of the Markowitz merit count and the second exploits an independent set in the corresponding graph. The second heuristic is based on the generalised signed graph of the coefficient matrix. This heuristic determines whether the given LP problem is an entirely pure network, in contrast to all previously known heuristics. Using generalised signed graphs, we prove that the problem of detecting the maximum-size embedded network structure within an LP problem is NP-hard. The two detection algorithms perform very well computationally and make positive contributions to the known body of results for embedded network detection. For computational solution, a decomposition-based approach is presented which solves a network problem with side constraints. In this approach, the original coefficient matrix is partitioned into the network and the non-network parts. For the partitioned problem, we investigate two alternative decomposition techniques, namely Lagrangean relaxation and Benders decomposition. Active variables identified by these procedures are then used to create an advanced basis for the original problem. The computational results of applying these techniques to a selection of Netlib models are encouraging. The development and computational investigation of this solution algorithm constitute a further contribution made by the research reported in this thesis. This study is funded by the Turkish Educational Council and Mugla University.
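    For intuition about what the detection heuristics are looking for, the sketch below (not the thesis's algorithms) only tests whether an LP coefficient matrix is already a pure-network, i.e. node-arc incidence, matrix: every column has exactly one +1, one -1 and no other nonzeros. The signed-graph heuristic described above goes much further, handling row and column reflections and extracting a largest embedded submatrix of this form; the example matrix here is illustrative.

        # A minimal check, not the thesis's heuristic: is A a node-arc incidence
        # matrix (exactly one +1 and one -1 in every column)?
        import numpy as np

        def is_pure_network(A: np.ndarray) -> bool:
            for col in A.T:
                nonzeros = col[col != 0]
                if len(nonzeros) != 2 or sorted(nonzeros) != [-1.0, 1.0]:
                    return False
            return True

        A = np.array([[ 1.0,  1.0,  0.0],
                      [-1.0,  0.0,  1.0],
                      [ 0.0, -1.0, -1.0]])
        print(is_pure_network(A))   # True: incidence matrix of a 3-node, 3-arc network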

    Market-based transmission congestion management using extended optimal power flow techniques

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 5/9/2001. This thesis describes research into the problem of transmission congestion management. The causes, remedies, pricing methods, and other issues of transmission congestion are briefly reviewed. The aim of this research is to develop market-based approaches that cope with transmission congestion in real time, in the short run and in the long run efficiently, economically and fairly. Extended OPF techniques play key roles in many aspects of electricity markets. Primal-dual interior point linear programming and quadratic programming are applied to solve the various congestion management optimization problems proposed in the thesis. A coordinated real-time optimal dispatch method for unbundled electricity markets is proposed for system balancing and congestion management. With this method, almost all the possible resources in different electricity markets, including operating reserves and bilateral transactions, can be used to eliminate real-time congestion according to their bids into the balancing market. Spot pricing theory is applied to real-time congestion pricing. Under the same framework, a Lagrangian relaxation based region decomposition OPF algorithm is presented to deal with the problems of real-time active power congestion management across multiple regions. Inter- and intra-regional congestion can be relieved without exchanging any information between regional ISOs other than the Lagrangian multipliers. In the day-ahead spot market, a new optimal dispatch method is proposed for congestion and price risk management, particularly for bilateral transaction curtailment. Individual revenue adequacy constraints, which include payments from financial instruments, are incorporated into the original dispatch problem. An iterative procedure is applied to solve this special optimization problem, which has both primal and dual variables in its constraints. An optimal Financial Transmission Rights (FTR) auction model is presented as an approach to long-term congestion management. Two types of series FACTS devices are incorporated into this auction problem using the power injection model to maximize the auction revenue. New treatment of the TCSC's operating limits keeps the auction problem linear.
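    As a toy illustration of how congestion enters a market dispatch (and not one of the thesis's models), the sketch below solves a two-bus economic dispatch LP with a single transmission limit using scipy's linprog; the dual of the binding line constraint then acts as a congestion price, in the spirit of spot pricing. The offer prices, load and line limit are made-up numbers.

        # A minimal sketch: two-bus dispatch with one congested line, illustrative data.
        from scipy.optimize import linprog

        c    = [10.0, 30.0]                    # $/MWh offers of generators at bus 1 and bus 2
        A_eq = [[1.0, 1.0]]; b_eq = [100.0]    # power balance: g1 + g2 = 100 MW load at bus 2
        A_ub = [[1.0, 0.0]]; b_ub = [60.0]     # line 1->2 carries g1, limited to 60 MW
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0.0, 100.0)] * 2, method="highs")
        print("dispatch (MW):", res.x)                                   # [60, 40]
        print("congestion price ($/MWh):", -res.ineqlin.marginals[0])    # 20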

    User-Oriented Methodology and Techniques of Decision Analysis and Support

    This volume contains 26 papers selected from Workshop presentations. The book is divided into two sections; the first is devoted to the methodology of decision analysis and support and related theoretical developments, and the second reports on the development of tools -- algorithms, software packages -- for decision support, as well as on their applications. Several major contributions -- on constructing user interfaces, on organizing intelligent DSS, and on modifying theory and tools in response to user needs -- are included in this volume.

    Exact and heuristic methods for statistical tabular data protection

    One of the main purposes of National Statistical Agencies (NSAs) is to provide citizens and researchers with a large amount of trustworthy, high-quality statistical information. NSAs must guarantee that no confidential individual information can be obtained from the released statistical outputs. The discipline of statistical disclosure control (SDC) aims to prevent confidential information from being derived from released data while, at the same time, maintaining as much data utility as possible. NSAs work with two types of data: microdata and tabular data. Microdata files contain records of individuals or respondents (persons or enterprises) with attributes. For instance, a national census might collect attributes such as age, address, salary, etc. Tabular data contains aggregated information obtained by crossing one or more categorical variables from those microdata files. Several SDC methods are available to guarantee that no confidential individual information can be obtained from the released microdata or tabular data. This thesis focuses on tabular data protection, although the research carried out can be applied to other classes of problems. Controlled Tabular Adjustment (CTA) and the Cell Suppression Problem (CSP) have concentrated most of the recent research in the tabular data protection field. Both methods formulate mixed integer linear programming problems (MILPs) which are challenging for tables of moderate size; even finding a feasible initial solution may be a challenging task for large instances. Because many end users give priority to fast execution and are thus satisfied, in practice, with suboptimal solutions, the first result of this thesis is an improvement of a known and successful heuristic for finding feasible solutions of MILPs, called the feasibility pump. The new approach, based on the computation of analytic centers, is named the Analytic Center Feasibility Pump. The second contribution consists of the application of the fix-and-relax heuristic (FR) to the CTA method. FR (alone or in combination with other heuristics) is shown to be competitive with CPLEX branch-and-cut in terms of quickly finding either a feasible solution or a good upper bound. The last contribution of this thesis deals with general Benders decomposition, which is improved with the application of stabilization techniques. A stabilized Benders decomposition is presented which focuses on finding new solutions in the neighborhood of "good" points. This approach is efficiently applied to the solution of realistic and real-world CSP instances, outperforming alternative approaches. The first two contributions have already been published in indexed journals (Operations Research Letters and Computers and Operations Research); the third is a working paper to be submitted soon.
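    For readers unfamiliar with the heuristic named above, the sketch below shows the basic feasibility pump of Fischetti, Glover and Lodi for a tiny 0-1 MILP; it is not the analytic-center variant developed in the thesis, and the constraint data are invented. The method alternates between an LP-feasible point and its rounding, re-solving the LP to minimise the L1 distance to the last rounding (the anti-cycling perturbations of the full method are omitted).

        # A minimal sketch of the basic feasibility pump for a 0-1 MILP; illustrative data.
        import numpy as np
        from scipy.optimize import linprog

        A_ub = np.array([[ 3.0,  2.0,  4.0],     # 3x1 + 2x2 + 4x3 <= 5
                         [-1.0, -1.0, -1.0]])    # x1 + x2 + x3 >= 2
        b_ub = np.array([5.0, -2.0])
        bounds = [(0.0, 1.0)] * 3                # variables are binary in the MILP

        x = linprog(np.zeros(3), A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs").x
        for _ in range(50):
            x_round = np.round(x)
            if np.all(A_ub @ x_round <= b_ub + 1e-9):          # rounding is MILP-feasible
                print("feasible 0-1 point:", x_round)
                break
            # |x_j - x_round_j| equals x_j when rounded to 0 and 1 - x_j when rounded to 1,
            # so minimising the L1 distance is a linear objective over the LP relaxation
            c = np.where(x_round == 0, 1.0, -1.0)
            x = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs").x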

    Auction algorithms for generalized nonlinear network flow problems

    Thesis (Ph.D.)--Boston University. Network flow is an area of optimization theory concerned with optimization over networks, with applications in fields such as computer networks, manufacturing, finance, scheduling and routing, telecommunications, and transportation. In both linear and nonlinear networks, a family of primal-dual algorithms based on "approximate" complementary slackness (ε-CS) is among the fastest in centralized and distributed environments. These include the auction algorithm for the linear assignment/transportation problems, and ε-relaxation and Auction/Sequential Shortest Path (ASSP) for the min-cost flow and max-flow problems. Within this family, the auction algorithm is particularly fast, as it uses "second best" information, compared with using the more generic ε-relaxation for linear assignment/transportation. Inspired by the success of auction algorithms, we extend them to two important classes of nonlinear network flow problems. We start with the nonlinear Resource Allocation Problem (RAP). This problem consists of optimally assigning N divisible resources to M competing missions/tasks, each with its own utility function. This simple yet powerful framework has found applications in diverse fields such as finance, economics, logistics, and sensor and wireless networks. RAP is an instance of the generalized network flow problem (networks with arc gains), but it has significant special structure analogous to the assignment/transportation problem. We develop a class of auction algorithms for RAP: a finite-time auction algorithm for both synchronous and asynchronous environments, followed by a combination of forward and reverse auction with ε-scaling to achieve pseudo-polynomial complexity for any non-increasing generalized convex utilities, including non-continuous and/or non-differentiable functions. These techniques are then generalized to handle shipping costs on allocations. Lastly, we demonstrate how these techniques can be used to solve a dynamic RAP where nodes may appear or disappear over time. In the later part of the thesis, we consider the convex nonlinear min-cost flow problem. Although ε-relaxation and ASSP are among the fastest available techniques here, we illustrate how nonlinear costs, as opposed to linear ones, introduce a significant bottleneck on the progress these algorithms make per iteration. We then extend the core idea of the auction algorithm, the use of second-best information to make aggressive steps, to overcome this bottleneck and hence develop a faster version of ε-relaxation. This new algorithm shares the same theoretical complexity as the original but outperforms it in our numerical experiments based on random test problem suites.
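    The classic linear auction algorithm that serves as the starting point above can be stated in a few lines. The sketch below is a generic textbook version, not the thesis's nonlinear extensions: unassigned persons repeatedly bid for their best object, raising its price by the margin over the second-best object plus ε, which maintains ε-complementary slackness. The benefit matrix is made up.

        # A minimal sketch of the auction algorithm for the n x n assignment problem.
        import numpy as np

        def auction_assignment(benefit, eps=1e-3):
            n = benefit.shape[0]
            prices = np.zeros(n)
            owner = [-1] * n                        # owner[j] = person currently holding object j
            assigned = [None] * n                   # assigned[i] = object held by person i
            while None in assigned:
                i = assigned.index(None)            # any unassigned person bids
                values = benefit[i] - prices        # net value of each object for person i
                j = int(np.argmax(values))
                second, best = np.partition(values, -2)[-2:]
                prices[j] += best - second + eps    # bid increment: the "second best" rule
                if owner[j] != -1:
                    assigned[owner[j]] = None       # previous owner is outbid
                owner[j], assigned[i] = i, j
            return assigned, prices

        benefit = np.array([[10.0, 6.0, 2.0],
                            [ 8.0, 9.0, 3.0],
                            [ 7.0, 5.0, 4.0]])
        print(auction_assignment(benefit))          # assignment [0, 1, 2] is optimal here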

    Network Flows

    Not available.

    Inductive learning of tree-based regression models

    Doctoral dissertation in Computer Science presented to the Faculdade de Ciências da Universidade do Porto. This thesis explores different aspects of the methodology for inducing regression trees from data samples. The main goal of this study is to improve the predictive accuracy of regression trees while maintaining, as far as possible, their comprehensibility and computational efficiency. Our study of this type of regression model is divided into three main parts. The first part describes in detail two methodologies for growing regression trees: one that minimises the mean squared error and another that minimises the mean absolute deviation. The analysis presented focuses primarily on the computational efficiency of the tree-growing process. Several new algorithms are presented that yield significant gains in computational efficiency. Finally, an experimental comparison of the two alternative methodologies is presented, clearly showing the different practical objectives of each. Pruning regression trees is a standard procedure in this type of methodology, whose main goal is to provide a better trade-off between the simplicity and comprehensibility of the trees and their predictive accuracy. The second part of this dissertation describes a series of new pruning techniques based on selecting from a set of alternative pruned trees. We also present an extensive set of experiments comparing different methods of pruning regression trees. The results of this comparison, carried out on a large set of problems, show that our pruning techniques achieve predictive accuracy significantly better than that obtained by current state-of-the-art methods. The final part of this dissertation presents a new type of tree, which we call local regression trees. These hybrid models result from the integration of regression trees with modelling techniques ...
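    To make the tree-growing step concrete, the sketch below grows a least-squares regression tree by exhaustively searching, at each node, for the split that minimises the summed squared error of the two children, stopping at a depth limit. It is an illustrative baseline only, not the thesis's algorithms (which also cover least-absolute-deviation trees, fast split-search methods and pruning by tree selection), and the data are random.

        # A minimal least-squares regression tree, for illustration only.
        import numpy as np

        def sse(y):
            return float(((y - y.mean()) ** 2).sum()) if len(y) else 0.0

        def grow(X, y, depth=0, max_depth=3, min_leaf=5):
            if depth == max_depth:
                return float(y.mean())                      # leaf: predict the node mean
            best = None
            for j in range(X.shape[1]):                     # try every feature and threshold
                for t in np.unique(X[:, j])[:-1]:
                    left = X[:, j] <= t
                    if min_leaf <= left.sum() <= len(y) - min_leaf:
                        err = sse(y[left]) + sse(y[~left])
                        if best is None or err < best[0]:
                            best = (err, j, t, left)
            if best is None:
                return float(y.mean())
            _, j, t, left = best
            return (j, t, grow(X[left], y[left], depth + 1, max_depth, min_leaf),
                          grow(X[~left], y[~left], depth + 1, max_depth, min_leaf))

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(200, 2))
        y = np.where(X[:, 0] > 0.5, 3.0, 1.0) + rng.normal(scale=0.1, size=200)
        print(grow(X, y))                                   # nested (feature, threshold, ...) tuples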