Applications of Mathematical Programming in Personnel Scheduling
In the few decades of its existence, mathematical programming has evolved into an important branch of operations research and management science. This thesis consists of four papers in which we apply mathematical programming to real-life personnel scheduling and project management problems. We develop exact mathematical programming formulations. Furthermore, we propose effective heuristic strategies to decompose the original problems into subproblems that can be solved efficiently with tailored mathematical programming formulations. We opt for solution methods based on mathematical programming because their advantages in practice are a) the flexibility to easily accommodate changes in the problem setting, b) the possibility to evaluate the quality of the solutions obtained, and c) the possibility to use general-purpose solvers, which are often the only software available in practice.
Impartiality, Solidarity, and Priority in the Theory of Justice
The veil of ignorance has often been used as a tool for recommending what justice requires with respect to the distribution of wealth. We show that John Harsanyi’s and Ronald Dworkin’s conceptions of the veil, when modeled formally, recommend wealth allocations in conflict with the prominently espoused view that priority should be given to the worse off with respect to wealth allocation. It follows that those who believe that justice requires impartiality and priority must seek some method other than the veil of ignorance of assuring the former. We propose that impartiality and solidarity are fundamentals of justice, and study the relationship among these two axioms and priority. We axiomatically characterize resource allocation rules that jointly satisfy impartiality, solidarity, and priority: they comprise a class of general indices of wealth and welfare, including, as polar cases, the classical equal-wealth and equal-welfare rules.
Sequencing mixed-model assembly lines in just-in-time production systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This thesis proposes a new simulated annealing approach to solving multiple-objective sequencing problems in mixed-model assembly lines. Mixed-model assembly lines are a type of production line on which a variety of product models similar in product characteristics are assembled. Such assembly lines are increasingly adopted in industry to cope with the recently observed trend towards diversification of customer demands.
Sequencing problems are central to the efficient use of mixed-model assembly lines, and there is a rich set of criteria on which to judge sequences of product models in terms of line utilization. We consider three practically important objectives: minimizing usage variation, that is, the deviation of actual production from the desired production mix; smoothing the workload in order to reduce the chance of production delays and line stoppages; and minimizing total setup cost. A considerate line manager would take all of these factors into account. The multiple-objective sequencing problem is described and its mathematical formulation is provided. Simulated annealing algorithms are designed to find optimal or near-optimal solutions and to trace an efficient frontier of sequences for the problem; they work efficiently and find good solutions in a very short time, even for large problem instances.
This approach combines the SA methodology with a specific neighborhood search, which in this study swaps the positions of two models in the sequence. Two annealing methods are proposed based on this approach, differing only in their cooling and freezing schedules.
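The combination of a usage-variation objective with a two-position swap neighborhood can be sketched as follows. This is a generic illustration, not the thesis's exact algorithm: the `usage_variation` objective, the parameter values, and the model names are all illustrative assumptions, and a single geometric cooling schedule stands in for the two schedules the thesis compares.

```python
import math
import random

def usage_variation(seq, demand):
    """Toy usage-variation objective: squared deviation of cumulative
    production from the ideal product mix at every position."""
    total = sum(demand.values())
    counts = {m: 0 for m in demand}
    cost = 0.0
    for t, model in enumerate(seq, start=1):
        counts[model] += 1
        for m in demand:
            cost += (counts[m] - t * demand[m] / total) ** 2
    return cost

def anneal(seq, demand, temp=10.0, cooling=0.95, freeze=1e-3, iters=100):
    """Simulated annealing with a two-position swap neighborhood and a
    geometric cooling schedule; stops when the temperature freezes."""
    random.seed(0)
    cur, cur_cost = list(seq), usage_variation(seq, demand)
    best, best_cost = cur, cur_cost
    while temp > freeze:
        for _ in range(iters):
            i, j = random.sample(range(len(cur)), 2)
            cand = list(cur)
            cand[i], cand[j] = cand[j], cand[i]  # swap two positions
            delta = usage_variation(cand, demand) - cur_cost
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                cur, cur_cost = cand, cur_cost + delta
                if cur_cost < best_cost:
                    best, best_cost = list(cur), cur_cost
        temp *= cooling
    return best, best_cost

demand = {"A": 4, "B": 2, "C": 2}           # demand per model
start = ["A"] * 4 + ["B"] * 2 + ["C"] * 2   # naive blocked sequence
seq, cost = anneal(start, demand)
```

Accepting uphill swaps with probability exp(-delta / temp) is what lets the search escape local optima early on, while the cooling schedule makes it increasingly greedy.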
This research used correlation analysis to describe the degree of relationship between the results obtained by method B and those of other heuristic methods; to assess the quality of the algorithm, an ANOVA of the output is constructed to analyse and evaluate the accuracy and the CPU time taken to determine optimal or near-optimal solutions. This work was supported by the Ministry of Culture and Higher Education of the Islamic Republic of Iran.
Region based gene expression via reanalysis of publicly available microarray data sets.
A DNA microarray is a high-throughput technology used to identify relative gene expression. One of the most widely used platforms is the Affymetrix® GeneChip® technology, which detects gene expression levels based on probe sets composed of twenty-five-nucleotide probes designed to hybridize with specific gene targets. Given a particular Affymetrix® GeneChip® platform, the design of the probes is fixed. However, the method of analysis is dynamic in nature due to the ability to annotate and group probes into uniquely defined groupings. This is particularly important since publicly available repositories of microarray datasets, such as ArrayExpress and NCBI’s Gene Expression Omnibus (GEO), have made millions of samples readily available to be reanalyzed computationally without the need for new biological experiments. One way in which the analysis can dynamically change is by correcting the mapping between probe sets and targets by creating custom Chip Description Files (CDFs) to arrange which probes belong to which probe set based on the latest genomic information or specific annotations of interest. Since default probe sets in Affymetrix® GeneChip® platforms are specific for a gene, transcript or exon, the analyses are then limited to profiling differential expression at the gene, transcript or individual exon level. However, it has been shown that untranslated regions (UTRs) of mRNA have important impacts on the regulation of proteins. We therefore developed a new probe mapping protocol that addresses three issues of Affymetrix® GeneChip® data analyses: removing nonspecific probes, updating probe target mapping based on the latest genome information, and grouping the probes into region (UTR, individual exon), gene and transcript level targets of interest to support a better understanding of the effect of UTRs and individual exons on gene expression levels. Furthermore, we developed an R package, affyCustomCdf, for users to dynamically create custom CDFs.
The affyCustomCdf tool takes annotations in a General/Gene Transfer Format File (GTF), aligns probes to gene annotations via Nested Containment List (NCList) indexing and generates a custom Chip Description File (CDF) to regroup probes into probe sets based on a region (UTR and individual exon), transcript or gene level. Our results indicate that removing probes that no longer align to the genome without mismatches or that align to multiple locations can help to reduce false-positive differential expression, as can removal of probes in regions overlapping multiple genes. Moreover, our method based on regions can detect changes that would have been missed by analysis at the gene and transcript level. It also allows for a better understanding of 3’ UTR dynamics through the reanalysis of publicly available data.
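The filtering-and-regrouping step can be conveyed with a toy sketch. The probe and gene names and the flat `alignments` structure below are hypothetical; the real affyCustomCdf tool works from GTF annotations and NCList indexing rather than a precomputed hit list.

```python
from collections import defaultdict

# probe -> list of (gene, region) hits after re-alignment (toy data;
# real input comes from GTF annotations and genome alignments)
alignments = {
    "p1": [("GENE1", "exon1")],
    "p2": [("GENE1", "3UTR")],
    "p3": [("GENE1", "exon1"), ("GENE2", "exon2")],  # multi-hit: drop
    "p4": [("GENE2", "exon2")],
    "p5": [],                                        # unaligned: drop
}

def build_probe_sets(alignments):
    """Keep only probes with a single, unambiguous hit and regroup
    them into region-level probe sets keyed by (gene, region)."""
    probe_sets = defaultdict(list)
    for probe, hits in alignments.items():
        if len(hits) == 1:                # unique location, single gene
            probe_sets[hits[0]].append(probe)
    return dict(probe_sets)

probe_sets = build_probe_sets(alignments)
```

Keying probe sets by (gene, region) rather than by gene alone is what enables the region-level (UTR, individual exon) analyses described above.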
Designing and Optimizing Matching Markets
Matching market design studies the fundamental problem of how to allocate scarce resources to individuals with varied needs. In recent years, the theoretical study of matching markets such as medical residency, public housing and school choice has greatly informed and improved the design of such markets in practice. Impactful work in matching market design frequently makes use of techniques from computer science, economics and operations research to provide end-to-end solutions that address design questions holistically. In this dissertation, I develop tools for optimization in market design by studying matching mechanisms for school choice, an important societal problem that exemplifies many of the challenges in effective marketplace design.
In the first part of this work I develop frameworks for optimization in school choice that allow us to address operational problems in the assignment process. In the school choice market, where scarce public school seats are assigned to students, a key operational issue is how to reassign seats that are vacated after an initial round of centralized assignment. We propose a class of reassignment mechanisms, the Permuted Lottery Deferred Acceptance (PLDA) mechanisms, which generalize the commonly used Deferred Acceptance school choice mechanism and retain its desirable incentive and efficiency properties. We find that under natural conditions on demand all PLDA mechanisms achieve equivalent allocative welfare, and the PLDA based on reversing the tie-breaking lottery during the reassignment round minimizes reassignment. Empirical investigations on data from NYC high school admissions support our theoretical findings.

In this part, we also provide a framework for optimization when using the prominent Top Trading Cycles (TTC) mechanism. We show that the TTC assignment can be described by admission cutoffs, which explain the role of priorities in determining the TTC assignment and can be used to tractably analyze TTC. In a large-scale continuum model we show how to compute these cutoffs directly from the distribution of preferences and priorities, providing a framework for evaluating policy choices. As an application of the model we solve for optimal investment in school quality under choice and find that an egalitarian distribution can be more efficient as it allows students to choose schools based on idiosyncrasies in their preferences.
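The Deferred Acceptance mechanism that the PLDA family generalizes can be sketched in a few lines. This is the textbook student-proposing variant on a toy instance (all names and data are illustrative), not the authors' reassignment mechanism; the lotteries the PLDAs permute would enter through the priority orders.

```python
def deferred_acceptance(prefs, priorities, capacity):
    """Student-proposing Deferred Acceptance.
    prefs:      student -> ordered list of acceptable schools
    priorities: school  -> students by priority (lottery ties broken)
    capacity:   school  -> number of seats"""
    rank = {c: {s: r for r, s in enumerate(order)}
            for c, order in priorities.items()}
    next_choice = {s: 0 for s in prefs}
    held = {c: [] for c in capacity}      # tentatively accepted students
    free = list(prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(prefs[s]):
            continue                      # list exhausted: unassigned
        c = prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda st: rank[c][st])
        if len(held[c]) > capacity[c]:
            free.append(held[c].pop())    # reject lowest-priority student
    return {s: c for c, students in held.items() for s in students}

prefs = {"s1": ["A", "B"], "s2": ["A", "C"], "s3": ["B", "C"]}
priorities = {"A": ["s2", "s1", "s3"],
              "B": ["s1", "s3", "s2"],
              "C": ["s1", "s2", "s3"]}
capacity = {"A": 1, "B": 1, "C": 1}
assignment = deferred_acceptance(prefs, priorities, capacity)
```

Because acceptances stay tentative until the algorithm terminates, the final matching is stable: no student prefers a school that would also prefer them.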
In the second part of this work, I consider the role of a marketplace as an information provider and explore how mechanisms affect information acquisition by agents in matching markets. I provide a tractable “Pandora's box” model where students hold a prior over their value for each school and can pay an inspection cost to learn their realized value. The model captures how students’ decisions to acquire information depend on priors and market information, and can rationalize a student’s choice to remain partially uninformed. In such a model students need market information in order to optimally acquire their personal preferences, and students benefit from waiting for the market to resolve before acquiring information. We extend the definition of stability to this partial information setting and define regret-free stable outcomes, where the matching is stable and each student has acquired the same information as if they had waited for the market to resolve. We show that regret-free stable outcomes have a cutoff characterization, and the set of regret-free stable outcomes is a non-empty lattice. However, there is no mechanism that always produces a regret-free stable matching, as there can be information deadlocks where every student finds it suboptimal to be the first to acquire information. In settings with sufficient information about the distribution of preferences, we provide mechanisms that exploit the cutoff structure to break the deadlock and approximately implement a regret-free stable matching.
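In the classic Pandora's box setting that this model builds on, inspection decisions are governed by reservation values: a box (here, a school) is characterized by the value z solving cost = E[max(v − z, 0)]. The sketch below computes z for a discrete prior by bisection; it follows Weitzman's standard index and is an illustration, not necessarily the dissertation's exact formulation.

```python
def reservation_value(values, probs, cost, tol=1e-9):
    """Reservation value z solving cost = E[max(v - z, 0)] for a
    discrete prior over values, found by bisection (the expected
    excess on the left is continuous and decreasing in z)."""
    def excess(z):
        return sum(p * max(v - z, 0.0) for v, p in zip(values, probs))
    lo, hi = min(values) - cost, max(values)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if excess(mid) > cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A school worth 0 or 100 with equal probability, inspection cost 10:
# z solves 0.5 * (100 - z) = 10, i.e. z = 80.
z = reservation_value([0, 100], [0.5, 0.5], 10)
```

Intuitively, a student should inspect a school only while its reservation value exceeds the best value already in hand, which is why priors and market information jointly drive acquisition decisions.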
Preferences: Optimization, Importance Learning and Strategic Behaviors
Preferences are fundamental to decision making and play an important role in artificial intelligence. Our research focuses on three groups of problems based on the preference formalism Answer Set Optimization (ASO): preference aggregation problems such as computing optimal (or near-optimal) solutions, strategic behaviors in preference representation, and learning ranks (weights) for preferences.
In the first group of problems, of interest are optimal outcomes, that is, outcomes that are optimal with respect to the preorder defined by the preference rules. In this work, we consider computational problems concerning optimal outcomes. We propose, implement and study methods to compute an optimal outcome; to compute another optimal outcome once the first one is found; to compute an optimal outcome that is similar to (or, dissimilar from) a given candidate outcome; and to compute a set of optimal answer sets each significantly different from the others. For the decision version of several of these problems we establish their computational complexity.
For the second topic, we study strategic behaviors such as manipulation and bribery, which have received much attention from the social choice community. We study these concepts for preference formalisms that identify a set of optimal outcomes rather than a single winning outcome, as is common in social choice. Such preference formalisms are of interest in the context of combinatorial domains, where preference representations are only approximations to true preferences, and seeking a single optimal outcome runs the risk of missing the one that is optimal with respect to the actual preferences. In this work, we assume that preferences may be ranked (differ in importance), and we use the Pareto principle, adjusted to the case of ranked preferences, as the preference aggregation rule. For two important classes of preferences, representing the extreme ends of the spectrum, we characterize the situations in which manipulation and bribery are possible, and establish the complexity of deciding whether they are.
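One plausible reading of a Pareto principle adjusted to ranked preferences can be sketched as follows. This is an illustrative interpretation, not necessarily the dissertation's exact definition: outcomes are compared group by group from the most important rank down, and an outcome is dominated if, at the first rank where the two differ, it is weakly worse on every preference in that group.

```python
def dominates(a, b, ranked_prefs):
    """a dominates b if, at the most important rank where their
    satisfaction vectors differ, a is weakly better on every
    preference (hence strictly better on at least one)."""
    for group in ranked_prefs:              # most important rank first
        sat_a = [pref(a) for pref in group]
        sat_b = [pref(b) for pref in group]
        if sat_a == sat_b:
            continue
        return all(x >= y for x, y in zip(sat_a, sat_b))
    return False                            # identical at every rank

def optimal_outcomes(outcomes, ranked_prefs):
    """Outcomes not dominated by any other outcome."""
    return [o for o in outcomes
            if not any(dominates(p, o, ranked_prefs)
                       for p in outcomes if p != o)]

# Rank 1: prefer odd outcomes; rank 2 (tie-breaker): prefer smaller ones.
ranked = [[lambda o: o % 2], [lambda o: -o]]
best = optimal_outcomes([0, 1, 2, 3], ranked)
```

Note that this rule yields a set of optimal outcomes rather than a single winner, which is exactly the setting in which the manipulation and bribery questions above are posed.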
Finally, we study the problem of learning the importance of individual preferences in preference profiles aggregated by the ranked Pareto rule or by positional scoring rules. We provide a polynomial-time algorithm that finds a ranking of preferences under which the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples is NP-hard. We obtain similar results for the case of weighted profiles.
Efficient Storage of Genomic Sequences in High Performance Computing Systems
In this dissertation, we address the challenges of genomic data storage in high performance computing systems. In particular, we focus on developing a referential compression approach for Next Generation Sequencing data stored in FASTQ format files. The amount of genomic data available for researchers to process has increased exponentially, bringing enormous challenges for its efficient storage and transmission. General-purpose compressors can only offer limited performance for genomic data, hence the need for specialized compression solutions. Two trends have emerged as alternatives to harness the particular properties of genomic data: non-referential and referential compression. Non-referential compressors offer higher compression ratios than general-purpose compressors, but still below what a referential compressor could theoretically achieve. However, the effectiveness of referential compression depends on selecting a good reference and on having enough computing resources available. This thesis presents one of the first referential compressors for FASTQ files. We first present a comprehensive analytical and experimental evaluation of the most relevant tools for genomic raw data compression, which led us to identify the main needs and opportunities in this field. As a consequence, we propose a novel compression workflow that aims at improving the usability of referential compressors. Subsequently, we discuss the implementation and performance evaluation of the core of the proposed workflow: a referential compressor for reads in FASTQ format that combines local read-to-reference alignments with a specialized binary-encoding strategy. The compression algorithm, named UdeACompress, achieved very competitive compression ratios when compared to the best compressors in the current state of the art, while showing reasonable execution times and memory use.
In particular, UdeACompress outperformed all competitors when compressing long reads, typical of the newest sequencing technologies. Finally, we study the main aspects of data-level parallelism in the Intel AVX-512 architecture in order to develop a parallel version of the UdeACompress algorithms and reduce the runtime. Through the use of SIMD programming, we managed to significantly accelerate the main bottleneck found in UdeACompress, the suffix array construction.
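The core idea of referential read compression can be conveyed with a toy encoder that stores each read as an offset into the reference plus a list of mismatching bases. This is purely illustrative: UdeACompress additionally performs local alignment, binary packing, and handling of real FASTQ records (identifiers and quality scores).

```python
def encode_read(read, reference):
    """Store a read as (offset, [(pos, base), ...]): the alignment
    offset with the fewest mismatches plus the mismatching bases."""
    best = None
    for off in range(len(reference) - len(read) + 1):
        mism = [(i, b) for i, b in enumerate(read)
                if reference[off + i] != b]
        if best is None or len(mism) < len(best[1]):
            best = (off, mism)
    return best

def decode_read(enc, reference, length):
    """Reconstruct the read from its referential encoding."""
    off, mism = enc
    bases = list(reference[off:off + length])
    for i, b in mism:
        bases[i] = b
    return "".join(bases)

reference = "ACGTACGTACGT"
read = "ACGTTCGT"                   # one SNP relative to the reference
enc = encode_read(read, reference)  # (0, [(4, "T")])
```

When reads match the reference closely, an offset plus a short mismatch list is far smaller than the raw bases, which is why a well-chosen reference is decisive for compression ratio.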
Optimization for Decision Making II
In the current context of the electronic governance of society, both administrations and citizens demand greater participation of all the actors involved in the decision-making processes relative to the governance of society. This book presents the collective works published in the recent Special Issue (SI) entitled “Optimization for Decision Making II”. These works respond to the new challenges raised: the decision-making process can be carried out by applying different methods and tools and by pursuing different objectives. In real-life problems, the formulation of decision-making problems and the application of optimization techniques to support decisions are particularly complex, and a wide range of optimization techniques and methodologies are used to minimize risks, improve quality in making decisions or, in general, to solve problems. In addition, a sensitivity or robustness analysis should be performed to validate or analyze the influence of uncertainty on decision-making. This book brings together a collection of inter-/multi-disciplinary works applied to the optimization of decision making in a coherent manner.
Real-time algorithm configuration
This dissertation presents a number of contributions to the field of algorithm configuration. In particular, we present an extension to the algorithm configuration problem, real-time algorithm configuration, where configuration occurs online on a stream of instances, without the need for prior training, and problem solutions are returned in the shortest time possible. We propose a framework for solving the real-time algorithm configuration problem, ReACT. With ReACT we demonstrate that, by using the parallel computing architectures commonplace in many systems today and a robust aggregate ranking system, configuration can occur without any impact on performance from the perspective of the user. This is achieved by means of a racing procedure. We show two concrete instantiations of the framework, and show them to be on a par with, or even to exceed, the state of the art in offline algorithm configuration using empirical evaluations on a range of combinatorial problems from the literature.
We discuss, assess, and provide justification for each of the components used in our framework instantiations. Specifically, we show that the TrueSkill ranking system, commonly used to rank players’ skill in multiplayer games, can be used to accurately estimate the quality of an algorithm’s configuration using only censored results from races between algorithm configurations. We confirm that the order in which problem instances arrive influences the configuration performance, and that the optimal selection of configurations to participate in races depends on the distribution of the incoming instance stream. We outline how to maintain a pool of quality configurations by removing underperforming configurations, and techniques to generate replacement configurations with minimal computational overhead. Finally, we show that the configuration space can be reduced using feature selection techniques from the machine learning literature, and that doing so can provide a boost in configuration performance.
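The race-then-prune loop can be sketched schematically. All names, data, and thresholds below are illustrative assumptions, and raw win rates stand in for the TrueSkill-based ranking the dissertation actually uses.

```python
def race(configs, run, instance):
    """Race every configuration on one instance;
    run(config, instance) -> runtime (lowest runtime wins)."""
    return {c: run(c, instance) for c in configs}

def update_pool(configs, wins, races, min_win_rate, new_config):
    """Drop configurations whose share of race wins falls below a
    threshold and refill the pool with freshly generated ones."""
    pool = [c for c in configs if wins.get(c, 0) / races >= min_win_rate]
    while len(pool) < len(configs):
        pool.append(new_config())
    return pool

# Configurations modeled as their per-instance slowdown factor.
configs, wins = [1, 2, 3], {}
for inst in range(1, 11):
    times = race(configs, lambda c, i: c * i, inst)
    winner = min(times, key=times.get)
    wins[winner] = wins.get(winner, 0) + 1
pool = update_pool(configs, wins, races=10, min_win_rate=0.2,
                   new_config=lambda: 9)
```

Because every configuration in a race is stopped once a winner finishes, only censored runtimes are observed for the losers, which is precisely why a ranking system that tolerates censored results matters.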