An Improved Bound for First-Fit on Posets Without Two Long Incomparable Chains
It is known that the First-Fit algorithm for partitioning a poset P into
chains uses relatively few chains when P does not have two incomparable chains
each of size k. In particular, if P has width w then Bosek, Krawczyk, and
Szczypka (SIAM J. Discrete Math., 23(4):1992--1999, 2010) proved an upper bound
of ckw^{2} on the number of chains used by First-Fit for some constant c, while
Joret and Milans (Order, 28(3):455--464, 2011) gave one of ck^{2}w. In this
paper we prove an upper bound of the form ckw. This is best possible up to the
value of c. Comment: v3: referees' comments incorporated.
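For intuition, First-Fit processes the poset elements in some presentation order and places each element on the lowest-indexed chain whose members are all comparable to it, opening a new chain if none fits. A minimal sketch (the `leq` comparability oracle and the divisibility example are illustrative assumptions, not taken from the paper):

```python
def first_fit_chains(elements, leq):
    """Greedy First-Fit chain partition: each element joins the
    lowest-indexed chain in which it is comparable to every member."""
    chains = []
    for x in elements:
        for chain in chains:
            if all(leq(x, y) or leq(y, x) for y in chain):
                chain.append(x)
                break
        else:
            chains.append([x])  # no existing chain fits: open a new one
    return chains

# Divisibility poset on 1..8: a <= b iff a divides b.
divides = lambda a, b: b % a == 0
chains = first_fit_chains(range(1, 9), divides)
```

On this instance First-Fit uses four chains, matching the poset's width (the antichain {5, 6, 7, 8}).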
TS2PACK: A Two-Level Tabu Search for the Three-dimensional Bin Packing Problem
Three-dimensional orthogonal bin packing is a problem NP-hard in the strong sense where a set of boxes must be orthogonally packed into the minimum number of three-dimensional bins. We present a two-level tabu search for this problem. The first level aims to reduce the number of bins. The second optimizes the packing of the bins. This latter procedure is based on the interval graph representation of the packing, proposed by Fekete and Schepers, which reduces the size of the search space. We also introduce a general method to increase the size of the associated neighborhoods, and thus the quality of the search, without increasing the overall complexity of the algorithm. Extensive computational results on benchmark problem instances show the effectiveness of the proposed approach, which obtains better results than the existing ones.
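As a rough illustration of the tabu-search principle the method builds on (the two-level structure and the interval-graph neighborhoods are not reproduced here), a generic best-admissible-move loop with a fixed-tenure tabu list might look like:

```python
def tabu_search(start, neighbors, cost, iterations=100, tenure=5):
    """Generic tabu search: always move to the best non-tabu neighbor,
    remembering recently visited solutions so the search can escape
    local optima. Hypothetical minimal skeleton, not TS2PACK itself."""
    best = current = start
    tabu = [start]
    for _ in range(iterations):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)  # best admissible move
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                      # expire the oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best

# Toy usage: minimize |x - 3| over the integers, moving by +/-1.
result = tabu_search(10, lambda x: [x - 1, x + 1], lambda x: abs(x - 3))
```

Note that the move to the best non-tabu neighbor is accepted even when it worsens the cost; that is what lets the search climb out of a local optimum instead of stalling.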
A VLSI architecture for enhancing software reliability
As a solution to the software crisis, we propose an architecture that supports and encourages the use of programming techniques and mechanisms for enhancing software reliability. The proposed architecture provides efficient mechanisms for detecting a wide variety of run-time errors and for supporting data abstraction and module-based programming, and it encourages the use of small protection domains through a highly efficient capability mechanism. It also provides efficient support for user-specified exception handlers and for both event-driven and trace-driven debugging mechanisms. The shortcomings of existing capability-based architectures that were designed with a similar goal in mind are examined critically to identify their problems with regard to capability translation, domain switching, storage management, data abstraction and interprocess communication. Assuming realistic VLSI implementation constraints, an instruction set for the proposed architecture is designed. Performance estimates of the proposed system are then made from the microprograms corresponding to these instructions, based on observed characteristics of similar systems and language usage. A comparison of the proposed architecture with similar ones, in terms of both functional characteristics and low-level performance, indicates that the proposed design is superior.
Glassy dynamics of kinetically constrained models
We review the use of kinetically constrained models (KCMs) for the study of
dynamics in glassy systems. The characteristic feature of KCMs is that they
have trivial, often non-interacting, equilibrium behaviour but interesting slow
dynamics due to restrictions on the allowed transitions between configurations.
The basic question which KCMs ask is therefore how much glassy physics can be
understood without an underlying ``equilibrium glass transition''. After a
brief review of glassy phenomenology, we describe the main model classes, which
include spin-facilitated (Ising) models, constrained lattice gases, models
inspired by cellular structures such as soap froths, models obtained via
mappings from interacting systems without constraints, and finally related
models such as urn, oscillator, tiling and needle models. We then describe the
broad range of techniques that have been applied to KCMs, including exact
solutions, adiabatic approximations, projection and mode-coupling techniques,
diagrammatic approaches and mappings to quantum systems or effective models.
Finally, we give a survey of the known results for the dynamics of KCMs both in
and out of equilibrium, including topics such as relaxation time divergences
and dynamical transitions, nonlinear relaxation, aging and effective
temperatures, cooperativity and dynamical heterogeneities, and finally
non-equilibrium stationary states generated by external driving. We conclude
with a discussion of open questions and possibilities for future work. Comment: 137 pages. Additions to section on dynamical heterogeneities (5.5,
new pages 110 and 112), otherwise minor corrections, additions and reference
updates. Version to be published in Advances in Physics.
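To make the idea of a kinetic constraint concrete, here is a sketch of one Monte Carlo sweep of the one-spin-facilitated Fredrickson-Andersen chain, one of the spin-facilitated Ising models mentioned above (the flip probabilities c and 1 - c are the standard choice giving equilibrium up-spin density c; the details here are illustrative, not the review's notation):

```python
import random

def fa_sweep(spins, c, rng):
    """One Monte Carlo sweep of the 1d one-spin-facilitated
    Fredrickson-Andersen model on a ring: site i may flip only if at
    least one neighbour is up (the kinetic constraint); up-flips are
    accepted with probability c, down-flips with probability 1 - c."""
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)
        if spins[(i - 1) % n] == 1 or spins[(i + 1) % n] == 1:
            if spins[i] == 0 and rng.random() < c:
                spins[i] = 1
            elif spins[i] == 1 and rng.random() < 1 - c:
                spins[i] = 0
    return spins

# The constraint makes the all-down configuration completely frozen:
frozen = fa_sweep([0] * 10, 0.3, random.Random(1))
```

The frozen all-down configuration illustrates the review's central point: the equilibrium measure is trivial (non-interacting spins), yet the dynamics can be arbitrarily slow because the constraint blocks transitions.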
Innovative Hybrid Approaches for Vehicle Routing Problems
This thesis deals with the efficient resolution of Vehicle Routing Problems (VRPs).
The first chapter faces the archetype of all VRPs: the Capacitated Vehicle Routing Problem (CVRP). Despite having been introduced more than 60 years ago, it remains an extremely challenging problem. In this chapter I design a Fast Iterated-Local-Search Localized Optimization algorithm for the CVRP, shortened to FILO. The simplicity of the CVRP definition allowed me to experiment with advanced local search acceleration and pruning techniques that eventually became the core optimization engine of FILO. FILO experimentally proved to be extremely scalable and able to solve very large scale instances of the CVRP in a fraction of the computing time required by existing state-of-the-art methods, while still obtaining competitive solutions in terms of quality.
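A concrete example of the kind of local-search move such CVRP engines accelerate is 2-opt, which reverses a route segment whenever that shortens the route (a generic sketch; FILO's actual neighborhoods, acceleration and pruning are more sophisticated):

```python
def two_opt(route, dist):
    """First-improvement 2-opt: repeatedly reverse a segment
    route[i..j] whenever doing so shortens the route; the endpoints
    route[0] and route[-1] stay fixed."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for j in range(i + 1, len(route) - 1):
                removed = dist(route[i - 1], route[i]) + dist(route[j], route[j + 1])
                added = dist(route[i - 1], route[j]) + dist(route[i], route[j + 1])
                if added < removed:                 # reversal pays off
                    route[i:j + 1] = route[i:j + 1][::-1]
                    improved = True
    return route

# Toy usage: four customers on a line; the crossed order 0-2-1-3
# is uncrossed into 0-1-2-3.
pos = [0.0, 1.0, 2.0, 3.0]
best = two_opt([0, 2, 1, 3], lambda a, b: abs(pos[a] - pos[b]))
```

Evaluating each move in constant time, as above, is what makes such neighborhoods cheap enough to embed in an iterated local search.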
The second chapter deals with an extension of the CVRP called the Extended Single Truck and Trailer Vehicle Routing Problem, or simply XSTTRP. The XSTTRP models a broad class of VRPs in which a single vehicle, composed of a truck and a detachable trailer, has to serve a set of customers with accessibility constraints making some of them not reachable by the entire vehicle. This problem moves towards VRPs with more realistic constraints, and it models scenarios such as parcel deliveries in crowded city centers or rural areas, where maneuvering a large vehicle is forbidden or dangerous. The XSTTRP generalizes several well-known VRPs, such as the Multiple Depot VRP and the Location Routing Problem. For its solution I developed a hybrid metaheuristic which combines fast heuristic optimization with a polishing phase based on the resolution of a limited set partitioning problem. Finally, the thesis includes a final chapter aimed at guiding the computational evaluation of new approaches to VRPs proposed by the machine learning community.
Complex scheduling models and analyses for property-based real-time embedded systems
Modern multicore architectures and parallel applications
pose a significant challenge to worst-case centric real-time system verification
and design efforts.
The model and parameter uncertainty involved undermines the fidelity of formal real-time analyses,
which are mostly based on exact model assumptions.
In this dissertation, various approaches that can accept parameter and model uncertainty
are presented.
In an attempt to improve predictability in worst-case centric analyses, timing-predictable protocols
are explored for parallel task scheduling on multiprocessors and for network-on-chip arbitration.
A novel scheduling algorithm, called stationary rigid gang scheduling, for gang tasks on multiprocessors is proposed.
For fixed-priority wormhole-switched networks-on-chip, a more restrictive family of transmission protocols,
called simultaneous progression switching protocols, is proposed with predictability-enhancing properties.
Moreover, hierarchical scheduling for parallel DAG tasks under parameter
uncertainty is studied to achieve temporal and spatial isolation.
Fault tolerance, as a supplementary reliability aspect of real-time systems,
is examined in the presence of dynamic external causes of faults.
Using various job variants, which trade off increased execution time demand against increased error protection,
a state-based policy selection strategy is proposed that provably assures an acceptable quality of service (QoS).
Lastly, the temporal misalignment of sensor data in sensor fusion applications
in cyber-physical systems is examined. A modular analysis based on minimal properties
is proposed to obtain an upper bound on the maximal sensor data time-stamp difference.
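The flavor of such a property-based bound can be illustrated with a deliberately simple model (the periodic-with-jitter assumption and the formula below are illustrative, not the dissertation's analysis): if each sensor i delivers a fresh sample at least every periods[i] time units with release jitter jitters[i], then at any fusion instant no time-stamp is older than periods[i] + jitters[i], so the newest time-stamps of any two sensors differ by at most the largest such age:

```python
def max_timestamp_difference(periods, jitters):
    """Upper bound on the worst-case difference between the newest
    time-stamps of any two sensors at a fusion point, under the
    illustrative assumption that sensor i's data is at most
    periods[i] + jitters[i] old (and, at best, brand new)."""
    return max(p + j for p, j in zip(periods, jitters))

# Two sensors sampling every 10 and 40 time units with jitters 2 and 5:
bound = max_timestamp_difference([10, 40], [2, 5])
```

The point of a modular analysis is that only such minimal per-sensor properties (here, a maximum data age) need to hold for the bound to follow.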
Three-Dimensional Capacitated Vehicle Routing Problems with Loading Constraints
City logistics planning involves organizing the movement of goods in urban areas carried out by logistics operators. The loading and routing of goods are critical components of these operations. Efficient utilization of vehicle space and limiting the number of empty vehicle movements can strongly reduce the nuisances created by goods delivery vehicles in urban areas. We consider an integrated problem of routing and loading known as the three-dimensional loading capacitated vehicle routing problem (3L-CVRP). The 3L-CVRP consists of finding feasible routes with the minimum total travel cost while satisfying customers’ demands expressed in terms of cuboid, weighted items. Practical constraints related to connectivity, stability, fragility, and LIFO unloading are considered as parts of the problem. We address the problem in two stages: first the three-dimensional (3D) loading problem, then the 3L-CVRP.
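The geometric core of the loading side is easy to state: a placement is feasible only if every item lies inside the vehicle and no two items overlap. A minimal containment/non-overlap check (the connectivity, stability, fragility and LIFO constraints above would need further tests on top of this sketch):

```python
def boxes_overlap(a, b):
    """a, b = (x, y, z, w, d, h): corner position plus size of
    axis-aligned boxes; strict inequalities, so touching faces are fine."""
    return all(a[k] < b[k] + b[k + 3] and b[k] < a[k] + a[k + 3] for k in range(3))

def packing_feasible(placements, bin_size):
    """Basic 3D feasibility: every box inside the bin, no pair overlapping."""
    W, D, H = bin_size
    for (x, y, z, w, d, h) in placements:
        if x < 0 or y < 0 or z < 0 or x + w > W or y + d > D or z + h > H:
            return False
    return not any(boxes_overlap(placements[i], placements[j])
                   for i in range(len(placements))
                   for j in range(i + 1, len(placements)))

# Two unit cubes side by side fit in a 2x2x2 bin; an oversized pair does not.
ok = packing_feasible([(0, 0, 0, 1, 1, 1), (1, 0, 0, 1, 1, 1)], (2, 2, 2))
bad = packing_feasible([(0, 0, 0, 2, 2, 2), (1, 0, 0, 1, 1, 1)], (2, 2, 2))
```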
The main objective of the 3D loading problem without the routing aspect is to find the best way of packing 3D items into vehicles or containers, increasing the loading factor in order to minimize empty vehicle movements. We present a general linear programming model for the pure three-dimensional vehicle loading problem (3DVLP) and solve it with CPLEX. To deal with large-sized instances, a Column Generation (CG) technique is applied. The method designed in this work outperforms the best existing techniques in the literature.
The 3DVLP with allocation and capacity constraints, called the 3DVLP-AC, is also considered. For the 3DVLP-AC, CPLEX could handle moderate-sized instances with up to 40 customers. To deal with large-sized instances, a Tabu Search (TS) heuristic algorithm is developed. There are no solution methods or lower bounds (LBs) for the 3DVLP-AC in the literature by which to evaluate the TS results; therefore, we evaluate our TS against the CPLEX results for small instances.
The 3L-CVRP is addressed using the CG technique. To generate new columns, the pricing problem within CG is solved using two approaches: (1) an elementary shortest path problem with resource constraints (ESPPRC) combined with the loading problem, and (2) a heuristic pricing method (HP). CG using HP with a simple scheme can attain solutions competitive with those of the efficient TS algorithms described in the literature.
Construction of a support tool for the design of the activity structures based computer system architectures
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. This thesis is a rapprochement of diverse design concepts, brought to bear upon the computer system
engineering problem of identification and control of highly constrained multiprocessing (HCM)
computer machines. It contributes to the area of meta/general systems methodology, and brings
a new insight into the design formalisms, and results afforded by bringing together various design
concepts that can be used for the construction of highly constrained computer system architectures.
A unique point of view is taken by assuming the process of identification and control of HCM
computer systems to be the process generated by the Activity Structures Methodology (ASM).
The research in ASM emerged from neuroscience research, aiming to provide
techniques for combining the diverse knowledge sources that capture the 'deep knowledge' of this
application field in an effective, formal, computer-representable form. To apply the ASM design
guidelines in the realm of the distributed computer system design, we provide new design definitions
for the identification and control of such machines in terms of realisations. These realisation definitions
characterise the various classes of the identification and control problem. The classes covered
consist of:
1. the identification of the designer activities,
2. the identification and control of the machine's distributed structures of behaviour,
3. the identification and control of the conversational environment activities (i.e. the randomised/
adaptive activities and interactions of both the user and the machine environments),
4. the identification and control of the substrata needed for the realisation of the machine, and
5. the identification of the admissible design data, both user-oriented and machine-oriented,
that can force the conversational environment to act in a self-regulating
manner.
All extant results are considered in this context, allowing the development of both necessary
conditions for machine identification, in terms of their distributed behaviours and the substrata
structures of the unknown machine, and sufficient conditions, in terms of experiments on the unknown
machine, to achieve self-regulating behaviour.
We provide a detailed description of the design and implementation of the support software tool
which can be used for aiding the process of constructing effective, HCM computer systems, based
on various classes of identification and control. The design data of a highly constrained system, the
NUKE, are used to verify the tool logic as well as the various identification and control procedures.
Possible extensions as well as future work implied by the results are considered.
Model-based Imputation of Below Detection Limit Missing Data and Group Selection in Bayesian Group Index Regression
Investigations into the association between chemical exposure and health outcomes are increasingly focused on the role of chemical mixtures, as opposed to individual chemicals. The analysis of chemical mixture data has required the development of novel statistical methods, one of these being Bayesian group index regression. A statistical challenge common to all chemical mixture analyses is the ubiquitous presence of below detection limit (BDL) data. We propose an extension of Bayesian group index regression that treats both regression effects and missing BDL observations as parameters in a model estimated through a Markov chain Monte Carlo algorithm that we refer to as Pseudo-Gibbs imputation. The Pseudo-Gibbs approach enables the estimated parameters of the health effects model to inform the missing data imputations and vice versa, while accounting for the true variance of the BDL imputations. We conduct a simulation study showing greater power to detect chemical indices significantly associated with an outcome, and greater sensitivity for identifying important chemicals within indices, at high levels of BDL missing data. We apply our model to a case-control study on the effects of chemical exposure on childhood leukemia. We next address a problem specific to group index models: how to partition a given set of chemicals into groups to form the requisite indices. We first propose a novel variable clustering algorithm using a variant of the traditional PCA algorithm called Robust PCA. We compare this clustering method with other variable clustering methods from the literature in a simulation study. Finally, we extend the variable clustering method identified previously to incorporate information from an outcome variable. This semi-supervised clustering extension can constrain clusters based on the direction of association of individual chemicals with the outcome of interest.
We apply both the unsupervised and semi-supervised variable clustering methods to a case-control study on the effects of chemical exposure on non-Hodgkin's lymphoma.
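The conditional draw at the heart of such BDL imputation can be sketched with a truncated-normal inverse-CDF trick (a simplified, hypothetical step: `mu` and `sigma` stand in for the current values of the model parameters inside the MCMC loop, and the normal exposure model is an assumption of this sketch):

```python
import random
from statistics import NormalDist

def impute_bdl(values, detection_limit, mu, sigma, rng):
    """Replace below-detection-limit entries (recorded as None) with
    draws from Normal(mu, sigma) truncated above at the detection
    limit; observed values pass through unchanged."""
    nd = NormalDist(mu, sigma)
    p_dl = nd.cdf(detection_limit)       # P(X < detection limit)
    out = []
    for v in values:
        if v is None:
            u = rng.random() * p_dl      # uniform on (0, P(X < DL))
            out.append(nd.inv_cdf(u))    # inverse-CDF truncated draw
        else:
            out.append(v)
    return out

sample = impute_bdl([0.8, None, 1.4, None], 0.5, 1.0, 0.6, random.Random(0))
```

In a full Pseudo-Gibbs scheme this draw would alternate with updates of the regression parameters, so the imputations and the health-effects model inform each other, as described above.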
The multiprocessor real-time scheduling of general task systems
The recent emergence of multicore and related technologies in many commercial systems has increased the prevalence of multiprocessor architectures. Contemporaneously, real-time applications have become more complex and sophisticated in their behavior and interaction. Inevitably, these complex real-time applications will be deployed upon these multiprocessor platforms and require temporal analysis techniques to verify their correctness. However, most prior research in multiprocessor real-time scheduling has addressed the temporal analysis only of Liu and Layland task systems. The goal of this dissertation is to extend real-time scheduling theory for multiprocessor systems by developing temporal analysis techniques for more general task models such as the sporadic task model, the generalized multiframe task model, and the recurring real-time task model. The thesis of this dissertation is: Optimal online multiprocessor real-time scheduling algorithms for sporadic and more general task systems are impossible; however, efficient, online scheduling algorithms and associated feasibility and schedulability tests, with provably bounded deviation from any optimal test, exist. To support our thesis, this dissertation develops feasibility and schedulability tests for various multiprocessor scheduling paradigms. We consider three classes of multiprocessor scheduling based on whether a real-time job may migrate between processors: full-migration, restricted-migration, and partitioned. For all general task systems, we obtain feasibility tests for arbitrary real-time instances under the full- and restricted-migration paradigms. Despite the existence of tests for feasibility, we show that optimal online scheduling of sporadic and more general systems is impossible. Therefore, we focus on scheduling algorithms that have constant-factor approximation ratios in terms of an analysis technique known as resource augmentation.
We develop schedulability tests for the earliest-deadline-first (edf) and deadline-monotonic (dm) scheduling algorithms under the full-migration and partitioned scheduling paradigms. The feasibility and schedulability tests presented in this dissertation use the workload metrics of demand-based load and maximum job density, and have provably bounded deviation from optimal in terms of resource augmentation. We show that the demand-based load and maximum job density metrics may be computed exactly in pseudo-polynomial time for general task systems and approximated in polynomial time for sporadic task systems.
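For the sporadic task model, the demand bound function underlying the demand-based load metric has a simple closed form per task. A sketch that evaluates it at integer interval lengths (a task is written (C, D, T): worst-case execution time, relative deadline, minimum inter-arrival time; the finite integer-point scan is a simplification of the exact computation):

```python
from math import floor

def dbf(task, t):
    """Demand bound function: maximum execution demand of a sporadic
    task (C, D, T) that must complete within any interval of length t."""
    C, D, T = task
    if t < D:
        return 0
    return (floor((t - D) / T) + 1) * C

def demand_based_load(tasks, horizon):
    """LOAD = max over interval lengths t of sum_i dbf_i(t) / t,
    approximated here by checking integer t up to a finite horizon."""
    return max(sum(dbf(tk, t) for tk in tasks) / t
               for t in range(1, horizon + 1))

load = demand_based_load([(1, 2, 4)], 8)  # single task: C=1, D=2, T=4
```

For this single task the maximum is attained at t = D = 2, where one job's full demand must fit into the shortest possible window.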