10 research outputs found

    AFPTAS results for common variants of bin packing: A new method to handle the small items

    Full text link
    We consider two well-known natural variants of bin packing and show that these packing problems admit asymptotic fully polynomial time approximation schemes (AFPTAS). In bin packing problems, a set of one-dimensional items of size at most 1 is to be assigned (packed) to subsets of sum at most 1 (bins). It has been known for a while that the most basic problem admits an AFPTAS. In this paper, we develop methods that allow us to extend this result to other variants of bin packing. Specifically, the problems which we study in this paper, and for which we design asymptotic fully polynomial time approximation schemes, are the following. The first problem is "Bin packing with cardinality constraints", where a parameter k is given such that a bin may contain up to k items. The goal is to minimize the number of bins used. The second problem is "Bin packing with rejection", where every item has a rejection penalty associated with it. An item needs to be either packed into a bin or rejected, and the goal is to minimize the number of used bins plus the total rejection penalty of unpacked items. This resolves the complexity of two important variants of the bin packing problem. Our approximation schemes use a novel method for packing the small items. This new method is the core of the improved running times of our schemes over those of the previous results, which are only asymptotic polynomial time approximation schemes (APTAS).
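    The cardinality-constrained variant described above is easy to state concretely. The following is only a minimal first-fit baseline heuristic (the function name and example sizes are illustrative), not the AFPTAS the abstract refers to:

```python
def first_fit_cardinality(items, k):
    """Greedy first-fit for bin packing with a cardinality constraint:
    each bin holds items of total size <= 1 and at most k items.
    A simple baseline only -- not the paper's approximation scheme."""
    bins = []  # each bin is a list of item sizes
    for size in items:
        for b in bins:
            if sum(b) + size <= 1 and len(b) < k:
                b.append(size)  # fits in an existing bin
                break
        else:
            bins.append([size])  # no bin fits: open a new one
    return bins

# With k = 2 the item count, not just the sizes, limits each bin.
packing = first_fit_cardinality([0.5, 0.4, 0.3, 0.2, 0.1], k=2)
# -> [[0.5, 0.4], [0.3, 0.2], [0.1]]
```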

    Models and algorithms for decomposition problems

    Get PDF
    This thesis deals with decomposition both as a solution method and as a problem in itself. A decomposition approach can be very effective for mathematical problems with a specific structure in which the associated coefficient matrix is sparse and block-diagonalizable. However, this kind of structure may not be evident from the most natural formulation of the problem, so the coefficient matrix may be preprocessed by solving a structure detection problem in order to determine whether a decomposition method can be applied successfully. The thesis therefore deals with the k-Vertex Cut problem, the problem of finding the minimum subset of nodes whose removal disconnects a graph into at least k components; it models relevant applications in matrix decomposition for solving systems of equations by parallel computing. The capacitated k-Vertex Separator problem, instead, asks for a subset of vertices of minimum cardinality whose deletion disconnects a given graph into at most k shores, where the size of each shore must not be larger than a given capacity value. This problem, too, is of great importance for matrix decomposition algorithms. The thesis also addresses the Chance-Constrained Mathematical Program, a significant example in which decomposition techniques can be applied successfully. This is a class of stochastic optimization problems in which the feasible region depends on the realization of a random variable, and the solution must optimize a given objective function while belonging to the feasible region with a probability above a given value. A decomposition approach for this problem is introduced. Finally, the thesis addresses the Fractional Knapsack Problem with Penalties, a variant of the knapsack problem in which items can be split at the expense of a penalty depending on the fractional quantity.
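    The penalty model of the thesis's knapsack variant is not spelled out in the abstract, but the base problem it extends, the fractional knapsack, has a well-known greedy optimum: take items in decreasing value-per-weight order, splitting only the last one. A minimal sketch (function name and numbers are illustrative):

```python
def fractional_knapsack(values, weights, capacity):
    """Classic greedy for the (penalty-free) fractional knapsack:
    fill by decreasing value/weight ratio, splitting the last item.
    The thesis's variant adds a penalty for each such split."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, remaining = 0.0, capacity
    for i in order:
        take = min(weights[i], remaining)      # take all, or what fits
        total += values[i] * take / weights[i]  # value of the taken fraction
        remaining -= take
        if remaining == 0:
            break
    return total

# Textbook instance: only 20 of the 30 units of the last item fit.
best = fractional_knapsack([60, 100, 120], [10, 20, 30], capacity=50)
# -> 240.0
```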

    Production Scheduling

    Get PDF
    Generally speaking, scheduling is the procedure of efficiently mapping a set of tasks or jobs (the studied objects) to a set of target resources. More specifically, as part of a larger planning and scheduling process, production scheduling is essential for the proper functioning of a manufacturing enterprise. This book presents ten chapters divided into five sections. Section 1 discusses rescheduling strategies, policies, and methods for production scheduling. Section 2 presents two chapters on flow shop scheduling. Section 3 describes heuristic and metaheuristic methods for treating the scheduling problem in an efficient manner. In addition, two test cases are presented in Section 4: the first uses simulation, while the second shows a real implementation of a production scheduling system. Finally, Section 5 presents some modeling strategies for building production scheduling systems. This book will be of interest to those working in the decision-making branches of production, in various operational research areas, and in computational methods design. People from diverse backgrounds, ranging from academia and research to industry, can take advantage of this volume.

    Approximationsalgorithmen für Packungs- und Scheduling-Probleme

    Get PDF
    Algorithms for solving optimization problems play a major role in industry. For example, in the logistics industry, route plans have to be optimized according to various criteria. However, many natural optimization problems are hard to solve. That is, for many optimization problems no algorithms with running time polynomial in the size of the instance are known. Furthermore, it is a widely accepted assumption that many optimization problems do not allow algorithms that solve the problem optimally in polynomial time. One way of overcoming this dilemma is using approximation algorithms. These algorithms have a polynomial running time, but their solutions are in general not optimal but rather close to an optimum. The main subject of this thesis is approximation algorithms for packing and scheduling problems. For the three-dimensional orthogonal knapsack problem (OKP-3) without rotations we present algorithms with approximation ratios arbitrarily close to 9, 8 and 7. For OKP-3 with 90 degree rotations around the z-axis or around all axes, we present algorithms with approximation ratios arbitrarily close to 6 and 5, respectively. Both for the malleable and for the non-malleable case of the non-preemptive parallel job scheduling problem in which the number of available machines is polynomially bounded in the number of jobs, we present polynomial time approximation schemes. For the cases in which additionally the machines allotted to each job have to be contiguous, we show the existence of approximation algorithms with ratio arbitrarily close to 1.5.
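    For flavor, the simplest well-known approximation algorithm in this family is Graham's Longest Processing Time rule for makespan scheduling on identical machines, a (4/3 - 1/(3m))-approximation. It is far simpler than the approximation schemes in the thesis and is shown here only as a baseline sketch (function name and job sizes are illustrative):

```python
import heapq

def lpt_makespan(jobs, m):
    """Longest Processing Time list scheduling on m identical machines:
    sort jobs by decreasing size, always assign to the least-loaded machine.
    Classic (4/3 - 1/(3m))-approximation for minimum makespan."""
    loads = [0] * m          # current load of each machine
    heapq.heapify(loads)     # min-heap: root is the least-loaded machine
    for p in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + p)  # schedule job on that machine
    return max(loads)        # makespan = maximum machine load

# Two machines; total work 36, and LPT finds the perfect 18/18 split.
span = lpt_makespan([7, 7, 6, 6, 5, 5], m=2)
# -> 18
```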

    Algorithmes efficaces de gestion des règles dans les réseaux définis par logiciel

    Get PDF
    In software-defined networks (SDN), the filtering requirements for critical applications often vary according to flow changes and security policies. SDN addresses this issue with a flexible software abstraction, allowing simultaneous and convenient modification and implementation of a network policy on flow-based switches. With the increase in the number of entries in the ruleset and the size of data that traverses the network each second, it remains crucial to minimize the number of entries and accelerate the lookup process. On the other hand, attacks on the Internet have reached a high level, and their number keeps increasing, which inflates the size of blacklists and the number of rules in firewalls. The limited storage capacity requires efficient management of that space. In the first part of this thesis, our primary goal is to find a simple representation of filtering rules that enables more compact rule tables, and is thus easier to manage, while keeping their semantics unchanged. The construction of the rules should also be obtained with reasonably efficient algorithms. This new representation can add flexibility and efficiency in deploying security policies, since the generated rules are easier to manage. A complementary approach to rule compression is to use multiple smaller switch tables to enforce access-control policies in the network. However, most existing approaches entail significant rule replication, or even modify packet headers to prevent a packet from matching a rule again in the next switch. The second part of this thesis introduces new techniques to decompose and distribute filtering rule sets over a given network topology. We also introduce an update strategy to handle changes in network policy and topology.
In addition, we exploit the structure of series-parallel graphs to efficiently resolve the rule placement problem for networks of all sizes in tractable time.
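    A toy illustration of semantics-preserving rule compression is prefix aggregation: adjacent or overlapping address blocks in a blacklist can be merged into a minimal covering set. The thesis's representation is more general, but the idea can be sketched with the standard-library `ipaddress` module (function name and example prefixes are illustrative):

```python
import ipaddress

def compress_blacklist(prefixes):
    """Merge adjacent/overlapping IPv4 prefixes into a minimal covering
    set of networks, keeping the matched address space unchanged.
    Toy illustration of rule-table compression via prefix aggregation."""
    nets = [ipaddress.ip_network(p) for p in prefixes]
    return [str(n) for n in ipaddress.collapse_addresses(nets)]

# The two /25s merge into 10.0.0.0/24, which then merges with the
# adjacent 10.0.1.0/24 into a single /23 rule: 3 entries become 1.
rules = compress_blacklist(["10.0.0.0/25", "10.0.0.128/25", "10.0.1.0/24"])
# -> ["10.0.0.0/23"]
```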

    From Pollution to Solution: A global assessment of marine litter and plastic pollution

    Get PDF
    This assessment describes the far-reaching impacts of plastics in our oceans and across the planet. Plastics are a marker of the current geological era, the Anthropocene (Zalasiewicz et al. 2016). They have given their name to a new microbial habitat known as the plastisphere (Amaral-Zettler et al. 2020; see Glossary). Increased awareness of the negative impacts of microplastics on marine ecosystems and human health has led them to be referred to as a type of "Ocean PM2.5" akin to air pollution (i.e. particulate matter less than 2.5 micrometres [µm] in diameter) (Shu 2018). With cumulative global production of primary plastic between 1950 and 2017 estimated at 9,200 million metric tons and forecast to reach 34 billion metric tons by 2050 (Geyer 2020) (Figure i), the most urgent issues now to be addressed are how to reduce the volume of uncontrolled or mismanaged waste streams going into the oceans (Andrades et al. 2018) and how to increase the level of recycling. Of the 7 billion tons of plastic waste generated globally so far, less than 10 per cent has been recycled (Geyer 2020).

    From Pollution To Solution: a global assessment of marine litter and plastic pollution

    Get PDF
    Outcome of work on the United Nations Environment Programme Advisory Group, whose aim was to address the UN Environment Assembly's adopted resolution (UN/EA.4/RES.6) on Marine Plastic Litter and Microplastics by recommending indicators to harmonise monitoring and assessment, and by informing policies and action on environmentally sound technology innovations.

    Investigation of Factors Affecting the Adoption of Information and Communication Technologies for Communication of Research Output in Research Institutions in Kenya

    No full text
    Using Rogers' (2003) and Hofstede's (2001) technology diffusion theories as lenses, this exploratory and interpretive study was an endeavour to contribute to the understanding of ICT-enabled research communication by and for scholars and researchers working in Kenya. The main purpose of the study was to identify factors affecting ICT-enabled research communication by researchers in research institutions in specific fields within the natural and applied sciences in Kenya, which are viewed as key result areas in socio-economic development. Qualitative techniques were used to collect and analyze the data and present the findings. The researcher sought to identify, understand and explain key factors affecting ICT-mediated scientific research communication with a view to coming up with an ICT-adoption framework that would assist the Kenyan research community in more effectively adopting ICT-enabled research dissemination practices. This in turn should support Kenya's national development goals and contribute to the existing knowledge base and serve as a useful reference point in research communication debates and policy deliberations. The findings revealed researchers' priority research communication need was reinforcement of capacity for strategic research through recognising and prioritising research communication in budgetary planning. Thus, the findings call for investment in scientific and technological research and its communication, which includes improving tools and infrastructure, especially ICT-enabled ones like Internet connectivity and other e-resources. The findings affirmed the literature and extant theories guiding the study but also revealed information unique to the Kenyan context. 
Among the emerging factors affecting adoption of ICT for scientific research communication were socio-cultural factors such as appreciation and perception of ICT; the attitude of the scientific research community; demographic issues such as age/level of qualification, gender, poverty and literacy levels; communication networks; and traditional cultural values such as orature, communalism and education culture. There were also institutional factors, which included issues of ICT governance such as political and institutional leadership and culture; institutional framework; policy and strategy; legal and regulatory framework; and control over mass media communication channels. Moreover, inadequate institutional capacity for ICT-mediated research communication, lack of demand for MIS for research and teaching, and lack of recognition and motivation for researchers were found to hinder ICT-mediated research communication. Though ICT had the perceived attributes of relative advantage, compatibility, complexity, observability and reliability, there were relative disadvantages that discouraged adoption. These included the need for hardware and software upgrades and antivirus updates; susceptibility to environmental factors; dependence on other infrastructures that may be unavailable or unreliable; and possibilities for information overload and plagiarism. Other factors affecting ICT adoption that emerged outside the preliminary model included the nature of the discipline/type of data; personal/individual institutional initiative; telephone wire thefts; and lack of ICT research. All these contextual perspectives informed the framework for adoption of ICT for scientific research communication by researchers and scholars in research institutions in Kenya.