
    On the design of a cost-efficient resource management framework for low latency applications

    The ability to offer low latency communications is one of the critical design requirements for the upcoming 5G era. The current practice for achieving low latency is to overprovision network resources (e.g., bandwidth and computing resources). However, this approach is not cost-efficient and cannot be applied at large scale. More cost-efficient resource management is therefore required to dynamically and efficiently exploit network resources to guarantee low latencies. The advent of network virtualization provides novel opportunities for achieving cost-efficient low latency communications. It decouples network resources from physical machines through virtualization and groups resources in the form of virtual machines (VMs). Network resources can thus be flexibly increased at any network location through VM auto-scaling to alleviate network delays caused by a lack of resources. At the same time, operational cost can be greatly reduced by shutting down under-utilized VMs (e.g., for energy saving). Network virtualization also enables the emerging concept of mobile edge computing, whereby VMs host low latency applications at the network edge to shorten communication latency. Despite these advantages, a key challenge is the optimal management of the different physical and virtual resources for low latency communications. This thesis addresses this challenge by developing a novel cost-efficient resource management framework that targets 1) the cost-efficient design of low latency communication infrastructures; 2) dynamic resource management for low latency applications; and 3) fault-tolerant resource management.
Compared to current practices, the proposed framework achieves an 80% deployment cost reduction for the design of low latency communication infrastructures; continuously saves up to 33% of operational cost through dynamic resource management while consistently achieving low latencies; and succeeds in providing fault tolerance to low latency communications at a guaranteed operational cost.

    Contributions à la stabilisation des systèmes à commutation affine

    This thesis deals with the stabilization of switched affine systems under periodic sampled-data switching control. The difficulties associated with this class of nonlinear systems stem first from the fact that the control action is performed at the computation instants by selecting the switching mode to be activated and, second, from the problem of providing an accurate characterization of the set to which the solutions of the system converge, i.e., the attractor. The contributions reported in this thesis share a common thread: reducing the conservatism introduced in the characterization of attractors, which leads to guaranteeing the stabilization of the system to a limit cycle. After a general introduction presenting the context and the main results of the literature, the first contributive chapter introduces a new method based on a new class of control Lyapunov functions that provides a more accurate characterization of the invariant sets of the closed-loop system. This contribution, presented as a nonconvex optimization problem and referring to a Lyapunov-Metzler condition, appears as a preliminary result and a milestone for the following chapters.
The second part deals with the stabilization of switched affine systems to limit cycles. After presenting some preliminaries on hybrid limit cycles and derived notions such as cycles in Chapter 3, stabilizing switching control laws are developed in Chapter 4. A control Lyapunov function approach and a min-switching strategy are used to guarantee that the solutions of the nominal closed-loop system converge to a limit cycle. The conditions of the theorem are expressed in terms of simple linear matrix inequalities (LMIs), whose underlying necessary conditions relax the usual ones in this literature. This method is then extended to the case of uncertain systems in Chapter 5, for which the notion of limit cycle needs to be adapted. Finally, the hybrid dynamical systems framework is explored in Chapter 6, where the attractors are no longer characterized by possibly disjoint regions but as closed and isolated continuous-time trajectories. Throughout the dissertation, the theoretical results are evaluated on academic examples and demonstrate the potential of the method over the recent literature on this subject.
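As a generic illustration of the min-switching idea mentioned above (this is the classical form for switched affine systems, not necessarily the exact conditions derived in the thesis), the switching signal is chosen to make a control Lyapunov function centered at an operating point decrease as fast as possible:

```latex
% Switched affine dynamics, switching signal \sigma(t) \in \{1,\dots,N\}
\dot{x}(t) = A_{\sigma(t)}\, x(t) + b_{\sigma(t)}
% Candidate control Lyapunov function around an operating point x_e
V(x) = (x - x_e)^\top P\, (x - x_e), \qquad P = P^\top \succ 0
% Min-switching law: activate the mode that decreases V fastest
\sigma(x) = \arg\min_{i \in \{1,\dots,N\}} \; (x - x_e)^\top P \left( A_i x + b_i \right)
```

Because the active mode changes on level-set boundaries, solutions in general do not settle at a single equilibrium, which is why the thesis characterizes the attractor as a limit cycle rather than a point.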

    Application of advanced on-board processing concepts to future satellite communications systems

    An initial definition of on-board processing requirements for an advanced satellite communications system serving domestic markets in the 1990s is presented. An exemplar system architecture with both RF on-board switching and demodulation/remodulation baseband processing was used to identify important issues related to system implementation, cost, and technology development.

    Loss allocation in a distribution system with distributed generation units

    In Denmark, a large share of the electricity is produced by wind turbines and combined heat and power plants (CHPs), most of which are connected to the network through distribution systems. This paper presents a new algorithm for allocating the losses in a distribution system with distributed generation. The algorithm is based on a reduced impedance matrix of the network and the current injections from loads and production units. With the algorithm, the effect of the covariance between production and consumption can be evaluated. To verify the theoretical results, a model of the distribution system in Brønderslev in Northern Jutland, including measurement data, has been studied.
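The paper's exact algorithm is not reproduced here, but a minimal sketch of impedance-matrix-based loss allocation from complex current injections (the classical Z-bus scheme, with purely illustrative matrix values) could look like this:

```python
import numpy as np

def zbus_loss_allocation(Z, I):
    """Allocate total active losses to buses, given the bus impedance
    matrix Z (so that V = Z @ I) and complex current injections I.

    Bus k is assigned L_k = Re{ conj(I_k) * (R @ I)_k } with R = Re(Z);
    these allocations sum exactly to the total losses I^H R I.
    """
    R = Z.real
    return np.real(np.conj(I) * (R @ I))

# Toy 2-bus example with made-up impedances and injections
Z = np.array([[0.02 + 0.06j, 0.01 + 0.03j],
              [0.01 + 0.03j, 0.03 + 0.09j]])
I = np.array([1.0 - 0.2j, -0.6 + 0.1j])   # generation (+) and load (-)

L = zbus_loss_allocation(Z, I)
total = np.real(np.conj(I) @ (Z.real @ I))
assert np.isclose(L.sum(), total)  # allocations recover the total losses
```

Because each bus's share depends on the products of injections, averaging such allocations over time naturally captures the covariance between production and consumption that the paper evaluates.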

    Parallelization, scalability, and reproducibility in next generation sequencing analysis

    The analysis of next-generation sequencing (NGS) data is a major topic in bioinformatics: short reads obtained from DNA, the molecule encoding the genome of living organisms, are processed to provide insight into biological or medical questions. This thesis provides novel solutions to major topics within the analysis of NGS data, focusing on parallelization, scalability and reproducibility. The read mapping problem is to find the origin of the short reads within a given reference genome. We contribute the q-group index, a novel data structure for read mapping with a particularly small memory footprint. The q-group index comes with massively parallel build and query algorithms targeted towards modern graphics processing units (GPUs). On top of it, the read mapping software PEANUT is presented, which outperforms state-of-the-art read mappers in speed while maintaining their accuracy. The variant calling problem is to infer (i.e., call) genetic variants of individuals compared to a reference genome using mapped reads. It is usually solved in a Bayesian way. Often, variant calling is followed by filtering variants of different biological samples against each other. With state-of-the-art solutions, the filtering is decoupled from the calling, leading to difficulties in controlling the false discovery rate. In this work, we show how to integrate the filtering into the calling with an algebraic approach and provide an intuitive solution for controlling the false discovery rate, along with solving other challenges of variant calling such as scaling with a growing set of biological samples. For this, a hierarchical index data structure for storage of preprocessing results is presented and compression strategies are provided. The developed methods are implemented in the software ALPACA. Depending on the research question, the analysis of NGS data entails many other steps, typically involving diverse tools, data transformations and aggregation of results.
These steps can be orchestrated by workflow management. We present the general-purpose workflow system Snakemake, which provides an easy-to-read domain-specific language for defining and documenting workflows, thereby ensuring reproducibility of analyses. The language is complemented by an execution environment that allows scaling a workflow to the available resources, including parallelization across CPU cores or cluster nodes and restricting memory usage or the number of available coprocessors such as GPUs. The benefits of using Snakemake are exemplified by combining the presented approaches for read mapping and variant calling into a complete, scalable and reproducible NGS analysis.
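To give a flavor of the domain-specific language, a Snakemake rule declares its input and output files and the command that produces them, and the system infers the dependency graph from file names. The fragment below follows the general rule syntax from the Snakemake documentation; the file paths, tools, and wildcard names are illustrative, not taken from the thesis's workflows:

```
rule map_reads:
    input:
        ref="data/genome.fa",
        reads="data/samples/{sample}.fastq"
    output:
        "mapped/{sample}.bam"
    threads: 4
    shell:
        "bwa mem -t {threads} {input.ref} {input.reads} "
        "| samtools view -b - > {output}"
```

Requesting `mapped/A.bam` makes Snakemake instantiate the `{sample}` wildcard, run only the jobs whose outputs are missing or outdated, and schedule independent jobs in parallel up to the declared thread and resource limits.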

    Improving software engineering processes using machine learning and data mining techniques

    The availability of large amounts of data from software development has created an area of research called mining software repositories. Researchers mine data from software repositories both to improve the understanding of software development and evolution, and to empirically validate novel ideas and techniques. The large amount of data collected from software processes can then be leveraged for machine learning applications. Indeed, machine learning can have a large impact on software engineering, just as it has had in other fields, supporting developers and other actors involved in the software development process in automating or improving parts of their work. The automation can not only make some phases of the development process less tedious or cheaper, but also more efficient and less prone to errors. Moreover, employing machine learning can reduce the complexity of difficult problems, enabling engineers to focus on more interesting problems rather than on the basics of development. The aim of this dissertation is to show how the development and use of machine learning and data mining techniques can support several software engineering phases, ranging from crash handling to code review, patch uplifting, and software ecosystem management. To validate our thesis, we conducted several studies tackling different problems in an industrial open-source context, focusing on the case of Mozilla.