32 research outputs found

    SLA violation prediction: a machine learning perspective

    Get PDF
    Cloud computing reduces the maintenance costs of services and allows users to access services on demand without being involved in technical implementation details. The relationship between a cloud provider and a customer is governed by a Service Level Agreement (SLA) that defines the level of the service and its associated costs. An SLA usually contains specific parameters and a minimum level of quality for each element of the service, negotiated between the cloud provider and the customer. However, one or more of the agreed terms in an SLA might be violated due to issues such as occasional technical problems. Violations do happen in the real world: in terms of availability, Amazon Elastic Compute Cloud suffered an outage in 2011 when it crashed and many large customers, such as Reddit and Quora, were down for more than a day. Since SLA violation prediction benefits both the user and the cloud provider, cloud researchers have in recent years started investigating models capable of predicting future violations. From a machine learning point of view, the problem of SLA violation prediction amounts to a binary classification problem. In this thesis, we explore two machine learning classification models, Naive Bayes and Random Forest, to predict future violations using the features of a submitted task. Unlike previous work on SLA violation prediction or avoidance, our models are trained on a real-world dataset, which introduces new challenges. We validate our models using the Google Cloud Cluster trace as the dataset.
Since SLA violations are rare events in the real world (2.2%), the classification task becomes more challenging because the classifier will always tend to predict the dominant class. To overcome this issue, we use several re-sampling methods, such as Random Over-Sampling, Under-Sampling, SMOTE, NearMiss, One-Sided Selection, and the Neighborhood Cleaning Rule, as well as an ensemble of them, to re-balance the dataset.
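As an illustration of this kind of pipeline, here is a minimal sketch (assuming scikit-learn and imbalanced-learn; the synthetic data stands in for the Google trace features, and this is not the thesis' exact setup) that re-balances a ~2.2%-positive dataset with SMOTE before fitting a Random Forest:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for the task features: ~2.2% positive (violation) class.
X, y = make_classification(n_samples=20000, n_features=10,
                           weights=[0.978], flip_y=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Re-balance only the training split, then fit the classifier.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```

Re-sampling is applied to the training split only, so the test set keeps the true class imbalance and the reported metrics remain honest.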

    Quartic B-spline wavelets and their application for solving linear integral equations

    Get PDF
    Abstract In this work we address the question of how one can improve the approximation level for some nonlinear integral equations. Good candidates for this purpose are semi-orthogonal B-spline scaling functions and their duals. Although there are various works in this area, only B-splines of degree at most 2 have been used for this approximation. Here we compute B-spline scaling functions of degree 4 and their duals, and we show that, by using them, one obtains better approximation results for the solution of integral equations in comparison with lower degrees or other kinds of scaling functions. Some numerical examples show their attractiveness and usefulness.
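For orientation, here is a minimal sketch (assuming NumPy/SciPy; it is not the paper's own computation of the duals) that evaluates the cardinal degree-4 B-spline scaling function the abstract builds on:

```python
import numpy as np
from scipy.interpolate import BSpline

# Cardinal quartic (degree-4) B-spline: 6 integer knots yield one basis
# element of degree len(knots) - 2 = 4, supported on [0, 5].
knots = np.arange(6)
b4 = BSpline.basis_element(knots)

x = np.linspace(0, 5, 11)
print(b4(x))    # samples of the scaling function across its support
print(b4(2.5))  # peak value at the center of the support
```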

    LEAD: Least-Action Dynamics for Min-Max Optimization

    Full text link
    Adversarial formulations such as generative adversarial networks (GANs) have rekindled interest in two-player min-max games. A central obstacle in the optimization of such games is the rotational dynamics that hinder their convergence. Existing methods typically employ intuitive, carefully hand-designed mechanisms for controlling such rotations. In this paper, we take a novel approach to address this issue by casting min-max optimization as a physical system. We leverage tools from physics to introduce LEAD (Least-Action Dynamics), a second-order optimizer for min-max games. Next, using Lyapunov stability theory and spectral analysis, we study LEAD's convergence properties in continuous and discrete-time settings for bilinear games to demonstrate linear convergence to the Nash equilibrium. Finally, we empirically evaluate our method on synthetic setups and CIFAR-10 image generation to demonstrate improvements over baseline methods
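To see the rotational obstacle concretely, here is a minimal Python sketch (illustrating the problem, not LEAD itself, and not the paper's code) of simultaneous gradient descent-ascent on the bilinear game f(x, y) = xy, whose iterates spiral away from the Nash equilibrium at the origin:

```python
import numpy as np

# Simultaneous gradient descent-ascent on the bilinear game f(x, y) = x*y.
# The Nash equilibrium is (0, 0); plain GDA rotates around it and diverges,
# which is the rotational dynamics the abstract refers to.
eta = 0.1
x, y = 1.0, 1.0
for step in range(100):
    gx, gy = y, x                      # grad_x f = y, grad_y f = x
    x, y = x - eta * gx, y + eta * gy  # descend in x, ascend in y
print(np.hypot(x, y))                  # distance from equilibrium grows
```

Each update multiplies the iterate by a matrix with eigenvalues 1 ± iη of modulus sqrt(1 + η²) > 1, so plain GDA diverges for any step size; controlling this rotation is what second-order methods like LEAD target.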

    Coherent Frames

    Get PDF
    Frames that can be generated by the action of some operators (e.g., translation, dilation, modulation) on a single element $f$ in a Hilbert space are called coherent frames. In this paper, we introduce a class of continuous frames in a Hilbert space $\mathcal{H}$ which is indexed by some locally compact group $G$, equipped with its left Haar measure. These frames are obtained as the orbits of a single element of the Hilbert space $\mathcal{H}$ under some unitary representation $\pi$ of $G$ on $\mathcal{H}$. It is interesting that most important frames are coherent. We investigate the canonical dual and combinations of these frames.
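For orientation, the standard continuous-frame condition behind this construction (stated here in the abstract's notation; the paper's precise definition may differ in details) reads:

```latex
% The orbit \{\pi(g)f\}_{g \in G}, indexed by G with left Haar measure \mu,
% is a coherent (continuous) frame for \mathcal{H} if there exist constants
% 0 < A \le B < \infty such that, for every h \in \mathcal{H},
A\,\lVert h\rVert^{2}
  \;\le\; \int_{G} \bigl|\langle h,\, \pi(g)f \rangle\bigr|^{2}\, d\mu(g)
  \;\le\; B\,\lVert h\rVert^{2}.
```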

    Improving the Distribution of Rural Health Houses Using Elicitation and GIS in Khuzestan Province (the Southwest of Iran)

    Get PDF
    Abstract Background: Rural health houses constitute a major provider of primary health services in the villages of Iran. Given the challenges of providing health services in rural areas, health houses should be established based on the criteria of health network systems (HNSs). The value of these criteria and their precedence over one another have not yet been thoroughly investigated. The present study was conducted to propose a model for improving the distribution of rural health houses in HNSs. Methods: The present applied study was conducted in Khuzestan province in the southwest of Iran in 2014-2016. First, the descriptive and spatial data required were collected and entered into ArcGIS after modifications, and the Geodatabase was then created. Based on the criteria of the HNS and according to experts' opinions, the main criteria and the sub-criteria for an optimal site selection were determined. To determine the criteria's coefficients of importance (i.e., their weights), the main criteria and the sub-criteria were compared in pairs according to experts' opinions. The results of the pairwise comparisons were entered into Expert Choice and the weights of the main criteria and the sub-criteria were determined using the analytic hierarchy process (AHP). The application layers were then formed in the geographic information system (GIS). A model was ultimately proposed in the GIS for the optimal distribution of rural health houses by overlaying the weighting layers and the other layers related to villages and rural health houses. Results: Based on the experts' opinions, six criteria were determined as the main criteria for an optimal site selection for rural health houses, including welfare infrastructures, population, dispersion, accessibility, corresponding routes, distance to the rural health center and the absence of natural barriers to accessibility. Of the main criteria proposed, the highest weight was given to "population" (0.506). The priorities suggested in the proposed model for establishing rural health houses are presented within five zoning levels (from excellent to very poor). Conclusion: The results of the study showed that the proposed model can help provide a better picture of the distribution of rural health houses. GIS is recommended as a means of making the HNS more efficient.
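To illustrate the AHP weighting step, here is a minimal sketch (with a hypothetical 3x3 comparison matrix; the study itself used Expert Choice on its full criteria set) that derives criterion weights from pairwise comparisons via the principal eigenvector:

```python
import numpy as np

# AHP weight derivation from a pairwise comparison matrix (hypothetical values).
# Entry A[i, j] states how much more important criterion i is than criterion j,
# so A[j, i] = 1 / A[i, j] and the diagonal is 1.
A = np.array([
    [1.0, 5.0, 3.0],
    [1/5, 1.0, 1/2],
    [1/3, 2.0, 1.0],
])

# The principal right eigenvector, normalized to sum to 1, gives the weights.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()
print(w)  # the first criterion receives the largest weight in this example
```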

    Hybridization and its application in aquaculture

    Get PDF
    Inter-specific hybrids are usually formed by mating two different species in the same genus. They have been produced to increase growth rate, improve production performance, transfer desirable traits, reduce unwanted reproduction, combine other valuable traits such as good flesh quality and disease resistance, increase environmental tolerance, achieve better feed conversion, and increase harvesting rates in culture systems. Hybrids play a significant role in helping to increase aquaculture production of several species of freshwater and marine fishes; examples include hybrid catfish in Thailand, hybrid striped bass in the USA, hybrid tilapia in Israel, and hybrid characids in Venezuela. As the domestication of fish species increases, so do the opportunities to increase production through appropriate hybridization techniques, with a view to producing new hybrid fishes, especially in culture systems where sterile fish may be preferred because of the concern that fish may escape into open freshwater, marine, and coastal environments. Intentional or accidental hybridization can lead to unexpected results in hybrid progeny, such as reduced viability and growth performance or loss of color pattern and flesh quality, and it also raises risks for the maintenance of genetic integrity. Appropriate knowledge of the genetic constitution of the brood stock, proper brood stock management, and monitoring of the viability and fertility of the progeny of brood fishes are thus crucial before initiating hybridization experiments. In addition, some non-genetic factors, such as weather conditions, culture systems, seasons, and the stresses associated with selecting, collecting, handling, breeding, and rearing brood stock and progeny, may strongly influence hybridization success in a wide variety of freshwater and marine finfishes.

    Hardware aware acceleration in deep neural networks

    No full text
    ABSTRACT: Deep neural networks have become increasingly sophisticated in recent years, allowing them to handle more complex tasks. However, as their capabilities have grown, so too have their size and computational demands. Especially for edge devices, where computation and power consumption are of the utmost importance, running complex models efficiently is a challenge. One of the effective methods to reduce the power requirements and computational complexity of a deep neural network is quantization. This process involves mapping floating-point values to integer values in a way that minimizes the loss of accuracy. By reducing the precision of the parameters and intermediate computations, quantization can lead to faster inference and lower memory requirements, which are particularly beneficial for deploying neural networks on resource-constrained devices. In this dissertation, our goal is to understand how quantization works and its impact on training neural networks. Additionally, we explore the most effective means of employing quantization by proposing novel custom hardware. To achieve these objectives, we present five distinct articles. The first article introduces a novel fixed-point quantization algorithm that specifically targets accelerating inference for medical image segmentation tasks. Our research findings reveal three key takeaways. Firstly, quantization can be leveraged to enhance computation speed even in medical applications that demand high precision. Secondly, our experiments suggest that there may be slight improvements in accuracy over full-precision models when using quantization. This led us to investigate the potential regularization effects of quantization. Finally, we discovered that commodity hardware may present a bottleneck for the efficient deployment of quantized models, highlighting the need for bespoke hardware designs. In the second article, building on the insights gained from the first, we formulated a hypothesis about the regularization effect of quantization. Through our empirical investigation, we found that while not all quantization levels exhibit this effect, 8-bit quantization reliably provides a form of regularization. To address the computational demands of quantized models, we present custom hardware solutions in the third, fourth, and fifth articles. In the third and fourth articles, we propose a fully customized accelerator capable of running quantized models with arbitrary precision. Finally, in the fifth article, we demonstrate the modifications required for a general-purpose vector processor to run quantized models with sub-byte precision. These articles collectively contribute to our goal of exploring the potential of quantization and developing efficient hardware solutions for its deployment.
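As a concrete illustration of the float-to-integer mapping described above, here is a minimal sketch of symmetric 8-bit quantization in NumPy; the function names are ours, and the thesis' own fixed-point algorithm is more involved:

```python
import numpy as np

# Symmetric 8-bit quantization: map the largest weight magnitude to 127 and
# round everything else onto the integer grid defined by that scale.
def quantize_int8(w: np.ndarray):
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print(np.abs(w - dequantize(q, s)).max())  # rounding error is at most ~s/2
```

Storing int8 values cuts memory traffic fourfold versus float32 and lets inference run on integer arithmetic units, which is the efficiency the custom accelerators in the later articles exploit.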