
    Weight filtration on the cohomology of complex analytic spaces

    We extend Deligne's weight filtration to the integer cohomology of complex analytic spaces (endowed with an equivalence class of compactifications). In general, the weight filtration that we obtain is not part of a mixed Hodge structure. Our purely geometric proof is based on cubical descent for resolution of singularities and Poincaré-Verdier duality. Using similar techniques, we introduce the singularity filtration on the cohomology of compactifiable analytic spaces. This is a new and natural analytic invariant which does not depend on the equivalence class of compactifications and is related to the weight filtration.

    Optimal MapReduce Job Capacity Allocation in Cloud Systems

    We are entering a Big Data world. Many sectors of our economy are now guided by data-driven decision processes. Big Data and business intelligence applications are facilitated by the MapReduce programming model while, at the infrastructural layer, cloud computing provides flexible and cost-effective solutions for allocating large clusters on demand. Capacity allocation in such systems is a key challenge in providing performance guarantees for MapReduce jobs while minimizing cloud resource costs. The contribution of this paper is twofold: (i) we provide new upper and lower bounds for MapReduce job execution time in shared Hadoop clusters; (ii) we formulate a linear programming model that minimizes cloud resource costs and job rejection penalties for the execution of jobs of multiple classes with (soft) deadline guarantees. Simulation results show that the execution time of MapReduce jobs falls within 14% of our upper bound on average. Moreover, numerical analyses demonstrate that our method determines the global optimal solution of the linear problem for systems with up to 1,000 user classes in less than 0.5 seconds.
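    As an illustration of the kind of formulation described above, the following is a minimal sketch of a capacity-allocation linear program solved with scipy.optimize.linprog. The job classes, costs, demands, and the deadline-to-slots approximation are invented for the example and are not the paper's model.

```python
# Hypothetical mini-instance: choose slots per job class to minimize slot
# cost plus rejection penalties, with deadlines approximated as a minimum
# number of allocated slots per class.
from scipy.optimize import linprog

# Decision variables: x = [slots_class1, slots_class2, rejected_jobs_class2]
cost = [0.10, 0.10, 5.0]          # $/slot-hour, $/slot-hour, $/rejected job
# Deadline constraints as lower bounds on capacity:
#   slots_1 >= 40                 ->  -x0            <= -40
#   slots_2 + rejected_2 >= 25    ->  -x1 - x2       <= -25
A_ub = [[-1, 0, 0],
        [0, -1, -1]]
b_ub = [-40, -25]
res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.status, [round(v, 1) for v in res.x])   # 0 [40.0, 25.0, 0.0]
```

Because a slot-hour is far cheaper than the rejection penalty here, the optimum serves all demand and rejects nothing.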

    A Declarative Framework for Specifying and Enforcing Purpose-aware Policies

    Purpose is crucial for privacy protection as it makes users confident that their personal data are processed as intended. Available proposals for the specification and enforcement of purpose-aware policies are unsatisfactory because of their ambiguous semantics of purposes and/or their lack of support for the run-time enforcement of policies. In this paper, we propose a declarative framework based on a first-order temporal logic that allows us to give a precise semantics to purpose-aware policies and to reuse algorithms for the design of a run-time monitor enforcing purpose-aware policies. We also characterize the complexity of generating and using the monitor which, to the best of our knowledge, is the first such result in the literature on purpose-aware policies. Extended version of the paper accepted at the 11th International Workshop on Security and Trust Management (STM 2015).
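    A toy illustration (not the paper's logic or monitor construction) of what run-time enforcement of a purpose-aware policy can look like over a finite event trace: data collected for a declared purpose may only be used by actions permitted for that purpose. All purposes, actions, and event names are hypothetical.

```python
# Hypothetical policy: which actions each purpose permits.
ALLOWED = {"shipping": {"read_address", "print_label"},
           "marketing": {"send_newsletter"}}

def monitor(trace):
    """Return False as soon as a use violates the declared purpose."""
    purpose_of = {}                           # data item -> declared purpose
    for event, item, label in trace:
        if event == "collect":
            purpose_of[item] = label          # label carries the purpose
        elif event == "use":
            p = purpose_of.get(item)
            if p is None or label not in ALLOWED.get(p, set()):
                return False                  # run-time policy violation
    return True

good = [("collect", "addr1", "shipping"), ("use", "addr1", "read_address")]
bad  = [("collect", "addr1", "shipping"), ("use", "addr1", "send_newsletter")]
print(monitor(good), monitor(bad))            # True False
```

The real framework obtains such a monitor from a first-order temporal-logic specification rather than hand-coded checks; this sketch only conveys the enforcement idea.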

    MALIBOO: When Machine Learning meets Bayesian Optimization

    Bayesian Optimization (BO) is an efficient method for finding optimal cloud computing configurations for several types of applications. On the other hand, Machine Learning (ML) methods can provide useful knowledge about the application at hand thanks to their predictive capabilities. In this paper, we propose a hybrid algorithm, based on BO and integrating elements of ML techniques, to find the optimal configuration of time-constrained recurring jobs executed in cloud environments. The algorithm is tested on edge computing and Apache Spark big data applications. Our results show that this algorithm reduces the number of unfeasible executions by up to 2-3 times with respect to state-of-the-art techniques.
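    A greatly simplified, self-contained sketch of the key idea: an ML predictor of execution time steers the search away from configurations expected to violate the time constraint, so fewer unfeasible runs are paid for. It replaces the BO surrogate with a trivial inverse-scaling predictor, and the application and cost models are synthetic, not MALIBOO's.

```python
import random

random.seed(0)
DEADLINE = 1.0                       # time constraint per run (synthetic)

def exec_time(cores):                # synthetic application: time ~ 4/cores
    return 4.0 / cores

def cost(cores):                     # synthetic cloud cost: linear in cores
    return 0.2 * cores

observed = {}                        # cores -> measured time (past runs)

def predicted_time(cores):           # trivial ML stand-in: inverse scaling
    if not observed:
        return 0.0                   # no knowledge yet: assume feasible
    c0 = min(observed, key=lambda c: abs(c - cores))
    return observed[c0] * c0 / cores

best = None
for cand in random.sample(range(1, 17), 16):   # randomized candidate order
    if predicted_time(cand) > DEADLINE:
        continue                     # skip a run predicted to be unfeasible
    t = exec_time(cand)
    observed[cand] = t
    if t <= DEADLINE and (best is None or cost(cand) < cost(best)):
        best = cand
print(best)                          # cheapest feasible core count: 4
```

Once one run has been observed, the predictor prunes all under-provisioned candidates without executing them, which is the source of the reduction in unfeasible executions.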

    SPACE4AI-R: a Runtime Management Tool for AI Applications Component Placement and Resource Scaling in Computing Continua

    The recent migration towards the Internet of Things has determined the rise of the Computing Continuum paradigm, where Edge and Cloud resources coordinate to support the execution of Artificial Intelligence (AI) applications, becoming the foundation of use cases spanning from predictive maintenance to machine vision and healthcare. This generates a fragmented scenario where computing and storage power are distributed among multiple devices with highly heterogeneous capacities. The runtime management of AI applications executed in the Computing Continuum is challenging and requires ad-hoc solutions. We propose SPACE4AI-R, which combines Random Search and Stochastic Local Search algorithms to cope with workload fluctuations by identifying the minimum-cost reconfiguration of the initial production deployment, while providing performance guarantees across heterogeneous resources including Edge devices and servers, Cloud GPU-based Virtual Machines, and Function-as-a-Service solutions. Experimental results prove the efficacy of our tool, yielding up to 60% cost reductions against a static design-time placement, with a maximum execution time under 1.5 s in the most complex scenarios.
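    A toy version of the random-search-plus-stochastic-local-search scheme: random search produces a feasible starting placement, then single-component swaps are accepted whenever they cut cost while the end-to-end latency bound still holds. Resources, costs, latencies, and the additive latency model are invented for illustration and are far simpler than the tool's actual performance models.

```python
import random

random.seed(1)
# resource -> (hourly cost, per-component latency); values are invented
RESOURCES = {"edge": (1.0, 0.9), "gpu_vm": (4.0, 0.2), "faas": (2.0, 0.4)}
COMPONENTS = ["detect", "classify", "report"]
MAX_LATENCY = 1.8                     # end-to-end performance constraint

def cost(p):    return sum(RESOURCES[r][0] for r in p.values())
def latency(p): return sum(RESOURCES[r][1] for r in p.values())

def random_feasible():                # random search for a starting point
    while True:
        p = {c: random.choice(list(RESOURCES)) for c in COMPONENTS}
        if latency(p) <= MAX_LATENCY:
            return p

best = random_feasible()
for _ in range(200):                  # stochastic local search: single swaps
    cand = dict(best)
    cand[random.choice(COMPONENTS)] = random.choice(list(RESOURCES))
    if latency(cand) <= MAX_LATENCY and cost(cand) < cost(best):
        best = cand
print(cost(best))                     # cost of the best feasible placement
```

Because only cost-improving feasible moves are accepted, the final placement is never worse than the random start and typically much cheaper.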

    A Random Greedy based Design Time Tool for AI Applications Component Placement and Resource Selection in Computing Continua

    Artificial Intelligence (AI) and Deep Learning (DL) are pervasive today, with applications spanning from personal assistants to healthcare. Nowadays, the accelerated migration towards mobile computing and the Internet of Things, where a huge amount of data is generated by widespread end devices, is determining the rise of the edge computing paradigm, where computing resources are distributed among devices with highly heterogeneous capacities. In this fragmented scenario, efficient component placement and resource allocation algorithms are crucial to best orchestrate the computing continuum resources. In this paper, we propose a tool that effectively addresses the component placement problem for AI applications at design time. Through a randomized greedy algorithm, our approach identifies the minimum-cost placement that provides performance guarantees across heterogeneous resources, including edge devices, cloud GPU-based Virtual Machines, and Function-as-a-Service solutions. Finally, we compare the randomized greedy method with the HyperOpt framework and demonstrate that our approach converges to a near-optimal solution much faster, especially in large-scale systems.
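    A hedged sketch of a randomized greedy construction in the spirit described above (resources, latencies, and the latency budget are invented): each restart visits the components in a random order and greedily assigns the cheapest resource that keeps the running latency within budget, reserving the best-case latency for the components still to place; the best complete placement over all restarts is kept.

```python
import random

random.seed(0)
# resource -> (hourly cost, per-component latency); values are invented
RESOURCES = {"edge": (1.0, 0.9), "gpu_vm": (4.0, 0.2), "faas": (2.0, 0.4)}
COMPONENTS = ["preprocess", "infer", "postprocess"]
MAX_LATENCY = 1.8
BEST_CASE = min(l for _, l in RESOURCES.values())   # 0.2

def greedy_once():
    order = random.sample(COMPONENTS, len(COMPONENTS))
    placement, used = {}, 0.0
    for comp in order:
        # latency that must be reserved for the components left to place
        reserve = (len(COMPONENTS) - len(placement) - 1) * BEST_CASE
        options = [(c, l, r) for r, (c, l) in RESOURCES.items()
                   if used + l + reserve <= MAX_LATENCY]
        if not options:
            return None, float("inf")       # restart produced no placement
        c, l, r = min(options)              # cheapest fitting resource
        placement[comp] = r
        used += l
    return placement, sum(RESOURCES[r][0] for r in placement.values())

best, best_cost = min((greedy_once() for _ in range(20)),
                      key=lambda t: t[1])
print(best_cost)                            # 5.0
```

Random restarts make the greedy choices explore different construction orders, which is what lets the method approach the optimum quickly in larger instances.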

    Bayesian optimization with machine learning for big data applications in the cloud

    Bayesian Optimization is a promising method for efficiently finding optimal cloud computing configurations for big data applications. Machine Learning methods can provide useful knowledge about the application at hand thanks to their predictive capabilities. In this paper, we propose a hybrid algorithm that is based on Bayesian Optimization and integrates elements of Machine Learning techniques to tackle time-constrained optimization problems in a cloud computing setting.