30 research outputs found

    A posteriori error estimates for nonlinear initial value problems in the setting of Banach spaces and semigroups

    A time discretization based on the backward Euler method is considered for the abstract nonlinear parabolic problem u' = F(u), u(0) = u0. A posteriori estimates for this time discretization are derived in the framework of Banach spaces, semigroups and maximal regularity. The resulting estimates are of conditional type, that is, they hold under hypotheses that can be verified in practice, such as conditions on the numerical solution itself
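    As a point of reference for the scheme discussed above, the sketch below shows backward Euler time stepping for a scalar model problem u' = F(u), u(0) = u0, with a Newton solve of the implicit equation at each step. The function names and the logistic example are illustrative only; the paper's a posteriori estimators are not reproduced here.

```python
# Minimal sketch (not the paper's estimator): backward Euler time stepping
# for a scalar model problem u' = F(u), u(0) = u0, solving the implicit
# equation at each step with Newton's method.
import numpy as np

def backward_euler(F, dF, u0, T, N, newton_iters=20, tol=1e-12):
    """Solve u' = F(u) on [0, T] with N uniform backward Euler steps."""
    k = T / N                      # time step size
    u = np.empty(N + 1)
    u[0] = u0
    for n in range(N):
        v = u[n]                   # initial Newton guess: previous value
        for _ in range(newton_iters):
            # Residual of the implicit equation v - u_n - k*F(v) = 0
            r = v - u[n] - k * F(v)
            if abs(r) < tol:
                break
            v -= r / (1.0 - k * dF(v))
        u[n + 1] = v
    return u

# Illustrative example: logistic-type nonlinearity F(u) = u*(1 - u)
u = backward_euler(lambda u: u * (1 - u), lambda u: 1 - 2 * u, u0=0.1, T=5.0, N=50)
print(u[-1])   # approaches the steady state u = 1
```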

    Non-stationary wave relaxation methods for general linear systems of Volterra equations: convergence and parallel GPU implementation

    In the present paper, a parallel-in-time discretization of linear systems of Volterra equations of the type ū(t) = ū₀ + ∫₀^t K(t−s) ū(s) ds + f̄(t), 0 < t ≤ T, is addressed. Regarding the analytical solution, a sufficiently general functional setting is first established. Regarding the numerical solution, a parallel scheme based on the Non-Stationary Wave Relaxation (NSWR) method for the time discretization is proposed, and its convergence is studied. A CUDA parallel implementation of the method is carried out in order to exploit Graphics Processing Units (GPUs), which are nowadays widely employed to reduce the computational time of many general-purpose applications. The performance of these methods is compared with a sequential implementation, and several experiments of special interest in practical applications reveal the good performance of the parallel approach. Funded by the Italian Ministry of Universities and Research (MUR) through the PRIN 2017 project (No. 2017JYCLSF) "Structure-preserving approximation of evolutionary problems". Open access publication funded by the Consorcio de Bibliotecas Universitarias de Castilla y León (BUCLE), under Operational Programme 2014ES16RFOP009 FEDER 2014-2020 de Castilla y León, Action 20007-CL - Apoyo Consorcio BUCLE
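    As background for the equation above, the following sequential sketch discretizes a scalar Volterra equation of that form with the trapezoidal rule. It is a baseline illustration only, not the NSWR scheme or the CUDA implementation studied in the paper.

```python
# Minimal sequential sketch (not the paper's NSWR scheme): trapezoidal
# collocation for the scalar Volterra equation
#   u(t) = u0 + \int_0^t K(t - s) u(s) ds + f(t),  0 < t <= T.
import numpy as np

def volterra_trapezoidal(K, f, u0, T, N):
    """Return (t, u) on a uniform grid of N+1 points in [0, T]."""
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    u = np.empty(N + 1)
    u[0] = u0 + f(0.0)                       # equation evaluated at t = 0
    for n in range(1, N + 1):
        # Trapezoidal weights for \int_0^{t_n} K(t_n - s) u(s) ds
        w = np.full(n + 1, h)
        w[0] = w[-1] = h / 2.0
        known = np.dot(w[:-1] * K(t[n] - t[:n]), u[:n])   # past contributions
        # Solve u_n = u0 + f(t_n) + known + w_n K(0) u_n for the new value
        u[n] = (u0 + f(t[n]) + known) / (1.0 - w[-1] * K(0.0))
    return t, u

# Illustrative example: exponential kernel, zero forcing
t, u = volterra_trapezoidal(lambda r: np.exp(-r), lambda s: 0.0, u0=1.0, T=2.0, N=200)
print(u[-1])
```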

    Implementation and Provisioning of Federated Networks in Hybrid Clouds (pre-print)

    Federated cloud networking is needed to allow the seamless and efficient interconnection of resources distributed among different clouds. This work introduces a new cloud network federation framework for the automatic provision of Layer 2 (L2) and Layer 3 (L3) virtual networks to interconnect geographically distributed cloud infrastructures in a hybrid cloud scenario. After a review of existing encapsulation technologies for implementing L2 and L3 overlay networks, the paper analyzes the main topologies that can be used to construct federated network overlays within hybrid clouds. In order to demonstrate the proposed solution and compare the different topologies, the article presents a proof of concept of a real federated network deployment in a hybrid cloud, which spans a local private cloud, managed with OpenNebula, and two public clouds, namely two different regions of Amazon EC2. Results show that L2 and L3 overlay connectivity can be achieved with a minimal bandwidth overhead, lower than 10%
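    By way of illustration of the encapsulation technologies mentioned above, the sketch below creates one endpoint of a point-to-point VXLAN (L2 overlay) tunnel with standard iproute2 commands. The interface names, VNI and addresses are placeholders; the paper's framework automates this provisioning rather than relying on a script like this.

```python
# Illustrative sketch only: one end of a point-to-point VXLAN (L2 overlay)
# tunnel set up with iproute2 (requires root). All names and addresses are
# placeholders, not values taken from the paper's testbed.
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

def create_vxlan_endpoint(vni, local_underlay_dev, remote_underlay_ip, overlay_cidr):
    """Create a VXLAN interface tunnelled to a remote cloud site."""
    run(f"ip link add vxlan{vni} type vxlan id {vni} "
        f"remote {remote_underlay_ip} dstport 4789 dev {local_underlay_dev}")
    run(f"ip addr add {overlay_cidr} dev vxlan{vni}")   # address on the overlay
    run(f"ip link set vxlan{vni} up")

if __name__ == "__main__":
    # e.g. private-cloud side pointing at a public-cloud VM's public IP
    create_vxlan_endpoint(100, "eth0", "203.0.113.10", "10.0.100.1/24")
```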

    Interoperable Federated Cloud Networking

    The BEACON framework enables the provision of federated cloud infrastructures, with special emphasis on inter-cloud networking and security issues, to support the automated deployment of applications and services across different clouds and datacenters. BEACON is distributed as open source (see http://github.com/BeaconFramework) and some enhancements are being contributed to the OpenNebula and OpenStack cloud management platforms

    Cross-Site Virtual Network in Cloud and Fog Computing

    The interconnection of geographically dispersed cloud and fog infrastructures is a key issue for the development of fog technology. Although most existing cloud providers and platforms offer some kind of connectivity service to allow interconnection with external networks, these services exhibit many limitations and are not suitable for fog computing environments. In this work, we present a hybrid fog and cloud interconnection framework that allows the automatic provision of cross-site virtual networks to interconnect geographically distributed cloud and fog infrastructures. This framework provides a scalable and multi-tenant solution, and a simple and generic interface for instantiating, configuring and deploying Layer 2 and Layer 3 overlay networks across heterogeneous fog and cloud platforms, abstracting away the underlying cloud/fog and network virtualization technologies
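    To make the idea of a simple and generic interface concrete, here is a hypothetical sketch of how a cross-site network request could be described and handed to a provisioning driver. The class and function names are invented for illustration and are not the framework's real API.

```python
# Hypothetical sketch of a generic cross-site overlay network description;
# these classes and names are illustrative only, not the framework's API.
from dataclasses import dataclass, field

@dataclass
class SiteAttachment:
    site: str          # e.g. "onprem-opennebula", "aws-eu-west-1", "fog-edge-03"
    subnet: str        # overlay subnet assigned to this site
    tenant: str        # multi-tenant isolation key

@dataclass
class CrossSiteNetwork:
    name: str
    layer: int                                  # 2 for an L2 overlay, 3 for a routed L3 overlay
    attachments: list = field(default_factory=list)

    def attach(self, site, subnet, tenant):
        self.attachments.append(SiteAttachment(site, subnet, tenant))

def provision(net: CrossSiteNetwork):
    """Stub: a real driver would translate this description into VXLAN/GRE
    tunnels or provider-specific constructs at every listed site."""
    for a in net.attachments:
        print(f"[{net.name}] L{net.layer} overlay -> {a.site} ({a.subnet}, tenant={a.tenant})")

net = CrossSiteNetwork("analytics-net", layer=3)
net.attach("onprem-opennebula", "10.10.1.0/24", "tenant-a")
net.attach("aws-eu-west-1", "10.10.2.0/24", "tenant-a")
provision(net)
```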

    GWpilot: Enabling multi-level scheduling in distributed infrastructures with GridWay and pilot jobs

    Current systems based on pilot jobs are not exploiting all the scheduling advantages that the technique offers, or they lack compatibility or adaptability. To overcome the limitations and drawbacks of existing approaches, this study presents a different general-purpose pilot system, GWpilot. This system provides individual users or institutions with an easier-to-use, easy-to-install, scalable, extendable, flexible and adjustable framework to efficiently run legacy applications. The framework is based on the GridWay meta-scheduler and incorporates its powerful features, such as standard interfaces, fair-share policies, ranking, migration, accounting and compatibility with diverse infrastructures. GWpilot goes beyond establishing simple network overlays to overcome the waiting times in remote queues or to improve the reliability of task production. It properly tackles the characterisation problem in current infrastructures, allowing users to arbitrarily incorporate customised monitoring of resources and their running applications into the system. This functionality allows the new framework to implement innovative scheduling algorithms that meet the computational needs of a wide range of calculations faster and more efficiently. The system can also be easily stacked under other software layers, such as self-schedulers. The advanced techniques included by default in the framework result in significant performance improvements, even when very short tasks are scheduled
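    For readers unfamiliar with the pilot-job technique, the following schematic loop illustrates the general idea of a pilot that pulls tasks and reports customised resource characteristics. The server URL, endpoints and JSON fields are invented for illustration and are not GWpilot's actual interfaces.

```python
# Schematic pilot-job loop illustrating the general technique; the task
# server, endpoints and JSON fields are hypothetical, not GWpilot's own.
import json, os, platform, shutil, subprocess, time, urllib.request

SERVER = "http://scheduler.example.org:8080"   # hypothetical task server

def characterise():
    """Custom resource monitoring reported back to the scheduler."""
    return {
        "host": platform.node(),
        "cpus": os.cpu_count(),
        "free_disk_gb": shutil.disk_usage("/").free // 2**30,
    }

def fetch_task():
    # Hypothetical endpoint returning e.g. {"id": 42, "cmd": ["./app", "in.dat"]} or {}
    with urllib.request.urlopen(f"{SERVER}/next-task") as r:
        return json.load(r)

def pilot_loop(idle_limit=600):
    last_work = time.time()
    while time.time() - last_work < idle_limit:   # pilot retires itself when idle
        task = fetch_task()
        if not task:
            time.sleep(10)
            continue
        last_work = time.time()
        rc = subprocess.run(task["cmd"]).returncode
        report = {"task": task["id"], "rc": rc, "resources": characterise()}
        urllib.request.urlopen(f"{SERVER}/report", data=json.dumps(report).encode())

if __name__ == "__main__":
    pilot_loop()
```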

    On Demand/Agile Deployment of Edge Cloud Infrastructures for Federated Learning

    Depto. de Arquitectura de Computadores y Automática, Fac. de Informática. Ministerio de Ciencia e Innovación (MICINN), Comunidad de Madrid

    Disk Image Storage, Distribution and Caching for Edge Cloud Infrastructures

    Depto. de Arquitectura de Computadores y Automática, Fac. de Informática. Ministerio de Ciencia e Innovación (MICINN), Comunidad de Madrid

    Latency and resource consumption analysis for serverless edge analytics

    The serverless computing model, implemented by Function as a Service (FaaS) platforms, can offer several advantages for the deployment of data analytics solutions in IoT environments, such as agile and on-demand resource provisioning, automatic scaling, high elasticity, infrastructure management abstraction, and a fine-grained cost model. Nonetheless, in the case of applications with strict latency requirements, the cold start problem in FaaS platforms can represent an important drawback. The most common techniques to alleviate this problem, mainly based on instance pre-warming and instance reusing mechanisms, are usually not well adapted to different application profiles and, in general, can entail an extra expense of resources. In this work, we analyze the effect of instance pre-warming and instance reusing on both application latency (response time) and resource consumption, for a typical data analytics use case (a machine learning application for image classification) with different input data patterns. Furthermore, we propose to extend the classical centralized cloud-based serverless FaaS platform to a two-tier distributed edge-cloud platform to bring the platform closer to the data source and reduce network latencies
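    As an illustration of the kind of measurement discussed above, the sketch below times a first (cold) invocation of a FaaS function against subsequent warm invocations. The endpoint URL is a placeholder and the script is not the paper's benchmarking setup.

```python
# Minimal sketch: compare cold-start latency with warm-invocation latency of
# a deployed FaaS function. The endpoint URL is a placeholder only.
import statistics, time, urllib.request

FUNCTION_URL = "https://faas.example.org/function/image-classifier"  # placeholder

def invoke():
    t0 = time.perf_counter()
    with urllib.request.urlopen(FUNCTION_URL) as r:
        r.read()
    return time.perf_counter() - t0

cold = invoke()                        # likely includes container/runtime start-up
warm = [invoke() for _ in range(20)]   # instance reuse should keep these low
print(f"cold start:  {cold * 1000:.1f} ms")
print(f"warm median: {statistics.median(warm) * 1000:.1f} ms")
```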