120 research outputs found

    Transparent resource sharing framework for internet services on handheld devices

    Handheld devices have limited processing power and a short battery lifetime. As a result, computationally intensive applications cannot run properly or drain the battery too early. Additionally, Internet-based service providers targeting these mobile devices lack information to estimate the remaining battery autonomy and have no view on the availability of idle resources in the neighborhood of the handheld device. These battery-related issues create an opportunity for Internet providers to broaden their role and start managing energy aspects of battery-driven mobile devices inside the home. In this paper, we propose an energy-aware resource-sharing framework that enables Internet access providers to delegate (a part of) a client application from a handheld device to idle resources in the LAN, transparently to the end-user. The key component is the resource sharing service, hosted on the LAN gateway, which can be remotely queried and managed by the Internet access provider. The service includes a battery model to predict the remaining battery lifetime. We describe the concept of resource-sharing-as-a-service, which allows users of handheld devices to subscribe to the resource sharing service. In a proof of concept, we evaluate the delay of offloading a client application to an idle computer and study the impact on battery autonomy as a function of the CPU cycles that can be offloaded.
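
    The offload decision described above comes down to comparing the predicted battery autonomy with and without delegating CPU cycles to an idle LAN resource. The abstract does not spell out that decision rule, so the following Java sketch is only an illustration under assumed names (OffloadDecision, shouldOffload, a simple linear drain model); it is not the framework's actual API.

    // Hypothetical sketch: decide whether to offload part of a client application
    // from a handheld device to an idle machine in the LAN. All names and the
    // linear battery model are illustrative assumptions, not the paper's API.
    public class OffloadDecision {

        /** Remaining battery lifetime in hours for a given average drain (simple linear model). */
        static double predictedLifetimeHours(double remainingWh, double drainWatts) {
            return remainingWh / drainWatts;
        }

        /**
         * Offload if running the offloadable CPU cycles remotely extends the
         * predicted autonomy by more than a configured minimum gain.
         */
        static boolean shouldOffload(double remainingWh,
                                     double localDrainWatts,   // CPU + screen when running locally
                                     double remoteDrainWatts,  // screen + network when offloaded
                                     double minGainHours) {
            double local  = predictedLifetimeHours(remainingWh, localDrainWatts);
            double remote = predictedLifetimeHours(remainingWh, remoteDrainWatts);
            return (remote - local) > minGainHours;
        }

        public static void main(String[] args) {
            // Example numbers, purely illustrative.
            boolean offload = shouldOffload(8.0, 2.5, 1.4, 0.5);
            System.out.println("Delegate to idle LAN resource: " + offload);
        }
    }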

    A Home E-Health System for Dependent People Based on OSGI

    This chapter presents an e-health system for dependent people installed in a home environment. A review of the state of the art in e-health applications and technologies reveals several limitations, since many solutions are proprietary and lack interoperability. The developed home e-health system provides an architecture capable of integrating different telecare services in a smart-home gateway, keeping the hardware independent from the application layer. We propose a rule system to define users’ behavior and monitor relevant events. Two example systems have been implemented to monitor patients. A data model for the e-health platform is described as well. Ministerio de Educación y Ciencia TSI2006-13390-C02-0
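
    The chapter proposes a rule system to define users' behavior and monitor relevant events on the gateway. As a rough illustration of how such rules might be expressed and evaluated, here is a minimal Java sketch; the Event and Rule shapes, sensor names and thresholds are assumptions, not the chapter's data model.

    import java.util.List;
    import java.util.function.Predicate;

    // Minimal, hypothetical sketch of rules that watch telecare events on a home
    // gateway and raise alerts. All fields and thresholds are illustrative.
    public class TelecareRules {

        record Event(String sensor, double value, long timestampMillis) {}

        record Rule(String name, Predicate<Event> condition, String alertMessage) {}

        static void evaluate(List<Rule> rules, Event event) {
            for (Rule rule : rules) {
                if (rule.condition().test(event)) {
                    // A real gateway would notify a telecare service instead of printing.
                    System.out.println("ALERT [" + rule.name() + "]: " + rule.alertMessage());
                }
            }
        }

        public static void main(String[] args) {
            List<Rule> rules = List.of(
                new Rule("high-heart-rate",
                         e -> e.sensor().equals("heart_rate") && e.value() > 120,
                         "Heart rate above 120 bpm"),
                new Rule("no-motion",
                         e -> e.sensor().equals("motion") && e.value() == 0,
                         "No motion detected"));
            evaluate(rules, new Event("heart_rate", 135, System.currentTimeMillis()));
        }
    }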

    BETaaS: A Platform for Development and Execution of Machine-to-Machine Applications in the Internet of Things

    The integration of everyday objects into the Internet represents the foundation of the forthcoming Internet of Things (IoT). Such “smart” objects will be the building blocks of the next generation of applications that will exploit interaction between machines to implement enhanced services with minimal or no human intervention in the loop. A crucial factor in enabling Machine-to-Machine (M2M) applications is a horizontal service infrastructure that seamlessly integrates existing heterogeneous IoT systems. The authors present BETaaS, a framework that enables horizontal M2M deployments. BETaaS is based on a distributed service infrastructure built on top of an overlay network of gateways that allows seamless integration of existing IoT systems. The platform enables easy deployment of applications by exposing to developers a service-oriented interface to access things (the Things-as-a-Service model), regardless of the technology and the physical infrastructure they belong to.
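
    The Things-as-a-Service model exposes things through a service-oriented interface that hides the underlying technology. The sketch below shows one plausible shape for such a technology-agnostic lookup on a gateway; the interface and method names are assumptions and do not reproduce the actual BETaaS API.

    import java.util.Map;
    import java.util.Optional;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of a Things-as-a-Service style lookup: applications ask a
    // gateway for a "thing service" by capability, not by protocol or address.
    public class ThingsAsAService {

        interface ThingService {
            String id();
            double read();   // latest observation; the unit depends on the capability
        }

        static class GatewayRegistry {
            private final Map<String, ThingService> byCapability = new ConcurrentHashMap<>();

            void register(String capability, ThingService service) {
                byCapability.put(capability, service);
            }

            Optional<ThingService> lookup(String capability) {
                return Optional.ofNullable(byCapability.get(capability));
            }
        }

        public static void main(String[] args) {
            GatewayRegistry registry = new GatewayRegistry();
            // A technology adapter (e.g. for a low-power radio) would register here;
            // the caller never sees which technology backs the thing.
            registry.register("temperature/livingroom", new ThingService() {
                public String id()   { return "node-17"; }
                public double read() { return 21.5; }
            });

            registry.lookup("temperature/livingroom")
                    .ifPresent(t -> System.out.println(t.id() + " -> " + t.read() + " C"));
        }
    }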

    Network Infrastructures for Highly Distributed Cloud-Computing

    Software-Defined Networking (SDN) is emerging as a solid opportunity for Network Service Providers (NSPs) to reduce costs while providing better and/or new services. The possibility of flexibly managing and configuring highly available and scalable network services through data-model abstractions and easy-to-consume APIs is attractive, and the adoption of such technologies is gaining momentum. At the same time, NSPs are planning to innovate their infrastructures through a process of network softwarisation and programmability. The SDN paradigm aims at improving the design, configuration, maintenance and service-provisioning agility of the network through centralised software control. This is easily achievable in local area networks, typical of data centers, where the benefit of programmable access to the entire network is not restricted by the latency between the network devices and the SDN controller, which is normally located in the same LAN as the data-path nodes. In Wide Area Networks (WANs), instead, a centralised control plane limits the responsiveness to time-constrained network events because of the unavoidable latencies introduced by physical distance. Moreover, end-to-end control requires the participation of multiple, domain-specific controllers: access devices, data-center fabrics and backbone networks have very different characteristics, and their control planes can hardly coexist in a single centralised entity unless very complex solutions are adopted, which inevitably lead to software bugs, inconsistent states and performance issues.

    In recent years, the idea of exploiting SDN in WAN infrastructures to connect multiple sites has spread in both the scientific community and industry. The former has produced interesting results in terms of framework proposals, complexity and performance analyses of network resource allocation schemes, and open-source proof-of-concept prototypes targeting SDN architectures that span multiple technological and administrative domains. Much of this work, however, remains confined to academia, mainly because it is based on pure OpenFlow prototype implementations, networks emulated on a single general-purpose machine, or simulations proving the effectiveness of algorithms. Industry, on the other hand, has made SDN a reality through closed-source systems running on single administrative-domain networks with little if any diversification of access and backbone devices.

    In this dissertation we present our contributions to the design and implementation of SDN architectures for the control plane of WAN infrastructures. In particular, we studied and prototyped two SDN platforms to build a programmable, intent-based control plane suitable for today's highly distributed cloud infrastructures. Our main contributions are: (i) a holistic, architectural description of a distributed SDN control plane for end-to-end QoS provisioning, in which we compare the legacy IntServ RSVP protocol with a novel approach for prioritising application-sensitive flows via centralised vantage points; being based on a peer-to-peer architecture, it is also suitable for the inter-authoritative-domain scenario; (ii) an open-source platform based on a two-layer hierarchy of network controllers designed to provision end-to-end connectivity in real networks composed of heterogeneous devices and links within a single authoritative domain. This platform has been integrated into CORD, an open-source project whose goal is to bring data-center economics and cloud agility to NSP central-office infrastructures by combining NFV (Network Function Virtualization), SDN and the elasticity of commodity clouds. Our platform enables the provisioning of connectivity services between multiple CORD sites, up to the customer premises; our system and software contributions in SDN are thus combined with an NFV infrastructure for network service automation and orchestration.
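
    The second contribution, the two-layer hierarchy of controllers, can be pictured as a parent controller that splits an end-to-end connectivity intent into per-domain segments handled by child controllers. The Java sketch below illustrates that control flow only; all class and method names are assumptions and are not taken from the platform or from CORD.

    import java.util.List;

    // Hypothetical sketch of a two-layer SDN control plane: a parent controller
    // receives an end-to-end connectivity intent and delegates per-domain segments
    // to child controllers (access, backbone, data-center fabric).
    public class HierarchicalIntents {

        record ConnectivityIntent(String srcSite, String dstSite, int bandwidthMbps) {}

        interface DomainController {
            String domain();
            boolean provisionSegment(ConnectivityIntent intent);  // install flows/tunnels in its domain
        }

        static class ParentController {
            private final List<DomainController> children;
            ParentController(List<DomainController> children) { this.children = children; }

            /** The intent succeeds only if every domain along the path accepts its segment. */
            boolean submit(ConnectivityIntent intent) {
                return children.stream().allMatch(c -> c.provisionSegment(intent));
            }
        }

        static DomainController stub(String name) {
            return new DomainController() {
                public String domain() { return name; }
                public boolean provisionSegment(ConnectivityIntent intent) {
                    System.out.println(name + ": segment for " + intent.srcSite() + " -> " + intent.dstSite());
                    return true;
                }
            };
        }

        public static void main(String[] args) {
            ParentController parent = new ParentController(
                List.of(stub("access-domain"), stub("backbone-domain")));
            boolean ok = parent.submit(new ConnectivityIntent("site-A", "site-B", 100));
            System.out.println("End-to-end connectivity provisioned: " + ok);
        }
    }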

    Safety-Critical Java: level 2 in practice

    Safety-Critical Java (SCJ) is a profile of the Real-Time Specification for Java that brings to the safety-critical industry the possibility of using Java. SCJ defines three compliance levels: Level 0, Level 1 and Level 2. The SCJ specification is clear on what constitutes a Level 2 application in terms of its use of the defined API, but not on the occasions on which it should be used. This paper broadly classifies the features that are only available at Level 2 into three groups: nested mission sequencers, managed threads, and global scheduling across multiple processors. We explore the first two groups to elicit the programming requirements that they support. We identify several areas where the SCJ specification needs modifications to support these requirements fully; these include support for terminating managed threads, the ability to set a deadline on the transition between missions, and augmentation of the mission sequencer concept to support composability of timing constraints. We also propose simplifications to the termination protocol of missions and their mission sequencers. To illustrate the benefit of our changes, we present excerpts from a formal model of SCJ Level 2 written in Circus, a state-rich process algebra for refinement. Copyright © 2016 John Wiley & Sons, Ltd.
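
    Two of the Level 2 concerns above are mission termination and the proposed deadline on the transition between missions. The following plain-Java sketch illustrates that control flow with deliberately hypothetical classes; it does not use or reproduce the javax.safetycritical API.

    // Simplified sketch of sequenced missions with a deadline on the transition
    // between missions. Hypothetical classes only; it illustrates the control flow
    // (request termination of the current mission, bound the handover time, then
    // start the next mission), not the SCJ API.
    public class Level2Sketch {

        interface Mission {
            void initialize();          // create handlers / managed threads
            void requestTermination();  // ask all schedulable objects to stop
            boolean cleanedUp();        // true once termination has completed
        }

        static class MissionSequencer {
            private final Mission[] missions;
            private final long transitionDeadlineMillis;

            MissionSequencer(long transitionDeadlineMillis, Mission... missions) {
                this.transitionDeadlineMillis = transitionDeadlineMillis;
                this.missions = missions;
            }

            void run() throws InterruptedException {
                for (Mission m : missions) {
                    m.initialize();
                    // A real mission would execute until termination is requested;
                    // this demo terminates it immediately.
                    long start = System.currentTimeMillis();
                    m.requestTermination();
                    while (!m.cleanedUp()) {
                        if (System.currentTimeMillis() - start > transitionDeadlineMillis) {
                            throw new IllegalStateException("mission transition missed its deadline");
                        }
                        Thread.sleep(1);
                    }
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Mission trivial = new Mission() {
                private volatile boolean done;
                public void initialize()         { System.out.println("mission started"); }
                public void requestTermination() { done = true; }
                public boolean cleanedUp()       { return done; }
            };
            new MissionSequencer(50, trivial).run();
            System.out.println("sequencer finished");
        }
    }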

    Design and optimization of medical information services for decision support


    Correlated multi-streaming in distributed interactive multimedia systems

    Distributed Interactive Multimedia Environments (DIMEs) enable geographically distributed people to interact with each other in a joint media-rich virtual environment for a wide range of activities, such as art performance, medical consultation, and sport training. Real-time collaboration is made possible by exchanging a set of multi-modal sensory streams over the network in real time. The characterization and evaluation of such multi-stream interactive environments is challenging because traditional Quality of Service (QoS) metrics (e.g., delay, jitter) are limited to a per-stream basis. In this work, we present a novel “Bundle of Streams” concept to define correlated multi-streams in DIMEs and present new cyber-physical, spatio-temporal QoS metrics to measure QoS over bundles of streams. We realize the Bundle of Streams concept by presenting a novel paradigm of Bundle Streaming as a Service (SAS). We propose and develop the SAS Kernel, a generic, distributed, modular and highly flexible streaming kernel realizing the SAS concept. We validate the Bundle of Streams model by comparing the QoS performance of bundles of streams over different transport protocols in a 3D tele-immersive testbed. Further experiments demonstrate that the SAS Kernel incurs low overhead in delay, CPU, and bandwidth demands.
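
    A bundle-level metric differs from per-stream delay or jitter in that it is computed across all streams of the bundle at once. As one plausible example, the Java sketch below measures the synchronization skew between the latest frames of the streams in a bundle; the metric and all names are assumptions rather than the paper's exact cyber-physical, spatio-temporal definitions.

    import java.util.List;

    // Hypothetical sketch of a bundle-level QoS check: instead of per-stream delay
    // or jitter, measure the synchronization skew across the latest frames of all
    // streams in a bundle.
    public class BundleSkew {

        record Frame(String streamId, long captureTimestampMillis) {}

        /** Skew = difference between the newest and oldest "latest frame" in the bundle. */
        static long bundleSkewMillis(List<Frame> latestFramePerStream) {
            long newest = latestFramePerStream.stream().mapToLong(Frame::captureTimestampMillis).max().orElse(0);
            long oldest = latestFramePerStream.stream().mapToLong(Frame::captureTimestampMillis).min().orElse(0);
            return newest - oldest;
        }

        public static void main(String[] args) {
            // Latest frame from each correlated stream of the bundle (illustrative timestamps).
            List<Frame> bundle = List.of(
                new Frame("video-cam-1", 1_000_040),
                new Frame("video-cam-2", 1_000_025),
                new Frame("skeleton",    1_000_010));
            long skew = bundleSkewMillis(bundle);
            System.out.println("Bundle skew: " + skew + " ms (within 30 ms budget: " + (skew <= 30) + ")");
        }
    }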

    Contributions to the safe execution of dynamic component-based real-time systems

    Traditionally, real-time systems have based their design and execution on barely dynamic models in order to guarantee, from design time, the timing of their functionality. Over the years, real-time technology has spread into new application domains and adjusted to newer software development paradigms, which poses a considerable challenge, since these applications tend to exhibit a high degree of dynamism that conflicts with temporal predictability and, in general, with safe execution. Great effort is therefore being devoted to developing progressively more dynamic systems that can change their structure at run time and adapt to changing environments. The capability to change and reconfigure themselves brings remarkable advantages, such as fixing errors and adding new functionality through on-line updates, that is, updating without stopping the service, whose interruption may imply monetary losses or temporary loss of service in many cases. In parallel, component-based design and development techniques have become popular, and their application to real-time systems gains ground day by day, since components simplify system design, enable code reuse, and support updates through the substitution of components.

    The goal of this thesis is to provide real-time systems with a certain degree of dynamism, allowing them to replace components safely in order to incorporate new functionality or fix existing bugs. To that end, a component-based framework is proposed, together with the framework tasks in charge of providing this dynamism. The main contribution is a framework for safe component replacement, where safe means that incorrect task executions are avoided even when multiple tasks execute concurrently on the same data, that temporal guarantees are provided for every task, and that the duration of each replacement is bounded. The framework incorporates a generic component model with real-time threads, a component replacement model whose execution times are known and bounded, and different strategies for applying that replacement model. Mechanisms are also described to maintain seamless and safe execution, with respect to concurrency, before, during and after the processes in charge of replacing running components: seamless in that the components themselves do not perform the replacements, and safe in that temporal guarantees are preserved and the execution of the components is not affected. These mechanisms include the schedulability analysis of the system and of the framework tasks, as well as the reservation of the resources needed for that schedule to be correct. Finally, an empirical validation checks the validity of the model both in a generic setting and in a specific scenario, and determines the resources required for its implementation.
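
    A central requirement above is that replacements are driven by the framework, keep concurrent tasks consistent, and complete within a known, bounded time. The Java sketch below shows one simple way to meet those constraints (quiesce the component, transfer its state, swap it, and give up if a time budget is exceeded); the protocol and every name in it are illustrative assumptions, not the thesis framework.

    import java.util.concurrent.locks.ReentrantLock;

    // Hypothetical sketch of a bounded, framework-driven component replacement:
    // the framework task (not the component) acquires the component's lock so no
    // application task is mid-call, transfers state, and swaps the implementation,
    // giving up if the operation cannot complete within its time budget.
    public class ComponentSwap {

        interface Component {
            int process(int input);
            int exportState();
            void importState(int state);
        }

        static class Container {
            private volatile Component active;
            private final ReentrantLock callLock = new ReentrantLock();

            Container(Component initial) { this.active = initial; }

            int call(int input) {
                callLock.lock();                 // application tasks serialize on the component
                try { return active.process(input); }
                finally { callLock.unlock(); }
            }

            /** Replacement performed by the framework task, bounded by a time budget. */
            boolean replace(Component replacement, long budgetMillis) throws InterruptedException {
                if (!callLock.tryLock(budgetMillis, java.util.concurrent.TimeUnit.MILLISECONDS)) {
                    return false;                // could not quiesce in time; keep the old component
                }
                try {
                    replacement.importState(active.exportState());
                    active = replacement;        // swap only after the state transfer
                    return true;
                } finally {
                    callLock.unlock();
                }
            }
        }

        static Component counter() {
            return new Component() {
                int state;
                public int process(int in)     { return state += in; }
                public int exportState()       { return state; }
                public void importState(int s) { state = s; }
            };
        }

        public static void main(String[] args) throws InterruptedException {
            Container container = new Container(counter());
            container.call(5);
            boolean ok = container.replace(counter(), 10);   // hot swap with state transfer
            System.out.println("replacement applied: " + ok + ", state kept: " + container.call(0));
        }
    }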

    Prediction-based VM provisioning and admission control for multi-tier web applications
