
    XML Integrated Environment for Service-Oriented Data Management

    XML, together with its family of related standards, including a markup language (XML), formatting semantics (XSL style sheets), a linking syntax (XLink) and data schema standards, has emerged as a de facto standard for encoding and sharing data between applications. XML is designed to be simple, easily parsed and self-describing. It supports the idea of separation of concerns: information content is separated from information rendering, and relationships between data elements are expressed through simple nesting and references. As XML content grows, the ability to handle schemaless XML documents becomes more critical, since most XML documents have no schema or Document Type Definition (DTD). In addition, XML content and XML tools often need to be combined effectively for better performance and higher flexibility. In this research, we propose the XML Integrated Environment (XIE), a general-purpose service-oriented architecture for processing XML documents in a scalable and efficient fashion. XIE supports a new software service model that provides a proper abstraction for describing a service and divides it into four components: structure, connection, interface and logic. We also propose and implement the XIE Service Language (XIESL), which captures the creation and maintenance of XML processes and the data flow specified by the user, and then orchestrates the interactions between different XIE services. Moreover, XIESL manages the complexity of XML processing by implementing an XML processing pipeline that enables better management, control, interpretation and presentation of XML data, even for non-professional users. The XML Integrated Environment is envisioned to revolutionize the way non-professional programmers see, work with and manage their XML assets. It offers them powerful tools and constructs to fully utilize the XML processing power embedded in its unified framework and service-oriented architecture.
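    The four-component service model described above (structure, connection, interface, logic) can be sketched roughly as follows. This is a minimal illustration under assumed names; none of the classes or functions below are the actual XIE or XIESL API.

```python
# Hypothetical sketch of the XIE four-part service abstraction and a
# linear processing pipeline. All names here are illustrative.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class XIEService:
    structure: Dict[str, str]                             # shape of the XML handled
    connections: List[str] = field(default_factory=list)  # upstream service names
    interface: List[str] = field(default_factory=list)    # operations exposed
    logic: Callable[[str], str] = lambda xml: xml         # the transformation itself

def run_pipeline(services: List[XIEService], document: str) -> str:
    """Apply each service's logic in order, as an XIESL pipeline might."""
    for service in services:
        document = service.logic(document)
    return document

# Two toy stages: wrap the payload, then annotate the wrapper element.
wrap = XIEService(structure={"root": "doc"}, interface=["wrap"],
                  logic=lambda xml: f"<doc>{xml}</doc>")
tag = XIEService(structure={"root": "doc"}, connections=["wrap"],
                 interface=["annotate"],
                 logic=lambda xml: xml.replace("<doc>", '<doc source="xie">'))

print(run_pipeline([wrap, tag], "<item>42</item>"))
# → <doc source="xie"><item>42</item></doc>
```

    The point of the sketch is the division of labour: the pipeline runner never inspects a service's internals, only its logic callable, which mirrors how a pipeline language can orchestrate opaque services.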

    Towards adaptive actors for scalable IoT applications at the edge

    Traditional device-cloud architectures do not scale to the size of future IoT deployments. While edge and fog computing principles seem like a tangible solution, they increase the programming effort of IoT systems, do not provide the same elasticity guarantees as the cloud, and involve much greater hardware heterogeneity. Future IoT applications will be highly distributed and will place their computational tasks on any combination of end devices (sensor nodes, smartphones, drones), edge and cloud resources in order to achieve their application goals. These complex distributed systems require a programming model that allows developers to implement their applications in a simple way (i.e., to focus on the application logic) and an execution framework that runs these applications resiliently and with high resource efficiency, while maximizing application utility. Towards such a distributed execution runtime, we propose Nandu, an actor-based system that adapts and migrates tasks dynamically using developer-provided hints as seed information. Nandu allows developers to focus on sequential application logic and transforms their applications into distributed, adaptive actors. The resulting actors expose fine-grained entry points to the execution environment. These entry points allow local schedulers to adapt actors seamlessly to the current context, while optimizing overall application utility according to developer-provided requirements.
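    The hint-seeded placement idea described above can be illustrated with a small sketch: an actor carries a developer hint, and a local scheduler uses it as seed information when choosing where to run the actor. The `Actor` type, hint vocabulary and `place` function are assumptions for illustration, not the real Nandu API.

```python
# Illustrative sketch: a sequential task wrapped as an actor with a
# developer-provided hint, and a scheduler that seeds placement from it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Actor:
    name: str
    logic: Callable[[int], int]
    hint: str  # e.g. "latency-sensitive" or "cpu-bound"; seed info only

def place(actor: Actor, resources: dict) -> str:
    """Pick a node class from the hint; adaptation can revisit this later
    at the actor's entry points as the context changes."""
    preferred = {"latency-sensitive": "edge", "cpu-bound": "cloud"}.get(actor.hint, "device")
    # Fall back to the local device if the preferred class has no capacity.
    return preferred if resources.get(preferred, 0) > 0 else "device"

detect = Actor("detect", lambda frame: frame + 1, hint="latency-sensitive")
print(place(detect, {"edge": 2, "cloud": 8}))  # → edge
print(place(detect, {"edge": 0, "cloud": 8}))  # → device
```

    The hint is deliberately only a starting point: because actors expose entry points between processing steps, a scheduler is free to migrate them when the initial placement turns out to be wrong.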

    GMSME: An Architecture for Heterogeneous Collaboration with Mobile Devices


    Adaptive content management for collaborative 3D virtual spaces

    Collaborative 3D virtual spaces and their services are often too heavy for a mobile device to handle. The burden of such services is divided between the extensive amounts of data that need to be downloaded prior to using the service and the complexity of the resulting graphical rendering process. In this paper, a proxy-based architecture for collaborative virtual spaces is used to manipulate graphical data at demand time, benefiting both network bandwidth usage and the graphical rendering process. In addition, a proof-of-concept test shows how the simplification process yields savings for different client device profiles, including laptops, tablets and mobile devices.
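    The demand-time simplification idea can be sketched as a proxy that caps geometric detail per client profile, so that weaker devices download and render less. The profile names and triangle budgets below are invented for illustration; the paper's actual simplification algorithm is not shown in the abstract.

```python
# Minimal sketch of proxy-side level-of-detail selection by device profile.
# Budgets are assumed numbers, chosen only to make the savings visible.
PROFILES = {
    "laptop": 100_000,  # triangle budget per object
    "tablet": 25_000,
    "mobile": 5_000,
}

def simplify(mesh_triangles: int, profile: str) -> int:
    """Return the triangle count the proxy would serve to this device."""
    budget = PROFILES.get(profile, PROFILES["mobile"])  # unknown devices get the lowest tier
    return min(mesh_triangles, budget)

original = 80_000
for device in ("laptop", "tablet", "mobile"):
    served = simplify(original, device)
    print(f"{device}: serve {served} triangles ({1 - served / original:.0%} saved)")
```

    Because the cap is applied at the proxy, both costs named in the abstract shrink together: fewer triangles cross the network, and fewer triangles reach the client's renderer.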

    Managing Event-Driven Applications in Heterogeneous Fog Infrastructures

    The steady increase in digitalization propelled by the Internet of Things (IoT) has led to a deluge of generated data at an unprecedented pace. As a result, the promise of data-driven decision-making has become a major innovation driver in a myriad of industries. Based on the widely used event processing paradigm, event-driven applications analyze data in the form of event streams in order to extract relevant information in a timely manner. Most recently, graphical flow-based approaches in no-code event processing systems have been introduced to significantly lower technological entry barriers. This empowers non-technical citizen technologists to create event-driven applications composed of multiple interconnected event-driven processing services. Still, today's event-driven applications are focused on centralized cloud deployments that come with inevitable drawbacks, especially in the context of IoT scenarios that require fast results, are limited by the available bandwidth, or are bound by regulations concerning privacy and security. Despite recent advances in the area of fog computing, which mitigate these shortcomings by extending the cloud and moving certain processing closer to the event source, these approaches are hardly established in existing systems. Inherent fog computing characteristics, especially the heterogeneity of resources, alongside novel application management demands, particularly geo-distribution and dynamic adaptation, pose challenges that are currently insufficiently addressed and hinder the transition to a next generation of no-code event processing systems. The contributions of this thesis enable citizen technologists to manage event-driven applications in heterogeneous fog infrastructures along the application life cycle. Therefore, an approach for holistic application management is proposed that abstracts citizen technologists from the underlying technicalities. This allows present event processing systems to evolve and advances the democratization of event-driven application management in fog computing. The individual contributions of this thesis are summarized as follows: 1. A model, manifested in a geo-distributed system architecture, to semantically describe characteristics specific to node resources, event-driven applications and their management, blending the application-centric and infrastructure-centric realms. 2. Concepts for the geo-distributed deployment and operation of event-driven applications, alongside strategies for flexible event stream management. 3. A methodology to support the evolution of event-driven applications, including methods to dynamically reconfigure, migrate and offload individual event-driven processing services at run time. The contributions are introduced, applied and evaluated along two scenarios from the manufacturing and logistics domains.
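    The geo-distributed deployment concept can be pictured with a small sketch: each event-driven processing service declares a resource demand, and a placement step maps the pipeline onto heterogeneous fog nodes, falling back to the cloud tier when no node fits. The node names, capacities and the greedy strategy are assumptions for illustration, not the thesis's actual method.

```python
# Hedged sketch of pipeline placement over heterogeneous fog nodes.
# Services are (name, demand) pairs; nodes map name -> remaining capacity.
def place_pipeline(services, nodes):
    """Greedily assign each service to the first node with enough capacity;
    anything that fits nowhere is offloaded to the cloud tier."""
    placement = {}
    for name, demand in services:
        for node, capacity in nodes.items():
            if capacity >= demand:
                placement[name] = node
                nodes[node] = capacity - demand  # consume the node's capacity
                break
        else:
            placement[name] = "cloud"
    return placement

pipeline = [("filter", 1), ("enrich", 2), ("detect", 4)]
fog = {"gateway": 2, "edge-server": 4}
print(place_pipeline(pipeline, fog))
# → {'filter': 'gateway', 'enrich': 'edge-server', 'detect': 'cloud'}
```

    Run-time reconfiguration, as in contribution 3, would amount to re-running such a placement when capacities or demands change and migrating the services whose assignment differs.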

    A policy-based framework towards smooth adaptive playback for dynamic video streaming over HTTP

    Video streaming on the Internet has grown significantly in the last few years and promises to keep growing in the future. This growth is tied to the increasing number of Internet users and, especially, to the diversification of end-user devices seen nowadays. Earlier video streaming solutions did not adequately consider the Quality of Experience (QoE) from the user's perspective. This weakness has since been overcome with DASH video streaming. The main feature of this protocol is to provide different versions, in terms of quality, of the same content. In this way, depending on the status of the network infrastructure between the video server and the user device, the DASH protocol automatically selects the most adequate content version, providing the user with the best possible quality for the consumption of that content. The main issue with the DASH protocol is associated with the control loop, between each client and the video server, that governs the rate of the video stream. As network congestion increases, the client requests a lower-rate video stream from the server. Nevertheless, due to network latency, the DASH protocol on its own may not be able to stabilize the video stream rate at a level that guarantees a satisfactory QoE to end users. Network programming is a very active and popular topic in the field of network infrastructure management. In this area, the Software Defined Networking (SDN) paradigm is an approach in which a network controller, with a relatively abstracted view of the physical network infrastructure, aims at more efficient management of the data path. The current work studies the combination of the DASH protocol and the SDN paradigm in order to achieve a more adequate sharing of network resources that could benefit both the users' QoE and network management.
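    The interplay of the two control points can be sketched in a few lines: the DASH client picks the highest representation its measured throughput can sustain, while an SDN-style controller enforces a fair per-flow share of the link. The bitrate ladder, the 0.8 headroom factor and the fair-share policy are illustrative assumptions, not the paper's policy framework.

```python
# Simplified sketch of client-side rate adaptation under a controller-
# enforced per-flow cap. All numbers are invented for illustration.
LADDER = [500, 1_000, 2_500, 5_000]  # available representations, kbit/s

def select_bitrate(throughput_kbps: float, headroom: float = 0.8) -> int:
    """Highest representation whose bitrate fits within headroom * throughput."""
    usable = throughput_kbps * headroom
    fitting = [r for r in LADDER if r <= usable]
    return fitting[-1] if fitting else LADDER[0]

def controller_cap(link_kbps: float, flows: int) -> float:
    """Fair per-flow share an SDN controller could enforce on the data path."""
    return link_kbps / max(flows, 1)

cap = controller_cap(12_000, 3)  # 4000 kbit/s per flow
print(select_bitrate(cap))       # → 2500
```

    A stable cap is what dampens the oscillation described above: with the per-flow share pinned by the controller, the client's throughput estimate stops chasing other flows and the selected representation settles.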

    BodyCloud: a SaaS approach for community body sensor networks

    Body Sensor Networks (BSNs) have recently been introduced for the remote monitoring of human activities in a broad range of application domains, such as health care, emergency management, fitness and behaviour surveillance. BSNs can be deployed in a community of people and can generate large amounts of contextual data that require a scalable approach for storage, processing and analysis. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of the data streams generated in BSNs. This paper proposes BodyCloud, a SaaS approach for community BSNs that supports the development and deployment of Cloud-assisted BSN applications. BodyCloud is a multi-tier application-level architecture that integrates a Cloud computing platform and BSN data stream middleware. BodyCloud provides programming abstractions that allow the rapid development of community BSN applications. This work describes the general architecture of the proposed approach and presents a case study on the real-time monitoring and analysis of the cardiac data streams of many individuals.
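    The multi-tier flow can be pictured with a toy example in the spirit of the cardiac case study: a body-side tier batches sensor readings for upload, and a cloud-side tier analyses each batch. The tier names, batch size and alert threshold are invented for illustration; they are not BodyCloud's actual abstractions.

```python
# Illustrative two-tier sketch: batch heart-rate samples, then analyse
# each batch in the cloud tier with a simple threshold alert.
from statistics import mean

def body_tier(readings, batch_size=4):
    """Group raw heart-rate samples (bpm) into fixed-size upload batches."""
    return [readings[i:i + batch_size] for i in range(0, len(readings), batch_size)]

def cloud_tier(batches, alert_bpm=100):
    """Per-batch analysis: average rate plus a simple elevated-rate flag."""
    return [{"avg": mean(b), "alert": mean(b) > alert_bpm} for b in batches]

samples = [72, 75, 74, 78, 110, 112, 108, 111]
print(cloud_tier(body_tier(samples)))
```

    Scaling this to a community is then a matter of running the cloud-tier analysis over one stream per member, which is exactly the kind of workload an elastic cloud back end absorbs well.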

    Collaborative simulation and scientific big data analysis: Illustration for sustainability in natural hazards management and chemical process engineering

    Classical approaches to remote visualization and collaboration used in Computer-Aided Design and Engineering (CAD/E) applications are no longer appropriate due to the increasing amount of data generated, especially over standard networks. We introduce a lightweight computing platform for scientific simulation, collaboration in engineering, 3D visualization and big data management. This ICT-based platform provides scientists with an "easy-to-integrate" generic tool, enabling worldwide collaboration and remote processing for any kind of data. The service-oriented architecture is based on the cloud computing paradigm and relies on standard Internet technologies to be efficient across a wide range of networks and clients. In this paper, we discuss the need for innovation in (i) pre- and post-processing visualization services, (ii) scalable compression and transmission methods for large 3D scientific data sets, (iii) collaborative virtual environments, and (iv) multi-domain collaboration in CAD/E. We propose our open platform for collaborative simulation and scientific big data analysis. This platform is now available as an open project, with all core components licensed under LGPL v2.1. We provide two examples of platform usage in CAD/E for sustainability engineering: one academic application and one industrial case study. First, we consider chemical process engineering, showing the development of a domain-specific service. With the rise of global warming issues and the growing importance granted to sustainable development, chemical process engineering has become increasingly environmentally minded. Indeed, chemical engineers now take into account not only the engineering and economic criteria of a process, but also its environmental and social performance. Second, an example from natural hazards management illustrates the efficiency of our approach for remote collaboration involving big data exchange and analysis between distant locations. Finally, we underline the platform's benefits and open our platform to further activities in innovation techniques and inventive design.

    A Scalable Cluster-based Infrastructure for Edge-computing Services

    In this paper we present a scalable and dynamic intermediary infrastructure, SEcS (acronym of "Scalable Edge computing Services"), for developing and deploying advanced Edge computing services using a cluster of heterogeneous machines. Our goal is to address the challenges of the next-generation Internet services: scalability, high availability, fault tolerance and robustness, as well as programmability and quick prototyping. The system is written in Java and is based on IBM's Web Based Intermediaries (WBI) [71], developed at IBM Almaden Research Center.