
    The QoSxLabel: a quality of service cross layer label


    Towards a unified experimentation framework for protocol engineering

    The design and development process of complex systems requires an adequate methodology and efficient instrumental support in order to detect and correct anomalies in the functional and non-functional properties of the solution as early as possible. In this article, a Unified Experimentation Framework (UEF) providing experimentation facilities at both the design and development stages is introduced. This UEF provides a means to carry out experiments in both simulation mode, using UML2 models of the designed protocol, and emulation mode, using real protocol implementations. A practical use case of the experimentation framework is illustrated in the context of a satellite environment.
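
    The abstract includes no code; the minimal Python sketch below only illustrates the dual-mode idea it describes, i.e. running the same experiment against either a simulated protocol model or a real implementation over an emulated network. All class and method names are hypothetical.

        # Hypothetical sketch: one experiment script, two interchangeable backends.
        class SimulationBackend:
            """Hypothetical backend driving a UML2 model of the designed protocol."""
            def __init__(self, model_path):
                self.model_path, self.sent = model_path, 0
            def send(self, pdu):
                self.sent += 1          # the simulated model would consume the PDU here
            def report(self):
                return {"mode": "simulation", "pdus": self.sent}

        class EmulationBackend:
            """Hypothetical backend pushing PDUs through a real implementation over an emulated link."""
            def __init__(self, host, port):
                self.host, self.port, self.sent = host, port, 0
            def send(self, pdu):
                self.sent += 1          # a real socket would be used here
            def report(self):
                return {"mode": "emulation", "pdus": self.sent}

        def run_experiment(backend):
            """The experiment script is identical for both modes."""
            for i in range(10):
                backend.send(f"pdu-{i}".encode())
            return backend.report()

        print(run_experiment(SimulationBackend("protocol.uml")))
        print(run_experiment(EmulationBackend("10.0.0.1", 5000)))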

    Building self-optimized communication systems based on applicative cross-layer information

    This article proposes the Implicit Packet Meta Header (IPMH) as a standard method to compute and represent common QoS properties of the Application Data Units (ADU) of multimedia streams using legacy and proprietary streams’ headers (e.g. Real-time Transport Protocol headers). The use of IPMH by mechanisms located at different layers of the communication architecture will make it possible to implement fine-grained, per-packet self-optimization of communication services with respect to the actual application requirements. A case study showing how IPMH is used by error control mechanisms in the context of wireless networks is presented in order to demonstrate the feasibility and advantages of this approach.
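
    For illustration only (the field set and priority rule below are assumptions, not the IPMH specification), a small Python sketch of deriving per-packet QoS metadata from a legacy RTP header:

        import struct
        from dataclasses import dataclass

        @dataclass
        class IPMH:
            """Hypothetical per-packet meta header; the fields are illustrative only."""
            priority: int        # relative importance of the ADU fragment
            loss_tolerant: bool  # whether the packet may be dropped under congestion
            timestamp: int       # media timestamp copied from the RTP header

        def ipmh_from_rtp(packet):
            """Derive illustrative QoS metadata from a fixed 12-byte RTP header."""
            _, second, _, ts, _ = struct.unpack("!BBHII", packet[:12])
            marker = bool(second & 0x80)   # RTP marker bit (e.g. end of a video frame)
            # Assumption: packets closing a frame are considered more important.
            return IPMH(priority=2 if marker else 1, loss_tolerant=not marker, timestamp=ts)

        # Example: a fabricated RTP header (version 2, marker bit set, payload type 96).
        pkt = struct.pack("!BBHII", 0x80, 0x80 | 96, 1234, 90000, 0xDEADBEEF)
        print(ipmh_from_rtp(pkt))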

    A cross-layer approach to enhance QoS for multimedia applications over satellite

    The need for on-demand QoS support for communications over satellite is of primary importance for distributed multimedia applications. This is particularly true for the return link, which is often a bottleneck due to the large set of end users accessing a very limited uplink resource. Facing this need, Demand Assignment Multiple Access (DAMA) is a classical technique that allows satellite operators to offer various types of services while managing the resources of the satellite system efficiently. Tackling the quality degradation and delay accumulation issues that can result from the use of these techniques, this paper proposes an instantiation of the Application Layer Framing (ALF) approach using a cross-layer interpreter (xQoS-Interpreter). The information provided by this interpreter is used to manage the resources granted to a terminal by the satellite system in order to improve the quality of multimedia presentations from the end users’ point of view. Several experiments are carried out for different loads on the return link, and their impact on QoS is measured through application-level as well as network-level metrics.
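
    The paper's algorithm is not reproduced here; the toy Python sketch below only illustrates how application-level importance tags could drive a DAMA-style capacity request on the return link. The thresholds, the round-trip value and the function names are assumptions.

        def capacity_request(queued_adus, uplink_rate_bps, assumed_rtt_s=0.55):
            """Toy DAMA-style request: ask for enough return-link capacity to flush
            the most important queued ADUs within one (geostationary) round trip."""
            # Assumption: the interpreter tags each ADU with (size_bytes, importance).
            important_bytes = sum(size for size, importance in queued_adus if importance >= 2)
            demand_bps = 8 * important_bytes / assumed_rtt_s
            return min(demand_bps, uplink_rate_bps)   # never ask for more than the uplink offers

        queue = [(1200, 2), (1200, 1), (800, 3)]      # (bytes, importance) per queued packet
        print(f"requested capacity: {capacity_request(queue, 2_000_000):.0f} bit/s")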

    Sensor Observation Streams Within Cloud-based IoT Platforms: Challenges and Directions

    Observation streams can be considered a special case of data streams produced by sensors. With the growth of the Internet of Things (IoT), more and more connected sensors will produce unbounded observation streams. In order to bridge the gap between sensors and observation consumers, we have witnessed the design and development of Cloud-based IoT platforms. Such systems raise new research challenges, in particular regarding observation collection, processing and consumption. These challenges are related to observation streams and should be addressed by developers from the implementation phase onwards, so that the resulting platforms can later meet other non-functional requirements. Unlike existing surveys, this paper is intended for developers who would like to design and implement a Cloud-based IoT platform capable of handling sensor observation streams. It provides a comprehensive way to understand the main observation-related challenges, as well as non-functional requirements of IoT platforms such as platform adaptation, scalability and availability. Last but not least, it gives recommendations and compares some relevant open-source software that can speed up the development process.
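
    As a hedged illustration of the kind of stream handling discussed above (not code from any cited platform), a windowed aggregation over a potentially unbounded observation stream in Python:

        from collections import deque
        from statistics import mean

        def sliding_average(observations, window=5):
            """Yield a running average over the last `window` observations, a common
            building block when consuming unbounded sensor observation streams."""
            buf = deque(maxlen=window)
            for obs in observations:        # `observations` may be an unbounded iterator
                buf.append(obs)
                yield mean(buf)

        stream = iter([21.0, 21.4, 22.1, 35.0, 21.9, 22.0])   # e.g. temperature readings
        for smoothed in sliding_average(stream, window=3):
            print(round(smoothed, 2))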

    Survey on Quality of Observation within Sensor Web Systems

    The Sensor Web vision refers to the addition of a middleware layer between sensors and applications. To bridge the gap between these two layers, Sensor Web systems must deal with heterogeneous sources, which produce heterogeneous observations of disparate quality. Managing such diversity at the application level can be complex and requires a high level of expertise from application developers. Moreover, as an information-centric system, any Sensor Web should provide support for Quality of Observation (QoO) requirements. In practice, however, only a few Sensor Webs provide satisfactory QoO support and are able to deliver high-quality observations to end consumers in a specific manner. This survey aims to study why and how observation quality should be addressed in Sensor Webs. It makes three original contributions. First, it provides important insights into quality dimensions and proposes to use the QoO notion to deal with information quality within Sensor Webs. Second, it proposes a QoO-oriented review of 29 Sensor Web solutions developed between 2003 and 2016, as well as a custom taxonomy to characterise some of their features from a QoO perspective. Finally, it identifies four major requirements for building future adaptive and QoO-aware Sensor Web solutions.
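
    The survey's taxonomy is not reproduced here; the Python sketch below only illustrates the general idea of attaching QoO attributes to observations so consumers can filter on quality. The attribute set and thresholds are assumptions.

        from dataclasses import dataclass

        @dataclass
        class Observation:
            sensor_id: str
            value: float
            # Hypothetical QoO attributes; real systems may track other dimensions.
            accuracy: float      # expected error bound of the sensor
            freshness_s: float   # age of the observation when delivered
            completeness: float  # fraction of expected samples actually received

        def meets_qoo(obs, max_age_s=10.0, min_completeness=0.9):
            """Check an observation against simple consumer QoO requirements."""
            return obs.freshness_s <= max_age_s and obs.completeness >= min_completeness

        obs = Observation("s-42", 21.7, accuracy=0.5, freshness_s=3.0, completeness=0.95)
        print(meets_qoo(obs))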

    IREEL: remote experimentation with real protocols and applications over emulated network (extended version)

    In the context of education, experimenting with networking protocols is a very important step in the learning process. These experiments are usually carried out using either simulation or a real test bed. Progress in high-speed processing and networking has enabled the development of network emulators. These emulators use both real protocol implementations and network models, which allow a controlled communication environment to be created.
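
    As an illustration of the emulation principle (not IREEL's actual tooling), a Linux link can be impaired with tc/netem so that real protocol implementations experience controlled delay and loss. The interface name and impairment values below are arbitrary.

        import subprocess

        # Assumption: a Linux host with the `tc` tool and an interface named eth0; the
        # command only illustrates the kind of controlled impairment an emulator applies.
        NETEM_CMD = ["tc", "qdisc", "add", "dev", "eth0", "root",
                     "netem", "delay", "250ms", "loss", "1%"]

        def apply_impairment(dry_run=True):
            """Print (or actually run) the netem command adding 250 ms delay and 1% loss."""
            if dry_run:
                print("would run:", " ".join(NETEM_CMD))
            else:
                subprocess.run(NETEM_CMD, check=True)

        apply_impairment()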

    Overview on the Blockchain-Based Supply Chain Systematics and Their Scalability Tools

    Modern IT technologies have shaped the shift in economic models, bringing many advantages in cost, optimization, and time to market. This economic shift has increased the need for transparency and traceability in supply chain platforms in order to achieve trust among partners. Distributed ledger technology (DLT) has been proposed to support supply chain systems with trust requirements. In this paper, we investigate existing DLT-based supply chain projects to show their technical aspects and limitations, and to extract the tools and techniques used to avoid the DLT scalability issue. We then set out the requirements for a typical DLT-based supply chain in this context. The analyses are based on scalability metrics, such as computing, data storage, and transaction fees, that fit a typical supply chain system. This paper highlights the effects of blockchain techniques on scalability and their incorporation into supply chain systems. It also presents other existing solutions that can be applied to the supply chain. The investigation shows the necessity of having such tools in supply chains and of developing them further to achieve an efficient and scalable system. The paper calls for further scalability enhancements through the introduction of new tools and/or the reuse of current ones. DOI: 10.28991/esj-2021-SP1-04
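
    For illustration only, the Python sketch below compares hypothetical ledgers against the scalability metrics mentioned in the abstract (throughput, storage, transaction fees); all figures are placeholders, not results from the paper.

        # Placeholder figures for two hypothetical ledgers; not measurements from the paper.
        candidates = {
            "ledger_A": {"tx_per_s": 20,   "storage_gb_year": 500, "fee_usd": 1.50},
            "ledger_B": {"tx_per_s": 1000, "storage_gb_year": 80,  "fee_usd": 0.01},
        }
        requirements = {"tx_per_s": 200, "storage_gb_year": 200, "fee_usd": 0.10}

        def fits(profile, req):
            """A ledger fits if it meets the throughput need within storage/fee budgets."""
            return (profile["tx_per_s"] >= req["tx_per_s"]
                    and profile["storage_gb_year"] <= req["storage_gb_year"]
                    and profile["fee_usd"] <= req["fee_usd"])

        for name, profile in candidates.items():
            print(name, "fits the supply-chain requirements:", fits(profile, requirements))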

    Collaborative knowledge as a service applied to the disaster management domain

    Cloud computing offers services which promise to meet continuously increasing computing demands by using a large number of networked resources. However, data heterogeneity remains a major hurdle for data interoperability and data integration. In this context, a Knowledge as a Service (KaaS) approach has been proposed with the aim of generating knowledge from heterogeneous data and making it available as a service. In this paper, a Collaborative Knowledge as a Service (CKaaS) architecture is proposed, with the objective of satisfying consumer knowledge needs by integrating disparate cloud knowledge through collaboration among distributed KaaS entities. The NIST cloud computing reference architecture is extended by adding a KaaS layer that integrates diverse sources of data stored in a cloud environment. The CKaaS implementation is domain-specific; therefore, this paper presents its application to the disaster management domain. A use case demonstrates the collaboration of knowledge providers and shows how CKaaS operates with simulation models.
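
    The paper's architecture is not reproduced here; the Python sketch below only illustrates the collaboration pattern described above, where a consumer query is fanned out to several distributed KaaS entities and the partial answers are merged. All names and data are hypothetical.

        # Each KaaS entity is modelled as a function from a query to partial knowledge.
        def flood_model_kaas(query):
            return ["evacuation route A is usable"] if "flood" in query else []

        def infrastructure_kaas(query):
            return ["bridge B is closed"] if "flood" in query else []

        def collaborative_answer(query, providers):
            """Fan the query out to every registered KaaS entity and merge the answers."""
            return {name: provider(query) for name, provider in providers.items()}

        print(collaborative_answer("flood response plan",
                                   {"flood-model": flood_model_kaas,
                                    "infrastructure": infrastructure_kaas}))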

    Knowledge as a Service Framework for Disaster Data Management

    Each year, a number of natural disasters strike across the globe, killing hundreds and causing billions of dollars in property and infrastructure damage. Minimizing the impact of disasters is imperative in today’s society. As the capabilities of software and hardware evolve, so does the role of information and communication technology in disaster mitigation, preparation, response, and recovery. A large quantity of disaster-related data is available, including response plans, records of previous incidents, simulation data, social media data, and Web sites. However, current data management solutions offer few or no integration capabilities. Moreover, recent advances in cloud computing, big data, and NoSQL open the door for new solutions in disaster data management. In this paper, a Knowledge as a Service (KaaS) framework is proposed for disaster cloud data management (Disaster-CDM), with the objectives of 1) storing large amounts of disaster-related data from diverse sources, 2) facilitating search, and 3) supporting their interoperability and integration. Data are stored in a cloud environment using a combination of relational and NoSQL databases. The case study presented in this paper illustrates the use of Disaster-CDM with simulation models as an example.
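
    As a hedged sketch (not Disaster-CDM's actual code) of the storage idea described above, structured records could be routed to a relational store while loosely structured documents go to a NoSQL-style document store; SQLite and a plain dictionary stand in for those back ends here.

        import json
        import sqlite3

        # Relational side: structured incident records (SQLite stands in for any RDBMS).
        rel = sqlite3.connect(":memory:")
        rel.execute("CREATE TABLE incident (id INTEGER PRIMARY KEY, kind TEXT, year INTEGER)")

        # "NoSQL" side: a document store is approximated here by a dict of JSON documents.
        doc_store = {}

        def store(record):
            """Route structured records to the relational store, everything else to documents."""
            if {"id", "kind", "year"}.issubset(record):
                rel.execute("INSERT INTO incident VALUES (?, ?, ?)",
                            (record["id"], record["kind"], record["year"]))
                return "relational"
            doc_store[str(len(doc_store))] = json.dumps(record)
            return "document"

        print(store({"id": 1, "kind": "flood", "year": 2013}))
        print(store({"tweet": "Road closed near the river", "source": "social media"}))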