
    Construction of a real vehicular delay-tolerant network testbed

    Vehicular Delay-Tolerant Networks (VDTNs) appear as an innovative network architecture able to overcome communication challenges caused by variable delays, disruption, and intermittent connectivity, since they use the store-carry-and-forward method: in-transit messages (called bundles) are delivered to their destination by hopping over mobile vehicles, even when no end-to-end path exists. Because messages are stored persistently in a buffer and forwarded to the next hop, a new communication infrastructure emerges that allows low-cost, asynchronous, opportunistic communication even under critical conditions such as variable delays and bandwidth constraints. VDTN introduces a layered architecture that acts as an overlay network above the link layer, aggregates incoming IP packets into data bundles (large IP packets), and uses out-of-band signaling based on the separation of the control and data planes. This dissertation presents and evaluates the performance of a real VDTN testbed, demonstrating the practical applicability of this vehicular communication approach. It includes an embedded VDTN testbed created to evaluate safety systems in a real-world scenario, in which cars equipped with laptops acted as terminal and relay nodes. A real testbed is important because some complex issues in vehicular communication systems can be treated more realistically in real-world environments than in the laboratory. The experiments were performed on the internal streets of a Brazilian Fiat Automobiles manufacturing plant. Performance measurements and analysis were also conducted to verify the efficiency of the system. The results show that safety applications and services can be executed with the proposed VDTN architecture in several environments, although notable interference, such as fading and other radio channel characteristics, requires the use of more modern, appropriate, and robust technologies. Thus, the real deployment confirms that VDTNs can be seen as a very promising technology for vehicular communications. Fundação para a Ciência e a Tecnologia (FCT)
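The store-carry-and-forward behaviour described above can be sketched in a few lines. This is a minimal illustration of the principle only; the class, field, and bundle names are invented, not the testbed's actual code:

```python
from collections import deque

class VdtnNode:
    """Minimal store-carry-and-forward node: bundles are held in a
    bounded buffer until an opportunistic contact with another node."""
    def __init__(self, name, buffer_limit=16):
        self.name = name
        self.buffer = deque(maxlen=buffer_limit)  # in-transit bundles

    def store(self, bundle):
        self.buffer.append(bundle)  # persist until the next contact

    def contact(self, other):
        """On an opportunistic contact, hand buffered bundles to the
        other node so they move one hop closer to their destination."""
        while self.buffer:
            other.store(self.buffer.popleft())

car, roadside = VdtnNode("car"), VdtnNode("rsu")
car.store({"dst": "depot", "payload": "sensor-data"})
car.contact(roadside)  # the bundle hops from the car to the relay
```

In a real VDTN the buffer would be persistent storage and the forwarding decision driven by a routing protocol, but the principle is the same: a bundle survives in a node's buffer until a contact carries it onward, so no end-to-end path is ever required.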

    Disruption tolerance for SIP

    Wireless networks have been built with a variety of wireless technologies, including both wide-area and local-area wireless networks. In such a heterogeneous network environment, mobile users may experience either short or long interruptions, for different reasons, while holding a multimedia conversation. Much emphasis is placed on improving radio signals and enhancing seamless handover; however, recovering a multimedia conversation from a temporary network failure is also an interesting topic. In this thesis, SIP-based communication with a disruption tolerance mechanism is introduced. We present media-stream- and SIP-signaling-based detection and recovery mechanisms, together with a prototype implementation. The disruption tolerance functions include detecting the network failure, storing the conversation during the disconnection, recovering the broken multimedia session, and replaying the unheard voice. A small experimental SIP network was built to evaluate the disruption tolerance functions of the software prototype. The experimental results show that a multimedia session can be recovered from a broken session in a short time, and that important conversation is not lost during a short network disconnection. The replayed voice adds a delay to the recovered conversation, which is not a good experience for the users; however, a delayed conversation is much better than a lost one.
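The detect/store/recover/replay cycle described above can be sketched as follows. This is an illustrative sketch only: the class, the timeout value, and the method names are assumptions, not the prototype's API.

```python
import time

class DisruptionTolerantSession:
    """Sketch of a disruption-tolerant media session: declare a failure
    when media stops arriving, buffer outgoing voice during the outage,
    and replay the backlog once the session is re-established."""
    TIMEOUT = 2.0  # seconds without media before declaring a failure

    def __init__(self):
        self.last_packet = time.monotonic()
        self.backlog = []      # voice buffered while the peer is unreachable
        self.connected = True

    def on_media_packet(self):
        self.last_packet = time.monotonic()  # media still flowing

    def detect_failure(self):
        if time.monotonic() - self.last_packet > self.TIMEOUT:
            self.connected = False
        return not self.connected

    def send(self, frame):
        if self.connected:
            return frame               # normal delivery
        self.backlog.append(frame)     # store during the disconnection

    def recover(self):
        """Session re-established (e.g. via re-INVITE): return the
        unheard voice for replay and resume normal delivery."""
        self.connected = True
        replay, self.backlog = self.backlog, []
        return replay
```

In the real system the failure signal would come from missing RTP packets or failed SIP transactions rather than a single timer, but the state machine is the same: detect, buffer, recover, replay.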

    Axon: A Middleware for Robotics

    The area of multi-robot systems and frameworks has become, in recent years, a hot research area in the field of robotics. This is attributed to the great advances made in robotic hardware and software and to the diversity of robotic systems. The need to integrate different heterogeneous robotic components and systems has led to the birth of robotic middleware. A robotic middleware is an intricate piece of software that masks the heterogeneity of the underlying components and provides high-level interfaces that enable developers to make efficient use of those components. A large number of robotic middleware programs exist today, each with its own design methodologies and complexities. To date, however, no unified standard for robotic middleware exists. Moreover, much of the middleware in use today deals with low-level and hardware aspects. This adds unnecessary complexity to research involving robotic behavior, inter-robot collaboration, and other high-level experiments that do not require prior knowledge of low-level details. In addition, the notion of structured, lightweight data transfer between robots is not emphasized in existing work. This dissertation tackles the robotic middleware problem from a different perspective. The aim of this work is to develop a robust middleware that is able to handle multiple robots and clients within a laboratory environment. In the proposed middleware, a high-level representation of robots in an environment is introduced. This work also introduces the notion of structured and efficient data exchange as an important issue in robotic middleware research. The middleware has been designed and developed using rigorous methodologies and leading-edge technologies. Moreover, the middleware's ability to integrate different types of robots in a seamless manner, as well as its ability to accommodate multiple robots and clients, has been tested and evaluated.

    Revised reference model

    This document contains an update of the HIDENETS Reference Model, whose preliminary version was introduced in D1.1. The Reference Model contains the overall approach to the development and assessment of end-to-end resilience solutions. As such, it presents a framework which, due to its abstraction level, is not restricted to the HIDENETS car-to-car and car-to-infrastructure applications and use cases. Starting from a condensed summary of the dependability terminology used, it presents the network architecture containing the ad hoc and infrastructure domains, the definition of the main networking elements, and the software architecture of the mobile nodes. The concept of architectural hybridization and its inclusion in HIDENETS-like dependability solutions is described subsequently, followed by a set of communication- and middleware-level services that follow the architectural hybridization concept and are motivated by the dependability and resilience challenges raised by HIDENETS-like scenarios. Besides architectural solutions, the Reference Model addresses the assessment of dependability solutions in HIDENETS-like scenarios using quantitative evaluations, realized by a combination of top-down and bottom-up modelling, as well as verification via test scenarios. In order to allow for fault prevention in the software development phase of HIDENETS-like applications, generic UML-based modelling approaches focusing on dependability-related aspects are described. The HIDENETS Reference Model provides the framework in which the detailed solutions of the HIDENETS project are being developed, while at the same time facilitating the same task for non-vehicular scenarios and applications.

    Dataflow/Actor-Oriented language for the design of complex signal processing systems

    Signal processing algorithms are becoming more and more complex, and the algorithm-architecture adaptation and design processes can no longer rely only on the intuition of the designers to build efficient systems. Specific tools and methods are needed to cope with the increasing complexity of both algorithms and platforms. This paper presents a new framework which allows the specification, design, simulation, and implementation of a system at a higher level of abstraction compared to current approaches. The framework is based on the usage of a new actor/dataflow-oriented language called CAL, which has been specifically designed for modelling complex signal processing systems. CAL dataflow models expose the intrinsic concurrency of the algorithms by employing the notions of actor programming and dataflow. Concurrency and parallelism are very important aspects of embedded system design as we enter the multicore era. The design framework is composed of a simulation platform and of the Cal2C and CAL2HDL code generators. This paper describes in detail the principles on which these code generators are based and shows how efficient software (C) and hardware (VHDL and Verilog) code can be generated from appropriate CAL models. Results on a real design case, an MPEG-4 Simple Profile decoder, show that systems obtained with the hardware code generator outperform the handwritten VHDL version both in terms of performance and resource usage. Concerning the C code generator, the results show that the synthesized C software, mapped on a SystemC scheduler platform, is much faster than the simulated CAL dataflow program and approaches handwritten C versions.
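The actor/dataflow idea underlying CAL can be illustrated with a tiny firing-rule sketch (in Python rather than CAL, with invented names): an actor fires only when enough tokens sit on its input, consuming them and producing output tokens, so concurrency is explicit in the token flow rather than hidden in call stacks.

```python
class Actor:
    """Toy dataflow actor: fires its action whenever `arity` input
    tokens are available, forwarding each result to an optional sink."""
    def __init__(self, fire, arity):
        self.fire, self.arity = fire, arity
        self.inbox = []

    def push(self, token, sink=None):
        self.inbox.append(token)
        while len(self.inbox) >= self.arity:              # firing rule
            args, self.inbox = self.inbox[:self.arity], self.inbox[self.arity:]
            out = self.fire(*args)
            if sink is not None:
                sink.push(out)

# A two-stage pipeline: pairwise adder feeding a collector actor.
results = []
adder = Actor(lambda a, b: a + b, arity=2)
collect = Actor(results.append, arity=1)
for t in [1, 2, 3, 4]:
    adder.push(t, sink=collect)
# results now holds [3, 7]: the adder fired on (1, 2) and on (3, 4)
```

A scheduler (or, in hardware, wiring between actors) decides when firing rules are evaluated; because actors interact only through tokens, independent firings can run in parallel, which is what the CAL code generators exploit.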

    Context-aware information delivery for mobile construction workers

    The potential of mobile Information Technology (IT) applications to support the information needs of mobile construction workers has long been understood. However, existing mobile IT applications in the construction industry have underlying limitations, including their inability to respond to the changing user context, their lack of semantic awareness, and their poor integration with the desktop-based infrastructure. This research argues that awareness of the user context (such as user role, preferences, task at hand, location, etc.) can enhance mobile IT applications in the construction industry by providing a mechanism to deliver highly specific information to mobile workers through intelligent interpretation of their context. Against this background, the aim of this research was to investigate the applicability of context-aware information delivery (CAID) technologies in the construction industry. The research methodology adopted consisted of various methods. A literature review on context-aware and enabling technologies was undertaken and a conceptual framework developed, which addressed the key issues of context capture, context inference, and context integration. To illustrate the application of CAID in realistic construction situations, five futuristic deployment scenarios were developed and analysed with several industry and technology experts. From the analysis, a common set of user needs was drawn up. These needs were subsequently translated into system design goals, which acted as a key input to the design and evaluation of a prototype system implemented on a Pocket PC platform. The main achievements of this research include the development of a CAID framework for mobile construction workers, the demonstration of CAID concepts in realistic construction scenarios, the analysis of the construction industry's needs for CAID, and the implementation and validation of the prototype to demonstrate the CAID concepts.
    The research concludes that CAID has the potential to significantly improve support for mobile construction workers, and it identifies the requirements for CAID's effective deployment in the construction project delivery process. However, the industry needs to address various identified barriers to enable the realisation of the full potential of CAID.
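The core of context-aware delivery, matching a worker's current context against delivery rules, can be sketched as follows. This is a simplified stand-in for the prototype's context-inference mechanism; the rule structure, roles, and document names are invented for illustration.

```python
def match(context, rules):
    """Return the items whose conditions are all satisfied by the
    worker's current context (role, location, task at hand, ...)."""
    return [doc for cond, doc in rules
            if all(context.get(k) == v for k, v in cond.items())]

# Hypothetical delivery rules: a condition dict paired with the item
# to deliver when every key/value in the condition matches the context.
rules = [
    ({"role": "site-engineer", "location": "zone-A"}, "zone-A method statement"),
    ({"task": "inspection"}, "inspection checklist"),
]

ctx = {"role": "site-engineer", "location": "zone-A", "task": "inspection"}
delivered = match(ctx, rules)  # both rules fire for this context
```

Real CAID systems add inference (deriving the task at hand from sensed location and schedule) on top of such matching, but the principle, filtering a large information space down to what the current context makes relevant, is the same.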

    Quality of service differentiation for multimedia delivery in wireless LANs

    Delivering multimedia content to heterogeneous devices over a variable networking environment while maintaining high quality levels involves many technical challenges. The research reported in this thesis presents a solution for Quality of Service (QoS)-based service differentiation when delivering multimedia content over wireless LANs. This thesis has three major contributions, outlined below: 1. A Model-based Bandwidth Estimation algorithm (MBE), which estimates the available bandwidth based on novel TCP and UDP throughput models over IEEE 802.11 WLANs. MBE has been modelled, implemented, and tested through simulations and real-life testing. In comparison with other bandwidth estimation techniques, MBE shows better performance in terms of error rate, overhead, and loss. 2. An intelligent Prioritized Adaptive Scheme (iPAS), which provides QoS service differentiation for multimedia delivery in wireless networks. iPAS assigns dynamic priorities to the various streams and determines their bandwidth share by employing a probabilistic approach which makes use of stereotypes. The total bandwidth to be allocated is estimated using MBE. The priority level of each individual stream is variable and depends on stream-related characteristics and delivery QoS parameters. iPAS can be deployed seamlessly over the original IEEE 802.11 protocols and can be included in the IEEE 802.21 framework in order to optimize control signal communication. iPAS has been modelled, implemented, and evaluated via simulations. The results demonstrate that iPAS achieves better performance than the equal channel access mechanism over IEEE 802.11 DCF and than a service differentiation scheme on top of IEEE 802.11e EDCA, in terms of fairness, throughput, delay, loss, and estimated PSNR. Additionally, both objective and subjective video quality assessments have been performed using a prototype system. 3.
A QoS-based Downlink/Uplink Fairness Scheme, which uses the stereotype-based structure to balance the QoS parameters (i.e. throughput, delay, and loss) between downlink and uplink VoIP traffic. The proposed scheme has been modelled and tested through simulations. The results show that, in comparison with other downlink/uplink fairness-oriented solutions, the proposed scheme performs better in terms of VoIP capacity and fairness level between downlink and uplink traffic.
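To make the idea of priority-driven bandwidth sharing concrete, the sketch below divides an MBE-style bandwidth estimate among streams in proportion to their priority weights. It is a simplified deterministic stand-in: the actual iPAS scheme is probabilistic and derives its dynamic priorities from stereotypes, and all names and numbers here are illustrative.

```python
def allocate(total_bw, streams):
    """Share the estimated available bandwidth (bits/s) among streams
    in proportion to their priority weights."""
    total_priority = sum(p for _, p in streams)
    return {name: total_bw * p / total_priority for name, p in streams}

# 10 Mbps estimated available; video prioritized over VoIP over data.
shares = allocate(10_000_000, [("video", 3), ("voip", 2), ("data", 1)])
# video receives half the bandwidth (3/6), voip a third, data a sixth
```

Because the weights are recomputed as stream characteristics and QoS parameters change, a scheme like this adapts each stream's share continuously instead of granting equal channel access.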

    Development of a spatial data infrastructure for precision agriculture applications

    Precision agriculture (PA) is the technical answer to tackling heterogeneous conditions in a field. It works through site-specific operations on a small scale and is driven by data. The objective is an optimized agricultural field application that is adaptable to local needs, and the needs differ within a task according to spatial conditions. A field, as a homogeneously planted unit, exceeds in size the scale of different landscape-ecological properties, such as soil type, slope, moisture content, solar radiation, etc. Various PA sensors sample data on the heterogeneous conditions in a field. PA software and Farm Management Information Systems (FMIS) translate the data into status information or application instructions, which are optimized for the local conditions. The starting point of the research was the determination that the PA process was only being used in individual environments, without exchange between different users or with other domains. Data have been sampled for specific operations, but the model of PA suffers from these closed data streams and software products. Initially, sensors, data processing, and controlled implements were constructed and sold as monolithic applications. An exchange of hardware or software, or of data, was not planned. The design was focused on functionality in a fixed surrounding and conceived as a unit. This has been identified as a disadvantage for ongoing developments and for the creation of added value: innovative or even inspiring influences from the outside cannot be considered. To make this possible, the underlying infrastructure must be flexible and optimized for the exchange of data. This thesis explores the necessary data handling, in terms of integrating knowledge from other domains, with a focus on geospatial data processing. As PA is largely dependent on geographical data, this work develops spatial data infrastructure (SDI) components and is based on the methods and tools of geoinformatics.
An SDI provides concepts for the organization of geospatial components. It consists of spatial data and metadata in geospatial workflows. The SDI at the center of these workflows is implemented through technologies, policies, arrangements, and interfaces that make the data accessible to various users. Data exchange is the major aim of the concept. As previously stated, data exchange is necessary for PA operations, and it can benefit from the defined components of an SDI. Furthermore, PA processes gain access to interchange with other domains: importing additional, external data is a benefit, while an export interface for agricultural data simultaneously offers new possibilities. Coordinated communication ensures understanding for each participant, and from the technological point of view, standardized interfaces are best practice. This work demonstrates the benefit of standardized data exchange for PA by using the standards of the Open Geospatial Consortium (OGC). The OGC develops and publishes a wide range of relevant standards, which are widely adopted in geospatially enabled software. They are practically proven in other domains and have been partially implemented in FMIS in recent years. Depending on their focus, they can support software solutions by incorporating additional information for humans or machines into additional logic and algorithms.
The process of research follows five objectives: (i) to increase the usability of PA tools in order to open the technology to a wider group of users; (ii) to include external data and services seamlessly, through standardized interfaces, in PA applications; (iii) to support exchange with other domains concerning data and technology; (iv) to create a modern PA software architecture, which allows new players and known brands to support processes in PA and to develop new business segments; (v) to use IT technologies as a driver for agriculture and to contribute to the digitalization of agriculture.
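As a concrete example of the standardized interfaces discussed above, the sketch below composes an OGC WMS 1.3.0 GetMap request, the kind of call an FMIS client would issue against an SDI service. The endpoint URL and layer name are placeholders; the request parameters themselves are the ones the WMS standard defines.

```python
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, size=(512, 512)):
    """Build a standard OGC WMS 1.3.0 GetMap request URL for one layer
    over a bounding box (minx, miny, maxx, maxy in the given CRS)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(c) for c in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical SDI service publishing a soil-type layer for a field
url = wms_getmap_url("https://example.org/wms", "soil_type",
                     (51.0, 7.0, 51.1, 7.1))
```

Because the request follows a published standard, any OGC-compliant client, whether an FMIS, a desktop GIS, or a script, can consume the same service without custom integration, which is exactly the interoperability argument made above.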

    System Level Performance Evaluation of Distributed Embedded Systems

    In order to evaluate the feasibility of distributed embedded systems in different application domains at an early phase, System Level Performance Evaluation (SLPE) must provide reliable estimates of the non-functional properties of the system, such as end-to-end delays and the packet loss rate. The values of these non-functional properties depend not only on the application layer of the OSI model but also on the technologies residing at the MAC, transport, and physical layers. Therefore, a system-level performance evaluation methodology must provide functionally accurate models of the protocols and technologies operating at these layers. A state-of-the-art survey found that the existing approaches to SLPE are either specialized for a particular domain of systems or apply a particular model of computation (MoC) for modeling the communication and synchronization between the different components of a distributed application; these approaches therefore abstract the functionalities of the data-link, transport, and MAC layers with the highly abstract message-passing methods employed by the different models of computation. On the other hand, network simulators such as OMNeT++, ns-2, and OPNET do not provide models for the platform components of devices, such as processors and memories, and abstract application processing entirely as delays obtained via traffic generators. The system designer is therefore unable to determine the potential impact of an application in terms of utilization of the platform used by the device.
Hence, for a system-level performance evaluation approach to estimate both the platform utilization and the non-functional properties that are a consequence of the lower layers of the OSI model (such as end-to-end delays), it must provide tools for the automatic extraction of application workload models at various levels of refinement, together with functionally correct models of the lower OSI layers (transport, MAC, and physical). Since ABSOLUT is not restricted to a particular domain and does not depend on any MoC, it was selected for extension into a system-level performance evaluation approach for distributed embedded systems. Models of the data-link and transport layer protocols and automatic workload generation for system calls were not available in the ABSOLUT performance evaluation methodology. The thesis describes the design and modelling of these OSI layers and an automatic workload generation tool for system calls. The tools and models integrated into the ABSOLUT methodology were used in a number of case studies, and their accuracy was compared against network simulators and real systems. The results were 88% accurate for user-space code at the application layer and provide an improvement of over 50% compared to manual models for external libraries and system calls. The ABSOLUT physical layer models were found to be 99.8% accurate when compared to analytical models. The MAC and transport layer models were found to be 70-80% accurate when compared with the same scenarios simulated by the ns-2 and OMNeT++ simulators. The bit error rates, frame error probability, and packet loss rates show close correlation with the analytical methods, i.e., over 99%, 92%, and 80%, respectively.
Therefore, the results of the ABSOLUT framework for the application layer outperform those of performance evaluation approaches that employ virtual systems, while at the same time providing estimates of end-to-end delays and packet loss rate as accurate as those of network simulators. The results of the network simulators vary in absolute values, but they follow the same trend. The extensions made to ABSOLUT thus allow the system designer to identify potential bottlenecks at different OSI layers and to evaluate the non-functional properties with a high level of accuracy. Also, if the system designer wants to focus entirely on the application layer, different models of computation can easily be instantiated on top of the extended ABSOLUT framework to achieve higher simulation speeds, as described in the thesis.
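The frame error probability mentioned above is commonly checked against a simple analytical model: assuming independent bit errors, a frame is lost if any one of its bits is corrupted. The sketch below implements that standard textbook formula, not ABSOLUT's own physical layer model.

```python
def frame_error_probability(ber, frame_bits):
    """Frame error probability under independent bit errors:
    FER = 1 - (1 - BER)^n, since the frame survives only if
    every one of its n bits is received correctly."""
    return 1.0 - (1.0 - ber) ** frame_bits

# e.g. a 1500-byte (12000-bit) frame at a bit error rate of 1e-5
fer = frame_error_probability(1e-5, 12_000)
# roughly an 11% chance that such a frame contains at least one error
```

Comparing a simulated frame error rate against this closed-form value is a cheap sanity check for physical layer models, which is how close correlation with analytical methods, as reported above, is typically established.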