
    Tandem: A Context-Aware Method for Spontaneous Clustering of Dynamic Wireless Sensor Nodes

    Wireless sensor nodes attached to everyday objects and worn by people are able to collaborate and actively assist users in their activities. We propose a method through which wireless sensor nodes organize spontaneously into clusters based on a common context. Since the confidence of sharing a common context varies over time, the algorithm takes into account a window-based history of beliefs. We approximate the behaviour of the algorithm using a Markov chain model and theoretically analyse cluster stability. We compare the theoretical approximation with simulations, making use of experimental results reported from field tests. We show the tradeoff between the history length necessary to achieve a certain stability and the responsiveness of the clustering algorithm.
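
    A minimal sketch of what such a window-based decision rule could look like, assuming a fixed window length, a confidence threshold, and a majority vote (all illustrative parameters, not the paper's actual rule):

```python
from collections import deque

class WindowedClusterMembership:
    """Sliding-window vote over noisy context-similarity confidences.

    The window length, confidence threshold, and majority fraction are
    hypothetical parameters chosen for illustration.
    """

    def __init__(self, window=10, threshold=0.6, majority=0.5):
        self.history = deque(maxlen=window)   # window-based history of beliefs
        self.threshold = threshold
        self.majority = majority

    def update(self, confidence):
        """Record one confidence sample and return the membership decision."""
        self.history.append(confidence >= self.threshold)
        votes = sum(self.history) / len(self.history)
        return votes > self.majority          # stay in / join the cluster?

# Example: a noisy stream of context-similarity confidences
membership = WindowedClusterMembership(window=5)
for c in [0.9, 0.4, 0.8, 0.7, 0.3, 0.9]:
    print(c, membership.update(c))
```

    A longer window smooths out transient drops in confidence and thus stabilises the cluster, at the cost of reacting more slowly when the shared context genuinely changes, which mirrors the stability/responsiveness tradeoff studied in the paper.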

    Runtime Adaptive IoMT Node on Multi-Core Processor Platform

    The Internet of Medical Things (IoMT) paradigm is becoming mainstream in multiple clinical trials and healthcare procedures. Thanks to innovative technologies, latest-generation communication networks, and state-of-the-art portable devices, IoMT opens up new scenarios for data collection and continuous patient monitoring. Two very important aspects should be considered to make the most of this paradigm. First, moving the processing task from the cloud to the edge brings several advantages, such as responsiveness, portability, scalability, and reliability of the sensor node. Second, in order to increase the accuracy of the system, state-of-the-art cognitive algorithms based on artificial intelligence and deep learning must be integrated. Sensor nodes often need to be battery powered and must remain active for a long time without another power source. Therefore, one of the challenges to be addressed during the design and development of IoMT devices concerns energy optimization. Our work proposes an implementation of cognitive data analysis based on deep learning techniques on a resource-constrained computing platform. To handle power efficiency, we introduce a component called Adaptive runtime Manager (ADAM), which dynamically reconfigures the hardware and software of the device during execution in order to better adapt it to the workload and the required operating mode. To test a high computational load on a multi-core system, the Orlando prototype board by STMicroelectronics, cognitive analysis of electrocardiogram (ECG) traces has been adopted, considering single-channel and six-channel simultaneous cases. Experimental results show that by managing the sensor node configuration at runtime, energy savings of at least 15% can be achieved.
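
    As a rough illustration of the kind of decision such a runtime manager makes, the sketch below picks the smallest hardware configuration that sustains the measured workload. The configuration table, headroom factor, and cycle counts are invented for the example and are not ADAM's actual policy:

```python
# Minimal sketch of a runtime adaptation decision in the spirit of ADAM.
# Configurations, thresholds, and the cycle-budget model are illustrative
# assumptions, not the paper's implementation.

CONFIGS = [
    {"name": "low-power",   "cores": 1, "freq_mhz": 200},   # light workloads
    {"name": "balanced",    "cores": 2, "freq_mhz": 400},
    {"name": "performance", "cores": 4, "freq_mhz": 800},   # heavy workloads
]

def choose_config(samples_per_second, cycles_per_sample, headroom=1.2):
    """Pick the smallest configuration whose cycle budget covers the demand."""
    demand = samples_per_second * cycles_per_sample          # cycles/s needed
    for cfg in CONFIGS:
        capacity = cfg["cores"] * cfg["freq_mhz"] * 1e6      # cycles/s available
        if capacity >= demand * headroom:
            return cfg
    return CONFIGS[-1]                                       # saturate at max config

print(choose_config(360, 500_000))       # e.g. single-channel ECG at 360 Hz
print(choose_config(6 * 360, 500_000))   # e.g. six-channel simultaneous case
```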

    Case studies for a new IoT programming paradigm: Fluidware

    A number of scientific and technological advancements have enabled turning the Internet of Things vision into reality. However, there is still a bottleneck in designing and developing IoT applications and services: each device has to be programmed individually, and services are deployed to specific devices. The Fluidware approach advocates that, to truly scale and raise the level of abstraction, a novel perspective is needed, focussing on device ensembles and dynamic allocation of resources. In this paper, we motivate the need for such a paradigm shift through three case studies emphasising the mismatch between state-of-the-art solutions and the properties one would like to achieve.

    High dynamic range video merging, tone mapping, and real-time implementation

    Although High Dynamic Range (HDR) imaging has been the subject of significant research over the past fifteen years, the goal of cinema-quality HDR video has not yet been achieved. This work references an optical method patented by Contrast Optical which is used to capture sequences of Low Dynamic Range (LDR) images that can be used to form HDR images as the basis for HDR video. Because of the large difference in exposure spacing of the LDR images captured by this camera, present methods of merging LDR images are insufficient to produce cinema-quality HDR images and video without significant visible artifacts. Thus the focus of the research presented is twofold. The first contribution is a new method of combining LDR images with exposure differences of greater than 3 stops into an HDR image. The second contribution is a method of tone mapping HDR video which addresses the potential problem of HDR video flicker and automates parameter control of the tone mapping operator. A prototype of this HDR video capture technique, along with the combining and tone mapping algorithms, has been implemented in a high-definition HDR-video system. Additionally, Field Programmable Gate Array (FPGA) hardware implementation details are given to support real-time HDR video. Still frames from the acquired HDR video, merged and tone mapped using the presented techniques, are also shown.
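
    For context, here is a minimal sketch of the conventional hat-weighted exposure merge that such work builds on and improves for widely spaced exposures. The weighting function and the assumption of linearised pixel values are standard textbook choices used for illustration, not the dissertation's proposed algorithm:

```python
import numpy as np

def merge_ldr_to_hdr(images, exposure_times):
    """Conventional weighted merge of LDR exposures into a radiance map.

    `images` are linearised float arrays in [0, 1]. This is the standard
    hat-weighted average that struggles with widely spaced exposures, not
    the new combining method proposed in the dissertation.
    """
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        weight = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tones
        numerator += weight * img / t            # radiance estimate = pixel / exposure
        denominator += weight
    return numerator / np.maximum(denominator, 1e-6)

# Three simulated exposures of a synthetic radiance map, roughly 3 stops apart
scene = np.random.rand(4, 4) * 200.0            # "true" scene radiance
exposure_times = [1 / 1000, 1 / 125, 1 / 15]
ldr_stack = [np.clip(scene * t, 0.0, 1.0) for t in exposure_times]
print(merge_ldr_to_hdr(ldr_stack, exposure_times))  # approximates `scene`
```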

    Hardware-Aware Algorithm Designs for Efficient Parallel and Distributed Processing

    The introduction and widespread adoption of the Internet of Things, together with emerging new industrial applications, bring new requirements in data processing. Specifically, the need for timely processing of data that arrives at high rates creates a challenge for the traditional cloud computing paradigm, where data collected at various sources is sent to the cloud for processing. As an approach to this challenge, processing algorithms and infrastructure are distributed from the cloud to multiple tiers of computing, closer to the sources of data. This creates a wide range of devices for algorithms to be deployed on and software designs to adapt to. In this thesis, we investigate how hardware-aware algorithm designs on a variety of platforms lead to algorithm implementations that efficiently utilize the underlying resources. We design, implement and evaluate new techniques for representative applications that involve the whole spectrum of devices, from resource-constrained sensors in the field to highly parallel servers. At each tier of processing capability, we identify key architectural features that are relevant for applications and propose designs that make use of these features to achieve high-rate, timely and energy-efficient processing.
    In the first part of the thesis, we focus on high-end servers and utilize two main approaches to achieve high-throughput processing: vectorization and thread parallelism. We employ vectorization for the case of pattern matching algorithms used in security applications. We show that re-thinking the design of algorithms to better utilize the resources available in the platforms they are deployed on, such as vector processing units, can bring significant speedups in processing throughput. We then show how thread-aware data distribution and proper inter-thread synchronization allow scalability, especially for the problem of high-rate network traffic monitoring. We design a parallelization scheme for sketch-based algorithms that summarize traffic information, which allows them to handle incoming data at high rates and answer queries on that data efficiently, without overheads.
    In the second part of the thesis, we target the intermediate tier of computing devices and focus on the typical examples of hardware found there. We show how single-board computers with embedded accelerators can be used to handle the computationally heavy part of applications and showcase this specifically for pattern matching in security-related processing. We further identify key hardware features that affect the performance of pattern matching algorithms on such devices, present a co-evaluation framework to compare algorithms, and design a new algorithm that efficiently utilizes the hardware features.
    In the last part of the thesis, we shift the focus to the low-power, resource-constrained tier of processing devices. We target wireless sensor networks and study distributed data processing algorithms where the processing happens on the same devices that generate the data. Specifically, we focus on a continuous monitoring algorithm (geometric monitoring) that aims to minimize communication between nodes. By deploying that algorithm under realistic conditions, we demonstrate that the interplay between the network protocol and the application plays an important role in this layer of devices. Based on that observation, we co-design a continuous monitoring application with a modern network stack and augment it further with an in-network aggregation technique. In this way, we show that awareness of the underlying network stack is important to realize the full potential of the continuous monitoring algorithm.
    The techniques and solutions presented in this thesis contribute to better utilization of hardware characteristics across a wide spectrum of platforms. We employ these techniques on problems that are representative examples of current and upcoming applications, and contribute an outlook of emerging possibilities that can build on the results of the thesis.
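
    As one possible reading of the thread-aware design described for sketch-based traffic summaries, the sketch below keeps a private count-min sketch per thread on the update path and merges the sketches only at query time. The sketch dimensions, hashing, and merge-on-query strategy are assumptions made for illustration, not the thesis's exact scheme:

```python
import threading
from hashlib import blake2b

class CountMinSketch:
    """Tiny count-min sketch; width and depth are illustrative choices."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        digest = blake2b(f"{row}:{key}".encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, key)] += count

    def estimate(self, key):
        return min(self.table[row][self._index(row, key)] for row in range(self.depth))

    def merge(self, other):
        for row in range(self.depth):
            for col in range(self.width):
                self.table[row][col] += other.table[row][col]

# One private sketch per thread avoids synchronization on the hot update path;
# the sketches are combined only when a query arrives.
def worker(sketch, packets):
    for flow_id in packets:
        sketch.add(flow_id)

per_thread = [CountMinSketch() for _ in range(4)]
threads = [threading.Thread(target=worker, args=(s, ["10.0.0.1"] * 1000))
           for s in per_thread]
for t in threads: t.start()
for t in threads: t.join()

merged = CountMinSketch()
for s in per_thread:
    merged.merge(s)
print(merged.estimate("10.0.0.1"))   # 4 threads x 1000 inserts = 4000
```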

    Study, design and implementation of WebRTC for a real-time multimedia messaging application

    Social networks are no longer a passing phenomenon; they are not merely a reality but have become indispensable. During its period of growth and consolidation, the internet has undergone a great transformation driven by the kinds of content that users demand most. Sharing images or videos, or even making calls to another user, are tasks that an average user performs several times a day. This transition could only happen thanks to new technologies that not only simplify those tasks but also, given the rise of handheld devices, work successfully under reasonable data and battery consumption rates. Videoconferences over the network, and multimedia data streams in general, have traditionally gone hand in hand with closed software products such as Macromedia Flash, which required the end user to install a plugin in the browser. Under those premises, this project focuses on the investigation of WebRTC as a technology capable of achieving videoconferences between users without the need for any browser plugin. In order to verify the knowledge gathered through the study of the technology, the design, architecture and implementation of an application capable of doing so are proposed.

    SPINN: Synergistic Progressive Inference of Neural Networks over Device and Cloud

    Despite the soaring use of convolutional neural networks (CNNs) in mobile applications, uniformly sustaining high-performance inference on mobile has been elusive due to the excessive computational demands of modern CNNs and the increasing diversity of deployed devices. A popular alternative comprises offloading CNN processing to powerful cloud-based servers. Nevertheless, by relying on the cloud to produce outputs, emerging mission-critical and high-mobility applications, such as drone obstacle avoidance or interactive applications, can suffer from the dynamic connectivity conditions and the uncertain availability of the cloud. In this paper, we propose SPINN, a distributed inference system that employs synergistic device-cloud computation together with a progressive inference method to deliver fast and robust CNN inference across diverse settings. The proposed system introduces a novel scheduler that co-optimises the early-exit policy and the CNN splitting at run time, in order to adapt to dynamic conditions and meet user-defined service-level requirements. Quantitative evaluation illustrates that SPINN outperforms its state-of-the-art collaborative inference counterparts by up to 2x in achieved throughput under varying network conditions, reduces the server cost by up to 6.8x and improves accuracy by 20.7% under latency constraints, while providing robust operation under uncertain connectivity conditions and significant energy savings compared to cloud-centric execution. Comment: Accepted at the 26th Annual International Conference on Mobile Computing and Networking (MobiCom), 202
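
    A minimal sketch of the progressive-inference idea, assuming a single fixed split point and a softmax-confidence threshold as stand-ins for SPINN's run-time scheduler; the stage and classifier callables are toy placeholders:

```python
import numpy as np

def softmax(logits):
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def device_cloud_inference(sample, device_stages, exit_head, cloud_stages,
                           confidence_threshold=0.8):
    """Run the on-device part, exit early if confident, otherwise offload.

    The fixed split point and threshold are illustrative assumptions;
    SPINN chooses both dynamically based on network and load conditions.
    """
    x = sample
    for stage in device_stages:
        x = stage(x)
    probs = softmax(exit_head(x))
    if probs.max() >= confidence_threshold:
        return int(probs.argmax()), "early-exit on device"
    for stage in cloud_stages:          # in practice: send x over the network
        x = stage(x)
    return int(softmax(x).argmax()), "completed in the cloud"

# Toy pipeline made of random linear stages
rng = np.random.default_rng(0)
stages_d = [lambda v, W=rng.normal(size=(8, 8)): np.tanh(W @ v) for _ in range(2)]
head = lambda v, W=rng.normal(size=(3, 8)): W @ v
stages_c = [lambda v, W=rng.normal(size=(3, 8)): W @ v]
print(device_cloud_inference(rng.normal(size=8), stages_d, head, stages_c))
```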

    Application Layer Architectures for Disaster Response Systems

    Traditional disaster response methods face several issues, such as limited situational awareness, lack of interoperability, and reliance on voice-oriented communications. Disaster response systems (DRSs) aim to address these issues and assist responders by providing a wide range of services. Since the network infrastructure in a disaster area may become non-operational, mobile ad-hoc networks (MANETs) are the only alternative to provide connectivity and other network services. Because of the dynamic nature of MANETs, the applications/services provided by DRSs should be based on distributed architectures. These distributed applications/services form overlays on top of MANETs. This thesis aims to improve three main aspects of DRSs: interoperability, automation, and prioritization. Interoperability enables communication and collaboration between different rescue teams, which improves the efficiency of rescue operations and avoids potential interference between teams. Automation allows responders to focus more on their tasks by minimizing the human interventions required in DRSs; it also allows machines to operate in areas where humans cannot because of safety issues. Prioritization ensures that emergency services (e.g. firefighter communications) in DRSs have higher priority to receive resources (e.g. network services) than non-emergency services (e.g. news reporters' communications). Prioritizing vital services in a disaster area can save lives. This thesis proposes application layer architectures that enable three important services in DRSs and contribute to the improvement of the three aforementioned aspects of DRSs: overlay interconnection, service discovery, and differentiated quality of service (QoS). The overlay interconnection architecture provides a distributed and scalable mechanism to interconnect end-user application overlays and gateway overlays in MANETs. The service discovery architecture is a distributed, directory-based service discovery mechanism based on the standard Domain Name System (DNS) protocol. Lastly, a differentiated QoS architecture is presented that provides admission control and policy enforcement functions based on a given prioritization scheme. For each of the provided services, a motivating scenario is presented, requirements are derived, and related work is evaluated with respect to these requirements. Furthermore, performance evaluations are provided for each of the proposed architectures. For the overlay interconnection architecture, a prototype is presented along with performance measurements. The results show that our architecture achieves acceptable request-response delays and network load overhead. For the service discovery architecture, extensive simulations have been run to evaluate the performance of our architecture and to compare it with the Internet Engineering Task Force (IETF) directory-less service discovery proposal based on Multicast DNS. The results show that our architecture generates less overall network load and ensures successful discovery with higher probability. Finally, for the differentiated QoS architecture, simulation results show that our architecture not only enables differentiated QoS but also improves overall QoS in terms of the number of successful overlay flows.
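
    To make the prioritization aspect concrete, below is a small sketch of priority-based admission control that preempts lower-priority overlay flows when capacity runs out. The single-capacity model, priority numbering, and example flows are invented for illustration and do not reproduce the thesis's differentiated QoS architecture:

```python
class AdmissionController:
    """Priority-aware admission control for overlay flows (illustrative).

    A single shared capacity and numeric priorities (lower number = more
    important) are assumptions; the thesis's policy enforcement operates
    over MANET overlays.
    """

    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.flows = []   # (priority, flow_id, bandwidth_kbps)

    def used(self):
        return sum(bw for _, _, bw in self.flows)

    def admit(self, flow_id, bandwidth_kbps, priority):
        # Preempt lower-priority flows until the new flow fits, or give up.
        while self.used() + bandwidth_kbps > self.capacity:
            preemptable = [f for f in self.flows if f[0] > priority]
            if not preemptable:
                return False                        # reject: nothing to preempt
            self.flows.remove(max(preemptable))     # drop the lowest-priority flow
        self.flows.append((priority, flow_id, bandwidth_kbps))
        return True

qos = AdmissionController(capacity_kbps=1000)
print(qos.admit("firefighter-voice", 600, priority=0))   # True
print(qos.admit("news-report-video", 500, priority=2))   # False: would exceed capacity
print(qos.admit("medical-telemetry", 300, priority=1))   # True
```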

    Doctor of Philosophy

    Interactive editing and manipulation of digital media is a fundamental component of digital content creation. One medium in particular, digital imagery, has seen a recent increase in the popularity of large or even massive image formats. Unfortunately, current systems and techniques are rarely concerned with scalability or usability for these large images. Moreover, processing massive (or even large) imagery is assumed to be an off-line, automatic process, although many problems associated with these datasets require human intervention for high-quality results. This dissertation details how to design interactive image techniques that scale. In particular, massive imagery is typically constructed as a seamless mosaic of many smaller images. The focus of this work is the creation of new technologies to enable user interaction in the formation of these large mosaics. While an interactive system for all stages of the mosaic creation pipeline is a long-term research goal, this dissertation concentrates on the last phase of the pipeline: the composition of registered images into a seamless composite. The work detailed in this dissertation provides the technologies to fully realize interactive editing in mosaic composition on image collections ranging from the very small to the massive in scale.

    Fog Connectivity Clustering and MDP Modeling for Software-defined Vehicular Networks

    Intelligent and networked vehicles cooperate to create a mobile cloud through vehicular fog computing (VFC). Such clouds rely heavily on the underlying vehicular networks, so estimating communication resilience makes it possible to address the problems caused by intermittent vehicle connectivity during data transfers. Individually estimating the communication stability of vehicles, nevertheless, leads to incorrect predictions due to their particular mobility patterns. Therefore, we provide a region-oriented fog management model based on connectivity in a heterogeneous vehicular network environment via V2X and C-V2X. A fog management strategy dynamically monitors nearby vehicles to determine distinct regions in urban centres. The model enables a software-defined vehicular network (SDVN) controller to coordinate data flows. The vehicular connectivity described by our model assesses the potential for vehicle communication and drives dynamic vehicle clustering. Given the stochasticity of the environment, our model is based on a Markov Decision Process (MDP), tracking the status of vehicle clusters and their potential for provisioning services. The model for vehicular clustering is supported by 5G and DSRC heterogeneous networks. Simulated analyses have shown the capability of our proposed model to estimate cluster reliability in real-time urban scenarios and to support effective vehicular fog management.
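
    As a toy illustration of using an MDP to track cluster status, the sketch below runs value iteration over three made-up connectivity states with keep/recluster actions. The states, transition probabilities, and rewards are invented for the example and are not taken from the paper:

```python
# Toy MDP over cluster connectivity states; transitions and rewards are
# illustrative, not the paper's model.
STATES = ["stable", "degraded", "fragmented"]
ACTIONS = ["keep", "recluster"]

# P[state][action] -> list of (next_state, probability)
P = {
    "stable":     {"keep": [("stable", 0.9), ("degraded", 0.1)],
                   "recluster": [("stable", 0.95), ("degraded", 0.05)]},
    "degraded":   {"keep": [("degraded", 0.5), ("fragmented", 0.5)],
                   "recluster": [("stable", 0.7), ("degraded", 0.3)]},
    "fragmented": {"keep": [("fragmented", 1.0)],
                   "recluster": [("stable", 0.5), ("fragmented", 0.5)]},
}
# R[state][action] -> immediate reward (reclustering costs control traffic)
R = {
    "stable":     {"keep": 1.0, "recluster": 0.7},
    "degraded":   {"keep": 0.3, "recluster": 0.1},
    "fragmented": {"keep": -1.0, "recluster": -0.5},
}

def value_iteration(gamma=0.9, sweeps=200):
    """Compute state values and a greedy keep/recluster policy."""
    V = {s: 0.0 for s in STATES}
    for _ in range(sweeps):
        V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                    for a in ACTIONS)
             for s in STATES}
    policy = {s: max(ACTIONS,
                     key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
              for s in STATES}
    return V, policy

print(value_iteration())
```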