
    Connecting the World of Embedded Mobiles: The RIOT Approach to Ubiquitous Networking for the Internet of Things

    The Internet of Things (IoT) is rapidly evolving based on low-power protocol standards that extend the Internet into the embedded world. Pioneering implementations have proven it is feasible to inter-network very constrained devices, but they had to rely on peculiar cross-layered designs and offered only a minimalistic set of features. In the long run, however, professional use and massive deployment of IoT devices require full-featured, cleanly composed, and flexible network stacks. This paper introduces the networking architecture that turns RIOT into a powerful IoT system for low-power wireless scenarios. RIOT networking offers (i) a modular architecture with generic interfaces for plugging in drivers, protocols, or entire stacks, (ii) support for multiple heterogeneous interfaces and stacks that can operate concurrently, and (iii) GNRC, its cleanly layered, recursively composed default network stack. We contribute an in-depth analysis of the communication performance and resource efficiency of RIOT, both at the micro-benchmark level and by comparing IoT communication across different platforms. Our findings show that, although it is based on significantly different design trade-offs, the networking subsystem of RIOT achieves performance equivalent to that of Contiki and TinyOS, the two operating systems that pioneered IoT software platforms.
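
    RIOT itself is written in C; purely as a language-agnostic sketch of the idea behind GNRC's recursive composition, the toy Python below (all names invented) shows layers that know nothing about each other and interact only through a generic registry-and-dispatch interface, so any layer, driver, or whole stack can be swapped out:

        # Toy illustration of GNRC-style layering (invented names, not RIOT's C API):
        # each protocol registers for a packet type, and a dispatcher passes packets
        # between otherwise independent layers.
        from collections import defaultdict
        from typing import Callable

        registry: dict[str, list[Callable[[bytes], None]]] = defaultdict(list)

        def register(nettype: str, handler: Callable[[bytes], None]) -> None:
            """A protocol announces interest in one packet type."""
            registry[nettype].append(handler)

        def dispatch(nettype: str, pkt: bytes) -> None:
            """Hand a packet to every protocol registered for its type."""
            for handler in registry[nettype]:
                handler(pkt)

        # Two independent "layers" wired together only through the dispatcher.
        register("udp", lambda pkt: print("udp layer got", pkt))
        register("ipv6", lambda pkt: dispatch("udp", pkt[1:]))  # strip toy header
        dispatch("ipv6", b"\x11hello")  # -> udp layer got b'hello'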

    Leveraging Automated Machine Learning, Edge Computing for Video Understanding

    Computer vision has witnessed unprecedented growth over the past few years, mainly because of the application of deep learning methods to tasks such as classification, action recognition, segmentation, and object detection. Video-based action recognition is an important task for video understanding, with broad applications in security and behavior analysis. However, developing an effective action recognition solution often requires extensive engineering effort to build and test different combinations of modules and to optimize their hyperparameters, and doing so demands considerable computer vision expertise from developers. To address these problems, we present AutoVideo, an AutoML framework for automated video action recognition. AutoVideo tackles these problems by 1) being a highly modular and extendable infrastructure following a standard pipeline language, 2) providing an exhaustive list of primitives for pipeline construction, 3) including data-driven tuners to save the effort of pipeline tuning, and 4) integrating an easy-to-use graphical user interface (GUI). Another major problem with computer vision applications is the deployment of machine learning models to edge devices, which typically demand low latency, low power, or data privacy; this requires significant research and engineering effort due to the computational and memory limitations of edge devices. To tackle this problem, we also present BED, an object detection system for edge devices implemented on the MAX78000 DNN accelerator. To demonstrate real-world applicability, we integrate on-device DNN inference with a camera for image acquisition and a screen for output display. AutoVideo is released at GitHub - AutoVideo-GitHub under the MIT license with a demo video hosted at Demo Video-AutoVideo, while BED is released at Github - BED_main-GitHub with a demo video at Demo Video-BED.
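
    As a toy sketch of the pipeline-plus-tuner idea (invented names, not AutoVideo's actual API; see the linked repository for that), the Python below chains interchangeable primitives and lets a brute-force tuner pick hyperparameters:

        # Toy pipeline-and-tuner sketch (all primitives are stand-ins).
        from itertools import product

        def decode(video): return list(video)              # stand-in decoder
        def sample(frames, rate): return frames[::rate]    # tunable primitive
        def classify(frames, threshold):                   # stand-in recognizer
            return "action" if len(frames) >= threshold else "background"

        def run_pipeline(video, rate, threshold):
            return classify(sample(decode(video), rate), threshold)

        # A "data-driven tuner": score every hyperparameter combination on a
        # labeled example and keep the best one.
        video, label = "x" * 30, "action"
        best = max(
            product([1, 2, 5], [3, 10]),
            key=lambda cfg: run_pipeline(video, *cfg) == label,
        )
        print("best (rate, threshold):", best)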

    IoT requirements and architecture for personal health

    Personal health devices and wearables have the potential to drastically change the current landscape of wellness and care delivery. As these devices become commonplace, more and more patients are gaining access to new forms of simplified health monitoring and data collection, empowering them to engage in their own health and well-being in unprecedented ways. Cheap and easy-to-use health IoT devices are leading the transformation towards a continuum-of-care health system, focused on detection and prevention, where health issues can be caught before hospital care or professional intervention is needed. However, this vision is set to outpace the expectations and capabilities of today's connected health devices, challenging existing ecosystems with unique requirements on functionality, connectivity, and usability. This thesis presents a set of health IoT requirements that are especially relevant to the design of a connected device's low-level software features: its thing architecture. These requirements represent shared concerns in health-related IoT scenarios that can be solved with the features and capabilities of smart things. The thesis presents an architectural design and implementation of concrete features influenced by some of these requirements, leading to the Atlas Health IoT Architecture, which explores the role of safe and meaningful interactions between devices and users, referred to as IoTility. The thesis also considers the IoTility of smartphone applications in health scenarios, called Mobile Apps As Things (MAAT), resulting in a programming enabler that more closely integrates app features with those of physical thing devices. Alongside these implementations, the thesis presents a set of experimental evaluations investigating the feasibility of both MAAT and the architectural requirements as a whole.
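
    A minimal sketch of what a thing-architecture feature can look like (hypothetical names and safety rule, not the Atlas implementation): a device advertises its capabilities and vets every interaction before acting, which is the flavor of safe, meaningful interaction the thesis calls IoTility:

        # Hypothetical health-thing sketch: capability discovery plus a
        # simple safety gate on interactions (the rate limit is assumed).
        from dataclasses import dataclass, field

        @dataclass
        class HealthThing:
            name: str
            capabilities: set[str] = field(default_factory=set)
            max_calls_per_min: int = 6   # assumed safety limit
            _calls: int = 0

            def interact(self, capability: str) -> str:
                if capability not in self.capabilities:
                    return f"{self.name}: unsupported capability {capability!r}"
                if self._calls >= self.max_calls_per_min:
                    return f"{self.name}: refused (safety rate limit)"
                self._calls += 1
                return f"{self.name}: {capability} ok"

        cuff = HealthThing("bp-cuff", {"read_blood_pressure"})
        print(cuff.interact("read_blood_pressure"))
        print(cuff.interact("inflate"))   # rejected: not an advertised capability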

    The Four-C Framework for High Capacity Ultra-Low Latency in 5G Networks: A Review

    Network latency will be a critical performance metric for the fifth-generation (5G) networks expected to be fully rolled out in 2020 through the IMT-2020 project. Multi-user multiple-input multiple-output (MU-MIMO) technology is a key enabler for the 5G massive-connectivity criterion, especially from the massive-densification perspective. 5G MU-MIMO will face a daunting task in achieving an end-to-end 1 ms ultra-low latency budget if traditional network set-up criteria are strictly adhered to. Moreover, 5G latency has the added dimensions of scalability and flexibility compared to previously deployed technologies: the scalability dimension caters for meeting rapid demand as new applications evolve, while the flexibility dimension complements it by investigating novel non-stacked protocol architectures. The goal of this review paper is to present an ultra-low-latency reduction framework for 5G communications that considers flexibility and scalability. The Four-C framework, consisting of cost, complexity, cross-layer, and computing, is analyzed and discussed, drawing on several emerging technologies: software-defined networking (SDN), network function virtualization (NFV), and fog networking. This review will contribute towards the future implementation of flexible, high-capacity, ultra-low-latency 5G communications.
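
    For concreteness, the 1 ms target is an end-to-end budget that must be split across all delay contributions; an illustrative (non-standardized) decomposition with assumed example values is

        t_e2e = t_tx + t_prop + t_proc + t_queue + t_retx ≤ 1 ms

    so that, for example, 0.2 ms of transmission time, 0.1 ms of propagation, and 0.3 ms of processing leave only 0.4 ms for queuing and retransmissions, which is why every component in the Four-C framework has to shave its share of the budget.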

    Orchestration of distributed ingestion and processing of IoT data for fog platforms

    In recent years there has been extraordinary growth of the Internet of Things (IoT) and its protocols. The increasing diffusion of electronic devices with identification, computing, and communication capabilities is laying the ground for the emergence of a highly distributed service and networking environment, which implies an increasing demand for advanced IoT data management and processing platforms. Such platforms require support for multiple protocols at the edge for extended connectivity with objects, but they also need to exhibit uniform internal data organization and advanced data processing capabilities to fulfill the demands of the applications and services that consume IoT data. One of the initial approaches to address this demand is the integration of the IoT with the Cloud computing paradigm, which has many benefits: the IoT generates massive amounts of data, and Cloud computing provides a pathway for that data to travel to its destination. But today's Cloud computing models do not quite fit the volume, variety, and velocity of data that the IoT generates. Among the new technologies emerging around the IoT, the Fog computing paradigm has become the most relevant. Fog computing was introduced a few years ago in response to challenges posed by many IoT applications, including requirements such as very low latency, real-time operation, large geo-distribution, and mobility. Such low-latency, geo-distributed, mobile environments are also covered by the Mobile Edge Computing (MEC) network architecture, which provides an IT service environment and Cloud computing capabilities at the edge of the mobile network, within the Radio Access Network (RAN) and in close proximity to mobile subscribers. Fog computing addresses use cases with requirements far beyond the capabilities of Cloud-only solutions. The interplay between Cloud and Fog computing is crucial for the evolution of the IoT, but the reach and specification of that interplay is an open problem. This thesis aims to find the right techniques and design decisions to build a scalable distributed system for the IoT under the Fog computing paradigm to ingest and process data. The final goal is to explore the trade-offs and challenges in the design of an Edge-to-Cloud solution that addresses, in an integrated way, the opportunities that current and future technologies will bring. The thesis describes an architectural approach to some of the technical challenges behind the convergence of IoT, Cloud, and Fog, with special focus on bridging the gap between Cloud and Fog, and introduces new models and techniques to explore solutions for IoT environments. It contributes to architectural proposals for IoT ingestion and data processing by 1) characterizing a platform for hosting IoT workloads in the Cloud that provides multi-tenant data stream processing capabilities and interfaces over an advanced data-centric technology, including the building of a state-of-the-art infrastructure to evaluate performance and validate the proposed solution; 2) studying an architectural approach following the Fog paradigm that addresses some of the technical challenges found in the first contribution, extending the model to the central challenges behind the convergence of Fog and IoT; and 3) designing a distributed and scalable platform to perform IoT operations in a moving-data environment, defining the protocols, interfaces, and data management needed to ingest and process data in a distributed and orchestrated manner under the Fog computing paradigm.
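
    As a toy sketch of the ingestion pattern described above (invented names; the thesis defines its own protocols and interfaces), a fog node can accept readings arriving over any edge protocol, normalize them into one internal record format, and forward them to the cloud in batches:

        # Toy fog-ingestion sketch: uniform internal records, batched uplink.
        import json, time
        from collections import deque

        class FogNode:
            def __init__(self, batch_size: int = 3):
                self.batch_size = batch_size
                self.buffer: deque = deque()

            def ingest(self, device_id: str, value: float) -> None:
                # Same internal shape, whatever edge protocol delivered it.
                self.buffer.append({"dev": device_id, "v": value, "ts": time.time()})
                if len(self.buffer) >= self.batch_size:
                    self.flush()

            def flush(self) -> None:
                batch = [self.buffer.popleft() for _ in range(len(self.buffer))]
                print("to cloud:", json.dumps(batch))  # stand-in for the uplink

        node = FogNode()
        for i, v in enumerate([20.1, 20.3, 20.2]):
            node.ingest(f"sensor-{i % 2}", v)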

    The Perception/Action loop: A Study on the Bandwidth of Human Perception and on Natural Human Computer Interaction for Immersive Virtual Reality Applications

    Virtual Reality (VR) is an innovative technology which, in the last decade, has had widespread success, mainly thanks to the release of low-cost devices that have contributed to the diversification of its domains of application. The current work focuses on the general mechanisms underlying the perception/action loop in VR, in order to improve the design and implementation of applications for training and simulation in immersive VR, especially in the context of Industry 4.0 and the medical field. On the one hand, we want to understand how humans gather and process the information presented in a virtual environment, through an evaluation of the visual system's bandwidth. On the other hand, since the interface has to be a transparent layer allowing trainees to accomplish a task without directing any cognitive effort at the interaction itself, we compare two state-of-the-art solutions for selection and manipulation tasks: a touch-based one, the HTC Vive controllers, and a touchless vision-based one, the Leap Motion. To this aim we developed ad hoc frameworks and methodologies. The software frameworks consist of VR scenarios in which the experimenter can choose the interaction modality and the headset to be used and can set experimental parameters, guaranteeing repeatable experiments under controlled conditions. The methodology includes the evaluation of performance, user experience, and preferences, considering both quantitative and qualitative metrics derived from heterogeneous data such as physiological and inertial sensor measurements, timing, and self-assessment questionnaires. In general, VR was found to be a powerful tool able to simulate specific situations in a realistic and involving way, eliciting the user's sense of presence without causing severe cybersickness, at least when interaction is limited to the peripersonal and near-action space. Moreover, when designing a VR application, it is possible to manipulate its features in order to trigger, or avoid triggering, specific emotions and to voluntarily create potentially stressful or relaxing situations. Considering the ability of trainees to perceive and process information presented in an immersive virtual environment, results show that, when people are given enough time to build a gist of the scene, they are able to recognize a change with 0.75 accuracy when up to 8 elements are in the scene. For interaction, when selection and manipulation tasks do not require fine movements, the controllers and the Leap Motion ensure comparable performance; when tasks are complex, the former turns out to be more stable and efficient, also because the visual and audio feedback provided as a substitute for haptic feedback does not substantially improve performance in the touchless case.
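
    To put the 0.75-accuracy figure in perspective, a common capacity estimate in change-detection paradigms is Cowan's K (an illustrative reading only; the thesis's own analysis may differ, and equal hit and correct-rejection rates are assumed here):

        K = N(H + CR - 1) = 8(0.75 + 0.75 - 1) = 4

    i.e., with N = 8 scene elements and hit rate H and correct-rejection rate CR both taken as 0.75, roughly 4 elements' worth of information would be retained across the change.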

    Digital-based analog processing in nanoscale CMOS ICs for IoT applications

    The Internet-of-Things (IoT) concept has been opening up a variety of applications, such as urban and environmental monitoring, smart health, surveillance, and home automation. Most of these IoT applications require ever more power- and area-efficient Complementary Metal-Oxide-Semiconductor (CMOS) systems and faster prototypes (lower time-to-market), demanding special modifications in the current bottleneck of IoT system design: the analog/RF interfaces. Especially after the 2000s, it is evident that there have been significant improvements in CMOS digital circuits compared to analog building blocks. Digital circuits have been taking advantage of CMOS technology scaling in terms of speed, power consumption, and cost, while the techniques behind analog signal processing still lag. To close this historical gap, there has been an increasing trend of finding alternative IC design strategies that implement typical analog functions by exploiting Digital-in-Concept Design Methodologies (DCDM). This idea of re-thinking analog functions in digital terms has shown that analog IC blocks can also avail of the feature-size shrinking and energy efficiency of new technologies. This thesis deals with the development of DCDM, demonstrating its compatibility with Ultra-Low-Voltage (ULV) and Ultra-Low-Power (ULP) IoT applications. The work proves this statement by proposing new digital-based analog blocks, such as Operational Transconductance Amplifiers (OTAs) and an AC-coupled bio-signal amplifier (BioAmp). As an initial contribution, a silicon demonstration of an embryonic Digital-Based OTA (DB-OTA), first published in 2013, is exhibited for the first time. The fabricated DB-OTA test chip occupies a compact area of 1,426 µm², operates at supply voltages (VDD) down to 300 mV, and consumes only 590 pW while driving a capacitive load of 80 pF. With a Total Harmonic Distortion (THD) lower than 5% for a 100 mV input signal swing, its measured small-signal and large-signal figures of merit (FOMS and FOML) are 2,101 V⁻¹ and 1,070, respectively. To the best of the author's knowledge, this measured power is the lowest reported to date in the OTA literature, and its figures of merit are the best among sub-500 mV OTAs reported to date. As a second step, mainly due to the robustness limitations of the DB-OTA, a novel calibration-free digital-based topology is proposed, named here the Digital OTA (DIGOTA). A 180 nm DIGOTA test chip is also developed, exhibiting an area below the 1,000 µm² wall, 2.4 nW power under a 150 pF load, and a minimum VDD of 0.25 V. The proposed DIGOTA is more digital-like than the DB-OTA since no pseudo-resistor is needed. As the last contribution, the DIGOTA is used as a building block to demonstrate the operating principle of a power-efficient, ULV, ultra-low-area (ULA), fully differential, digital-based OTA suitable for microscale biosensing applications (BioDIGOTA), such as extremely low-area Body Dust. Measured results in 180 nm CMOS confirm that the proposed BioDIGOTA can work with a supply voltage down to 400 mV, consuming only 95 nW. The BioDIGOTA layout occupies only 0.022 mm² of total silicon area, lowering the area by 3.22× compared to the current state of the art while keeping reasonable system performance, such as a 7.6 Noise Efficiency Factor (NEF) with 1.25 µVRMS input-referred noise over a 10 Hz bandwidth, 1.8% THD, 62 dB common-mode rejection ratio (CMRR), and 55 dB power supply rejection ratio (PSRR). After reviewing the current DCDM trend and all proposed silicon demonstrations, the thesis concludes that, despite the current analog design strategies involved during the analog block development
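
    For reference, the figures of merit quoted above follow the definitions conventional in the OTA literature (these are assumed from the reported units of V⁻¹ and a dimensionless value; the thesis may normalize differently):

        FOM_S = (GBW · C_L) / I_DD        [V⁻¹]
        FOM_L = (SR · C_L) / I_DD         [dimensionless]
        NEF   = V_rms,in · sqrt( 2·I_tot / (π · U_T · 4kT · BW) )

    where GBW is the gain-bandwidth product, SR the slew rate, C_L the load capacitance, I_DD (I_tot) the total supply current, U_T the thermal voltage, and BW the noise bandwidth.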

    Architecture of participation: the realization of the Semantic Web, and Internet OS

    Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, February 2008. Includes bibliographical references (p. 65-68). The Internet and World Wide Web (WWW) are becoming an integral part of our daily life, touching every part of society around the world, in well-developed and developing countries alike. The simple technology and genuine intention of the original WWW, which was to help researchers share and exchange information and data across incompatible platforms and systems, have evolved into something larger, beyond what one could have conceived. While the WWW has reached critical mass, many limitations have been uncovered. To address these limitations, the development of its extension, the Semantic Web, has been underway for more than five years, led by the inventor of the WWW, Tim Berners-Lee, and the technical community. Yet no significant impact has been made, and public awareness of it is surprisingly and unfortunately low. This thesis reviews the development effort of the Semantic Web, examines its progress, which appears to lag behind that of the WWW, and proposes a promising business model to accelerate its adoption path. by Shelley Lau. S.M.
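
    To make the Semantic Web concrete: it models data as machine-readable subject-predicate-object triples identified by URIs. A minimal example using the Python rdflib library (the graph contents and namespace are invented for illustration):

        # Build a tiny RDF graph: the Semantic Web's core data model is
        # subject-predicate-object triples with globally unique URIs.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import FOAF, RDF

        EX = Namespace("http://example.org/")  # hypothetical namespace
        g = Graph()
        g.bind("foaf", FOAF)

        g.add((EX.timbl, RDF.type, FOAF.Person))  # ex:timbl is a foaf:Person
        g.add((EX.timbl, FOAF.name, Literal("Tim Berners-Lee")))

        # Serialize to Turtle, a human-readable RDF syntax.
        print(g.serialize(format="turtle"))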