4,779 research outputs found

    Synergizing Roadway Infrastructure Investment with Digital Infrastructure for Infrastructure-Based Connected Vehicle Applications: Review of Current Status and Future Directions

    The safety, mobility, environmental and economic benefits of Connected and Autonomous Vehicles (CAVs) are potentially dramatic. However, realizing these benefits largely hinges on the timely upgrading of the existing transportation system. CAVs must be able to send and receive data to and from other vehicles and drivers (V2V communication) and to and from infrastructure (V2I communication). Further, infrastructure and the transportation agencies that manage it must be able to collect, process, distribute and archive these data quickly, reliably, and securely. This paper focuses on current digital roadway infrastructure initiatives and highlights the importance of including digital infrastructure investment alongside more traditional infrastructure investment to keep up with the auto industry's push towards real-time communication and data processing capability. Agencies responsible for transportation infrastructure construction and management must collaborate, establishing national and international platforms to guide the planning, deployment and management of digital infrastructure in their jurisdictions. This will help create standardized, interoperable national and international systems so that CAV technology is not deployed in a haphazard and uncoordinated manner.

    A sub-mW IoT-endnode for always-on visual monitoring and smart triggering

    This work presents a fully-programmable Internet of Things (IoT) visual sensing node that targets sub-mW power consumption in always-on monitoring scenarios. The system features a spatial-contrast 128x64 binary pixel imager with focal-plane processing. The sensor, when working in its lowest power mode (10 μW at 10 fps), provides as output the number of changed pixels. Based on this information, a dedicated camera interface, implemented on a low-power FPGA, wakes up an ultra-low-power parallel processing unit to extract context-aware visual information. We evaluate the smart sensor on three always-on visual triggering application scenarios. Triggering accuracy comparable to that of RGB image sensors is achieved at nominal lighting conditions, while consuming an average power between 193 μW and 277 μW, depending on context activity. The digital sub-system is extremely flexible, thanks to a fully-programmable digital signal processing engine, yet still achieves 19x lower power consumption than MCU-based cameras with significantly lower on-board computing capabilities. Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal.
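
    The triggering pipeline described above is essentially a two-stage wake-up chain: the imager's changed-pixel count gates whether the more power-hungry processing unit runs at all. Below is a minimal Python sketch of that policy under stated assumptions: `read_changed_pixels`, `wake_processing_unit`, and the threshold value are hypothetical stand-ins for the paper's FPGA camera interface, not its actual API.

    ```python
    import time

    WAKE_THRESHOLD = 50   # assumed changed-pixel count that signals scene activity
    FRAME_PERIOD_S = 0.1  # 10 fps, the imager's lowest power mode

    def read_changed_pixels() -> int:
        """Poll the spatial-contrast imager for the number of pixels that
        changed since the previous frame (its focal-plane output)."""
        raise NotImplementedError  # hardware-specific hook (hypothetical)

    def wake_processing_unit(activity: int) -> None:
        """Wake the ultra-low-power parallel unit to extract visual context."""
        raise NotImplementedError  # hardware-specific hook (hypothetical)

    def monitoring_loop() -> None:
        """Always-on loop: the power-hungry digital sub-system runs only
        when the changed-pixel count crosses the wake-up threshold."""
        while True:
            activity = read_changed_pixels()
            if activity >= WAKE_THRESHOLD:
                wake_processing_unit(activity)
            time.sleep(FRAME_PERIOD_S)
    ```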

    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, covering the period 2002-2013, of machine learning methods used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.
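
    One recurring pattern in this literature is trading a little computation for radio energy. As a hedged illustration (not any specific surveyed algorithm), the sketch below implements a minimal "dual prediction" scheme: node and sink both predict "last transmitted value", so the node transmits only when a reading drifts past a tolerance. The function name and tolerance are assumptions.

    ```python
    def dual_prediction_stream(readings, tol=0.2):
        """Yield only the readings a sensor node would actually transmit.

        Node and sink share the trivial predictor "last transmitted value",
        so both sides stay synchronized without any extra messages."""
        last_sent = None
        for value in readings:
            if last_sent is None or abs(value - last_sent) > tol:
                last_sent = value
                yield value  # prediction diverged: spend radio energy, resync

    # Example: only two of these four temperature samples need the radio.
    assert list(dual_prediction_stream([20.0, 20.1, 20.1, 21.0])) == [20.0, 21.0]
    ```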

    A Wearable Platform for Patient Monitoring during Mass Casualty Incidents

    Based on physiological data, intelligent algorithms can assist with the classification and recognition of the most severely impaired victims. This dissertation presents a new sensor-based triage platform whose main goal is to combine different sensor and communication technologies into a portable device. This new device must be able to assist rescue units with the tactical planning of the operation. It also discusses the implementation and the evaluation of the platform.
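
    The abstract does not spell out the classification rules, so as a concrete stand-in the sketch below encodes the widely used START triage protocol, which maps a few physiological observations to victim categories. The thresholds follow START, not necessarily the platform described here.

    ```python
    def start_triage(walking: bool, breathing: bool, resp_rate: int,
                     radial_pulse: bool, obeys_commands: bool) -> str:
        """Return a START triage tag from basic physiological observations."""
        if walking:
            return "green"   # minor: victim is ambulatory
        if not breathing:
            return "black"   # expectant (no breathing after airway repositioning)
        if resp_rate > 30:
            return "red"     # immediate: respiratory distress
        if not radial_pulse:
            return "red"     # immediate: poor perfusion
        if not obeys_commands:
            return "red"     # immediate: altered mental status
        return "yellow"      # delayed
    ```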

    A fine-grain time-sharing Time Warp system

    Get PDF
    Although Parallel Discrete Event Simulation (PDES) platforms relying on the Time Warp (optimistic) synchronization protocol already allow for exploiting parallelism, several techniques have been proposed to further improve performance. Among them we can mention optimized approaches for state restore, as well as techniques for load balancing or (dynamically) controlling the speculation degree, the latter being specifically targeted at reducing the incidence of causality errors that lead to wasted computation. However, in state-of-the-art Time Warp systems, event processing is not preemptable, which can prevent the platform from reacting promptly to the injection of higher-priority (i.e., lower-timestamp) events. Delaying the processing of these events may, in turn, increase the incidence of incorrect speculation. In this article we present the design and realization of a fine-grain time-sharing Time Warp system, to be run on multi-core Linux machines, which makes systematic use of event preemption in order to dynamically reassign the CPU to higher-priority events/tasks. Our proposal is based on a truly dual-mode execution, application vs. platform, which includes timer-interrupt-based support for bringing control back to platform mode for possible CPU reassignment at very fine-grained intervals. The latter facility is offered by an ad-hoc timer-interrupt management module for Linux, which we release, together with the overall time-sharing support, within the open-source ROOT-Sim platform. An experimental assessment based on the classical PHOLD benchmark and two real-world models is presented, which shows how our proposal effectively reduces the incidence of causality errors compared to traditional Time Warp, especially when running at higher degrees of parallelism.
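
    As a minimal sketch of the scheduling decision described above (an illustration of the policy, not ROOT-Sim's actual implementation), the class below models how a periodic timer tick returns control to platform mode, which preempts the running event whenever a lower-timestamp event has meanwhile been injected:

    ```python
    import heapq
    from itertools import count

    class FineGrainScheduler:
        """Toy model of the time-sharing policy: preempt the event on the
        CPU whenever a pending event carries an earlier timestamp."""

        def __init__(self):
            self.pending = []        # min-heap of (timestamp, tie, event)
            self.current_ts = None   # timestamp of the event being processed
            self._tie = count()      # tie-breaker so events are never compared

        def inject(self, timestamp, event):
            heapq.heappush(self.pending, (timestamp, next(self._tie), event))

        def should_preempt(self) -> bool:
            """Called on each timer tick from platform mode."""
            if self.current_ts is None or not self.pending:
                return False
            # Running the earlier event first reduces the chance that the
            # current speculative work is undone by a causality error.
            return self.pending[0][0] < self.current_ts
    ```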