
    Runtime resource management for vision-based applications in mobile robots

    Computer-vision (CV) applications are an important part of mobile robot automation, analyzing the raw data perceived by vision sensors and providing rich information about the surrounding environment. Designing a high-speed and energy-efficient CV application for a resource-constrained mobile robot, while maintaining a targeted level of computational accuracy, is a challenging task, because such applications demand substantial resources, e.g. computing capacity and battery energy, to run seamlessly in real time. Moreover, there is always a trade-off between accuracy, performance and energy consumption, as these factors dynamically affect each other at runtime. In this thesis, we investigate novel runtime resource management approaches to improve the performance and energy efficiency of vision-based applications in mobile robots. Due to the dynamic correlation between different management objectives, such as energy consumption and execution time, both environmental and computational observations need to be updated dynamically, and the actuators are manipulated at runtime based on these observations. Algorithmic and computational parameters of a CV application (output accuracy and CPU voltage/frequency) are adjusted by measuring the key factors associated with the intensity of computations and the strain on CPUs (environmental complexity and instantaneous power). Furthermore, we show how mechanical characteristics of the robot, i.e. its speed of movement in this thesis, can affect the computational behaviour. Based on this investigation, we add the speed of the robot as an actuator to our resource management algorithm, alongside the computational knobs already considered (output accuracy and CPU voltage/frequency). To evaluate the proposed approach, we perform several experiments on an unmanned ground vehicle equipped with an embedded computer board, using RGB and event cameras as the vision sensors for the CV applications. The obtained results show that the presented management strategy improves the performance and accuracy of vision-based applications while significantly reducing energy consumption compared with state-of-the-art solutions. Moreover, we demonstrate that simultaneously considering both computational and mechanical aspects in the management of CV applications running on mobile robots significantly reduces energy consumption compared with similar methods that consider these two aspects separately, oblivious to each other's outcome.
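
    To make the management loop described in this abstract concrete, the following is a minimal sketch of such a runtime manager under a simple threshold policy; the thesis drives its decisions from measured environmental complexity and instantaneous power in a more elaborate way, and all names, thresholds and knob ranges below are hypothetical.

        # Minimal sketch (not the thesis implementation) of the runtime control loop
        # described above: it trades off accuracy, latency and power by adjusting two
        # computational knobs (CPU voltage/frequency via DVFS, output accuracy) and one
        # mechanical knob (robot speed). All names, thresholds and ranges are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Observation:
            power_w: float          # instantaneous CPU power draw (watts)
            frame_time_s: float     # processing time of the last CV frame (seconds)
            complexity: float       # environmental complexity estimate in [0, 1]

        @dataclass
        class Knobs:
            cpu_freq_hz: float      # DVFS setting
            accuracy_level: int     # e.g. input resolution / model-size index, 0..3
            robot_speed_mps: float  # commanded robot speed

        def manage(obs: Observation, knobs: Knobs,
                   power_budget_w: float = 5.0, deadline_s: float = 0.1) -> Knobs:
            """One iteration of the runtime resource manager."""
            if obs.frame_time_s > deadline_s:
                # Missing the frame deadline: raise CPU frequency first, then shed accuracy.
                if knobs.cpu_freq_hz < 2.0e9:
                    knobs.cpu_freq_hz = min(2.0e9, knobs.cpu_freq_hz * 1.2)
                else:
                    knobs.accuracy_level = max(0, knobs.accuracy_level - 1)
            elif obs.power_w > power_budget_w:
                # Over the power budget: lower CPU frequency.
                knobs.cpu_freq_hz = max(0.5e9, knobs.cpu_freq_hz * 0.8)
            else:
                # Slack available: spend it on accuracy.
                knobs.accuracy_level = min(3, knobs.accuracy_level + 1)
            # Couple the mechanical actuator to scene complexity: slow down when the
            # scene is demanding so the CV pipeline keeps up, speed up when it is easy.
            knobs.robot_speed_mps = 0.3 + 1.5 * (1.0 - obs.complexity)
            return knobs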

    Night vision obstacle detection and avoidance based on Bio-Inspired Vision Sensors

    Moving towards autonomy, unmanned vehicles rely heavily on state-of-the-art collision avoidance systems (CAS). However, detecting obstacles at night remains a challenging task, since the lighting conditions are not sufficient for traditional cameras to function properly. We therefore exploit the powerful attributes of event-based cameras to perform obstacle detection in low-light conditions. Event cameras trigger events asynchronously at a high temporal output rate, with a high dynamic range of up to 120 dB. The algorithm filters background activity noise and extracts objects using a robust Hough transform technique. The depth of each detected object is computed by triangulating 2D features extracted using LC-Harris. Finally, an asynchronous adaptive collision avoidance (AACA) algorithm is applied for effective avoidance. A qualitative evaluation compares the results obtained with the event camera and a traditional camera.
    Comment: Accepted to IEEE SENSORS 202
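
    As an illustration of the background-activity filtering step mentioned above, the sketch below implements a common nearest-neighbour event denoising scheme (an event is kept only if a neighbouring pixel fired recently). It is not necessarily the paper's exact filter; the event layout and the time window are assumptions.

        # A minimal background-activity filter for event streams: an event is kept only
        # if one of its 8 neighbouring pixels fired within the last dt_us microseconds.
        # This is a common denoising scheme, not necessarily the paper's exact filter;
        # the event layout (x, y, t_us, polarity) and the 2 ms window are assumptions.
        import numpy as np

        def background_activity_filter(events, width, height, dt_us=2000):
            """Return the subset of time-ordered events supported by a recent neighbour."""
            last_ts = np.full((height, width), -np.inf)  # most recent event time per pixel
            kept = []
            for x, y, t, p in events:
                x0, x1 = max(0, x - 1), min(width, x + 2)
                y0, y1 = max(0, y - 1), min(height, y + 2)
                support = last_ts[y0:y1, x0:x1].copy()
                support[y - y0, x - x0] = -np.inf       # ignore the pixel's own history
                if (t - support).min() <= dt_us:        # at least one recent neighbour
                    kept.append((x, y, t, p))
                last_ts[y, x] = t
            return kept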

    Event-based pedestrian detection using dynamic vision sensors

    Pedestrian detection has attracted great research attention in video surveillance, traffic statistics, and especially autonomous driving. To date, almost all pedestrian detection solutions are derived from conventional frame-based image sensors, which have limited reaction speed and high data redundancy. The dynamic vision sensor (DVS), inspired by biological retinas, efficiently captures visual information as sparse, asynchronous events rather than dense, synchronous frames. It can eliminate redundant data transmission and avoid motion blur or data leakage in high-speed imaging applications. However, it is usually impractical to feed the event streams directly into conventional object detection algorithms. To address this issue, we first propose a novel event-to-frame conversion method that integrates the inherent characteristics of events more efficiently. Moreover, we design an improved feature extraction network that can reuse intermediate features to further reduce the computational effort. We evaluate the performance of our proposed method on a custom dataset containing multiple real-world pedestrian scenes. The results indicate that our proposed method improves pedestrian detection accuracy by about 5.6–10.8%, and its detection speed is nearly 20% faster than previously reported methods. Furthermore, it achieves a processing speed of about 26 FPS and an AP of 87.43% when implemented on a single CPU, fully meeting the requirements of real-time detection.
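
    The event-to-frame conversion idea can be illustrated with a simple accumulation scheme such as the one below; the paper's method integrates further event characteristics, so this is only a minimal stand-in, and the window length and two-channel layout are assumptions.

        # Minimal event-to-frame conversion: accumulate one time window of DVS events
        # into a two-channel count image (ON and OFF polarities) that a frame-based
        # detector can consume. The paper's conversion integrates further event
        # characteristics; the 33 ms window and channel layout here are assumptions.
        import numpy as np

        def events_to_frame(events, width, height, window_us=33000):
            """Build a (2, height, width) frame from (x, y, t_us, polarity) events."""
            frame = np.zeros((2, height, width), dtype=np.float32)
            if len(events) == 0:
                return frame
            t_end = events[-1][2]                 # events are assumed time-ordered
            for x, y, t, p in events:
                if t_end - t > window_us:
                    continue                      # outside the accumulation window
                channel = 0 if p > 0 else 1       # ON events vs. OFF events
                frame[channel, y, x] += 1.0
            peak = frame.max()
            if peak > 0:
                frame /= peak                     # normalise so exposure is event-rate independent
            return frame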

    2020 IEEE Sensors


    5Growth: An end-to-end service platform for automated deployment and management of vertical services over 5G networks

    This article introduces the key innovations of the 5Growth service platform to empower vertical industries with an AI-driven automated 5G end-to-end slicing solution that allows industries to achieve their service requirements. Specifically, we present multiple vertical pilots (Industry 4.0, transportation, and energy), identify the key 5G requirements to enable them, and analyze existing technical and functional gaps with respect to current solutions. Based on the identified gaps, we propose a set of innovations to address them with: (i) support of 3GPP-based RAN slices by introducing a RAN slicing model and providing automated RAN orchestration and control; (ii) an AI-driven closed loop for automated service management with service-level agreement assurance; and (iii) multi-domain solutions to expand service offerings by aggregating services and resources from different provider domains and also enable the integration of private 5G networks with public networks.
    This work has been partially supported by the EC H2020 5GPPP 5Growth project (Grant 856709).
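
    To illustrate the shape of an SLA-assurance closed loop such as innovation (ii) above, here is a minimal monitor-decide-act sketch; the 5Growth platform uses AI-driven decision logic and its own orchestration interfaces rather than the fixed threshold assumed here, and all names, signatures and values are hypothetical.

        # Minimal monitor-decide-act sketch of an SLA-assurance closed loop. The 5Growth
        # platform uses AI-driven decision logic and its own orchestration interfaces;
        # the threshold policy, callback signatures and slice name below are hypothetical.
        from typing import Callable

        def sla_assurance_step(slice_id: str,
                               read_latency_ms: Callable[[str], float],
                               scale_out: Callable[[str], None],
                               sla_latency_ms: float = 10.0) -> bool:
            """One loop iteration; returns True if a scaling action was requested."""
            measured = read_latency_ms(slice_id)     # monitor the slice KPI
            if measured > 0.9 * sla_latency_ms:      # decide before the SLA is breached
                scale_out(slice_id)                  # actuate via the orchestrator
                return True
            return False

        # Example wiring with stand-in telemetry and orchestration callbacks:
        if __name__ == "__main__":
            acted = sla_assurance_step(
                "urllc-slice-1",
                read_latency_ms=lambda s: 9.4,
                scale_out=lambda s: print(f"scale out requested for {s}"),
            )
            print("scaling action fired:", acted)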

    New Waves of IoT Technologies Research – Transcending Intelligence and Senses at the Edge to Create Multi Experience Environments

    The next wave of Internet of Things (IoT) and Industrial Internet of Things (IIoT) brings new technological developments that incorporate radical advances in Artificial Intelligence (AI), edge computing, new sensing capabilities, stronger security protection and autonomous functions, accelerating progress towards the ability of IoT systems to self-develop, self-maintain and self-optimise. The emergence of hyper-autonomous IoT applications with enhanced sensing, distributed intelligence, edge processing and connectivity, combined with human augmentation, has the potential to power the transformation and optimisation of industrial sectors and to change the innovation landscape. This chapter reviews the most recent advances in the next wave of the IoT, looking not only at the technology enabling the IoT but also at the platform and smart-data aspects that will bring intelligence, sustainability, dependability and autonomy, and will support human-centric solutions.

    Single-shot ultrafast optical imaging

    Single-shot ultrafast optical imaging can capture two-dimensional transient scenes in the optical spectral range at ≥100 million frames per second. This rapidly evolving field surpasses conventional pump-probe methods by offering real-time imaging capability, which is indispensable for recording non-repeatable and difficult-to-reproduce events and for understanding physical, chemical, and biological mechanisms. In this mini-review, we provide a comprehensive survey of state-of-the-art single-shot ultrafast optical imaging. Based on the illumination requirement, we categorize the field into active-detection and passive-detection domains. Depending on the specific image acquisition and reconstruction strategies, these two categories are further divided into a total of six subcategories. For each subcategory, we describe the operating principles, present representative cutting-edge techniques with a particular emphasis on their methodology and applications, and discuss their advantages and challenges. Finally, we envision prospects for further technical advancement in this field.

    Methods for Massive, Reliable, and Timely Access for Wireless Internet of Things (IoT)
