8 research outputs found

    Trajectory Prediction with Event-Based Cameras for Robotics Applications

    This thesis presents the study, analysis, and implementation of a framework for trajectory prediction using an event-based camera for robotics applications. Event-based perception is a novel computation paradigm, based on unconventional sensing technology, that promises data acquisition, transmission, and processing at very low latency and power consumption, crucial for the future of robotics. An event-based camera, in particular, is a sensor that responds to light changes in the scene, producing an asynchronous and sparse output over a wide illumination dynamic range. Such cameras capture only relevant spatio-temporal information - mostly driven by motion - at high rate, avoiding the inherent redundancy in static areas of the field of view. For these reasons, the device is a potential key tool for robots that must operate in highly dynamic or rapidly changing scenarios, or where resource optimisation is fundamental, as in robots with on-board systems. Prediction skills are something humans rely on daily - even unconsciously - for instance when driving, playing sports, or collaborating with other people. In the same way, predicting the trajectory or end-point of a moving target allows a robot to plan appropriate actions and their timing in advance and to interact with the target in many different ways. Moreover, prediction helps compensate for a robot's internal delays in the perception-action chain, due for instance to limited sensors and/or actuators. The question I addressed in this work is whether event-based cameras are advantageous for trajectory prediction in robotics; in particular, whether the classical deep learning architectures used for this task can accommodate event-based data while working asynchronously, and what benefits they bring with respect to standard cameras.
The a priori hypothesis is that, since the sampling of the scene is driven by motion, such a device allows more meaningful information acquisition, improving prediction accuracy while processing data only when needed - without information loss or redundant acquisition. To test the hypothesis, experiments are mostly carried out using the neuromorphic iCub, a custom version of the iCub humanoid platform that mounts two event-based cameras in the eyeballs, along with standard RGB cameras. To further motivate the work on iCub, a preliminary step is the evaluation of the robot's internal delays, delays that the prediction should compensate for in order to interact in real time with the perceived object. The first part of this thesis covers the implementation of the event-based framework for prediction, answering the question of whether Long Short-Term Memory (LSTM) neural networks, the architecture used in this work, can be combined with event-based cameras. The task considered is a handover human-robot interaction, during which the trajectory of the object in the human's hand must be inferred. Results show that the proposed pipeline can predict both the spatial and temporal coordinates of the incoming trajectory with higher accuracy than model-based regression methods; moreover, it exhibits fast recovery from failure cases and adaptive prediction-horizon behavior. Subsequently, I investigated how advantageous the event-based sampling approach is with respect to the classical fixed-rate approach. The test case is the trajectory prediction of a bouncing ball, implemented with the pipeline introduced above. A comparison between the two sampling methods is analysed in terms of error at different working rates, showing how the spatial sampling of the event-based approach achieves lower error and also adapts the computational load dynamically, depending on the motion in the scene.
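The adaptive-load argument can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (not the thesis's actual pipeline): an idealised 1-D bouncing ball is sampled either at a fixed rate or only when its position changes by more than a threshold, a crude stand-in for per-pixel brightness events.

```python
import math

G, H0 = 9.81, 1.0                      # gravity, bounce apex height
PERIOD = 2 * math.sqrt(2 * H0 / G)     # time between bounces

def ball_height(t):
    """Height of an idealised 1-D bouncing ball (no energy loss)."""
    tau = t % PERIOD                   # time since the last bounce
    return H0 - 0.5 * G * (tau - PERIOD / 2) ** 2

def fixed_rate_samples(duration, rate):
    """Classical sampling: one sample per tick, regardless of motion."""
    n = int(duration * rate)
    return [(i / rate, ball_height(i / rate)) for i in range(n)]

def event_samples(duration, dt, threshold):
    """Event-like sampling: emit a sample only when the position has
    moved more than `threshold` since the last emitted sample."""
    samples, last = [], ball_height(0.0)
    steps = int(duration / dt)
    for i in range(1, steps):
        h = ball_height(i * dt)
        if abs(h - last) > threshold:
            samples.append((i * dt, h))
            last = h
    return samples

fixed = fixed_rate_samples(2.0, rate=100)            # always 200 samples
events = event_samples(2.0, dt=0.001, threshold=0.05)
print(len(fixed), len(events))
```

With these illustrative parameters the event stream produces fewer samples overall, and they cluster where the ball moves fastest (near the bounce), while the fixed-rate stream spends the same budget everywhere.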
Results from both works show that combining event-based data with Long Short-Term Memory networks is promising for spatio-temporal feature prediction in highly dynamic tasks, and they pave the way for further studies on the temporal aspect and for a wide range of applications, not only robotics-related. Ongoing work is now focusing on the robot control side, finding the best way to exploit the spatio-temporal information provided by the predictor and defining the optimal robot behavior. Future work will see the shift of the full pipeline - prediction and robot control - to a spiking implementation. First steps in this direction have already been made thanks to a collaboration with a group from the University of Zurich, with whom I propose a closed-loop motor controller implemented on a mixed-signal analog/digital neuromorphic processor, emulating a classical PID controller by means of spiking neural networks.
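For reference, the classical PID controller that the spiking implementation emulates can be written compactly. The gains, plant, and time step below are arbitrary illustrative choices, not those of the neuromorphic controller:

```python
class PID:
    """Minimal discrete PID controller (illustrative gains only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt            # I: accumulated error
        derivative = (error - self.prev_error) / self.dt  # D: error slope
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant (x' = u) toward a setpoint of 1.0.
dt = 0.01
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
x = 0.0
for _ in range(3000):                  # 30 s of simulated time
    u = pid.step(1.0, x)
    x += u * dt
print(round(x, 3))
```

The spiking version replaces the three arithmetic terms with populations of spiking neurons whose firing encodes the error, its integral, and its derivative.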

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are tightly integrated into our everyday lives. Many sensor processing algorithms incorporate some form of computational intelligence as part of their core problem-solving framework. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.

    Self-aware reliable monitoring

    Cyber-Physical Systems (CPSs) can be found in almost all technical areas, where they constitute a key enabler for anticipated autonomous machines and devices. They are used in a wide range of applications such as autonomous driving, traffic control, manufacturing plants, telecommunication systems, smart grids, and portable health monitoring systems. CPSs face steadily increasing requirements such as autonomy, adaptability, reliability, robustness, efficiency, and performance. A CPS needs comprehensive knowledge about itself and its environment to meet these requirements, as well as to make rational, well-informed decisions, manage its objectives in a sophisticated way, and adapt to a possibly changing environment. To gain such comprehensive knowledge, a CPS must monitor itself and its environment. However, the data obtained during this process come from physical properties measured by sensors and may differ from the ground truth. Sensors are neither completely accurate nor precise; even if they were, they could still be used incorrectly or break during operation. Moreover, not all characteristics of physical quantities in the environment may be entirely known, and some input data may be meaningless until transferred to a domain understandable to the CPS. Regardless of the reason - erroneous data, incomplete knowledge, or unintelligible data - such circumstances can leave a CPS with an incomplete or inaccurate picture of itself and its environment, which can lead to wrong decisions with possibly negative consequences. Therefore, a CPS must know the reliability of the obtained data and may need to abstract information from it to fulfill its tasks. In addition, a CPS should base its decisions on a measure that reflects its confidence about certain circumstances.
Computational Self-Awareness (CSA) is a promising solution for providing a CPS with a monitoring ability that is reliable and robust, even in the presence of erroneous data. This dissertation demonstrates that CSA, especially the properties of abstraction, data reliability, and confidence, can improve a system's monitoring capabilities in terms of robustness and reliability. The extensive experiments conducted are based on two case studies from different fields: the healthcare and industrial sectors.
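As a toy illustration of reliability-aware monitoring (a generic sketch, not the dissertation's actual mechanism), redundant sensor readings can be fused with weights given by per-sensor reliability scores, so that a sensor flagged as unreliable barely influences the estimate:

```python
def fuse(readings, reliabilities):
    """Reliability-weighted average: a reading from a sensor the system
    trusts less (lower score in [0, 1]) contributes less."""
    total = sum(reliabilities)
    return sum(r * w for r, w in zip(readings, reliabilities)) / total

# Three redundant temperature sensors; the third is drifting and has
# been assigned a low reliability score by the monitoring layer.
readings = [20.1, 19.9, 35.0]
reliabilities = [0.9, 0.9, 0.05]
estimate = fuse(readings, reliabilities)
print(round(estimate, 2))   # stays close to the two trusted sensors, ~20.4
```

A plain average of the same readings would be pulled to 25.0 by the drifting sensor, so the wrong decision the abstract warns about follows directly from ignoring reliability.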

    GIS and Remote Sensing for Renewable Energy Assessment and Maps

    This book aims to provide the state of the art on GIS and remote sensing tools for different energy applications and at different scales, i.e., urban, regional, national, and even continental, for renewable-energy scenario planning and policy making.

    Just-in-Time Correntropy Soft Sensor with Noisy Data for Industrial Silicon Content Prediction

    Development of accurate data-driven quality prediction models for industrial blast furnaces faces several challenges, mainly because the collected data are nonlinear, non-Gaussian, and unevenly distributed. A just-in-time correntropy-based local soft sensing approach is presented in this work to predict the silicon content. Without cumbersome effort spent on outlier detection, a correntropy support vector regression (CSVR) modeling framework is proposed to handle soft sensor development and outlier detection simultaneously. Moreover, with a continuously updated database and a clustering strategy, a just-in-time CSVR (JCSVR) method is developed. Consequently, more accurate prediction and efficient implementation of JCSVR can be achieved. The better prediction performance of JCSVR is validated on online silicon content prediction, compared with traditional soft sensors.
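The robustness that correntropy brings can be seen in a small, self-contained sketch (illustrative residuals and kernel width, not the paper's data): the empirical correntropy criterion averages a Gaussian kernel of the residuals, so a gross outlier saturates toward zero instead of dominating the loss the way it dominates the mean squared error.

```python
import math

def gaussian_kernel(e, sigma=1.0):
    """Gaussian kernel used in the correntropy criterion."""
    return math.exp(-e * e / (2 * sigma * sigma))

def correntropy(errors, sigma=1.0):
    """Empirical correntropy of the residuals: the mean kernel value.
    A large residual contributes ~0 rather than its square."""
    return sum(gaussian_kernel(e, sigma) for e in errors) / len(errors)

def mse(errors):
    return sum(e * e for e in errors) / len(errors)

clean = [0.1, -0.2, 0.15, -0.05]
spiked = clean + [50.0]             # one gross outlier

print(mse(clean), mse(spiked))                   # MSE explodes
print(correntropy(clean), correntropy(spiked))   # correntropy barely moves
```

This is why maximizing correntropy lets CSVR absorb outliers during training instead of requiring a separate outlier-detection pass.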

    Secondary Analysis of Electronic Health Records

    Health Informatics; Ethics; Data Mining and Knowledge Discovery; Statistics for Life Sciences, Medicine, Health Science