
    Context Change Detection for an Ultra-Low Power Low-Resolution Ego-Vision Imager

    With the increasing popularity of wearable cameras such as the GoPro or Narrative Clip, research on continuous activity monitoring from egocentric cameras has received a lot of attention. Research in hardware and software is devoted to finding new efficient, stable and long-running solutions; however, current devices are too power-hungry for truly always-on operation and must be aggressively duty-cycled to achieve acceptable lifetimes. In this paper we present a wearable system for context change detection based on an egocentric camera with ultra-low power consumption that can collect data 24/7. Although the resolution of the captured images is low, experimental results in real scenarios demonstrate that our approach, based on Siamese Neural Networks, can achieve visual context awareness. In particular, we compare our solution with hand-crafted features and with a state-of-the-art technique, and propose a novel and challenging dataset of roughly 30,000 low-resolution images.
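    The abstract reports the approach at a high level only. As a rough sketch of how such a Siamese change detector can be wired up, the following PyTorch code passes two frames through one shared encoder and thresholds the distance between their embeddings; the layer sizes, the 64x64 grayscale input and the name SiameseNet are illustrative assumptions, not details from the paper.

    # Minimal sketch of a Siamese network for context change detection.
    # Architecture details are illustrative guesses, not the network
    # from the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseNet(nn.Module):
        def __init__(self, embedding_dim=64):
            super().__init__()
            # Small CNN encoder shared by both branches.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, embedding_dim),
            )

        def forward(self, frame_a, frame_b):
            # Weight sharing across both inputs is what makes the
            # network "Siamese".
            emb_a = self.encoder(frame_a)
            emb_b = self.encoder(frame_b)
            # A large embedding distance suggests a context change.
            return F.pairwise_distance(emb_a, emb_b)

    # Usage: scores above a tuned threshold flag a context change.
    net = SiameseNet()
    a = torch.rand(4, 1, 64, 64)  # batch of low-resolution frames
    b = torch.rand(4, 1, 64, 64)
    change_score = net(a, b)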

    Electronic Systems with High Energy Efficiency for Embedded Computer Vision

    Electronic systems are now widely adopted in everyday use, and embedded wearable and portable devices are in extensive use from industrial to consumer applications. The growing demand for embedded devices and applications has opened several new research fields driven by the need for low power consumption and real-time responsiveness. For this class of devices, computer vision algorithms are a challenging application target: in embedded computer vision, hardware and software design have to interact to meet application-specific requirements. The focus of this thesis is the study of computer vision algorithms for embedded systems. The work begins by presenting a novel algorithm for a stationary IoT use case targeting a high-end embedded device class, where power can be supplied to the platform through wires. Further contributions focus on algorithmic design and optimization for low- and ultra-low-power devices. Solutions are presented for gesture recognition and context change detection on wearable devices, focusing on first-person wearable devices (egocentric vision), with the aim of exploiting more constrained systems in terms of available power budget and computational resources. A novel gesture recognition algorithm is presented that improves on state-of-the-art approaches. We then demonstrate the effectiveness of exploiting low-resolution images for context change detection with real-world ultra-low-power imagers. The last part of the thesis deals with more flexible software models that support multiple applications linked at runtime and executed on the Cortex-M device class, providing the critical isolation features typical of virtualization-ready CPUs on low-cost, low-power microcontrollers and addressing shortcomings in the security and deployment capabilities of current firmware.

    A Review of Sensor Technologies for Perception in Automated Driving

    After more than 20 years of research, ADAS are common in modern vehicles available on the market. Automated Driving systems, still in the research phase and limited in their capabilities, are starting early commercial tests on public roads. These systems rely on the information provided by on-board sensors, which describe the state of the vehicle, its environment and other actors. The selection and arrangement of sensors is a key factor in the design of the system. This survey reviews existing, novel and upcoming sensor technologies applied to common perception tasks for ADAS and Automated Driving. They are put in context by a historical review of the most relevant demonstrations of Automated Driving, focused on their sensing setups. Finally, the article presents a snapshot of future challenges for sensing technologies and perception, finishing with an overview of the commercial initiatives and manufacturer alliances that point to future market trends in sensor technologies for Automated Vehicles. This work has been partly supported by the ECSEL project ENABLE-S3 (grant agreement number 692455-2) and by the Spanish Government through CICYT projects TRA2015-63708-R and TRA2016-78886-C3-1-R.

    Application of Multi-Sensor Fusion Technology in Target Detection and Recognition

    The application of multi-sensor fusion technology has drawn a lot of industrial and academic interest in recent years. Multi-sensor fusion methods are widely used in many applications, such as autonomous systems, remote sensing, video surveillance, and the military. These methods can capture the complementary properties of targets observed by multiple sensors, and can achieve a detailed description of the environment and accurate detection of targets of interest based on information from different sensors. This book collects novel developments in the field of multi-sensor, multi-source, and multi-process information fusion. The articles emphasize one or more of three facets: architectures, algorithms, and applications. The published papers include both fundamental theoretical analyses and demonstrations of their application to real-world problems.
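    As a toy illustration of the core idea behind measurement-level fusion (combining complementary, differently noisy observations of the same quantity), the following Python snippet fuses two range readings by inverse-variance weighting, the static scalar special case of a Kalman update; the fuse helper and all numbers are hypothetical, not taken from the book.

    # Toy measurement-level sensor fusion: two noisy range estimates of
    # the same target are combined by inverse-variance weighting.

    def fuse(z1, var1, z2, var2):
        """Fuse two scalar measurements with known noise variances."""
        w1, w2 = 1.0 / var1, 1.0 / var2
        fused = (w1 * z1 + w2 * z2) / (w1 + w2)
        fused_var = 1.0 / (w1 + w2)  # always <= min(var1, var2)
        return fused, fused_var

    # Hypothetical readings: coarse radar range, finer camera depth.
    z_radar, var_radar = 50.3, 4.0
    z_camera, var_camera = 49.1, 1.0
    est, est_var = fuse(z_radar, var_radar, z_camera, var_camera)
    print(f"fused range: {est:.2f} m (variance {est_var:.2f})")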

    Method for fabricating an artificial compound eye

    A method for fabricating an imaging system, the method comprising providing a flexible substrate (200), a first layer (220) comprising a plurality of microlenses (232), and a second layer (240) comprising a plurality of image sensors (242). The method further comprises stacking the first and second layers (220; 240) onto the flexible substrate (200) by attaching the plurality of image sensors (242) to the flexible substrate, such that each of the plurality of microlenses (232) and image sensors (242) are aligned to form a plurality of optical channels (300), each optical channel comprising at least one microlens and at least one associated image sensor, and mechanically separating the optical channels (300) such that the separated optical channels remain attached to the flexible substrate (200) to form a mechanically flexible imaging system.

    3D Motion Analysis via Energy Minimization

    This work deals with 3D motion analysis from stereo image sequences for driver assistance systems. It consists of two parts: the estimation of motion from the image data and the segmentation of moving objects in the input images. The content can be summarized by the technical term machine visual kinesthesia: the sensation or perception and cognition of motion. In the first three chapters, the importance of motion information is discussed for driver assistance systems, for machine vision in general, and for the estimation of ego motion. The next two chapters deal with motion perception, analyzing the apparent movement of pixels in image sequences for both monocular and binocular camera setups. The obtained motion information is then used to segment moving objects in the input video. Thus, one can clearly identify the thread from analyzing the input images to describing them by means of stationary and moving objects. Finally, I present possibilities for future applications based on the contents of this thesis. Previous work is presented in the respective chapters. Although the overarching issue of motion estimation from image sequences is related to practice, there is nothing as practical as a good theory (Kurt Lewin). Several problems in computer vision are formulated as intricate energy minimization problems. In this thesis, motion analysis in image sequences is thoroughly investigated, showing that splitting an original complex problem into simplified sub-problems yields improved accuracy, increased robustness, and a clear and accessible approach to state-of-the-art motion estimation techniques. In Chapter 4, optical flow is considered. Optical flow is commonly estimated by minimizing a combined energy consisting of a data term and a smoothness term. These two parts are decoupled, yielding a novel and iterative approach to optical flow. The derived Refinement Optical Flow framework is a clear and straightforward approach to computing the apparent image motion vector field, and it currently yields the most accurate motion estimation results in the literature. Much as this is an engineering approach of fine-tuning precision to the last detail, it helps to gain better insight into the problem of motion estimation. This profoundly contributes to state-of-the-art research in motion analysis, in particular facilitating the use of motion estimation in a wide range of applications. In Chapter 5, scene flow is rethought. Scene flow denotes the three-dimensional motion vector field for every image pixel, computed from a stereo image sequence. Again, decoupling the commonly coupled estimation of three-dimensional position and three-dimensional motion yields an approach to scene flow estimation with more accurate results and a considerably lower computational load. It results in a dense scene flow field and enables additional applications based on the dense three-dimensional motion vector field, which are to be investigated in the future. One such application is the segmentation of moving objects in an image sequence. Detecting moving objects within the scene is one of the most important features to extract from image sequences of a dynamic environment. This is presented in Chapter 6. Scene flow and the segmentation of independently moving objects are only first steps towards machine visual kinesthesia. Throughout this work, I present possible future work to improve the estimation of optical flow and scene flow.
    Chapter 7 additionally presents an outlook on future research for driver assistance applications. But there is much more to the full understanding of the three-dimensional dynamic scene. This work is meant to inspire the reader to think outside the box and contribute to the vision of building perceiving machines.
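    As background for the energy-minimization framing in Chapter 4, the following numpy sketch implements the classical coupled formulation (Horn-Schunck optical flow), where the data and smoothness terms are minimized jointly; it is textbook material, not the thesis's decoupled Refinement Optical Flow, and the parameter values are illustrative.

    # Classical Horn-Schunck optical flow: minimizes the coupled energy
    #   E = sum (Ix*u + Iy*v + It)^2 + alpha^2 * (smoothness of u, v)
    # via the iterative update derived from its Euler-Lagrange equations.
    import numpy as np

    def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
        im1, im2 = im1.astype(float), im2.astype(float)
        Ix = np.gradient(im1, axis=1)   # spatial image gradients
        Iy = np.gradient(im1, axis=0)
        It = im2 - im1                  # temporal gradient
        u = np.zeros_like(im1)
        v = np.zeros_like(im1)

        def local_avg(f):
            # 4-neighbour average (wrap-around boundary for brevity).
            return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                           + np.roll(f, 1, 1) + np.roll(f, -1, 1))

        for _ in range(n_iter):
            u_bar, v_bar = local_avg(u), local_avg(v)
            num = Ix * u_bar + Iy * v_bar + It
            den = alpha**2 + Ix**2 + Iy**2
            u = u_bar - Ix * num / den
            v = v_bar - Iy * num / den
        return u, v

    # Usage on a synthetic pair: a bright square shifted one pixel right.
    im1 = np.zeros((32, 32)); im1[10:20, 10:20] = 1.0
    im2 = np.roll(im1, 1, axis=1)
    u, v = horn_schunck(im1, im2, alpha=0.5, n_iter=200)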

    3-D Cloud Morphology and Evolution Derived from Hemispheric Stereo Cameras

    Clouds play a key role in the Earth-atmosphere system as they reflect incoming solar radiation back to space, while absorbing and emitting longwave radiation. Cumulus clouds pose a significant challenge for observation and modeling due to their relatively small size, ranging from several hundred up to a few thousand meters, their often complex 3-D shapes and their highly dynamic life cycle. Common instruments employed to study clouds include cloud radars, lidar-ceilometers and (microwave) radiometers, as well as satellite and airborne observations (in-situ and remote), all of which lack either sufficient sensitivity or the spatial and temporal resolution needed for comprehensive observation. This thesis investigates the feasibility of a ground-based network of hemispheric stereo cameras to retrieve detailed 3-D cloud geometries, which are needed for the validation of simulated cloud fields and for parametrization in numerical models. Such camera systems, which offer a hemispheric field of view and a temporal resolution in the range of seconds or less, have the potential to fill the remaining gap in cloud observations to a considerable degree and allow critical information to be derived about the size, morphology, spatial distribution and life cycle of individual clouds and the local cloud field. The technical basis for the 3-D cloud morphology retrieval is stereo reconstruction: a cloud is synchronously recorded by a pair of cameras separated by a few hundred meters, so that mutually visible areas of the cloud can be reconstructed via triangulation. The location and orientation of each camera system were obtained from a satellite navigation system, from stars detected in night-sky images and from mutually visible cloud features in the images. The image point correspondences required for 3-D triangulation were provided primarily by a dense stereo matching algorithm that reconstructs an object with a high degree of spatial completeness, which benefits subsequent analysis. The experimental setup in the vicinity of the Jülich Observatory for Cloud Evolution (JOYCE) initially included one pair of hemispheric sky cameras; it was later extended by a second pair, separated from the first by several kilometers, to reconstruct clouds from different viewing perspectives. A comparison of the cloud base height (CBH) at zenith obtained from the stereo cameras and from a lidar-ceilometer showed a typical bias of mostly below 2% of the lidar-derived CBH, with a few occasions between 3 and 5%. Typical standard deviations of the differences ranged from 50 m (1.5% of CBH) for altocumulus clouds to between 123 m (7%) and 165 m (10%) for cumulus and stratocumulus clouds. A comparison of the estimated 3-D cumulus boundary near zenith with the 2-D reflectivity profiles sensed by a 35-GHz cloud radar revealed typical differences between 35 and 81 m. For clouds at larger distances (> 2 km) both signals can deviate significantly, which can in part be explained by a lower reconstruction accuracy for the low-contrast areas of a cloud base, but also by the insufficient sensitivity of the cloud radar when the cloud condensate is dominated by very small droplets or diluted with environmental air. For sequences of stereo images, the 3-D cloud reconstructions from the stereo analysis can be combined with the motion and tracking information from an optical flow routine in order to derive 3-D motion and deformation vectors of clouds.
    This allowed atmospheric motion to be estimated for cloud layers with an accuracy of 1 m/s in velocity and 7° to 10° in direction. The fine-grained motion data was also used to detect and quantify cloud motion patterns of individual cumuli, such as deformations under vertical wind shear. The potential of the proposed method lies in an extended analysis of the life cycle and morphology of cumulus clouds. This is illustrated in two show cases in which developing cumulus clouds were reconstructed from two different viewing perspectives. In the first case study, a moving cloud was tracked and analyzed while being subject to vertical wind shear. The highly tilted cloud body was captured and its vertical profile was quantified to obtain measures such as vertically resolved diameter and tilting angle. The second case study shows a life-cycle analysis of a developing cumulus, including a time series of relevant geometric aspects, such as perimeter, vertically projected area, diameter and thickness, and further derived statistics such as cloud aspect ratio and perimeter scaling. The analysis confirms some aspects of cloud evolution, such as the pulse-like formation of cumulus, and indicates that the cloud aspect ratio (size vs. height) of an individual life cycle can be described by a power-law functional relationship.

    [Translated from the German abstract.] Clouds have a decisive influence on the Earth's radiation budget, as they effectively reflect solar radiation while both absorbing longwave radiation emitted by the Earth and re-emitting it themselves. Moreover, cumulus clouds remain a major challenge for observation and modeling because of their comparatively small extent of a few hundred to a few thousand meters and their dynamic life cycle. Instruments currently used for their study, such as lidar-ceilometers, cloud radar, microwave radiometers and satellite-based observations, do not provide the spatial and temporal coverage required for a comprehensive study of these clouds. This thesis investigates to what extent ground-based observation of clouds with hemispherically projecting cloud cameras is suitable for reconstructing detailed 3-D cloud geometries and deriving from them information about the size, morphology and life cycle of individual clouds and of the local cloud field. The basis for capturing the 3-D cloud geometries in this work is 3-D stereo reconstruction, in which a cloud is imaged by two synchronously recording cameras set up several hundred meters apart. Parts of a cloud visible from both sides can thus be reconstructed via triangulation. Fisheye lenses provide the hemispheric field of view of the cloud cameras. While the positions of the cameras were determined with a satellite navigation system, their absolute orientation in space was determined with the help of detected stars serving as reference points. The relative orientation of two cameras, which is important for the stereo analysis, was subsequently refined with the aid of point correspondences between the stereo images. For the stereo analysis, an image analysis algorithm was primarily used that is characterized by a high degree of geometric completeness and also provides 3-D information for image regions with low contrast. In selected cases, the cloud geometries reconstructed in this way were additionally compared with a precise multi-view stereo method. A 3-D cloud geometry that is as complete as possible is advantageous for subsequent analysis, which comprises the segmentation and identification of individual clouds, their spatio-temporal tracking and the derivation of geometric quantities. The experimental setup in the vicinity of the Jülich Observatory for Cloud Evolution (JOYCE) initially comprised one and later two stereo cameras, installed several kilometers apart in order to reconstruct different parts of the clouds. A comparison between stereo reconstruction and lidar-ceilometer showed typical standard deviations of the cloud base height difference of 50 m (1.5%) for mid-level altocumulus clouds and 123 m (7%) to 165 m (10%) for heterogeneous cumulus and stratocumulus clouds. At the same time, the reconstructed cloud base height on average deviated mostly by no more than 2%, and in individual cases by 3-5%, from the corresponding lidar value. Compared with the cumulus morphology derived from the 2-D reflectivity profiles of the cloud radar, typical differences between 35 and 81 m were found near zenith. For more distant clouds (> 2 km), stereo reconstruction and reflectivity signal can differ strongly, which can be explained not only by a decreasing geometric accuracy of the stereo reconstruction in low-contrast areas but in particular by the often insufficient sensitivity of the radar to small cloud droplets, as found at the cloud base and at cloud edges. Combining stereo analysis with motion information within an image sequence allows cloud motion and deformation vectors to be determined. In addition to tracking individual cloud structures and capturing cloud dynamics (for example, the deformation of clouds by wind shear), wind speed and direction can be estimated in the case of stratiform clouds. A comparison with observations from a wind lidar showed typical deviations of 1 m/s in wind speed and 7° to 10° in wind direction. A particular added value of the method lies in a deeper analysis of the morphology and life cycle of cumulus clouds. This was demonstrated in two exemplary case studies in which the 3-D reconstructions of two distantly installed stereo cameras were combined. In the first case, a cumulus developing under vertical wind shear was recorded from two sides, which enabled the geometric capture of the cloud body strongly tilted by the shear. Parameters such as the vertical profile, the tilt angle of the cloud and the diameter of individual height layers were estimated. The second case presented a statistical analysis of a developing cumulus over its life cycle, yielding a time series of relevant metrics such as equivalent diameter, vertical extent and perimeter, as well as derived quantities such as aspect ratio and perimeter scaling. While the analysis confirms previous results from simulations and satellite-based observations, it extends them to the level of individual clouds and to the derivation of functional relationships, such as the relation between cloud diameter and vertical dimension.
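    The retrieval described above rests on triangulating mutually visible cloud features recorded by two synchronized cameras. The following numpy sketch shows the linear (DLT) two-view triangulation step under idealized pinhole projection matrices; the real sky cameras use fisheye optics, so the triangulate helper, the intrinsics and the scene numbers are simplifying assumptions for illustration only.

    # Linear (DLT) two-view triangulation: recover a 3-D point from its
    # pixel coordinates in two synchronized, calibrated cameras.
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixels."""
        A = np.stack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # Homogeneous least squares: right singular vector of A with
        # the smallest singular value.
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]  # dehomogenize

    # Two idealized upward-looking cameras with a 300 m baseline.
    K = np.diag([800.0, 800.0, 1.0])  # hypothetical intrinsics
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), [[-300.0], [0.0], [0.0]]])
    X_true = np.array([100.0, 50.0, 1500.0])  # cloud feature at 1.5 km
    x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
    x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
    print(triangulate(P1, P2, x1, x2))  # ~ [100.  50. 1500.]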

    NASA Strategic Roadmap Committees Final Roadmaps

    Volume 1 contains NASA strategic roadmaps for the following Advanced Planning and Integration Office (APIO) committees: Earth Science and Applications from Space; Sun-Solar System Connection. Volume 2 contains NASA strategic roadmaps for the following APIO committees: Robotic and Human Exploration of Mars; Solar System Exploration; Search for Earth-like Planets; Universe Exploration. It also includes membership rosters and charters for all APIO committees, including those above and the following: Exploration Transportation System; Nuclear Systems; Robotic and Human Lunar Exploration; Aeronautical Technologies; Space Shuttle; International Space Station; Education.