
    Robust Models for Optic Flow Coding in Natural Scenes Inspired by Insect Biology

    The extraction of accurate self-motion information from the visual world is a difficult problem that has been solved very efficiently by biological organisms using non-linear processing. Previous bio-inspired motion detection models based on a correlation mechanism have been dogged by their sensitivity to undesired properties of the image, such as contrast, which vary widely between images. Here we present a model with multiple levels of non-linear dynamic adaptive components based directly on the known or suspected responses of neurons within the visual motion pathway of the fly brain. By testing the model under realistic high-dynamic-range conditions, we show that the addition of these elements makes the motion detection model robust across a large variety of images, velocities and accelerations. Furthermore, the performance of the entire system exceeds the sum of the incremental improvements offered by the individual components, indicating beneficial non-linear interactions between processing stages. The algorithms underlying the model can be implemented in either digital or analog hardware, including neuromorphic analog VLSI, but defy an analytical solution due to their dynamic non-linear operation. The algorithm has applications in the development of miniature autonomous systems for defense and civilian roles, including robotics, miniature unmanned aerial vehicles and collision-avoidance sensors.
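    For readers unfamiliar with the correlation mechanism the abstract refers to, the following is a minimal sketch of a classic Hassenstein-Reichardt elementary motion detector, the baseline that such bio-inspired models elaborate on. It is illustrative only; the function name, the first-order low-pass delay stage and the time constant are assumptions, not details from the paper:

```python
import numpy as np

def reichardt_correlator(signal_a, signal_b, dt, tau=0.05):
    """Hassenstein-Reichardt elementary motion detector (minimal sketch).

    signal_a, signal_b: luminance time series from two neighbouring
    photoreceptors; dt: sample interval (s); tau: time constant (s) of
    the first-order low-pass filter acting as the delay stage.
    Returns the opponent output over time (positive = motion from a to b).
    """
    signal_a = np.asarray(signal_a, dtype=float)
    signal_b = np.asarray(signal_b, dtype=float)
    alpha = dt / (tau + dt)
    delayed_a = np.zeros_like(signal_a)
    delayed_b = np.zeros_like(signal_b)
    for t in range(1, len(signal_a)):
        delayed_a[t] = delayed_a[t - 1] + alpha * (signal_a[t] - delayed_a[t - 1])
        delayed_b[t] = delayed_b[t - 1] + alpha * (signal_b[t] - delayed_b[t - 1])
    # Correlate each delayed signal with the undelayed neighbour, then
    # subtract the mirror-symmetric product (opponency).
    return delayed_a * signal_b - delayed_b * signal_a

t = np.arange(0.0, 2.0, 1e-3)
a = np.sin(2 * np.pi * 5 * t)
b = np.sin(2 * np.pi * 5 * (t - 0.02))  # same pattern arrives 20 ms later
print(reichardt_correlator(a, b, dt=1e-3).mean())  # > 0: motion from a to b
```

    Note that the raw correlator output scales with the square of stimulus contrast, which is precisely the kind of sensitivity to image statistics that the paper's adaptive non-linear stages are intended to suppress.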

    A Survey on Video-based Graphics and Video Visualization


    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, as a major breakthrough in the field, has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. (Comment: accepted for publication in IEEE Geoscience and Remote Sensing Magazine.)

    A neurobiological and computational analysis of target discrimination in visual clutter by the insect visual system.

    Some insects can detect and track small moving objects, often against cluttered moving backgrounds. Determining how this task is performed is an intriguing challenge from both a physiological and a computational perspective. Previous research has characterized higher-order neurons within the fly brain, known as 'small target motion detectors' (STMDs), that respond selectively to targets even within complex moving surrounds. Interestingly, these cells still respond robustly when the velocity of the target is matched to the velocity of the background (i.e. with no relative motion cues). We performed intracellular recordings from intermediate-order neurons in the fly visual system (the medulla). These full-wave rectifying, transient cells (RTCs) reveal independent adaptation to luminance changes of opposite signs (suggesting separate 'on' and 'off' channels) and fast adaptive temporal mechanisms (as seen in some previously described cell types). We show, via electrophysiological experiments, that the RTC is temporally responsive to rapidly changing stimuli and well suited to an important function in a proposed target-detecting pathway.
    To model this target discrimination, we use high-dynamic-range (HDR) natural images to represent 'real-world' luminance values that serve as inputs to a biomimetic representation of photoreceptor processing. Adaptive spatiotemporal high-pass filtering (1st-order interneurons) shapes the transient 'edge-like' responses useful for feature discrimination. Following this, a model for the RTC implements a nonlinear facilitation between the rapidly adapting, independent-polarity contrast channels, each with centre-surround antagonism. The recombination of the channels results in increased discrimination of small targets, approximately the size of a single pixel, without the need for relative motion cues. This method of feature discrimination contrasts with traditional target and background motion-field computations. We show that our RTC-based target detection model matches properties described for the higher-order STMD neurons, such as contrast sensitivity, height tuning and velocity tuning. The model output shows that the spatiotemporal profile of small targets is sufficiently rare within natural scene imagery to allow our highly nonlinear 'matched filter' to detect many targets against the background. The model produces robust target discrimination across a biologically plausible range of target sizes and velocities, and its output correlates strongly with the velocity of the stimulus but not with other background statistics, such as local brightness or local contrast, which normally influence target detection tasks.
    From an engineering perspective, we examine model elaborations for improved target discrimination via inhibitory interactions from correlation-type motion detectors, using a form of antagonism between our feature correlator and the more typical motion correlator. We also observe that the changing optimal detection threshold correlates strongly with observer ego-motion, and we present an elaborated target detection model that allows a static optimal threshold by scaling the target discrimination mechanism with a model-derived velocity estimate of ego-motion. Finally, we investigate the physiological relevance of this target discrimination model: via very subtle manipulation of the visual stimulus, our model accurately predicts dramatic changes in observed electrophysiological responses from STMD neurons.
    Thesis (Ph.D.) - University of Adelaide, School of Molecular and Biomedical Science, 200
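    The channel-splitting and facilitation stage described above can be sketched in a few lines. The toy version below (function name, time constant and surround size are assumptions, not the calibrated thesis model) splits a temporally high-pass-filtered video into rectified 'on' and 'off' channels with centre-surround antagonism, then multiplies the current 'on' transient with a delayed 'off' transient, so the output is selective for the leading-edge/trailing-edge signature a small dark target leaves at a pixel:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rtc_facilitation(frames, dt=1e-3, tau=0.025, surround=5):
    """Toy ON/OFF facilitation stage (illustrative sketch only).

    frames: (T, H, W) array of temporally high-pass-filtered luminance
    (signed), i.e. output of the earlier photoreceptor/interneuron
    stages. Returns a (T, H, W) response map that is large where an
    'off' transient is followed shortly by an 'on' transient, the
    signature of a small dark target crossing a pixel.
    """
    alpha = dt / (tau + dt)
    off_delayed = np.zeros(frames.shape[1:])
    out = np.zeros_like(frames)
    for t, f in enumerate(frames):
        on = np.maximum(f, 0.0)    # half-wave rectified 'on' channel
        off = np.maximum(-f, 0.0)  # half-wave rectified 'off' channel
        # Centre-surround antagonism on each rectified channel.
        on = np.maximum(on - uniform_filter(on, surround), 0.0)
        off = np.maximum(off - uniform_filter(off, surround), 0.0)
        # A first-order low-pass acts as the delay on the 'off' channel.
        off_delayed += alpha * (off - off_delayed)
        out[t] = on * off_delayed  # facilitation of matched transients
    return out
```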

    Bio-Inspired Information Extraction In 3-D Environments Using Wide-Field Integration Of Optic Flow

    A control-theoretic framework is introduced to analyze an information extraction approach from patterns of optic flow, based on analogues to wide-field motion-sensitive interneurons in the insect visuomotor system. An algebraic model of optic flow is developed, based on a parameterization of simple 3-D environments. It is shown that estimates of proximity and speed, relative to these environments, can be extracted using weighted summations of the instantaneous patterns of optic flow. Small-perturbation techniques are used to link weighting patterns to outputs, which are applied as feedback to facilitate stability augmentation and perform local obstacle avoidance and terrain following. Weighting patterns that provide direct linear mappings between the sensor array and actuator commands can be derived by casting the problem as a combined static state estimation and linear feedback control problem. Additive noise and environment uncertainties are incorporated into an offline procedure for determining optimal weighting patterns. Several applications of the method are provided, with differing spatial measurement domains. Non-linear stability analysis and an experimental demonstration are presented for a wheeled robot measuring optic flow in a planar ring. Local stability analysis and simulation are used to show robustness over a range of urban-like environments for a fixed-wing UAV measuring in orthogonal rings and a micro helicopter measuring over the full spherical viewing arena. Finally, the framework is used to analyze insect tangential cells with respect to the information they encode and to demonstrate how cell outputs can be appropriately amplified and combined to generate motor commands that achieve reflexive navigation behavior.
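    As a concrete illustration of "weighted summations of the instantaneous patterns of optic flow", the sketch below projects flow measured around a planar ring onto a few low-order Fourier harmonics, rough analogues of tangential-cell weighting patterns. The specific harmonics and the states they map to are illustrative assumptions; the paper derives optimal weightings offline:

```python
import numpy as np

def wfi_projections(flow, gammas):
    """Wide-field integration of a ring of optic-flow measurements.

    flow: (N,) tangential optic flow at viewing azimuths gammas (rad).
    Returns projections onto low-order harmonics; in corridor-like
    environments these correlate with states such as the speed-to-
    distance ratio and lateral offset, and can be fed back directly
    as actuator commands.
    """
    weights = {
        "a0": np.ones_like(gammas),  # mean flow ~ forward speed / proximity
        "a1": np.cos(gammas),        # left-right asymmetry ~ lateral offset
        "b1": np.sin(gammas),        # fore-aft asymmetry ~ relative heading
    }
    return {k: flow @ w / len(gammas) for k, w in weights.items()}

gammas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
y = wfi_projections(np.sin(gammas) + 0.2 * np.cos(gammas), gammas)
print(y)  # a centring reflex could then apply, e.g., u = -k * y["a1"]
```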

    The Architectural Expression of Anglican Rituals as Disseminated Through a Photographic Enquiry of Six Devon Churches

    A number of publications have set out to clarify the relationship between architectural, liturgical and ritual developments within the nineteenth-century Anglican Church, especially the part of the Victorian Gothic Revival in which fundamental developments in architectural design and doctrinal change occurred, from 1840 to 1900. A variety of graphic illustrations have supported these texts, and as a photographer with a long-standing interest in visual forms of religious expression, I was led to ask whether new meanings of the architectural/ecclesiastical relationship could be established through a photographic-based research investigation. During the MPhil stage of the project the research brief was directed towards the selection of churches for detailed investigation and the construction of the photographic methodology appropriate to the research. Within the national developments of this period, Devon was a particularly significant county in respect of nineteenth-century architectural and ecclesiological advancement, containing individual buildings such as St Andrew's Exwick, the presence of architects such as William Butterfield and George Edmund Street, and one of the most active ecclesiological groups to exist outside the Cambridge Camden Society and the Oxford Tractarians: the Exeter Diocesan Architectural Society. It was from this basis that the subject of the PhD was developed. Using photography as primary material, the methodology uses physical and conceptual viewpoints to explore the uses of spatial configuration, light, structural forms and colour, surface and texture within each interior. This work has provided the visual form through which it has been possible to re-examine the visual and symbolic use of architectural expression and make direct visual comparison between the churches. At the same time the photographic images are important pieces of design work which are presented as both visual documents and creative interpretations. The final thesis has been constructed from an exhibition that uses the formulations of panoramic, composite and sequential photographic imagery, and a critical text that aligns historical contextualisation with analysis of the photographic enquiry. The research argues that the photographic works, by applying contemporary practices in the form of reconstructions, re-establish the meaning and purpose of the architectural designs and promote the use of photography as primary research.

    Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources

    Over the last few years, there has been notable growth in the digitization of 3D buildings and urban environments. Substantial improvements in both scanning hardware and reconstruction algorithms have led to representations of buildings and cities that can be transmitted remotely and inspected in real time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In particular, in this thesis, we conceptualize cities as collections of individual buildings, and therefore focus on processing one structure at a time rather than on the larger-scale processing of urban environments. Nowadays, there is a wide diversity of digitization technologies, and choosing the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families:
    - Time-of-flight (terrestrial and aerial LiDAR).
    - Photogrammetry (street-level, satellite, and aerial imagery).
    - Human-edited vector data (cadastre and other map sources).
    Each has its advantages in terms of covered area, data quality, economic cost, and processing effort. Plane- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task. Moreover, the capture is done by scan lines, which must be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented planar regions. A more inexpensive option is street-level imagery: a dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions. A further advantage of this approach is that it captures high-quality color data, although the resulting geometric information is usually of lower quality. In this thesis, we analyze in depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. We focus mainly on the technologies that allow high-quality digitization of individual buildings: terrestrial LiDAR for geometric information and street-level imagery for color information. Our main goal is the processing and completion of detailed 3D urban representations. For this, we work with multiple data sources and combine them when possible to produce models that can be inspected in real time. Our research has focused on the following contributions:
    - Effective and feature-preserving simplification of massive point clouds.
    - Normal estimation algorithms explicitly designed for LiDAR data.
    - Low-stretch panoramic representation for point clouds.
    - Semantic analysis of street-level imagery for improved multi-view stereo reconstruction.
    - Color improvement through heuristic techniques and the registration of LiDAR and imagery data.
    - Efficient and faithful visualization of massive point clouds using image-based techniques.
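    As a point of reference for the normal-estimation contribution above, here is a minimal baseline sketch of the generic k-nearest-neighbour PCA method that LiDAR-specific algorithms typically improve upon. It is not the thesis' algorithm; the function name and the choice of k are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """Baseline PCA normal estimation for a point cloud.

    points: (N, 3) array. Returns (N, 3) unit normals. The sign is
    ambiguous; for terrestrial LiDAR the normals can afterwards be
    oriented towards the scanner position.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)  # k nearest neighbours per point
    normals = np.empty_like(points, dtype=float)
    for i, nb in enumerate(idx):
        q = points[nb] - points[nb].mean(axis=0)
        # The right-singular vector with the smallest singular value is
        # the direction of least variance of the local neighbourhood.
        _, _, vt = np.linalg.svd(q, full_matrices=False)
        normals[i] = vt[-1]
    return normals
```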

    Deep Learning for 3D Visual Perception

    3D visual perception refers to the set of problems that involve gathering information through a visual sensor and estimating the three-dimensional position and structure of the objects and surfaces around the sensor. Capabilities such as ego-motion estimation or map building are essential for higher-level tasks such as autonomous driving or augmented reality. This thesis addresses several challenges in 3D perception, all of them useful from the perspective of SLAM (Simultaneous Localization and Mapping), which is itself a 3D perception problem.
    Simultaneous Localization and Mapping (SLAM) aims to track the position of a device (for example a robot, a phone or virtual-reality glasses) with respect to the map it builds simultaneously while the platform explores the environment. SLAM is a highly relevant technology for applications such as virtual reality, augmented reality and autonomous driving. Visual SLAM is the term used for the SLAM problem solved using only visual sensors. Many of the pieces of the ideal SLAM system are, today, well known, mature and in many cases already present in applications. However, other pieces still pose significant research challenges. In particular, those we have worked on in this thesis are the estimation of the 3D structure around a camera from a single image, recognition of previously visited places under drastic appearance changes, high-level reconstruction, and SLAM in dynamic environments, all of them using deep neural networks.
    Monocular depth estimation is the task of perceiving the distance to the camera of each pixel in the image, using only the information obtained from a single image. This is an ill-posed problem, so it is very difficult to infer the exact depth of the points from a single image; it requires knowledge of what is being seen and of the sensor used. For example, if we know that a certain car model has a certain height and we also know the type of camera used (focal length, pixel size...), we can say that if that car spans a certain height in the image, for example 50 pixels, it is at a certain distance from the camera. To this end we present the first single-view depth estimation work that achieves reasonable performance with multiple camera types, such as a phone or a video camera.
    We also present how to estimate, from a single image, the structure of a room, i.e. the room layout. For this second work, we leverage spherical images taken by a panoramic camera, using an equirectangular representation. Using these images we recover the room layout; our goal is to recognize the cues in the image that define the structure of a room. We focus on recovering the simplest version: the lines separating floor, walls and ceiling.
    Long-term localization and mapping requires handling appearance changes in the environment; the effect of taking an image in winter versus summer can be very large. We introduce a multi-view, appearance-invariant model that solves the place recognition problem robustly. Visual place recognition tries to identify a previously visited place by associating visual cues seen in two images, one taken in the past and one taken in the present. Ideally it should be invariant to changes in viewpoint, illumination, dynamic objects, and long-term appearance changes such as day and night, seasons or weather.
    For long-term operation we also present DynaSLAM, a SLAM system that distinguishes the static and dynamic parts of the scene. It estimates its position based only on the static parts and reconstructs the map of the static parts only, so that if we revisit a scene, our map is not affected by the presence of new dynamic objects or the disappearance of previous ones.
    In summary, in this thesis we contribute to several 3D perception problems, all of which address problems of Visual SLAM.
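    The car example in the abstract is just the pinhole projection model. A worked version follows; the numbers are assumptions for illustration, not values from the thesis:

```python
def depth_from_known_height(real_height_m, pixel_height, focal_px):
    """Pinhole relation: distance = f * H / h, with f the focal length
    in pixels, H the real object height and h its height in pixels."""
    return focal_px * real_height_m / pixel_height

# A 1.5 m tall car imaged 50 px tall by a camera with an 800 px focal
# length would be 800 * 1.5 / 50 = 24 m from the camera.
print(depth_from_known_height(1.5, 50, 800))  # 24.0
```

    This is exactly why single-image depth requires knowledge of both scene content and camera intrinsics: change the focal length and the same 50-pixel car implies a different distance.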

    Towards high-accuracy augmented reality GIS for architecture and geo-engineering

    Architecture and geo-engineering are application domains where professionals must make critical decisions, and they require high-precision tools to assist them in their daily decision-making. Augmented Reality (AR) shows great potential to allow easier association between the abstract 2D drawings and 3D models representing the infrastructure under review and the actual perception of these objects in reality. Visualization tools based on AR overlay the virtual models and the reality in the field of view of the user. However, the architecture and geo-engineering context requires high-accuracy, real-time positioning from these AR systems. This is not a trivial task, especially in urban environments or on construction sites where the surroundings may be crowded and highly dynamic. This project investigates the accuracy requirements of mobile AR GIS as well as the main challenges to address when tackling high-accuracy AR based on omnidirectional panoramas.
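    To make the positioning requirement concrete, the sketch below projects a surveyed 3-D point into the image of a localized AR camera using the standard pinhole model. It is a generic illustration of why pose accuracy matters, not this project's pipeline; all names and numbers are assumptions:

```python
import numpy as np

def project_point(X_world, R, t, K):
    """Project a 3-D point (site/GIS frame) into the image of a camera
    with world-to-camera rotation R (3x3), translation t (3,) and
    intrinsic matrix K (3x3). Returns pixel coordinates (u, v)."""
    X_cam = R @ X_world + t
    u, v, w = K @ X_cam
    return u / w, v / w

# With a 1000 px focal length, a 0.1 degree orientation error shifts a
# projected point by roughly 1000 * tan(0.1 deg) ~ 1.7 px, and at 20 m
# the overlay lands about 3.5 cm away from the real structure.
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
print(project_point(np.array([2.0, 1.0, 20.0]), np.eye(3), np.zeros(3), K))
```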