43 research outputs found

    The Use of High-Speed Imaging Systems for Applications in Precision Agriculture

    The book "New Technologies - Trends, Innovations and Research" presents contributions by researchers from around the world working in a range of modern fields of technology, serving as a valuable tool for scientists, researchers, graduate students and professionals. Practical applications in particular areas are presented, offering the capability to solve problems arising from economic needs and to perform specific functions. The book makes it possible for scientists and engineers to become familiar with ideas from researchers in several modern fields of activity. It provides interesting examples of practical applications of knowledge, assists in the design process, and may bring changes to the readers' own research areas. A collection of techniques that combine scientific resources is provided for making products with the desired quality criteria. Strong mathematical and scientific concepts are used in the applications, which meet the requirements of utility, usability and safety. The technological applications presented in the book have appropriate functions and may be exploited with competitive advantages. The book has 17 chapters, covering the following subjects: manufacturing technologies, nanotechnologies, robotics, telecommunications, physics, dental medical technologies, smart homes, speech technologies, agriculture technologies and management.

    Markerless deformation capture of hoverfly wings using multiple calibrated cameras

    This thesis introduces an algorithm for the automated deformation capture of hoverfly wings from multiple camera image sequences. The algorithm is capable of extracting dense surface measurements, without the aid of fiducial markers, over an arbitrary number of wingbeats of hovering flight, and requires limited manual initialisation. A novel motion prediction method, called the 'normalised stroke model', makes use of the similarity of adjacent wing strokes to predict wing keypoint locations, which are then iteratively refined in a stereo image registration procedure. Outlier removal, wing fitting and further refinement using independently reconstructed boundary points complete the algorithm. It was tested on two hovering data sets, as well as a challenging flight manoeuvre. By comparing the 3D positions of keypoints extracted from these surfaces with those resulting from manual identification, the accuracy of the algorithm is shown to approach that of a fully manual approach. In particular, half of the algorithm-extracted keypoints were within 0.17 mm of manually identified keypoints, approximately equal to the error of the manual identification process. This algorithm is unique among purely image-based flapping flight studies in the level of automation it achieves, and its generality would make it applicable to wing tracking of other insects.
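The 'normalised stroke model' is described only at a high level in the abstract. As a rough illustration, and not the thesis's actual algorithm, one can think of it as predicting a keypoint's position at a given phase of the current stroke by interpolating its trajectory over the previous stroke, with the stroke timeline normalised to [0, 1). A minimal NumPy sketch with invented names (`predict_keypoint`, `prev_phases`, `prev_positions`):

```python
import numpy as np

def predict_keypoint(prev_phases, prev_positions, query_phase):
    """Toy 'normalised stroke model': predict a wing keypoint's position
    at a given phase of the current stroke by interpolating its trajectory
    over the previous, phase-normalised stroke.

    prev_phases    : (n,) increasing phases in [0, 1] from the previous stroke
    prev_positions : (n, d) keypoint coordinates observed at those phases
    """
    query_phase = query_phase % 1.0  # normalise into one stroke period
    # Interpolate each coordinate independently over normalised phase.
    return np.array([np.interp(query_phase, prev_phases, prev_positions[:, d])
                     for d in range(prev_positions.shape[1])])
```

In the thesis, such predictions serve only as an initialisation that is iteratively refined by stereo image registration; this sketch shows the prediction step alone.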

    Geo-rectification and cloud-cover correction of multi-temporal Earth observation imagery

    Over the past decades, improvements in remote sensing technology have led to a mass proliferation of aerial imagery. This, in turn, has opened vast new possibilities in land cover classification, cartography, and related fields. As applications in these fields became increasingly complex, the amount of data required rose accordingly, and so, to satisfy these new needs, automated systems had to be developed. Geometric distortions in raw imagery must be rectified, otherwise the high accuracy requirements of the newest applications will not be attained. This dissertation proposes an automated solution for the pre-stages of multi-spectral satellite imagery classification, focusing on geo-rectification based on the Fourier shift theorem and on multi-temporal cloud-cover correction. By automating the first stages of image processing, automatic classifiers can take advantage of a larger supply of image data, eventually allowing for the creation of semi-real-time mapping applications.
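The Fourier shift theorem underpins phase correlation, a standard technique for recovering the translational offset between two images. A minimal NumPy sketch of that technique (the dissertation's actual pipeline is not reproduced here, and the function name is invented):

```python
import numpy as np

def phase_correlation(reference, target):
    """Estimate the integer (dy, dx) circular shift taking `reference`
    to `target` via phase correlation (Fourier shift theorem)."""
    F1 = np.fft.fft2(reference)
    F2 = np.fft.fft2(target)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12       # keep only the phase difference
    corr = np.real(np.fft.ifft2(cross))  # delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (wrap-around).
    h, w = reference.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Real geo-rectification additionally handles sub-pixel shifts, rotation and scale; this sketch covers only the pure-translation case the shift theorem describes.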

    A Methodology to Develop Computer Vision Systems in Civil Engineering: Applications in Material Testing and Fish Tracking

    Computer Vision provides a new and promising approach to Civil Engineering, a field in which it is extremely important to measure real-world processes accurately. However, Computer Vision is a broad field involving many techniques and topics, and defining a systematic development approach is problematic. This thesis proposes a new methodology for developing such systems that takes into account the special characteristics and requirements of Civil Engineering. Following this methodology, two systems were developed: a system to measure displacements from real images of material surfaces taken during strength tests, which overcomes the limitations of current physical sensors that interfere with the assay and can only obtain measurements at a single point of the material and in a single direction of movement; and a system to measure the trajectory of fish in vertical slot fishways, whose purpose is to address current shortcomings in fishway design by providing information on fish behaviour. These applications represent significant contributions to the field, and show that the defined and implemented methodology provides a systematic and reliable framework for developing Computer Vision systems in Civil Engineering.

    Artificial Intelligence in Materials Science: Applications of Machine Learning to Extraction of Physically Meaningful Information from Atomic Resolution Microscopy Imaging

    Materials science is the cornerstone of the technological development of the modern world, which has been largely shaped by advances in the fabrication of semiconductor materials and devices. However, Moore's Law is expected to end by 2025 as traditional transistor scaling reaches its limits, and the classical approach has proven unable to keep up with the needs of materials manufacturing, requiring more than 20 years to move a material from discovery to market. To adapt materials fabrication to the needs of the 21st century, it is necessary to develop methods for much faster processing of experimental data and for connecting the results to theory, with feedback flowing in both directions. State-of-the-art analysis, however, remains selective and manual, prone to human error and unable to handle the large quantities of data generated by modern equipment. Recent advances in scanning transmission electron and scanning tunneling microscopy have allowed imaging and manipulation of materials at the atomic level, and these capabilities require the development of automated, robust, reproducible methods.

    Artificial intelligence and machine learning have dealt with similar issues in applications to image and speech recognition, autonomous vehicles, and other projects that are beginning to change the world around us. Materials science, however, faces significant challenges that prevent direct application of such models without taking physical constraints and domain expertise into account.

    Atomic resolution imaging can generate data that leads to a better understanding of materials and their properties through artificial intelligence methods. Machine learning, in particular combinations of deep learning and probabilistic modeling, can learn to recognize physical features in imaging, automating and speeding up characterization. By incorporating knowledge from theory and simulations into such frameworks, it is possible to create the foundation for automated atomic-scale manufacturing.
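In pipelines of this kind, a deep network typically outputs a per-pixel probability map of where atomic columns sit, and atom coordinates are then extracted as confident local maxima. A hedged NumPy sketch of that post-processing step only (the network itself and the function name are not from the dissertation):

```python
import numpy as np

def locate_atoms(prob_map, threshold=0.5):
    """Extract atom coordinates from a per-pixel probability map
    (e.g. the output of a segmentation network): keep pixels that are
    strict local maxima in their 3x3 neighbourhood and exceed a
    confidence threshold."""
    h, w = prob_map.shape
    padded = np.pad(prob_map, 1, mode="constant", constant_values=-np.inf)
    # Stack the 8 neighbours of every pixel for a vectorised comparison.
    neighbours = np.stack([padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                           if (dy, dx) != (0, 0)])
    is_peak = (prob_map > neighbours.max(axis=0)) & (prob_map >= threshold)
    return np.argwhere(is_peak)  # (row, col) positions of detected atoms
```

Sub-pixel refinement (e.g. Gaussian fitting around each peak) would normally follow, but is omitted here for brevity.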

    Registration/Fusion of Multimodal Images Using Higher-Order Graphs (Recalage/Fusion d'images multimodales à l'aide de graphes d'ordres supérieurs)

    The main objective of this thesis is the exploration of higher-order Markov Random Fields for image registration, specifically to encode knowledge of global transformations, such as rigid transformations, in the graph structure. Our main framework applies to 2D-2D or 3D-3D registration and uses a hierarchical grid-based Markov Random Field model in which the hidden variables are the displacement vectors of the control points of the grid. We first present the construction of a graph that enables linear registration, meaning that we can perform affine, rigid, or similarity registration with the same graph while changing only one potential. Our framework is thus modular with respect to the sought transformation and the metric used. Inference is performed with Dual Decomposition, which can handle the higher-order hyperedges and which guarantees that the global optimum of the function is reached if the slaves agree. A similar structure is also used to perform 2D-3D registration. Second, we fuse the former graph with another structure able to perform deformable registration. The resulting graph is more complex, and another optimisation algorithm, the Alternating Direction Method of Multipliers (ADMM), is needed to obtain a good solution within reasonable time; it improves on Dual Decomposition by speeding up convergence. This framework solves linear and deformable registration simultaneously, which removes a potential bias introduced by the standard approach of consecutive registrations.
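The grid-based MRF described above assigns each control point a discrete displacement label and balances a data term against a smoothness term between neighbouring control points. As an illustration of the energy being minimised (not the thesis's Dual Decomposition or ADMM solvers), the simplest non-trivial case, a 1-D chain of control points, can be solved exactly by dynamic programming; all names below are invented:

```python
import numpy as np

def chain_mrf_registration(data_cost, smooth_weight=1.0):
    """Minimise, exactly, the chain-MRF registration energy
        sum_i data_cost[i, l_i] + w * sum_i |l_i - l_{i+1}|
    where l_i is the discrete displacement label of control point i.

    data_cost : (n_points, n_labels) matching cost per point and label
    returns   : optimal label per control point
    """
    n, L = data_cost.shape
    labels = np.arange(L)
    # Pairwise smoothness cost between any predecessor/current label pair.
    pairwise = smooth_weight * np.abs(labels[:, None] - labels[None, :])
    cost = data_cost[0].astype(float).copy()
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        total = cost[:, None] + pairwise          # [predecessor, current]
        back[i] = np.argmin(total, axis=0)        # best predecessor per label
        cost = total.min(axis=0) + data_cost[i]
    best = [int(np.argmin(cost))]
    for i in range(n - 1, 0, -1):                 # backtrack the optimum
        best.append(int(back[i][best[-1]]))
    return best[::-1]
```

On the thesis's 2-D/3-D grids with hyperedges the graph has cycles, which is exactly why Dual Decomposition and ADMM are needed instead of this exact chain solver.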

    Combinatorial Solutions for Shape Optimization in Computer Vision

    This thesis aims at solving so-called shape optimization problems, i.e. problems where the shape of some real-world entity is sought, by applying combinatorial algorithms. I present several advances in this field, all of them based on energy minimization. The addressed problems become more intricate over the course of the thesis, starting from problems that are solved globally, then turning to problems for which no global solutions are yet known. The first two chapters treat segmentation problems where the considered grouping criterion is directly derived from the image data; that is, the respective data terms do not involve any parameters to estimate. These problems are solved globally. The first of these chapters treats the problem of unsupervised image segmentation, where apart from the image there is no other user input. Here I focus on a contour-based method and show how to integrate curvature regularity into a ratio-based optimization framework. The arising optimization problem is reduced to optimizing over the cycles in a product graph. This problem can be solved globally in polynomial, effectively linear, time. As a consequence, the method does not depend on initialization, and translational invariance is achieved. This is joint work with Daniel Cremers and Simon Masnou. I then proceed to the integration of shape knowledge into the framework, while keeping translational invariance. This problem is again reduced to cycle-finding in a product graph. Being based on the alignment of shape points, the method actually uses a more sophisticated shape measure than most local approaches and still provides global optima. It readily extends to tracking problems and solves some of them in real time. I present an extension to highly deformable shape models which can be included in the global optimization framework. This method simultaneously makes it possible to decompose a shape into a set of deformable parts, based only on the input images. This is joint work with Daniel Cremers. In the second part, segmentation is combined with so-called correspondence problems, i.e. the underlying grouping criterion is now based on correspondences that have to be inferred simultaneously. That is, in addition to inferring the shapes of objects, one now also tries to put the points in several images into correspondence. The arising problems become more intricate and are no longer optimized globally. This part is divided into two chapters. The first treats the topic of real-time motion segmentation, where objects are identified based on the observation that the respective points in the video move coherently. Rather than pre-estimating motion, a single energy functional is minimized via alternating optimization. The main novelty lies in the real-time capability, which is achieved by exploiting a fast combinatorial segmentation algorithm. The results are furthermore improved by employing a probabilistic data term. This is joint work with Daniel Cremers. The final chapter presents a method for high-resolution motion layer decomposition and was developed together with Daniel Cremers and Thomas Pock. Layer decomposition methods support the notion of a scene model, which makes it possible to model occlusion and enforce temporal consistency. The contributions are twofold: from a practical point of view, the proposed method recovers fine-detailed layer images by minimizing a single energy. This is achieved by integrating a super-resolution method into the layer decomposition framework. From a theoretical viewpoint, the proposed method introduces layer-based regularity terms as well as a graph-cut-based scheme to solve for the layer domains. The latter is combined with powerful continuous convex optimization techniques into an alternating minimization scheme. Lastly, I want to mention that a significant part of this thesis is devoted to the recent trend of exploiting parallel architectures, in particular graphics cards: many combinatorial algorithms are easily parallelized. In Chapter 3 we will see a case where the standard algorithm is hard to parallelize in general, but easy for the problem instances at hand.
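The ratio-based optimization mentioned above reduces to finding a minimum-ratio cycle in a graph. A generic sketch of one classical approach, binary search over the ratio with a Bellman-Ford negative-cycle test; this illustrates the problem class only, not the thesis's product-graph construction or its effectively linear-time algorithm:

```python
def has_negative_cycle(n, edges, lam):
    """Bellman-Ford negative-cycle test on edge weights a - lam * b,
    using a virtual source connected to all n nodes (dist starts at 0)."""
    dist = [0.0] * n
    for _ in range(n):
        changed = False
        for u, v, a, b in edges:
            w = a - lam * b
            if dist[u] + w < dist[v] - 1e-12:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return False  # converged: no negative cycle
    return True  # still relaxing after n rounds: negative cycle exists

def min_ratio_cycle(n, edges, lo=0.0, hi=100.0, iters=60):
    """Binary search for min over cycles C of sum(a_e) / sum(b_e).
    Assumes all b_e > 0 and the optimum lies in (lo, hi): a negative
    cycle under weights a - lam*b exists iff some cycle has ratio < lam."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if has_negative_cycle(n, edges, mid):
            hi = mid
        else:
            lo = mid
    return hi
```

Usage: edges are `(u, v, a, b)` tuples; in the curvature-segmentation setting, `a` would play the role of a contour cost and `b` a normalising length term.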

    Combining Features and Semantics for Low-level Computer Vision

    Visual perception of depth and motion plays a significant role in understanding and navigating the environment. Reconstructing outdoor scenes in 3D and estimating motion from video cameras are of utmost importance for applications like autonomous driving. The corresponding problems in computer vision have witnessed tremendous progress over the last decades, yet some aspects remain challenging today. Striking examples are reflective and textureless surfaces, or large motions, which cannot easily be recovered using traditional local methods. Further challenges include occlusions, large distortions and difficult lighting conditions. In this thesis, we propose to overcome these challenges by modeling non-local interactions that leverage semantics and contextual information. Firstly, for binocular stereo estimation, we propose to regularize over larger areas of the image using object-category-specific disparity proposals, which we sample using inverse graphics techniques based on a sparse disparity estimate and a semantic segmentation of the image. The disparity proposals encode the fact that objects of certain categories are not arbitrarily shaped but typically exhibit regular structures. We integrate them as a non-local regularizer for the challenging object class 'car' into a superpixel-based graphical model and demonstrate its benefits, especially in reflective regions. Secondly, for 3D reconstruction, we leverage the fact that the larger the reconstructed area, the more likely objects of similar type and shape are to occur in the scene. This is particularly true for outdoor scenes, where buildings and vehicles often suffer from missing texture or reflections but share similarity in 3D shape. We take advantage of this shape similarity by localizing objects using detectors and jointly reconstructing them while learning a volumetric model of their shape. This makes it possible to reduce noise while completing missing surfaces, as objects of similar shape benefit from all observations for the respective category. Evaluations with respect to LIDAR ground truth on a novel, challenging suburban dataset show the advantages of modeling structural dependencies between objects. Finally, motivated by the success of deep learning techniques in matching problems, we present a method for learning context-aware features for solving optical flow using discrete optimization. Towards this goal, we present an efficient way of training a context network with a large receptive field size on top of a local network, using dilated convolutions on patches. We perform feature matching by comparing each pixel in the reference image to every pixel in the target image, utilizing fast GPU matrix multiplication. The matching cost volume from the network's output forms the data term for discrete MAP inference in a pairwise Markov random field. Extensive evaluations reveal the importance of context for feature matching.
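The all-pairs comparison described for feature matching reduces to a single matrix multiplication over per-pixel feature vectors. A hedged NumPy sketch of that reduction (the thesis runs it on the GPU and learns the features with a network; the function name and cosine-cost choice are assumptions for illustration):

```python
import numpy as np

def matching_cost_volume(feat_ref, feat_tgt):
    """Compare every pixel of the reference image with every pixel of
    the target image in one matrix multiplication over feature vectors.

    feat_ref : (h, w, d) feature map of the reference image
    feat_tgt : (H, W, d) feature map of the target image
    returns  : (h, w, H, W) cost volume (lower = better match)
    """
    h, w, d = feat_ref.shape
    H, W, _ = feat_tgt.shape
    ref = feat_ref.reshape(h * w, d)
    tgt = feat_tgt.reshape(H * W, d)
    # Normalised features -> dot product is cosine similarity;
    # negate similarity to obtain a matching cost.
    ref = ref / (np.linalg.norm(ref, axis=1, keepdims=True) + 1e-12)
    tgt = tgt / (np.linalg.norm(tgt, axis=1, keepdims=True) + 1e-12)
    return -(ref @ tgt.T).reshape(h, w, H, W)
```

The resulting volume would then serve as the data term of the pairwise MRF on which discrete MAP inference is run.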

    Exploring the Internal Statistics: Single Image Super-Resolution, Completion and Captioning

    Image enhancement has drawn increasing attention for improving image quality and interpretability. It aims to modify images to achieve a better perception for the human visual system, or a more suitable representation for further analysis, in a variety of applications such as medical imaging, remote sensing, and video surveillance. Depending on the attributes of the given input images, enhancement tasks vary: noise removal, deblurring, resolution enhancement, prediction of missing pixels, etc. The latter two are usually referred to as image super-resolution and image inpainting (or completion). Image super-resolution and completion are numerically ill-posed problems. Multi-frame-based approaches make use of the presence of aliasing in multiple frames of the same scene; for cases where only one input image is available, it is extremely challenging to estimate the unknown pixel values. In this dissertation, we target single-image super-resolution and completion by exploring the internal statistics within the input image and across scales. An internal gradient-similarity-based single-image super-resolution algorithm is first presented. We then demonstrate that the proposed framework can be naturally extended to accomplish super-resolution and completion simultaneously. Afterwards, a hybrid learning-based single-image super-resolution approach is proposed that benefits from both external and internal statistics. This framework hinges on image-level hallucination from externally learned regression models, as well as gradient-level pyramid self-awareness for refining edges and textures. The framework is then employed to break the resolution limitation of passive microwave imagery and to boost the tracking accuracy of sea ice movements. To extend our research to the quality enhancement of depth maps, a novel system is presented to handle circumstances where only one pair of registered low-resolution intensity and depth images is available. High-quality RGB and depth images are generated by the system. Extensive experimental results demonstrate the effectiveness of all the proposed frameworks, both quantitatively and qualitatively. Different from image super-resolution and completion, which belong to low-level vision research, image captioning is a high-level vision task related to the semantic understanding of an input image. It is a natural task for human beings; however, image captioning remains challenging from a computer vision point of view, especially because the task itself is ambiguous. In principle, descriptions of an image can address any visual aspects of it, varying from object attributes to scene features, or even refer to objects that are not depicted and to hidden interactions or connections that require common-sense knowledge to analyze. Learning-based image captioning is therefore in general a data-driven task that relies on the training dataset. Descriptions in the majority of existing image-sentence datasets are generated by humans under specific instructions. Real-world sentence data is rarely used directly for training, since it is sometimes noisy and unbalanced, which makes it 'imperfect' for training the image captioning task. In this dissertation, we present a novel image captioning framework to deal with uncontrolled image-sentence datasets in which descriptions may be strongly or weakly correlated with the image content and may be of arbitrary length. A self-guiding learning process is proposed to fully reveal the internal statistics of the training dataset, to view the learning process globally, and to generate descriptions that are syntactically correct and semantically sound.
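The 'internal statistics across scales' that single-image super-resolution exploits is the observation that small patches of a natural image tend to recur in downscaled copies of the same image. A toy NumPy sketch of that cross-scale patch search, assuming even image dimensions and a brute-force SSD search (the dissertation's gradient-similarity algorithm is more elaborate; all names are invented):

```python
import numpy as np

def best_internal_match(image, y, x, patch=5):
    """Find the patch in a 2x-downscaled copy of `image` most similar
    (by sum of squared differences) to the patch at (y, x) in the
    full-resolution image. Assumes even image dimensions."""
    # 2x2 box-filter downscale of the input image.
    small = 0.25 * (image[0::2, 0::2] + image[1::2, 0::2]
                    + image[0::2, 1::2] + image[1::2, 1::2])
    query = image[y:y + patch, x:x + patch]
    best, best_pos = np.inf, None
    for i in range(small.shape[0] - patch + 1):      # brute-force scan
        for j in range(small.shape[1] - patch + 1):
            ssd = np.sum((small[i:i + patch, j:j + patch] - query) ** 2)
            if ssd < best:
                best, best_pos = ssd, (i, j)
    return best_pos, best
```

In an actual super-resolution pipeline the matched low-resolution patch points back to a higher-resolution "parent" patch in the original image, which supplies the missing high-frequency detail.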