
    Visibility in underwater robotics: Benchmarking and single image dehazing

    Dealing with limited underwater visibility is one of the most important challenges in autonomous underwater robotics. Light transmission in the water medium degrades images, making interpretation of the scene difficult and consequently compromising the whole intervention. This thesis contributes by analysing, through benchmarking, the impact of underwater image degradation on commonly used vision algorithms. An online framework for underwater research that makes it possible to analyse results under different conditions is presented. Finally, motivated by the results of experimentation with the developed framework, a deep learning solution is proposed that is capable of dehazing a degraded image in real time, restoring the original colors of the image.
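    The thesis's own network is not reproduced here, but the degradation it inverts is commonly written with the atmospheric image-formation model I(x) = J(x) * t(x) + A * (1 - t(x)). Below is a minimal sketch of the algebraic inversion of that model, assuming the transmission map t and veiling light A have already been estimated by some other means (all names are illustrative, not from the thesis):

```python
import numpy as np

def dehaze(image, transmission, veiling_light, t_min=0.1):
    """Invert the simplified image-formation model
    I(x) = J(x) * t(x) + A * (1 - t(x))
    to recover scene radiance J from a degraded image I.

    image:         HxWx3 float array in [0, 1], the degraded image I
    transmission:  HxW float array in (0, 1], the estimated transmission t
    veiling_light: length-3 array, the estimated veiling light A
    """
    t = np.clip(transmission, t_min, 1.0)[..., np.newaxis]  # floor t to avoid blow-up
    A = np.asarray(veiling_light).reshape(1, 1, 3)
    J = (image - A) / t + A  # algebraic inversion of the model
    return np.clip(J, 0.0, 1.0)
```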

    Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations

    Our study aims to review and analyze the most relevant studies in the image dehazing field. Several aspects were deemed necessary to provide a broad understanding of the surveyed literature: the datasets used, the challenges other researchers have faced, motivations, and recommendations for diminishing the obstacles in the reported literature. A systematic protocol is employed to search all relevant articles on image dehazing, with variations in keywords, in addition to searching for evaluation and benchmark studies. The search spans three online databases, namely IEEE Xplore, Web of Science (WOS), and ScienceDirect (SD), from 2008 to 2021; these indices were selected because they are sufficient in terms of coverage. After applying the inclusion and exclusion criteria, 152 articles form the final set. A total of 55 of the 152 articles focused on various studies that conducted image dehazing, and 13 of the 152 covered review papers based on scenarios and general overviews. Most of the included articles (84/152) centered on the development of image dehazing algorithms for real-time scenarios. Image dehazing removes unwanted visual effects and is often considered an image enhancement technique; it requires a fully automated algorithm that works in real-time outdoor applications, a reliable evaluation method, and datasets covering different weather conditions. Many relevant studies have been conducted to meet these critical requirements. We also conducted an experimental comparison of various image dehazing algorithms using objective image quality assessment. In conclusion, unlike other review papers, our study distinctly reflects different observations on image dehazing areas. We believe that the results of this study can serve as a useful guideline for practitioners looking for a comprehensive view of image dehazing.
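    The review's experimental comparison relies on objective image quality assessment; as a hedged illustration of how such full-reference scores are typically computed (PSNR and SSIM via scikit-image, not necessarily the exact metrics of the paper), assuming a haze-free ground-truth image is available:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_dehazing(reference, dehazed):
    """Full-reference quality of a dehazed image against its haze-free
    ground truth; both are HxWx3 float arrays scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, dehazed, data_range=1.0)
    ssim = structural_similarity(reference, dehazed, channel_axis=-1, data_range=1.0)
    return psnr, ssim
```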

    Multiple View Geometry For Video Analysis And Post-production

    Multiple view geometry is the foundation of an important class of computer vision techniques for simultaneous recovery of camera motion and scene structure from a set of images. There are numerous important applications in this area, including video post-production, scene reconstruction, registration, surveillance, tracking, and segmentation. In video post-production, the topic of this dissertation, computer analysis of the motion of the camera can replace the manual methods currently used for correctly aligning an artificially inserted object in a scene. However, existing single view methods typically require multiple vanishing points, and therefore fail when only one vanishing point is available. In addition, current multiple view techniques, making use of either epipolar geometry or the trifocal tensor, do not fully exploit the properties of constant or known camera motion. Finally, there is no general solution to the problem of synchronizing N video sequences of distinct general scenes captured by cameras undergoing similar ego-motions, which is a necessary step for video post-production across different input videos. This dissertation proposes several advancements that overcome these limitations and uses them to develop an efficient framework for video analysis and post-production with multiple cameras. In the first part of the dissertation, novel inter-image constraints are introduced that are particularly useful for scenes where minimal information is available. This result extends the current state of the art in single view geometry to situations where only one vanishing point is available. The property of constant or known camera motion is also exploited for applications such as calibration of a network of cameras in video surveillance systems, and Euclidean reconstruction from turn-table image sequences in the presence of zoom and focus. We then propose a new framework for the estimation and alignment of camera motions, including both simple (panning, tracking, and zooming) and complex (e.g. hand-held) camera motions. The accuracy of these results is demonstrated by applying our approach to video post-production applications such as video cut-and-paste and shadow synthesis. As realistic image-based rendering problems, these applications require extreme accuracy in the estimation of camera geometry, the position and orientation of the light source, and the photometric properties of the resulting cast shadows. In each case, the theoretical results are fully supported and illustrated by both numerical simulations and thorough experimentation on real data.
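    The dissertation's own inter-image constraints are not reproduced here; as a minimal sketch of the epipolar-geometry machinery it builds on, here is the standard estimation of the fundamental matrix from point correspondences with OpenCV (an assumed baseline pipeline, not the dissertation's method):

```python
import cv2
import numpy as np

def estimate_epipolar_geometry(pts1, pts2):
    """Estimate the fundamental matrix F between two views, so that
    x2^T F x1 = 0 for corresponding homogeneous points x1, x2.

    pts1, pts2: Nx2 float32 arrays of matched pixel coordinates.
    """
    F, mask = cv2.findFundamentalMat(
        pts1, pts2, cv2.FM_RANSAC,
        ransacReprojThreshold=1.0,  # max point-to-epipolar-line distance (px)
        confidence=0.99)
    return F, mask.ravel().astype(bool)  # F plus the RANSAC inlier mask
```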

    Displays for Exploration and Comparison of Nested or Intersecting Surfaces

    The surfaces of real-world objects almost never intersect, so the human visual system is ill-prepared to deal with this rare case. However, comparing two similar models or approximations of the same surface can require simultaneous estimation of each surface's global shape, estimation of point or feature correspondences, and local comparisons of shape and distance between the two surfaces. A key supposition of this work is that these relationships between intersecting surfaces, especially the local relationships, are best understood when the surfaces are displayed such that they do intersect. For instance, the relationships between radiation iso-dose levels and healthy and tumorous tissue are best studied in context with all intersections clearly shown. This dissertation presents new visualization techniques for general layered surfaces, and intersecting surfaces in particular, designed for scientists whose problems require such display. The techniques are enabled by a union/intersection refactoring of intersecting surfaces that converts them into nested surfaces, which are more easily treated for visualization. The techniques are aimed at exploratory visualization, where accurate performance of a variety of tasks is desirable, not just the best technique for one particular task. User studies, with tasks selected based on interviews with scientists, are used to evaluate the effectiveness of the new techniques and to compare them to some existing, common techniques. The studies show that participants performed the user study tasks more accurately with the new techniques than with the existing ones.
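    The dissertation performs the union/intersection refactoring on surface geometry; a minimal sketch of the same idea on implicit (signed-distance) representations, where it reduces to pointwise min/max, may clarify why the result is nested (this is an illustrative analogy, not the dissertation's mesh-based method):

```python
import numpy as np

def union_sdf(f, g):
    """Union of two implicit surfaces given as signed-distance samples
    (negative inside, positive outside); its boundary encloses both."""
    return np.minimum(f, g)

def intersection_sdf(f, g):
    """Intersection of the same two surfaces; its boundary always lies
    inside the union's boundary, so the refactored pair can be rendered
    as nested, layered surfaces instead of intersecting ones."""
    return np.maximum(f, g)
```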

    Screen Genealogies

    Against the grain of the growing literature on screens, *Screen Genealogies* argues that the present excess of screens cannot be understood as an expansion and multiplication of either the movie screen or the video display. Rather, screens continually exceed the optical histories in which they are most commonly inscribed. As contemporary screens become increasingly decomposed into a distributed field of technologically interconnected surfaces and interfaces, we more readily recognize the deeper spatial and environmental interventions that have long been a property of screens. For most of its history, a screen was a filter, a divide, a shelter, or a camouflage. A genealogy stressing transformation and descent rather than origins and roots emphasizes a deeper set of intersecting and competing definitions of the screen, enabling new thinking about what the screen might yet become.

    Functional Polymer Solutions and Gels–Physics and Novel Applications

    “Functional Polymer Solutions and Gels—Physics and Novel Applications” contains a broad range of articles in this vast field of polymer and soft matter science. It offers insight into the field by highlighting how sticky (non-covalent) chemical bonds can assemble a seemingly water-like liquid into a gel, how ionic liquids influence the gelation behavior of poly(N-isopropylacrylamide), and how the molecular composition of functional copolymers is reflected in their temperature-responsiveness. These physical studies are complemented by theoretical work on drag reduction. Drug release, with improved control over how fast it occurs or how it depends on an external trigger, and antibacterial properties were also the topic of several works. Biomedical applications, on how cell growth can be influenced and how vessels in biological systems, e.g., blood vessels, can be improved by functional polymers, were complemented by papers on tomography using gels. Along entirely different lines, other contributions address how asphalt can be improved and how functional polymers can be used for the enrichment and removal of substances. Together, these papers are a good representation of the whole area of functional polymers.

    Model-Based Estimation of Meteorological Visibility in the Context of Automotive Camera Systems

    Highly integrated and increasingly complex video-based driver assistance systems are developing rapidly. Following the trend towards autonomous driving, they must operate not only under advantageous but also under adverse conditions, including sight impairments caused by atmospheric aerosols such as fog or smog. Thoroughly analyzing the optical properties of these aerosols is an important part of environmental understanding. The aim of this thesis is to develop models and algorithms to estimate meteorological visibility in homogeneous daytime fog. The models for light transport through fog are carefully derived from the theory of radiative transfer. In addition to Koschmieder's well-established model for horizontal vision, a recursively defined sequence of higher-order models is introduced that yields arbitrarily good approximations to the solutions of the radiative boundary problem. Based on the radiative transfer models, visibility estimation algorithms are proposed that are applicable to data captured by a driver assistance front camera. Each of these algorithms requires recording the luminances of objects observed at distinct distances. This data can be acquired from moving objects being tracked as well as from depth-extended homogeneous objects such as the road. The resulting algorithms complement each other across different road traffic scenarios and environmental conditions. All algorithms are extensively discussed and optimized with respect to run-time performance in order to make them applicable for real-time purposes. The analysis shows that the proposed algorithms are a useful addition to modern driver assistance cameras.
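    Koschmieder's model for horizontal vision relates the luminance L(d) of an object seen at distance d to the extinction coefficient beta: L(d) = L0 * e^(-beta*d) + L_inf * (1 - e^(-beta*d)), and the 5% contrast threshold gives the meteorological visibility V = -ln(0.05)/beta, roughly 3/beta. Below is a minimal sketch of estimating beta from the luminances of one tracked object, as a plain least-squares fit rather than the thesis's optimized algorithms:

```python
import numpy as np
from scipy.optimize import curve_fit

def koschmieder(d, L0, L_inf, beta):
    """Koschmieder's law: luminance at distance d of an object with
    intrinsic luminance L0, against horizon luminance L_inf, in fog
    with extinction coefficient beta."""
    return L0 * np.exp(-beta * d) + L_inf * (1.0 - np.exp(-beta * d))

def estimate_visibility(distances, luminances):
    """Fit Koschmieder's model to (distance, luminance) samples of a
    tracked object; return meteorological visibility V = -ln(0.05)/beta."""
    (L0, L_inf, beta), _ = curve_fit(
        koschmieder, distances, luminances,
        p0=(luminances[0], luminances[-1], 0.01),
        bounds=([0.0, 0.0, 1e-6], [np.inf, np.inf, 1.0]))
    return -np.log(0.05) / beta
```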

    3D Reconstruction from Stereo Pairs in Degraded Conditions (Reconstruction 3D à partir de paires stéréoscopiques en conditions dégradées)

    Stereo reconstruction serves many outdoor applications and thus sometimes faces foggy weather. The quality of reconstruction by state-of-the-art algorithms is then degraded, as contrast is reduced with distance because of scattering, so they produce incorrect results beyond a certain distance. However, as shown by single-image defogging algorithms, fog provides an extra depth cue in the grey level of faraway objects. Many of these algorithms are based on Koschmieder's law, which links the image intensity, the original intensity of the scene, and the depth. One problem of monocular contrast restoration is the ambiguity between the thickness of the atmospheric veil, which is linked to depth, and the lighter or darker color of the scene itself, which makes the problem ill-posed: arbitrary constraints must be added, the solution is only approximate, and the estimated depth is often very different from the true depth of the scene, particularly at short range where the veil is thin. From this observation, the complementarity of stereo reconstruction and contrast restoration becomes apparent. Our idea is thus to take advantage of both the stereo and atmospheric-veil depth cues to achieve better stereo reconstructions in foggy weather; to our knowledge, this subject had not previously been investigated by the computer vision community. We propose a Markov random field model of the joint stereo reconstruction and defogging problem which can be optimized iteratively using the α-expansion algorithm. The outputs are a dense disparity map and an image with restored contrast. Thanks to the depth cue from stereovision, the proposed model restores contrast accurately at short range, and the restored images in turn facilitate reconstruction at long range. The model is evaluated on synthetic images. This evaluation shows that the proposed method achieves very good results on both stereo reconstruction and defogging compared to standard stereo reconstruction and single-image defogging: the quality of the generated depth maps improves significantly over classical algorithms that do not account for fog, and the restorations are close in quality to the state of the art.
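    The paper's joint MRF formulation is not reproduced here, but the complementarity it exploits can be sketched: once stereo provides a disparity map, depth d = f * B / disparity fixes the transmission t = e^(-beta*d) in Koschmieder's law, so the contrast restoration is no longer ill-posed as in the single-image case. A hedged illustration follows (parameter names are assumptions, and this sequential version ignores the paper's joint optimization):

```python
import numpy as np

def defog_with_disparity(image, disparity, airlight, beta,
                         focal_px, baseline_m, t_min=0.05):
    """Restore contrast in a foggy image using stereo depth.

    image:     HxWx3 float array in [0, 1], the foggy image I
    disparity: HxW float array in pixels (0 where unmatched)
    airlight:  length-3 array, estimated fog/sky luminance A
    beta:      atmospheric extinction coefficient (1/m)
    """
    # Depth from disparity: d = f * B / disp (infinite where unmatched).
    depth = np.where(disparity > 0,
                     focal_px * baseline_m / np.maximum(disparity, 1e-6),
                     np.inf)
    # Transmission from Koschmieder's law, floored to keep the inversion stable.
    t = np.clip(np.exp(-beta * depth), t_min, 1.0)[..., np.newaxis]
    A = np.asarray(airlight).reshape(1, 1, 3)
    return np.clip((image - A) / t + A, 0.0, 1.0)  # J = (I - A)/t + A
```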