
    Sea Ice Field Analysis Using Machine Vision

    Sea ice field analysis is motivated by several application areas, such as environmental monitoring, logistics and ship maintenance. Among other methods, local ice field analysis from ship-based visual observations is currently done by human volunteers and is therefore liable to human error and subjective interpretation. The goal of the thesis is to develop and implement a complete process for obtaining the dimensions, distribution and concentration of sea-ice floes, aiming to assist and improve part of the aforementioned visual observations. The process involves numerous organized steps that take advantage of techniques from image processing (lens calibration, vignetting removal and orthorectification), robotics (transformation frames) and machine vision (thresholding, texture analysis and morphological operations). An experimental system setup for collecting the required information is provided as well, which includes a machine vision camera for image acquisition, an IMU device for determining the dynamic attitude of the camera with respect to the world, two GPS sensors providing redundant positioning and clock data, and a desktop computer used as the main logging platform for all the collected data. Through a number of experiments, the proposed system setup and image analysis methods have proved to provide promising results in pack ice and brash ice conditions, encouraging further research on the topic. Further improvements should target the accuracy of ice floe detection and the over- and under-segmentation of the detected sea-ice floes.
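
    The thresholding-and-morphology stage described above can be illustrated with a short sketch. This is a minimal example using OpenCV, not the thesis's actual pipeline: Otsu thresholding stands in for whichever thresholding method was used, and the kernel size and minimum floe area are illustrative parameters.

```python
# Minimal sketch of a floe-segmentation stage: global thresholding,
# morphological cleanup, and per-floe statistics. Parameters are illustrative.
import cv2
import numpy as np

def segment_floes(gray, kernel_size=5, min_area_px=50):
    """gray: 8-bit grayscale image of the ice field."""
    # Otsu thresholding separates bright ice from dark water.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Opening removes speckle noise; closing fills small holes inside floes.
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, k)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, k)
    # Connected components give one label per candidate floe.
    _, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    areas = stats[1:, cv2.CC_STAT_AREA]        # skip background label 0
    areas = areas[areas >= min_area_px]        # drop noise blobs
    concentration = mask.astype(bool).mean()   # ice fraction of the image
    return labels, areas, concentration
```

    From the labeled floes, the floe size distribution follows from the area histogram, and the ice concentration is simply the fraction of pixels classified as ice.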

    Improving SLI Performance in Optically Challenging Environments

    The construction of 3D models of real-world scenes using non-contact methods is an important problem in computer vision. Some of the more successful methods belong to a class of techniques called structured light illumination (SLI). While SLI methods are generally very successful, there are cases where their performance is poor. Examples include scenes with a high dynamic range in albedo or scenes with strong interreflections. These scenes are referred to as optically challenging environments. The work in this dissertation is aimed at improving SLI performance in optically challenging environments. A new method of high dynamic range imaging (HDRI) based on pixel-by-pixel Kalman filtering is developed. Using objective metrics, it is shown to achieve as much as a 9.4 dB improvement in signal-to-noise ratio and as much as a 29% improvement in radiometric accuracy over a classic method. Quality checks are developed to detect and quantify multipath interference and other quality defects using phase measuring profilometry (PMP). Techniques are established to improve SLI performance in the presence of strong interreflections. Approaches in compressed sensing are applied to SLI, and interreflections in a scene are modeled using SLI. Several different applications of this research are also discussed.
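
    The pixel-by-pixel Kalman filtering idea behind the HDRI method can be sketched as follows. This is a minimal scalar-Kalman illustration under an assumed measurement model z = t·x + noise, not the dissertation's exact formulation; saturation masking and exposure-dependent noise are omitted for brevity.

```python
import numpy as np

def kalman_hdr(frames, exposures, read_noise_var=4.0):
    """Fuse differently exposed frames into one radiance estimate with a
    per-pixel scalar Kalman filter. frames: list of float image arrays;
    each frame is modeled as z = t * x + noise, where x is the scene
    radiance and t the exposure time."""
    x = frames[0] / exposures[0]          # initial radiance estimate
    P = np.full_like(x, 1e6)              # large initial uncertainty
    for z, t in zip(frames[1:], exposures[1:]):
        R = read_noise_var                # measurement noise variance
        H = t                             # observation model z = H * x
        K = P * H / (H * P * H + R)       # per-pixel Kalman gain
        x = x + K * (z - H * x)           # update radiance estimate
        P = (1.0 - K * H) * P             # update error covariance
    return x
```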

    Visibility in underwater robotics: Benchmarking and single image dehazing

    Dealing with underwater visibility is one of the most important challenges in autonomous underwater robotics. Light transmission in the water medium degrades images, making interpretation of the scene difficult and consequently compromising the whole intervention. This thesis contributes by analysing the impact of underwater image degradation on commonly used vision algorithms through benchmarking. An online framework for underwater research that makes it possible to analyse results under different conditions is presented. Finally, motivated by the results of experimentation with the developed framework, a deep learning solution is proposed that is capable of dehazing a degraded image in real time, restoring the original colors of the image.
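
    The thesis's dehazing solution is a deep network, but the degradation it inverts is commonly written as the atmospheric scattering model I = J·t + A·(1 − t), where J is the scene radiance, t the transmission and A the veiling light. As a point of reference, here is a classical, non-learned inversion in the style of the dark channel prior (He et al.); all parameters are illustrative and this is not the method proposed in the thesis.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, patch=15, omega=0.95, t_min=0.1):
    """Invert the scattering model I = J*t + A*(1 - t) using a dark-channel
    estimate of the transmission t. img: float RGB array in [0, 1]."""
    # Dark channel: per-pixel minimum over channels and a local patch.
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Airlight A: mean color of the brightest 0.1% of dark-channel pixels.
    flat = dark.ravel()
    idx = flat.argsort()[-max(1, flat.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission from the normalized dark channel, clipped for stability.
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t_min, 1.0)[..., None]
    # Recover the scene radiance J.
    return np.clip((img - A) / t + A, 0.0, 1.0)
```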

    Long range facial image acquisition and quality

    Abstract: This chapter introduces issues in long range facial image acquisition, together with measures of image quality and their usage. Section 1, on image acquisition for face recognition, discusses lighting, sensor, lens and blur issues, which affect short-range biometrics but are more pronounced in long-range biometrics. Section 2 introduces the design of controlled experiments for long range face recognition and explains why they are needed. Section 3 introduces some of the weather and atmospheric effects that occur in long-range imaging, with numerous examples. Section 4 addresses measurements of "system quality", including image-quality measures and their use in predicting face recognition algorithm performance. That section introduces the concept of failure prediction and techniques for analyzing different "quality" measures, and it ends with a discussion of post-recognition "failure prediction" and its potential role as a feedback mechanism in acquisition. Each section includes a collection of open-ended questions to challenge the reader to think about the concepts more deeply. Some of the questions are answered after they are introduced; others are left as an exercise for the reader.

    1 Image Acquisition
    Before any recognition can even be attempted, the system must acquire an image of the subject with sufficient quality and resolution to detect and recognize the face. The issues examined in this section are lighting, image/sensor resolution, field of view, depth of field, and the effects of motion blur.
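
    As one concrete example of the kind of image-quality measure discussed in Section 4, a simple no-reference sharpness score such as the variance of the Laplacian can flag blurred acquisitions before recognition is attempted. This is an illustrative sketch, not a measure taken from the chapter; the threshold is a placeholder.

```python
import cv2

def laplacian_sharpness(gray):
    # Variance of the Laplacian: low values indicate defocus or motion blur.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def acceptable_for_recognition(gray, threshold=100.0):
    # Illustrative gating: reject frames too blurry for face recognition.
    return laplacian_sharpness(gray) >= threshold
```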

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem serving various sectors of society, from entertainment to journalism to politics. Undoubtedly, the advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to give practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.

    Automatizing chromatic quality assessment for cultural heritage image digitization

    In the context of the digitization of photographs and other documents with graphical value, cultural heritage organizations need to guarantee that the stored digital image is a faithful representation of the physical image at both the physical and the perceptual level. On the physical level, image quality can be measured objectively in a straightforward way by measuring certain physical attributes of the image, as well as by measuring how distortions of the image affect those attributes. On the perceptual level, however, image quality should correspond to the perception that a human expert would experience when observing the physical image under determined and controlled conditions. In this paper we address the problem of image quality assessment (IQA) in the context of cultural heritage digitization by applying machine learning (ML). In particular, we explore the possibility of creating a decision tree that mimics the response of a cultural heritage expert when observing cultural heritage images.
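
    A minimal sketch of the decision-tree idea using scikit-learn is shown below. The feature matrix X (per-image physical measurements) and the labels y (the expert's judgments) are placeholders for the paper's actual dataset, and the tree depth is illustrative.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# X: per-image physical measurements (e.g., color error, noise, resolution);
# y: the expert's accept/reject judgment for each digitized image.
# Both are placeholders standing in for the paper's real dataset.
def train_expert_mimic(X, y, max_depth=4):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    tree.fit(X_tr, y_tr)
    # Held-out agreement with the expert's judgments.
    print(f"agreement with expert: {tree.score(X_te, y_te):.2f}")
    return tree
```

    A shallow tree has the added benefit that its split rules can be read off and compared against the criteria the expert reports using.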

    Multimedia Forensic Analysis via Intrinsic and Extrinsic Fingerprints

    Digital imaging has experienced tremendous growth in recent decades, and digital images are used in a growing number of applications. With the increasing popularity of imaging devices and the availability of low-cost image editing software, the integrity of image content can no longer be taken for granted. A number of forensic and provenance questions often arise, including how an image was generated; where an image came from; and what has been done to the image since its creation, by whom, when and how. This thesis presents two different sets of techniques to address the problem via intrinsic and extrinsic fingerprints.

    The first part of this thesis introduces a new methodology based on intrinsic fingerprints for the forensic analysis of digital images. The proposed method is motivated by the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on the final output data. We present methods to identify these intrinsic fingerprints via component forensic analysis, and demonstrate that these traces can serve as useful features for forensic applications such as building a robust device identifier and identifying potential technology infringement or licensing violations. Building upon component forensics, we develop a general authentication and provenance framework to reconstruct the processing history of digital images. We model post-device processing as a manipulation filter and estimate its coefficients using a linear time-invariant approximation. The absence of in-device fingerprints, the presence of new post-device fingerprints, or any inconsistency in the estimated fingerprints across different regions of the test image all suggest that the image is not a direct device output and has possibly undergone some kind of processing, such as content tampering or steganographic embedding, after device capture.

    While component forensics is widely applicable in a number of scenarios, it has performance limitations. To understand the fundamental limits of component forensics, we develop a new theoretical framework based on estimation and pattern classification theories, and define formal notions of forensic identifiability and classifiability of components. We show that the proposed framework provides a solid foundation for the study of information forensics and helps design optimal input patterns that improve parameter estimation accuracy via semi non-intrusive forensics.

    The final part of the thesis investigates a complementary extrinsic approach via image hashing that can be used for content-based image authentication and other media security applications. We show that the proposed hashing algorithm is robust to common signal processing operations, and we present a systematic evaluation of the security of the image hash against estimation and forgery attacks.
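
    The manipulation-filter estimation step can be illustrated with a toy sketch: given a reference device output and a test signal, the coefficients of a linear time-invariant filter relating them can be estimated by least squares. This 1-D FIR version illustrates the general idea only and is not the thesis's actual estimator; 2-D estimation on image patches is analogous.

```python
import numpy as np

def estimate_lti_filter(reference, observed, taps=9):
    """Least-squares estimate of a 1-D FIR 'manipulation filter' h such
    that observed ~= reference * h (convolution). reference and observed
    are 1-D float arrays, e.g. corresponding image rows."""
    # Build the convolution matrix: each row holds the reversed window of
    # reference samples that produces one observed sample.
    n = len(reference) - taps + 1
    A = np.stack([reference[i:i + taps][::-1] for i in range(n)])
    b = observed[taps - 1:taps - 1 + n]
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return h
```

    An estimated filter close to an identity (delta) response is consistent with a direct device output; a markedly different response, or responses that disagree across image regions, point to post-device processing.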