
    Focusing on out-of-focus: assessing defocus estimation algorithms for the benefit of automated image masking

    Acquiring photographs as input for an image-based modelling pipeline is less trivial than often assumed. Photographs should be correctly exposed, cover the subject sufficiently from all possible angles, have the required spatial resolution, be devoid of any motion blur, exhibit accurate focus, and feature an adequate depth of field. The last four characteristics all determine the "sharpness" of an image, and the photogrammetric, computer vision, and hybrid photogrammetric computer vision communities all assume that the object to be modelled is depicted "acceptably" sharp throughout the whole image collection. Although none of these three fields has ever properly quantified "acceptably sharp", it is more or less standard practice to mask those image portions that appear unsharp due to the limited depth of field around the plane of focus (whether this means blurry object parts or completely out-of-focus backgrounds). This paper assesses how well- or ill-suited defocus-estimating algorithms are for automatically masking a series of photographs, since this could speed up modelling pipelines with many hundreds or thousands of photographs. To that end, the paper uses five different real-world datasets and compares the output of three state-of-the-art edge-based defocus estimators. Afterwards, critical comments and plans for the future conclude the paper.
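The edge-based estimators compared in the paper are not reproduced here, but the general idea of turning a per-pixel sharpness measure into a binary mask can be sketched with a deliberately simple stand-in: patch-wise variance of the image Laplacian. The patch size and threshold below are arbitrary illustration values, not anything the paper prescribes.

```python
import numpy as np

def sharpness_mask(img, patch=16, thresh=1.0):
    """Crude defocus mask: a patch is kept (True) when the variance of
    the Laplacian inside it is high, i.e. it contains strong edges.
    Illustrative heuristic only, not the paper's defocus estimators."""
    # Discrete 5-point Laplacian via shifted copies (wraps at borders)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    h, w = img.shape
    mask = np.zeros((h // patch, w // patch), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            tile = lap[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            mask[i, j] = tile.var() > thresh  # sharp if edge response varies
    return mask

# Synthetic check: left half is high-frequency detail, right half is flat
img = np.zeros((64, 64))
img[:, :32] = np.indices((64, 32)).sum(0) % 2  # checkerboard = "in focus"
m = sharpness_mask(img)                        # True on the left, False right
```

In a real pipeline the boolean grid would be upsampled back to image resolution and used to zero out the defocused background before feature extraction.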

    Soft computing applied to optimization, computer vision and medicine

    Artificial intelligence has permeated almost every area of modern life, and its significance continues to grow. As a result, Soft Computing has emerged in recent years as a powerful set of methodologies that propose innovative and robust solutions to a variety of complex problems. Because of their broad range of application, Soft Computing methods have the potential to significantly improve human living conditions. The motivation for the present research emerged from this background. This research aims to accomplish two main objectives: on the one hand, it endeavors to bridge the gap between Soft Computing techniques and their application to intricate problems; on the other hand, it explores the potential benefits of Soft Computing methodologies as novel, effective tools for such problems. This thesis synthesizes the results of extensive research on Soft Computing methods and their applications to optimization, Computer Vision, and medicine. The work is composed of several individual projects that employ both classical and new optimization algorithms. The manuscript is intended to provide an overview of the different aspects of Soft Computing methods so that the reader can reach a global understanding of the field; it is therefore assembled as a monograph that summarizes the outcomes of these projects across 12 chapters, structured so that they can be read independently. The key focus of this work is the application and design of Soft Computing approaches for solving problems in the following areas: Block Matching, Pattern Detection, Thresholding, Corner Detection, Template Matching, Circle Detection, Color Segmentation, Leukocyte Detection, and Breast Thermogram Analysis. One of the outcomes presented in this thesis is the development of two evolutionary approaches for global optimization. These were tested on complex benchmark datasets and showed promising results, opening the debate for future applications. Moreover, the Computer Vision and medical applications presented in this work highlight the utility of different Soft Computing methodologies for problems in these domains; a milestone here is the translation of Computer Vision and medical issues into optimization problems. Additionally, this work strives to provide tools for combating public health issues by extending these concepts to automated detection and diagnosis aids for pathologies such as Leukemia and breast cancer. The application of Soft Computing techniques in this field has attracted great interest worldwide owing to the growth of these diseases. Lastly, the use of Fuzzy Logic, Artificial Neural Networks, and Expert Systems in many everyday domestic appliances, such as washing machines, cookers, and refrigerators, is now a reality. Many other industrial and commercial applications of Soft Computing have also entered everyday use, and this is expected to increase within the next decade. The research conducted here therefore contributes an important piece to these developments. The applications presented in this work are intended to serve as technological tools for the development of new devices.
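The thesis's own two evolutionary algorithms are not specified in this summary, but the kind of benchmark test it describes can be illustrated with a standard stand-in: differential evolution applied to the classic multimodal Rastrigin function.

```python
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    """Classic multimodal benchmark; global minimum 0 at the origin,
    surrounded by a regular grid of local minima."""
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

# 2-D instance over the conventional search box [-5.12, 5.12]^2
bounds = [(-5.12, 5.12)] * 2
result = differential_evolution(rastrigin, bounds, seed=0)
```

A gradient-based method started at a random point would typically stall in one of the many local minima at value ≈ 1 or higher; a population-based evolutionary search reliably escapes them, which is exactly the property such benchmarks are meant to probe.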

    A theory of moving form perception: Synergy between masking, perceptual grouping, and motion computation in retinotopic and non-retinotopic representations

    Because object and self-motion are ubiquitous in natural viewing conditions, understanding how the human visual system achieves a relatively clear percept of moving objects is a fundamental problem in visual perception. Several studies have shown that the visible persistence of a briefly presented stationary stimulus is approximately 120 ms under normal viewing conditions. Based on this duration of visible persistence, we would expect moving objects to appear highly blurred. However, in human vision, objects in motion typically appear relatively sharp and clear. We suggest that clarity of form in dynamic viewing is achieved by a synergy between masking, perceptual grouping, and motion computation across retinotopic and non-retinotopic representations. We also argue that dissociations observed in masking are essential to create and maintain this synergy.

    Image Restoration Using Joint Statistical Modeling in Space-Transform Domain

    This paper presents a novel strategy for high-fidelity image restoration that characterizes both the local smoothness and the nonlocal self-similarity of natural images in a unified statistical manner. The main contributions are threefold. First, from the perspective of image statistics, a joint statistical modeling (JSM) in an adaptive hybrid space-transform domain is established, which offers a powerful mechanism for exploiting local smoothness and nonlocal self-similarity simultaneously to ensure a more reliable and robust estimation. Second, a new minimization functional for solving the image inverse problem is formulated using JSM under a regularization-based framework. Finally, to make JSM tractable and robust, a new Split-Bregman-based algorithm is developed to efficiently solve the resulting severely underdetermined inverse problem, together with a theoretical proof of convergence. Extensive experiments on image inpainting, image deblurring, and mixed Gaussian plus salt-and-pepper noise removal verify the effectiveness of the proposed algorithm. (Comment: 14 pages, 18 figures, 7 tables; to be published in IEEE Transactions on Circuits and Systems for Video Technology (TCSVT). A high-resolution PDF and code are available at http://idm.pku.edu.cn/staff/zhangjian/IRJSM.)
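The JSM functional itself is not given in this summary, but the Split-Bregman machinery it relies on (auxiliary variable, shrinkage step, Bregman update) can be sketched on a much simpler problem: 1-D total-variation denoising, min_u (mu/2)·||u − f||² + ||Du||₁. All parameter values below are arbitrary illustration choices, not the paper's.

```python
import numpy as np

def tv_denoise_1d(f, mu=10.0, lam=1.0, n_iter=100):
    """1-D TV denoising via Split-Bregman: introduce d = Du, then
    alternate a quadratic u-solve, a soft-threshold d-update, and a
    Bregman update of b. Small stand-in for the paper's JSM solver."""
    n = len(f)
    # Forward-difference operator D of shape (n-1, n)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
    A = mu * np.eye(n) + lam * D.T @ D          # normal-equations matrix
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        t = D @ u + b
        d = np.sign(t) * np.maximum(np.abs(t) - 1.0 / lam, 0)  # shrinkage
        b = t - d                               # Bregman variable update
    return u

# Noisy step signal: the denoised result should be near piecewise constant
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = tv_denoise_1d(f)
```

The splitting is what makes the problem tractable: the u-step is a linear solve and the d-step has the closed-form shrinkage solution, which is the same structural trick the paper applies to its far richer JSM regularizer.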

    Direct extraction of tau information for use in ego-motion

    Avoiding collisions with obstacles is a critical function of any autonomous vehicle. This thesis considers the problem of utilising information about time to contact available in the ambient optic array. Motion-from-smear (W. G. Chen, Nandhakumar, & Martin, 1994; Geisler, 1999) is used to aid judgment of global tau (Kaiser & Mowafy, 1993; D. N. Lee, 1974, 1976). A robotic system employing motion-from-smear was tested in a task requiring judgment of global tau and found to provide adequate accuracy (mean error = -0.52 s) but poor precision (SD = 1.52 s). Motion-from-smear is also discussed with respect to its application to a novel formulation of composite tau and to the use of motion parallax in stair descent.
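Global tau can be illustrated independently of the motion-from-smear machinery: for an object on a collision course, Lee's tau equals the object's optical size divided by its rate of expansion, with no need to know distance or speed. The helper below is a hypothetical illustration, not the thesis's estimator.

```python
def time_to_contact(s_now, s_prev, dt):
    """Lee's global tau from looming: tau ~= s / (ds/dt), where s is the
    angular (or image) size of the obstacle in consecutive frames."""
    ds_dt = (s_now - s_prev) / dt
    return s_now / ds_dt

# Camera closing on an obstacle at 5 m/s from 10 m away.
# Image size is proportional to 1/distance (pinhole projection).
dt = 0.05                        # frame interval, s
s_prev = 1 / 10.0                # size one frame ago
s_now = 1 / (10.0 - 5 * dt)      # size now
tau = time_to_contact(s_now, s_prev, dt)   # ~2 s until contact
```

Because both the unknown scene scale and the unknown speed cancel in the ratio, tau is directly available from the optic array, which is exactly why it is attractive for the kind of autonomous collision avoidance the thesis targets.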

    Blood-coated sensor for high-throughput ptychographic cytometry on a Blu-ray disc

    The Blu-ray drive is an engineering masterpiece that integrates disc rotation, pickup-head translation, and three lasers in a compact and portable format. Here we integrate a blood-coated image sensor with a modified Blu-ray drive for high-throughput cytometric analysis of various bio-specimens. In this device, samples are mounted on the rotating Blu-ray disc and illuminated by the built-in lasers from the pickup head. The resulting coherent diffraction patterns are then recorded by the blood-coated image sensor. The rich spatial features of the blood-cell monolayer help down-modulate the object information for sensor detection, thus forming a high-resolution computational bio-lens with a theoretically unlimited field of view. With the acquired data, we develop a lensless coherent diffraction imaging modality, termed rotational ptychography, for image reconstruction. We show that our device can resolve the 435 nm linewidth on a resolution target and has a field of view limited only by the size of the Blu-ray disc. To demonstrate its applications, we perform high-throughput urinalysis by locating disease-related calcium oxalate crystals over an entire microscope slide. We also quantify different types of cells on a blood smear at an acquisition speed of ~10,000 cells per second. For an in vitro experiment, we monitor live bacterial cultures over an entire Petri dish with single-cell resolution. Using biological cells as a computational lens could enable intriguing new imaging devices for point-of-care diagnostics, and modifying a Blu-ray drive with the blood-coated sensor could further spread high-throughput optical microscopy from well-equipped laboratories to citizen scientists worldwide.
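The rotational-ptychography reconstruction itself is beyond the scope of an abstract, but the forward model it inverts can be sketched: under a far-field (Fraunhofer) approximation the sensor records only the intensity of the diffracted field, and the lost phase is what the iterative ptychographic updates recover. A toy simulation, with all sizes arbitrary:

```python
import numpy as np

# Toy specimen: a small transmissive square in an opaque field
obj = np.zeros((64, 64), dtype=complex)
obj[28:36, 28:36] = 1.0

# Fraunhofer forward model: far-field amplitude is the 2-D Fourier
# transform of the exit wave; the sensor records only its intensity
field = np.fft.fftshift(np.fft.fft2(obj))
pattern = np.abs(field) ** 2        # recorded diffraction pattern
```

The bright zero-order peak sits at the centre of the shifted spectrum, and because `pattern` discards the complex phase of `field`, the object cannot be recovered from a single frame; ptychography resolves this by recording many overlapping diffraction patterns as the disc rotates.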