
    3D Capturing with Monoscopic Camera

    This article presents a new concept that uses the auto-focus function of a monoscopic camera sensor to estimate a depth map, avoiding both the use of auxiliary equipment or human interaction and the computational complexity introduced by structure-from-motion (SfM) or depth analysis. The system architecture, which supports capturing, processing and display of both stereo images and video, is discussed. A novel stereo image pair generation algorithm using Z-buffer-based 3D surface recovery is proposed. From the depth map, the disparity map (the distance in pixels between corresponding image points in the two views) is calculated. The presented algorithm takes a single image with depth information (e.g. a z-buffer) as input and produces two images, one for the left eye and one for the right.
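    The depth-to-disparity relation mentioned above follows from pinhole stereo geometry. Below is a minimal sketch of that conversion plus a naive pixel-shift rendering of a left/right pair; this is an illustration, not the paper's Z-buffer-based algorithm, and the baseline and focal-length values are arbitrary assumptions:

    ```python
    import numpy as np

    def depth_to_disparity(depth, baseline=0.06, focal_px=800.0):
        """Convert a metric depth map to a disparity map in pixels:
        disparity = baseline * focal_length / depth (pinhole stereo)."""
        return baseline * focal_px / np.maximum(depth, 1e-6)

    def render_stereo_pair(image, disparity):
        """Shift each pixel horizontally by +/- half its disparity to
        synthesize left and right views (nearest pixel, no hole filling)."""
        h, w = image.shape[:2]
        xs = np.arange(w)
        left = np.zeros_like(image)
        right = np.zeros_like(image)
        for y in range(h):
            xl = np.clip(xs + (disparity[y] / 2).astype(int), 0, w - 1)
            xr = np.clip(xs - (disparity[y] / 2).astype(int), 0, w - 1)
            left[y, xl] = image[y, xs]
            right[y, xr] = image[y, xs]
        return left, right
    ```

    A production renderer would also handle disocclusions (holes) left by the shift, which the abstract's surface-recovery step addresses.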

    Shape from focus image processing approach based 3D model construction of manufactured part

    The purpose of this research is to develop a process and an algorithm to create a 3D model of the surface of a part. This is accomplished using a single camera and a CNC machine as a movable stage. A gradient-based focus measure operator written in MATLAB is used to process the images and to generate the surface model. The scope of this research covers image processing and surface model generation, as well as verification of part accuracy. The algorithm is able to create a rough surface model of a photographed part and, with careful calibration, has been used in a limited number of scenarios to check part z dimensions.
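    The shape-from-focus idea described above can be sketched as follows: compute a gradient-based focus measure for every slice of a focal stack, then assign each pixel the stage height of its sharpest slice. This is a generic outline, not the authors' MATLAB implementation:

    ```python
    import numpy as np

    def gradient_focus_measure(img):
        """Gradient-magnitude focus measure: sharper regions have
        stronger local gradients. img is a 2-D grayscale array."""
        gy, gx = np.gradient(img.astype(float))
        return gx**2 + gy**2

    def shape_from_focus(stack, z_positions):
        """Per pixel, pick the stack slice with the highest focus
        measure and return its z position (a coarse surface model)."""
        measures = np.stack([gradient_focus_measure(s) for s in stack])
        best = np.argmax(measures, axis=0)       # (H, W) slice indices
        return np.asarray(z_positions)[best]     # (H, W) depth map
    ```

    In practice the per-pixel maximum is usually refined by fitting a curve (e.g. Gaussian) through the focus measures of neighbouring slices to get sub-step depth resolution.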

    Modeling and applications of the focus cue in conventional digital cameras

    The focus of digital cameras plays a fundamental role in both the quality of the acquired images and the perception of the imaged scene. This thesis studies the focus cue in conventional cameras with focus control, such as cellphone cameras, photography cameras, webcams and the like. A thorough review of the theoretical concepts behind focus in conventional cameras reveals that, despite its usefulness, the widely known thin-lens model has several limitations for solving different focus-related problems in computer vision. To overcome these limitations, the focus profile model is introduced as an alternative to classic concepts such as the near and far limits of the depth of field. The new concepts introduced in this dissertation are exploited for solving diverse focus-related problems, such as efficient image capture, depth estimation, visual cue integration and image fusion. The results obtained through an exhaustive experimental validation demonstrate the applicability of the proposed models.
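    The classic thin-lens near and far depth-of-field limits that the focus profile model is proposed to replace follow from the hyperfocal distance. A standard textbook computation (not code from the thesis; the circle-of-confusion default is an arbitrary assumption):

    ```python
    def depth_of_field(f, N, s, c=0.03e-3):
        """Thin-lens depth-of-field limits.

        f: focal length (m), N: f-number, s: focus distance (m),
        c: circle-of-confusion diameter (m).
        Returns (near, far) limits in metres; far is infinite once
        the focus distance exceeds the hyperfocal distance H.
        """
        H = f**2 / (N * c) + f                    # hyperfocal distance
        near = s * (H - f) / (H + s - 2 * f)
        far = s * (H - f) / (H - s) if s < H else float('inf')
        return near, far
    ```

    For a 50 mm f/2.8 lens focused at 5 m this gives roughly 4.3 m to 6.0 m, which illustrates how narrow the classic model's sharp zone is compared with the graded sharpness a focus profile describes.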

    Holographic representation: Hologram plane vs. object plane

    Digital holography allows the recording, storage and subsequent reconstruction of both the amplitude and the phase of the light field scattered by an object. This is accomplished by recording interference patterns that preserve the properties of the original object field essential for 3D visualization, the so-called holograms. Digital holography refers to the acquisition of holograms with a digital sensor, typically a CCD or a CMOS camera, and to the reconstruction of the 3D object field using numerical methods. In the current work, the different representations of digital holographic information in the hologram plane and in the object plane are studied. The coding performance of the different complex-field representations, notably Amplitude-Phase and Real-Imaginary, in both the hologram plane and the object plane, is assessed using both computer-generated and experimental holograms. The HEVC intra main coding profile is used for the compression of the different representations in both planes, for both experimental and computer-generated holograms. HEVC intra compression in the object plane outperforms encoding in the hologram plane. Furthermore, encoding computer-generated holograms in the object plane yields a larger benefit than the same encoding applied to experimental holograms. This difference was expected, since experimental holograms suffer a larger negative influence of speckle noise, resulting in a loss of compression efficiency. This work emphasizes the possibility of holographic coding in the object plane instead of the common hologram-plane approach. Moreover, this possibility allows direct visualization of the Object Plane Amplitude on a regular 2D display without any transformation methods. The complementary phase information can easily be used to render 3D features such as a depth map, multiple views or even holographic interference patterns for further 3D visualization, depending on the display technology.
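    Moving a complex field between the hologram plane and the object plane requires numerical propagation; a common choice is the angular spectrum method. The sketch below is an illustrative assumption (the abstract does not name its propagation kernel) and also shows the Real-Imaginary to Amplitude-Phase conversion being compared:

    ```python
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, z):
        """Propagate a sampled complex field by distance z (metres)
        with the angular spectrum method; negative z back-propagates
        a hologram towards the object plane."""
        h, w = field.shape
        fx = np.fft.fftfreq(w, d=dx)                 # spatial frequencies
        fy = np.fft.fftfreq(h, d=dx)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent part
        transfer = np.exp(1j * kz * z)
        return np.fft.ifft2(np.fft.fft2(field) * transfer)

    def to_amplitude_phase(field):
        """Real-Imaginary -> Amplitude-Phase representation."""
        return np.abs(field), np.angle(field)
    ```

    Because the transfer function is unit-modulus for propagating components, forward propagation followed by back-propagation over the same distance recovers the field, which is what makes encoding in either plane a free choice.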

    A low-cost automated digital microscopy platform for automatic identification of diatoms

    This article belongs to the Special Issue Advanced Intelligent Imaging Technology II. Currently, microalgae (i.e., diatoms) constitute a generally accepted bioindicator of water quality and therefore provide an index of the status of biological ecosystems. Diatom detection for specimen counting and sample classification are two difficult, time-consuming tasks for the few existing expert diatomists. To mitigate this challenge, in this work we propose a fully operative low-cost automated microscope, integrating algorithms for: (1) stage and focus control, (2) image acquisition (slide scanning, stitching, contrast enhancement), and (3) diatom detection and prospective specimen classification (among 80 taxa). Deep learning algorithms have been applied to overcome the difficult selection of image descriptors imposed by classical machine learning strategies. Compared with those strategies, the best results were obtained by deep neural networks, with a maximum precision of 86% for detection (YOLO network) and 99.51% for classification among 80 different species (AlexNet network). All the developed operational modules are integrated and controlled by the user from the developed graphical user interface running on the main controller. With this operative platform, the work provides a useful toolbox for phycologists in their daily challenging tasks of identifying and classifying diatoms. This research was funded by the Spanish Government under the AQUALITAS-RETOS project with Ref. CTM2014-51907-C2-2-R-MINEC

    The SuperCam Instrument Suite on the Mars 2020 Rover: Science Objectives and Mast-Unit Description

    On the NASA 2020 rover mission to Jezero crater, the remote determination of the texture, mineralogy and chemistry of rocks is essential to quickly and thoroughly characterize an area and to optimize the selection of samples for return to Earth. As part of the Perseverance payload, SuperCam is a suite of five techniques that provide critical and complementary observations via Laser-Induced Breakdown Spectroscopy (LIBS), Time-Resolved Raman and Luminescence (TRR/L), visible and near-infrared spectroscopy (VISIR), high-resolution color imaging (RMI), and acoustic recording (MIC). SuperCam operates at remote distances, primarily 2-7 m, while providing data at sub-mm to mm scales. We report on SuperCam's science objectives in the context of the Mars 2020 mission goals and the ways the different techniques can address these questions. The instrument is made up of three separate subsystems: the Mast Unit is designed and built in France; the Body Unit is provided by the United States; the calibration target holder is contributed by Spain, and the targets themselves by the entire science team. This publication focuses on the design, development, and tests of the Mast Unit; companion papers describe the other units. The goal of this work is to provide an understanding of the technical choices made, the constraints that were imposed, and ultimately the validated performance of the flight model as it leaves Earth, and it will serve as the foundation for Mars operations and future processing of the data. Funding in France was provided by the Centre National d'Etudes Spatiales (CNES). Human resources were provided in part by the Centre National de la Recherche Scientifique (CNRS) and universities. Funding was provided in the US by NASA's Mars Exploration Program. Some funding of data analyses at Los Alamos National Laboratory (LANL) was provided by laboratory-directed research and development funds.

    Focusing on out-of-focus: assessing defocus estimation algorithms for the benefit of automated image masking

    Acquiring photographs as input for an image-based modelling pipeline is less trivial than often assumed. Photographs should be correctly exposed, cover the subject sufficiently from all possible angles, have the required spatial resolution, be devoid of any motion blur, exhibit accurate focus and feature an adequate depth of field. The last four characteristics all determine the "sharpness" of an image, and the photogrammetric, computer vision and hybrid photogrammetric computer vision communities all assume that the object to be modelled is depicted "acceptably" sharp throughout the whole image collection. Although none of these three fields has ever properly quantified "acceptably sharp", it is more or less standard practice to mask those image portions that appear unsharp due to the limited depth of field around the plane of focus (whether blurry object parts or completely out-of-focus backgrounds). This paper assesses how well- or ill-suited defocus estimation algorithms are for automatically masking a series of photographs, since this could speed up modelling pipelines with many hundreds or thousands of photographs. To that end, the paper uses five different real-world datasets and compares the output of three state-of-the-art edge-based defocus estimators. Afterwards, critical comments and plans for the future finalise this paper.
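    A sharpness-based defocus mask of the kind being assessed can be sketched as follows. This is a crude Laplacian-energy stand-in, not one of the three edge-based estimators compared in the paper, and the threshold and block size are arbitrary assumptions:

    ```python
    import numpy as np

    def laplacian_energy(img):
        """Per-pixel squared Laplacian response: high on sharp edges
        and texture, near zero in defocused (smooth) regions."""
        img = img.astype(float)
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
        return lap ** 2

    def defocus_mask(img, thresh=1e-3, win=8):
        """Block-wise mask: True where local Laplacian energy exceeds
        thresh (keep for modelling), False where it is low (mask out)."""
        e = laplacian_energy(img)
        h, w = e.shape
        hb, wb = h // win, w // win
        blocks = e[:hb * win, :wb * win].reshape(hb, win, wb, win).mean(axis=(1, 3))
        return np.kron(blocks > thresh, np.ones((win, win), dtype=bool))
    ```

    The hard part, as the paper's evaluation suggests, is that no fixed threshold separates "acceptably sharp" from defocused across datasets; the compared estimators differ mainly in how they estimate per-edge blur before any such decision.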