19 research outputs found

    Modeling and applications of the focus cue in conventional digital cameras

    The focus of digital cameras plays a fundamental role in both the quality of the acquired images and the perception of the imaged scene. This thesis studies the focus cue in conventional cameras with focus control, such as cellphone cameras, photography cameras, webcams and the like. A thorough review of the theoretical concepts behind focus in conventional cameras reveals that, despite its usefulness, the widely known thin-lens model has several limitations for solving focus-related problems in computer vision. To overcome these limitations, the focus profile model is introduced as an alternative to classic concepts such as the near and far limits of the depth of field. The new concepts introduced in this dissertation are exploited to solve diverse focus-related problems, such as efficient image capture, depth estimation, visual cue integration and image fusion. The results obtained through an exhaustive experimental validation demonstrate the applicability of the proposed models.
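
    The near and far depth-of-field limits that this thesis contrasts with its focus-profile model come from the classic thin-lens model. As a quick illustration of those textbook formulas (this sketches the standard optics, not the thesis' proposed model; names and example values are assumptions):

```python
def dof_limits(f_mm, N, c_mm, s_mm):
    """Near/far depth-of-field limits from the classic thin-lens model.

    f_mm: focal length (mm), N: f-number, c_mm: circle of confusion (mm),
    s_mm: focus distance (mm). Returns (near_limit, far_limit) in mm;
    the far limit is infinite when focusing at or beyond the hyperfocal
    distance."""
    H = f_mm ** 2 / (N * c_mm) + f_mm            # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far
```

    For instance, a 50 mm lens at f/8 with a 0.03 mm circle of confusion, focused at 3 m, yields a depth of field of roughly 2.34 m to 4.19 m.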

    Multi-modal segmentation of RGB-D images from appearance and geometric-depth maps

    Classical image segmentation algorithms exploit the detection of similarities and discontinuities of different visual cues to define and differentiate multiple regions of interest in images. However, due to the high variability and uncertainty of image data, producing accurate results is difficult; segmentation based on color alone is insufficient for a large percentage of real-life scenes. This work presents a novel multi-modal segmentation strategy that integrates depth and appearance cues from RGB-D images by building a hierarchical region-based representation: a multi-modal segmentation tree (MM-tree). For this purpose, RGB-D image pairs are represented in a complementary fashion by different segmentation maps. From the color image, a color segmentation tree (C-tree) is created to obtain segmented and over-segmented maps. From the depth image, two independent segmentation maps are derived by computing planar and 3D edge primitives. An iterative region-merging process then locally groups the previously obtained maps into the MM-tree. Finally, the top emerging MM-tree level coherently integrates the available information from the depth and appearance maps. Experiments on the NYU-Depth V2 RGB-D dataset show that our strategy is competitive with state-of-the-art segmentation methods; on the test images, our method reached average scores of 0.56 in Segmentation Covering and 2.13 in Variation of Information.
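
    The iterative region-merging step described above can be illustrated with a toy greedy merger. The data layout, threshold, and names here are illustrative assumptions, not the paper's MM-tree implementation:

```python
import math


def merge_regions(means, sizes, edges, thresh):
    """Toy sketch of iterative region merging: repeatedly fuse adjacent
    regions whose mean feature vectors are within `thresh`, until stable.

    means: {region_id: feature tuple}, sizes: {region_id: pixel count},
    edges: list of (a, b) adjacent region pairs. Returns a mapping from
    each original region id to the id of the merged region it ended in."""
    parent = {r: r for r in means}

    def find(r):                      # union-find with path compression
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    changed = True
    while changed:
        changed = False
        for a, b in edges:
            ra, rb = find(a), find(b)
            if ra == rb:
                continue
            if math.dist(means[ra], means[rb]) < thresh:
                # merge rb into ra, updating the size-weighted mean
                n = sizes[ra] + sizes[rb]
                means[ra] = tuple((sizes[ra] * u + sizes[rb] * v) / n
                                  for u, v in zip(means[ra], means[rb]))
                sizes[ra] = n
                parent[rb] = ra
                changed = True
    return {r: find(r) for r in means}
```

    With three regions whose means are 0.0, 0.1 and 5.0 and a threshold of 0.5, the first two fuse while the third stays separate.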

    Reliability measure for shape-from-focus

    This is the author's version of a work that was accepted for publication in Image and Vision Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Image and Vision Computing, 31, 10 (2013), DOI: 10.1016/j.imavis.2013.07.005.
    Shape-from-focus (SFF) is a passive technique widely used in image processing for obtaining depth maps. This technique is attractive since it only requires a single monocular camera with focus control, thus avoiding the correspondence problems typically found in stereo, as well as more expensive capture devices. However, one of its main drawbacks is its poor performance when the change in the focus level is difficult to detect. Most research in SFF has focused on improving the accuracy of the depth estimation; less attention has been paid to providing quality measures that predict the performance of SFF without prior knowledge of the recovered scene. This paper proposes a reliability measure aimed at assessing the quality of the depth map obtained using SFF. The proposed reliability measure (the R-measure) analyses the shape of the focus measure function and estimates the likelihood of obtaining an accurate depth estimation without any previous knowledge of the recovered scene. The R-measure is then applied to determine, and discard, the image regions where SFF will not perform correctly. Experiments with both synthetic and real scenes are presented.
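
    The R-measure itself is defined in the paper; a crude stand-in that captures the same intuition (a sharply peaked focus-measure curve is more trustworthy than a flat one) might look like this. This is a simple proxy, not the paper's R-measure:

```python
def focus_reliability(fm_curve):
    """Peak-sharpness score for a focus-measure curve (list of floats,
    one value per focus position). Ratio of the peak height to the mean
    of the remaining samples: flat curves score near 1 (unreliable),
    sharply peaked curves score high."""
    peak_idx = max(range(len(fm_curve)), key=fm_curve.__getitem__)
    rest = [v for i, v in enumerate(fm_curve) if i != peak_idx]
    baseline = sum(rest) / len(rest)
    return fm_curve[peak_idx] / baseline if baseline > 0 else float("inf")
```

    A textured region producing the curve [1, 1, 10, 1, 1] scores 10, while a near-flat curve like [1, 1, 1.2, 1, 1] scores only 1.2 and would be discarded.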

    Revisiting Gray Pixel for Statistical Illumination Estimation

    We present a statistical color constancy method that relies on novel gray pixel detection and mean shift clustering. The method, called Mean Shifted Grey Pixel (MSGP), is based on the observation that true-gray pixels are aligned along one single direction. Our solution is compact, easy to compute and requires no training. Experiments on two real-world benchmarks show that the proposed approach outperforms state-of-the-art methods in the camera-agnostic scenario. In the setting where the camera is known, MSGP outperforms all statistical methods. Comment: updated version; to appear in VISAPP 2019 (long paper).
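
    The grey-pixel idea can be sketched as follows. This is a simplified illustration that omits the mean-shift clustering step of MSGP and replaces it with a plain average; all names and thresholds are assumptions:

```python
import math


def gray_pixel_illuminant(img, top_frac=0.1):
    """Simplified grey-pixel illuminant estimation (a sketch of the idea
    behind MSGP, not the published method). img: 2D list of (r, g, b)
    tuples. At a gray surface point, the spatial derivative of
    log(channel) is identical in all three channels, so the cross-channel
    spread of the log-gradients measures 'grayness'; the grayest pixels
    are averaged into a unit-norm illuminant estimate."""
    eps = 1e-6
    candidates = []
    for y in range(len(img)):
        for x in range(len(img[0]) - 1):           # horizontal gradients
            grads = [math.log(img[y][x + 1][c] + eps)
                     - math.log(img[y][x][c] + eps) for c in range(3)]
            mean_g = sum(grads) / 3.0
            spread = sum((g - mean_g) ** 2 for g in grads)
            candidates.append((spread, img[y][x]))
    candidates.sort(key=lambda t: t[0])            # grayest first
    k = max(1, int(len(candidates) * top_frac))
    ill = [sum(p[c] for _, p in candidates[:k]) / k for c in range(3)]
    norm = math.sqrt(sum(v * v for v in ill)) or 1.0
    return tuple(v / norm for v in ill)
```

    On a scene of gray patches lit by a reddish illuminant, the estimate recovers the illuminant's dominant red direction.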

    Measurement challenge: protocol for international case–control comparison of mammographic measures that predict breast cancer risk

    Introduction: For women of the same age and body mass index, increased mammographic density is one of the strongest predictors of breast cancer risk. There are multiple methods of measuring mammographic density and other features in a mammogram that could potentially be used in a screening setting to identify and target women at high risk of developing breast cancer. However, it is unclear which measurement method provides the strongest predictor of breast cancer risk. Methods and analysis: The measurement challenge has been established as an international resource to offer a common set of anonymised mammogram images for measurement and analysis. To date, full-field digital mammogram images and core data from 1650 cases and 1929 controls from five countries have been collated. The measurement challenge is an ongoing collaboration, and we are continuing to expand the resource to include additional image sets across different populations (from contributors) and to compare additional measurement methods (by challengers). The intended use of the measurement challenge resource is the refinement and validation of new and existing mammographic measurement methods. The resource provides a standardised dataset of mammographic images and core data that enables investigators to directly compare methods of measuring mammographic density or other mammographic features in case/control sets of both raw and processed images, for the purpose of comparing their predictions of breast cancer risk. Ethics and dissemination: Challengers and contributors are required to enter a Research Collaboration Agreement with the University of Melbourne prior to participation in the measurement challenge. The challenge database of collated data and images is stored in a secure data repository at the University of Melbourne. Ethics approval for the measurement challenge is held at the University of Melbourne (HREC ID 0931343.3).

    Engineering students' perceptions of remote teaching via the flipped-classroom strategy

    This text reports the results of a study of students' perceptions of remote teaching with the flipped-classroom method, compared to two regular face-to-face teaching strategies: classic lecturing and project-based learning. The study follows a cohort design in which students experience the different pedagogical strategies in sequence and complete a perception assessment at the end of the course. The evaluation considers six criteria: comprehension and appropriation of theoretical concepts, disciplinary formation, integral formation, dedication and academic load, interaction among the subjects of the process, and active learning. In a pilot study with 36 students of an engineering undergraduate program, remote teaching through the flipped classroom was always rated better than or equal to the two face-to-face strategies in all the criteria considered.

    Smartphone teleoperation for self-balancing telepresence robots

    Self-balancing mobile platforms have recently been adopted in many applications thanks to their lightweight and slim build. However, the inherent instability in their behaviour makes both manual and autonomous operation more challenging than with traditional self-standing platforms. In this work, we experimentally evaluate three teleoperation user interfaces for remotely controlling a self-balancing telepresence platform: 1) a touchscreen button interface, 2) a tilt interface and 3) a hybrid touchscreen-tilt interface. We provide a quantitative evaluation based on user trajectories and recorded control data, and qualitative findings from user surveys. Both the quantitative and qualitative results support our finding that the hybrid interface (a speed slider with tilt turn) is a suitable approach for smartphone-based teleoperation of self-balancing telepresence robots. We also introduce a client-server based multi-user telepresence architecture using open source tools.
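
    The hybrid slider-plus-tilt mapping the study favours could be sketched as follows; the parameter names, limits and dead zone are illustrative assumptions, not the paper's implementation:

```python
def hybrid_teleop_command(slider, tilt_deg, max_speed=1.0, max_turn=1.5,
                          dead_zone_deg=5.0):
    """Hybrid touchscreen-tilt mapping: a screen slider sets forward
    speed, phone roll sets the turn rate. slider in [0, 1]; tilt_deg is
    the device roll in degrees. Returns a (linear, angular) velocity
    command in m/s and rad/s."""
    v = max(0.0, min(1.0, slider)) * max_speed
    if abs(tilt_deg) < dead_zone_deg:      # ignore small hand tremor
        w = 0.0
    else:
        w = max(-1.0, min(1.0, tilt_deg / 45.0)) * max_turn
    return v, w
```

    Decoupling speed (explicit slider) from turning (natural wrist motion) is one plausible reason this combination scored well: speed changes stay deliberate while steering remains continuous.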

    Automated image acquisition system for optical microscope

    In this paper, a study of some functions for estimating the relative degree of focus of an image is presented. Modifications of some of the studied functions are proposed to improve their performance, and a focus-search algorithm is developed to perform autofocusing on an optical microscope. The search algorithm is implemented on a microscope with a motorized X-Y-Z stage to achieve full automation of the focusing and image acquisition process. The control system developed for moving the microscope stage in the X, Y and Z directions, which automates acquisition of images of the observed sample, is also described.
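
    A generic version of such a focus function and an exhaustive focus search might look like this. The variance-of-Laplacian measure below is a textbook baseline, not one of the paper's modified functions, and `capture` stands in for a hypothetical stage-plus-camera interface:

```python
def laplacian_focus_measure(gray):
    """Variance of a 4-neighbour Laplacian, a classic sharpness function.
    gray: 2D list of pixel intensities. Sharper (higher-contrast) images
    yield larger Laplacian responses and hence a larger variance."""
    vals = []
    for y in range(1, len(gray) - 1):
        for x in range(1, len(gray[0]) - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1] - 4 * gray[y][x])
            vals.append(lap)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)


def autofocus(capture, z_positions):
    """Exhaustive focus search: move the (hypothetical) stage through
    z_positions, score the frame captured at each, and return the
    position with the sharpest image."""
    return max(z_positions, key=lambda z: laplacian_focus_measure(capture(z)))
```

    Real autofocus routines usually replace the exhaustive sweep with a coarse-to-fine or hill-climbing search to reduce the number of stage moves.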