
    Understanding exposure for reverse tone mapping

    High dynamic range (HDR) displays are capable of providing a rich visual experience by boosting both luminance and contrast beyond what conventional displays can offer. We envision that HDR capture and display hardware will soon reach the mass market and become mainstream in most fields, from entertainment to scientific visualization. This will necessarily lead to an extensive redesign of the imaging pipeline. However, a vast amount of legacy content is available, captured and stored using the traditional, low dynamic range (LDR) pipeline. The immediate question that arises is: will our current LDR digital material be properly visualized on an HDR display? The answer to this question involves the process known as reverse tone mapping (the expansion of luminance and contrast to match those of the HDR display), for which no definite solution exists. This paper studies the specific problem of reverse tone mapping for imperfect legacy still images, where some regions are under- or overexposed. First, we present the results of a psychophysical study compared with first-order image statistics, in an attempt to gain some understanding of what makes an image be perceived as incorrectly exposed; second, we propose a methodology to evaluate existing reverse tone mapping algorithms in the case of imperfect legacy content.
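The abstract above relates perceived exposure to first-order image statistics. As a minimal sketch of what such statistics look like in practice, the helper below computes the mean, standard deviation, and skewness of a flat list of luminance values in [0, 1] and applies an illustrative mean-based exposure label. The function names and thresholds are our own assumptions, not taken from the paper.

```python
def exposure_statistics(luminance):
    """First-order statistics of luminance values in [0, 1]."""
    n = len(luminance)
    mean = sum(luminance) / n
    var = sum((v - mean) ** 2 for v in luminance) / n
    std = var ** 0.5
    # Skewness: negative when mass piles up at high values (a crude
    # overexposure signal), positive when it piles up at low values.
    skew = sum((v - mean) ** 3 for v in luminance) / (n * (std ** 3 + 1e-12))
    return mean, std, skew

def crude_exposure_label(luminance, hi=0.85, lo=0.15):
    """Illustrative mean-luminance exposure label; thresholds are ad hoc."""
    mean, _, _ = exposure_statistics(luminance)
    if mean > hi:
        return "overexposed"
    if mean < lo:
        return "underexposed"
    return "plausibly well exposed"
```

In practice such global statistics are only a weak proxy, which is why the paper pairs them with a psychophysical study.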

    The Genetic Architecture of Noise-Induced Hearing Loss: Evidence for a Gene-by-Environment Interaction.

    The discovery of environmentally specific genetic effects is crucial to the understanding of complex traits, such as susceptibility to noise-induced hearing loss (NIHL). We describe the first genome-wide association study (GWAS) for NIHL in a large and well-characterized population of inbred mouse strains, known as the Hybrid Mouse Diversity Panel (HMDP). We recorded auditory brainstem response (ABR) thresholds both before and after a 2-hr exposure to 10-kHz octave band noise at 108 dB sound pressure level in 5-6-wk-old female mice from the HMDP (4-5 mice/strain). From the observation that NIHL susceptibility varied among the strains, we performed a GWAS with correction for population structure and mapped a locus on chromosome 6 that was statistically significantly associated with two adjacent frequencies. We then used a "genetical genomics" approach that included the analysis of cochlear eQTLs to identify candidate genes within the GWAS QTL. In order to validate the gene-by-environment interaction, we compared the effects of the post-noise-exposure locus with those from the same unexposed strains. The most significant SNP at chromosome 6 (rs37517079) was associated with noise susceptibility, but was not significant at the same frequencies in our unexposed study. These findings demonstrate that the genetic architecture of NIHL is distinct from that of unexposed hearing levels and provide strong evidence for gene-by-environment interactions in NIHL.
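The study's phenotype is the noise-induced shift in ABR threshold, compared across genotype groups at each SNP. The sketch below shows only the arithmetic skeleton of that comparison, with hypothetical function names; the actual analysis uses mixed models with population-structure correction, which this does not attempt.

```python
def threshold_shift(pre_db, post_db):
    """Noise-induced shift in ABR threshold (dB) for one animal: post minus pre."""
    return post_db - pre_db

def allele_group_means(genotypes, shifts):
    """Mean threshold shift per allele (coded 0/1) at one biallelic SNP.

    A large between-group difference after noise exposure that is absent
    in unexposed animals is the gene-by-environment signal the study
    tests for (here reduced to group means for illustration).
    """
    groups = {0: [], 1: []}
    for g, s in zip(genotypes, shifts):
        groups[g].append(s)
    return {g: sum(v) / len(v) for g, v in groups.items() if v}
```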

    Reverse tone mapping for suboptimal exposure conditions

    Most existing images and videos are low dynamic range (usually abbreviated LDR). They are called this because, using only 8 bits per channel (R, G, B) for storage, they can reproduce only two orders of magnitude of luminance, whereas the human visual system can perceive up to five orders of magnitude simultaneously. Recent years have seen the birth and expansion of high dynamic range (HDR) technologies, which use up to 32 bits per channel, allowing a more faithful representation of the world around us. HDR is gradually becoming more present in image acquisition, processing, and display pipelines, and, as with the advent of any new technology that replaces an earlier one, certain compatibility problems arise. In particular, this work focuses on the problem known as reverse tone mapping: given a high dynamic range display, what is the optimal way to show on it all the existing low dynamic range material (images, videos, ...)? A reverse tone mapping operator (rTMO) takes an LDR image as input and intelligently adjusts its contrast to produce an output image that reproduces the original scene as faithfully as possible. Since information about the original scene was irreversibly lost when the photograph was taken in LDR, the problem is intrinsically ill-posed.
    First, we ran a series of psychophysical experiments using a Brightside HDR display to evaluate how existing reverse tone mapping operators perform. The results show that current operators fail, or do not produce convincing results, when the input images are not correctly exposed. Existing rTMOs work well for well-exposed or underexposed images, but perceived quality degrades substantially with overexposure, to the point that in some cases subjects prefer the original LDR images to images processed with rTMOs. With this in mind, the second step was to design an rTMO for the cases where existing algorithms fail. For overexposed input images, we propose a simple rTMO based on a gamma expansion that avoids the errors introduced by other methods, together with a method to automatically set a gamma value for each image based on the image key and on empirical data. Third, we validated the results both with psychophysical experiments and with a recently published objective metric. In addition, a further series of experiments with the HDR display suggests that the spatial artifacts introduced by reverse tone mapping operators matter more to the subjects' final perceived quality than inaccuracies in the expanded intensities. As a smaller side project, we also explored tackling the problem from a higher-level approach, incorporating semantic and saliency information.
    Most of this work has been published in an article in Transactions on Graphics (JCR 2009 rank 2/93 in the category Computer Science, Software Engineering, with a 5-year impact factor of 5.012, the highest in its category); Transactions on Graphics is, moreover, considered the top journal in computer graphics. Another publication covering part of this work was accepted at the Congreso Español de Informática Gráfica 2010. As a further measure of the relevance of the work presented here, the two books to date (to the best of our knowledge) written by experts in the HDR field devote several pages to the work presented here (see [2, 3]). This research was carried out in collaboration with Roland Fleming, of the Max Planck Institute for Biological Cybernetics, and Olga Sorkine, of New York University.
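The proposed operator is a gamma expansion whose exponent is set per image from the image key. The sketch below uses the standard log-average definition of the key and a linear key-to-gamma mapping; the linear mapping and its range are purely illustrative stand-ins for the empirically fitted relation described above.

```python
import math

def image_key(luminances, eps=1e-4):
    """Log-average ('key') of luminance values in [0, 1]: a standard
    scalar measure of whether an image reads as overall dark or bright."""
    n = len(luminances)
    return math.exp(sum(math.log(v + eps) for v in luminances) / n)

def gamma_from_key(key, g_min=1.0, g_max=3.0):
    """Map image key to an expansion gamma (illustrative linear mapping:
    brighter, overexposed-looking images get a stronger expansion)."""
    return g_min + (g_max - g_min) * max(0.0, min(1.0, key))

def gamma_expand(ldr, gamma, peak=1.0):
    """Expand normalized LDR values in [0, 1] toward an HDR display's
    peak luminance with a simple per-pixel gamma curve."""
    return [peak * (v ** gamma) for v in ldr]
```

Because the expansion is a smooth global curve, it cannot introduce the spatial artifacts (haloing, banding around clipped regions) that the experiments found most damaging to perceived quality.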

    Synthesizing Normalized Faces from Facial Identity Features

    We present a method for synthesizing a frontal, neutral-expression image of a person's face given an input face photograph. This is achieved by learning to generate facial landmarks and textures from features extracted from a facial-recognition network. Unlike previous approaches, our encoding feature vector is largely invariant to lighting, pose, and facial expression. Exploiting this invariance, we train our decoder network using only frontal, neutral-expression photographs. Since these photographs are well aligned, we can decompose them into a sparse set of landmark points and aligned texture maps. The decoder then predicts landmarks and textures independently and combines them using a differentiable image warping operation. The resulting images can be used for a number of applications, such as analyzing facial attributes, exposure and white balance adjustment, or creating a 3-D avatar.

    Seeing with sound? Exploring different characteristics of a visual-to-auditory sensory substitution device

    Sensory substitution devices convert live visual images into auditory signals, for example with a web camera (to record the images), a computer (to perform the conversion) and headphones (to listen to the sounds). In a series of three experiments, the performance of one such device ('The vOICe') was assessed under various conditions on blindfolded sighted participants. The main task that we used involved identifying and locating objects placed on a table by holding a webcam (like a flashlight) or wearing it on the head (like a miner's light). Identifying objects on a table was easier with a hand-held device, but locating the objects was easier with a head-mounted device. Brightness converted into loudness was less effective than the reverse contrast (dark being loud), suggesting that performance under these conditions (natural indoor lighting, novice users) is related more to the properties of the auditory signal (i.e., the amount of noise in it) than the cross-modal association between loudness and brightness. Individual differences in musical memory (detecting pitch changes in two sequences of notes) were related to the time taken to identify or recognise objects, but individual differences in self-reported vividness of visual imagery did not reliably predict performance across the experiments. In general, the results suggest that the auditory characteristics of the device may be more important for initial learning than visual associations.
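The vOICe's core mapping is a left-to-right column scan in which vertical position becomes pitch and brightness becomes loudness. The sketch below builds such a scan plan, including the inverted (dark-is-loud) contrast polarity the study found more effective. The parameter names, the default scan duration, and the linear frequency spacing are our simplifications, not the device's exact design.

```python
def image_to_sound_plan(image, duration_s=1.05, f_lo=500.0, f_hi=5000.0,
                        invert_contrast=False):
    """vOICe-style column scan: `image` is a list of rows of brightness
    values in [0, 1]. Each column becomes one time slice; each row maps
    to a sine frequency (top = high) and each pixel's brightness to that
    sine's amplitude. Returns, per column, (onset time in seconds,
    [(frequency_hz, amplitude), ...])."""
    rows, cols = len(image), len(image[0])
    plan = []
    for c in range(cols):
        onset = duration_s * c / cols
        partials = []
        for r in range(rows):
            freq = f_hi - (f_hi - f_lo) * r / max(1, rows - 1)  # top row highest
            amp = image[r][c]
            if invert_contrast:  # dark-is-loud polarity, preferred in the study
                amp = 1.0 - amp
            partials.append((freq, amp))
        plan.append((onset, partials))
    return plan
```

Synthesizing audio from the plan would just sum the sines of each slice; the plan itself is enough to show why signal properties (how many loud partials, i.e. noise) can dominate novice performance.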

    Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline

    Recovering a high dynamic range (HDR) image from a single low dynamic range (LDR) input image is challenging due to missing details in under-/over-exposed regions caused by quantization and saturation of camera sensors. In contrast to existing learning-based methods, our core idea is to incorporate the domain knowledge of the LDR image formation pipeline into our model. We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization. We then propose to learn three specialized CNNs to reverse these steps. By decomposing the problem into specific sub-tasks, we impose effective physical constraints to facilitate the training of individual sub-networks. Finally, we jointly fine-tune the entire model end-to-end to reduce error accumulation. With extensive quantitative and qualitative experiments on diverse image datasets, we demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
    Comment: CVPR 2020. Project page: https://www.cmlab.csie.ntu.edu.tw/~yulunliu/SingleHDR Code: https://github.com/alex04072000/SingleHD
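The three forward stages the paper names can be written as a few lines of per-pixel arithmetic. In the sketch below, a gamma curve stands in for a measured camera response function, and the function name is our own; the paper's contribution is learning a CNN to invert each of these stages, not this forward model itself.

```python
def hdr_to_ldr(hdr, crf_gamma=1.0 / 2.2, bits=8):
    """Forward LDR image-formation model, per scene-referred pixel value:
    (1) dynamic range clipping, (2) a non-linear camera response function
    (gamma stands in for a measured CRF), (3) quantization to 2**bits codes.
    """
    levels = 2 ** bits - 1
    ldr = []
    for v in hdr:
        v = min(max(v, 0.0), 1.0)        # (1) clip to the sensor's range
        v = v ** crf_gamma               # (2) camera response function
        v = round(v * levels) / levels   # (3) quantize to 8-bit codes
        ldr.append(v)
    return ldr
```

Clipping and quantization each destroy information irreversibly, which is exactly why the reverse direction needs learned priors rather than closed-form inversion.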