22 research outputs found

    Influence of Spectral Sensitivity Functions on color demosaicing

    Color images acquired through single-chip digital cameras using a color filter array (CFA) contain a mixture of luminance and opponent chromatic information that share their representation in the spatial Fourier spectrum. This mixture can result in aliasing if the bandwidths of these signals are too wide and their spectra overlap; in that case, reconstructing a three-color-per-pixel image without error is impossible. One way to improve the reconstruction is to use sensitivity functions that are highly correlated, which reduces the bandwidth of the opponent chromatic components. However, this diminishes the ability to reproduce colors accurately, as noise is amplified when converting an image to the final color encoding. In this paper, we look for an optimum between accurate image reconstruction through demosaicing and accurate color rendering. We design a camera simulation, first using a hyperspectral model of random color images and a demosaicing algorithm based on frequency selection. We find that there is an optimum and confirm our results using a natural hyperspectral image.
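    The luminance/chrominance mixing described above can be made concrete with a minimal sketch of Bayer CFA sampling (the `bayer_mosaic` helper and the G-R/B-G layout below are illustrative assumptions, not the paper's simulation): the single-channel CFA signal carries luminance at baseband, while the opponent chromatic components are modulated toward the edges and corners of the Fourier spectrum.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample one colour per pixel from an RGB image using the Bayer
    pattern (G R / B G), returning the single-channel CFA signal whose
    Fourier spectrum mixes luminance and modulated chrominance."""
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w))
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 1]  # green at even row, even col
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 0]  # red at even row, odd col
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 2]  # blue at odd row, even col
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 1]  # green at odd row, odd col
    return cfa
```

Inspecting `np.fft.fft2(cfa)` on a natural image then shows the chrominance energy concentrated near the Nyquist frequencies, which is exactly where the aliasing discussed above arises when the spectra overlap.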

    Demosaicing of Color Images by Accurate Estimation of Luminance

    Digital cameras acquire color images using a single sensor covered by a color filter array (CFA). A single color component is acquired at each pixel, and the remaining two components are obtained by demosaicing. Existing demosaicing techniques induce artifacts in the resulting images, affecting reconstruction quality. To overcome this drawback, a frequency-based demosaicing technique is proposed. The luminance and chrominance components extracted in the frequency domain of the image are interpolated to produce intermediate demosaiced images. A novel neural-network-based image reconstruction algorithm is then applied to the intermediate demosaiced image to obtain the final demosaiced images. The results presented in the paper show that the proposed demosaicing technique exhibits the best performance and is applicable to a wide variety of images.
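    A crude illustration of the frequency-domain idea (not the paper's algorithm): because the chrominance carriers in a CFA signal sit near the Nyquist frequency, a simple low-pass filter already yields a rough luminance estimate. The binomial kernel below is an assumed placeholder for a properly designed frequency-selection filter.

```python
import numpy as np

def estimate_luminance(cfa):
    """Crude luminance estimate: low-pass filter the CFA signal so the
    modulated chrominance carriers (near the Nyquist frequency) are
    attenuated. A 5x5 separable binomial kernel is used here."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    k /= k.sum()
    # Separable filtering: convolve every row, then every column.
    lum = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, cfa)
    lum = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, lum)
    return lum
```

The chrominance residual `cfa - estimate_luminance(cfa)` can then be demodulated and interpolated per channel, which is the general shape of frequency-selection demosaicing.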

    Demosaicing multi-energy patterned composite pixels for spectral CT

    Integrated master's dissertation in Biomedical Engineering and Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2016.
The development of Computed Tomography arose from the combination of two scientific fields: computing and X-ray-based imaging. In 1895, the scientist Wilhelm Roentgen discovered X-rays: high-energy photons originating from electronic transitions in atoms. X-rays are ionising electromagnetic radiation that propagates at the speed of light. Because of these properties, they were immediately exploited as a tool to probe the composition of matter. Photons interact with matter through two dominant mechanisms, depending on the energy of the electromagnetic radiation: the photoelectric effect and the Compton effect. The photoelectric effect corresponds to the interaction of a photon with an electron in one of the atom's most tightly bound orbitals. The photon transfers all of its energy to the electron; part of that energy is used to overcome the electron's binding energy, and the remainder is carried away by the same electron as kinetic energy. The Compton effect corresponds to the interaction of a photon with an electron in one of the loosely bound orbitals. After the interaction, the photon is deflected and the electron is ejected from the atom. The deflected photon may interact with matter again, through either the Compton or the photoelectric effect, or not interact at all. The intensity of an X-ray beam decreases as a function of the interactions occurring in the absorbing material; the attenuation is exponential in the thickness of the absorber. Owing to these physical properties, X-rays became established as a medical tool.
Conventional tomography was a diagnostic technique in which image acquisition was performed on radiographic film, resulting from the projection of three-dimensional anatomical structures onto two-dimensional images, with superposition of anatomical information. In 1970, the scientists Hounsfield and Cormack developed Computed Tomography, a technique that from the outset corrected the superposition problem. Computed Tomography reconstructs the internal structures of an object from multiple projections using reconstruction algorithms. Differentiating and classifying different tissue types proved extremely challenging with this technique, because even materials that differ in atomic number can appear identical in the image, depending on their mass density or concentration. One solution was Spectral Computed Tomography, a promising imaging technique that improves the detection and characterisation of anatomical tissues beyond the levels currently achievable with conventional CT. Spectral CT takes into account that the transmitted radiation carries more information than intensity changes alone, and that the attenuation coefficient depends not only on the material but also on the photon energy. Spectral CT differs from other techniques in that it uses the physical characteristics of the materials under study in more than two energy spectra. By acquiring images at different energy levels, the technique can differentiate the various elements of the body based on material density or atomic number. The differences between tissues are displayed as distinct colours in the final image. An important technology used in spectral CT is the photon-counting detector, known as the hybrid pixel detector.
These detectors split the incident spectrum into multiple spectra whose shape depends on the imposed energy thresholds. They operate in counting mode: instead of integrating the signal, as conventional detectors do, they count the photons of the incident radiation individually against preset energy thresholds. Electronic noise affects the measured energy of each photon, but because these detectors count photons, it no longer has such a significant influence on the quality of the acquired image. K-edge imaging is one of the approaches used in spectral CT systems; it exploits the physical properties of the contrast agents used in computed tomography. The elements used as contrast agents are heavy, highly attenuating elements whose photoelectric effect occurs within the energy range used in CT. Each of these heavy elements therefore shows a characteristic jump in its X-ray attenuation, corresponding to the energy at which the photoelectric effect occurs. Since the electrons involved in the photoelectric effect belong to the K shell, this characteristic jump is called the "K-edge". K-edge imaging chooses the applied energy spectrum so as to straddle the characteristic jump of these elements, in order to identify and localise specific components. At CPPM, the imXgam group developed a micro-CT and a simultaneous PET/CT incorporating the new hybrid pixel detector technology developed by the centre: the XPAD3 detector. This technology not only works in photon-counting mode but can also select energy information about the detected photons; the capabilities of the XPAD3 detector were therefore exploited to develop K-edge imaging.
Artefacts resulting from multiple acquisitions are related to motion. To solve this problem, CPPM developed the concept of composite pixels: a 3 × 3 matrix of pixels with 3 different energy thresholds. Although composite pixels eliminate motion artefacts, the acquired images lose spatial resolution. The goal of this project is therefore to perform K-edge imaging of moving objects at full spatial resolution. The project treats this as an inpainting problem, in which the unknown measurements for each energy threshold are estimated from partial measurements. There is a vast literature on inpainting, as well as on a related image-processing problem, demosaicing. Both are restoration methods that remove damaged regions or reconstruct lost portions of an image. Demosaicing is of particular interest for this work because it recovers information from colour (RGB) images. The use of demosaicing on images acquired by CT systems is practically nonexistent, so the objective of this project was not only to evaluate conventional restoration methods but also to adapt and evaluate demosaicing for CT images. The spectral images were therefore treated as colour images: each image acquired at one energy threshold was assigned to one colour channel, and the three threshold images were coupled into a single colour (RGB) image before restoration. This work first required a study of the sampling scheme of spectral images and a performance evaluation of the simplest methods with respect to noise, the subsampling factor and spatial resolution.
More sophisticated inpainting and demosaicing techniques were then developed and evaluated specifically for tomographic spectral images. After this evaluation, a state of the art was compiled, comparing the methods and analysing which is most suitable for spectral CT images. The second part of the project studied the pattern the composite pixels should follow, in order to define an acquisition protocol. Two types of patterns were tested: regular and random. The composite-pixel idea was implemented by building a matrix whose components depend on the number of energy thresholds to be used. As mentioned, CPPM uses a pixel matrix with three energy thresholds; this project therefore also tested the possibility of increasing the number of thresholds. The project's objectives were achieved: the evaluation of the methods was carried out, and the new approach outperformed the standard methods. Images recovered by demosaicing showed the best spatial resolution. Regarding the composite-pixel pattern, reconstruction performed well with both patterns. The analysis of an increased number of energy thresholds pointed to good results with 4 energy levels, although the new demosaicing approach would have to be reformulated. To reach these goals, the dissertation is divided into several chapters. Chapter 2 introduces the physical concepts involved in spectral tomography, from X-ray production to the development of the technique itself. Chapter 3 describes how the state of the art was compiled, documenting what has been done so far in the field under study.
Chapters 4 and 5 present the materials and methods used, their applications and, in particular, the mathematics and programming involved. Chapter 6 presents the results achieved and the corresponding observations. The final chapter summarises the results obtained and the conclusions drawn from them.
Computed Tomography is a diagnostic technique that uses X-ray radiation to create images of structures. This technique consists in reconstructing a quantitative map of the attenuation coefficients of the object sections from multiple projections, using reconstruction algorithms. Since the attenuation coefficient is not unique to any material, the differentiation and classification of different tissue types by Computed Tomography has proven extremely challenging. The solution has been provided through the development of an energy-sensitive CT scanner, known as Spectral CT. This technique takes into consideration that the transmitted radiation carries more information than intensity changes, that the X-ray tube produces a wide energy spectrum, and that the attenuation of radiation depends not only on the material but also on the photon energy. Spectral CT uses the attenuation characteristics at more than two energies, which makes it possible to differentiate various elements in the body based on their material density or atomic numbers. The technique therefore uses a new detector technology, the hybrid pixel detector, which allows the energy threshold to be set. Combining the physical properties of different materials with the possibility of setting the energy threshold in the detectors, a new spectral imaging technique is used: K-edge imaging. This technique exploits the discontinuity in the photoelectric effect, which arises when photons interact with the K-shell electrons of matter.
The Centre de Physique des Particules de Marseille developed a micro-CT and a simultaneous PET/CT scanner based on the hybrid pixel detector. The ability to tune the energy threshold of each pixel independently was exploited to develop K-edge imaging, and the proof of concept has been established on phantoms and on living mice. In the context of pre-clinical imaging, objects move, and the several acquisitions must be performed simultaneously so that they can be registered. For this purpose, CPPM has been working with composite pixels made of 9 (3 × 3) pixels with 3 different thresholds. This solves the motion-artefact problem at the price of a loss in spatial resolution. The research project of this work therefore aims at performing K-edge imaging on moving objects at full spatial resolution. The problem is seen as an inpainting problem, in which unknown measurements must be estimated from partial measurements. A huge literature exists on inpainting, and especially on demosaicing, which is of particular interest to this research project. The project consists of a study of the sampling scheme of spectral CT images and an evaluation of the performance of the simplest methods with respect to noise and spatial resolution. More sophisticated inpainting and demosaicing techniques, developed specifically for spectral CT images by incorporating priors on the image, were also tested. A performance evaluation of all the reconstruction methods was then made, and a state of the art was established. Finally, a set of strategies for patterning the composite pixels was studied in order to define optimal acquisition protocols.
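    The Beer-Lambert attenuation law and the K-edge discontinuity described above can be sketched as follows. The attenuation model is a toy with made-up coefficients, not tabulated data for any real contrast agent; only the structure (a 1/E³ photoelectric trend with a step increase at the K-edge) reflects the physics discussed in the text.

```python
import numpy as np

# Illustrative (not tabulated) K-edge energy for a heavy contrast element.
K_EDGE_KEV = 33.2

def mu_contrast(energy_kev):
    """Toy attenuation coefficient (1/cm): smooth decrease with energy
    plus a step increase at the K-edge, mimicking the photoelectric
    discontinuity exploited by K-edge imaging."""
    base = 5.0 * (30.0 / energy_kev) ** 3      # photoelectric ~ 1/E^3
    return base * (3.0 if energy_kev >= K_EDGE_KEV else 1.0)

def transmitted(i0, energy_kev, thickness_cm):
    """Beer-Lambert law: I = I0 * exp(-mu(E) * x)."""
    return i0 * np.exp(-mu_contrast(energy_kev) * thickness_cm)

# Two energy bins straddling the K-edge see very different transmission
# through the same thickness, which is what lets K-edge imaging identify
# and localise the contrast agent.
below = transmitted(1000.0, 32.0, 0.1)
above = transmitted(1000.0, 34.0, 0.1)
```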

    30 years of demosaicing

    This paper proposes a review of thirty years of development of the demosaicing algorithms used in digital cameras for the reconstruction of color images. Most recent digital cameras use a single sensor in front of which a color filter array is placed. This sensor samples a single chromatic value per spatial position, and an interpolation algorithm is needed to define a color image with three components per spatial position. This article shows that the whole range of signal and image processing techniques has been used to solve this problem. Moreover, a new method recently proposed by the author and collaborators is described. This method, based on a model of chromatic sampling by the cones in the retina, highlights the nature of spatio-chromatic sampling in single-sensor digital cameras.

    Scene-Dependency of Spatial Image Quality Metrics

    This thesis is concerned with the measurement of spatial imaging performance and the modelling of spatial image quality in digital capturing systems. Spatial imaging performance and image quality relate to the objective and subjective reproduction of luminance contrast signals by the system, respectively; they are critical to overall perceived image quality. The Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) describe the signal (contrast) transfer and noise characteristics of a system, respectively, with respect to spatial frequency. They are both, strictly speaking, only applicable to linear systems since they are founded upon linear system theory. Many contemporary capture systems use adaptive image signal processing, such as denoising and sharpening, to optimise output image quality. These non-linear processes change their behaviour according to characteristics of the input signal (i.e. the scene being captured). This behaviour renders system performance “scene-dependent” and difficult to measure accurately. The MTF and NPS are traditionally measured from test charts containing suitable predefined signals (e.g. edges, sinusoidal exposures, noise or uniform luminance patches). These signals trigger adaptive processes at uncharacteristic levels since they are unrepresentative of natural scene content. Thus, for systems using adaptive processes, the resultant MTFs and NPSs are not representative of performance “in the field” (i.e. capturing real scenes). Spatial image quality metrics for capturing systems aim to predict the relationship between MTF and NPS measurements and subjective ratings of image quality. They cascade both measures with contrast sensitivity functions that describe human visual sensitivity with respect to spatial frequency. The most recent metrics designed for adaptive systems use MTFs measured using the dead leaves test chart that is more representative of natural scene content than the abovementioned test charts. 
This marks a step toward modelling image quality with respect to real scene signals. This thesis presents novel scene-and-process-dependent MTFs (SPD-MTFs) and NPSs (SPD-NPSs). They are measured from imaged pictorial scene (or dead leaves target) signals to account for system scene-dependency. Further, a number of spatial image quality metrics are revised to account for capture-system and visual scene-dependency: their MTF and NPS parameters are replaced with SPD-MTFs and SPD-NPSs, and their standard visual functions with contextual detection (cCSF) or contextual discrimination (cVPF) functions. In addition, two novel spatial image quality metrics are presented (the log Noise Equivalent Quanta (NEQ) and Visual log NEQ) that implement SPD-MTFs and SPD-NPSs. The metrics, SPD-MTFs and SPD-NPSs were validated by analysing measurements from simulated image capture pipelines that applied either linear or adaptive image signal processing. The SPD-NPS measures displayed little evidence of measurement error, and the metrics performed most accurately when they used SPD-NPSs measured from images of scenes. The benefit of deriving SPD-MTFs from images of scenes was, however, traded off against measurement bias. Most metrics performed most accurately with SPD-MTFs derived from dead leaves signals. Implementing the cCSF or cVPF did not increase metric accuracy. The log NEQ and Visual log NEQ metrics proposed in this thesis were highly competitive, outperforming metrics of the same genre. They were also more consistent than the IEEE P1858 Camera Phone Image Quality (CPIQ) metric when their input parameters were modified. The advantages and limitations of all performance measures and metrics are discussed, as well as their practical implementation and relevant applications.
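    The log NEQ metrics above build on the standard Noise Equivalent Quanta definition, which combines the MTF and NPS at each spatial frequency. A minimal sketch follows; the function name and the simple scalar mean-signal parameterisation are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def noise_equivalent_quanta(mtf, nps, mean_signal):
    """Noise Equivalent Quanta as a function of spatial frequency:
    NEQ(u) = (S * MTF(u))**2 / NPS(u), where S is the large-area mean
    signal. Higher NEQ means more information-bearing quanta survive
    the imaging chain at that spatial frequency."""
    mtf = np.asarray(mtf, dtype=float)
    nps = np.asarray(nps, dtype=float)
    return (mean_signal * mtf) ** 2 / nps
```

Substituting SPD-MTFs and SPD-NPSs for the `mtf` and `nps` arrays is what makes the resulting metric scene- and process-dependent.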

    TOWARDS A COMPUTATIONAL MODEL OF RETINAL STRUCTURE AND BEHAVIOR

    Human vision is our most important sensory system, allowing us to perceive our surroundings. It is an extremely complex process that starts with light entering the eye and ends inside the brain, with most of its mechanisms still to be explained. When we observe a scene, the optics of the eye focus an image on the retina, where light signals are processed and sent all the way to the visual cortex, enabling our visual sensation. The progress of retinal research, especially on the topography of photoreceptors, is often tied to the progress of retinal imaging systems. The latest adaptive optics techniques have been essential for the study of photoreceptors and their spatial characteristics, leading to discoveries that challenge existing theories of color sensation. The organization of the retina is associated with various perceptual phenomena. Some are straightforward and strictly related to visual performance, such as visual acuity or contrast sensitivity; others are more difficult to analyze and test and can be related to the submosaics of the three classes of cone photoreceptors. For example, the huge interpersonal differences in the ratio of the cone classes result in negligible differences in color sensation, suggesting the presence of compensation mechanisms at some stage of the visual system. This dissertation discusses and addresses issues regarding the spatial organization of the photoreceptors in the human retina. A computational model has been developed, organized into a modular pipeline of extensible methods, each simulating a different stage of visual processing. It creates a model of the spatial distribution of cones in a retina, then applies descriptive statistics for each photoreceptor to contribute to a graphical representation, based on a behavioral model that determines the absorptions of the photoreceptors.
These apparent color stimuli are reconstructed into a representation of the observed scene. The model allows testing of the different parameters regulating the photoreceptors' topography, in order to formulate hypotheses on the perceptual differences arising from variations in spatial organization.
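    One stage of such a pipeline, the assignment of cone classes in a simulated mosaic, can be sketched as follows. The function name and the L:M:S proportions are illustrative assumptions (the proportions are one commonly cited average, and real retinas vary enormously between individuals); the dissertation's model is richer.

```python
import numpy as np

def assign_cone_classes(n_cones, ratios=(0.62, 0.31, 0.07), seed=0):
    """Randomly assign each cone in a mosaic to the L, M or S class
    according to the given proportions. Varying `ratios` lets one test
    how interpersonal differences in cone-class ratios would affect
    the simulated percept."""
    rng = np.random.default_rng(seed)
    return rng.choice(np.array(['L', 'M', 'S']), size=n_cones, p=ratios)
```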

    The Sensor Test for Orion RelNav Risk Mitigation (STORRM) Development Test Objective

    The Sensor Test for Orion Relative-Navigation Risk Mitigation (STORRM) Development Test Objective (DTO) flew aboard the Space Shuttle Endeavour on STS-134 in May-June 2011, and was designed to characterize the performance of the flash LIDAR and docking camera (DC) being developed for the Orion Multi-Purpose Crew Vehicle. The flash LIDAR, called the Vision Navigation Sensor (VNS), will be the primary navigation instrument used by the Orion vehicle during rendezvous, proximity operations, and docking. The DC will be used by the Orion crew for piloting cues during docking. This paper provides an overview of the STORRM test objectives and the concept of operations. It continues with a description of STORRM's major hardware components, which include the VNS, the DC, and supporting avionics. Next, an overview of crew and analyst training activities describes how the STORRM team prepared for flight. Then an overview of in-flight data collection and analysis is presented, and key findings and results from the project are summarized. Finally, the paper concludes with lessons learned from the STORRM DTO.

    Evaluation and improvement of the workflow of digital imaging of fine art reproduction in museums

    Fine arts refer to a broad spectrum of art formats, i.e., painting, calligraphy, photography, architecture, and so forth. Fine art reproduction creates surrogates of the original artwork that faithfully deliver the aesthetics and feeling of the original. Traditionally, reproductions of fine art are made in the form of catalogs, postcards or books by museums, libraries, archives, and so on (hereafter called museums for simplicity). With the widespread adoption of digital archiving in museums, more and more artwork is reproduced to be viewed on a display. For example, artwork collections are made available through museum websites and Google Art Project for art lovers to view on their own displays. In this thesis, we study the fine art reproduction of paintings in the form of soft copy viewed on displays by answering four questions: (1) what is the impact of the viewing condition and the original on image quality evaluation? (2) can image quality be improved by avoiding visual editing in current workflows of fine art reproduction? (3) can lightweight spectral imaging be used for fine art reproduction? and (4) what is the performance of spectral reproductions compared with reproductions by current workflows? We started by evaluating the perceived image quality of fine art reproductions created by representative museums in the United States, under controlled and uncontrolled environments, with and without the presence of the original artwork. The experimental results suggest that image quality is highly correlated with the color accuracy of the reproduction only when the original is present and the reproduction is evaluated on a characterized display. We then examined the workflows used to create these reproductions, and found that current workflows rely heavily on visual editing and retouching (global and local color adjustments on the digital reproduction) to improve the color accuracy of the reproduction.
    Visual editing and retouching can be both time-consuming and subjective in nature (depending on experts' own experience and understanding of the artwork), lowering the efficiency of artwork digitization considerably. We therefore propose to improve the workflow of fine art reproduction by (1) automating the process of visual editing and retouching in current workflows based on RGB acquisition systems and by (2) recovering the spectral reflectance of the painting with off-the-shelf equipment under commonly available lighting conditions. Finally, we compared the perceived image quality of reproductions created by current three-channel (RGB) workflows with that of reproductions by spectral imaging and by an exemplar-based method.
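    The spectral-recovery step described above is, at its core, an inverse problem: estimating a full reflectance spectrum from a few camera channels. A minimal sketch using a low-dimensional reflectance basis and a linear camera model follows; all names and matrices here are hypothetical placeholders for illustration, not the thesis's method.

```python
import numpy as np

def recover_reflectance(rgb, basis, camera_matrix):
    """Estimate a spectral reflectance from a camera RGB triplet by
    solving for weights on a low-dimensional reflectance basis.

    rgb: (3,) camera response; basis: (n_wavelengths, k) basis vectors;
    camera_matrix: (3, n_wavelengths) combined sensor/illuminant
    response. The forward model is rgb = camera_matrix @ basis @ w."""
    A = camera_matrix @ basis                 # (3, k) reduced system
    w, *_ = np.linalg.lstsq(A, rgb, rcond=None)
    return basis @ w                          # estimated spectrum
```

With three channels and a three-vector basis the system is exactly determined; real workflows constrain the solution further (more basis vectors, smoothness or non-negativity priors, multiple illuminants).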

    High-fidelity colour reproduction for high-dynamic-range imaging

    The aim of this thesis is to develop a colour reproduction system for high-dynamic-range (HDR) imaging. Classical colour reproduction systems fail to reproduce HDR images because current characterisation methods and colour appearance models fail to cover the dynamic range of luminance present in HDR images. HDR tone-mapping algorithms have been developed to reproduce HDR images on low-dynamic-range media such as LCD displays. However, most of these models have only considered luminance compression from a photographic point of view and have not explicitly taken colour appearance into account. Motivated by the idea of bridging the gap between cross-media colour reproduction and HDR imaging, this thesis investigates the fundamentals and the infrastructure of cross-media colour reproduction. It restructures cross-media colour reproduction with respect to HDR imaging, and develops a novel cross-media colour reproduction system for HDR imaging. First, our HDR characterisation method enables us to measure HDR radiance values with an accuracy that rivals spectroradiometers. Second, our colour appearance model enables us to predict human colour perception under high luminance levels. We first built a high-luminance display in order to establish a controllable high-luminance viewing environment. We conducted a psychophysical experiment on this display device to measure perceptual colour attributes. A novel numerical model for colour appearance was derived from our experimental data, which covers the full working range of the human visual system. Our appearance model predicts colour and luminance attributes under high luminance levels. In particular, our model predicts perceived lightness and colourfulness to a significantly higher accuracy than other appearance models. Finally, a complete colour reproduction pipeline is proposed using our novel HDR characterisation and colour appearance models.
Results indicate that our reproduction system outperforms other reproduction methods with statistical significance. Our colour reproduction system provides high-fidelity colour reproduction for HDR imaging, and successfully bridges the gap between cross-media colour reproduction and HDR imaging.