
    Volumetric high dynamic range windowing for better data representation

    Volume data is usually generated by measuring devices (e.g., CT and MRI scanners), mathematical functions (e.g., the Marschner/Lobb function), or simulations. While these sources typically produce 12-bit integer or floating-point representations, commonly used displays can only handle 8-bit gray or color levels. In a typical medical scenario, a 3D scanner generates a 12-bit dataset, which is downsampled to 8-bit per-voxel accuracy. This downsampling is usually achieved by a linear windowing operation that maps the active full-accuracy data range of 0 to 4095 into the interval from 0 to 255. In this paper, we propose a novel windowing operation based on methods from high dynamic range image mapping. With this method, the contrast of mapped 8-bit volume datasets is significantly enhanced, in particular if the imaging modality allows for high tissue differentiation (e.g., MRI). Consequently, it also allows better and easier segmentation and classification. We demonstrate the improved contrast with different error metrics and a perception-driven image difference metric to indicate differences between three different high dynamic range operators.
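    As a rough illustration of the two mappings discussed in this abstract, the sketch below (a minimal NumPy example with assumed parameter names, not the authors' implementation) contrasts the plain linear window from 0–4095 to 0–255 with a simple logarithmic compression of the kind used by global high dynamic range operators.

```python
import numpy as np

def linear_window(volume, lo=0, hi=4095):
    """Map the active 12-bit range [lo, hi] linearly onto 8-bit values."""
    v = np.clip(volume.astype(np.float64), lo, hi)
    return np.round((v - lo) / (hi - lo) * 255).astype(np.uint8)

def log_window(volume, lo=0, hi=4095):
    """Global logarithmic compression, a simple stand-in for an HDR-style operator."""
    v = np.clip(volume.astype(np.float64), lo, hi) - lo
    return np.round(np.log1p(v) / np.log1p(hi - lo) * 255).astype(np.uint8)

# Example on a synthetic 12-bit volume
volume = np.random.randint(0, 4096, size=(64, 64, 64))
print(linear_window(volume).min(), log_window(volume).max())
```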

    Exploring the visualisation of the cervicothoracic junction in lateral spine radiography using high dynamic range techniques

    The C7/T1 junction is an important landmark for spinal injuries. It is traditionally difficult to visualise in a lateral X-ray image due to the rapid change in the body's anatomy at the level of the junction, where the shoulders cause a large increase in attenuation. To explore methods of enhancing the appearance of this important area, lateral radiographs of a shoulder girdle phantom were subjected to high dynamic range (HDR) processing and tone mapping. A shoulder girdle phantom was constructed using Perspex, shoulder girdle and vertebral bones, and water to reproduce the attenuation caused by soft tissue. The design allowed for the removal of the shoulder girdle so that the cervical vertebrae could be imaged separately. HDR was explored for single- and dual-energy X-ray images of the phantom. In the case of single-image HDR, the HDR image of the phantom without water was constructed by combining images created with varying contrast windows throughout the contrast range of an X-ray image. It was found that an overlap of larger contrast windows with a lower number of images performed better than smaller contrast windows and more images when creating an HDR image to be tone mapped. Poor results on the phantom without water precluded further testing of single-image HDR on images of the phantom with water, which would have higher attenuation. Dual-energy HDR image construction was explored for images of the phantom both with and without water. A set of images acquired at lower attenuation (phantom without water) was used to evaluate the performance of the various tone-mapping algorithms. The tone mapping was then performed on the phantom images containing water. These results showed how each tone-mapping algorithm differs and the effects of global vs. local processing. The results revealed that the built-in MATLAB algorithm, based on an improved Ward histogram adjustment approach, produces the most desirable result. None of the HDR tone-mapped images produced were diagnostically useful. Signal-to-noise ratio (SNR) analysis was performed on the cervical region of the HDR tone-mapped image. It used the scan of the phantom without the shoulder girdle obstruction (imaged under the same conditions) as a reference image. The SNR results quantitatively show that the selection of exposure values affects the visualisation of the tone-mapped image. The highest SNR was produced for the 100–120 kV dual-energy X-ray image pair. The study was limited by the range of HDR image construction techniques employed and the tone-mapping algorithms explored. Future studies could explore other HDR image construction techniques and the combination of global and local tone-mapping algorithms. Furthermore, the phantom can be replaced by a cadaver for algorithm testing under more realistic conditions.
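    The SNR analysis against an unobstructed reference scan can be expressed compactly. The following sketch is a hypothetical formulation (the study's exact metric is not reproduced here): it treats the reference image as the signal and the pixel-wise difference from the tone-mapped image as the noise.

```python
import numpy as np

def snr_db(test, reference):
    """SNR of `test` relative to `reference`, in decibels.

    Assumes both images are co-registered arrays of the same shape; the
    reference (phantom imaged without the shoulder girdle) is taken as the
    signal and the pixel-wise residual as the noise.
    """
    test = test.astype(np.float64)
    reference = reference.astype(np.float64)
    noise_power = np.mean((test - reference) ** 2)
    signal_power = np.mean(reference ** 2)
    return 10.0 * np.log10(signal_power / noise_power)

# Usage with hypothetical cervical-region crops:
# print(snr_db(tone_mapped_cervical_roi, reference_cervical_roi))
```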

    Adaptive Filters for 2-D and 3-D Digital Images Processing

    The thesis is concerned with filters for visualization of high dynamic range images. In the theoretical part, the principle of confocal microscopy is described and the term digital image is defined in a mathematically correct way. Both a frequency-domain approach (using 2-D and 3-D discrete Fourier transforms and frequency filters) and a digital-geometry approach (using adaptive histogram equalization with an adaptive neighbourhood) are chosen for the processing of images. The adjustments necessary for working with non-ideal images containing additive and impulse noise are described as well. The last part of the thesis deals with 3-D reconstruction of objects from their optical sections. All the procedures and algorithms are also implemented in the software developed as a part of this thesis.
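    To make the frequency-domain approach mentioned in the abstract concrete, the sketch below (an illustrative example, not the software developed for the thesis) applies a Gaussian low-pass filter to a 2-D image through the discrete Fourier transform; the `sigma` parameter and its default are assumptions.

```python
import numpy as np

def gaussian_lowpass(image, sigma=20.0):
    """Suppress high spatial frequencies of a 2-D image via the DFT.

    `sigma` is the standard deviation of the Gaussian transfer function,
    measured in frequency-domain samples (an assumed default).
    """
    rows, cols = image.shape
    u = np.fft.fftfreq(rows)[:, None] * rows   # vertical frequency index
    v = np.fft.fftfreq(cols)[None, :] * cols   # horizontal frequency index
    transfer = np.exp(-(u ** 2 + v ** 2) / (2.0 * sigma ** 2))
    spectrum = np.fft.fft2(image.astype(np.float64))
    return np.real(np.fft.ifft2(spectrum * transfer))

# Example: smooth a noisy synthetic image
noisy = np.random.rand(256, 256)
smooth = gaussian_lowpass(noisy, sigma=15.0)
```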

    New Stereo Vision Algorithm Composition Using Weighted Adaptive Histogram Equalization and Gamma Correction

    This work presents the composition of a new algorithm for a stereo vision system to acquire accurate depth measurements from stereo correspondence. Stereo correspondence produced by matching is commonly affected by image noise such as illumination variation, blurry boundaries, and radiometric differences. The proposed algorithm introduces a pre-processing step based on the combination of Contrast Limited Adaptive Histogram Equalization (CLAHE) and Adaptive Gamma Correction Weighted Distribution (AGCWD) with a guided filter (GF). The cost value of the pre-processing step is determined in the matching cost step using the census transform (CT), which is followed by aggregation using the fixed-window and GF technique. A winner-takes-all (WTA) approach is employed to select the disparity with the minimum cost, followed by final refinement using left-right consistency checking (LR) and a weighted median filter (WMF) to remove outliers. The algorithm improved accuracy by 31.65% for all pixel errors and by 23.35% for pixel errors in non-occluded regions compared to several established algorithms on the Middlebury dataset.
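    The census-transform matching cost and winner-takes-all selection named in the abstract can be sketched briefly. The example below is a simplified, unoptimized illustration under assumed parameters (window size, maximum disparity); it omits the CLAHE/AGCWD pre-processing, guided-filter aggregation, and LR/WMF refinement steps of the proposed algorithm.

```python
import numpy as np

def census_transform(img, window=5):
    """Encode each pixel as a bit string recording which neighbours in a
    (window x window) patch are darker than the centre pixel."""
    r = window // 2
    h, w = img.shape
    padded = np.pad(img, r, mode='edge')
    codes = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            codes = (codes << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return codes

def popcount(arr):
    """Set bits per element (slow but simple; fine for illustration)."""
    return np.vectorize(lambda x: bin(int(x)).count('1'))(arr)

def wta_disparity(left, right, max_disp=32):
    """Winner-takes-all disparity map from census Hamming distances."""
    cl, cr = census_transform(left), census_transform(right)
    h, w = left.shape
    costs = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # Hamming distance between left pixels and right pixels shifted by d
        costs[d, :, d:] = popcount(cl[:, d:] ^ cr[:, :w - d])
    return np.argmin(costs, axis=0)
```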

    Evaluating spatial and frequency domain enhancement techniques on dental images to assist dental implant therapy

    Dental imaging provides the patient's anatomical details for the dental implant based on the maxillofacial structure and the two-dimensional geometric projection, helping clinical experts decide whether implant surgery is suitable for a particular patient. Dental images often suffer from problems associated with random noise and low contrast, which require effective preprocessing operations. However, each enhancement technique comes with its own advantages and limitations, so choosing a suitable image enhancement method is always a difficult task. In this paper, a universal framework is proposed that integrates the functionality of various enhancement mechanisms so that dentists can select a suitable method of their own choice to improve the quality of a dental image for the implant procedure. The proposed framework evaluates the effectiveness of both frequency-domain and spatial-domain enhancement techniques on dental images. The selection of the best enhancement method further depends on the output image perceptibility responses, peak signal-to-noise ratio (PSNR), and sharpness. The proposed framework offers a flexible and scalable approach for the dental expert to enhance a dental image according to visual image features and different enhancement requirements.
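    Since the framework selects the best enhancement using PSNR and sharpness, a minimal sketch of such metrics may help; the formulations below are standard, but the variable names are hypothetical and not taken from the paper.

```python
import numpy as np

def psnr(enhanced, original, peak=255.0):
    """Peak signal-to-noise ratio between an enhanced image and the original."""
    mse = np.mean((enhanced.astype(np.float64) - original.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def sharpness(image):
    """Mean gradient magnitude, a simple proxy for perceived sharpness."""
    gy, gx = np.gradient(image.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))

# Ranking candidate enhancement outputs (hypothetical variable names):
# candidates = {"clahe": img_clahe, "homomorphic": img_homomorphic}
# best = max(candidates, key=lambda k: psnr(candidates[k], original_dental_image))
```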

    Evaluation of tone-mapping algorithms for focal-plane implementation

    Scenes in the real world may simultaneously contain very bright and very dark regions caused by different illumination conditions; such scenes contain a wide range of light intensity values. Attempting to exhibit a picture of such a scene on a conventional display device, such as a computer monitor, leads to a (possibly large) loss of detail in the displayed scene, since conventional display devices can only represent a limited number of light intensity values, which span a smaller range. To mitigate the loss of detail, before it is shown on the display device, the picture of the scene must be processed by a tone-mapping algorithm, which maps the original light intensities into the light intensities representable by the display, thereby accommodating the high dynamic range of input values within a smaller range. In this work, a comparison between different tone-mapping algorithms is presented. More specifically, the performance (regarding processing time and overall quality of the processed image) of a digital version of the tone-mapping operator originally proposed by Fernández-Berni et al. [11], which is implemented in the focal plane of the camera, is compared with that of different tone-mapping operators originally implemented in software. Furthermore, a second digital version of the focal-plane operator, which simulates a modified version of the original hardware implementation, is considered and its performance is analyzed. The modified hardware implementation is less complex and requires less space than the original implementation and, subjectively, keeps the overall image quality approximately equal to that achieved by the digital operators. Issues regarding the colors of the tone-mapped images are also addressed, especially the processing that must be performed by the focal-plane operator after tone mapping in order to yield images without color distortions.
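    For reference, a simple software-side global tone-mapping operator of the kind compared in this work can be sketched as follows. This is a Reinhard-style global operator, given only as an assumed illustration; it is not the focal-plane operator of Fernández-Berni et al. [11].

```python
import numpy as np

def reinhard_global(luminance, key=0.18, eps=1e-6):
    """Reinhard-style global tone mapping of an HDR luminance map into [0, 1).

    `key` sets the overall brightness of the mapped image (assumed default).
    """
    lum = luminance.astype(np.float64)
    log_avg = np.exp(np.mean(np.log(lum + eps)))  # log-average scene luminance
    scaled = key * lum / log_avg                  # rescale to the chosen key
    return scaled / (1.0 + scaled)                # compress into [0, 1)

# To avoid colour distortions, the tone-mapped/original luminance ratio is
# typically applied to each RGB channel rather than tone mapping the
# channels independently.
```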