
    Distributed Deblurring of Large Images of Wide Field-Of-View

    Image deblurring is an economical way to reduce certain degradations (blur and noise) in acquired images. It has therefore become an essential tool in high-resolution imaging for many applications, e.g., astronomy, microscopy, and computational photography. In applications such as astronomy and satellite imaging, acquired images can be extremely large (up to gigapixels), cover a wide field-of-view, and suffer from shift-variant blur. Most existing image deblurring techniques are designed and implemented to work efficiently on a centralized computing system with multiple processors and a shared memory, so the largest image that can be handled is limited by the size of the physical memory available on the system. In this paper, we propose a distributed non-blind image deblurring algorithm in which several connected processing nodes (with reasonable computational resources) simultaneously process different portions of a large image while maintaining a certain coherency among them, finally producing a single crisp image. Unlike the existing centralized techniques, image deblurring in a distributed fashion raises several issues. To tackle them, we adopt approximations that trade off between the quality of the deblurred image and the computational resources required to achieve it. The experimental results show that our algorithm produces images of quality similar to the existing centralized techniques while allowing distribution, and is thus cost-effective for extremely large images.
    Comment: 16 pages, 10 figures, submitted to IEEE Trans. on Image Processing
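    The coherency issue the abstract raises can be illustrated with a simple overlap-and-stitch scheme: each node deblurs its tile plus a halo of surrounding pixels and writes back only the interior, so neighbouring tiles agree at their seams. The sketch below is not the authors' algorithm — it assumes a single shift-invariant PSF and plain Wiener deconvolution, whereas the paper targets shift-variant blur — and the function names, tile size, and halo width are illustrative.

```python
import numpy as np

def wiener_deblur(tile, psf, nsr=1e-2):
    # Embed the (centred) PSF in a tile-sized array, shift its peak to the
    # origin, and apply Wiener deconvolution in the Fourier domain.
    pad = np.zeros(tile.shape)
    ph, pw = psf.shape
    pad[:ph, :pw] = psf
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(tile)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))

def deblur_distributed(image, psf, tile=512, halo=32, nsr=1e-2):
    # Each tile is extended by `halo` pixels, deblurred independently
    # (on a separate node in a real deployment), and only its interior
    # is written back; the halo absorbs boundary artifacts.
    H, W = image.shape
    out = np.zeros_like(image, dtype=float)
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            y0, x0 = max(y - halo, 0), max(x - halo, 0)
            y1, x1 = min(y + tile + halo, H), min(x + tile + halo, W)
            block = wiener_deblur(image[y0:y1, x0:x1], psf, nsr)
            yi, xi = min(y + tile, H), min(x + tile, W)
            out[y:yi, x:xi] = block[y - y0:yi - y0, x - x0:xi - x0]
    return out
```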

    Firefly Algorithm: Recent Advances and Applications

    Nature-inspired metaheuristic algorithms, especially those based on swarm intelligence, have attracted much attention in the last ten years. The firefly algorithm appeared about five years ago, and its literature has since expanded dramatically through diverse applications. In this paper, we briefly review the fundamentals of the firefly algorithm together with a selection of recent publications. We then discuss the optimality associated with balancing exploration and exploitation, which is essential for all metaheuristic algorithms. By comparison with the intermittent search strategy, we conclude that metaheuristics such as the firefly algorithm can perform better than the optimal intermittent search strategy. We also analyse these algorithms and their implications for higher-dimensional optimization problems.
    Comment: 15 pages
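    For concreteness, here is a minimal sketch of the standard firefly update: each firefly moves toward every brighter (lower-cost) one with attractiveness beta0 * exp(-gamma * r^2), plus a random alpha-step for exploration. The population size, cooling factor, and test function are illustrative choices, not taken from the paper.

```python
import numpy as np

def firefly_minimize(f, bounds, n_fireflies=25, n_iter=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    # Minimal firefly algorithm: brightness = -cost, attractiveness
    # decays with squared distance, alpha injects random exploration.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, size=(n_fireflies, lo.size))
    cost = np.apply_along_axis(f, 1, X)
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:                 # j is brighter than i
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) \
                            + alpha * (rng.random(lo.size) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
                    cost[i] = f(X[i])
        alpha *= 0.97                                 # cool exploration over time
    best = np.argmin(cost)
    return X[best], cost[best]

# Example: minimize the sphere function in 2-D.
x_best, f_best = firefly_minimize(lambda x: np.sum(x ** 2),
                                  (np.array([-5.0, -5.0]),
                                   np.array([5.0, 5.0])))
```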

    Adaptive Filters for 2-D and 3-D Digital Image Processing

    The thesis is concerned with adaptive filters for the visualization of high-dynamic-range images. The theoretical part describes the principle of confocal microscopy and defines the term digital image in a mathematically rigorous way. Two approaches to image processing are pursued: a frequency-domain approach (using the 2-D and 3-D discrete Fourier transform and frequency filters) and a digital-geometry approach (using adaptive histogram equalization with an adaptive neighbourhood). The adjustments needed for working with non-ideal images containing additive and impulse noise are described as well. The last part of the thesis addresses the 3-D reconstruction of objects from their optical sections. All procedures and algorithms are also implemented in software developed as part of this thesis.
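    A rough Python analogue of the 2-D pipeline described above might look as follows. CLAHE stands in here for the thesis's adaptive-neighbourhood histogram equalization, and the median-filter size and Gaussian cutoff are illustrative, not taken from the thesis.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.exposure import equalize_adapthist

def enhance_slice(img):
    # Impulse noise -> median filter; additive noise -> frequency-domain
    # Gaussian low-pass; local contrast -> adaptive histogram equalization.
    img = median_filter(img, size=3)                  # suppress impulse noise
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    d2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    sigma = 0.15 * min(h, w)                          # illustrative cutoff
    F *= np.exp(-d2 / (2 * sigma ** 2))               # Gaussian low-pass mask
    img = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
    img = (img - img.min()) / (np.ptp(img) + 1e-12)   # rescale to [0, 1]
    return equalize_adapthist(img, clip_limit=0.02)   # local contrast boost
```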

    Visual Quality Enhancement in Optoacoustic Tomography using Active Contour Segmentation Priors

    Segmentation of biomedical images is essential for studying and characterizing anatomical structures and for the detection and evaluation of pathological tissues. Segmentation has further been shown to enhance reconstruction performance in many tomographic imaging modalities by accounting for heterogeneities of the excitation field and the tissue properties in the imaged region. This is particularly relevant in optoacoustic tomography, where discontinuities in the optical and acoustic tissue properties, if not properly accounted for, may deteriorate the imaging performance. Efficient segmentation of optoacoustic images is often hampered by the relatively low intrinsic contrast of large anatomical structures, which is further impaired by the limited angular coverage of some commonly employed tomographic imaging configurations. Herein, we analyze the performance of active contour models for boundary segmentation in cross-sectional optoacoustic tomography. The segmented mask is employed to construct a two-compartment model of the acoustic and optical parameters of the imaged tissues, which is subsequently used to improve the accuracy of the image reconstruction routines. The performance of the suggested segmentation and modeling approach is showcased in tissue-mimicking phantoms and small-animal imaging experiments.
    Comment: Accepted for publication in IEEE Transactions on Medical Imaging
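    A minimal sketch of the segmentation-prior idea, using scikit-image's morphological Chan-Vese as a stand-in for the paper's active contour model: segment the tissue boundary in an initial reconstruction, then assign uniform per-compartment parameters. All parameter values (iteration count, speed-of-sound and absorption numbers) are illustrative assumptions, not the paper's.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def two_compartment_maps(recon, n_iter=150,
                         sos_tissue=1520.0, sos_bg=1480.0,
                         mua_tissue=0.2, mua_bg=0.01):
    # Active-contour segmentation of the tissue boundary in an initial
    # optoacoustic reconstruction. The second positional argument is the
    # iteration count (its keyword name differs across skimage versions).
    mask = morphological_chan_vese(recon, n_iter, smoothing=2).astype(bool)
    # Two-compartment parameter maps: one value inside the tissue, one in
    # the coupling medium. These maps would feed a second, model-based
    # reconstruction pass.
    speed_of_sound = np.where(mask, sos_tissue, sos_bg)   # m/s, illustrative
    absorption = np.where(mask, mua_tissue, mua_bg)       # 1/mm, illustrative
    return mask, speed_of_sound, absorption
```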