
    High-density speckle contrast optical tomography (SCOT) for three dimensional tomographic imaging of the small animal brain

    High-density speckle contrast optical tomography (SCOT), utilizing tens of thousands of source-detector pairs, was developed for in vivo imaging of blood flow in small animals. The reduction in cerebral blood flow (CBF) due to local ischemic stroke in a mouse brain was transcranially imaged and reconstructed in three dimensions. The reconstructed volume was then compared with corresponding magnetic resonance images, demonstrating that the volume of reduced CBF agrees with the infarct zone at twenty-four hours.
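
    As background to how the raw SCOT signal arises (the formula below is the standard speckle-contrast definition, not taken from this abstract): speckle contrast is the ratio of the local standard deviation to the local mean of the raw speckle intensity, K = sigma / mean, over a small sliding window; faster blood flow blurs the speckle during the exposure and lowers K. A minimal sketch in Python, assuming SciPy is available; the window size and the random input frame are illustrative choices, not values from the paper:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_contrast(frame, window=7):
            # Local speckle contrast K = sigma / mean over a sliding window.
            # Faster flow blurs the speckle pattern during the exposure,
            # lowering K; SCOT reconstructs flow from many such measurements.
            mean = uniform_filter(frame, size=window)
            mean_sq = uniform_filter(frame * frame, size=window)
            var = np.maximum(mean_sq - mean * mean, 0.0)  # clamp rounding error
            return np.sqrt(var) / np.maximum(mean, 1e-12)

        # Hypothetical raw speckle frame (a real system would use camera data):
        frame = np.random.rand(256, 256)
        K = speckle_contrast(frame)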

    Quantitative evaluation of atlas-based high-density diffuse optical tomography for imaging of the human visual cortex

    Image recovery in diffuse optical tomography (DOT) of the human brain often relies on accurate models of light propagation within the head. In the absence of subject-specific models for image reconstruction, the use of atlas-based models is showing strong promise. Although the use of limited rigid model registrations in DOT is partially understood, a detailed analysis relating errors in geometrical accuracy, light propagation in tissue, and subsequent errors in dynamic imaging of recovered focal activations in the brain has been lacking. In this work, 11 different rigid registration algorithms, across 24 simulated subjects, are evaluated for DOT studies in the visual cortex. Although there is a strong correlation (R² = 0.97) between geometrical surface error and internal light propagation error, the overall variation is minimal when analysing recovered focal activations in the visual cortex. While a subject-specific mesh gives the best results, with a 1.2 mm average location error, no algorithm produces errors greater than 4.5 mm. This work demonstrates that rigid registration for atlas-based imaging is a promising route when subject-specific models are not available.
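
    To illustrate the basic step underlying all of the rigid registrations evaluated here (this is the standard Kabsch/SVD least-squares alignment, not any of the paper's 11 specific algorithms), the sketch below finds the rotation R and translation t mapping atlas points onto subject points, assuming point correspondences such as scalp fiducials are already known; the fiducial coordinates are hypothetical:

        import numpy as np

        def rigid_register(src, dst):
            # Kabsch/SVD solution for R, t minimizing ||src @ R.T + t - dst||.
            mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
            H = (src - mu_s).T @ (dst - mu_d)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = mu_d - R @ mu_s
            return R, t

        # Hypothetical scalp fiducials (mm) in atlas space and subject space:
        atlas = np.array([[0.0, 90.0, 0.0], [0.0, -110.0, 0.0],
                          [75.0, 0.0, 10.0], [-75.0, 0.0, 10.0]])
        subject = atlas + np.array([2.0, -1.0, 3.0])  # toy case: pure translation
        R, t = rigid_register(atlas, subject)
        surface_error = np.linalg.norm(atlas @ R.T + t - subject, axis=1).mean()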

    Computational polarimetric microwave imaging

    We propose a polarimetric microwave imaging technique that exploits recent advances in computational imaging. We utilize a frequency-diverse cavity-backed metasurface, allowing us to demonstrate high-resolution polarimetric imaging using a single transceiver and a frequency sweep over the operational microwave bandwidth. The frequency-diverse metasurface imager greatly simplifies the system architecture compared with active arrays and other conventional microwave imaging approaches. We further develop the theoretical framework for computational polarimetric imaging and validate the approach experimentally using a multi-modal leaky cavity. The scalar approximation for the interaction between the radiated waves and the target, often applied in microwave computational imaging schemes, is thus extended to retrieve the susceptibility tensors, providing additional information about the targets. Computational polarimetry has relevance for existing systems in the field that extract polarimetric imagery, particularly for ground observation. A growing number of short-range microwave imaging applications can also benefit notably from computational polarimetry, particularly for imaging objects that are difficult to reconstruct under scalar approximations.
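
    A minimal sketch of the scalar computational-imaging model that the paper extends to the polarimetric case: each frequency point of the sweep contributes one row of a measurement matrix H, and the scene f is recovered from g = Hf by regularized inversion. The sizes, the random stand-in for H, and the Tikhonov parameter below are illustrative assumptions, not the paper's setup; the polarimetric extension would stack such matrices per transmit/receive polarization pair:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy frequency-diverse model: frequency point m yields one measurement
        # g[m] = <h_m, f>, where h_m is the pseudo-random field the cavity
        # radiates at that frequency and f is the (scalar) scene reflectivity.
        M, N = 200, 400                               # measurements, scene voxels
        H = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
        f_true = np.zeros(N, dtype=complex)
        f_true[[40, 180, 300]] = 1.0                  # a few point scatterers
        g = H @ f_true + 0.01 * rng.standard_normal(M)

        # Tikhonov-regularized least squares: f_hat = (H^H H + a I)^(-1) H^H g.
        a = 1.0
        f_hat = np.linalg.solve(H.conj().T @ H + a * np.eye(N), H.conj().T @ g)
        peaks = np.argsort(np.abs(f_hat))[-3:]        # should recover the scatterers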

    A new framework for an electrophotographic printer model

    Digital halftoning is a printing technology that creates the illusion of continuous-tone images on devices, such as electrophotographic printers, that can only produce a limited number of tone levels. Digital halftoning works because the human visual system has limited spatial resolution, which blurs the printed dots of the halftone image, creating the gray sensation of a continuous-tone image. Because the printing process is imperfect, it introduces distortions to the halftone image. The quality of the printed image depends, among other factors, on the complex interactions between the halftone image, the printer characteristics, the colorant, and the printing substrate. Printer models are used to assist in the development of new halftone algorithms designed to withstand the effects of printer distortions. For example, model-based halftone algorithms optimize the halftone image through an iterative process that integrates a printer model within the algorithm. The two main goals of a printer model are to provide accurate estimates of the tone and of the spatial characteristics of the printed halftone pattern. Various classes of printer models, from simple tone calibrations to complex mechanistic models, have been reported in the literature. Existing models have one or more of the following limiting factors: they only predict tone reproduction, they depend on the halftone pattern, they require complex calibrations or complex calculations, they are printer-specific, they reproduce unrealistic dot structures, or they are unable to adapt their responses to new data.

    The two research objectives of this dissertation are (1) to introduce a new framework for printer modeling and (2) to demonstrate the feasibility of such a framework in building an electrophotographic printer model. The proposed framework introduces the concept of modeling a printer as a texture transformation machine. The basic premise is that modeling the texture differences between the output printed images and the input images encompasses all printing distortions.

    The feasibility of the framework was tested with a case study modeling a monotone electrophotographic printer. The printer model was implemented as a bank of feed-forward neural networks, each one specialized in modeling a group of textural features of the printed halftone pattern. The textural features were obtained using a parametric representation of texture developed from a multiresolution decomposition proposed by other researchers. The textural properties of halftone patterns were analyzed and the key texture parameters to be modeled by the bank were identified. Guidelines for the multiresolution texture decomposition and for the model's operational parameters and limits were established. A method for selecting training sets based on the morphological properties of the halftone patterns was also developed. The model is fast and can continue to learn with additional training, and it is easy to implement because it only requires a calibrated scanner. The model was tested with halftone patterns representing a range of spatial characteristics found in halftoning. Results show that the model provides accurate predictions of the tone and the spatial characteristics when modeling halftone patterns individually, and close approximations when modeling multiple halftone patterns simultaneously. The success of the model justifies continued research into this new printer-model framework.
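
    For readers unfamiliar with the halftoning step that a printer model operates on, below is a minimal Floyd-Steinberg error-diffusion halftoner (a standard textbook algorithm, offered as background, not the dissertation's texture-transformation model): each pixel is thresholded to black or white and the quantization error is diffused onto unprocessed neighbors, producing the dot patterns the physical printer then distorts.

        import numpy as np

        def floyd_steinberg(img):
            # Threshold each pixel to 0 or 1 and diffuse the quantization error
            # onto unprocessed neighbors (weights 7/16, 3/16, 5/16, 1/16).
            f = img.astype(np.float64).copy()
            out = np.zeros_like(f)
            h, w = f.shape
            for y in range(h):
                for x in range(w):
                    new = 1.0 if f[y, x] >= 0.5 else 0.0
                    err = f[y, x] - new
                    out[y, x] = new
                    if x + 1 < w:
                        f[y, x + 1] += err * 7 / 16
                    if y + 1 < h:
                        if x > 0:
                            f[y + 1, x - 1] += err * 3 / 16
                        f[y + 1, x] += err * 5 / 16
                        if x + 1 < w:
                            f[y + 1, x + 1] += err * 1 / 16
            return out

        # A gray ramp becomes a dot pattern whose local density tracks tone:
        halftone = floyd_steinberg(np.tile(np.linspace(0.0, 1.0, 64), (64, 1)))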

    Computational Methods and Graphical Processing Units for Real-time Control of Tomographic Adaptive Optics on Extremely Large Telescopes

    Ground-based optical telescopes suffer from limited imaging resolution as a result of the effects of atmospheric turbulence on the incoming light. Adaptive optics technology has so far been very successful in correcting these effects, providing nearly diffraction-limited images. Extremely Large Telescopes will require more complex adaptive optics configurations that introduce the need for new mathematical models and optimal solvers. In addition, the amount of data to be processed in real time is greatly increased, making conventional computational methods and hardware inefficient, which motivates the study of advanced computational algorithms and their implementation on parallel processors. Graphical Processing Units (GPUs) are massively parallel processors that have so far demonstrated very large speed increases over CPUs and other devices, and they have high potential to meet the real-time constraints of adaptive optics systems. This thesis focuses on the study and evaluation of existing proposed computational algorithms with respect to computational performance, and on their implementation on GPUs. Two basic methods, one direct and one iterative, are implemented and tested; the results provide an evaluation of the basic concept upon which other algorithms are based and demonstrate the benefits of using GPUs for adaptive optics.
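
    A minimal sketch contrasting the two classes of wavefront reconstruction the thesis compares: a direct method that precomputes a regularized least-squares reconstructor offline and applies one matrix-vector multiply per frame, and an iterative conjugate-gradient solve that avoids forming the dense reconstructor. The toy sizes, random interaction matrix, and regularization value are illustrative assumptions, and plain NumPy stands in for a GPU implementation (a port would use, e.g., CuPy):

        import numpy as np

        rng = np.random.default_rng(1)
        n_slopes, n_acts = 4000, 800   # toy sizes; ELT-scale systems are far larger

        G = rng.standard_normal((n_slopes, n_acts))  # interaction matrix
        s = rng.standard_normal(n_slopes)            # measured slopes, one frame

        # Direct method: build the regularized reconstructor once, offline;
        # real-time control is then a single matrix-vector multiply per frame.
        A = G.T @ G + 1e-3 * np.eye(n_acts)
        R = np.linalg.solve(A, G.T)                  # (G^T G + aI)^(-1) G^T
        x_direct = R @ s

        # Iterative method: solve A x = G^T s with conjugate gradients each
        # frame, avoiding the dense reconstructor; this maps well to GPUs.
        def conjugate_gradient(A, b, iters=100, tol=1e-8):
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(iters):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        x_iter = conjugate_gradient(A, G.T @ s)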