Stellar Content from high resolution galactic spectra via Maximum A Posteriori
This paper describes STECMAP (STEllar Content via Maximum A Posteriori), a
flexible, non-parametric inversion method for the interpretation of the
integrated light spectra of galaxies, based on synthetic spectra of single
stellar populations (SSPs). We focus on the recovery of a galaxy's star
formation history and stellar age-metallicity relation. We use the high
resolution SSPs produced by PEGASE-HR to quantify the informational content of
the wavelength range 4000 - 6800 Angstroms.
A detailed investigation of the properties of the corresponding simplified
linear problem is performed using singular value decomposition (SVD), which
turns out to be a powerful tool for explaining and predicting the behaviour
of the inversion. We provide means of quantifying the fundamental
limitations of the
problem considering the intrinsic properties of the SSPs in the spectral range
of interest, as well as the noise in these models and in the data.
We performed a systematic simulation campaign and found that, when the time
elapsed between two bursts of star formation is larger than 0.8 dex, the
properties of each episode can be constrained with a precision of 0.04 dex in
age and 0.02 dex in metallicity from high quality data (R=10 000,
signal-to-noise ratio SNR=100 per pixel), not taking model errors into account.
The described methods and error estimates will be useful in the design and in
the analysis of extragalactic spectroscopic surveys.
Comment: 31 pages, 23 figures, accepted for publication in MNRA
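The SVD diagnostics described above can be illustrated generically. A minimal sketch of truncated-SVD inversion of a linear population-synthesis problem, with a synthetic basis standing in for the PEGASE-HR SSP spectra (all sizes, noise levels, and thresholds invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an SSP basis: n_lambda wavelength pixels
# by n_age single-stellar-population spectra (NOT the PEGASE-HR models).
n_lambda, n_age = 200, 10
B = np.abs(rng.normal(1.0, 0.3, size=(n_lambda, n_age)))

# True light fractions and a noisy "observed" spectrum y = B x + noise.
x_true = rng.uniform(0.0, 1.0, n_age)
y = B @ x_true + rng.normal(0.0, 0.01, n_lambda)

# SVD diagnoses the conditioning of the inversion: small singular
# values mark directions in population space that noise destroys.
U, s, Vt = np.linalg.svd(B, full_matrices=False)

# Truncated-SVD inversion: keep only the well-determined modes.
k = int(np.sum(s > 1e-3 * s[0]))
x_hat = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
```

Counting the singular values above a noise-dependent threshold is one simple way to quantify how many independent components of the star formation history the data can actually constrain.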
Power-Balanced Hybrid Optics Boosted Design for Achromatic Extended-Depth-of-Field Imaging via Optimized Mixed OTF
The power-balanced hybrid optical imaging system, introduced in this paper,
is a special design of a diffractive computational camera in which the image
is formed by a refractive lens and a Multilevel Phase Mask (MPM). This
system provides a long focal depth with low chromatic aberrations thanks to
the MPM and a high concentration of light energy due to the refractive lens.
We introduce the concept of optical power balance between the lens and the
MPM, which controls the contribution of each element to the modulation of
the incoming light. An additional unique feature of our MPM design is the
quantization of the MPM's shape in both the number of levels and the Fresnel
order (thickness) using a smoothing function. To optimize the optical power
balance as well as the MPM, we build a
fully-differentiable image formation model for joint optimization of optical
and imaging parameters for the proposed camera using Neural Network techniques.
Additionally, we optimize a single Wiener-like optical transfer function (OTF)
invariant to depth to reconstruct a sharp image. We numerically and
experimentally compare the designed system with its counterparts, lensless and
just-lens optical systems, for the visible wavelength interval (400-700)nm and
the depth-of-field range (0.5-m for the numerical and 0.5-2 m for the
experimental evaluation). The attained results demonstrate that the proposed
system equipped with the optimal OTF outperforms its counterparts (even when
they are used with an optimized OTF) in terms of reconstruction quality for
off-focus distances. The simulation results also reveal that optimizing the
optical power balance, the Fresnel order, and the number of levels is
essential for system performance, yielding an improvement of up to 5 dB in
PSNR with the optimized OTF compared to the counterpart lensless setup.
Comment: 18 pages, 14 figures
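The paper's depth-invariant Wiener-like OTF is an optimized quantity; a classic (non-optimized) Wiener filter conveys the underlying idea. A sketch with an invented Gaussian PSF and an assumed constant SNR:

```python
import numpy as np

def wiener_deconvolve(blurred, psf_origin, snr=100.0):
    """Classic Wiener filter: a regularized inverse OTF in Fourier space.

    `psf_origin` must already be centred at the array origin
    (e.g. via np.fft.ifftshift); `snr` is an assumed constant
    signal-to-noise ratio, not a measured one.
    """
    H = np.fft.fft2(psf_origin, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener-like OTF
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Toy example: blur a square with a Gaussian PSF, then restore it.
x = np.zeros((64, 64))
x[28:36, 28:36] = 1.0
yy, xx = np.mgrid[:64, :64]
psf = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
psf0 = np.fft.ifftshift(psf)                       # centre PSF at origin
blurred = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf0)))
restored = wiener_deconvolve(blurred, psf0)
```

In the paper, the corresponding filter is made invariant to depth and jointly optimized with the optics; here the filter is fixed by the PSF and the assumed SNR.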
Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data
This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the result regarding recovery of target coordinates using orbital sensor data.
Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bezier curves which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available for calculating the target position by a fully implemented system is the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-realtime processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
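Generating a target path from Bezier control points, as described above, can be sketched in a few lines (the control points and sampling density are invented for illustration):

```python
import numpy as np
from math import comb

def bezier(control_points, n_samples=100):
    """Evaluate a Bezier curve from its control points (Bernstein form)."""
    P = np.asarray(control_points, dtype=float)
    n = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    # Sum of Bernstein basis polynomials weighted by the control points.
    return sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * P[i]
               for i in range(n + 1))

# Cubic curve approximating a smooth target trajectory (x, y in km).
path = bezier([(0, 0), (2, 5), (6, 5), (8, 0)], n_samples=50)
```

The curve starts at the first control point and ends at the last, with the interior points shaping the path, which is what makes Bezier curves convenient for hand-designing representative trajectories.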
Deep learning-based diagnostic system for malignant liver detection
Cancer is the second most common cause of death in humans, and liver cancer is the fifth most common cause
of mortality. The prevention of deadly diseases requires timely, independent, accurate, and robust detection
of ailments by a computer-aided diagnostic (CAD) system. Executing such an intelligent CAD requires several
preliminary steps, including preprocessing, attribute analysis, and identification.
In recent studies, conventional techniques have been used to develop computer-aided diagnosis algorithms.
However, such traditional methods can severely affect the structural properties of processed images and
perform inconsistently because of the variable shape and size of the region of interest. Moreover, the
unavailability of sufficient datasets makes the performance of the proposed methods doubtful for commercial use.
To address these limitations, I propose novel methodologies in this dissertation. First, I modified a
generative adversarial network to perform deblurring and contrast adjustment on computed tomography
(CT) scans. Second, I designed a deep neural network with a novel loss function for fully automatic precise
segmentation of liver and lesions from CT scans. Third, I developed a multi-modal deep neural network
to integrate pathological data with imaging data to perform computer-aided diagnosis for malignant liver
detection.
The dissertation starts with background information that discusses the proposed study objectives and the workflow. Afterward, Chapter 2 reviews a general schematic for developing a computer-aided algorithm, including image acquisition techniques, preprocessing steps, feature extraction approaches, and machine learning-based prediction methods.
The first study proposed in Chapter 3 discusses blurred images and their possible effects on classification.
A novel multi-scale GAN network with residual image learning is proposed to deblur images. The second
method in Chapter 4 addresses the issue of low-contrast CT scan images. A multi-level GAN is utilized
to enhance images with well-contrasted regions. Thus, the enhanced images improve the cancer diagnosis
performance. Chapter 5 proposes a deep neural network for the segmentation of liver and lesions from
abdominal CT scan images. A modified U-Net with a novel loss function can precisely segment minute lesions.
Similarly, Chapter 6 introduces a multi-modal approach for diagnosing liver cancer variants: pathological data are integrated with CT scan images to diagnose liver cancer variants.
In summary, this dissertation presents novel algorithms for preprocessing and disease detection. Furthermore,
the comparative analysis validates the effectiveness of the proposed methods in computer-aided diagnosis.
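The dissertation's novel segmentation loss is not specified in this summary; the soft Dice loss, a common starting point for liver-lesion segmentation under severe class imbalance, illustrates the kind of objective involved (NumPy used as a stand-in for a deep learning framework):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|); eps avoids 0/0.

    Dice rewards overlap directly, which helps with minute lesions
    whose foreground pixels are vastly outnumbered by background.
    """
    pred, target = pred.ravel(), target.ravel()
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

# Toy mask: perfect overlap gives ~0 loss, disjoint masks give ~1.
mask = np.zeros((32, 32))
mask[10:20, 10:20] = 1.0
```

In practice the loss is computed on soft network outputs and minimized by backpropagation; the NumPy version only shows the arithmetic.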
High-quality computed tomography using advanced model-based iterative reconstruction
Computed Tomography (CT) is an essential technology for the treatment, diagnosis, and study of disease, providing detailed three-dimensional images of patient anatomy. While CT image quality and resolution have improved in recent years, many clinical tasks require visualization and study of structures beyond current system capabilities. Model-Based Iterative Reconstruction (MBIR) techniques offer improved image quality over traditional methods by incorporating more accurate models of the imaging physics. In this work, we seek to improve image quality by including high-fidelity models of CT physics in an MBIR framework. Specifically, we measure and model spectral effects, scintillator blur, focal-spot blur, and gantry motion blur, paying particular attention to shift-variant blur properties and noise correlations. We derive a novel MBIR framework that is capable of modeling a wide range of physical effects, and use this framework with the physical models to reconstruct data from various systems. Physical models of varying degrees of accuracy are compared with each other and with more traditional techniques. Image quality is assessed with a variety of metrics, including bias, noise, and edge response, as well as task-specific metrics such as segmentation quality and material density accuracy. These results show that improving the model accuracy generally improves image quality, as the measured data is used more efficiently. For example, modeling focal-spot blur, scintillator blur, and noise correlations enables more accurate trabecular bone visualization and trabecular thickness calculation as compared to methods that ignore blur or model blur but ignore noise correlations. Additionally, MBIR with advanced modeling typically outperforms traditional methods, either with more accurate reconstructions or by including physical effects that cannot otherwise be modeled, such as shift-variant focal-spot blur.
This work provides a means to produce high-quality and high-resolution CT reconstructions for a wide variety of systems with different hardware and geometries, providing new tradeoffs in system design, enabling new applications in CT, and ultimately improving patient care.
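At its core, MBIR poses reconstruction as optimization of a physics-based forward model plus a prior. A toy sketch with a dense matrix standing in for the CT system model (sizes, noise, and the quadratic prior are all invented; real MBIR uses the projection, blur, and noise-correlation models described above):

```python
import numpy as np

def mbir_ls(A, y, n_iter=200, step=None, beta=0.1):
    """Toy model-based iterative reconstruction:
    minimize ||A x - y||^2 + beta ||x||^2 by gradient descent.
    A is an explicit system matrix standing in for the full CT
    forward model; beta is a simple quadratic regularizer."""
    m, n = A.shape
    if step is None:
        # Step size from the Lipschitz constant of the gradient.
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 + beta)
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + beta * x
        x -= step * grad
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(120, 40))          # invented "system model"
x_true = rng.normal(size=40)
y = A @ x_true + 0.01 * rng.normal(size=120)
x_rec = mbir_ls(A, y)
```

The point of the framework in the work above is that A can encode arbitrarily accurate physics (spectral effects, shift-variant blur, noise correlations); the optimization machinery stays the same.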
Ensemble deep learning: A review
Ensemble learning combines several individual models to obtain better
generalization performance. Currently, deep learning models with multilayer
processing architectures are showing better performance than shallow or
traditional classification models. Deep ensemble learning models combine the
advantages of both deep learning and ensemble learning so that the final
model has better generalization performance. This paper reviews
state-of-the-art deep ensemble models and hence serves as an extensive
summary for researchers. The ensemble models are broadly categorised into
bagging, boosting, and stacking ensembles; negative-correlation-based deep
ensemble models; explicit/implicit ensembles; homogeneous/heterogeneous
ensembles; decision fusion strategies; and unsupervised, semi-supervised,
reinforcement learning, online/incremental, and multilabel-based deep
ensemble models. Applications of deep ensemble models in different domains
are also briefly discussed. Finally, we conclude this paper with some future
recommendations and research directions.
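A minimal example of one decision fusion strategy mentioned above, soft voting over member models' class probabilities (the member outputs here are invented numbers, not real model predictions):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Soft-voting decision fusion: average the member models'
    class-probability outputs, then take the arg-max class."""
    return np.argmax(np.mean(prob_list, axis=0), axis=1)

# Three hypothetical members' outputs for 3 samples, 2 classes.
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p2 = np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7]])
p3 = np.array([[0.7, 0.3], [0.3, 0.7], [0.1, 0.9]])
labels = ensemble_predict([p1, p2, p3])   # → array([0, 1, 1])
```

On the second sample the members disagree (one votes class 0, two vote class 1), and averaging resolves the disagreement, which is the basic mechanism by which ensembles improve generalization.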
BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction
The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn even more attention, not just from the field of computer science but also from a variety of scientific fields. However, various challenges persist surrounding the formulation of a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., failure to mark true edges), accuracy, and a consistent response to a single edge. Moreover, it should be pointed out that most of the work in the area of feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In the image processing subfield, where the needs constantly change, we must equally change the way we think.
In this digital world, where the use of images for a variety of purposes continues to increase, researchers who are serious about addressing the aforementioned limitations must be able to think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, digital image feature detection methodology using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a two-dimensional (2D) signal into its bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is as such dubbed BEMDEC, indicating its ability to detect edges, corners, and curves. In addition to the application of BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria, and boundary adjustment made the realization of this multi-feature detector possible. Further application of two morphological operators, binarization and thinning, adds to the quality of the operator.
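The sifting process at the heart of (B)EMD can be sketched in one dimension (linear envelopes instead of the flexible envelope estimation described above, and a hypothetical test signal; BEMD applies the same idea in 2D):

```python
import numpy as np

def sift_once(signal):
    """One sifting step of empirical mode decomposition (1-D for
    brevity): subtract the mean of the upper and lower extrema
    envelopes. Real EMD uses spline envelopes and iterates to a
    stopping criterion; linear envelopes are a simplification."""
    idx = np.arange(len(signal))
    d = np.diff(signal)
    # Interior local maxima/minima from sign changes of the slope.
    maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
    upper = np.interp(idx, maxima, signal[maxima])
    lower = np.interp(idx, minima, signal[minima])
    return signal - 0.5 * (upper + lower)

t = np.linspace(0, 4 * np.pi, 400)
sig = np.sin(5 * t) + 0.5 * t      # fast oscillation on a slow trend
imf = sift_once(sig)               # ~ the oscillatory component
```

Repeating this step until a stopping criterion is met extracts one intrinsic mode function; the residue is then sifted again for the next mode, and in BEMDEC the resulting 2D modes are the basis for edge, corner, and curve detection.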