    Detection of multiplicative noise in stationary random processes using second- and higher order statistics

    This paper addresses the problem of detecting the presence of colored multiplicative noise when the information process can be modeled as a parametric ARMA process. For the case of zero-mean multiplicative noise, a cumulant-based suboptimal detector is studied. This detector tests the nullity of a specific cumulant slice. A second detector is developed for nonzero-mean multiplicative noise; it filters the data with an estimated AR filter, and cumulants of the residual data are then shown to be well suited to the detection problem. Theoretical expressions for the asymptotic probability of detection are given, and simulation-derived finite-sample ROC curves are shown for different sets of model parameters.
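
    The paper's detectors are model-specific; purely as an illustration of testing the nullity of a cumulant slice, the sketch below estimates a fourth-order cumulant slice (which vanishes for Gaussian data but not under multiplicative noise) and applies a crude segment-based z-test. The slice choice, the variance estimate, and the toy signals are assumptions, not the paper's construction.

        import numpy as np

        def c4_slice(x, tau):
            # Sample fourth-order cumulant slice cum(x_t, x_t, x_{t+tau}, x_{t+tau})
            # of a zero-mean series; it vanishes for Gaussian (noise-free) data.
            x = x - x.mean()
            a, b = x[:len(x) - tau] if tau else x, x[tau:]
            return np.mean(a**2 * b**2) - np.mean(a**2) * np.mean(b**2) - 2 * np.mean(a * b)**2

        def detect(x, tau=0, n_seg=16, z_crit=3.0):
            # Crude segment-based z-test of the slice's nullity; the paper instead
            # derives the asymptotic variance of the cumulant estimator.
            vals = np.array([c4_slice(s, tau) for s in np.array_split(x, n_seg)])
            z = vals.mean() / (vals.std(ddof=1) / np.sqrt(n_seg))
            return abs(z) > z_crit

        rng = np.random.default_rng(0)
        s = rng.standard_normal(8192)                   # Gaussian information process
        m = s * (1 + 0.5 * rng.standard_normal(8192))   # zero-mean multiplicative noise
        print(detect(s), detect(m))                     # typically: False True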

    Bayesian off-line detection of multiple change-points corrupted by multiplicative noise: application to SAR image edge detection

    This paper addresses the problem of Bayesian off-line change-point detection in synthetic aperture radar images. The minimum mean square error and maximum a posteriori estimators of the change-point positions are studied. Neither estimator can be implemented directly because of optimization or integration problems, so a practical implementation using Markov chain Monte Carlo methods is proposed. This implementation requires a priori knowledge of the so-called hyperparameters, and a hyperparameter estimation procedure is proposed that alleviates the requirement of knowing their values. Simulation results on synthetic signals and synthetic aperture radar images are presented.
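
    The paper's likelihood models multiplicative speckle; as a simplified stand-in, the sketch below samples the position of a single change-point in a piecewise-constant Gaussian series with a random-walk Metropolis chain (the segment means are profiled out rather than sampled, and all numbers are made up).

        import numpy as np

        rng = np.random.default_rng(1)
        y = np.concatenate([rng.normal(1.0, 0.3, 120), rng.normal(1.6, 0.3, 80)])
        n = len(y)

        def loglike(tau):
            # Gaussian log-likelihood with each segment's mean profiled out.
            a, b = y[:tau], y[tau:]
            return -0.5 * (((a - a.mean())**2).sum() + ((b - b.mean())**2).sum()) / 0.3**2

        samples, tau = [], n // 2
        for _ in range(5000):
            prop = tau + rng.integers(-5, 6)        # symmetric random-walk proposal
            if 2 <= prop <= n - 2 and np.log(rng.random()) < loglike(prop) - loglike(tau):
                tau = prop
            samples.append(tau)
        print("posterior mode of change-point position:", np.bincount(samples).argmax())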

    Independent component analysis (ICA) applied to ultrasound image processing and tissue characterization

    As a complicated, ubiquitous phenomenon in ultrasound imaging, speckle can be treated either as annoying noise that needs to be reduced or as a source from which diagnostic information can be extracted to reveal the underlying properties of tissue. In this study, the application of Independent Component Analysis (ICA), a relatively recent statistical signal processing tool, to both speckle texture analysis and despeckling of B-mode ultrasound images was investigated. It is believed that higher order statistics may provide extra information about the speckle texture beyond that provided by first and second order statistics alone. However, the higher order statistics of speckle texture are still not clearly understood and are very difficult to model analytically, and any direct manipulation of higher order statistics is computationally prohibitive. On the one hand, many conventional ultrasound speckle texture analysis algorithms use only first or second order statistics; on the other hand, many multichannel filtering approaches use pre-defined analytical filters that are not adaptive to the data.

    In this study, an ICA-based multichannel filtering texture analysis algorithm, which exploits higher order statistics and adapts to the data, was proposed and tested on numerically simulated homogeneous speckle textures. The ICA filters were learned directly from the training images. Histogram regularization was conducted to make the speckle images quasi-stationary in the wide sense and thus amenable to an ICA algorithm. Both Principal Component Analysis (PCA) and a greedy algorithm were used to reduce the dimension of the feature space. Finally, Support Vector Machines (SVM) with a Radial Basis Function (RBF) kernel were chosen as the classifier to achieve the best classification accuracy. Several representative conventional methods, including both low and high order statistics based methods, and both filtering and non-filtering methods, were chosen for comparison. The numerical experiments showed that the proposed ICA-based algorithm outperforms the comparison algorithms in many cases. Two-component texture segmentation experiments showed that the proposed algorithm can segment two visually very similar yet different texture regions with rather fuzzy boundaries and almost the same mean and variance. By simulating speckle whose first order statistics approach the Rayleigh model gradually from different non-Rayleigh models, the experiments reveal to some extent how the behavior of higher order statistics changes with the underlying property of the tissue. When the speckle approaches the Rayleigh model, both second and higher order statistics lose their texture differentiation capability; when the speckle tends toward non-Rayleigh models, methods based on higher order statistics show a strong advantage over those based solely on first or second order statistics. The proposed algorithm may find clinical application in the early detection of soft tissue disease, and may also help in understanding the ultrasound speckle phenomenon from the perspective of higher order statistics.

    For the despeckling problem, an algorithm was proposed that adapts the ICA Sparse Code Shrinkage (ICA-SCS) method to B-mode ultrasound images by applying an appropriate preprocessing step proposed by other researchers. The preprocessing step makes the speckle noise much closer to white Gaussian noise (WGN) and hence more amenable to a denoising algorithm such as ICA-SCS, which is designed strictly for additive WGN. A discussion is given on how to obtain noise-free training image samples in various ways. The experimental results showed that the proposed method outperforms several classical methods chosen for comparison, including first and second order statistics based methods (such as the Wiener filter) and multichannel filtering methods (such as wavelet shrinkage), in both speckle reduction and edge preservation.
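
    Implementation details are not given in the abstract; the following minimal sketch shows the general shape of patch-based ICA shrinkage, assuming scikit-learn's FastICA, 8x8 patches, and a fixed soft threshold in place of the density-matched SCS nonlinearity. The preprocessing that renders speckle approximately additive (a log transform is a common choice) is left to the caller.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.feature_extraction.image import (extract_patches_2d,
                                                      reconstruct_from_patches_2d)

        def despeckle_scs(noisy, train, patch=(8, 8), n_comp=32, thresh=0.1):
            # Learn ICA filters from patches of a noise-free training image,
            # shrink the sparse codes of the noisy image, and reconstruct.
            ica = FastICA(n_components=n_comp, random_state=0)
            ica.fit(extract_patches_2d(train, patch).reshape(-1, patch[0] * patch[1]))
            P = extract_patches_2d(noisy, patch).reshape(-1, patch[0] * patch[1])
            S = ica.transform(P)
            S = np.sign(S) * np.maximum(np.abs(S) - thresh, 0.0)  # soft shrinkage
            P = ica.inverse_transform(S).reshape(-1, *patch)
            return reconstruct_from_patches_2d(P, noisy.shape)    # average overlaps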

    Diffuse Gas in Retired Galaxies: Nebular Emission Templates and Constraints on the Sources of Ionization

    We present emission line templates for passively evolving ("retired") galaxies, useful for investigating the evolution of the ISM in these galaxies and for characterizing their high-temperature source populations. The templates are based on high signal-to-noise (>800) co-added spectra (3700–6800 Å) of ~11,500 gas-rich Sloan Digital Sky Survey galaxies devoid of star formation and active galactic nuclei. Stacked spectra are provided for the entire sample and for sub-samples binned by mean stellar age. In Johansson et al. (2014), these spectra provided the first measurements of the He II 4686 Å line in passively evolving galaxies, and the observed He II/Hβ ratio constrained the contribution of accreting white dwarfs (the "single-degenerate" scenario) to the type Ia supernova rate. In this paper, the full range of unambiguously detected emission lines is presented. Comparison of the observed [O I] 6300 Å/Hα ratio with photoionization models further constrains any high-temperature single-degenerate scenario for type Ia supernovae (with 1.5 ≲ T/10⁵ K ≲ 10) to ≲3–6% of the observed rate in the youngest age bin (i.e. the highest SN Ia rate). Hence, for the same temperatures, in the presence of an ambient population of post-AGB stars, we exclude additional high-temperature sources with a combined ionizing luminosity of ≈1.35×10³⁰ L⊙/M⊙,* for stellar populations with mean ages of 1–4 Gyr. Furthermore, we investigate the extinction affecting both the stellar and the nebular continuum; the latter shows about five times higher values, contradicting an isotropic distribution of dust and gas, which would yield similar extinction for both. (Accepted for publication in MNRAS, 16 pages, 12 figures.)
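
    As a minimal illustration of the co-adding step behind such templates (all inputs here are hypothetical, and the real pipeline also subtracts the stellar continuum before measuring lines), a rest-frame median stack might look like:

        import numpy as np

        def stack_spectra(waves, fluxes, redshifts, grid=np.arange(3700.0, 6800.0, 1.0)):
            # Shift each spectrum to its rest frame (wavelengths assumed ascending),
            # resample onto a common grid, normalize, and median co-add.
            shifted = []
            for w, f, z in zip(waves, fluxes, redshifts):
                rest = w / (1.0 + z)
                fi = np.interp(grid, rest, f, left=np.nan, right=np.nan)
                shifted.append(fi / np.nanmedian(fi))
            return np.nanmedian(np.vstack(shifted), axis=0)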

    Statistical process control of mortality series in the Australian and New Zealand Intensive Care Society (ANZICS) adult patient database: implications of the data generating process

    For the ANZICS Centre for Outcome and Resource Evaluation (CORE) of the Australian and New Zealand Intensive Care Society (ANZICS).
    BACKGROUND: Statistical process control (SPC), an initiative from the industrial sphere, has recently been applied in health care and public health surveillance. SPC methods assume independent observations, and process autocorrelation has been associated with an increased false-alarm frequency.
    METHODS: Monthly mean raw mortality (at hospital discharge) time series, 1995–2009, at the individual intensive care unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for (i) series autocorrelation and seasonality was sought using (partial) autocorrelation ((P)ACF) function displays and classical series decomposition, and (ii) "in-control" status was assessed using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site, slope as APACHE III score) logistic regression model, generating an expected mortality series. Time-series methods were applied to an exemplar complete ICU series (1995 to end-2009) via the Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance.
    RESULTS: The overall data set, 1995–2009, consisted of 491,324 records from 137 ICU sites; average raw mortality was 14.07%; average (SD) raw and expected mortalities ranged from 0.012 (0.113) and 0.013 (0.045) to 0.296 (0.457) and 0.278 (0.247) respectively. For the raw mortality series, 71 sites had continuous data for assessment up to or beyond lag 40, and 35% had autocorrelation through to lag 40; of 36 sites with continuous data for ≥72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model with GARCH effects displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model.
    CONCLUSIONS: The data-generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues.
    John L Moran, Patricia J Solomon
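
    As a minimal illustration of the EWMA chart used for "in-control" assessment (without the risk adjustment from the random-coefficient logistic model, and with a toy series in place of ANZICS data):

        import numpy as np

        def ewma_chart(x, lam=0.2, L=3.0):
            # EWMA statistic with time-varying 3-sigma limits:
            # Var(z_t) = sigma^2 * lam/(2-lam) * (1 - (1-lam)^(2t)).
            mu, sigma = x.mean(), x.std(ddof=1)
            z = np.empty(len(x))
            z[0] = mu
            for t in range(1, len(x)):
                z[t] = lam * x[t] + (1 - lam) * z[t - 1]
            t = np.arange(1, len(x) + 1)
            half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam)**(2 * t)))
            return z, mu - half, mu + half

        rng = np.random.default_rng(2)
        monthly_mortality = 0.14 + 0.02 * rng.standard_normal(120)  # toy series
        z, lo, hi = ewma_chart(monthly_mortality)
        print("out-of-control months:", np.flatnonzero((z < lo) | (z > hi)))

    Autocorrelated input inflates the false-alarm rate of such a chart, which is why the paper charts the white-noise residuals of a seasonal ARMA/GARCH fit instead of the raw series.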

    A Tunable-Q wavelet transform and quadruple symmetric pattern based EEG signal classification method

    Electroencephalography (EEG) signals have been widely used to diagnose brain diseases, for instance epilepsy, Parkinson's Disease (PD) and Multiple Sclerosis (MS), and many machine learning methods have been proposed for automated disease diagnosis from EEG signals. In this work, a multilevel machine learning method is presented to diagnose epilepsy. The proposed multilevel EEG classification method consists of pre-processing, feature extraction, feature concatenation, feature selection and classification phases. To create the levels, the Tunable-Q wavelet transform (TQWT) is chosen, and 25 sub-bands of frequency coefficients are calculated with TQWT in the pre-processing phase. In the feature extraction phase, the quadruple symmetric pattern (QSP) is chosen as feature extractor and extracts 256 features from the raw EEG signal and from each of the 25 sub-bands. In the feature selection phase, neighborhood component analysis (NCA) is used, and the 128, 256, 512 and 1024 most significant features are selected. In the classification phase, a k-nearest neighbors (kNN) classifier is utilized. The proposed method is tested on seven cases of the Bonn EEG dataset and achieves a 98.4% success rate for the 5-class case; it can therefore be validated further on larger datasets.
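
    Neither TQWT nor QSP has a standard Python implementation at hand, so the sketch below substitutes PyWavelets sub-bands with simple per-band statistics for the paper's TQWT/QSP features, and univariate selection for NCA; it only illustrates the multilevel extract-select-classify shape of the pipeline on synthetic signals.

        import numpy as np
        import pywt
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        def extract_features(sig, wavelet="db4", level=5):
            # Stand-in feature extractor: statistics of wavelet sub-bands.
            feats = []
            for band in pywt.wavedec(sig, wavelet, level=level):
                feats += [band.mean(), band.std(), np.abs(band).max()]
            return np.array(feats)

        rng = np.random.default_rng(3)
        X = np.array([extract_features(rng.standard_normal(4096) * (1 + c))
                      for c in range(5) for _ in range(40)])   # 5 toy classes
        y = np.repeat(np.arange(5), 40)

        clf = make_pipeline(SelectKBest(f_classif, k=12),      # NCA stand-in
                            KNeighborsClassifier(n_neighbors=3))
        print("training accuracy:", clf.fit(X, y).score(X, y))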

    A Deconvolution Framework with Applications in Medical and Biological Imaging

    A deconvolution framework is presented in this thesis and applied to several problems in medical and biological imaging. The framework is designed to contain state-of-the-art deconvolution methods, to be easily expandable and to allow different components to be combined arbitrarily. Deconvolution is an inverse problem, and in order to cope with its ill-posed nature, suitable regularization techniques and additional restrictions are required.

    A main objective of deconvolution methods is to restore degraded images acquired by fluorescence microscopy, which has become an important tool in the biological and medical sciences. Fluorescence microscopy images are degraded by out-of-focus blurring and noise, and the deconvolution algorithms used to restore them are usually called deblurring methods. Many deblurring methods proposed in the last decade to restore such images are part of the deconvolution framework. In addition, existing deblurring techniques are improved and new components for the deconvolution framework are developed; a considerable improvement was obtained by combining a state-of-the-art regularization technique with an additional non-negativity constraint. A real biological screen analysing a specific protein in human cells is presented and shows the need to analyse structural information in fluorescence images; such analysis requires good image quality, and providing it where it is lacking is the aim of the deblurring methods.

    For a reliable understanding of cells and cellular processes, high-resolution 3D images of the investigated cells are necessary. However, the ability of fluorescence microscopes to image a cell in 3D is limited, since the resolution along the optical axis is a factor of three worse than the transversal resolution. Standard microscopy image deblurring techniques improve the resolution, but the problem of lower resolution along the optical axis remains. It is, however, possible to overcome this problem using axial tomography, which provides tilted views of the object by rotating it under the microscope. The rotated images contain additional information about the object which can be used to improve the resolution along the optical axis. In this thesis, a method to reconstruct a high-resolution axial tomography image on the basis of the developed deblurring methods is presented.

    The deconvolution methods are also used to reconstruct the dose distribution in proton therapy on the basis of measured PET images. Positron emitters are activated by proton beams, but a PET image is not directly proportional to the delivered radiation dose distribution. A PET signal can be predicted by a convolution of the planned dose with specific filter functions. In this thesis, a dose reconstruction method based on PET images which reverses this convolution is presented, and the potential to reconstruct the actually delivered dose distribution from measured PET images is investigated. Finally, a new denoising method using higher-order statistical information of a given Gaussian noise signal is presented and compared to state-of-the-art denoising methods.
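
    The framework's own components are not reproduced here; as a minimal sketch of one classical deblurring building block with an inherent non-negativity constraint (the thesis combines such iterations with explicit regularization), a Richardson-Lucy loop in numpy/scipy:

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
            # Multiplicative updates keep the estimate non-negative by
            # construction; `observed` is a non-negative float array.
            estimate = np.full(observed.shape, observed.mean())
            psf_mirror = psf[::-1, ::-1]
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = observed / (blurred + eps)
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate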

    Observer-based Fault Diagnosis: Applications to Exothermic Continuous Stirred Tank Reactors

    For chemical engineering dynamic systems, there is an increasing demand for better process performance, high product quality, reliability and safety, maximum cost efficiency and lower environmental impact. Improved individual process components and advanced automatic control techniques have brought significant benefits to the chemical industry. However, fault-free operation of processes cannot be guaranteed; timely fault diagnosis and proper management can help to avoid, or at least minimize, the undesirable consequences. Among the many techniques for fault diagnosis, observer-based methods have been widely studied and have proved efficient. The basic idea of an observer-based approach is to generate a residual signal which carries the information of specific faults, as well as the information of process disturbances, model uncertainties, other faults and measurement noise. For fault diagnosis, the residual should be sensitive to faults and insensitive to other unknown inputs; with this feature, faults can be easily detected and may be isolated and identified. This thesis applies an observer-based fault diagnosis method to three exothermic CSTR case studies. In order to improve the operational safety of exothermic CSTRs, with their risks of runaway reactions and explosion, fault diagnostic observers are built for fault detection, isolation and identification. For this purpose, the most common types of faults have been studied in different reaction systems. For each fault, a specific observer and corresponding residual are built, which work as an indicator of that fault and are robust to other unknown inputs. To design linear observers, the original nonlinear system is linearized at steady state and the observer is designed for the linearized system; in the simulations, however, the observer is tested on the nonlinear system. In addition, an efficient and effective general MATLAB program has been developed for fault diagnosis observer design. Extensive simulation studies have been performed to test the fault diagnostic observers on exothermic CSTRs. The results show that the proposed fault diagnosis scheme can be directly implemented and works well for diagnosing faults in exothermic chemical reactors.
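
    The thesis's designs are in MATLAB for specific CSTR models; the sketch below shows the same residual-generation idea in Python for a hypothetical two-state linear system: a Luenberger observer whose output residual stays near zero until a sensor fault is injected.

        import numpy as np
        from scipy.signal import place_poles

        # Toy linearized system (hypothetical A, B, C, not the thesis's CSTR).
        A = np.array([[-1.0, 0.5], [0.2, -2.0]])
        B = np.array([[1.0], [0.0]])
        C = np.array([[1.0, 0.0]])
        L = place_poles(A.T, C.T, [-4.0, -5.0]).gain_matrix.T  # observer gain by duality

        dt, T = 0.01, 1000
        x, xh, residual = np.array([0.5, -0.2]), np.zeros(2), []
        for k in range(T):
            u = np.array([1.0])
            f = 0.3 if k > 500 else 0.0          # additive sensor fault from k=500
            y = C @ x + f
            x = x + dt * (A @ x + B @ u)                          # Euler step, plant
            xh = xh + dt * (A @ xh + B @ u + L @ (y - C @ xh))    # Euler step, observer
            residual.append((y - C @ xh).item())
        print("residual before/after fault:", residual[499], residual[999])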

    Primordial Non-Gaussianity from Biased Tracers: Likelihood Analysis of Real-Space Power Spectrum and Bispectrum

    Upcoming galaxy redshift surveys promise to significantly improve current limits on primordial non-Gaussianity (PNG) through measurements of 2- and 3-point correlation functions in Fourier space. However, realizing the full potential of these datasets is contingent upon having both accurate theoretical models and optimized analysis methods. Focusing on the local model of PNG, parameterized by f_NL, we perform a Markov chain Monte Carlo analysis to confront perturbation theory predictions of the halo power spectrum and bispectrum in real space against a suite of N-body simulations. We model the halo bispectrum at tree level, including all contributions linear and quadratic in f_NL, and the halo power spectrum at 1-loop, including tree-level terms up to quadratic order in f_NL and all loops induced by local PNG linear in f_NL. Keeping the cosmological parameters fixed, we examine the effect of informative priors on the linear non-Gaussian bias parameter on the statistical inference of f_NL. A conservative analysis of the combined power spectrum and bispectrum, in which only loose priors are imposed and all parameters are marginalized over, can improve the constraint on f_NL by more than a factor of 5 relative to the power spectrum-only measurement. Imposing a strong prior on b_φ, or assuming bias relations for both b_φ and b_φδ (motivated by a universal mass function assumption), improves the constraints further by a factor of a few. In this case, however, we find a significant systematic shift in the inferred value of f_NL if the same range of wavenumbers is used. Likewise, a Poisson noise assumption can lead to significant systematics, and it is thus essential to leave all the stochastic amplitudes free.
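
    The actual likelihood involves the 1-loop power spectrum and tree-level bispectrum; as a toy stand-in, the sketch below infers f_NL with a random-walk Metropolis chain from a synthetic power spectrum carrying an illustrative k^-2 scale-dependent-bias term (all numbers are made up).

        import numpy as np

        rng = np.random.default_rng(4)
        k = np.linspace(0.01, 0.1, 20)

        def model(fnl, b1=2.0):
            # Toy halo power spectrum: f_NL enters via a k^-2 bias correction.
            return (b1 + 3.0 * fnl / k**2 * 1e-4)**2 * k**-1.5

        sigma = 0.05 * model(5.0)
        data = model(5.0) + sigma * rng.standard_normal(k.size)  # truth: f_NL = 5

        def logpost(fnl):
            # Gaussian likelihood, flat prior on f_NL.
            return -0.5 * np.sum(((data - model(fnl)) / sigma)**2)

        chain, f = [], 0.0
        for _ in range(20000):
            prop = f + rng.normal(0, 2.0)
            if np.log(rng.random()) < logpost(prop) - logpost(f):
                f = prop
            chain.append(f)
        chain = np.array(chain[5000:])                            # drop burn-in
        print(f"f_NL = {chain.mean():.2f} +/- {chain.std():.2f}")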