
    Cellular neural networks, Navier-Stokes equation and microarray image reconstruction

    Copyright © 2011 IEEE. Although the last decade has witnessed a great deal of improvement in microarray technology, major developments are still needed in all of its main stages, including image processing. Some hardware implementations of microarray image processing have been proposed in the literature and have proved to be promising alternatives to the currently available software systems. However, the main drawback of those approaches is that they do not quantify the gene spot realistically, i.e., without making assumptions about the image surface. Our aim in this paper is to present a new image-reconstruction algorithm using a cellular neural network that solves the Navier–Stokes equation. This algorithm offers a robust method for estimating the background signal within the gene-spot region. The MATCNN toolbox for Matlab is used to test the proposed method. Quantitative comparisons are carried out, in terms of objective criteria, between our approach and other available methods. It is shown that the proposed algorithm gives highly accurate and realistic measurements in a fully automated manner and within a remarkably short time.
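
    A minimal sketch of the background-estimation idea: OpenCV's INPAINT_NS flag implements a Navier-Stokes-based inpainting that propagates surrounding background intensities into a masked gene-spot region. This is not the authors' MATCNN-based implementation; the input file name and the circular spot mask are assumptions for illustration.

```python
import cv2
import numpy as np

# Hypothetical single-channel microarray image (8-bit grayscale).
img = cv2.imread("microarray_channel.png", cv2.IMREAD_GRAYSCALE)

# Hypothetical circular mask marking one gene-spot region to reconstruct.
mask = np.zeros(img.shape, dtype=np.uint8)
cv2.circle(mask, (64, 64), 10, 255, -1)

# Fill the masked spot by solving a Navier-Stokes-type PDE, propagating
# the surrounding background surface into the hole.
background = cv2.inpaint(img, mask, 5, cv2.INPAINT_NS)

# Background-corrected spot intensity: original minus estimated background.
spot = img.astype(float) - background.astype(float)
print("mean corrected spot intensity:", spot[mask > 0].mean())
```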

    Wavelet-based noise reduction of cDNA microarray images

    The advent of microarray imaging technology has led to enormous progress in the life sciences by allowing scientists to analyze the expression of thousands of genes at a time. For complementary DNA (cDNA) microarray experiments, the raw data are a pair of red and green channel images corresponding to the treatment and control samples. These images are contaminated by a high level of noise due to the numerous noise sources affecting the image formation. A major challenge of microarray image analysis is the extraction of accurate gene expression measurements from the noisy microarray images. A crucial step in this process is denoising, which consists of reducing the noise in the observed microarray images while preserving the signal information as much as possible. This thesis deals with the problem of developing novel methods for reducing noise in cDNA microarray images for accurate estimation of the gene expression levels. Denoising methods based on the wavelet transform have shown significant success when applied to natural images. However, these methods are not very efficient for reducing noise in cDNA microarray images. An important reason for this is that existing methods are only capable of processing the red and green channel images separately. In doing so, they ignore the signal correlation as well as the noise correlation that exists between the wavelet coefficients of the two channels. The primary objective of this research is to design efficient wavelet-based noise reduction algorithms for cDNA microarray images that take into account these inter-channel dependencies by 'jointly' estimating the noise-free coefficients in both channels. Denoising algorithms are developed using two types of wavelet transforms, namely, the frequently-used discrete wavelet transform (DWT) and the complex wavelet transform (CWT). The main advantage of using the DWT for denoising is that this transform is computationally very efficient. In order to obtain a better denoising performance for microarray images, however, the CWT is preferred to the DWT because the former has good directional selectivity properties that are necessary for better representation of the circular edges of spots. The linear minimum mean squared error and maximum a posteriori estimation techniques are used to develop bivariate estimators for the noise-free coefficients of the two images. These estimators are derived by utilizing appropriate joint probability density functions for the image coefficients as well as the noise coefficients of the two channels. Extensive experiments are carried out on a large set of cDNA microarray images to evaluate the performance of the proposed denoising methods as compared to the existing ones. Comparisons are made using standard metrics such as the peak signal-to-noise ratio (PSNR) for measuring the amount of noise removed from the pixels of the images, and the mean absolute error for measuring the accuracy of the estimated log-intensity ratios obtained from the denoised version of the images. Results indicate that the proposed denoising methods that are developed specifically for the microarray images do, indeed, lead to more accurate estimation of gene expression levels. Thus, it is expected that the proposed methods will play a significant role in improving the reliability of the results obtained from practical microarray experiments.
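
    A simplified stand-in for the joint estimation idea: shrink the red and green detail coefficients by their joint two-channel magnitude rather than channel-by-channel. The thesis derives LMMSE and MAP bivariate estimators from joint density models; the fixed threshold used here is only an assumption.

```python
import numpy as np
import pywt

def joint_denoise(red, green, wavelet="db4", level=3, thr=15.0):
    # Decompose both channels with the same wavelet.
    cr = pywt.wavedec2(red.astype(float), wavelet, level=level)
    cg = pywt.wavedec2(green.astype(float), wavelet, level=level)
    out_r, out_g = [cr[0]], [cg[0]]            # keep approximation bands
    for dr, dg in zip(cr[1:], cg[1:]):         # detail bands, coarse to fine
        new_r, new_g = [], []
        for wr, wg in zip(dr, dg):             # horizontal/vertical/diagonal
            mag = np.sqrt(wr ** 2 + wg ** 2)   # joint two-channel magnitude
            gain = np.maximum(mag - thr, 0.0) / np.maximum(mag, 1e-12)
            new_r.append(wr * gain)            # both channels shrunk by the
            new_g.append(wg * gain)            # same joint factor
        out_r.append(tuple(new_r))
        out_g.append(tuple(new_g))
    return pywt.waverec2(out_r, wavelet), pywt.waverec2(out_g, wavelet)
```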

    Multi-scale approaches for the statistical analysis of microarray data (with an application to 3D vesicle tracking)

    Recent developments in experimental methods for gene data analysis, called microarrays, make it possible to interrogate changes in the expression of a vast number of genes in cell or tissue cultures and thus to explore disease conditions in depth. As part of an ongoing programme of research in the Guy A. Rutter (G.A.R.) laboratory, Department of Biochemistry, University of Bristol, UK, with support from the Wellcome Trust, we study the impact of established and of potentially new methods on the statistical analysis of gene expression data.

    Selection of Wavelet Basis Function for Image Compression : a Review

    Wavelets have been suggested as a platform for various tasks in image processing. Their advantage lies in their time-frequency resolution. The availability of different basis functions, in the form of different wavelets, has made wavelet analysis attractive for many applications. The performance of a particular technique depends on the wavelet coefficients obtained after applying the wavelet transform, and the coefficients for a specific input signal depend on the basis functions used in the transform. Toward this end, this paper presents different basis functions and their features. Since image compression has relied heavily on the wavelet transform for a few decades, the basis function for image compression should be selected with care. This paper also presents the factors influencing the performance of image compression.
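
    One way to compare candidate bases, sketched below under the assumption that energy compaction is the selection criterion: measure, for each basis, the fraction of total energy captured by the largest few percent of coefficients, since a basis that packs more energy into fewer coefficients compresses better. The test image is a random placeholder.

```python
import numpy as np
import pywt

def energy_compaction(img, wavelet, keep=0.05, level=3):
    # Fraction of total energy held by the top `keep` fraction of
    # wavelet coefficients (by magnitude).
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    flat, _ = pywt.coeffs_to_array(coeffs)
    flat = np.abs(flat).ravel()
    k = max(1, int(keep * flat.size))
    top = np.sort(flat)[::-1][:k]
    return float((top ** 2).sum() / (flat ** 2).sum())

img = np.random.rand(256, 256)  # stand-in; use a real image in practice
for w in ["haar", "db4", "sym8", "bior4.4"]:
    print(f"{w:8s} energy in top 5% coeffs: {energy_compaction(img, w):.3f}")
```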

    Curvelet-Based Texture Classification in Computerized Critical Gleason Grading of Prostate Cancer Histological Images

    Classical multi-resolution image processing using wavelets provides an efficient analysis of image characteristics represented in terms of pixel-based singularities such as connected edge pixels of objects and texture elements given by the pixel intensity statistics. The curvelet transform is a recently developed approach based on curved singularities that provides a sparser representation for a variety of directional multi-resolution image processing tasks such as denoising and texture analysis. The objective of this research is to develop a multi-class classifier for the automated classification of Gleason patterns of prostate cancer histological images using curvelet-based texture analysis. This problem of computer-aided recognition of four pattern classes between Gleason score 6 (primary Gleason grade 3 plus secondary Gleason grade 3) and Gleason score 8 (both primary and secondary grades 4) is of critical importance, affecting treatment decisions and patients' quality of life. Multiple spatial samples within each histological image are examined through the curvelet transform; the significant curvelet coefficient at each location of an image patch is obtained by maximizing over all curvelet orientations at that location, and represents the apparent curve-based singularity, such as a short edge segment, in the image structure. This sparser representation greatly reduces the redundancy in the original set of curvelet coefficients. Statistical textural features are extracted from these curvelet coefficients at multiple scales. We have designed a 2-level, 4-class classification scheme that attempts to mimic the human expert's decision process. It consists of two Gaussian-kernel support vector machines, one at each level, each incorporating a voting mechanism over the classifications of multiple windowed patches in an image to reach the final decision for that image. At level 1, the support vector machine with voting is trained to ascertain the classification of Gleason grade 3 versus grade 4, and thus Gleason score 6 versus score 8, by unanimous votes to one of the two classes, while mixed voting inside the margin between decision boundaries is assigned to a third class for consideration at level 2. The level-2 support vector machine, with supplemental features, is trained to classify an image patch as Gleason grade 3+4 or 4+3, with the majority decision from multiple patches consolidating the two-class discrimination within Gleason score 7; otherwise, the image is assigned to an Indecision category. The developed tree classifier with voting from sampled image patches is distinct from the traditional voting by multiple machines. With a database of TMA prostate histological images from the Urology/Pathology Laboratory of the Johns Hopkins Medical Center, the classifier using curvelet-based statistical texture features for recognition of the four critical Gleason scores was successfully trained and tested, achieving a remarkable performance of 97.91% overall 4-class validation accuracy and 95.83% testing accuracy. These results motivate further testing and improvement toward a practical implementation.
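
    The two-level voting logic described above might be sketched as follows with scikit-learn; the curvelet texture extraction is abstracted away, both SVMs are assumed already trained with fit(), patches_X is a hypothetical array of per-patch feature vectors, and the label encodings (0 for grade 3 or 3+4, 1 for grade 4 or 4+3) are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Assumed pre-trained elsewhere; labels: 0 = grade 3, 1 = grade 4 at
# level 1, and 0 = 3+4, 1 = 4+3 at level 2.
svm_level1 = SVC(kernel="rbf")
svm_level2 = SVC(kernel="rbf")

def classify_image(patches_X):
    votes = svm_level1.predict(patches_X)       # one vote per image patch
    if np.all(votes == 0):
        return "Gleason score 6 (3+3)"          # unanimous grade-3 votes
    if np.all(votes == 1):
        return "Gleason score 8 (4+4)"          # unanimous grade-4 votes
    # Mixed votes fall to level 2: majority decision over the patches.
    votes2 = svm_level2.predict(patches_X)
    counts = np.bincount(votes2.astype(int), minlength=2)
    if counts[0] == counts[1]:
        return "Indecision"
    return ("Gleason score 7 (3+4)" if counts[0] > counts[1]
            else "Gleason score 7 (4+3)")
```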

    Nonparametric Pre-Processing Methods and Inference Tools for Analyzing Time-of-Flight Mass Spectrometry Data

    The objective of this paper is to contribute to the methodology available for extracting and analyzing signal content from protein mass spectrometry data. Data from MALDI-TOF or SELDI-TOF spectra require considerable signal pre-processing, such as noise removal and baseline error correction. After removing the noise by an invariant wavelet transform, we develop a background correction method based on penalized spline quantile regression and apply it to MALDI-TOF (matrix-assisted laser desorption time-of-flight) spectra obtained from serum samples. The results show that the wavelet transform technique combined with nonparametric quantile regression can handle all kinds of background and low signal-to-background ratio spectra; it requires no prior knowledge of the spectral composition, no selection of suitable background correction points, and no mathematical assumption about the background distribution. We further present a novel multi-scale spectra-alignment methodology, useful in a functional analysis-of-variance method for identifying proteins that are differentially expressed between different tissue types. Our approaches are compared with several existing approaches in the recent literature and are tested on simulated and real data. The results indicate that the proposed schemes enable accurate diagnosis based on the over-expression of a small number of identified proteins with high sensitivity.
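
    The two pre-processing steps can be sketched on a simulated spectrum: a stationary (translation-invariant) wavelet transform with soft thresholding stands in for the paper's invariant wavelet denoising, and a running low-percentile filter stands in for penalized spline quantile regression; the spectrum itself, the wavelet choice, and the filter window are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import percentile_filter

rng = np.random.default_rng(0)
mz = np.linspace(1000, 10000, 4096)
spectrum = (50 * np.exp(-mz / 4000)                       # decaying baseline
            + 30 * np.exp(-0.5 * ((mz - 4500) / 20) ** 2) # one protein peak
            + rng.normal(0, 2, mz.size))                  # additive noise

# 1) Translation-invariant denoising: SWT, soft-threshold the details.
coeffs = pywt.swt(spectrum, "sym8", level=4)
sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # robust noise estimate
den = [(ca, pywt.threshold(cd, 3 * sigma, mode="soft")) for ca, cd in coeffs]
denoised = pywt.iswt(den, "sym8")

# 2) Baseline: a running 10th-percentile filter tracks the background
# under the peaks without being pulled upward by them.
baseline = percentile_filter(denoised, percentile=10, size=301)
corrected = denoised - baseline
```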

    Identification of cancer hallmarks in patients with non-metastatic colon cancer after surgical resection

    Colon cancer is one of the most common cancers in the world, and the therapeutic workflow depends on the TNM staging system and the presence of clinical risk factors. However, for patients with non-metastatic disease, evaluating the benefit of adjuvant chemotherapy is a clinical challenge. Radiomics can be seen as a non-invasive novel imaging biomarker able to outline tumor phenotype and predict patient prognosis by analyzing preoperative medical images. Radiomics might provide decisional support for oncologists, with the goal of reducing the number of arbitrary decisions in the emerging era of personalized medicine. To date, much evidence highlights the strengths of radiomics in the cancer workup, but several aspects limit the routine use of radiomic methods. The study aimed to develop a radiomic model able to identify high-risk colon cancer by analyzing pre-operative CT scans. The study population comprised 148 patients: 108 with non-metastatic colon cancer were retrospectively enrolled from January 2015 to June 2020, and 40 patients were used as the external validation cohort. The population was divided into two groups, High-risk and No-risk, according to the presence of at least one high-risk clinical factor. All patients had baseline CT scans, and 3D cancer segmentation was performed on the portal phase by two expert radiologists using open-source software (3DSlicer v4.10.2). Among the 107 radiomic features extracted, stable features were selected by evaluating the intra-class correlation coefficient (ICC) (cut-off ICC > 0.8). Stable features were compared between the two groups (t-test or Mann–Whitney), and the significant features were selected for univariate and multivariate logistic regression to build a predictive radiomic model. The radiomic model was then validated with an external cohort. In total, 58/108 patients were classified as High-risk and 50/108 as No-risk. A total of 35 radiomic features were stable (0.81 ≤ ICC < 0.92). Among these, 28 features were significantly different between the two groups (p < 0.05), and only 9 features were selected to build the radiomic model. The radiomic model yielded an AUC of 0.73 in the internal cohort and 0.75 in the external cohort. In conclusion, the radiomic model could be seen as a high-performing, non-invasive imaging tool to properly stratify colon cancers with high-risk disease.
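
    The feature-selection and modeling chain (significance filtering, logistic regression, external-cohort AUC) might be sketched as follows; the arrays of stable radiomic features and group labels are assumptions, and the ICC stability filter is omitted.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def select_and_fit(X_train, y_train, X_ext, y_ext, alpha=0.05):
    # Keep features that differ between the two groups (p < alpha).
    pvals = np.array([mannwhitneyu(X_train[y_train == 0, j],
                                   X_train[y_train == 1, j]).pvalue
                      for j in range(X_train.shape[1])])
    keep = pvals < alpha
    # Multivariate logistic regression on the retained features.
    model = LogisticRegression(max_iter=1000).fit(X_train[:, keep], y_train)
    auc_int = roc_auc_score(y_train,
                            model.predict_proba(X_train[:, keep])[:, 1])
    auc_ext = roc_auc_score(y_ext,
                            model.predict_proba(X_ext[:, keep])[:, 1])
    return model, keep, auc_int, auc_ext
```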

    Enhancement of the Feature Extraction Capability in Global Damage Detection Using Wavelet Theory

    The main objective of this study is to assess the specific capabilities of the defect energy parameter technique for global damage detection developed by Saleeb and coworkers. Feature extraction is the most important capability in any damage-detection technique; features are parameters extracted from the processed measurement data in order to enhance damage detection. The damage-feature extraction capability was studied extensively by analyzing various simulation results. The practical significance for structural health monitoring is that detecting small defects at an early stage is always desirable, so the magnitude of the changes in the structure's response due to these small defects was determined to show the level of accuracy needed in the experimental methods. In simulation, an arbitrarily fine and extensive sensor network can be arranged to measure the required data, but placing a large number of sensors on a real structure is difficult; therefore, an investigation was conducted using measurements from a coarse sensor network. White and pink noise, which together cover most of the frequency ranges typically encountered in common measuring devices (e.g., accelerometers and strain gauges), are added to the displacements to investigate the effect of noisy measurements on the detection technique. The noisy displacements and the noisy damage-parameter values are used to study signal-feature reconstruction using wavelets. The enhancement of the feature extraction capability was successfully achieved with wavelet theory.
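
    The noise-contamination step can be sketched as follows: white noise comes directly from a Gaussian generator, while pink (1/f) noise is obtained by shaping white noise in the frequency domain. The displacement signal and the 5% noise level are placeholders.

```python
import numpy as np

def pink_noise(n, rng):
    # Shape white noise by 1/sqrt(f) so that power falls off as 1/f.
    white = rng.normal(size=n)
    spec = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                       # avoid division by zero at DC
    spec /= np.sqrt(f)
    pink = np.fft.irfft(spec, n)
    return pink / pink.std()          # normalize to unit variance

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2048)
displ = np.sin(2 * np.pi * 12 * t)    # placeholder structural response
scale = 0.05                          # 5% noise level (assumption)
noisy_white = displ + scale * rng.normal(size=t.size)
noisy_pink = displ + scale * pink_noise(t.size, rng)
```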

    Unsupervised multi-scale change detection from SAR imagery for monitoring natural and anthropogenic disasters

    Thesis (Ph.D.) University of Alaska Fairbanks, 2017.
    Radar remote sensing can play a critical role in operational monitoring of natural and anthropogenic disasters. Despite its all-weather capabilities and its high performance in mapping and monitoring change, the application of radar remote sensing in operational monitoring activities has been limited. This has largely been due to: (1) the historically high costs associated with obtaining radar data; (2) slow data processing and delivery procedures; and (3) the limited temporal sampling provided by spaceborne radar satellites. Recent advances in the capabilities of spaceborne Synthetic Aperture Radar (SAR) sensors have created an environment that now allows SAR to make significant contributions to disaster monitoring. New SAR processing strategies that can take full advantage of these new sensor capabilities are currently being developed. Hence, with this PhD dissertation, I aim to: (i) investigate unsupervised change detection techniques that can reliably extract signatures from time series of SAR images and provide the necessary flexibility for application to a variety of natural and anthropogenic hazard situations; (ii) investigate effective methods to reduce the effects of speckle and other noise on change detection performance; (iii) automate change detection algorithms using probabilistic Bayesian inference; and (iv) ensure that the developed technology is applicable to current and future SAR sensors to maximize temporal sampling of a hazardous event. This is achieved by developing new algorithms that rely on image amplitude information only, the sole image parameter that is available for every single SAR acquisition. The motivation and implementation of the change detection concept are described in detail in Chapter 3. In the same chapter, I demonstrate the technique's performance using synthetic data as well as a real-data application to mapping wildfire progression. I applied Radiometric Terrain Correction (RTC) to the data to increase the sampling frequency, while the developed multiscale-driven approach reliably identified changes embedded in largely stationary background scenes. With this technique, I was able to identify the extent of burn scars with high accuracy. I then applied the change detection technology to oil spill mapping. The analysis highlights that the approach described in Chapter 3 can be applied to this drastically different change detection problem with only little modification. While the core of the change detection technique remained unchanged, I modified the pre-processing step to enable change detection in scenes with continuously varying backgrounds. I introduced the Lipschitz regularity (LR) transformation as a technique to normalize the typically dynamic ocean surface, facilitating high-performance oil spill detection independent of the environmental conditions during image acquisition. For instance, I show that LR processing reduces the sensitivity of change detection performance to variations in surface winds, a known limitation of oil spill detection from SAR. Finally, I applied the change detection technique to aufeis flood mapping along the Sagavanirktok River. Due to the complex nature of aufeis-flooded areas, I substituted the resolution-preserving speckle filter used in Chapter 3 with curvelet filters. In addition to validating the performance of the change detection results, I also provide evidence of the wealth of information that can be extracted about aufeis flooding events once a time series of change detection information has been extracted from SAR imagery. A summary of the developed change detection techniques and suggested future work are presented in Chapter 6.
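
    An amplitude-only change detection pass between two acquisitions might be sketched as below: a log-ratio image, smoothing at several scales to suppress speckle, and an automatic threshold. Otsu's method stands in here for the dissertation's Bayesian inference step, and the inputs are assumed to be co-registered, calibrated amplitude images.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu

def detect_changes(amp_t0, amp_t1, scales=(1, 2, 4)):
    eps = 1e-6
    log_ratio = np.log(amp_t1 + eps) - np.log(amp_t0 + eps)
    # Average the log-ratio over several smoothing scales so that both
    # compact and extended changes survive speckle suppression.
    stack = np.mean([gaussian_filter(log_ratio, s) for s in scales], axis=0)
    thr = threshold_otsu(np.abs(stack))
    return np.abs(stack) > thr        # boolean change mask
```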

    Bayesian methods for non-Gaussian data modeling and applications

    Finite mixture models are among the most useful machine learning techniques and are receiving considerable attention in various applications. The use of finite mixture models in image and signal processing has proved to be of considerable interest in terms of both theoretical development and usefulness in several applications. In most applications, the Gaussian density is used in the mixture modeling of data. Although a Gaussian mixture may provide a reasonable approximation to many real-world distributions, it is certainly not always the best approximation, especially in image and signal processing applications where we often deal with non-Gaussian data. In this thesis, we propose two novel approaches that may be used in modeling non-Gaussian data. These approaches use two highly flexible distributions, the generalized Gaussian distribution (GGD) and the general Beta distribution, in order to model the data. We are motivated by the fact that these distributions are able to fit many distributional shapes and can therefore be considered a useful class of flexible models for addressing several problems and applications involving measurements and features with well-known, marked deviations from the Gaussian shape. For the mixture estimation and selection problem, researchers have demonstrated that Bayesian approaches are fully optimal. Bayesian learning allows the incorporation of prior knowledge in a formal, coherent way that avoids overfitting problems. For this reason, we adopt different Bayesian approaches to learn our models' parameters. First, we present a fully Bayesian approach to analyze finite generalized Gaussian mixture models, which incorporate several standard mixtures, such as Laplace and Gaussian. This approach evaluates the posterior distribution and Bayes estimators using a Gibbs sampling algorithm, and selects the number of components in the mixture using the integrated likelihood. We also propose a fully Bayesian approach for finite Beta mixture learning using a Reversible Jump Markov Chain Monte Carlo (RJMCMC) technique, which simultaneously allows cluster assignment, parameter estimation, and selection of the optimal number of clusters. We then validate the proposed methods by applying them to different image processing applications.
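
    The basic Bayesian mixture-learning loop can be sketched with a Gibbs sampler for a deliberately simple case: a two-component 1-D Gaussian mixture with known variance on simulated data. The thesis addresses the much harder generalized Gaussian and Beta mixtures, with RJMCMC for model selection; this shows only the underlying mechanism (sample assignments, then parameters).

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 250)])
K, sigma, n_iter = 2, 1.0, 500
mu = rng.normal(0, 5, K)                  # initial component means
pi = np.full(K, 1.0 / K)                  # initial mixing weights

for _ in range(n_iter):
    # 1) Sample cluster assignments given the current parameters.
    logp = np.log(pi) - 0.5 * ((x[:, None] - mu[None, :]) / sigma) ** 2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=row) for row in p])
    # 2) Sample means given assignments (conjugate N(0, 10^2) prior).
    for k in range(K):
        xk = x[z == k]
        prec = len(xk) / sigma ** 2 + 1 / 100.0
        mean = (xk.sum() / sigma ** 2) / prec
        mu[k] = rng.normal(mean, 1 / np.sqrt(prec))
    # 3) Sample mixing weights from their Dirichlet posterior.
    pi = rng.dirichlet(1 + np.bincount(z, minlength=K))

print("posterior means ~", np.sort(mu))   # should approach (-2, 3)
```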