
    On-Line Learning and Wavelet-Based Feature Extraction Methodology for Process Monitoring using High-Dimensional Functional Data

    The recent advances in information technology, such as automatic data acquisition systems and sensor systems, have created tremendous opportunities for collecting valuable process data. The timely processing of such data for meaningful information remains a challenge. In this research, several data mining methodologies that aid the extraction of information from streams of high-dimensional functional data are developed. For on-line implementations, two weighting functions for updating support vector regression parameters were developed. The functions use parameters that can be easily set a priori with only minimal knowledge of the data involved, and have provision for lower and upper bounds on the parameters. The functions are applicable to time-series predictions, on-line predictions, and batch predictions. In order to apply these functions to on-line predictions, a new on-line support vector regression algorithm that uses adaptive weighting parameters is presented. The new algorithm uses a varying rather than a fixed regularization constant and accuracy parameter. The developed algorithm is more robust to the volume of data available for on-line training as well as to the relative position of the available data in the training sequence. It improves prediction accuracy by reducing the uncertainty of using fixed values for the regression parameters, and by reducing the uncertainty of using regression values based on experts' knowledge rather than on the characteristics of the incoming training data. The developed functions and algorithm were applied to feedwater flow rate data and two benchmark time-series data sets. The results show that using adaptive regression parameters performs better than using fixed regression parameters. In order to reduce the dimension of data with several hundreds or thousands of predictors and enhance prediction accuracy, a wavelet-based feature extraction procedure, called the step-down thresholding procedure, was developed for identifying and extracting significant features from a single curve. The procedure involves transforming the original spectra into wavelet coefficients. It is based on a multiple hypothesis testing approach and controls the family-wise error rate in order to guard against selecting insignificant features, without any concern about the amount of noise that may be present in the data. Therefore, the procedure is applicable to data reduction and/or data denoising. The procedure was compared to six other data-reduction and data-denoising methods in the literature. The developed procedure is found to consistently perform better than most of the popular methods and to perform at the same level as the others. Many real-world data sets with high-dimensional explanatory variables also have multiple response variables; therefore, the selection of the fewest explanatory variables that show high sensitivity to predicting the response variable(s) and low sensitivity to the noise in the data is important for better performance and reduced computational burden. In order to select the fewest explanatory variables that best predict each of the response variables, a two-stage wavelet-based feature extraction procedure is proposed. The first stage uses the step-down procedure to extract significant features for each of the curves. Then, representative features are selected out of the extracted features for all curves using a voting selection strategy.
Other selection strategies, such as union and intersection, were also described and implemented. The essence of the first stage is to reduce the dimension of the data without any consideration of whether or not the retained features can predict the response variables accurately. The second stage uses a Bayesian decision-theory approach to select some of the extracted wavelet coefficients that can predict each of the response variables accurately. The two-stage procedure was implemented using near-infrared spectroscopy data and shaft misalignment data. The results show that the second stage further reduces the dimension, and the prediction results are encouraging.
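
    As a rough illustration of the first stage and the voting step, the sketch below applies a Holm-style step-down test to orthonormal Haar wavelet coefficients of several noisy curves and keeps only the coefficient positions flagged by a majority of the curves. The Haar basis, the Gaussian test statistic, the significance level, and the vote threshold are illustrative assumptions and are not taken from the thesis.

        import numpy as np
        from math import erfc, sqrt

        def haar_dwt(x, levels):
            # Plain orthonormal Haar DWT; returns a flat coefficient vector.
            coeffs, approx = [], np.asarray(x, dtype=float)
            for _ in range(levels):
                even, odd = approx[0::2], approx[1::2]
                coeffs.append((even - odd) / sqrt(2.0))   # detail coefficients
                approx = (even + odd) / sqrt(2.0)
            coeffs.append(approx)
            return np.concatenate(coeffs[::-1])

        def stepdown_select(coeffs, sigma, alpha=0.05):
            # Holm-style step-down test: keep the coefficients whose two-sided
            # Gaussian p-values survive family-wise error rate control.
            n = coeffs.size
            p = np.array([erfc(abs(c) / (sigma * sqrt(2.0))) for c in coeffs])
            keep = np.zeros(n, dtype=bool)
            for rank, idx in enumerate(np.argsort(p)):
                if p[idx] > alpha / (n - rank):   # first non-rejection stops the procedure
                    break
                keep[idx] = True
            return keep

        def vote_features(kept_masks, min_votes):
            # Voting selection: keep positions flagged in at least min_votes curves.
            return np.sum(np.vstack(kept_masks), axis=0) >= min_votes

        # Toy usage: three noisy curves sharing the same underlying features.
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 256)
        masks = []
        for _ in range(3):
            curve = np.sin(8 * np.pi * t) + rng.normal(scale=0.3, size=t.size)
            masks.append(stepdown_select(haar_dwt(curve, levels=4), sigma=0.3))
        selected = vote_features(masks, min_votes=2)
        print("features kept:", int(selected.sum()), "of", selected.size)

    The same voting step generalises to the union strategy (min_votes=1) or the intersection strategy (min_votes equal to the number of curves) mentioned above.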

    A New SURE Approach to Image Denoising: Interscale Orthonormal Wavelet Thresholding

    This paper introduces a new approach to orthonormal wavelet image denoising. Instead of postulating a statistical model for the wavelet coefficients, we directly parametrize the denoising process as a sum of elementary nonlinear processes with unknown weights. We then minimize an estimate of the mean square error between the clean image and the denoised one. The key point is that we have at our disposal a very accurate, statistically unbiased MSE estimate (Stein's unbiased risk estimate) that depends on the noisy image alone, not on the clean one. Like the MSE, this estimate is quadratic in the unknown weights, and its minimization amounts to solving a linear system of equations. The existence of this a priori estimate makes it unnecessary to devise a specific statistical model for the wavelet coefficients. Instead, and contrary to the custom in the literature, these coefficients are not considered random anymore. We describe an interscale orthonormal wavelet thresholding algorithm based on this new approach and show its near-optimal performance, in both quality and CPU requirement, by comparing it with the results of three state-of-the-art nonredundant denoising algorithms on a large set of test images. An interesting fallout of this study is the development of a new, group-delay-based, parent-child prediction in a wavelet dyadic tree.
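
    To make the "quadratic in the unknown weights" point concrete, here is a minimal one-dimensional sketch of the idea, assuming additive white Gaussian noise of known variance and a two-term expansion (the identity plus a smooth shrinkage function). The choice of elementary functions and the 1-D setting are illustrative assumptions; the paper's actual interscale, image-domain algorithm is not reproduced here.

        import numpy as np

        def sure_let_denoise(y, sigma):
            # Linear expansion of two elementary processes; the weights that
            # minimize Stein's unbiased risk estimate solve a 2x2 linear system.
            a2 = 12.0 * sigma ** 2                                  # illustrative kernel width
            theta = np.vstack([y, y * np.exp(-y ** 2 / a2)])        # elementary processes
            dtheta = np.vstack([np.ones_like(y),
                                np.exp(-y ** 2 / a2) * (1.0 - 2.0 * y ** 2 / a2)])
            M = theta @ theta.T                                     # quadratic term of SURE
            c = theta @ y - sigma ** 2 * dtheta.sum(axis=1)         # linear term with divergence correction
            a = np.linalg.solve(M, c)                               # optimal weights
            return a @ theta, a

        # Usage on synthetic sparse coefficients corrupted by white Gaussian noise.
        rng = np.random.default_rng(1)
        x = np.zeros(1024)
        x[rng.choice(1024, size=40, replace=False)] = rng.normal(scale=5.0, size=40)
        sigma = 1.0
        y = x + rng.normal(scale=sigma, size=x.size)
        xhat, weights = sure_let_denoise(y, sigma)
        print("input MSE :", float(np.mean((y - x) ** 2)))
        print("output MSE:", float(np.mean((xhat - x) ** 2)))

    Because the risk estimate depends only on the noisy data, the same construction scales to many more elementary functions without ever needing the clean image.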

    The SURE-LET approach to image denoising

    Denoising is an essential step prior to any higher-level image-processing task such as segmentation or object tracking, because the undesirable corruption by noise is inherent to any physical acquisition device. When the measurements are performed by photosensors, one usually distinguishes between two main regimes: in the first scenario, the measured intensities are sufficiently high and the noise is assumed to be signal-independent; in the second scenario, only a few photons are detected, which leads to a strong signal-dependent degradation. When the noise is considered signal-independent, it is often modeled as an additive independent (typically Gaussian) random variable, whereas, otherwise, the measurements are commonly assumed to follow independent Poisson laws, whose underlying intensities are the unknown noise-free measures. We first consider the reduction of additive white Gaussian noise (AWGN). Contrary to most existing denoising algorithms, our approach does not require an explicit prior statistical modeling of the unknown data. Our driving principle is the minimization of a purely data-adaptive unbiased estimate of the mean-squared error (MSE) between the processed and the noise-free data. In the AWGN case, such an MSE estimate was first proposed by Stein and is known as "Stein's unbiased risk estimate" (SURE). We further develop the original SURE theory and propose a general methodology for fast and efficient multidimensional image denoising, which we call the SURE-LET approach. While SURE allows the quantitative monitoring of the denoising quality, the flexibility and the low computational complexity of our approach are ensured by a linear parameterization of the denoising process, expressed as a linear expansion of thresholds (LET). We propose several pointwise, multivariate, and multichannel thresholding functions applied to arbitrary (in particular, redundant) linear transformations of the input data, with a special focus on multiscale signal representations. We then transpose the SURE-LET approach to the estimation of Poisson intensities degraded by AWGN. The signal-dependent specificity of the Poisson statistics leads to the derivation of a new unbiased MSE estimate that we call "Poisson's unbiased risk estimate" (PURE) and requires more adaptive transform-domain thresholding rules. In a general PURE-LET framework, we first devise a fast interscale thresholding method restricted to the use of the (unnormalized) Haar wavelet transform. We then lift this restriction and show how the PURE-LET strategy can be used to design and optimize a wide class of nonlinear processing applied in an arbitrary (in particular, redundant) transform domain. We finally apply some of the proposed denoising algorithms to real multidimensional fluorescence microscopy images. Such an in vivo imaging modality often operates under low-illumination conditions and short exposure times; consequently, the random fluctuations of the measured fluorophore radiations are well described by a Poisson process degraded (or not) by AWGN. We validate this statistical measurement model experimentally and assess the performance of the PURE-LET algorithms in comparison with some state-of-the-art denoising methods. Our solution turns out to be very competitive both qualitatively and computationally, allowing for a fast and efficient denoising of the huge volumes of data that are nowadays routinely produced in biomedical imaging.
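
    As a small illustration of how SURE monitors the denoising quality from the noisy data alone, the sketch below evaluates the classical soft-threshold estimator at a few threshold values and compares the risk estimate with the true MSE on synthetic AWGN data. The soft threshold, the noise level, and the signal are assumptions made for the demonstration; the PURE extension for Poisson data is not reproduced here.

        import numpy as np

        def sure_soft_threshold(y, sigma, T):
            # Stein's unbiased risk estimate of the per-sample MSE of
            # soft-thresholding at T, computed from the noisy data y only.
            residual = np.minimum(np.abs(y), T)                 # |f(y) - y|
            divergence = np.count_nonzero(np.abs(y) > T)        # sum of d f_i / d y_i
            return (np.sum(residual ** 2) + 2.0 * sigma ** 2 * divergence) / y.size - sigma ** 2

        # Compare the estimate with the true MSE on synthetic data.
        rng = np.random.default_rng(2)
        x = np.zeros(4096)
        x[:200] = rng.normal(scale=4.0, size=200)
        sigma = 1.0
        y = x + rng.normal(scale=sigma, size=x.size)
        for T in (0.5, 1.0, 2.0, 3.0):
            xhat = np.sign(y) * np.maximum(np.abs(y) - T, 0.0)  # soft threshold
            true_mse = float(np.mean((xhat - x) ** 2))
            print(f"T={T:3.1f}  SURE={sure_soft_threshold(y, sigma, T):6.3f}  true MSE={true_mse:6.3f}")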

    Space-frequency localized basis function networks for nonlinear system identification and control

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1995. Includes bibliographical references (p. 99-102). By Mark Cannon. M.S.

    Modelling the time-series of cerebrovascular pressure transmission variation in head injured patients

    Cerebral autoregulation is the process by which blood flow is maintained over a changing cerebral perfusion pressure. Clinically, autoregulation is an important topic because it directly affects overall patient management strategy. However, accurately predicting autoregulatory state, or even modelling the underlying general physiological processes, is a complex task. There are a number of models published within the literature, but there has been no active attempt to compare and classify these models. Starting from the hypothesis that a physiologically based model would be a better predictor of autoregulatory state than a purely statistically based one, we investigated approaches to model comparison. Using three different models: a new mathematical arrangement of a physiological model by Ursino, the Highest Model Frequency (HMF) model by Daley, and the Pressure reactivity index (PRx) statistical model by Czosnyka, a general comparison was carried out using the Matthews correlation coefficient against a known autoregulatory state. This showed that the Ursino model was approximately three times as predictive as both the HMF model and the PRx model. However, in general, all of the models' predictive accuracies were relatively poor, so a number of optimisation strategies were then assessed. These optimisation strategies were ultimately formed into a generalised modelling framework. This framework draws on the ideas of mathematical topology to underpin and explain any change or optimisation to a model. Within the framework, different optimisations can be grouped into four categories, each of which is explored in the text of this thesis: 1) Model comparison. This is the simplest technique to apply, where the number of models under examination is reduced based on predictive accuracy. 2) Parameter restriction. A classical form of optimisation, constraining a model parameter to yield better predictive accuracy. In the case of both the HMF and PRx models we showed between a two-hundred and a six-hundred percent increase in predictive accuracy over the initial assessment. 3) Parameter alteration. This change allows related parameters to be substituted into a model. Four different alterations are explored as surrogate measures for arterial-arteriolar blood volume, the most clinically applicable of which is a transcranial impedance technique. This latter technique has the potential to be a non-invasive measure correlated with both mean ICP and ICP pulse amplitude. 4) Model alteration. This allows for larger changes to the underlying structure of the model. Two examples are presented: firstly, a new asymmetric sigmoid curve to overcome computational issues in the Ursino model, and secondly, a novel use of fractal characterisation applied in a wavelet noise-reduction technique. As a result of its abstract nature, the framework also gives an overview of the autoregulatory research domain as a whole. This helps to highlight some general issues in the domain, including the need for a more standardised way to record autoregulatory status. The thesis concludes with research addressing the requirement for easier access to data and the need for the research community to cohesively start to address these issues.
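
    Two of the simplest quantities in this comparison lend themselves to a short illustration: a PRx-style index (a moving correlation between arterial blood pressure and intracranial pressure) and the Matthews correlation coefficient used to score predictions against a known autoregulatory state. The sketch below is a minimal NumPy illustration on synthetic data; the window length, the 0.2 decision threshold, and the signals themselves are assumptions for demonstration only and are not taken from the thesis.

        import numpy as np

        def moving_correlation(abp, icp, window=30):
            # PRx-style index: Pearson correlation of ABP and ICP over a sliding window.
            return np.array([np.corrcoef(abp[s:s + window], icp[s:s + window])[0, 1]
                             for s in range(abp.size - window + 1)])

        def matthews_cc(pred, truth):
            # Matthews correlation coefficient for binary labels (1 = impaired, 0 = intact).
            tp = np.sum((pred == 1) & (truth == 1))
            tn = np.sum((pred == 0) & (truth == 0))
            fp = np.sum((pred == 1) & (truth == 0))
            fn = np.sum((pred == 0) & (truth == 1))
            denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
            return 0.0 if denom == 0 else float(tp * tn - fp * fn) / denom

        # Toy usage: the first half of the record is pressure-passive (impaired
        # autoregulation, ICP follows ABP), the second half is reactive.
        rng = np.random.default_rng(3)
        abp = 80 + rng.normal(scale=5.0, size=600)
        icp = np.concatenate([10 + 0.4 * (abp[:300] - 80) + rng.normal(scale=1.0, size=300),
                              10 + rng.normal(scale=1.0, size=300)])
        window = 30
        prx = moving_correlation(abp, icp, window)
        pred = (prx > 0.2).astype(int)                              # impaired if strongly positive
        truth = (np.arange(prx.size) + window <= 300).astype(int)   # windows inside the passive segment
        print("MCC against the known state:", round(matthews_cc(pred, truth), 3))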