
    Railway track condition assessment at network level by frequency domain analysis of GPR data

    The railway track system is a crucial infrastructure for the transportation of people and goods in modern societies. With the increase in railway traffic, the availability of the track for monitoring and maintenance purposes is becoming significantly reduced, so continuous non-destructive monitoring tools for track diagnosis take on even greater importance. In this context, the Ground Penetrating Radar (GPR) technique yields valuable information on track condition, mainly by identifying the degradation of its physical and mechanical characteristics caused by subsurface malfunctions. Nevertheless, applying GPR to assess the ballast condition is a challenging task because the material's electromagnetic properties are sensitive to both the ballast grading and the water content. This work presents a novel approach, fast and practical for surveying and analysing long sections of transport infrastructure, based mainly on an expedited frequency-domain analysis of the GPR signal. Examples are presented covering the identification of track events, ballast interventions and potential locations of malfunctions. The approach, developed to identify changes in the track infrastructure, allows for a user-friendly visualisation of the track condition, even for GPR non-professionals such as railway engineers, and may further be used to correlate with track geometric parameters. It aims to automatically detect sudden variations in GPR signals obtained from successive surveys over long stretches of railway lines, thus providing valuable information for the asset management activities of infrastructure managers.
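The abstract does not specify which frequency-domain statistic is tracked, so the following is only a minimal sketch of the general idea: summarise each GPR trace (A-scan) by its spectral centroid and flag positions where the centroid deviates sharply from a rolling median, which is one plausible way to "detect sudden variations" along a line. All function names, window sizes and thresholds are hypothetical.

```python
import numpy as np

def spectral_centroid_profile(traces, fs):
    """Per-trace spectral centroid of GPR A-scans.

    traces: array of shape (n_traces, n_samples); fs: sampling frequency in Hz.
    """
    spec = np.abs(np.fft.rfft(traces, axis=1))
    freqs = np.fft.rfftfreq(traces.shape[1], d=1.0 / fs)
    return (spec * freqs).sum(axis=1) / spec.sum(axis=1)

def flag_changes(profile, window=50, k=3.0):
    """Flag traces whose centroid deviates more than k robust standard
    deviations from a rolling median of the along-track profile."""
    med = np.array([np.median(profile[max(0, i - window):i + window + 1])
                    for i in range(len(profile))])
    resid = profile - med
    mad = np.median(np.abs(resid)) + 1e-12  # robust spread estimate
    return np.abs(resid) > k * 1.4826 * mad
```

A short block of traces with a different dominant frequency (e.g. after a ballast intervention) then shows up as a run of flagged positions.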

    Estimating Local Function Complexity via Mixture of Gaussian Processes

    Real-world data often exhibit inhomogeneity: for example, the noise level, the sampling distribution or the complexity of the target function may change over the input space. In this paper, we try to isolate local function complexity in a practical, robust way. This is achieved by first estimating the locally optimal kernel bandwidth as a function of the input. Specifically, we propose Spatially Adaptive Bandwidth Estimation in Regression (SABER), which employs a mixture of experts consisting of multinomial kernel logistic regression as a gate and Gaussian process regression models as experts. Using the locally optimal kernel bandwidths, we deduce an estimate of the local function complexity by drawing parallels to the theory of locally linear smoothing. We demonstrate the usefulness of local function complexity for model interpretation and active learning in quantum chemistry experiments and fluid dynamics simulations. Comment: 19 pages, 16 figures.
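A heavily simplified sketch of the gated-mixture idea (not the authors' SABER implementation): fit one GPR expert per fixed bandwidth, label each training point with the expert that fits it best locally, and train a plain logistic-regression gate (standing in for the paper's multinomial kernel logistic regression) to blend the experts. The gate's responsibilities then indicate which bandwidth, i.e. which local complexity, dominates where. All names and the assignment heuristic are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LogisticRegression

def fit_saber_sketch(X, y, length_scales=(0.02, 0.3)):
    """Fit one fixed-bandwidth GPR expert per length scale, plus a gate."""
    experts = []
    for ls in length_scales:
        kernel = RBF(length_scale=ls, length_scale_bounds="fixed") + WhiteKernel(1e-2)
        experts.append(GaussianProcessRegressor(kernel=kernel,
                                                normalize_y=True).fit(X, y))
    # label each training point with the expert that fits it best locally
    resid = np.stack([np.abs(e.predict(X) - y) for e in experts], axis=1)
    labels = resid.argmin(axis=1)
    if len(np.unique(labels)) > 1:
        gate = LogisticRegression(max_iter=1000).fit(X, labels)
    else:
        gate = None  # one expert dominates everywhere
    return {"experts": experts, "gate": gate, "default": labels[0]}

def predict_saber_sketch(model, X):
    preds = np.stack([e.predict(X) for e in model["experts"]], axis=1)
    if model["gate"] is None:
        return preds[:, model["default"]]
    w = model["gate"].predict_proba(X)  # soft gate responsibilities
    return (w * preds).sum(axis=1)
```

On a function that is smooth on one half of the input space and oscillatory on the other, the short-bandwidth expert wins the oscillatory half and the gate learns that split.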

    Nonlinear multiple regression methods for spectroscopic analysis: application to NIR calibration

    Chemometrics has been applied to analyse near-infrared (NIR) spectra for decades. Linear regression methods such as partial least squares (PLS) regression and principal component regression (PCR) are simple and widely used solutions for spectroscopic calibration. My dissertation connects spectroscopic calibration with nonlinear machine learning techniques and explores the feasibility of applying nonlinear methods to NIR calibration. The investigated nonlinear regression methods include least squares support vector machine (LS-SVM), Gaussian process regression (GPR), Bayesian hierarchical mixture of linear regressions (HMLR) and convolutional neural networks (CNN). Our study focuses on the discussion of various design choices, the interpretation of nonlinear models, and novel recommendations and insights for the construction of nonlinear regression models for NIR data. The performances of the investigated nonlinear methods were benchmarked against traditional methods on multiple real-world NIR datasets. The datasets have different sizes (varying from 400 to 7,000 samples) and come from various sources. Hypothesis tests on separate, independent test sets indicated that nonlinear methods give significant improvements in most practical NIR calibrations.

    A comparative investigation of the combined effects of pre-processing, wavelength selection and regression methods on near infrared calibration model performance

    Near-infrared (NIR) spectroscopy is widely used in fields ranging from pharmaceutics to the food industry for analyzing the chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include the available physical interpretation of spectral data, its non-destructive nature, the high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others; selection of the wavelengths that contribute useful information; and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects, and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies of the interactions among these three aspects, which could shed light on what role each aspect plays in the calibration and how to combine the various methods of each aspect to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); and four popular regression methods, namely partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant role in the calibration while wavelength selection plays a marginal role, and that the combination of certain pre-processing, wavelength selection and nonlinear regression methods can achieve performance superior to traditional linear regression-based calibration.

    Predicting continuous conflict perception with Bayesian Gaussian processes

    Conflict is one of the most important phenomena of social life, but it is still largely neglected by the computing community. This work proposes an approach that detects common conversational social signals (loudness, overlapping speech, etc.) and predicts the conflict level perceived by human observers in continuous, non-categorical terms. The proposed regression approach is fully Bayesian and adopts Automatic Relevance Determination to identify the social signals that most influence the outcome of the prediction. The experiments are performed on the SSPNet Conflict Corpus, a publicly available collection of 1,430 clips extracted from televised political debates (roughly 12 hours of material for 138 subjects in total). The results show that it is possible to achieve a correlation close to 0.8 between actual and predicted conflict perception.
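Automatic Relevance Determination in a GP regression amounts to giving the kernel one length scale per input feature and learning them from data: inputs that end up with long length scales barely move the prediction, while short length scales mark influential features. A minimal scikit-learn sketch of this mechanism (not the paper's specific Bayesian model) looks as follows; the inverse-length-scale relevance score is a common convention, not something the abstract specifies.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def ard_relevance(X, y):
    """Fit a GPR with one RBF length scale per input dimension (ARD).

    Returns one relevance score per feature: larger = more influential.
    """
    kernel = RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel(1e-2)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                                  n_restarts_optimizer=2).fit(X, y)
    length_scales = np.asarray(gp.kernel_.k1.length_scale)
    return 1.0 / length_scales  # short length scale -> high relevance
```

Applied to conversational features, the ranking of these scores is what singles out which social signals (e.g. overlapping speech) drive perceived conflict.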

    Dirichlet-based Gaussian Processes for Large-scale Calibrated Classification

    This paper studies the problem of deriving fast and accurate classification algorithms with uncertainty quantification. Gaussian process classification provides a principled approach, but the corresponding computational burden is hardly sustainable in large-scale problems, and devising efficient alternatives is a challenge. In this work, we investigate whether and how Gaussian process regression applied directly to classification labels can be used to tackle this question. While training is remarkably faster in this case, predictions need to be calibrated for classification and uncertainty estimation. To this end, we propose a novel regression approach in which the regression targets are obtained by interpreting the classification labels as the coefficients of a degenerate Dirichlet distribution. Extensive experimental results show that the proposed approach provides essentially the same accuracy and uncertainty quantification as Gaussian process classification while requiring only a fraction of the computational resources.
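One simplified reading of the label-to-target construction: add a small eps to the one-hot labels to get Dirichlet coefficients alpha, moment-match each marginal with a log-normal to obtain per-class regression targets and heteroscedastic noise, then fit one cheap GPR per class. The sketch below follows that reading; the softmax of posterior means at the end is a crude stand-in for a proper sampling-based expectation, and eps, the kernel and the function names are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def dirichlet_targets(y, n_classes, eps=0.01):
    """Map integer labels to per-class log-normal regression targets.

    One-hot labels plus eps give Dirichlet coefficients alpha; each marginal
    is moment-matched with a log-normal of mean mu and variance sigma2.
    """
    alpha = np.full((len(y), n_classes), eps)
    alpha[np.arange(len(y)), y] += 1.0
    sigma2 = np.log(1.0 / alpha + 1.0)  # matched log-normal variance
    mu = np.log(alpha) - sigma2 / 2.0   # matched log-normal mean
    return mu, sigma2

def dirichlet_gp_predict(X, y, X_new, n_classes):
    """One GPR per class with per-point noise sigma2, combined via a softmax."""
    mu, sigma2 = dirichlet_targets(y, n_classes)
    f = np.stack([GaussianProcessRegressor(kernel=RBF(1.0), alpha=sigma2[:, c])
                  .fit(X, mu[:, c]).predict(X_new)
                  for c in range(n_classes)], axis=1)
    e = np.exp(f - f.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # approximate class probabilities
```

The speed advantage comes from the regression step: each per-class GPR has a closed-form posterior, so no approximate-inference loop over non-Gaussian likelihoods is needed.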