
    Semiparametric curve alignment and shift density estimation for biological data

    Assume that we observe a large number of curves, all with an identical but unknown shape, each subject to a different random shift. The objective is to estimate the individual time shifts and their distribution. This objective arises in several biological applications, such as neuroscience or ECG signal processing, where one wishes to estimate the distribution of the elapsed time between repetitive pulses, possibly at a low signal-to-noise ratio and without knowledge of the pulse shape. We suggest an M-estimator leading to a three-stage algorithm: we split the data set into blocks, estimate the shifts on each block by minimizing a cost criterion based on a functional of the periodogram, and then plug the estimated shifts into a standard density estimator. We show that under mild regularity assumptions the density estimate converges weakly to the true shift distribution. The theory is applied both to simulations and to the alignment of real ECG signals. The estimator of the shift distribution performs well, even at a low signal-to-noise ratio, and is shown to outperform standard methods for curve alignment.
    Comment: 30 pages; v5: minor changes and a correction in the proof of Proposition 3.
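As a rough illustration of the first two stages, shift estimation by a frequency-domain correlation criterion followed by kernel density estimation can be sketched as follows. This is a minimal numpy sketch on synthetic curves; the exact periodogram-based cost functional of the paper is not reproduced, and the pulse shape is taken as known here only to generate the data.

```python
import numpy as np

def estimate_shift(reference, curve):
    """Estimate the circular time shift of `curve` relative to `reference`
    by maximizing their cross-correlation, computed via the FFT."""
    xcorr = np.fft.ifft(np.conj(np.fft.fft(reference)) * np.fft.fft(curve)).real
    lag = int(np.argmax(xcorr))
    n = len(reference)
    return lag if lag <= n // 2 else lag - n  # map to a signed shift

def shift_density(shifts, grid, bandwidth):
    """Plug the estimated shifts into a Gaussian kernel density estimator."""
    u = (grid[:, None] - shifts[None, :]) / bandwidth
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (
        len(shifts) * bandwidth * np.sqrt(2.0 * np.pi))

# Synthetic data: randomly shifted copies of a common pulse shape.
t = np.arange(256)
pulse = np.exp(-0.5 * ((t - 128) / 8.0) ** 2)
true_shifts = [-10, -3, 0, 5, 12]
curves = [np.roll(pulse, s) for s in true_shifts]
est = [estimate_shift(pulse, c) for c in curves]

grid = np.linspace(-30.0, 30.0, 601)
density = shift_density(np.asarray(est, dtype=float), grid, bandwidth=2.0)
```

On noise-free data the cross-correlation recovers the shifts exactly; the paper's contribution is precisely the noisy, unknown-shape case.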

    Reduced-order modeling of transonic flows around an airfoil submitted to small deformations

    A reduced-order model (ROM) is developed for the prediction of unsteady transonic flows past an airfoil submitted to small deformations at moderate Reynolds number. Considering a suitable state formulation as well as a consistent inner product, the Galerkin projection of the compressible Navier–Stokes equations, the high-fidelity (HF) model, onto a low-dimensional basis determined by Proper Orthogonal Decomposition (POD) leads to a quadratic polynomial ODE system relevant to the prediction of the main flow features. A fictitious-domain deformation technique is derived from the Hadamard formulation of the HF model and validated at the HF level. This approach captures the airfoil profile deformation through a modification of the boundary conditions while the spatial domain remains unchanged. A mixed POD gathering information from snapshot series associated with several airfoil profiles can be defined; the temporal coefficients in the POD expansion are shape-dependent while the spatial POD modes are not. In the ROM, the airfoil deformation is introduced by a steady forcing term. The reliability of the ROM with respect to airfoil deformation is demonstrated for the prediction of HF-resolved as well as previously unseen intermediate configurations.
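The POD step itself can be illustrated with the standard snapshot SVD. This is a generic sketch on synthetic snapshot data, not the compressible-flow formulation (with its specific state variables and inner product) used in the paper.

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """POD of a snapshot matrix (rows: spatial points, columns: time
    instants) via the thin SVD, after subtracting the mean flow."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = U[:, :n_modes]                     # spatial POD modes
    coeffs = s[:n_modes, None] * Vt[:n_modes]  # temporal coefficients
    return mean, modes, coeffs

# Synthetic snapshots: two spatial patterns with oscillating amplitudes.
x = np.linspace(0.0, 1.0, 100)
t = np.linspace(0.0, 2.0 * np.pi, 50)
snaps = (np.outer(np.sin(np.pi * x), np.cos(3.0 * t))
         + 0.1 * np.outer(np.sin(2.0 * np.pi * x), np.sin(5.0 * t)))
mean, modes, coeffs = pod_basis(snaps, n_modes=2)
recon = mean + modes @ coeffs                  # low-order reconstruction
```

Because the synthetic field has exactly two fluctuation patterns, two modes reconstruct it to machine precision; real flow data would show a decaying singular-value spectrum and a truncation error instead.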

    Test of a single module of the J-PET scanner based on plastic scintillators

    A Time-of-Flight Positron Emission Tomography (TOF-PET) scanner based on plastic scintillators is being developed at the Jagiellonian University by the J-PET collaboration. The main challenge of the conducted research lies in the elaboration of a method allowing the application of plastic scintillators to the detection of low-energy gamma quanta. In this article we report on tests of a single detection module built from a BC-420 plastic scintillator strip (with dimensions of 5x19x300 mm^3) read out at two ends by Hamamatsu R5320 photomultipliers. The measurements were performed using a collimated beam of annihilation quanta from the 68Ge isotope and a Serial Data Analyzer (LeCroy SDA6000A), which enabled sampling of the signals at 50 ps intervals. The time resolution of the prototype module was established to be better than 80 ps (sigma) for single-level discrimination. The spatial resolution of the determination of the hit position along the strip was about 0.93 cm (sigma) for the annihilation quanta. The fractional energy resolution for the energy E deposited by the annihilation quanta via Compton scattering amounts to sigma(E)/E = 0.044/sqrt(E[MeV]) and corresponds to sigma(E)/E = 7.5% at the Compton edge.
    Comment: 12 pages, 6 figures; updated with editorial corrections related to publication in NIM.
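In a strip read out at both ends, the hit position along the strip is commonly reconstructed from the arrival-time difference at the two photomultipliers. A minimal sketch of that relation follows; the effective signal propagation speed is an assumed, typical value for plastic scintillators, not a number quoted in the abstract.

```python
# Effective propagation speed of light signals in the plastic strip
# (assumed typical value in cm/ns; not quoted in the abstract).
V_EFF = 12.6

def hit_position(t_left_ns, t_right_ns, v_eff=V_EFF):
    """Hit position along the strip, in cm relative to its centre, from the
    arrival-time difference at the two photomultipliers."""
    return 0.5 * v_eff * (t_right_ns - t_left_ns)

# A hit 5 cm towards the left end arrives correspondingly earlier
# at the left photomultiplier.
z = hit_position(1.0, 1.0 + 2.0 * 5.0 / V_EFF)
```

The quoted 80 ps (sigma) timing resolution then translates, through the same relation, into the centimetre-scale position resolution reported in the abstract.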

    The random component of mixer-based nonlinear vector network analyzer measurement uncertainty

    The uncertainty, due to random noise, of the measurements made with a mixer-based nonlinear vector network analyzer is analyzed. An approximate covariance matrix for the measurements is derived that can be used for fitting models and for maximizing the dynamic range of the measurement setup. The validity of the approximation is verified with measurements.

    Machine-learning nonstationary noise out of gravitational-wave detectors

    Signal extraction out of background noise is a common challenge in high-precision physics experiments, where the measurement output is often a continuous data stream. To improve the signal-to-noise ratio of the detection, witness sensors are often used to independently measure background noises and subtract them from the main signal. If the noise coupling is linear and stationary, optimal techniques already exist and are routinely implemented in many experiments. However, when the noise coupling is nonstationary, linear techniques often fail or are suboptimal. Inspired by the properties of the background noise in gravitational-wave detectors, this work develops a novel algorithm to efficiently characterize and remove nonstationary noise couplings, provided there exist witnesses of the noise source and of the modulation. The algorithm is described in its most general formulation, and its efficiency is demonstrated with examples from the data of the Advanced LIGO gravitational-wave observatory, where we obtain an improvement of the detector's gravitational-wave reach without introducing any bias into the source parameter estimation.
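The core idea, removing a coupling that is the product of a noise witness and a modulation witness, can be illustrated with a toy bilinear regression. This is a hedged sketch on synthetic data and is not the algorithm developed in the paper, which handles far more general couplings.

```python
import numpy as np

def subtract_nonstationary(target, noise_witness, modulation_witness):
    """Least-squares removal of a coupling of the form
    (a + b * modulation) * noise, given independent witnesses of both the
    noise source and its slow modulation."""
    X = np.column_stack([noise_witness, modulation_witness * noise_witness])
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
    return target - X @ coeffs

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n) / 1024.0
signal = 0.01 * np.sin(2.0 * np.pi * 35.0 * t)          # weak target signal
witness = rng.normal(size=n)                             # measured noise source
modulation = 1.0 + 0.5 * np.sin(2.0 * np.pi * 0.2 * t)  # measured modulation
target = signal + modulation * witness                   # nonstationary coupling
cleaned = subtract_nonstationary(target, witness, modulation)
```

A purely linear (stationary) subtraction would fit only the first regressor and leave the modulated part of the noise behind; including the product term recovers the buried signal.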

    Fault detection in operating helicopter drive train components based on support vector data description

    The objective of the paper is to develop an automated vibration-based procedure for the early detection of mechanical degradation of helicopter drive train components using Health and Usage Monitoring Systems (HUMS) data. An anomaly-detection method is developed to quantify the degree of deviation of the mechanical state of a component from its nominal condition. This method is based on an Anomaly Score (AS) formed by combining a set of statistical features correlated with specific damages, known as Condition Indicators (CI); the operational variability is thus implicitly included in the model through the CI correlation. The problem of fault detection is then recast as a one-class classification problem in the space spanned by a set of CI, with the aim of globally differentiating between normal and anomalous observations, related to healthy and supposedly faulty components, respectively. The procedure relies on an efficient one-class classification method that does not require any assumption on the data distribution. The core of the approach is the Support Vector Data Description (SVDD), which allows an efficient data description without the need for a large amount of statistical data. Several analyses have been carried out to validate the proposed procedure, using flight vibration data collected from an in-service H135 (formerly EC135) helicopter, for which micro-pitting damage on a gear was detected by HUMS and assessed through visual inspection. The capability of the proposed approach to provide a better trade-off between false-alarm and missed-detection rates than individual CI, and than the AS obtained assuming jointly Gaussian-distributed CI, has also been analysed.
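The jointly Gaussian baseline that the paper compares against amounts to a Mahalanobis-distance anomaly score over the CI space. A minimal sketch on synthetic CI vectors follows; the SVDD itself requires solving a quadratic program and is not reproduced here.

```python
import numpy as np

def gaussian_anomaly_score(train_ci, test_ci):
    """Squared Mahalanobis distance of each test CI vector from the healthy
    training distribution; the jointly Gaussian baseline, not the SVDD."""
    mean = train_ci.mean(axis=0)
    prec = np.linalg.inv(np.cov(train_ci, rowvar=False))
    d = test_ci - mean
    return np.einsum("ij,jk,ik->i", d, prec, d)

# Synthetic CI vectors: a healthy cloud and a shifted "damaged" cloud.
rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(500, 3))
faulty = rng.normal(4.0, 1.0, size=(10, 3))
scores_h = gaussian_anomaly_score(healthy, healthy)
scores_f = gaussian_anomaly_score(healthy, faulty)
```

A detection threshold on such a score fixes the false-alarm rate on healthy data; the paper's point is that SVDD achieves a better false-alarm versus missed-detection trade-off when the CI distribution is not Gaussian.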

    Cross correlation anomaly detection system

    This invention provides a method for automatically inspecting the surface of an object, such as an integrated circuit chip, whereby the data obtained from the light reflected from the surface under a scanning light beam are automatically compared with data representing acceptable values for each unique surface. A signal output is provided indicating acceptance or rejection of the chip. Acceptance is based on predetermined statistical confidence intervals calculated from known-good regions of the object being tested, or from their representative values. The method can utilize a known-good chip, a photographic mask from which the I.C. was fabricated, or a computer-stored replica of each pattern being tested.
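The comparison step can be sketched as a normalized cross-correlation against a known-good pattern. In this toy version the acceptance threshold is a fixed assumed value, whereas the patent derives it from statistical confidence intervals measured on known-good regions.

```python
import numpy as np

def inspect_region(reference, scanned, threshold):
    """Accept or reject a scanned region by its normalized cross-correlation
    with a known-good reference pattern (threshold is an assumed value)."""
    ref = (reference - reference.mean()) / reference.std()
    scan = (scanned - scanned.mean()) / scanned.std()
    ncc = float(np.mean(ref * scan))  # normalized correlation, in [-1, 1]
    return ncc >= threshold, ncc

# Synthetic reflectance traces: a good region, a noisy copy, and a defect.
rng = np.random.default_rng(2)
good = rng.normal(size=256)
same = good + 0.05 * rng.normal(size=256)
defect = rng.normal(size=256)
ok_same, _ = inspect_region(good, same, threshold=0.9)
ok_defect, _ = inspect_region(good, defect, threshold=0.9)
```

A matching region scores near 1 despite measurement noise, while an unrelated pattern scores near 0 and is rejected.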

    Compressive Sensing of Signals Generated in Plastic Scintillators in a Novel J-PET Instrument

    The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The discussed detector offers an improved Time-of-Flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics allowing for sampling, in the voltage domain, of signals with durations of a few nanoseconds. In this paper we show that recovery of the whole signal, based on only a few samples, is possible. To this end, we incorporate the training signals into the Tikhonov regularization framework and perform a Principal Component Analysis decomposition, which is well known for its compaction properties. The method yields a simple closed-form analytical solution that does not require iterative processing. Moreover, from Bayes theory the properties of the regularized solution, in particular its covariance matrix, may be easily derived. This is the key to introducing and proving the formula for the calculation of the signal recovery error. We show that the average recovery error is approximately inversely proportional to the number of acquired samples.
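The closed-form recovery can be sketched as a Tikhonov-regularized least-squares fit of PCA coefficients to the few acquired samples. This is a schematic reading of the approach on synthetic training pulses, not the J-PET implementation or its voltage-domain sampling scheme.

```python
import numpy as np

def train_pca(signals, n_comp):
    """Learn the mean and leading principal components from a matrix of
    training signals (one signal per row)."""
    mean = signals.mean(axis=0)
    _, _, Vt = np.linalg.svd(signals - mean, full_matrices=False)
    return mean, Vt[:n_comp]  # components stored as rows

def recover(samples, idx, mean, comps, lam=1e-8):
    """Closed-form Tikhonov-regularized recovery of a full signal from a few
    samples taken at the indices `idx`, expressed in the PCA basis."""
    Bs = comps[:, idx].T                       # basis restricted to the samples
    A = Bs.T @ Bs + lam * np.eye(comps.shape[0])
    coeffs = np.linalg.solve(A, Bs.T @ (samples - mean[idx]))
    return mean + comps.T @ coeffs

# Synthetic training set: random combinations of three pulse shapes.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
shapes = np.array([np.exp(-0.5 * ((t - c) / 0.08) ** 2) for c in (0.4, 0.5, 0.6)])
train = rng.uniform(0.5, 1.5, size=(100, 3)) @ shapes
mean, comps = train_pca(train, n_comp=3)

true = train[0]
idx = np.array([60, 80, 100, 120, 140])        # only five acquired samples
rec = recover(true[idx], idx, mean, comps)
err = np.linalg.norm(rec - true) / np.linalg.norm(true)
```

Because the signal lies in the span of the learned components, five samples pin down its coefficients almost exactly; with sampling noise the regularization term trades bias against noise amplification.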

    High count rate γ-ray spectroscopy with LaBr3:Ce scintillation detectors

    The applicability of LaBr3:Ce detectors for high count rate γ-ray spectroscopy is investigated. A 3"x3" LaBr3:Ce detector is used in a test setup with radioactive sources to study the dependence of the energy resolution and photopeak efficiency on the overall count rate in the detector. Digitized traces were recorded using a 500 MHz FADC and analysed with digital signal processing methods. In addition to standard techniques, a pile-up correction method is applied to the data in order to further improve the high-rate capabilities and to reduce the losses in efficiency due to signal pile-up. It is shown that γ-ray spectroscopy can be performed with high resolution at count rates even above 1 MHz, and that the performance in the region between 500 kHz and 10 MHz can be enhanced by using pile-up correction techniques.
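One common form of pile-up correction on digitized traces is a joint linear fit of shifted copies of a pulse template to the overlapping pulses. The following is a toy example on a noise-free synthetic trace; the template shape and pulse positions are invented for illustration, and the paper's actual correction method is not reproduced.

```python
import numpy as np

def shifted(template, pos, n):
    """A copy of `template` placed at sample index `pos` in a length-n trace."""
    col = np.zeros(n)
    m = min(len(template), n - pos)
    col[pos:pos + m] = template[:m]
    return col

def fit_pileup(trace, template, positions):
    """Jointly fit the amplitudes of overlapping pulses at known positions by
    linear least squares on shifted copies of a single pulse template."""
    X = np.column_stack([shifted(template, p, len(trace)) for p in positions])
    amps, *_ = np.linalg.lstsq(X, trace, rcond=None)
    return amps

# A template with a fast rise and an exponential tail (arbitrary shape).
k = np.arange(60)
template = (1.0 - np.exp(-k / 3.0)) * np.exp(-k / 15.0)

# A trace with a second pulse piling up on the tail of the first.
trace = 1.0 * shifted(template, 50, 300) + 0.6 * shifted(template, 70, 300)
amps = fit_pileup(trace, template, [50, 70])
```

Recovering the individual amplitudes instead of rejecting piled-up events is what reduces the efficiency loss at MHz-scale count rates.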
