
    First passage times of two correlated processes: analytical results for the Wiener process and a numerical method for diffusion processes

    Given a two-dimensional correlated diffusion process, we determine the joint density of the first passage times of the process to some constant boundaries. This quantity depends on the joint density of the first passage time of the first crossing component and of the position of the second crossing component before its crossing time. First, we show that these densities are solutions of a system of Volterra-Fredholm integral equations of the first kind. Then we propose a numerical algorithm to solve it and describe how to use the algorithm to approximate the joint density of the first passage times. The convergence of the method is proved theoretically for bivariate diffusion processes. We derive explicit expressions for these and other quantities of interest in the case of a bivariate Wiener process, correcting previous misprints appearing in the literature. Finally, we illustrate the application of the method through a set of examples.
    Comment: 18 pages, 3 figures
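    For the bivariate Wiener case, such results can be sanity-checked by direct Monte Carlo simulation. The sketch below is not the paper's integral-equation method; it simply draws sample paths of a correlated two-dimensional Wiener process and records when each component first reaches a constant boundary. The correlation, boundaries, step size, and sample count are all illustrative assumptions.

```python
import numpy as np

def joint_fpt_samples(rho=0.5, b1=1.0, b2=1.0, dt=1e-3, t_max=10.0,
                      n_paths=10_000, seed=0):
    """Monte Carlo samples of the first passage times (T1, T2) of a
    correlated two-dimensional Wiener process to constant boundaries."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    c = np.sqrt(1.0 - rho**2)                 # Cholesky factor of correlation
    x = np.zeros((n_paths, 2))
    fpt = np.full((n_paths, 2), np.nan)       # NaN = boundary not yet reached
    for k in range(1, n_steps + 1):
        z = rng.standard_normal((n_paths, 2))
        x[:, 0] += np.sqrt(dt) * z[:, 0]
        x[:, 1] += np.sqrt(dt) * (rho * z[:, 0] + c * z[:, 1])
        t = k * dt
        hit1 = np.isnan(fpt[:, 0]) & (x[:, 0] >= b1)
        hit2 = np.isnan(fpt[:, 1]) & (x[:, 1] >= b2)
        fpt[hit1, 0] = t
        fpt[hit2, 1] = t
    return fpt  # histogram the rows to approximate the joint density

samples = joint_fpt_samples()
```

    Note that Euler-type crossing detection misses excursions between grid points, so dt must be small for the histogram to be a reasonable approximation.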

    Wind turbine condition monitoring strategy through multiway PCA and multivariate inference

    This article presents a condition monitoring strategy for wind turbines based on a statistical, data-driven model built from supervisory control and data acquisition (SCADA) data. Initially, a baseline model is obtained from the healthy wind turbine by means of multiway principal component analysis (MPCA). Then, when the wind turbine is monitored, newly acquired data are projected into the baseline MPCA model space. The acquired SCADA data are treated as a random process given the random nature of the turbulent wind. The objective is to decide whether the multivariate distribution obtained from the wind turbine under analysis (healthy or not) is related to the baseline one. To achieve this goal, a test for the equality of population means is performed. Finally, the test either rejects the hypothesis (the wind turbine is faulty) or finds no evidence that the two means differ, in which case the wind turbine is considered healthy. The methodology is evaluated on a wind turbine fault detection benchmark that uses a 5 MW high-fidelity wind turbine model and a set of eight realistic fault scenarios. Notably, for a wide range of significance levels, α ∈ [1%, 13%], the percentage of correct decisions remains at 100%, making this a promising tool for real-time wind turbine condition monitoring.
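    A minimal sketch of the general idea follows, with plain PCA standing in for the multiway variant, synthetic data in place of SCADA records, and componentwise Welch tests with a Bonferroni correction substituting for the paper's multivariate test for equality of population means. All dimensions and thresholds are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
healthy = rng.standard_normal((500, 20))          # baseline (healthy) samples
incoming = rng.standard_normal((200, 20)) + 0.3   # new samples to monitor

# Baseline PCA model: center, then keep the leading principal directions
mu = healthy.mean(axis=0)
_, _, Vt = np.linalg.svd(healthy - mu, full_matrices=False)
P = Vt[:3].T                                      # retained components

scores_h = (healthy - mu) @ P                     # baseline projections
scores_n = (incoming - mu) @ P                    # new projections

# Two-sample tests for equality of means on the projected scores
alpha = 0.05
t, p = stats.ttest_ind(scores_h, scores_n, equal_var=False)
faulty = (p < alpha / len(p)).any()               # Bonferroni-corrected
print("fault detected" if faulty else "no evidence of fault")
```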

    Robust isogeometric preconditioners for the Stokes system based on the Fast Diagonalization method

    In this paper we propose a new class of preconditioners for the isogeometric discretization of the Stokes system. Their application involves the solution of a Sylvester-like equation, which can be done efficiently thanks to the Fast Diagonalization method. These preconditioners are robust with respect to both the spline degree and mesh size. By incorporating information on the geometry parametrization and equation coefficients, we maintain efficiency on non-trivial computational domains and for variable kinematic viscosity. In our numerical tests we compare to a standard approach, showing that the overall iterative solver based on our preconditioners is significantly faster.
    Comment: 31 pages, 4 figures
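    The computational kernel behind fast diagonalization, applying the inverse of a Kronecker-sum operator by reducing it to a diagonal solve in a generalized eigenbasis, can be sketched as follows. This is a generic illustration for a matrix of the form kron(A, M) + kron(M, A) with symmetric positive definite A and M (think 1D stiffness and mass matrices); it is not the paper's full Stokes preconditioner.

```python
import numpy as np
from scipy.linalg import eigh

def fast_diag_solve(A, M, b):
    """Solve (kron(A, M) + kron(M, A)) x = b by diagonalizing the
    generalized eigenproblem A V = M V diag(lam) with V.T @ M @ V = I."""
    lam, V = eigh(A, M)
    n = A.shape[0]
    T = V.T @ b.reshape(n, n) @ V          # transform RHS to the eigenbasis
    Y = T / (lam[:, None] + lam[None, :])  # the operator is diagonal there
    return (V @ Y @ V.T).ravel()           # transform back

# Small check on 1D stiffness/mass-like matrices
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)          # 1D Laplacian
M = (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / 6.0  # 1D mass
b = np.random.default_rng(0).standard_normal(n * n)
x = fast_diag_solve(A, M, b)
K = np.kron(A, M) + np.kron(M, A)
assert np.allclose(K @ x, b)
```

    The payoff is cost: one eigendecomposition of size n plus a few dense n-by-n products, instead of factorizing an n²-by-n² matrix.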

    Discretization of Continuous Attributes

    In the data mining field, many learning methods, like association rules, Bayesian networks, and induction rules (Grzymala-Busse & Stefanowski, 2001), can handle only discrete attributes. Therefore, before the machine learning process, each continuous attribute must be re-encoded as a discrete attribute made up of a set of intervals; for example, the age attribute can be transformed into two discrete values representing two intervals: less than 18 (a minor) and 18 and more (of age). This process, known as discretization, is an essential data preprocessing task, not only because some learning methods cannot handle continuous attributes, but also for other important reasons: data transformed into a set of intervals are more cognitively relevant for human interpretation (Liu, Hussain, Tan & Dash, 2002); computation is faster on the reduced data, particularly when attributes for which no relevant cut can be found are suppressed from the representation space of the learning problem (Mittal & Cheong, 2002); and discretization can capture non-linear relations, e.g., that infants and the elderly are more sensitive to illness.
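    The age example amounts to re-encoding a numeric column against fixed cut points; a minimal sketch, where the cut point and labels are the ones from the example above:

```python
import numpy as np

ages = np.array([5, 17, 18, 34, 70])
cuts = [18]                              # boundary between the two intervals
labels = np.array(["minor", "of_age"])

codes = np.digitize(ages, cuts)          # 0 if age < 18, 1 if age >= 18
print(labels[codes])                     # ['minor' 'minor' 'of_age' 'of_age' 'of_age']
```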

    Computation of sum of squares polynomials from data points

    We propose an iterative algorithm for the numerical computation of sums of squares of polynomials approximating given data at prescribed interpolation points. The method is based on the definition of a convex functional $G$ arising from the dualization of a quadratic regression over the Cholesky factors of the sum of squares decomposition. In order to justify the construction, the domain of $G$, the boundary of the domain, and the behavior at infinity are analyzed in detail. When the data interpolate a positive univariate polynomial, we show that, in the context of the Lukacs sum of squares representation, $G$ is coercive and strictly convex, which yields a unique critical point and a corresponding decomposition in sum of squares. For multivariate polynomials which admit a decomposition in sum of squares up to a small perturbation of size $\varepsilon$, the perturbed functional $G^\varepsilon$ is always coercive, and so its minimum yields an approximate decomposition in sum of squares. Various unconstrained descent algorithms are proposed to minimize $G$. Numerical examples are provided for univariate and bivariate polynomials.
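    A simplified sketch of the underlying idea: fit $p(x) = \|L^T z(x)\|^2$ to data by unconstrained least squares over the Cholesky factor $L$, so the fitted polynomial is a sum of squares by construction. This is the direct, primal formulation, not the paper's dualized functional $G$, and all names below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_sos(x, y, d, seed=0):
    """Least-squares fit of a univariate SOS polynomial of degree 2d,
    p(x) = z(x)^T L L^T z(x) with z(x) = (1, x, ..., x^d), to data (x, y)."""
    Z = np.vander(x, d + 1, increasing=True)   # rows are z(x_k)
    m = d + 1
    tril = np.tril_indices(m)                  # free entries of L

    def loss(theta):
        L = np.zeros((m, m))
        L[tril] = theta
        vals = ((Z @ L) ** 2).sum(axis=1)      # p(x_k) = ||L^T z(x_k)||^2
        return ((vals - y) ** 2).sum()

    theta0 = np.random.default_rng(seed).standard_normal(tril[0].size)
    res = minimize(loss, theta0, method="L-BFGS-B")
    L = np.zeros((m, m))
    L[tril] = res.x
    return L @ L.T                             # Gram matrix: p is SOS by construction

x = np.linspace(-1.0, 1.0, 30)
G = fit_sos(x, x**4 + 1.0, d=2)                # recover a positive polynomial
```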

    Numerical Implementation of the QuEST Function

    This paper deals with certain estimation problems involving the covariance matrix in large dimensions. Due to the breakdown of finite-dimensional asymptotic theory when the dimension is not negligible with respect to the sample size, it is necessary to resort to an alternative framework known as large-dimensional asymptotics. Recently, Ledoit and Wolf (2015) have proposed an estimator of the eigenvalues of the population covariance matrix that is consistent according to a mean-square criterion under large-dimensional asymptotics. It requires numerical inversion of a multivariate nonrandom function which they call the QuEST function. The present paper explains how to numerically implement the QuEST function in practice through a series of six successive steps. It also provides an algorithm to compute the Jacobian analytically, which is necessary for numerical inversion by a nonlinear optimizer. Monte Carlo simulations document the effectiveness of the code.
    Comment: 35 pages, 8 figures
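    The bias that this machinery corrects is easy to reproduce: even when the population covariance is the identity, sample eigenvalues spread out whenever the dimension-to-sample-size ratio is not small. The snippet below is a plain Monte Carlo illustration of that effect, not the QuEST implementation; the sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 200                      # concentration ratio p/n = 0.5
X = rng.standard_normal((n, p))      # population covariance = identity
S = X.T @ X / n                      # sample covariance matrix
lam = np.linalg.eigvalsh(S)

# Population eigenvalues are all 1, yet the sample ones spread over roughly
# [(1 - sqrt(p/n))^2, (1 + sqrt(p/n))^2], i.e. about [0.09, 2.91] here
print(lam.min(), lam.max())
```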

    Optimal rates and adaptation in the single-index model using aggregation

    We want to recover the regression function in the single-index model. Using an aggregation algorithm with local polynomial estimators, we answer, in particular, the second part of Question 2 from Stone (1982) on the optimal convergence rate. The procedure constructed here has strong adaptation properties: it adapts both to the smoothness of the link function and to the unknown index. Moreover, the procedure adapts locally to the distribution of the design. We propose new upper bounds for the local polynomial estimator (results of independent interest) that allow a fairly general design. The behavior of this algorithm is studied through numerical simulations. In particular, we show empirically that it improves strongly over empirical risk minimization.
    Comment: 36 pages
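    A rough sketch of the two ingredients: a local polynomial (here local linear) estimator applied along a candidate index direction, with the direction then chosen by split-sample empirical risk over a grid. The paper's aggregation scheme and adaptive tuning are replaced by this naive selection, and the model, bandwidth, and grid below are illustrative.

```python
import numpy as np

def local_linear(t_train, y_train, t_eval, h):
    """Local linear estimate of E[y | t] at the points t_eval, bandwidth h."""
    out = np.empty_like(t_eval)
    for i, t0 in enumerate(t_eval):
        u = (t_train - t0) / h
        w = np.exp(-0.5 * u**2)                      # Gaussian kernel weights
        Xd = np.column_stack([np.ones_like(u), u])   # local linear design
        WX = Xd * w[:, None]
        beta = np.linalg.solve(Xd.T @ WX, WX.T @ y_train)
        out[i] = beta[0]                             # intercept = fit at t0
    return out

rng = np.random.default_rng(0)
theta_true = np.array([np.cos(0.7), np.sin(0.7)])
X = rng.uniform(-1, 1, (400, 2))
y = np.sin(3 * X @ theta_true) + 0.1 * rng.standard_normal(400)

# Select the index direction by empirical risk on a held-out split
angles = np.linspace(0, np.pi, 60)
tr, va = slice(0, 300), slice(300, None)
risks = []
for a in angles:
    th = np.array([np.cos(a), np.sin(a)])
    pred = local_linear(X[tr] @ th, y[tr], X[va] @ th, h=0.1)
    risks.append(np.mean((pred - y[va]) ** 2))
best = angles[np.argmin(risks)]
theta_hat = np.array([np.cos(best), np.sin(best)])   # estimated index (up to sign)
```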