
    Surrogate modelling and uncertainty quantification based on multi-fidelity deep neural network

    To reduce training costs, several deep neural networks (DNNs) have been proposed that learn from a small set of high-fidelity (HF) data together with a sufficient number of low-fidelity (LF) data. These established networks commonly adopt a parallel structure that separately approximates the non-linear and the linear correlation between the HF and LF data. In this paper, a new multi-fidelity deep neural network (MF-DNN) architecture is proposed in which a single subnetwork approximates both the non-linear and the linear correlation simultaneously. Rather than manually allocating output weights to parallel linear and non-linear correction networks, the proposed MF-DNN can autonomously learn an arbitrary correlation. The prediction accuracy of the proposed MF-DNN was first demonstrated by approximating 1-, 32- and 100-dimensional benchmark functions with either linear or non-linear correlation. The surrogate modelling results revealed that the MF-DNN exhibited excellent approximation capabilities on the test functions. Subsequently, the MF-DNN was deployed to simulate 1-, 32- and 100-dimensional aleatory uncertainty propagation under either uniform or Gaussian distributions of the input uncertainties. The uncertainty quantification (UQ) results validated that the MF-DNN efficiently predicted the probability density distributions of the quantities of interest (QoI) as well as their statistical moments without significant loss of accuracy. The MF-DNN was also deployed to model the physical flow of the LS89 turbine vane: the distribution of the isentropic Mach number was well predicted by the MF-DNN from the 2D Euler flow field and a few experimental measurement points. The proposed MF-DNN should be promising for solving UQ and robust optimization problems in practical engineering applications with multi-fidelity data sources.
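A minimal sketch of the core idea, with hypothetical toy fidelities and a tiny two-layer network (not the paper's architecture or data): a single subnetwork maps (x, y_LF) to y_HF and therefore learns whatever mix of linear and non-linear correlation the data exhibit, with no hand-allocated weights between separate correction branches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fidelities (illustrative assumptions, not from the paper): the HF
# response is a non-linear transformation of the LF response plus a shift.
def y_lf(x):
    return np.sin(8 * x)

def y_hf(x):
    return (x - 0.5) * y_lf(x) ** 2 + 0.2 * x

# Plentiful LF data enters only through y_lf(x); a handful of HF samples
# trains the single correlation subnetwork F: (x, y_LF) -> y_HF.
x_hf = np.linspace(0, 1, 8).reshape(-1, 1)
X = np.hstack([x_hf, y_lf(x_hf)])   # inputs: (x, y_LF(x))
t = y_hf(x_hf)                      # targets: y_HF(x)

# One small subnetwork approximates linear and non-linear parts jointly.
H = 16
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)        # forward pass
    pred = h @ W2 + b2
    err = pred - t
    loss = np.mean(err ** 2)
    # Backpropagation for the two-layer network.
    g2 = 2 * err / len(X)
    gW2 = h.T @ g2; gb2 = g2.sum(0)
    gh = g2 @ W2.T * (1 - h ** 2)
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"final training MSE: {loss:.4g}")
```

Because the LF response is an input feature rather than a separate branch, the same network can collapse to a purely linear correction or express a fully non-linear one, depending on what the HF data support.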

    Root-finding Approaches for Computing Conformal Prediction Set

    Conformal prediction constructs a confidence set for an unobserved response of a feature vector based on previous identically distributed and exchangeable observations of responses and features. It has a coverage guarantee at any nominal level without additional assumptions on their distribution. Its computation, unfortunately, requires a refitting procedure for every replacement candidate of the target response; in regression settings, this corresponds to an infinite number of model fits. Apart from relatively simple estimators whose output can be written as a piecewise linear function of the response, efficiently computing such sets is difficult and is still considered an open problem. We exploit the fact that, \emph{often}, conformal prediction sets are intervals whose boundaries can be efficiently approximated by classical root-finding algorithms. We investigate how this approach overcomes many limitations of previously used strategies, and we discuss its complexity and drawbacks.
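The root-finding idea can be sketched as follows, under illustrative assumptions (a toy full-conformal setup with ridge regression, a small synthetic data set, and a ±10 bracketing range around a rough center point; none of these choices come from the paper). Each candidate response z triggers one refit, and bisection locates the two points where the conformal p-value crosses the nominal level:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data (hypothetical): linear signal with noise.
X = rng.uniform(-1, 1, (40, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 0.3, 40)
x_new = np.array([0.5])
alpha = 0.1  # target miscoverage level

def p_value(z, lam=1e-3):
    """Full-conformal p-value of candidate response z at x_new:
    refit ridge on the augmented sample and rank the candidate's residual."""
    Xa = np.vstack([X, x_new])
    ya = np.append(y, z)
    w = np.linalg.solve(Xa.T @ Xa + lam * np.eye(1), Xa.T @ ya)
    scores = np.abs(ya - Xa @ w)          # conformity scores
    return np.mean(scores >= scores[-1])  # rank of the candidate

def bisect(f, lo, hi, tol=1e-4):
    """Classical bisection; assumes f(lo) and f(hi) bracket a sign change."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# The prediction set {z : p_value(z) > alpha} is (often) an interval; find
# each boundary as a root of f(z) = p_value(z) - alpha.
f = lambda z: p_value(z) - alpha
center = 2.0 * x_new[0]   # a rough point expected to lie inside the set
lower = bisect(f, center - 10.0, center)
upper = bisect(f, center, center + 10.0)
print(f"conformal interval ~ [{lower:.2f}, {upper:.2f}]")
```

Each bisection step costs one model refit, so the boundary is approximated to tolerance tol with O(log(1/tol)) refits instead of a refit per point of a fine candidate grid.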

    Wearable Sensors Applied in Movement Analysis

    Recent advances in electronics have led to sensors whose sizes and weights are such that they can be placed on living systems without impairing their natural motion and habits. They may be worn on the body as accessories or as part of the clothing and enable personalized mobile information processing. Wearable sensors open the way for nonintrusive and continuous monitoring of body orientation, movements, and various physiological parameters during motor activities in real-life settings. Thus, they may become crucial tools not only for researchers, but also for clinicians, as they have the potential to improve diagnosis, better monitor disease development and thereby individualize treatment. Wearable sensors should obviously go unnoticed by the people wearing them and be intuitive to install. They should come with wireless connectivity and low power consumption. Moreover, the electronics system should be self-calibrating and deliver correct information that is easy to interpret. Cross-platform interfaces that provide secure data storage and easy data analysis and visualization are needed. This book contains a selection of research papers presenting new results that address the above challenges.

    Safe Grid Search with Optimal Complexity

    Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task. In this paper, we revisit techniques for approximating the regularization path up to a predefined tolerance ε in a unified framework and show that its complexity is O(1/ε^{1/d}) for uniformly convex losses of order d > 0 and O(1/√ε) for generalized self-concordant functions. This framework encompasses least squares but also logistic regression (a case that, as far as we know, was not handled as precisely by previous works). We leverage our technique to provide refined bounds on the validation error as well as a practical algorithm for hyperparameter tuning. The latter has a global convergence guarantee when targeting a prescribed accuracy on the validation set. Last but not least, our approach helps relieve the practitioner of the (often neglected) task of selecting a stopping criterion when optimizing over the training set: our method automatically calibrates it based on the targeted accuracy on the validation set.
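A minimal sketch of an ε-adaptive grid in the spirit described above (illustrative assumptions throughout: ridge regression stands in for the estimator, the objective difference stands in for a duality gap, and the data and constants are synthetic). Each step down the path is chosen as the largest decrease in λ for which the previous solution remains ε-suboptimal:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(0, 1, (50, 5))
y = X @ rng.normal(0, 1, 5) + rng.normal(0, 0.1, 50)

def solve(lam):
    """Exact ridge solution for penalty lam (stand-in for any inner solver)."""
    return np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

def objective(w, lam):
    return 0.5 * np.sum((X @ w - y) ** 2) + 0.5 * lam * np.sum(w ** 2)

def gap(w, lam):
    """Suboptimality of w at penalty lam (plays the role of a duality gap)."""
    return objective(w, lam) - objective(solve(lam), lam)

eps = 1e-3                    # prescribed tolerance along the path
lam_max, lam_min = 100.0, 0.1
path = [lam_max]
w = solve(lam_max)
while path[-1] > lam_min:
    if gap(w, lam_min) <= eps:
        nxt = lam_min         # previous solution already covers the rest
    else:
        # Bisection for the smallest lam at which w is still eps-suboptimal:
        # lo stays infeasible (gap > eps), hi stays feasible (gap <= eps).
        lo, hi = lam_min, path[-1]
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            if gap(w, mid) > eps:
                lo = mid
            else:
                hi = mid
        nxt = hi
    path.append(nxt)
    w = solve(nxt)            # warm start for the next step

print(f"adaptive grid size for eps={eps}: {len(path)} points")
```

In this toy setting the suboptimality of the previous solution grows roughly quadratically in the step λ_t − λ_{t+1}, so accepted steps scale like √ε and the grid size like O(1/√ε), consistent with the rate stated above for the self-concordant case.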