
    Self-Calibration Methods for Uncontrolled Environments in Sensor Networks: A Reference Survey

    Steady progress in sensor technology has expanded the range of low-cost, small, and portable sensors on the market, increasing the number and type of physical phenomena that can be measured with wirelessly connected sensors. Large-scale deployments of wireless sensor networks (WSNs), involving hundreds or thousands of devices on limited budgets, often constrain the choice of sensing hardware, which generally offers reduced accuracy, precision, and reliability. It is therefore challenging to achieve good data quality and maintain error-free measurements over the whole system lifetime. Self-calibration or recalibration in ad hoc sensor networks to preserve data quality is essential, yet challenging, for several reasons, such as the presence of random noise and the absence of suitable general models. Calibration performed in the field, without accurate and controlled instrumentation, is said to take place in an uncontrolled environment. This paper surveys current and fundamental self-calibration approaches and models for wireless sensor networks in uncontrolled environments.
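
    As a concrete illustration of the simplest calibration model touched on above, the Python sketch below fits a linear gain/offset correction for a drifting low-cost sensor against reference values by least squares. It is a minimal, generic example: the reference signal, the sensor model, and the NumPy implementation are assumptions for illustration, not a method taken from the survey.

    import numpy as np

    # Minimal sketch of a linear self-calibration model: raw readings x of a
    # low-cost sensor are mapped to a reference y via y ~ gain * x + offset.
    # The reference could come from a co-located calibrated node or from a
    # consensus of neighbours (an assumption here, not this survey's method).

    def fit_gain_offset(raw, reference):
        """Least-squares fit of reference = gain * raw + offset."""
        A = np.column_stack([raw, np.ones_like(raw)])
        (gain, offset), *_ = np.linalg.lstsq(A, reference, rcond=None)
        return gain, offset

    def apply_calibration(raw, gain, offset):
        return gain * raw + offset

    # Synthetic illustration: a biased, noisy sensor recalibrated in the field.
    rng = np.random.default_rng(0)
    truth = rng.uniform(10, 30, size=200)               # true values, e.g. degrees C
    raw = 0.9 * truth - 1.5 + rng.normal(0, 0.2, 200)   # drifted raw readings
    gain, offset = fit_gain_offset(raw, truth)
    corrected = apply_calibration(raw, gain, offset)
    print(f"gain={gain:.3f}, offset={offset:.3f}, "
          f"residual std={np.std(corrected - truth):.3f}")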

    A nonparametric approach for model individualization in an artificial pancreas

    The identification of patient-tailored linear time-invariant glucose-insulin models is investigated for type 1 diabetic patients, who are characterized by substantial inter-subject variability. The individualized linear models are identified with a novel kernel-based nonparametric approach and are compared with a linear time-invariant average model in terms of prediction performance, measured by the coefficient of determination, fit, positive and negative maximum errors, and root mean squared error. Model identification and validation are based on in silico data collected from the adult virtual population of the UVA/Padova simulator. The data-generation protocol is designed to produce sufficient input excitation without compromising patient safety and is also compatible with real-life scenarios. The identified models are exploited to synthesize an individualized Model Predictive Controller (MPC) for each patient, which is used in an Artificial Pancreas to maintain the blood glucose concentration within a euglycemic range. The MPC used in several clinical studies, synthesized on the basis of a non-individualized average linear time-invariant model, is also considered as a reference. The closed-loop control performance is evaluated in an in silico study on the adult virtual population of the UVA/Padova simulator in a perturbed scenario, in which the MPC is blind to random variations of insulin sensitivity in each virtual patient. © 2015, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
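
    To make the kernel-based nonparametric idea concrete, the Python sketch below fits a Gaussian-kernel ridge regression that predicts glucose from a few past glucose and insulin samples on synthetic data and reports the coefficient of determination. The kernel choice, regressor layout, and all data here are illustrative assumptions; the paper's specific kernel and the UVA/Padova data are not reproduced.

    import numpy as np

    # Minimal sketch of a kernel-based nonparametric predictor, assuming a
    # generic Gaussian kernel and synthetic regressors (the paper's actual
    # kernel and the simulator data are not reproduced here).

    def gaussian_kernel(X, Y, width=5.0):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    def fit_kernel_ridge(X, y, lam=1.0, width=5.0):
        """Return dual coefficients alpha for f(x) = sum_i alpha_i k(x_i, x)."""
        K = gaussian_kernel(X, X, width)
        return np.linalg.solve(K + lam * np.eye(len(X)), y)

    def predict(Xtrain, alpha, Xtest, width=5.0):
        return gaussian_kernel(Xtest, Xtrain, width) @ alpha

    # Toy regressors: a few past glucose and insulin samples per time step.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 6))          # e.g. [g(t-1..3), u(t-1..3)] per row
    y = X @ np.array([0.8, 0.1, 0.05, -0.3, -0.1, -0.05]) + rng.normal(0, 0.1, 300)
    alpha = fit_kernel_ridge(X[:200], y[:200])
    yhat = predict(X[:200], alpha, X[200:])
    ss_res = np.sum((y[200:] - yhat) ** 2)
    ss_tot = np.sum((y[200:] - y[200:].mean()) ** 2)
    print("coefficient of determination:", 1 - ss_res / ss_tot)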

    Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Variational methods are widely used for the analysis and control of computationally intensive, spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (a response function to be analysed or a cost function to be optimised) with respect to model inputs. This contribution demonstrates the potential of variational methods for distributed catchment-scale hydrology. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case. Forward and adjoint sensitivity analysis are shown to provide a local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest computational effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation. For the estimation of model parameters, adjoint-based derivatives were found to be exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization initial condition when the very common dimension-reduction strategy (i.e. scalar multipliers) is adopted. Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the leading singular vectors of the SVD was found to be very promising but should be combined with another regularization strategy in order to prevent overfitting.
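
    The Python sketch below illustrates the optimization side of this workflow: an exact gradient (standing in for the adjoint-computed one) drives SciPy's bound-constrained L-BFGS-B quasi-Newton solver, and an SVD of the Jacobian exposes the dominant orthogonal directions. The quadratic misfit and toy forward operator are assumptions for illustration, not the kinematic wave / Green-Ampt model of the paper.

    import numpy as np
    from scipy.optimize import minimize

    # Minimal sketch: a bound-constrained quasi-Newton solver (L-BFGS-B) driven
    # by an exact gradient, standing in for the adjoint-computed gradient of the
    # hydrological model. J below is a toy quadratic misfit, not the paper's model.

    def cost_and_gradient(theta, H, obs):
        """J(theta) = 0.5 * ||H @ theta - obs||^2 and its gradient H.T @ (H @ theta - obs).
        In the adjoint framework, the gradient would come from one backward run."""
        r = H @ theta - obs
        return 0.5 * r @ r, H.T @ r

    rng = np.random.default_rng(2)
    H = rng.normal(size=(50, 4))               # toy linear forward operator
    theta_true = np.array([0.3, 1.2, 0.8, 2.0])
    obs = H @ theta_true + rng.normal(0, 0.01, 50)

    res = minimize(cost_and_gradient, x0=np.ones(4), args=(H, obs),
                   jac=True, method="L-BFGS-B",
                   bounds=[(0.0, 5.0)] * 4)    # physically admissible ranges
    print("estimated parameters:", res.x)

    # SVD of the Jacobian: the leading singular vectors span the few orthogonal
    # directions that dominate the parameter-to-response relation.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    print("singular values:", s)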

    Multi-scale modelling of timber-frame structures under seismic loads

    This paper introduces a versatile hysteretic constitutive law developed for various joints with steel fasteners commonly used in timber structures (nails, screws, staples, 3D connectors of bracket type, punched plates). Compared to previous models available in the literature, the proposed one improves numerical robustness and represents a step forward by taking into account the damage process of joints with metal fasteners. Experimental tests carried out on joints are used for calibration purposes, and quasi-static and dynamic tests performed on shear walls are used to validate the proposed finite element (FE) model. Finally, the development of a computationally efficient simplified FE model of timber-frame shear walls is described, with emphasis on its validation and its use at the scale of a complete structure.
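
    As a rough illustration of what a hysteretic force-displacement law with degradation looks like numerically, the Python sketch below integrates a generic Bouc-Wen-type model under cyclic loading. This is a stand-in for illustration only; the paper's own constitutive law, its damage variables, and its calibrated parameters are not reproduced.

    import numpy as np

    # Minimal sketch of a generic hysteretic force-displacement law (a Bouc-Wen-
    # type model with a crude stiffness-degradation term), used only to show the
    # kind of behaviour such constitutive models reproduce under cyclic loading.

    def hysteretic_force(displacement, k0=1.0e3, alpha=0.1,
                         beta=0.5, gamma=0.5, n=1.0, delta_eta=0.02):
        """Integrate the hysteretic variable z and return the restoring force
        F = alpha*k0*u + (1-alpha)*k0*z, with eta growing to mimic degradation."""
        z, eta = 0.0, 1.0
        force = np.zeros_like(displacement)
        for i in range(1, len(displacement)):
            du = displacement[i] - displacement[i - 1]
            dz = (du - (beta * abs(du) * abs(z) ** (n - 1) * z
                        + gamma * du * abs(z) ** n)) / eta
            z += dz
            eta += delta_eta * abs(du * z)   # crude degradation of the hysteresis
            force[i] = alpha * k0 * displacement[i] + (1 - alpha) * k0 * z
        return force

    # Quasi-static cyclic loading of growing amplitude, as in joint tests.
    t = np.linspace(0, 10, 2001)
    u = 0.01 * t * np.sin(2 * np.pi * t)     # imposed displacement history (m)
    F = hysteretic_force(u)
    print("peak restoring force [N]:", F.max())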

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered within their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in the search for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
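
    As a minimal illustration of the linear mixing model underlying much of this survey, the Python sketch below estimates abundances for a single pixel by nonnegative least squares with a sum-to-one penalty, assuming the endmember spectra are already known (endmember extraction and estimating their number are the harder problems the paper reviews). The synthetic data and penalty weight are illustrative assumptions.

    import numpy as np
    from scipy.optimize import nnls

    # Minimal sketch of the linear mixing model: each pixel y ~ E @ a with
    # a >= 0 and sum(a) = 1, where the columns of E are the endmember spectra.
    # The sum-to-one constraint is enforced approximately via a penalty row.

    def unmix_pixel(y, E, rho=1.0e3):
        """Sum-to-one constrained nonnegative least squares via a penalty row."""
        E_aug = np.vstack([E, rho * np.ones(E.shape[1])])
        y_aug = np.append(y, rho)
        a, _ = nnls(E_aug, y_aug)
        return a

    rng = np.random.default_rng(3)
    bands, n_end = 200, 3
    E = np.abs(rng.normal(0.5, 0.2, size=(bands, n_end)))  # toy endmember spectra
    a_true = np.array([0.6, 0.3, 0.1])
    pixel = E @ a_true + rng.normal(0, 0.005, bands)        # noisy mixed pixel
    a_hat = unmix_pixel(pixel, E)
    print("true abundances:     ", a_true)
    print("estimated abundances:", np.round(a_hat, 3))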