
    Self-Calibration Methods for Uncontrolled Environments in Sensor Networks: A Reference Survey

    Continuing progress in sensor technology has steadily expanded the range of low-cost, small, and portable sensors on the market, increasing the number and type of physical phenomena that can be measured with wirelessly connected sensors. Large-scale deployments of wireless sensor networks (WSNs), involving hundreds or thousands of devices on limited budgets, often constrain the choice of sensing hardware, which generally has reduced accuracy, precision, and reliability. It is therefore challenging to achieve good data quality and maintain error-free measurements over the whole system lifetime. Self-calibration or recalibration in ad hoc sensor networks is essential for preserving data quality, yet it is difficult for several reasons, including random noise and the absence of suitable general models. Calibration performed in the field, without accurate and controlled instrumentation, is said to take place in an uncontrolled environment. This paper surveys current and fundamental self-calibration approaches and models for wireless sensor networks in uncontrolled environments.
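
    As a rough illustration of the simplest calibration model such surveys discuss, the sketch below (an assumption of this summary, not material from the paper) fits a per-sensor gain and offset by least squares against co-located reference values, using NumPy; the function names and the synthetic data are hypothetical.

        # Hypothetical helper: least-squares gain/offset calibration of one
        # sensor against reference values collected in the field.
        import numpy as np

        def fit_gain_offset(raw, reference):
            """Fit reference ~= gain * raw + offset in the least-squares sense."""
            A = np.column_stack([raw, np.ones_like(raw)])
            (gain, offset), *_ = np.linalg.lstsq(A, reference, rcond=None)
            return gain, offset

        # Synthetic example: a sensor with 5% gain error, a constant bias and noise.
        rng = np.random.default_rng(0)
        truth = rng.uniform(10.0, 30.0, size=200)                 # true quantity
        raw = 1.05 * truth + 2.0 + rng.normal(0.0, 0.3, size=200)
        gain, offset = fit_gain_offset(raw, truth)
        corrected = gain * raw + offset
        print(np.abs(corrected - truth).mean())                   # mean residual error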

    Energy Disaggregation via Adaptive Filtering

    The energy disaggregation problem is that of recovering device-level power consumption signals from the aggregate power consumption signal of a building. We show in this paper how the disaggregation problem can be reformulated as an adaptive filtering problem. This yields both a novel disaggregation algorithm and a better theoretical understanding of disaggregation. In particular, we show how the disaggregation problem can be solved online using a filter bank and discuss its optimality. Comment: Submitted to the 51st Annual Allerton Conference on Communication, Control, and Computing.
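
    For readers unfamiliar with adaptive filtering, the sketch below shows a generic least-mean-squares (LMS) filter identifying an unknown response online; it is a minimal illustration of the building block the abstract refers to, not the authors' disaggregation algorithm, and the signals, step size and tap count are assumptions.

        # Generic LMS adaptive filter identifying an unknown 4-tap FIR response
        # online from input/desired-signal pairs (illustration only).
        import numpy as np

        def lms(x, d, n_taps=4, mu=0.05):
            """Adapt weights w so that w . [x[k], ..., x[k-n_taps+1]] tracks d[k]."""
            w = np.zeros(n_taps)
            y = np.zeros_like(d)
            for k in range(n_taps - 1, len(x)):
                xk = x[k - n_taps + 1:k + 1][::-1]   # most recent sample first
                y[k] = w @ xk
                e = d[k] - y[k]                      # instantaneous error
                w += mu * e * xk                     # stochastic-gradient update
            return w, y

        rng = np.random.default_rng(1)
        x = rng.normal(size=5000)                                 # driving signal
        h = np.array([0.8, -0.3, 0.2, 0.1])                       # unknown system
        d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
        w, _ = lms(x, d)
        print(np.round(w, 2))                                     # approaches h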

    Damage identification in structural health monitoring: a brief review from its implementation to the use of data-driven applications

    The damage identification process provides relevant information about the current state of a structure under inspection, and it can be approached from two different points of view. The first approach uses data-driven algorithms, which are usually associated with data collected by sensors; the data are subsequently processed and analyzed. The second approach uses models to analyze information about the structure; in this case, the overall performance depends on the accuracy of the model and of the information used to define it. Although both approaches are widely used, data-driven algorithms are preferred in most cases because they can analyze data acquired from sensors and provide a real-time solution for decision making; however, they require high-performance processors because of their high computational cost. As a contribution to researchers working with data-driven algorithms and applications, this work presents a brief review of data-driven algorithms for damage identification in structural health monitoring applications. The review covers damage detection, localization, classification, extent estimation, and prognosis, as well as the development of smart structures, and the literature is organized according to the natural steps of a structural health monitoring system. It also includes information on the types of sensors used and on the development of data-driven algorithms for damage identification.
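
    As one concrete example of the kind of data-driven scheme covered by such reviews, the sketch below trains a PCA model on features from the healthy structure and flags damage when the reconstruction residual of new data exceeds an empirical threshold; the features, threshold and random seed are illustrative assumptions, not material from the review.

        # PCA baseline model of the healthy structure; new samples whose
        # reconstruction (Q) residual exceeds an empirical limit are flagged.
        import numpy as np

        def fit_baseline(X_healthy, n_components=2):
            mean = X_healthy.mean(axis=0)
            _, _, Vt = np.linalg.svd(X_healthy - mean, full_matrices=False)
            P = Vt[:n_components].T                    # principal subspace
            resid = (X_healthy - mean) - (X_healthy - mean) @ P @ P.T
            threshold = np.percentile((resid ** 2).sum(axis=1), 99)
            return mean, P, threshold

        def damage_flags(X_new, mean, P, threshold):
            resid = (X_new - mean) - (X_new - mean) @ P @ P.T
            return (resid ** 2).sum(axis=1) > threshold

        # Synthetic "healthy" features with low-rank structure plus noise.
        rng = np.random.default_rng(2)
        healthy = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 6))
        healthy += 0.1 * rng.normal(size=(500, 6))
        mean, P, thr = fit_baseline(healthy)

        shift = np.zeros(6)
        shift[3] = 1.0                                 # simulated damage-induced change
        print(damage_flags(healthy[:10] + shift, mean, P, thr))   # mostly True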

    Ambient vibration re-testing and operational modal analysis of the Humber Bridge

    An ambient vibration survey of the Humber Bridge was carried out in July 2008 by a combined team from the UK, Portugal and Hong Kong. The exercise had several purposes, including evaluation of current technology for instrumentation and system identification, and generation of an experimental dataset of modal properties for validating and updating finite element models used for scenario simulation and structural health monitoring. It was conducted as part of a project aimed at developing online diagnosis capabilities for three landmark European suspension bridges. Ten stand-alone tri-axial acceleration recorders were deployed at locations along all three spans and in all four pylons during five days of consecutive one-hour recordings. Time series segments from the recorders were merged, and several operational modal analysis (OMA) techniques were used to analyse the data and assemble modal models representing the global three-dimensional behaviour of all components of the structure. The paper describes the equipment and procedures used, compares the OMA technology used for system identification and presents modal parameters for key vibration modes of the complete structure. Results obtained using three techniques, the natural excitation technique/eigensystem realisation algorithm, stochastic subspace identification and the poly-least squares frequency domain method, are compared among themselves and with those from a 1985 test of the bridge; where direct comparison is possible, few significant modal parameter changes appear over the 23 years. The measurement system and the much more sophisticated OMA technology used in the present test show clear advantages over the earlier exercise, advantages made necessary by the compressed timescales. Even so, the parameter estimates exhibit significant variability between different methods and between variants of the same method, and they also vary in time and have inherent variability.
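
    For orientation, the sketch below shows the simplest output-only identification step, peak picking on a Welch spectrum of one acceleration channel, using NumPy/SciPy; the sampling rate, record length and modal frequencies are invented for illustration, and the NExT/ERA, SSI and frequency-domain methods used in the paper are considerably more elaborate.

        # Peak picking on a Welch power spectral density of one acceleration
        # channel; frequencies and record length are invented for illustration.
        import numpy as np
        from scipy.signal import welch, find_peaks

        def pick_natural_frequencies(acc, fs, prominence_db=5.0):
            f, pxx = welch(acc, fs=fs, nperseg=4096)
            peaks, _ = find_peaks(10.0 * np.log10(pxx), prominence=prominence_db)
            return f[peaks]

        # Synthetic one-hour ambient record: two lightly damped "modes" in noise.
        fs = 50.0
        t = np.arange(0.0, 3600.0, 1.0 / fs)
        rng = np.random.default_rng(3)
        acc = (np.sin(2 * np.pi * 0.117 * t)
               + 0.5 * np.sin(2 * np.pi * 0.31 * t)
               + 0.2 * rng.normal(size=t.size))
        print(pick_natural_frequencies(acc, fs))       # ~0.117 Hz and ~0.31 Hz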

    Orbit determination of space objects based on sparse optical data

    When building up a catalog of Earth-orbiting objects, if the available optical observations are sparse rather than deliberate follow-ups of specific objects, no orbit determination is possible without first correlating observations obtained at different times. This correlation step is the most computationally intensive, and it becomes increasingly difficult as the number of objects to be discovered grows. In this paper we test two algorithms (and the related prototype software) recently developed to solve the correlation problem for objects in geostationary orbit (GEO), including accurate orbit determination by full least squares solutions over all six orbital elements. Because the GEO region contains a significant subpopulation of high area-to-mass objects, which are strongly affected by non-gravitational perturbations, it was also necessary to solve for dynamical parameters describing these effects, that is, to fit between six and eight free parameters for each orbit. The validation was based on a set of real data acquired with the ESA Space Debris Telescope (ESASDT) at the Teide observatory (Canary Islands). We show that it is possible to assemble sparse observations into a set of objects with determined orbits, starting from a time distribution of observations compatible with a survey covering the region of interest in the sky just once per night. This could significantly reduce the requirements for a future telescope network with respect to what would have been required with the previously known algorithm for correlation and orbit determination. Comment: 20 pages, 8 figures.
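
    The core of a full least squares orbit fit is a differential-correction (Gauss-Newton) loop; the sketch below shows that loop on a deliberately simple two-parameter observation model, since the real GEO dynamics with non-gravitational parameters are beyond a short example. Everything in it, including the toy model and the initial guess, is an assumption for illustration, and, as in real orbit determination, the iteration needs a reasonable preliminary solution to converge.

        # Differential-correction (Gauss-Newton) loop on a toy two-parameter
        # observation model; real orbit determination uses the same structure
        # with orbital dynamics and six to eight parameters.
        import numpy as np

        def gauss_newton(residual_fn, jacobian_fn, x0, n_iter=10):
            """Iteratively minimise ||residual(x)||^2 by linearising at x."""
            x = np.asarray(x0, dtype=float)
            for _ in range(n_iter):
                r = residual_fn(x)
                J = jacobian_fn(x)
                dx, *_ = np.linalg.lstsq(J, -r, rcond=None)   # correction step
                x = x + dx
            return x

        # Toy "observations": a sinusoidal signal with unknown amplitude and rate.
        t_obs = np.linspace(0.0, 6.0, 12)
        rng = np.random.default_rng(4)
        obs = 1.5 * np.sin(0.8 * t_obs) + 0.01 * rng.normal(size=t_obs.size)

        residual = lambda p: p[0] * np.sin(p[1] * t_obs) - obs
        jacobian = lambda p: np.column_stack([np.sin(p[1] * t_obs),
                                              p[0] * t_obs * np.cos(p[1] * t_obs)])
        print(gauss_newton(residual, jacobian, x0=[1.2, 0.75]))   # ~[1.5, 0.8]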

    A Dynamical Systems Approach to Energy Disaggregation

    Energy disaggregation, also known as non-intrusive load monitoring (NILM), is the task of separating the aggregate energy data for a whole building into energy data for individual appliances. Studies have shown that simply providing disaggregated data to the consumer improves energy consumption behavior. However, placing individual sensors on every device in a home is not presently a practical solution; disaggregation offers a feasible way to give consumers energy usage data while utilizing existing infrastructure. In this paper, we present a novel framework for the energy disaggregation task. We model each individual device as a single-input, single-output system, where the output is the power consumed by the device and the input is the device usage. In this framework, the task of disaggregation translates into finding inputs for each device that generate the observed aggregate power consumption. We describe an implementation of this framework and show its results on simulated data as well as on data from a small-scale experiment. Comment: Submitted to the 52nd IEEE Conference on Decision and Control (CDC 2013).
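
    To make the modelling idea concrete, the sketch below simulates two hypothetical appliances as first-order single-input, single-output systems, sums their outputs into an aggregate, and recovers on/off usage with a naive steady-state matching step; the device parameters are invented and the recovery step is only an illustration, not the framework presented in the paper.

        # Each hypothetical appliance is a first-order SISO system; the aggregate
        # is the sum of the outputs. The naive recovery below matches each sample
        # against steady-state power levels and misses transients, which is one
        # motivation for treating disaggregation dynamically.
        import itertools
        import numpy as np

        def simulate_device(u, a, b):
            """Discrete-time model y[k+1] = a*y[k] + b*u[k] (steady-state gain b/(1-a))."""
            y = np.zeros(len(u) + 1)
            for k, uk in enumerate(u):
                y[k + 1] = a * y[k] + b * uk
            return y[1:]

        T = 200
        u1 = (np.arange(T) % 50 < 25).astype(float)    # usage pattern, device 1
        u2 = (np.arange(T) % 60 < 20).astype(float)    # usage pattern, device 2
        rng = np.random.default_rng(5)
        aggregate = (simulate_device(u1, 0.5, 50.0)    # steady-state gain 100 W
                     + simulate_device(u2, 0.8, 60.0)  # steady-state gain 300 W
                     + rng.normal(0.0, 1.0, T))

        gains = np.array([100.0, 300.0])
        combos = list(itertools.product([0.0, 1.0], repeat=2))
        est = np.array([min(combos, key=lambda c: abs(p - np.dot(c, gains)))
                        for p in aggregate])
        print("device 1 accuracy:", (est[:, 0] == u1).mean())
        print("device 2 accuracy:", (est[:, 1] == u2).mean())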

    On the use of asymmetric PSF on NIR images of crowded stellar fields

    We present data collected using the PISCES camera coupled with the First Light Adaptive Optics (FLAO) system mounted at the Large Binocular Telescope (LBT). The images were collected using two natural guide stars with apparent magnitudes of R < 13 mag. During these observations the seeing was ~0.9" on average. The AO performed very well: the images display mean FWHM values of 0.05 arcsec and 0.06 arcsec in the J and Ks bands, respectively. The Strehl ratio on these images reaches 13-30% (J) and 50-65% (Ks) in the off-centre and central pointings, respectively. On the basis of this sample we have reached a J-band limiting magnitude of ~22.5 mag and the deepest Ks-band limiting magnitude ever obtained in a crowded stellar field, Ks ~ 23 mag. The J-band images display a complex change in the shape of the PSF with increasing radial distance from the natural guide star: the stellar images become more elongated towards the corners of the J-band images, whereas the Ks-band images are more uniform. We discuss in detail the strategy used to perform accurate and deep photometry on these very challenging images, focusing on the use of an updated version of ROMAFOT based on asymmetric, analytical point spread functions. The quality of the photometry allowed us to properly identify a feature that clearly shows up in the NIR bands: the main sequence knee (MSK). The MSK is independent of the evolutionary age, so its magnitude difference from the canonical clock used to constrain the cluster age, the main sequence turn-off (MSTO), provides an estimate of the absolute age of the cluster. The key advantage of this new approach is that the error decreases by a factor of two compared with the classical one. Combining ground-based Ks with space-based F606W photometry, we estimate the absolute age of M15 to be 13.70 ± 0.80 Gyr. Comment: 15 pages, 7 figures, presented at the SPIE conference 201
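
    As a toy version of fitting an analytical, asymmetric PSF, the sketch below fits a rotated elliptical Gaussian plus a constant sky level to a synthetic stellar stamp with SciPy's curve_fit; the stamp, parameter values and initial guesses are invented, and ROMAFOT's actual PSF models and fitting machinery are more sophisticated than this stand-in.

        # Fit a rotated elliptical Gaussian plus constant sky to a synthetic
        # 32x32 stellar stamp (all values invented for illustration).
        import numpy as np
        from scipy.optimize import curve_fit

        def elliptical_gaussian(coords, amp, x0, y0, sx, sy, theta, sky):
            x, y = coords
            ct, st = np.cos(theta), np.sin(theta)
            xr = (x - x0) * ct + (y - y0) * st         # rotate into PSF axes
            yr = -(x - x0) * st + (y - y0) * ct
            return (amp * np.exp(-0.5 * ((xr / sx) ** 2 + (yr / sy) ** 2)) + sky).ravel()

        ny, nx = 32, 32
        y, x = np.mgrid[0:ny, 0:nx]
        true_params = (1000.0, 15.3, 16.1, 1.8, 1.2, 0.4, 50.0)
        rng = np.random.default_rng(6)
        stamp = (elliptical_gaussian((x, y), *true_params).reshape(ny, nx)
                 + rng.normal(0.0, 3.0, size=(ny, nx)))

        p0 = (stamp.max() - np.median(stamp), 16.0, 16.0, 2.0, 2.0, 0.0, np.median(stamp))
        popt, _ = curve_fit(elliptical_gaussian, (x, y), stamp.ravel(), p0=p0)
        print(np.round(popt, 2))   # amp, x0, y0, sigma_x, sigma_y, theta, sky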