
    Energy Analytics for Infrastructure: An Application to Institutional Buildings

    Commercial buildings in the United States account for 19% of total annual energy consumption. The Commercial Building Energy Consumption Survey (CBECS), which serves as the benchmark for commercial buildings, provides critical input for EnergyStar models. Smart energy management technologies, sensors, innovative demand response programs, and updated certification programs create opportunities to mitigate energy-related problems (blackouts and overproduction) and guide energy managers in optimizing consumption characteristics. Even with advances in technologies that rely on 'Big Data', codes and certification programs such as those of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) and the Leadership in Energy and Environmental Design (LEED) evaluate buildings mainly during the pre-construction phase, using assumed quantitative and qualitative values calculated from energy models such as EnergyPlus and eQUEST. Energy consumption analysis through Knowledge Discovery in Databases (KDD), however, is not commonly applied end to end by energy managers, which creates the need for a better energy analytics framework. This dissertation uses Interval Data (ID) and establishes three frameworks to identify electricity losses, predict electricity consumption, and detect anomalies using data mining, deep learning, and mathematical models. The energy analytics process integrates with computational science and pursues three objectives: 1. develop a framework to identify both technical and non-technical losses using clustering and semi-supervised learning techniques; 2. develop an integrated framework to predict electricity consumption using a wavelet-based data transformation model and deep learning algorithms; 3. develop a framework to detect anomalies using ensemble empirical mode decomposition and isolation forest algorithms. Building on a thorough research background, the first phase performs data analytics on the demand-supply database to determine the potential for energy loss reduction. The data preprocessing and electricity prediction framework of the second phase integrates mathematical models and deep learning algorithms to accurately predict consumption. The third phase employs a data decomposition model and data mining techniques to detect anomalies in institutional buildings.
    Doctoral Dissertation, Civil, Environmental and Sustainable Engineering, 201
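    A minimal sketch of the third objective (anomaly detection with ensemble empirical mode decomposition and an isolation forest) is given below. It is illustrative only: it assumes hourly interval data in a NumPy array and uses the PyEMD and scikit-learn packages; the function names, parameters, and synthetic data are assumptions, not the dissertation's code.

    # Sketch: EEMD + Isolation Forest for anomaly detection on electricity interval data.
    # Requires: pip install EMD-signal scikit-learn
    import numpy as np
    from PyEMD import EEMD
    from sklearn.ensemble import IsolationForest

    def detect_anomalies(consumption, contamination=0.01, seed=0):
        """Flag anomalous readings in a 1-D electricity consumption series."""
        t = np.arange(len(consumption), dtype=float)
        # Decompose the series into intrinsic mode functions (IMFs).
        imfs = EEMD(trials=50, noise_width=0.05)(consumption, t)
        # Use the IMF values at each time step as features for the Isolation Forest.
        features = imfs.T                                # shape: (n_samples, n_imfs)
        forest = IsolationForest(contamination=contamination, random_state=seed)
        labels = forest.fit_predict(features)            # -1 = anomaly, 1 = normal
        return np.where(labels == -1)[0]

    # Toy usage: a daily load cycle with three injected spikes.
    rng = np.random.default_rng(0)
    load = 50 + 10 * np.sin(np.linspace(0, 20 * np.pi, 2000)) + rng.normal(0, 1, 2000)
    load[[300, 1200, 1750]] += 40
    print(detect_anomalies(load))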

    Self-adjustable domain adaptation in personalized ECG monitoring integrated with IR-UWB radar

    To enhance electrocardiogram (ECG) monitoring systems for personalized detection, deep neural networks (DNNs) are applied to overcome individual differences through periodic retraining. As introduced previously [4], DNNs relieve individual differences by fusing ECG with impulse radio ultra-wideband (IR-UWB) radar. However, such a DNN-based ECG monitoring system tends to overfit to small personal datasets and generalizes poorly to newly collected unlabeled data. This paper proposes a self-adjustable domain adaptation (SADA) strategy to prevent overfitting and exploit unlabeled data. First, the paper enlarges the database of ECG and radar data with records acquired from 28 testers, expanded by data augmentation. Second, to utilize unlabeled data, SADA combines self-organizing maps with transfer learning to predict labels. Third, SADA integrates one-class classification with domain adaptation algorithms to reduce overfitting. Based on our enlarged database and standard databases, a large dataset of 73,200 records and a small one of 1,849 records are built to verify the proposal. Results show SADA's effectiveness in predicting labels and an increase of 14.4% in the sensitivity of DNNs compared with existing domain adaptation algorithms.
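    A hypothetical sketch of the label-prediction step (a self-organizing map trained on labeled records, then voting labels for unlabeled ones) is shown below. It uses the MiniSom package on pre-extracted feature vectors; the grid size, feature dimension, and toy data are assumptions, and the code is not the paper's SADA implementation.

    # Sketch: SOM-based label prediction for unlabeled ECG/radar feature vectors.
    # Requires: pip install minisom
    import numpy as np
    from minisom import MiniSom

    def som_label_predictor(X_labeled, y_labeled, X_unlabeled, grid=(10, 10), seed=0):
        """Train a SOM on labeled data; label new data by the winning node's majority class."""
        som = MiniSom(grid[0], grid[1], X_labeled.shape[1],
                      sigma=1.0, learning_rate=0.5, random_seed=seed)
        som.train_random(X_labeled, num_iteration=5000)
        node_labels = som.labels_map(X_labeled, y_labeled)   # node -> label counts
        predictions = []
        for x in X_unlabeled:
            counts = node_labels.get(som.winner(x))
            predictions.append(counts.most_common(1)[0][0] if counts else None)
        return predictions

    # Toy usage with random 16-dimensional "beat" features and two classes.
    rng = np.random.default_rng(0)
    X_lab, y_lab = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)
    print(som_label_predictor(X_lab, y_lab, rng.normal(size=(5, 16))))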

    Development of a machine learning based methodology for bridge health monitoring

    Thesis in the form of a compendium of publications. In recent years, the scientific community has been developing new structural health monitoring (SHM) techniques to identify damage in civil structures, especially bridges. Bridge health monitoring (BHM) systems serve to reduce overall life-cycle maintenance costs for bridges, as their main objective is to prevent catastrophic failures and damage. In BHM based on dynamic data, several problems arise in the post-processing of vibration signals: (i) modal-based dynamic features such as natural frequencies, mode shapes, and damping are limited for damage localization, since they reflect the global response of the structure; (ii) noise is present in the measured vibration responses; (iii) existing algorithms for damage feature extraction are applied inadequately because the non-linearity and non-stationarity of the recorded signals are neglected; (iv) environmental and operational conditions can generate false damage detections in bridges; (v) traditional algorithms have drawbacks in processing the large amounts of data obtained from BHM. This thesis proposes new vibration-based parameters and methods focused on damage detection, localization, and quantification, using a robust mixed methodology that combines signal processing and machine learning methods to solve the identified problems. The increasing volume of bridge monitoring data makes it worthwhile to study the ability of advanced tools and systems to extract useful information from dynamic and static variables. In the fields of Machine Learning (ML) and Artificial Intelligence (AI), powerful algorithms have been developed to handle problems where the amount of data is much larger (big data). The possibilities of ML techniques (unsupervised algorithms) were analyzed here for bridges, taking into account both operational and environmental conditions. A critical literature review was performed, together with an in-depth study of the accuracy and performance of a set of damage detection algorithms on three real bridges and one numerical model. The literature review on vibration-based damage detection shows that several state-of-the-art methods do not consider the nature of the data, the characteristics of the applied excitation (possible non-linearity, non-stationarity, presence or absence of environmental and/or operational effects), or the noise level of the sensors. Moreover, most research uses modal-based damage features that have limitations. Data normalization is handled poorly by the majority of methods, and operational and environmental variability is not properly accounted for. Likewise, the huge amount of recorded data requires automatic procedures with a proven capacity to reduce the possibility of false alarms. In addition, many investigations are limited because only numerical or laboratory cases are studied. Therefore, a methodology combining several algorithms is proposed to avoid these shortcomings. The conclusions show a robust methodology based on ML algorithms capable of detecting, localizing, and quantifying damage. It allows engineers to verify bridges and anticipate significant structural damage when it occurs. Moreover, the proposed non-modal parameters show their feasibility as damage features using ambient and forced vibrations. The Hilbert-Huang Transform (HHT), in conjunction with the Marginal Hilbert Spectrum and the Instantaneous Phase Difference, shows great capability to analyze nonlinear and nonstationary response signals for damage identification under operational conditions. The proposed strategy combines signal processing algorithms (ICEEMDAN and HHT) and ML (k-means) to perform damage detection and localization in bridges using traffic-induced vibration data in real-time operation.
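    The sketch below illustrates the decomposition-plus-clustering idea described above; it uses CEEMDAN from PyEMD as a stand-in for ICEEMDAN, SciPy's Hilbert transform for envelope features, and scikit-learn's k-means. The feature choice and toy signals are assumptions, not the thesis pipeline.

    # Sketch: signal decomposition + Hilbert envelope features + k-means clustering
    # of vibration records, in the spirit of the ICEEMDAN/HHT + k-means strategy.
    # Requires: pip install EMD-signal scipy scikit-learn
    import numpy as np
    from PyEMD import CEEMDAN
    from scipy.signal import hilbert
    from sklearn.cluster import KMeans

    def record_features(signal, n_imfs=3):
        """Summarize a vibration record by the mean instantaneous amplitude of its first IMFs."""
        imfs = CEEMDAN(trials=20)(signal)[:n_imfs]
        analytic = hilbert(imfs, axis=1)          # analytic signal per IMF
        return np.abs(analytic).mean(axis=1)      # mean envelope of each IMF

    def cluster_records(records, n_clusters=2, seed=0):
        """Group records by decomposed-signal features; an outlying cluster may indicate damage."""
        X = np.vstack([record_features(r) for r in records])
        return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)

    # Toy usage: five simulated accelerometer records, the last one with altered dynamics.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 1000)
    records = [np.sin(2 * np.pi * 3 * t) + 0.1 * rng.normal(size=t.size) for _ in range(4)]
    records.append(0.5 * np.sin(2 * np.pi * 5 * t) + 0.1 * rng.normal(size=t.size))
    print(cluster_records(records))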

    Anomaly detection in a cutting tool by K-means clustering and Support Vector Machines

    This paper concerns the analysis of experimental data, verifying the applicability of signal analysis techniques for condition monitoring of a packaging machine. In particular, the activity focuses on the cutting process that divides a continuous flow of packaging paper into single packages. The cut is made by a steel knife driven by a hydraulic system. Currently, the knives are frequently replaced, causing frequent machine stops and consequent lost-production costs. The aim of this paper is to develop a diagnostic procedure to assess the wear condition of the blades, reducing stops for maintenance. The packaging machine was equipped with a pressure sensor that monitors the hydraulic system driving the blade. Processing the pressure data comprises three main steps: first, scalar quantities indicative of the condition of the knife are selected; second, a clustering analysis is used to set a threshold between healthy and faulty knives; finally, a Support Vector Machine (SVM) model is applied to classify the technical condition of the knife during its lifetime.
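    A minimal sketch of the two-stage procedure (k-means to separate the scalar indicators of healthy and worn blades, then an SVM trained on the resulting labels) is given below; the indicator names and values are invented for illustration and are not taken from the paper.

    # Sketch: k-means clustering to label pressure-signal indicators, then an SVM
    # to classify the knife condition from new measurements.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Scalar indicators per cut, e.g. peak pressure and RMS of the pressure signal.
    healthy = rng.normal([60.0, 12.0], [2.0, 0.8], size=(300, 2))
    worn = rng.normal([72.0, 16.0], [3.0, 1.2], size=(80, 2))
    X = np.vstack([healthy, worn])

    # Stage 1: unsupervised split of the indicators into two clusters (healthy / worn).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Stage 2: supervised classifier trained on the cluster labels, used afterwards
    # to assess new cuts during the knife's lifetime.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
    print(clf.predict([[61.0, 12.5], [74.0, 16.5]]))   # expected: two different classes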

    Air Quality Prediction in Smart Cities Using Machine Learning Technologies Based on Sensor Data: A Review

    The influence of machine learning technologies is rapidly increasing and penetrating almost every field, and air pollution prediction is no exception. This paper reviews studies on air pollution prediction using machine learning algorithms based on sensor data in the context of smart cities. The most relevant papers were selected from the most popular databases using the corresponding filtering criteria. After thoroughly reviewing those papers, their main features were extracted and used to link and compare them to each other. As a result, we can conclude that: (1) instead of simple machine learning techniques, authors currently apply advanced and sophisticated techniques; (2) China was the leading country in terms of case studies; (3) particulate matter with a diameter of 2.5 micrometers (PM2.5) was the main prediction target; (4) in 41% of the publications the authors carried out the prediction for the next day; (5) 66% of the studies used data at an hourly rate; (6) 49% of the papers used open data, a share that has tended to increase since 2016; and (7) for efficient air quality prediction it is important to consider external factors such as weather conditions, spatial characteristics, and temporal features.
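    As a concrete illustration of point (7), the sketch below predicts next-day PM2.5 from hourly sensor readings together with weather and temporal features using a random forest; the feature set, data, and model are assumptions for illustration and are not drawn from any specific reviewed study.

    # Sketch: next-day PM2.5 prediction from hourly sensor, weather, and temporal features.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    hours = pd.date_range("2023-01-01", periods=24 * 200, freq="h")
    df = pd.DataFrame({
        "pm25": rng.gamma(2.0, 15.0, size=hours.size),       # sensor reading
        "temperature": rng.normal(15, 8, size=hours.size),   # external weather factor
        "wind_speed": rng.gamma(2.0, 1.5, size=hours.size),  # external weather factor
        "hour": hours.hour,                                   # temporal feature
        "dayofweek": hours.dayofweek,                         # temporal feature
    }, index=hours)
    df["pm25_next_day"] = df["pm25"].shift(-24)               # prediction target
    df = df.dropna()

    X, y = df.drop(columns="pm25_next_day"), df["pm25_next_day"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("R^2 on held-out days:", round(model.score(X_test, y_test), 3))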