8,013 research outputs found

    Computational intelligence techniques for HVAC systems: a review

    Buildings are responsible for 40% of global energy use and contribute towards 30% of total CO2 emissions. The drive to reduce energy use and the associated greenhouse gas emissions from buildings has acted as a catalyst for the development of advanced computational methods for the energy-efficient design, management and control of buildings and their systems. Heating, ventilation and air conditioning (HVAC) systems are the major source of energy consumption in buildings and an ideal candidate for substantial reductions in energy demand. Significant advances have been made in the past decades in the application of computational intelligence (CI) techniques to HVAC design, control, management, optimization, and fault detection and diagnosis. This article presents a comprehensive and critical review of the theory and applications of CI techniques for the prediction, optimization, control and diagnosis of HVAC systems. The analysis of trends reveals that the minimization of energy consumption was the key optimization objective in the reviewed research, closely followed by the optimization of thermal comfort, indoor air quality and occupant preferences. Hard-coded MATLAB programs were the most widely used simulation tool, followed by TRNSYS, EnergyPlus, DOE-2, HVACSim+ and ESP-r. Metaheuristic algorithms were the preferred CI methods for solving HVAC-related problems, and genetic algorithms in particular were applied in most of the studies. Despite the low number of studies focussing on multi-agent systems (MAS) compared to the other CI techniques, interest in the approach is increasing because of its ability to divide and conquer an HVAC optimization problem with enhanced overall performance. The paper also identifies prospective future advancements and research directions.
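
    As an illustration of the kind of metaheuristic optimization surveyed in this review, the following minimal sketch applies a genetic algorithm to a purely hypothetical 24-hour cooling-setpoint schedule. The cost model, comfort band, weights and all parameter values are invented for demonstration and are not taken from any of the reviewed studies.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy cost model: 24 hourly cooling setpoints (degrees C).
    # Energy cost grows as setpoints drop below the outdoor temperature; a
    # comfort penalty is charged when occupied-hour setpoints leave 22-26 C.
    HOURS = 24
    OUTDOOR = 30.0
    OCCUPIED = np.zeros(HOURS, dtype=bool)
    OCCUPIED[8:18] = True

    def cost(setpoints):
        energy = np.sum(np.maximum(OUTDOOR - setpoints, 0.0))      # crude cooling-load proxy
        discomfort = np.sum(OCCUPIED * np.maximum(np.abs(setpoints - 24.0) - 2.0, 0.0))
        return energy + 10.0 * discomfort                          # weighted single objective

    def genetic_algorithm(pop_size=40, generations=200, mutation_sd=0.3):
        pop = rng.uniform(20.0, 30.0, size=(pop_size, HOURS))      # random initial schedules
        for _ in range(generations):
            fitness = np.array([cost(ind) for ind in pop])
            parents = pop[np.argsort(fitness)[: pop_size // 2]]    # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, HOURS)                       # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                child += rng.normal(0.0, mutation_sd, HOURS)       # Gaussian mutation
                children.append(np.clip(child, 18.0, 30.0))
            pop = np.vstack([parents, children])
        best = pop[np.argmin([cost(ind) for ind in pop])]
        return best, cost(best)

    if __name__ == "__main__":
        schedule, c = genetic_algorithm()
        print("best cost:", round(c, 2))
        print("occupied-hour setpoints:", np.round(schedule[8:18], 1))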

    Designing labeled graph classifiers by exploiting the Rényi entropy of the dissimilarity representation

    Representing patterns as labeled graphs is becoming increasingly common in the broad field of computational intelligence. Accordingly, a wide repertoire of pattern recognition tools, such as classifiers and knowledge discovery procedures, is nowadays available and tested on various datasets of labeled graphs. However, the design of effective learning procedures operating in the space of labeled graphs is still a challenging problem, especially from the computational complexity viewpoint. In this paper, we present a major improvement of a general-purpose classifier for graphs, which is conceived as an interplay between dissimilarity representation, clustering, information-theoretic techniques, and evolutionary optimization algorithms. The improvement focuses on a specific key subroutine devised to compress the input data. We prove several theorems that are fundamental to setting the parameters controlling this compression operation. We demonstrate the effectiveness of the resulting classifier by benchmarking the developed variants on well-known datasets of labeled graphs, considering as distinct performance indicators the classification accuracy, the computing time, and parsimony in terms of the structural complexity of the synthesized classification models. The results show state-of-the-art test set accuracy and a considerable speed-up in computing time.
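
    The classifier described above builds on a dissimilarity representation and an information-theoretic compression step. The sketch below illustrates those two ingredients only, under strong simplifying assumptions: a made-up spectral dissimilarity between toy random graphs, a prototype set standing in for the clustering step, and a plug-in estimate of the order-2 Rényi entropy. It is not the authors' algorithm.

    import numpy as np

    rng = np.random.default_rng(1)

    def toy_graph_dissimilarity(A, B):
        """Illustrative stand-in for a graph dissimilarity: distance between
        sorted adjacency spectra, zero-padded to a common size (not the paper's measure)."""
        n = max(len(A), len(B))
        ea, eb = np.zeros(n), np.zeros(n)
        ea[:len(A)] = np.sort(np.linalg.eigvalsh(A))
        eb[:len(B)] = np.sort(np.linalg.eigvalsh(B))
        return float(np.linalg.norm(ea - eb))

    def dissimilarity_representation(graphs, prototypes):
        """Embed each graph as its vector of dissimilarities to the prototypes."""
        return np.array([[toy_graph_dissimilarity(g, p) for p in prototypes]
                         for g in graphs])

    def renyi_entropy_order2(X, sigma=1.0):
        """Plug-in estimate of the quadratic (order-2) Renyi entropy of the
        embedded points: H2 = -log( mean_ij exp(-||xi-xj||^2 / (4 sigma^2)) );
        the Gaussian normalization constant is omitted, which only shifts the value."""
        sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return -np.log(np.mean(np.exp(-sq / (4.0 * sigma ** 2))))

    # Toy dataset: random symmetric adjacency matrices of varying size.
    graphs = [np.triu(rng.integers(0, 2, (n, n)), 1) for n in rng.integers(4, 8, 30)]
    graphs = [g + g.T for g in graphs]
    prototypes = graphs[:5]                  # e.g. medoids that clustering would return

    X = dissimilarity_representation(graphs, prototypes)
    print("embedding shape:", X.shape)
    print("order-2 Renyi entropy:", round(renyi_entropy_order2(X), 3))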

    Group Iterative Spectrum Thresholding for Super-Resolution Sparse Spectral Selection

    Recently, sparsity-based algorithms have been proposed for super-resolution spectrum estimation. However, to achieve adequately high resolution in real-world signal analysis, the dictionary atoms have to be close to each other in frequency, resulting in a coherent design. Popular convex compressed sensing methods break down in the presence of high coherence and large noise. We propose a new regularization approach that handles model collinearity and obtains parsimonious frequency selection simultaneously. It takes advantage of the pairing structure of the sine and cosine atoms in the frequency dictionary. A probabilistic spectrum screening is also developed for fast computation in high dimensions. A data-resampling version of the high-dimensional Bayesian Information Criterion is used to determine the regularization parameters. Experiments show the efficacy and efficiency of the proposed algorithms in challenging situations with small sample sizes, high frequency resolution, and low signal-to-noise ratios.
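
    The pairing of sine and cosine atoms naturally suggests a group-sparse formulation. As a hedged illustration (not the paper's group iterative spectrum thresholding procedure), the sketch below runs a generic group iterative soft-thresholding scheme on a toy cosine/sine dictionary, treating each frequency's (cos, sin) pair as one group; the frequency grid, penalty level and test signal are all invented.

    import numpy as np

    rng = np.random.default_rng(2)

    def cos_sin_dictionary(t, freqs):
        """Columns are paired cosine/sine atoms, one (cos, sin) group per frequency."""
        cols = []
        for f in freqs:
            cols.append(np.cos(2 * np.pi * f * t))
            cols.append(np.sin(2 * np.pi * f * t))
        return np.column_stack(cols)

    def group_ista(y, A, lam, n_groups, n_iter=500):
        """Generic group iterative soft-thresholding:
        minimize 0.5*||y - A x||^2 + lam * sum_g ||x_g||_2, groups of size 2."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L      # gradient step
            for g in range(n_groups):          # block soft-threshold each (cos, sin) pair
                blk = z[2 * g:2 * g + 2]
                nrm = np.linalg.norm(blk)
                scale = max(0.0, 1.0 - (lam / L) / nrm) if nrm > 0 else 0.0
                x[2 * g:2 * g + 2] = scale * blk
        return x

    # Toy signal: two tones plus noise, grid of candidate frequencies (one pair each).
    t = np.arange(128) / 128.0
    y = 1.0 * np.cos(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 23 * t)
    y += 0.1 * rng.standard_normal(t.size)
    freqs = np.arange(1, 41)
    A = cos_sin_dictionary(t, freqs)

    x = group_ista(y, A, lam=2.0, n_groups=len(freqs))
    amps = np.hypot(x[0::2], x[1::2])         # per-frequency amplitude of each pair
    print("selected frequencies:", freqs[amps > 0.1])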

    A Study of Automatic Detection and Classification of EEG Epileptiform Transients

    This Dissertation documents methods for the automatic detection and classification of epileptiform transients, which are important clinical issues. There are two main topics: (1) detection of paroxysmal activities in EEG; and (2) classification of paroxysmal activities. The machine learning algorithms were trained on expert opinion, provided as annotations in clinical EEG recordings called 'yellow boxes' (YBs). The Dissertation describes improved wavelet-based features that are used in machine learning algorithms to detect events in clinical EEG. It also examines the influence of electrode positions and dataset cardinality on the outcome, and studies whether fuzzy decision strategies yield better performance than crisp ones. In the yellow-box detection study, the Dissertation makes use of threshold strategies and implementations of artificial neural networks (ANNs). It develops two types of features, wavelet-based and morphological, for comparison, and explores the possibility of reducing the input vector dimension by pruning. A full-scale real-time simulation of YB detection is performed. The simulation results are demonstrated using EEGnet, a web-based EEG viewing system designed in the School of Computing at Clemson, and are compared to expert-marked YBs.
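
    As a rough illustration of the wavelet-feature-plus-detector pipeline described above, the following sketch uses synthetic windows standing in for annotated clinical EEG, Haar detail energies standing in for the Dissertation's wavelet features, and a simple energy threshold standing in for the trained ANN; every signal parameter in it is hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)

    def haar_dwt_energies(x, levels=4):
        """Illustrative wavelet-style features: energy of the Haar detail
        coefficients at each decomposition level of a 1-D EEG window."""
        x = np.asarray(x, dtype=float)
        feats = []
        for _ in range(levels):
            if len(x) % 2:                       # pad to even length if needed
                x = np.append(x, x[-1])
            approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
            detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
            feats.append(float(np.sum(detail ** 2)))
            x = approx
        return np.array(feats)

    def toy_window(spike=False, n=256):
        """Hypothetical EEG window; a 'yellow box' is mimicked by adding a
        brief, sharp biphasic transient to background noise."""
        w = rng.standard_normal(n)
        if spike:
            i = rng.integers(32, n - 32)
            w[i] += 20.0                         # sharp positive deflection
            w[i + 1] -= 15.0                     # followed by a negative one
        return w

    # Simple threshold detector on the finest-scale detail energy, standing in
    # for the trained ANN described in the Dissertation.
    background = np.array([haar_dwt_energies(toy_window(False))[0] for _ in range(200)])
    threshold = background.mean() + 4.0 * background.std()

    hits = sum(haar_dwt_energies(toy_window(True))[0] > threshold for _ in range(50))
    false_alarms = sum(haar_dwt_energies(toy_window(False))[0] > threshold for _ in range(50))
    print(f"detected {hits}/50 spike windows, {false_alarms}/50 false alarms")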