
    Energy performance forecasting of residential buildings using fuzzy approaches

    A considerable share of the energy consumed for domestic purposes in Europe goes to heating and cooling. This energy is produced mostly by burning fossil fuels, which has a high negative environmental impact. The characteristics of a building are an important factor in determining its heating and cooling loads. Therefore, studying the building characteristics relevant to maintaining comfortable indoor air conditions is very useful for designing and constructing energy-efficient buildings. In previous studies, different machine-learning approaches have been used to predict heating and cooling loads from the following set of variables: relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area and glazing area distribution. However, none of these methods is based on fuzzy logic. In this research, we study two fuzzy logic approaches, i.e., fuzzy inductive reasoning (FIR) and the adaptive neuro-fuzzy inference system (ANFIS), applied to the same problem. The fuzzy approaches obtain very good results, outperforming all the methods described in previous studies except one. In this work, we also study the feature selection process of the FIR methodology as a pre-processing tool for selecting the most relevant variables before applying any predictive modelling methodology. FIR feature selection is shown to provide interesting insights into the main building variables causally related to heating and cooling loads. This allows better decision making and design strategies, since accurate cooling and heating load estimations and correct identification of the parameters that affect building energy demands are of high importance for optimizing building designs and equipment specifications.
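
    As a rough illustration of the ANFIS side of the above (the paper's actual FIR and ANFIS models and their fitted parameters are not reproduced here), the sketch below evaluates a two-rule, first-order Takagi-Sugeno system in plain NumPy; the rule centres, widths and linear consequents are invented placeholders that would normally be learned from the building data.

        import numpy as np

        def gauss(x, c, s):
            # Gaussian membership of each component of x for a fuzzy set
            # centred at c with width s.
            return np.exp(-0.5 * ((x - c) / s) ** 2)

        def ts_predict(x, centres, sigmas, consequents):
            # First-order Takagi-Sugeno output for one input vector x.
            # Firing strength of each rule: product of per-input memberships.
            w = np.array([np.prod(gauss(x, c, s)) for c, s in zip(centres, sigmas)])
            w = w / w.sum()                        # normalised firing strengths
            # Each consequent is a linear model a.x + b (the part ANFIS fits
            # by least squares during training).
            y_rules = np.array([a @ x + b for a, b in consequents])
            return w @ y_rules                     # weighted average of rule outputs

        # Two illustrative rules over [relative compactness, glazing area];
        # in a real ANFIS these parameters are learned from the data set.
        centres = [np.array([0.75, 0.10]), np.array([0.95, 0.40])]
        sigmas = [np.array([0.10, 0.10]), np.array([0.10, 0.15])]
        consequents = [(np.array([20.0, 10.0]), 2.0), (np.array([35.0, 25.0]), 5.0)]

        x = np.array([0.90, 0.25])
        print("predicted heating load:", ts_predict(x, centres, sigmas, consequents))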

    1st INCF Workshop on Sustainability of Neuroscience Databases

    The goal of the workshop was to discuss issues related to the sustainability of neuroscience databases, identify problems and propose solutions, and formulate recommendations to the INCF. The report summarizes the discussions of invited participants from the neuroinformatics community as well as from other disciplines where sustainability issues have already been approached. The recommendations for the INCF involve rating, ranking, and supporting database sustainability

    Uncertainty and Interpretability Studies in Soft Computing with an Application to Complex Manufacturing Systems

    In systems modelling and control theory, the benefits of applying neural networks have been extensively studied, particularly in manufacturing processes such as the prediction of mechanical properties of heat-treated steels. However, modern industrial processes usually involve large amounts of data and a range of non-linear effects and interactions that can hinder model interpretation. In steel manufacturing, for example, understanding the complex mechanisms through which the heat treatment process produces the mechanical properties is vital. This knowledge is not available via numerical models, so an experienced metallurgist estimates the model parameters needed to obtain the required properties. This human knowledge and perception can be imprecise, leading to cognitive uncertainty such as vagueness and ambiguity when making decisions. In system classification, this may translate into a system deficiency - for example, small changes in input attributes may result in a sudden and inappropriate change of class assignment. To address this issue, practitioners and researchers have developed systems that are functionally equivalent to fuzzy systems and neural networks. Such systems provide a morphology that mimics the human ability to reason via the qualitative aspects of fuzzy information rather than its quantitative analysis. Furthermore, these models are able to learn from data sets and to describe the associated interactions and non-linearities in the data. However, much like neural networks, a neural fuzzy system may suffer from a loss of interpretability and transparency when making decisions, mainly due to the application of adaptive approaches for its parameter identification. Since the RBF-NN can be treated as a fuzzy inference engine, this thesis presents several methodologies that quantify different types of uncertainty and their influence on the interpretability and transparency of the RBF-NN during its parameter identification. In particular, three kinds of uncertainty sources related to the RBF-NN are studied, namely entropy, fuzziness and ambiguity. First, a methodology based on Granular Computing (GrC), neutrosophic sets and the RBF-NN is presented. The objective of this methodology is to quantify the hesitation produced during granular compression at the low level of interpretability of the RBF-NN via the use of neutrosophic sets. This study also aims to enhance the distinguishability, and hence the transparency, of the initial fuzzy partition. The effectiveness of the proposed methodology is tested on a real case study for the prediction of the properties of heat-treated steels. Secondly, a new Interval Type-2 Radial Basis Function Neural Network (IT2-RBF-NN) is introduced as a new modelling framework. The IT2-RBF-NN takes advantage of the functional equivalence between type-1 FLSs and the RBF-NN to construct an Interval Type-2 Fuzzy Logic System (IT2-FLS) that is able to deal with linguistic uncertainty and perceptions in the RBF-NN rule base. This gives rise to different combinations when optimising the IT2-RBF-NN parameters. Finally, a twofold study of uncertainty assessment at the high level of interpretability of the RBF-NN is provided. On the one hand, the first study proposes a new methodology to quantify (a) the fuzziness and (b) the ambiguity at each RU, and during the formation of the rule base, via the use of neutrosophic set theory. The aim of this methodology is to calculate the fuzziness associated with each rule and then the ambiguity related to each normalised consequence of the fuzzy rules, which result from rule overlapping and from one-to-many choices, respectively. On the other hand, the second study proposes a new methodology to quantify the entropy and the fuzziness that arise from the redundancy phenomenon during parameter identification. To conclude this work, the experimental results obtained by applying the proposed methodologies to two well-known benchmark data sets and to the prediction of mechanical properties of heat-treated steels led to the publication of three articles in two peer-reviewed journals and one international conference.
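
    As a small, self-contained illustration of what "fuzziness" and "ambiguity" can mean numerically, the sketch below scores one vector of rule firing strengths using standard textbook measures (De Luca-Termini fuzziness and the Higashi-Klir U-uncertainty); it is not the thesis's neutrosophic formulation, just an assumed stand-in for the kind of quantity being computed.

        import numpy as np

        def fuzziness(mu, eps=1e-12):
            # De Luca-Termini fuzziness of membership grades mu in [0, 1];
            # equals 1.0 when every grade is 0.5 (maximal vagueness).
            mu = np.clip(mu, eps, 1.0 - eps)
            h = -(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu))
            return h.mean() / np.log(2.0)

        def ambiguity(mu):
            # Higashi-Klir U-uncertainty of a normalised possibility
            # distribution; grows when several rules fire almost equally.
            pi = np.sort(mu)[::-1]
            pi = pi / pi.max()
            pi = np.append(pi, 0.0)
            ranks = np.arange(1, len(mu) + 1)
            return np.sum((pi[:-1] - pi[1:]) * np.log2(ranks))

        firing = np.array([0.82, 0.55, 0.48, 0.05])   # one sample's rule activations
        print(fuzziness(firing), ambiguity(firing))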

    Mass Upper Bounds for Over 50 Kepler Planets Using Low-S/N Transit Timing Variations

    Prospects for expanding the available mass measurements of the Kepler sample are limited. Planet masses have typically been inferred via radial velocity (RV) measurements of the host star or time-series modeling of transit timing variations (TTVs) in multiplanet systems; however, the majority of Kepler hosts are too dim for RV follow-up, and only a select number of systems have strong enough TTVs for time-series modeling. Here, we develop a method of constraining planet mass in multiplanet systems using low signal-to-noise ratio (S/N) TTVs. For a sample of 175 planets in 79 multiplanet systems from the California-Kepler Survey, we infer posteriors on planet mass using publicly available TTV time-series from Kepler. For 53 planets (>30% of our sample), low-S/N TTVs yield informative upper bounds on planet mass, i.e., the mass constraint strongly deviates from the prior on mass and yields a physically reasonable bulk composition. For 25 small planets, low-S/N TTVs favor volatile-rich compositions. Where available, low-S/N TTV-based mass constraints are consistent with RV-derived masses. TTV time-series are publicly available for each Kepler planet, and the compactness of Kepler systems makes TTV-based constraints informative for a substantial fraction of multiplanet systems. Leveraging low-S/N TTVs offers a valuable path toward increasing the available mass constraints of the Kepler sample.
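
    A minimal sketch of how a low-S/N constraint can still be summarised as an informative upper bound: draw samples from a prior and a posterior on planet mass, report a 95% quantile, and check that it deviates strongly from the prior. The numbers and the "informative" criterion below are invented placeholders; the paper's actual TTV likelihood and priors are not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        # Stand-in log-uniform prior and stand-in posterior, in Earth masses.
        prior_samples = 10 ** rng.uniform(-1, 3, 100_000)
        posterior_samples = 10 ** rng.normal(0.7, 0.35, 100_000)

        upper_95 = np.percentile(posterior_samples, 95)   # 95% mass upper bound
        prior_95 = np.percentile(prior_samples, 95)
        informative = upper_95 < 0.5 * prior_95           # crude "deviates from prior" test
        print(f"M < {upper_95:.1f} M_Earth (95%), informative: {informative}")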

    The Path to Extreme Precision Radial Velocity With EXPRES

    The field of exoplanets is currently poised to benefit hugely from improved radial velocity (RV) precision. Extreme precision radial-velocity (EPRV) measurements, capable of detecting planetary signals on the order of 10-30 cm/s, will deliver integral planetary parameters, be sensitive to a missing category of lower-mass planets, grant a deeper understanding of multi-planet architectures, and support both current and future space missions such as TESS and JWST. The ability of EPRV to deliver mass estimates is essential for comprehensively characterizing planets, understanding formation histories, and interpreting atmospheric spectra. Until recently, RV precision had stalled at around 1 m/s, i.e. signals with a semi-amplitude of less than 1 m/s could not be faithfully detected. We demonstrate with HARPS, UVES, and CHIRON observations of alpha Cen the need for better data, not just more data. Even with over a decade of observations at around 1 m/s precision, large areas of mass/period parameter space remained unprobed. Higher-fidelity data are needed to significantly push down detection limits. EXPRES, the EXtreme PREcision Spectrograph, was one of the first next-generation spectrographs to go on sky. Installed at the 4.3-m Lowell Discovery Telescope in 2017 and commissioned through 2019, EXPRES is a fiber-fed, ultra-stabilized echelle spectrograph with a high median resolving power of R~137,000 and an instrument calibration stability of 4-7 cm/s, a factor of 10 better than previous instruments. The stringent requirements of EPRV measurements, along with the stability of EXPRES and similar instruments, change how we must extract, calibrate, and model the resultant spectral data. This dissertation discusses the work that must be done in this new regime in terms of data pipelines and modeling stellar signals, and showcases some initial progress. We present EXPRES' data pipeline, a new data-driven method for wavelength calibration, and the current state of the field in disentangling stellar signals. The EXPRES extraction pipeline implements a flat-relative, optimal extraction model and excalibur for wavelength calibration. Excalibur is a hierarchical, non-parametric method for wavelength calibration developed as part of this thesis work. Calibration line positions are de-noised by using all calibration images to construct a model of the accessible calibration space of the instrument. This denoising returns wavelengths a factor of five more precise than previous polynomial-based methods. With EXPRES data, excalibur reduced the overall RMS of RV data sets for all targets tested by 0.2-0.5 m/s. This consistent reduction in overall RMS implies that excalibur is addressing an instrumental, red-noise component that would otherwise permeate all exposures. With instrumental noise lowered and extraction error reduced, intrinsic stellar variability and the resulting apparent RVs now dominate the error budget for EPRV measurements. The EXPRES Stellar Signals Project (ESSP) released high-fidelity spectroscopic data from EXPRES and photometric observations from the automatic photoelectric telescopes (APT) for four different stars. This allowed for a self-consistent comparison of the 19 different methods submitted, which represent the current state of the field in disentangling stellar signals. The analysis of results is ongoing work. Currently, the best-performing method gives a final RV RMS of 1.2 m/s. Submitted methods nearly always do better than classic methods of decorrelating RVs from stellar signals. We found that methods returning the lowest RV RMS often used the full spectra and/or flexible statistical models such as Gaussian processes or principal component analysis. However, there was a concerning lack of agreement between methods. If we hope to improve on current advancements and develop methods achieving sub-meter-per-second RMS, we must introduce more interpretability to methods in order to understand what is and is not working. A densely sampled, high-resolution data set sensitive to all categories of stellar variation is needed to understand all types of stellar signals. This dissertation work centers on the question of achieving EPRV capabilities for detecting planets that induce reflex velocity signals on the order of 10-30 cm/s. We consider what needs to be done, describe current development towards this goal, and discuss the future work that remains before sub-meter-per-second precision can become a regular reality. We emphasize the power of data-driven pipelines to account for variations in data for EPRV applications and beyond. Empirically backed conclusions for mitigating photospheric velocities are summarized from the results of the ESSP, along with next steps and additional data requirements. Progress is being made, but there remains much work to be done.
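
    The excalibur idea of de-noising line positions with a model of the instrument's accessible calibration space can be caricatured with a low-rank reconstruction. The sketch below is not the published algorithm (which uses a more careful hierarchical, non-parametric interpolation) and the toy numbers are assumptions; it only illustrates why pooling all calibration exposures reduces per-line scatter.

        import numpy as np

        def denoise_line_positions(positions, n_components=3):
            # positions: (n_exposures, n_lines) measured calibration line centres.
            mean = positions.mean(axis=0)
            centred = positions - mean
            # SVD of the exposure x line matrix; keep only the leading components,
            # i.e. the low-dimensional space the instrument actually explores.
            u, s, vt = np.linalg.svd(centred, full_matrices=False)
            s[n_components:] = 0.0
            return mean + (u * s) @ vt          # low-rank (denoised) reconstruction

        # Toy data: 50 exposures of 200 lines drifting smoothly, plus photon noise.
        rng = np.random.default_rng(1)
        truth = np.linspace(0, 1, 200) + 1e-3 * np.sin(np.linspace(0, 3, 50))[:, None]
        noisy = truth + 1e-4 * rng.standard_normal(truth.shape)
        clean = denoise_line_positions(noisy)
        print(np.std(noisy - truth), np.std(clean - truth))   # scatter shrinks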

    Data and Feature Reduction in Fuzzy Modeling through Particle Swarm Optimization

    The study is concerned with data and feature reduction in fuzzy modeling. As these reduction activities are advantageous to fuzzy models in terms of both the effectiveness of their construction and the interpretation of the resulting models, their realization deserves particular attention. The formation of a subset of meaningful features and a subset of essential instances is discussed in the context of fuzzy-rule-based models. In contrast to existing studies, which focus predominantly on feature selection (namely, a reduction of the input space), the position advocated here is that a reduction has to involve both data and features to be efficient for the design of fuzzy models. The reduction problem is combinatorial in nature and, as such, calls for the use of advanced optimization techniques. In this study, we use particle swarm optimization (PSO) as an optimization vehicle for forming a subset of features and data (instances) with which to design a fuzzy model. Given the dimensionality of the problem (the search space involves both features and instances), we discuss a cooperative version of PSO along with a clustering mechanism for forming a partition of the overall search space. Finally, a series of numeric experiments using several machine learning data sets is presented.
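
    A compressed sketch of the basic ingredient - a particle swarm searching jointly over feature and instance subsets, with a fitness that trades model error against subset size. The cooperative swarm, the search-space clustering and the fuzzy-rule-based model described above are omitted; a plain least-squares model on synthetic data stands in purely for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.standard_normal((200, 12))
        y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(200)
        X_val, y_val = X[150:], y[150:]
        X, y = X[:150], y[:150]

        def fitness(mask):
            # First 12 entries encode features, the rest encode instances.
            feats, insts = mask[:12] > 0.5, mask[12:] > 0.5
            if feats.sum() == 0 or insts.sum() < 10:
                return np.inf
            A = X[insts][:, feats]
            coef, *_ = np.linalg.lstsq(A, y[insts], rcond=None)
            err = np.mean((X_val[:, feats] @ coef - y_val) ** 2)
            return err + 0.01 * feats.sum()      # penalise large feature subsets

        n_dim, n_part = 12 + 150, 30
        pos = rng.uniform(0, 1, (n_part, n_dim))
        vel = np.zeros_like(pos)
        pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[np.argmin(pbest_f)]

        for _ in range(50):                      # standard PSO velocity/position update
            r1, r2 = rng.uniform(size=(2, n_part, n_dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0, 1)
            f = np.array([fitness(p) for p in pos])
            better = f < pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[np.argmin(pbest_f)]

        print("selected features:", np.where(gbest[:12] > 0.5)[0])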

    Memristor Platforms for Pattern Recognition Memristor Theory, Systems and Applications

    In the last decade a large scientific community has focused on the study of the memristor. The memristor is thought by many to be the best alternative to CMOS technology, which is gradually showing its flaws. Transistor technology has developed fast from both a research and an industrial point of view, reducing the size of its elements to the nano-scale. It has been possible to build more and more complex machinery and to communicate with that machinery thanks to the development of programming languages based on combinations of boolean operands. Alas, as shown by Moore's law, the steep curve of implementation and development of CMOS is gradually reaching a plateau. There is a clear need to study new elements that can combine the efficiency of transistors and at the same time increase the complexity of the operations. Memristors can be described as non-linear resistors capable of maintaining memory of the resistance state that they reached. Since their first theoretical treatment by Professor Leon O. Chua in 1971, different research groups have devoted their expertise to studying both the fabrication and the implementation of this new and promising technology. In the following thesis a complete study of memristors and memristive elements is presented. The road map of this study departs from a deep understanding of the physics that governs memristors, focusing on the HP model by Dr. Stanley Williams. Other devices such as phase change memories (PCMs) and memristive biosensors made with Si nanowires have been studied, developing emulators and equivalent circuitry in order to describe their complex dynamics. This part sets the first milestone of a pathway that passes through more complex implementations such as neuromorphic systems and neural networks based on memristors, proving their computing efficiency. Finally, a patented memristor-based technology is presented, demonstrating its efficacy for clinical applications. The presented system has been designed to automatically detect and assess chronic wounds, a syndrome that affects roughly 2% of the world population, through a cellular automaton that analyzes and processes digital images of ulcers. Thanks to its precision in measuring the lesions, the proposed solution promises not only to increase healing rates, but also to prevent the worsening of the wounds that usually leads to amputation and death.
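
    For concreteness, the linear-drift HP memristor model mentioned above can be simulated in a few lines: a state variable w tracks the doped-region width and the memristance interpolates between R_on and R_off. The parameter values below are typical illustrative numbers, not the ones used in the thesis.

        import numpy as np

        R_on, R_off = 100.0, 16e3        # on/off resistance (ohms)
        D = 10e-9                        # device thickness (m)
        mu_v = 1e-14                     # dopant mobility (m^2 s^-1 V^-1)

        def simulate(v_of_t, t):
            # Integrate dw/dt = mu_v * R_on / D * i(t) with forward Euler.
            w = 0.5 * D                  # initial doped-region width
            dt = t[1] - t[0]
            i_out = np.empty_like(t)
            for k, v in enumerate(v_of_t):
                M = R_on * (w / D) + R_off * (1.0 - w / D)   # memristance
                i = v / M
                i_out[k] = i
                w = np.clip(w + dt * mu_v * R_on / D * i, 0.0, D)
            return i_out

        t = np.linspace(0.0, 2.0, 20000)
        v = 1.0 * np.sin(2 * np.pi * 1.0 * t)    # 1 V, 1 Hz sinusoidal drive
        i = simulate(v, t)
        # Plotting i against v traces the pinched hysteresis loop that is the
        # memristor's fingerprint.
        print(i.min(), i.max())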

    Initial Condition Estimation in Flux Tube Simulations using Machine Learning

    Space weather has become an essential field of study, as solar flares, coronal mass ejections, and other phenomena can severely impact life on Earth as we know it. The solar wind is threaded by magnetic flux tubes that extend from the solar atmosphere to distances beyond the solar system boundary. As these flux tubes cross the Earth's orbit, it is essential to understand and predict the effects of solar phenomena at 1 AU, but the physical parameters linked to the solar wind formation and acceleration processes are not directly observable. Some existing models, such as MULTI-VP, try to fill this gap by predicting the dynamical and thermal properties of the background solar wind from chosen magnetograms, using a coronal field reconstruction method. However, these models take a long time to run, and their performance improves with good initial guesses for the simulation's initial conditions. To address this problem, we propose using various machine learning techniques to obtain good initial guesses that can reduce MULTI-VP's computational time.
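
    A hedged sketch of the proposed setup - the input features, targets and model choice below are placeholders rather than MULTI-VP's actual interface: a regressor is trained to map magnetogram-derived quantities to a first guess of the wind solution used to warm-start the simulation.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        # Stand-in inputs: e.g. footpoint field strength, flux-tube expansion factor.
        X = rng.uniform([1.0, 1.0], [50.0, 30.0], size=(2000, 2))
        # Stand-in targets: e.g. base density and terminal wind speed of the guess.
        y = np.column_stack([1e8 / X[:, 1], 250.0 + 8.0 * X[:, 0] / np.sqrt(X[:, 1])])
        y *= 1.0 + 0.05 * rng.standard_normal(y.shape)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("R^2 on held-out maps:", model.score(X_te, y_te))
        # model.predict(new_features) would supply the warm start handed to the simulation.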

    Image enhancement techniques applied to solar feature detection

    This dissertation presents the development of automatic image enhancement techniques for solar feature detection. The new method allows for detection and tracking of the evolution of filaments in solar images. Series of H-alpha full-disk images are taken at regular time intervals to observe changes in the solar disk features. In each image, the solar chromosphere filaments are identified for further examination of their evolution. The initial preprocessing step involves local thresholding to convert grayscale images into black-and-white pictures with the chromosphere granularity enhanced. An alternative preprocessing method, based on image normalization and global thresholding, is also presented. The next step employs morphological closing operations with multi-directional linear structuring elements to extract elongated shapes in the image. After a logical union of the directional filtering results, the remaining noise is removed from the final outcome using morphological dilation and erosion with a circular structuring element. Experimental results show that the developed techniques achieve excellent results in detecting large filaments and good detection rates for small filaments. The final chapter discusses proposed directions for future research and applications to other areas of solar image processing, in particular the detection of solar flares, plages and sunspots.
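
    The morphological pipeline described above translates fairly directly into SciPy; the sketch below is only an outline of that pipeline, and the kernel sizes, window size and 0.9 threshold factor are illustrative assumptions rather than the dissertation's tuned values.

        import numpy as np
        from scipy import ndimage

        def line_kernel(length, angle_deg):
            # Binary linear structuring element of a given length and orientation.
            k = np.zeros((length, length), dtype=bool)
            c = length // 2
            t = np.deg2rad(angle_deg)
            for r in np.linspace(-c, c, 2 * length):
                k[int(round(c + r * np.sin(t))), int(round(c + r * np.cos(t)))] = True
            return k

        def disk(radius):
            # Circular structuring element for the final noise clean-up.
            yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            return xx ** 2 + yy ** 2 <= radius ** 2

        def detect_filaments(image):
            # Local thresholding: filaments are darker than their neighbourhood mean.
            local_mean = ndimage.uniform_filter(image.astype(float), size=51)
            binary = image < 0.9 * local_mean
            # Morphological closing with linear elements in several directions,
            # followed by the logical union of the directional responses.
            closed = [ndimage.binary_closing(binary, structure=line_kernel(15, a))
                      for a in (0, 45, 90, 135)]
            union = np.logical_or.reduce(closed)
            # Remove residual small-scale noise with the circular element.
            return ndimage.binary_opening(union, structure=disk(2))

        # mask = detect_filaments(halpha_full_disk_image)   # hypothetical input array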