
    A fuzzy approach to building thermal systems optimization.

    Optimization of building thermal systems is treated in the paper in the framework of fuzzy mathematical programming. This new approach allows the problem, which trades off energy saving against thermal comfort satisfaction under given constraints, to be formulated more precisely. The fuzzy optimization problem is solved analytically under some assumptions. An example illustrates the viability of the proposed approach: a solution is found that improves comfort significantly (by 38%) while being only 0.6% more expensive in energy terms. (c) IFS
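    As a hedged illustration of the framework (an assumed Bellman-Zadeh style compromise, not necessarily the paper's exact formulation), the energy objective and the comfort objective can each be mapped onto membership functions and the compromise sought as a max-min problem:

```latex
% Illustrative Bellman-Zadeh style fuzzy compromise (assumed form, not taken from the paper)
\max_{u \in U,\; \lambda \in [0,1]} \lambda
\quad \text{subject to} \quad
\mu_E\!\bigl(J_E(u)\bigr) \ge \lambda, \qquad
\mu_C\!\bigl(J_C(u)\bigr) \ge \lambda ,
```

    where u collects the controllable settings and \mu_E, \mu_C express the degrees to which the energy-saving and comfort goals are satisfied. With monotone memberships the optimum typically occurs where the two membership values coincide, which is one route to the kind of analytical solution mentioned in the abstract.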

    Identification of Evolving Rule-based Models.

    An approach to the identification of evolving fuzzy rule-based (eR) models is proposed. eR models implement a method for the noniterative update of both the rule-base structure and parameters by incremental unsupervised learning. The rule-base evolves by adding rules that are more informative than those that previously formed the model. In addition, existing rules can be replaced with new rules based on a ranking using the informative potential of the data. In this way, the rule-base structure is inherited and updated when new informative data become available, rather than being completely retrained. The adaptive nature of these evolving rule-based models, in combination with the highly transparent and compact form of fuzzy rules, makes them a promising candidate for modeling and control of complex processes, competitive with neural networks. The approach has been tested on a benchmark problem and on an air-conditioning component modeling application using data from an installation serving a real building. The results illustrate the viability and efficiency of the approach. (c) IEEE Transactions on Fuzzy Systems
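    The rule-base evolution described above can be sketched in a few lines; the following Python fragment is a minimal, simplified illustration of potential-based rule addition and replacement (the radius parameter and the non-recursive potential are assumptions, not the paper's exact update equations):

```python
# Minimal sketch of potential-based rule-base evolution (simplified assumption,
# not the exact eR learning formulas).
import numpy as np

class EvolvingRuleBase:
    def __init__(self, radius=0.3):
        self.radius = radius    # zone of influence of a rule (assumed parameter)
        self.centers = []       # focal points (antecedents) of the fuzzy rules
        self.samples = []       # kept only for clarity; the original method
                                # updates potentials recursively instead

    def _potential(self, x):
        # Cauchy-type "informative potential": high where the data are dense.
        if not self.samples:
            return 1.0
        d2 = np.mean([np.sum((x - s) ** 2) for s in self.samples])
        return 1.0 / (1.0 + d2)

    def update(self, x):
        x = np.asarray(x, dtype=float)
        p_new = self._potential(x)
        if not self.centers:
            self.centers.append(x)
        elif p_new > max(self._potential(c) for c in self.centers):
            # The new sample is more informative than every existing rule.
            dists = [np.linalg.norm(x - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] < self.radius:
                self.centers[j] = x        # replace a nearby, less informative rule
            else:
                self.centers.append(x)     # add a new rule
        self.samples.append(x)
        return len(self.centers)
```

    Feeding a data stream sample by sample, e.g. `for x in stream: rb.update(x)`, lets the rule base grow or have rules replaced without retraining from scratch.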

    Adaptive inferential sensors based on evolving fuzzy models

    A new technique for the design and use of inferential sensors in the process industry is proposed in this paper, based on the recently introduced concept of evolving fuzzy models (EFMs). They address the challenge that the modern process industry faces today, namely to develop adaptive and self-calibrating online inferential sensors that reduce maintenance costs while keeping high precision and interpretability/transparency. The proposed new methodology makes it possible for inferential sensors to recalibrate automatically, which significantly reduces the life-cycle effort of their maintenance. This is achieved by the adaptive and flexible open-structure EFM used. The novelty of this paper lies in the following: (1) the overall concept of inferential sensors with an evolving and self-developing structure learnt from data streams; (2) the new methodology for online automatic selection of the input variables that are most relevant for the prediction; (3) the technique to automatically detect a shift in the data pattern using the age of the clusters (and fuzzy rules); (4) the online standardization technique used by the learning procedure of the evolving model; and (5) the application of this innovative approach to several real-life industrial processes from the chemical industry (evolving inferential sensors, namely eSensors, were used for predicting the chemical properties of different products in The Dow Chemical Company, Freeport, TX). It should be noted, however, that the methodology and conclusions of this paper are valid for the broader area of chemical and process industries in general. The results demonstrate that well-interpretable inferential sensors with a simple structure can automatically be designed from the data stream in real time and predict various process variables of interest. The proposed approach can be used as a basis for the development of a new generation of adaptive and evolving inferential sensors that can address the challenges of the modern advanced process industry.
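    Item (3), the use of cluster age for shift detection, can be sketched as follows; the definition below (current time index minus the mean arrival time of the supporting samples) is an illustrative assumption rather than the paper's exact expression:

```python
# Hedged sketch of the "age of a cluster" idea used for shift detection.
class ClusterAgeTracker:
    def __init__(self):
        self.support = 0      # number of samples assigned to this cluster so far
        self.time_sum = 0     # sum of the arrival time indices of those samples

    def assign(self, k):
        """Record that the sample arriving at time index k joined this cluster."""
        self.support += 1
        self.time_sum += k

    def age(self, k):
        """Age at time k: k minus the mean arrival time of the cluster's support."""
        if self.support == 0:
            return float(k)
        return k - self.time_sum / self.support
```

    If the age of a cluster (and hence of its fuzzy rule) keeps growing at roughly the same rate as k, the cluster has stopped attracting new data, which is one way to flag a shift in the data pattern and trigger recalibration.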

    A Model of Autonomous System for Scientific Experiments and Spacecraft Control for Deep Space Missions

    Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, June 2017. The particularities of an autonomous control system for deep space missions are described. A new approach to autonomous control system development is proposed and analyzed in detail. Some models are analyzed and compared. The general formal model is based on the theory of communicating sequential processes (CSP). Methods for reconfiguration, verification and trace control are described. The software described is appropriate not only for spacecraft flight path control but also for autonomous control of the scientific apparatus operation and of science experiment parameters. It enables the onboard scientific apparatus to autonomously detect and respond to science events. Science algorithms, including onboard event detection, feature detection, change detection, and unusualness detection, are proposed for analyzing the science data; by detecting features of scientific interest, these algorithms are used to downlink only the significant science data. Their detections are inputs to an onboard decision-making Replanner that modifies the spacecraft observation plan to capture high-value science events. The new observation plan is the input to the Task Execution subsystem of the Autonomous Control System (ACS), which is able to adjust the plan to succeed despite run-time anomalies and uncertainties; once executed by the ACS, it controls the onboard scientific apparatus to enable autonomous, goal-directed exploration and data acquisition that maximize science return. Association for the Development of the Information Society, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Plovdiv University "Paisii Hilendarski"
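    The data flow described above can be summarised in a short sketch; the function and argument names below are illustrative assumptions and stand in for the actual flight software components:

```python
# Schematic sketch of one cycle of the described autonomous control loop.
def onboard_cycle(raw_observations, plan, detectors, replanner, executor):
    # 1. Onboard science algorithms (event, feature, change and unusualness
    #    detection) analyse the raw science data; each returns None or an event.
    detections = [detect(raw_observations) for detect in detectors]
    events = [e for e in detections if e is not None]

    # 2. Only data associated with detected science events are queued for downlink.
    downlink_queue = [e["data"] for e in events]

    # 3. The detections drive the onboard Replanner, which modifies the
    #    spacecraft observation plan to capture high-value science events.
    new_plan = replanner(plan, events)

    # 4. The Task Execution subsystem of the ACS runs the new plan, adjusting it
    #    to cope with run-time anomalies and uncertainties.
    executor(new_plan)
    return new_plan, downlink_queue
```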

    Typicality distribution function: a new density-based data analytics tool

    In this paper a new density-based, non-frequentist data analytics tool, called the typicality distribution function (TDF), is proposed. It is a further development of the recently introduced typicality- and eccentricity-based data analytics (TEDA) framework. The newly introduced TDF and its standardized form offer an effective alternative to the widely used probability distribution function (pdf) while remaining free from the restrictive assumptions the latter requires. In particular, it offers an exact solution for any amount of non-coinciding data samples (except a single point). For comparison, the well-developed and widely used traditional probability theory and related statistical learning approaches theoretically require an infinitely large amount of data samples/observations, although in practice this requirement is often ignored. Furthermore, TDF does not require the user to pre-select or assume a particular distribution (e.g. Gaussian or other), a mixture of such distributions, or the number of distributions in a mixture. In addition, it does not require the individual data items to be independent. At the same time, the link with traditional statistical approaches such as the well-known "nσ" analysis, the Chebyshev inequality, etc. leads to the interesting conclusion that the same type of analysis can be made using TDF automatically, without the restrictive prior assumptions to which these traditional approaches are tied. TDF can provide valuable information for the analysis of extreme processes and for fault detection and identification, where the amount of observations of extreme events or faults is usually disproportionally small. The newly proposed TDF offers a non-parametric, closed-form analytical (quadratic) description extracted exactly from the real data realizations, in contrast to the usual practice where such distributions are pre-assumed or approximated. For example, so-called particle filters are also a non-parametric approximation of traditional statistics; however, they suffer from computational complexity and introduce a large number of dummy data. In addition, for several types of proximity/similarity measures (such as Euclidean, Mahalanobis, cosine) TDF can be calculated recursively and thus very efficiently, which makes it suitable for real-time and online algorithms. Moreover, a very simple example illustrates that, while traditional probability theory and related statistical approaches can in some cases lead to paradoxically incorrect results and/or require hard prior assumptions, the newly proposed TDF offers a logically meaningful result and an intuitive interpretation automatically and exactly, without any prior assumptions. Finally, a few simple univariate examples are provided, the process of inference is discussed, and the future steps of the development of TDF and TEDA are outlined. Since this is a new fundamental theoretical innovation, the areas of application of TDF and TEDA span from anomaly detection, clustering, classification, prediction, control and regression to (Kalman-like) filters. Practical applications can be even wider and, therefore, it is difficult to list all of them.
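    The density-related quantities behind TDF can be illustrated with a recursive Euclidean form of eccentricity and typicality in the spirit of TEDA; this is a hedged sketch, and the standardized TDF of the paper involves further normalisation steps:

```python
# Recursive (Euclidean) eccentricity/typicality sketch in the spirit of TEDA.
import numpy as np

class RecursiveTypicality:
    def __init__(self, dim):
        self.k = 0                      # number of samples seen so far
        self.mean = np.zeros(dim)       # recursively updated mean
        self.sq_norm_mean = 0.0         # recursively updated mean of squared norms

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.k += 1
        self.mean += (x - self.mean) / self.k
        self.sq_norm_mean += (x @ x - self.sq_norm_mean) / self.k

    def eccentricity(self, x):
        # The data scatter follows from the two recursively kept quantities,
        # so no raw data need to be stored.
        x = np.asarray(x, dtype=float)
        var = self.sq_norm_mean - self.mean @ self.mean
        if self.k < 2 or var <= 0.0:
            return 1.0
        d2 = (x - self.mean) @ (x - self.mean)
        return 1.0 / self.k + d2 / (self.k * var)

    def typicality(self, x):
        return 1.0 - self.eccentricity(x)
```

    Because only the mean and the mean squared norm are kept, the update cost per sample is constant, which is what makes this kind of quantity suitable for real-time and online use.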

    Multi-objective optimisation in air-conditioning systems: comfort/discomfort definition by IF sets.

    The problem of multi-objective optimisation of air-conditioning (AC) systems is treated in the paper in the framework of intuitionistic fuzzy (IF) set theory. The problem is multi-objective in nature, with requirements for minimal costs (generally, life-cycle costs; more specifically, energy costs) and maximal occupants' comfort (minimal discomfort). Moreover, its definition by conventional means is bound to a number of restrictions and assumptions, which are often far from real-life situations. Attempts have been made to formulate and solve this problem by means of fuzzy optimisation [4]. The present paper makes a further step by exploring the innovative concept of IF sets [6] in the definition of the trickiest issue: comfort and discomfort. The new approach allows the problem, which trades off energy saving against thermal comfort satisfaction under given constraints, to be formulated more precisely. The resulting IF optimisation problem can be solved numerically or, under some assumptions, analytically [1]-[2]. An example illustrates the viability of the proposed approach: a solution is found that improves comfort significantly (by 38%) while being only 0.6% more expensive in energy terms. This illustrates the possibility of using the approach for trade-off analysis in multi-objective optimisation of AC systems.
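    For reference, the IF-set machinery assigns each option both a membership and a non-membership degree; the lines below give the standard (textbook) intuitionistic fuzzy set definition rather than the paper's specific comfort/discomfort model:

```latex
% Standard intuitionistic fuzzy (IF) set definition; the paper builds its
% comfort/discomfort memberships on top of this.
A = \{\, \langle x, \mu_A(x), \nu_A(x) \rangle \mid x \in X \,\}, \qquad
0 \le \mu_A(x) + \nu_A(x) \le 1, \qquad
\pi_A(x) = 1 - \mu_A(x) - \nu_A(x).
```

    In the AC setting, \mu_A can be read as the degree of comfort, \nu_A as the degree of discomfort, and the hesitation margin \pi_A as the occupants' indifference zone, which is what gives IF sets more expressive power here than ordinary fuzzy sets.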

    A retrospective comparative study of three data modelling techniques in anticoagulation therapy.

    Three types of data modelling technique are applied retrospectively to individual patients' anticoagulation therapy data to predict their future levels of anticoagulation. The results of the different models are compared and discussed relative to each other and to previous similar studies. The conclusions of earlier papers are reinforced here using an extensive data set, and the continuously updating neural network models are shown to predict future INR measurements best among the models presented here.

    Unsupervised Domain Adaptation within Deep Foundation Latent Spaces

    Vision transformer-based foundation models, such as ViT or Dino-V2, are aimed at solving problems with little or no finetuning of features. Using a prototypical-network setting, we analyse to what extent such foundation models can solve unsupervised domain adaptation without finetuning over the source or target domain. Through quantitative analysis, as well as qualitative interpretations of the decision making, we demonstrate that the suggested method can improve upon existing baselines, and we also showcase the limitations of such an approach that are yet to be solved.
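    A minimal sketch of the prototypical-network setting on top of frozen foundation-model features is given below; the feature arrays are assumed to come from a frozen ViT / Dino-V2 backbone, and all names are illustrative:

```python
# Hedged sketch: nearest-prototype classification of target-domain samples
# using frozen foundation-model features (no finetuning of the backbone).
import numpy as np

def class_prototypes(source_feats, source_labels):
    """Mean feature vector per class, computed on the labelled source domain."""
    return {c: source_feats[source_labels == c].mean(axis=0)
            for c in np.unique(source_labels)}

def predict_target(target_feats, protos):
    """Assign each unlabelled target sample to the class of its nearest prototype."""
    classes = list(protos)
    centers = np.stack([protos[c] for c in classes])                # (C, d)
    d2 = ((target_feats[:, None, :] - centers[None]) ** 2).sum(-1)  # (N, C)
    return np.array(classes)[d2.argmin(axis=1)]
```

    Cosine distance over L2-normalised features is a common alternative to the squared Euclidean distance used in this sketch.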

    A new online clustering approach for data in arbitrary shaped clusters

    In this paper we demonstrate a new density-based clustering technique, CODAS, for online clustering of streaming data into arbitrarily shaped clusters. CODAS is a two-stage process using a simple local density to initiate micro-clusters, which are then combined into clusters. Memory efficiency is gained by not storing or re-using any data. Computational efficiency is gained by using hyper-spherical micro-clusters and a micro-cluster joining technique that is dimensionally independent, for speed. The micro-clusters divide the data space into sub-spaces with a core region and a non-core region; core regions which intersect define the clusters. A threshold value is used to identify outlier micro-clusters separately from small clusters of unusual data. The cluster information is fully maintained online. In this paper we compare CODAS with ELM, DEC, Chameleon, DBScan and Denstream and demonstrate that CODAS achieves comparable results, but in a fully online and dimensionally scalable manner.
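    The core-region idea can be sketched as follows; the parameter values and the choice of half the radius for the core region are illustrative assumptions, and the full CODAS procedure additionally maintains the cluster graph and outlier threshold online:

```python
# Simplified sketch of hyper-spherical micro-clusters with core regions.
# Micro-clusters whose core regions intersect belong to the same cluster.
import numpy as np

class MicroCluster:
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=float)
        self.radius = radius
        self.count = 1

    def core_intersects(self, other):
        # Core region assumed here to be a sphere of half the micro-cluster radius.
        d = np.linalg.norm(self.center - other.center)
        return d < 0.5 * (self.radius + other.radius)

def assign(sample, micro_clusters, radius=0.2):
    """Add a sample to the first covering micro-cluster or start a new one."""
    sample = np.asarray(sample, dtype=float)
    for mc in micro_clusters:
        if np.linalg.norm(sample - mc.center) < mc.radius:
            mc.count += 1
            mc.center += (sample - mc.center) / mc.count   # no raw data stored
            return
    micro_clusters.append(MicroCluster(sample, radius))
```

    Clusters are then read off as connected components of the graph whose edges link micro-clusters with intersecting core regions, and micro-clusters whose count stays below a threshold are treated as outliers rather than as small clusters of unusual data.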

    Towards Explainable Deep Neural Networks (xDNN)

    In this paper, we propose an elegant solution that directly addresses the bottlenecks of traditional deep learning approaches and offers a clearly explainable internal architecture that can outperform the existing methods, requires very little computational resources (no need for GPUs) and needs only short training times (of the order of seconds). The proposed approach, xDNN, uses prototypes. Prototypes are actual training data samples (images) that are local peaks of the empirical data distribution, called typicality, as well as of the data density. This generative model is identified in closed form and equates to the pdf, but is derived automatically and entirely from the training data, with no user- or problem-specific thresholds, parameters or intervention. The proposed xDNN offers a new deep learning architecture that combines reasoning and learning in synergy. It is non-iterative and non-parametric, which explains its efficiency in terms of time and computational resources. From the user perspective, the proposed approach is clearly understandable. We tested it on well-known benchmark data sets such as iRoads and Caltech-256. xDNN outperforms the other methods, including deep learning, in terms of accuracy and time to train, and offers a clearly explainable classifier. In fact, the result on the very hard Caltech-256 problem (which has 257 classes) represents a world record.
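    A hedged sketch of the prototype-based decision stage is given below; choosing prototypes as local peaks of the density/typicality is simplified here to picking, per class, the training samples with the highest Cauchy-type density, and all names and the per-class count are illustrative assumptions:

```python
# Simplified sketch of prototype selection and classification in the spirit of xDNN.
import numpy as np

def density(feats):
    """Cauchy-type empirical density: high for samples near the bulk of the class."""
    mu = feats.mean(axis=0)
    var = feats.var(axis=0).sum() + 1e-12
    d2 = ((feats - mu) ** 2).sum(axis=1)
    return 1.0 / (1.0 + d2 / var)

def select_prototypes(feats, labels, per_class=5):
    """Pick the densest training samples of each class as its prototypes."""
    protos, proto_labels = [], []
    for c in np.unique(labels):
        f = feats[labels == c]
        top = np.argsort(density(f))[-per_class:]
        protos.append(f[top])
        proto_labels.extend([c] * len(top))
    return np.vstack(protos), np.array(proto_labels)

def classify(x, protos, proto_labels):
    """Label a sample with the class of its nearest prototype."""
    d2 = ((protos - np.asarray(x, dtype=float)) ** 2).sum(axis=1)
    return proto_labels[d2.argmin()]
```

    Because both prototype selection and classification are single passes over the data, there is nothing to iterate, which is consistent with the non-iterative, CPU-only character claimed in the abstract.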