
    Application of Rough Classification of Multi-objective Extension Group Decision-making under Uncertainty

    On account of the problem of incomplete information systems in the classification of extension group decision-making, this paper studies attribute reduction with a decision-making function based on group interaction and the assembly of individual preferences, with the goal of achieving rough classification of multi-objective extension group decision-making under uncertainty. The paper then describes the idea and operating process of the multi-objective extension classification model in order to provide decision-makers with a more practical, easy-to-operate and objective classification. Finally, an example concerning a practical problem is given to demonstrate the classification process. By combining extension association and rough reduction, this method not only takes advantage of dynamic classification in extension decision-making but also eliminates redundant attributes, which improves the accuracy and reliability of the classification results in multi-objective extension group decision-making. Keywords: extension group decision-making; matter-element analysis; extension association; rough set; attribute reduction
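The attribute reduction idea from rough set theory that this abstract relies on can be sketched briefly. The following is a minimal illustration (not the paper's algorithm): a subset of attributes is a reduct if it classifies the decision as well as the full attribute set does, and no smaller subset of it suffices. All names and the toy decision table are hypothetical.

```python
from itertools import combinations

def partition(objects, attrs):
    """Group objects into indiscernibility blocks by their values on `attrs`."""
    blocks = {}
    for name, row in objects.items():
        key = tuple(row[a] for a in attrs)
        blocks.setdefault(key, set()).add(name)
    return list(blocks.values())

def is_consistent(objects, attrs, decision):
    """True if every indiscernibility block under `attrs` maps to one decision value."""
    for block in partition(objects, attrs):
        if len({decision[o] for o in block}) > 1:
            return False
    return True

def reducts(objects, attrs, decision):
    """All minimal attribute subsets that preserve the decision classification."""
    if not is_consistent(objects, attrs, decision):
        return []  # inconsistent table: no subset can be fully consistent
    found = []
    for r in range(1, len(attrs) + 1):
        for subset in combinations(attrs, r):
            if is_consistent(objects, subset, decision):
                # keep only minimal subsets (no known reduct contained in it)
                if not any(set(f) <= set(subset) for f in found):
                    found.append(subset)
    return found
```

For a table where attribute `b` alone determines the decision, `reducts` returns `[("b",)]`, i.e. attribute `a` is redundant and can be dropped, which is exactly the elimination of redundant attributes the abstract describes.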

    Predictive Maintenance on the Machining Process and Machine Tool

    This paper presents the process required to implement data-driven Predictive Maintenance (PdM), not only in machine decision-making but also in data acquisition and processing. A short review of the different approaches and techniques in maintenance is given. The main contribution of this paper is a solution for the predictive maintenance problem in a real machining process. Several steps are needed to reach the solution, and they are carefully explained. The obtained results show that the Preventive Maintenance (PM) carried out in a real machining process could be changed into a PdM approach. A decision-making application was developed to provide a visual analysis of the Remaining Useful Life (RUL) of the machining tool. This work is a proof of concept of the methodology in one process, but it is replicable for most serial-production machining processes.
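The core of a tool-RUL estimate like the one this abstract describes is often a degradation trend extrapolated to a failure threshold. A minimal sketch, assuming a linear wear trend and a known threshold (the paper's actual model is not specified here):

```python
def remaining_useful_life(wear_history, threshold):
    """Estimate RUL in measurement cycles: fit a straight line to the wear
    measurements by least squares and extrapolate to the failure threshold."""
    n = len(wear_history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(wear_history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, wear_history)) \
            / sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return float("inf")  # no measurable degradation trend yet
    return max(0.0, (threshold - wear_history[-1]) / slope)
```

With wear readings `[0, 1, 2, 3, 4]` and a threshold of 10, the fitted slope is 1 wear unit per cycle, giving an estimated 6 cycles of remaining life; a dashboard can then colour-code this value for the operator.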

    A novel Big Data analytics and intelligent technique to predict driver's intent

    The modern age offers great potential for automatically predicting the driver's intent through the increasing miniaturization of computing technologies, rapid advancements in communication technologies and the continuous connectivity of heterogeneous smart objects. Inside the cabin and engine of modern cars, dedicated computer systems need to possess the ability to exploit the wealth of information generated by heterogeneous data sources with different contextual and conceptual representations. Processing and utilizing this diverse and voluminous data involves many challenges concerning the design of the computational technique used to perform the task. In this paper, we investigate the various data sources available in the car and the surrounding environment, which can be utilized as inputs to predict the driver's intent and behavior. As part of investigating these potential data sources, we conducted experiments on e-calendars for a large number of employees and reviewed a number of available geo-referencing systems. Through the results of a statistical analysis and by computing location recognition accuracy results, we explored in detail the potential utilization of calendar location data to detect the driver's intentions. In order to exploit the numerous diverse data inputs available in modern vehicles, we investigate the suitability of different Computational Intelligence (CI) techniques and propose a novel fuzzy computational modelling methodology. Finally, we outline the impact of applying advanced CI and Big Data analytics techniques in modern vehicles on the driver and society in general, and discuss ethical and legal issues arising from the deployment of intelligent self-learning cars.
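To make the fuzzy-modelling idea concrete, here is a deliberately tiny sketch of how a single fuzzy rule could fuse two of the signals the abstract mentions (a calendar event and the vehicle's position). The rule, the membership shapes and the parameter ranges are all illustrative assumptions, not the paper's methodology.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def intent_confidence(minutes_to_event, km_to_event):
    """One fuzzy rule: IF the calendar event is soon AND its venue is near,
    THEN the intent 'driving to the calendar event' is likely.
    Fuzzy AND is taken as min, the usual Mamdani choice."""
    soon = tri(minutes_to_event, -1, 0, 60)   # strongest when the event is now
    near = tri(km_to_event, -1, 0, 20)        # strongest at the venue itself
    return min(soon, near)
```

A real system would aggregate many such rules over many sensor inputs and defuzzify the result; this fragment only shows why fuzzy logic suits noisy, heterogeneous in-car data: each input degrades gracefully rather than flipping a hard threshold.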

    A Review of Rule Learning Based Intrusion Detection Systems and Their Prospects in Smart Grids


    Should patients with abnormal liver function tests in primary care be tested for chronic viral hepatitis: cost minimisation analysis based on a comprehensively tested cohort

    Background Liver function tests (LFTs) are ordered in large numbers in primary care, and the Birmingham and Lambeth Liver Evaluation Testing Strategies (BALLETS) study was set up to assess their usefulness in patients with no pre-existing or self-evident liver disease. All patients were tested for chronic viral hepatitis, thereby providing an opportunity to compare various strategies for detection of this serious treatable disease. Methods This study uses data from the BALLETS cohort to compare various testing strategies for viral hepatitis in patients who had received an abnormal LFT result. The aim was to inform a strategy for identification of patients with chronic viral hepatitis. We used a cost-minimisation analysis to define a base case and then calculated the incremental cost per case detected to inform a strategy that could guide testing for chronic viral hepatitis. Results Of the 1,236 study patients with an abnormal LFT, 13 had chronic viral hepatitis (nine hepatitis B and four hepatitis C). The strategy advocated by the current guidelines (repeating the LFT with a view to testing for specific disease if it remained abnormal) was less efficient (more expensive per case detected) than a simple policy of testing all patients for viral hepatitis without repeating LFTs. A more selective strategy of testing only those patients born in countries where viral hepatitis is prevalent provided high efficiency with little loss of sensitivity. A notably high alanine aminotransferase (ALT) level (greater than twice the upper limit of normal) on the initial ALT test had high predictive value but was insensitive, missing half the cases of viral infection. Conclusions Based on this analysis and on widely accepted clinical principles, a "fast and frugal" heuristic was produced to guide general practitioners with respect to diagnosing cases of viral hepatitis in asymptomatic patients with abnormal LFTs.
It recommends testing all patients where a clear clinical indication of infection is present (e.g. evidence of intravenous drug use), followed by testing all patients who originated from countries where viral hepatitis is prevalent, and finally testing those who have a notably raised ALT level (more than twice the upper limit of normal). Patients not picked up by this efficient algorithm had a risk of chronic viral hepatitis lower than that of the general population.
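The "fast and frugal" heuristic described above is a sequential decision tree that stops at the first cue that fires. It can be sketched directly from the three cues in the conclusion (field names are illustrative):

```python
def should_test_for_viral_hepatitis(patient):
    """Sequential screen from the BALLETS analysis: check cues in order of
    priority and recommend testing as soon as one fires."""
    if patient["clinical_indication"]:                 # e.g. IV drug use
        return True
    if patient["born_in_high_prevalence_country"]:     # country of origin cue
        return True
    if patient["alt"] > 2 * patient["alt_upper_limit_normal"]:  # notably raised ALT
        return True
    return False                                       # below-average residual risk
```

The ordering matters: each cue is cheaper or more predictive than exhaustive testing, and a patient who passes all three falls into the low-residual-risk group the abstract describes.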

    A Fault Diagnosis Method for Power Transmission Networks Based on Spiking Neural P Systems with Self-Updating Rules considering Biological Apoptosis Mechanism

    Power transmission networks play an important role in smart grids. Fast and accurate faulty-equipment identification is critical for fault diagnosis of power systems; however, it is rather difficult due to uncertain and incomplete fault alarm messages in fault events. This paper proposes a new fault diagnosis method for transmission networks in the framework of membrane computing. We first propose a class of spiking neural P systems with self-updating rules (srSNPS) considering the biological apoptosis mechanism, together with its self-updating matrix reasoning algorithm. The srSNPS, for the first time, effectively unifies the attribute reduction ability of rough sets and the apoptosis mechanism of biological neurons in a P system, where an apoptosis algorithm for condition neurons is devised to delete redundant information in fault messages. This simplifies the srSNPS model and allows us to deal with the uncertainty and incompleteness of fault information in an objective way without using historical statistics or expertise. Then, the srSNPS-based fault diagnosis method is proposed. It is composed of transmission network partition, SNPS model establishment, pulse value correction and computing, and protection device behavior evaluation, where the first two components can be completed before failures occur to save diagnosis time. Finally, case studies based on the IEEE 14- and IEEE 118-bus systems verify the effectiveness and superiority of the proposed method.
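For readers unfamiliar with reasoning in spiking neural P systems, one elementary step can be sketched as follows. This is a generic fuzzy-reasoning step (AND-type rule neurons fire with the minimum of their input pulse values; output propositions take the maximum over incoming rules), not the paper's self-updating matrix algorithm, and all names are illustrative.

```python
def fuzzy_reasoning_step(theta, rule_inputs, rule_outputs):
    """One reasoning step over pulse values.

    theta        : dict mapping proposition neuron -> pulse value in [0, 1]
    rule_inputs  : list of input-proposition lists, one per rule neuron
    rule_outputs : list of output-proposition lists, one per rule neuron
    Returns the pulse values of the output propositions."""
    # AND-type rule neuron: fires with the minimum of its inputs
    rule_fire = [min(theta[p] for p in inputs) for inputs in rule_inputs]
    out = {}
    for fire, targets in zip(rule_fire, rule_outputs):
        for t in targets:
            out[t] = max(out.get(t, 0.0), fire)  # OR across rules = max
    return out
```

In a fault-diagnosis setting, the input propositions would be (possibly uncertain) alarm messages from protective relays and breakers, and the output proposition the confidence that a given piece of equipment is faulty.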

    Optimal Number and Location of Sensors for Structural Damage Detection using the Theory of Geometrical Viewpoint and Parameter Subset Selection Method

    The recorded responses at predefined sensor placements are used as input to solve an inverse structural damage detection problem. The error that noise introduces into the recorded sensor responses is a significant issue in damage detection methods; therefore, finding an optimal number and location of sensors is the goal for achieving the lowest error rate in structural damage detection. To address this problem, an algorithm (GVPSS) based on a Geometrical Viewpoint (GV) of optimal sensor placement and the Parameter Subset Selection (PSS) method is proposed. The goal of the GVPSS algorithm is to minimize the effect of noise on the damage detection problem, so a fitness function based on the damage detection error is minimized by GVPSS. In this method, the degrees of freedom (DOFs) are ranked for sensor placement using a fitness function based on GV theory. Then, the optimal number and location of sensors are found among these ranked degrees of freedom using the objective function. The efficiency of the proposed method is studied on a 52-bar dome structure under static and dynamic loading. In the examples, damage is detected in two states: 1) using responses recorded at all DOFs, and 2) using responses recorded at the optimal number and location of sensors obtained by GVPSS. The results showed that the damage detection error in state 2 is approximately equal to the error in state 1. Therefore, GVPSS performs well in finding the optimal number and location of sensors for structural damage detection.
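The selection step described above, picking sensor locations that minimize a damage-detection fitness value, can be illustrated with a simple greedy strategy. This is a generic sketch, not the GVPSS algorithm itself: it assumes the fitness function is given as a black box over candidate DOF sets.

```python
def greedy_sensor_placement(fitness, candidate_dofs, n_sensors):
    """Greedy subset selection: at each step, add the candidate DOF whose
    inclusion most reduces the fitness (damage-detection error) value."""
    chosen = []
    remaining = list(candidate_dofs)
    for _ in range(n_sensors):
        best = min(remaining, key=lambda d: fitness(chosen + [d]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Greedy selection evaluates only `O(n_sensors * len(candidate_dofs))` fitness calls instead of all subsets, which is why ranking-then-selecting schemes of this kind are common in sensor placement.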

    Improving Building Energy Efficiency through Measurement of Building Physics Properties Using Dynamic Heating Tests

    © 2019 the author. Licensee MDPI, Basel, Switzerland. Buildings contribute to nearly 30% of global carbon dioxide emissions, making a significant impact on climate change. Despite advanced design methods, such as those based on dynamic simulation tools, a significant discrepancy exists between designed and actual performance. This so-called performance gap occurs as a result of many factors, including the discrepancies between the theoretical properties of building materials and the properties of the same materials in buildings in use, reflected in the physics properties of the entire building. There are several different ways in which building physics properties and the underlying properties of materials can be established: a co-heating test, which measures the overall heat loss coefficient of the building; a dynamic heating test, which, in addition to the overall heat loss coefficient, also measures the effective thermal capacitance and the time constant of the building; and a simulation of the dynamic heating test with a calibrated simulation model, which establishes the same three properties in a non-disruptive way in comparison with the actual physical tests. This article introduces a method of measuring building physics properties through actual and simulated dynamic heating tests. It gives insights into the properties of building materials in use and documents significant discrepancies between theoretical and measured properties. It introduces a quality assurance method for building construction and retrofit projects, and it explains the application of the results to energy efficiency improvements in building design and control. It calls for a re-examination of material properties data and for increased safety margins in order to make significant improvements in building energy efficiency. Peer reviewed. Final published version.
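The three properties the abstract names (heat loss coefficient UA, effective thermal capacitance C, time constant tau) are linked in the standard first-order building model by tau = C / UA, so measuring the free-cooling decay after a heating pulse yields tau. A minimal sketch of that extraction, assuming a single-node exponential decay toward a constant outside temperature (a simplification of any real test protocol):

```python
import math

def time_constant_from_cooldown(times_h, temps, t_outside):
    """Fit T(t) = T_out + (T0 - T_out) * exp(-t / tau) by linear regression
    on log(T - T_out); returns tau in the units of `times_h` (hours here).
    With a known heat loss coefficient UA, the effective thermal
    capacitance follows as C = tau * UA."""
    ys = [math.log(T - t_outside) for T in temps]  # linearize the decay
    n = len(times_h)
    x_mean = sum(times_h) / n
    y_mean = sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(times_h, ys)) \
            / sum((x - x_mean) ** 2 for x in times_h)
    return -1.0 / slope  # decay slope is -1/tau
```

A building whose indoor temperature falls from 20 °C toward 0 °C with tau = 50 h would, for example, still be at about 20·e^(-0.2) ≈ 16.4 °C after 10 h; recovering tau = 50 from such readings is exactly what the regression does.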

    Review of trends and targets of complex systems for power system optimization

    Optimization systems (OSs) allow operators of electrical power systems (PSs) to operate PSs optimally and to create optimal PS development plans. The inclusion of OSs in the PS is a major current trend, and the demand for PS optimization tools and PS-OS experts is growing. The aim of this review is to define the current dynamics and trends in PS optimization research and to present several papers that clearly and comprehensively describe PS OSs with characteristics corresponding to the identified main trends in this research area. The current dynamics and trends of the research area were defined on the basis of an analysis of a database of 255 PS-OS-presenting papers published from December 2015 to July 2019. Eleven main characteristics of current PS OSs were identified. The results of the statistical analyses give four characteristics of PS OSs that are currently the most frequently presented in research papers: OSs for minimizing the price of electricity or reducing PS operation costs, OSs for optimizing the operation of renewable energy sources, OSs for regulating power consumption during the optimization process, and OSs for regulating energy storage system operation during the optimization process. Finally, the individual identified characteristics of current PS OSs are briefly described. In the analysis, all PS OSs presented in the observed time period were analyzed regardless of the part of the PS whose operation was optimized by the PS OS, the voltage level of the optimized PS part, or the optimization goal of the PS OS.
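The most frequent OS characteristic the review identifies, minimizing operation cost, has a textbook baseline: merit-order dispatch, where demand is filled from the cheapest generating units first. A minimal sketch (unit data and names are illustrative, and real PS OSs add network, ramping and storage constraints on top of this):

```python
def merit_order_dispatch(units, demand_mw):
    """Cost-minimising dispatch ignoring network constraints:
    `units` is a list of (name, capacity_mw, cost_per_mwh) tuples."""
    plan, remaining = {}, demand_mw
    for name, capacity, cost in sorted(units, key=lambda u: u[2]):
        take = min(capacity, remaining)   # use the cheapest unit as fully as possible
        if take > 0:
            plan[name] = take
            remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return plan
```

Note how zero-marginal-cost renewables are dispatched first under this rule, which is why the review's cost-minimization and renewables-operation trends so often appear in the same OS.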