
    Modeling and Recognition of Smart Grid Faults by a Combined Approach of Dissimilarity Learning and One-Class Classification

    Detecting faults in electrical power grids is of paramount importance, both from the electricity operator's and the consumer's viewpoints. Modern electric power grids (smart grids) are equipped with smart sensors that gather real-time information on the physical status of every component of the infrastructure (e.g., cables and their insulation, transformers, breakers and so on). In real-world smart grid systems, additional information related to the operational status of the grid itself, such as meteorological data, is usually collected as well. Designing a suitable recognition (discrimination) model of faults in a real-world smart grid system is hence a challenging task, first because of the heterogeneity of the information that determines a typical fault condition, and second because, in practice, only the conditions of observed faults are usually meaningful for synthesizing a recognition model. A suitable recognition model should therefore be synthesized from the observed fault conditions only. In this paper, we deal with the problem of modeling and recognizing faults in a real-world smart grid system, the one supplying the entire city of Rome, Italy. Fault recognition is addressed by a combined approach of customized multiple dissimilarity measures and one-class classification techniques. We provide an in-depth study of the available data and of the models synthesized by the proposed one-class classifier, and we offer a comprehensive analysis of the fault recognition results obtained by exploiting a fuzzy-set-based reliability decision rule.
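
    The combination of a customized dissimilarity measure with one-class classification can be illustrated with a toy sketch. The weighted dissimilarity, the nearest-neighbor decision rule and the threshold quantile below are illustrative assumptions, not the paper's actual model:

```python
# Minimal sketch (not the paper's exact method): one-class recognition of
# faults from heterogeneous features via a weighted dissimilarity measure.
# Feature weights and the threshold quantile are illustrative assumptions.
import numpy as np

def weighted_dissimilarity(a, b, w):
    """Combine per-feature squared differences with weights w."""
    return np.sqrt(np.sum(w * (a - b) ** 2, axis=-1))

def fit_one_class(faults, w, quantile=0.95):
    """Model = the training faults plus a decision threshold on the
    nearest-neighbor dissimilarity, estimated on the faults themselves."""
    n = len(faults)
    d = np.array([[weighted_dissimilarity(faults[i], faults[j], w)
                   for j in range(n)] for i in range(n)])
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)                   # leave-one-out NN distances
    return faults, np.quantile(nn, quantile)

def is_fault(x, model, w):
    train, threshold = model
    d = weighted_dissimilarity(train, x, w).min()
    return d <= threshold                # close to known faults -> fault

# Toy usage: 3 heterogeneous features (e.g., electrical, weather, load).
rng = np.random.default_rng(0)
faults = rng.normal(0.0, 1.0, size=(50, 3))
w = np.array([0.5, 0.3, 0.2])            # hypothetical learned weights
model = fit_one_class(faults, w)
print(is_fault(np.zeros(3), model, w), is_fault(np.full(3, 6.0), model, w))
```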

    A Novel Fuzzy c -Means Clustering Algorithm Using Adaptive Norm

    The fuzzy c-means (FCM) clustering algorithm is an unsupervised learning method that has been widely applied to cluster unlabeled data automatically rather than manually, but it is sensitive to noisy observations due to its inappropriate treatment of noise in the data. In this paper, a novel method that handles noise intelligently on top of the existing FCM approach, called adaptive-FCM, and its extended version combined with relative entropy (adaptive-REFCM) are proposed. Adaptive-FCM, relying on an inventive integration of the adaptive norm, benefits from a robust overall structure. Adaptive-REFCM further integrates the properties of relative entropy and normalized distance to preserve the global details of the dataset. Several experiments are carried out…
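
    For reference, a minimal NumPy implementation of the standard FCM baseline that adaptive-FCM builds on is sketched below; the adaptive norm and the relative-entropy extension of adaptive-REFCM are not reproduced:

```python
# Standard fuzzy c-means baseline (the method the paper extends).
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # random fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted centroids
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))        # membership update rule
        U /= U.sum(axis=1, keepdims=True)         # rows sum to one
    return U, V

# Toy usage on two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
U, V = fcm(X, c=2)
print(V)                                          # recovered centroids
```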

    An Experimental Study on Microarray Expression Data from Plants under Salt Stress by using Clustering Methods

    Current genome-wide advances in gene chip technology provide omics (genomics, proteomics and transcriptomics) research with an opportunity to analyze the expression levels of thousands of genes across multiple experiments. In this regard, many machine learning approaches have been proposed to deal with this deluge of information. Clustering methods are one such approach: they group data (gene profiles) into homogeneous clusters using distance measurements. Various clustering techniques are applied, but there is no consensus on the best one. In this context, seven clustering algorithms were compared and tested against the gene expression datasets of three model plants under salt stress. These techniques are evaluated by internal and relative validity measures. The AGNES algorithm turns out to be the best with respect to internal validity measures for the three plant datasets, while K-Means shows the best trend on the relative validity measures for these datasets.
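
    A toy version of such a comparison, using scikit-learn's agglomerative clustering (an AGNES-style method) and K-Means scored by the silhouette index as an internal validity measure, might look as follows; the synthetic data merely stands in for the actual plant expression datasets:

```python
# Illustrative comparison of two clustering algorithms on toy data
# using an internal validity index (silhouette).
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Toy "gene profiles": 3 groups of 40 profiles over 10 conditions.
X = np.vstack([rng.normal(i * 4.0, 1.0, (40, 10)) for i in range(3)])

for name, model in [
    ("AGNES (agglomerative)", AgglomerativeClustering(n_clusters=3)),
    ("K-Means", KMeans(n_clusters=3, n_init=10, random_state=0)),
]:
    labels = model.fit_predict(X)
    print(name, "silhouette:", round(silhouette_score(X, labels), 3))
```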

    SPOCC: Scalable POssibilistic Classifier Combination -- toward robust aggregation of classifiers

    We investigate a problem in which each member of a group of learners is trained separately to solve the same classification task. Each learner has access to its own training dataset (possibly overlapping with those of other learners), while each trained classifier can be evaluated on a validation dataset. We propose a new approach to aggregating the learner predictions in the possibility theory framework. For each classifier prediction, we build a possibility distribution assessing how likely it is that the prediction is correct, using frequentist probabilities estimated on the validation set. The possibility distributions are aggregated using an adaptive t-norm that can accommodate dependency and poor accuracy of the classifier predictions. We prove that the proposed approach possesses a number of desirable classifier combination robustness properties.
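
    A rough sketch of the general recipe follows, assuming a product t-norm in place of the paper's adaptive t-norm and toy validation confusion matrices:

```python
# Sketch of possibilistic classifier combination: per-classifier
# possibility distributions are built from validation confusion counts
# and fused with a t-norm (product here; the paper's adaptive t-norm
# is not reproduced).
import numpy as np

def possibility_from_confusion(conf, pred):
    """pi(y) proportional to P(true = y | predicted = pred), rescaled
    so that the largest value equals 1 (possibility normalization)."""
    col = conf[:, pred].astype(float) + 1e-9  # frequentist estimate
    p = col / col.sum()
    return p / p.max()

def fuse(confusions, preds, t_norm=np.multiply):
    """Conjunctively combine the classifiers' possibility distributions."""
    pis = [possibility_from_confusion(c, p) for c, p in zip(confusions, preds)]
    fused = pis[0]
    for pi in pis[1:]:
        fused = t_norm(fused, pi)
    return int(np.argmax(fused))              # fused class decision

# Toy: two classifiers, three classes; rows = true class, cols = predicted.
conf1 = np.array([[30, 5, 5], [4, 28, 8], [2, 6, 32]])
conf2 = np.array([[25, 10, 5], [6, 30, 4], [5, 5, 30]])
print(fuse([conf1, conf2], preds=[0, 1]))
```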

    Fuzzy adaptive tracking control within the full envelope for an unmanned aerial vehicle

    Motivated by the autopilot of an unmanned aerial vehicle (UAV) with a wide flight envelope span that experiences large parametric variations in the presence of uncertainties, a fuzzy adaptive tracking controller (FATC) is proposed. The controller consists of a fuzzy baseline controller and an adaptive increment; the main highlight is that both the fuzzy baseline controller and the adaptation laws are based on the fuzzy multiple Lyapunov function approach, which helps to reduce conservatism over the large envelope and simultaneously guarantees satisfactory tracking performance with strong robustness within the whole envelope. The constraint condition of the fuzzy baseline controller is provided in the form of a linear matrix inequality (LMI), and it specifies the satisfactory tracking performance in the absence of uncertainties. The adaptive increment ensures uniformly ultimately bounded (UUB) prediction errors so as to recover satisfactory responses in the presence of uncertainties. Simulation results show that the proposed controller achieves high-accuracy tracking of desired airspeed and altitude commands with strong robustness to uncertainties throughout the entire flight envelope.
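
    The baseline-plus-increment structure can be caricatured as fuzzy gain scheduling with a simple adaptation law; the memberships, gains and adaptation rate below are made-up placeholders, and none of the Lyapunov/LMI machinery is reproduced:

```python
# Conceptual sketch only: local state-feedback gains blended by fuzzy
# memberships over a scheduling variable (e.g., airspeed), plus an
# adaptive increment driven by the tracking error.
import numpy as np

def memberships(v, v_lo=50.0, v_hi=150.0):
    """Triangular memberships for hypothetical 'slow'/'fast' regimes."""
    mu_fast = np.clip((v - v_lo) / (v_hi - v_lo), 0.0, 1.0)
    return np.array([1.0 - mu_fast, mu_fast])

K_local = np.array([[1.2, 0.4], [0.8, 0.6]])   # hypothetical per-regime gains

def control(x, v, theta_hat):
    mu = memberships(v)
    K = mu @ K_local                            # fuzzy-blended baseline gain
    return -(K + theta_hat) @ x                 # baseline + adaptive increment

def adapt(theta_hat, x, e, gamma=0.01):
    """Gradient-like adaptation law driven by tracking error e (sketch)."""
    return theta_hat + gamma * e * x

# One step of toy usage.
x = np.array([0.3, -0.1]); theta = np.zeros(2)
u = control(x, v=90.0, theta_hat=theta)
theta = adapt(theta, x, e=0.05)
print(u, theta)
```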

    Applications of artificial intelligence in powerline communications in terms of noise detection and reduction : a review

    The technology that uses the power line as a medium for transferring information, known as powerline communication (PLC), has been in existence for over a hundred years. It is attractive because it requires no new installation: it uses the existing electrical power infrastructure to transmit data. However, transmission of data signals through a power line channel usually faces challenges that include impulsive noise, frequency selectivity, high channel attenuation, and low line impedance. The impulsive noise exhibits a power spectral density in the range of 10-15 dB above the background noise, which can severely degrade a communication system. For the PLC system to perform well, these noise components must be detected and suppressed. This paper reviews various techniques used for detecting and mitigating impulsive noise in PLC and suggests the application of machine learning algorithms to detect and remove impulsive noise in power line communication systems.
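
    As a point of reference for what "detect and suppress" can mean in the simplest case, here is a naive amplitude-threshold detector with blanking; the MAD-based noise-scale estimate and the threshold constant are illustrative choices, far simpler than the techniques surveyed:

```python
# Naive impulsive-noise detector: flag samples whose amplitude exceeds a
# robust threshold estimated from the background noise, then blank them.
import numpy as np

def detect_impulsive(x, k=6.0):
    mad = np.median(np.abs(x - np.median(x)))   # robust noise scale
    sigma = 1.4826 * mad                        # MAD -> std for Gaussian bg
    return np.abs(x) > k * sigma                # impulsive-sample mask

def blank(x, mask):
    y = x.copy()
    y[mask] = 0.0                               # suppress detected impulses
    return y

# Toy signal: Gaussian background plus a few strong bursts.
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 1000)
idx = rng.choice(1000, 10, replace=False)
x[idx] += rng.normal(0, 20, 10)
mask = detect_impulsive(x)
print(mask.sum(), "samples flagged")
clean = blank(x, mask)
```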

    DeepFT: Fault-tolerant edge computing using a self-supervised deep surrogate model

    The emergence of latency-critical AI applications has been supported by the evolution of the edge computing paradigm. However, edge solutions are typically resource-constrained, posing reliability challenges due to heightened contention for compute capacities and faulty application behavior under overload conditions. Although the large amount of generated log data can be mined for fault prediction, labeling this data for training is a manual process and thus a limiting factor for automation. Many companies therefore resort to unsupervised fault-tolerance models, yet such failure models can lose accuracy when they need to adapt to non-stationary workloads and diverse host characteristics. We thus propose a novel modeling approach, DeepFT, that proactively avoids system overloads and their adverse effects by optimizing task scheduling decisions. DeepFT uses a deep surrogate model to accurately predict and diagnose faults in the system, and co-simulation-based self-supervised learning to dynamically adapt the model in volatile settings. Experimentation on an edge cluster shows that DeepFT outperforms state-of-the-art methods in fault-detection and QoS metrics. Specifically, DeepFT gives the highest F1 scores for fault detection, reducing service deadline violations by up to 37% while also improving response time by up to 9%.
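
    The idea of steering scheduling decisions with a learned fault predictor can be sketched with a stand-in surrogate; the logistic risk model and its weights below are invented placeholders for DeepFT's deep surrogate, and the co-simulation and self-supervised components are omitted:

```python
# Toy sketch of surrogate-guided scheduling: a (hypothetical) learned
# surrogate scores each host's fault risk for a candidate placement and
# the scheduler picks the least risky host.
import numpy as np

def surrogate_fault_risk(cpu, mem, w=np.array([3.0, 2.0]), b=-4.0):
    """Logistic risk of overload given utilizations in [0, 1]
    (a stand-in for the learned deep surrogate)."""
    z = w[0] * cpu + w[1] * mem + b
    return 1.0 / (1.0 + np.exp(-z))

def schedule(task_cpu, task_mem, hosts):
    """Place the task on the host with the lowest predicted fault risk."""
    risks = [surrogate_fault_risk(c + task_cpu, m + task_mem)
             for c, m in hosts]
    return int(np.argmin(risks)), risks

hosts = [(0.7, 0.6), (0.3, 0.4), (0.9, 0.8)]   # current (cpu, mem) loads
best, risks = schedule(0.2, 0.1, hosts)
print("placement:", best, "risks:", np.round(risks, 3))
```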