
    Data Mining in Smart Grids

    Effective smart grid operation requires rapid decisions in a data-rich but information-limited environment. In this context, streaming grid sensor data alone cannot give system operators the information they need to act within the time frames required to minimize the impact of disturbances. Even where fast models can convert the data into information, the smart grid operator faces the challenge of lacking a full understanding of the information's context, so the information cannot be used with a high degree of confidence. To address this issue, data mining has been recognized as the most promising enabling technology for improving decision-making processes, providing the right information at the right moment to the right decision-maker. This Special Issue focuses on emerging methodologies for data mining in smart grids, addressing relevant topics ranging from methods for uncertainty management to advanced dispatching. It covers not only methodological breakthroughs and implementation roadmaps, but also the much-needed sharing of best practices. Topics include, but are not limited to, the following:
    - Fuzziness in smart grid computing
    - Emerging techniques for renewable energy forecasting
    - Robust and proactive solutions for optimal smart grid operation
    - Fuzzy-based smart grid monitoring and control frameworks
    - Granular computing for uncertainty management in smart grids
    - Self-organizing and decentralized paradigms for information processing

    An Effective Disease Prediction System using CRF based Butterfly Optimization, Fuzzy Decision Tree and DBN

    Diabetes is a serious, potentially deadly disease, and patients must be able to control their blood glucose levels. Although various researchers have proposed numerous diabetes detection and prediction systems in the past, these do not meet the requirements in terms of detection and prediction accuracy. Nowadays, diabetes patients use devices such as wireless insulin pumps, which deliver insulin into the body in place of syringes. In this context, insulin treatment is necessary to avoid life-threatening complications. Toward this mission, a new deep learning-based disease detection system is introduced that identifies Type-1 and Type-2 diabetes, heart disease, and breast cancer. In this system, a new Conditional Random Field based Butterfly Optimization Algorithm (CRF-BOA) is developed to select the important features for identifying Type-1 and Type-2 diabetes. In addition, a new fuzzy ID3 classification method is developed to classify patient records as either normal or abnormal (disease-affected). Finally, a deep belief network (DBN) is trained on the classified patient records to identify similar relevant symptoms and the glucose status across patient records. Experiments were conducted to demonstrate the efficiency of the proposed deep learning approach in terms of glucose monitoring efficiency and disease prediction accuracy. The proposed approach achieved higher detection accuracy than current deep learning approaches in this direction, as measured by error rate and accuracy.
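
    The pipeline pairs a metaheuristic feature-selection stage with an interpretable classifier. Below is a minimal sketch of that wrapper-style selection loop, with a plain decision tree standing in for the fuzzy ID3 classifier and a simplified bit-flip update standing in for the BOA's fragrance-guided moves; the dataset and fitness function are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: wrapper-style binary feature selection in the spirit of the
# paper's CRF-BOA stage. A plain decision tree stands in for the fuzzy ID3
# classifier; the update rule is a crude stand-in for BOA, not the real thing.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)   # illustrative dataset choice
n_features = X.shape[1]

def fitness(mask):
    """Score a feature subset: CV accuracy minus a small size penalty."""
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(max_depth=5, random_state=0)
    acc = cross_val_score(clf, X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.sum() / n_features

# Population of candidate feature masks ("butterflies").
pop = rng.random((12, n_features)) < 0.5
for _ in range(20):
    scores = np.array([fitness(m) for m in pop])
    best = pop[scores.argmax()].copy()
    for i in range(len(pop)):
        # Move each candidate toward the current best mask...
        flip = rng.random(n_features) < 0.1
        pop[i] = np.where(flip, best, pop[i])
        # ...and add a little exploration noise.
        pop[i] ^= rng.random(n_features) < 0.02

print("selected features:", np.flatnonzero(best))
print("fitness:", round(float(fitness(best)), 3))
```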

    The Right Direction Needed to Develop White-Box Deep Learning in Radiology, Pathology, and Ophthalmology: A Short Review

    The popularity of deep learning (DL) in the machine learning community has increased dramatically since 2012. The theoretical foundations of DL are well rooted in the classical neural network (NN). Rule extraction is not a new concept; it was originally devised for shallow NNs, and for about the past 30 years many researchers have made extensive efforts to resolve the "black box" problem of trained shallow NNs using rule extraction technology. A rule extraction technology that is well balanced between accuracy and interpretability has recently been proposed for shallow NNs as a promising means to address this problem. We are now confronting a "new black box" problem caused by the highly complex deep NNs (DNNs) generated by DL. In this paper, we first review four rule extraction approaches for resolving the black box problem of DNNs trained by DL in computer vision. Next, we discuss the fundamental limitations and criticisms of current DL approaches in radiology, pathology, and ophthalmology from the black box point of view. We also review methods for converting DNNs to decision trees and point out their limitations. Furthermore, we describe a transparent approach for resolving the black box problem of DNNs trained by a deep belief network. Finally, we briefly describe how to achieve transparency for DNNs generated by a convolutional NN and discuss a practical way to realize transparent DL in radiology, pathology, and ophthalmology.
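
    One family of reviewed approaches treats the trained network purely as an oracle and fits an interpretable surrogate to its outputs. A minimal sketch of that idea, assuming a small sklearn MLP and a depth-limited decision tree as the surrogate (both illustrative choices, not a specific method from the review):

```python
# Hedged sketch of pedagogical rule extraction: approximate a trained neural
# network with a shallow decision tree fitted to the network's own predictions,
# then read the tree off as if-then rules. Dataset and model sizes are
# illustrative; the paper's reviewed methods are more elaborate.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# The "black box": a small trained neural network.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, y)

# Query the network and fit an interpretable surrogate to its answers.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, net.predict(X))

# Fidelity: how often the surrogate agrees with the network it explains.
fidelity = (surrogate.predict(X) == net.predict(X)).mean()
print(f"fidelity to the network: {fidelity:.2%}")
print(export_text(surrogate, feature_names=load_iris().feature_names))
```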

    Deep learning systems as complex networks

    Thanks to the availability of large-scale digital datasets and massive amounts of computational power, deep learning algorithms can learn representations of data by exploiting multiple levels of abstraction. These machine learning methods have greatly improved the state of the art in many challenging cognitive tasks, such as visual object recognition, speech processing, natural language understanding, and automatic translation. In particular, one class of deep learning models, known as deep belief networks, can discover intricate statistical structure in large data sets in a completely unsupervised fashion, by learning a generative model of the data using Hebbian-like learning mechanisms. Although these self-organizing systems can be conveniently formalized within the framework of statistical mechanics, their internal functioning remains opaque, because their emergent dynamics cannot be solved analytically. In this article we propose to study deep belief networks using techniques commonly employed in the study of complex networks, in order to gain some insights into the structural and functional properties of the computational graph resulting from the learning process.
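
    The analysis hinges on mapping learned weight matrices onto a weighted graph whose nodes are neurons. A minimal sketch of that construction, assuming toy layer sizes and random weights as placeholders for the weights of a trained deep belief network:

```python
# Hedged sketch of the complex-network view: turn a layered network's weight
# matrices into a weighted graph and inspect simple structural statistics.
# Random weights and toy layer sizes are placeholders for a trained DBN
# (e.g. a 784-500-500-2000 architecture handled the same way).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
layer_sizes = [64, 32, 16]
weights = [rng.normal(0.0, 0.1, (m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

# Bipartite edges between consecutive layers, weighted by |w_ij|.
G = nx.Graph()
offset = 0
for W in weights:
    m, n = W.shape
    for i in range(m):
        for j in range(n):
            G.add_edge(offset + i, offset + m + j, weight=abs(W[i, j]))
    offset += m

# Node strength = total absolute synaptic weight attached to a neuron.
strength = np.array([s for _, s in G.degree(weight="weight")])
print(f"nodes: {G.number_of_nodes()}, edges: {G.number_of_edges()}")
print(f"mean strength: {strength.mean():.3f}, max strength: {strength.max():.3f}")
```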

    Toward enhancement of deep learning techniques using fuzzy logic: a survey

    Deep learning has recently emerged as a type of artificial intelligence (AI) and machine learning (ML); it typically imitates the way humans acquire a particular kind of knowledge. Deep learning is considered an essential element of data science, which comprises predictive modeling and statistics, and it makes the processes of collecting, interpreting, and analyzing big data easier and faster. Deep neural networks are a kind of ML model in which non-linear processing units are layered to extract particular features from the inputs. The training process of such networks is very expensive, however, and depends on the optimization method used, so optimal results may not be obtained; deep learning techniques are also vulnerable to data noise. For these reasons, fuzzy systems are used to improve the performance of deep learning algorithms, especially in combination with neural networks, and to improve the representation accuracy of deep learning models. This survey paper reviews deep learning based fuzzy logic models and techniques presented in previous studies, where fuzzy logic is used to improve deep learning performance. The approaches are divided into two categories based on how the two techniques are combined. Furthermore, the models' real-world practicality is discussed.
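
    A common pattern in the surveyed hybrids is to place a fuzzification stage in front of the network, so the model reasons over membership degrees rather than raw values. A minimal sketch of that pattern, assuming triangular low/medium/high memberships and a small sklearn MLP (illustrative choices, not a specific surveyed architecture):

```python
# Hedged sketch of one fuzzy/deep-learning combination: fuzzify each input
# feature into low/medium/high membership degrees, then feed those degrees
# to a neural network. Membership shapes and the downstream model are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fuzzify(X):
    """Replace each feature with its low/medium/high membership degrees."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    mid = (lo + hi) / 2
    low = triangular(X, lo - (hi - lo), lo, mid)
    med = triangular(X, lo, mid, hi)
    high = triangular(X, mid, hi, hi + (hi - lo))
    return np.hstack([low, med, high])

X, y = load_iris(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print("raw features:", cross_val_score(net, X, y, cv=5).mean().round(3))
print("fuzzified   :", cross_val_score(net, fuzzify(X), y, cv=5).mean().round(3))
```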

    Revolutionizing Groundwater Management with Hybrid AI Models: A Practical Review

    Developing precise soft computing methods for groundwater management, covering both quality and quantity, is crucial for improving water resources planning and management. In the past 20 years, significant progress has been made in groundwater management using hybrid machine learning (ML) models, a form of artificial intelligence (AI). Although various review articles have reported advances in this field, the existing literature has yet to adequately cover groundwater management using hybrid ML. This review article aims to capture the current state of the art in hybrid ML models for groundwater management and the achievements made in this domain. It covers the most cited hybrid ML models employed for groundwater management from 2009 to 2022; summarizes the reviewed papers, highlighting their strengths and weaknesses and the performance criteria employed; and identifies the most highly cited models. Across these studies, accuracy was significantly enhanced, yielding substantially improved and robust outcomes. Additionally, the article outlines recommendations for future research directions to further improve the accuracy of groundwater management and prediction models and to extend related knowledge.
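
    "Hybrid" in this literature usually means a base learner wrapped in an optimiser or preprocessing stage. A minimal sketch of that structure follows, with a grid search standing in for the metaheuristic tuners (GA, PSO, and similar) common in the reviewed papers, and a synthetic lagged groundwater-level series standing in for field data:

```python
# Hedged sketch of a "hybrid" setup: an optimiser tunes the hyper-parameters
# of a base learner predicting groundwater level from lagged observations.
# The synthetic series and the grid-search optimiser are illustrative
# stand-ins for field data and metaheuristic tuners.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

rng = np.random.default_rng(0)
t = np.arange(600)
level = 10 + 2 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.3, t.size)

# Lagged observations as predictors: level[t-6..t-1] -> level[t].
lags = 6
X = np.column_stack([level[i:-(lags - i)] for i in range(lags)])
y = level[lags:]

search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    {"n_estimators": [50, 200], "max_depth": [4, 8, None]},
    cv=TimeSeriesSplit(n_splits=4),
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print("best params:", search.best_params_)
print("CV RMSE:", round(-search.best_score_, 3))
```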

    Corporate Credit Rating: A Survey

    Corporate credit rating (CCR) plays a very important role in contemporary economic and social development, and how to apply credit rating methods to enterprises has long been a question worthy of discussion. Based on a reading of the relevant domestic and international literature, this paper presents a systematic survey of CCR. It traces the development of CCR methods at three levels: statistical models, machine learning models, and neural network models; summarizes the common CCR databases; and compares the advantages and disadvantages of the models in depth. Finally, the paper summarizes the problems in current research and offers prospects for the future of CCR. Compared with existing reviews of CCR, this paper expounds and analyzes the recent progress of neural network models in this field.
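
    The survey's first two model families can be contrasted directly on one task. A minimal sketch, assuming synthetic "financial ratio" features and four rating buckets (both fabricated for illustration), comparing a statistical baseline with a machine-learning model:

```python
# Hedged sketch contrasting two of the surveyed model families on the same
# task: a statistical baseline (logistic regression) versus a machine-learning
# model (gradient boosting) classifying synthetic ratio-like features into
# rating buckets. Data and features are fabricated for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in for a CCR dataset: 8 ratio-like features, 4 rating buckets.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=6,
                           n_classes=4, random_state=0)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("gradient boosting  ", GradientBoostingClassifier(random_state=0)),
]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```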

    Predicting Short-Term Traffic Congestion on Urban Motorway Networks

    Traffic congestion is a widely occurring phenomenon caused by increased use of vehicles on roads, resulting in slower speeds, longer delays, and increased vehicular queueing. Every year, over a thousand hours are lost to traffic congestion, at great cost in time and money. In this thesis, we propose a multimodal data fusion framework for predicting traffic congestion on urban motorway networks. It comprises three main approaches. The first approach predicts traffic congestion on urban motorway networks using data mining techniques; two categories of models are considered, namely neural networks (the back-propagation neural network and the deep belief network) and random forest classifiers. The second approach predicts traffic congestion using social media data: Twitter traffic delay tweets are analyzed using sentiment analysis and cluster classification for traffic flow prediction. Lastly, we propose a data fusion framework as the third approach, comprising two main techniques. The homogeneous data fusion technique fuses data of the same type (quantitative or numeric) estimated using machine learning algorithms. The heterogeneous data fusion technique fuses the quantitative data obtained from the homogeneous fusion model with the qualitative or categorical data (i.e., traffic tweet information) from the Twitter data source using Mamdani fuzzy rule inference systems. The proposed work has strong practical applicability and can be used by traffic planners and decision-makers for traffic congestion monitoring, prediction, and route generation under disruption.
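
    The heterogeneous fusion step is the distinctive part of the framework: a Mamdani system combines a crisp congestion estimate with a tweet-derived score. A minimal sketch of min-max Mamdani inference with centroid defuzzification, using two illustrative rules and made-up membership functions rather than the thesis's actual rule base:

```python
# Hedged sketch of the heterogeneous fusion step: a two-input Mamdani system
# combining a numeric congestion estimate (from the sensor-data models) with
# a tweet-derived delay score. Rules and membership shapes are illustrative.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership on [a, c] peaking at b; a == b or b == c gives a shoulder."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a) if b > a else np.ones_like(x)
    right = (c - x) / (c - b) if c > b else np.ones_like(x)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

out = np.linspace(0.0, 1.0, 201)          # output universe: congestion risk
risk_low = tri(out, 0.0, 0.0, 0.5)
risk_high = tri(out, 0.5, 1.0, 1.0)

def fuse(sensor_congestion, tweet_delay):
    """Min-max Mamdani inference with centroid defuzzification."""
    # Rule 1: IF sensor congestion is high AND tweet delay is high THEN risk is high.
    r1 = min(tri(sensor_congestion, 0.4, 1.0, 1.0),
             tri(tweet_delay, 0.4, 1.0, 1.0))
    # Rule 2: IF sensor congestion is low THEN risk is low.
    r2 = tri(sensor_congestion, 0.0, 0.0, 0.6)
    # Clip each consequent at its rule strength, aggregate by max, take centroid.
    agg = np.maximum(np.minimum(r1, risk_high), np.minimum(r2, risk_low))
    return float((agg * out).sum() / (agg.sum() + 1e-9))

print(fuse(sensor_congestion=0.8, tweet_delay=0.9))   # high risk
print(fuse(sensor_congestion=0.2, tweet_delay=0.9))   # low risk
```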

    Eddy current defect response analysis using sum of Gaussian methods

    This dissertation studies methods to automatically detect eddy current differential-coil defect signatures and approximate them as a sum of Gaussian functions (SoG). Datasets with varying materials, defect sizes, inspection frequencies, and coil diameters were investigated. Dimensionally reduced representations of the defect responses were obtained using common existing reduction methods and novel SoG-based enhancements to them. The efficacy of the SoG-enhanced representations was studied using common interpretable machine learning (ML) classifier designs, with the SoG representations showing significant improvement on common analysis metrics.
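
    At its core, the SoG representation is a least-squares fit of a few Gaussians to a measured response. A minimal sketch of that fit with scipy, assuming a synthetic two-lobe signature as a stand-in for measured differential-coil data (the dissertation's automated detection stage is omitted):

```python
# Hedged sketch of the core SoG idea: approximate a differential-coil defect
# signature with a small sum of Gaussians via nonlinear least squares. The
# synthetic two-lobe signature is an illustrative stand-in for measured data.
import numpy as np
from scipy.optimize import curve_fit

def sum_of_gaussians(x, *params):
    """params = (amp1, mu1, sigma1, amp2, mu2, sigma2, ...)."""
    y = np.zeros_like(x)
    for a, mu, s in zip(params[0::3], params[1::3], params[2::3]):
        y += a * np.exp(-((x - mu) ** 2) / (2 * s ** 2))
    return y

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 400)
# Synthetic differential defect response: a positive and a negative lobe.
truth = sum_of_gaussians(x, 1.0, -1.0, 0.8, -0.9, 1.2, 0.7)
signal = truth + rng.normal(0, 0.02, x.size)

p0 = [1, -1.5, 1, -1, 1.5, 1]             # rough initial guess for two lobes
popt, _ = curve_fit(sum_of_gaussians, x, signal, p0=p0)
rmse = np.sqrt(np.mean((sum_of_gaussians(x, *popt) - signal) ** 2))
print("fitted parameters:", np.round(popt, 3))
print("RMSE:", round(float(rmse), 4))
```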