
    Computational intelligence approaches for energy load forecasting in smart energy management grids: state of the art, future challenges, and research directions

    Energy management systems are designed to monitor, optimize, and control the smart grid energy market. Demand-side management, considered an essential part of the energy management system, can enable utility market operators to make better management decisions for energy trading between consumers and the operator. In this system, a priori knowledge about the energy load pattern can help reshape the load and cut the energy demand curve, thus allowing better management and distribution of energy in smart grid energy systems. Designing a computationally intelligent load forecasting (ILF) system is often a primary goal of energy demand management. This study explores the state of the art of computationally intelligent (i.e., machine learning) methods applied to load forecasting, in terms of their classification and evaluation for sustainable operation of the overall energy management system. More than 50 research papers related to the subject identified in the existing literature are classified into two categories: single and hybrid computational intelligence (CI)-based load forecasting techniques. The advantages and disadvantages of each individual technique are also discussed to place them in the perspective of energy management research. The identified methods are further investigated through a qualitative analysis based on prediction accuracy, which confirms the dominance of hybrid forecasting methods, often implemented as metaheuristic algorithms that combine different optimization techniques, over single-model approaches. Based on this extensive survey, the review predicts a continued expansion of the literature on different CI approaches and their optimization with both heuristic and metaheuristic methods for energy load forecasting, and on their potential utilization in real-time smart energy management grids to address future challenges in energy demand management
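
    The single-versus-hybrid distinction drawn in this abstract can be illustrated with a small sketch. The following is not from any surveyed paper: it fits one neural forecaster with fixed hyperparameters and a second whose hyperparameters are chosen by a simple random search standing in for a metaheuristic tuner (e.g., PSO or a genetic algorithm); the synthetic load series, lag window, and parameter ranges are all assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import mean_absolute_percentage_error

        rng = np.random.default_rng(0)
        hours = np.arange(24 * 120)
        # synthetic hourly demand: daily cycle plus noise (placeholder for real load data)
        load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

        def make_lagged(series, n_lags=24):
            X = np.array([series[i - n_lags:i] for i in range(n_lags, len(series))])
            return X, series[n_lags:]

        X, y = make_lagged(load)
        i_tr, i_va = int(0.7 * len(X)), int(0.85 * len(X))
        X_tr, y_tr, X_va, y_va = X[:i_tr], y[:i_tr], X[i_tr:i_va], y[i_tr:i_va]
        X_te, y_te = X[i_va:], y[i_va:]

        # single CI model: one forecaster with fixed hyperparameters
        single = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)

        # "hybrid" model: the same base learner wrapped in a random search that stands in
        # for a metaheuristic selecting hidden-layer size and regularization strength
        best, best_err = None, np.inf
        for _ in range(10):
            hidden = int(rng.integers(8, 64))
            alpha = 10 ** rng.uniform(-5, -2)
            cand = MLPRegressor(hidden_layer_sizes=(hidden,), alpha=alpha,
                                max_iter=500, random_state=0).fit(X_tr, y_tr)
            err = mean_absolute_percentage_error(y_va, cand.predict(X_va))
            if err < best_err:
                best, best_err = cand, err

        print("single MAPE:", mean_absolute_percentage_error(y_te, single.predict(X_te)))
        print("tuned  MAPE:", mean_absolute_percentage_error(y_te, best.predict(X_te)))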

    Fuzzy Side Information Clustering-Based Framework for Effective Recommendations

    Collaborative filtering (CF) is the most successful and widely implemented algorithm in the area of recommender systems (RSs). It generates recommendations using a set of user-product ratings by matching similarity between the profiles of different users. Computing similarity among user profiles efficiently in the case of sparse data is the most crucial component of the CF technique. Data sparsity and accuracy are the two major issues associated with the classical CF approach. In this paper, we try to solve these issues using a novel approach based on side information (user-product background content) and the Mahalanobis distance measure. Side information has been incorporated into RSs to further improve their performance, especially in the case of data sparsity. However, incorporating side information into traditional two-dimensional recommender systems would increase the dimensionality and complexity of the system. Therefore, to alleviate the problem of dimensionality, we cluster users based on their side information using the k-means clustering algorithm, and each user's similarity is computed using the Mahalanobis distance method. Additionally, we use fuzzy sets to represent the side information more efficiently. Results of experimentation with two benchmark datasets show that our framework improves the recommendation quality and predictive accuracy of both traditional and clustering-based collaborative recommendations
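
    As a rough sketch of the clustering-plus-Mahalanobis idea (my own simplification, not the authors' code, and omitting the fuzzy representation of side information): users are grouped by k-means on their side information, and within the target user's cluster, neighbours are ranked by an inverse Mahalanobis distance over their rating vectors. The data shapes, random data, and covariance regularisation are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        n_users, n_items = 100, 40
        ratings = rng.integers(0, 6, size=(n_users, n_items)).astype(float)   # 0 = unrated
        side_info = rng.random((n_users, 5))                                   # e.g. age, genre preferences

        # Step 1: cluster users on side information to shrink the neighbour search space.
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(side_info)

        def mahalanobis_similarity(u, v, inv_cov):
            d = u - v
            dist = np.sqrt(d @ inv_cov @ d)
            return 1.0 / (1.0 + dist)          # map distance to a (0, 1] similarity

        # Step 2: within the target user's cluster, score neighbours by Mahalanobis similarity.
        target = 0
        members = np.where(labels == labels[target])[0]
        cov = np.cov(ratings[members], rowvar=False) + 1e-6 * np.eye(n_items)  # regularised covariance
        inv_cov = np.linalg.inv(cov)
        sims = {v: mahalanobis_similarity(ratings[target], ratings[v], inv_cov)
                for v in members if v != target}
        print(sorted(sims, key=sims.get, reverse=True)[:5])                    # top-5 neighbours for prediction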

    A Study of recent classification algorithms and a novel approach for biosignal data classification

    Analyzing and understanding human biosignals have been important research areas with many practical applications in everyday life. For example, Brain Computer Interface is a research area that studies the connection between the human brain and external systems by processing and learning the brain signals called Electroencephalography (EEG) signals. Similarly, various assistive robotics applications are being developed to interpret eye or muscle signals in humans in order to provide control inputs for external devices. The efficiency of all of these applications depends heavily on being able to process and classify human biosignals. Therefore, many techniques from the Signal Processing and Machine Learning fields are applied in order to understand human biosignals better and to increase the efficiency and success of these applications. This thesis proposes a new classifier for biosignal data classification utilizing Particle Swarm Optimization Clustering and Radial Basis Function Networks (RBFN). The performance of the proposed classifier, together with several variations of the technique, is analyzed through comparisons with state-of-the-art classifiers such as Fuzzy Functions Support Vector Machines (FFSVM) and Improved Fuzzy Functions Support Vector Machines (IFFSVM). These classifiers are applied to the classification of the same biological signals in order to evaluate the proposed technique. Several clustering algorithms used in these classifiers, such as K-means, Fuzzy c-means, and Particle Swarm Optimization (PSO), are studied and compared with each other based on their clustering abilities. The effects of the analyzed clustering algorithms on the performance of the Radial Basis Function Network classifier are investigated. Strengths and weaknesses are analyzed on various standard and EEG datasets. Results show that the proposed classifier that combines PSO clustering with the RBFN classifier can reach or exceed the performance of these state-of-the-art classifiers. Finally, the proposed classification technique is applied in a real-time system application where a mobile robot is controlled based on a person's EEG signal
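
    The PSO-clustering-plus-RBFN combination can be sketched as follows; this is a simplified stand-in for the thesis implementation, with the Iris dataset replacing EEG data and the swarm size, inertia, acceleration coefficients, and RBF width chosen arbitrarily. A small PSO positions the RBF centres by minimising the mean distance of training samples to their nearest centre, and the output layer is solved by least squares.

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split

        X, y = load_iris(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        n_centres, dim = 6, X.shape[1]

        def quantisation_error(centres):
            # mean distance of each training sample to its nearest centre (PSO fitness)
            d = np.linalg.norm(X_tr[:, None, :] - centres[None], axis=2)
            return d.min(axis=1).mean()

        # --- PSO over the centre coordinates ---------------------------------------
        rng = np.random.default_rng(0)
        n_particles = 20
        pos = rng.uniform(X_tr.min(0), X_tr.max(0), size=(n_particles, n_centres, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_f = pos.copy(), np.array([quantisation_error(p) for p in pos])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(60):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
            f = np.array([quantisation_error(p) for p in pos])
            better = f < pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()

        # --- RBF network: Gaussian features at the PSO centres + linear read-out ---
        def rbf_features(X, centres, width=1.0):
            d2 = ((X[:, None, :] - centres[None]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2 * width ** 2))

        Phi_tr = rbf_features(X_tr, gbest)
        targets = np.eye(3)[y_tr]                                # one-hot class targets
        W, *_ = np.linalg.lstsq(Phi_tr, targets, rcond=None)     # output-layer weights
        pred = rbf_features(X_te, gbest) @ W
        print("accuracy:", (pred.argmax(1) == y_te).mean())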

    Application of Computational Intelligence Techniques to Process Industry Problems

    In the last two decades there has been large progress in the computational intelligence research field. The fruits of the effort spent on research in this field are powerful techniques for pattern recognition, data mining, data modelling, etc. These techniques achieve high performance on traditional data sets like the UCI machine learning database. Unfortunately, such data sources usually represent clean data, without problems like outliers, missing values, or feature collinearity that are common in real-life industrial data. The presence of faulty data samples can have very harmful effects on the models: for example, if such samples are presented during training, they can either cause sub-optimal performance of the trained model or, in the worst case, destroy the knowledge the model has learnt so far. For these reasons, the application of present modelling techniques to industrial problems has developed into a research field of its own. Based on a discussion of the properties and issues of the data and the state-of-the-art modelling techniques in the process industry, this paper presents a novel unified approach to the development of predictive models in the process industry

    Digital Image-Based Frameworks for Monitoring and Controlling of Particulate Systems

    Particulate processes are widely involved in various industries, and most products in the chemical industry today are manufactured as particulates. Previous research and practice illustrate that the final product quality can be influenced by particle properties such as size and shape, which are related to operating conditions. Online characterization of these particles is an important step for maintaining desired product quality in particulate processes. Image-based characterization for the purpose of monitoring and controlling particulate processes is very promising and attractive. The development of a digital image-based framework, in the context of this research, can be envisioned in two parts. One is performing image analysis and designing advanced algorithms for segmentation and texture analysis. The other is formulating and implementing modern predictive tools to establish the correlations between the texture features and the particle characteristics. According to the extent of touching and overlapping between particles in images, two image analysis methods were developed and tested. For slight touching problems, image segmentation algorithms were developed by introducing Wavelet Transform de-noising and Fuzzy C-means Clustering to detect the touching regions, and by exploiting the intensity and geometry characteristics of the touching areas. Since individual particles can be identified through image segmentation, particle number, particle equivalent diameter, and size distribution were used as the features. For severe touching and overlapping problems, texture analysis was carried out through the estimation of the wavelet energy signature and the fractal dimension based on wavelet decomposition of the objects. Predictive models for monitoring and control of particulate processes were then formulated and implemented. Building on the feature extraction properties of the wavelet decomposition, a projection technique such as principal component analysis (PCA) was used to detect off-specification conditions in which the particle mean size deviates from the target value. Furthermore, linear and nonlinear predictive models based on partial least squares (PLS) and artificial neural networks (ANN) were formulated, implemented and tested on an experimental facility to predict particle characteristics (mean size and standard deviation) from the image texture analysis
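
    A compact sketch of the monitoring half of such a framework (assumptions throughout, including the synthetic "particle" images, a single-level Haar transform standing in for the full wavelet decomposition, and the 99th-percentile alarm threshold): wavelet band energies serve as texture features, PCA is fitted on in-spec images, and a squared prediction error (SPE) flags off-specification conditions.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)

        def haar_energies(img):
            # one-level 2-D Haar transform via 2x2 block averages/differences
            a = img[0::2, 0::2]; b = img[0::2, 1::2]; c = img[1::2, 0::2]; d = img[1::2, 1::2]
            ll = (a + b + c + d) / 4; lh = (a - b + c - d) / 4
            hl = (a + b - c - d) / 4; hh = (a - b - c + d) / 4
            return np.array([(band ** 2).mean() for band in (ll, lh, hl, hh)])

        def synth_image(coarseness):
            # coarser "particles" -> blockier texture -> energy shifts toward the LL band
            base = rng.random((64 // coarseness, 64 // coarseness))
            return np.kron(base, np.ones((coarseness, coarseness)))

        normal = np.array([haar_energies(synth_image(1)) for _ in range(200)])   # in-spec images
        pca = PCA(n_components=2).fit(normal)

        def spe(x):
            recon = pca.inverse_transform(pca.transform(x.reshape(1, -1)))
            return float(((x - recon) ** 2).sum())

        threshold = np.quantile([spe(f) for f in normal], 0.99)
        off_spec = haar_energies(synth_image(4))        # simulated particle-size drift
        print("off-spec detected:", spe(off_spec) > threshold)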

    A Survey of Adaptive Resonance Theory Neural Network Models for Engineering Applications

    This survey samples from the ever-growing family of adaptive resonance theory (ART) neural network models used to perform the three primary machine learning modalities, namely, unsupervised, supervised and reinforcement learning. It comprises a representative list from classic to modern ART models, thereby painting a general picture of the architectures developed by researchers over the past 30 years. The learning dynamics of these ART models are briefly described, and their distinctive characteristics such as code representation, long-term memory and corresponding geometric interpretation are discussed. Useful engineering properties of ART (speed, configurability, explainability, parallelization and hardware implementation) are examined along with current challenges. Finally, a compilation of online software libraries is provided. It is expected that this overview will be helpful to new and seasoned ART researchers
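
    As a concrete taste of the family being surveyed, the following is a minimal Fuzzy ART category learner written from the standard equations (complement coding, choice function, vigilance test, fast learning); the parameter values are arbitrary and none of the libraries compiled in the survey are used.

        import numpy as np

        class FuzzyART:
            def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
                self.rho, self.alpha, self.beta = rho, alpha, beta   # vigilance, choice, learning rate
                self.w = []                                          # long-term memory: one weight per category

            def _complement_code(self, x):
                return np.concatenate([x, 1.0 - x])                  # keeps |I| constant, limits proliferation

            def train(self, X):
                for x in X:
                    i = self._complement_code(np.asarray(x, dtype=float))
                    # rank existing categories by the choice (activation) function
                    order = sorted(range(len(self.w)),
                                   key=lambda j: -np.minimum(i, self.w[j]).sum()
                                                  / (self.alpha + self.w[j].sum()))
                    for j in order:
                        match = np.minimum(i, self.w[j]).sum() / i.sum()
                        if match >= self.rho:                        # vigilance test passed -> resonance
                            self.w[j] = (self.beta * np.minimum(i, self.w[j])
                                         + (1 - self.beta) * self.w[j])
                            break
                    else:                                            # no resonance -> commit a new category
                        self.w.append(i.copy())
                return self

        art = FuzzyART(rho=0.8).train(np.random.default_rng(0).random((50, 2)))
        print("categories created:", len(art.w))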

    Fully Connected Neural Networks Ensemble with Signal Strength Clustering for Indoor Localization in Wireless Sensor Networks

    The paper introduces a method which improves the localization accuracy of the signal strength fingerprinting approach. According to the proposed method, the entire localization area is divided into regions by clustering the fingerprint database. For each region a prototype of the received signal strength is determined, and a dedicated artificial neural network (ANN) is trained by using only those fingerprints that belong to this region (cluster). The final estimation of the location is obtained by fusing the coordinates delivered by the selected ANNs. Sensor nodes have to store only the signal strength prototypes and the synaptic weights of the ANNs in order to estimate their locations. This approach significantly reduces the amount of memory required to store a received signal strength map. Various ANN topologies were considered in this study. Improvement of the localization accuracy as well as speed-up of the learning process was achieved by employing fully connected neural networks. The proposed method was verified and compared against state-of-the-art localization approaches in a real-world indoor environment by using both stationary and mobile sensor nodes
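
    A reduced sketch of the region-based scheme (not the paper's setup): fingerprints are clustered on received signal strength, one small network per cluster maps RSS to coordinates, and a query is answered by fusing the networks whose stored prototypes are closest. The log-distance propagation model, anchor layout, and network sizes are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        anchors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)   # beacon positions

        def rss(pos):
            # simple log-distance path-loss model with shadowing noise
            d = np.linalg.norm(anchors - pos, axis=1)
            return -40 - 20 * np.log10(d + 0.1) + rng.normal(0, 1, len(anchors))

        positions = rng.uniform(0, 10, size=(400, 2))
        fingerprints = np.array([rss(p) for p in positions])

        # cluster the fingerprint database; each cluster centre is the stored RSS prototype
        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(fingerprints)
        nets = {c: MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
                      .fit(fingerprints[km.labels_ == c], positions[km.labels_ == c])
                for c in range(4)}

        def locate(sample, top=2):
            # fuse estimates from the ANNs of the `top` closest prototypes
            dist = np.linalg.norm(km.cluster_centers_ - sample, axis=1)
            nearest = np.argsort(dist)[:top]
            return np.mean([nets[c].predict(sample.reshape(1, -1))[0] for c in nearest], axis=0)

        true = np.array([3.0, 7.0])
        print("estimate:", locate(rss(true)), "true:", true)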

    Data fusion by using machine learning and computational intelligence techniques for medical image analysis and classification

    Data fusion is the process of integrating information from multiple sources to produce specific, comprehensive, unified data about an entity. Data fusion is categorized as low level, feature level and decision level. This research is focused on both investigating and developing feature- and decision-level data fusion for automated image analysis and classification. The common procedure for solving these problems can be described as: 1) process the image for region-of-interest detection, 2) extract features from the region of interest, and 3) create a learning model based on the feature data. Image processing techniques were performed using edge detection, a histogram threshold and a color drop algorithm to determine the region of interest. The extracted features were low-level features, including textural, color and symmetrical features. For image analysis and classification, feature- and decision-level data fusion techniques are investigated for model learning by using and integrating computational intelligence and machine learning techniques. These techniques include artificial neural networks, evolutionary algorithms, particle swarm optimization, decision trees, clustering algorithms, fuzzy logic inference, and voting algorithms. This work presents both the investigation and development of data fusion techniques for the application areas of dermoscopy skin lesion discrimination, content-based image retrieval, and graphic image type classification --Abstract, page v
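
    Decision-level fusion by voting, the simplest of the schemes listed above, can be sketched as follows; the breast-cancer feature set stands in for the dermoscopy features described here, and the three member classifiers are arbitrary choices rather than the dissertation's models.

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import VotingClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_breast_cancer(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # member classifiers trained on the same extracted feature vectors
        members = [("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
                   ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
                   ("knn", KNeighborsClassifier(n_neighbors=7))]

        # decision-level fusion: majority vote over the members' class decisions
        fusion = VotingClassifier(estimators=members, voting="hard").fit(X_tr, y_tr)
        for name, model in members:
            print(name, model.fit(X_tr, y_tr).score(X_te, y_te))   # individual decisions
        print("fused", fusion.score(X_te, y_te))                    # fused decision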