
    Knowledge management overview of feature selection problem in high-dimensional financial data: Cooperative co-evolution and Map Reduce perspectives

    The term big data characterizes the massive amounts of data generated by advanced technologies in different domains, often described by the 4Vs (volume, velocity, variety, and veracity) to indicate the amount of data that can only be processed via computationally intensive analysis, the speed of its creation, the different types of data, and its accuracy. High-dimensional financial data, such as time-series and space-time data, contain a large number of features (variables) but a small number of samples, and are used to measure various real-time business situations for financial organizations. Such datasets are normally noisy, complex correlations may exist between their features, and many domains, including finance, lack the analytical tools to mine them for knowledge discovery because of the high dimensionality. Feature selection is an optimization problem: find a minimal subset of relevant features that maximizes classification accuracy and reduces computation. Traditional statistics-based feature selection approaches are not adequate to deal with the curse of dimensionality associated with big data. Cooperative co-evolution, a meta-heuristic algorithm that takes a divide-and-conquer approach, decomposes high-dimensional problems into smaller sub-problems. Further, MapReduce, a programming model, offers a ready-to-use distributed, scalable, and fault-tolerant infrastructure for parallelizing the developed algorithms. This article presents a knowledge management overview of evolutionary feature selection approaches, state-of-the-art cooperative co-evolution and MapReduce-based feature selection techniques, and future research directions.
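
    As a rough illustration of the cooperative co-evolution idea sketched in this abstract, the Python fragment below decomposes the feature set into groups and evolves each group's bits of a shared feature mask in turn, scoring candidates by classifier accuracy. The synthetic dataset, the k-NN classifier, the (1+1)-style mutation, and all parameter values are placeholder assumptions, not details taken from the article, and the MapReduce parallelization is not reproduced here.

```python
# Minimal sketch of cooperative co-evolution for feature selection.
# Assumptions (not from the article): random feature grouping, a simple
# (1+1)-style mutation per subproblem, and k-NN accuracy as the fitness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=100, n_features=60, n_informative=8,
                           random_state=0)

n_groups = 4                                  # decompose the feature space
groups = np.array_split(rng.permutation(X.shape[1]), n_groups)
mask = rng.random(X.shape[1]) < 0.5           # global candidate solution

def fitness(m):
    """Classification accuracy of the selected feature subset."""
    if not m.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, m], y, cv=3).mean()

best = fitness(mask)
for generation in range(20):
    for g in groups:                          # evolve each subproblem in turn
        trial = mask.copy()
        flip = g[rng.random(len(g)) < 0.2]    # mutate only this group's bits
        trial[flip] = ~trial[flip]
        score = fitness(trial)                # cooperate: evaluate in the
        if score >= best:                     # context of the other groups
            mask, best = trial, score
print(f"selected {mask.sum()} features, accuracy {best:.3f}")
```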

    Faster convergence in seismic history matching by dividing and conquering the unknowns

    The aim in reservoir management is to control field operations to maximize both the short- and long-term recovery of hydrocarbons. This often comprises continuous optimization based on reservoir simulation models once the significant unknown parameters have been updated by history matching, in which they are conditioned to all available data. However, history matching of what is usually a high-dimensional problem requires expensive computer and commercial software resources. Many models are generated, particularly if there are interactions between the properties being updated and their effects on the misfit that measures the difference between model predictions and observed data. In this work, a novel 'divide and conquer' approach is developed for the seismic history matching method, which efficiently searches for the best values of uncertain parameters such as barrier transmissibilities, net:gross, and permeability by matching well and 4D seismic predictions to observed data. The 'divide' is carried out by applying a second-order polynomial regression analysis to identify independent sub-volumes of the parameter hyperspace. These are then 'conquered' by searching them separately but simultaneously with an adapted version of the quasi-global stochastic neighbourhood algorithm. This 'divide and conquer' approach is applied to the seismic history matching of the Schiehallion field, located on the UK continental shelf. The field model, supplied by the operator, contained a large number of barriers that affect flow at different times during production, and their transmissibilities were largely unknown. There was also some uncertainty in the petrophysical parameters that controlled permeability and net:gross. Application of the method was possible because the misfit function could be successfully represented as sub-misfits, each dependent on changes in a smaller number of parameters, which could then be searched separately but simultaneously. Ultimately, the number of models required to find a good match was reduced by an order of magnitude. Experimental design contributed to the efficiency, and the 'divide and conquer' approach was also able to separate the misfit on a spatial basis by using time-lapse seismic data in the misfit. The method has effectively gained greater insight into the reservoir behaviour and has been able to predict flow more accurately with a very efficient 'divide and conquer' approach.
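
    To make the 'divide' step concrete, here is a hedged Python sketch (not the authors' implementation): a second-order polynomial regression is fitted to sampled misfit values, the interaction (cross-term) coefficients are thresholded to build an interaction graph, and its connected components give parameter groups that could be searched separately. The toy misfit function, sample count, and threshold are invented for illustration.

```python
# Hedged sketch of the 'divide' step: fit a second-order polynomial to
# sampled misfit values and use the interaction (cross-term) coefficients
# to split parameters into roughly independent groups.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
n_params = 6
theta = rng.uniform(-1, 1, size=(200, n_params))    # sampled parameter sets

def misfit(t):
    # Placeholder misfit with interactions only within (0,1) and (3,4).
    return (t[:, 0] + t[:, 1]) ** 2 + t[:, 2] ** 2 + \
           (t[:, 3] - t[:, 4]) ** 2 + t[:, 5] ** 2

poly = PolynomialFeatures(degree=2, include_bias=False)
Z = poly.fit_transform(theta)
coef = LinearRegression().fit(Z, misfit(theta)).coef_

# Parameters interact if their cross-term coefficient is non-negligible.
adjacency = np.zeros((n_params, n_params), dtype=bool)
for name, c in zip(poly.get_feature_names_out(), coef):
    parts = name.split(" ")
    if len(parts) == 2 and abs(c) > 0.05:
        i, j = (int(p.lstrip("x")) for p in parts)
        adjacency[i, j] = adjacency[j, i] = True

# Connected components of the interaction graph = independent sub-volumes
# that can be 'conquered' (searched) separately but simultaneously.
groups, unseen = [], set(range(n_params))
while unseen:
    stack, comp = [unseen.pop()], set()
    while stack:
        p = stack.pop()
        comp.add(p)
        for q in list(unseen):
            if adjacency[p, q]:
                unseen.remove(q)
                stack.append(q)
    groups.append(sorted(comp))
print("independent parameter groups:", groups)
```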

    Meta-learning computational intelligence architectures

    In computational intelligence, the term 'memetic algorithm' has come to be associated with the algorithmic pairing of a global search method with a local search method. In a sociological context, a 'meme' has been loosely defined as a unit of cultural information, the social analog of genes for individuals. Both of these definitions are inadequate: 'memetic algorithm' is too specific, and ultimately a misnomer, while 'meme' is defined too generally to be of scientific use. In this dissertation the notion of memes and meta-learning is extended from a computational viewpoint, and the purpose, definitions, design guidelines, and architecture for effective meta-learning are explored. The background and structure of meta-learning architectures is discussed, incorporating viewpoints from psychology, sociology, computational intelligence, and engineering. The benefits and limitations of meme-based learning are demonstrated through two experimental case studies: Meta-Learning Genetic Programming and Meta-Learning Traveling Salesman Problem Optimization. Additionally, the development and properties of several new algorithms are detailed, inspired by the previous case studies. With applications ranging from cognitive science to machine learning, meta-learning has the potential to provide much-needed stimulation to the field of computational intelligence by providing a framework for higher-order learning. --Abstract, page iii
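
    The global-plus-local pairing that the dissertation associates with the term 'memetic algorithm' can be sketched in a few lines; the version below refines each offspring of a simple evolutionary loop with hill climbing on a toy sphere function. The objective, operators, and hyperparameters are illustrative assumptions, not taken from the case studies.

```python
# Minimal sketch of a 'memetic algorithm': a global evolutionary search
# whose offspring are refined by a local search (hill climbing).
import numpy as np

rng = np.random.default_rng(2)
dim, pop_size = 10, 20

def objective(x):
    return np.sum(x ** 2)          # toy minimization target (sphere)

def local_search(x, steps=20, sigma=0.05):
    """Simple hill climbing: the 'meme' that refines an individual."""
    best, best_f = x, objective(x)
    for _ in range(steps):
        cand = best + rng.normal(0, sigma, size=dim)
        f = objective(cand)
        if f < best_f:
            best, best_f = cand, f
    return best

pop = rng.uniform(-5, 5, size=(pop_size, dim))
for generation in range(30):
    fitness = np.array([objective(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[: pop_size // 2]]            # global selection
    children = parents + rng.normal(0, 0.5, size=parents.shape)    # mutation
    children = np.array([local_search(c) for c in children])       # local refinement
    pop = np.vstack([parents, children])
print("best objective:", min(objective(ind) for ind in pop))
```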

    Efficient Solution of Minimum Cost Flow Problems for Large-scale Transportation Networks

    With the rapid advance of information technology in the transportation industry, of which intermodal transportation is one of the most important subfields, the scale and dimension of problem sizes and datasets are rising significantly. This trend raises the need to improve the efficiency, profitability, and competitiveness of intermodal transportation networks while exploiting the rich information in the big data related to these networks. Therefore, this dissertation aims to investigate intermodal transportation network design problems, especially practical optimization problems, and to develop more realistic and effective models and solution approaches that will assist network operators and/or decision makers of the intermodal transportation system. This dissertation focuses on developing a novel strategy for solving the Minimum Cost Flow (MCF) problem for large-scale network design problems by adopting a divide-and-conquer policy during the optimization process. The main contribution is the development of an agglomerative-clustering-based tiling strategy that significantly reduces the computational and peak memory consumption of the MCF model for large-scale networks. The tiling strategy is supported by the region-division theorem and the ε-approximation region-division theorem that are proposed and proved in this dissertation. The region-division theorem is a sufficient condition that exactly guarantees the consistency between the local MCF solution of each sub-network obtained by the aforementioned tiling strategy and the global MCF solution of the whole network. Furthermore, the ε-approximation region-division theorem provides worst-case bounds, so that the practical approximate MCF solution closely approximates the optimal solution in terms of its optimal value. A series of experiments is performed to evaluate the utility of the proposed approach for solving the large-scale MCF problem. The results indicate that the proposed approach reduces the execution time and peak memory consumption of large-scale MCF problems under different circumstances.
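
    A minimal sketch of the tiling idea follows, assuming the network has already been partitioned into tiles whose supplies and demands balance (the consistency condition the region-division theorem is meant to guarantee); networkx's network_simplex stands in for the MCF solver, and the toy graph and partition are not from the dissertation.

```python
# Hedged sketch of the tiling idea: solve a min-cost flow problem on each
# tile of a partitioned network and combine the tile costs.
import networkx as nx

def solve_by_tiles(G, tiles):
    """tiles: list of node sets forming a partition of G's nodes."""
    total_cost, flows = 0, {}
    for nodes in tiles:
        sub = G.subgraph(nodes).copy()          # local MCF on the sub-network
        cost, flow = nx.network_simplex(sub)
        total_cost += cost
        flows.update(flow)
    return total_cost, flows

# Two disconnected (hence trivially independent) tiles as a toy example.
G = nx.DiGraph()
G.add_node("a", demand=-4); G.add_node("b", demand=4)
G.add_node("c", demand=-2); G.add_node("d", demand=2)
G.add_edge("a", "b", weight=3, capacity=10)
G.add_edge("c", "d", weight=1, capacity=10)

cost, flow = solve_by_tiles(G, [{"a", "b"}, {"c", "d"}])
print("total cost:", cost)   # 4*3 + 2*1 = 14
```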

    Learning enhancement of radial basis function network with particle swarm optimization

    The back propagation (BP) algorithm is the most common technique for Artificial Neural Network (ANN) learning, including for the Radial Basis Function (RBF) Network. However, the major disadvantages of BP are its relatively slow convergence rate and its tendency to become trapped in local minima. To overcome these problems, Particle Swarm Optimization (PSO) has been implemented to enhance ANN learning and to increase network performance in terms of convergence rate and accuracy. In a Back Propagation Radial Basis Function Network (BP-RBFN), there are many elements to be considered, including the number of input nodes, hidden nodes, and output nodes, the learning rate, bias, minimum error, and activation/transfer functions. These elements affect the speed of RBF Network learning. In this study, Particle Swarm Optimization (PSO) is incorporated into the RBF Network to enhance its learning performance. Two algorithms have been developed for error optimization, Back Propagation of Radial Basis Function Network (BP-RBFN) and Particle Swarm Optimization of Radial Basis Function Network (PSO-RBFN), to seek and generate better network performance. The results show that PSO-RBFN gives promising outputs with a faster convergence rate and better classification compared to BP-RBFN.
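
    As a simplified illustration of PSO-enhanced RBF network learning, the sketch below uses a global-best PSO to fit only the output-layer weights of a small RBF network on a toy regression task. The fixed centers, swarm size, and PSO coefficients are assumptions for illustration and do not reproduce the BP-RBFN/PSO-RBFN algorithms developed in the study.

```python
# Illustrative sketch: particles encode the RBF output weights, and the
# fitness is the mean squared error on a toy regression task.
import numpy as np

rng = np.random.default_rng(3)
X = np.linspace(-3, 3, 60).reshape(-1, 1)
y = np.sin(X).ravel()

centers = np.linspace(-3, 3, 8).reshape(-1, 1)   # fixed RBF centers
width = 1.0

def rbf_output(weights, X):
    phi = np.exp(-((X - centers.T) ** 2) / (2 * width ** 2))  # (n, 8) design matrix
    return phi @ weights

def mse(weights):
    return np.mean((rbf_output(weights, X) - y) ** 2)

# Standard global-best PSO over the 8 output weights.
n_particles, dim = 20, len(centers)
pos = rng.uniform(-1, 1, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for it in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("final MSE:", mse(gbest))
```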

    Infrequent pattern detection for reliable network traffic analysis using robust evolutionary computation

    While anomaly detection is very important in many domains, such as cybersecurity, there are many rare anomalies or infrequent patterns in cybersecurity datasets, and detecting infrequent patterns is computationally expensive. Cybersecurity datasets consist of many features, most of them irrelevant, which results in lower classification performance by machine learning algorithms. Hence, a feature selection (FS) approach, i.e., selecting only the relevant features, is an essential preprocessing step in cybersecurity data analysis. Despite the many FS approaches proposed in the literature, cooperative co-evolution (CC)-based FS approaches can be more suitable for cybersecurity data preprocessing in a Big Data scenario. Accordingly, in this paper, we have applied our previously proposed CC-based FS with random feature grouping (CCFSRFG) to a benchmark cybersecurity dataset as the preprocessing step. The dataset with the original features and the dataset with a reduced number of features were both used for infrequent pattern detection. Experimental analysis was performed and evaluated using 10 unsupervised anomaly detection techniques; the proposed infrequent pattern detection is therefore termed Unsupervised Infrequent Pattern Detection (UIPD). We then compared the experimental results with and without FS in terms of true positive rate (TPR). Experimental analysis indicates that the highest TPR improvement was achieved by the cluster-based local outlier factor (CBLOF) for backdoor infrequent pattern detection, at 385.91% when using FS. Furthermore, the highest overall infrequent pattern detection TPR was improved by 61.47% for all infrequent patterns using the clustering-based multivariate Gaussian outlier score (CMGOS) with FS.
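
    For a rough sense of the with/without-FS comparison described above, the sketch below measures the true positive rate of a single unsupervised detector on all features and on a reduced subset. IsolationForest stands in for the 10 detectors used in the paper, and the variance-based subset is only a placeholder for CCFSRFG, whose actual grouping and search are not reproduced here; the synthetic imbalanced dataset is likewise an assumption.

```python
# Rough sketch of the evaluation: run an unsupervised detector on the full
# feature set and on a reduced subset, then compare true positive rates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest

X, y = make_classification(n_samples=500, n_features=40, n_informative=5,
                           weights=[0.95, 0.05], random_state=4)  # class 1 = infrequent

def tpr(X_used):
    pred = IsolationForest(contamination=0.05, random_state=4).fit_predict(X_used)
    detected = pred == -1                       # flagged as anomalous
    return detected[y == 1].mean()              # fraction of the rare class caught

selected = np.argsort(X.var(axis=0))[-10:]      # placeholder "selected" features
print("TPR, all features:     ", round(tpr(X), 3))
print("TPR, selected features:", round(tpr(X[:, selected]), 3))
```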