7 research outputs found

    Energy Consumption Data Based Machine Anomaly Detection

    Get PDF

    Big Data Analysis of Facebook Users Personality Recognition using Map Reduce Back Propagation Neural Networks

    Get PDF
    Abstract - Machine learning has been an effective tool for mining large networks of information to predict personality. Identifying personality-related indicators encoded in Facebook profiles and activities is of special concern in most research efforts. This research modelled user personality from a set of features extracted from Facebook data using a Map-Reduce Back Propagation Neural Network (MRBPNN). The performance of the MRBPNN classification model was evaluated over five basic personality dimensions: Extraversion (EXT), Agreeableness (AGR), Conscientiousness (CON), Neuroticism (NEU), and Openness to Experience (OPN), using true positives, false positives, accuracy, precision and F-measure as metrics at a threshold value of 0.32. The experimental results reveal that the MRBPNN model achieves accuracies of 91.40%, 93.89%, 91.33%, 90.43% and 89.13% for CON, OPN, EXT, NEU and AGR respectively, and is more computationally efficient than a conventional Back Propagation Neural Network (BPNN) and a Support Vector Machine (SVM). Personality recognition based on MRBPNN would therefore provide a reliable prediction system for personality traits on data sets with a very large number of instances.
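    The map/reduce split the abstract describes, with mappers each back-propagating on one block of data and a reducer merging the resulting models, can be sketched in miniature. Everything below (the one-neuron "network", the weight-averaging reducer, the function names) is a hypothetical illustration, not the paper's MRBPNN implementation:

```python
import math

def train_partition(data, epochs=200, lr=0.5):
    # Mapper: back-propagation on one data block.  The "network" here is
    # a single logistic unit so the sketch stays self-contained.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            grad = p - y              # dLoss/dz for log loss
            w -= lr * grad * x
            b -= lr * grad
    return w, b

def reduce_weights(models):
    # Reducer: merge per-partition models by averaging their weights.
    n = len(models)
    return (sum(w for w, _ in models) / n,
            sum(b for _, b in models) / n)

def predict(model, x, threshold=0.5):
    w, b = model
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= threshold else 0

# Two "mappers" train on separate blocks; the "reducer" merges them.
blocks = [[(-2.0, 0), (1.0, 1)], [(-1.0, 0), (2.0, 1)]]
merged = reduce_weights([train_partition(b) for b in blocks])
```

    In a real MapReduce job the mappers would run on separate nodes and the reducer would receive their weight vectors through the shuffle phase; averaging is only one of several possible merge rules.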

    COMET: A Recipe for Learning and Using Large Ensembles on Massive Data

    Full text link
    COMET is a single-pass MapReduce algorithm for learning on large-scale data. It builds multiple random forest ensembles on distributed blocks of data and merges them into a mega-ensemble. This approach is appropriate when learning from massive-scale data that is too large to fit on a single machine. For the best accuracy, IVoting should be used instead of bagging to generate the training subset for each decision tree in the forest. Experiments with two large datasets (5GB and 50GB compressed) show that COMET compares favorably, in both accuracy and training time, to learning on a subsample of the data with a serial algorithm. Finally, we propose a new Gaussian approach to lazy ensemble evaluation, which dynamically decides how many ensemble members to evaluate per data point; this can reduce evaluation cost by 100X or more.
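    The "Gaussian approach" to lazy ensemble evaluation can be illustrated with a simple early-stopping rule: evaluate members one at a time and stop once a normal confidence interval around the running vote mean clears the decision threshold. This is a hypothetical sketch of the idea, not COMET's actual rule or API:

```python
import math

def lazy_vote(members, x, z=2.576, min_votes=10, threshold=0.5):
    # Evaluate ensemble members one at a time; stop as soon as a normal
    # (Gaussian) confidence interval around the running mean vote no
    # longer straddles the decision threshold.
    votes = []
    for clf in members:
        votes.append(clf(x))
        n = len(votes)
        if n >= min_votes:
            mean = sum(votes) / n
            var = sum((v - mean) ** 2 for v in votes) / n
            half = z * math.sqrt(var / n)
            if mean - half > threshold or mean + half < threshold:
                break   # remaining members are very unlikely to flip the outcome
    return (sum(votes) / len(votes) >= threshold), len(votes)
```

    Unanimous points stop after the minimum vote count, while contentious points near the threshold consume more members; savings of that kind are where the claimed 100X reduction in evaluation cost would come from.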

    Computing resources sensitive parallelization of neural networks for large scale diabetes data modelling, diagnosis and prediction

    Get PDF
    Diabetes has become one of the most severe diseases due to the increasing number of diabetes patients globally. A large amount of digital data on diabetes has been collected through various channels. How to utilize these data sets to help doctors make decisions on the diagnosis, treatment and prediction of diabetic patients poses many challenges to the research community. This thesis investigates mathematical models, with a focus on neural networks, for large scale diabetes data modelling and analysis by utilizing modern computing technologies such as grid computing and cloud computing. These technologies give users inexpensive access to extensive computing resources over the Internet for solving data and computationally intensive problems. The thesis evaluates the performance of seven representative machine learning techniques in the classification of diabetes data; the results show that a neural network produces the best classification accuracy but incurs a high overhead in data training. As a result, the thesis develops MRNN, a parallel neural network model based on the MapReduce programming model, which has become an enabling technology for data intensive applications in the cloud. By partitioning the diabetic data set into a number of equally sized data blocks, the training workload is distributed among a number of computing nodes for speedup. MRNN is first evaluated in small scale experimental environments using 12 mappers and subsequently in large scale simulated environments using up to 1000 mappers. Both the experimental and simulation results show the effectiveness of MRNN in classification and its high scalability in data training. MapReduce does not, however, have a sophisticated job scheduling scheme for heterogeneous computing environments in which the computing nodes may have varied computing capabilities.
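    The "equally sized data blocks" step can be sketched as a plain partitioning helper; the function name and shape are illustrative assumptions, not MRNN's actual interface:

```python
def partition(data, num_mappers):
    # Split a dataset into num_mappers roughly equal, contiguous blocks,
    # so each mapper trains on one block.
    base, extra = divmod(len(data), num_mappers)
    blocks, start = [], 0
    for i in range(num_mappers):
        size = base + (1 if i < extra else 0)   # spread the remainder
        blocks.append(data[start:start + size])
        start += size
    return blocks
```

    With 12 mappers, a data set of N records yields blocks of size N // 12 or N // 12 + 1, keeping the per-mapper training workload balanced when the nodes are homogeneous.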
    For this purpose, the thesis develops a load balancing scheme based on genetic algorithms that aims to balance the training workload among heterogeneous computing nodes, so that nodes with more computing capacity receive more MapReduce jobs for execution. Divisible load theory is employed to guide the evolutionary process of the genetic algorithm towards fast convergence. The proposed load balancing scheme is evaluated in large scale simulated MapReduce environments with varied levels of heterogeneity using different sizes of data sets. All the results show that the genetic algorithm based load balancing scheme significantly reduces the makespan of job execution in comparison with the time consumed without load balancing.
    EThOS - Electronic Theses Online Service. EPSRC. China Market Association. United Kingdom.
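    A toy version of a genetic-algorithm balancer for heterogeneous nodes might look like the following. All names, the cost model, and the mutation rule are assumptions for illustration; the thesis's scheme additionally seeds the search with divisible load theory and uses MapReduce-specific cost models:

```python
import random

def makespan(assignment, job_cost, capacities):
    # Time for the slowest node to finish its share of MapReduce jobs.
    return max(jobs * job_cost / cap for jobs, cap in zip(assignment, capacities))

def ga_balance(total_jobs, capacities, job_cost=1.0,
               pop_size=30, generations=200, seed=0):
    # Toy GA: a chromosome assigns a job count to each node; fitness is
    # the makespan; mutation moves one job between two random nodes.
    rng = random.Random(seed)
    n = len(capacities)

    def random_assignment():
        cuts = sorted(rng.randint(0, total_jobs) for _ in range(n - 1))
        return [b - a for a, b in zip([0] + cuts, cuts + [total_jobs])]

    pop = [random_assignment() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: makespan(a, job_cost, capacities))
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            child = list(rng.choice(survivors))
            i, j = rng.randrange(n), rng.randrange(n)
            if child[i] > 0:                     # move one job between nodes
                child[i] -= 1
                child[j] += 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: makespan(a, job_cost, capacities))
```

    With node capacities [1, 2, 1] and 40 jobs, the proportional assignment [10, 20, 10] gives the minimal makespan of 10, and the toy GA closes in on it; this mirrors the idea that faster nodes should receive proportionally more jobs.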