
    A scalable system for factored learning in the cloud

    Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 79-81).

    This work presents FlexGP, a new system designed for scalable machine learning in the cloud. FlexGP takes a learner-agnostic, data-parallel approach to cloud-based distributed learning using existing single-machine algorithms, without any dependence on distributed file systems or shared memory between instances. We design and implement asynchronous, decentralized launch and peer discovery protocols to start and configure a distributed network of learners. Through a unique process of factoring the data and parameters across the learners, FlexGP ensures this network consists of heterogeneous learners producing diverse models. These models are then filtered and fused to produce a meta-model for prediction. Using a thoughtfully designed test framework, FlexGP is run on a real-world regression problem from a large database. The results demonstrate the reliability and robustness of the system, even when learning from very little training data and across multiple factorings, and establish FlexGP as a vital tool for effectively leveraging the cloud for machine learning tasks.

    by Owen C. Derby. M.Eng.
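    The factor, filter, and fuse flow described in this abstract can be illustrated with a small single-process sketch. This is not FlexGP's actual code: FlexGP's learners are genetic programming based and run on separate cloud instances, so the decision-tree learners, the synthetic data, and the helper names (make_factor, meta_predict) below are illustrative assumptions only.

        # Illustrative sketch (not FlexGP's code): factor data and parameters across
        # independent learners, then filter and fuse the resulting models.
        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 10))
        y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=1000)
        X_val, y_val = X[800:], y[800:]  # held-out rows used for filtering

        # Factoring: each learner receives a random subset of the training rows
        # (data factor) and a different hyperparameter setting (parameter factor).
        def make_factor(n_rows, depth):
            idx = rng.choice(800, size=n_rows, replace=False)
            return idx, DecisionTreeRegressor(max_depth=depth)

        models = []
        for idx, model in (make_factor(300, d) for d in (2, 4, 6, 8)):
            model.fit(X[idx], y[idx])
            models.append(model)

        # Filtering: keep the models whose validation error is at or below the median.
        errors = [np.mean((m.predict(X_val) - y_val) ** 2) for m in models]
        kept = [m for m, e in zip(models, errors) if e <= np.median(errors)]

        # Fusing: the meta-model averages the surviving models' predictions.
        def meta_predict(X_new):
            return np.mean([m.predict(X_new) for m in kept], axis=0)

        print("meta-model validation MSE:", np.mean((meta_predict(X_val) - y_val) ** 2))

    In FlexGP itself the learners run on separate cloud instances without shared memory or a distributed file system; the sketch collapses everything into one process purely to show the data flow from factored learners to a fused meta-model.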

    Multiple levels of parallelism in distributed machine learning via genetic programming

    Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (pages 105-107).

    This thesis presents FlexGP 2.0, a distributed, cloud-backed machine learning system. FlexGP 2.0 features multiple levels of parallelism which provide a significant improvement in accuracy versus elapsed time. The computational resources in FlexGP 2.0 can be scaled along several dimensions to support large, complex data. FlexGP 2.0's core genetic programming (GP) learner includes multithreaded C++ model evaluation and a multi-objective optimization algorithm which is extensible to pursue any number of objectives simultaneously in parallel. FlexGP 2.0 parallelizes the entire learner to obtain a large distributed population, and leverages communication between learners to increase performance through the transfer of search progress between them. FlexGP 2.0 also factors the training data to boost performance and to support increased data size and complexity. Several experiments, run on a large dataset from a real-world regression problem, verify the efficacy of FlexGP 2.0's multilevel parallelism. The results demonstrate both less time to reach a given accuracy and higher overall accuracy, and illustrate the value of FlexGP 2.0 as a platform for machine learning.

    by Dylan J. Sherry. M.Eng.
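    The transfer of search progress between learners described in this abstract is, in essence, island-style migration: independently evolving populations periodically exchange their best candidates. The sketch below is not FlexGP 2.0's implementation (which uses distributed GP learners with multithreaded C++ evaluation and multi-objective search); the single-process loop, the toy constant-fitting fitness, and the choice of three migrants are assumptions made only to show the mechanism.

        # Illustrative sketch (not FlexGP 2.0's code): two "islands" evolve
        # independently and periodically swap their best candidates, so progress
        # made by one learner transfers to the other.
        import random

        random.seed(0)
        TARGET = 3.7  # toy problem: evolve a constant close to TARGET

        def fitness(x):
            return -abs(x - TARGET)  # higher is better

        def evolve(pop, generations=5):
            for _ in range(generations):
                parents = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
                pop = [p + random.gauss(0, 0.1) for p in parents for _ in (0, 1)]
            return pop

        islands = [[random.uniform(-10, 10) for _ in range(20)] for _ in range(2)]

        for epoch in range(4):
            islands = [evolve(pop) for pop in islands]
            # Migration: each island's best three individuals replace the other
            # island's worst three.
            migrants = [sorted(pop, key=fitness, reverse=True)[:3] for pop in islands]
            for i, pop in enumerate(islands):
                pop.sort(key=fitness)      # worst individuals first
                pop[:3] = migrants[1 - i]  # replace them with the peer's best

        best = max(islands[0] + islands[1], key=fitness)
        print("best candidate after migration:", best)

    In FlexGP 2.0 the learners run on separate cloud instances and communicate over the network, and this sits alongside the other levels of parallelism the abstract lists: multithreaded model evaluation within a learner and multiple objectives pursued in parallel.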