2 research outputs found

    Scalable Bayesian Network Learning and its Applications

    No full text
    The Bayesian network is a powerful tool for modeling cause-effect and other uncertain relations between variables in a domain of interest. Probabilistic reasoning with a Bayesian network offers prediction of one or more unobserved variables of interest, given evidence. To use a Bayesian network in a real-world problem, one may need to learn the structure, the parameters, or both from data. However, learning Bayesian networks from high-dimensional and large datasets is a computationally challenging problem. Parameter learning from large datasets demands considerable computational and memory resources. Moreover, the runtime of theoretically correct structure learning algorithms (such as Hill Climbing and PC) is super-linear in the number of data dimensions. This research develops scalable techniques for both structure learning and parameter learning of Bayesian networks from data. For the parameter learning task, we propose a novel decomposition of the Expectation Maximization algorithm in the MapReduce framework, where computation is performed in parallel across partitions of the data records. This learning method can handle both complete and incomplete data: in a complete dataset every feature has a value in every record, while an incomplete dataset contains missing values. For the Bayesian network structure learning task, a novel score-based method is developed. Score-based structure learning may seem inherently sequential, due to its use of iterative improvement steps. However, we bring parallelism to the score-based structure learning paradigm. This is done by organizing the candidate updates for a given structure in a matrix, partitioning the matrix into blocks, and computing scores for each block in parallel. Moreover, we maintain an archive of promising structures that appear in the search path and use them as starting points for restarts. This mechanism helps prevent the search from getting trapped in locally optimal solutions.
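The block-parallel score computation described above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the `score_fn` callback stands in for a real structure score (e.g. BIC or BDeu), and `ThreadPoolExecutor` stands in for whatever parallel backend is used. The matrix of candidate edge updates for an n-variable network is partitioned into square blocks, each block is scored independently, and the best-scoring candidate is selected.

```python
from concurrent.futures import ThreadPoolExecutor

def score_block(block, score_fn):
    """Score every candidate edge update (i, j) in one matrix block."""
    return [(score_fn(i, j), (i, j)) for i, j in block]

def best_update_parallel(n, score_fn, block_size=2, workers=4):
    """Partition the n x n matrix of candidate edge updates into blocks
    and score the blocks in parallel, returning (score, (i, j)) for the
    best candidate. score_fn is a hypothetical stand-in for a real
    decomposable structure score."""
    blocks = []
    for bi in range(0, n, block_size):
        for bj in range(0, n, block_size):
            block = [(i, j)
                     for i in range(bi, min(bi + block_size, n))
                     for j in range(bj, min(bj + block_size, n))
                     if i != j]               # exclude self-loops
            if block:
                blocks.append(block)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scored = pool.map(score_block, blocks, [score_fn] * len(blocks))
    return max(s for block_scores in scored for s in block_scores)
```

Because candidate scores are independent of one another for a fixed current structure, the blocks can be scored in any order, which is what makes this step of an otherwise sequential hill-climbing search parallelizable.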
We apply the proposed techniques to several datasets, including two real-world engineering problems: smart building optimization and next-generation air traffic control. For smart building optimization, we study the isolation of candidate causes of adverse events in a building heating-cooling system. We propose a novel scalable causal learning (SCL) method with Bayesian network structure learning as its central piece. Experimental results on a dataset collected from a Building Automation System (BAS) show improved prediction accuracy and reduced computation time of SCL compared to existing algorithms. For next-generation air traffic control, we improve the prediction of aircraft taxi times in airports via Bayesian network uncertainty modeling of surface traffic. We apply both Bayesian network structure learning and parameter learning to datasets obtained from surface traffic simulations for three airports in the New York City multiplex: JFK, LGA, and EWR. We use junction tree inference on the trained model to obtain the posterior distribution of transit time variables. The uncertainty model of the transit times is used by a scheduler to minimize the overall delay of departures while maximizing runway throughput. Existing approaches rely heavily on subject matter expert models and are therefore limited in scope. In contrast, the scalable structure learning approach relies on data, and thereby enables the use of Bayesian networks at any airport where prior expert knowledge is limited or unavailable.
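To make the posterior computation concrete: on a toy two-node network the junction tree reduces to a single clique, so exact inference is plain enumeration. The sketch below uses hypothetical CPTs (the variables, states, and probabilities are invented for illustration, not taken from the airport models) to show how evidence updates the posterior over a transit-time variable.

```python
# Hypothetical toy network A -> T, where A is a traffic-condition flag
# and T is a discretized transit time. All numbers are made up.
p_a = {0: 0.7, 1: 0.3}                              # prior P(A)
p_t_given_a = {(0, 'short'): 0.8, (0, 'long'): 0.2,  # CPT P(T | A)
               (1, 'short'): 0.4, (1, 'long'): 0.6}

def posterior_t(evidence_a=None):
    """Posterior P(T | evidence) by direct enumeration; on this
    two-node network, junction tree inference reduces to exactly
    this sum-and-normalize computation."""
    post = {'short': 0.0, 'long': 0.0}
    for a, pa in p_a.items():
        if evidence_a is not None and a != evidence_a:
            continue  # drop terms inconsistent with the evidence
        for t in post:
            post[t] += pa * p_t_given_a[(a, t)]
    z = sum(post.values())
    return {t: v / z for t, v in post.items()}
```

With no evidence this marginalizes A out of the joint; conditioning on A = 1 shifts probability mass toward longer transit times, which is the kind of posterior the scheduler consumes.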

    MapReduce for Bayesian Network Parameter Learning using the EM Algorithm

    No full text
    This work applies the distributed computing framework MapReduce to Bayesian network parameter learning from incomplete data. We formulate the classical Expectation Maximization (EM) algorithm within the MapReduce framework. Analytically and experimentally, we analyze the speed-up that can be obtained by means of MapReduce. We present details of the MapReduce formulation of EM, report speed-ups versus the sequential case, and carefully compare various Hadoop cluster configurations in experiments with Bayesian networks of different sizes and structures.
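The shape of the MapReduce formulation can be sketched in miniature. This is a hedged illustration of the general pattern, not the paper's Hadoop implementation: map tasks compute expected sufficient statistics per data partition, a reduce task sums them, and the M-step normalizes the totals. The toy model here is a two-node network A -> B with binary variables, where B may be missing (`None`) in some records.

```python
from collections import defaultdict

def e_step_map(partition, p_b_given_a):
    """Map task: expected sufficient statistics for one data partition.
    Each record is (a, b), where b may be None (missing)."""
    counts = defaultdict(float)   # (a, b) -> expected count
    for a, b in partition:
        if b is not None:
            counts[(a, b)] += 1.0
        else:
            # E-step: distribute the record over both values of B
            # in proportion to the current conditional probabilities.
            for bv in (0, 1):
                counts[(a, bv)] += p_b_given_a[(a, bv)]
    return counts

def reduce_counts(partial_counts):
    """Reduce task: sum expected counts across all partitions."""
    total = defaultdict(float)
    for counts in partial_counts:
        for key, c in counts.items():
            total[key] += c
    return total

def m_step(total):
    """M-step: normalize expected counts into P(B | A)."""
    p = {}
    for a in (0, 1):
        z = total[(a, 0)] + total[(a, 1)]
        for bv in (0, 1):
            p[(a, bv)] = total[(a, bv)] / z if z > 0 else 0.5
    return p

def mapreduce_em(partitions, iterations=30):
    """Iterate EM; the per-partition map calls are the parallelizable step."""
    p = {(a, b): 0.5 for a in (0, 1) for b in (0, 1)}  # uniform init
    for _ in range(iterations):
        partials = [e_step_map(part, p) for part in partitions]
        p = m_step(reduce_counts(partials))
    return p
```

Because the E-step decomposes record by record, each partition's expected counts can be computed independently on a separate worker, and the reduce is a simple sum; this is what makes EM fit the MapReduce model for both complete and incomplete data.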