868 research outputs found

    Boosting Combinatorial Problem Modeling with Machine Learning

    In the past few years, the area of Machine Learning (ML) has witnessed tremendous advancements, becoming a pervasive technology in a wide range of applications. One area that can significantly benefit from the use of ML is Combinatorial Optimization. The three pillars of constraint satisfaction and optimization problem solving, i.e., modeling, search, and optimization, can exploit ML techniques to boost their accuracy, efficiency, and effectiveness. In this survey we focus on the modeling component, whose effectiveness is crucial for solving the problem. The modeling activity has traditionally been shaped by optimization and domain experts interacting to produce realistic results. Machine Learning techniques can greatly ease this process, exploiting the available data either to create models or to refine expert-designed ones. We cover approaches that have recently been proposed to enhance the modeling process by learning single constraints, objective functions, or the whole model; we highlight themes common to multiple approaches and draw connections with related fields of research. Comment: Originally submitted to IJCAI201
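    One theme the survey covers is constraint acquisition: inducing a constraint from example solutions. The sketch below is a deliberately minimal, hypothetical illustration of that idea, learning the tightest global bound "sum(x) <= k" consistent with a set of positive examples; the constraint family and data are invented, not taken from any surveyed system.

```python
# Minimal constraint-acquisition sketch (hypothetical): from positive
# example solutions, learn the tightest "sum(x) <= k" constraint that
# all of them satisfy, then use it to check new candidates.

def learn_sum_bound(solutions):
    """Tightest upper bound on sum(x) satisfied by every observed solution."""
    return max(sum(s) for s in solutions)

def satisfies(candidate, bound):
    """Check a candidate assignment against the learned constraint."""
    return sum(candidate) <= bound

examples = [(1, 2, 3), (2, 2, 1), (0, 4, 1)]   # toy positive examples
bound = learn_sum_bound(examples)              # tightest bound consistent: 6
```

    Richer acquisition systems search over a library of constraint templates rather than a single family, but the generalize-from-examples pattern is the same.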

    Machine learning and its applications in reliability analysis systems

    In this thesis, we explore some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss possible applications of ML in improving RAs performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both Neural Network learning and Symbolic learning. In symbolic process learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating the RAs are discussed. Based on the results of our survey, we suggest that currently the best design of RAs is to embed model-based RAs, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still improvements that can be made through the application of Machine Learning. By implanting the 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture of La MORA.

    Improved Surrogates in Inertial Confinement Fusion with Manifold and Cycle Consistencies

    Neural networks have become very popular in surrogate modeling because of their ability to characterize arbitrary, high-dimensional functions in a data-driven fashion. This paper advocates for training surrogates that are consistent with the physical manifold, i.e., predictions are always physically meaningful, and cyclically consistent, i.e., the predictions of the surrogate, when passed through an independently trained inverse model, give back the original input parameters. We find that these two consistencies lead to surrogates that are superior in predictive performance, more resilient to sampling artifacts, and tend to be more data efficient. Using Inertial Confinement Fusion (ICF) as a test-bed problem, we model a 1D semi-analytic numerical simulator and demonstrate the effectiveness of our approach. Code and data are available at https://github.com/rushilanirudh/macc/ Comment: 10 pages, 6 figures
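    The cycle-consistency idea can be shown in miniature: a forward surrogate f maps inputs to outputs, an independently trained inverse g maps outputs back, and training penalizes the reconstruction error ||g(f(x)) - x||. The toy 1-D linear models below stand in for the paper's neural networks; the fitted coefficients are invented for illustration.

```python
# Hedged sketch of cycle consistency with toy 1-D models standing in
# for the forward surrogate and the independently trained inverse.

def forward(x):
    """Hypothetical fitted surrogate: y = 2x + 1."""
    return 2.0 * x + 1.0

def inverse(y):
    """Hypothetical inverse model: recover x from y."""
    return (y - 1.0) / 2.0

def cycle_error(x):
    """Reconstruction error |g(f(x)) - x| that cycle training penalizes."""
    return abs(inverse(forward(x)) - x)
```

    In the paper's setting both maps are neural networks and the cycle error becomes an extra loss term, pushing predictions onto the physically meaningful manifold.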

    Combining support vector machines and simulated annealing for stereovision matching with fish eye lenses in forest environments

    We present a novel strategy for computing disparity maps from omni-directional stereo images obtained with fish-eye lenses in forest environments. At a first segmentation stage, the method identifies textures of interest to be either matched or discarded. Two of them are identified by applying the powerful Support Vector Machines approach. At a second stage, a stereovision matching process is designed based on the application of four stereovision matching constraints: epipolarity, similarity, uniqueness, and smoothness. The epipolarity guides the process. The similarity and uniqueness are again mapped through Support Vector Machines, but in a different way from the previous case; after this, an initial disparity map is obtained. This map is later filtered by applying the Discrete Simulated Annealing framework, where the smoothness constraint is conveniently mapped. The combination of the segmentation and stereovision matching approaches is the main contribution. The method is compared against the use of simple features and combined similarity matching strategies. (C) 2011 Elsevier Ltd. All rights reserved.
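    The final filtering stage can be sketched generically: discrete simulated annealing lowers a smoothness energy over the disparity values. The toy version below works on a 1-D profile with an invented energy, cooling schedule, and data; it illustrates the Metropolis accept/reject mechanics, not the paper's actual implementation.

```python
import math
import random

# Toy simulated-annealing smoother for a 1-D "disparity" profile.
# Energy = sum of squared neighbor differences (the smoothness term).

def energy(d):
    return sum((d[i + 1] - d[i]) ** 2 for i in range(len(d) - 1))

def anneal(d, steps=2000, t0=2.0, seed=0):
    rng = random.Random(seed)
    cur, e = list(d), energy(d)
    best, be = cur[:], e
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6          # linear cooling schedule
        cand = cur[:]
        cand[rng.randrange(len(cand))] += rng.choice((-1, 1))  # local move
        e2 = energy(cand)
        # Metropolis rule: accept improvements; accept worse moves with
        # probability exp(-dE/t), which shrinks as the temperature drops.
        if e2 < e or rng.random() < math.exp((e - e2) / t):
            cur, e = cand, e2
            if e < be:
                best, be = cur[:], e
    return best
```

    In the paper the energy additionally encodes the SVM-derived similarity and uniqueness evidence, so annealing trades smoothness against match quality rather than smoothing blindly.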

    PLANNING UNDER UNCERTAINTIES FOR AUTONOMOUS DRIVING ON URBAN ROAD

    Ph.D., Doctor of Philosophy

    Synthetic Data -- what, why and how?

    This explainer document aims to provide an overview of the current state of the rapidly expanding work on synthetic data technologies, with a particular focus on privacy. The article is intended for a non-technical audience, though some formal definitions have been given to provide clarity to specialists. It should enable the reader to quickly become familiar with the notion of synthetic data, as well as to understand some of the subtle intricacies that come with it. We believe that synthetic data is a very useful tool, and our hope is that this report highlights that, while drawing attention to nuances that can easily be overlooked in its deployment. Comment: Commissioned by the Royal Society. 57 pages, 2 figures
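    One simple privacy-aware synthesis recipe of the kind such reports discuss: sample synthetic records from a Laplace-noised histogram of a real column, in the flavor of differential privacy. The sketch below is illustrative only; the epsilon value, column, and data are hypothetical, and real deployments need far more care (sensitivity analysis, post-processing, composition).

```python
import random

# Sketch: differentially-private-flavored categorical synthesis via a
# Laplace-noised histogram. All parameters here are hypothetical.

def noisy_histogram(values, categories, epsilon=1.0, seed=0):
    """Count each category, then add Laplace(0, 1/epsilon) noise."""
    rng = random.Random(seed)
    scale = 1.0 / epsilon
    def laplace():
        # Difference of two Exp(1/scale) draws is Laplace(0, scale).
        return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return {c: max(0.0, sum(1 for v in values if v == c) + laplace())
            for c in categories}

def sample_synthetic(hist, n, seed=1):
    """Draw n synthetic records proportionally to the noisy counts."""
    rng = random.Random(seed)
    cats = list(hist)
    weights = [hist[c] for c in cats]
    return rng.choices(cats, weights=weights, k=n)
```

    Because only the noisy counts touch the real data, the synthetic sample inherits whatever privacy guarantee the noise provides, at the cost of some distributional distortion.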

    Prediction of Airport Arrival Rates Using Data Mining Methods

    This research sought to establish and utilize relationships between environmental variable inputs and airport efficiency estimates by data mining archived weather and airport performance data at ten geographically and climatologically different airports. Several meaningful relationships were discovered using various statistical modeling methods within an overarching data mining protocol, and the developed models were tested using historical data. Additionally, a selected model was deployed using real-time predictive weather information to estimate airport efficiency as a demonstration of potential operational usefulness. This work employed SAS® Enterprise Miner™ data mining and modeling software to train and validate decision tree, neural network, and linear regression models to estimate the importance of weather input variables in predicting Airport Arrival Rates (AAR) using the FAA’s Aviation System Performance Metric (ASPM) database. The ASPM database contains airport performance statistics and limited weather variables archived at 15-minute and hourly intervals, and these data formed the foundation of this study. To add more weather parameters to the data mining environment, National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) meteorological hourly station data were merged with the ASPM data to increase the number of environmental variables (e.g., precipitation type and amount) in the analyses. Using SAS® Enterprise Miner™, three different types of models were created, compared, and scored at the following ten airports: a) Hartsfield-Jackson Atlanta International Airport (ATL), b) Los Angeles International Airport (LAX), c) O’Hare International Airport (ORD), d) Dallas/Fort Worth International Airport (DFW), e) John F. Kennedy International Airport (JFK), f) Denver International Airport (DEN), g) San Francisco International Airport (SFO), h) Charlotte-Douglas International Airport (CLT), i) LaGuardia Airport (LGA), and j) Newark Liberty International Airport (EWR). At each location, weather inputs were used to estimate AARs as a metric of efficiency easily interpreted by FAA airspace managers. To estimate Airport Arrival Rates, three data sets were used: a) 15-minute ASPM data, b) hourly ASPM data, and c) a merged ASPM and meteorological hourly station data set. For all three data sets, the models were trained and validated using data from 2014 and 2015, and then tested using 2016 data. Additionally, a selected airport model was deployed using National Weather Service (NWS) Localized Aviation MOS (Model Output Statistics) Program (LAMP) weather guidance as the input variables over a 24-hour period as a test. The resulting AAR output predictions were then compared with the real-world AARs observed. Based on model scoring using 2016 data, LAX, ATL, and EWR demonstrated useful predictive performance that could potentially be applied to estimate real-world AARs. Marginal but perhaps useful AAR predictions might be gleaned operationally at LGA, SFO, and DFW, as the number of successfully scored cases fell loosely within one standard deviation of acceptable model performance, arbitrarily set at ten percent of the airport’s maximum AAR. The remaining models studied (DEN, CLT, ORD, and JFK) appeared to have little useful operational application based on the 2016 model scoring results.
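    The decision-tree modeling step can be illustrated in miniature with a one-split regression "tree" (a decision stump) that predicts an arrival rate from a single weather input. The feature, thresholds, and numbers below are invented for illustration; the study itself used SAS® Enterprise Miner™ on the full ASPM/NCEI feature set.

```python
# Hypothetical miniature of weather-to-AAR modeling: fit a two-leaf
# regression stump by minimizing squared error over candidate splits.

def fit_stump(xs, ys):
    """Pick the threshold on x minimizing the squared error of a
    two-leaf piecewise-constant model; return the predictor."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # a split must leave data on both sides
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

# Toy data: visibility (miles) vs. observed AAR (invented numbers).
vis = [0.5, 1.0, 2.0, 5.0, 8.0, 10.0]
aar = [30, 32, 34, 60, 62, 64]
predict = fit_stump(vis, aar)
```

    A full decision tree applies this split search recursively over many weather variables, which is how such tools also surface variable importance.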

    Gaining Insight into Determinants of Physical Activity using Bayesian Network Learning

    Contains full text: 228326pre.pdf (preprint version, Open Access); 228326pub.pdf (publisher's version, Open Access). BNAIC/BeneLearn 202