
    An ADMM Based Framework for AutoML Pipeline Configuration

    We study the AutoML problem of automatically configuring machine learning pipelines by jointly selecting algorithms and their appropriate hyper-parameters for all steps in supervised learning pipelines. This black-box (gradient-free) optimization with mixed integer and continuous variables is a challenging problem. We propose a novel AutoML scheme that leverages the alternating direction method of multipliers (ADMM). The proposed framework is able to (i) decompose the optimization problem into easier sub-problems that have a reduced number of variables and circumvent the challenge of mixed variable categories, and (ii) incorporate black-box constraints alongside the black-box optimization objective. We empirically evaluate the flexibility (in utilizing existing AutoML techniques), effectiveness (against open-source AutoML toolkits), and unique capability (of executing AutoML with practically motivated black-box constraints) of our proposed scheme on a collection of binary classification data sets from the UCI ML and OpenML repositories. We observe that, on average, our framework provides significant gains in comparison to other AutoML frameworks (Auto-sklearn and TPOT), highlighting the practical advantages of this framework.
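    The decomposition described above can be pictured as alternating between two easier sub-problems: one over the categorical algorithm choices and one over the continuous hyper-parameters. The following Python sketch illustrates only that splitting idea on a synthetic objective; it is not the paper's ADMM formulation and omits its operator splitting, augmented-Lagrangian coupling, and black-box constraint handling.

        # Toy alternating decomposition for mixed-variable pipeline configuration.
        # All names, search spaces, and the objective are illustrative assumptions,
        # not the paper's ADMM formulation or API.
        import random

        ALGORITHMS = ["logreg", "random_forest", "svm"]             # categorical variables
        CONT_BOUNDS = {"C": (1e-3, 1e3), "max_depth": (2.0, 20.0)}  # continuous variables

        def evaluate(algorithm, hypers):
            """Black-box objective: stand-in for a cross-validated pipeline error."""
            base = {"logreg": 0.30, "random_forest": 0.25, "svm": 0.28}[algorithm]
            return base + 0.01 * abs(hypers["C"] - 1.0) + 0.005 * abs(hypers["max_depth"] - 8.0)

        def optimize_continuous(algorithm, n_trials=50):
            """Sub-problem 1: continuous hyper-parameters with the algorithm choice fixed."""
            best = None
            for _ in range(n_trials):
                hypers = {k: random.uniform(lo, hi) for k, (lo, hi) in CONT_BOUNDS.items()}
                score = evaluate(algorithm, hypers)
                if best is None or score < best[0]:
                    best = (score, hypers)
            return best

        def optimize_categorical(hypers):
            """Sub-problem 2: algorithm choice with the continuous values fixed."""
            return min((evaluate(a, hypers), a) for a in ALGORITHMS)

        # Alternate between the two smaller sub-problems for a few rounds.
        algorithm = random.choice(ALGORITHMS)
        for _ in range(3):
            score, hypers = optimize_continuous(algorithm)
            score, algorithm = optimize_categorical(hypers)
        print(algorithm, hypers, round(score, 3))

    In the full framework, each such sub-problem could be handed to any existing black-box optimizer (for example Bayesian optimization) in place of the random search used in this sketch.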

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models. Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar

    Learning Multiple Defaults for Machine Learning Algorithms

    The performance of modern machine learning methods highly depends on their hyperparameter configurations. One simple way of selecting a configuration is to use default settings, often proposed along with the publication and implementation of a new algorithm. Those default values are usually chosen in an ad-hoc manner to work well enough on a wide variety of datasets. To address this problem, different automatic hyperparameter configuration algorithms have been proposed, which select an optimal configuration per dataset. This principled approach usually improves performance, but adds algorithmic complexity and computational cost to the training procedure. As an alternative, we propose learning a set of complementary default values from a large database of prior empirical results. Selecting an appropriate configuration on a new dataset then requires only a simple, efficient, and embarrassingly parallel search over this set. We demonstrate the effectiveness and efficiency of the proposed approach in comparison to random search and Bayesian optimization.
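    A minimal sketch of the idea follows, using assumed data and a greedy, set-cover-style selection rule that stands in for whatever procedure the authors actually use: learn a few complementary defaults offline from a matrix of prior results, then evaluate only that small set on a new dataset.

        # Toy version: pick k complementary defaults from prior results, then search
        # only that set on a new dataset. The greedy selection rule and the random
        # performance matrix are illustrative assumptions, not the authors' procedure.
        import numpy as np

        rng = np.random.default_rng(0)
        # perf[i, j]: performance of candidate configuration i on prior dataset j
        perf = rng.uniform(0.6, 0.95, size=(100, 30))

        def greedy_defaults(perf, k=4):
            """Greedily pick k configurations maximizing the best-per-dataset score."""
            chosen = []
            covered = np.full(perf.shape[1], -np.inf)
            for _ in range(k):
                gains = np.maximum(perf, covered).mean(axis=1)  # gain of adding each config
                gains[chosen] = -np.inf                         # never re-select a config
                best = int(np.argmax(gains))
                chosen.append(best)
                covered = np.maximum(covered, perf[best])
            return chosen

        defaults = greedy_defaults(perf)

        # On a new dataset, only these few configurations are evaluated: a simple,
        # embarrassingly parallel search over a fixed, small set.
        def evaluate_on_new_dataset(config_id):
            return rng.uniform(0.6, 0.95)  # stand-in for a real cross-validation run

        best_config = max(defaults, key=evaluate_on_new_dataset)
        print("complementary defaults:", defaults, "-> chosen:", best_config)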

    Explainable Predictive Monitoring of Temporal Measures of Business Processes

    Modern enterprise systems collect detailed data about the execution of the business processes they support. The widespread availability of such data in companies, coupled with advances in machine learning, has led to the emergence of data-driven and predictive approaches to monitor the performance of business processes. By using such predictive process monitoring approaches, potential performance issues can be anticipated and proactively mitigated. Various approaches have been proposed to address typical predictive process monitoring questions, such as what is the most likely continuation of an ongoing process instance, or when it will finish. However, most existing approaches prioritize accuracy over explainability. Yet in practice, explainability is a critical property of predictive methods. It is not enough to accurately predict that a running process instance will end up in an undesired outcome. It is also important for users to understand why this prediction is made and what can be done to prevent this undesired outcome. This thesis proposes two methods to build predictive models to monitor business processes in an explainable manner. This is achieved by decomposing a prediction into its elementary components. For example, to explain that the remaining execution time of a process execution is predicted to be 20 hours, we decompose this prediction into the predicted execution time of each activity that has not yet been executed. We evaluate the proposed methods against each other and various state-of-the-art baselines using a range of business processes from multiple domains. The evaluation reaffirms a fundamental trade-off between the explainability and accuracy of predictions. The research contributions of the thesis have been consolidated into an open-source tool for predictive business process monitoring, namely Nirdizati. It can be used to train predictive models using the methods described in this thesis, as well as third-party methods. These models are then used to make predictions for ongoing process instances; thus, the tool can also support users at runtime.
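    A minimal sketch of this decomposition, assuming a toy process and plain historical means in place of the trained per-activity models used in the thesis:

        # Toy decomposition of a remaining-time prediction into per-activity parts.
        # Activity names, the tiny event data, and the use of historical means as
        # per-activity models are illustrative assumptions, not Nirdizati's pipeline.
        history = {                       # completed traces: activity -> durations (hours)
            "register": [1.0, 1.5, 0.8],
            "review":   [6.0, 7.5, 5.0],
            "approve":  [2.0, 2.5, 1.5],
            "notify":   [0.5, 0.4, 0.6],
        }
        mean_duration = {a: sum(d) / len(d) for a, d in history.items()}
        full_process = ["register", "review", "approve", "notify"]

        def explainable_remaining_time(executed):
            """Return the predicted remaining time plus its per-activity breakdown."""
            breakdown = {a: mean_duration[a] for a in full_process if a not in executed}
            return sum(breakdown.values()), breakdown

        total, parts = explainable_remaining_time(executed=["register"])
        print(f"predicted remaining time: {total:.1f} h")
        for activity, hours in parts.items():
            print(f"  {activity}: {hours:.1f} h")   # the explanation of the total

    The per-activity breakdown serves as the explanation: the total is exactly the sum of its printed parts, so a user can see which upcoming activities drive a long predicted remaining time.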

    Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop

    The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techniques specifically developed for analyzing and understanding the inner workings and representations acquired by neural models of language. Approaches included: systematic manipulation of input to neural networks and investigating the impact on their performance, testing whether interpretable knowledge can be decoded from intermediate representations acquired by neural networks, proposing modifications to neural network architectures to make their knowledge state or generated output more explainable, and examining the performance of networks on simplified or formal languages. Here we review a number of representative studies in each category.
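    One of the approaches listed above, decoding interpretable knowledge from intermediate representations, is commonly implemented as a probing (diagnostic) classifier. The sketch below uses random vectors in place of real hidden states and an assumed binary property, so it illustrates the general technique rather than any specific study from the workshop.

        # Toy probing ("diagnostic") classifier: test whether a property can be decoded
        # from intermediate representations. Random vectors stand in for real hidden
        # states; the property and dimensions are illustrative assumptions. Requires
        # numpy and scikit-learn.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n, dim = 2000, 64
        hidden_states = rng.normal(size=(n, dim))        # stand-in for layer activations
        labels = (hidden_states[:, 3] > 0).astype(int)   # pretend property (e.g. plural vs. singular)

        X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, labels, random_state=0)
        probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print("probe accuracy:", accuracy_score(y_te, probe.predict(X_te)))
        # High accuracy suggests the property is linearly decodable from the
        # representation; chance-level accuracy suggests it is not (linearly) encoded.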

    The Integration of Reading and Science to Aid Problem Readers

    The purpose of this paper is to explain a curriculum package which was designed for science students at Orange Park IX, ninth grade center, Clay County, Florida. The target population consists of those students who read below the sixth-grade level according to Stanford Achievement Test (SAT) scores and who are enrolled in a general science class. These students are also enrolled in a Reading Skills class, and some are in the SLD and ED programs as well. Although there will be interaction with the reading, SLD, and ED teachers, the classes will not be team taught. Therefore, the science curriculum is intended to be contained within the fifty-minute sessions allowed for science classes.