4,123 research outputs found

    Deep Learning: Our Miraculous Year 1990-1991

    In 2020, we will celebrate that many of the basic ideas behind the deep learning revolution were published three decades ago within fewer than 12 months in our "Annus Mirabilis" or "Miraculous Year" 1990-1991 at TU Munich. Back then, few people were interested, but a quarter century later, neural networks based on these ideas were on over 3 billion devices such as smartphones, and used many billions of times per day, consuming a significant fraction of the world's compute.
    Comment: 37 pages, 188 references, based on work of 4 Oct 201

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase-ordering (choosing the order in which to apply them). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase-ordering of optimizations. The survey highlights the approaches taken so far, the results obtained, a fine-grained classification of the different approaches and, finally, the influential papers of the field.
    Comment: version 5.0 (updated on September 2018) - Preprint version for our accepted journal @ ACM CSUR 2018 (42 pages) - This survey will be updated quarterly here (send me your newly published papers to be added in the subsequent version). History: Received November 2016; Revised August 2017; Revised February 2018; Accepted March 2018
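
    As a rough illustration of the phase-ordering problem described in the abstract above (not code from the survey), the Python sketch below treats pass ordering as a search guided by a learned cost model. The pass names, the synthetic measure_runtime stand-in, and the pairwise-precedence features are all assumptions made for the example.

    # Minimal sketch, not from the survey: phase-ordering as search over pass orders,
    # guided by a learned cost model. Pass names and the runtime measurement are
    # placeholders; a real setup would compile and time an actual benchmark.
    import itertools
    import random
    from sklearn.ensemble import RandomForestRegressor

    PASSES = ["inline", "loop-unroll", "vectorize", "dce"]  # hypothetical pass names

    def measure_runtime(order):
        # Stand-in for compiling with this pass order and timing the binary.
        random.seed(hash(tuple(order)) % (2 ** 32))
        return random.uniform(0.8, 1.2)

    def featurize(order):
        # Encode an ordering as pairwise precedence indicators (one simple choice).
        return [1.0 if order.index(a) < order.index(b) else 0.0
                for a, b in itertools.combinations(PASSES, 2)]

    # Collect (order, runtime) samples and fit a cost model.
    samples = [random.sample(PASSES, k=len(PASSES)) for _ in range(50)]
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit([featurize(o) for o in samples], [measure_runtime(o) for o in samples])

    # Pick the ordering the model predicts to be fastest (24 candidates here).
    best = min(itertools.permutations(PASSES),
               key=lambda o: model.predict([featurize(list(o))])[0])
    print("predicted best pass order:", best)

    In a realistic autotuner the features would also describe the program being compiled, and the measurements would come from actual compilations rather than a synthetic stand-in.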

    Topic Discovery of Online Course Reviews Using LDA with Leveraging Reviews Helpfulness

    Despite the popularity of Massive Open Online Courses (MOOCs), little research has been done to understand the factors that influence the teaching-learning process on these massive online platforms. When we applied a topic modeling approach to all reviews, the resulting topics contained terms that require prior knowledge to interpret, e.g. "Chuck", the name of the instructor. We therefore propose applying the topic modeling approach only to reviews rated as helpful. The results show five influential factors, including "learn easy excellent class program", "python learn class easy lot", "Program learn easy python time game", and "learn class python time game". The results also show that the proposed method improves the perplexity score of the LDA model.
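
    A minimal sketch of the kind of pipeline the abstract describes: filter reviews by helpfulness, fit LDA on the remaining texts, and inspect topics and perplexity. The toy reviews, the helpfulness threshold, and the use of scikit-learn are assumptions for illustration, not the paper's actual data or code.

    # Minimal sketch (assumed pipeline): keep only reviews judged helpful,
    # fit LDA on them, and report topic terms plus model perplexity.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    reviews = [  # (review text, helpfulness votes) -- toy data, not the paper's
        ("easy to learn python in this excellent class", 12),
        ("the program assignments build a small game each week", 8),
        ("great class, learned a lot of python in little time", 9),
        ("audio quality was poor in week two", 1),
    ]
    HELPFUL_THRESHOLD = 5  # assumed cutoff separating helpful reviews

    helpful_texts = [text for text, votes in reviews if votes >= HELPFUL_THRESHOLD]

    vectorizer = CountVectorizer(stop_words="english")
    doc_term = vectorizer.fit_transform(helpful_texts)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(doc_term)

    print("perplexity:", lda.perplexity(doc_term))
    terms = vectorizer.get_feature_names_out()
    for topic_idx, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[-5:][::-1]]
        print(f"topic {topic_idx}:", " ".join(top))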

    Sentiment analysis in MOOCs: a case study

    Proceeding of: 2018 IEEE Global Engineering Education Conference (EDUCON2018), 17-20 April 2018, Santa Cruz de Tenerife, Canary Islands, Spain.
    Forum messages in MOOCs (Massive Open Online Courses) are the most important source of information about the social interactions happening in these courses. Forum messages can be analyzed to detect patterns and learners' behaviors. In particular, sentiment analysis (e.g., classification into positive and negative messages) can be used as a first step for identifying complex emotions, such as excitement, frustration or boredom. The aim of this work is to compare different machine learning algorithms for sentiment analysis, using a real case study to check how the results can provide information about learners' emotions or patterns in the MOOC. Both supervised and unsupervised (lexicon-based) algorithms were used for the sentiment analysis. The best approaches found were Random Forest and one lexicon-based method that uses dictionaries of words. The analysis of the case study also showed an evolution of positivity over time, with the most positive sentiment at the beginning of the course and the most negative near the deadlines of peer-review assessments.
    This work has been co-funded by the Madrid Regional Government, through the eMadrid Excellence Network (S2013/ICE-2715), by the European Commission through the Erasmus+ projects MOOC-Maker (561533-EPP-1-2015-1-ES-EPPKA2-CBHE-JP), SHEILA (562080-EPP-1-2015-1-BE-EPPKA3-PI-FORWARD), and LALA (586120-EPP-1-2017-1-ES-EPPKA2-CBHE-JP), and by the Spanish Ministry of Economy and Competitiveness, projects SNOLA (TIN2015-71669-REDT), RESET (TIN2014-53199-C3-1-R) and Smartlet (TIN2017-85179-C3-1-R). The latter is financed by the State Research Agency in Spain (AEI) and the European Regional Development Fund (FEDER). It has also been supported by the Spanish Ministry of Education, Culture and Sport, under an FPU fellowship (FPU016/00526).
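
    A minimal sketch of the two families of approaches compared above: a supervised Random Forest classifier next to a small dictionary-based scorer. The forum messages, labels, and word lists are invented for illustration and are not the dictionaries or data used in the study.

    # Minimal sketch (assumed data and lexicon): supervised vs. lexicon-based
    # sentiment classification of MOOC forum messages.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier

    messages = ["great lecture, really enjoyed the examples",
                "the peer review deadline is stressful and confusing",
                "loved the quiz, very clear instructions",
                "frustrated with the grading, it makes no sense"]
    labels = ["pos", "neg", "pos", "neg"]  # toy labels for illustration

    # Supervised approach: TF-IDF features + Random Forest.
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(messages)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    print(clf.predict(vectorizer.transform(["really enjoyed the peer review"])))

    # Lexicon-based approach: count hits against small positive/negative word lists
    # (a stand-in for the word dictionaries used by lexicon methods).
    POSITIVE = {"great", "enjoyed", "loved", "clear"}
    NEGATIVE = {"stressful", "confusing", "frustrated"}

    def lexicon_sentiment(text):
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "pos" if score > 0 else "neg" if score < 0 else "neutral"

    print(lexicon_sentiment("really enjoyed the peer review"))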

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
    Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar