
    Industrial process monitoring by means of recurrent neural networks and Self Organizing Maps

    Industrial manufacturing plants often suffer from reliability problems during their day-to-day operations, which can have a great impact on the effectiveness and performance of the overall process and the sub-processes involved. Time-series forecasting of critical industrial signals offers a way to reduce this impact by extracting knowledge about the internal dynamics of the process and warning of process deviations before they affect production. In this paper, a novel industrial condition monitoring approach is proposed, based on the combination of Self Organizing Maps for operating point codification and Recurrent Neural Networks for critical signal modeling. The combination of the two methods presents a strong synergy: the operating-condition information given by the interpretation of the maps helps the model to improve generalization, one of the drawbacks of recurrent networks, while assuring high accuracy and precision. Finally, the complete methodology is validated experimentally, in terms of performance and effectiveness, with real data from a copper rod industrial plant. Postprint (published version).
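    The following is a minimal sketch of one way such a SOM-plus-RNN combination could look in practice; it is not the authors' implementation, and the window/target arrays, SOM grid size, and the use of the minisom and TensorFlow/Keras libraries are assumptions made purely for illustration.

    ```python
    # Hypothetical sketch: encode each signal window's operating point with a
    # Self-Organizing Map, then append that code to the window and train an
    # LSTM to forecast the critical signal one step ahead.
    import numpy as np
    from minisom import MiniSom
    import tensorflow as tf

    def build_dataset(windows, targets, som_shape=(8, 8)):
        # windows: (n_samples, window_len) sliding windows of the plant signal
        som = MiniSom(som_shape[0], som_shape[1], windows.shape[1],
                      sigma=1.0, learning_rate=0.5)
        som.train_random(windows, 1000)
        # Operating-point codification: coordinates of the winning neuron.
        codes = np.array([som.winner(w) for w in windows], dtype=float)
        codes /= np.array(som_shape)                  # normalise to [0, 1]
        features = np.hstack([windows, codes])        # window + operating point
        return features[..., np.newaxis], targets, som

    def build_rnn(input_len):
        model = tf.keras.Sequential([
            tf.keras.layers.LSTM(32, input_shape=(input_len, 1)),
            tf.keras.layers.Dense(1),                 # one-step-ahead forecast
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    # X, y, som = build_dataset(windows, targets)
    # rnn = build_rnn(X.shape[1])
    # rnn.fit(X, y, epochs=20, batch_size=64)
    ```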

    Making AI Meaningful Again

    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are once again experiencing a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then show an alternative approach to language-centric AI, in which we identify a role for philosophy.

    Dirichlet belief networks for topic structure learning

    Recently, considerable research effort has been devoted to developing deep architectures for topic models to learn topic structures. Although several deep models have been proposed to learn better topic proportions of documents, how to leverage the benefits of deep structures for learning word distributions of topics has not yet been rigorously studied. Here we propose a new multi-layer generative process on word distributions of topics, where each layer consists of a set of topics and each topic is drawn from a mixture of the topics of the layer above. As the topics in all layers can be directly interpreted by words, the proposed model is able to discover interpretable topic hierarchies. As a self-contained module, our model can be flexibly adapted to different kinds of topic models to improve their modelling accuracy and interpretability. Extensive experiments on text corpora demonstrate the advantages of the proposed model. Comment: accepted in NIPS 2018.
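    As a rough illustration of the kind of multi-layer generative process described above (the layer sizes and concentration parameter below are assumptions, not the paper's exact construction), each topic in a layer can be drawn from a Dirichlet whose mean is a mixture of the topics in the layer above, so every topic stays directly interpretable as a distribution over words:

    ```python
    # Illustrative sketch of a layered topic hierarchy; parameter choices are
    # assumptions made for this example, not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size = 1000
    layer_sizes = [5, 20, 50]        # topics per layer, top to bottom
    concentration = 50.0             # assumed Dirichlet concentration

    # Top layer: topics drawn from a symmetric Dirichlet over the vocabulary.
    layers = [rng.dirichlet(np.full(vocab_size, 0.1), size=layer_sizes[0])]

    for size in layer_sizes[1:]:
        parent = layers[-1]
        # Mixture weights connecting each child topic to the parent topics.
        weights = rng.dirichlet(np.ones(parent.shape[0]), size=size)
        mean = weights @ parent                      # mixture of parent topics
        child = np.array([rng.dirichlet(concentration * m) for m in mean])
        layers.append(child)

    # layers[-1] holds the bottom-layer topic-word distributions, which could
    # be plugged into any topic model that consumes a topic-word matrix.
    ```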

    Automatic differentiation in machine learning: a survey

    Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings. Comment: 43 pages, 5 figures.
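    To make the contrast with symbolic and numerical differentiation concrete, here is a small forward-mode AD sketch based on dual numbers (a standard textbook construction, not code from the survey): every arithmetic operation propagates a value together with its derivative, so the derivative of an ordinary numeric program comes out exact.

    ```python
    # Forward-mode automatic differentiation with dual numbers.
    import math

    class Dual:
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot          # value and derivative
        def _lift(self, other):
            return other if isinstance(other, Dual) else Dual(other)
        def __add__(self, other):
            other = self._lift(other)
            return Dual(self.val + other.val, self.dot + other.dot)
        __radd__ = __add__
        def __mul__(self, other):
            other = self._lift(other)
            return Dual(self.val * other.val,
                        self.dot * other.val + self.val * other.dot)
        __rmul__ = __mul__

    def sin(x):
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

    def f(x):
        return x * x + sin(x)                      # an ordinary numeric program

    y = f(Dual(1.5, 1.0))                          # seed dx/dx = 1
    print(y.val, y.dot)                            # f(1.5) and f'(1.5) = 3 + cos(1.5)
    ```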

    Topic Similarity Networks: Visual Analytics for Large Document Sets

    We investigate ways in which to improve the interpretability of LDA topic models by better analyzing and visualizing their outputs. We focus on examining what we refer to as topic similarity networks: graphs in which nodes represent latent topics in text collections and links represent similarity among topics. We describe efficient and effective approaches to both building and labeling such networks. Visualizations of topic models based on these networks are shown to be a powerful means of exploring, characterizing, and summarizing large collections of unstructured text documents. They help to "tease out" non-obvious connections among different sets of documents and provide insights into how topics form larger themes. We demonstrate the efficacy and practicality of these approaches through two case studies: 1) NSF grants for basic research spanning a 14-year period and 2) the entire English portion of Wikipedia. Comment: 9 pages; 2014 IEEE International Conference on Big Data (IEEE BigData 2014).
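    A minimal sketch of how such a topic similarity network might be assembled (the cosine-similarity measure, edge threshold, and top-word labelling below are illustrative choices, not necessarily those used in the paper):

    ```python
    # Build a graph whose nodes are topics from a topic-word matrix (e.g. LDA
    # components) and whose edges link sufficiently similar topic pairs.
    import numpy as np
    import networkx as nx

    def topic_similarity_network(topic_word, vocab, threshold=0.2, top_n=3):
        unit = topic_word / np.linalg.norm(topic_word, axis=1, keepdims=True)
        sim = unit @ unit.T                        # pairwise cosine similarity

        graph = nx.Graph()
        for k, row in enumerate(topic_word):
            top_words = [vocab[i] for i in np.argsort(row)[::-1][:top_n]]
            graph.add_node(k, label=" ".join(top_words))   # label by top words
        n = topic_word.shape[0]
        for i in range(n):
            for j in range(i + 1, n):
                if sim[i, j] >= threshold:
                    graph.add_edge(i, j, weight=float(sim[i, j]))
        return graph

    # g = topic_similarity_network(lda.components_, vocab)
    # nx.draw(g, labels=nx.get_node_attributes(g, "label"))
    ```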

    Natural Variation and Neuromechanical Systems

    Natural variation plays an important but subtle and often ignored role in neuromechanical systems. This is especially important when designing for living or hybrid systems which involve a biological or self-assembling component. Accounting for natural variation can be accomplished by taking a population phenomics approach to modeling and analyzing such systems. I will advocate the position that noise in neuromechanical systems is partially represented by natural variation inherent in user physiology. Furthermore, this noise can be augmentative in systems that couple physiological systems with technology. There are several tools and approaches that can be borrowed from computational biology to characterize the populations of users as they interact with the technology. In addition to transplanted approaches, the potential of natural variation can be understood as having a range of effects on both the individual's physiology and function of the living/hybrid system over time. Finally, accounting for natural variation can be put to good use in human-machine system design, as three prescriptions for exploiting variation in design are proposed.