    Use of PGM for Form recognition

    ISBN: 978-1-4673-0868-7
    This paper addresses the use of a PGM (Probabilistic Graphical Model) for form-model identification from just a few items filled in with an electronic pen. Only the electronic ink is sent to the system, without any indication of the form model. Two applications are studied: the first concerns keynote form classification from its filled fields, while the second concerns a design-modelling problem for the on-line configuration of shower areas. In the former, only indications of which fields are filled are sent to the system; in the latter, the designer sends the strokes corresponding to the elements drawn on the form model. In this application a single form is proposed to the user to specify the configuration of his shower area. The PGM is exploited advantageously in both cases, precisely translating the relationships between corresponding elements into conditional probabilities, from individual elements up to the complete model.
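
    The abstract itself contains no code; the short Python sketch below only illustrates the general idea of scoring candidate form models from conditional probabilities over observed filled fields. The field names, the probability values, and the naive-Bayes style independence assumption are hypothetical and are not taken from the paper.

    # Toy sketch (not the paper's model): identify which form model generated a
    # set of filled fields via Bayes' rule. All names and numbers are made up.
    P_FILLED = {  # P(field is filled | form model)
        "keynote_A": {"title": 0.9, "speaker": 0.8, "room": 0.2},
        "keynote_B": {"title": 0.7, "speaker": 0.3, "room": 0.9},
    }
    PRIOR = {"keynote_A": 0.5, "keynote_B": 0.5}

    def posterior(filled_fields):
        """P(form model | observed filled fields), assuming fields are
        conditionally independent given the form model."""
        scores = {}
        for model, prior in PRIOR.items():
            likelihood = prior
            for field, p in P_FILLED[model].items():
                likelihood *= p if field in filled_fields else (1.0 - p)
            scores[model] = likelihood
        total = sum(scores.values())
        return {m: s / total for m, s in scores.items()}

    print(posterior({"title", "speaker"}))  # favours keynote_A in this toy setup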

    Knowledge is at the Edge! How to Search in Distributed Machine Learning Models

    With the advent of the Internet of Things and Industry 4.0 an enormous amount of data is produced at the edge of the network. Due to a lack of computing power, this data is currently send to the cloud where centralized machine learning models are trained to derive higher level knowledge. With the recent development of specialized machine learning hardware for mobile devices, a new era of distributed learning is about to begin that raises a new research question: How can we search in distributed machine learning models? Machine learning at the edge of the network has many benefits, such as low-latency inference and increased privacy. Such distributed machine learning models can also learn personalized for a human user, a specific context, or application scenario. As training data stays on the devices, control over possibly sensitive data is preserved as it is not shared with a third party. This new form of distributed learning leads to the partitioning of knowledge between many devices which makes access difficult. In this paper we tackle the problem of finding specific knowledge by forwarding a search request (query) to a device that can answer it best. To that end, we use a entropy based quality metric that takes the context of a query and the learning quality of a device into account. We show that our forwarding strategy can achieve over 95% accuracy in a urban mobility scenario where we use data from 30 000 people commuting in the city of Trento, Italy.Comment: Published in CoopIS 201
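
    The paper's exact entropy-based quality metric is not given in the abstract, so the Python sketch below is only an assumed, minimal interpretation: forward the query to the device whose local model produces the lowest-entropy (most confident) prediction for the query context. Device names and the hard-coded distributions are hypothetical.

    import math

    def entropy(dist):
        """Shannon entropy of a discrete probability distribution."""
        return -sum(p * math.log(p) for p in dist if p > 0)

    def forward_query(query_context, devices):
        """Pick the device whose local model is most confident (lowest
        predictive entropy) for the given query context.

        devices: list of (device_id, predict_fn), where predict_fn(context)
                 returns a class-probability distribution.
        """
        scored = [(entropy(predict(query_context)), dev_id)
                  for dev_id, predict in devices]
        _best_entropy, best_device = min(scored)
        return best_device

    # Hypothetical usage: two devices with fixed local predictions
    devices = [
        ("phone_A", lambda ctx: [0.6, 0.3, 0.1]),    # fairly confident
        ("phone_B", lambda ctx: [0.34, 0.33, 0.33]), # near-uniform, uncertain
    ]
    print(forward_query({"hour": 8, "zone": "Trento-centre"}, devices))  # phone_A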

    A Survey on Bayesian Deep Learning

    A comprehensive artificial intelligence system needs to not only perceive the environment with different `senses' (e.g., seeing and hearing) but also infer the world's conditional (or even causal) relations and the corresponding uncertainty. The past decade has seen major advances in many perception tasks, such as visual object recognition and speech recognition, using deep learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. In recent years, Bayesian deep learning has emerged as a unified probabilistic framework to tightly integrate deep learning and Bayesian models. In this general framework, the perception of text or images using deep learning can boost the performance of higher-level inference and, in turn, the feedback from the inference process can enhance the perception of text or images. This survey provides a comprehensive introduction to Bayesian deep learning and reviews its recent applications to recommender systems, topic models, control, etc. We also discuss the relationship and differences between Bayesian deep learning and other related topics, such as the Bayesian treatment of neural networks.
    Comment: To appear in ACM Computing Surveys (CSUR) 202
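
    As one small, concrete instance of the Bayesian treatment of deep models mentioned in the abstract (not the survey's general framework), the Python/NumPy sketch below uses Monte Carlo dropout: dropout is kept active at test time and repeated stochastic forward passes give a predictive mean plus an uncertainty estimate. The network sizes and random weights are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(8, 3))  # toy 2-layer net

    def forward(x, drop_rate=0.5):
        """Single stochastic forward pass with dropout on the hidden layer."""
        h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
        mask = rng.random(h.shape) > drop_rate   # random dropout mask
        h = h * mask / (1.0 - drop_rate)
        logits = h @ W2
        e = np.exp(logits - logits.max())
        return e / e.sum()                       # softmax class probabilities

    x = rng.normal(size=16)
    samples = np.stack([forward(x) for _ in range(100)])  # 100 MC samples
    print("predictive mean:", samples.mean(axis=0))
    print("predictive std: ", samples.std(axis=0))        # per-class uncertainty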