77 research outputs found

    FAIME: A Framework for AI-Assisted Musical Devices

    In this paper, we present a novel framework for the study and design of AI-assisted musical devices (AIMEs). We first present a taxonomy of these devices and illustrate it with a set of scenarios and personas. We then propose a generic architecture for the implementation of AIMEs and present some examples from the scenarios. We show that the proposed framework and architecture are valid tools for the study of intelligent musical devices.
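    The abstract does not spell out the architecture itself, so the following is only a hypothetical Python sketch of what a generic AIME pipeline might look like; every name in it (run_device, Mapper, the demo mapper) is an illustrative assumption, not the paper's design.

```python
# Hypothetical sketch of a generic AI-assisted musical device (AIME) pipeline.
# The paper's actual architecture is not reproduced; all names below are
# illustrative assumptions.
from typing import Callable, Sequence

SynthParams = Sequence[float]          # e.g. pitch, filter cutoff, amplitude
Mapper = Callable[[Sequence[float]], SynthParams]  # the "AI" component

def run_device(sensor_frames: Sequence[Sequence[float]],
               mapper: Mapper,
               synth: Callable[[SynthParams], None]) -> None:
    """Generic loop: sense performer input, map it with an AI model, sound it."""
    for frame in sensor_frames:
        synth(mapper(frame))

# A trivial stand-in mapper: clamp sensor readings into synth parameter range.
demo_mapper: Mapper = lambda frame: [min(max(x, 0.0), 1.0) for x in frame]
run_device([[0.2, 0.9], [0.4, 0.1]], demo_mapper, synth=print)
```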

    Electronic Imaging & the Visual Arts. EVA 2013 Florence

    Important information technology topics are presented: multimedia systems, databases, data protection, and access to content. Particular attention is given to digital images (2D, 3D) in cultural institutions (museums, libraries, palaces and monuments, archaeological sites). The main parts of the conference proceedings cover: Strategic Issues; EC Projects and Related Networks & Initiatives; the International Forum on “Culture & Technology”; 2D–3D Technologies & Applications; Virtual Galleries – Museums and Related Initiatives; and Access to Culture Information. Three workshops are devoted to: International Cooperation; Innovation and Enterprise; and Creative Industries and Cultural Tourism.

    AXMEDIS 2008

    The AXMEDIS International Conference series aims to explore all subjects and topics related to cross-media and digital-media content production, processing, management, standards, representation, sharing, protection, and rights management, and to address the latest developments and future trends of these technologies and their applications, impacts, and exploitation. The AXMEDIS events offer venues for exchanging concepts, requirements, prototypes, research ideas, and findings that can contribute to academic research and also benefit business and industrial communities. In the Internet and digital era, cross-media production and distribution are key developments and innovations, fostered by emergent technologies, that ensure better value for money while optimising productivity and market coverage.

    Gaining Insight into Determinants of Physical Activity using Bayesian Network Learning

    Contains full text: 228326pre.pdf (preprint version, Open Access) and 228326pub.pdf (publisher's version, Open Access). BNAIC/BeneLearn 202

    Search-Based Automatic Image Annotation Using Geocoded Community Photos

    In the Web 2.0 era, platforms for sharing and collaboratively annotating images with keywords, called tags, have become very popular. Tags are a powerful means of organizing and retrieving photos, but manual tagging is time consuming. Recently, the sheer amount of user-tagged photos available on the Web has encouraged researchers to explore new techniques for automatic image annotation. The idea is to annotate an unlabeled image by propagating the labels of community photos that are visually similar to it. Most recently, an ever increasing amount of community photos is also associated with location information, i.e., geotagged. In this thesis, we exploit this location context and propose an approach for automatically annotating geotagged photos. Our objective is to address the main limitations of state-of-the-art approaches in terms of the quality of the produced tags and the speed of the complete annotation process. To achieve these goals, we first deal with the problem of collecting images with the associated metadata from online repositories. Accordingly, we introduce a strategy for data crawling that takes advantage of location information and the social relationships among the contributors of the photos. To improve the quality of the collected user tags, we present a method for resolving their ambiguity based on tag relatedness information. In this respect, we propose an approach for representing tags as probability distributions based on the algorithm of Laplacian score feature selection. Furthermore, we propose a new metric for calculating the distance between tag probability distributions by extending the Jensen-Shannon divergence to account for statistical fluctuations. To efficiently identify the visual neighbors, the thesis introduces two extensions to the state-of-the-art image matching algorithm known as Speeded Up Robust Features (SURF). To speed up the matching, we present a solution for reducing the number of compared SURF descriptors based on classification techniques, while the accuracy of SURF is improved through an efficient method for iterative image matching. Furthermore, we propose a statistical model for ranking the mined annotations according to their relevance to the target image. This is achieved by combining multi-modal information in a statistical framework based on Bayes' rule. Finally, the effectiveness of each of the mentioned contributions, as well as of the complete automatic annotation process, is evaluated experimentally.
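    To make the tag-relatedness step concrete, here is a minimal sketch of comparing two tags represented as probability distributions using the standard Jensen-Shannon divergence. The thesis's extension for statistical fluctuations is not reproduced, and the toy distributions below are invented for illustration.

```python
# Standard Jensen-Shannon divergence between two tag probability
# distributions (the thesis's fluctuation-aware extension is not shown).
import numpy as np

def kl(p: np.ndarray, q: np.ndarray) -> float:
    """Kullback-Leibler divergence in bits; q > 0 wherever p > 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def jsd(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric, bounded in [0, 1] with base-2 logarithms."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)  # the mixture is positive wherever p or q is
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two hypothetical tag distributions over the same co-occurrence vocabulary:
tag_bridge = np.array([0.5, 0.3, 0.1, 0.1])
tag_card_game = np.array([0.1, 0.1, 0.5, 0.3])
print(jsd(tag_bridge, tag_card_game))  # larger value = less related tags
```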

    Recommending Tags for Images: Deep Learning Approaches for Personalized Tag Recommendation

    Social media has become an integral part of the lives of numerous individuals as well as organizations, with many services being used frequently by a majority of people. Along with this widespread use, the amount of information explodes as people use these services, which demands efficient tools and methods to assist data management and retrieval. Annotating resources with keywords, known as tagging, is one solution for improving the categorizability and findability of resources. However, tagging is a manual, time-consuming task which requires the user to come up with many keywords in a short moment and enter them into the system. To encourage users to tag their resources more correctly and frequently, tag recommendation is adopted in social tagging systems to suggest relevant keywords for resources. In this thesis, we address the problem of personalized tag recommendation for images and present ways to solve this problem by combining the advantages of the user relation with the images' content. In order to suggest tags for unobserved images, their visual content is used to replace the index-based information of the image entity in the tagging relations. Because low-level features are limited in capturing the "content" of images, we propose a deep learning based approach that learns high-level visual features concurrently with the tag-scoring estimator. For the tag predictor, a latent factor model or a multi-layer perceptron computes scores for all tags, and the top-scoring tags are suggested. As a further development of our findings, we examine the inside and outside context of images to enhance the accuracy of the estimators. Regarding the image-inside context, we are motivated by the fact that objects, such as cars or cats, influence the user's selection criteria. Regarding the image-outside context, the image's surrounding text contributes to the clarity of the image's content for different users. We consider these contextual features as a supporting part which is combined with the main visual representation to enhance tag recommendation performance. Finally, transfer learning is applied to help the proposed models overcome the limitations of small training data and boost their performance. This thesis demonstrates the usefulness and versatility of deep learning approaches for tag recommendation and highlights the importance of the learned image content in predicting personalized tags. Directions for future work include semantic enhancements to the context-based representation and extensions of the content-aware approaches to different recommendation scenarios.
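    The following is a minimal sketch of the general idea of scoring tags for a (user, image) pair with a latent factor model over user embeddings and visual features; it is not the thesis's exact model, and all dimensions and names are illustrative assumptions.

```python
# Sketch: personalized tag scoring from a learned user embedding plus a
# visual feature vector (e.g. CNN output). Not the thesis's actual model.
import torch
import torch.nn as nn

class TagScorer(nn.Module):
    def __init__(self, n_users: int, n_tags: int,
                 user_dim: int = 32, visual_dim: int = 128):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, user_dim)
        # One latent vector per tag over the joint (user + visual) space.
        self.tag_factors = nn.Linear(user_dim + visual_dim, n_tags)

    def forward(self, user_ids: torch.Tensor,
                visual: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([self.user_emb(user_ids), visual], dim=-1)
        return self.tag_factors(joint)  # one score per tag

model = TagScorer(n_users=1000, n_tags=5000)
scores = model(torch.tensor([42]), torch.randn(1, 128))
top_tags = scores.topk(k=5, dim=-1).indices  # recommend the 5 best tags
```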

    Encyclopaedic question answering

    Open-domain question answering (QA) is an established NLP task which enables users to search for specific pieces of information in large collections of texts. Instead of using keyword-based queries and a standard information retrieval engine, QA systems allow the use of natural language questions and return the exact answer (or a list of plausible answers) with supporting snippets of text. In the past decade, open-domain QA research has been dominated by evaluation fora such as TREC and CLEF, where shallow techniques relying on information redundancy have achieved very good performance. However, this performance is generally limited to simple factoid and definition questions because the answer is usually explicitly present in the document collection. Current approaches are much less successful in finding implicit answers and are difficult to adapt to more complex question types which are likely to be posed by users. In order to advance the field of QA, this thesis proposes a shift in focus from simple factoid questions to encyclopaedic questions: list questions composed of several constraints. These questions have more than one correct answer, which usually cannot be extracted from one small snippet of text. To correctly interpret the question, systems need to combine classic knowledge-based approaches with advanced NLP techniques. To find and extract answers, systems need to aggregate atomic facts from heterogeneous sources as opposed to simply relying on keyword-based similarity. Encyclopaedic questions promote QA systems which use basic reasoning, making them more robust and easier to extend with new types of constraints and new types of questions. A novel semantic architecture is proposed which represents a paradigm shift in open-domain QA system design, using semantic concepts and knowledge representation instead of words and information retrieval. The architecture consists of two phases: analysis, responsible for interpreting questions and finding answers, and feedback, responsible for interacting with the user. This architecture provides the basis for EQUAL, a semantic QA system developed as part of the thesis, which uses Wikipedia as a source of world knowledge and employs simple forms of open-domain inference to answer encyclopaedic questions. EQUAL combines the output of a syntactic parser with semantic information from Wikipedia to analyse questions. To address natural language ambiguity, the system builds several formal interpretations containing the constraints specified by the user and addresses each interpretation in parallel. To find answers, the system then tests these constraints individually for each candidate answer, considering information from different documents and/or sources. The correctness of an answer is not proved using a logical formalism; instead, a confidence-based measure is employed. This measure reflects the validation of constraints from raw natural language, automatically extracted entities, relations, and available structured and semi-structured knowledge from Wikipedia and the Semantic Web. When searching for and validating answers, EQUAL uses the Wikipedia link graph to find relevant information. This method achieves good precision and allows only pages of a certain type to be considered, but it is affected by the incompleteness of the existing markup targeted towards human readers. In order to address this, a semantic analysis module which disambiguates entities is developed to enrich Wikipedia articles with additional links to other pages. The module increases recall, enabling the system to rely more on the link structure of Wikipedia than on word-based similarity between pages. It also allows authoritative information from different sources to be linked to the encyclopaedia, further enhancing the coverage of the system. The viability of the proposed approach was evaluated in an independent setting by participating in two competitions at CLEF 2008 and 2009. In both competitions, EQUAL outperformed standard textual QA systems as well as semi-automatic approaches. Having established a feasible way forward for the design of open-domain QA systems, future work will attempt to further improve performance by taking advantage of recent advances in information extraction and knowledge representation, as well as by experimenting with formal reasoning and inferencing capabilities.
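    Below is a minimal sketch of the constraint-testing idea described above: each candidate answer is checked against every constraint of a question interpretation, and per-constraint confidences are combined rather than proving correctness in a logical formalism. The validators and the product-style aggregation are illustrative assumptions, not EQUAL's actual confidence measure.

```python
# Sketch: confidence-based constraint validation for list questions.
from typing import Callable, List

Validator = Callable[[str], float]  # candidate answer -> confidence in [0, 1]

def score_candidate(candidate: str, constraints: List[Validator]) -> float:
    confidence = 1.0
    for check in constraints:
        confidence *= check(candidate)  # a weak constraint drags the score down
    return confidence

def rank_answers(candidates: List[str], constraints: List[Validator],
                 threshold: float = 0.5) -> List[str]:
    scored = {c: score_candidate(c, constraints) for c in candidates}
    return [c for c, s in sorted(scored.items(), key=lambda kv: -kv[1])
            if s >= threshold]

# Toy usage: a two-constraint list question with hypothetical validators.
is_capital: Validator = lambda c: 0.9 if c in {"Vienna", "Paris"} else 0.1
on_river: Validator = lambda c: 0.8 if c in {"Vienna", "Basel"} else 0.2
print(rank_answers(["Vienna", "Paris", "Basel"], [is_capital, on_river]))
```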

    Analyzing Granger causality in climate data with time series classification methods

    Attribution studies in climate science aim to scientifically ascertain the influence of natural or anthropogenic factors on climatic variations. Many of these studies adopt the concept of Granger causality to infer statistical cause-effect relationships, utilizing traditional autoregressive models. In this article, we investigate the potential of state-of-the-art time series classification techniques to enhance causal inference in climate science. We conduct a comparative experimental study of different types of algorithms on a large test suite that comprises a unique collection of datasets from the area of climate-vegetation dynamics. The results indicate that specialized time series classification methods are able to improve existing inference procedures, and substantial differences are observed among the tested methods.
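    For context, here is a minimal sketch of the classic autoregressive Granger-causality test that such studies build on, using statsmodels. The climate-vegetation datasets are not available here, so synthetic series with a known lagged dependence stand in for them.

```python
# Sketch: classic Granger-causality test on synthetic climate-like series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
temperature = rng.normal(size=n)
vegetation = np.zeros(n)
for t in range(2, n):
    # vegetation responds to temperature with a 2-step lag, plus noise
    vegetation[t] = (0.6 * temperature[t - 2]
                     + 0.2 * vegetation[t - 1]
                     + rng.normal(scale=0.5))

# Column order matters: the test asks whether the second column
# Granger-causes the first.
data = np.column_stack([vegetation, temperature])
results = grangercausalitytests(data, maxlag=3)
# Small p-values for the ssr F-test at lag 2 indicate Granger causality.
```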