165 research outputs found

    Contextualized and personalized location-based services

    Advances in smart mobile devices and tiny sensors, together with the growing number of web resources, open up a plethora of new mobile information services in which people can acquire and disseminate information at any place and any time. Location-based services (LBS) are characterized by providing users with useful local information, i.e. information that belongs to a particular domain of interest to the user and is of use while the user remains in a particular area. In addition, LBS must take into account the interactions and dependencies between services, users and context when filtering and delivering information, in order to fulfill the needs and constraints of mobile users. We argue that this brings up a series of technical challenges in data semantics and infrastructure, context-awareness and personalization, and query formulation and answering. These challenges cannot be met by simply extending traditional data management strategies; they call for a new solution.

    Firstly, we propose a semantic LBS infrastructure based on a modularized-ontology approach. We elaborate a core ontology composed of three modules describing the services, users and contexts. The core ontology presents an abstract view (a model) of all information in LBS. In contrast, data describing the instances (of services, users and actual contextual data) are stored in three independent data stores, called the service profiles, user profiles and context profiles. These data are semantically aligned with the concepts in the core ontology through a set of mappings. This approach enables the distributed data sources to be maintained in an autonomous manner, which is well adapted to the high dynamics and mobility of the data sources.

    Secondly, we separately address the function, the features and our modelling approach for the three major players in LBS: service, context and user. We then define a set of constructs to represent their interactions and inter-dependencies, and illustrate how these semantic constructs contribute to personalized and contextualized query processing. Service classes are organized in a taxonomy that distinguishes services by their business functions; this concept hierarchy helps to analyze and reformulate users' queries. We introduce three new kinds of relationships in the service module to enhance the semantics of interactions and dependencies between services. We identify five key components of context in LBS and regard them as a semantic contextual basis for LBS; component contexts are related by specific composition relationships that can describe spatio-temporal constraints. A user profile contains personal information about a given user and possibly a set of self-defined rules, which offer hints on what the user likes or dislikes and what could attract him or her. In the core ontology, clustering users with common features can help cooperative query answering. Each of the three modules of the core ontology is an ontology in itself; the modules are inter-related by relationships that link concepts belonging to two different modules. The LBS fully benefits from this modularized structure: it restricts the search space and facilitates the maintenance of each module. Finally, we study query reformulation and processing in LBS.
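    The abstract names the three ontology modules but no concrete vocabulary. The following is a minimal sketch, assuming hypothetical IRIs and relationship names (the srv:/usr:/ctx: namespaces, srv:availableIn and usr:interestedIn are illustrative, not the thesis' actual terms), of how such a modularized core ontology with cross-module links could be declared with rdflib.

```python
# Minimal sketch (illustrative vocabulary, not the thesis' actual terms) of a
# core ontology split into service, user and context modules with cross-module links.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

SRV = Namespace("http://example.org/lbs/service#")   # service module (hypothetical IRIs)
USR = Namespace("http://example.org/lbs/user#")      # user module
CTX = Namespace("http://example.org/lbs/context#")   # context module

g = Graph()
g.bind("srv", SRV)
g.bind("usr", USR)
g.bind("ctx", CTX)

# Service taxonomy: service classes distinguished by business function.
g.add((SRV.Restaurant, RDFS.subClassOf, SRV.FoodService))
g.add((SRV.FoodService, RDFS.subClassOf, SRV.Service))

# Cross-module relationships linking concepts of different modules.
g.add((SRV.Service, SRV.availableIn, CTX.SpatioTemporalContext))
g.add((USR.User, USR.interestedIn, SRV.Service))

# Instance data lives in separate profile stores; here one service instance
# is aligned with the core ontology through a simple mapping.
g.add((SRV.chezMarie, RDF.type, SRV.Restaurant))
g.add((SRV.chezMarie, RDFS.label, Literal("Chez Marie")))

print(g.serialize(format="turtle"))
```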
    Making the query interface tangible and providing rapid and relevant answers are typical concerns in all information services. Our query format not only obeys the "simple, tangible and effective" golden rules of user-interface design, but also satisfies the need for a domain-independent interface and emphasizes the importance of spatio-temporal constraints in LBS. With pre-defined spatio-temporal operators, users can easily specify in their queries the spatio-temporal availability they need from the services they are looking for. This eliminates most of the irrelevant answers that keyword-based approaches usually generate. Constraints in the various dimensions (what, when, where and what-else) can be expressed as a conjunctive query, and then smoothly translated into RDF patterns. We illustrate our query-answering strategy using the SPARQL syntax, and explain how relaxation can be done with rules specified in the query relaxation profile.
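    No concrete query syntax is given in the abstract. The sketch below, reusing the illustrative vocabulary from the previous snippet, shows how a what/where/when conjunctive query might be rendered as SPARQL graph patterns with a temporal-availability filter; the data, property names and opening hours are invented placeholders.

```python
# Minimal sketch: a "what + where + when" conjunctive query rendered as
# SPARQL graph patterns (vocabulary and data are illustrative only).
from rdflib import Graph

# Hypothetical profile data, aligned with the illustrative vocabulary above.
DATA = """
@prefix srv: <http://example.org/lbs/service#> .
@prefix ctx: <http://example.org/lbs/context#> .

srv:chezMarie a srv:Restaurant ;
    ctx:locatedIn ctx:CityCenter ;
    ctx:opensAt 18 ;
    ctx:closesAt 23 .
"""

g = Graph().parse(data=DATA, format="turtle")

QUERY = """
PREFIX srv: <http://example.org/lbs/service#>
PREFIX ctx: <http://example.org/lbs/context#>

SELECT ?service WHERE {
  ?service a srv:Restaurant ;              # what: class from the service taxonomy
           ctx:locatedIn ctx:CityCenter ;  # where: spatial constraint
           ctx:opensAt ?open ;
           ctx:closesAt ?close .
  FILTER (?open <= 19 && ?close >= 21)     # when: temporal availability
}
"""

for row in g.query(QUERY):
    print(row.service)  # if empty, relaxation rules could generalize
                        # srv:Restaurant to a superclass such as srv:FoodService
```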

    Interoperability of semantics in news production


    B!SON: A Tool for Open Access Journal Recommendation

    Finding a suitable open access journal in which to publish scientific work is a complex task: researchers have to navigate a constantly growing number of journals, institutional agreements with publishers, funders' conditions and the risk of predatory publishers. To help with these challenges, we introduce a web-based journal recommendation system called B!SON. It is developed based on a systematic requirements analysis, built on open data, gives publisher-independent recommendations and works across domains. It suggests open access journals based on the title, abstract and references provided by the user. The recommendation quality has been evaluated using a large test set of 10,000 articles. Development by two German scientific libraries ensures the longevity of the project.
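    The abstract does not disclose B!SON's ranking algorithm. As a rough illustration of similarity-based journal matching, here is a minimal TF-IDF baseline sketch; the journal names and texts are invented placeholders, and this is not B!SON's actual method.

```python
# Minimal sketch: rank candidate journals by TF-IDF cosine similarity between
# the user's title+abstract and aggregated text for each journal.
# (Baseline illustration only; not B!SON's actual algorithm.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: one aggregated text per candidate journal.
journals = {
    "Journal of Open Data": "datasets metadata repositories open data sharing",
    "Semantic Web Journal": "ontologies linked data SPARQL reasoning knowledge graphs",
}

query = "A knowledge graph approach to ontology-based data integration"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(journals.values()) + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Print journals, best match first.
for name, score in sorted(zip(journals, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {name}")
```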

    Semantic Domains in Akkadian Text

    The article examines the possibilities offered by language technology for analyzing semantic fields in Akkadian. Our research group's data come from an existing electronic corpus, the Open Richly Annotated Cuneiform Corpus (Oracc). In addition to more traditional Assyriological methods, the article explores two language-technology methods: pointwise mutual information (PMI) and Word2vec.
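    As a rough illustration of the two methods named above, here is a minimal sketch computing PMI over line-level co-occurrences and training a small Word2vec model with gensim. The token sequences are invented placeholders, not real Oracc data, and the window and vector sizes are arbitrary.

```python
# Minimal sketch: PMI between two words and a Word2vec model over a toy
# tokenized corpus (Akkadian lemmas would replace these placeholder tokens).
import math
from collections import Counter
from itertools import combinations

from gensim.models import Word2Vec

corpus = [
    ["sarru", "dannu", "matu"],   # placeholder lemma sequences,
    ["sarru", "rabu", "matu"],    # not real Oracc data
    ["ilu", "rabu", "same"],
]

# --- Pointwise mutual information over co-occurrence within a line ---
word_counts = Counter(w for line in corpus for w in set(line))
pair_counts = Counter(frozenset(p) for line in corpus
                      for p in combinations(set(line), 2))
n_lines = len(corpus)

def pmi(w1, w2):
    """log2( P(w1, w2) / (P(w1) * P(w2)) ), probabilities over lines."""
    p_pair = pair_counts[frozenset((w1, w2))] / n_lines
    p1, p2 = word_counts[w1] / n_lines, word_counts[w2] / n_lines
    return math.log2(p_pair / (p1 * p2)) if p_pair else float("-inf")

print("PMI(sarru, matu) =", pmi("sarru", "matu"))

# --- Word2vec embeddings; nearby vectors suggest shared semantic fields ---
model = Word2Vec(corpus, vector_size=16, window=2, min_count=1, seed=1)
print(model.wv.most_similar("sarru", topn=2))
```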

    CyberResearch on the Ancient Near East and Eastern Mediterranean

    CyberResearch on the Ancient Near East and Neighboring Regions provides case studies on archaeology, objects, cuneiform texts, and online publishing, digital archiving, and preservation. Eleven chapters present a rich array of material, spanning the fifth through the first millennium BCE, from Anatolia, the Levant, Mesopotamia, and Iran. Customized cyber- and general glossaries support readers who lack either a technical background or familiarity with the ancient cultures. Edited by Vanessa Bigot Juloux, Amy Rebecca Gansell, and Alessandro Di Ludovico, this volume is dedicated to broadening the understanding and accessibility of digital humanities tools, methodologies, and results in Ancient Near Eastern Studies. Ultimately, this book provides a model for introducing cyber-studies to the mainstream of humanities research.

    Fine Art Pattern Extraction and Recognition

    This is a reprint of articles from the Special Issue published online in the open access journal Journal of Imaging (ISSN 2313-433X), available at https://www.mdpi.com/journal/jimaging/special_issues/faper2020.

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deals with Old Saxon or Old English. This project is a first attempt to create a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as the labeled dataset to train the model. The evaluation of the algorithm reached 97% accuracy and a 99% weighted average for precision, recall and F1 score. In addition, we tested the model with some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
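    The abstract does not detail the network architecture. The following is a minimal sketch, assuming a token-level sequence-labeling setup in which each token of a verse receives a metrical tag; the vocabulary size, label count and layer dimensions are illustrative assumptions, not the project's reported configuration.

```python
# Minimal sketch: a BiLSTM tagger assigning a metrical label to each token
# of a verse (sizes and labeling scheme are illustrative assumptions).
import torch
import torch.nn as nn

class BiLSTMScanner(nn.Module):
    def __init__(self, vocab_size, n_labels, embed_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)  # 2x: both directions

    def forward(self, token_ids):                # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))  # (batch, seq_len, 2*hidden)
        return self.out(h)                       # per-token label logits

model = BiLSTMScanner(vocab_size=5000, n_labels=4)
verse = torch.randint(0, 5000, (1, 12))          # one 12-token verse
logits = model(verse)
pred = logits.argmax(-1)                         # predicted metrical tags
print(pred.shape)                                # torch.Size([1, 12])
```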

    Intelligent Content Acquisition in Web Archiving (Acquisition des contenus intelligents dans l'archivage du Web)

    Web sites are dynamic by nature, with content and structure changing over time; many pages on the Web are produced by content management systems (CMSs). Tools currently used by Web archivists to preserve the content of the Web blindly crawl and store Web pages, disregarding the CMS the site is based on and whatever structured content the pages contain. We first present an application-aware helper (AAH) that fits into an archiving crawl processing chain to perform intelligent and adaptive crawling of Web applications, given a knowledge base of common CMSs. The AAH has been integrated into two Web crawlers in the framework of the ARCOMEM project: the proprietary crawler of the Internet Memory Foundation and a customized version of Heritrix. We then propose ACEBot (Adaptive Crawler Bot for data Extraction), an efficient unsupervised, structure-driven crawler that exploits the inner structure of pages and guides the crawling process based on the importance of their content. ACEBot works in two phases: in the offline phase, it constructs a dynamic site map (limiting the number of URLs retrieved) and learns a traversal strategy based on the importance of navigation patterns (selecting those leading to valuable content); in the online phase, ACEBot performs massive downloading following the chosen navigation patterns. The AAH and ACEBot make 7 and 5 times fewer HTTP requests, respectively, than a generic crawler, without compromising effectiveness. We finally propose OWET (Open Web Extraction Toolkit), a free platform for semi-supervised data extraction. OWET allows a user to extract the data hidden behind Web forms.
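    The abstract gives no implementation details for the two phases. The following is a minimal sketch, with an invented URL-pattern abstraction, content-value measure and scoring threshold, of how an offline pattern-learning phase could feed an online download filter in the spirit of a structure-driven crawler.

```python
# Minimal sketch of a two-phase, structure-driven crawl (pattern abstraction,
# scoring and threshold are illustrative assumptions, not ACEBot's actual code).
from collections import defaultdict
from urllib.parse import urlparse

def navigation_pattern(url):
    """Abstract a URL into a pattern, e.g. /post/123 -> /post/*."""
    parts = urlparse(url).path.strip("/").split("/")
    return "/" + "/".join("*" if p.isdigit() else p for p in parts)

def offline_phase(sampled_pages):
    """Score navigation patterns by the content value of sampled pages."""
    scores = defaultdict(float)
    for url, content_value in sampled_pages:  # value: e.g. text-to-markup ratio
        scores[navigation_pattern(url)] += content_value
    return {p for p, s in scores.items() if s > 1.0}  # assumed threshold

def online_phase(frontier, good_patterns):
    """Download only URLs whose pattern was learned to be valuable."""
    return [url for url in frontier if navigation_pattern(url) in good_patterns]

# Toy run: learn from a small sample, then filter the crawl frontier.
sample = [("http://ex.org/post/1", 0.9), ("http://ex.org/post/2", 0.8),
          ("http://ex.org/tag/news", 0.1)]
frontier = ["http://ex.org/post/3", "http://ex.org/tag/old"]
print(online_phase(frontier, offline_phase(sample)))  # ['http://ex.org/post/3']
```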

    31st International Conference on Information Modelling and Knowledge Bases

    Information modelling is becoming an increasingly important topic for researchers, designers, and users of information systems. The amount and complexity of information itself, the number of abstraction levels of information, and the size of databases and knowledge bases are continuously growing. Conceptual modelling is one of the sub-areas of information modelling. The aim of this conference is to bring together experts from different areas of computer science and other disciplines who have a common interest in understanding and solving problems in information modelling and knowledge bases, and in applying the results of research to practice. We also aim to recognize and study new areas of modelling and knowledge bases to which more attention should be paid. Therefore philosophy and logic, cognitive science, knowledge management, linguistics and management science are relevant areas, too. The conference features three categories of presentations: full papers, short papers and position papers.