    User modeling for exploratory search on the Social Web. Exploiting social bookmarking systems for user model extraction, evaluation and integration

    Exploratory search is an information seeking strategy that extends beyond the query-and-response paradigm of traditional Information Retrieval models. Users browse through information to discover novel content and to learn more about the newly discovered things. Social bookmarking systems integrate well with exploratory search, because they allow one to search, browse, and filter social bookmarks. Our contribution is an exploratory tag search engine that merges social bookmarking with exploratory search. For this purpose, we have applied collaborative filtering to recommend tags to users. User models are an important prerequisite for recommender systems. We have produced a method to algorithmically extract user models from folksonomies, and an evaluation method to measure the viability of these user models for exploratory search. According to our evaluation, web-scale user modeling, which integrates user models from various services across the Social Web, can improve exploratory search. Within this thesis we also provide a method for user model integration. Our exploratory tag search engine implements the findings of our user model extraction, evaluation, and integration methods. It facilitates exploratory search on social bookmarks from Delicious and Connotea and publishes extracted user models as Linked Data.
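
    The abstract names collaborative filtering over folksonomy data as the recommendation technique. A minimal Python sketch of that idea follows, assuming toy (user, resource, tag) posts; all data and names are invented for illustration, not the thesis' actual method.

        from collections import Counter
        from math import sqrt

        # Toy folksonomy: (user, resource, tag) posts, as in Delicious/Connotea.
        posts = [
            ("alice", "http://example.org/a", "semanticweb"),
            ("alice", "http://example.org/b", "rdf"),
            ("bob",   "http://example.org/a", "semanticweb"),
            ("bob",   "http://example.org/c", "linkeddata"),
            ("carol", "http://example.org/d", "cooking"),
        ]

        def user_model(user):
            """Extract a simple user model: the user's tag-usage counts."""
            return Counter(tag for u, _, tag in posts if u == user)

        def cosine(a, b):
            """Cosine similarity between two sparse tag-count vectors."""
            dot = sum(a[t] * b[t] for t in a if t in b)
            norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        def recommend_tags(user, k=3):
            """Recommend tags used by similar users but not yet by `user`."""
            me = user_model(user)
            scores = Counter()
            for other in {u for u, _, _ in posts if u != user}:
                sim = cosine(me, user_model(other))
                for tag, count in user_model(other).items():
                    if tag not in me:
                        scores[tag] += sim * count
            return [tag for tag, _ in scores.most_common(k)]

        print(recommend_tags("alice"))  # ['linkeddata']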

    Semantic Web Tools and Decision-Making

    Semantic Web technologies are intertwined with decision-making processes. In this paper, the general objectives of Semantic Web tools are reviewed and characterized, as well as the categories of decision support tools, in order to establish an intersection of utility and use. We also elaborate on current and foreseen possibilities for deeper integration, considering the present implementation, opportunities, and constraints in the decision-making context.

    Concept Paper

    In this concept paper, we outline our working plan for the next phase of the Corporate Semantic Web project. The plan covers the period from March 2009 to March 2010. Corporate ontology engineering will facilitate agile ontology engineering to lessen the costs of ontology development and, especially, maintenance. Corporate semantic collaboration focuses on the human-centered aspects of knowledge management in corporate contexts. Corporate semantic search sits at the highest application level of the three research areas and is representative of applications working on and with appropriately represented and delivered background knowledge. Each of these pillars will yield innovative methods and tools during the project runtime until 2013. We propose a concept draft and a working plan covering the next twelve months for an integrative architecture of a Corporate Semantic Web built on these three core pillars.

    Web-Wide Application Customization: The Case of Mashups

    Application development is commonly a balancing of interests, as the question of what should actually be implemented is answered differently by different stakeholders. This paper considers mashups, which are a way of allowing an application to grow beyond the capabilities of the original developers. First, it introduces several approaches to integrating mashups into the services, or Web pages, that they are based upon. These approaches commonly implement ways to determine which mashups are potentially relevant for display in a certain Web page context. One approach, ActiveTags, enables users to create reliable mashups based on tags, which effectively leads to customized views of Web pages with tagged content. A scenario that demonstrates the potential benefits of this approach is presented. Second, a formalization of the approaches is presented which uses a relational analog to show their commonalities. The abstraction from implementation specifics opens the range of vision for fundamental capabilities and gives a clear picture of future work.
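
    A rough illustration of the relevance-determination step the abstract describes, matching a page's tags against the tags a mashup registers for, written as a relational-style join in Python. The registry and all names are hypothetical, not ActiveTags' actual design.

        # Hypothetical registry: each mashup declares the tags it can act on.
        MASHUP_REGISTRY = {
            "map-overlay":    {"geo", "location"},
            "citation-popup": {"paper", "doi"},
            "price-compare":  {"product", "shopping"},
        }

        def relevant_mashups(page_tags):
            """Relational-style join: a mashup is relevant for a page when the
            page's tags intersect the tags the mashup is registered for."""
            tags = set(page_tags)
            return sorted(name for name, triggers in MASHUP_REGISTRY.items()
                          if tags & triggers)

        print(relevant_mashups({"geo", "paper"}))  # ['citation-popup', 'map-overlay']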

    An open annotation ontology for science on web 3.0

    Background: There is currently a gap between the rich and expressive collection of published biomedical ontologies, and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges. Methods: Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating results of biomedical text mining. Analysis of strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools were then developed along with a metadata model in OWL, and deployed to users at a major pharmaceutical company and a major academic center for feedback and additional requirements on the ontology. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work. Results: This paper presents Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables "stand-off" or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under an open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO's Google Code page: http://code.google.com/p/annotation-ontology/. Conclusions: The Annotation Ontology meets critical requirements for an open, freely shareable model in OWL of annotation metadata created against scientific documents on the Web. We believe AO can become a very useful common model for annotation metadata on Web documents, and will enable biomedical domain ontologies to be used quite widely to annotate the scientific literature. Potential collaborators and those with new relevant use cases are invited to contact the authors.
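
    AO's normative vocabulary is documented at the URLs above; the rdflib sketch below only illustrates the "stand-off" idea, anchoring annotation metadata to a position in a document the annotator does not control. The property names and topic URI are illustrative placeholders, not actual AO terms.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF, XSD

        EX = Namespace("http://example.org/ao-sketch#")  # placeholder vocabulary, not AO's
        g = Graph()
        g.bind("ex", EX)

        ann = URIRef("http://example.org/annotations/1")
        doc = URIRef("http://example.org/papers/paper42.html")

        # Stand-off annotation: the metadata lives apart from the document and
        # anchors to it via a position-based selector, so the document itself
        # never needs to be modified by the annotator.
        g.add((ann, RDF.type, EX.Annotation))
        g.add((ann, EX.annotatesDocument, doc))
        g.add((ann, EX.exactText, Literal("p53 tumor suppressor")))
        g.add((ann, EX.offset, Literal(1024, datatype=XSD.integer)))
        g.add((ann, EX.length, Literal(20, datatype=XSD.integer)))
        g.add((ann, EX.hasTopic, URIRef("http://example.org/terms/p53")))  # hypothetical ontology term

        print(g.serialize(format="turtle"))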

    Statistical Extraction of Multilingual Natural Language Patterns for RDF Predicates: Algorithms and Applications

    The Data Web has undergone a tremendous growth period. It currently consists of more than 3,300 publicly available knowledge bases describing millions of resources from various domains, such as life sciences, government or geography, with over 89 billion facts. In the same way, the Document Web grew to the state where approximately 4.55 billion websites exist, 300 million photos are uploaded to Facebook and 3.5 billion Google searches are performed on average every day. However, there is a gap between the Document Web and the Data Web, since, for example, knowledge bases available on the Data Web are most commonly extracted from structured or semi-structured sources, while the majority of information available on the Web is contained in unstructured sources such as news articles, blog posts, photos, forum discussions, etc. As a result, data on the Data Web not only misses a significant fragment of information but also suffers from a lack of actuality, since typical extraction methods are time-consuming and can only be carried out periodically. Furthermore, provenance information is rarely taken into consideration and therefore gets lost in the transformation process. In addition, users are accustomed to entering keyword queries to satisfy their information needs. With the availability of machine-readable knowledge bases, lay users could be empowered to issue more specific questions and get more precise answers. In this thesis, we address the problem of Relation Extraction, one of the key challenges pertaining to closing the gap between the Document Web and the Data Web, by four means. First, we present a distant supervision approach that allows finding multilingual natural language representations of formal relations already contained in the Data Web. We use these natural language representations to find sentences on the Document Web that contain unseen instances of this relation between two entities. Second, we address the problem of data actuality by presenting a real-time data stream RDF extraction framework and utilize this framework to extract RDF from RSS news feeds. Third, we present a novel fact validation algorithm, based on natural language representations, able not only to verify or falsify a given triple, but also to find trustworthy sources for it on the Web and to estimate a time scope in which the triple holds true. The features used by this algorithm to determine whether a website is indeed trustworthy serve as provenance information and thereby help to create metadata for facts in the Data Web. Finally, we present a question answering system that uses the natural language representations to map natural language questions to formal SPARQL queries, allowing lay users to make use of the large amounts of data available on the Data Web to satisfy their information needs.
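
    As a small illustration of the last step the abstract describes, executing a formal SPARQL query against a public knowledge base once a question has been mapped to an entity and a property. The mapping itself is assumed here; this is generic SPARQLWrapper usage against DBpedia, not the thesis' own system.

        from SPARQLWrapper import SPARQLWrapper, JSON

        # Assume a pattern matcher mapped "Who is the mayor of Berlin?" to the
        # DBpedia property dbo:leader and the entity dbr:Berlin.
        endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
        endpoint.setQuery("""
            PREFIX dbo: <http://dbpedia.org/ontology/>
            PREFIX dbr: <http://dbpedia.org/resource/>
            SELECT ?leader WHERE { dbr:Berlin dbo:leader ?leader } LIMIT 5
        """)
        endpoint.setReturnFormat(JSON)

        results = endpoint.query().convert()
        for row in results["results"]["bindings"]:
            print(row["leader"]["value"])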

    A Semantic Web Based Search Engine with X3D Visualisation of Queries and Results

    Parts of this PhD have been published: Gkoutzis, Konstantinos, and Vladimir Geroimenko. "Moving from Folksonomies to Taxonomies: Using the Social Web and 3D to Build an Unlimited Semantic Ontology." Proceedings of the 2011 15th International Conference on Information Visualisation. IEEE Computer Society, 2011. The Semantic Web project has introduced new techniques for managing information. Data can now be organised more efficiently and in such a way that computers can take advantage of the relationships that characterise the given input to present more relevant output. Semantic Web based search engines can quickly educe exactly what needs to be found and retrieve it while avoiding information overload. Up until now, search engines have interacted with their users by asking them to look for words and phrases. We propose the creation of a new-generation Semantic Web search engine that will offer a visual interface for queries and results. To create such an engine, information input must be viewed not merely as keywords, but as specific concepts and objects which are all part of the same universal system. To make the manipulation of the interconnected visual objects simpler and more natural, 3D graphics are utilised, based on the X3D Web standard, allowing users to semantically synthesise their queries faster and in a more logical way, both for them and the computer.
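
    A loose sketch of the X3D side of such an interface: the Python below emits a minimal X3D scene with one labelled sphere per query concept. The layout and styling are invented for illustration and are not the thesis' design.

        import xml.etree.ElementTree as ET

        def concepts_to_x3d(concepts):
            """Build a minimal X3D scene: one labelled sphere per concept,
            spread along the x axis (layout is purely illustrative)."""
            x3d = ET.Element("X3D", profile="Interchange", version="3.3")
            scene = ET.SubElement(x3d, "Scene")
            for i, concept in enumerate(concepts):
                node = ET.SubElement(scene, "Transform", translation=f"{2.5 * i} 0 0")
                shape = ET.SubElement(node, "Shape")
                appearance = ET.SubElement(shape, "Appearance")
                ET.SubElement(appearance, "Material", diffuseColor="0.2 0.4 0.8")
                ET.SubElement(shape, "Sphere", radius="1")
                # Label above the sphere; X3D Text takes an MFString value.
                label = ET.SubElement(node, "Transform", translation="0 1.5 0")
                text_shape = ET.SubElement(label, "Shape")
                ET.SubElement(text_shape, "Text", string=f'"{concept}"')
            return ET.tostring(x3d, encoding="unicode")

        print(concepts_to_x3d(["Semantic Web", "X3D", "Search"]))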

    Defining interoperability standards: A case study of public health observatory websites

    The Association of Public Health Observatories (APHO) is a group of region-based health-information providers. Each PHO publishes health-related data for its specific region. Each observatory has taken a national lead in one or more key health areas, such as 'cancer' or 'obesity'. In 2003, a project was initiated to develop 'interoperability' between public health observatory websites, so the national resources published by one lead observatory could be found on the websites of every other PHO. The APHO interoperability project defined a set of requirements for each PHO: websites should comply with the current government data standards and provide web services to allow data to be searched in real time between different PHOs. This thesis describes the production of an interoperable website for the North East Public Health Observatory (NEPHO) and the problems faced during implementation in complying with the APHO interoperability requirements. The areas of interoperability, e-Government and metadata were investigated specifically for their suitability for NEPHO, and an action list of tasks necessary to achieve the project aims was drawn up. This project has resulted in the successful introduction of a new NEPHO website that complies with the APHO and e-Government requirements; however, interoperability with other organisations has been difficult to achieve. This thesis describes how other organisations approached the same APHO interoperability criteria and questions whether the national project governance could be improved.
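
    A hedged sketch of the real-time, cross-PHO search that the requirements call for, written as a federated query over REST endpoints. The endpoints, parameters, and response shape are all hypothetical; the actual APHO web services are defined by the project and are not reproduced here.

        import requests

        # Hypothetical observatory endpoints; the real services and their
        # parameters are specified by the APHO interoperability requirements.
        PHO_ENDPOINTS = [
            "https://nepho.example.org/api/search",
            "https://swpho.example.org/api/search",
        ]

        def federated_search(term, timeout=5):
            """Query each observatory's search web service in real time and
            merge the results, skipping observatories that fail to respond."""
            merged = []
            for url in PHO_ENDPOINTS:
                try:
                    resp = requests.get(url, params={"q": term}, timeout=timeout)
                    resp.raise_for_status()
                    merged.extend(resp.json().get("results", []))
                except requests.RequestException:
                    continue  # one PHO being down must not break the federated search
            return merged

        print(len(federated_search("obesity")))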