
    OILSW: A New System for Ontology Instance Learning

    The Semantic Web is expected to extend the current Web by providing structured content through the addition of annotations. Because of the large number of pages on the Web, manual annotation is very time consuming, so an automatic or semi-automatic method for turning the current Web into the Semantic Web would be very helpful. In a specific domain, Web pages are instances of that domain's ontology, so semi-automatic tools are needed to find these instances and fill in their attributes. In this article, we propose a new system named OILSW for learning the instances of an ontology from the Web pages of websites in a common domain. This system is the first comprehensive system for automatically populating an ontology from websites. Using this system, any website in a given domain can be automatically annotated.

    Ontology Driven Web Extraction from Semi-structured and Unstructured Data for B2B Market Analysis

    The Market Blended Insight project has the objective of improving UK business-to-business marketing performance using semantic web technologies. In this project, we are implementing an ontology-driven web extraction and translation framework to supplement our backend triple store of UK companies, people, and geographical information. It deals with both semi-structured data and unstructured text on the web, annotating and then translating the extracted data according to the backend schema.

    Exploring The Value Of Folksonomies For Creating Semantic Metadata

    Finding good keywords to describe resources is an ongoing problem: typically we select such words manually from a thesaurus of terms, or they are created using automatic keyword extraction techniques. Folksonomies are an increasingly well-populated source of unstructured tags describing web resources. This paper explores the value of folksonomy tags as a potential source of keyword metadata by examining the relationship between folksonomies, community-produced annotations, and keywords extracted by machines. The experiment was carried out in two ways: subjectively, by asking two human indexers to evaluate the quality of the keywords generated by both systems; and automatically, by measuring the percentage of overlap between the folksonomy set and the machine-generated keyword set. The results of this experiment show that the folksonomy tags agree more closely with the human-generated keywords than those generated automatically. The results also showed that the trained indexers preferred the semantics of folksonomy tags over keywords extracted automatically. These results can be considered evidence of the strong relationship between folksonomies and the human indexer's mindset, demonstrating that folksonomies used in the del.icio.us bookmarking service are a potential source for generating semantic metadata to annotate web resources.
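    The automatic part of the evaluation, an overlap percentage between two keyword sets, can be sketched as follows. This is only an illustration of the metric described in the abstract, not the paper's actual code; the function name and example tags are invented.

```python
def tag_overlap(folksonomy_tags, machine_keywords):
    """Percentage of folksonomy tags that also appear in the machine-generated set."""
    folk = {t.strip().lower() for t in folksonomy_tags}
    auto = {k.strip().lower() for k in machine_keywords}
    if not folk:
        return 0.0
    return 100.0 * len(folk & auto) / len(folk)

# Example: del.icio.us-style tags vs. keywords from an automatic extractor
folk = ["semantic web", "metadata", "tagging", "folksonomy"]
auto = ["metadata", "semantic web", "keyword extraction"]
print(round(tag_overlap(folk, auto), 1))  # 50.0
```

    Normalising case and whitespace before intersecting avoids counting trivial mismatches such as "Metadata" vs. "metadata" as disagreement.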

    A Semantic Framework for the Analysis of Privacy Policies


    Business Ontology for Evaluating Corporate Social Responsibility

    This paper presents a software solution that is developed to automatically classify companies by taking into account their level of social responsibility. The application is based on ontologies and on intelligent agents. In order to obtain the data needed to evaluate companies, we developed a web crawling module that analyzes the company's website and the documents that are available online, such as the social responsibility report, mission statement, employment structure, etc. Based on a predefined CSR ontology, the web crawling module extracts the terms that are linked to corporate social responsibility. Taking into account the extracted qualitative data, an intelligent agent, previously trained on a set of companies, computes the qualitative values, which are then included in the classification model based on neural networks. The proposed ontology takes into consideration the guidelines proposed by the "ISO 26000 Standard for Social Responsibility". With this model, and given the positive relationship between corporate social responsibility and financial performance, an overall perspective on each company's activity can be configured, which is useful not only to the company's creditors, auditors, and stockholders, but also to its consumers.
    Keywords: corporate social responsibility, ISO 26000 Standard for Social Responsibility, ontology, web crawling, intelligent agent, corporate performance, POS tagging, opinion mining, sentiment analysis
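    The term-extraction step of such a crawling module can be sketched as matching page text against a list of terms drawn from the CSR ontology. The term list below is a hypothetical subset loosely inspired by ISO 26000 core subjects; the function name and example page are invented for illustration.

```python
import re
from collections import Counter

# Hypothetical subset of terms from a predefined CSR ontology
CSR_TERMS = ["human rights", "labour practices", "environment",
             "fair operating practices", "community involvement"]

def extract_csr_terms(page_text):
    """Count occurrences of ontology terms in a crawled page's text."""
    text = page_text.lower()
    counts = Counter()
    for term in CSR_TERMS:
        counts[term] = len(re.findall(re.escape(term), text))
    # Keep only the terms that actually occur on the page
    return {t: n for t, n in counts.items() if n > 0}

page = ("Our report covers the environment, labour practices and "
        "community involvement, with a focus on the environment.")
print(extract_csr_terms(page))
```

    In practice such counts would feed the trained agent as qualitative features; exact-string matching is the simplest possible matcher, and the paper's POS tagging and sentiment analysis steps would refine it considerably.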

    Information Extraction in Illicit Domains

    Extracting useful entities and attribute values from illicit domains such as human trafficking is a challenging problem with the potential for widespread social impact. Such domains employ atypical language models, have 'long tails', and suffer from the problem of concept drift. In this paper, we propose a lightweight, feature-agnostic Information Extraction (IE) paradigm specifically designed for such domains. Our approach uses raw, unlabeled text from an initial corpus, and a few (12-120) seed annotations per domain-specific attribute, to learn robust IE models for unobserved pages and websites. Empirically, we demonstrate that our approach can outperform feature-centric Conditional Random Field baselines by over 18% F-measure on five annotated sets of real-world human trafficking datasets in both low-supervision and high-supervision settings. We also show that our approach is demonstrably robust to concept drift and can be efficiently bootstrapped even in a serial computing environment.
    Comment: 10 pages, ACM WWW 201
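    The seed-annotation idea can be illustrated in miniature: score a candidate token for an attribute by comparing its surrounding context to the contexts in which the seed annotations appeared. This toy sketch uses bag-of-words cosine similarity; the paper's actual models are far richer, and all function names, sentences, and the "city" attribute below are invented.

```python
import math
from collections import Counter

def context(tokens, i, window=2):
    """Bag of words around position i, excluding the token itself."""
    lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
    return Counter(tokens[lo:i] + tokens[i + 1:hi])

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

# Two seed annotations: contexts where a "city" attribute value appeared
seeds = [context("available in chicago tonight now".split(), 2),
         context("arrived in dallas this week".split(), 2)]

def score_candidate(tokens, i):
    """Similarity of a candidate token's context to the closest seed context."""
    return max(cosine(context(tokens, i), s) for s in seeds)

toks = "staying in boston tonight only".split()
print(round(score_candidate(toks, 2), 2))  # 0.5
```

    A handful of seeds per attribute is enough to rank candidates this way, which is the intuition behind learning from only 12-120 annotations.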

    Online event-based conservation documentation: A case study from the IIC website

    There is a wealth of conservation-related resources published online on institutional and personal websites. There is value in searching across these websites, but this is currently impossible because the published data do not conform to any universal standard. This paper begins with a review of the types of classifications employed for conservation content on several conservation websites. It continues with an analysis of these classifications and identifies some of their limitations, which stem from the lack of a conceptual basis for the classification terms used. The paper then draws parallels with similar problems in other professional fields and investigates the technologies used to resolve them. Solutions developed in the fields of computer science and knowledge organization are then described. The paper continues with a survey of two important resources in cultural heritage, the ICOM-CIDOC-CRM and the Getty vocabularies, and explains how these resources can be combined in the field of conservation documentation to support the implementation of a common publication framework across different resources. A case study for the proposed implementation is then presented, based on recent work on the IIC website. The paper concludes with a summary of the benefits of the recommended approach. An appendix with a selection of classification terms offering reasonable coverage for conservation content is included.

    Initiating organizational memories using ontology-based network analysis as a bootstrapping tool

    An important problem for many kinds of knowledge systems is their initial set-up. It is difficult to choose the right information to include in such systems, yet choosing the right information is a prerequisite for maximizing their uptake and relevance. To tackle this problem, most developers adopt heavyweight solutions and rely on faithful, continuous interaction with users to create and improve content. In this paper, we explore the use of an automatic, lightweight ontology-based solution to the bootstrapping problem, in which domain-describing ontologies are analysed to uncover significant yet implicit relationships between instances. We illustrate the approach by using such an analysis to provide content automatically for the initial set-up of an organizational memory.
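    One simple way to surface implicit relationships from an ontology's explicit relations is to look for pairs of instances that share neighbours without being directly linked. The graph, names, and threshold below are invented for illustration, not taken from the paper.

```python
from itertools import combinations

# Hypothetical instance graph derived from an ontology's explicit relations:
# each instance maps to the set of entities it is directly related to.
relations = {
    "alice": {"project_x", "ontology_team"},
    "bob":   {"project_x", "web_group"},
    "carol": {"web_group", "ontology_team"},
}

def implicit_links(graph, min_shared=1):
    """Pairs of instances with no explicit link but enough shared neighbours."""
    links = {}
    for a, b in combinations(sorted(graph), 2):
        shared = graph[a] & graph[b]
        if len(shared) >= min_shared and b not in graph[a]:
            links[(a, b)] = shared
    return links

print(implicit_links(relations))
```

    Ranking such pairs by the number (or ontological significance) of shared neighbours gives a first cut of content for seeding an organizational memory before any user interaction has taken place.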