5,864 research outputs found

    Ontology Based Approach for Services Information Discovery using Hybrid Self Adaptive Semantic Focused Crawler

    Focused crawling aims to retrieve pages that are relevant to a predefined set of topics. Since an ontology is a well-formed knowledge representation, ontology-based focused crawling approaches have come into research. Crawling is one of the essential techniques for building knowledge repositories. The purpose of a semantic focused crawler is to automatically discover, annotate, and classify service information using Semantic Web technologies. Here, a framework for a hybrid self-adaptive semantic focused crawler (HSASF crawler) is presented, with the goal of effectively discovering and organizing service information over the Internet while addressing three essential issues. A semi-supervised method is designed to automatically select the optimal threshold values for each concept, maintaining optimal performance without being constrained by the size of the training data set. DOI: 10.17762/ijritcc2321-8169.15072
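The ontology-based focused crawling described above can be illustrated with a minimal sketch. All names, term weights, and the scoring scheme below are illustrative assumptions; the paper's actual HSASF crawler learns its concept thresholds semi-supervised, which is not reproduced here:

```python
import heapq
import re

# Hypothetical concept ontology: each concept maps to weighted terms.
CONCEPTS = {
    "transport_service": {"freight": 0.9, "shipping": 0.8, "logistics": 0.7},
    "repair_service": {"repair": 0.9, "maintenance": 0.8, "warranty": 0.5},
}
# Per-concept thresholds (fixed here; HSASF selects these adaptively).
THRESHOLDS = {"transport_service": 0.5, "repair_service": 0.5}

def relevance(text, concept):
    """Average weight of the concept's terms that occur in the text."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    weights = CONCEPTS[concept]
    hits = [w for term, w in weights.items() if term in tokens]
    return sum(hits) / len(weights)

def classify(text):
    """Assign the page to every concept whose score clears its threshold."""
    return [c for c in CONCEPTS if relevance(text, c) >= THRESHOLDS[c]]

def crawl(seed_urls, fetch):
    """Best-first crawl: frontier ordered by negated relevance score."""
    frontier = [(-1.0, url) for url in seed_urls]
    heapq.heapify(frontier)
    seen, annotated = set(), {}
    while frontier:
        _, url = heapq.heappop(frontier)
        if url in seen:
            continue
        seen.add(url)
        text, links = fetch(url)   # fetch() is supplied by the caller
        labels = classify(text)
        if labels:                 # only relevant pages are kept and expanded
            annotated[url] = labels
            # Heuristic: score outlinks by the parent page's best relevance.
            score = max(relevance(text, c) for c in CONCEPTS)
            for link in links:
                heapq.heappush(frontier, (-score, link))
    return annotated
```

A caller would plug in a real `fetch(url)` that downloads a page and returns its text and outlinks; irrelevant pages are neither annotated nor expanded, which is what keeps the crawl focused.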

    Ontology Driven Web Extraction from Semi-structured and Unstructured Data for B2B Market Analysis

    The Market Blended Insight project has the objective of improving UK business-to-business marketing performance using semantic web technologies. In this project, we are implementing an ontology-driven web extraction and translation framework to supplement our backend triple store of UK companies, people, and geographical information. It deals with both semi-structured data and unstructured text on the web, annotating and then translating the extracted data according to the backend schema.
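The translation step described above — mapping extracted data onto a backend schema — can be sketched roughly as follows. The predicate names and field names are hypothetical placeholders, not the project's actual schema:

```python
# Hypothetical mapping from extracted fields to backend-schema predicates.
SCHEMA_MAP = {
    "company_name": "mbi:hasName",
    "postcode": "mbi:hasPostcode",
    "sic_code": "mbi:hasSICCode",
}

def to_triples(entity_uri, extracted):
    """Translate an extracted record into (subject, predicate, object) triples."""
    triples = []
    for field, value in extracted.items():
        predicate = SCHEMA_MAP.get(field)
        if predicate is not None:   # drop fields the schema does not cover
            triples.append((entity_uri, predicate, value))
    return triples

record = {"company_name": "Acme Ltd", "postcode": "S1 2AB", "founded": "1999"}
triples = to_triples("mbi:company/acme", record)
```

Fields without a schema mapping (here, `founded`) are silently dropped; a production pipeline would more likely log or queue them for schema extension.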

    Business Ontology for Evaluating Corporate Social Responsibility

    This paper presents a software solution that is developed to automatically classify companies by taking into account their level of social responsibility. The application is based on ontologies and on intelligent agents. In order to obtain the data needed to evaluate companies, we developed a web crawling module that analyzes the company’s website and the documents that are available online such as social responsibility report, mission statement, employment structure, etc. Based on a predefined CSR ontology, the web crawling module extracts the terms that are linked to corporate social responsibility. By taking into account the extracted qualitative data, an intelligent agent, previously trained on a set of companies, computes the qualitative values, which are then included in the classification model based on neural networks. The proposed ontology takes into consideration the guidelines proposed by the “ISO 26000 Standard for Social Responsibility”. Having this model, and being aware of the positive relationship between Corporate Social Responsibility and financial performance, an overall perspective on each company’s activity can be configured, this being useful not only to the company’s creditors, auditors, and stockholders, but also to its consumers.
    Keywords: corporate social responsibility, ISO 26000 Standard for Social Responsibility, ontology, web crawling, intelligent agent, corporate performance, POS tagging, opinion mining, sentiment analysis
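The term-extraction step in the abstract above can be sketched in a simplified form. The CSR vocabulary and the sample document are made-up placeholders; the actual system derives its terms from an ISO 26000-based ontology:

```python
import re
from collections import Counter

# Hypothetical fragment of a CSR vocabulary mapping terms to concepts.
CSR_TERMS = {
    "human rights": "labour_practices",
    "fair wages": "labour_practices",
    "carbon footprint": "environment",
    "recycling": "environment",
    "community involvement": "community",
}

def extract_csr_features(document):
    """Count occurrences of CSR terms in a document, grouped by concept."""
    text = document.lower()
    counts = Counter()
    for term, concept in CSR_TERMS.items():
        counts[concept] += len(re.findall(re.escape(term), text))
    return dict(counts)

report = ("Our recycling programme reduced our carbon footprint, "
          "and we invest in community involvement and fair wages.")
features = extract_csr_features(report)
```

The resulting per-concept counts are the kind of qualitative features that could then feed the neural-network classification model the paper describes.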

    Ontology Population for Open-Source Intelligence

    We present an approach based on GATE (General Architecture for Text Engineering) for the automatic population of ontologies from text documents. We describe some experimental results, which are encouraging in terms of correctly extracted instances of the ontology. We then focus on one phase of our pipeline and discuss a variant thereof, which aims at reducing the manual effort needed to generate the predefined dictionaries used in document annotation. Our additional experiments show promising results in this case as well.

    Ontology population for open-source intelligence: A GATE-based solution

    Open-Source INTelligence is intelligence based on publicly available sources such as news sites, blogs, forums, etc. The Web is the primary source of information, but once data are crawled, they need to be interpreted and structured. Ontologies may play a crucial role in this process, but because of the vast amount of documents available, automatic mechanisms for their population are needed, starting from the crawled text. This paper presents an approach for the automatic population of predefined ontologies with data extracted from text and discusses the design and realization of a pipeline based on the General Architecture for Text Engineering system, which is of interest to both researchers and practitioners in the field. Some experimental results that are encouraging in terms of correctly extracted instances of the ontology are also reported. Furthermore, the paper describes an alternative approach and provides additional experiments for one of the phases of our pipeline, which requires the use of predefined dictionaries for relevant entities. Through this variant, the manual workload required in this phase was reduced, while still obtaining promising results.
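The dictionary-based annotation phase mentioned in the two abstracts above can be sketched in a simplified form. The entity lists and sample text are illustrative only; GATE's actual ANNIE gazetteer and its annotation model are considerably richer:

```python
import re

# Hypothetical gazetteers mapping known surface forms to ontology classes.
GAZETTEERS = {
    "Person": {"john smith", "maria rossi"},
    "Organization": {"acme corp", "unicredit"},
    "Location": {"rome", "berlin"},
}

def annotate(text):
    """Return (surface form, ontology class, offset) for each gazetteer hit."""
    annotations = []
    lowered = text.lower()
    for onto_class, entries in GAZETTEERS.items():
        for entry in entries:
            for match in re.finditer(re.escape(entry), lowered):
                annotations.append((text[match.start():match.end()],
                                    onto_class, match.start()))
    return sorted(annotations, key=lambda a: a[2])

def populate(annotations):
    """Turn annotations into ontology instances: class -> set of names."""
    instances = {}
    for surface, onto_class, _ in annotations:
        instances.setdefault(onto_class, set()).add(surface.lower())
    return instances

anns = annotate("John Smith joined Acme Corp in Berlin.")
instances = populate(anns)
```

The variant the papers discuss — reducing the manual effort of building these dictionaries — would replace the hand-written `GAZETTEERS` entries with automatically generated ones; the population step itself would stay the same.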

    iCrawl: Improving the Freshness of Web Collections by Integrating Social Web and Focused Web Crawling

    Researchers in the Digital Humanities and journalists need to monitor, collect, and analyze fresh online content regarding current events, such as the Ebola outbreak or the Ukraine crisis, on demand. However, existing focused crawling approaches only consider topical aspects while ignoring temporal aspects and therefore cannot achieve thematically coherent and fresh Web collections. Social Media in particular provide a rich source of fresh content, which is not used by state-of-the-art focused crawlers. In this paper we address the issue of collecting fresh and relevant Web and Social Web content for a topic of interest through seamless integration of Web and Social Media in a novel integrated focused crawler. The crawler collects Web and Social Media content in a single system and exploits the stream of fresh Social Media content to guide the crawl.
    Comment: Published in the Proceedings of the 15th ACM/IEEE-CS Joint Conference on Digital Libraries 201
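The integration idea above — letting fresh Social Media posts guide the Web crawl — can be sketched as follows. The freshness-decay formula, data shapes, and URLs are assumptions made for illustration, not iCrawl's actual design:

```python
import heapq
import time

def freshness_score(relevance, posted_at, now, half_life=3600.0):
    """Decay topical relevance by post age with an exponential half-life."""
    age = max(0.0, now - posted_at)
    return relevance * 0.5 ** (age / half_life)

def build_frontier(web_seeds, social_posts, now):
    """Merge static Web seeds with URLs mentioned in fresh social posts.

    web_seeds:    list of (url, relevance)
    social_posts: list of (url, relevance, posted_at_unix)
    Returns a max-priority frontier as a min-heap of (-score, url).
    """
    frontier = []
    for url, rel in web_seeds:
        heapq.heappush(frontier, (-rel, url))
    for url, rel, posted_at in social_posts:
        heapq.heappush(frontier, (-freshness_score(rel, posted_at, now), url))
    return frontier

now = time.time()
frontier = build_frontier(
    web_seeds=[("http://example.org/old-report", 0.6)],
    social_posts=[("http://example.org/breaking", 0.9, now - 60)],
    now=now,
)
best = heapq.heappop(frontier)[1]   # freshest relevant URL is crawled first
```

A recently posted, highly relevant social URL outranks an older static seed, which captures the paper's point that the social stream supplies temporal signals a purely topical crawler lacks.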