Automatic Extraction of Complex Web Data
A new wrapper induction algorithm, WTM, for generating rules that describe the general layout template of web pages is presented. WTM is designed mainly for use in a weblog crawling and indexing system. Most weblogs are maintained by content management systems and share a similar layout structure across all pages. In addition, they provide RSS feeds describing the latest entries, and these entries also appear on the weblog homepage in HTML format. WTM is built upon these two observations. It uses the RSS feed data to automatically label the corresponding HTML file (the weblog homepage) and induces general template rules from the labeled page. These rules can then be used to extract data from other pages with a similar layout template. WTM has been tested on selected weblogs, and the results are satisfactory.
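The RSS-based labeling step described above can be sketched roughly as follows: take the titles of the RSS entries and find the text nodes in the homepage HTML that match them, recording the tag path of each match as a candidate template rule. This is a minimal illustrative sketch, not WTM itself; the function and class names are assumptions.

```python
# Minimal sketch of RSS-driven labeling: RSS item titles are matched
# against text nodes in the weblog homepage, and the tag path of each
# match is recorded as a candidate template rule. Illustrative only.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

def rss_titles(rss_xml: str) -> list[str]:
    """Collect the <title> of every RSS <item>."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title", "").strip() for item in root.iter("item")]

class Labeler(HTMLParser):
    """Record the tag path of every text node matching an RSS title."""
    def __init__(self, titles):
        super().__init__()
        self.titles = {t.lower() for t in titles}
        self.path: list[str] = []
        self.labels: list[tuple[str, str]] = []  # (tag path, matched text)

    def handle_starttag(self, tag, attrs):
        self.path.append(tag)

    def handle_endtag(self, tag):
        if self.path and self.path[-1] == tag:
            self.path.pop()

    def handle_data(self, data):
        text = data.strip()
        if text.lower() in self.titles:
            self.labels.append(("/".join(self.path), text))

rss = "<rss><channel><item><title>Hello Web</title></item></channel></rss>"
html = "<html><body><div class='post'><h2><a>Hello Web</a></h2></div></body></html>"
labeler = Labeler(rss_titles(rss))
labeler.feed(html)
print(labeler.labels)  # [('html/body/div/h2/a', 'Hello Web')]
```

A real induction step would then generalize these recorded paths (e.g. dropping list indices) into rules that apply to other pages sharing the same layout.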
Comparative Mining of Multiple Web Data Source Contents with Object Oriented Model
Web contents usually contain different types of data embedded in different complex structures. Existing approaches for extracting data from the web are manual wrappers, supervised wrapper induction, or automatic data extraction. The WebOMiner system is an automatic extraction system that attempts to extract diverse heterogeneous web contents by modeling web sites as object-oriented schemas. The goal is to generate and integrate various web site object schemas for deeper comparative querying of historical and derived contents of Business-to-Customer (B2C) sites such as BestBuy and Future Shop. The current WebOMiner system generates and extracts from only one product list page (e.g., the computer page) of a B2C web site, and still needs to generate and extract from more comprehensive web site object schemas (e.g., those of Computer, Laptop, and Desktop products). It also does not yet handle historical aspects of data objects from different web pages. This thesis extends and advances the WebOMiner system to automatically generate a more comprehensive web site object schema, extract and mine structured web contents from different web pages based on similarity matching of objects' patterns, and store the extracted objects in a historical object-oriented data warehouse. The approaches used include similarity matching of DOM tree tag nodes for identifying data blocks and data regions, and automatic Non-Deterministic and Deterministic Finite Automata (NFA and DFA) for generating web site object schemas and extracting the content containing similar data objects. Experimental results show that our system is effective and able to extract and mine structured data tuples from different web sites with 79% recall and 100% precision. The average execution time of our system is 21.8 seconds.
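The idea of similarity matching over DOM tree tag nodes to find data regions can be illustrated with a small sketch: consecutive sibling subtrees whose tag sequences are nearly identical are grouped into one region (e.g. a repeated product listing). The threshold and helper names below are assumptions, not taken from the WebOMiner papers.

```python
# Illustrative sketch of tag-node similarity matching: group
# consecutive sibling subtrees with near-identical tag sequences
# into data regions. Threshold and names are assumptions.
from difflib import SequenceMatcher
from xml.etree import ElementTree as ET

def tag_sequence(node) -> list[str]:
    """Flatten a subtree into its preorder tag sequence."""
    return [node.tag] + [t for child in node for t in tag_sequence(child)]

def data_regions(parent, threshold: float = 0.8):
    """Group consecutive similar children of `parent` into regions
    (lists of child indices)."""
    children = list(parent)
    regions, current = [], [0]
    for i in range(1, len(children)):
        sim = SequenceMatcher(None, tag_sequence(children[i - 1]),
                              tag_sequence(children[i])).ratio()
        if sim >= threshold:
            current.append(i)
        else:
            if len(current) > 1:
                regions.append(current)
            current = [i]
    if len(current) > 1:
        regions.append(current)
    return regions

page = ET.fromstring(
    "<div>"
    "<div><span/><b/><a/></div>"
    "<div><span/><b/><a/></div>"
    "<div><span/><b/><a/></div>"
    "<p/>"
    "</div>")
print(data_regions(page))  # [[0, 1, 2]] -> one region of three repeated blocks
```

The three structurally identical `<div>` children form one data region, while the lone `<p>` is excluded; a schema-generation step (the NFA/DFA construction the abstract mentions) would then operate on such regions.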
Automatic supervised information extraction of structured web data
The overall purpose of this project is, in short, to create a system able to extract vital information from product web pages just as a human would: information such as the name of the product, its description, price tag, the company that produces it, and so on. At first glimpse, this may not seem extraordinary or technically difficult, since web scraping techniques have existed for a long time (like the Python library Beautiful Soup, an HTML parser released in 2004). But let us think for a second about what it actually means to be able to extract desired information from any given web source: the way information is displayed can be extremely varied, not only visually but also semantically. For instance, some hotel booking web pages display all prices for the different room types at once, while sites like Amazon present the main product in detail and then smaller product recommendations further down the page, the latter being the preferred way of displaying assets for most retail companies. And each site has its own styling and search engine. With the above said, the task of mining valuable data from the web no longer sounds as easy as it first seemed. Hence the purpose of this project is to shine some light on the problem of Automatic Supervised Information Extraction of Structured Web Data.

It is important to ask whether developing such a solution is really valuable at all. Such an endeavour, in both time and computing resources, should lead to a useful end result, at least on paper, to justify it. The opinion of this author is that it does lead to a potentially valuable result. The targeted extraction of publicly available, consumer-oriented content at large scale in an accurate, reliable and future-proof manner could provide an incredibly useful and large amount of data. This data, if kept updated, could create endless opportunities for Business Intelligence, although exactly which ones is beyond the scope of this work. A simple metaphor explains the potential value of this work: if an oil company were told where all the oil reserves on the planet are, it would still need to invest in machinery, workers and time to successfully exploit them, but half of the job would already have been done.
As the reader will see in this work, the issue is tackled by building a somewhat complex architecture that ends in an Artificial Neural Network. A quick overview of that architecture is as follows: first, find the URLs inside a given site that lead to the product pages containing the desired data (like URLs that lead to "action figure" products on the site ebay.com); second, for each URL, extract its HTML, take a screenshot of the page, and store this data in a suitable and scalable fashion; third, label the data that will be fed to the NN; fourth, prepare the aforementioned data to be input into the NN; fifth, train the NN; and sixth, deploy the NN to make [hopefully accurate] predictions.
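The six-step architecture above can be sketched as a pipeline skeleton. Everything here is illustrative scaffolding under assumed names, with fetching and labeling stubbed out; the abstract does not specify these interfaces.

```python
# Skeleton of the six-step pipeline described in the abstract.
# All names are illustrative assumptions; fetching and labeling
# are stubbed rather than implemented.
from dataclasses import dataclass, field

@dataclass
class PageSample:
    url: str
    html: str = ""
    screenshot: bytes = b""
    labels: dict = field(default_factory=dict)  # field name -> value

def discover_product_urls(site: str, query: str) -> list[str]:
    """Step 1: find product-page URLs inside a site (stubbed)."""
    return [f"{site}/item/{i}?q={query}" for i in range(3)]

def fetch(url: str) -> PageSample:
    """Step 2: download the HTML and render a screenshot (stubbed;
    a real system might use a headless browser for the screenshot)."""
    return PageSample(url=url, html="<html>...</html>", screenshot=b"")

def label(sample: PageSample) -> PageSample:
    """Step 3: attach ground-truth labels for supervised training
    (stubbed with placeholder values)."""
    sample.labels = {"title": "?", "price": "?"}
    return sample

def run_pipeline(site: str, query: str) -> list[PageSample]:
    # Steps 4-6 (feature preparation, NN training, deployment)
    # would consume the labelled samples produced here.
    return [label(fetch(u)) for u in discover_product_urls(site, query)]

samples = run_pipeline("https://example.com", "action+figure")
print(len(samples))  # 3
```

The point of the sketch is the data flow: each stage produces inputs for the next, and the labelled `PageSample` objects are what the supervised training stage would be built on.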
Automatic Wrapper Adaptation by Tree Edit Distance Matching
Information distributed through the Web keeps growing faster day by day, and for this reason several techniques for extracting Web data have been suggested in recent years. Extraction tasks are often performed by so-called wrappers: procedures that extract information from Web pages, e.g. by implementing logic-based techniques. Many fields of application today require a strong degree of robustness from wrappers, so as not to compromise information assets or the reliability of the extracted data.
Unfortunately, wrappers may fail to extract data from a Web page if its structure changes, sometimes even slightly, which calls for techniques that automatically adapt the wrapper to the new structure of the page in case of failure. In this work we present a novel approach to automatic wrapper adaptation based on measuring the similarity of trees through improved tree edit distance matching techniques.
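To make the tree edit distance idea concrete, here is a simplified top-down variant (not the improved algorithm the abstract refers to): the cost of turning one ordered tree into another is a relabeling cost at the root plus a sequence edit distance over the child forests, where deleting or inserting a subtree costs its size. All names are illustrative.

```python
# Simplified top-down tree edit distance, for illustration only
# (not the improved matching technique the paper presents).
# Relabeling a node costs 1; deleting or inserting a whole
# subtree costs its node count.
class Node:
    def __init__(self, label, *children):
        self.label, self.children = label, children

def size(t) -> int:
    return 1 + sum(size(c) for c in t.children)

def dist(a, b) -> int:
    """Cost of editing tree a into tree b."""
    relabel = 0 if a.label == b.label else 1
    return relabel + forest_dist(a.children, b.children)

def forest_dist(xs, ys) -> int:
    """Sequence edit distance over the two child forests."""
    m, n = len(xs), len(ys)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + size(xs[i - 1])
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + size(ys[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + size(xs[i - 1]),      # delete subtree
                          d[i][j - 1] + size(ys[j - 1]),      # insert subtree
                          d[i - 1][j - 1] + dist(xs[i - 1], ys[j - 1]))
    return d[m][n]

old = Node("div", Node("h2"), Node("span"))
new = Node("div", Node("h2"), Node("p"))   # <span> relabelled to <p>
print(dist(old, new))  # 1
```

A wrapper-adaptation system can use such a distance to find, in the changed page, the subtree closest to the one the original wrapper pointed at, and re-anchor the extraction rule there.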
Web Data Extraction, Applications and Techniques: A Survey
Web Data Extraction is an important problem that has been studied by means of
different scientific tools and in a broad range of applications. Many
approaches to extracting data from the Web have been designed to solve specific
problems and operate in ad-hoc domains. Other approaches, instead, heavily
reuse techniques and algorithms developed in the field of Information
Extraction.
This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool to perform data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential for cross-fertilization, i.e., the possibility of reusing Web Data Extraction techniques originally designed for a given domain in other domains.
Generating and visualizing a soccer knowledge base
This demo abstract describes the SmartWeb Ontology-based Information Extraction System (SOBIE). A key feature of SOBIE is that all information is extracted and stored with respect to the SmartWeb ontology. In this way, other components of the system, which use the same ontology, can access this information in a straightforward way. We will show how information extracted by SOBIE is visualized within its original context, thus enhancing the browsing experience of the end user.
Extracting ontologies from software documentation: a semi-automatic method and its evaluation
Rich and generic ontologies about web service functionalities are a prerequisite for performing complex reasoning tasks with web service descriptions. However, their acquisition is time-consuming and constrained by the small number of web services available in certain domains. As a solution, we describe a semi-automatic method to extract such ontologies from software documentation, motivated by the observation that web services reflect the functionality of their underlying implementation. Further, we report on fine-tuning the extraction process by using a multi-stage evaluation method.
An infrastructure for building semantic web portals
In this paper, we present our KMi semantic web portal infrastructure, which supports two important tasks of semantic web portals, namely metadata extraction and data querying. Central to our infrastructure are three components: i) an automated metadata extraction tool, ASDI, which supports the extraction of high-quality metadata from heterogeneous sources, ii) an ontology-driven question answering tool, AquaLog, which makes use of the domain-specific ontology and the semantic metadata extracted by ASDI to answer questions in natural language format, and iii) a semantic search engine, which enhances traditional text-based searching by making use of the underlying ontologies and the extracted metadata. A semantic web portal application has been built, which illustrates the usage of this infrastructure.