BlogForever D2.6: Data Extraction Methodology
This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction, and extends the inquiry through an experiment that demonstrates the effectiveness and feasibility of some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities of extracting the semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report then proposes a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform.
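The RSS-plus-HTML idea above can be sketched as follows: use an RSS item's summary as a weak label and pick the HTML block whose text best matches it. This is a minimal illustration, not the BlogForever implementation; the class names and the sample markup are assumptions for the example.

```python
# Hypothetical sketch: locate the main content block of a blog post by
# matching each HTML block's text against the post's RSS summary.
# Names and sample data are illustrative, not from the report.
from difflib import SequenceMatcher
from html.parser import HTMLParser

class BlockCollector(HTMLParser):
    """Collects the text contained in each <div> and <p> block."""
    def __init__(self):
        super().__init__()
        self.blocks = []   # finished (tag, text) pairs
        self._stack = []   # open blocks still being accumulated

    def handle_starttag(self, tag, attrs):
        if tag in ("div", "p"):
            self._stack.append([tag, []])

    def handle_data(self, data):
        for block in self._stack:
            block[1].append(data)

    def handle_endtag(self, tag):
        if self._stack and self._stack[-1][0] == tag:
            t, parts = self._stack.pop()
            self.blocks.append((t, " ".join("".join(parts).split())))

def best_content_block(html, rss_summary):
    """Return the block whose text is most similar to the RSS summary."""
    parser = BlockCollector()
    parser.feed(html)
    return max(parser.blocks,
               key=lambda b: SequenceMatcher(None, b[1], rss_summary).ratio())

html = """
<div id="nav">Home | About | Archive</div>
<div id="post"><p>Blog preservation needs robust data extraction.</p></div>
<div id="footer">Copyright 2012</div>
"""
summary = "Blog preservation needs robust data extraction."
print(best_content_block(html, summary)[1])
```

Because the RSS summary supervises the match, no site-specific template or hand-labelled training data is needed, which is the appeal of the unsupervised approach the report describes.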
Towards Building a Knowledge Base of Monetary Transactions from a News Collection
We address the problem of extracting structured representations of economic
events from a large corpus of news articles, using a combination of natural
language processing and machine learning techniques. The developed techniques
allow for semi-automatic population of a financial knowledge base, which, in
turn, may be used to support a range of data mining and exploration tasks. The
key challenge we face in this domain is that the same event is often reported
multiple times, with varying correctness of details. We address this challenge
by first collecting all information pertinent to a given event from the entire
corpus, then considering all possible representations of the event, and
finally using a supervised learning method to rank these representations by
their associated confidence scores. A key innovative element of our approach is
that it jointly extracts and stores all attributes of the event as a single
representation (quintuple). Using a purpose-built test set we demonstrate that
our supervised learning approach can achieve 25% improvement in F1-score over
baseline methods that consider the earliest, the latest or the most frequent
reporting of the event.

Comment: Proceedings of the 17th ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL '17), 2017
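The collect-enumerate-rank pipeline described in this abstract can be sketched as below. The quintuple slots, the sample reports, and the frequency-based score are all illustrative assumptions; the paper learns its ranker from labelled data rather than using a fixed rule.

```python
# Illustrative sketch: gather all reports of one event, enumerate candidate
# quintuples from the reported attribute values, score each candidate, and
# rank. Slot names, data, and the scoring rule are stand-ins, not the
# paper's learned model.
from collections import Counter
from itertools import product

ATTRS = ("buyer", "seller", "target", "amount", "date")  # quintuple slots

# Three reports of the same acquisition, with conflicting details.
reports = [
    {"buyer": "AcmeCo", "seller": "Initech", "target": "WidgetDiv",
     "amount": "120M", "date": "2017-03-01"},
    {"buyer": "AcmeCo", "seller": "Initech", "target": "WidgetDiv",
     "amount": "125M", "date": "2017-03-01"},
    {"buyer": "Acme Corp", "seller": "Initech", "target": "WidgetDiv",
     "amount": "125M", "date": "2017-03-02"},
]

def rank_candidates(reports):
    # Per-slot frequency of each reported value across the whole corpus.
    counts = {a: Counter(r[a] for r in reports) for a in ATTRS}
    candidates = product(*(counts[a] for a in ATTRS))
    # Score a candidate by how often each chosen value was reported.
    def score(cand):
        return sum(counts[a][v] for a, v in zip(ATTRS, cand))
    return sorted(candidates, key=score, reverse=True)

best = rank_candidates(reports)[0]
print(dict(zip(ATTRS, best)))
```

Note how the top-ranked quintuple mixes values from different reports; that is exactly what the earliest/latest/most-frequent baselines cannot do, since they commit to a single report.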
Computer-based library or computer-based learning?
Traditionally, libraries have played the role of repository of published information resources and, more recently,
gateway to online subscription databases. The library online catalog and digital library interface serve an
intermediary function to help users locate information resources available through the library. With competition from Web search engines and Web portals of various kinds available for free, the library has to step up to play a more active role as guide and coach to help users make use of information resources for learning or to accomplish particular tasks. It is no longer sufficient for computer-based library systems to provide just search and access functions. They must provide the functionality and environment to support learning and become computer-based learning systems. This paper examines the kind of learning support that can be incorporated in library online catalogs and digital libraries, including 1) enhanced support for information browsing and synthesis through linking by shared meta-data, references and concepts; 2) visualization of related information; 3) adoption of Library 2.0 and social technologies; 4) adoption of Library 3.0 technologies, including intelligent processing and text mining.
Approach to Identify Internal Best Practices in a Software Organization.
Current approaches to software process improvement (SPI) in software organizations are based on identifying gaps by comparing the way organizations work with the practices contained in reference models. These gaps are then targeted for software process improvement. This paper presents an approach for identifying best practices within the organization, which is considered a key element in comparing the way software organizations work with the reference models. These practices are then complemented with practices contained in the models, depending on the organization's business goals.
Comparing knowledge sources for nominal anaphora resolution
We compare two ways of obtaining lexical knowledge for antecedent selection in other-anaphora
and definite noun phrase coreference. Specifically, we compare an algorithm that relies on links
encoded in the manually created lexical hierarchy WordNet and an algorithm that mines corpora
by means of shallow lexico-semantic patterns. As corpora we use the British National
Corpus (BNC), as well as the Web, which has not been previously used for this task. Our
results show that (a) the knowledge encoded in WordNet is often insufficient, especially for
anaphor-antecedent relations that exploit subjective or context-dependent knowledge; (b) for
other-anaphora, the Web-based method outperforms the WordNet-based method; (c) for definite
NP coreference, the Web-based method yields results comparable to those obtained using
WordNet over the whole dataset and outperforms the WordNet-based method on subsets of the
dataset; (d) in both case studies, the BNC-based method is worse than the other methods because
of data sparseness. Thus, in our studies, the Web-based method alleviated the lexical knowledge
gap often encountered in anaphora resolution, and handled examples with context-dependent relations
between anaphor and antecedent. Because it is inexpensive and needs no hand-modelling
of lexical knowledge, the Web is a promising knowledge source to integrate into anaphora resolution systems.
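The shallow lexico-semantic pattern idea can be sketched as follows: for each candidate antecedent, instantiate a pattern such as "X and other Ys" and count its matches. The paper queries the Web and the BNC; here a tiny in-memory corpus stands in, and the exact pattern wording is an illustrative assumption.

```python
# Minimal sketch of pattern-based antecedent selection for other-anaphora:
# count corpus matches of "<candidate> and other <anaphor head>" and pick
# the candidate with the most hits. Corpus and pattern are illustrative.
import re

corpus = """
France and other European countries signed the treaty.
Analysts in France and other European countries agreed.
Profits and other figures were revised upward.
"""

def pattern_count(candidate, anaphor_head, corpus):
    # e.g. instantiates "France and other European countries"
    pattern = rf"\b{re.escape(candidate)} and other {re.escape(anaphor_head)}\b"
    return len(re.findall(pattern, corpus, flags=re.IGNORECASE))

def select_antecedent(candidates, anaphor_head, corpus):
    # Choose the candidate whose instantiated pattern occurs most often.
    return max(candidates, key=lambda c: pattern_count(c, anaphor_head, corpus))

cands = ["France", "profits", "the treaty"]
print(select_antecedent(cands, "European countries", corpus))
```

On a corpus as small as the BNC such counts are often zero for all candidates, which is the data-sparseness problem reported in finding (d); the Web's scale is what makes the counts informative.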
FrameNet CNL: a Knowledge Representation and Information Extraction Language
The paper presents a FrameNet-based information extraction and knowledge
representation framework, called FrameNet-CNL. The framework is used on natural
language documents and represents the extracted knowledge in a tailor-made
Frame-ontology from which unambiguous FrameNet-CNL paraphrase text can be
generated automatically in multiple languages. This approach brings together
the fields of information extraction and CNL, because a source text can be
considered to belong to FrameNet-CNL if the information extraction parser
produces the correct knowledge representation as a result. We describe a
state-of-the-art information extraction parser used by a national news agency
and speculate that FrameNet-CNL eventually could shape the natural language
subset used for writing newswire articles.

Comment: CNL-2014 camera-ready version. The final publication is available at
link.springer.co