30,243 research outputs found
BlogForever D2.4: Weblog spider prototype and associated methodology
The purpose of this document is to present the evaluation of different solutions for capturing blogs, the established methodology, and the developed blog spider prototype.
Organizing information on the next generation web - Design and implementation of a new bookmark structure
The next-generation Web will increase the need for a highly organized and ever-evolving method to store references to Web objects. These requirements could be realized by the development of a new bookmark structure. This paper endeavors to identify the key requirements of such a bookmark, specifically in relation to Web documents, and sets out a suggested design through which these needs may be accomplished. The prototype developed offers features such as the sharing of bookmarks between users and groups of users. Bookmarks for Web documents in this prototype allow more specific information to be stored, such as: URL, document type, document title, keywords, a summary, user annotations, date added, date last visited, and date last modified. Individuals may access the service from anywhere on the Internet, as long as they have a Java-enabled Web browser.
BlogForever D2.6: Data Extraction Methodology
This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction. The inquiry is extended through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities of extracting semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report proceeds to propose a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform.
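The abstract above pairs RSS feeds with the HTML representation of a blog: the feed supplies clean per-post metadata that can anchor extraction from the full page. A minimal sketch of the feed-parsing half, using only the Python standard library (the feed content and field names here are illustrative, not the BlogForever pipeline itself):

```python
# Sketch: pull per-post metadata (title, link, summary) out of an RSS 2.0
# feed; such metadata can later anchor extraction from the post's HTML page.
import xml.etree.ElementTree as ET

RSS_SAMPLE = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item>
    <title>First post</title>
    <link>http://example.com/first-post</link>
    <description>Hello world.</description>
  </item>
</channel></rss>"""

def parse_feed(xml_text):
    """Return a list of dicts, one per <item> element in the feed."""
    root = ET.fromstring(xml_text)
    posts = []
    for item in root.iter("item"):
        posts.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "summary": item.findtext("description", default=""),
        })
    return posts

posts = parse_feed(RSS_SAMPLE)
print(posts[0]["title"])  # -> First post
```

In practice a harvester would fetch the feed over HTTP and then align each `link` with the crawled HTML page, which is where the unsupervised-learning step described above comes in.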
Abmash: Mashing Up Legacy Web Applications by Automated Imitation of Human Actions
Many business web-based applications do not offer application programming interfaces (APIs) that enable other applications to access their data and functions programmatically. This makes their composition difficult (for instance, to synchronize data between two applications). To address this challenge, this paper presents Abmash, an approach to facilitate the integration of such legacy web applications by automatically imitating human interactions with them. By automatically interacting with the graphical user interface (GUI) of web applications, the system supports all forms of integration, including bi-directional interactions, and is able to interact with AJAX-based applications. Furthermore, the integration programs are easy to write since they deal with end-user, visual user-interface elements. The integration code is simple enough to be called a "mashup".
Comment: Software: Practice and Experience (2013)
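The core idea above is addressing page elements the way a human sees them, by their visible labels rather than internal APIs. Abmash itself drives a real browser; the stand-in below only illustrates the "find the element showing this text" step on static HTML, using the Python standard library (the HTML snippet and class name are assumptions for illustration):

```python
# Sketch: locate an element by its rendered (visible) text, mimicking how a
# human would identify a "Save" button, rather than by an API or internal id.
from html.parser import HTMLParser

class VisibleTextFinder(HTMLParser):
    """Record the enclosing tag whose text content matches a target label."""
    def __init__(self, label):
        super().__init__()
        self.label = label
        self._stack = []   # currently open (tag, attrs) pairs
        self.match = None  # first (tag, attrs) whose text equals the label

    def handle_starttag(self, tag, attrs):
        self._stack.append((tag, dict(attrs)))

    def handle_endtag(self, tag):
        if self._stack:
            self._stack.pop()

    def handle_data(self, data):
        if data.strip() == self.label and self._stack and self.match is None:
            self.match = self._stack[-1]

HTML = '<form><a href="/sync">Synchronize</a><button id="save">Save</button></form>'
finder = VisibleTextFinder("Save")
finder.feed(HTML)
print(finder.match)  # -> ('button', {'id': 'save'})
```

A real GUI-imitation engine would then dispatch a click to the matched element inside a live browser session, which is what lets this style of integration work even against AJAX-heavy applications.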
ELICA: An Automated Tool for Dynamic Extraction of Requirements Relevant Information
Requirements elicitation requires extensive knowledge and a deep understanding of the problem domain where the final system will be situated. However, in many software development projects, analysts are required to elicit the requirements from an unfamiliar domain, which often causes communication barriers between analysts and stakeholders. In this paper, we propose a requirements ELICitation Aid tool (ELICA) to help analysts better understand the target application domain by dynamically extracting and labeling requirements-relevant knowledge. To extract the relevant terms, we leverage the flexibility and power of Weighted Finite State Transducers (WFSTs) in dynamic modeling of natural language processing tasks. In addition to the information conveyed through text, ELICA captures and processes non-linguistic information about the intention of speakers, such as their confidence level, analytical tone, and emotions. The extracted information is made available to the analysts as a set of labeled snippets with highlighted relevant terms, which can also be exported as an artifact of the Requirements Engineering (RE) process. The application and usefulness of ELICA are demonstrated through a case study. This study shows how pre-existing relevant information about the application domain, and the information captured during an elicitation meeting, such as the conversation and stakeholders' intentions, can be captured and used to support analysts in achieving their tasks.
Comment: 2018 IEEE 26th International Requirements Engineering Conference Workshop
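A weighted finite-state transducer maps an input token sequence to an output sequence while accumulating arc weights, which is what makes it suitable for scoring and labeling candidate terms. The toy below is a greedy, single-path sketch of that mechanism (ELICA uses full WFST machinery; the states, term, and weights here are illustrative assumptions):

```python
# Toy WFST: arcs map (state, input token) -> (next state, output, weight).
# Unmatched tokens pass through unchanged with weight 0, so only the
# modeled term "response time" contributes to the score.
class ToyWFST:
    def __init__(self, start=0):
        self.start = start
        self.arcs = {}  # (state, symbol) -> (next_state, output, weight)

    def add_arc(self, state, symbol, next_state, output, weight):
        self.arcs[(state, symbol)] = (next_state, output, weight)

    def transduce(self, tokens):
        """Greedy single pass: follow matching arcs, sum their weights."""
        state, outputs, total = self.start, [], 0.0
        for tok in tokens:
            next_state, out, w = self.arcs.get(
                (state, tok), (self.start, tok, 0.0))  # default: pass through
            state, total = next_state, total + w
            if out:  # empty output means "hold this token until the term ends"
                outputs.append(out)
        return outputs, total

wfst = ToyWFST()
# Model the two-word domain term "response time" as a labeled unit.
wfst.add_arc(0, "response", 1, "", 0.5)
wfst.add_arc(1, "time", 0, "<TERM>response time</TERM>", 0.5)

out, score = wfst.transduce("the response time must be low".split())
print(out, score)  # -> ['the', '<TERM>response time</TERM>', 'must', 'be', 'low'] 1.0
```

A production transducer would instead search over all weighted paths (e.g. shortest-path decoding) and compose separate lexicon and labeling transducers, rather than following a single greedy path as this sketch does.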
- …