70,078 research outputs found
Improving Contextual Suggestions using Open Web Domain Knowledge
Also published online by CEUR Workshop Proceedings (CEUR-WS.org, ISSN 1613-0073).
Contextual suggestion aims at recommending items to users given their current context, such as location-based tourist recommendations. Our contextual suggestion ranking model consists of two main components: selecting candidate suggestions and providing a ranked list of personalized suggestions. We focus on selecting appropriate suggestions from the ClueWeb12 collection using tourist domain knowledge inferred from social sites and resources available on the public Web (Open Web). Specifically, we generate two candidate subsets retrieved from the ClueWeb12 collection: one by filtering the content on mentions of the location context, and one by integrating domain knowledge derived from the Open Web. The impact of these candidate selection methods on contextual suggestion effectiveness is analyzed using the test collection constructed for the TREC Contextual Suggestion Track in 2014. Our main findings are, first, that contextual suggestion performance on the subset created using Open Web domain knowledge is significantly better than using only geographical information, and second, that using a prior probability estimated from domain knowledge leads to better suggestions and improves overall performance.
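The prior-probability idea above can be sketched as a log-linear mix of a retrieval score and a domain-knowledge prior. The candidate pages, scores, and mixing weight below are invented for illustration and are not taken from the paper.

```python
import math

# Hypothetical candidates, each with a retrieval (relevance) score and a
# prior probability estimated from tourist-domain knowledge (e.g. how often
# the page's category appears among known attractions). All names and
# numbers are illustrative.
candidates = [
    {"doc": "museum-page", "relevance": 0.62, "domain_prior": 0.30},
    {"doc": "hotel-page",  "relevance": 0.70, "domain_prior": 0.05},
    {"doc": "park-page",   "relevance": 0.55, "domain_prior": 0.25},
]

def score(c, alpha=0.5):
    """Log-linear combination of query relevance and domain-knowledge prior."""
    return alpha * math.log(c["relevance"]) + (1 - alpha) * math.log(c["domain_prior"])

ranked = sorted(candidates, key=score, reverse=True)
```

With an even mixing weight, a page with a strong domain prior (the museum) outranks a page that was slightly more relevant to the query but unlikely to be a tourist attraction (the hotel).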
Modeling an ontology on accessible evacuation routes for emergencies
Providing alert communication in emergency situations is vital to reduce the number of victims. However, this is a challenging goal for researchers and professionals due to the diverse pool of prospective users, e.g. people with disabilities as well as other vulnerable groups. Moreover, in the event of an emergency situation, many people could become vulnerable because of exceptional circumstances such as stress, an unknown environment or even visual impairment (e.g. fire causing smoke). Within this scope, a crucial activity is to notify affected people about safe places and available evacuation routes. In order to address this need, we propose to extend an ontology, called SEMA4A (Simple EMergency Alert 4 [for] All), developed in a previous work for managing knowledge about accessibility guidelines, emergency situations and communication technologies. In this paper, we introduce a semi-automatic technique for knowledge acquisition and modeling on accessible evacuation routes. We introduce a use case to show applications of the ontology and conclude with an evaluation involving several experts in evacuation procedures. © 2014 Elsevier Ltd. All rights reserved.
Investigating people: a qualitative analysis of the search behaviours of open-source intelligence analysts
The Internet and the World Wide Web have become integral parts of the lives of many modern individuals, enabling almost instantaneous communication, sharing and broadcasting of thoughts, feelings and opinions. Much of this information is publicly facing, and as such, it can be utilised in a multitude of online investigations, ranging from employee vetting and credit checking to counter-terrorism and fraud prevention/detection. However, the search needs and behaviours of these investigators are not well documented in the literature. In order to address this gap, an in-depth qualitative study was carried out in cooperation with a leading investigation company. The research contribution is an initial identification of Open-Source Intelligence investigator search behaviours, the procedures and practices that they undertake, along with an overview of the difficulties and challenges that they encounter as part of their domain. This lays the foundation for future research into the varied domain of Open-Source Intelligence gathering.
Improving the translation environment for professional translators
When using computer-aided translation systems in a typical, professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view, as well as from a purely technological side.
This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human-computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.
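As a rough illustration of the fuzzy-matching step, a translation memory can be queried with a character-level similarity ratio. The segments, Dutch translations, and the 70% threshold below are invented, and SCATE's actual matching metrics are considerably more elaborate than a plain edit-distance ratio.

```python
import difflib

# Toy translation memory: source segments mapped to translations.
translation_memory = {
    "The printer is out of paper.": "De printer heeft geen papier meer.",
    "Press the power button to restart.": "Druk op de aan/uit-knop om te herstarten.",
}

def fuzzy_match(segment, memory, threshold=0.7):
    """Return the best (source, translation, score) above the threshold, or None."""
    best = None
    for src, tgt in memory.items():
        ratio = difflib.SequenceMatcher(None, segment.lower(), src.lower()).ratio()
        if ratio >= threshold and (best is None or ratio > best[2]):
            best = (src, tgt, ratio)
    return best

match = fuzzy_match("The printer is out of ink.", translation_memory)
```

A new segment that differs only in one word still retrieves the stored translation as a fuzzy match for the translator to post-edit.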
ELICA: An Automated Tool for Dynamic Extraction of Requirements Relevant Information
Requirements elicitation requires extensive knowledge and deep understanding
of the problem domain where the final system will be situated. However, in many
software development projects, analysts are required to elicit the requirements
from an unfamiliar domain, which often causes communication barriers between
analysts and stakeholders. In this paper, we propose a requirements ELICitation
Aid tool (ELICA) to help analysts better understand the target application
domain by dynamic extraction and labeling of requirements-relevant knowledge.
To extract the relevant terms, we leverage the flexibility and power of
Weighted Finite State Transducers (WFSTs) in dynamic modeling of natural
language processing tasks. In addition to the information conveyed through
text, ELICA captures and processes non-linguistic information about the
intention of speakers such as their confidence level, analytical tone, and
emotions. The extracted information is made available to the analysts as a set
of labeled snippets with highlighted relevant terms which can also be exported
as an artifact of the Requirements Engineering (RE) process. The application
and usefulness of ELICA are demonstrated through a case study. This study shows
how pre-existing relevant information about the application domain and the
information captured during an elicitation meeting, such as the conversation
and stakeholders' intentions, can be captured and used to support analysts
achieving their tasks.
Comment: 2018 IEEE 26th International Requirements Engineering Conference Workshop
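The WFST-based extraction can be illustrated with a toy transducer over tokens in the tropical semiring (weights add along a path; lower totals are better). The states, vocabulary, and weights below are invented, and a real system such as ELICA would build on dedicated WFST machinery (e.g. a library like OpenFst) rather than this simplified stand-in.

```python
# (state, input token) -> (next state, weight); tropical semiring.
transitions = {
    (0, "login"):   (1, 0.2),
    (0, "payment"): (1, 0.3),
    (1, "page"):    (2, 0.1),
    (1, "gateway"): (2, 0.2),
}
final_states = {2}

def path_weight(tokens, start=0):
    """Total path weight if the token sequence is accepted, else None."""
    state, total = start, 0.0
    for tok in tokens:
        nxt = transitions.get((state, tok))
        if nxt is None:
            return None
        state, w = nxt
        total += w
    return total if state in final_states else None

def extract_terms(candidates, max_weight=0.5):
    """Keep candidate phrases the transducer accepts with a low enough weight."""
    return [c for c in candidates
            if (w := path_weight(c.split())) is not None and w <= max_weight]

terms = extract_terms(["login page", "payment gateway", "coffee break"])
```

Phrases the transducer accepts cheaply are labeled as requirements-relevant terms; anything off the modeled paths is rejected outright.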
How will the Internet of Things enable Augmented Personalized Health?
Internet-of-Things (IoT) is profoundly redefining the way we create, consume,
and share information. Health aficionados and citizens are increasingly using
IoT technologies to track their sleep, food intake, activity, vital body
signals, and other physiological observations. This is complemented by IoT
systems that continuously collect health-related data from the environment and
inside the living quarters. Together, these have created an opportunity for a
new generation of healthcare solutions. However, interpreting data to
understand an individual's health is challenging. It is usually necessary to
look at that individual's clinical record and behavioral information, as well
as social and environmental information affecting that individual. Interpreting
how well a patient is doing also requires looking at their adherence to
respective health objectives, the application of relevant clinical knowledge,
and the desired outcomes.
We resort to the vision of Augmented Personalized Healthcare (APH) to exploit
the extensive variety of relevant data and medical knowledge using Artificial
Intelligence (AI) techniques to extend and enhance human health, and present
various stages of augmented health management strategies: self-monitoring,
self-appraisal, self-management, intervention, and disease progress tracking
and prediction. kHealth technology, a specific incarnation of APH, and its
application to asthma and other diseases are used to provide illustrations and
discuss alternatives for technology-assisted health management. Several
prominent efforts involving IoT and patient-generated health data (PGHD) with
respect to converting multimodal data into actionable information (big data to
smart data) are also identified. The roles of three components in an
evidence-based semantic perception approach (Contextualization, Abstraction,
and Personalization) are discussed.
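The Contextualization, Abstraction, and Personalization chain might look like the following minimal sketch on invented asthma-related readings. The field names and the pollen threshold are illustrative rather than taken from kHealth; the peak-flow zones follow the common traffic-light convention of asthma action plans (green at or above 80% of personal best, yellow 50-80%, red below 50%).

```python
# Invented raw observations from personal and environmental sensors.
reading = {"peak_flow": 310, "pollen_count": 9.5, "indoor_pm25": 14.0}

# Personalization: thresholds are relative to this patient's own baseline.
patient_baseline = {"best_peak_flow": 450}

def abstract_peak_flow(value, best):
    """Abstraction: map a raw peak-flow reading onto traffic-light zones."""
    pct = value / best
    if pct >= 0.8:
        return "green"
    if pct >= 0.5:
        return "yellow"
    return "red"

def contextualize(reading, baseline):
    """Contextualization: combine abstractions into one actionable assessment."""
    zone = abstract_peak_flow(reading["peak_flow"], baseline["best_peak_flow"])
    high_pollen = reading["pollen_count"] > 7.0
    return {"zone": zone, "environment_risk": "high" if high_pollen else "low"}

assessment = contextualize(reading, patient_baseline)
```

The same raw numbers yield different assessments for different patients, since the zone boundaries are scaled to each individual's personal best rather than a population norm.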
- …