From the web of data to a world of action
This is the author's version of a work that was accepted for publication in Web Semantics: Science, Services and Agents on the World Wide Web. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Web Semantics: Science, Services and Agents on the World Wide Web 8.4 (2010), doi:10.1016/j.websem.2010.04.007.
This paper takes as its premise that the web is a place of action, not just information, and that the purpose of
global data is to serve human needs. The paper presents several component technologies, which together work
towards a vision where many small micro-applications can be threaded together using automated assistance to
enable a unified and rich interaction. These technologies include data detector technology to enable any text to
become a start point of semantic interaction; annotations for web-based services so that they can link data to
potential actions; spreading activation over personal ontologies, to allow modelling of context; algorithms for
automatically inferring 'typing' of web-form input data based on previous user inputs; and early work on inferring
task structures from action traces. Some of these have already been integrated within an experimental web-based
(extended) bookmarking tool, Snip!t, and a prototype desktop application On Time, and the paper discusses how the
components could be more fully, yet more openly, linked in terms of both architecture and interaction. As well as
contributing to the goal of an action and activity-focused web, the work also exposes a number of broader issues,
theoretical, practical, social and economic, for the Semantic Web.
Parts of this work were supported by the Information
Society Technologies (IST) Program of the European
Commission as part of the DELOS Network of
Excellence on Digital Libraries (Contract G038-
507618). Thanks also to Emanuele Tracanna, Marco
Piva, and Raffaele Giuliano for their work on On
Time.
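Of the component technologies listed above, spreading activation over personal ontologies is the most directly algorithmic. The following is a minimal sketch of that general technique; the toy ontology, node names, and decay parameter are invented for illustration and are not taken from Snip!t or On Time:

```python
# Hedged sketch: spreading activation over a tiny "personal ontology"
# graph as a way to model context. Activation starts at seed nodes and
# propagates to neighbours, attenuated by a decay factor at each hop.
def spread_activation(graph, seeds, decay=0.5, steps=2):
    """graph: node -> list of neighbour nodes; seeds: node -> initial level."""
    activation = dict(seeds)
    for _ in range(steps):
        nxt = dict(activation)
        for node, level in activation.items():
            for neighbour in graph.get(node, []):
                # each node passes a decayed share of its activation onward
                nxt[neighbour] = nxt.get(neighbour, 0.0) + decay * level
        activation = nxt
    return activation

# Invented example: a "meeting" in the user's context activates related
# concepts, which in turn weakly activate concepts two hops away.
ontology = {
    "meeting": ["calendar", "colleague"],
    "colleague": ["email"],
}
result = spread_activation(ontology, {"meeting": 1.0})
```

After two steps, directly connected concepts ("calendar", "colleague") carry strong activation while the two-hop concept ("email") carries a weaker trace, which is the basic mechanism for ranking contextually relevant items.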
Context Aware Computing for The Internet of Things: A Survey
As we are moving towards the Internet of Things (IoT), the number of sensors
deployed around the world is growing at a rapid pace. Market research has shown
a significant growth of sensor deployments over the past decade and has
predicted a significant increment of the growth rate in the future. These
sensors continuously generate enormous amounts of data. However, in order to
add value to raw sensor data we need to understand it. Collection, modelling,
reasoning, and distribution of context in relation to sensor data play a
critical role in this challenge. Context-aware computing has proven to be
successful in understanding sensor data. In this paper, we survey context
awareness from an IoT perspective. We present the necessary background by
introducing the IoT paradigm and context-aware fundamentals at the beginning.
Then we provide an in-depth analysis of context life cycle. We evaluate a
subset of projects (50) which represent the majority of research and commercial
solutions proposed in the field of context-aware computing conducted over the
last decade (2001-2011) based on our own taxonomy. Finally, based on our
evaluation, we highlight the lessons to be learnt from the past and some
possible directions for future research. The survey addresses a broad range of
techniques, methods, models, functionalities, systems, applications, and
middleware solutions related to context awareness and IoT. Our goal is not only
to analyse, compare and consolidate past research work but also to appreciate
their findings and discuss their applicability towards the IoT.
Comment: IEEE Communications Surveys & Tutorials Journal, 201
Design and evaluation of acceleration strategies for speeding up the development of dialog applications
In this paper, we describe a complete development platform that features different innovative acceleration strategies, not included in any other current platform, that simplify and speed up the definition of the different elements required to design a spoken dialog service. The proposed accelerations are mainly based on using the information from the backend database schema and contents, as well as cumulative information produced throughout the different steps in the design. Thanks to these accelerations, the interaction between the designer and the platform is improved, and in most cases the design is reduced to simple confirmations of the "proposals" that the platform dynamically provides at each step.
In addition, the platform provides several other accelerations, such as configurable templates that can be used to define the different tasks in the service or the dialogs to obtain information from or show information to the user, automatic proposals for the best way to request slot contents from the user (i.e. using mixed-initiative forms or directed forms), an assistant that offers the set of most probable actions required to complete the definition of the different tasks in the application, and another assistant for solving specific modality details such as confirmations of user answers or how to present the lists of retrieved results to the user after querying the backend database. Additionally, the platform allows the creation of speech grammars and prompts, database access functions, and the use of mixed-initiative and over-answering dialogs. In the paper we also describe each assistant in the platform in detail, emphasizing the different kinds of methodologies followed to facilitate the design process in each one.
Finally, we describe the results obtained in both a subjective and an objective evaluation with different designers, which confirm the viability, usefulness, and functionality of the proposed accelerations. Thanks to the accelerations, the design time is reduced by more than 56% and the number of keystrokes by 84%.
Web Data Extraction, Applications and Techniques: A Survey
Web Data Extraction is an important problem that has been studied by means of
different scientific tools and in a broad range of applications. Many
approaches to extracting data from the Web have been designed to solve specific
problems and operate in ad-hoc domains. Other approaches, instead, heavily
reuse techniques and algorithms developed in the field of Information
Extraction.
This survey aims at providing a structured and comprehensive overview of the
literature in the field of Web Data Extraction. We provide a simple
classification framework in which existing Web Data Extraction applications are
grouped into two main classes, namely applications at the Enterprise level and
at the Social Web level. At the Enterprise level, Web Data Extraction
techniques emerge as a key tool to perform data analysis in Business and
Competitive Intelligence systems as well as for business process
re-engineering. At the Social Web level, Web Data Extraction techniques make
it possible to gather the large amounts of structured data continuously
generated and disseminated by Web 2.0, Social Media and Online Social Network
users, and this
offers unprecedented opportunities to analyze human behavior at a very large
scale. We discuss also the potential of cross-fertilization, i.e., on the
possibility of re-using Web Data Extraction techniques originally designed to
work in a given domain, in other domains.
Comment: Knowledge-based System
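As a concrete illustration of the kind of wrapper-style extraction the survey covers, the sketch below pulls table-cell text out of an HTML fragment using Python's standard-library parser. The markup, class name, and field values are invented for illustration:

```python
from html.parser import HTMLParser

# Hedged sketch of a minimal Web Data Extraction "wrapper": collect the
# text content of every <td> cell from an HTML snippet.
class CellExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        # only keep text that appears inside a table cell
        if self.in_cell:
            self.cells.append(data.strip())

parser = CellExtractor()
parser.feed("<table><tr><td>ACME Corp</td><td>42</td></tr></table>")
print(parser.cells)  # ['ACME Corp', '42']
```

Real systems layer considerably more machinery on top of this (DOM-tree alignment, learned wrappers, robustness to layout changes), but the core task, turning semi-structured markup into structured records, is the same.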
OmniFill: Domain-Agnostic Form Filling Suggestions Using Multi-Faceted Context
Predictive suggestion systems offer contextually-relevant text entry
completions. Existing approaches, like autofill, often excel in
narrowly-defined domains but fail to generalize to arbitrary workflows. We
introduce a conceptual framework to analyze the compound demands of a
particular suggestion context, yielding unique opportunities for large language
models (LLMs) to infer suggestions for a wide range of domain-agnostic
form-filling tasks that were out of reach with prior approaches. We explore
these opportunities in OmniFill, a prototype that collects multi-faceted
context including browsing and text entry activity to construct an LLM prompt
that offers suggestions in situ for arbitrary structured text entry interfaces.
Through a user study with 18 participants, we found that OmniFill offered
valuable suggestions and we identified four themes that characterize users'
behavior and attitudes: an "opportunistic scrapbooking" approach; a trust
placed in the system; value in partial success; and a need for visibility into
prompt context.
Comment: 14 pages, 5 figure
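The abstract describes assembling an LLM prompt from multi-faceted context (browsing activity, text entry, and the target form). A hedged sketch of that general idea follows; the field names, snippets, and prompt format are invented, since OmniFill's actual prompt construction is not given in the abstract:

```python
# Hedged sketch: building a single LLM prompt from several context
# sources so the model can suggest values for arbitrary form fields.
def build_prompt(browsing_snippets, typed_text, form_fields):
    parts = ["Recently viewed content:"]
    parts += [f"- {snippet}" for snippet in browsing_snippets]
    parts.append(f"User's recent typing: {typed_text}")
    parts.append("Suggest values for these form fields:")
    parts += [f"- {name}: ?" for name in form_fields]
    return "\n".join(parts)

# Invented usage: context from a browsed page plus the form's field names.
prompt = build_prompt(
    ["Dr. Ada Lovelace, b. 1815"],
    "adding contact",
    ["name", "birth_year"],
)
```

The point of the framework is that the same domain-agnostic prompt scaffold works for any structured text-entry interface, rather than hand-coding per-domain autofill logic.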
What Would You Ask to Your Home if It Were Intelligent? Exploring User Expectations about Next-Generation Homes
Ambient Intelligence (AmI) research is giving birth to a multitude of futuristic home scenarios and applications; however, a clear discrepancy between current installations and research-level designs is easily noticed. Whether this gap is due to the natural distance between research and engineered applications, or to a mismatch between needs and solutions, remains to be understood. This paper discusses the results of a survey about user expectations with respect to intelligent homes. Starting from a very simple and open question about what users would ask of their intelligent homes, we derived user perceptions about what intelligent homes can do, and we analyzed to what extent current research solutions, as well as commercially available systems, address these emerging needs. Interestingly, most user concerns about smart homes involve comfort and household tasks, and most of them can currently be addressed by existing commercial systems, or by suitable combinations of them. A clear trend emerges from the poll findings: the technical gap between user expectations and current solutions is actually narrower and easier to bridge than it may appear, but users perceive this gap as wide and limiting, thus requiring the AmI community to establish more effective communication with end users, with increased attention to real-world deployment.