Do (and say) as I say: Linguistic adaptation in human-computer dialogs
© Theodora Koulouri, Stanislao Lauria, and Robert D. Macredie. This article has been made available through the Brunel Open Access Publishing Fund. There is strong research evidence showing that people naturally align to each other’s vocabulary, sentence structure, and acoustic features in dialog, yet little is known about how the alignment mechanism operates in the interaction between users and computer systems, let alone how it may be exploited to improve the efficiency of the interaction. This article provides an account of lexical alignment in human–computer dialogs, based on empirical data collected in a simulated human–computer interaction scenario. The results indicate that alignment is present, resulting in the gradual reduction and stabilization of the vocabulary-in-use, and that it is also reciprocal. Further, the results suggest that when system and user errors occur, the development of alignment is temporarily disrupted and users tend to introduce novel words to the dialog. The results also indicate that alignment in human–computer interaction may have a strong strategic component and is used as a resource to compensate for less optimal (visually impoverished) interaction conditions. Moreover, lower alignment is associated with less successful interaction, as measured by user perceptions. The article distills the results of the study into design recommendations for human–computer dialog systems and uses them to outline a model of dialog management that supports and exploits alignment through mechanisms for in-use adaptation of the system’s grammar and lexicon.
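As an illustration of the kind of measure such a study might compute, lexical alignment can be proxied by the fraction of one speaker's words that the partner has already used earlier in the dialog. The metric below is a hypothetical sketch for illustration, not the article's own measure:

```python
def lexical_alignment(turns_a, turns_b):
    """Per-turn lexical alignment of speaker A to speaker B: the
    fraction of A's word tokens in each turn that appeared in any of
    B's *earlier* turns. A simple illustrative proxy only; the
    article's actual alignment measures may differ."""
    seen_b = set()          # vocabulary B has used so far
    scores = []
    for a_turn, b_turn in zip(turns_a, turns_b):
        words = a_turn.lower().split()
        if words:
            scores.append(sum(w in seen_b for w in words) / len(words))
        seen_b.update(b_turn.lower().split())
    return scores
```

On this proxy, rising scores over the dialog would correspond to the gradual stabilization of the vocabulary-in-use described above.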
How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation
The visual systems of animals have to provide information to guide behaviour, and the informational requirements of an animal’s behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators, it may be that their vision is optimised for navigation. Here we take a computational approach, asking how the details of the optical array influence the informational content of scenes used in simple view-matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen in many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are treated as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit from processing information from their two eyes independently.
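The view-matching strategy referred to here can be sketched as a rotational image-difference search: the agent rotates its current panoramic view until it best matches a stored snapshot, and that rotation gives a heading estimate. The function below is a minimal illustration of this general idea, not the authors' model:

```python
import numpy as np

def best_heading(current, stored):
    """Return the column shift of the panoramic `current` view that
    minimises the mean squared pixel difference to the `stored`
    snapshot. Views are 2-D arrays (rows x azimuth columns); shifting
    columns corresponds to rotating the agent's heading."""
    n_cols = current.shape[1]
    diffs = [np.mean((np.roll(current, s, axis=1) - stored) ** 2)
             for s in range(n_cols)]
    return int(np.argmin(diffs))
```

Note that `n_cols` plays the role of angular resolution: downsampling both views before the search is the kind of low-resolution matching the abstract finds most robust.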
The Distributed Ontology Language (DOL): Use Cases, Syntax, and Extensibility
The Distributed Ontology Language (DOL) is currently being standardized within the OntoIOp (Ontology Integration and Interoperability) activity of ISO/TC 37/SC 3. It aims at providing a unified framework for (1) ontologies formalized in heterogeneous logics, (2) modular ontologies, (3) links between ontologies, and (4) annotation of ontologies. This paper presents the current state of DOL's standardization. It focuses on use cases where distributed ontologies enable interoperability and reusability. We demonstrate relevant features of the DOL syntax and semantics and explain how these integrate into existing knowledge engineering environments.
Comment: Terminology and Knowledge Engineering Conference (TKE), 2012-06-20 to 2012-06-21, Madrid, Spain
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles. In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolution network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of controller and explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment, strong- and weak-alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match training data. 
In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts both the way it attends to the scene (visual attention) and its control outputs (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
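One way to make the attention alignment of Chapter 4 concrete is as a divergence penalty between the controller's and the explainer's attention maps, each normalised to a distribution. The sketch below illustrates that general idea only; the thesis's actual strong- and weak-alignment losses may take a different form:

```python
import numpy as np

def attention_alignment_loss(a_controller, a_explainer, eps=1e-8):
    """KL divergence between two non-negative spatial attention maps
    after normalising each to sum to 1. A low value means the
    explanation attends to the same regions as the controller.
    Illustrative sketch; not the thesis's exact loss."""
    p = a_controller / (a_controller.sum() + eps)
    q = a_explainer / (a_explainer.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

Minimising such a term during training pushes the textual explanation to be grounded in the image regions that actually drove the control output.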
Automated Enrichment of Routing Instructions
Commonly used navigation instructions are based on metric turn descriptions (e.g. “turn left onto Nienburger Straße in 100 m”). For the user it is easy to follow the route, but afterwards it is typically hard to remember how he or she got there. Orientation is instead based on salient objects or locations called landmarks. These are linked and combined into so-called survey knowledge in the psychological model of a cognitive map. Some of today’s navigation systems also contain landmarks – they are, however, only used at decision points of the route. The goal of this research is to enhance the user's own sense of orientation by enriching common routing instructions with relational hints to landmarks.
First, potential landmark objects are defined, extracted from OpenStreetMap and assigned an importance weight. The landmarks are then used to enrich the given routes: in the enrichment process, the influence of a landmark is modeled as a decline of its weight with distance. Afterwards, the most influential landmark is selected for each route segment. The 9-Intersection Model and an adapted Direction-Relation Matrix are the core methods used to analyse and determine the relations between the route and the chosen landmarks.
The automatic description of relevant landmarks along a route is implemented as an interactive web map. The main goal of this paper is the development of the system; nevertheless, a first evaluation was conducted in order to compare the orientation ability of users given enriched instructions with that of users given the classic ones.
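The distance-decay selection step can be sketched in a few lines. The exponential decay and its rate are assumptions for illustration, since the abstract only states that a landmark's influence declines with distance from the route segment:

```python
import math

def select_landmark(segment_midpoint, landmarks, decay=0.01):
    """Pick the most influential landmark for a route segment.
    Each landmark is (name, (x, y), importance_weight); influence is
    the importance weight attenuated exponentially with distance from
    the segment midpoint. `decay` is a hypothetical parameter; the
    paper's actual decline function is not specified here."""
    def influence(landmark):
        name, (x, y), weight = landmark
        d = math.hypot(x - segment_midpoint[0], y - segment_midpoint[1])
        return weight * math.exp(-decay * d)
    return max(landmarks, key=influence)[0]
```

With this form, a distant but highly weighted landmark (e.g. a church tower) can still outrank a nearby minor object, which matches the intuition behind importance weighting.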
Experiments in climate governance – lessons from a systematic review of case studies in transition research
Experimentation has been proposed as one of the ways in which public policy can drive sustainability transitions, notably by creating or delimiting space for experimenting with innovative solutions to sustainability challenges. In this paper we report on a systematic review of articles published between 2009 and 2015 that have addressed experiments aiming either at understanding decarbonisation transitions or at enhancing climate resilience. Using the case survey method, we find few empirical descriptions of real-world experiments in climate and energy contexts in the scholarly literature: only 25 articles, containing 29 experiments. We discuss the objectives, outputs and outcomes of these experiments, noting that explicit experimentation with climate policies could be identified in only 12 cases. Based on the results, we suggest a definition of climate policy experiments and a typology of experiments for sustainability transitions that can be used to better understand the role of experiments in sustainability transitions and to learn from them more effectively.
Land use patterns and access in Mexico City
The problem of the distribution of land uses in urban space in Latin American cities has been examined from different perspectives. Most authors tend to model patterns of population and land use as a consequence of social and economic processes alone, failing to address urban space as an intrinsic variable. In contrast, the theory of cities as movement economies argues that land use patterns are influenced by movement flows, which are in turn strongly affected by the urban grid. As a result, land uses such as retail would seek highly accessible locations to take advantage of such flows, while residential uses would avoid them. However, the space syntax techniques traditionally used to point out this relationship do not seem to reveal it so easily in non-organic cities like Mexico City. This paper addresses the relationship between patterns of accessibility and land use in the first ring of Mexico City as a spatial strategy. A new functional description of the city is adopted, in which plots are nodes connected to flows that represent the street network. This model enables us to measure accessibility at the level of individual plots. Following this, we examine the occurrence of land use types in locations of high and low accessibility using cumulative distribution functions. If the distribution of land uses were random, the proportion of land use types would be more or less uniform throughout the area. It is shown that the relationship between accessibility and land use is not linear and is guided by movement economy forces. It is suggested that understanding this relationship is key to planning objectively for sustainable growth.
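The cumulative-distribution comparison can be sketched as follows; the function and its inputs are illustrative names, not the paper's implementation:

```python
import numpy as np

def landuse_cdf(accessibility, is_type):
    """Empirical CDF of one land-use type over plot accessibility.
    `accessibility` is a NumPy array of per-plot accessibility scores
    and `is_type` a same-length boolean mask marking plots of that
    land use. If land use were independent of accessibility, this CDF
    would track the CDF of all plots; a CDF shifted toward high
    accessibility (e.g. for retail) signals a non-random pattern."""
    order = np.argsort(accessibility)
    mask = np.asarray(is_type)[order]
    cdf = np.cumsum(mask) / mask.sum()
    return accessibility[order], cdf
```

Plotting the per-type CDFs against the all-plots CDF is the kind of comparison the abstract describes: departures between the curves indicate that a land use concentrates at particular accessibility levels.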