Towards a Conceptualization of Sociomaterial Entanglement
In knowledge representation, socio-technical systems can be modeled
as multiagent systems in which the local knowledge of each individual agent can
be seen as a context. In this paper we propose formal ontologies as a means to
describe the assumptions driving the construction of contexts as local theories and
to enable interoperability among them. In particular, we present two alternative
conceptualizations of the notion of sociomateriality (and entanglement), which
is central in the recent debates on socio-technical systems in the social sciences,
namely critical and agential realism.
We thus start by providing a model of entanglement according to the critical realist
view, representing it as a property of objects that are essentially dependent on
different modules of an already given ontology. We refine then our treatment by
proposing a taxonomy of sociomaterial entanglements that distinguishes between
ontological and epistemological entanglement. In the final section, we discuss the
second perspective, which is more challenging from the point of view of knowledge
representation, and we show that the very distinction of information into
modules can, at least in principle, be built out of the assumption of an entangled
reality.
Explanations of Black-Box Model Predictions by Contextual Importance and Utility
The significant advances in autonomous systems together with an immensely
wider application domain have increased the need for trustable intelligent
systems. Explainable artificial intelligence is gaining considerable attention
among researchers and developers to address this requirement. Although there is
an increasing number of works on interpretable and transparent machine learning
algorithms, they are mostly intended for technical users. Explanations for
the end user have been neglected in many usable and practical applications. In
this work, we present the Contextual Importance (CI) and Contextual Utility
(CU) concepts to extract explanations that are easily understandable by experts
as well as novice users. This method explains the prediction results without
transforming the model into an interpretable one. We present an example of
providing explanations for linear and non-linear models to demonstrate the
generalizability of the method. CI and CU are numerical values that can be
represented to the user in visuals and natural language form to justify actions
and explain reasoning for individual instances, situations, and contexts. We
show the utility of explanations in a car selection example and in Iris flower
classification by presenting complete explanations (i.e., the causes of an
individual prediction) and contrastive explanations (i.e., contrasting an
instance against the instance of interest). The experimental results show the
feasibility and validity of the provided explanation methods.
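The CI and CU concepts described above can be illustrated numerically. The following is a minimal sketch, assuming a toy two-feature model and a sampling-based estimate of the output ranges; the function names, the model, and the sampling scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def predict(x):
    # Hypothetical non-linear model with two features in [0, 1].
    return 0.7 * x[0] ** 2 + 0.3 * x[1]

def ci_cu(predict, instance, feature, bounds, n_samples=2000):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU)
    for one feature by sampling its range while holding the other
    features fixed at the instance's values."""
    rng = np.random.default_rng(0)
    lo, hi = bounds[feature]
    outs = []
    for v in rng.uniform(lo, hi, n_samples):
        x = list(instance)
        x[feature] = v
        outs.append(predict(x))
    cmin, cmax = min(outs), max(outs)           # context-specific output range
    # Approximate the model's global output range by sampling all features.
    grid = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds],
                       size=(n_samples, len(bounds)))
    all_outs = [predict(x) for x in grid]
    absmin, absmax = min(all_outs), max(all_outs)
    ci = (cmax - cmin) / (absmax - absmin)       # how much the feature can move the output
    cu = (predict(instance) - cmin) / (cmax - cmin)  # how favourable its current value is
    return ci, cu

instance = [0.8, 0.2]
bounds = [(0.0, 1.0), (0.0, 1.0)]
ci, cu = ci_cu(predict, instance, 0, bounds)
print(f"CI={ci:.2f}, CU={cu:.2f}")
```

Because both quantities are normalized ratios in [0, 1], they translate directly into visual (e.g., bar) or natural-language form ("this feature is highly important; its current value is moderately favourable"), which is what makes them accessible to novice users.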
An Action-Based Approach to Presence: Foundations and Methods
This chapter presents an action-based approach to presence. It starts by briefly describing the theoretical and empirical foundations of this approach, formalized into three key notions of place/space, action and mediation. In the light of these notions, some common assumptions about presence are then questioned: assuming a neat distinction between virtual and real environments, taking for granted the contours of the mediated environment, and considering presence as a purely personal state. Some possible research topics opened up by adopting action as a unit of analysis are illustrated. Finally, a case study on driving as a form of mediated presence is discussed, to provocatively illustrate the flexibility of this approach as a unified framework for presence in digital and physical environments.
An Exploration of the Relations between External Representations and Working Memory
It is commonly hypothesized that external representations serve as memory aids and improve task performance by expanding the limited capacity of working memory. However, very few studies have directly examined this memory-aid hypothesis. By systematically manipulating how information is available externally versus internally in a sequential number comparison task, three experiments were designed to investigate the relation between external representations and working memory. The experimental results show that when the task requires information from both external representations and working memory, it is the interaction of information from the two sources that determines task performance. In particular, when information from the two sources does not match well, external representations hinder rather than enhance task performance. The study highlights the important role that coordination among different representations plays in distributed cognition. The general relations between external representations and working memory are discussed.
Ultra-Strong Machine Learning: comprehensibility of programs learned with ILP
During the 1980s Michie defined Machine Learning in terms of two orthogonal axes of performance: predictive accuracy and comprehensibility of generated hypotheses. Since predictive accuracy was readily measurable and comprehensibility not so, later definitions in the 1990s, such as Mitchell's, tended to use a one-dimensional approach to Machine Learning based solely on predictive accuracy, ultimately favouring statistical over symbolic Machine Learning approaches. In this paper we provide a definition of comprehensibility of hypotheses which can be estimated using human participant trials. We present two sets of experiments testing human comprehensibility of logic programs. In the first experiment we test human comprehensibility with and without predicate invention. Results indicate that comprehensibility is affected not only by the complexity of the presented program but also by the existence of anonymous predicate symbols. In the second experiment we directly test whether any state-of-the-art ILP systems are ultra-strong learners in Michie's sense, and select the Metagol system for use in human trials. Results show participants were not able to learn the relational concept on their own from a set of examples, but they were able to apply the relational definition provided by the ILP system correctly. This implies the existence of a class of relational concepts which are hard for humans to acquire, though easy to understand given an abstract explanation. We believe improved understanding of this class could have potential relevance to contexts involving human learning, teaching and verbal interaction.
Augmenting the Eye of the Beholder: Exploring the Strategic Potential of Augmented Reality to Enhance Online Service Experiences
Driven by the proliferation of augmented reality (AR) technologies, many firms are pursuing a strategy of service augmentation to enhance customers' online service experiences. Drawing on situated cognition theory, the authors show that AR-based service augmentation enhances customer value perceptions by simultaneously providing simulated physical control and environmental embedding. The resulting authentic situated experience, manifested in a feeling of spatial presence, functions as a mediator and also predicts customer decision comfort. Furthermore, the effect of spatial presence on utilitarian value perceptions is greater for customers who are disposed toward verbal rather than visual information processing, and the positive effect on decision comfort is attenuated by customers' privacy concerns.