Narrative Similarity as Common Summary
The ability to identify similarities between narratives has been argued to be central to human interaction. Previous work that sought to formalize this task has hypothesized that narrative similarity can be equated to the existence of a common summary between the narratives involved. We offer tangible psychological evidence in support of this hypothesis. Human participants in our empirical study were presented with triples of stories and were asked to rate: (i) the degree of similarity between story A and story B; (ii) the appropriateness of story C as a summary of story A; (iii) the appropriateness of story C as a summary of story B. The story triples were selected systematically to span the space of their possible interrelations. Empirical evidence gathered from this study overwhelmingly supports the position that the higher the latter two ratings are, the higher the first rating is as well. Thus, while this work does not purport to formally define either of the two tasks involved, it does argue that one can be meaningfully reduced to the other.
Intra-Organizational Communication 2.0
We put forward an architecture for the next generation of AI-mediated intra-organizational communication (IOC), towards enhanced team productivity and satisfaction. Our proposal rests on three key principles: hybrid human-AI collaboration via natural language interactions; diversity-aware dissemination of queries within the organization; and incremental, participatory development of the IOC policy. We briefly discuss our ongoing work towards realizing the proposed IOC architecture.
Sensitive Content Recognition in Social Interaction Messages
Online social networks are a predominant medium for social interaction, where people communicate much as they do in real life. User communication comes mainly in the form of textual data, which is rich in personal information, opinions, and sentiments. The automatic recognition of sensitive content in text is important for a number of reasons. In this work, we study the dimensions of sensitive content recognition and examine the performance of various machine learning methods for recognizing sensitive data in text. Understanding the key features of sensitive content can also assist in formulating more efficient user-centric interaction frameworks that secure users' privacy, promote users' inclusion, and enhance the diversity awareness of the online society. Another part of this work focuses on model explainability, where the integration of LIME and SHAP offers insight into features that are consistent and robust predictors of sensitive content.
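The kind of pipeline evaluated in work like this can be sketched as a standard text-classification setup. The minimal example below uses TF-IDF features with logistic regression via scikit-learn; the toy messages and labels are invented for illustration and are not the datasets, features, or models of the study.

```python
# Minimal sketch of a sensitive-content classifier: TF-IDF features
# plus logistic regression. The toy messages and labels below are
# invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "my phone number is 555-0100, call me tonight",
    "I was diagnosed with diabetes last year",
    "great weather for a picnic today",
    "the new cafe downtown serves good coffee",
]
labels = [1, 1, 0, 0]  # 1 = sensitive, 0 = non-sensitive

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Probability that a new message contains sensitive content.
prob = clf.predict_proba(["call me at my number, 555-0100"])[:, 1][0]
```

A pipeline of this shape is also what LIME and SHAP would be applied to, by attributing each prediction back to the contributing n-gram features.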
Knowledge-Based Translation of Natural Language into Symbolic Form
We consider the scenario of machines that receive human advice in natural language to revise their object-level knowledge for a domain of interest. Although techniques exist to translate such natural language advice into a symbolic form that is appropriate for machine reasoning, the translation process itself is typically pre-programmed and, thus, it is not amenable to dynamic and gradual improvement, nor can it be adjusted to the linguistic particularities of the advice giver. We seek to examine whether these limitations can be overcome through the use of a knowledge-based translation process. In this position paper we take a first step in this investigation by demonstrating how such meta-level translation knowledge can be engineered to support the interpretation of object-level advice. While, admittedly, our engineering of the meta-level knowledge amounts to pre-programming, it nonetheless pushes towards the automated acquisition of this meta-level knowledge through advice-taking by demonstrating a key prerequisite: that it can be expressed, and reasoned with, under the same syntax and semantics as the object-level knowledge.
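As a toy illustration of the paper's premise, that translation knowledge can be represented as inspectable data rather than fixed program logic, one might encode pattern-to-template pairs as rules. The patterns and predicates below are invented and far simpler than the knowledge engineering the paper describes.

```python
# Toy sketch: meta-level translation knowledge represented as data.
# Each rule pairs a natural-language pattern with a symbolic template;
# because the rules are plain data, they could in principle be revised
# or acquired through advice-taking. Patterns/predicates are invented.
import re

translation_rules = [
    (r"^(\w+)s (\w+)$", r"forall X: \1(X) -> \2(X)"),  # e.g. "birds fly"
    (r"^(\w+) is a (\w+)$", r"\2(\1)"),                # e.g. "tweety is a bird"
]

def translate(advice: str) -> str:
    """Return a symbolic form for the advice, or '' if no rule applies."""
    for pattern, template in translation_rules:
        match = re.match(pattern, advice)
        if match:
            return match.expand(template)
    return ""
```

For instance, `translate("birds fly")` produces `forall X: bird(X) -> fly(X)`; crucially, the rule list itself is an object a reasoner could inspect and revise, which is the prerequisite the paper argues for.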
Experience and Prediction: A Metric of Hardness for a Novel Litmus Test
Over the last decade, the Winograd Schema Challenge (WSC) has become a central litmus test in the research community, spurring research interest because it can be seen as a means of probing human commonsense reasoning. The development of new techniques has, in turn, made possible the use of Winograd schemas in various settings, such as the design of novel forms of CAPTCHAs.

Work from the literature that established a baseline for human adult performance on the WSC has shown that not all schemas are alike: they can potentially be categorized according to their perceived hardness for humans. This hardness-metric could then be used in future challenges, or in a WSC CAPTCHA service, to differentiate between Winograd schemas.

Recent work of ours has shown that this can be achieved by designing an automated system that outputs the hardness indexes of Winograd schemas, albeit with limitations on the number of schemas it can be applied to. This paper adds to that research by presenting a new system, based on Machine Learning (ML), that outputs the hardness of any Winograd schema faster and more accurately than previously used methods. Our system, which supports two different approaches, namely random forests and deep learning (LSTM-based), is ready to be used as an extension of any system that aims to differentiate between Winograd schemas according to their perceived hardness for humans. Alongside the system, we extend previous work by presenting the results of a large-scale experiment that shows how human performance varies across Winograd schemas.

Comment: 33 pages, 10 figures
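A minimal sketch of the random-forest flavor of such a system is shown below. The shallow text features, toy schemas, and hardness values are invented placeholders, not the features, data, or models of the paper.

```python
# Minimal sketch: predict a perceived-hardness index for a Winograd
# schema with a random forest over shallow text features. Features,
# toy schemas, and hardness values are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def features(schema: str) -> list:
    words = schema.split()
    return [
        len(words),                               # sentence length
        sum(len(w) for w in words) / len(words),  # mean word length
        schema.count(","),                        # clause-complexity proxy
    ]

train = [  # (schema text, illustrative hardness index in [0, 1])
    ("The trophy didn't fit in the suitcase because it was too big.", 0.2),
    ("The councilmen refused the demonstrators a permit because they feared violence.", 0.6),
    ("Jane gave Joan candy because she was hungry.", 0.4),
    ("Paul tried to call George on the phone, but he wasn't successful.", 0.5),
]
X = np.array([features(s) for s, _ in train])
y = np.array([h for _, h in train])

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
hardness = model.predict([features("Sam took the picture of Tom, and he was pleased.")])[0]
```

A consumer such as a CAPTCHA service could then threshold the predicted index to decide which schemas to serve.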
Gathering Background Knowledge for Story Understanding through Crowdsourcing
Successfully comprehending stories involves integration of the story information with the reader's own background knowledge. A prerequisite, then, of building automated story understanding systems is the availability of such background knowledge. We take the approach that knowledge appropriate for story understanding can be gathered by sourcing the task to the crowd. Our methodology centers on breaking this task into a sequence of more specific tasks, so that human participants not only identify relevant knowledge, but also convert it into a machine-readable form, generalize it, and evaluate its appropriateness. These individual tasks are presented to human participants as missions in an online game, offering them, in this manner, an incentive for their participation. We report on an initial deployment of the game, and discuss our ongoing work for integrating the knowledge gathering task into a full-fledged story understanding engine.
Jumping to conclusions
Inspired by the profound effortlessness (but also the substantial carelessness) with which humans seem to draw inferences when given even only partial information, we consider a unified formal framework for computational cognition, placing our emphasis on the existence of naturalistic mechanisms for representing, manipulating, and acquiring knowledge. Through formal results and discussion, we suggest that such fast and loose mechanisms could provide a concrete basis for the design of cognitive systems.
Neural Sculpting: Uncovering hierarchically modular task structure through pruning and network analysis
Natural target functions and tasks typically exhibit hierarchical modularity: they can be broken down into simpler sub-functions that are organized in a hierarchy. Such sub-functions have two important features: they have a distinct set of inputs (input-separability), and they are reused as inputs higher in the hierarchy (reusability). Previous studies have established that hierarchically modular neural networks, which are inherently sparse, offer benefits such as learning efficiency, generalization, multi-task learning, and transferability. However, identifying the underlying sub-functions and their hierarchical structure for a given task can be challenging. The high-level question in this work is: if we learn a task using a sufficiently deep neural network, how can we uncover the underlying hierarchy of sub-functions in that task? As a starting point, we examine the domain of Boolean functions, where it is easier to determine whether a task is hierarchically modular. We propose an approach based on iterative unit and edge pruning (during training), combined with network analysis for module detection and hierarchy inference. Finally, we demonstrate that this method can uncover the hierarchical modularity of a wide range of Boolean functions and of two vision tasks based on the MNIST digits dataset.
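A stripped-down sketch of the edge- and unit-pruning step, without the interleaved training the actual method relies on, might look as follows; matrix sizes and pruning fractions here are arbitrary choices for illustration.

```python
# Stripped-down sketch of one edge/unit pruning step, NumPy only.
# The real method interleaves pruning with training; here we only
# illustrate the pruning operation itself on a random weight matrix
# with arbitrary size and fraction.
import numpy as np

def prune_edges(W: np.ndarray, frac: float) -> np.ndarray:
    """Zero out the fraction `frac` of entries with smallest magnitude."""
    k = int(W.size * frac)
    if k == 0:
        return W.copy()
    threshold = np.sort(np.abs(W), axis=None)[k - 1]
    return np.where(np.abs(W) <= threshold, 0.0, W)

def dead_units(W: np.ndarray) -> np.ndarray:
    """Indices of units (columns) whose incoming weights are all zero."""
    return np.flatnonzero(~W.any(axis=0))

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))      # weights into one hidden layer
W_pruned = prune_edges(W, 0.5)   # remove the weakest half of the edges
```

After repeated pruning, the sparse connectivity that survives is what the module-detection and hierarchy-inference analysis operates on: clusters of units with shared inputs suggest sub-functions, and connections between clusters suggest their position in the hierarchy.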
Modular-E and the role of elaboration tolerance in solving the qualification problem
We describe Modular-E (ME), a specialized, model-theoretic logic for reasoning about actions. ME is able to represent non-deterministic domains involving concurrency, static laws (constraints), indirect effects (ramifications), and narrative information in the form of action occurrences and observations along a time line. We give formal results which characterize ME's high degree of modularity and elaboration tolerance, and show how these properties help to separate out, and provide principled solutions to, different aspects of the qualification problem. In particular, we identify the endogenous qualification problem as the problem of properly accounting for highly distributed, and potentially conflicting, causal knowledge when reasoning about the effects of actions. We show how a comprehensive solution to the endogenous qualification problem helps simplify the exogenous qualification problem — the problem of reconciling conflicts between predictions about what should be true at particular times and actual observations. More precisely, we describe how ME is able to use straightforward default reasoning techniques to solve the exogenous qualification problem largely because its robust treatments of the frame, ramification and endogenous qualification problems combine into a particular characteristic of elaboration tolerance that we formally encapsulate as a notion of “free will”.
Specifying and Monitoring Market Mechanisms Using Rights and Obligations
We provide a formal scripting language to capture the semantics of market mechanisms. The language is based on a set of well-defined principles, and is designed to capture an agent’s rights, as derived from property, and an agent’s obligations, as derived from restrictions placed on its actions, either voluntarily or as a consequence of other actions. Rights and obligations are viewed as first-class goods, from which we define fundamental axioms about well-functioning market-oriented worlds. Coupled with the scripting language is a run-time system that is able to monitor and enforce rights and obligations. Our treatment extends to represent a variety of market mechanisms, ranging from simple two-agent single-good exchanges to complicated combinatorial auctions.