Efficient transfer entropy analysis of non-stationary neural time series
Information theory allows us to investigate information processing in neural
systems in terms of information transfer, storage and modification. Especially
the measure of information transfer, transfer entropy, has seen a dramatic
surge of interest in neuroscience. Estimating transfer entropy from two
processes requires the observation of multiple realizations of these processes
to estimate associated probability density functions. To obtain these
observations, available estimators assume stationarity of processes to allow
pooling of observations over time. This assumption, however, is a major obstacle
to the application of these estimators in neuroscience as observed processes
are often non-stationary. As a solution, Gomez-Herrero and colleagues
theoretically showed that the stationarity assumption may be avoided by
estimating transfer entropy from an ensemble of realizations. Such an ensemble
is often readily available in neuroscience experiments in the form of
experimental trials. Thus, in this work we combine the ensemble method with a
recently proposed transfer entropy estimator to make transfer entropy
estimation applicable to non-stationary time series. We present an efficient
implementation of the approach that deals with the increased computational
demand of the ensemble method's practical application. In particular, we use a
massively parallel implementation for a graphics processing unit to handle the
most computationally demanding aspects of the ensemble method. We test the
performance and robustness of our implementation on data from simulated
stochastic processes and demonstrate the method's applicability to
magnetoencephalographic data. While we mainly evaluate the proposed method for
neuroscientific data, we expect it to be applicable in a variety of fields that
are concerned with the analysis of information transfer in complex biological,
social, and artificial systems.
Comment: 27 pages, 7 figures, submitted to PLOS ONE
Simulation of an SEIR infectious disease model on the dynamic contact network of conference attendees
The spread of infectious diseases crucially depends on the pattern of
contacts among individuals. Knowledge of these patterns is thus essential to
inform models and computational efforts. Few empirical studies are, however,
available that provide estimates of the number and duration of contacts among
social groups. Moreover, their space and time resolution are limited, so that
data is not explicit at the person-to-person level, and the dynamical aspect of
the contacts is disregarded. Here, we want to assess the role of data-driven
dynamic contact patterns among individuals, and in particular of their temporal
aspects, in shaping the spread of a simulated epidemic in the population.
We consider high resolution data of face-to-face interactions between the
attendees of a conference, obtained from the deployment of an infrastructure
based on Radio Frequency Identification (RFID) devices that assess mutual
face-to-face proximity. The spread of epidemics along these interactions is
simulated through an SEIR model, using both the dynamical network of contacts
defined by the collected data, and two aggregated versions of such network, in
order to assess the role of the data temporal aspects.
We show that, on the timescales considered, an aggregated network taking into
account the daily duration of contacts is a good approximation to the full
resolution network, whereas a homogeneous representation which retains only the
topology of the contact network fails in reproducing the size of the epidemic.
These results have important implications in understanding the level of
detail needed to correctly inform computational models for the study and
management of real epidemics.
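As an illustration of the simulation setup, a minimal discrete-time SEIR model on a static (aggregated) contact graph might look as follows; the graph, parameters, and synchronous update rule here are toy assumptions, not the study's RFID data or exact model.

```python
import random

def seir_on_network(adj, beta, sigma, gamma, seed_node, steps, rng):
    """Discrete-time SEIR on a static contact graph given as an adjacency
    list. beta: per-contact infection probability per step, sigma: E -> I
    probability per step, gamma: I -> R probability per step."""
    state = {v: "S" for v in adj}
    state[seed_node] = "I"
    for _ in range(steps):
        new_state = dict(state)
        for v, s in state.items():
            if s == "S":
                # each infectious neighbour transmits independently
                for u in adj[v]:
                    if state[u] == "I" and rng.random() < beta:
                        new_state[v] = "E"
                        break
            elif s == "E" and rng.random() < sigma:
                new_state[v] = "I"
            elif s == "I" and rng.random() < gamma:
                new_state[v] = "R"
        state = new_state
    return state

# toy example: a ring of 50 attendees, each in contact with two neighbours
rng = random.Random(1)
adj = {v: [(v - 1) % 50, (v + 1) % 50] for v in range(50)}
final = seir_on_network(adj, beta=0.9, sigma=0.8, gamma=0.2,
                        seed_node=0, steps=200, rng=rng)
print(sum(s != "S" for s in final.values()))  # final epidemic size
```

Replacing `adj` with a time-indexed sequence of contact lists turns this into the dynamic-network version that the study compares against its aggregated approximations.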
The brain is a prediction machine that cares about good and bad - Any implications for neuropragmatics?
Experimental pragmatics asks how people construct contextualized meaning in communication. So what does it mean for this field to add 'neuro' as a prefix to its name? After analyzing the options for any subfield of cognitive science, I argue that neuropragmatics can and occasionally should go beyond the instrumental use of EEG or fMRI and beyond mapping classic theoretical distinctions onto Brodmann areas. In particular, if experimental pragmatics ‘goes neuro’, it should take into account that the brain evolved as a control system that helps its bearer negotiate a highly complex, rapidly changing and often not so friendly environment. In this context, the ability to predict current unknowns, and to rapidly tell good from bad, are essential ingredients of processing. Using insights from non-linguistic areas of cognitive neuroscience as well as from EEG research on utterance comprehension, I argue that for a balanced development of experimental pragmatics, these two characteristics of the brain cannot be ignored.
Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications
There has been a growing interest in model-agnostic methods that can make
deep learning models more transparent and explainable to a user. Some
researchers recently argued that for a machine to achieve a certain degree of
human-level explainability, it needs to provide humans with causally
understandable explanations, a property known as causability. A specific class of
algorithms that have the potential to provide causability are counterfactuals.
This paper presents an in-depth systematic review of the diverse existing body
of literature on counterfactuals and causability for explainable artificial
intelligence. We performed an LDA topic modelling analysis under a PRISMA
framework to find the most relevant literature articles. This analysis resulted
in a novel taxonomy that considers the grounding theories of the surveyed
algorithms, together with their underlying properties and applications in
real-world data. This research suggests that current model-agnostic
counterfactual algorithms for explainable AI are not grounded on a causal
theoretical formalism and, consequently, cannot promote causability to a human
decision-maker. Our findings suggest that the explanations derived from major
algorithms in the literature provide spurious correlations rather than
cause/effect relationships, leading to sub-optimal, erroneous, or even biased
explanations. This paper also advances the literature with new directions and
challenges on promoting causability in model-agnostic approaches for
explainable artificial intelligence.
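For readers unfamiliar with the object under review: a counterfactual explanation for a classifier is the smallest input change that flips its prediction. The brute-force sketch below, on a hypothetical two-feature linear model, illustrates exactly the kind of model-agnostic, purely correlational search the survey critiques as lacking causal grounding.

```python
import itertools

def linear_classifier(x, w=(-1.0, 2.0), b=0.5):
    """Toy model: hypothetical two-feature linear classifier."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

def nearest_counterfactual(x, step=0.5, max_radius=4):
    """Brute-force search over a grid of perturbations for the closest
    input (L1 distance) whose prediction differs from that of x."""
    y0 = linear_classifier(x)
    deltas = [i * step for i in range(-max_radius, max_radius + 1)]
    best, best_dist = None, float("inf")
    for d0, d1 in itertools.product(deltas, repeat=2):
        cand = (x[0] + d0, x[1] + d1)
        dist = abs(d0) + abs(d1)
        if linear_classifier(cand) != y0 and dist < best_dist:
            best, best_dist = cand, dist
    return best, best_dist

cf, dist = nearest_counterfactual((1.0, 0.0))
print(cf, dist)  # (1.0, 0.5) 0.5
```

Note that nothing here distinguishes features the user could actually intervene on from mere statistical proxies, which is the gap between such counterfactuals and genuine causability.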
From Correlation to Causation: Estimation of Effective Connectivity from Continuous Brain Signals based on Zero-Lag Covariance
Knowing brain connectivity is of great importance both in basic research and
for clinical applications. We are proposing a method to infer directed
connectivity from zero-lag covariances of neuronal activity recorded at
multiple sites. This allows us to identify causal relations that are reflected
in neuronal population activity. To derive our strategy, we assume a generic
linear model of interacting continuous variables, the components of which
represent the activity of local neuronal populations. The suggested method for
inferring connectivity from recorded signals exploits the fact that the
covariance matrix derived from the observed activity contains information about
the existence, the direction and the sign of connections. Assuming a sparsely
coupled network, we disambiguate the underlying causal structure via
L1-minimization. In general, this method is suited to infer effective
connectivity from resting state data of various types. We show that our method
is applicable over a broad range of structural parameters regarding network
size and connection probability of the network. We also explored parameters
affecting its activity dynamics, like the eigenvalue spectrum. Also, based on
the simulation of suitable Ornstein-Uhlenbeck processes to model BOLD dynamics,
we show that with our method it is possible to estimate directed connectivity
from zero-lag covariances derived from such signals. In this study, we consider
measurement noise and unobserved nodes as additional confounding factors.
Furthermore, we investigate the amount of data required for a reliable
estimate. Additionally, we apply the proposed method on a fMRI dataset. The
resulting network exhibits a tendency for nearby areas to be connected as
well as inter-hemispheric connections between corresponding areas. Also, we
found that a large fraction of identified connections were inhibitory.
Comment: 18 pages, 10 figures
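The covariance-to-connectivity link rests on the stationary Lyapunov equation of a linear (Ornstein-Uhlenbeck) model: for dX = AX dt + dW with noise covariance D, the zero-lag covariance C satisfies AC + CA^T + D = 0, so the symmetric C still constrains the asymmetric A. A small numpy sketch with an assumed 3-node chain illustrates the relation; the paper additionally imposes sparsity (via L1-minimization) to invert it.

```python
import numpy as np

# hypothetical 3-node example with directed coupling 0 -> 1 -> 2
A = np.array([[-1.0, 0.0, 0.0],
              [ 0.8, -1.0, 0.0],
              [ 0.0,  0.8, -1.0]])  # stable connectivity matrix
D = np.eye(3)                       # noise covariance

# The stationary covariance of dX = A X dt + dW solves the continuous
# Lyapunov equation A C + C A^T + D = 0. With row-major vectorization,
# vec(A C + C A^T) = (A kron I + I kron A) vec(C), so C is obtained by
# one linear solve.
n = A.shape[0]
K = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
C = (-np.linalg.solve(K, D.flatten())).reshape(n, n)

# verify the relation that the inference method exploits
print(np.allclose(A @ C + C @ A.T + D, np.zeros((n, n))))  # True
```

Going the other way, from an empirical C back to A, is underdetermined without extra structure, which is where the sparse-coupling assumption enters.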
MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks
Human commonsense understanding of the physical and social world is organized
around intuitive theories. These theories support making causal and moral
judgments. When something bad happens, we naturally ask: who did what, and why?
A rich literature in cognitive science has studied people's causal and moral
intuitions. This work has revealed a number of factors that systematically
influence people's judgments, such as the violation of norms and whether the
harm is avoidable or inevitable. We collected a dataset of stories from 24
cognitive science papers and developed a system to annotate each story with the
factors they investigated. Using this dataset, we test whether large language
models (LLMs) make causal and moral judgments about text-based scenarios that
align with those of human participants. On the aggregate level, alignment has
improved with more recent LLMs. However, using statistical analyses, we find
that LLMs weigh the different factors quite differently from human
participants. These results show how curated, challenge datasets combined with
insights from cognitive science can help us go beyond comparisons based merely
on aggregate metrics: we uncover LLMs' implicit tendencies and show to what
extent these align with human intuitions.
Comment: 34 pages, 7 figures. NeurIPS 202
Towards Formal Definitions of Blameworthiness, Intention, and Moral Responsibility
We provide formal definitions of degree of blameworthiness and intention
relative to an epistemic state (a probability over causal models and a utility
function on outcomes). These, together with a definition of actual causality,
provide the key ingredients for moral responsibility judgments. We show that
these definitions give insight into commonsense intuitions in a variety of
puzzling cases from the literature.
Comment: Appears in AAAI-1
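One reading of the degree-of-blameworthiness definition: an agent is blameworthy for a bad outcome phi to the extent that the chosen action made phi more likely than some available alternative would have. The sketch below is a deliberate simplification that ignores the action-cost discounting of the full definition and assumes plain numeric probabilities rather than a distribution over causal models.

```python
def blameworthiness(outcome_prob, action, alternatives):
    """Simplified degree of blameworthiness: the largest amount by which
    `action` raised the probability of the bad outcome phi relative to an
    available alternative. outcome_prob maps action -> Pr(phi | action)."""
    def db(alt):
        return max(0.0, outcome_prob[action] - outcome_prob[alt])
    return max(db(alt) for alt in alternatives)

# hypothetical example: pulling the lever makes the harm near-certain,
# while waiting or warning would likely have avoided it
probs = {"pull": 0.9, "wait": 0.3, "warn": 0.1}
print(blameworthiness(probs, "pull", ["wait", "warn"]))  # 0.8
```

The max over alternatives captures the intuition that an agent who had a much safer option available is more blameworthy than one whose best alternative was almost as risky.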
Quantitative Characteristics of Human-Written Short Stories as a Metric for Automated Storytelling
Evaluating the extent to which computer-produced stories are structured like human-invented narratives can be an important component of the quality of a story plot. In this paper, we report on an empirical experiment in which human subjects have invented short plots in a constrained scenario. The stories were annotated according to features commonly found in existing automatic story generators. The annotation was designed to measure the proportion and relations of story components that should be used in automatic computational systems for matching human behaviour. Results suggest that there are relatively common patterns that can be used as input data for identifying similarity to human-invented stories in automatic storytelling systems. The patterns found are in line with narratological models, and the results provide numerical quantification and layout of story components. The proposed method of story analysis is tested over two additional sources, the ROCStories corpus and stories generated by automated storytellers, to illustrate the valuable insights that may be derived from them.
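One way to turn such annotated proportions into a similarity score is to compare component-frequency profiles of stories; the component labels below are hypothetical illustrations, not the paper's annotation scheme.

```python
def component_profile(annotations):
    """Proportion of each annotated story component (e.g. goal, conflict,
    action, outcome) in a story's annotated event list."""
    total = len(annotations)
    return {c: annotations.count(c) / total for c in set(annotations)}

def profile_distance(p, q):
    """L1 distance between two component profiles, usable as a simple
    similarity-to-human-stories score (lower = closer)."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)

human_baseline = component_profile(["goal", "conflict", "action", "outcome"])
generated = component_profile(["action", "action", "action", "outcome"])
print(profile_distance(human_baseline, generated))  # 1.0
```

An automated storyteller whose output profile sits far from the human baseline on such a metric would be flagged as structurally unlike human-invented plots, regardless of surface fluency.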