Robust averaging protects decisions from noise in neural computations
An ideal observer will give equivalent weight to sources of information that are equally reliable. However, when averaging visual information, human observers tend to downweight or discount features that are relatively outlying or deviant (‘robust averaging’). Why humans adopt an integration policy that discards important decision information remains unknown. Here, observers were asked to judge the average tilt in a circular array of high-contrast gratings, relative to an orientation boundary defined by a central reference grating. Observers showed robust averaging of orientation, but the extent to which they did so was a positive predictor of their overall performance. Using computational simulations, we show that although robust averaging is suboptimal for a perfect integrator, it paradoxically enhances performance in the presence of “late” noise, i.e. noise that corrupts decisions during integration. In other words, robust decision strategies increase the brain’s resilience to noise arising in neural computations during decision-making.
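The kind of simulation the abstract describes can be sketched in a few lines. This is a toy Monte Carlo setup with hypothetical parameters (item count, tilt distributions, and the median-kernel weighting rule are illustrative choices, not the paper's fitted model): a uniform averager and a robust averager that discounts tilts far from the sample median, with additive "late" noise injected after integration.

```python
import math
import random

def averaging_accuracy(robust, late_noise_sd, n_trials=20000, n_items=8, seed=0):
    """Fraction of correct left/right tilt judgments.

    Per trial, item tilts ~ Normal(mu, 10 deg) with mu = -2 or +2 deg; the
    decision variable is a (weighted) mean plus "late" Gaussian noise added
    after integration. The robust weights (a hypothetical choice: a Gaussian
    kernel on distance from the sample median) discount outlying tilts.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        mu = rng.choice([-2.0, 2.0])
        tilts = [rng.gauss(mu, 10.0) for _ in range(n_items)]
        if robust:
            med = sorted(tilts)[n_items // 2]
            weights = [math.exp(-((t - med) / 10.0) ** 2) for t in tilts]
        else:
            weights = [1.0] * n_items
        dv = sum(w * t for w, t in zip(weights, tilts)) / sum(weights)
        dv += rng.gauss(0.0, late_noise_sd)  # noise arising at the decision stage
        correct += (dv > 0) == (mu > 0)
    return correct / n_trials
```

Sweeping `late_noise_sd` lets one explore how the two integration policies trade off; this skeleton is a starting point for such an exploration, not a reproduction of the paper's model.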
Synchronization and Redundancy: Implications for Robustness of Neural Learning and Decision Making
Learning and decision making in the brain are key processes critical to
survival, and yet are processes implemented by non-ideal biological building
blocks which can impose significant error. We explore quantitatively how the
brain might cope with this inherent source of error by taking advantage of two
ubiquitous mechanisms, redundancy and synchronization. In particular we
consider a neural process whose goal is to learn a decision function by
implementing a nonlinear gradient dynamics. The dynamics, however, are assumed
to be corrupted by perturbations modeling the error which might be incurred due
to limitations of the biology, intrinsic neuronal noise, and imperfect
measurements. We show that error, and the associated uncertainty surrounding a
learned solution, can be controlled in large part by trading off
synchronization strength among multiple redundant neural systems against the
noise amplitude. The impact of the coupling between such redundant systems is
quantified by the spectrum of the network Laplacian, and we discuss the role of
network topology in synchronization and in reducing the effect of noise. A
range of situations in which the mechanisms we model arise in brain science are
discussed, and we draw attention to experimental evidence suggesting that
cortical circuits capable of implementing the computations of interest here can
be found on several scales. Finally, simulations comparing theoretical bounds
to the relevant empirical quantities show that the theoretical estimates we
derive can be tight.
Comment: Preprint, accepted for publication in Neural Computation.
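The central trade-off, synchronization strength versus noise amplitude across redundant units, can be illustrated with a toy simulation (all parameters hypothetical): N redundant units run noisy gradient descent on f(x) = x²/2 with all-to-all diffusive coupling, and the stationary spread of the units around their consensus shrinks as coupling grows.

```python
import random
import statistics

def mean_spread(coupling, n_units=10, noise_sd=0.5, dt=0.01, steps=2000, seed=1):
    """Time-averaged stdev of the units around their mean (second half of run).

    Each unit takes Euler-Maruyama steps of dx = (-x + k*(mean - x)) dt + noise,
    i.e. noisy gradient descent on f(x) = x**2 / 2 with all-to-all coupling k.
    Deviations from the consensus decay at rate (1 + k), so stronger coupling
    damps the error contributed by per-unit noise.
    """
    rng = random.Random(seed)
    x = [rng.uniform(-1.0, 1.0) for _ in range(n_units)]
    spreads = []
    for step in range(steps):
        m = sum(x) / n_units
        x = [xi + dt * (-xi + coupling * (m - xi))
                + noise_sd * (dt ** 0.5) * rng.gauss(0.0, 1.0)
             for xi in x]
        if step >= steps // 2:
            spreads.append(statistics.pstdev(x))
    return sum(spreads) / len(spreads)
```

For this all-to-all topology the coupling acts through a single nonzero Laplacian eigenvalue; richer topologies, as the abstract notes, change the spectrum and hence the noise suppression.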
Optimal utility and probability functions for agents with finite computational precision
When making economic choices, such as those between goods or gambles, humans act as if their internal representation of the value and probability of a prospect is distorted away from its true value. These distortions give rise to decisions which apparently fail to maximize reward, and preferences that reverse without reason. Why would humans have evolved to encode value and probability in a distorted fashion, in the face of selective pressure for reward-maximizing choices? Here, we show that under the simple assumption that humans make decisions with finite computational precision (in other words, that decisions are irreducibly corrupted by noise), the distortions of value and probability displayed by humans are approximately optimal in that they maximize reward and minimize uncertainty. In two empirical studies, we manipulate factors that change the reward-maximizing form of distortion, and find that in each case, humans adapt optimally to the manipulation. This work suggests an answer to the longstanding question of why humans make “irrational” economic choices.
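The question the abstract poses can be framed as a small simulation (the task, value range, and power-law distortion here are illustrative assumptions, not the paper's experiments): an agent chooses between two amounts by comparing distorted utilities u(x) = x**alpha under additive decision noise, and one can then ask which exponent maximizes the reward actually obtained.

```python
import random

def mean_reward(alpha, noise_sd=2.0, n_trials=30000, seed=0):
    """Average obtained reward when choosing between two random amounts
    by comparing distorted utilities u(x) = x**alpha plus decision noise.
    Hypothetical setup: amounts drawn uniformly from [0, 10].
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        a, b = rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0)
        dv = a ** alpha - b ** alpha + rng.gauss(0.0, noise_sd)  # noisy comparison
        total += a if dv > 0 else b
    return total / n_trials
```

Sweeping `alpha` for a given noise level shows how the reward-maximizing distortion depends on the noise, which is the logic of the paper's argument; alpha = 0 erases all value information and yields chance-level reward.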
Five dichotomies in the psychophysics of ensemble perception
1. Whereas psychophysicists may formulate hypotheses about appearance, they can only measure performance. Bias and imprecision in psychophysical data need not necessarily reflect bias and imprecision in perception.
2. Sensory systems may exaggerate the differences between each item and its neighbors in an ensemble. Alternatively, sensory systems may homogenize the ensemble, thereby removing any apparent differences between neighboring items.
3. Ensemble perception may be involuntary when observers attempt to report the identities of individual items. Conversely, when asked to make a (voluntary) decision about the ensemble as a whole, observers may find it very difficult to compute statistics that are based on more than a very small number of individual items.
4. Modeling decisions about prothetic continua like size and contrast can be tricky, because sensory signals may be distorted before and/or after voluntarily computing ensemble statistics. With metathetic continua, like spatial orientation, distortion is less problematic; physically vertical things necessarily appear close to vertical and physically horizontal things necessarily appear close to horizontal.
5. Decision processes are corrupted by noise that, like distortion, may be added to sensory signals prior to and/or after voluntarily computing ensemble statistics.
EEG-representational geometries and psychometric distortions in approximate numerical judgment
When judging the average value of sample stimuli (e.g., numbers) people tend to either over- or underweight extreme sample values, depending on task context. In a context of overweighting, recent work has shown that extreme sample values were also over-represented in neural signals, in terms of an anti-compressed geometry of number samples in multivariate electroencephalography (EEG) patterns. Here, we asked whether neural representational geometries may also reflect a relative underweighting of extreme values (i.e., compression), which has been observed behaviorally in a great variety of tasks. We used a simple experimental manipulation (instructions to average a single stream or to compare dual streams of samples) to induce compression or anti-compression in behavior when participants judged rapid number sequences. Model-based representational similarity analysis (RSA) replicated the previous finding of neural anti-compression in the dual-stream task, but failed to provide evidence for neural compression in the single-stream task, despite the evidence for compression in behavior. Instead, the results indicated enhanced neural processing of extreme values in either task, regardless of whether extremes were over- or underweighted in subsequent behavioral choice. We further observed more general differences in the neural representation of the sample information between the two tasks. Together, our results indicate a mismatch between sample-level EEG geometries and behavior, which raises new questions about the origin of common psychometric distortions, such as diminishing sensitivity for larger values.
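The model geometries at the heart of such an RSA are easy to write down. A minimal sketch (the specific transforms, log for compression and squaring for anti-compression, are illustrative choices) builds compressed, linear, and anti-compressed distance geometries over a set of number samples; in practice each model RDM would be correlated against the neural RDM measured from EEG patterns.

```python
import math

def model_rdm(values, transform):
    """Flattened upper triangle of the pairwise-distance matrix of transformed values."""
    t = [transform(v) for v in values]
    return [abs(t[i] - t[j]) for i in range(len(t)) for j in range(i + 1, len(t))]

def pearson(x, y):
    """Plain Pearson correlation between two RDM vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

numbers = [1, 2, 3, 4, 5, 6]                            # the sample values
compressed      = model_rdm(numbers, math.log)          # diminishing sensitivity
linear          = model_rdm(numbers, float)             # veridical spacing
anti_compressed = model_rdm(numbers, lambda v: v ** 2)  # extremes exaggerated
```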
Human optional stopping in a heteroscedastic world
When making decisions, animals must trade off the benefits of information harvesting against the opportunity cost of prolonged deliberation. Deciding when to stop accumulating information and commit to a choice is challenging in natural environments, where the reliability of decision-relevant information may itself vary unpredictably over time (variable variance or "heteroscedasticity"). We asked humans to perform a categorization task in which discrete, continuously valued samples (oriented gratings) arrived in series until the observer made a choice. Human behavior was best described by a model that adaptively weighted sensory signals by their inverse prediction error and integrated the resulting quantities with a linear urgency signal to a decision threshold. This model approximated the output of a Bayesian model that computed the full posterior probability of a correct response, and successfully predicted adaptive weighting of decision information in neural signals. Adaptive weighting of decision information may have evolved to promote optional stopping in heteroscedastic natural environments.
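A schematic version of the winning model can be sketched as follows. The constants and the leaky prediction-error proxy are illustrative guesses, not the paper's fitted quantities: each incoming sample is weighted by the inverse of a running prediction-error estimate, accumulated, and a linearly growing urgency term lowers the effective commitment bound over time.

```python
def decide(samples, gain=1.0, urgency=0.05, threshold=3.0):
    """Sequential categorization of a sample stream (sign of the mean tilt).

    Each sample is weighted by the inverse of a running prediction-error
    estimate `pe` (a crude reliability proxy), added to the decision variable
    `dv`, and a linearly growing urgency term effectively collapses the bound.
    Returns (choice of +1 or -1, number of samples consumed).
    """
    dv, pe = 0.0, 1.0
    for t, s in enumerate(samples, start=1):
        pe = 0.8 * pe + 0.2 * abs(s - dv / t)   # leaky estimate of surprise
        dv += gain * s / max(pe, 1e-6)          # reliability-weighted evidence
        if abs(dv) + urgency * t >= threshold:  # urgency lowers the effective bound
            return (1 if dv > 0 else -1), t
    return (1 if dv >= 0 else -1), len(samples)
```

The urgency term is what makes this an optional-stopping model: even weak or unreliable evidence eventually forces a commitment rather than indefinite deliberation.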
From distributed machine learning to federated learning: In the view of data privacy and security
Federated learning is an improved version of distributed machine learning that further offloads operations which would usually be performed by a central server. The server becomes more like an assistant coordinating clients to work together rather than micromanaging the workforce as in traditional DML. One of the greatest advantages of federated learning is the additional privacy and security guarantees it affords. Federated learning architecture relies on smart devices, such as smartphones and IoT sensors, that collect and process their own data, so sensitive information never has to leave the client device. Rather, clients train a submodel locally and send an encrypted update to the central server for aggregation into the global model. These strong privacy guarantees make federated learning an attractive choice in a world where data breaches and information theft are common and serious threats. This survey outlines the landscape and latest developments in data privacy and security for federated learning. We identify the different mechanisms used to provide privacy and security, such as differential privacy, secure multiparty computation and secure aggregation. We also survey the current attack models, identifying the areas of vulnerability and the strategies adversaries use to penetrate federated systems. The survey concludes with a discussion on the open challenges and potential directions of future work in this increasingly popular learning paradigm.
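The local-train / send-update / server-aggregate loop described above can be sketched in a few lines. This is a minimal FedAvg-style toy with a one-parameter linear model; encryption is omitted for brevity, and the function names and training setup are illustrative, not from any particular framework.

```python
def local_update(w, data, lr=0.05, epochs=5):
    """One client's local training: fit y ~ w*x by gradient descent on its
    private (x, y) pairs. Raw data never leaves this function; only the
    updated weight would be (encrypted and) sent to the server."""
    for _ in range(epochs):
        grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, client_datasets):
    """Server-side FedAvg-style aggregation, weighted by client dataset size."""
    updates = [(local_update(global_w, d), len(d)) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total
```

A real deployment would wrap the client updates in secure aggregation or add differential-privacy noise so that the server never sees any individual update in the clear, which is exactly the design space this survey maps out.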
Acute stress impairs reward learning in men
Acute stress is ubiquitous in everyday life, but the extent to which acute stress affects how people learn from the outcomes of their choices is still poorly understood. Here, we investigate how acute stress impacts reward and punishment learning in men using a reinforcement-learning task. Sixty-two male participants performed the task whilst under stress and control conditions. We observed that acute stress impaired participants' choice performance towards monetary gains, but not losses. To unravel the mechanism(s) underlying such impairment, we fitted a reinforcement-learning model to participants' trial-by-trial choices. Computational modeling indicated that under acute stress participants learned more slowly from positive prediction errors - when the outcomes were better than expected - consistent with stress-induced dopamine disruptions. Such mechanistic understanding of how acute stress impairs reward learning is particularly important given the pervasiveness of stress in our daily life and the impact that stress can have on our wellbeing and mental health.
Funding: Portuguese Foundation for Science and Technology (FCT) grants to A. Seara-Cardoso [PTDC/MHC-PCN/2296/2014, co-financed by FEDER through COMPETE2020 under the PT2020 Partnership Agreement (POCI-01-0145-FEDER-016747)] and to A. Mesquita (IF/00750/2015). J. Carvalheiro was supported by an FCT PhD fellowship (PD/BD/128467/2017). This study was conducted at the Psychology Research Centre (PSI/01662), School of Psychology, University of Minho, supported by FCT and the Portuguese Ministry of Science, Technology and Higher Education (UID/PSI/01662/2019), through national funds (PIDDAC).
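The modeling result reduces to a Rescorla-Wagner-style update with separate learning rates for positive and negative prediction errors. A minimal sketch (parameter values hypothetical): lowering the positive-error learning rate, as the fitted model suggests stress does, slows learning from better-than-expected outcomes while leaving learning from losses intact.

```python
def q_update(q, reward, alpha_pos, alpha_neg):
    """Rescorla-Wagner update with asymmetric learning rates: positive
    prediction errors (outcome better than expected) use alpha_pos,
    negative ones use alpha_neg."""
    pe = reward - q
    return q + (alpha_pos if pe > 0 else alpha_neg) * pe

def learn(rewards, alpha_pos, alpha_neg, q0=0.0):
    """Value estimate after a sequence of outcomes."""
    q = q0
    for r in rewards:
        q = q_update(q, r, alpha_pos, alpha_neg)
    return q
```

After a run of rewarded trials, an agent with a reduced `alpha_pos` holds a lower value estimate than an unstressed agent, reproducing the slower reward learning the abstract reports.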