Instrumental calculation, cognitive role-playing, or both? Self-perceptions of Seconded National Experts in the European Commission
Most work studying micro-processes of integration - i.e. how agents develop identities and decision-making behaviours within a particular institution - offers explanations based on either instrumental rationality or socialisation. This article proposes a two-dimensional framework for analysing the conditions under which both logics of social action co-exist. Our empirical analysis employs a unique dataset from a 2011 survey of all 1098 currently active Seconded National Experts (SNEs) in the European Commission, and supports the model's theoretical predictions. We find that a) instrumental cost-benefit calculation and cognitive role-playing (as semi-reflexive socialisation) often simultaneously influence SNEs' (perceptions of their) behaviour, and b) this joint presence of both logics of social action depends on certain scope conditions (i.e., SNEs' education, length of prior embeddedness and noviceness). Keywords: Socialisation, rational action, European Commission, Seconded National Experts, survey.
Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters
Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and to the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions.
The Intuitive Appeal of Explainable Machines
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model's development, not just explanations of the model itself.
Perceptions of randomness in binary sequences: Normative, heuristic, or both?
When people consider a series of random binary events, such as tossing an unbiased coin and recording the sequence of heads (H) and tails (T), they tend to erroneously rate sequences with less internal structure or order (such as HTTHT) as more probable than sequences containing more structure or order (such as HHHHH). This is traditionally explained as a local representativeness effect: Participants assume that the properties of long sequences of random outcomes (such as an equal proportion of heads and tails, and little internal structure) should also apply to short sequences. However, recent theoretical work has noted that the probability of a particular sequence of, say, heads and tails of length n, occurring within a larger (>n) sequence of coin flips actually differs by sequence, so P(HHHHH) < P(HTTHT). In this alternative account, people apply rational norms based on limited experience. We test these accounts. Participants in Experiment 1 rated the likelihood of occurrence for all possible strings of 4, 5, and 6 observations in a sequence of coin flips. Judgments were better explained by representativeness in alternation rate, relative proportion of heads and tails, and sequence complexity, than by objective probabilities. Experiments 2 and 3 gave similar results using incentivized binary choice procedures. Overall the evidence suggests that participants are not sensitive to variation in objective probabilities of a sub-sequence occurring; they appear to use heuristics based on several distinct forms of representativeness.
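The inequality P(HHHHH) < P(HTTHT) concerns the chance of the pattern occurring somewhere within a longer run of flips, and it can be verified by brute force. A minimal sketch (the window length of 10 is an arbitrary illustrative choice, not taken from the study):

```python
from itertools import product

def count_containing(pattern, n):
    """Count the binary sequences of length n that contain
    `pattern` as a contiguous substring."""
    return sum(pattern in "".join(seq) for seq in product("HT", repeat=n))

n = 10
hhhhh = count_containing("HHHHH", n)
httht = count_containing("HTTHT", n)
print(hhhhh, httht)   # fewer of the 1024 length-10 sequences contain HHHHH
assert hhhhh < httht  # so P(HHHHH appears) < P(HTTHT appears)
```

The self-overlapping pattern HHHHH "clumps": its occurrences bunch together, so fewer distinct sequences contain it at least once, which is exactly the direction of the effect described above.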
Optimal Adaptation Principles In Neural Systems
Animal brains are remarkably efficient in handling complex computational tasks, which are intractable even for state-of-the-art computers. For instance, our ability to detect visual objects in the presence of substantial variability and clutter surpasses any algorithm. This ability seems even more surprising given the noisiness and biophysical constraints of neural circuits. This thesis focuses on understanding the theoretical principles governing how neural systems, at various scales, are adapted to the structure of their environment in order to interact with it and perform information processing tasks efficiently. Here, we study this question in three very different and challenging scenarios: i) how a sensory neural circuit (the olfactory pathway) is organised to efficiently process odour stimuli in a very high-dimensional space with complex structure; ii) how individual neurons in the sensory periphery exploit the structure in a fast-changing environment to utilise their dynamic range efficiently; iii) how the auditory system of whole organisms is able to efficiently exploit temporal structure in a noisy, fast-changing environment to optimise perception of ambiguous sounds. We also study the theoretical issues in developing principled measures of model complexity and extending classical complexity notions to explicitly account for the scale/resolution at which we observe a system.
Behavioral Economics & Machine Learning: Expanding the Field Through a New Lens
In this thesis, I investigate central questions in behavioral economics as well as law and economics. I examine well-studied problems through a new methodological lens. The aim is to generate new insights and thus point behavioral scientists to novel analytical tools. To this end, I show how machine learning may be used to build new theories by reducing complexity in experimental economic data. Moreover, I use natural language processing to show how supervised learning can enable the scientific community to expand limited datasets. I also investigate the normative impact of the use of such tools in social science research or decision-making as well as their deficiencies.
Normative Hidden Variable Models of Learning and Decision Making Under Uncertainty
Understanding the mechanisms behind learning and decision making under uncertainty remains an open challenge, despite a wealth of experimental and theoretical studies. In this thesis, we focus on passive learning and perceptual decision making, as investigated in classical conditioning experiments and the random-dot-kinematogram task, respectively. We show how both problem settings can be successfully modelled in a compact and theoretically sound manner by formulating them as normative latent variable models. By being explicit about both the statistical nature of the task setting, where hidden (latent) causes have to be inferred from noisy observations, and the computational goal the animal is trying to solve, we are able to arrive at powerful models that can explain a greater variety of data with fewer assumptions and free parameters than previous approaches, and we offer predictions about how behaviour is expected to change if task parameters or neural activity are altered. In the first half of this thesis we focus on decision making, and introduce a neurally plausible model that can link the task setting to behaviour. We then extend this work to propose a systems level model that includes motor control, such that previously puzzling data from neural recordings can be reconciled. In the second half of the thesis we focus on understanding passive learning in the context of classical conditioning. We show that our model can resolve long-standing disputes and differences between existing models of classical conditioning, by showing that they can be understood as special cases of our more general formulation.
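The thesis's specific models are not reproduced in the abstract, but the core computation it describes - inferring hidden (latent) causes from noisy observations via Bayes' rule - can be illustrated with a toy example; the two Gaussian causes, their means, and the observations are all hypothetical choices made for this sketch:

```python
import numpy as np

# Toy normative latent-variable inference (illustrative, not the thesis's model):
# one of two hidden causes emits noisy Gaussian observations, and the
# observer computes the posterior over causes by Bayes' rule.
means = np.array([-1.0, 1.0])    # emission mean of each latent cause
sigma = 1.0                      # observation noise (assumed known)
prior = np.array([0.5, 0.5])     # uniform prior over the two causes

def posterior(observations):
    # Gaussian log likelihood of the data under each cause.
    log_lik = np.array([
        -0.5 * np.sum((observations - mu) ** 2) / sigma**2 for mu in means
    ])
    # Subtract the max before exponentiating for numerical stability.
    unnorm = prior * np.exp(log_lik - log_lik.max())
    return unnorm / unnorm.sum()

obs = np.array([0.9, 1.4, 0.7])  # noisy samples, in fact near the second cause
p = posterior(obs)
print(p)  # posterior mass concentrates on the second cause
```

Even this minimal version shows the normative logic: the model commits to an explicit generative account of how observations arise, and behaviour falls out of optimal inference under that account rather than from free parameters.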