"The Psychology of Risk: A Brief Primer"
Risk is commonly defined in negative terms: the probability of suffering a loss, or exposure to dangers and hazards whose outcomes are uncertain. By contrast, the term risk as used in the social sciences refers simply to the degree of uncertainty: it addresses how much variance exists among the possible outcomes associated with a particular choice or action. A counterintuitive consequence is that an investment certain to lose $10 is classified as riskless, since there is no variance among its possible outcomes. Andreassen states that uncertainty and value are treated as separate entities because expanding the notion of risk to include gains as well as losses adds considerable conceptual power. Economic theories based on perfect rationality are undoubtedly powerful. Andreassen states that if one wanted to predict human behavior in the simplest manner, one would certainly begin by assuming that people are motivated by self-interest, and that they can be extremely calculating when valuable opportunities arise, learning quickly from the success of others. Research on the psychology of risk does not begin by assuming that all human behavior is irrational, random, or thoughtless. Rather, this research has centered on how people may be biased by myriad social influences, the perceived choices available, or the cognitive rules of thumb used to simplify difficult economic and social decisions.
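To make the variance-based definition concrete, here is a minimal sketch (illustrative only, not from the primer): a certain loss and a fair gamble can share the same expected value, yet only the gamble counts as risky under this definition, because only it has variance among outcomes.

    from statistics import mean, pvariance

    # Outcomes in dollars. The sure loss always pays -10; the gamble is a
    # fair coin flip between -20 and 0, so both have expected value -10.
    sure_loss = [-10, -10]
    gamble = [-20, 0]

    for name, outcomes in [("sure loss", sure_loss), ("gamble", gamble)]:
        print(name, "| expected value:", mean(outcomes),
              "| risk as variance:", pvariance(outcomes))
    # sure loss: variance 0, so it is riskless under this definition
    # gamble: variance 100, so it is risky despite the identical mean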
Reducing the Top Quark Mass Uncertainty with Jet Grooming
The measurement of the top quark mass has large systematic uncertainties
coming from the Monte Carlo simulations that are used to match theory and
experiment. We explore how much that uncertainty can be reduced by using jet
grooming procedures. We estimate the inherent ambiguity in what is meant by
the Monte Carlo mass to be around 530 MeV without any corrections. This uncertainty
can be reduced by 60% to 200 MeV by calibrating to the W mass and a further 33%
to 140 MeV by applying soft-drop jet grooming (or by 20% more to 170 MeV with
trimming). At e+e- colliders, the associated uncertainty is around 110 MeV,
reducing to 50 MeV after calibrating to the W mass. By analyzing the tuning
parameters, we conclude that the importance of jet grooming after calibrating
to the W mass is to reduce sensitivity to the underlying event.
Comment: 21 pages, 7 figures
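For context, soft drop (the grooming procedure the abstract refers to) recursively declusters a jet and discards the softer branch until the surviving pair satisfies min(pT1, pT2)/(pT1 + pT2) > z_cut (ΔR12/R0)^β. A minimal sketch of that recursion, with a hypothetical Branch tree standing in for a real clustering history:

    import math
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Branch:
        """One node of a (hypothetical) binary C/A clustering tree."""
        pt: float                        # transverse momentum of this subjet
        eta: float
        phi: float
        hard: Optional["Branch"] = None  # harder child, None for a leaf
        soft: Optional["Branch"] = None  # softer child, None for a leaf

    def delta_r(a, b):
        dphi = math.remainder(a.phi - b.phi, 2 * math.pi)  # wrap to [-pi, pi]
        return math.hypot(a.eta - b.eta, dphi)

    def soft_drop(jet, z_cut=0.1, beta=0.0, r0=1.0):
        """Decluster until min(pt1, pt2)/(pt1 + pt2) > z_cut * (dR12/r0)**beta,
        discarding the softer branch at each failing step."""
        while jet.hard is not None:
            z = min(jet.hard.pt, jet.soft.pt) / (jet.hard.pt + jet.soft.pt)
            if z > z_cut * (delta_r(jet.hard, jet.soft) / r0) ** beta:
                break                    # condition passed: keep current jet
            jet = jet.hard               # condition failed: drop soft branch
        return jet

With β = 0 this condition reduces to the modified MassDrop Tagger; the abstract's point is that grooming of this kind reduces the calibrated measurement's sensitivity to the underlying event.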
Study addiction - a new area of psychological study: conceptualization, assessment, and preliminary empirical findings
Aims: Recent research has suggested that for some individuals, educational studying may become compulsive and excessive and lead to ‘study addiction’. The present study conceptualized and assessed study addiction within the framework of workaholism, defining it as compulsive over-involvement in studying that interferes with functioning in other domains and that is detrimental for individuals and/or their environment. Methods: The Bergen Study Addiction Scale (BStAS) was tested, reflecting seven core addiction symptoms (salience, mood modification, tolerance, withdrawal, conflict, relapse, and problems) in relation to studying. The scale was administered via a cross-sectional survey distributed to Norwegian (n = 218) and Polish (n = 993) students with additional questions concerning demographic variables, study-related variables, health, and personality. Results: A one-factor solution had acceptable fit with the data in both samples and the scale demonstrated good reliability. Scores on BStAS converged with scores on learning engagement. Study addiction (BStAS) was significantly related to specific aspects of studying (longer learning time, lower academic performance), personality traits (higher neuroticism and conscientiousness, lower extroversion), and negative health-related factors (impaired general health, decreased quality of life and sleep quality, higher perceived stress). Conclusions: It is concluded that BStAS has good psychometric properties, making it a promising tool in the assessment of study addiction. Study addiction is related in predictable ways to personality and health variables, as predicted from contemporary workaholism theory and research.
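As an illustration of the kind of reliability check reported above ("the scale demonstrated good reliability"), here is a minimal sketch computing total scores and Cronbach's alpha over seven symptom items; the 1-5 response format and the sample data are assumptions for illustration, not the published scoring key.

    from statistics import pvariance

    SYMPTOMS = ["salience", "mood modification", "tolerance", "withdrawal",
                "conflict", "relapse", "problems"]

    def cronbach_alpha(responses):
        """responses[r][i] is respondent r's rating on item i (assumed 1-5).
        alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
        k = len(responses[0])
        item_variances = sum(pvariance(col) for col in zip(*responses))
        total_variance = pvariance([sum(row) for row in responses])
        return k / (k - 1) * (1 - item_variances / total_variance)

    # Hypothetical responses: one row per respondent, one column per symptom.
    sample = [
        [2, 3, 1, 2, 2, 3, 1],
        [5, 4, 4, 5, 4, 5, 4],
        [1, 2, 1, 1, 2, 1, 1],
        [4, 4, 3, 4, 5, 4, 3],
    ]
    print("total scores:", [sum(row) for row in sample])
    print("alpha: %.2f" % cronbach_alpha(sample))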
JUNIPR: a Framework for Unsupervised Machine Learning in Particle Physics
In applications of machine learning to particle physics, a persistent
challenge is how to go beyond discrimination to learn about the underlying
physics. To this end, a powerful tool would be a framework for unsupervised
learning, where the machine learns the intricate high-dimensional contours of
the data upon which it is trained, without reference to pre-established labels.
In order to approach such a complex task, an unsupervised network must be
structured intelligently, based on a qualitative understanding of the data. In
this paper, we scaffold the neural network's architecture around a
leading-order model of the physics underlying the data. In addition to making
unsupervised learning tractable, this design actually alleviates existing
tensions between performance and interpretability. We call the framework
JUNIPR: "Jets from UNsupervised Interpretable PRobabilistic models". In this
approach, the set of particle momenta composing a jet are clustered into a
binary tree that the neural network examines sequentially. Training is
unsupervised and unrestricted: the network could decide that the data bears
little correspondence to the chosen tree structure. However, when there is a
correspondence, the network's output along the tree has a direct physical
interpretation. JUNIPR models can perform discrimination tasks, through the
statistically optimal likelihood-ratio test, and they permit visualizations of
discrimination power at each branching in a jet's tree. Additionally, JUNIPR
models provide a probability distribution from which events can be drawn,
providing a data-driven Monte Carlo generator. As a third application, JUNIPR
models can reweight events from one (e.g. simulated) data set to agree with
distributions from another (e.g. experimental) data set.
Comment: 37 pages, 24 figures
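To see how two of the applications above would look in code, here is a minimal sketch assuming two trained JUNIPR-style models are available as functions log_prob_A and log_prob_B returning log p(jet); these names are placeholders for illustration, not the paper's actual API:

    import math

    def log_likelihood_ratio(jet, log_prob_A, log_prob_B):
        # Discrimination: by the Neyman-Pearson lemma, thresholding this
        # statistic is the statistically optimal test of "A" versus "B".
        return log_prob_A(jet) - log_prob_B(jet)

    def classify(jet, log_prob_A, log_prob_B, threshold=0.0):
        return "A" if log_likelihood_ratio(jet, log_prob_A, log_prob_B) > threshold else "B"

    def reweight(jets, log_prob_sim, log_prob_data):
        # Reweighting: weight each simulated jet by p_data(jet) / p_sim(jet)
        # so that weighted simulated distributions match the data model.
        return [math.exp(log_prob_data(j) - log_prob_sim(j)) for j in jets]

Generation, the third application, would correspond to sampling the model's branching decisions down the clustering tree rather than evaluating probabilities of given jets.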
Enhancing Automated Test Selection in Probabilistic Networks
In diagnostic decision-support systems, test selection amounts to selecting, in a sequential manner, a test that is expected to yield the largest decrease
in the uncertainty about a patient’s diagnosis. For capturing this uncertainty, often an information measure is used. In this paper, we study the Shannon entropy,
the Gini index, and the misclassification error for this purpose. We argue that the
Gini index can be regarded as an approximation of the Shannon entropy and that
the misclassification error can be looked upon as an approximation of the Gini
index. We further argue that the differences between the first derivatives of the
three functions can explain different test sequences in practice. Experimental results from using the measures with a real-life probabilistic network in oncology
support our observations.
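The three measures, and greedy test selection based on them, are compact enough to sketch; the test/outcome encoding below is an assumption for illustration, not the paper's probabilistic network:

    import math

    def shannon_entropy(p):              # H(p) = -sum_i p_i * log2(p_i)
        return -sum(q * math.log2(q) for q in p if q > 0)

    def gini_index(p):                   # G(p) = 1 - sum_i p_i**2
        return 1 - sum(q * q for q in p)

    def misclassification_error(p):      # E(p) = 1 - max_i p_i
        return 1 - max(p)

    def select_test(prior, tests, measure=shannon_entropy):
        """Pick the test with the largest expected decrease in uncertainty.
        tests maps a test name to a list of (outcome_probability, posterior)."""
        def expected_decrease(outcomes):
            return measure(prior) - sum(w * measure(post) for w, post in outcomes)
        return max(tests, key=lambda name: expected_decrease(tests[name]))

    # Toy example: three candidate diagnoses, two candidate tests whose
    # outcome probabilities and posteriors are consistent with the prior.
    prior = [0.5, 0.3, 0.2]
    tests = {
        "test 1": [(0.6, [0.8, 0.15, 0.05]), (0.4, [0.05, 0.525, 0.425])],
        "test 2": [(0.5, [0.6, 0.3, 0.1]), (0.5, [0.4, 0.3, 0.3])],
    }
    print(select_test(prior, tests, shannon_entropy))  # -> "test 1"

All three functions peak at the uniform distribution and vanish on degenerate ones, so they often agree on the best test; the paper traces the cases where they disagree to the differences between their first derivatives.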