
    Investigating (sequential) unit asking: an unsuccessful quest for scope sensitivity in willingness to donate judgments

    People exhibit scope insensitivity: their expressed valuation of a problem is not proportional to its scope or size. To address scope insensitivity in charitable giving, Hsee et al. (2013) developed the (Classical) Unit Asking technique, where people are first asked how much they are willing to donate to support a single individual, followed by how much they are willing to donate to support a group of individuals. In this paper, we explored the mechanisms, extensions, and limitations of the technique. In particular, we investigated an extension of the technique, which we call Sequential Unit Asking (SUA). SUA asks people a series of willingness-to-donate questions, in which the number of individuals to be helped increases in a stepwise manner until it reaches the total group size. Across four studies investigating donation judgments (total (Formula presented.)), we did not find evidence that willingness-to-donate (WTD) judgments for the total group increased with larger groups. Instead, our results suggest that Unit Asking (sequential or classical) increases donation amounts only through a single one-off boost. Further, we find evidence in three out of four studies that the SUA extension increases WTD judgments over Classical Unit Asking. In a fifth study ((Formula presented.)) using a contingent valuation design (instead of donation judgments), we find scope sensitivity using all asking techniques. We conclude that, while it is difficult to create scope sensitivity in WTD judgments, SUA should be considered a promising approach to increasing charitable donations.
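    As a rough illustration of the stepwise procedure described above, the following Python sketch generates an SUA question sequence (the step schedule, wording, and group size are hypothetical assumptions for illustration, not the study materials):

        # Hypothetical sketch of a Sequential Unit Asking (SUA) question flow.
        # Classical Unit Asking is the special case steps = [1].
        def sua_prompts(total_group_size, steps):
            """Yield willingness-to-donate (WTD) prompts for increasing numbers
            of individuals, ending with the total group."""
            for n in steps:
                yield (f"How much would you be willing to donate to help {n} "
                       f"of the {total_group_size} children in need?")
            yield (f"How much would you be willing to donate to help all "
                   f"{total_group_size} children in need?")

        # Example: unit (1), then intermediate steps, then the full group of 20.
        for prompt in sua_prompts(total_group_size=20, steps=[1, 5, 10]):
            print(prompt)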

    When unlikely outcomes occur: the role of communication format in maintaining communicator credibility

    The public expects science to reduce or eliminate uncertainty (Kinzig & Starrett, 2003), yet scientific forecasts are probabilistic (at best) and it is simply not possible to make predictions with certainty. Whilst an ‘unlikely’ outcome is not expected to occur, from a frequentist perspective it will still occur one time in five (based on a translation of 20%, e.g. Theil, 2002). When an ‘unlikely’ outcome does occur, the prediction may be deemed ‘erroneous’, reflecting a misunderstanding of the nature of uncertainty. Such misunderstandings could have ramifications for the subsequent (perceived) credibility of the communicator who made the prediction. We examine whether the effect of ‘erroneous’ predictions on perceived credibility differs according to the communication format used. Specifically, we consider verbal, numerical (point and range [wide / narrow]) and mixed-format probability expressions. We consistently find that subsequent perceptions are least affected by the ‘erroneous’ prediction when it is expressed numerically, regardless of whether it is a point or range estimate. Our findings suggest numbers should be used in consequential risk communications regarding ‘unlikely’ events wherever possible.
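    A minimal simulation makes the frequentist point concrete (illustrative only; the 20% figure is the verbal-to-numerical translation cited above):

        # If 'unlikely' is translated as a 20% probability, then across many such
        # forecasts roughly one in five of the predicted-unlikely events still occur.
        import random

        random.seed(1)
        n_forecasts = 100_000
        occurred = sum(random.random() < 0.20 for _ in range(n_forecasts))
        print(f"'Unlikely' events occurred in {occurred / n_forecasts:.1%} of cases")
        # ~20% -- an 'unlikely' outcome that occurs is not, by itself, an erroneous forecast.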

    An appropriate verbal probability lexicon for communicating surgical risks is unlikely to exist

    Effective risk communication about medical procedures is critical to ethical shared decision-making. Here, we explore the potential for development of an evidence-based lexicon for verbal communication of surgical risk. We found that Ear, Nose and Throat (ENT) surgeons expressed a preference for communicating such risks using verbal probability expressions (VPEs; e.g., “high risk”). However, there was considerable heterogeneity in the expressions they reported using (Study 1). Study 2 compared ENT surgeons’ and laypeople’s (i.e., potential patients) interpretations of the ten most frequent VPEs listed in Study 1. While both groups displayed considerable variability in their interpretations, lay participants demonstrated more variability and provided systematically higher interpretations than surgeons. Study 3 found that lay participants were typically unable to provide unique VPEs to differentiate between the ranges of (low) probabilities required. Taken together, these results add to arguments that reliance on VPEs for surgical risk communication is ill-advised. Not only are there systematic interpretational differences between surgeons and potential patients, but the coarse granularity of VPEs raises severe challenges for developing an appropriate evidence-based lexicon for surgical risk communication. We caution against the use of VPEs in any risk context characterized by low, but very different, probabilities.

    A pessimistic view of optimistic belief updating

    Received academic wisdom holds that human judgment is characterized by unrealistic optimism, the tendency to underestimate the likelihood of negative events and overestimate the likelihood of positive events. With recent questions being raised over the degree to which the majority of this research genuinely demonstrates optimism, attention to possible mechanisms generating such a bias becomes ever more important. New studies have now claimed that unrealistic optimism emerges as a result of biased belief updating with distinctive neural correlates in the brain. On a behavioral level, these studies suggest that, for negative events, desirable information is incorporated into personal risk estimates to a greater degree than undesirable information (resulting in a more optimistic outlook). However, using task analyses, simulations, and experiments, we demonstrate that this pattern of results is a statistical artifact. In contrast with previous work, we examined participants’ use of new information with reference to the normative, Bayesian standard. Simulations reveal the fundamental difficulties that would need to be overcome by any robust test of optimistic updating. No such test presently exists, so the best one can currently do is perform analyses with a number of techniques, all of which have important weaknesses. Applying these analyses to five experiments shows no evidence of optimistic updating. These results clarify the difficulties involved in studying human ‘bias’ and cast additional doubt over the status of optimism as a fundamental characteristic of healthy cognition.
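    For readers unfamiliar with the normative benchmark invoked above, the sketch below shows a generic Bayesian update in Python (an illustration of the Bayesian standard in general, not the specific analyses reported in the paper; the numbers are made up):

        # Generic Bayesian benchmark for belief updating: the normative posterior
        # is obtained by multiplying prior odds by the likelihood ratio of the new
        # information, then converting back to a probability.
        def bayesian_update(prior, likelihood_ratio):
            """Return the normative posterior probability given a prior probability
            and the likelihood ratio P(info | event) / P(info | no event)."""
            prior_odds = prior / (1.0 - prior)
            posterior_odds = prior_odds * likelihood_ratio
            return posterior_odds / (1.0 + posterior_odds)

        # Example: a 10% initial self-risk estimate combined with information whose
        # likelihood ratio is 2 should rise to about 18%, irrespective of whether
        # that shift is desirable or undesirable for the person making it.
        print(round(bayesian_update(0.10, 2.0), 3))  # 0.182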

    Heavy quarkonium: progress, puzzles, and opportunities

    A golden age for heavy quarkonium physics dawned a decade ago, initiated by the confluence of exciting advances in quantum chromodynamics (QCD) and an explosion of related experimental activity. The early years of this period were chronicled in the Quarkonium Working Group (QWG) CERN Yellow Report (YR) in 2004, which presented a comprehensive review of the status of the field at that time and provided specific recommendations for further progress. However, the broad spectrum of subsequent breakthroughs, surprises, and continuing puzzles could only be partially anticipated. Since the release of the YR, the BESII program concluded only to give birth to BESIII; the B-factories and CLEO-c flourished; quarkonium production and polarization measurements at HERA and the Tevatron matured; and heavy-ion collisions at RHIC have opened a window on the deconfinement regime. All these experiments leave legacies of quality, precision, and unsolved mysteries for quarkonium physics, and therefore beg for continuing investigations. The plethora of newly-found quarkonium-like states unleashed a flood of theoretical investigations into new forms of matter such as quark-gluon hybrids, mesonic molecules, and tetraquarks. Measurements of the spectroscopy, decays, production, and in-medium behavior of $c\bar{c}$, $b\bar{b}$, and $b\bar{c}$ bound states have been shown to validate some theoretical approaches to QCD and to highlight a lack of quantitative success for others. The intriguing details of quarkonium suppression in heavy-ion collisions that have emerged from RHIC have elevated the importance of separating hot- and cold-nuclear-matter effects in quark-gluon plasma studies. This review systematically addresses all these matters and concludes by prioritizing directions for ongoing and future efforts.
    Comment: 182 pages, 112 figures. Editors: N. Brambilla, S. Eidelman, B. K. Heltsley, R. Vogt. Section Coordinators: G. T. Bodwin, E. Eichten, A. D. Frawley, A. B. Meyer, R. E. Mitchell, V. Papadimitriou, P. Petreczky, A. A. Petrov, P. Robbe, A. Vair

    Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV

    The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions and (anti-)protons in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV with the ALICE detector at the Large Hadron Collider. Results obtained with the event plane and four-particle cumulant methods are reported for the pseudo-rapidity range $|\eta| < 0.8$ at different collision centralities and as a function of transverse momentum, $p_{\rm T}$, out to $p_{\rm T} = 20$ GeV/$c$. The observed non-zero elliptic and triangular flow depends only weakly on transverse momentum for $p_{\rm T} > 8$ GeV/$c$. The small $p_{\rm T}$ dependence of the difference between elliptic flow results obtained from the event plane and four-particle cumulant methods suggests a common origin of flow fluctuations up to $p_{\rm T} = 8$ GeV/$c$. The magnitude of the (anti-)proton elliptic and triangular flow is larger than that of pions out to at least $p_{\rm T} = 8$ GeV/$c$, indicating that the particle type dependence persists out to high $p_{\rm T}$.
    Comment: 16 pages, 5 captioned figures, authors from page 11, published version, figures at http://aliceinfo.cern.ch/ArtSubmission/node/186
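    For context, the flow coefficients $v_n$ referred to above are the Fourier coefficients of the azimuthal particle distribution relative to the symmetry-plane angles $\Psi_n$; schematically (the standard definition, not anything specific to this measurement):

        % Fourier decomposition of the azimuthal yield; v_2, v_3 and v_4 are the
        % elliptic, triangular and quadrangular coefficients quoted in the abstract.
        \frac{dN}{d\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\bigl[n(\varphi - \Psi_n)\bigr]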

    Centrality dependence of charged particle production at large transverse momentum in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV

    The inclusive transverse momentum ($p_{\rm T}$) distributions of primary charged particles are measured in the pseudo-rapidity range $|\eta| < 0.8$ as a function of event centrality in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV with ALICE at the LHC. The data are presented in the $p_{\rm T}$ range $0.15 < p_{\rm T} < 50$ GeV/$c$ for nine centrality intervals from 70-80% to 0-5%. The Pb-Pb spectra are presented in terms of the nuclear modification factor $R_{\rm AA}$ using a pp reference spectrum measured at the same collision energy. We observe that the suppression of high-$p_{\rm T}$ particles strongly depends on event centrality. In central collisions (0-5%) the yield is most suppressed, with $R_{\rm AA} \approx 0.13$ at $p_{\rm T} = 6$-$7$ GeV/$c$. Above $p_{\rm T} = 7$ GeV/$c$, there is a significant rise in the nuclear modification factor, which reaches $R_{\rm AA} \approx 0.4$ for $p_{\rm T} > 30$ GeV/$c$. In peripheral collisions (70-80%), the suppression is weaker, with $R_{\rm AA} \approx 0.7$, almost independently of $p_{\rm T}$. The measured nuclear modification factors are compared to other measurements and model calculations.
    Comment: 17 pages, 4 captioned figures, 2 tables, authors from page 12, published version, figures at http://aliceinfo.cern.ch/ArtSubmission/node/284
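    For reference, the nuclear modification factor quoted above is conventionally defined as the per-event Pb-Pb yield divided by the binary-collision-scaled pp reference (standard form; the analysis may use slightly different normalisation conventions):

        % R_AA = 1 in the absence of nuclear effects; R_AA < 1 indicates suppression.
        R_{\rm AA}(p_{\rm T}) =
          \frac{(1/N_{\rm evt})\, \mathrm{d}^{2}N_{\rm AA}/\mathrm{d}p_{\rm T}\,\mathrm{d}\eta}
               {\langle T_{\rm AA}\rangle\, \mathrm{d}^{2}\sigma_{pp}/\mathrm{d}p_{\rm T}\,\mathrm{d}\eta}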

    Measurement of charm production at central rapidity in proton-proton collisions at $\sqrt{s} = 2.76$ TeV

    The $p_{\rm T}$-differential production cross sections of the prompt (B feed-down subtracted) charmed mesons $\mathrm{D}^0$, $\mathrm{D}^+$, and $\mathrm{D}^{*+}$ in the rapidity range $|y| < 0.5$, and for transverse momentum $1 < p_{\rm T} < 12$ GeV/$c$, were measured in proton-proton collisions at $\sqrt{s} = 2.76$ TeV with the ALICE detector at the Large Hadron Collider. The analysis exploited the hadronic decays $\mathrm{D}^0 \rightarrow \mathrm{K}\pi$, $\mathrm{D}^+ \rightarrow \mathrm{K}\pi\pi$, $\mathrm{D}^{*+} \rightarrow \mathrm{D}^0\pi$, and their charge conjugates, and was performed on a $L_{\rm int} = 1.1$ nb$^{-1}$ event sample collected in 2011 with a minimum-bias trigger. The total charm production cross section at $\sqrt{s} = 2.76$ TeV and at 7 TeV was evaluated by extrapolating to the full phase space the $p_{\rm T}$-differential production cross sections at $\sqrt{s} = 2.76$ TeV and our previous measurements at $\sqrt{s} = 7$ TeV. The results were compared to existing measurements and to perturbative-QCD calculations. The fraction of $\mathrm{c\bar{d}}$ D mesons produced in a vector state was also determined.
    Comment: 20 pages, 5 captioned figures, 4 tables, authors from page 15, published version, figures at http://aliceinfo.cern.ch/ArtSubmission/node/307

    Search for a W' boson decaying to a bottom quark and a top quark in pp collisions at sqrt(s) = 7 TeV

    Results are presented from a search for a W' boson using a dataset corresponding to 5.0 inverse femtobarns of integrated luminosity collected during 2011 by the CMS experiment at the LHC in pp collisions at sqrt(s)=7 TeV. The W' boson is modeled as a heavy W boson, but different scenarios for the couplings to fermions are considered, involving both left-handed and right-handed chiral projections of the fermions, as well as an arbitrary mixture of the two. The search is performed in the decay channel W' to t b, leading to a final-state signature with a single lepton (e, mu), missing transverse energy, and jets, at least one of which is tagged as a b-jet. A W' boson that couples to fermions with the same coupling constant as the W, but to the right-handed rather than left-handed chiral projections, is excluded for masses below 1.85 TeV at the 95% confidence level. For the first time with LHC data, constraints have been placed on the W' gauge coupling for a set of left- and right-handed coupling combinations. These results represent a significant improvement over previously published limits.
    Comment: Submitted to Physics Letters B. Replaced with version published

    Search for the standard model Higgs boson decaying into two photons in pp collisions at sqrt(s)=7 TeV

    A search for a Higgs boson decaying into two photons is described. The analysis is performed using a dataset recorded by the CMS experiment at the LHC from pp collisions at a centre-of-mass energy of 7 TeV, which corresponds to an integrated luminosity of 4.8 inverse femtobarns. Limits are set on the cross section of the standard model Higgs boson decaying to two photons. The expected exclusion limit at 95% confidence level is between 1.4 and 2.4 times the standard model cross section in the mass range between 110 and 150 GeV. The analysis of the data excludes, at 95% confidence level, the standard model Higgs boson decaying into two photons in the mass range 128 to 132 GeV. The largest excess of events above the expected standard model background is observed for a Higgs boson mass hypothesis of 124 GeV with a local significance of 3.1 sigma. The global significance of observing an excess with a local significance greater than 3.1 sigma anywhere in the search range 110-150 GeV is estimated to be 1.8 sigma. More data are required to ascertain the origin of this excess.
    Comment: Submitted to Physics Letters
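    The gap between the local (3.1 sigma) and global (1.8 sigma) significances reflects the look-elsewhere effect: the wider the mass range searched, the more likely a fluctuation of a given size appears somewhere. Schematically (a simplified trials-factor approximation, not the procedure used in the analysis):

        % p_global relates to p_local through an effective number of independent
        % mass hypotheses N_trials across the 110-150 GeV search range.
        p_{\rm global} \approx 1 - (1 - p_{\rm local})^{N_{\rm trials}} \approx N_{\rm trials}\, p_{\rm local} \quad (p_{\rm local} \ll 1)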