
    Re-establishing Orthodoxy in the Realm of Causation


    Uncertainty and the influence of group norms in the attitude–behaviour relationship

    This is the author's post-print version of an article whose final and definitive form has been published in the British Journal of Social Psychology (© The British Psychological Society 2007). The definitive version is available at: http://www.bpsjournals.co.uk/journals/bjsp/

    Two studies were conducted to examine the impact of subjective uncertainty on conformity to group norms in the attitude–behaviour context. In both studies, subjective uncertainty was manipulated using a deliberative mindset manipulation (McGregor, Zanna, Holmes, & Spencer, 2001). In Study 1 (N=106), participants were exposed to either an attitude-congruent or an attitude-incongruent in-group norm. In Study 2 (N=83), participants were exposed to either a congruent, an incongruent, or an ambiguous in-group norm. A range of attitude–behaviour outcomes, including attitude–intention consistency and change in attitude certainty, was assessed. In both studies, levels of group-normative behaviour varied as a function of uncertainty condition. In Study 1, conformity to group norms, as evidenced by variations in the level of attitude–intention consistency, was observed only in the high uncertainty condition. In Study 2, exposure to an ambiguous norm had different effects for those in the low and the high uncertainty conditions. In the low uncertainty condition, the greatest conformity was observed in the attitude-congruent norm condition compared with the attitude-incongruent or ambiguous norm conditions. In contrast, individuals in the high uncertainty condition displayed the greatest conformity when exposed to either an attitude-congruent or an ambiguous in-group norm. The implications of these results for the role of subjective uncertainty in social influence processes are discussed.

    Palliative care needs in patients hospitalized with heart failure (PCHF) study: rationale and design

    Aims: The primary aim of this study is to provide data to inform the design of a randomized controlled clinical trial (RCT) of a palliative care (PC) intervention in heart failure (HF). We will identify an appropriate study population with a high prevalence of PC needs, defined using quantifiable measures. We will also identify which components a specific and targeted PC intervention in HF should include, and attempt to define the most relevant trial outcomes. Methods: An unselected, prospective, near-consecutive cohort of patients admitted to hospital with acute decompensated HF will be enrolled over a 2-year period. All potential participants will be screened using B-type natriuretic peptide and echocardiography, and all those enrolled will be extensively characterized in terms of their HF status, comorbidity, and PC needs. Quantitative assessment of PC needs will include evaluation of general and disease-specific quality of life, mood, symptom burden, caregiver burden, and end-of-life care. Inpatient assessments will be performed, and after discharge outpatient assessments will be carried out every 4 months for up to 2.5 years. Participants will be followed up for a minimum of 1 year for hospital admissions and for place and cause of death. Methods for identifying patients with HF who have PC needs will be evaluated, and estimates of healthcare utilisation performed. Conclusion: By assessing the prevalence of these needs, describing how they change over time, and evaluating how PC needs can best be identified, we will provide the foundation for designing an RCT of a PC intervention in HF.

    The Hubble series: Convergence properties and redshift variables

    In cosmography, cosmokinetics, and cosmology it is quite common to encounter physical quantities expanded as a Taylor series in the cosmological redshift z. Perhaps the best-known exemplar of this phenomenon is the Hubble relation between distance and redshift. However, we now have considerable high-z data available; for instance, we have supernova data at least back to redshift z=1.75. This raises the theoretical question of whether the Hubble series (or, more generally, any series expansion based on the z-redshift) actually converges at large redshift. Based on a combination of mathematical and physical reasoning, we argue that the radius of convergence of any series expansion in z is less than or equal to 1, and that z-based expansions must break down for z>1, corresponding to a universe less than half its current size. Furthermore, we argue on theoretical grounds for the utility of an improved parameterization y=z/(1+z). In terms of the y-redshift we again argue that the radius of convergence of any series expansion in y is less than or equal to 1, so that y-based expansions are likely to be good all the way back to the big bang (y=1), but that y-based expansions must break down for y<-1, now corresponding to a universe more than twice its current size.
    Comment: 15 pages, 2 figures; accepted for publication in Classical and Quantum Gravity.
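    The convenience of the y-redshift is easiest to see from the change of variables itself. The following is a minimal LaTeX restatement of the mappings quoted in the abstract (no new results, only the stated limits):

        % Change of variables between the z- and y-redshift:
        \[
          y = \frac{z}{1+z}, \qquad z = \frac{y}{1-y}.
        \]
        % Limits quoted in the abstract:
        %   big bang:                            z \to \infty  <=>  y \to 1
        %   universe at twice its current size:  z = -1/2      <=>  y = -1

    Thus a series in y with radius of convergence 1 covers the entire past, 0 <= y < 1 (i.e. all z >= 0), whereas a series in z with the same radius of convergence fails beyond z = 1.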

    Approximation schemes for the dynamics of diluted spin models: the Ising ferromagnet on a Bethe lattice

    We discuss analytical approximation schemes for the dynamics of diluted spin models. The original dynamics of the complete set of degrees of freedom is replaced by a hierarchy of equations that includes an increasing number of global observables, and which can be closed approximately at different levels of the hierarchy. We illustrate this method on the simple example of the Ising ferromagnet on a Bethe lattice, investigating the first three possible closures, all of which are exact in the long-time limit and which yield increasingly accurate predictions for the finite-time behavior. We also investigate the critical region around the phase transition and the behavior of two-time correlation functions. Finally, we underline the close relationship between this approach and dynamical replica theory under the assumption of replica symmetry.
    Comment: 21 pages, 5 figures.
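    The closure schemes themselves are analytical, but the first observable they predict, the global magnetization, is easy to benchmark numerically. Below is a minimal sketch, assuming Glauber heat-bath dynamics on a random regular graph as a standard finite proxy for a Bethe lattice of connectivity k; the system size, connectivity, and temperature are illustrative placeholders, not values from the paper.

        # Numerical baseline sketch (an assumption, not the paper's analytical method):
        # Glauber heat-bath dynamics for an Ising ferromagnet on a random regular
        # graph, which locally approximates a Bethe lattice of connectivity k.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        N, k, beta, sweeps = 10_000, 3, 1.0, 50   # illustrative parameter choices

        graph = nx.random_regular_graph(k, N, seed=0)
        neighbors = [list(graph[i]) for i in range(N)]
        spins = rng.choice([-1, 1], size=N)        # random initial configuration

        for t in range(sweeps):
            for i in rng.integers(0, N, size=N):   # one sweep = N single-spin updates
                h = sum(spins[j] for j in neighbors[i])          # local field
                p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))     # heat-bath rule
                spins[i] = 1 if rng.random() < p_up else -1
            print(t, spins.mean())                 # global magnetization vs. time

    The printed magnetization trajectory is the kind of finite-time observable against which the successive closures could be compared.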

    Mapping the stellar structure of the Milky Way thick disk and halo using SEGUE photometry

    We map the stellar structure of the Galactic thick disk and halo by applying color-magnitude diagram (CMD) fitting to photometric data from the SEGUE survey, allowing, for the first time, a comprehensive analysis of their structure at both high and low latitudes using uniform SDSS photometry. By incorporating the photometry of all relevant stars simultaneously, CMD fitting bypasses the need to choose single tracer populations. Using old stellar populations of differing metallicities as templates, we obtain a sparse 3D map of the stellar mass distribution at |Z| > 1 kpc. Fitting a smooth Milky Way model comprising exponential thin and thick disks and an axisymmetric power-law halo allows us to constrain the structural parameters of the thick disk and halo. The thick-disk scale height and length are well constrained at 0.75 ± 0.07 kpc and 4.1 ± 0.4 kpc, respectively. We find a stellar halo flattening within ~25 kpc of c/a = 0.88 ± 0.03 and a power-law index of 2.75 ± 0.07 (for 7 < R_GC <~ 30 kpc). The model fits yield thick-disk and stellar halo densities at the solar location of rho_thick,sun = 10^(-2.3 ± 0.1) M_sun pc^-3 and rho_halo,sun = 10^(-4.20 ± 0.05) M_sun pc^-3, averaging over any substructures. Our analysis provides the first clear in situ evidence for a radial metallicity gradient in the Milky Way's stellar halo: within R <~ 15 kpc the stellar halo has a mean metallicity of [Fe/H] = -1.6, which shifts to [Fe/H] = -2.2 at larger radii. Subtraction of the best-fit smooth and symmetric model from the overall density maps reveals a wealth of substructures at all latitudes, some attributable to known streams and overdensities, and some new. A simple warp cannot account for the low-latitude substructure, as overdensities occur simultaneously above and below the Galactic plane. (abridged)
    Comment: 13 pages, 10 figures; accepted for publication in the Astrophysical Journal.
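    For concreteness, one standard parameterization of the smooth model named in the abstract (exponential disks plus a flattened power-law halo) is sketched below in LaTeX; the authors' exact functional form may differ in detail, so treat this as an assumption-laden illustration with the abstract's best-fit numbers filled in.

        % One common form for the smooth Galactic model named above
        % (an assumption; the paper's exact parameterization may differ):
        \begin{align}
          \rho_{\rm disk}(R, Z) &\propto \exp\!\left(-\frac{R - R_\odot}{L}\right) \exp\!\left(-\frac{|Z|}{H}\right), \\
          \rho_{\rm halo}(R, Z) &\propto \left(R^2 + \frac{Z^2}{(c/a)^2}\right)^{-n/2},
        \end{align}
        % with the abstract's thick-disk and halo fits:
        %   H = 0.75 ± 0.07 kpc,  L = 4.1 ± 0.4 kpc,
        %   c/a = 0.88 ± 0.03,    n = 2.75 ± 0.07.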

    Think Outside the Color Box: Probabilistic Target Selection and the SDSS-XDQSO Quasar Targeting Catalog

    We present the SDSS-XDQSO quasar targeting catalog for efficient flux-based quasar target selection down to the faint limit of the Sloan Digital Sky Survey (SDSS) catalog, even at medium redshifts (2.5 <~ z <~ 3) where the stellar contamination is significant. We build models of the distributions of stars and quasars in flux space down to the flux limit by applying the extreme-deconvolution method to estimate the underlying density. We convolve this density with the flux uncertainties when evaluating the probability that an object is a quasar. This approach results in a targeting algorithm that is more principled, more efficient, and faster than other similar methods. We apply the algorithm to derive low-redshift (z < 2.2) and medium-redshift (2.2 <= z <= 3.5) quasar probabilities for all 160,904,060 point sources with dereddened i-band magnitude between 17.75 and 22.45 mag in the 14,555 deg^2 of imaging from SDSS Data Release 8. The catalog can be used to define a uniformly selected and efficient low- or medium-redshift quasar survey, such as that needed for the SDSS-III's Baryon Oscillation Spectroscopic Survey project. We show that the XDQSO technique performs as well as the current best photometric quasar-selection technique at low redshift, and outperforms all other flux-based methods for selecting the medium-redshift quasars of our primary interest. We make code to reproduce the XDQSO quasar target selection publicly available.
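    The classification step described above has a compact probabilistic core: each class density is a Gaussian mixture (as fit by extreme deconvolution), and folding in an object's flux uncertainties amounts to adding its measurement covariance to each component covariance before evaluating the likelihood. A minimal sketch follows; the mixture parameters, prior, and two-band toy data are placeholders, not the XDQSO fits.

        # Hedged sketch of extreme-deconvolution-style classification:
        # class densities are Gaussian mixtures; an object's measurement
        # covariance S is added to each component covariance before
        # evaluating the likelihood. All numbers below are placeholders.
        import numpy as np
        from scipy.stats import multivariate_normal

        def mixture_likelihood(flux, S, weights, means, covs):
            """p(flux | class), with measurement covariance S folded in."""
            return sum(w * multivariate_normal.pdf(flux, mean=m, cov=C + S)
                       for w, m, C in zip(weights, means, covs))

        def quasar_probability(flux, S, qso_model, star_model, prior_qso=0.01):
            """Posterior P(quasar | flux) from the two class likelihoods."""
            lq = mixture_likelihood(flux, S, *qso_model) * prior_qso
            ls = mixture_likelihood(flux, S, *star_model) * (1.0 - prior_qso)
            return lq / (lq + ls)

        # Toy two-band example with made-up one-component mixtures:
        qso_model  = ([1.0], [np.array([0.2, 0.3])], [np.eye(2) * 0.05])
        star_model = ([1.0], [np.array([1.0, 0.4])], [np.eye(2) * 0.02])
        flux = np.array([0.25, 0.35])   # observed fluxes/colors
        S = np.eye(2) * 0.01            # per-object measurement covariance
        print(quasar_probability(flux, S, qso_model, star_model))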

    A Simple Likelihood Method for Quasar Target Selection

    We present a new method for quasar target selection using photometric fluxes and a Bayesian probabilistic approach. For our purposes, we target quasars using Sloan Digital Sky Survey (SDSS) photometry to a magnitude limit of g=22. The efficiency and completeness of this technique are measured using the Baryon Oscillation Spectroscopic Survey (BOSS) data taken in 2010. This technique was used for the uniformly selected (CORE) sample of targets in BOSS year-one spectroscopy, to be realized in the 9th SDSS data release. When targeting at a density of 40 objects per sq. deg (the BOSS quasar targeting density), the efficiency of this technique in recovering z > 2.2 quasars is 40%. The completeness compared with all quasars identified in BOSS data is 65%. This paper also describes possible extensions and improvements for this technique.
    Comment: Updated to accepted version for publication in the Astrophysical Journal. 10 pages, 10 figures, 3 tables.
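    As a quick worked illustration of the two quoted figures of merit (standard definitions, with the abstract's numbers plugged in; nothing here is a new result):

        % Standard definitions of targeting efficiency and completeness:
        \begin{align}
          \text{efficiency}   &= \frac{N_{z>2.2\ \text{quasars confirmed}}}{N_{\text{targets}}} \approx 0.40, \\
          \text{completeness} &= \frac{N_{\text{recovered by this method}}}{N_{\text{all BOSS quasars}}} \approx 0.65.
        \end{align}
        % At the BOSS targeting density of 40 targets per sq. deg, roughly
        % 40 x 0.40 = 16 true z > 2.2 quasars per sq. deg are recovered.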