
    What Makes Some People Think Astrology Is Scientific?

    Citizens in both North America and Europe are apt to read horoscope columns in newspapers and magazines. While some people read these casually and purely for entertainment, others believe that astrology has scientific status and can provide real insight into events and personality. Using data from a European survey, this article explores some of the reasons why some people think that astrology is scientific and how astrology is viewed in relation to other knowledge-producing practices. Three hypotheses in particular are tested. The first is that some Europeans lack the necessary scientific literacy to distinguish science from pseudoscience. The second is that people are confused about what astrology actually is. The third is derived from Adorno’s work on authoritarianism and the occult and postulates that those who adhere to authoritarian values are more likely to believe in astrological claims. Support is found for all three hypotheses.
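
    As a rough illustration of how such survey hypotheses can be tested, the sketch below fits a logistic regression of a binary "astrology is scientific" response on measures of scientific literacy, confusion about what astrology is, and authoritarian values. The data file and column names are hypothetical placeholders, not the variables used in the article.

```python
# Minimal sketch (not the article's code): logistic regression of the kind that
# could test the three hypotheses on survey data. The file and column names
# (sci_literacy, astrology_confusion, authoritarianism, astrology_scientific)
# are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical survey responses

# Binary outcome: respondent rates astrology as (very) scientific.
model = smf.logit(
    "astrology_scientific ~ sci_literacy + astrology_confusion + authoritarianism",
    data=df,
).fit()

# A negative coefficient on sci_literacy and positive coefficients on confusion
# and authoritarianism would be consistent with the three hypotheses.
print(model.summary())
```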

    An updated radiocarbon-based ice margin chronology for the last deglaciation of the North American Ice Sheet Complex

    The North American Ice Sheet Complex (NAISC; consisting of the Laurentide, Cordilleran and Innuitian ice sheets) was the largest ice mass to repeatedly grow and decay in the Northern Hemisphere during the Quaternary. Understanding its pattern of retreat following the Last Glacial Maximum is critical for studying many facets of the Late Quaternary, including ice sheet behaviour, the evolution of Holocene landscapes, sea level, atmospheric circulation, and the peopling of the Americas. Currently, the most up-to-date and authoritative margin chronology for the entire ice sheet complex is featured in two publications (Geological Survey of Canada Open File 1574 [Dyke et al., 2003]; ‘Quaternary Glaciations – Extent and Chronology, Part II’ [Dyke, 2004]). These often-cited datasets track ice margin recession in 36 time slices spanning 18 ka to 1 ka (all ages in uncalibrated radiocarbon years) using a combination of geomorphology, stratigraphy and radiocarbon dating. However, by virtue of being over 15 years old, the ice margin chronology requires updating to reflect new work and important revisions. This paper updates the aforementioned 36 ice margin maps to reflect new data from regional studies. We also update the original radiocarbon dataset from the 2003/2004 papers with 1541 new ages to reflect work up to and including 2018. A major revision is made to the 18 ka ice margin, where Banks and Eglinton islands (once considered to be glacial refugia) are now shown to be fully glaciated. Our updated 18 ka ice sheet increased in areal extent from 17.81 to 18.37 million km², an increase of 3.1% in the spatial coverage of the NAISC at that time. Elsewhere, we also summarize, region by region, significant changes to the deglaciation sequence. This paper integrates new information provided by regional experts and radiocarbon data into the deglaciation sequence while maintaining consistency with the original ice margin positions of Dyke et al. (2003) and Dyke (2004) where new information is lacking; this is a pragmatic solution to satisfy the needs of a Quaternary research community that requires up-to-date knowledge of the pattern of ice margin recession of what was once the world’s largest ice mass. The 36 updated isochrones are available in PDF and shapefile format, together with a spreadsheet of the expanded radiocarbon dataset (n = 5195 ages) and estimates of uncertainty for each interval.
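
    The quoted change in areal extent is simple arithmetic, and the released shapefiles can be inspected with standard GIS tooling. The sketch below checks the 3.1% figure and shows one hedged way to recompute the area of an isochrone polygon; the shapefile name is a placeholder, not the actual file name in the data release.

```python
# Quick check of the areal-extent change quoted above, plus a sketch of loading
# one of the isochrone shapefiles (the file name is a hypothetical placeholder).
area_old_Mkm2 = 17.81   # previous 18 ka extent, million km^2
area_new_Mkm2 = 18.37   # updated 18 ka extent, million km^2
print(f"increase: {100 * (area_new_Mkm2 - area_old_Mkm2) / area_old_Mkm2:.1f}%")  # ~3.1%

import geopandas as gpd  # optional: recompute polygon area from a shapefile
margin_18ka = gpd.read_file("ice_margin_18ka.shp")       # hypothetical file name
# Reproject to an equal-area CRS before measuring; 1e12 m^2 = 1 million km^2.
print(margin_18ka.to_crs(epsg=6931).area.sum() / 1e12)
```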

    Machine learning algorithms performed no better than regression models for prognostication in traumatic brain injury

    Objective: We aimed to explore the added value of common machine learning (ML) algorithms for prediction of outcome for moderate and severe traumatic brain injury. Study Design and Setting: We performed logistic regression (LR), lasso regression, and ridge regression with key baseline predictors in the IMPACT-II database (15 studies, n = 11,022). ML algorithms included support vector machines, random forests, gradient boosting machines, and artificial neural networks and were trained using the same predictors. To assess generalizability of predictions, we performed internal, internal-external, and external validation on the recent CENTER-TBI study (patients with Glasgow Coma Scale <13, n = 1,554). Both calibration (calibration slope/intercept) and discrimination (area under the curve) were quantified. Results: In the IMPACT-II database, 3,332/11,022 (30%) died and 5,233 (48%) had unfavorable outcome (Glasgow Outcome Scale less than 4). In the CENTER-TBI study, 348/1,554 (29%) died and 651 (54%) had unfavorable outcome. Discrimination and calibration varied widely between the studies and less so between the studied algorithms. The mean area under the curve was 0.82 for mortality and 0.77 for unfavorable outcomes in the CENTER-TBI study. Conclusion: ML algorithms may not outperform traditional regression approaches in a low-dimensional setting for outcome prediction after moderate or severe traumatic brain injury. Similar to regression-based prediction models, ML algorithms should be rigorously validated to ensure applicability to new populations.
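
    The comparison described above can be reproduced in outline with standard libraries: fit a regression model and an ML model on the same baseline predictors, then score discrimination (AUC) and calibration slope on held-out data. The sketch below uses a hypothetical data frame with IMPACT-style predictors; it is not the study's pipeline.

```python
# Minimal sketch: logistic regression vs. gradient boosting on the same
# predictors, evaluated by AUC and calibration slope. Data frame and column
# names are hypothetical placeholders for IMPACT-style baseline predictors.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("tbi_baseline.csv")          # hypothetical dataset
X = df[["age", "motor_score", "pupils"]]      # core IMPACT-style predictors
y = df["unfavorable_outcome"]                 # e.g. GOS < 4

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def calibration_slope(y_true, p):
    """Slope of a logistic fit of observed outcomes on the logit of predicted risk."""
    logit_p = np.log(p / (1 - p)).reshape(-1, 1)
    return LogisticRegression(C=1e6).fit(logit_p, y_true).coef_[0, 0]

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier())]:
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1].clip(1e-6, 1 - 1e-6)
    print(name, "AUC:", round(roc_auc_score(y_te, p), 3),
          "calibration slope:", round(calibration_slope(y_te, p), 2))
```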

    Determination of the Form Factors for the Decay B0 --> D*-l+nu_l and of the CKM Matrix Element |Vcb|

    We present a combined measurement of the Cabibbo-Kobayashi-Maskawa matrix element $|V_{cb}|$ and of the parameters $\rho^2$, $R_1$, and $R_2$, which fully characterize the form factors of the $B^0 \to D^{*-}\ell^{+}\nu_\ell$ decay in the framework of HQET, based on a sample of about 52,800 $B^0 \to D^{*-}\ell^{+}\nu_\ell$ decays recorded by the BABAR detector. The kinematical information of the fully reconstructed decay is used to extract the following values for the parameters (where the first errors are statistical and the second systematic): $\rho^2 = 1.156 \pm 0.094 \pm 0.028$, $R_1 = 1.329 \pm 0.131 \pm 0.044$, $R_2 = 0.859 \pm 0.077 \pm 0.022$, $\mathcal{F}(1)|V_{cb}| = (35.03 \pm 0.39 \pm 1.15) \times 10^{-3}$. By combining these measurements with the previous BABAR measurement of the form factors, which employs a different technique on a partial sample of the data, we improve the statistical accuracy of the measurement, obtaining: $\rho^2 = 1.179 \pm 0.048 \pm 0.028$, $R_1 = 1.417 \pm 0.061 \pm 0.044$, $R_2 = 0.836 \pm 0.037 \pm 0.022$, and $\mathcal{F}(1)|V_{cb}| = (34.68 \pm 0.32 \pm 1.15) \times 10^{-3}$. Using the lattice calculations for the axial form factor $\mathcal{F}(1)$, we extract $|V_{cb}| = (37.74 \pm 0.35 \pm 1.25\,^{+1.23}_{-1.44}) \times 10^{-3}$, where the third error is due to the uncertainty in $\mathcal{F}(1)$.
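
    The final step quoted above is a simple division of the measured $\mathcal{F}(1)|V_{cb}|$ by a lattice value of the axial form factor. The check below uses an illustrative lattice value of $\mathcal{F}(1) \approx 0.919$, chosen only because it reproduces the quoted central value; it is not necessarily the exact input used in the paper.

```python
# Back-of-the-envelope check of the |Vcb| extraction: divide the measured
# F(1)|Vcb| by an assumed lattice value of the axial form factor F(1).
F1_Vcb = 34.68e-3          # combined measurement of F(1)|Vcb| from the abstract
F1_lattice = 0.919         # illustrative lattice value of F(1) (assumption)
Vcb = F1_Vcb / F1_lattice
print(f"|Vcb| ~ {Vcb * 1e3:.2f} x 10^-3")  # ~37.7 x 10^-3, matching the quoted value
```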

    Study of the Exclusive Initial-State Radiation Production of the $D \bar D$ System

    A study of exclusive production of the $D \bar D$ system through initial-state radiation is performed in a search for charmonium states, where $D = D^0$ or $D^+$. The $D^0$ mesons are reconstructed in the $D^0 \to K^- \pi^+$, $D^0 \to K^- \pi^+ \pi^0$, and $D^0 \to K^- \pi^+ \pi^+ \pi^-$ decay modes. The $D^+$ is reconstructed through the $D^+ \to K^- \pi^+ \pi^+$ decay mode. The analysis makes use of an integrated luminosity of 288.5 fb$^{-1}$ collected by the BaBar experiment. The $D \bar D$ mass spectrum shows a clear $\psi(3770)$ signal. Further structures appear in the 3.9 and 4.1 GeV/$c^2$ regions. No evidence is found for $Y(4260)$ decays to $D \bar D$, implying an upper limit $\mathcal{B}(Y(4260) \to D \bar D)/\mathcal{B}(Y(4260) \to J/\psi\, \pi^+ \pi^-) < 7.6$ (95% confidence level).
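
    The reconstruction of the $D^0 \to K^- \pi^+$ mode amounts to combining the kaon and pion four-momenta and keeping candidates whose invariant mass lies near the nominal $D^0$ mass. The sketch below illustrates that computation with made-up track momenta; it is not BaBar analysis code.

```python
# Minimal illustration of invariant-mass reconstruction for D0 -> K- pi+:
# build four-momenta from track momenta and mass hypotheses, then compute the
# pair mass. The track three-momenta below are made-up examples.
import numpy as np

M_K, M_PI, M_D0 = 0.493677, 0.139570, 1.86484  # GeV/c^2 (PDG values)

def four_momentum(p3, mass):
    p3 = np.asarray(p3, dtype=float)
    return np.append(np.sqrt(mass**2 + p3 @ p3), p3)  # (E, px, py, pz)

def inv_mass(*p4s):
    e, px, py, pz = np.sum(p4s, axis=0)
    return np.sqrt(e**2 - px**2 - py**2 - pz**2)

kaon = four_momentum([1.0, 0.0, 0.0], M_K)    # hypothetical K- track
pion = four_momentum([0.0, 1.44, 0.0], M_PI)  # hypothetical pi+ track
m = inv_mass(kaon, pion)
print(f"candidate mass {m:.3f} GeV/c^2, keep: {abs(m - M_D0) < 0.015}")
```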

    Measurements of differential production cross sections for a Z boson in association with jets in pp collisions at $\sqrt{s} = 8$ TeV

    Peer reviewed

    Variation in Structure and Process of Care in Traumatic Brain Injury: Provider Profiles of European Neurotrauma Centers Participating in the CENTER-TBI Study.

    INTRODUCTION: The strength of evidence underpinning care and treatment recommendations in traumatic brain injury (TBI) is low. Comparative effectiveness research (CER) has been proposed as a framework to provide evidence for optimal care for TBI patients. The first step in CER is to map the existing variation. The aim of the current study is to quantify variation in general structural and process characteristics among centers participating in the Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) study. METHODS: We designed a set of 11 provider profiling questionnaires with 321 questions about various aspects of TBI care, chosen based on literature and expert opinion. After pilot testing, questionnaires were disseminated to 71 centers from 20 countries participating in the CENTER-TBI study. Reliability of the questionnaires was estimated by calculating a concordance rate among 5% duplicate questions. RESULTS: All 71 centers completed the questionnaires. The median concordance rate among duplicate questions was 0.85. The majority of centers were academic hospitals (n = 65, 92%), designated as a level I trauma center (n = 48, 68%), and situated in an urban location (n = 70, 99%). The availability of facilities for neurotrauma care varied across centers; e.g., 40 (57%) had a dedicated neuro-intensive care unit (ICU), 36 (51%) had an in-hospital rehabilitation unit, and the ICU was organized in a closed format in 45 (64%) of the centers. In addition, we found wide variation in processes of care, such as ICU admission policy and intracranial pressure monitoring policy, among centers. CONCLUSION: Even among high-volume, specialized neurotrauma centers there is substantial variation in structures and processes of TBI care. This variation provides an opportunity to study the effectiveness of specific aspects of TBI care and to identify best practices with CER approaches.
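
    The reliability check described above reduces to the fraction of duplicated questions to which a center gave the same answer both times, with the median taken across centers. A minimal sketch with hypothetical answer pairs:

```python
# Minimal sketch of the concordance-rate calculation: per center, the fraction
# of duplicated questions answered identically both times. Answer pairs are
# hypothetical examples, not CENTER-TBI data.
from statistics import median

def concordance_rate(pairs):
    """pairs: list of (first_answer, second_answer) for one center's duplicate questions."""
    return sum(a == b for a, b in pairs) / len(pairs)

centers = {  # hypothetical responses to the 5% duplicated questions
    "center_A": [("yes", "yes"), ("24/7", "24/7"), ("no", "yes")],
    "center_B": [("yes", "yes"), ("no", "no"), ("closed ICU", "closed ICU")],
}
rates = [concordance_rate(p) for p in centers.values()]
print("median concordance:", median(rates))
```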