
    The wisdom of collective grading and the effects of epistemic and semantic diversity

    A computer simulation is used to study collective judgements that an expert panel reaches on the basis of qualitative probability judgements contributed by individual members. The simulated panel displays a strong and robust crowd wisdom effect. The panel's performance is better when members contribute precise probability estimates instead of qualitative judgements, but not by much. Surprisingly, it doesn't always hurt for panel members to interpret the probability expressions differently. Indeed, coordinating their understandings can be much worse
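The aggregation mechanism described above can be illustrated with a toy simulation (this is a minimal sketch, not the paper's actual model; the panel size, noise level, and simple-averaging rule are all assumptions for illustration):

```python
import random
import statistics

def simulate_panel(n_members=20, n_items=200, noise=0.25, seed=0):
    """Toy crowd-wisdom simulation: each member reports a noisy
    probability estimate for each item; the panel averages them.
    Returns (mean individual error, mean panel error)."""
    rng = random.Random(seed)
    individual_errors, panel_errors = [], []
    for _ in range(n_items):
        truth = rng.random()  # true probability for this item
        estimates = [min(1.0, max(0.0, truth + rng.gauss(0, noise)))
                     for _ in range(n_members)]
        individual_errors.append(statistics.mean(abs(e - truth) for e in estimates))
        panel_errors.append(abs(statistics.mean(estimates) - truth))
    return statistics.mean(individual_errors), statistics.mean(panel_errors)

ind_err, panel_err = simulate_panel()
print(f"mean individual error: {ind_err:.3f}")
print(f"panel (average) error: {panel_err:.3f}")  # smaller: errors cancel
```

Averaging cancels independent errors, which is the basic crowd-wisdom effect the simulated panel exhibits.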

    Robotics and automation in the city: a research agenda

    Globally, cities are becoming experimental sites for new forms of robotic and automation technologies applied across a wide variety of sectors in multiple areas of economic and social life. As these innovations leave the laboratory and factory, this paper analyses how robotics and automation systems are being layered upon existing urban digital networks, extending the capabilities and capacities of human agency and infrastructure networks, and reshaping the city and citizens' everyday experiences. To date, most work in this field has been speculative and isolated in nature. We set out a research agenda that goes beyond analysis of discrete applications and effects to investigate how robotics and automation connect across urban domains, and the implications for: differential urban geographies; the selective enhancement of individuals and the collective management of infrastructures; the socio-spatial sorting of cities; and the potential for responsible urban innovation

    Safety, immunogenicity, and reactogenicity of BNT162b2 and mRNA-1273 COVID-19 vaccines given as fourth-dose boosters following two doses of ChAdOx1 nCoV-19 or BNT162b2 and a third dose of BNT162b2 (COV-BOOST): a multicentre, blinded, phase 2, randomised trial

    Background: Some high-income countries have deployed fourth doses of COVID-19 vaccines, but the clinical need, effectiveness, timing, and dose of a fourth dose remain uncertain. We aimed to investigate the safety, reactogenicity, and immunogenicity of fourth-dose boosters against COVID-19.

    Methods: The COV-BOOST trial is a multicentre, blinded, phase 2, randomised controlled trial of seven COVID-19 vaccines given as third-dose boosters at 18 sites in the UK. This sub-study enrolled participants who had received BNT162b2 (Pfizer-BioNTech) as their third dose in COV-BOOST and randomly assigned them (1:1) to receive a fourth dose of either BNT162b2 (30 µg in 0·30 mL; full dose) or mRNA-1273 (Moderna; 50 µg in 0·25 mL; half dose) via intramuscular injection into the upper arm. The computer-generated randomisation list was created by the study statisticians with random block sizes of two or four. Participants and all study staff not delivering the vaccines were masked to treatment allocation. The coprimary outcomes were safety and reactogenicity, and immunogenicity (anti-spike protein IgG titres by ELISA and cellular immune response by ELISpot). We compared immunogenicity at 28 days after the third dose versus 14 days after the fourth dose, and at day 0 versus day 14 relative to the fourth dose. Safety and reactogenicity were assessed in the per-protocol population, which comprised all participants who received a fourth-dose booster regardless of their SARS-CoV-2 serostatus. Immunogenicity was primarily analysed in a modified intention-to-treat population comprising seronegative participants who had received a fourth-dose booster and had available endpoint data. This trial is registered with ISRCTN, 73765130, and is ongoing.

    Findings: Between Jan 11 and Jan 25, 2022, 166 participants were screened, randomly assigned, and received either full-dose BNT162b2 (n=83) or half-dose mRNA-1273 (n=83) as a fourth dose. The median age of these participants was 70·1 years (IQR 51·6–77·5); 86 (52%) of 166 participants were female and 80 (48%) were male. The median interval between the third and fourth doses was 208·5 days (IQR 203·3–214·8). Pain was the most common local solicited adverse event and fatigue was the most common systemic solicited adverse event after BNT162b2 or mRNA-1273 booster doses. None of the three serious adverse events reported after a fourth dose of BNT162b2 was related to the study vaccine. In the BNT162b2 group, geometric mean anti-spike protein IgG concentration at day 28 after the third dose was 23 325 ELISA laboratory units (ELU)/mL (95% CI 20 030–27 162), which increased to 37 460 ELU/mL (31 996–43 857) at day 14 after the fourth dose, representing a significant fold change (geometric mean 1·59, 95% CI 1·41–1·78). There was a significant increase in geometric mean anti-spike protein IgG concentration from 28 days after the third dose (25 317 ELU/mL, 95% CI 20 996–30 528) to 14 days after a fourth dose of mRNA-1273 (54 936 ELU/mL, 46 826–64 452), with a geometric mean fold change of 2·19 (1·90–2·52). The fold changes in anti-spike protein IgG titres from before (day 0) to after (day 14) the fourth dose were 12·19 (95% CI 10·37–14·32) and 15·90 (12·92–19·58) in the BNT162b2 and mRNA-1273 groups, respectively. T-cell responses were also boosted after the fourth dose (eg, the fold changes for the wild-type variant from before to after the fourth dose were 7·32 [95% CI 3·24–16·54] in the BNT162b2 group and 6·22 [3·90–9·92] in the mRNA-1273 group).

    Interpretation: Fourth-dose COVID-19 mRNA booster vaccines are well tolerated and boost cellular and humoral immunity. Peak responses after the fourth dose were similar to, and possibly better than, peak responses after the third dose
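For readers unfamiliar with the statistic, the geometric mean fold change reported above can be illustrated with a small sketch (the titre values below are hypothetical, not trial data):

```python
import math

def geometric_mean(xs):
    """Geometric mean: exp of the arithmetic mean of the logs."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical paired anti-spike IgG titres (ELU/mL) before and after a booster.
pre  = [9000.0, 15000.0, 22000.0, 30000.0]
post = [20000.0, 33000.0, 40000.0, 72000.0]

# For paired samples, the geometric mean of the individual fold changes
# equals the ratio of the two geometric means, because logs turn ratios
# into differences.
fold_changes = [b / a for a, b in zip(pre, post)]
gmfc = geometric_mean(fold_changes)
ratio_of_gms = geometric_mean(post) / geometric_mean(pre)
print(f"geometric mean fold change: {gmfc:.3f}")
print(f"ratio of geometric means:   {ratio_of_gms:.3f}")
```

This is why fold changes in titres are conventionally summarised on the log scale.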

    Increasing frailty is associated with higher prevalence and reduced recognition of delirium in older hospitalised inpatients: results of a multi-centre study

    Purpose: Delirium is a neuropsychiatric disorder delineated by an acute change in cognition, attention, and consciousness. It is common, particularly in older adults, but poorly recognised. Frailty is the accumulation of deficits conferring an increased risk of adverse outcomes. We set out to determine how severity of frailty, as measured using the Clinical Frailty Scale (CFS), affected delirium rates and recognition in hospitalised older people in the United Kingdom. Methods: Adults over 65 years were included in an observational multi-centre audit across UK hospitals, comprising two prospective rounds and one retrospective note review. CFS score, delirium status, and 30-day outcomes were recorded. Results: The overall prevalence of delirium was 16.3% (483 patients). Patients with delirium were more frail than patients without delirium (median CFS 6 vs 4). The risk of delirium was greater with increasing frailty [OR 2.9 (1.8–4.6) in CFS 4 vs 1–3; OR 12.4 (6.2–24.5) in CFS 8 vs 1–3]. Higher CFS was associated with reduced recognition of delirium (OR 0.7 (0.3–1.9) in CFS 4 compared to 0.2 (0.1–0.7) in CFS 8). These risks were both independent of age and dementia. Conclusion: We have demonstrated an incremental increase in the risk of delirium with increasing frailty. This has important clinical implications, suggesting that frailty may provide a more nuanced measure of vulnerability to delirium and poor outcomes. However, the most frail patients are least likely to have their delirium diagnosed, and there is a significant lack of research into the underlying pathophysiology of both of these common geriatric syndromes
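The odds ratios (ORs) quoted above compare the odds of delirium between frailty strata. As a reminder of the arithmetic, a minimal sketch with hypothetical counts (not the study's data):

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Hypothetical 2x2 table: delirium vs no delirium in a frail group
# (40 of 100 delirious) and a non-frail group (10 of 100 delirious).
or_frail = odds_ratio(40, 60, 10, 90)
print(f"odds of delirium, frail group:     {40/60:.3f}")
print(f"odds of delirium, non-frail group: {10/90:.3f}")
print(f"odds ratio: {or_frail:.1f}")
```

Note that the study's reported ORs are adjusted estimates from a model, not raw 2x2-table ratios like this one.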

    Why are Normal Distributions Normal?

    Abstract: We seem to be surrounded by bell curves, curves more formally known as normal distributions, or Gaussian distributions. All manner of things appear to be distributed normally: people's heights, sizes of snowflakes, errors in measurements, lifetimes of lightbulbs, IQ scores, weights of loaves of bread, and so on. I argue that the standard explanation for why such quantities are normally distributed, which one sees throughout the sciences, is often false. The standard explanation invokes the Central Limit Theorem, and I argue that in many cases the conditions of the theorem are not satisfied, not even approximately. I then suggest an alternative explanatory schema for why a given quantity is normally distributed
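The standard Central Limit Theorem explanation discussed above can be demonstrated in miniature: when a quantity really is a sum of many independent contributions, its distribution does look normal. A sketch (the choice of Uniform(0,1) terms and the sample sizes are arbitrary assumptions for illustration):

```python
import random
import statistics

def sum_of_uniforms(n_terms, n_samples, seed=0):
    """Sample sums of n_terms i.i.d. Uniform(0,1) draws."""
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(n_terms)) for _ in range(n_samples)]

samples = sum_of_uniforms(n_terms=30, n_samples=20000)
m, s = statistics.mean(samples), statistics.stdev(samples)

# The CLT predicts mean n/2 = 15 and sd sqrt(n/12) ≈ 1.58 for the sum,
# and, if the sum is approximately normal, roughly 68% of samples
# should fall within one sd of the mean.
within_1sd = sum(abs(x - m) <= s for x in samples) / len(samples)
print(f"mean ≈ {m:.2f}, sd ≈ {s:.2f}, fraction within 1 sd ≈ {within_1sd:.3f}")
```

The paper's point is that this explanation requires the quantity to actually be such a sum of many comparable independent terms, a condition that often fails.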

    Three concepts of probability

    Early probability theorists often spoke of probability in a way that was ambiguous between two different concepts of probability: a subjective concept and an objective concept. Subsequent theorists distinguished these two concepts from one another, defining the subjective concept as the “uncertainty of an individual”, and the objective concept as a kind of “uncertainty in the world”. While these two concepts were distinguished from one another, some theorists believed that there was no such thing as “uncertainty in the world”, and that the only type of probability is subjective probability. The advent of quantum mechanics changed this orthodoxy. Here, for the first time—or so it has been argued—a scientific theory described the world as irreducibly probabilistic, using an objective concept of probability. While there have been authors who have levelled serious objections to this view, it has nonetheless been fairly popular. Scientific theories, however, were probabilistic long before quantum mechanics was developed. Perhaps the two most prominent cases in point were the fields of classical statistical mechanics and evolutionary theory. These two fields made use of probability theory in a way that looked objective, but often with the assumptions that the world is not irreducibly probabilistic, and that for there to be genuine “uncertainty in the world”, the world has to be irreducibly probabilistic. This caused many authors to wonder just what the probabilities in these fields could be representing. Proposed solutions to this puzzle have often been in the form of shoe-horning the probabilities of these fields into either the “uncertainty in the world” concept or the “uncertainty of an individual” concept—both with unsatisfactory consequences. In this dissertation, I investigate how we should understand the probabilities of classical statistical mechanics and evolutionary theory. 
To do this, I engage with arguments in the contemporary literature, and conclude that the probabilities of these two fields should be understood as neither the subjective concept of probability, nor the objective concept—standardly conceived. I argue that in order to develop an adequate account of these probabilities, we need to distinguish a third concept of objective probability that has nothing to do with “uncertainty”, whether it be in the world or of an individual. I then give an analysis of this third concept of probability

    From Kolmogorov, to Popper, to Renyi: There's No Escaping Humphreys' Paradox (When Generalized)

    Humphreys' Paradox (Humphreys 1985) can be solved if propensity theorists (i) adopt Rényi's 1955 probability axiom system as the correct axiom system for propensities and (ii) maintain that there are no backwards propensities in the world. A similar move can be used to solve Milne's Problem (Milne 1985), another common objection to the propensity interpretation. That's the good news. However, Humphreys' Paradox and Milne's Problem are just two special cases of a much more general problem, and this problem causes trouble even for propensity theorists who accept (i) and (ii). That's the bad news
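For context, Rényi's system takes conditional probability as primitive rather than defining it via the ratio formula. A sketch of the core axioms for a conditional probability space $(\Omega, \mathcal{A}, \mathcal{B}, P)$, with $\mathcal{B} \subseteq \mathcal{A}$ a nonempty family of admissible conditions (stated in a common modern form, not as a verbatim quotation of Rényi 1955):

```latex
\begin{align*}
&\text{(R1)}\quad P(B \mid B) = 1 \ \text{ for all } B \in \mathcal{B};\\
&\text{(R2)}\quad A \mapsto P(A \mid B) \ \text{ is a countably additive measure on } \mathcal{A}
  \ \text{ for each fixed } B \in \mathcal{B};\\
&\text{(R3)}\quad P(A \cap B \mid C) = P(A \mid B \cap C)\, P(B \mid C)
  \ \text{ whenever } C,\ B \cap C \in \mathcal{B}.
\end{align*}
```

Because every probability is conditional on an admissible condition, inverse (backwards) conditional probabilities need not be defined, which is what move (ii) exploits.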

    Deterministic probability: neither chance nor credence

    Some have argued that chance and determinism are compatible in order to account for the objectivity of probabilities in theories that are compatible with determinism, like Classical Statistical Mechanics (CSM) and Evolutionary Theory (ET). Contrarily, some have argued that chance and determinism are incompatible, and so such probabilities are subjective. In this paper, I argue that both of these positions are unsatisfactory. I argue that the probabilities of theories like CSM and ET are not chances, but also that they are not subjective probabilities either. Rather, they are a third type of probability, which I call counterfactual probability. The main distinguishing feature of counterfactual probability is the role it plays in conveying important counterfactual information in explanations. This distinguishes counterfactual probability from chance as a second concept of objective probability

    Vague Credence

    It is natural to think of precise probabilities as being special cases of imprecise probabilities, the special case being when one’s lower and upper probabilities are equal. I argue, however, that it is better to think of the two models as representing two different aspects of our credences, which are often (if not always) vague to some degree. I show that by combining the two models into one model, and understanding that model as a model of vague credence, a natural interpretation arises that suggests a hypothesis concerning how we can improve the accuracy of aggregate credences. I present empirical results in support of this hypothesis. I also discuss how this modeling interpretation of imprecise probabilities bears upon a philosophical objection that has been raised against them, the so-called inductive learning problem