What lies behind the data? How sampling assumptions shape and are shaped by inductive inference
The problems of everyday cognition, from perception to social interaction and higher-level reasoning, require us to predict future events and outcomes on the basis of past experience. But often (if not always) solutions to the problems we face are under-determined by our experience. So we reason inductively, drawing uncertain conclusions from incomplete information. Yet, despite our lack of first-hand data, our reasoning is efficient and effective nonetheless. So how do we close the gap between the paucity of experience and the effectiveness of reason? One way is by exploiting statistical regularities that we have observed in the world, assuming (contra philosophers' counsel) that these regularities will continue to hold. In so doing, we leverage the evidentiary value of the data that we do have. This thesis examines our assumptions about what lies beneath the data and how we leverage them to reason beyond it. In particular, it focuses on our mental models of the world: generative models that connect observations to hypotheses through their consequences. I consider the assumptions we make in solving three separate reasoning problems of increasing complexity. Firstly, in a series of related experiments I explore the effect of sampling assumptions in a categorisation task based on low-dimensional perceptual stimuli. Together, these experiments examine how reasoners weigh the value of extra data when deciding how far to generalise, and the extent to which the computations involved are influenced by their representational and sampling assumptions. In addition, I use the same experimental framework to investigate a related question: if people's sampling assumptions do alter the weighing of evidence, at what stage do these effects manifest: during learning, or only at the point of generalisation? Secondly, I examine the role of sampling assumptions in the shift from percept to concept.
A key challenge for the reasoner when reasoning from high-dimensional categorical stimuli is in deciding which of the many dimensions or features represent the appropriate basis for induction. I investigate how the perceived relevance of particular features in the data is affected by people's assumptions about the representativeness of the sampling process. In almost every sphere of human activity, we reason from data generated by others and we generate data from which others will reason. Equipped with a theory of mind, both senders and receivers of data may exploit recursive "I think, you think, I think..." reasoning to increase the evidentiary weight of data, and improve the utility of communication as a result. But when data is highly leveraged in this way, there is a downside risk: if reciprocal assumptions are not well calibrated, the reasoner may leap to the wrong conclusion. In the final study, I investigate the phenomenon of recursive meta-inference in a setting where deception is warranted but lying is not an option, a setting which offers particular advantages. Firstly, when perpetrating or avoiding a deception, some degree of meta-inferential assumption becomes a vital prerequisite. Secondly, placing the goals of communicating parties at odds offers the potential to more easily distinguish whether people engage in genuine reflection about the assumptions of another or merely respond to constraints implicit in the sampling process. The studies described in this thesis deal with progressively more complex challenges that we face as reasoners: how far should we generalise when the basis of induction is clear, how do we determine the relevant basis for induction in the first place, and how do we calibrate our own inductive inference with that of another.
Through a combination of computational modelling and human behavioural experiments I demonstrate how our sampling assumptions influence the way we meet these challenges, and how our solutions to each challenge may be inter-related.
Thesis (Ph.D.) -- University of Adelaide, School of Psychology, 201
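The contrast between sampling assumptions that the thesis turns on can be sketched with the classic "size principle". The interval hypotheses, observations, and function names below are purely illustrative assumptions, not taken from the thesis itself: under weak sampling an example is merely observed to be consistent with a hypothesis, while under strong sampling it is drawn from the hypothesis itself, so smaller hypotheses earn more support per example and generalisation tightens as data accumulate.

```python
# Illustrative sketch of weak vs strong sampling (all numbers hypothetical).

def weak_likelihood(data, interval):
    """P(data | h) when examples are only checked for consistency."""
    lo, hi = interval
    return 1.0 if all(lo <= x <= hi for x in data) else 0.0

def strong_likelihood(data, interval):
    """P(data | h) when examples are sampled uniformly from within h."""
    lo, hi = interval
    if any(x < lo or x > hi for x in data):
        return 0.0
    # Each example contributes a factor 1/|h|: the size principle.
    return (1.0 / (hi - lo)) ** len(data)

observations = [0.45, 0.50, 0.55]   # three tightly clustered category members
narrow, broad = (0.4, 0.6), (0.0, 1.0)
```

With these toy numbers, weak sampling scores both intervals equally, while strong sampling favours the narrow interval by a factor of (1/0.2)^3 = 125, so a strong-sampling reasoner generalises less far beyond the observed examples.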
Gamma-ray and radio properties of six pulsars detected by the Fermi Large Area Telescope
We report the detection of pulsed γ-rays for PSRs J0631+1036, J0659+1414, J0742-2822, J1420-6048, J1509-5850, and J1718-3825 using the Large Area Telescope on board the Fermi Gamma-ray Space Telescope (formerly known as GLAST). Although these six pulsars are diverse in terms of their spin parameters, they share an important feature: their γ-ray light curves are (at least given the current count statistics) single peaked. For two pulsars, there are hints of a double-peaked structure in the light curves. The shapes of the observed light curves of this group of pulsars are discussed in the light of models for which the emission originates from high up in the magnetosphere. The observed phases of the γ-ray light curves are, in general, consistent with those predicted by high-altitude models, although we speculate that the γ-ray emission of PSR J0659+1414, possibly featuring the softest spectrum of all Fermi pulsars coupled with a very low efficiency, arises from relatively low down in the magnetosphere. High-quality radio polarization data are available showing that all but one have a high degree of linear polarization. This allows us to place some constraints on the viewing geometry and aids the comparison of the γ-ray light curves with high-energy beam models.
3 years of liraglutide versus placebo for type 2 diabetes risk reduction and weight management in individuals with prediabetes: a randomised, double-blind trial
Background:
Liraglutide 3·0 mg was shown to reduce bodyweight and improve glucose metabolism after the 56-week period of this trial, one of four trials in the SCALE programme. In the 3-year assessment of the SCALE Obesity and Prediabetes trial we aimed to evaluate the proportion of individuals with prediabetes who were diagnosed with type 2 diabetes.
Methods:
In this randomised, double-blind, placebo-controlled trial, adults with prediabetes and a body-mass index of at least 30 kg/m2, or at least 27 kg/m2 with comorbidities, were randomised 2:1, using a telephone or web-based system, to once-daily subcutaneous liraglutide 3·0 mg or matched placebo, as an adjunct to a reduced-calorie diet and increased physical activity. Time to diabetes onset by 160 weeks was the primary outcome, evaluated in all randomised treated individuals with at least one post-baseline assessment. The trial was conducted at 191 clinical research sites in 27 countries and is registered with ClinicalTrials.gov, number NCT01272219.
Findings:
The study ran between June 1, 2011, and March 2, 2015. We randomly assigned 2254 patients to receive liraglutide (n=1505) or placebo (n=749). 1128 (50%) participants completed the study up to week 160, after withdrawal of 714 (47%) participants in the liraglutide group and 412 (55%) participants in the placebo group. By week 160, 26 (2%) of 1472 individuals in the liraglutide group versus 46 (6%) of 738 in the placebo group were diagnosed with diabetes while on treatment. The mean time from randomisation to diagnosis was 99 (SD 47) weeks for the 26 individuals in the liraglutide group versus 87 (47) weeks for the 46 individuals in the placebo group. Taking the different diagnosis frequencies between the treatment groups into account, the time to onset of diabetes over 160 weeks among all randomised individuals was 2·7 times longer with liraglutide than with placebo (95% CI 1·9 to 3·9, p<0·0001), corresponding with a hazard ratio of 0·21 (95% CI 0·13 to 0·34). Liraglutide induced greater weight loss than placebo at week 160 (−6·1 [SD 7·3] vs −1·9% [6·3]; estimated treatment difference −4·3%, 95% CI −4·9 to −3·7, p<0·0001). Serious adverse events were reported by 227 (15%) of 1501 randomised treated individuals in the liraglutide group versus 96 (13%) of 747 individuals in the placebo group.
Interpretation:
In this trial, we provide results for 3 years of treatment, with the limitation that withdrawn individuals were not followed up after discontinuation. Liraglutide 3·0 mg might provide health benefits in terms of reduced risk of diabetes in individuals with obesity and prediabetes.
Funding:
Novo Nordisk, Denmark
A Population of Gamma-Ray Millisecond Pulsars Seen with the Fermi Large Area Telescope
Gamma-Ray Pulsar Bonanza
Most of the pulsars we know about were detected through their radio emission; a few are known to pulse gamma rays but were first detected at other wavelengths (see the Perspective by Halpern). Using the Fermi Gamma-Ray Space Telescope, Abdo et al. (p. 840, published online 2 July; see the cover) report the detection of 16 previously unknown pulsars based on their gamma-ray emission alone. Thirteen of these coincide with previously unidentified gamma-ray sources, solving the 30-year-old mystery of their identities. Pulsars are fast-rotating neutron stars. With time they slow down and cease to radiate; however, if they are in a binary system, they can have their spin rates increased by mass transfer from their companion stars, starting a new life as millisecond pulsars. In another study, Abdo et al. (p. 845) report the detection of gamma-ray emission from the globular cluster 47 Tucanae, which is coming from an ensemble of millisecond pulsars in the cluster's core. The data imply that there are up to 60 millisecond pulsars in 47 Tucanae, twice as many as predicted by radio observations. In a further companion study, Abdo et al. (p. 848, published online 2 July) searched Fermi Large Area Telescope data for pulsations from all known millisecond pulsars outside of stellar clusters, finding gamma-ray pulsations for eight of them. Their properties resemble those of other gamma-ray pulsars, suggesting that they share the same basic emission mechanism. Indeed, both sets of pulsars favor emission models in which the gamma rays are produced in the outer magnetosphere of the neutron star.
Observation of gravitational waves from the coalescence of a 2.5–4.5 M☉ compact object and a neutron star
Practicing deception does not make you better at handling it
In social contexts, learners need to infer the knowledge and intentions of the information provider and vice versa. In this study, we tested how well participants could infer the intentions of different information providers in the rectangle game, where a fictional information provider revealed clues about the structure of a rectangle that the learner (a participant) needed to guess. Participants received clues from either a helpful information provider, a provider who was randomly sampling clues, or one of two kinds of unhelpful providers (who could mislead but could not lie). We found that people learned efficiently and in line with the predictions of a Bayesian pedagogical model when the provider was helpful. However, although participants could identify that unhelpful providers were not being helpful, they struggled to learn the strategy those providers were using, even when they had the opportunity to practise being a deceptive information provider.
Social meta-inference and the evidentiary value of consensus
Reasoning beyond available data is a ubiquitous feature of human cognition. But while the availability of first-hand data typically diminishes as the concepts we reason about become more complex, our ability to draw inferences seems not to. We may offset the sparsity of direct evidence by observing the statements of others, but such social meta-inference comes with challenges of its own. The strength of socially provided evidence depends on multiple factors which themselves must be inferred, like the knowledge, social goals, and independence of the people providing the data. Here, we present the results of an experiment aimed at examining how people draw conclusions from information provided by others in the context of social media posts. By systematically varying the degree of consensus along with the number of people and distinct arguments involved, we are able to assess how much each factor affects the conclusions reasoners draw. Across a range of topics we find that while people are influenced by the number of people on each side of an argument, the number of posts is the dominant factor driving belief revision. In contrast to well-established findings in simpler domains, we find that people are largely insensitive to the diversity of the arguments made.
Inferring the truth from deception: What can people learn from helpful and unhelpful information providers?
Sampling assumptions, the assumptions people make about how an example of a category or concept has been chosen, help us learn from examples efficiently. One context where sampling assumptions are particularly important is the social context, where a learner needs to infer the knowledge and intentions of the information provider and vice versa. The pedagogical sampling assumptions model describes a Bayesian account of how learners and providers should behave given the different assumptions they hold about each other (e.g., is the provider trying to deceive or help me? Does the learner trust me?). In this study, we tested how well this model could describe learning behaviour in the rectangle game, where a fictional information provider revealed clues about the structure of a rectangle that the learner (a participant) needed to guess. Participants received clues from either a helpful information provider, a provider who was randomly sampling clues, or one of two kinds of unhelpful providers (who could mislead but could not lie). We found that people learned efficiently and in line with model predictions when the provider was helpful, and that this was the case even when no cover story was provided. However, although participants could identify that unhelpful providers were not being helpful, they struggled to learn the strategy those providers were using.
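The recursive logic that pedagogical accounts of this kind formalise can be sketched as follows. The toy hypothesis space, consistency likelihood, uniform prior, and fixed-point iteration below are a generic illustration of pedagogical sampling, not the specific model, stimuli, or parameters used in the study.

```python
# Generic sketch of teacher/learner recursion in pedagogical sampling:
# a helpful provider chooses clues in proportion to the learner's posterior,
# while the learner's posterior assumes the provider chose helpfully.
import numpy as np

# likelihood[d, h] = 1 if clue d is consistent with hypothesis h
likelihood = np.array([[1.0, 1.0, 0.0],
                       [1.0, 0.0, 1.0],
                       [0.0, 1.0, 1.0]])
prior = np.full(3, 1.0 / 3.0)  # uniform prior over three toy hypotheses

def pedagogical_fixed_point(likelihood, prior, iterations=50):
    """Iterate teacher and learner updates until mutually consistent."""
    # Start from a teacher who samples any consistent clue at random
    p_d_given_h = likelihood / likelihood.sum(axis=0, keepdims=True)
    for _ in range(iterations):
        # Learner: P(h | d) is proportional to P(d | h) * P(h)
        joint = p_d_given_h * prior
        p_h_given_d = joint / joint.sum(axis=1, keepdims=True)
        # Teacher: pick clues in proportion to the learner's posterior
        p_d_given_h = p_h_given_d / p_h_given_d.sum(axis=0, keepdims=True)
    return p_d_given_h, p_h_given_d
```

At the fixed point the teacher's clue choices and the learner's posterior are consistent with one another, which is the sense in which a helpful provider and a trusting learner can coordinate on efficient communication.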
Source independence affects argument persuasiveness when the relevance is clear
Making inferences about claims we do not have direct experience with is a common feature of everyday life. In these situations, it makes sense to consult others: an apparent consensus may be a useful cue to the truth of a claim. This strategy is not without its challenges. The utility of a consensus should depend in part on the sources of evidence that underlie it. If each person based their conclusion on independent data, then the fact that they agree is informative. If, instead, everyone relied on the same primary source, the consensus is less meaningful. However, the extent to which people are actually sensitive to this kind of source independence is still unclear. Here, we present the results of three experiments that examine this issue in a social media setting, by varying the sources of primary data cited via retweets. In each experiment, participants rated their agreement with 12 different claims before and after reading four tweets that were retweeted on the basis of either the same or different primary data. We found that people were sensitive to source independence only when it was clear that the tweeters had relied on the primary data to reach their conclusion. Implications for existing research are discussed.
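Why independence should matter can be illustrated with a toy odds-form Bayes calculation. The prior and likelihood ratio below are hypothetical numbers chosen for illustration, not estimates from the experiments.

```python
# Toy illustration: four posts citing four independent primary sources
# multiply their evidence, whereas four posts that all cite the same
# source contribute only one piece of evidence.

def posterior(prior, likelihood_ratio, n_independent_sources):
    """Odds form of Bayes' rule: posterior odds = prior odds * LR ** n."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio ** n_independent_sources
    return post_odds / (1.0 + post_odds)

# Assume each primary source is three times as likely under the claim as
# under its negation, and a neutral prior of 0.5.
independent = posterior(0.5, 3.0, 4)  # four tweets, four distinct sources
shared = posterior(0.5, 3.0, 1)       # four tweets, one shared source
```

With these numbers the independent consensus yields a posterior of about 0.99 (odds of 81 to 1), while the shared-source consensus yields only 0.75: repetition without independence adds no evidence.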
What interventions can decrease or increase belief polarisation in a population of rational agents?
In many situations where people communicate (e.g., Twitter, Facebook), people self-organise into "echo chambers" of like-minded individuals, with different echo chambers espousing very different beliefs. Why does this occur? Previous work has demonstrated that such belief polarisation can emerge even when all agents are completely rational, as long as their initial beliefs are heterogeneous and they do not automatically know whom to trust. In this work, we used agent-based simulations to further investigate the mechanisms behind belief polarisation, extending previous work with a more realistic scenario. In this scenario, we found that previously proposed methods for reducing belief polarisation did not work, but we were able to find a new method that did. However, this same method could be reversed by adversarial entities to increase belief polarisation. We discuss how this danger can best be mitigated and what theoretical conclusions can be drawn from our findings.
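The basic mechanism by which heterogeneous beliefs plus uncertain trust can produce echo chambers can be sketched in a bounded-confidence style simulation. This is a generic illustration of the phenomenon, not the agent model, scenario, or interventions studied in the paper; all parameter names and values are assumptions.

```python
# Bounded-confidence-style sketch of belief polarisation (illustrative only).
# Agents hold beliefs in [0, 1] and only move toward opinions they already
# find credible (within trust_radius), so distinct clusters can emerge.
import random

def simulate(n_agents=100, trust_radius=0.2, rounds=200, seed=0):
    rng = random.Random(seed)
    beliefs = [rng.random() for _ in range(n_agents)]  # heterogeneous start
    for _ in range(rounds):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i != j and abs(beliefs[i] - beliefs[j]) < trust_radius:
            # Mutually credible agents move partway toward their midpoint
            mid = (beliefs[i] + beliefs[j]) / 2
            beliefs[i] += 0.5 * (mid - beliefs[i])
            beliefs[j] += 0.5 * (mid - beliefs[j])
    return beliefs
```

With a small trust radius, agents who start far apart never influence one another and separate clusters persist; widening the radius (so everyone finds everyone else credible) instead drives the population toward consensus.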