
    Atrasentan and renal events in patients with type 2 diabetes and chronic kidney disease (SONAR): a double-blind, randomised, placebo-controlled trial

    Background: Short-term treatment for people with type 2 diabetes using a low dose of the selective endothelin A receptor antagonist atrasentan reduces albuminuria without causing significant sodium retention. We report the long-term effects of treatment with atrasentan on major renal outcomes.
    Methods: We did this double-blind, randomised, placebo-controlled trial at 689 sites in 41 countries. We enrolled adults aged 18–85 years with type 2 diabetes, estimated glomerular filtration rate (eGFR) 25–75 mL/min per 1·73 m² of body surface area, and a urine albumin-to-creatinine ratio (UACR) of 300–5000 mg/g who had received maximum labelled or tolerated renin–angiotensin system inhibition for at least 4 weeks. Participants were given atrasentan 0·75 mg orally daily during an enrichment period before random group assignment. Those with a UACR decrease of at least 30% with no substantial fluid retention during the enrichment period (responders) were included in the double-blind treatment period. Responders were randomly assigned to receive either atrasentan 0·75 mg orally daily or placebo. All patients and investigators were masked to treatment assignment. The primary endpoint was a composite of doubling of serum creatinine (sustained for ≥30 days) or end-stage kidney disease (eGFR <15 mL/min per 1·73 m² sustained for ≥90 days, chronic dialysis for ≥90 days, kidney transplantation, or death from kidney failure) in the intention-to-treat population of all responders. Safety was assessed in all patients who received at least one dose of their assigned study treatment. The study is registered with ClinicalTrials.gov, number NCT01858532.
    Findings: Between May 17, 2013, and July 13, 2017, 11 087 patients were screened; 5117 entered the enrichment period, and 4711 completed the enrichment period. Of these, 2648 patients were responders and were randomly assigned to the atrasentan group (n=1325) or placebo group (n=1323). Median follow-up was 2·2 years (IQR 1·4–2·9). 79 (6·0%) of 1325 patients in the atrasentan group and 105 (7·9%) of 1323 in the placebo group had a primary composite renal endpoint event (hazard ratio [HR] 0·65 [95% CI 0·49–0·88]; p=0·0047). Fluid retention and anaemia adverse events, which have been previously attributed to endothelin receptor antagonists, were more frequent in the atrasentan group than in the placebo group. Hospital admission for heart failure occurred in 47 (3·5%) of 1325 patients in the atrasentan group and 34 (2·6%) of 1323 patients in the placebo group (HR 1·33 [95% CI 0·85–2·07]; p=0·208). 58 (4·4%) patients in the atrasentan group and 52 (3·9%) in the placebo group died (HR 1·09 [95% CI 0·75–1·59]; p=0·65).
    Interpretation: Atrasentan reduced the risk of renal events in patients with diabetes and chronic kidney disease who were selected to optimise efficacy and safety. These data support a potential role for selective endothelin receptor antagonists in protecting renal function in patients with type 2 diabetes at high risk of developing end-stage kidney disease.
    Funding: AbbVie
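    As a quick arithmetic check on the figures above, the sketch below (not the trial's analysis code) recomputes the crude event proportions from the reported counts; note that the published hazard ratio of 0·65 comes from a time-to-event analysis, so it need not equal the crude risk ratio.
    ```python
    # Crude event proportions from the counts reported in the abstract.
    events_atrasentan, n_atrasentan = 79, 1325    # primary composite events / patients
    events_placebo, n_placebo = 105, 1323

    risk_atrasentan = events_atrasentan / n_atrasentan   # ~0.060 (6.0%)
    risk_placebo = events_placebo / n_placebo            # ~0.079 (7.9%)
    crude_risk_ratio = risk_atrasentan / risk_placebo    # ~0.75, vs. reported HR 0.65

    print(f"atrasentan: {risk_atrasentan:.1%}, placebo: {risk_placebo:.1%}, "
          f"crude risk ratio: {crude_risk_ratio:.2f}")
    ```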

    MetaCOG: Learning a Metacognition to Recover What Objects Are Actually There

    Humans not only form representations about the world based on what we see, but also learn meta-cognitive representations about how our own vision works. This enables us to recognize when our vision is unreliable (e.g., when we realize that we are experiencing a visual illusion) and enables us to question what we see. Inspired by this human capacity, we present MetaCOG: a model that increases the robustness of object detectors by learning representations of their reliability, and does so without feedback. Specifically, MetaCOG is a hierarchical probabilistic model that expresses a joint distribution over the objects in a 3D scene and the outputs produced by a detector. When paired with an off-the-shelf object detector, MetaCOG takes detections as input and infers the detector's tendencies to miss objects of certain categories and to hallucinate objects that are not actually present, all without access to ground-truth object labels. When paired with three modern neural object detectors, MetaCOG learns useful and accurate meta-cognitive representations, resulting in improved performance on the detection task. Additionally, we show that MetaCOG is robust to varying levels of error in the detections. Our results are a proof-of-concept for a novel approach to the problem of correcting a faulty vision system's errors. The model code, datasets, results, and demos are available at https://osf.io/8b9qt/?view_only=8c1b1c412c6b4e1697e3c7859be2fce6
    Comment: 12 pages, 4 figures
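    As a rough illustration of the kind of joint distribution the abstract describes, the toy sketch below samples per-category miss and hallucination tendencies, a latent scene, and the resulting detections. The category names, priors, and count-level abstraction are illustrative assumptions rather than the authors' model, and the inference step that MetaCOG performs without ground truth is omitted.
    ```python
    # Toy generative sketch: latent scene objects + a detector's noisy outputs,
    # parameterized by per-category miss and hallucination tendencies.
    import numpy as np

    rng = np.random.default_rng(0)
    CATEGORIES = ["chair", "bowl", "potted plant"]   # hypothetical label set

    def sample_detector_params():
        """Latent per-category reliability: miss rate and hallucination rate."""
        miss = {c: rng.beta(1, 5) for c in CATEGORIES}              # prob. of missing a real object
        hallucinate = {c: rng.gamma(0.5, 0.5) for c in CATEGORIES}  # expected spurious detections
        return miss, hallucinate

    def sample_scene():
        """Latent world state: how many objects of each category are present."""
        return {c: rng.poisson(1.0) for c in CATEGORIES}

    def sample_detections(scene, miss, hallucinate):
        """Detector outputs given the scene and its (latent) reliability."""
        detections = {}
        for c in CATEGORIES:
            hits = rng.binomial(scene[c], 1.0 - miss[c])   # real objects that get detected
            spurious = rng.poisson(hallucinate[c])         # hallucinated detections
            detections[c] = hits + spurious
        return detections

    miss, hallucinate = sample_detector_params()
    print(sample_detections(sample_scene(), miss, hallucinate))
    ```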

    Neurocomputational Modeling of Human Physical Scene Understanding

    Human scene understanding involves not just localizing objects, but also inferring latent attributes that affect how the scene might unfold, such as the masses of objects within the scene. These attributes can sometimes only be inferred from the dynamics of a scene, but people can flexibly integrate this information to update their inferences. Here we propose a neurally plausible Efficient Physical Inference model that can generate and update inferences from videos. This model makes inferences over the inputs to a generative model of physics and graphics, using an LSTM-based recognition network to efficiently approximate rational probabilistic conditioning. We find that this model not only rapidly and accurately recovers latent object information, but also that its inferences evolve with more information in a way similar to human judgments. The model provides a testable hypothesis about the population-level activity in brain regions underlying physical reasoning.
    National Science Foundation (U.S.). Science and Technology Centers (STCs): Integrative Partnerships Program (Award CCF-1231216)
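    The sketch below illustrates, under my own assumptions rather than the authors' architecture, what an LSTM-based recognition network for this kind of amortized inference might look like: a sequence of per-frame features is mapped to the parameters of a Gaussian posterior over a latent physical attribute such as mass, so the estimate can be refined as more of the video is observed.
    ```python
    import torch
    import torch.nn as nn

    class MassRecognitionNet(nn.Module):
        """Amortized inference: per-frame features -> Gaussian posterior over mass."""
        def __init__(self, feature_dim=64, hidden_dim=128):
            super().__init__()
            self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, 2)   # mean and log-variance of q(mass | frames so far)

        def forward(self, frame_features):
            # frame_features: (batch, time, feature_dim), e.g., precomputed frame encodings
            hidden, _ = self.lstm(frame_features)
            mu, log_var = self.head(hidden).unbind(dim=-1)
            return mu, log_var   # posterior parameters at every timestep, refined as frames arrive

    # Such a network would typically be trained on (video, latent) pairs simulated
    # from the generative physics-and-graphics model it is meant to invert.
    net = MassRecognitionNet()
    mu, log_var = net(torch.randn(8, 30, 64))   # 8 clips, 30 frames, 64-dim features each
    ```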

    Efficient inverse graphics in biological face processing

    Vision not only detects and recognizes objects, but performs rich inferences about the underlying scene structure that causes the patterns of light we see. Inverting generative models, or “analysis-by-synthesis”, presents a possible solution, but its mechanistic implementations have typically been too slow for online perception, and their mapping to neural circuits remains unclear. Here we present a neurally plausible efficient inverse graphics model and test it in the domain of face recognition. The model is based on a deep neural network that learns to invert a three-dimensional face graphics program in a single fast feedforward pass. It explains human behavior qualitatively and quantitatively, including the classic “hollow face” illusion, and it maps directly onto a specialized face-processing circuit in the primate brain. The model fits both behavioral and neural data better than state-of-the-art computer vision models, and suggests an interpretable reverse-engineering account of how the brain transforms images into percepts.
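    The sketch below is a loose illustration, not the paper's model: a feedforward encoder is trained on (latent, image) pairs sampled from a stand-in face renderer, so that at test time a single forward pass maps an image back to the graphics program's latents. The latent dimensionality, network shape, and placeholder renderer are all invented for illustration.
    ```python
    import torch
    import torch.nn as nn

    LATENT_DIM = 200   # hypothetical size of the graphics program's latent code
    IMAGE_SIDE = 32    # tiny stand-in resolution for illustration

    # Stand-in for a 3D face graphics program (a real one would rasterize a
    # morphable face model); a fixed random projection just gives us images
    # whose pixels depend deterministically on the latents.
    _FAKE_RENDERER = torch.randn(LATENT_DIM, 3 * IMAGE_SIDE * IMAGE_SIDE)

    def render_faces(latents):
        return (latents @ _FAKE_RENDERER).view(-1, 3, IMAGE_SIDE, IMAGE_SIDE)

    # Feedforward "inverse graphics" network: image -> latent code in one pass.
    encoder = nn.Sequential(
        nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
        nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, LATENT_DIM),
    )
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    for step in range(100):
        latents = torch.randn(16, LATENT_DIM)   # sample scene latents from the prior
        images = render_faces(latents)          # synthesize the training images
        loss = nn.functional.mse_loss(encoder(images), latents)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    ```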