Atrasentan and renal events in patients with type 2 diabetes and chronic kidney disease (SONAR): a double-blind, randomised, placebo-controlled trial
Background: Short-term treatment for people with type 2 diabetes using a low dose of the selective endothelin A receptor antagonist atrasentan reduces albuminuria without causing significant sodium retention. We report the long-term effects of treatment with atrasentan on major renal outcomes. Methods: We did this double-blind, randomised, placebo-controlled trial at 689 sites in 41 countries. We enrolled adults aged 18–85 years with type 2 diabetes, estimated glomerular filtration rate (eGFR) 25–75 mL/min per 1·73 m² of body surface area, and a urine albumin-to-creatinine ratio (UACR) of 300–5000 mg/g who had received maximum labelled or tolerated renin–angiotensin system inhibition for at least 4 weeks. Participants were given atrasentan 0·75 mg orally daily during an enrichment period before random group assignment. Those with a UACR decrease of at least 30% and no substantial fluid retention during the enrichment period (responders) were included in the double-blind treatment period. Responders were randomly assigned to receive either atrasentan 0·75 mg orally daily or placebo. All patients and investigators were masked to treatment assignment. The primary endpoint was a composite of doubling of serum creatinine (sustained for ≥30 days) or end-stage kidney disease (eGFR <15 mL/min per 1·73 m² sustained for ≥90 days, chronic dialysis for ≥90 days, kidney transplantation, or death from kidney failure) in the intention-to-treat population of all responders. Safety was assessed in all patients who received at least one dose of their assigned study treatment. The study is registered with ClinicalTrials.gov, number NCT01858532. Findings: Between May 17, 2013, and July 13, 2017, 11 087 patients were screened; 5117 entered the enrichment period, and 4711 completed it. Of these, 2648 patients were responders and were randomly assigned to the atrasentan group (n=1325) or placebo group (n=1323). Median follow-up was 2·2 years (IQR 1·4–2·9).
79 (6·0%) of 1325 patients in the atrasentan group and 105 (7·9%) of 1323 in the placebo group had a primary composite renal endpoint event (hazard ratio [HR] 0·65 [95% CI 0·49–0·88]; p=0·0047). Fluid retention and anaemia adverse events, which have previously been attributed to endothelin receptor antagonists, were more frequent in the atrasentan group than in the placebo group. Hospital admission for heart failure occurred in 47 (3·5%) of 1325 patients in the atrasentan group and 34 (2·6%) of 1323 patients in the placebo group (HR 1·33 [95% CI 0·85–2·07]; p=0·208). 58 (4·4%) patients in the atrasentan group and 52 (3·9%) in the placebo group died (HR 1·09 [95% CI 0·75–1·59]; p=0·65). Interpretation: Atrasentan reduced the risk of renal events in patients with diabetes and chronic kidney disease who were selected to optimise efficacy and safety. These data support a potential role for selective endothelin receptor antagonists in protecting renal function in patients with type 2 diabetes at high risk of developing end-stage kidney disease. Funding: AbbVie.
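As a quick arithmetic check, the crude event proportions can be recomputed from the counts stated in the abstract. Note the published HR of 0·65 comes from a time-to-event Cox proportional-hazards model, so the crude risk ratio below is only a rough summary, not a reproduction of that analysis:

```python
# Crude event proportions from the SONAR primary-endpoint counts quoted above.
# The published HR (0.65) adjusts for follow-up time via a Cox model; this is
# only a sanity check on the raw percentages.
atrasentan_events, atrasentan_n = 79, 1325
placebo_events, placebo_n = 105, 1323

risk_atrasentan = atrasentan_events / atrasentan_n   # ~0.060, i.e. 6.0%
risk_placebo = placebo_events / placebo_n            # ~0.079, i.e. 7.9%
crude_risk_ratio = risk_atrasentan / risk_placebo    # ~0.75, close to the HR

print(f"{risk_atrasentan:.3f} vs {risk_placebo:.3f}; ratio {crude_risk_ratio:.2f}")
```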
Automatic computation of navigational affordances explains selective processing of geometry in scene perception: behavioral and computational evidence
One of the more surprising findings in visual cognition is the apparent sparsity of our scene percepts. Yet, scene perception also enables planning and navigation, which require a detailed, structured analysis of the scene geometry, including exit locations and the obstacles along the way. Here, we hypothesize that computation of navigational affordances (e.g., paths to an exit) is a “default” task in the mind, and that task induces selective analysis of the scene geometry most relevant to computing these affordances. In an indoor scene setting, we show that observers more readily detect changes if these changes impact shortest paths to visible exits. We show that behavioral detection rates are explained by a new model of attention that makes heterogeneous-precision inferences about the scene geometry, relative to how its different regions impact navigational affordance computation. This work provides a formal window into the contents of our scene percepts.
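The notion of scene geometry "relevant to shortest paths" can be made concrete with a toy grid world (our illustration, not the authors' model): a cell matters for the affordance computation if it lies on some shortest path from the viewer to a visible exit, which breadth-first distances from both endpoints identify directly.

```python
from collections import deque

# Toy illustration (ours, not the paper's model): a grid cell is relevant to
# navigational affordances if it lies on a shortest path to a visible exit.

def bfs_dist(grid, source):
    """Breadth-first distances from `source`; '#' cells are walls."""
    rows, cols = len(grid), len(grid[0])
    dist = {source: 0}
    queue = deque([source])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def shortest_path_cells(grid, start, exit_cell):
    """Cells x with d(start, x) + d(x, exit) == d(start, exit)."""
    d_start = bfs_dist(grid, start)
    d_exit = bfs_dist(grid, exit_cell)
    best = d_start[exit_cell]
    return {x for x in d_start if x in d_exit and d_start[x] + d_exit[x] == best}

grid = ["....",
        ".##.",
        "....",
        ".#.."]
path_cells = shortest_path_cells(grid, start=(0, 0), exit_cell=(2, 3))
# cells in the dead-end bottom row, e.g. (3, 0), are off every shortest path
```

On the hypothesis above, changes inside `path_cells` should be detected more readily than changes in the dead-end cells.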
Where does the flow go? Humans automatically predict liquid pathing with coarse-grained simulation
Bodies of water manifest rich physical interactions via non-linear dynamics. Yet, humans can successfully perceive and negotiate such systems in everyday life. Here, we hypothesize that liquid bodies play such an integral role in human life that the mind automatically computes their approximate flow-paths, with attention dynamically deployed to efficiently predict flow trajectories using coarse mental simulation. When viewing animations of liquids flowing through maze-like scenes, we asked participants to detect temporary slowdowns embedded in these animations. This task, without any overt prompt of path or prediction, reveals that detection rates vary with the moment-to-moment changes in coarse flow-path predictions. Critically, coarse predictions better explain trial-level detection rates than a finer-grained alternative, independently of bottom-up salience of slowdowns. This work suggests liquid flow-path prediction as an implicit task in the mind, and introduces rich attentional dynamics as a new window into intuitive physics computations.
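The flavor of a coarse-grained flow-path prediction can be caricatured with a toy "fall or spread" rule on a grid (our sketch, not the paper's simulator): liquid falls when the cell below is open and otherwise spreads sideways along the ledge.

```python
# Toy coarse flow-path sketch (ours, not the paper's model): from a source
# cell, liquid falls when the cell below is open; when blocked below, it
# spreads one cell left and right, from which it may fall again.

def flow_cells(grid, source):
    rows, cols = len(grid), len(grid[0])
    wet = set()
    stack = [source]
    while stack:
        r, c = stack.pop()
        if (not (0 <= r < rows and 0 <= c < cols)
                or grid[r][c] == '#' or (r, c) in wet):
            continue
        wet.add((r, c))
        if r + 1 >= rows or grid[r + 1][c] != '#':
            stack.append((r + 1, c))      # fall (out-of-range pushes are skipped)
        else:
            stack.append((r, c - 1))      # blocked below: spread sideways
            stack.append((r, c + 1))
    return wet

grid = [".....",
        "..#..",
        ".###.",
        "....."]
wet = flow_cells(grid, source=(0, 2))
# the ledge splits the flow toward both bottom corners; the cell under the
# ledge, e.g. (3, 2), stays dry
```

On the account above, slowdowns in cells along the predicted wet path should be detected more readily than slowdowns elsewhere.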
Modeling temporal attention in dynamic scenes: Hypothesis-driven resource allocation using adaptive computation explains both objective tracking performance and subjective effort judgments
Most work on attention (in terms of both psychophysical experiments and computational modeling) involves selection in static scenes. And even when dynamic displays are used, performance is still typically characterized with only a single variable (such as the number of items correctly tracked in Multiple Object Tracking; MOT). But the allocation of attention in daily life (e.g. during foraging, navigation, or play) involves both objective performance and subjective effort, and can vary dramatically from moment to moment. Here we attempt to capture this sort of rich temporal ebb and flow of attention in a novel and generalizable adaptive computation architecture. In this architecture, computing resources are dynamically allocated to perform partial belief updates over both objects (in space) and moments (in time) flexibly and according to task demands. During MOT this framework is able to explain both objective tracking performance and the subjective sense of trial-by-trial effort.
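The core idea of demand-driven partial belief updates can be sketched in a few lines (our toy illustration, not the paper's architecture): each tracked object carries an uncertainty score, a fixed per-frame budget of updates is spent greedily on the most uncertain objects, and the fraction of budget consumed serves as a proxy for effort.

```python
# Toy sketch of adaptive resource allocation (ours, not the paper's model):
# spend a fixed per-frame compute budget on the most uncertain tracked
# objects; each partial belief update halves an object's uncertainty.

def allocate(uncertainties, budget):
    updates = {i: 0 for i in range(len(uncertainties))}
    u = list(uncertainties)
    for _ in range(budget):
        i = max(range(len(u)), key=lambda j: u[j])   # most uncertain object
        if u[i] <= 0:
            break
        updates[i] += 1
        u[i] /= 2                                    # partial update shrinks uncertainty
    return updates, u

# three objects; object 0 is hardest to track this frame (assumed values)
updates, residual = allocate([0.8, 0.1, 0.4], budget=3)
effort = sum(updates.values()) / 3   # fraction of budget spent, an effort proxy
```

Because the budget chases uncertainty, both tracking accuracy (which objects get updated) and felt effort (how much budget is burned) fall out of the same allocation process.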
Real-time inference of physical properties in dynamic scenes
Human scene understanding involves not just localizing objects, but also inferring the latent causal properties that give rise to the scene: for instance, how heavy those objects are. These properties can be guessed based on visual features (e.g., material texture), but we can also infer them from how they impact the dynamics of the scene. Furthermore, these inferences are performed rapidly in response to dynamic, ongoing information. Here we propose a computational framework for understanding these inferences, and three models that instantiate this framework. We compare these models to the evolution of human beliefs about object masses. We find that while people's judgments are generally consistent with Bayesian inference over these latent parameters, the models that best explain human judgments are approximations to this inference that hold and dynamically update beliefs. An earlier version of this work was published in the proceedings of CCN 2018 at https://ccneuro.org/2018/proceedings/1091.pdf
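Bayesian inference over a latent mass from dynamics can be illustrated with a minimal discrete filter (our sketch, with assumed numbers; the paper's models are richer): hypothesize a grid of masses, predict how far a fixed impulse should push each, and update the belief as noisy displacement observations arrive.

```python
import math

# Toy discrete Bayes filter over a latent mass (our illustration, not the
# paper's models). Assumed forward model: displacement ~ Normal(IMPULSE/mass, SIGMA).
IMPULSE, SIGMA = 10.0, 0.5
masses = [1.0, 2.0, 5.0]          # hypothesized masses (assumed grid)
belief = [1 / 3] * 3              # uniform prior

def update(belief, obs):
    """One Bayesian belief update from a noisy displacement observation."""
    likes = [math.exp(-((obs - IMPULSE / m) ** 2) / (2 * SIGMA ** 2))
             for m in masses]
    post = [b * l for b, l in zip(belief, likes)]
    z = sum(post)
    return [p / z for p in post]

for obs in (5.2, 4.8, 5.1):       # observations consistent with mass = 2.0
    belief = update(belief, obs)

best = masses[max(range(3), key=lambda i: belief[i])]
```

Each frame of an unfolding scene plays the role of one `obs`, so the belief evolves online, which is the property compared against human judgment trajectories.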
MetaCOG: Learning a Metacognition to Recover What Objects Are Actually There
Humans not only form representations about the world based on what we see,
but also learn meta-cognitive representations about how our own vision works.
This enables us to recognize when our vision is unreliable (e.g., when we
realize that we are experiencing a visual illusion) and enables us to question
what we see. Inspired by this human capacity, we present MetaCOG: a model that
increases the robustness of object detectors by learning representations of
their reliability, and does so without feedback. Specifically, MetaCOG is a
hierarchical probabilistic model that expresses a joint distribution over the
objects in a 3D scene and the outputs produced by a detector. When paired with
an off-the-shelf object detector, MetaCOG takes detections as input and infers
the detector's tendencies to miss objects of certain categories and to
hallucinate objects that are not actually present, all without access to
ground-truth object labels. When paired with three modern neural object
detectors, MetaCOG learns useful and accurate meta-cognitive representations,
resulting in improved performance on the detection task. Additionally, we show
that MetaCOG is robust to varying levels of error in the detections. Our
results are a proof-of-concept for a novel approach to the problem of
correcting a faulty vision system's errors. The model code, datasets, results,
and demos are available:
https://osf.io/8b9qt/?view_only=8c1b1c412c6b4e1697e3c7859be2fce6 (12 pages, 4 figures)
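The gist of learning a detector's reliability without ground truth can be caricatured in a few lines (our toy sketch; MetaCOG itself is a hierarchical probabilistic model over 3D scenes): detections of a static scene should persist across viewpoints, so categories whose detections keep vanishing are candidate hallucinations.

```python
from collections import Counter

# Toy sketch of detector metacognition (ours, not MetaCOG): score each
# detected category by how often its detections fail to persist across
# viewpoints of the same static scene -- no ground-truth labels needed.

views = [{"chair", "tv"},
         {"chair", "tv"},
         {"chair", "potted_plant"}]      # hypothetical per-view detections

counts = Counter(cat for view in views for cat in view)
instability = {cat: 1 - n / len(views) for cat, n in counts.items()}
likely_hallucinated = {cat for cat, s in instability.items() if s > 0.5}
```

A symmetric count of categories that appear in the 3D scene estimate but are missing from a view would give per-category miss rates, the other half of the metacognitive representation.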
Causal and compositional generative models in online perception
From a quick glance or the touch of an object, our brains map sensory signals to scenes composed of rich and detailed shapes and surfaces. Unlike the standard approaches to perception, we argue that this mapping draws on internal causal and compositional models of the physical world and these internal models underlie the generalization capacity of human perception. Here, we present a generative model of visual and multisensory perception in which the latent variables encode intrinsic (e.g., shape) and extrinsic (e.g., occlusion) object properties. Latent variables are inputs to causal models that output sense-specific signals. We present a recognition network that performs efficient inference in the generative model, computing at a speed similar to online perception. We show that our model, but not alternatives, can account for human performance in an occluded face matching task and in a visual-to-haptic face matching task.
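The causal-and-compositional structure described above can be sketched minimally (our illustration, with made-up latents; the paper's generative model is far richer): shared latent variables feed separate sense-specific forward models, so vision and touch are explained by one underlying scene description.

```python
# Toy sketch (ours, not the paper's model): shared latents (shape, occlusion)
# drive separate causal forward models for vision and touch.

def render_visual(shape, occlusion):
    # hypothetical visual model: an occluder hides part of the object's height
    return max(shape["height"] - occlusion, 0.0)

def render_haptic(shape, occlusion):
    # touch is unaffected by visual occlusion and reads out the full height
    return shape["height"]

latents = {"shape": {"height": 10.0}, "occlusion": 4.0}
visual = render_visual(latents["shape"], latents["occlusion"])
haptic = render_haptic(latents["shape"], latents["occlusion"])
```

Because both signals are functions of the same latents, inference from either modality constrains the other, which is what lets the model match faces across vision and touch.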
Neurocomputational Modeling of Human Physical Scene Understanding
Human scene understanding involves not just localizing objects, but also inferring latent attributes that affect how the scene might unfold, such as the masses of objects within the scene. These attributes can sometimes only be inferred from the dynamics of a scene, but people can flexibly integrate this information to update their inferences. Here we propose a neurally plausible Efficient Physical Inference model that can generate and update inferences from videos. This model makes inferences over the inputs to a generative model of physics and graphics, using an LSTM-based recognition network to efficiently approximate rational probabilistic conditioning. We find that this model not only rapidly and accurately recovers latent object information, but also that its inferences evolve with more information in a way similar to human judgments. The model provides a testable hypothesis about the population-level activity in brain regions underlying physical reasoning. National Science Foundation (U.S.). Science and Technology Centers (STCs): Integrative Partnerships Program (Award CCF-1231216)
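The amortization idea, replacing slow probabilistic conditioning with a fast learned recognition pass, can be caricatured with an offline-built inverse lookup (our sketch, with an assumed forward model; the paper uses an LSTM recognition network over video):

```python
# Toy amortized recognition (ours, not the paper's model): tabulate the
# forward physics model offline, then "recognize" the latent mass at test
# time with a single nearest-neighbor lookup instead of iterative inference.

IMPULSE = 10.0                                   # assumed fixed impulse
masses = [round(0.5 * i, 1) for i in range(1, 21)]   # hypothesis grid 0.5..10.0
table = [(IMPULSE / m, m) for m in masses]           # displacement -> mass

def recognize(observed_displacement):
    """One fast feedforward-style pass: nearest precomputed displacement wins."""
    return min(table, key=lambda t: abs(t[0] - observed_displacement))[1]

mass_estimate = recognize(5.1)   # a displacement near 5 implies a mass near 2
```

A neural recognition network plays the same role as this table, but generalizes smoothly and consumes raw video rather than a scalar summary.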
Efficient inverse graphics in biological face processing
© 2020 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. Distributed under a Creative Commons Attribution NonCommercial License 4.0 (CC BY-NC). Vision not only detects and recognizes objects, but performs rich inferences about the underlying scene structure that causes the patterns of light we see. Inverting generative models, or “analysis-by-synthesis”, presents a possible solution, but its mechanistic implementations have typically been too slow for online perception, and their mapping to neural circuits remains unclear. Here we present a neurally plausible efficient inverse graphics model and test it in the domain of face recognition. The model is based on a deep neural network that learns to invert a three-dimensional face graphics program in a single fast feedforward pass. It explains human behavior qualitatively and quantitatively, including the classic “hollow face” illusion, and it maps directly onto a specialized face-processing circuit in the primate brain. The model fits both behavioral and neural data better than state-of-the-art computer vision models, and suggests an interpretable reverse-engineering account of how the brain transforms images into percepts.