    How Similar Are the Mice to Men? Between-Species Comparison of Left Ventricular Mechanics Using Strain Imaging

    BACKGROUND: While mammalian heart size maintains a constant proportion to whole body size, left ventricular (LV) function parameters show a more complex scaling pattern. We used 2-D speckle tracking strain imaging to determine whether LV myocardial strains and strain rates scale to heart size. METHODS: We studied 18 mice, 15 rats, 6 rabbits, 12 dogs and 20 human volunteers by 2-D echocardiography. Relationships between longitudinal or circumferential strains/strain rates (S_Long/SR_Long, S_Circ/SR_Circ) and LV end-diastolic volume (EDV) or mass were assessed by the allometric (power-law) equation Y = kM^β. RESULTS: Mean LV mass in the individual species varied from 0.038 to 134 g, LV EDV varied from 0.015 to 102 ml, and RR interval varied from 81 to 1090 ms. While S_Long increased with increasing LV EDV or mass (β values 0.047±0.006 and 0.051±0.005, p<0.0001 vs. 0 for both), S_Circ was unchanged (p = NS for both LV EDV and mass). Systolic and diastolic SR_Long and SR_Circ showed inverse correlations with LV EDV or mass (p<0.0001 vs. 0 for all comparisons). The ratio between S_Long and S_Circ increased with increasing LV EDV or mass (β values 0.039±0.010 and 0.040±0.011, p<0.0003 for both). CONCLUSIONS: While S_Circ is unchanged, S_Long increases with increasing heart size, indicating that large mammals rely more on the long-axis contribution to systolic function. Both systolic and diastolic SR_Long and SR_Circ show the expected decrease with increasing heart size.
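
    A minimal sketch of the fitting approach implied by the power-law model Y = kM^β above: the exponent β can be estimated by linear regression on log-transformed data. The numeric values below are hypothetical, not data from the study, and the numpy/scipy tooling is my own choice, not the authors'.

        import numpy as np
        from scipy import stats

        # Hypothetical LV mass (g) and longitudinal strain (%) values spanning a
        # mouse-to-human range; illustrative only, not data from the study.
        lv_mass = np.array([0.04, 0.6, 2.5, 60.0, 130.0])
        s_long = np.array([14.0, 15.5, 16.5, 19.0, 20.0])

        # Fit log(Y) = log(k) + beta * log(M), i.e. Y = k * M**beta.
        fit = stats.linregress(np.log(lv_mass), np.log(s_long))
        beta, k = fit.slope, np.exp(fit.intercept)
        print(f"Y = {k:.2f} * M^{beta:.3f} (SE {fit.stderr:.3f}, p = {fit.pvalue:.3g})")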

    New Frontiers in Explainable AI: Understanding the GI to Interpret the GO

    In this paper we focus on the importance of interpreting the quality of the input of predictive models (potentially a GI, i.e., Garbage In) in order to make sense of the reliability of their output (potentially a GO, Garbage Out) in support of human decision making, especially in critical domains such as medicine. To this aim, we propose a framework in which we distinguish between the Gold Standard (or Ground Truth) and the set of annotations from which it is derived, and identify a set of quality dimensions that help to assess and interpret the AI advice: fineness, trueness, representativeness, conformity, and dryness. We then discuss the implications for obtaining more informative training sets and for the design of more usable Decision Support Systems.
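
    As a loose illustration (my own sketch, not the paper's implementation), the five quality dimensions could be attached to a training set as a simple record that a Decision Support System surfaces alongside its advice. The field names follow the abstract; the [0, 1] scoring convention and the acceptance threshold are assumptions.

        from dataclasses import dataclass

        @dataclass
        class TrainingSetQuality:
            """Scores for the five quality dimensions named above, assumed here to
            be normalised to [0, 1]; see the paper for their precise definitions."""
            fineness: float
            trueness: float
            representativeness: float
            conformity: float
            dryness: float

            def flags(self, threshold: float = 0.5) -> list[str]:
                # Report which dimensions fall below a (hypothetical) acceptance threshold.
                return [name for name, value in vars(self).items() if value < threshold]

        # Hypothetical assessment of a medical training set.
        quality = TrainingSetQuality(0.8, 0.7, 0.45, 0.9, 0.3)
        print(quality.flags())  # -> ['representativeness', 'dryness']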