
    Development and validation of a deep learning model to quantify glomerulosclerosis in kidney biopsy specimens

    Importance: A chronic shortage of donor kidneys is compounded by a high discard rate, and this rate is directly associated with biopsy specimen evaluation, which shows poor reproducibility among pathologists. A deep learning algorithm for measuring percent global glomerulosclerosis (an important predictor of outcome) on images of kidney biopsy specimens could enable pathologists to more reproducibly and accurately quantify percent global glomerulosclerosis, potentially saving organs that would have been discarded. Objective: To compare the performances of pathologists with a deep learning model on quantification of percent global glomerulosclerosis in whole-slide images of donor kidney biopsy specimens, and to determine the potential benefit of a deep learning model on organ discard rates. Design, Setting, and Participants: This prognostic study used whole-slide images acquired from 98 hematoxylin-eosin-stained frozen and 51 permanent donor biopsy specimen sections retrieved from 83 kidneys. Serial annotation by 3 board-certified pathologists served as ground truth for model training and for evaluation. Images of kidney biopsy specimens were obtained from the Washington University database (retrieved between June 2015 and June 2017). Cases were selected randomly from a database of more than 1000 cases to include biopsy specimens representing an equitable distribution within 0% to 5%, 6% to 10%, 11% to 15%, 16% to 20%, and more than 20% global glomerulosclerosis. Main Outcomes and Measures: Correlation coefficient (r) and root-mean-square error (RMSE) with respect to annotations were computed for cross-validated model predictions and on-call pathologists' estimates of percent global glomerulosclerosis when using individual and pooled slide results. Data were analyzed from March 2018 to August 2020. Results: The cross-validated model results of section images retrieved from 83 donor kidneys showed higher correlation with annotations (r = 0.916; 95% CI, 0.886-0.939) than on-call pathologists (r = 0.884; 95% CI, 0.825-0.923) that was enhanced when pooling glomeruli counts from multiple levels (r = 0.933; 95% CI, 0.898-0.956). Model prediction error for single levels (RMSE, 5.631; 95% CI, 4.735-6.517) was 14% lower than on-call pathologists (RMSE, 6.523; 95% CI, 5.191-7.783), improving to 22% with multiple levels (RMSE, 5.094; 95% CI, 3.972-6.301). The model decreased the likelihood of unnecessary organ discard by 37% compared with pathologists. Conclusions and Relevance: The findings of this prognostic study suggest that this deep learning model provided a scalable and robust method to quantify percent global glomerulosclerosis in whole-slide images of donor kidneys. The model performance improved by analyzing multiple levels of a section, surpassing the capacity of pathologists in the time-sensitive setting of examining donor biopsy specimens. The results indicate the potential of a deep learning model to prevent erroneous donor organ discard
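
    The evaluation quantities named in this abstract (percent global glomerulosclerosis per level, pooling of glomerulus counts across multiple levels, and the correlation coefficient and RMSE against pathologist annotations) can be illustrated with a minimal sketch. The function names and numbers below are hypothetical placeholders, not data or code from the study.

        # Minimal sketch of the metrics named in the abstract: percent global
        # glomerulosclerosis per level, pooling of glomerulus counts across levels,
        # and correlation / RMSE against annotations. All values are placeholders.
        import numpy as np
        from scipy.stats import pearsonr

        def percent_gs(sclerosed, total):
            """Percent global glomerulosclerosis from glomerulus counts on one level."""
            return 100.0 * sclerosed / total

        def pooled_percent_gs(levels):
            """Pool raw (sclerosed, total) counts from several levels before dividing."""
            sclerosed = sum(s for s, _ in levels)
            total = sum(t for _, t in levels)
            return percent_gs(sclerosed, total)

        # Pooling example: hypothetical (sclerosed, total) counts from two levels.
        print(f"pooled GS = {pooled_percent_gs([(3, 12), (4, 15)]):.1f}%")

        # Hypothetical per-biopsy estimates (percent) from the model and from annotation.
        model = np.array([4.0, 12.5, 18.0, 25.0, 7.5])
        annotation = np.array([5.0, 11.0, 20.0, 27.0, 6.0])

        r, _ = pearsonr(model, annotation)                  # correlation coefficient
        rmse = np.sqrt(np.mean((model - annotation) ** 2))  # root-mean-square error
        print(f"r = {r:.3f}, RMSE = {rmse:.3f}")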

    High contrast ultrasonic imaging of resin-rich regions in graphite/epoxy composites using entropy

    This study compares different approaches for imaging a near-surface resin-rich defect in a thin graphite/epoxy plate using backscattered ultrasound. The specimen was created by cutting a circular hole in the second ply; this region filled with excess resin from the graphite/epoxy sheets during the curing process. Backscattered waveforms were acquired using a 4 in. focal length, 5 MHz center frequency broadband transducer, scanned on a 100 × 100 grid of points that were 0.03 × 0.03 in. apart. The specimen was scanned with the defect side closest to the transducer. Consequently, the reflection from the resin-rich region cannot be gated from the large front-wall echo. At each point in the grid 256 waveforms were averaged together and subsequently used to produce peak-to-peak, Signal Energy (sum of squared digitized waveform values), as well as entropy images of two different types (a Renyi entropy and a joint entropy). As the figure shows, all of the entropy images exhibit better border delineation and defect contrast than either the peak-to-peak or Signal Energy images. The best results are obtained using the joint entropy of the backscattered waveforms with a reference function. Two different references are examined. The first is a reflection of the insonifying pulse from a stainless steel reflector. The second is an approximate optimum obtained from an iterative parametric search. The joint entropy images produced using this reference exhibit three times the contrast obtained in previous studies
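
    The per-pixel values named above can be sketched in a few lines: peak-to-peak amplitude, Signal Energy as the sum of squared digitized samples, and a joint entropy of the backscattered waveform with a reference waveform. The histogram-based joint entropy estimator shown here is one standard choice, not necessarily the estimator used in the study, and all names and data are illustrative.

        # Per-scan-point image values sketched from the abstract: peak-to-peak,
        # Signal Energy (sum of squared samples), and a joint entropy of the
        # waveform with a reference. Histogram binning is an assumed estimator.
        import numpy as np

        def peak_to_peak(waveform):
            return waveform.max() - waveform.min()

        def signal_energy(waveform):
            return np.sum(waveform ** 2)

        def joint_entropy(waveform, reference, bins=64):
            """Shannon joint entropy (bits) of paired (waveform, reference) samples."""
            hist, _, _ = np.histogram2d(waveform, reference, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        # Placeholder averaged A-line and reference echo (e.g., from a steel reflector).
        rng = np.random.default_rng(0)
        waveform = rng.normal(size=1024)
        reference = rng.normal(size=1024)
        print(peak_to_peak(waveform), signal_energy(waveform), joint_entropy(waveform, reference))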

    Entropy vs. Energy Waveform Processing: A Comparison Based on the Heat Equation

    Virtually all modern imaging devices collect electromagnetic or acoustic waves and use the energy carried by these waves to determine pixel values to create what is basically an “energy” picture. However, waves also carry “information,” as quantified by some form of entropy, and this may also be used to produce an “information” image. Numerous published studies have demonstrated the advantages of entropy, or “information imaging”, over conventional methods. The most sensitive information measure appears to be the joint entropy of the collected wave and a reference signal. The sensitivity of repeated experimental observations of a slowly-changing quantity may be defined as the mean variation (i.e., observed change) divided by mean variance (i.e., noise). Wiener integration permits computation of the required mean values and variances as solutions to the heat equation, permitting estimation of their relative magnitudes. There always exists a reference, such that joint entropy has larger variation and smaller variance than the corresponding quantities for signal energy, matching observations of several studies. Moreover, a general prescription for finding an “optimal” reference for the joint entropy emerges, which also has been validated in several studies
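
    The sensitivity figure of merit defined above can be written out explicitly. The symbols below are chosen for illustration and are not taken from the paper: f is the collected waveform, r the reference signal, H_J their joint entropy, and E the signal energy.

        % Sensitivity of repeated observations of a slowly changing quantity Q:
        % mean variation (observed change) divided by mean variance (noise).
        S[Q] \;=\; \frac{\overline{\Delta Q}}{\overline{\operatorname{Var}(Q)}},
        \qquad
        \exists\, r \ \text{such that}\ \; S\big[H_J(f, r)\big] \;>\; S\big[E(f)\big]

    The second statement is an illustrative rendering of the abstract's claim that a reference always exists for which the joint entropy varies more, and is less noisy, than the signal energy.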

    Deep learning quantification of percent steatosis in donor liver biopsy frozen sections

    BACKGROUND: Pathologist evaluation of donor liver biopsies provides information for accepting or discarding potential donor livers. Due to the urgent nature of the decision process, this is regularly performed using frozen sectioning at the time of biopsy. The percent steatosis in a donor liver biopsy correlates with transplant outcome; however, there is significant inter- and intra-observer variability in quantifying steatosis, compounded by frozen section artifact. We hypothesized that a deep learning model could identify and quantify steatosis in donor liver biopsies. METHODS: We developed a deep learning convolutional neural network that generates a steatosis probability map from an input whole slide image (WSI) of a hematoxylin and eosin-stained frozen section, and subsequently calculates the percent steatosis. Ninety-six WSI of frozen donor liver sections from our transplant pathology service were annotated for steatosis and used to train (n = 30 WSI) and test (n = 66 WSI) the deep learning model. FINDINGS: The model had good correlation and agreement with the annotation in both the training set (r of 0.88, intraclass correlation coefficient [ICC] of 0.88) and novel input test sets (r = 0.85 and ICC = 0.85). These measurements were superior to the estimates of the on-service pathologist at the time of initial evaluation (r = 0.52 and ICC = 0.52 for the training set, and r = 0.74 and ICC = 0.72 for the test set). INTERPRETATION: Use of this deep learning algorithm could be incorporated into routine pathology workflows for fast, accurate, and reproducible donor liver evaluation. FUNDING: Mid-America Transplant Society
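
    One plausible reading of the pipeline described in METHODS is that the percent steatosis is the fraction of tissue area whose per-pixel steatosis probability exceeds a threshold. The sketch below illustrates that reading only; the threshold, mask construction, and names are assumptions, not the study's implementation.

        # Sketch: percent steatosis from a per-pixel probability map, computed as
        # thresholded steatotic area over total tissue area. Threshold and data
        # here are illustrative assumptions.
        import numpy as np

        def percent_steatosis(prob_map, tissue_mask, threshold=0.5):
            """Percent of tissue pixels whose steatosis probability exceeds the threshold."""
            steatotic = (prob_map >= threshold) & tissue_mask
            return 100.0 * steatotic.sum() / tissue_mask.sum()

        # Placeholder probability map and tissue mask for a small tile.
        rng = np.random.default_rng(1)
        prob_map = rng.random((512, 512))
        tissue_mask = np.ones((512, 512), dtype=bool)
        print(f"{percent_steatosis(prob_map, tissue_mask):.1f}% steatosis")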

    Double-Peaked Low-Ionization Emission Lines in Active Galactic Nuclei

    We present a new sample of 116 double-peaked Balmer line Active Galactic Nuclei (AGN) selected from the Sloan Digital Sky Survey. Double-peaked emission lines are believed to originate in the accretion disks of AGN, a few hundred gravitational radii (Rg) from the supermassive black hole. We investigate the properties of the candidate disk emitters with respect to the full sample of AGN over the same redshifts, focusing on optical, radio and X-ray flux, broad line shapes and narrow line equivalent widths and line flux-ratios. We find that the disk-emitters have medium luminosities (~10^44 erg/s) and FWHM on average six times broader than the AGN in the parent sample. The double-peaked AGN are 1.6 times more likely to be radio-sources and are predominantly (76%) radio quiet, with about 12% of the objects classified as LINERs. Statistical comparison of the observed double-peaked line profiles with those produced by axisymmetric and non-axisymmetric accretion disk models allows us to impose constraints on accretion disk parameters. The observed Halpha line profiles are consistent with accretion disks with inclinations smaller than 50 deg, surface emissivity slopes of 1.0-2.5, outer radii larger than ~2000 Rg, inner radii between 200-800 Rg, and local turbulent broadening of 780-1800 km/s. The comparison suggests that 60% of accretion disks require some form of asymmetry (e.g., elliptical disks, warps, spiral shocks or hot spots). Comment: 60 pages, 19 figures, accepted for publication in AJ. For high quality figures and full tables, please see http://astro.princeton.edu/~iskra/disks.htm

    LSST: from Science Drivers to Reference Design and Anticipated Data Products

    (Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro PachĂłn in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg^2 field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5σ point-source depth in a single visit in r will be ~24.5 (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg^2 with ÎŽ < +34.5°, and will be imaged multiple times in six bands, ugrizy, covering the wavelength range 320-1050 nm. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe an 18,000 deg^2 region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to r ~ 27.5. The remaining 10% of the observing time will be allocated to projects such as a Very Deep and Fast time domain survey. The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world. Comment: 57 pages, 32 color figures, version with high-resolution figures available from https://www.lsst.org/overvie

    COPPADIS-2015 (COhort of Patients with PArkinson's DIsease in Spain, 2015), a global--clinical evaluations, serum biomarkers, genetic studies and neuroimaging--prospective, multicenter, non-interventional, long-term study on Parkinson's disease progression

    Background: Parkinson's disease (PD) is a progressive neurodegenerative disorder causing motor and non-motor symptoms that can affect independence, social adjustment and the quality of life (QoL) of both patients and caregivers. Studies designed to find diagnostic and/or progression biomarkers of PD are needed. We describe here the study protocol of COPPADIS-2015 (COhort of Patients with PArkinson's DIsease in Spain, 2015), an integral PD project based on four aspects/concepts: 1) PD as a global disease (motor and non-motor symptoms); 2) QoL and caregiver issues; 3) Biomarkers; 4) Disease progression. Methods/design: Observational, descriptive, non-interventional, 5-year follow-up, national (Spain), multicenter (45 centers from 15 autonomous communities), evaluation study. Specific goals: (1) detailed study (clinical evaluations, serum biomarkers, genetic studies and neuroimaging) of a population of PD patients from different areas of Spain, (2) comparison with a control group and (3) follow-up for 5 years. COPPADIS-2015 has been specifically designed to assess 17 proposed objectives. Study population: approximately 800 non-dementia PD patients, 600 principal caregivers and 400 control subjects. Study evaluations: (1) baseline includes motor assessment (e.g., Unified Parkinson's Disease Rating Scale part III), non-motor symptoms (e.g., Non-Motor Symptoms Scale), cognition (e.g., Parkinson's Disease Cognitive Rating Scale), mood and neuropsychiatric symptoms (e.g., Neuropsychiatric Inventory), disability, QoL (e.g., 39-item Parkinson's disease Quality of Life Questionnaire Summary-Index) and caregiver status (e.g., Zarit Caregiver Burden Inventory); (2) follow-up includes annual (patients) or biannual (caregivers and controls) evaluations. Serum biomarkers (S-100b protein, TNF-α, IL-1, IL-2, IL-6, vitamin B12, methylmalonic acid, homocysteine, uric acid, C-reactive protein, ferritin, iron) and brain MRI (volumetry, tractography and MTAi [Medial Temporal Atrophy Index]), at baseline and at the end of follow-up, and genetic studies (DNA and RNA) at baseline will be performed in a subgroup of subjects (300 PD patients and 100 control subjects). Study periods: (1) recruitment period, from November, 2015 to February, 2017 (basal assessment); (2) follow-up period, 5 years; (3) closing date of clinical follow-up, May, 2022. Funding: Public/Private. Discussion: COPPADIS-2015 is a challenging initiative. This project will provide important information on the natural history of PD and the value of various biomarkers

    A novel ESR2 frameshift mutation predisposes to medullary thyroid carcinoma and causes inappropriate RET expression


    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∌99% of the euchromatic genome and is accurate to an error rate of ∌1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead