11 research outputs found

    Distal Radioulnar Joint Arthroplasty with a Scheker Prosthesis


    Hyperpolarized 13C Magnetic Resonance Spectroscopic Imaging of Pyruvate Metabolism in Murine Breast Cancer Models of Different Metastatic Potential

    This study uses dynamic hyperpolarized [1-13C]pyruvate magnetic resonance spectroscopic imaging (MRSI) to estimate differences in glycolytic metabolism between highly metastatic (4T1, n = 7) and metastatically dormant (4T07, n = 7) murine breast cancer models. The apparent pyruvate-to-lactate conversion rate (kPL) and the lactate-to-pyruvate area-under-the-curve ratio (AUCL/P) were estimated from the metabolite images and compared with biochemical metabolic measures and immunohistochemistry (IHC). A non-significant trend of increasing kPL (p = 0.17) and AUCL/P (p = 0.11) from 4T07 to 4T1 tumors was observed. No significant differences between the two tumor lines were found in tumor IHC staining for lactate dehydrogenase-A (LDHA), monocarboxylate transporter-1 (MCT1), cluster of differentiation 31 (CD31), or hypoxia-inducible factor-1α (HIF-1α), in tumor lactate dehydrogenase (LDH) activity, or in blood lactate or glucose levels. However, AUCL/P was significantly correlated with tumor LDH activity (Spearman ρ = 0.621, p = 0.027) and blood glucose levels (Spearman ρ = −0.474, p = 0.042). kPL displayed similar, non-significant trends for LDH activity (Spearman ρ = 0.480, p = 0.114) and blood glucose levels (Spearman ρ = −0.414, p = 0.088). Neither kPL nor AUCL/P was significantly correlated with blood lactate levels or with tumor LDHA or MCT1. The significant positive correlation between AUCL/P and tumor LDH activity indicates the potential of AUCL/P as a biomarker of glycolytic metabolism in breast cancer models. However, the lack of a significant difference in in vivo tumor metabolism between the two models suggests similar pyruvate-to-lactate conversion despite differing metastatic potential.
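    The AUCL/P metric described above can be sketched as a trapezoidal integration of the dynamic metabolite curves. This is a minimal illustration, not the study's analysis pipeline; the sampling interval and signal values below are hypothetical.

    ```python
    def auc(signal, dt):
        """Trapezoidal area under a dynamic metabolite curve sampled every dt seconds."""
        return sum((a + b) / 2.0 * dt for a, b in zip(signal, signal[1:]))

    # Hypothetical dynamic signals (arbitrary units), sampled every 3 s
    pyruvate = [0.0, 8.0, 10.0, 6.0, 3.0, 1.0]
    lactate = [0.0, 1.0, 3.0, 4.0, 3.0, 2.0]

    # The AUC_L/P ratio: lactate area divided by pyruvate area
    auc_lp = auc(lactate, 3.0) / auc(pyruvate, 3.0)
    ```

    Unlike kPL, which requires fitting a kinetic exchange model to the time courses, the AUC ratio is model-free, which is part of its appeal as a simple metabolic biomarker.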

    Automated Placement of Scan and Pre-Scan Volumes for Breast MRI Using a Convolutional Neural Network

    Graphically prescribed patient-specific imaging volumes and local pre-scan volumes are routinely placed by MRI technologists to optimize image quality. However, manual placement of these volumes by MR technologists is time-consuming, tedious, and subject to intra- and inter-operator variability. Resolving these bottlenecks is critical with the rise in abbreviated breast MRI exams for screening purposes. This work proposes an automated approach for the placement of scan and pre-scan volumes for breast MRI. Anatomic 3-plane scout image series and associated scan volumes were retrospectively collected from 333 clinical breast exams acquired on 10 individual MRI scanners. Bilateral pre-scan volumes were also generated and reviewed in consensus by three MR physicists. A deep convolutional neural network was trained to predict both the scan and pre-scan volumes from the 3-plane scout images. The agreement between the network-predicted volumes and the clinical scan volumes or physicist-placed pre-scan volumes was evaluated using the intersection over union, the absolute distance between volume centers, and the difference in volume sizes. The scan volume model achieved a median 3D intersection over union of 0.69. The median error in scan volume location was 2.7 cm and the median size error was 2%. The median 3D intersection over union for the pre-scan placement was 0.68, with no significant difference in mean value between the left and right pre-scan volumes. The median error in the pre-scan volume location was 1.3 cm and the median size error was −2%. The average estimated uncertainty in positioning or volume size for both models ranged from 0.2 to 3.4 cm. Overall, this work demonstrates the feasibility of an automated approach for the placement of scan and pre-scan volumes based on a neural network model.
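    The 3D intersection-over-union used above to score volume placement can be sketched for axis-aligned boxes. Representing scan volumes as corner coordinates (xmin, ymin, zmin, xmax, ymax, zmax) is an assumption made here for illustration; the paper's volumes may be parameterized differently.

    ```python
    def volume(box):
        """Volume of an axis-aligned box given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
        return (box[3] - box[0]) * (box[4] - box[1]) * (box[5] - box[2])

    def iou_3d(a, b):
        """3D intersection over union of two axis-aligned boxes."""
        inter = 1.0
        for i in range(3):
            lo, hi = max(a[i], b[i]), min(a[i + 3], b[i + 3])
            if hi <= lo:
                return 0.0  # no overlap along this axis
            inter *= hi - lo
        return inter / (volume(a) + volume(b) - inter)
    ```

    For example, a predicted box shifted along one axis by half its width relative to the ground-truth box yields an IoU of 1/3, so the reported medians near 0.69 indicate substantially better alignment than that.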

    Addressing the Challenge of Assessing Physician-Level Screening Performance: Mammography as an Example

    Background: Motivated by the challenges in assessing physician-level cancer screening performance and the negative impact of misclassification, we propose a method (using mammography as an example) that enables confident assertion of adequate or inadequate performance, or alternatively recognizes when more data are required.

    Methods: Using established metrics for mammography screening performance, cancer detection rate (CDR) and recall rate (RR), and observed benchmarks from the Breast Cancer Surveillance Consortium (BCSC), we calculate the minimum volume required to be 95% confident that a physician is performing at or above benchmark thresholds. We graphically display the minimum observed CDR and RR values required to confidently assert adequate performance over a range of interpretive volumes. We use a prospectively collected database of consecutive mammograms from a clinical screening program outside the BCSC to illustrate how this method classifies individual physician performance as volume accrues.

    Results: Our analysis reveals that an annual interpretive volume of 2770 screening mammograms, above the United States' (US) mandatory (480) and average (1777) annual volumes but below England's mandatory annual volume (5000), is necessary to confidently assert that a physician performed adequately. In our analyzed US practice, a single year of data uniformly allowed confident assertion of adequate performance in terms of RR but not CDR, which required aggregation of data across more than one year.

    Conclusion: For individual physician quality assessment in cancer screening programs that target low-incidence populations, it is important to consider imprecision in observed performance metrics due to small numbers of patients with cancer.
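    The minimum-volume idea above can be sketched as follows: find the smallest volume at which a physician observed exactly at the benchmark CDR has a lower confidence bound above the adequacy threshold. The abstract does not specify the exact interval construction, so this sketch uses a one-sided normal approximation, which lands near, but not exactly at, the reported 2770.

    ```python
    import math

    def lower_bound(rate, n, z=1.645):
        """One-sided 95% lower confidence bound on a proportion (normal approximation)."""
        return rate - z * math.sqrt(rate * (1.0 - rate) / n)

    def min_volume(benchmark, threshold, z=1.645, step=10):
        """Smallest interpretive volume at which a physician observed exactly at the
        benchmark rate can be confidently asserted to perform above the threshold."""
        n = step
        while lower_bound(benchmark, n, z) < threshold:
            n += step
        return n

    # CDR benchmark median of 4.4/1000 against the 2.4/1000 adequacy threshold
    volume_needed = min_volume(0.0044, 0.0024)
    ```

    The RR case runs in the opposite direction (an upper bound must fall below the threshold), which is why far fewer examinations suffice there: recalls are common, so the recall rate is estimated precisely at small volumes, while cancers are rare.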

    Annual observed performance values as compared to aggregated data.

    Annual CDR values for each individual radiologist are shown on this bar graph, with performance values and lower-bound 95% CIs summarized below the bar graph. The fourth bar for each physician represents performance over the 3 years of the study period aggregated ("Agg") into a consolidated performance metric. Performance values in the first row in italics and bold represent values that would be characterized as inadequate using previously published benchmark thresholds.

    Individual physician performance assessment based on volume.

    Plots of (A) CDR and (B) RR for the 4 included radiologists at 6 volumes, from 500 examinations (then 1000, and subsequently in 1000-exam increments) to the maximum volume read over the 3 years or 5000 total, whichever was less.

    Distribution of study population.

    *According to Rosenberg et al. [19].

    Defining adequate performance based on volume.

    Plots demonstrate our method for constructing curves by using the benchmark threshold as the limit of 95% confidence based on volume. (A) CDR performance levels are established using 2.4 as the lower boundary of the 95% CI for adequate performance (CIs shown) and the upper boundary for inadequate performance (CIs not shown); this methodology shows (black dot) that a volume of 2770 is required to confidently assert that the CDR benchmark median of 4.4/1000 is adequate. (B) RR performance levels are established using 16.8 as the upper boundary of the 95% CI for adequate (CI shown) and inadequate (CI not shown) performance; a volume of 120 (black dot) is required to confidently assert that the RR benchmark median of 9.7% is adequate. Plots define regions of adequate, uncertain, and inadequate performance for (C) CDR and (D) RR.