91 research outputs found

    Adaptive Segmentation of Knee Radiographs for Selecting the Optimal ROI in Texture Analysis

    The purposes of this study were to investigate: 1) the effect of the placement of the region of interest (ROI) for texture analysis of subchondral bone in knee radiographs, and 2) the ability of several texture descriptors to distinguish between knees with and without radiographic osteoarthritis (OA). Bilateral posterior-anterior knee radiographs were analyzed from the baselines of the OAI and MOST datasets. A fully automatic method to locate the most informative region of subchondral bone using adaptive segmentation was developed. We used an oversegmentation strategy to partition knee images into compact regions that follow natural texture boundaries. Local Binary Patterns (LBP), fractal dimension (FD), Haralick features, Shannon entropy, and Histogram of Oriented Gradients (HOG) descriptors were computed within the standard ROI and within the proposed adaptive ROIs. Subsequently, we built logistic regression models to identify and compare the performance of each texture descriptor and each ROI placement method in a 5-fold cross-validation setting. Importantly, we also investigated the generalizability of our approach by training the models on OAI and testing them on the MOST dataset. We used the area under the receiver operating characteristic (ROC) curve (AUC) and the average precision (AP) obtained from the precision-recall (PR) curve to compare the results. We found that the adaptive ROI improves the classification performance (OA vs. non-OA) over the commonly used standard ROI (up to a 9% increase in AUC). We also observed that, of all texture descriptors, LBP yielded the best performance in all settings, with a best AUC of 0.840 [0.825, 0.852] and an associated AP of 0.804 [0.786, 0.820]. Compared to the current state-of-the-art approaches, our results suggest that the proposed adaptive ROI approach to texture analysis of subchondral bone can increase the diagnostic performance for detecting the presence of radiographic OA.
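    As a minimal illustration of the pipeline described above, the sketch below extracts a uniform LBP histogram from one ROI and evaluates a logistic regression classifier with 5-fold cross-validation. It assumes scikit-image and scikit-learn; the function names, parameter choices (P = 8, R = 1) and data layout are illustrative and not taken from the authors' implementation.

    # Sketch: LBP texture features from a subchondral-bone ROI, then logistic
    # regression with 5-fold cross-validation (illustrative, not the paper's code).
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def lbp_histogram(roi, P=8, R=1):
        """Normalized uniform-LBP histogram of a 2-D grayscale ROI."""
        codes = local_binary_pattern(roi, P, R, method="uniform")
        n_bins = P + 2  # uniform patterns plus one bin for non-uniform codes
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        return hist

    def mean_cv_auc(rois, labels):
        """rois: list of 2-D arrays (one ROI per knee); labels: 1 = OA, 0 = no OA."""
        X = np.array([lbp_histogram(r) for r in rois])
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X, labels, cv=5, scoring="roc_auc").mean()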

    Unsupervised denoising for sparse multi-spectral computed tomography

    Multi-energy computed tomography (CT) with photon-counting detectors (PCDs) enables spectral imaging, as PCDs can assign the incoming photons to specific energy channels. However, PCDs with many spectral channels drastically increase the computational complexity of the CT reconstruction, and bespoke reconstruction algorithms need fine-tuning to varying noise statistics. In particular, if many projections are taken, a large amount of data has to be collected and stored; sparse-view CT is one solution for data reduction, but these issues are exacerbated in sparse imaging scenarios because of the significant reduction in photon counts. In this work, we investigate the suitability of learning-based improvements to the challenging task of obtaining high-quality reconstructions from sparse measurements for a 64-channel PCD-CT. In particular, to overcome the missing reference data for the training procedure, we propose an unsupervised denoising and artefact removal approach that exploits different filter functions in the reconstruction and an explicit coupling of spectral channels with the nuclear norm. Performance is assessed on both simulated synthetic data and the openly available experimental Multi-Spectral Imaging via Computed Tomography (MUSIC) dataset. We compared the quality of our unsupervised method to iterative total nuclear variation regularized reconstructions and to a supervised denoiser trained with reference data. We show that improved reconstruction quality can be achieved with flexibility on noise statistics and effective suppression of streaking artefacts when using unsupervised denoising with spectral coupling.
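    The spectral coupling mentioned above can be illustrated with the nuclear norm of a matrix whose rows are the flattened per-channel reconstructions; the sketch below also shows its proximal operator (singular-value soft-thresholding), a common way such a penalty enters iterative schemes. This is a generic illustration under that assumption, not the paper's exact algorithm.

    # Sketch: nuclear norm over spectral channels and its proximal operator
    # (singular-value soft-thresholding). Illustrative only.
    import numpy as np

    def nuclear_norm(channels):
        """channels: (n_channels, n_pixels) matrix of flattened reconstructions."""
        return np.linalg.svd(channels, compute_uv=False).sum()

    def prox_nuclear(channels, tau):
        """Proximal operator of tau * nuclear norm: soft-threshold the singular values."""
        U, s, Vt = np.linalg.svd(channels, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt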

    Impaired WNT signaling and the spine – Heterozygous WNT1 mutation causes severe age-related spinal pathology

    Background: WNT signaling plays a major role in bone and cartilage metabolism. Impaired WNT/beta-catenin signaling leads to early-onset osteoporosis, but specific features in bone and other tissues remain inadequately characterized. We have identified two large Finnish families with early-onset osteoporosis due to a heterozygous WNT1 mutation c.652T>G, p.C218G. This study evaluated the impact of impaired WNT/beta-catenin signaling on spinal structures. Methods: Altogether 18 WNT1 mutation-positive subjects (age range 11-76 years, median 49 years) and 14 mutation-negative subjects (10-77 years, median 43 years) underwent magnetic resonance imaging (MRI) of the spine. The images were reviewed for spinal alignment, vertebral compression fractures, intervertebral disc changes and possible endplate deterioration. The findings were correlated with clinical data. Results: Vertebral compression fractures were present in 78% (7/9) of those aged over 50 years but were not seen in younger mutation-positive subjects. All those with fractures had several severely compressed vertebrae. Altogether, spinal compression fractures were present in 39% of those with a WNT1 mutation. Only 14% (2/14) of the mutation-negative subjects had one mildly compressed vertebra each. The mutation-positive subjects had a higher mean spinal deformity index (4.0 +/- 7.3 vs 0.0 +/- 0.4) and more often increased thoracic kyphosis (Z-score > +2.0 in 33% vs 0%). Further, they had Schmorl nodes more often (61% vs 36%), already in adolescence, and their intervertebral discs were enlarged. Conclusion: Compromised WNT signaling introduces severe and progressive changes to the spinal structures. Schmorl nodes are prevalent even at an early age, and increased thoracic kyphosis and compression fractures become evident after the age of 50 years. Therapies targeting the WNT pathway may be an effective way to prevent spinal pathology, not only in those harboring a mutation but also in the general population with similar pathology.

    Local edge computing for radiological image reconstruction and computer-assisted detection: A feasibility study

    Computational requirements for data processing at different stages of the radiology value chain are increasing. Cone beam computed tomography (CBCT) is a diagnostic imaging technique used in dental and extremity imaging that involves a highly demanding image reconstruction task. In turn, artificial intelligence (AI) assisted diagnostics are becoming increasingly popular, further increasing the use of computational resources. Furthermore, the need for fully independent imaging units outside radiology departments, with remotely performed diagnostics, emphasizes the need for wireless connectivity between the imaging unit and the hospital infrastructure. In this feasibility study, we propose an approach based on a distributed edge-cloud computing platform, consisting of small-scale local edge nodes and edge servers together with traditional cloud resources, to perform data processing tasks in radiology. We are interested in the use of local computing resources with Graphics Processing Units (GPUs), in our case the Jetson Xavier NX, for hosting the algorithms of two use cases, namely image reconstruction in cone beam computed tomography and AI-assisted cancer detection from mammographic images. In particular, we wanted to determine the technical requirements for a local edge computing platform for these two tasks and whether the CBCT image reconstruction and breast cancer detection tasks can be completed in a diagnostically acceptable time frame. We validated the use cases and the proposed edge computing platform in two stages. First, the algorithms were validated use-case-wise by comparing the computing performance of the edge nodes against a reference setup (a regular workstation). Second, we performed a qualitative evaluation of the edge computing platform by running the algorithms as nanoservices. Our results, obtained through real-life prototyping, indicate that it is technically feasible to run both the reconstruction and the AI-assisted image analysis functions in a diagnostically acceptable computing time. Furthermore, based on the qualitative evaluation, we confirmed that the local edge computing capacity can be scaled up and down during runtime by adding or removing edge devices without the need for manual reconfiguration. We also found all previously implemented software components to be transferable as such. Overall, the results are promising and help in developing future applications, e.g., in mobile imaging scenarios, where such a platform is beneficial.
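    A minimal sketch of the feasibility check described above: run a processing task (e.g. a CBCT reconstruction or a mammogram analysis call) on a given node and compare its wall-clock time against a diagnostically acceptable budget. The task callables and the budget value below are placeholders, not the authors' platform code or measured figures.

    # Sketch: time a processing task and check it against an acceptable budget.
    # `task` is any callable (e.g. a reconstruction or detection function); the
    # 300-second budget is a placeholder, not a value from the study.
    import time

    def run_within_budget(task, budget_s):
        """Run task() and report (result, elapsed seconds, within-budget flag)."""
        start = time.perf_counter()
        result = task()
        elapsed = time.perf_counter() - start
        return result, elapsed, elapsed <= budget_s

    # Hypothetical usage on an edge node and on a reference workstation:
    # _, t_edge, ok_edge = run_within_budget(lambda: reconstruct(projections), budget_s=300)
    # _, t_ref,  ok_ref  = run_within_budget(lambda: reconstruct(projections), budget_s=300)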

    An Automatic Regularization Method: An Application for 3-D X-Ray Micro-CT Reconstruction Using Sparse Data

    X-ray tomography is a reliable tool for determining the inner structure of 3-D objects with penetrating X-rays. However, traditional reconstruction methods, such as Feldkamp-Davis-Kress (FDK), require dense angular sampling in the data acquisition phase, leading to long measurement times, especially in X-ray micro-tomography, where high-resolution scans are required. Acquiring less data using greater angular steps is an obvious way to speed up the process and avoid the need to store huge data sets. However, computing a 3-D reconstruction from such a sparsely sampled data set is difficult because the measurement data are usually contaminated by errors, and linear measurement models do not contain sufficient information to solve the problem in practice. An automatic regularization method is proposed for robust reconstruction, based on enforcing sparsity in the 3-D shearlet transform domain. The inputs of the algorithm are the projection data and the a priori known expected degree of sparsity, denoted as 0 < C_pr.
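    The automatic choice of the regularization strength can be illustrated by picking a threshold on transform coefficients so that the retained fraction matches the a priori sparsity level. The sketch below assumes C_pr is a fraction in (0, 1] and leaves the 3-D shearlet transform abstract (coeffs is any flat array of transform coefficients); it is a generic illustration, not the paper's algorithm.

    # Sketch: pick a threshold so that roughly a fraction c_pr of the transform
    # coefficients survive soft-thresholding. The shearlet transform itself is
    # left abstract; c_pr in (0, 1] is an assumed normalization.
    import numpy as np

    def threshold_for_sparsity(coeffs, c_pr):
        """Threshold that keeps roughly a fraction c_pr of the coefficients."""
        mags = np.sort(np.abs(coeffs).ravel())[::-1]   # magnitudes, largest first
        k = max(1, int(np.floor(c_pr * mags.size)))    # number of coefficients to keep
        return mags[k - 1]

    def soft_threshold(coeffs, thr):
        return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)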

    Hydroxychloroquine reduces interleukin-6 levels after myocardial infarction : The randomized, double-blind, placebo-controlled OXI pilot trial

    Objectives: To determine the anti-inflammatory effect and safety of hydroxychloroquine after acute myocardial infarction. Method: In this multicenter, double-blind, placebo-controlled OXI trial, 125 myocardial infarction patients were randomized at a median of 43 h after hospitalization to receive hydroxychloroquine 300 mg (n = 64) or placebo (n = 61) once daily for 6 months, and were followed for an average of 32 months. Laboratory values were measured at baseline and at 1, 6, and 12 months. Results: The levels of interleukin-6 (IL-6) were comparable at baseline between study groups (p = 0.18). At six months, the IL-6 levels were lower in the hydroxychloroquine group (p = 0.042, between groups), and in the on-treatment analysis, the difference at this time point was even more pronounced (p = 0.019). The high-sensitivity C-reactive protein levels did not differ significantly between study groups at any time point. Eleven patients in the hydroxychloroquine group and four in the placebo group had adverse events leading to interruption or withdrawal of study medication, none of which was serious (p = 0.10, between groups). Conclusions: In patients with myocardial infarction, hydroxychloroquine reduced IL-6 levels significantly more than did placebo, without causing any clinically significant adverse events. A larger randomized clinical trial is warranted to prove the potential ability of hydroxychloroquine to reduce cardiovascular endpoints after myocardial infarction.