
    Value of deep learning models based on ultrasonic dynamic videos for distinguishing thyroid nodules

    Objective: This study was designed to distinguish benign and malignant thyroid nodules by using deep learning (DL) models based on ultrasound dynamic videos. Methods: Ultrasound dynamic videos of 1018 thyroid nodules were retrospectively collected from 657 patients in Zhejiang Cancer Hospital from January 2020 to December 2020 and used to test 5 DL models. Results: In the internal test set, the area under the receiver operating characteristic curve (AUROC) was 0.929 (95% CI: 0.888, 0.970) for the best-performing model, LSTM. Two radiologists interpreted the dynamic videos with AUROC values of 0.760 (95% CI: 0.653, 0.867) and 0.815 (95% CI: 0.778, 0.853). In the external test set, the best-performing DL model had an AUROC of 0.896 (95% CI: 0.847, 0.945), and two ultrasound radiologists had AUROC values of 0.754 (95% CI: 0.649, 0.850) and 0.833 (95% CI: 0.797, 0.869). Conclusion: This study demonstrates that a DL model based on ultrasound dynamic videos outperforms ultrasound radiologists in distinguishing benign from malignant thyroid nodules.
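    The abstract gives no implementation details; as a rough, hypothetical illustration of this kind of video classifier, the PyTorch sketch below feeds per-frame CNN features into an LSTM and classifies a clip as benign or malignant. All layer sizes, names, and hyperparameters are assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' code): a per-frame CNN encoder feeding an
# LSTM that classifies an ultrasound clip as benign vs. malignant.
import torch
import torch.nn as nn

class VideoLSTMClassifier(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=128, num_classes=2):
        super().__init__()
        # Tiny per-frame encoder; a real system would use a pretrained backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, clips):                       # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1))   # (B*T, feat_dim)
        feats = feats.view(b, t, -1)                # (B, T, feat_dim)
        _, (h_n, _) = self.lstm(feats)              # h_n: (layers, B, hidden)
        return self.head(h_n[-1])                   # logits: (B, num_classes)

logits = VideoLSTMClassifier()(torch.randn(2, 16, 1, 64, 64))  # toy batch
```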

    Ultrasound Detection of Subquadricipital Recess Distension

    Joint bleeding is a common condition for people with hemophilia and, if untreated, can result in hemophilic arthropathy. Ultrasound imaging has recently emerged as an effective tool to diagnose joint recess distension caused by joint bleeding. However, no computer-aided diagnosis tool exists to support the practitioner in the diagnosis process. This paper addresses the problem of automatically detecting the recess and assessing whether it is distended in knee ultrasound images collected in patients with hemophilia. After framing the problem, we propose two different approaches: the first one adopts a one-stage object detection algorithm, while the second one is a multi-task approach with a classification and a detection branch. The experimental evaluation, conducted with 483 annotated images, shows that the solution based on object detection alone has a balanced accuracy score of 0.74 with a mean IoU value of 0.66, while the multi-task approach has a higher balanced accuracy value (0.78) at the cost of a slightly lower mean IoU value.
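    For readers unfamiliar with the two figures of merit, the short Python sketch below gives standard textbook definitions of balanced accuracy and box IoU; it is an assumed formulation, not the paper's evaluation code.

```python
# Sketch of the two reported metrics: balanced accuracy for the distension
# classification and intersection-over-union (IoU) for box detection.
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.14
```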

    Decomposing and Coupling Saliency Map for Lesion Segmentation in Ultrasound Images

    The complex scenario of ultrasound images, in which adjacent tissues (i.e., background) share similar intensity with, and even contain richer texture patterns than, the lesion region (i.e., foreground), poses a unique challenge for accurate lesion segmentation. This work presents a decomposition-coupling network, called DC-Net, to deal with this challenge in a (foreground-background) saliency map disentanglement-fusion manner. DC-Net consists of decomposition and coupling subnets: the former preliminarily disentangles the original image into foreground and background saliency maps, and the latter performs accurate segmentation with the assistance of saliency prior fusion. The coupling subnet involves three fusion strategies: 1) regional feature aggregation (via a differentiable context pooling operator in the encoder) to adaptively preserve local contextual details with a larger receptive field during dimension reduction; 2) relation-aware representation fusion (via a cross-correlation fusion module in the decoder) to efficiently fuse low-level visual characteristics and high-level semantic features during resolution restoration; and 3) dependency-aware prior incorporation (via a coupler) to reinforce the foreground-salient representation with complementary information derived from the background representation. Furthermore, a harmonic loss function is introduced to encourage the network to focus more attention on low-confidence and hard samples. The proposed method is evaluated on two ultrasound lesion segmentation tasks and demonstrates remarkable performance improvement over existing state-of-the-art methods. Comment: 18 pages, 18 figures.
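    The paper's harmonic loss is not specified in this abstract; the PyTorch sketch below only illustrates the general idea of re-weighting a segmentation loss towards low-confidence (hard) pixels, using a focal-style modulation as an assumed stand-in for the actual formulation.

```python
# Illustrative only (not the paper's harmonic loss): a BCE segmentation loss
# where each pixel is re-weighted by how poorly the network predicts it.
import torch
import torch.nn.functional as F

def hard_sample_weighted_bce(logits, target, gamma=2.0):
    """BCE with a (1 - p_correct)**gamma per-pixel weight, so confidently
    correct pixels contribute little and hard pixels dominate the loss."""
    p = torch.sigmoid(logits)
    p_correct = torch.where(target > 0.5, p, 1.0 - p)  # prob. of true class
    weight = (1.0 - p_correct) ** gamma                # large for hard pixels
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weight * bce).mean()

loss = hard_sample_weighted_bce(torch.randn(2, 1, 64, 64),
                                torch.randint(0, 2, (2, 1, 64, 64)).float())
```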

    Computational ultrasound tissue characterisation for brain tumour resection

    In brain tumour resection, it is vital to know where critical neurovascular structures and tumours are located to minimise surgical injuries and cancer recurrence. The aim of this thesis was to improve intraoperative guidance during brain tumour resection by integrating both standard ultrasound imaging and elastography into the surgical workflow. Brain tumour resection requires surgeons to identify the tumour boundaries to preserve healthy brain tissue and prevent cancer recurrence. This thesis proposes to use ultrasound elastography in combination with conventional ultrasound B-mode imaging to better characterise tumour tissue during surgery. Ultrasound elastography comprises a set of techniques that measure tissue stiffness, which is a known biomarker of brain tumours. The objectives of the research reported in this thesis are to implement novel learning-based methods for ultrasound elastography and to integrate them into an image-guided intervention framework. Accurate and real-time intraoperative estimation of tissue elasticity can guide towards better delineation of brain tumours and improve the outcome of neurosurgery.

    We first investigated current challenges in quasi-static elastography, which evaluates tissue deformation (strain) by estimating the displacement between successive ultrasound frames, acquired before and after applying manual compression. Recent approaches in ultrasound elastography have demonstrated that convolutional neural networks can capture ultrasound high-frequency content and produce accurate strain estimates. We proposed a new unsupervised deep learning method for strain prediction, where the training of the network is driven by a regularised cost function, composed of a similarity metric and a regularisation term that preserves displacement continuity by directly optimising the strain smoothness. We further improved the accuracy of our method by proposing a recurrent network architecture with convolutional long short-term memory decoder blocks to improve displacement estimation and spatio-temporal continuity between time-series ultrasound frames. We then demonstrate initial results towards extending our ultrasound displacement estimation method to shear wave elastography, which provides a quantitative estimation of tissue stiffness.

    Furthermore, this thesis describes the development of an open-source image-guided intervention platform, specifically designed to combine intraoperative ultrasound imaging with a neuronavigation system and perform real-time ultrasound tissue characterisation. The integration was conducted using commercial hardware and validated on an anatomical phantom. Finally, preliminary results on the feasibility and safety of the use of a novel intraoperative ultrasound probe designed for pituitary surgery are presented. Prior to the clinical assessment of our image-guided platform, the ability of the ultrasound probe to be used alongside standard surgical equipment was demonstrated in 5 pituitary cases.
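    As a minimal sketch of the kind of regularised cost described above (an assumed form, not the thesis implementation), the PyTorch snippet below combines an image-similarity term, between the pre-compression frame warped by the predicted displacement and the post-compression frame, with a penalty on the roughness of the axial strain.

```python
# Assumed illustration of an unsupervised strain-estimation loss:
# data fidelity (warped pre-frame vs. post-frame) + strain smoothness.
import torch
import torch.nn.functional as F

def warp(frame, disp):
    """Bilinearly warp frame (B,1,H,W) by pixel displacement disp (B,2,H,W),
    where channel 0 is lateral (x) and channel 1 is axial (y)."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32, device=frame.device),
        torch.arange(w, dtype=torch.float32, device=frame.device),
        indexing="ij")
    nx = 2 * (xs + disp[:, 0]) / (w - 1) - 1   # normalise to [-1, 1]
    ny = 2 * (ys + disp[:, 1]) / (h - 1) - 1
    grid = torch.stack((nx, ny), dim=-1)       # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def strain_regularised_loss(disp, pre, post, lam=0.1):
    similarity = F.mse_loss(warp(pre, disp), post)   # data-fidelity term
    axial = disp[:, 1]                               # axial displacement
    strain = axial[:, 1:, :] - axial[:, :-1, :]      # ~ axial strain
    # Penalise the strain's own gradient to encourage a smooth strain field.
    smoothness = (strain[:, 1:, :] - strain[:, :-1, :]).abs().mean()
    return similarity + lam * smoothness

pre, post = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
disp = torch.zeros(1, 2, 64, 64, requires_grad=True)  # network output in practice
loss = strain_regularised_loss(disp, pre, post)
```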

    Risk Stratification of Thyroid Nodule: From Ultrasound Features to TIRADS

    Since the 1990s, ultrasound (US) has played a major role in the assessment of thyroid nodules and their risk of malignancy. Over the last decade, the most eminent international societies have published US-based systems for the risk stratification of thyroid lesions, namely, Thyroid Imaging Reporting And Data Systems (TIRADSs). The introduction of TIRADSs into clinical practice has significantly increased the diagnostic power of US to a level approaching that of fine-needle aspiration cytology (FNAC). At present, we are probably approaching a new era in which US could be the primary tool to diagnose thyroid cancer. However, before US can assume this new dominant role, further evidence is needed. This Special Issue, which includes reviews and original articles, aims to pave the way for the future in the field of thyroid US. Highly experienced thyroidologists with a focus on US have been invited to contribute towards this goal.