
    Real-Time Ultrasound Segmentation, Analysis and Visualisation of Deep Cervical Muscle Structure

    Despite the widespread availability of ultrasound and the need for personalised muscle diagnosis (neck/back pain and injury, work-related disorders, myopathies, neuropathies), robust, online segmentation of muscles within complex groups remains unsolved by existing methods. For example, Cervical Dystonia (CD) is a prevalent neurological condition causing painful spasticity in one or multiple muscles in the cervical muscle system. Clinicians currently have no method for targeting or monitoring treatment of deep muscles. Automated muscle segmentation would enable clinicians to study, target, and monitor the deep cervical muscles via ultrasound. We have developed a method for segmenting five bilateral cervical muscles and the spine via ultrasound alone, in real time. Magnetic Resonance Imaging (MRI) and ultrasound data were collected from 22 participants (age: 29.0 ± 6.6 years; male: 12). To acquire ultrasound muscle segment labels, a novel multimodal registration method was developed, involving MRI image annotation and shape registration to MRI-matched ultrasound images via approximation of the tissue deformation. We then applied polynomial regression to transform our annotations and textures into a mean space, before using shape statistics to generate a texture-to-shape dictionary. For segmentation, test images were compared to dictionary textures to give an initial segmentation, and a customised Active Shape Model was then used to refine the fit. Using ultrasound alone, on unseen participants, our technique currently segments a single image in ≈0.45 s with over 86% accuracy (Jaccard index). We propose that this approach is generally applicable to segmenting, extrapolating and visualising deep muscle structure, and to analysing statistical features online.
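
    The reported 86% accuracy refers to the Jaccard index (intersection over union) between a predicted and a manually labelled muscle mask. A minimal sketch of how such a score could be computed from two binary masks is shown below; the mask shapes and example regions are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
        """Intersection-over-union of two binary segmentation masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return float(intersection / union) if union else 1.0

    # Example: two hypothetical 256x256 muscle masks
    pred = np.zeros((256, 256), dtype=bool); pred[50:150, 60:160] = True
    truth = np.zeros((256, 256), dtype=bool); truth[55:155, 60:160] = True
    print(f"Jaccard index: {jaccard_index(pred, truth):.2%}")
    ```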

    Objective analysis of neck muscle boundaries for cervical dystonia using ultrasound imaging and deep learning

    Objective: To provide objective visualization and pattern analysis of neck muscle boundaries to inform and monitor treatment of cervical dystonia. Methods: We recorded transverse cervical ultrasound (US) images and whole-body motion analysis of sixty-one standing participants (35 cervical dystonia, 26 age-matched controls). We manually annotated 3,272 US images sampling posture and the functional range of pitch, yaw, and roll head movements. Using previously validated methods, we used 60-fold cross-validation to train, validate and test a deep neural network (U-net) to classify pixels into 13 categories (five paired neck muscles, skin, ligamentum nuchae, vertebra). For all participants in their normal standing posture, we segmented US images and classified condition (dystonia/control), sex and age (higher/lower) from segment boundaries. We performed an explanatory visualization analysis of dystonia muscle boundaries. Results: For all segments, agreement with manual labels was 64±21% (Dice coefficient) and 5.7±4 mm (Hausdorff distance). For deep muscle layers, boundaries predicted central injection sites with average precision 94±3%. Using leave-one-out cross-validation, a support vector machine classified condition, sex, and age from predicted muscle boundaries with accuracies of 70.5%, 67.2%, and 52.4% respectively, exceeding classification from manual labels. From muscle boundaries, dystonia participants clustered optimally into three sub-groups. These sub-groups are visualized and explained by three eigen-patterns which correlate significantly with truncal and head posture. Conclusion: Using US, neck muscle shape alone discriminates dystonia from healthy controls. Significance: Using deep learning, US imaging allows online, automated visualization and diagnostic analysis of cervical dystonia, and segmentation of individual muscles for targeted injection. The dataset is available (DOI: 10.23634/MMUDR.00624643).
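
    The agreement metrics above (Dice coefficient, Hausdorff distance) can be computed directly from a predicted and a manual segmentation mask. A minimal sketch, assuming binary masks and using SciPy's directed Hausdorff distance on the foreground pixel coordinates; array names are illustrative, and the distance is in pixels unless scaled by the pixel spacing.

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice(pred: np.ndarray, truth: np.ndarray) -> float:
        """Dice coefficient of two binary masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        denom = pred.sum() + truth.sum()
        return float(2.0 * np.logical_and(pred, truth).sum() / denom) if denom else 1.0

    def hausdorff(pred: np.ndarray, truth: np.ndarray) -> float:
        """Symmetric Hausdorff distance between the two sets of foreground pixels."""
        a = np.argwhere(pred)  # (N, 2) pixel coordinates
        b = np.argwhere(truth)
        return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
    ```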

    Estimation of Absolute States of Human Skeletal Muscle via Standard B-Mode Ultrasound Imaging and Deep Convolutional Neural Networks

    Objective: To test automated in vivo estimation of active and passive skeletal muscle states using ultrasonic imaging. Background: Current technology (electromyography, dynamometry, shear wave imaging) provides no general, non-invasive method for online estimation of skeletal muscle states. Ultrasound (US) allows non-invasive imaging of muscle, yet current computational approaches have never achieved simultaneous extraction or generalisation of independently varying active and passive states. We use deep learning to investigate the generalizable content of 2D US muscle images. Method: US data, synchronized with electromyography of the calf muscles and measures of joint moment/angle, were recorded from 32 healthy participants (7 female; ages: mean 27.5, range 19-65). We extracted a region of interest of the medial gastrocnemius and soleus using our previously developed segmentation algorithm. From the segmented images, a deep convolutional neural network was trained to predict three absolute, drift-free components of the neurobiomechanical state (activity, joint angle, joint moment) during experimentally designed, simultaneous, independent variation of passive (joint angle) and active (electromyography) inputs. Results: For all 32 held-out participants (16-fold cross-validation), the ankle joint angle, electromyography, and joint moment were estimated with accuracies of 55±8%, 57±11%, and 46±9% respectively. Significance: With 2D US imaging, deep neural networks can encode, in generalizable form, the activity-length-tension state relationship of these muscles. Observation-only, low-power, 2D US imaging can provide a new category of technology for non-invasive estimation of neural output, length and tension in skeletal muscle. This proof of principle has value for personalised muscle assessment in pain, injury, neurological conditions, neuropathies, myopathies and ageing.
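
    As an illustration of the kind of model described (not the authors' architecture), here is a minimal PyTorch sketch of a convolutional network that maps a single-channel US region of interest to three regression outputs (activity, joint angle, joint moment); all layer sizes and the input resolution are assumptions.

    ```python
    import torch
    import torch.nn as nn

    class MuscleStateNet(nn.Module):
        """Toy CNN mapping a 1-channel US image to (activity, angle, moment)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 3)  # three absolute state components

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = MuscleStateNet()
    dummy = torch.randn(8, 1, 128, 128)   # batch of segmented ROI images
    print(model(dummy).shape)             # torch.Size([8, 3])
    ```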

    Characterization of brain development in preterm children using ultrasound images

    The most important period for human brain development is the fetal phase. During this forty-week period, important morphological changes take place in the human brain, including a large increase in brain surface area following the development of sulci and gyri. In preterm newborns these changes occur in an extrauterine environment, and impaired brain development has been shown in this population at term-equivalent age. A normalized atlas of brain maturation based on cerebral ultrasound may allow clinicians to assess these changes weekly from birth to term-equivalent age. Based on the images of the different babies provided by two clinical researchers, this study proposes a web application implemented with Python and its libraries, including Dash, and accessible through Docker, which allows direct access to the designed app and its database. In this way, a tool is provided that allows a first definition of the different sulci to be made manually and then passed through an algorithm, with the aim of improving precision and being able to export both the image and the coordinates obtained from it.
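
    As a rough illustration of the kind of tool described (the layout, component names and click-based annotation flow are assumptions, not the study's actual app), a minimal Dash sketch that displays an ultrasound image and records clicked points for a first manual definition of a sulcus:

    ```python
    import numpy as np
    import plotly.express as px
    from dash import Dash, dcc, html, Input, Output, State

    app = Dash(__name__)
    image = np.random.rand(256, 256)          # placeholder for a cranial US image
    fig = px.imshow(image, color_continuous_scale="gray")

    app.layout = html.Div([
        dcc.Graph(id="us-image", figure=fig),
        dcc.Store(id="points", data=[]),       # manually clicked sulcus points
        html.Pre(id="coords"),                 # shows the exportable coordinates
    ])

    @app.callback(
        Output("points", "data"), Output("coords", "children"),
        Input("us-image", "clickData"), State("points", "data"),
        prevent_initial_call=True,
    )
    def add_point(click, points):
        p = click["points"][0]
        points = points + [(p["x"], p["y"])]
        return points, str(points)

    if __name__ == "__main__":
        app.run(debug=True)
    ```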

    Objective localisation of oral mucosal lesions using optical coherence tomography.

    Identification of the most representative location for biopsy is critical in establishing the definitive diagnosis of oral mucosal lesions. Currently, this process involves visual evaluation of the colour characteristics of tissue, aided by topical application of contrast-enhancing agents. Although this approach is widely practised, it remains limited by its lack of objectivity in identifying and delineating suspicious areas for biopsy. To overcome this drawback, there is a need for a technique that provides macroscopic guidance based on microscopic imaging and analysis. Optical Coherence Tomography (OCT) is an emerging high-resolution biomedical imaging modality that can potentially be used as an in vivo tool for selection of the most appropriate site for biopsy. This thesis investigates the use of OCT for qualitative and quantitative mapping of oral mucosal lesions. Feasibility studies were performed on patient biopsy samples prior to histopathological processing, using a commercial OCT microscope. Qualitative imaging results examining a variety of normal, benign, inflammatory and premalignant lesions of the oral mucosa will be presented. Furthermore, the identification and utilisation of a common quantifiable parameter in OCT and histology images of normal and dysplastic oral epithelium will be explored, thus ensuring objective and reproducible mapping of the progression of oral carcinogenesis. Finally, the selection of the most representative biopsy site of oral epithelial dysplasia will be investigated using a novel approach, scattering attenuation microscopy. It is hoped that this approach may convey more clinical meaning than conventional visualisation of OCT images.
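
    Scattering attenuation analysis builds on the fact that the OCT signal decays approximately exponentially with depth in tissue. A minimal sketch of fitting such a decay to estimate an attenuation coefficient from a single A-scan; the single-scattering model, sampling step and variable names are assumptions, not the thesis' method.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def single_exp(z, i0, mu):
        # Simple single-scattering model: I(z) = I0 * exp(-2 * mu * z)
        return i0 * np.exp(-2.0 * mu * z)

    def attenuation_coefficient(a_scan: np.ndarray, dz_mm: float) -> float:
        """Fit mu (per mm) to the depth profile of one OCT A-scan."""
        z = np.arange(a_scan.size) * dz_mm
        (i0, mu), _ = curve_fit(single_exp, z, a_scan, p0=(a_scan[0], 1.0))
        return mu

    # Synthetic example: mu = 2 per mm with additive noise
    z = np.arange(512) * 0.004
    scan = np.exp(-2 * 2.0 * z) + 0.01 * np.random.rand(512)
    print(f"estimated mu ≈ {attenuation_coefficient(scan, 0.004):.2f} per mm")
    ```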

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound acquisition is widespread in the biomedical field, due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allows the physician to monitor the evolution of the patient's disease and supports diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the interference of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that quickly change over time. Denoising and super-resolution of US signals are relevant to improve the visual evaluation of the physician and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of 2D US images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the results of the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. Then, we introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target (i.e., high-resolution) image. We improve the accuracy of the prediction of the reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce our kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to 2D and 3D US images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
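
    The low-rank building block described above thresholds the singular values of the noisy image. A minimal NumPy sketch of hard-threshold SVD denoising on a synthetic image; in the work above the threshold is learned, whereas here a fixed rank is passed in as an assumption.

    ```python
    import numpy as np

    def svd_denoise(image: np.ndarray, rank: int) -> np.ndarray:
        """Reconstruct the image from its `rank` largest singular values."""
        u, s, vt = np.linalg.svd(image.astype(float), full_matrices=False)
        s[rank:] = 0.0                       # hard threshold on singular values
        return u @ np.diag(s) @ vt

    # Synthetic example: low-rank structure plus multiplicative (speckle-like) noise
    rng = np.random.default_rng(0)
    clean = np.outer(np.sin(np.linspace(0, 3, 256)), np.cos(np.linspace(0, 3, 256)))
    noisy = clean * (1.0 + 0.3 * rng.standard_normal(clean.shape))
    denoised = svd_denoise(noisy, rank=10)
    print("noisy error:   ", np.linalg.norm(noisy - clean))
    print("denoised error:", np.linalg.norm(denoised - clean))
    ```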

    Multimodal optical systems for clinical oncology

    This thesis presents three multimodal optical (light-based) systems designed to improve the capabilities of existing optical modalities for cancer diagnostics and theranostics. Optical diagnostic and therapeutic modalities have seen tremendous success in improving the detection, monitoring, and treatment of cancer. For example, optical spectroscopies can accurately distinguish between healthy and diseased tissues, fluorescence imaging can light up tumours for surgical guidance, and laser systems can treat many epithelial cancers. However, despite these advances, prognoses for many cancers remain poor, positive margin rates following resection remain high, and visual inspection and palpation remain crucial for tumour detection. The synergistic combination of multiple optical modalities, as presented here, offers a promising solution. The first multimodal optical system (Chapter 3) combines Raman spectroscopic diagnostics with photodynamic therapy using a custom-built multimodal optical probe. Crucially, this system demonstrates the feasibility of nanoparticle-free theranostics, which could simplify the clinical translation of cancer theranostic systems without sacrificing diagnostic or therapeutic benefit. The second system (Chapter 4) applies computer vision to Raman spectroscopic diagnostics to achieve spatial spectroscopic diagnostics. It provides an augmented reality display of the surgical field of view, overlaying spatially co-registered spectroscopic diagnoses onto imaging data. This enables the translation of Raman spectroscopy from a 1D technique to a 2D diagnostic modality and overcomes the trade-off between diagnostic accuracy and field of view that has limited optical systems to date. The final system (Chapter 5) integrates fluorescence imaging and Raman spectroscopy for fluorescence-guided spatial spectroscopic diagnostics. This facilitates macroscopic tumour identification to guide accurate spectroscopic margin delineation, enabling the spectroscopic examination of suspicious lesions across large tissue areas. Together, these multimodal optical systems demonstrate that the integration of multiple optical modalities has the potential to improve patient outcomes through enhanced tumour detection and precision-targeted therapies.
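
    As a simple illustration of the overlay idea in the second system (not the thesis' implementation), a NumPy/Matplotlib sketch that alpha-blends a co-registered per-pixel diagnosis map onto a white-light image of the surgical field; the arrays, threshold and colour map are assumptions.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder white-light image of the surgical field and a co-registered
    # diagnosis probability map (0 = healthy, 1 = tumour) from Raman spectroscopy.
    rgb = np.full((256, 256, 3), 0.8)
    prob = np.zeros((256, 256))
    prob[100:180, 90:170] = 0.9

    plt.imshow(rgb)
    plt.imshow(prob, cmap="Reds", alpha=0.4 * (prob > 0.5))  # highlight suspicious area
    plt.axis("off")
    plt.title("Spatially co-registered spectroscopic diagnosis (illustrative)")
    plt.show()
    ```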

    Integrated navigation and visualisation for skull base surgery

    Skull base surgery involves the management of tumours located on the underside of the brain and the base of the skull. Skull base tumours are intricately associated with several critical neurovascular structures, making surgery challenging and high-risk. Vestibular schwannoma (VS) is a benign nerve sheath tumour arising from one of the vestibular nerves and is the commonest pathology encountered in skull base surgery. The goal of modern VS surgery is maximal tumour removal whilst preserving neurological function and maintaining quality of life, but despite advanced neurosurgical techniques, facial nerve paralysis remains a potentially devastating complication of this surgery. This thesis describes the development and integration of various advanced navigation and visualisation techniques to increase the precision and accuracy of skull base surgery. A novel Diffusion Magnetic Resonance Imaging (dMRI) acquisition and processing protocol for imaging the facial nerve in patients with VS was developed to improve preoperative delineation of the facial nerve. An automated Artificial Intelligence (AI)-based framework was developed to segment VS from MRI scans. A user-friendly navigation system capable of integrating dMRI and tractography of the facial nerve, 3D tumour segmentation and intraoperative 3D ultrasound was developed and validated using an anatomically realistic acoustic phantom model of a head, including the skull, brain and VS. The optical properties of five types of human brain tumour (meningioma, pituitary adenoma, schwannoma, low- and high-grade glioma) and nine different types of healthy brain tissue were examined across a wavelength spectrum of 400 nm to 800 nm in order to inform the development of an Intraoperative Hyperspectral Imaging (iHSI) system. Finally, functional and technical requirements of an iHSI system were established, and a prototype was developed and tested in a first-in-patient study.
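
    As a small illustration related to the iHSI work (the data layout, wavelength grid and reference handling are assumptions, not the thesis' format), extracting a flat-field corrected reflectance spectrum for one pixel of a hyperspectral cube sampled between 400 nm and 800 nm:

    ```python
    import numpy as np

    # Hypothetical hyperspectral cube: height x width x bands, 400-800 nm
    wavelengths = np.linspace(400, 800, 81)            # nm, 5 nm steps (assumed)
    cube = np.random.rand(128, 128, wavelengths.size)  # placeholder acquisition
    white = np.ones(wavelengths.size) * 0.95           # white reference per band
    dark = np.zeros(wavelengths.size)                  # dark reference per band

    def reflectance_spectrum(cube, row, col):
        """Flat-field corrected reflectance at one pixel."""
        raw = cube[row, col, :]
        return (raw - dark) / (white - dark)

    spec = reflectance_spectrum(cube, 64, 64)
    print(f"peak reflectance {spec.max():.2f} at {wavelengths[spec.argmax()]:.0f} nm")
    ```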