137 research outputs found

    Advanced Sensing and Image Processing Techniques for Healthcare Applications

    This Special Issue aims to attract the latest research and findings in the design, development and experimentation of healthcare-related technologies. This includes, but is not limited to, using novel sensing, imaging, data processing, machine learning, and artificially intelligent devices and algorithms to assist/monitor the elderly, patients, and the disabled population.

    Automatic registration of 3D models to laparoscopic video images for guidance during liver surgery

    Laparoscopic liver interventions offer significant advantages over open surgery, such as less pain and trauma, and shorter recovery time for the patient. However, they also bring challenges for the surgeon, such as the lack of tactile feedback, limited field of view and occluded anatomy. Augmented reality (AR) can potentially help during laparoscopic liver interventions by displaying sub-surface structures (such as tumours or vasculature). The initial registration between the 3D model extracted from the CT scan and the laparoscopic video feed is essential for an AR system; it should be efficient, robust, intuitive to use and cause minimal disruption to the surgical procedure. Registration methods in laparoscopic interventions face several challenges, including the deformation of the liver due to gas insufflation in the abdomen, partial visibility of the organ and a lack of prominent geometrical or texture-wise landmarks. These challenges are discussed in detail and an overview of the state of the art is provided. This research project aims to provide the tools to move towards a completely automatic registration. Firstly, the importance of pre-operative planning is discussed along with the characteristics of the liver that can be used to constrain a registration method. Secondly, to maximise the amount of information obtained before the surgery, a semi-automatic surface-based method is proposed to recover the initial rigid registration irrespective of the position of the shapes. Finally, a fully automatic 3D-2D rigid global registration is proposed which estimates a global alignment of the pre-operative 3D model using a single intra-operative image. Incorporating the different liver contours can further constrain the registration, especially for partial surfaces. A robust, efficient AR system that requires no manual interaction from the surgeon will aid the translation of such approaches to the clinic.
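    To make the 3D-2D rigid alignment problem concrete, the minimal sketch below recovers the pose of a pre-operative model from a single image, assuming known 2D-3D correspondences. This is an illustrative toy example only, not the automatic method developed in the thesis; the model points, camera intrinsics and ground-truth pose are all hypothetical, and OpenCV's solvePnP stands in for the actual registration pipeline.

```python
# Toy 3D-2D rigid alignment example (hypothetical data): recover the pose of a
# pre-operative model from a single image, assuming 2D-3D correspondences are
# already known. Requires numpy and opencv-python.
import numpy as np
import cv2

# Synthetic "model" landmarks (e.g. points on a liver surface mesh), in mm.
model_pts = np.array([[0, 0, 0], [80, 0, 10], [0, 60, 20],
                      [80, 60, 5], [40, 30, 40], [20, 50, 15]], dtype=np.float64)

# Assumed pinhole intrinsics for a laparoscope camera (hypothetical values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Ground-truth pose, used only to simulate the intra-operative image points.
rvec_gt = np.array([[0.10], [0.20], [0.05]])
tvec_gt = np.array([[10.0], [-5.0], [300.0]])
img_pts, _ = cv2.projectPoints(model_pts, rvec_gt, tvec_gt, K, None)

# Recover the rigid model-to-camera transform from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(model_pts, img_pts, K, None)
print("rotation error (rad):", np.linalg.norm(rvec - rvec_gt))
print("translation error (mm):", np.linalg.norm(tvec - tvec_gt))
```

    In practice, establishing (or avoiding) such correspondences automatically, and coping with deformation and partial visibility, is the hard part that the thesis addresses.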

    Multimodal optical systems for clinical oncology

    This thesis presents three multimodal optical (light-based) systems designed to improve the capabilities of existing optical modalities for cancer diagnostics and theranostics. Optical diagnostic and therapeutic modalities have seen tremendous success in improving the detection, monitoring, and treatment of cancer. For example, optical spectroscopies can accurately distinguish between healthy and diseased tissues, fluorescence imaging can light up tumours for surgical guidance, and laser systems can treat many epithelial cancers. However, despite these advances, prognoses for many cancers remain poor, positive margin rates following resection remain high, and visual inspection and palpation remain crucial for tumour detection. The synergistic combination of multiple optical modalities, as presented here, offers a promising solution. The first multimodal optical system (Chapter 3) combines Raman spectroscopic diagnostics with photodynamic therapy using a custom-built multimodal optical probe. Crucially, this system demonstrates the feasibility of nanoparticle-free theranostics, which could simplify the clinical translation of cancer theranostic systems without sacrificing diagnostic or therapeutic benefit. The second system (Chapter 4) applies computer vision to Raman spectroscopic diagnostics to achieve spatial spectroscopic diagnostics. It provides an augmented reality display of the surgical field-of-view, overlaying spatially co-registered spectroscopic diagnoses onto imaging data. This enables the translation of Raman spectroscopy from a 1D technique to a 2D diagnostic modality and overcomes the trade-off between diagnostic accuracy and field-of-view that has limited optical systems to date. The final system (Chapter 5) integrates fluorescence imaging and Raman spectroscopy for fluorescence-guided spatial spectroscopic diagnostics. This facilitates macroscopic tumour identification to guide accurate spectroscopic margin delineation, enabling the spectroscopic examination of suspicious lesions across large tissue areas. Together, these multimodal optical systems demonstrate that the integration of multiple optical modalities has the potential to improve patient outcomes through enhanced tumour detection and precision-targeted therapies.
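    As a toy illustration of the "spatial spectroscopic diagnostics" idea, the sketch below overlays point-wise spectroscopic classifications onto a co-registered white-light frame. The frame, pixel coordinates and labels are all synthetic stand-ins; the actual systems derive the co-registration via computer vision and real Raman measurements.

```python
# Hypothetical overlay of point-wise spectroscopic diagnoses onto an image of
# the surgical field. All coordinates, labels and the image are synthetic.
import numpy as np
import cv2

frame = np.full((480, 640, 3), 180, dtype=np.uint8)        # stand-in camera frame
# (pixel_x, pixel_y, diagnosis) for each interrogated point, already co-registered.
points = [(200, 150, "tumour"), (320, 240, "healthy"), (450, 300, "tumour")]
colours = {"tumour": (0, 0, 255), "healthy": (0, 255, 0)}   # BGR

for x, y, label in points:
    cv2.circle(frame, (x, y), 12, colours[label], thickness=-1)
    cv2.putText(frame, label, (x + 16, y + 4),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 1)

cv2.imwrite("augmented_view.png", frame)
```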

    Data-Driven Deep Learning-Based Analysis on THz Imaging

    Breast cancer affects about 12.5% of women in the United States. Surgical operations are often needed after diagnosis. Breast conserving surgery can help remove malignant tumors while maximizing the remaining healthy tissue. Due to the lack of effective real-time tumor analysis tools and a unified operating standard, the re-excision rate can exceed 30% among breast conserving surgery patients. This results in significant physical, psychological, and financial burdens to those patients. This work designs deep learning-based segmentation algorithms that detect tissue type in excised tissues using pulsed THz technology. This work evaluates the algorithms on the tissue type classification task for freshly excised tumor samples. Freshly excised tumor samples are more challenging than their formalin-fixed, paraffin-embedded (FFPE) block sample counterparts due to excessive fluid, image registration difficulties, and the lack of trustworthy pixelwise labels for each tissue sample. Additionally, evaluating freshly excised tumor samples is an important step towards applying pulsed THz scanning technology to breast conserving cancer surgery in the operating room. Recently, deep learning techniques have been heavily researched as GPU-based computation has become more economical and powerful. This dissertation revisits breast cancer tissue segmentation problems using the pulsed terahertz wave scanning technique on murine samples and applies recent deep learning frameworks to enhance performance in various tasks. This study first performs pixelwise classification on terahertz scans with CNN-based neural networks and time-frequency feature tensors obtained via wavelet transformation. This study then explores a neural network based semantic segmentation strategy on terahertz scans that incorporates spatial information and handles noisy labels with label correction techniques. Additionally, this study performs resolution restoration for visual enhancement on terahertz scans using an unsupervised, generative image-to-image translation methodology. This work also proposes a novel data processing pipeline that trains a semantic segmentation network using only neurally generated synthetic terahertz scans. Performance is evaluated using various metrics across the different tasks.
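    The per-pixel classification step can be pictured with the following sketch, in which each THz pixel is represented by a feature vector and a small 1D convolutional network predicts its tissue class. The data are random and the architecture is a hypothetical stand-in, not the dissertation's wavelet-based pipeline or network.

```python
# Minimal per-pixel tissue classification sketch with synthetic data.
# Each pixel carries a feature vector; a tiny 1D CNN predicts its class.
import torch
import torch.nn as nn

n_pixels, n_features, n_classes = 1024, 64, 3     # e.g. cancer / muscle / fat

class PixelCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, n_classes))

    def forward(self, x):              # x: (batch, 1, n_features)
        return self.net(x)

x = torch.randn(n_pixels, 1, n_features)          # synthetic per-pixel features
y = torch.randint(0, n_classes, (n_pixels,))      # synthetic pixel labels
model = PixelCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                             # tiny training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```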

    Development of an image guidance system for laparoscopic liver surgery and evaluation of optical and computer vision techniques for the assessment of liver tissue

    Introduction: Liver resection is increasingly being carried out via the laparoscopic approach (keyhole surgery) because there is mounting evidence that it benefits patients by reducing pain and length of hospitalisation. There are however ongoing concerns about oncological radicality (i.e. the ability to completely remove cancer) and an inability to control massive haemorrhage. These issues can partially be attributed to a loss of sensation such as depth perception, tactile feedback and a reduced field of view. Utilisation of optical imaging and computer vision may be able to compensate for some of the lost sensory input because these modalities can facilitate visualisation of liver tissue and structural anatomy. Their use in laparoscopy is attractive because it is easy to adapt or integrate with existing technology. The aim of this thesis is to explore to what extent this technology can aid in the detection of normal and abnormal liver tissue and structures.
    Methods: The current state of the art for optical imaging and computer vision in laparoscopic liver surgery is assessed in a systematic review. Evaluation of confocal laser endomicroscopy is carried out on a murine and porcine model of liver disease. Multispectral near infrared imaging is evaluated on ex-vivo liver specimens. Video magnification is assessed on a mechanical flow phantom and a porcine model of liver disease. The latter model was also employed to develop a computer vision based image guidance system for laparoscopic liver surgery. This image guidance system is further evaluated in a clinical feasibility study. Where appropriate, experimental findings are substantiated with statistical analysis.
    Results: Use of confocal laser endomicroscopy enabled discrimination between cancer and normal liver tissue with sub-millimetre precision. This technology also made it possible to verify the adequacy of thermal liver ablation. Multispectral imaging at specific wavelengths was shown to have the potential to highlight the presence of colorectal and hepatocellular cancer. An image reprocessing algorithm is proposed to simplify visual interpretation of the resulting images. It is shown that video magnification can determine the presence of pulsatile motion but that it cannot reliably determine the extent of motion. Development and performance metrics of an image guidance system for laparoscopic liver surgery are outlined. The system was found to improve intraoperative orientation; however, more development work is required to enable reliable prediction of oncological margins.
    Discussion: The results in this thesis indicate that confocal laser endomicroscopy and image guidance systems have reached a development stage where their intraoperative use may benefit surgeons by visualising features of liver anatomy and tissue characteristics. Video magnification and multispectral imaging require more development, and suggestions are made to direct this work. It is also highlighted that it is crucial to standardise assessment methods for these technologies, which will allow a more direct comparison between the outcomes of different groups. Limited imaging depth is a major restriction of these technologies but this may be overcome by combining them with preoperatively obtained imaging data. Just like laparoscopy, optical imaging and computer vision rely on light, a shared characteristic that makes their combined use complementary.
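    The principle behind the video magnification experiments can be illustrated with a one-pixel sketch: band-pass filter the intensity over time around plausible pulse rates and amplify the filtered component. The signal, frame rate and gains below are synthetic assumptions; real Eulerian-style systems apply this to spatially decomposed video rather than a single trace.

```python
# Illustrative temporal band-pass magnification of a single pixel's intensity
# to reveal pulsatile motion. All values are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                                      # video frame rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Synthetic intensity: slow drift + weak 1.2 Hz pulsatile component + noise.
pixel = 0.5 * t + 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * np.random.randn(t.size)

low, high, amplification = 0.8, 2.0, 50.0      # pass band around plausible pulse rates
b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
pulsatile = filtfilt(b, a, pixel)
magnified = pixel + amplification * pulsatile  # amplified component added back

print("pulsatile amplitude estimate:", pulsatile.std())
```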

    A study of Raman spectroscopy for the early detection and characterization of prostate cancer using blood plasma and prostate tissue biopsy.

    Prostate cancer (PC) is the most common cancer in men after non-melanoma skin cancer in the United Kingdom (Cancer Research UK, 2019). Current diagnostic methods (PSA, DRE, MRI & prostate biopsy) have limitations as they are unable to distinguish low-risk cancers that do not need active treatment from cancers which are more likely to progress. In addition, prostate biopsy is invasive with potential side effects. There is an urgent need to identify new biomarkers for early diagnosis and prognostication in PC. Raman spectroscopy (RS) is an optical technique that utilises molecular-specific, inelastic scattering of light photons to interrogate biological samples. When laser light is incident on a biological sample, the photons can interact with the intramolecular bonds present within the sample. The Raman spectrum is a direct function of the molecular composition of the tissue, providing a molecular fingerprint of the phenotypic expression of the cells and tissues, which can give good objective information regarding the pathological state of the biological sample under interrogation. We applied a technique of drop coating deposition Raman (DCDR) spectroscopy using both blood plasma and sera to see if a more accurate prediction of the presence and progression of prostate cancer could be achieved than with PSA, which would allow blood-sample triage of patients into at-risk groups. 100 participants were recruited for this study (100 blood plasma and 100 serum samples). Secondly, 79 prostate tissue samples (from the same cohort) were interrogated with the aid of Raman micro-spectroscopy to ascertain whether Raman spectroscopy can provide a molecular fingerprint that can be utilised for real-time in vivo analysis. Multivariate analysis with support vector machine (SVM) learning and linear discriminant analysis (LDA) was used to test the performance accuracy of the discriminant models for distinguishing between benign and malignant mean plasma spectra. SVM gave better performance than LDA, with sensitivity and specificity of 96% and 97% respectively and an area under the curve (AUC) of 0.98, as opposed to sensitivity and specificity of 51% and 80% respectively with an AUC of 0.74 using LDA. Slightly lower performance was also observed when blood serum mean spectra analysis was compared with blood plasma mean spectra analysis for both machine learning algorithms (SVM & LDA). Tissue spectral analysis, on the other hand, recorded an overall accuracy of 80.8% and an AUC of 0.82 with the SVM algorithm, compared to an accuracy of 75% and an AUC of 0.77 with the LDA algorithm (better performance again noted with the SVM algorithm). The small sample size of 79 prostate biopsy tissues was responsible for the lower sensitivity and specificity: the tissues were insufficient to describe all the variance in each group as well as the variability of the gold standard technique. Conclusion: Raman spectroscopy could be a useful technique in the management of prostate cancer in areas such as tissue diagnosis, assessment of surgical margins after radical prostatectomy, detection of metastasis, prostate cancer screening, as well as monitoring and prognosticating patients with prostate cancer. However, more needs to be done to validate the approaches outlined in this thesis using prospective collection of new samples to test the classification models independently with sufficient statistical power. At this stage, only the fluid-based models are likely to be large enough for this validation process.
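    For readers unfamiliar with the classification workflow, the sketch below trains SVM and LDA models on synthetic "spectra" and compares their cross-validated AUC, mirroring the kind of comparison reported above. All data, preprocessing and parameters are illustrative assumptions rather than the study's actual Raman pipeline.

```python
# Compare SVM and LDA on synthetic spectra using cross-validated ROC AUC.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 100, 500             # e.g. 100 mean plasma spectra
X = rng.normal(size=(n_samples, n_wavenumbers))
y = rng.integers(0, 2, n_samples)               # 0 = benign, 1 = malignant
X[y == 1, 100:110] += 0.5                       # synthetic "cancer" Raman band

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "LDA": make_pipeline(StandardScaler(), LinearDiscriminantAnalysis()),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.2f}")
```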

    Computer vision based classification of fruits and vegetables for self-checkout at supermarkets

    The field of machine learning, and, in particular, methods to improve the capability of machines to perform a wider variety of generalised tasks are among the most rapidly growing research areas in today’s world. The current applications of machine learning and artificial intelligence can be divided into many significant fields, namely computer vision, data science, real-time analytics and Natural Language Processing (NLP). All these applications are being used to help computer-based systems operate more usefully in everyday contexts. Computer vision research is currently active in a wide range of areas such as the development of autonomous vehicles, object recognition, Content Based Image Retrieval (CBIR), image segmentation and terrestrial analysis from space (e.g. crop estimation). Despite significant prior research, the area of object recognition still has many topics to be explored. This PhD thesis focuses on using advanced machine learning approaches to enable the automated recognition of fresh produce (i.e. fruits and vegetables) at supermarket self-checkouts. This type of complex classification task is one of the most recently emerging applications of advanced computer vision approaches and is a productive research topic in this field due to the limited means of representing the features and machine learning techniques for classification. Fruits and vegetables offer significant inter- and intra-class variance in weight, shape, size, colour and texture, which makes the classification challenging. The applications of effective fruit and vegetable classification have significant importance in daily life, e.g. crop estimation, fruit classification, robotic harvesting, fruit quality assessment, etc. One potential application for this fruit and vegetable classification capability is for supermarket self-checkouts. Increasingly, supermarkets are introducing self-checkouts in stores to make the checkout process easier and faster. However, there are a number of challenges with this as not all goods can readily be sold with packaging and barcodes, for instance loose fresh items (e.g. fruits and vegetables). Adding barcodes to these types of items individually is impractical and pre-packaging limits the freedom of choice when selecting fruits and vegetables and creates additional waste, hence reducing customer satisfaction. The current situation, which relies on customers correctly identifying produce themselves, leaves open the potential for incorrect billing, either due to inadvertent error or intentional fraudulent misclassification, resulting in financial losses for the store. To address this identified problem, the main goals of this PhD work are: (a) exploring the types of visual and non-visual sensors that could be incorporated into a self-checkout system for classification of fruits and vegetables, (b) determining a suitable feature representation method for fresh produce items available at supermarkets, (c) identifying optimal machine learning techniques for classification within this context, and (d) evaluating our work relative to the state-of-the-art object classification results presented in the literature. An in-depth analysis of related computer vision literature and techniques is performed to identify and implement the possible solutions. A progressive process distribution approach is used for this project, where the task of computer vision based fruit and vegetable classification is divided into pre-processing and classification techniques.
    Different classification techniques have been implemented and evaluated as possible solutions to this problem. Both visual and non-visual features of fruit and vegetables are exploited to perform the classification. Novel classification techniques have been carefully developed to deal with the complex and highly variant physical features of fruit and vegetables while taking advantage of both visual and non-visual features. The capability of the classification techniques is tested both individually and in ensembles to achieve higher effectiveness. Significant results have been obtained, from which it can be concluded that fruit and vegetable classification is a complex task with many challenges involved. It is also observed that a larger dataset can better capture the complex, variable features of fruit and vegetables. Complex multidimensional features can be extracted from larger datasets to generalise over a higher number of classes. However, development of a larger multiclass dataset is an expensive and time-consuming process. The effectiveness of classification techniques can be significantly improved by removing background occlusions and complexities. It is also worth mentioning that an ensemble of simple, less complicated classification techniques can achieve effective results even when applied to fewer features for a smaller number of classes. The combination of visual and non-visual features can make it easier for a classification technique to deal with a higher number of classes with similar physical features. Classification of fruit and vegetables with similar physical features (i.e. colour and texture) needs careful estimation and hyper-dimensional embedding of visual features. Implementing rigorous classification penalties as loss functions can achieve this goal at the cost of time and computational requirements. There is a significant need to develop larger datasets for different fruit and vegetable related computer vision applications. Considering more sophisticated loss function penalties and discriminative hyper-dimensional feature embedding techniques can significantly improve the effectiveness of classification techniques for fruit and vegetable applications.
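    One way to picture the combination of visual and non-visual features with an ensemble classifier, as discussed above, is the following sketch. The features (colour/texture statistics plus an item weight) and produce classes are synthetic and hypothetical; the thesis develops its own feature representations and ensemble designs.

```python
# Combine visual descriptors with a non-visual cue (weight from the checkout
# scale) and classify produce with a simple soft-voting ensemble. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_items, n_visual = 600, 32
visual = rng.normal(size=(n_items, n_visual))               # colour/texture features
weight = rng.normal(loc=150, scale=40, size=(n_items, 1))   # grams, from the scale
X = np.hstack([visual, weight])
y = rng.integers(0, 4, n_items)                             # e.g. apple/banana/tomato/onion
X[np.arange(n_items), y] += 1.0                             # weak class-dependent signal

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ],
    voting="soft")
ensemble.fit(X_train, y_train)
print("test accuracy:", ensemble.score(X_test, y_test))
```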

    Investigating Ultrasound-Guided Autonomous Assistance during Robotic Minimally Invasive Surgery

    Despite it being over twenty years since the first introduction of robotic surgical systems into common surgical practice, they are still far from widespread across all healthcare systems, surgical disciplines and procedures. At the same time, the systems that are used act as mere tele-manipulators with motion scaling and have yet to make use of the immense potential of their sensory data to provide autonomous assistance during surgery or to perform tasks themselves in a semi-autonomous fashion. Similarly, the potential of using intracorporeal imaging, particularly Ultrasound (US), during surgery for improved tumour localisation remains largely unused. Aside from the cost factors, this also has to do with the necessity of adequate training for scan interpretation and the difficulty of handling a US probe near the surgical site. Additionally, the potential for automation that is being explored in extracorporeal US using serial manipulators does not yet translate into ultrasound-enabled autonomous assistance in a surgical robotic setting. Motivated by this research gap, this work explores means to enable autonomous intracorporeal ultrasound in a surgical robotic setting. Based around the da Vinci Research Kit (dVRK), it first develops a surgical robotics platform that allows for precise evaluation of the robot’s performance using Infrared (IR) tracking technology. Based on this initial work, it then explores the possibility of providing autonomous ultrasound guidance during surgery. To this end, it develops and assesses means to improve kinematic accuracy despite manipulator backlash, as well as to achieve adequate probe positioning with respect to the tissue surface and anatomy. Building on the acquired anatomical information, this thesis explores the integration of a second robotic arm and its use for autonomous assistance. Starting with an autonomously acquired tumour scan, the setup is extended and methods devised to enable the autonomous marking of margined tumour boundaries on the tissue surface, both in a phantom and in an ex-vivo experiment on porcine liver. Moving towards increased autonomy, a novel minimally invasive High Intensity Focused Ultrasound (HIFUS) transducer is integrated into the robotic setup, including a sensorised, water-filled membrane for sensing interaction forces with the tissue surface. For this purpose, an extensive material characterisation is carried out, exploring different surface material pairings. Finally, the proposed system, including trajectory planning and a hybrid force-position control scheme, is evaluated in a benchtop ultrasound phantom trial.
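    The hybrid force-position control idea can be sketched as follows: track a position trajectory in the directions tangential to the tissue surface while a PI controller regulates contact force along the surface normal against a simple spring model of the tissue. All dynamics, gains and values are invented for illustration and do not reflect the thesis's controller or the HIFUS transducer's sensorised membrane.

```python
# Minimal hybrid force/position control loop with a spring contact model.
import numpy as np

dt, steps = 0.01, 500
stiffness = 200.0                       # N/m, assumed tissue/membrane stiffness
target_force = 2.0                      # N, desired probe contact force
kp_pos = 5.0                            # tangential position gain (hypothetical)
kp_force, ki_force = 0.05, 2.0          # normal-direction PI force gains (hypothetical)

pos = np.zeros(3)                       # probe position [x, y, z]; z is the surface normal
surface_z = 0.0                         # nominal tissue surface height
force_int = 0.0

for k in range(steps):
    t = k * dt
    ref_xy = np.array([0.05 * t, 0.0])                  # sweep along x, hold y
    force = max(0.0, stiffness * (surface_z - pos[2]))  # spring contact model
    err_f = target_force - force
    force_int += err_f * dt
    vel_xy = kp_pos * (ref_xy - pos[:2])                # position control in x/y
    vel_z = -(kp_force * err_f + ki_force * force_int)  # PI force control along z
    pos[:2] += vel_xy * dt
    pos[2] += vel_z * dt

print(f"final contact force: {max(0.0, stiffness * (surface_z - pos[2])):.2f} N")
```

    A purely integral force term on a stiff contact would oscillate indefinitely; the proportional term supplies the damping that lets the contact force settle near the target.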