    From Constraints to Opportunities: Efficient Object Detection Learning for Humanoid Robots

    Reliable perception and efficient adaptation to novel conditions are priority skills for robots that operate in ever-changing environments. Indeed, autonomous operation in real-world scenarios requires identifying the current state of the context and acting accordingly. Moreover, the requested tasks might not be known a priori, requiring the system to update online. Robotic platforms can gather various types of perceptual information thanks to the multiple sensory modalities they are equipped with. Nonetheless, recent results in computer vision motivate a particular interest in visual perception. Specifically, in this thesis, I mainly focused on the object detection task, since it can serve as the basis for more sophisticated capabilities. The vast advancements in recent computer vision research, brought by deep learning methods, are appealing in a robotic setting. However, their adoption in applied domains is not straightforward, since adapting them to new tasks is strongly demanding in terms of annotated data, optimization time and computational resources. These requirements generally do not meet current robotics constraints. Nevertheless, robotic platforms, and especially humanoids, present opportunities that can be exploited. The sensors they are equipped with represent precious sources of additional information. Moreover, their embodiment in the workspace and their motion capabilities allow for a natural interaction with the environment. Motivated by these considerations, in this Ph.D. project I aimed at devising and developing solutions that integrate the worlds of computer vision and robotics, focusing on the task of object detection. Specifically, I dedicated a large amount of effort to alleviating the requirements of state-of-the-art methods in terms of annotated data and training time, while preserving their accuracy by exploiting these robotic opportunities.

    Breast Lesion Detection in Ultrasound Images Using Deep Neural Networks: Clustering Based Approach for False Positive Reduction

    Breast cancer is one of the most common forms of cancer. Popular imaging modalities used for breast cancer screening include mammography, ultrasound (US), and Magnetic Resonance Imaging (MRI). US is a widely adopted modality due to its relative affordability, portability and higher patient safety. Early detection of lesions is crucial to ensure a high survival rate and minimise adverse effects on the body. Currently, there is a global shortage of experienced radiologists relative to the number of patients. Therefore, automating lesion detection, with Artificial Intelligence (AI) acting as a secondary opinion, can assist radiologists in faster diagnosis. In recent years, deep-learning (DL) based object detection methods have become popular in Computer-Aided Diagnosis (CAD) systems due to their ability to extract high-level, abstract features, which results in higher generalisation capability and applicability in real-life operations. Compared to object detection in natural images, lesion detection in US images is a challenging task due to the inherent characteristics of these images. Because of these challenges and the lack of large-scale US datasets, far fewer DL-based lesion detection methods have been developed for US images than object detection methods for natural images. Thus, it is common practice to adapt an existing object detector originally designed for natural images for lesion detection in US images. One such popularly adapted detector is Faster R-CNN (FRCNN). Limited attention has been given to adapting FRCNN for breast lesion detection in US images. The adaptation results in a relatively high detection rate, but also in a high number of false positive (FP) detections that degrade the overall performance. Such high FPs may confuse radiologists in reading and interpreting the US images and lead to unnecessary additional checks and biopsies.
Reducing FPs in breast US images remains an open area of investigation, which provides the motivation for this study. To the best of our knowledge, no prior work has specifically addressed the issue of FPs in DL-based methods for breast lesion detection in US images. The aim of this research is to create a novel and effective DL-based method for detecting breast lesions in 2D US images. The research starts by investigating the effectiveness of FRCNN for breast lesion detection using large datasets of US images collected from different medical centres and machine makers. It then provides the first solution to the FP problem by searching for and identifying the optimal training and architectural hyperparameters of this powerful network. The adapted FRCNN model outperformed the original FRCNN through a significant reduction in FPs with only a small negative impact on the number of correct detections. The adapted model also surpassed several existing detectors developed for natural images, as well as those adapted for breast lesion detection in US images. Furthermore, this research develops a new method, U-Detect: a clustering-based approach that combines an unsupervised learning technique with the adapted FRCNN to reduce FP detections. Two variants of the U-Detect method are developed: the U-Detect-Base and U-Detect-RPN models. Both U-Detect models outperform the original and adapted FRCNN models through a considerable reduction in FPs, resulting in higher precision. Additionally, U-Detect-RPN detected a higher number of lesions than the adapted FRCNN model.
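The general idea of clustering-based false-positive filtering can be illustrated with a minimal sketch. This is not the thesis implementation: the feature vectors, the tiny 2-means clustering, and the confidence-based cluster selection below are all illustrative assumptions. Candidate detections are grouped in feature space, and the cluster the detector trusts less on average is treated as likely false positives.

```python
import numpy as np

def filter_detections_by_clustering(feats, scores, n_iter=20):
    """Keep detections in the cluster the detector trusts more.

    feats:  (N, D) feature vectors for N candidate detections
            (hypothetical embeddings, e.g. pooled CNN features).
    scores: (N,) detector confidence scores for the same detections.
    Returns a boolean mask over the N detections.
    """
    feats = np.asarray(feats, dtype=float)
    # Deterministic farthest-point initialisation for a tiny 2-means.
    c0 = feats[0]
    c1 = feats[np.argmax(np.linalg.norm(feats - c0, axis=1))]
    centroids = np.stack([c0, c1])
    for _ in range(n_iter):
        # Assign each detection to its nearest centroid, then update.
        dists = np.linalg.norm(feats[:, None, :] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centroids[k] = feats[labels == k].mean(axis=0)
    # Keep the cluster with the higher mean detector confidence,
    # treating the other cluster as likely false positives.
    mean_conf = [scores[labels == k].mean() if np.any(labels == k) else -1.0
                 for k in range(2)]
    return labels == int(np.argmax(mean_conf))
```

In this toy version, detections whose features fall in the lower-confidence cluster are simply dropped; a real system would cluster richer representations and validate the kept cluster against annotated data.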
Inspired by domain knowledge of breast lesion characteristics, we further enhanced the architecture of U-Detect by developing a new classification-based approach (U-Detect-H) that uses a fusion of textural and morphological handcrafted features to improve the classification scores in U-Detect and ultimately reduce FP detections. Two variants of U-Detect-H are developed: the U-Detect-H-Base and U-Detect-H-RPN models. On multiple datasets comprising a combined total of 3119 US images, U-Detect-H-Base outperforms the original FRCNN with 5.49% to 32.83% higher precision at the cost of a small drop of 0.27% to 10.02% in recall. This significantly higher precision is due to a 31.86% to 77.07% reduction in FPs. The work presented in this thesis provides an approach that scientists can follow to design robust object detection models for other cancer types as well as other medical imaging modalities.
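Why a large FP reduction translates into a large precision gain with only a small recall drop follows directly from the definitions of precision and recall. A quick numeric sketch with illustrative counts (not the thesis data):

```python
def precision(tp, fp):
    """Fraction of reported detections that are correct lesions."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual lesions that are detected."""
    return tp / (tp + fn)

# Illustrative counts (NOT the thesis data): a detector finds 900 of
# 1000 lesions with 600 false positives.
base_p = precision(900, 600)   # 0.60
base_r = recall(900, 100)      # 0.90
# Cutting FPs by 77% (the upper bound reported above) while losing
# only 10 true positives:
new_p = precision(890, 138)    # ~0.866 -- a large precision gain
new_r = recall(890, 110)       # 0.89   -- a small recall drop
```

Because FPs appear only in the precision denominator, removing them raises precision sharply, while recall changes only through the few true positives lost in the process.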