
    Toward Image-Guided Automated Suture Grasping Under Complex Environments: A Learning-Enabled and Optimization-Based Holistic Framework

    To realize a higher level of autonomy in surgical knot tying for minimally invasive surgery (MIS), automated suture grasping, which bridges the suture stitching and looping procedures, is an important yet challenging task that needs to be achieved. This paper presents a holistic framework with image-guided and automation techniques to robotize this operation even in complex environments. The task is initialized by suture segmentation, for which we propose a novel semi-supervised learning architecture featuring a suture-aware loss that learns the suture's slender structure from both annotated and unannotated data. With successful segmentation in the stereo camera views, we develop a Sampling-based Sliding Pairing (SSP) algorithm to optimize the suture's 3D shape online. By jointly studying the robotic configuration and the suture's spatial characteristics, a target function is introduced to find the optimal grasping pose of the surgical tool under Remote Center of Motion (RCM) constraints. To compensate for inherent errors and practical uncertainties, a unified grasping strategy with a novel vision-based mechanism is introduced to autonomously accomplish the grasping task. Our framework is extensively evaluated, from learning-based segmentation and 3D reconstruction to image-guided grasping, on the da Vinci Research Kit (dVRK) platform, where we achieve high performance and success rates in both perception and robotic manipulation. These results demonstrate the feasibility of our approach in automating the suture grasping task; this work fills the gap between automated surgical stitching and looping, stepping towards a higher level of task autonomy in surgical knot tying.
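
    The abstract does not specify the form of the "suture-aware" loss, but the general idea of up-weighting thin foreground structure in a segmentation loss can be sketched as follows. This is a minimal, hypothetical illustration assuming a distance-transform-based weighting, not the paper's actual formulation; the function name and the `alpha` parameter are invented for illustration.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def suture_aware_loss(logits, target, alpha=4.0):
    """logits, target: (B, 1, H, W); target is a float binary suture mask.

    A weighted BCE that emphasizes thin structures: foreground pixels close
    to the background (i.e., in slender regions) receive larger weights.
    """
    with torch.no_grad():
        weights = torch.ones_like(target)
        for b in range(target.shape[0]):
            mask_t = target[b, 0] > 0.5
            mask = mask_t.cpu().numpy()
            if mask.any():
                # distance_transform_edt gives each foreground pixel its
                # distance to the background; thin structures have small
                # distances, so inverting yields larger weights for them.
                dist = distance_transform_edt(mask)
                w = 1.0 + alpha / (1.0 + dist)
                weights[b, 0][mask_t] = torch.from_numpy(
                    w[mask].astype(np.float32)
                ).to(target.device)
    return F.binary_cross_entropy_with_logits(logits, target, weight=weights)
```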

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. While remote sensing (RS) poses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws on many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities related to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL.

    Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Scaling Out-of-Distribution Detection for Real-World Settings

    Detecting out-of-distribution examples is important for safety-critical machine learning applications such as medical screening and self-driving cars. However, existing research mainly focuses on simple small-scale settings. To set the stage for more realistic out-of-distribution detection, we depart from small-scale settings and explore large-scale multi-class and multi-label settings with high-resolution images and hundreds of classes. To make future work in real-world settings possible, we also create a new benchmark for anomaly segmentation by introducing the Combined Anomalous Object Segmentation benchmark. Our novel benchmark combines two datasets for anomaly segmentation that incorporate both realism and anomaly diversity. Using both real images and those from a simulated driving environment, we ensure that the background context and a wide variety of anomalous objects are naturally integrated, unlike in prior work. We conduct extensive experiments in these more realistic settings for out-of-distribution detection and find that a surprisingly simple detector based on the maximum logit outperforms prior methods in all the large-scale multi-class, multi-label, and segmentation tasks we consider, establishing a new baseline for future work. These results, along with our new anomaly segmentation benchmark, open the door to future research in out-of-distribution detection.

    Comment: StreetHazards dataset and code are available at https://github.com/hendrycks/anomaly-se
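
    The maximum-logit detector the abstract refers to is simple enough to sketch directly: an input is flagged as out-of-distribution when its largest unnormalized class logit falls below a threshold chosen on in-distribution data. The sketch below assumes a standard classifier returning per-class logits; the function names and threshold-selection comment are illustrative.

```python
import torch

@torch.no_grad()
def max_logit_score(model, x):
    """Higher score = more in-distribution. x: (B, C, H, W)."""
    logits = model(x)                # (B, num_classes); for segmentation
    return logits.max(dim=1).values  # this would be taken per pixel.

def is_ood(model, x, threshold):
    # threshold is typically set from a percentile of scores computed
    # on held-out in-distribution data.
    return max_logit_score(model, x) < threshold
```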

    LIDAR data classification and compression

    Airborne Laser Detection and Ranging (LIDAR) data has a wide range of applications in agriculture, archaeology, biology, geology, meteorology, military, and transportation. LIDAR acquisition can produce hundreds of gigabytes in a typical day, and the amount of data collected will continue to grow as sensors improve in resolution and functionality. LIDAR data classification and compression are therefore very important for managing, visualizing, analyzing and using this huge amount of data. Among existing LIDAR data classification schemes, supervised learning has been used and can obtain up to 96% accuracy. However, some of the features used are not readily available, and training data is not always available in practice. In existing LIDAR data compression schemes, the compressed size can be 5%-23% of the original size, but that can still be on the order of gigabytes, which is impractical for many applications.

    The objectives of this dissertation are (1) to develop LIDAR classification schemes that can classify airborne LIDAR data more accurately without some of the features or training data that existing work requires; and (2) to explore lossy compression schemes that can compress LIDAR data at a much higher compression rate than is currently available.

    We first investigate two independent ways to classify LIDAR data depending on the availability of training data: when training data is available, we use supervised machine learning techniques such as the support vector machine (SVM); when training data is not readily available, we develop an unsupervised classification method that classifies LIDAR data as well as supervised methods do. Experimental results show that the accuracy of our classification results is over 99%.

    We then present two new lossy LIDAR data compression methods and compare their performance. The first is wavelet based, while the second is geometry based. Our new geometry-based compression is a geometry- and statistics-driven LIDAR point-cloud compression method that combines application knowledge and scene content to enable fast transmission from the sensor platform while preserving the geometric properties of objects within a scene. The algorithm is based on the idea of compression by classification. It exploits the simplicity of the height function as well as the local spatial coherence and linearity of aerial LIDAR data, and can automatically compress the data to a user-defined level of detail. Either of the two developed classification methods can be used to automatically detect regions that are not locally linear, such as vegetation. In those regions, local statistical descriptors, such as the mean and variance, are stored to efficiently represent the region and restore the geometry in the decompression phase. The new geometry-based compression schemes for building and ground data compress efficiently and significantly reduce file size, while retaining a good fit for scalable "zoom in" requirements. Experimental results show that, compared with existing lossy LIDAR compression work, our approach achieves a two-orders-of-magnitude lower bit rate at the same quality, making feasible applications that were not practical before. The proposed highly efficient compression scheme also makes it possible to store the data in a database and query it efficiently.

    Includes bibliographical references (pages 106-116).
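
    The "compression by classification" idea can be illustrated with a small sketch: fit a plane to the height values of each tile; tiles that fit well (ground, roofs) keep compact plane parameters, while poorly fitting tiles (e.g., vegetation) are summarized by statistics, as the abstract describes. The tile granularity, the fit tolerance, and the exact statistics stored here are assumptions for illustration, not the dissertation's parameters.

```python
import numpy as np

def compress_tile(xyz, fit_tol=0.15):
    """xyz: (N, 3) points of one spatial tile. Returns a compact record."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    # Least-squares plane z = a*x + b*y + c, exploiting the fact that
    # aerial LIDAR height is (locally) a simple function of position.
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = np.abs(A @ coeffs - z)
    if residual.mean() < fit_tol:
        # Locally linear region (ground/building): 3 plane parameters.
        return ("plane", coeffs)
    # Non-linear region (e.g., vegetation): store local statistics that
    # the decompressor uses to plausibly restore the geometry.
    return ("stats", (z.mean(), z.std(), z.min(), z.max(), len(z)))
```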

    Variational methods and its applications to computer vision

    Many computer vision applications, such as image segmentation, can be formulated in a "variational" way as energy minimization problems. Unfortunately, minimizing these energies is usually computationally difficult, as it generally involves nonconvex functions in a space with thousands of dimensions, and the associated combinatorial problems are often NP-hard. Furthermore, they are ill-posed inverse problems and therefore extremely sensitive to perturbations (e.g. noise). For this reason, in order to compute a physically reliable approximation from given noisy data, it is necessary to incorporate appropriate regularizations into the mathematical model, which require complex computations. The main aim of this work is to describe variational segmentation methods that are particularly effective for curvilinear structures. Due to their complex geometry, classical regularization techniques cannot be adopted, because they lead to the loss of most low-contrast details. In contrast, the proposed method not only better preserves curvilinear structures, but also reconnects parts that may have been disconnected by noise. Moreover, it can easily be extended to graphs and successfully applied to different types of data, such as medical imagery (e.g. vessels, heart coronaries), material samples (e.g. concrete) and satellite imagery (e.g. streets, rivers). In particular, we show results and performance for an implementation targeting a new generation of High Performance Computing (HPC) architectures in which different types of coprocessors cooperate. The dataset involved consists of approximately 200 images of cracks, captured in three different tunnels by a robotic machine designed for the European ROBO-SPECT project.
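
    To make the energy-minimization framing concrete, the classical total-variation (ROF) model is a representative instance of the variational problems referred to here (not the specific curvilinear-structure functional developed in the work): given noisy data f on a domain Omega, one reconstructs u by minimizing a regularization term plus a data-fidelity term.

```latex
E(u) \;=\;
\underbrace{\int_{\Omega} \lvert \nabla u \rvert \, dx}_{\text{regularization}}
\;+\;
\frac{\lambda}{2}
\underbrace{\int_{\Omega} \bigl(u(x) - f(x)\bigr)^{2} \, dx}_{\text{data fidelity}}
```

    The total-variation term above is exactly the kind of classical regularizer the abstract notes can erase low-contrast curvilinear detail, which is what motivates the specialized regularization proposed in the work.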

    A survey on generative adversarial networks for imbalance problems in computer vision tasks

    Any computer vision application development starts with acquiring images and data, followed by preprocessing and pattern recognition steps to perform a task. When the acquired images are highly imbalanced or inadequate, the desired task may not be achievable. Unfortunately, imbalance problems in acquired image datasets are inevitable in certain complex real-world problems such as anomaly detection, emotion recognition, medical image analysis, fraud detection, metallic surface defect detection, and disaster prediction. The performance of computer vision algorithms can deteriorate significantly when the training dataset is imbalanced. In recent years, Generative Adversarial Networks (GANs) have gained immense attention from researchers across a variety of application domains due to their capability to model complex real-world image data. Notably, GANs can not only generate synthetic images; their adversarial learning framework has also shown good potential for restoring balance in imbalanced datasets. In this paper, we examine the most recent developments in GAN-based techniques for addressing imbalance problems in image data. The real-world challenges and implementations of synthetic image generation based on GANs are extensively covered in this survey. Our survey first introduces the various imbalance problems in computer vision tasks and their existing solutions, then examines key concepts such as deep generative image models and GANs. After that, we propose a taxonomy that summarizes GAN-based techniques for addressing imbalance problems in computer vision tasks into three major categories: (1) image-level imbalances in classification, (2) object-level imbalances in object detection, and (3) pixel-level imbalances in segmentation tasks. We elaborate on the imbalance problems of each group and provide GAN-based solutions for each. Readers will understand how GAN-based techniques can handle imbalance problems and boost the performance of computer vision algorithms.
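
    The core rebalancing pattern the survey covers can be sketched in a few lines: sample synthetic images of an under-represented class from an already-trained conditional generator and append them to the training set. The `generator` interface, latent dimension, and function name below are assumptions for illustration, not a specific method from the survey.

```python
import torch

@torch.no_grad()
def oversample_minority(generator, minority_label, n_needed, z_dim=128,
                        device="cpu"):
    """Draw n_needed synthetic samples of one class from a trained
    class-conditional generator."""
    z = torch.randn(n_needed, z_dim, device=device)
    labels = torch.full((n_needed,), minority_label, device=device,
                        dtype=torch.long)
    fake_images = generator(z, labels)          # (n_needed, C, H, W)
    return fake_images, labels

# Usage: concatenate (fake_images, labels) with the real minority samples
# until class counts are roughly balanced, then train the classifier.
```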