
    Deep Learning for 2D and 3D Scene Understanding

    This thesis comprises a body of work that investigates the use of deep learning for 2D and 3D scene understanding. Although there has been significant progress in computer vision using deep learning, much of that progress has been measured on performance benchmarks and on static images; good performance on one benchmark does not necessarily mean good generalization to the kind of viewing conditions that might be encountered by an autonomous robot or agent. In this thesis, we address a variety of problems motivated by the desire to see deep learning algorithms generalize better to robotic vision scenarios. Specifically, we span the topics of multi-object detection, unsupervised domain adaptation for semantic segmentation, video object segmentation, and semantic scene completion.

First, most modern object detectors use a final post-processing step known as non-maximum suppression (GreedyNMS), which suffers from an inevitable trade-off between precision and recall in crowded scenes. To overcome this limitation, we propose Pairwise-NMS to remedy GreedyNMS. Specifically, a deep pairwise-relationship network is learned to predict whether two overlapping proposal boxes contain two objects or zero/one object, which allows multiple overlapping objects to be handled effectively.

A common issue in training deep neural networks is the need for large training sets. One approach is to use simulated image and video data, but this suffers from a domain gap: performance on real-world data is poor relative to performance on the simulated data. We pursue several approaches to this so-called domain adaptation problem for semantic segmentation: (1) single and multiple exemplars are employed for each class in order to cluster the per-pixel features in the embedding space; (2) a class-balanced self-training strategy is used to generate pseudo labels in the target domain; and (3) a convolutional adaptor is adopted to enforce that features from the source and target domains remain close to each other.

Next, we tackle video object segmentation by formulating it as a meta-learning problem, where the base learner aims to learn semantic scene understanding for general objects, and the meta learner quickly adapts to the appearance of the target object from a few examples. Our proposed meta-learning method uses a closed-form optimizer, so-called "ridge regression", which is conducive to faster and better training convergence. One-shot video object segmentation (OSVOS) has the limitation of "overemphasizing" generic semantic object information while "diluting" the instance cues of the object(s), which hinders the whole training process. By adding a common module, the video loss, which we formulate with various forms of constraints (a weighted BCE loss, a high-dimensional triplet loss, and a novel mixed instance-aware video loss), to the training of the parent network, the network is better prepared for online fine-tuning.

Next, we introduce a light-weight Dimensional Decomposition Residual (DDR) network for 3D dense prediction tasks. The novel factorized convolution layer is effective for reducing the number of network parameters, and the proposed multi-scale fusion mechanism for depth and color images improves completion and segmentation accuracy simultaneously. Moreover, we propose PALNet, a novel hybrid network for Semantic Scene Completion (SSC) based on a single depth image.

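As a point of reference for the meta-learning formulation above, the closed-form ridge-regression base learner it mentions can be sketched in a few lines of PyTorch. The tensor shapes, variable names, and regularization weight below are illustrative assumptions, not details taken from the thesis.

    import torch

    def ridge_regression(features, targets, lam=1.0):
        # Closed-form solution of  min_W ||X W - Y||^2 + lam * ||W||^2 :
        #   W = (X^T X + lam * I)^{-1} X^T Y
        d = features.shape[1]
        eye = torch.eye(d, dtype=features.dtype, device=features.device)
        gram = features.t() @ features + lam * eye
        return torch.linalg.solve(gram, features.t() @ targets)

    # Example: adapt a per-pixel classifier to a target object from a handful
    # of labelled feature vectors (names and shapes are purely illustrative).
    support_feats = torch.randn(256, 64)              # 256 pixels, 64-d embeddings
    support_labels = torch.randint(0, 2, (256, 1)).float()
    W = ridge_regression(support_feats, support_labels, lam=0.1)
    query_scores = torch.randn(1000, 64) @ W          # scores for new pixels
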
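Likewise, the dimensional-decomposition idea behind the DDR block can be illustrated by replacing a dense 3D convolution with three 1D convolutions, one per spatial axis: a dense k x k x k kernel costs about k^3 * C^2 parameters, whereas the factorized version costs roughly 3k * C^2. The module below is a minimal sketch under that reading, not the exact DDR layer.

    import torch
    import torch.nn as nn

    class FactorizedConv3d(nn.Module):
        # Replace a dense k x k x k 3D convolution with three 1D convolutions,
        # one along each spatial dimension, to cut down the parameter count.
        def __init__(self, channels, k=3):
            super().__init__()
            p = k // 2
            self.conv = nn.Sequential(
                nn.Conv3d(channels, channels, (k, 1, 1), padding=(p, 0, 0)),
                nn.Conv3d(channels, channels, (1, k, 1), padding=(0, p, 0)),
                nn.Conv3d(channels, channels, (1, 1, k), padding=(0, 0, p)),
            )

        def forward(self, x):
            return x + self.conv(x)   # residual connection

    out = FactorizedConv3d(channels=16)(torch.randn(1, 16, 32, 32, 32))
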
PALNet utilizes a two-stream network to extract both 2D and 3D features at multiple stages, using fine-grained depth information to efficiently capture both the context and the geometric cues of the scene. The proposed Position Aware Loss (PA-Loss) considers the Local Geometric Anisotropy to determine the importance of different positions within the scene, which is beneficial for recovering key details such as the boundaries of objects and the corners of the scene. Finally, we propose a 3D gated recurrent fusion network (GRFNet), which learns to adaptively select and fuse the relevant information from depth and RGB by making use of gate and memory modules. Building on the single-stage fusion, we further propose a multi-stage fusion strategy that models the correlations among different stages within the network.

Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202

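To make the gating idea concrete, the sketch below shows a single-stage gated fusion of depth and RGB feature volumes in the spirit of the description above; the layer sizes and the omission of the memory module are simplifying assumptions, so this is not the actual GRFNet architecture.

    import torch
    import torch.nn as nn

    class GatedFusion3D(nn.Module):
        # Single-stage gated fusion of depth and RGB feature volumes: a learned
        # gate decides, per voxel and channel, how much of each modality to keep.
        def __init__(self, channels):
            super().__init__()
            self.gate = nn.Sequential(
                nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1),
                nn.Sigmoid(),
            )

        def forward(self, feat_depth, feat_rgb):
            g = self.gate(torch.cat([feat_depth, feat_rgb], dim=1))
            return g * feat_depth + (1.0 - g) * feat_rgb

    # Usage on dummy feature volumes of shape (batch, channels, D, H, W).
    fuse = GatedFusion3D(channels=32)
    fused = fuse(torch.randn(1, 32, 16, 16, 16), torch.randn(1, 32, 16, 16, 16))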

    Dynamic scene understanding using deep neural networks


    Siam R-CNN: Visual Tracking by Re-Detection

    We present Siam R-CNN, a Siamese re-detection architecture which unleashes the full power of two-stage object detection approaches for visual object tracking. We combine this with a novel tracklet-based dynamic programming algorithm, which takes advantage of re-detections of both the first-frame template and previous-frame predictions, to model the full history of both the object to be tracked and potential distractor objects. This enables our approach to make better tracking decisions, as well as to re-detect tracked objects after long occlusion. Finally, we propose a novel hard example mining strategy to improve Siam R-CNN's robustness to similar looking objects. Siam R-CNN achieves the current best performance on ten tracking benchmarks, with especially strong results for long-term tracking. We make our code and models available at www.vision.rwth-aachen.de/page/siamrcnn.

Comment: CVPR 2020 camera-ready version

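Siam R-CNN's tracklet-level dynamic program is considerably more involved than can be shown here, but the basic idea of chaining re-detections with a dynamic program can be illustrated with a toy Viterbi-style selector over per-frame detections. Everything below (function name, scoring, and inputs) is a hypothetical simplification, not the paper's algorithm.

    import numpy as np

    def viterbi_track(det_scores, pairwise_iou, alpha=1.0):
        # Pick one detection per frame so that the summed re-detection scores
        # plus alpha-weighted IoU consistency between consecutive picks is maximal.
        #   det_scores:   list of length T, det_scores[t] has shape (N_t,)
        #   pairwise_iou: list of length T-1, pairwise_iou[t] has shape (N_t, N_t+1)
        best = [np.asarray(det_scores[0], dtype=float)]
        back = []
        for t in range(1, len(det_scores)):
            trans = best[-1][:, None] + alpha * np.asarray(pairwise_iou[t - 1], dtype=float)
            back.append(trans.argmax(axis=0))
            best.append(trans.max(axis=0) + np.asarray(det_scores[t], dtype=float))
        path = [int(best[-1].argmax())]
        for t in range(len(det_scores) - 2, -1, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]  # chosen detection index for each frame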

    Detection-aided medical image segmentation using deep learning

    A fully automatic technique for segmenting the liver and localizing its unhealthy tissues is a convenient tool for diagnosing hepatic diseases and for assessing the response to the corresponding treatments. In this thesis we propose a method to segment the liver and its lesions from Computed Tomography (CT) scans, as well as other anatomical structures and organs of the human body. We use Convolutional Neural Networks (CNNs), which have shown good results in a variety of tasks, including medical imaging. The lesion segmentation network consists of a cascaded architecture that first focuses on the liver region in order to segment the lesions. Moreover, we train a detector to localize the lesions and keep only those pixels of the segmentation output where a lesion is detected. The segmentation architecture is based on DRIU (Maninis, 2016), a Fully Convolutional Network (FCN) with side outputs that operate on feature maps of different resolutions, so as to benefit from the multi-scale information learned by different stages of the network. Our pipeline is 2.5D, as the input of the network is a stack of consecutive slices of the CT scan. We also study different methods of exploiting the liver segmentation in order to delineate the lesions. The main focus of this work is to use the detector to localize the lesions, as we demonstrate that it helps to remove false positives triggered by the segmentation network. The benefit of using a detector on top of the segmentation is that the detector acquires a more global view of the healthiness of a liver tissue than the segmentation network, whose final output is pixel-wise and is not forced to make a global decision over a whole liver patch. We show experiments on the LiTS dataset for lesion and liver segmentation. To demonstrate the generality of the segmentation network, we also segment several anatomical structures from the Visceral dataset.

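The detector-as-filter step described above (keeping only segmentation pixels where a lesion is detected) can be sketched as a simple post-processing function; the box format, probability threshold, and function name are assumptions for illustration rather than the thesis's exact implementation.

    import numpy as np

    def keep_detected_lesions(seg_prob, boxes, threshold=0.5):
        # Binarize the per-pixel lesion probabilities, then keep only the pixels
        # that fall inside at least one detected bounding box.
        #   seg_prob: (H, W) lesion probability map from the segmentation network
        #   boxes:    iterable of (x1, y1, x2, y2) detections in pixel coordinates
        inside = np.zeros(seg_prob.shape, dtype=bool)
        for x1, y1, x2, y2 in boxes:
            inside[int(y1):int(y2), int(x1):int(x2)] = True
        return (seg_prob >= threshold) & inside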

    Capsule Networks for Video Understanding

    With the increase of videos available online, it is more important than ever to learn how to process and understand video data. Although convolutional neural networks have revolutionized representation learning from images and videos, they do not explicitly model entities within the given input. It would be useful for learned models to be able to represent part-to-whole relationships within a given image or video. To this end, a novel neural network architecture - capsule networks - has been proposed. Capsule networks add extra structure to allow for the modeling of entities and have shown great promise when applied to image data. By grouping neural activations and propagating information from one layer to the next through a routing-by-agreement procedure, capsule networks are able to learn part-to-whole relationships as well as robust object representations. In this dissertation, we explore how capsule networks can be generalized to video and used to effectively solve several video understanding problems.

First, we generalize capsule networks from the image domain so that they can process 3-dimensional video data. Our proposed video capsule network (VideoCapsuleNet) tackles the problem of video action detection. We introduce capsule-pooling in the convolutional capsule layer to make the voting algorithm tractable in the 3-dimensional video domain. The network's routing-by-agreement inherently models the action representations, and various action characteristics are captured by the predicted capsules. We show that VideoCapsuleNet is able to successfully produce pixel-wise localizations of actions present in videos.

While action detection only requires a coarse localization, we show that video capsule networks can also generate fine-grained segmentations. To that end, we propose a capsule-based approach for video object segmentation, CapsuleVOS, which can segment several frames at once conditioned on a reference frame and segmentation mask. This conditioning is performed through a novel routing algorithm for attention-based efficient capsule selection. We address two challenging issues in video object segmentation: segmentation of small objects and occlusion of objects across time. The first issue is addressed with a zooming module; the second is dealt with by a novel memory module based on recurrent neural networks.

The above shows that capsule networks can effectively localize actors and objects within videos. Next, we address the integration of video and text for the task of actor and action video segmentation from a sentence. We propose a novel capsule-based approach to perform pixel-level localization based on a natural language query describing the actor of interest. We encode both the video and the textual input in the form of capsules, and propose a visual-textual routing mechanism for the fusion of these capsules to successfully localize the actor and action within all frames of a video.

The previous works are all fully supervised: they are trained on manually annotated data, which is often time-consuming and costly to acquire. Finally, we propose a novel method for self-supervised learning which does not rely on manually annotated data. We present a capsule network that jointly learns high-level concepts and their relationships across different low-level multimodal (video, audio, and text) input representations.

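For readers unfamiliar with the routing-by-agreement procedure referred to above, the sketch below implements the standard dynamic routing of Sabour et al. (2017) between two capsule layers; the capsule-pooling and video-specific variants developed in this dissertation modify this basic scheme, so treat the code only as background.

    import torch
    import torch.nn.functional as F

    def squash(s, dim=-1, eps=1e-8):
        # Capsule non-linearity: keep the direction, map the norm into [0, 1).
        norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
        return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

    def routing_by_agreement(votes, num_iters=3):
        # votes: (num_in, num_out, dim) prediction vectors from lower-level capsules.
        # Returns the (num_out, dim) higher-level capsule poses.
        logits = torch.zeros(votes.shape[0], votes.shape[1], device=votes.device)
        for _ in range(num_iters):
            c = F.softmax(logits, dim=1)                      # coupling coefficients
            v = squash((c.unsqueeze(-1) * votes).sum(dim=0))  # weighted vote sum
            logits = logits + (votes * v.unsqueeze(0)).sum(dim=-1)  # reward agreement
        return v

    higher = routing_by_agreement(torch.randn(32, 10, 16))    # 32 votes -> 10 capsules
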
To adapt the capsules to large-scale input data, we propose a routing-by-self-attention mechanism that selects relevant capsules, which are then used to generate a final joint multimodal feature representation. This allows us to learn robust representations from noisy video data and to scale up the size of the capsule network compared to traditional routing methods, while remaining computationally efficient.

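The abstract does not spell out the routing-by-self-attention mechanism, so the snippet below is only a rough guess at what selecting relevant capsules via self-attention could look like: capsules are ranked by the total attention they receive and the top-k are kept. Names, shapes, and the scoring rule are assumptions, not the paper's method.

    import torch
    import torch.nn.functional as F

    def select_capsules_by_self_attention(capsules, k):
        # Rank capsules by the total attention they receive from all other
        # capsules and keep the k most relevant ones.
        #   capsules: (N, dim) capsule vectors pooled from video/audio/text branches
        scale = capsules.shape[1] ** 0.5
        attn = F.softmax(capsules @ capsules.t() / scale, dim=-1)   # (N, N)
        relevance = attn.sum(dim=0)                                 # attention received
        idx = relevance.topk(min(k, capsules.shape[0])).indices
        return capsules[idx], idx

    selected, idx = select_capsules_by_self_attention(torch.randn(128, 64), k=16)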