
    Cascade R-CNN: Delving into High Quality Object Detection

    In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector trained with a low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade as the IoU threshold increases. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, each sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next, higher-quality detector. The resampling of progressively improved hypotheses guarantees that all detectors see a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality at each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code will be made available at https://github.com/zhaoweicai/cascade-rcnn
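    To make the cascade idea above concrete, here is a minimal, self-contained Python sketch (not the authors' code) of how successive stages re-label the same hypotheses with stricter IoU thresholds and feed refined boxes forward; the refine step is a toy stand-in for the learned regression head, and all boxes below are made up for illustration.

# Toy sketch of cascade label assignment with increasing IoU thresholds.
# Each stage re-labels the current hypotheses with a stricter threshold,
# and the boxes it refines become the input hypotheses of the next stage.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def assign_labels(hypotheses, gt_boxes, iou_threshold):
    """A hypothesis is positive if it overlaps some ground truth above the threshold."""
    return [any(iou(h, g) >= iou_threshold for g in gt_boxes) for h in hypotheses]

def refine(box, gt_boxes):
    """Stand-in for a stage's regression head: nudge the box toward its closest ground truth."""
    target = max(gt_boxes, key=lambda g: iou(box, g))
    return tuple(0.5 * b + 0.5 * t for b, t in zip(box, target))

def cascade(hypotheses, gt_boxes, thresholds=(0.5, 0.6, 0.7)):
    for t in thresholds:
        labels = assign_labels(hypotheses, gt_boxes, t)
        print(f"IoU threshold {t}: {sum(labels)}/{len(labels)} positives")
        # Improved hypotheses feed the next, more selective stage.
        hypotheses = [refine(h, gt_boxes) for h in hypotheses]
    return hypotheses

if __name__ == "__main__":
    gt = [(10, 10, 50, 50)]
    proposals = [(5, 5, 45, 45), (20, 20, 70, 70), (60, 60, 90, 90)]
    cascade(proposals, gt)

    On this toy input the positive count does not collapse as the threshold rises, because each stage refines the hypotheses before the stricter threshold is applied; that is the effect the resampling argument in the abstract relies on.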

    Iterative Object and Part Transfer for Fine-Grained Recognition

    The aim of fine-grained recognition is to identify sub-ordinate categories in images, such as different species of birds. Existing works have confirmed that, in order to capture the subtle differences across categories, automatic localization of objects and parts is critical. Most approaches to object and part localization rely on a bottom-up pipeline, where thousands of region proposals are generated and then filtered by pre-trained object/part models. This is computationally expensive and does not scale once the number of objects/parts becomes large. In this paper, we propose a nonparametric data-driven method for object and part localization. Given an unlabeled test image, our approach transfers annotations from a few similar images retrieved from the training set. In particular, we propose an iterative transfer strategy that gradually refines the predicted bounding boxes. Based on the located objects and parts, deep convolutional features are extracted for recognition. We evaluate our approach on the widely used CUB200-2011 dataset and a new and large dataset called Birdsnap. On both datasets, we achieve better results than many state-of-the-art approaches, including a few that use oracle (manually annotated) bounding boxes on the test images.
    Comment: To appear in ICME 2017 as an oral paper
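    As a rough illustration of the retrieve-and-transfer loop sketched in the abstract (assuming random stand-in features and a toy box-conditioned re-query, not the paper's actual pipeline), the following Python/NumPy snippet retrieves the nearest training images, averages their annotated boxes, and re-queries with the current estimate for a few iterations.

# Toy sketch of iterative annotation transfer: retrieve similar training
# images by a global feature, transfer (average) their boxes, then refine
# the query with the current box estimate and retrieve again.
import numpy as np

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(100, 64))        # stand-in global features of training images
train_boxes = rng.uniform(0, 1, size=(100, 4))  # their annotated boxes, normalized coordinates

def retrieve(query_feat, k=5):
    """Indices of the k training images closest to the query feature."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    return np.argsort(dists)[:k]

def box_feature(image_feat, box):
    """Placeholder for a feature extracted from the current box crop."""
    return image_feat + 0.1 * box.mean()

def iterative_transfer(test_feat, iterations=3, k=5):
    query, box = test_feat, None
    for it in range(iterations):
        neighbors = retrieve(query, k)
        box = train_boxes[neighbors].mean(axis=0)  # transfer annotations from similar images
        query = box_feature(test_feat, box)        # refine the query with the current estimate
        print(f"iter {it}: box estimate {np.round(box, 3)}")
    return box

if __name__ == "__main__":
    iterative_transfer(rng.normal(size=64))

    The real method would of course use deep features of the test image and of the current object/part crops in place of the random vectors used here.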

    Object Detection in 20 Years: A Survey

    Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years. Its development over the past two decades can be regarded as an epitome of the history of computer vision. If we think of today's object detection as a technical aesthetics under the power of deep learning, then turning back the clock 20 years we would witness the wisdom of the cold-weapon era. This paper extensively reviews 400+ papers on object detection in light of its technical evolution, spanning over a quarter-century (from the 1990s to 2019). A number of topics are covered, including the milestone detectors in history, detection datasets, metrics, fundamental building blocks of the detection system, speed-up techniques, and the recent state-of-the-art detection methods. The paper also reviews some important detection applications, such as pedestrian detection, face detection, and text detection, and makes an in-depth analysis of their challenges as well as technical improvements in recent years.
    Comment: This work has been submitted to the IEEE TPAMI for possible publication

    UBSegNet: Unified Biometric Region of Interest Segmentation Network

    Digital human identity management can now be seen as a social necessity, as it is essentially required in almost every public sector, such as financial inclusion, security, banking, and social networking. Hence, in today's rapidly evolving world with so many adversarial entities, relying on a single biometric trait is overly optimistic. In this paper, we propose a novel end-to-end Unified Biometric ROI Segmentation Network (UBSegNet) for extracting the region of interest from five different biometric traits, viz. face, iris, palm, knuckle, and 4-slap fingerprint. The architecture of the proposed UBSegNet consists of two stages: (i) trait classification and (ii) trait localization. For these stages, we use a state-of-the-art region-based convolutional neural network (RCNN) comprising three major parts, namely convolutional layers, a region proposal network (RPN), and classification and regression heads. The model has been evaluated on several large, publicly available biometric databases. To the best of our knowledge, this is the first unified architecture proposed for segmenting multiple biometric traits. It has been tested on around 5000 * 5 = 25,000 images (5000 images per trait) and produces very good results. Our work on unified biometric segmentation opens up vast opportunities in the field of authentication systems based on multiple biometric traits.
    Comment: 4th Asian Conference on Pattern Recognition (ACPR 2017)
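    The two-stage flow described above (trait classification followed by trait localization) can be summarized with a schematic Python sketch in which the classifier and localizer are placeholders rather than the trained RCNN components; the function names and the dummy box are invented for illustration only.

# Schematic sketch of the two-stage flow: a real system would plug in the RCNN's
# classification head for stage (i) and the RPN plus regression head for stage (ii).

TRAITS = ("face", "iris", "palm", "knuckle", "4-slap fingerprint")

def classify_trait(image_path: str) -> str:
    """Stage (i): decide which of the five biometric traits the image contains (placeholder)."""
    for trait in TRAITS:
        if trait.split()[0] in image_path.lower():
            return trait
    return "face"  # fall back to an arbitrary trait in this toy example

def localize_roi(image_path: str, trait: str) -> tuple:
    """Stage (ii): return an (x1, y1, x2, y2) region-of-interest box for the trait (placeholder)."""
    return (0, 0, 100, 100)

def segment_roi(image_path: str):
    trait = classify_trait(image_path)
    box = localize_roi(image_path, trait)
    return trait, box

if __name__ == "__main__":
    print(segment_roi("sample_iris_scan.png"))  # -> ('iris', (0, 0, 100, 100))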