16,568 research outputs found

    What Can Help Pedestrian Detection?

    Aggregating extra features has been considered an effective approach to boosting traditional pedestrian detection methods. However, there is still a lack of studies on whether and how CNN-based pedestrian detectors can benefit from these extra features. The first contribution of this paper is to explore this issue by aggregating extra features into a CNN-based pedestrian detection framework. Through extensive experiments, we quantitatively evaluate the effects of different kinds of extra features. Moreover, we propose a novel network architecture, namely HyperLearner, to jointly learn pedestrian detection and the given extra feature. Through multi-task training, HyperLearner is able to utilize the information of the given features and improve detection performance without requiring extra inputs at inference. Experimental results on multiple pedestrian benchmarks validate the effectiveness of the proposed HyperLearner.
    Comment: Accepted to IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 201
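    The abstract describes multi-task training with an auxiliary feature branch. Below is a minimal, hypothetical PyTorch sketch of that idea, not the paper's HyperLearner architecture: a shared backbone feeds a detection head and an auxiliary head that regresses an extra feature map during training, so inference needs no extra input. All module names, tensor shapes, loss terms, and the weight lambda_aux are illustrative assumptions.

```python
# Hypothetical multi-task detector sketch: shared backbone, detection head,
# and an auxiliary head supervised by an "extra feature" only during training.
import torch
import torch.nn as nn

class MultiTaskDetector(nn.Module):
    def __init__(self, num_anchors=9, num_classes=2):
        super().__init__()
        # Shared convolutional backbone (stand-in for a real feature extractor).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Detection head: per-location class scores and box offsets.
        self.cls_head = nn.Conv2d(128, num_anchors * num_classes, 1)
        self.box_head = nn.Conv2d(128, num_anchors * 4, 1)
        # Auxiliary head: predicts the extra feature map (used only at training).
        self.aux_head = nn.Conv2d(128, 1, 1)

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.box_head(feats), self.aux_head(feats)

model = MultiTaskDetector()
images = torch.randn(2, 3, 256, 256)            # dummy batch
cls_out, box_out, aux_out = model(images)

# Joint loss: placeholder detection terms plus a supervised term on the
# extra feature, weighted by a hypothetical lambda_aux.
aux_target = torch.rand_like(aux_out)           # dummy extra-feature target
lambda_aux = 0.1
detection_loss = cls_out.pow(2).mean() + box_out.pow(2).mean()  # placeholder
aux_loss = nn.functional.mse_loss(aux_out, aux_target)
loss = detection_loss + lambda_aux * aux_loss
loss.backward()
```

    At test time only cls_head and box_head would be evaluated, which is how the auxiliary supervision can improve detection without adding inference-time inputs.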

    Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections

    In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration tasks such as denoising and super-resolution. The network is composed of multiple layers of convolution and deconvolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, capturing the abstraction of image contents while eliminating noise and corruption. Deconvolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and deconvolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, the skip connections allow the signal to be back-propagated to bottom layers directly, which tackles the problem of vanishing gradients, making deep networks easier to train and consequently yielding restoration performance gains. Second, these skip connections pass image details from convolutional layers to deconvolutional layers, which is beneficial in recovering the original image. Significantly, thanks to the large capacity, we can handle different noise levels using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.
    Comment: Accepted to Proc. Advances in Neural Information Processing Systems (NIPS'16). Content of the final version may be slightly different. An extended version is available at http://arxiv.org/abs/1606.0892
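    As a rough illustration of the symmetric-skip idea (not the authors' exact network), the PyTorch sketch below builds a small encoder-decoder in which each deconvolutional layer receives the activation of its mirrored convolutional layer. The depth, channel width, and training targets are placeholder assumptions.

```python
# Minimal encoder-decoder with symmetric skip connections for restoration:
# encoder activations are added to the mirrored decoder layers, so image
# details and gradients bypass the bottleneck.
import torch
import torch.nn as nn

class SkipEncoderDecoder(nn.Module):
    def __init__(self, channels=64, depth=5):
        super().__init__()
        self.enc = nn.ModuleList()
        self.dec = nn.ModuleList()
        in_ch = 3
        for _ in range(depth):
            self.enc.append(nn.Sequential(
                nn.Conv2d(in_ch, channels, 3, padding=1),
                nn.ReLU(inplace=True)))
            in_ch = channels
        for i in range(depth):
            out_ch = 3 if i == depth - 1 else channels
            act = nn.ReLU(inplace=True) if i < depth - 1 else nn.Identity()
            self.dec.append(nn.Sequential(
                nn.ConvTranspose2d(channels, out_ch, 3, padding=1), act))

    def forward(self, x):
        skips, out = [], x
        for layer in self.enc:
            out = layer(out)
            skips.append(out)
        for i, layer in enumerate(self.dec):
            # Symmetric skip: add the mirrored encoder activation before decoding.
            out = layer(out + skips[len(skips) - 1 - i])
        return out

net = SkipEncoderDecoder()
noisy = torch.rand(1, 3, 64, 64)                 # dummy corrupted image
restored = net(noisy)                            # same spatial size as input
loss = nn.functional.mse_loss(restored, torch.rand_like(noisy))
loss.backward()
```

    The additive skips are the part that matters here: they give gradients a short path to the early layers and carry fine image detail directly to the decoder.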

    $(1-\epsilon)$-Approximation of Knapsack in Nearly Quadratic Time

    Knapsack is one of the most fundamental problems in theoretical computer science. In the $(1-\epsilon)$-approximation setting, although there is a fine-grained lower bound of $(n + 1/\epsilon)^{2 - o(1)}$ based on the $(\min, +)$-convolution hypothesis ([Künnemann, Paturi and Schneider, ICALP 2017] and [Cygan, Mucha, Wegrzycki and Wlodarczyk, 2017]), the best known algorithm is randomized and runs in $\tilde O(n + (1/\epsilon)^{11/5})$ time [Deng, Jin and Mao, SODA 2023], and it remains an important open problem whether an algorithm with a running time that matches the lower bound (up to a sub-polynomial factor) exists. We answer the question positively by showing a deterministic $(1-\epsilon)$-approximation scheme for knapsack that runs in $\tilde O(n + (1/\epsilon)^{2})$ time. We first extend a known lemma in a recursive way to reduce the problem to an $n\epsilon$-additive approximation for $n$ items. Then we give a simple and efficient geometry-based algorithm for the reduced problem.
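    For context on what the $(1-\epsilon)$ guarantee means, here is the textbook value-scaling FPTAS for 0/1 knapsack in Python. This is not the paper's algorithm: it runs in roughly $O(n^3/\epsilon)$ time, far slower than the $\tilde O(n + (1/\epsilon)^2)$ bound claimed above. Function and variable names are illustrative assumptions.

```python
# Textbook value-scaling FPTAS for 0/1 knapsack, included only to illustrate
# the (1 - eps)-approximation guarantee; it is much slower than the paper's
# ~O(n + 1/eps^2) algorithm, which is not reproduced here.

def knapsack_approx(values, weights, capacity, eps):
    """Return a total value >= (1 - eps) * OPT, assuming every item fits alone."""
    n = len(values)
    vmax = max(values)
    # Scale values down so the DP table size is polynomial in n and 1/eps.
    scale = eps * vmax / n
    scaled = [int(v / scale) for v in values]
    max_scaled = sum(scaled)
    INF = float("inf")
    # dp[s] = minimum weight needed to reach scaled value exactly s.
    dp = [0] + [INF] * max_scaled
    for sv, w in zip(scaled, weights):
        for s in range(max_scaled, sv - 1, -1):
            if dp[s - sv] + w < dp[s]:
                dp[s] = dp[s - sv] + w
    best = max(s for s in range(max_scaled + 1) if dp[s] <= capacity)
    # Rounding loses at most n * scale = eps * vmax <= eps * OPT in value.
    return best * scale

# Example: items with values 60, 100, 120 and weights 1, 2, 3, capacity 5.
print(knapsack_approx([60, 100, 120], [1, 2, 3], 5, 0.1))   # ~220
```

    The scheme rounds each value down to a multiple of $\epsilon v_{\max}/n$ and solves the rounded instance exactly by dynamic programming; the total rounding error over at most $n$ chosen items is bounded by $\epsilon \cdot \mathrm{OPT}$, which gives the multiplicative guarantee.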