89,759 research outputs found

    On monotonicity of regression quantile functions

    In the linear regression quantile model, the conditional quantile of the response, $Y$, given $x$ is $Q_{Y|x}(\tau) \equiv x'\beta(\tau)$. Though $Q_{Y|x}(\tau)$ must be monotonically increasing in $\tau$, the Koenker–Bassett regression quantile estimator, $x'\hat\beta(\tau)$, is not monotonic outside a vanishingly small neighborhood of $\bar x$. Given a grid of mesh $\delta_n$, let $\hat Q_{Y|x}(\tau)$ be the linear interpolation of the values of $x'\hat\beta(\tau)$ along the grid. We show here that for a range of rates $\delta_n$, $\hat Q_{Y|x}(\tau)$ will be strictly monotonic (with probability tending to one) and will be asymptotically equivalent to $x'\hat\beta(\tau)$ in the sense that $n^{1/2}$ times the difference tends to zero at a rate depending on $\delta_n$.
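    A minimal numerical sketch of the construction described above, assuming statsmodels' QuantReg as the Koenker–Bassett estimator; the grid mesh, design point, and simulated heteroscedastic data are illustrative choices, not taken from the paper:

```python
# Sketch: estimate beta-hat(tau) on a grid of mesh delta_n, linearly interpolate,
# and check monotonicity of the interpolated conditional quantile at a point x0.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, size=n)
y = 1.0 + 0.5 * x + rng.standard_normal(n) * (1 + 0.1 * x)  # heteroscedastic noise
X = sm.add_constant(x)

delta_n = 0.02                                # grid mesh (illustrative)
grid = np.arange(0.05, 0.951, delta_n)

# Koenker-Bassett regression quantile estimates on the grid
beta_hat = np.array([sm.QuantReg(y, X).fit(q=t).params for t in grid])

# Linear interpolation of the fitted conditional quantile at a fixed design point
x0 = np.array([1.0, 7.5])                     # [intercept, x]
q_hat_grid = beta_hat @ x0                    # x0' beta_hat(tau_j) on the grid
taus = np.linspace(0.05, 0.95, 500)
q_tilde = np.interp(taus, grid, q_hat_grid)   # interpolated quantile function

print("strictly monotone on the fine grid:", bool(np.all(np.diff(q_tilde) > 0)))
```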

    EAST: An Efficient and Accurate Scene Text Detector

    Previous approaches for scene text detection have already achieved promising performance across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2 fps at 720p resolution. Comment: Accepted to CVPR 2017, fix equation (3).
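    The dense, single-network prediction described above can be illustrated with a small decoding sketch; the per-pixel channel layout (one score channel plus eight quadrilateral corner offsets) and the threshold are assumptions for illustration, not the paper's exact output parametrization:

```python
# Illustrative sketch (not the authors' code): turn a dense per-pixel text score map
# and per-pixel quadrilateral geometry into candidate text boxes in one pass.
import numpy as np

def decode_quads(score_map, geo_map, score_thresh=0.8):
    """score_map: (H, W) text confidence in [0, 1].
    geo_map: (H, W, 8) offsets from each pixel to the 4 quad corners (dx1,dy1,...,dx4,dy4).
    Returns an array of rows (score, x1, y1, ..., x4, y4)."""
    ys, xs = np.where(score_map >= score_thresh)           # keep confident pixels only
    quads = []
    for y, x in zip(ys, xs):
        offsets = geo_map[y, x].reshape(4, 2)               # 4 corner offsets
        corners = offsets + np.array([x, y], dtype=float)   # absolute corner coordinates
        quads.append(np.concatenate(([score_map[y, x]], corners.ravel())))
    return np.asarray(quads)

# Toy usage with random maps; a real detector would follow this with NMS.
H, W = 128, 128
quads = decode_quads(np.random.rand(H, W), np.random.randn(H, W, 8))
print(quads.shape)
```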

    BriskStream: Scaling Data Stream Processing on Shared-Memory Multicore Architectures

    We introduce BriskStream, an in-memory data stream processing system (DSPS) specifically designed for modern shared-memory multicore architectures. BriskStream's key contribution is an execution plan optimization paradigm, namely RLAS, which takes the relative location (i.e., NUMA distance) of each pair of producer-consumer operators into consideration. We propose a branch-and-bound based approach with three heuristics to resolve the resulting nontrivial optimization problem. The experimental evaluations demonstrate that BriskStream yields much higher throughput and better scalability than existing DSPSs on multicore architectures when processing different types of workloads. Comment: To appear in SIGMOD'19.
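    A toy sketch of the idea behind a relative-location-aware plan search, assuming a made-up two-socket NUMA distance matrix, a three-edge operator graph, and a plain branch-and-bound over operator-to-socket assignments; RLAS itself and its three heuristics are not reproduced here:

```python
# Illustrative sketch (assumptions, not BriskStream's actual RLAS): score an
# operator-to-socket placement by weighting each producer-consumer edge with the
# NUMA distance between the sockets hosting the two operators, and search
# placements with branch and bound, pruning on the best cost found so far.
NUMA_DISTANCE = [[10, 21], [21, 10]]                 # hypothetical 2-socket distance matrix
EDGES = [("source", "parser", 100_000),              # (producer, consumer, tuples/sec)
         ("parser", "join", 80_000),
         ("join", "sink", 60_000)]
OPERATORS = ["source", "parser", "join", "sink"]

def placement_cost(assign):
    """Sum of rate-weighted NUMA distances over producer-consumer pairs."""
    return sum(rate * NUMA_DISTANCE[assign[p]][assign[c]] for p, c, rate in EDGES)

def branch_and_bound():
    best_cost, best_assign = float("inf"), None

    def extend(assign, idx):
        nonlocal best_cost, best_assign
        if idx == len(OPERATORS):
            cost = placement_cost(assign)
            if cost < best_cost:
                best_cost, best_assign = cost, dict(assign)
            return
        for socket in range(len(NUMA_DISTANCE)):
            assign[OPERATORS[idx]] = socket
            # Prune: cost over already-placed pairs can only grow as more are placed.
            partial = sum(rate * NUMA_DISTANCE[assign[p]][assign[c]]
                          for p, c, rate in EDGES if p in assign and c in assign)
            if partial < best_cost:
                extend(assign, idx + 1)
            del assign[OPERATORS[idx]]

    extend({}, 0)
    return best_cost, best_assign

print(branch_and_bound())
```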