
    The Dual JL Transforms and Superfast Matrix Algorithms

We call a matrix algorithm superfast (aka running at sublinear cost) if it involves far fewer flops and memory cells than the matrix has entries. Such algorithms are highly desirable or even imperative in computations for Big Data, which involve immense matrices and quite typically reduce to solving a linear least squares problem and/or computing a low rank approximation of an input matrix. The known algorithms for these problems are not superfast, but we prove that certain superfast modifications of them output reasonable or even nearly optimal solutions for large input classes. We also propose, analyze, and test a novel superfast algorithm for iterative refinement of any crude but sufficiently close low rank approximation of a matrix. The results of our numerical tests are in good accordance with our formal study.
Comment: 36.1 pages, 5 figures, and 1 table. arXiv admin note: text overlap with arXiv:1710.07946, arXiv:1906.0411
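
As an illustration of the sketching idea behind such algorithms, here is a minimal sketch-and-solve scheme for linear least squares using a Gaussian JL-style sketching matrix. This is only a standard sketch under stated assumptions, not the paper's dual JL transforms; unlike the paper's superfast variants, it still reads every entry of the input.

```python
import numpy as np

def sketch_and_solve_ls(A, b, sketch_size):
    """Approximate argmin_x ||A x - b||_2 by solving the much smaller
    randomly sketched problem min_x ||S A x - S b||_2."""
    m, _ = A.shape
    # Gaussian JL-style sketching matrix; sparse sign matrices or
    # subsampled randomized Hadamard transforms are cheaper alternatives.
    S = np.random.default_rng(0).standard_normal((sketch_size, m))
    S /= np.sqrt(sketch_size)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

# Usage: a 10000 x 50 overdetermined system, sketch size ~ 4x the width.
rng = np.random.default_rng(1)
A = rng.standard_normal((10_000, 50))
b = rng.standard_normal(10_000)
x = sketch_and_solve_ls(A, b, sketch_size=200)
```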

    Superfast Refinement of Low Rank Approximation of a Matrix

Low rank approximation (LRA) of a matrix is a hot topic in modern computations. In applications to Big Data mining and analysis the input matrices are usually so immense that one must apply superfast algorithms, which access only a tiny fraction of the input entries and involve far fewer memory cells and flops than an input matrix has entries. Recently we devised and analyzed some superfast LRA algorithms. In this paper we extend a classical algorithm for iterative refinement of the solution of linear systems of equations to superfast refinement of a crude but reasonably close LRA. We support our refinement algorithm with some heuristic recipes for superfast a posteriori estimation of the errors of LRA and with superfast back and forth transitions between any LRA of a matrix and its SVD. Our algorithm of iterative refinement of LRA is the first attempt of this kind and should motivate further effort in that direction; already our initial tests are in good accordance with our formal study.
Comment: 12.5 pages, 1 table and 1 figure
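
The abstract does not spell out the refinement loop, so as a plainly named stand-in the sketch below refines a crude LRA M ≈ U V by alternating least squares, which cannot increase the Frobenius error on any sweep but, unlike the paper's superfast algorithm, reads all of M each time.

```python
import numpy as np

def refine_lra(M, U, V, sweeps=3):
    """Refine a crude low rank approximation M ~ U @ V by alternating
    least squares: fix U and solve for V, then fix V and solve for U.
    Each sweep cannot increase the Frobenius error ||M - U V||_F."""
    for _ in range(sweeps):
        V, *_ = np.linalg.lstsq(U, M, rcond=None)       # min_V ||U V - M||_F
        Ut, *_ = np.linalg.lstsq(V.T, M.T, rcond=None)  # min_U ||V.T U.T - M.T||_F
        U = Ut.T
    return U, V
```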

The Study on Influences of Online Review Helpfulness

With the surge in the number of online reviews, how to extract valuable information from a large amount of useless information has become a new problem that people face when shopping online. So far, many scholars have carried out research on this problem, but most studies were based on online reviews of tangible goods, and studies of online reviews of services are still very few. This research-in-progress aims to apply the ELM (Elaboration Likelihood Model) analysis framework to develop a model of customer review helpfulness from the perspective of cognitive theory. We plan to carry out empirical research on review data for services from Dianping.com and to test the model presented in this paper according to the theoretical analysis.

    CUR Low Rank Approximation of a Matrix at Sublinear Cost

Low rank approximation of a matrix (hereafter LRA) is a highly important area of Numerical Linear and Multilinear Algebra and Data Mining and Analysis. One can operate with LRA at sublinear cost, that is, by using far fewer memory cells and flops than an input matrix has entries, but no sublinear cost algorithm can compute accurate LRA of worst case input matrices, or even of the matrices of the small families in our Appendix. Nevertheless we prove that the celebrated Cross-Approximation (C-A) algorithms, and even more primitive sublinear cost algorithms, output quite accurate LRA for a large subclass of the class of all matrices that admit LRA, and in a sense for most of such matrices. Moreover, we accentuate the power of sublinear cost LRA by means of multiplicative pre-processing of an input matrix, which also reveals a link between C-A algorithms and randomized and sketching LRA algorithms. Our tests are in good accordance with our formal study.
Comment: 29 pages, 5 figures, 5 tables. arXiv admin note: text overlap with arXiv:1906.0492
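
A CUR approximation built from given row and column index sets illustrates why such algorithms can run at sublinear cost: only the selected rows and columns of the input are ever read. Choosing good index sets, the crux of C-A algorithms, is not modeled in this sketch; the indices are simply taken as given.

```python
import numpy as np

def cur_approximation(M, rows, cols):
    """CUR low rank approximation from given row/column index sets.
    Only the selected rows and columns of M are read, so the cost is
    sublinear in the number of entries of M."""
    C = M[:, cols]                # sampled columns
    R = M[rows, :]                # sampled rows
    G = M[np.ix_(rows, cols)]     # their intersection (the "generator")
    return C, np.linalg.pinv(G), R   # M ~ C @ U @ R with nucleus U = G^+

# Usage: a synthetic 500 x 400 matrix of exact rank 8.
rng = np.random.default_rng(2)
M = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 400))
rows = rng.choice(500, size=16, replace=False)
cols = rng.choice(400, size=16, replace=False)
C, U, R = cur_approximation(M, rows, cols)
rel_err = np.linalg.norm(M - C @ U @ R) / np.linalg.norm(M)  # ~ 1e-14
```

On a matrix of exact rank r, any index sets whose intersection also has rank r reconstruct the matrix exactly; for inputs that are only approximately of low rank, the quality of the index sets matters, which is what the paper analyzes.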

    Curvilinear object segmentation in medical images based on ODoS filter and deep learning network

Automatic segmentation of curvilinear objects in medical images plays an important role in the diagnosis and evaluation of human diseases, yet it remains challenging owing to issues such as varied image appearance, low contrast between curvilinear objects and their surrounding backgrounds, thin and uneven curvilinear structures, and improper background illumination. To overcome these challenges, we present a curvilinear structure segmentation framework that combines an oriented derivative of stick (ODoS) filter with a deep learning network. Many current deep learning models emphasize developing deep architectures while ignoring the structural features of curvilinear objects, which may lead to unsatisfactory results; our approach instead incorporates the ODoS filter as part of a deep learning network to improve the spatial attention paid to curvilinear objects. Specifically, the input image is transformed into a four-channel image constructed by the ODoS filter: the original image is retained as the principal part to capture varied image appearance and complex background illumination, a multi-step strategy is used to enhance the contrast between curvilinear objects and their surrounding backgrounds, and a vector field is applied to discriminate thin and uneven curvilinear structures. A deep learning framework then extracts structural features for curvilinear object segmentation. The model is validated in experiments on the publicly available DRIVE, STARE and CHASEDB1 datasets, where it compares favorably with some state-of-the-art methods.
Comment: 20 pages, 8 figures
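
The abstract does not specify the ODoS filter's implementation; the sketch below only shows how the described four-channel input might be assembled, with `odos_filter` a hypothetical stand-in assumed to return a contrast-enhanced response map and a two-component vector field.

```python
import numpy as np

def four_channel_input(image, odos_filter):
    """Stack the four input channels described in the abstract.
    `odos_filter` is a hypothetical stand-in assumed to return a
    contrast-enhanced response map and a 2-component vector field,
    all with the same shape as `image`."""
    response, vx, vy = odos_filter(image)
    # channel 0: original image (appearance, illumination conditions)
    # channel 1: multi-step contrast-enhanced response
    # channels 2-3: vector field for thin, uneven structures
    return np.stack([image, response, vx, vy], axis=0)
```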