3 research outputs found

    An Incomplete Tensor Tucker decomposition based Traffic Speed Prediction Method

    In intelligent transport systems, missing data is common and unavoidable, yet complete and valid traffic speed data is of great importance to such systems. A latent factorization-of-tensors (LFT) model is one of the most attractive approaches to recovering missing traffic data because of its good scalability. An LFT model is usually optimized with a stochastic gradient descent (SGD) solver; however, SGD-based LFT suffers from slow convergence. To address this issue, this work integrates the unique advantages of the proportional-integral-derivative (PID) controller into a Tucker decomposition based LFT model. It adopts two ideas: a) using Tucker decomposition to build an LFT model that achieves better recovery accuracy; and b) feeding an instance error adjusted according to PID control theory into the SGD solver to effectively improve the convergence rate. Experimental studies on traffic road speed datasets from two major cities show that the proposed model achieves significant efficiency gains and highly competitive prediction accuracy.
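    As a rough illustration of idea b), the Python sketch below (not the paper's code; the function name, PID gains KP/KI/KD, learning rate, and regularization weight are all assumptions) applies a PID-adjusted instance error inside an SGD loop over a Tucker-form LFT model:

    import numpy as np

    def pid_tucker_sgd(obs, shape, ranks, lr=0.01, lam=0.05,
                       KP=1.0, KI=0.2, KD=0.1, epochs=50):
        """Hypothetical PID-adjusted SGD for a Tucker-form LFT model.
        obs: list of observed entries (i, j, k, value) of a 3rd-order
        tensor, e.g. (road, day, time-slot, speed)."""
        I, J, K = shape
        R1, R2, R3 = ranks
        rng = np.random.default_rng(0)
        A = rng.standard_normal((I, R1)) * 0.1       # mode-1 factors
        B = rng.standard_normal((J, R2)) * 0.1       # mode-2 factors
        C = rng.standard_normal((K, R3)) * 0.1       # mode-3 factors
        G = rng.standard_normal((R1, R2, R3)) * 0.1  # core tensor
        integral, prev = {}, {}  # per-entry error history for I/D terms
        for _ in range(epochs):
            for (i, j, k, y) in obs:
                # prediction: core tensor contracted with one row per factor
                y_hat = np.einsum('pqr,p,q,r->', G, A[i], B[j], C[k])
                e = y - y_hat                        # proportional term
                integral[(i, j, k)] = integral.get((i, j, k), 0.0) + e
                d = e - prev.get((i, j, k), 0.0)     # derivative term
                prev[(i, j, k)] = e
                e_pid = KP * e + KI * integral[(i, j, k)] + KD * d
                # SGD step on 0.5*e^2 + L2 regularization, with e -> e_pid
                gA = -e_pid * np.einsum('pqr,q,r->p', G, B[j], C[k]) + lam * A[i]
                gB = -e_pid * np.einsum('pqr,p,r->q', G, A[i], C[k]) + lam * B[j]
                gC = -e_pid * np.einsum('pqr,p,q->r', G, A[i], B[j]) + lam * C[k]
                gG = -e_pid * np.einsum('p,q,r->pqr', A[i], B[j], C[k]) + lam * G
                A[i] -= lr * gA
                B[j] -= lr * gB
                C[k] -= lr * gC
                G -= lr * gG
        return A, B, C, G

    With KI = KD = 0 this reduces to plain SGD; the integral and derivative terms reshape each instance error using its own history, which is the mechanism the abstract credits for the improved convergence rate.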

    Structure-Aware Dynamic Scheduler for Parallel Machine Learning

    Training large machine learning (ML) models with many variables or parameters can take a long time if one employs sequential procedures, even with stochastic updates. A natural solution is to turn to distributed computing on a cluster; however, naive, unstructured parallelization of ML algorithms does not usually lead to a proportional speedup and can even result in divergence, because dependencies between model elements can attenuate the computational gains from parallelization and compromise the correctness of inference. Recent efforts to address this issue have benefited from exploiting the static, a priori block structures residing in ML algorithms. In this paper, we take this path further by exploring the dynamic block structures and workloads present during ML program execution, which offer new opportunities for improving convergence, correctness, and load balancing in distributed ML. We propose and showcase a general-purpose scheduler, STRADS, for coordinating distributed updates in ML algorithms, which harnesses these opportunities in a systematic way. We provide theoretical guarantees for our scheduler and demonstrate its efficacy versus static block structures on Lasso and matrix factorization.
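    STRADS itself is a distributed system; the single-process Python sketch below is only a hypothetical illustration (all names, thresholds, and the sampling heuristic are assumptions, not the paper's algorithm) of the two scheduling ideas the abstract describes, applied to Lasso: dynamic prioritization of coordinates, and a dependency check so that coordinates updated in the same round are nearly uncorrelated:

    import numpy as np

    def schedule_round(X, r, beta, lam, n_workers=4, corr_tol=0.1):
        """One round of a STRADS-like dynamic scheduler for Lasso
        (0.5*||y - X beta||^2 + lam*||beta||_1), with r = y - X @ beta."""
        n, p = X.shape
        # 1. Dynamic prioritization: sample a candidate pool with
        #    probability proportional to current coefficient magnitude
        #    (a stand-in for "changed a lot recently").
        scores = np.abs(beta) + 1e-6
        pool = np.random.choice(p, size=min(p, n_workers * 4),
                                replace=False, p=scores / scores.sum())
        # 2. Dependency check: greedily keep coordinates whose feature
        #    columns are nearly uncorrelated, so updates assigned to
        #    different workers would not conflict.
        chosen = []
        for j in pool:
            if all(abs(X[:, j] @ X[:, k]) / n < corr_tol for k in chosen):
                chosen.append(j)
            if len(chosen) == n_workers:
                break
        # 3. Dispatch: sequential here, but the low-correlation guarantee
        #    is what would make these coordinate steps safe in parallel.
        for j in chosen:
            xj = X[:, j]
            rho = xj @ r + (xj @ xj) * beta[j]
            b_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / (xj @ xj)
            r += xj * (beta[j] - b_new)   # keep residual consistent
            beta[j] = b_new
        return beta, r

    A static scheduler would fix the parameter blocks up front; re-sampling the pool every round from the evolving scores is what makes the block structure dynamic.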

    A Novel Non-Negative Matrix Factorization Method for Recommender Systems

    Recommender systems collect various kinds of data to create their recommendations. Collaborative filtering is a common technique in this area: it gathers and analyzes information on users' preferences, then estimates what a user will like based on their similarity to other users. However, most current collaborative filtering approaches face two problems: sparsity and scalability. This paper proposes a novel method that applies non-negative matrix factorization to alleviate these problems via matrix factorization and similarity. Non-negative matrix factorization attempts to find two non-negative matrices whose product closely approximates the original matrix, imposing non-negativity constraints on the latent factors. The proposed method presents novel update rules for learning the latent factors used to predict unknown ratings. Unlike most collaborative filtering methods, it can predict all the unknown ratings; it is also easy to implement, and its computational complexity is very low. Empirical studies on the MovieLens and Book-Crossing datasets show that the proposed method is more tolerant of sparsity, scales better, and obtains good results.
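    For orientation, here is a minimal Python sketch of masked non-negative matrix factorization with multiplicative updates (a standard Lee-Seung-style formulation, not necessarily the paper's novel update rules; all names and hyperparameters are assumptions):

    import numpy as np

    def nmf_recommend(R, mask, k=10, iters=200, eps=1e-9):
        """Masked NMF for rating prediction. R is the user-item rating
        matrix; mask is 1 where a rating is observed, 0 elsewhere."""
        m, n = R.shape
        rng = np.random.default_rng(0)
        W = rng.random((m, k)) + eps   # non-negative user factors
        H = rng.random((k, n)) + eps   # non-negative item factors
        for _ in range(iters):
            # Multiplicative updates: ratios of non-negative terms keep
            # W and H non-negative; the mask restricts the fit to
            # observed ratings only.
            W *= ((mask * R) @ H.T) / ((mask * (W @ H)) @ H.T + eps)
            H *= (W.T @ (mask * R)) / (W.T @ (mask * (W @ H)) + eps)
        return W @ H   # dense: an estimate for every (user, item) pair

    Because the product W @ H is dense, every unobserved entry receives an estimate, which matches the abstract's claim that all unknown ratings can be predicted.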