6 research outputs found

    An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification

    Sparsity regularized loss minimization problems play an important role in various fields including machine learning, data mining, and modern statistics. The proximal gradient descent method and the coordinate descent method are the most popular approaches to solving such problems. Although existing methods can achieve implicit model identification, also known as support set identification, in a finite number of iterations, they still suffer from large computational costs and memory burdens in high-dimensional scenarios. The reason is that their support set identification is implicit: they cannot explicitly identify the low-complexity structure in practice, that is, they cannot discard the useless coefficients of the associated features to achieve algorithmic acceleration via dimension reduction. To address this challenge, we propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity regularized loss minimization problems, which reduces the number of block iterations by eliminating inactive coefficients during optimization, thereby achieving faster explicit model identification and improving algorithmic efficiency. Theoretically, we first prove that ADSGD achieves a linear convergence rate and lower overall computational complexity. More importantly, we prove that ADSGD achieves a linear rate of explicit model identification. Numerically, experimental results on benchmark datasets confirm the efficiency of our proposed method.
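    To make the idea of explicit model identification concrete, here is a minimal NumPy sketch of a proximal gradient loop with L1 soft-thresholding on a least-squares loss, in which coordinates that stay at zero for several consecutive iterations are dropped from the working set. This is only an illustration of the general principle, not the authors' ADSGD method; the function names, the least-squares loss, and the streak threshold of 5 are assumptions made for the example.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_grad_with_elimination(X, y, lam, step, n_iters=100):
    """Toy proximal gradient on 0.5*||Xw - y||^2 + lam*||w||_1 that discards
    coordinates remaining at zero for several consecutive iterations,
    a crude stand-in for explicit model identification."""
    n, d = X.shape
    w = np.zeros(d)
    active = np.arange(d)                  # coordinates still in the working set
    zero_streak = np.zeros(d, dtype=int)
    for _ in range(n_iters):
        Xa = X[:, active]
        grad = Xa.T @ (Xa @ w[active] - y)             # gradient on active block only
        w[active] = soft_threshold(w[active] - step * grad, step * lam)
        # track coordinates that stay at zero and drop persistent ones
        zero_streak[active] = np.where(w[active] == 0.0, zero_streak[active] + 1, 0)
        active = active[zero_streak[active] < 5]       # eliminate inactive coefficients
    return w
```

    Once a coordinate is eliminated, subsequent iterations only touch the shrinking active block, which is where the dimension-reduction speedup described in the abstract would come from.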

    Sampling Through the Lens of Sequential Decision Making

    Sampling is ubiquitous in machine learning methodologies. Due to the growth of large datasets and model complexity, we want to learn and adapt the sampling process while training a representation. Towards achieving this goal, a variety of sampling techniques have been proposed. However, most of them either use a fixed sampling scheme or adjust the sampling scheme based on simple heuristics, and they cannot choose the best samples for model training at different stages. Inspired by "Thinking, Fast and Slow" (System 1 and System 2) in cognitive science, we propose a reward-guided sampling strategy called Adaptive Sample with Reward (ASR) to tackle this challenge. To the best of our knowledge, this is the first work utilizing reinforcement learning (RL) to address the sampling problem in representation learning. Our approach adjusts the sampling process to achieve optimal performance, exploring geographical relationships among samples through distance-based sampling to maximize the overall cumulative reward. We apply ASR to long-standing sampling problems in similarity-based loss functions. Empirical results in information retrieval and clustering demonstrate ASR's superior performance across different datasets. We also discuss an intriguing phenomenon observed in our experiments, which we name the "ASR gravity well".
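    As a rough illustration of reward-guided sampling (not the paper's ASR algorithm, which works with distance-based sampling inside similarity-based losses), the toy sketch below treats sampling probabilities over a data pool as a softmax policy and nudges the logits with a REINFORCE-style update driven by a scalar reward. The function name, reward signal, and hyperparameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward_guided_sampler(pool_size, reward_fn, n_rounds=200, lr=0.1):
    """Toy reward-guided sampling: maintain softmax logits over a sample pool
    and update them with a REINFORCE-style rule so that samples yielding
    higher reward are drawn more often."""
    logits = np.zeros(pool_size)
    baseline = 0.0
    for _ in range(n_rounds):
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        idx = rng.choice(pool_size, p=probs)       # draw one sample index
        r = reward_fn(idx)                         # e.g. negative loss on that sample
        baseline = 0.9 * baseline + 0.1 * r        # moving-average baseline
        grad = -probs
        grad[idx] += 1.0                           # d log pi(idx) / d logits
        logits += lr * (r - baseline) * grad
    return logits

# Example: a reward that favours indices near 7, so their probability grows.
logits = reward_guided_sampler(10, lambda i: -abs(i - 7))
```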

    Online Transfer Learning for RSV Case Detection

    Transfer learning has become a pivotal technique in machine learning and has proven effective in various real-world applications. However, utilizing this technique for classification tasks with sequential data often faces challenges, primarily attributed to the scarcity of class labels. To address this challenge, we introduce Multi-Source Adaptive Weighting (MSAW), an online multi-source transfer learning method. MSAW integrates a dynamic weighting mechanism into an ensemble framework, enabling automatic adjustment of weights based on the relevance and contribution of each source model (representing historical knowledge) and the target model (learning from newly acquired data). We demonstrate the effectiveness of MSAW by applying it to detect Respiratory Syncytial Virus cases within Emergency Department visits, utilizing multiple years of electronic health records from the University of Pittsburgh Medical Center. Our method demonstrates performance improvements over many baselines, including refining pre-trained models with online learning as well as three static weighting approaches, showing MSAW's capacity to integrate historical knowledge with progressively accumulated new data. This study indicates the potential of online transfer learning in healthcare, particularly for developing machine learning models that dynamically adapt to evolving situations where new data is incrementally accumulated.
    Comment: 10 pages, 2 figures
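    The sketch below illustrates the general idea of combining several source models with a target model under dynamically adjusted weights, using a simple multiplicative (Hedge-style) update driven by each model's per-batch loss. It is not the MSAW implementation; the scikit-learn-style predict_proba interface, the log-loss criterion, and the learning rate eta are assumptions made for the example.

```python
import numpy as np

class DynamicWeightedEnsemble:
    """Toy online ensemble: weights shrink multiplicatively with each model's
    per-batch loss, so better-matched sources dominate the combined
    prediction as new data arrives."""

    def __init__(self, models, eta=1.0):
        self.models = models                          # each exposes .predict_proba(X)
        self.weights = np.ones(len(models)) / len(models)
        self.eta = eta

    def predict_proba(self, X):
        preds = np.stack([m.predict_proba(X) for m in self.models])  # (M, n, k)
        return np.tensordot(self.weights, preds, axes=1)             # weighted average

    def update(self, X, y):
        # per-model log loss on the incoming batch of labelled data
        losses = np.array([
            -np.mean(np.log(m.predict_proba(X)[np.arange(len(y)), y] + 1e-12))
            for m in self.models
        ])
        self.weights *= np.exp(-self.eta * losses)
        self.weights /= self.weights.sum()
```

    In an online setting, update() would be called after each new batch of labelled visits, so the weights track which source (or the continually refined target model) currently explains the incoming data best.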

    Efficient Learning Algorithms for Training Large-Scale and High-Dimensional Machine Learning Models

    Machine learning has achieved tremendous success and has played increasingly essential roles in many application scenarios over the past decades. Recent advances in machine learning rely heavily upon the emergence of big data with both massive samples and numerous features. However, the computational inefficiency and memory burden of learning algorithms restrict the capability of machine learning for large-scale applications, so it is important to design efficient learning algorithms for big data mining. In this dissertation, we propose several newly designed efficient learning algorithms that address the challenges of high dimensionality, in both samples and features, for big data mining. First, we develop an efficient approximate solution path algorithm and introduce a safe screening rule to accelerate model training for Ordered Weighted L1 regression. We also formulate a unified safe variable screening rule for the family of ordered weighted sparse models, which can effectively accelerate their training algorithms. Second, we develop a new accelerated doubly stochastic gradient descent method for regularized loss minimization problems, which simultaneously achieves a linear convergence rate and a linear rate of explicit model identification. Finally, we design a novel distributed dynamic safe screening method for solving sparse models in parallel and distributed computing, and apply it to both shared-memory and distributed-memory architectures; the method accelerates the training process without any loss of accuracy. The contributions of this thesis are expected to speed up the training of large-scale machine learning models through smart handling of model sparsity and data sparsity.
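    As one concrete example of what a screening rule does, the sketch below applies a textbook SAFE-style test for the plain Lasso: features whose correlation with the response is provably too small to enter the solution are discarded before training. This only illustrates the screening idea in its simplest setting; the dissertation's rules target ordered weighted sparse models and dynamic, distributed variants, and the formula shown here is the standard Lasso one, not those rules.

```python
import numpy as np

def safe_screen_lasso(X, y, lam):
    """Textbook SAFE-style screening for the plain Lasso
    0.5*||y - Xw||^2 + lam*||w||_1: features whose coefficient is
    provably zero at the optimum are removed before training."""
    corr = np.abs(X.T @ y)                      # |x_j^T y| for every feature j
    lam_max = corr.max()                        # smallest lam giving the all-zero solution
    col_norms = np.linalg.norm(X, axis=0)
    y_norm = np.linalg.norm(y)
    threshold = lam - col_norms * y_norm * (lam_max - lam) / lam_max
    keep = corr >= threshold                    # features surviving the screen
    return np.flatnonzero(keep)

# A Lasso solver then only needs X[:, kept]; discarded coefficients are zero.
```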

    Demystify the Gravity Well in the Optimization Landscape (Student Abstract)

    We provide both empirical and theoretical insights to demystify the gravity well phenomenon in the optimization landscape. We start by describing the problem setup and the theoretical results (an escape time lower bound) for the Softmax Gravity Well (SGW) in the literature. We then move toward understanding a recently observed phenomenon called the ASR gravity well. We explain, from an energy-function point of view, why a normal distribution with high variance can lead to suboptimal plateaus. We also contribute empirical insights into curriculum learning by comparing policy initializations drawn from different normal distributions. Furthermore, we provide an ASR escape time lower bound to understand the ASR gravity well theoretically. Future work includes more specific modeling of the reward as a function of time and a quantitative evaluation of the normal distribution's influence on policy initialization.
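    The following small simulation (not from the paper) illustrates the intuition behind the high-variance claim: drawing softmax logits from a wide normal distribution concentrates the policy on a single action, which flattens policy-gradient signals and can create a hard-to-escape plateau. The action count, sample size, and variance grid are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Mean entropy of a softmax policy under normal logit initialization:
# higher variance yields near-deterministic (low-entropy) policies.
for sigma in (0.1, 1.0, 10.0):
    entropies = []
    for _ in range(1000):
        pi = softmax(rng.normal(0.0, sigma, size=10))
        entropies.append(-(pi * np.log(pi + 1e-12)).sum())
    print(f"sigma={sigma:5.1f}  mean policy entropy={np.mean(entropies):.3f}")
```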