
    Accelerated Variance Reduced Stochastic ADMM

    Recently, many variance-reduced stochastic alternating direction method of multipliers (ADMM) methods (e.g.\ SAG-ADMM, SDCA-ADMM, and SVRG-ADMM) have made exciting progress, such as linear convergence rates for strongly convex problems. However, the best known convergence rate for general convex problems is O(1/T), as opposed to the O(1/T^2) of accelerated batch algorithms, where T is the number of iterations. Thus, a gap in convergence rates remains between existing stochastic ADMM and batch algorithms. To bridge this gap, we introduce the momentum acceleration trick from batch optimization into the stochastic variance reduced gradient based ADMM (SVRG-ADMM), which leads to an accelerated method, ASVRG-ADMM. We then design two different momentum term update rules for the strongly convex and general convex cases. We prove that ASVRG-ADMM converges linearly for strongly convex problems. Besides having a per-iteration complexity as low as existing stochastic ADMM methods, ASVRG-ADMM improves the convergence rate on general convex problems from O(1/T) to O(1/T^2). Our experimental results show the effectiveness of ASVRG-ADMM. Comment: 16 pages, 5 figures. Appears in Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI), San Francisco, California, USA, pp. 2287--2293, 2017
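    The abstract's core idea, combining an SVRG-style variance-reduced gradient estimator with a momentum term, can be sketched in isolation. The following is an illustrative Python sketch on a plain ridge-regression objective, not the paper's ADMM formulation (no splitting variables or multiplier updates); the step size `eta`, momentum weight `beta`, and the heavy-ball-style update rule are assumptions made for illustration.

```python
import numpy as np

def svrg_momentum(A, b, lam=0.1, n_epochs=20, m=None, eta=0.01, beta=0.9):
    """Sketch: SVRG variance-reduced gradients plus a momentum term.

    Minimizes the ridge objective f(x) = (1/2n)||Ax - b||^2 + (lam/2)||x||^2.
    This illustrates the variance-reduction + momentum combination only; it is
    not the paper's ASVRG-ADMM method (no ADMM splitting or multiplier steps).
    """
    n, d = A.shape
    m = m or n                       # inner-loop length per epoch
    x = np.zeros(d)
    v = np.zeros(d)                  # momentum buffer
    grad = lambda z, i: A[i] * (A[i] @ z - b[i]) + lam * z  # per-sample gradient

    for _ in range(n_epochs):
        x_snap = x.copy()
        # full gradient at the snapshot point (computed once per epoch)
        full_grad = A.T @ (A @ x_snap - b) / n + lam * x_snap
        for _ in range(m):
            i = np.random.randint(n)
            # variance-reduced stochastic gradient: its variance vanishes
            # as both x and x_snap approach the optimum
            g = grad(x, i) - grad(x_snap, i) + full_grad
            v = beta * v - eta * g   # heavy-ball style momentum update
            x = x + v
    return x
```

The single full-gradient pass per epoch is what keeps the per-iteration cost low while removing the stochastic gradient's variance near the optimum.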

    Scalable Algorithms for Tractable Schatten Quasi-Norm Minimization

    The Schatten-p quasi-norm (0<p<1) is usually used to replace the standard nuclear norm in order to approximate the rank function more accurately. However, existing Schatten-p quasi-norm minimization algorithms involve a singular value decomposition (SVD) or eigenvalue decomposition (EVD) in each iteration, and thus may become very slow and impractical for large-scale problems. In this paper, we first define two tractable Schatten quasi-norms, i.e., the Frobenius/nuclear hybrid and bi-nuclear quasi-norms, and then prove that they are in essence the Schatten-2/3 and 1/2 quasi-norms, respectively, which leads to the design of very efficient algorithms that only need to update two much smaller factor matrices. We also design two efficient proximal alternating linearized minimization algorithms for solving representative matrix completion problems. Finally, we provide global convergence and performance guarantees for our algorithms, which have better convergence properties than existing algorithms. Experimental results on synthetic and real-world data show that our algorithms are more accurate than the state-of-the-art methods, and are orders of magnitude faster. Comment: 16 pages, 5 figures. Appears in Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), Phoenix, Arizona, USA, pp. 2016--2022, 2016
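    The computational point, updating two much smaller factor matrices instead of taking an SVD of the full matrix in every iteration, can be illustrated with a minimal alternating-minimization sketch. Note this uses squared-Frobenius penalties on the factors (the classical variational surrogate for the nuclear norm), not the paper's bi-nuclear or Frobenius/nuclear hybrid quasi-norm penalties; the rank, regularization weight, and row-wise ridge solver are choices made for this sketch.

```python
import numpy as np

def als_completion(M, mask, rank=5, lam=0.1, n_iters=50):
    """Sketch: factored matrix completion that only updates two small factors.

    Fits M ~= U @ V.T on observed entries (mask == 1), with squared-Frobenius
    regularization on U and V, i.e. the classical nuclear-norm surrogate
    min over factorizations of (||U||_F^2 + ||V||_F^2)/2.  The paper's
    bi-nuclear / Frobenius-nuclear hybrid quasi-norms instead penalize
    nuclear norms of the factors; this sketch keeps the simpler variant.
    """
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank)) * 0.1
    V = rng.standard_normal((n, rank)) * 0.1
    I = lam * np.eye(rank)
    for _ in range(n_iters):
        # Row-wise ridge solves on rank x rank systems: no SVD/EVD of the
        # full m x n matrix is ever needed.
        for i in range(m):
            obs = mask[i] == 1
            Vi = V[obs]
            U[i] = np.linalg.solve(Vi.T @ Vi + I, Vi.T @ M[i, obs])
        for j in range(n):
            obs = mask[:, j] == 1
            Uj = U[obs]
            V[j] = np.linalg.solve(Uj.T @ Uj + I, Uj.T @ M[obs, j])
    return U, V
```

Each inner solve touches only a rank x rank system, which is what makes factored formulations scale where per-iteration SVDs do not.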

    A Non-Prototypical Perspective of Transitivity: Evidence-based Research

    The prototype theory of transitivity was first developed in the 1970s as a response to the Aristotelian classical theory. Despite its popularity, it has shortcomings that cannot go unchallenged, including the existence of fuzzy boundaries and problems related to graded categorization. This study thus sought to refute the prototype perspective by highlighting its weaknesses and providing counter-evidence. It employed a thematic analysis methodology in which 8 main sources were analyzed to identify the weaknesses of the prototype theory and refute its claims through empirically based counter-arguments. The thematic analysis method was important because the emergent themes directly provided answers to the research questions. The study points out that the prototype category does not solve the transitivity problem, and in fact complicates it. Because it is constrained, the prototype category has no ultimate explanatory power. In contrast, the research is able to demonstrate the strong explanatory power of classical category theory. The implication of the study is that successfully falsifying prototypical transitivity is significant in that it challenges conventional thought. This argument against the prototype theory is innovative, providing food for thought for linguists all across the world.

    Inter-tier Interference Suppression in Heterogeneous Cloud Radio Access Networks

    Incorporating cloud computing into heterogeneous networks, the heterogeneous cloud radio access network (H-CRAN) has been proposed as a promising paradigm to enhance both spectral and energy efficiencies. Developing interference suppression strategies is critical for suppressing the inter-tier interference between remote radio heads (RRHs) and a macro base station (MBS) in H-CRANs. In this paper, inter-tier interference suppression techniques are considered in the contexts of collaborative processing and cooperative radio resource allocation (CRRA). In particular, interference collaboration (IC) and beamforming (BF) are proposed to suppress the inter-tier interference, and their corresponding performance is evaluated. Closed-form expressions for the overall outage probabilities, system capacities, and average bit error rates under these two schemes are derived. Furthermore, IC- and BF-based CRRA optimization models are presented to maximize the RRH-accessed users' sum rates via power allocation, which is solved with convex optimization. Simulation results demonstrate that the derived expressions for these performance metrics for IC and BF are accurate, and that the relative performance between the IC and BF schemes depends on system parameters, such as the number of antennas at the MBS, the number of RRHs, and the target signal-to-interference-plus-noise ratio threshold. Furthermore, the sum rates of the IC and BF schemes increase almost linearly with the transmit power threshold under the proposed CRRA optimization solution.
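    The abstract's validation step, checking derived closed-form outage expressions against simulation, can be illustrated on a generic single-interferer Rayleigh-fading model. This toy model (exponentially distributed signal and interference powers, fixed noise) is an assumption for illustration and is not the paper's IC/BF H-CRAN setup; the parameter names `omega_s`, `omega_i`, and `theta` are likewise hypothetical.

```python
import numpy as np

def outage_mc(theta, omega_s=1.0, omega_i=0.5, noise=0.1, n=200_000, seed=0):
    """Monte Carlo outage probability for a Rayleigh-faded link with one
    Rayleigh-faded interferer (a generic toy model, not the paper's H-CRAN
    IC/BF schemes): outage occurs when SINR = S / (I + N) < theta."""
    rng = np.random.default_rng(seed)
    S = rng.exponential(omega_s, n)   # desired signal power (mean omega_s)
    I = rng.exponential(omega_i, n)   # inter-tier interference power
    return np.mean(S / (I + noise) < theta)

def outage_closed_form(theta, omega_s=1.0, omega_i=0.5, noise=0.1):
    """Matching closed form, obtained by integrating P(S > theta*(I+N)) over
    the exponential interference density:
        P_out = 1 - exp(-theta*N/omega_s) / (1 + theta*omega_i/omega_s)."""
    return 1.0 - np.exp(-theta * noise / omega_s) / (1.0 + theta * omega_i / omega_s)
```

Agreement between the two functions over a range of thresholds is the same kind of sanity check the paper's simulations perform for its (more involved) IC and BF expressions.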

    A condition-based opportunistic maintenance policy integrated with energy efficiency for two-component parallel systems

    Purpose: In order to improve energy utilization and achieve sustainable development, this paper integrates energy efficiency into condition-based maintenance (CBM) decision-making for two-component parallel systems. The objective is to obtain the optimal maintenance policy by minimizing total cost. Design/methodology/approach: Based on energy efficiency, the paper considers the economic dependence between the two components to enable opportunistic maintenance. Specifically, the objective function consists of traditional maintenance cost and the energy cost incurred by the energy consumption of the components. In order to assess the performance of the proposed maintenance policy, the paper uses the Monte Carlo method to evaluate the total cost and find the optimal maintenance policy. Findings: Simulation results indicate that the new maintenance policy is superior to the classical condition-based opportunistic maintenance policy in terms of total economic cost. Originality/value: For two-component parallel systems, previous research usually establishes a condition-based opportunistic maintenance model based on real deterioration data but ignores energy consumption, energy efficiency (EE), and their contributions to sustainable development. This paper takes energy efficiency into the condition-based maintenance (CBM) decision-making process and proposes a new condition-based opportunistic maintenance policy using an energy efficiency indicator (EEI). Peer Reviewed
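    The Monte Carlo evaluation described above can be sketched on a deliberately simplified model: linear stochastic degradation, threshold-triggered maintenance, and an energy cost that grows with wear. All cost figures, thresholds, and the wear distribution below are hypothetical choices for illustration, not the paper's model or data.

```python
import numpy as np

def simulate_policy(pm_th=6.0, opp_th=4.0, fail_th=10.0,
                    horizon=300, n_runs=50, seed=0):
    """Monte Carlo average cost of a condition-based opportunistic maintenance
    policy for a two-component parallel system (illustrative toy model).

    Per period, each component accrues random wear and an energy cost that
    grows with its wear.  A component is preventively maintained once its wear
    reaches pm_th and correctively replaced at fail_th; whenever a maintenance
    visit is triggered, the other component is opportunistically maintained
    too if its wear exceeds opp_th, sharing the single set-up cost.
    """
    C_SETUP, C_PM, C_CM, C_ENERGY = 50.0, 20.0, 100.0, 0.5
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_runs):
        x = np.zeros(2)                      # degradation levels
        cost = 0.0
        for _ in range(horizon):
            x += rng.exponential(0.5, 2)     # random wear increments
            cost += C_ENERGY * x.sum()       # degradation raises energy use
            if (x >= pm_th).any():           # a maintenance visit is triggered
                cost += C_SETUP              # one shared set-up per visit
                for k in range(2):
                    if x[k] >= fail_th:
                        cost += C_CM         # corrective replacement
                        x[k] = 0.0
                    elif x[k] >= opp_th:     # preventive or opportunistic
                        cost += C_PM
                        x[k] = 0.0
        total += cost
    return total / n_runs
```

Setting `opp_th = pm_th` disables opportunism, so the same function can compare the opportunistic policy against a component-wise CBM baseline, mirroring the paper's comparison on its own (different) model.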