34 research outputs found

    SDPNAL+: A Matlab software for semidefinite programming with bound constraints (version 1.0)

    Full text link
    SDPNAL+ is a Matlab software package that implements an augmented Lagrangian based method to solve large scale semidefinite programming problems with bound constraints. The implementation was initially based on a majorized semismooth Newton-CG augmented Lagrangian method; here we redesign it within an inexact symmetric Gauss-Seidel based semi-proximal ADMM/ALM (alternating direction method of multipliers/augmented Lagrangian method) framework, for the purpose of deriving simpler stopping conditions and closing the gap between the practical implementation and the theoretical algorithm. The basic code is written in Matlab, but some subroutines in the C language are incorporated via Mex files. We also design a convenient interface for users to input their SDP models into the solver. Numerous problems arising from combinatorial optimization and binary integer quadratic programming have been tested to evaluate the performance of the solver. Extensive numerical experiments conducted in [Yang, Sun, and Toh, Mathematical Programming Computation, 7 (2015), pp. 331--366] show that the proposed method is quite efficient and robust, in that it is able to solve 98.9% of the 745 test instances of SDP problems arising from various applications to an accuracy of 10^{-6} in the relative KKT residual.
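    A minimal sketch of the basic building block inside augmented-Lagrangian SDP solvers of this kind: projecting a symmetric matrix onto the positive semidefinite cone by eigenvalue thresholding. This is an illustrative numpy fragment, not code from the SDPNAL+ package itself.

```python
import numpy as np

def project_psd(A):
    """Project a symmetric matrix onto the positive semidefinite cone.

    Eigenvalue thresholding is the core subroutine that augmented
    Lagrangian SDP methods apply repeatedly at each iteration.
    """
    A = (A + A.T) / 2            # symmetrize to guard against round-off
    w, V = np.linalg.eigh(A)     # spectral decomposition A = V diag(w) V^T
    w = np.maximum(w, 0.0)       # clip the negative eigenvalues to zero
    return (V * w) @ V.T         # reassemble V diag(max(w, 0)) V^T

M = np.array([[1.0, 2.0], [2.0, -3.0]])
P = project_psd(M)
# P is the nearest PSD matrix to M in the Frobenius norm
```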

    An efficient sieving based secant method for sparse optimization problems with least-squares constraints

    Full text link
    In this paper, we propose an efficient sieving based secant method to address the computational challenges of solving sparse optimization problems with least-squares constraints. A level-set method has been introduced in [X. Li, D.F. Sun, and K.-C. Toh, SIAM J. Optim., 28 (2018), pp. 1842--1866] that solves these problems by using the bisection method to find a root of a univariate nonsmooth equation φ(λ) = ϱ for some ϱ > 0, where φ(·) is the value function computed by a solution of the corresponding regularized least-squares optimization problem. When the objective function in the constrained problem is a polyhedral gauge function, we prove that (i) for any positive integer k, φ(·) is piecewise C^k in an open interval containing the solution λ* to the equation φ(λ) = ϱ; (ii) the Clarke Jacobian of φ(·) is always positive. These results allow us to establish the essential ingredients of the fast convergence rates of the secant method. Moreover, an adaptive sieving technique is incorporated into the secant method to effectively reduce the dimension of the level-set subproblems for computing the value of φ(·). The high efficiency of the proposed algorithm is demonstrated by extensive numerical results.
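    The secant iteration at the heart of the method can be sketched as follows; the toy φ below is a stand-in (in the paper, evaluating φ means solving a regularized least-squares subproblem, which the sieving technique shrinks).

```python
def secant_root(phi, rho, lam0, lam1, tol=1e-10, max_iter=50):
    """Secant iteration for the univariate equation phi(lam) = rho."""
    f0, f1 = phi(lam0) - rho, phi(lam1) - rho
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        # standard secant update using the last two iterates
        lam0, lam1 = lam1, lam1 - f1 * (lam1 - lam0) / (f1 - f0)
        f0, f1 = f1, phi(lam1) - rho
    return lam1

# toy stand-in for the value function; the real phi(lam) comes from a
# solution of the corresponding regularized least-squares problem
root = secant_root(lambda lam: 1.0 / (1.0 + lam), 0.5, 0.1, 2.0)
# root is approximately 1.0, since 1/(1+1) = 0.5
```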

    An Efficient HPR Algorithm for the Wasserstein Barycenter Problem with O(Dim(P)/ε) Computational Complexity

    Full text link
    In this paper, we propose and analyze an efficient Halpern-Peaceman-Rachford (HPR) algorithm for solving the Wasserstein barycenter problem (WBP) with fixed supports. While the Peaceman-Rachford (PR) splitting method itself may not be convergent for solving the WBP, the HPR algorithm can achieve an O(1/ε) non-ergodic iteration complexity with respect to the Karush-Kuhn-Tucker (KKT) residual. More interestingly, we propose an efficient procedure with linear time computational complexity to solve the linear systems involved in the subproblems of the HPR algorithm. As a consequence, the HPR algorithm enjoys an O(Dim(P)/ε) non-ergodic computational complexity in terms of flops for obtaining an ε-optimal solution measured by the KKT residual for the WBP, where Dim(P) is the dimension of the variable of the WBP. This is better than the best-known complexity bound for the WBP. Moreover, extensive numerical results on both synthetic and real data sets demonstrate the superior performance of the HPR algorithm for solving the large-scale WBP.
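    A schematic illustration of the Halpern anchoring idea, under the assumption that T is a generic nonexpansive operator (here a plane rotation, whose plain fixed-point iteration never converges); this is not the paper's PR operator for the WBP.

```python
import numpy as np

def halpern(T, x0, iters=2000):
    """Halpern iteration x_{k+1} = t_k x0 + (1 - t_k) T(x_k), t_k = 1/(k+2)."""
    x = x0.copy()
    for k in range(iters):
        t = 1.0 / (k + 2)          # anchoring weight, vanishing at rate 1/k
        x = t * x0 + (1 - t) * T(x)
    return x

theta = np.pi / 2                  # 90-degree rotation: nonexpansive,
R = np.array([[np.cos(theta), -np.sin(theta)],   # unique fixed point 0
              [np.sin(theta),  np.cos(theta)]])
x0 = np.array([1.0, 0.0])
x = halpern(lambda v: R @ v, x0)
# plain iteration of R just circles forever; anchoring toward x0 with
# vanishing weight pulls the iterates to the fixed point
```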

    Randomly Projected Convex Clustering Model: Motivation, Realization, and Cluster Recovery Guarantees

    Full text link
    In this paper, we propose a randomly projected convex clustering model for clustering a collection of n high dimensional data points in R^d with K hidden clusters. Compared to the convex clustering model for clustering the original data with dimension d, we prove that, under some mild conditions, the perfect recovery of the cluster membership assignments by the convex clustering model, if it exists, is preserved by the randomly projected convex clustering model with embedding dimension m = O(ϵ^{-2} log(n)), where 0 < ϵ < 1 is a given parameter. We further prove that the embedding dimension can be improved to O(ϵ^{-2} log(K)), which is independent of the number of data points. Extensive numerical experiments are presented to demonstrate the robustness and superior performance of the randomly projected convex clustering model; they also show that it can outperform the randomly projected K-means model in practice.
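    The random projection step itself is a standard Gaussian (Johnson-Lindenstrauss style) embedding, sketched below with illustrative sizes; the constants in the paper's m = O(ϵ^{-2} log(n)) bound are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 200, 1000, 0.5
m = int(np.ceil(eps**-2 * np.log(n)))   # embedding dimension O(eps^-2 log n),
                                        # constant chosen for illustration only

X = rng.standard_normal((n, d))         # n high-dimensional data points
P = rng.standard_normal((m, d)) / np.sqrt(m)   # scaled Gaussian projection
Y = X @ P.T                             # projected data, shape (n, m)

# with high probability, pairwise distances are preserved up to a factor of
# (1 +/- eps), so the clustering structure carries over to the m-dim data
```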

    Generate What You Prefer: Reshaping Sequential Recommendation via Guided Diffusion

    Full text link
    Sequential recommendation aims to recommend the next item that matches a user's interest, based on the sequence of items he/she interacted with before. Scrutinizing previous studies, we can summarize a common learning-to-classify paradigm -- given a positive item, a recommender model performs negative sampling to add negative items and learns to classify whether the user prefers them or not, based on his/her historical interaction sequence. Although effective, we reveal two inherent limitations: (1) it may differ from human behavior in that a user could imagine an oracle item in mind and select potential items matching the oracle; and (2) the classification is limited to the candidate pool, with noisy or easy supervision from negative samples, which dilutes the preference signals towards the oracle item. Yet, generating the oracle item from the historical interaction sequence is mostly unexplored. To bridge the gap, we reshape sequential recommendation as a learning-to-generate paradigm, which is achieved via a guided diffusion model, termed DreamRec. Specifically, for a sequence of historical items, it applies a Transformer encoder to create guidance representations. Noising target items explores the underlying distribution of the item space; then, with the guidance of historical interactions, the denoising process generates an oracle item to recover the positive item, so as to cast off negative sampling and depict the true preference of the user directly. We evaluate the effectiveness of DreamRec through extensive experiments and comparisons with existing methods. Codes and data are open-sourced at https://github.com/YangZhengyi98/DreamRec
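    The noise-then-guided-denoise pattern can be sketched schematically as below. This is not DreamRec's architecture: the noise predictors and the guidance vector here are toy stand-ins for the trained denoising network conditioned on the Transformer-encoded interaction history, and the guidance weighting follows the common classifier-free-guidance form.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-ins for a trained denoiser (hypothetical, for illustration only)
def eps_cond(x_t, guidance):   # noise prediction given history guidance
    return x_t - guidance
def eps_uncond(x_t):           # unconditional noise prediction
    return x_t

alpha_bar = 0.3                # cumulative noise-schedule value at step t
x0 = np.array([1.0, -2.0, 0.5])              # target-item embedding
noise = rng.standard_normal(3)
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * noise  # noising step

w = 2.0                        # guidance weight
guidance = np.array([0.9, -1.8, 0.4])        # encoded interaction history
eps_hat = (1 + w) * eps_cond(x_t, guidance) - w * eps_uncond(x_t)
x0_hat = (x_t - np.sqrt(1 - alpha_bar) * eps_hat) / np.sqrt(alpha_bar)
# x0_hat is the denoiser's guess at the oracle-item embedding
```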

    Large Language Model Can Interpret Latent Space of Sequential Recommender

    Full text link
    Sequential recommendation aims to predict the next item of interest for a user, based on her/his interaction history with previous items. In conventional sequential recommenders, a common approach is to model item sequences using discrete IDs, learning representations that encode sequential behaviors and reflect user preferences. Inspired by recent success in empowering large language models (LLMs) to understand and reason over diverse modality data (e.g., image, audio, 3D points), a compelling research question arises: "Can LLMs understand and work with hidden representations from ID-based sequential recommenders?" To answer this, we propose a simple framework, RecInterpreter, which examines the capacity of open-source LLMs to decipher the representation space of sequential recommenders. Specifically, with multimodal pairs (i.e., representations of an interaction sequence and its text narration), RecInterpreter first uses a lightweight adapter to map the representations into the token embedding space of the LLM. Subsequently, it constructs a sequence-recovery prompt that encourages the LLM to generate textual descriptions for items within the interaction sequence. Taking a step further, we propose a sequence-residual prompt instead, which guides the LLM in identifying the residual item by contrasting the representations before and after integrating this residual into the existing sequence. Empirical results showcase that our RecInterpreter enables the exemplar LLM, LLaMA, to understand hidden representations from ID-based sequential recommenders, especially when guided by our sequence-residual prompts. Furthermore, RecInterpreter enables LLaMA to instantiate the oracle items generated by generative recommenders like DreamRec, concretizing the item a user would ideally like to interact with next. Codes are available at https://github.com/YangZhengyi98/RecInterpreter
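    The adapter's role can be sketched as a plain linear map from the recommender's representation space into the LLM's token embedding space; the dimensions and the (untrained) weights below are hypothetical placeholders, since the paper's adapter is a learned module.

```python
import numpy as np

rng = np.random.default_rng(0)
rec_dim, llm_dim = 64, 4096     # hypothetical sizes for illustration

W = rng.standard_normal((llm_dim, rec_dim)) * 0.02  # untrained stand-in weights
b = np.zeros(llm_dim)

def adapter(seq_repr):
    """Map a sequential-recommender representation into the LLM's token
    embedding space, where it can be spliced into a text prompt."""
    return W @ seq_repr + b

seq_repr = rng.standard_normal(rec_dim)   # one interaction-sequence vector
token_like = adapter(seq_repr)
# token_like now has the same dimensionality as the LLM's token embeddings
```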

    Vitamin C Enhances the Generation of Mouse and Human Induced Pluripotent Stem Cells

    Get PDF
    Somatic cells can be reprogrammed into induced pluripotent stem cells (iPSCs) by defined factors. However, the low efficiency and slow kinetics of the reprogramming process have hampered progress with this technology. Here we report that a natural compound, vitamin C (Vc), enhances iPSC generation from both mouse and human somatic cells. Vc acts at least in part by alleviating cell senescence, a recently identified roadblock for reprogramming. In addition, Vc accelerates gene expression changes and promotes the transition of pre-iPSC colonies to a fully reprogrammed state. Our results therefore highlight a straightforward method for improving the speed and efficiency of iPSC generation and provide additional insights into the mechanistic basis of the reprogramming process.

    SIMULTANEOUS MODEL FOR CLUSTERING AND INTRA-GROUP FEATURE SELECTION

    No full text
    Ph.D. (Doctor of Philosophy)