SDPNAL+: A Matlab software for semidefinite programming with bound constraints (version 1.0)
SDPNAL+ is a {\sc Matlab} software package that implements an augmented
Lagrangian based method to solve large-scale semidefinite programming problems
with bound constraints. The implementation was initially based on a majorized
semismooth Newton-CG augmented Lagrangian method; here we design it within an
inexact symmetric Gauss-Seidel based semi-proximal ADMM/ALM (alternating
direction method of multipliers/augmented Lagrangian method) framework for the
purpose of deriving simpler stopping conditions and closing the gap between the
practical implementation of the algorithm and the theoretical algorithm. The
basic code is written in {\sc Matlab}, but some subroutines in C language are
incorporated via Mex files. We also design a convenient interface for users to
input their SDP models into the solver. Numerous problems arising from
combinatorial optimization and binary integer quadratic programming
have been tested to evaluate the performance of the solver. Extensive numerical
experiments conducted in [Yang, Sun, and Toh, Mathematical Programming
Computation, 7 (2015), pp. 331--366] show that the proposed method is quite
efficient and robust, in that it is able to solve 98.9\% of the 745 test
instances of SDP problems arising from various applications to the accuracy of
$10^{-6}$ in the relative KKT residual.
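SDPNAL+ itself is a Matlab package; as a language-agnostic illustration of the splitting idea such frameworks build on (not the SDPNAL+ algorithm itself), the following Python/NumPy sketch uses a two-block ADMM to find the matrix nearest to a given symmetric matrix that is both positive semidefinite and entrywise bounded, i.e. a toy instance of "semidefinite programming with bound constraints":

```python
import numpy as np

def proj_psd(M):
    """Project a symmetric matrix onto the PSD cone via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T

def proj_box(M, lo, hi):
    """Project entrywise onto the bound constraints lo <= X <= hi."""
    return np.clip(M, lo, hi)

def admm_psd_box(C, lo, hi, rho=1.0, iters=2000):
    """Nearest matrix to C (Frobenius norm) that is PSD and entrywise
    bounded, via the two-block ADMM splitting X = Z."""
    n = C.shape[0]
    Z = np.zeros((n, n))
    U = np.zeros((n, n))
    for _ in range(iters):
        # X-step: exact prox of (1/2)||X - C||^2 + PSD indicator
        X = proj_psd((C + rho * (Z - U)) / (1.0 + rho))
        # Z-step: projection onto the box constraints
        Z = proj_box(X + U, lo, hi)
        # scaled dual update
        U = U + X - Z
    return X, Z

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
C = (A + A.T) / 2          # a symmetric test matrix
X, Z = admm_psd_box(C, lo=-0.5, hi=0.5)
```

At convergence the two blocks agree: X is exactly PSD, Z exactly satisfies the bounds, and their difference vanishes. The actual SDPNAL+ framework handles general linear constraints and uses the inexact symmetric Gauss-Seidel semi-proximal variant described above rather than this plain two-block scheme.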
An efficient sieving based secant method for sparse optimization problems with least-squares constraints
In this paper, we propose an efficient sieving based secant method to address
the computational challenges of solving sparse optimization problems with
least-squares constraints. A level-set method has been introduced in [X. Li,
D.F. Sun, and K.-C. Toh, SIAM J. Optim., 28 (2018), pp. 1842--1866] that solves
these problems by using the bisection method to find a root of a univariate
nonsmooth equation $\varphi(\lambda) = \varrho$ for some $\varrho > 0$, where
$\varphi(\cdot)$ is the value function computed by a solution of the
corresponding regularized least-squares optimization problem. When the
objective function in the constrained problem is a polyhedral gauge function,
we prove that (i) for any positive integer $k$, $\varphi(\cdot)$ is piecewise
$C^k$ in an open interval containing the solution to the equation
$\varphi(\lambda) = \varrho$; (ii) the Clarke Jacobian of $\varphi(\cdot)$ is
always positive. These results allow us to establish the essential ingredients
of the fast convergence rates of the secant method. Moreover, an adaptive
sieving technique is incorporated into the secant method to effectively reduce
the dimension of the level-set subproblems for computing the value of
$\varphi(\cdot)$. The high efficiency of the proposed algorithm is demonstrated
by extensive numerical results.
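The secant iteration at the heart of the method can be illustrated on a generic univariate root-finding problem. The function below is a hypothetical smooth stand-in for the value function, not the one from the paper; the point is only the update rule, which replaces Newton's derivative with a finite-difference slope:

```python
def secant_root(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant method: x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0)).
    Converges superlinearly when f is smooth with nonzero slope at the root."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
    return x1

# Hypothetical stand-in for a monotone value-function equation phi(lam) = rho
root = secant_root(lambda lam: lam**3 + lam - 2.0, 0.0, 2.0)
```

The paper's contribution is precisely to show that the piecewise smoothness and positive Clarke Jacobian of the value function make such secant steps fast in the nonsmooth level-set setting, and to shrink each function evaluation via adaptive sieving.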
An Efficient HPR Algorithm for the Wasserstein Barycenter Problem with $O(\mathrm{Dim}(\mathcal{P})/\varepsilon)$ Computational Complexity
In this paper, we propose and analyze an efficient Halpern-Peaceman-Rachford
(HPR) algorithm for solving the Wasserstein barycenter problem (WBP) with fixed
supports. While the Peaceman-Rachford (PR) splitting method itself may not be
convergent for solving the WBP, the HPR algorithm can achieve an $O(1/k)$
non-ergodic iteration complexity with respect to the
Karush-Kuhn-Tucker (KKT) residual. More interestingly, we propose an efficient
procedure with linear time computational complexity to solve the linear systems
involved in the subproblems of the HPR algorithm. As a consequence, the HPR
algorithm enjoys an $O(\mathrm{Dim}(\mathcal{P})/\varepsilon)$ non-ergodic computational
complexity in terms of flops for obtaining an $\varepsilon$-optimal solution
measured by the KKT residual for the WBP, where $\mathrm{Dim}(\mathcal{P})$ is the dimension
of the variable of the WBP. This is better than the best-known complexity bound
for the WBP. Moreover, extensive numerical results on both synthetic
and real data sets demonstrate the superior performance of the HPR algorithm
for solving the large-scale WBP.
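The Halpern scheme that distinguishes HPR from plain Peaceman-Rachford can be sketched generically: given a nonexpansive operator $T$, every step is anchored back to the starting point with a vanishing weight $\beta_k = 1/(k+2)$. The toy operator below (a plane rotation, whose plain fixed-point iteration circles forever) is an illustrative stand-in, not the PR operator of the WBP:

```python
import numpy as np

def halpern(T, x0, iters=2000):
    """Halpern iteration: x_{k+1} = beta_k * x0 + (1 - beta_k) * T(x_k),
    with anchoring weights beta_k = 1/(k+2).  For nonexpansive T this
    drives the residual ||x_k - T(x_k)|| to zero at an O(1/k) rate."""
    x = x0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)
        x = beta * x0 + (1.0 - beta) * T(x)
    return x

# Toy nonexpansive operator: rotation by 90 degrees, whose only fixed
# point is the origin.  The plain iteration x <- T(x) never converges,
# mirroring how PR alone may fail, while the Halpern anchor restores
# convergence with a non-ergodic rate.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
T = lambda x: R @ x
x = halpern(T, np.array([1.0, 0.0]))
residual = np.linalg.norm(x - T(x))
```

In the HPR algorithm of the paper, $T$ is the Peaceman-Rachford operator of the WBP, and the per-iteration linear systems are solved by the linear-time procedure mentioned above.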
Randomly Projected Convex Clustering Model: Motivation, Realization, and Cluster Recovery Guarantees
In this paper, we propose a randomly projected convex clustering model for
clustering a collection of $n$ high-dimensional data points in $\mathbb{R}^d$
with $K$ hidden clusters. Compared to the convex clustering model for
clustering the original data with dimension $d$, we prove that, under some mild
conditions, the perfect recovery of the cluster membership assignments of the
convex clustering model, if it exists, can be preserved by the randomly projected
convex clustering model with embedding dimension $m = O(\epsilon^{-2}\log n)$,
where $\epsilon \in (0,1)$ is some given parameter. We further prove that the
embedding dimension can be improved to be $O(\epsilon^{-2}\log K)$, which is
independent of the number of data points. Extensive numerical experiment
results are presented in this paper to demonstrate the robustness and
superior performance of the randomly projected convex clustering model. The
numerical results also demonstrate that the randomly
projected convex clustering model can outperform the randomly projected K-means
model in practice.
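The random projection step itself is simple to illustrate: multiply the data by a scaled Gaussian matrix and check that pairwise distances, which cluster-recovery conditions rely on, are roughly preserved. A minimal NumPy sketch with illustrative dimensions rather than those from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# n points in dimension d, drawn around a few well-separated centers
n, d, m = 60, 2000, 300
centers = rng.standard_normal((3, d)) * 3.0
X = np.vstack([c + rng.standard_normal((n // 3, d)) for c in centers])

# Gaussian random projection into dimension m (Johnson-Lindenstrauss style):
# entries N(0, 1/m), so squared distances are preserved in expectation.
P = rng.standard_normal((d, m)) / np.sqrt(m)
Y = X @ P

def pdist(Z):
    """All pairwise Euclidean distances of the rows of Z."""
    G = np.sum(Z**2, axis=1)
    D2 = G[:, None] + G[None, :] - 2 * Z @ Z.T
    return np.sqrt(np.maximum(D2, 0.0))

# Maximum relative distortion of pairwise distances after projection
D_orig, D_proj = pdist(X), pdist(Y)
mask = ~np.eye(n, dtype=bool)
distortion = np.max(np.abs(D_proj[mask] / D_orig[mask] - 1.0))
```

Any clustering model driven by pairwise distances (convex clustering included) can then be run on the much smaller `Y`; the paper's contribution is showing when this preserves the exact cluster membership recovery, not merely the distances.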
Generate What You Prefer: Reshaping Sequential Recommendation via Guided Diffusion
Sequential recommendation aims to recommend the next item that matches a
user's interest, based on the sequence of items he/she interacted with before.
Scrutinizing previous studies, we can summarize a common learning-to-classify
paradigm -- given a positive item, a recommender model performs negative
sampling to add negative items and learns to classify whether the user prefers
them or not, based on his/her historical interaction sequence. Although
effective, we reveal two inherent limitations: (1) it may differ from human
behavior in that a user could imagine an oracle item in mind and select
potential items matching the oracle; and (2) the classification is limited in
the candidate pool with noisy or easy supervision from negative samples, which
dilutes the preference signals towards the oracle item. Yet, generating the
oracle item from the historical interaction sequence is mostly unexplored. To
bridge the gap, we reshape sequential recommendation as a learning-to-generate
paradigm, which is achieved via a guided diffusion model, termed
DreamRec. Specifically, for a sequence of historical items, it applies a
Transformer encoder to create guidance representations. Noising target items
explores the underlying distribution of item space; then, with the guidance of
historical interactions, the denoising process generates an oracle item to
recover the positive item, so as to cast off negative sampling and depict the
true preference of the user directly. We evaluate the effectiveness of DreamRec
through extensive experiments and comparisons with existing methods. Codes and
data are open-sourced at https://github.com/YangZhengyi98/DreamRec.
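The noising side of such a diffusion recommender can be sketched in isolation: a target item embedding is pushed toward Gaussian noise along a variance schedule, and a denoiser that predicts the injected noise inverts the step. The snippet below is a generic DDPM-style illustration with tiny dimensions and an oracle noise predictor, not DreamRec's actual model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Linear variance schedule and its cumulative products (DDPM-style)
T = 100
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def noise_target(x0, t, eps):
    """Forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def recover_x0(x_t, t, eps_pred):
    """Invert the forward step given a noise prediction.  In a trained
    model eps_pred comes from a denoiser conditioned on the encoded
    interaction history; here we use the true noise as an oracle."""
    return (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps_pred) / np.sqrt(alpha_bar[t])

x0 = rng.standard_normal(8)    # stand-in for a target item embedding
eps = rng.standard_normal(8)
x_t = noise_target(x0, t=50, eps=eps)
x0_hat = recover_x0(x_t, t=50, eps_pred=eps)
```

In DreamRec the noise predictor is guided by the Transformer encoding of the user's historical interactions, so the recovered embedding is an "oracle item" reflecting the user's preference rather than a reconstruction of known noise.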
Large Language Model Can Interpret Latent Space of Sequential Recommender
Sequential recommendation aims to predict the next item of interest for a user,
based on her/his interaction history with previous items. In conventional
sequential recommenders, a common approach is to model item sequences using
discrete IDs, learning representations that encode sequential behaviors and
reflect user preferences. Inspired by recent success in empowering large
language models (LLMs) to understand and reason over diverse modality data
(e.g., image, audio, 3D points), a compelling research question arises: ``Can
LLMs understand and work with hidden representations from ID-based sequential
recommenders?'' To answer this, we propose a simple framework, RecInterpreter,
which examines the capacity of open-source LLMs to decipher the representation
space of sequential recommenders. Specifically, with the multimodal pairs
(i.e., representations of the interaction sequence and text narrations), RecInterpreter
first uses a lightweight adapter to map the representations into the token
embedding space of the LLM. Subsequently, it constructs a sequence-recovery
prompt that encourages the LLM to generate textual descriptions for items
within the interaction sequence. Taking a step further, we propose a
sequence-residual prompt instead, which guides the LLM in identifying the
residual item by contrasting the representations before and after integrating
this residual into the existing sequence. Empirical results showcase that our
RecInterpreter enhances the exemplar LLM, LLaMA, to understand hidden
representations from ID-based sequential recommenders, especially when guided
by our sequence-residual prompts. Furthermore, RecInterpreter enables LLaMA to
instantiate the oracle items generated by generative recommenders like
DreamRec, concretizing the item a user would ideally like to interact with next.
Codes are available at https://github.com/YangZhengyi98/RecInterpreter.
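The lightweight adapter idea is essentially a learned map from the recommender's embedding space into the LLM's token-embedding space, so that a sequence representation can be spliced into a prompt as if it were a token. A minimal NumPy sketch with hypothetical dimensions and random (untrained) weights, purely to show the shape of the mapping:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: recommender embedding vs. LLM token embedding
d_rec, d_llm, d_hidden = 64, 4096, 256

# Two-layer MLP adapter (in RecInterpreter the adapter is trained
# against the sequence-recovery / sequence-residual prompting objectives)
W1 = rng.standard_normal((d_rec, d_hidden)) * 0.02
b1 = np.zeros(d_hidden)
W2 = rng.standard_normal((d_hidden, d_llm)) * 0.02
b2 = np.zeros(d_llm)

def adapt(seq_repr):
    """Map interaction-sequence representations into pseudo token
    embeddings living in the LLM's input space."""
    h = np.maximum(seq_repr @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2

seq_repr = rng.standard_normal((3, d_rec))    # 3 sequence representations
tokens = adapt(seq_repr)                      # one pseudo token each
```

These pseudo tokens are then placed inside a textual prompt; the LLM's own weights stay frozen, which is what makes the adapter "lightweight".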
Vitamin C Enhances the Generation of Mouse and Human Induced Pluripotent Stem Cells
Somatic cells can be reprogrammed into induced pluripotent stem cells (iPSCs) by defined factors. However, the low efficiency and slow kinetics of the reprogramming process have hampered progress with this technology. Here we report that a natural compound, vitamin C (Vc), enhances iPSC generation from both mouse and human somatic cells. Vc acts at least in part by alleviating cell senescence, a recently identified roadblock for reprogramming. In addition, Vc accelerates gene expression changes and promotes the transition of pre-iPSC colonies to a fully reprogrammed state. Our results therefore highlight a straightforward method for improving the speed and efficiency of iPSC generation and provide additional insights into the mechanistic basis of the reprogramming process.
SIMULTANEOUS MODEL FOR CLUSTERING AND INTRA-GROUP FEATURE SELECTION
Ph.D. thesis (Doctor of Philosophy).