Community detection method based on mixed-norm sparse subspace clustering
Communities or groups are important structures in disciplines such as social networks, gene expression in biology, and physical systems. Community detection in different types of networks has attracted considerable interest. However, it is still challenging to find meaningful community structures in various networks; in particular, accurately describing communities and implementing detection algorithms that scale to huge datasets remain open problems. In this paper, we present a novel community detection algorithm based on the theory of sparse subspace clustering (SSC) with mixed-norm constraints. Inspired by the sparse representation of subspaces, each community in a given network can span a subspace in some similarity measure space. If a basis of each subspace can be found, every node can be represented as a linear combination of the nodes that span the same subspace. By introducing a novel mixed-norm constraint into SSC, the connections between nodes in different communities are modeled as noise, which improves the clustering accuracy. The formulation for the bases of the subspaces is derived from the self-representation property of the data by using SSC. The alternating direction method of multipliers (ADMM) is then used to solve this formulation. Finally, communities are detected by a subspace clustering method. The proposed method is compared with state-of-the-art algorithms on synthetic and real-world networks. The experimental results show the effectiveness of the proposed algorithm in accurately describing communities, and that mixed-norm SSC is a practical approach for detecting communities in huge datasets.
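The self-representation-plus-ADMM pipeline the abstract describes can be sketched in its vanilla single-norm form (plain l1-SSC, without the paper's mixed-norm noise term; the toy data, parameter values, and function name below are our own assumptions, not the paper's code):

```python
import numpy as np

def ssc_admm(X, lam=50.0, rho=10.0, n_iter=200):
    """Vanilla SSC self-representation via ADMM (sketch, not the paper's
    mixed-norm variant): approximately solves
        min ||C||_1 + lam/2 ||X - X A||_F^2   s.t. A = C, diag(C) = 0,
    returning the sparse coefficient matrix C (one column per data point)."""
    n = X.shape[1]
    G = X.T @ X                          # Gram matrix of the data
    lhs = lam * G + rho * np.eye(n)      # fixed system matrix for the A-update
    C = np.zeros((n, n))
    Delta = np.zeros((n, n))             # dual variable for A = C
    for _ in range(n_iter):
        # A-update: regularized least-squares fit of X = X A
        A = np.linalg.solve(lhs, lam * G + rho * C - Delta)
        # C-update: soft-thresholding (prox of the l1 norm), zero diagonal
        T = A + Delta / rho
        C = np.sign(T) * np.maximum(np.abs(T) - 1.0 / rho, 0.0)
        np.fill_diagonal(C, 0.0)
        # dual ascent on the constraint A = C
        Delta += rho * (A - C)
    return C

# Toy data: 8 points on two 1-D subspaces (two lines) in R^3.
t = np.array([1.0, -1.0, 0.5, -0.5])
X = np.hstack([np.outer([1, 0, 0], t), np.outer([0, 1, 0], t)])
X /= np.linalg.norm(X, axis=0)           # unit-norm columns

C = ssc_admm(X)
W = np.abs(C) + np.abs(C).T              # symmetric affinity matrix
# Points in different subspaces receive (near-)zero affinity, so the
# affinity is block-diagonal and a spectral/subspace clustering step
# would recover the two groups.
```

In the full method, `W` would then be fed to a clustering step (e.g. spectral clustering) to extract the communities.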
Meta-learning for Multi-variable Non-convex Optimization Problems: Iterating Non-optimums Makes Optimum Possible
In this paper, we aim to address the problem of solving a non-convex
optimization problem over an intersection of multiple variable sets. This kind
of problem is typically solved with an alternating minimization (AM)
strategy which splits the overall problem into a set of sub-problems
corresponding to each variable, and then iteratively performs minimization over
each sub-problem using a fixed updating rule. However, due to the intrinsic
non-convexity of the overall problem, the optimization can often be trapped
in a bad local minimum even when each sub-problem can be globally optimized at
each iteration. To tackle this problem, we propose a meta-learning based Global
Scope Optimization (GSO) method. It adaptively generates optimizers for
sub-problems via meta-learners and constantly updates these meta-learners with
respect to the global loss of the overall problem. The sub-problems are
therefore optimized explicitly toward minimizing the global objective. We
evaluate the proposed model on a number of simulations,
including a bilinear inverse problem (matrix completion) and a non-linear
problem (Gaussian mixture model estimation). The experimental results show that our
proposed approach outperforms AM-based methods in standard settings, and is
able to achieve effective optimization in some challenging cases while other
methods would typically fail.
Comment: 15 pages, 8 figures