Proof of Convergence and Performance Analysis for Sparse Recovery via Zero-point Attracting Projection
A recursive algorithm named Zero-point Attracting Projection (ZAP) was recently proposed for sparse signal reconstruction. Compared with reference algorithms, ZAP demonstrates rather good performance in recovery precision and robustness. However, no theoretical analysis of the algorithm, not even a proof of its convergence, has been available. In this work, a rigorous proof of the convergence of ZAP is provided and the condition for convergence is put forward. Based on the theoretical analysis, it is further proved that ZAP is unbiased and can approach the sparse solution to any accuracy with a proper choice of step size. Furthermore, the case of inaccurate measurements in noisy scenarios is also discussed. It is proved that the disturbance power linearly reduces the recovery precision, which is predictable but not preventable. The reconstruction deviation of -compressible signals is also provided. Finally, numerical simulations are performed to verify the theoretical analysis.
Comment: 29 pages, 6 figures
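For illustration, the core ZAP iteration for the l1-norm case can be sketched as follows: starting from the minimum-l2-norm feasible point, each step applies a zero-point attraction (a subgradient step on the l1 norm) projected onto the null space of the measurement matrix, so every iterate stays consistent with the measurements. The matrix A, vector b, step size kappa, and problem sizes below are illustrative assumptions, not the paper's experimental setup; this is a minimal sketch rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 25, 50, 3                      # measurements, dimension, sparsity

A = rng.standard_normal((m, n))
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true

# Projection onto the null space of A and the minimum-l2-norm feasible point.
A_pinv = A.T @ np.linalg.inv(A @ A.T)
P = np.eye(n) - A_pinv @ A
x = A_pinv @ b
l1_init = np.linalg.norm(x, 1)

kappa = 0.005                            # step size; the analysis ties precision to kappa
for _ in range(3000):
    # Zero-point attraction (subgradient of the l1 norm), projected so that
    # every iterate keeps satisfying A x = b exactly.
    x -= kappa * (P @ np.sign(x))

l1_final = np.linalg.norm(x, 1)          # shrinks toward the l1 norm of the sparse solution
```

With a small step size the iterates remain feasible by construction, which mirrors the abstract's claim that recovery precision is controlled by the choice of step size.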
Local-set-based Graph Signal Reconstruction
Signal processing on graphs is attracting more and more attention. For a graph signal in the low-frequency subspace, the missing data associated with unsampled vertices can be reconstructed from the sampled data by exploiting the smoothness of the graph signal. In this paper, the concept of the local set is introduced and two local-set-based iterative methods are proposed to reconstruct bandlimited graph signals from sampled data. In each iteration, one of the proposed methods reweights the sampled residuals for different vertices, while the other propagates the sampled residuals within their respective local sets. These algorithms are built on frame theory and the concept of local sets, based on which several frames and contraction operators are proposed. We then prove that the reconstruction methods converge to the original signal under certain conditions and demonstrate that the new methods converge significantly faster than the baseline method. Furthermore, the correspondence between graph signal sampling and time-domain irregular sampling is analyzed comprehensively, which may be helpful to future work on graph signals. Computer simulations are conducted. The experimental results demonstrate the effectiveness of the reconstruction methods under various sampling geometries, imprecise a priori knowledge of the cutoff frequency, and noisy scenarios.
Comment: 28 pages, 9 figures, 6 tables, journal manuscript
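As a minimal sketch of the kind of iterative bandlimited reconstruction such methods accelerate, the classical projection-onto-band baseline (re-impose the sampled residual, then project onto the low-frequency subspace) can be written as below. The path graph, bandwidth, and sampling set are illustrative assumptions; this baseline is deliberately simpler than the local-set-based methods proposed in the paper.

```python
import numpy as np

N, K = 20, 5                             # vertices, bandwidth (first K Laplacian modes)

# Laplacian of a path graph and its spectral basis.
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
_, U = np.linalg.eigh(L)                 # eigenvalues ascending: low frequencies first
P_B = U[:, :K] @ U[:, :K].T              # projector onto the bandlimited subspace

rng = np.random.default_rng(1)
x_true = P_B @ rng.standard_normal(N)    # a bandlimited graph signal

sampled = np.arange(0, N, 2)             # sample every other vertex
D = np.zeros((N, N))
D[sampled, sampled] = 1.0
y = D @ x_true                           # observed samples, zero elsewhere

# Iterate: add the residual on sampled vertices, then project back onto the band.
x = np.zeros(N)
for _ in range(300):
    x = P_B @ (x + y - D @ x)

err = np.linalg.norm(x - x_true)
```

When the sampling set is a uniqueness set for the band, the error operator is a contraction and the iteration converges to the original signal, matching the convergence claims the paper sharpens.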
A Distributed Tracking Algorithm for Reconstruction of Graph Signals
The rapid development of signal processing on graphs provides a new
perspective for processing large-scale data associated with irregular domains.
In many practical applications, it is necessary to handle massive data sets
through complex networks, in which most nodes have limited computing power.
Designing efficient distributed algorithms is critical for this task. This
paper focuses on the distributed reconstruction of a time-varying bandlimited
graph signal based on observations sampled at a subset of selected nodes. A
distributed least square reconstruction (DLSR) algorithm is proposed to recover
the unknown signal iteratively, by allowing neighboring nodes to communicate
with one another and make fast updates. DLSR uses a decay scheme to annihilate
the out-of-band energy occurring in the reconstruction process, which is
inevitably caused by the transmission delay in distributed systems. Proof of
convergence and error bounds for DLSR are provided in this paper, suggesting
that the algorithm is able to track time-varying graph signals and perfectly
reconstruct time-invariant signals. The DLSR algorithm is evaluated numerically on synthetic data and real-world sensor network data, verifying its ability to track slowly time-varying graph signals.
Comment: 30 pages, 9 figures, 2 tables, journal paper
Local Measurement and Reconstruction for Noisy Graph Signals
The emerging field of signal processing on graphs plays an increasingly important role in processing signals and information related to networks.
Existing works have shown that under certain conditions a smooth graph signal
can be uniquely reconstructed from its decimation, i.e., data associated with a
subset of vertices. However, in some potential applications (e.g., sensor
networks with clustering structure), the obtained data may be a combination of
signals associated with several vertices, rather than the decimation. In this
paper, we propose a new concept of local measurement, which is a generalization
of decimation. Using the local measurements, a local-set-based method named
iterative local measurement reconstruction (ILMR) is proposed to reconstruct
bandlimited graph signals. It is proved that ILMR can reconstruct the original
signal perfectly under certain conditions. The performance of ILMR against
noise is theoretically analyzed. The optimal choice of local weights and a greedy algorithm for local-set partitioning are given in the sense of minimizing the expected reconstruction error. Compared with decimation, the proposed local measurement sampling and reconstruction scheme is more robust in noisy scenarios.
Comment: 24 pages, 6 figures, 2 tables, journal manuscript
Follow Whom? Chinese Users Have Different Choice
Sina Weibo, which was launched in 2009, is the most popular Chinese micro-blogging service. It has been reported that Sina Weibo had more than 400 million registered users by the end of the third quarter of 2012. Sina Weibo and Twitter have a lot in common; however, in terms of following preference, Sina Weibo users, most of whom are Chinese, behave differently from Twitter users.
This work is based on a data set of Sina Weibo which contains 80.8 million
users' profiles and 7.2 billion relations and a large data set of Twitter.
First, some basic features of Sina Weibo and Twitter are analyzed, such as the degree and activeness distributions, the correlation between degree and activeness, and the degree of separation. Then the following preference is investigated by
studying the assortative mixing, friend similarities, following distribution,
edge balance ratio, and ranking correlation, where edge balance ratio is newly
proposed to measure balance property of graphs. It is found that Sina Weibo has
a lower reciprocity rate and more positively balanced relations, and is more disassortative. Coinciding with traditional Asian culture, the following
preference of Sina Weibo users is more concentrated and hierarchical: they are
more likely to follow people at higher or the same social levels and less
likely to follow people lower than themselves. In contrast, the same kind of
following preference is weaker on Twitter. Twitter users are more open, as they follow people from all levels, which accords with Twitter's global character and the prevalence of Western civilization. The message-forwarding behavior is studied by examining the propagation levels, delays, and critical users. The following preference derives not only from usage habits but also from underlying factors such as personalities and social moralities, which are worthy of future research.
Comment: 9 pages, 13 figures
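The reciprocity rate mentioned above has a standard definition that can be sketched directly; the edge balance ratio, by contrast, is the paper's own metric and is not reproduced here. The toy follow graph below is an illustrative assumption.

```python
def reciprocity_rate(edges):
    """Fraction of directed follow edges whose reverse edge also exists."""
    edge_set = set(edges)
    mutual = sum(1 for (u, v) in edge_set if (v, u) in edge_set)
    return mutual / len(edge_set)

# Toy follow graph: 1 and 2 follow each other; 3 follows 1 without follow-back.
follows = [(1, 2), (2, 1), (3, 1)]
rate = reciprocity_rate(follows)    # 2 of the 3 edges are reciprocated
```

A lower value of this rate on Sina Weibo than on Twitter is one of the findings summarized in the abstract.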
GAGA: Deciphering Age-path of Generalized Self-paced Regularizer
Nowadays, self-paced learning (SPL) is an important machine learning paradigm that mimics the cognitive process of humans and animals. The SPL regime involves a self-paced regularizer and a gradually increasing age parameter; the age parameter plays a key role in SPL, yet where to optimally terminate this process is still non-trivial to determine. A natural idea is to compute the solution path w.r.t. the age parameter (i.e., the age-path). However, current age-path algorithms are either limited to the simplest regularizer or lack solid theoretical understanding as well as computational efficiency. To address this challenge, we propose a novel \underline{G}eneralized \underline{Ag}e-path \underline{A}lgorithm (GAGA) for SPL with various self-paced regularizers, based on ordinary differential equations (ODEs) and sets control, which can learn the entire solution spectrum w.r.t. a range of age parameters. To the best of our knowledge, GAGA is the first exact path-following algorithm tackling the age-path for general self-paced regularizers. Finally, the algorithmic steps for classic SVM and Lasso are described in detail. We demonstrate the performance of GAGA on real-world datasets and observe considerable speedups over competing baselines.
Comment: 33 pages. Published as a conference paper at NeurIPS 202
Robust Synthetic-to-Real Transfer for Stereo Matching
With advancements in domain generalized stereo matching networks, models
pre-trained on synthetic data demonstrate strong robustness to unseen domains.
However, few studies have investigated the robustness after fine-tuning them in
real-world scenarios, during which the domain generalization ability can be
seriously degraded. In this paper, we explore fine-tuning stereo matching
networks without compromising their robustness to unseen domains. Our
motivation stems from comparing Ground Truth (GT) versus Pseudo Label (PL) for
fine-tuning: GT degrades, but PL preserves the domain generalization ability.
Empirically, we find that the difference between GT and PL carries valuable
information that can regularize networks during fine-tuning. We also propose a
framework to utilize this difference for fine-tuning, consisting of a frozen
Teacher, an exponential moving average (EMA) Teacher, and a Student network.
The core idea is to utilize the EMA Teacher to measure what the Student has
learned and dynamically improve GT and PL for fine-tuning. We integrate our
framework with state-of-the-art networks and evaluate its effectiveness on
several real-world datasets. Extensive experiments show that our method
effectively preserves the domain generalization ability during fine-tuning.
Comment: Accepted at CVPR 202
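The EMA Teacher in the framework above follows the standard exponential-moving-average parameter update. The sketch below shows that update on toy scalar weights; it is an illustrative assumption, not the paper's training code.

```python
def ema_update(teacher, student, momentum=0.999):
    """One EMA step: teacher <- momentum * teacher + (1 - momentum) * student."""
    for name in teacher:
        teacher[name] = momentum * teacher[name] + (1.0 - momentum) * student[name]
    return teacher

# Toy scalar "weights": the EMA Teacher trails the Student smoothly,
# which is what lets it measure what the Student has learned so far.
teacher = {"w": 0.0}
student = {"w": 1.0}
for _ in range(10):
    ema_update(teacher, student, momentum=0.9)
```

After ten steps the teacher weight is 1 - 0.9**10, i.e. it approaches the student geometrically at a rate set by the momentum.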
A Feasibility Study on Deep Learning Based Brain Tumor Segmentation Using 2D Ellipse Box Areas
In most deep learning-based brain tumor segmentation methods, training the deep network requires annotated tumor areas. However, accurate tumor annotation puts high demands on medical personnel. The aim of this study is to train a deep network for segmentation by using ellipse box areas surrounding the tumors. In the proposed method, the deep network is trained on a large number of unannotated tumor images with foreground (FG) and background (BG) ellipse box areas surrounding the tumor and background, and a small number of patients (<20) with annotated tumors. The training is conducted by initial training on the two ellipse boxes on unannotated MRIs, followed by refined training on a small number of annotated MRIs. We use a multi-stream U-Net, an extension of the conventional U-Net, for conducting our experiments. This enables the use of complementary information from multi-modality (e.g., T1, T1ce, T2, and FLAIR) MRIs. To test the feasibility of the proposed approach, experiments and evaluation were conducted on two datasets for glioma segmentation. Segmentation performance on the test sets is then compared with that of the same network trained entirely on annotated MRIs. Our experiments show that the proposed method obtains good tumor segmentation results on the test sets: the dice score on tumor areas is (0.8407, 0.9104) and the segmentation accuracy on tumor areas is (83.88%, 88.47%) for the MICCAI BraTS’17 and US datasets, respectively. Compared with the results of the network trained on fully annotated tumors, the drop in segmentation performance of the proposed approach is (0.0594, 0.0159) in dice score and (8.78%, 2.61%) in segmented-tumor accuracy for the MICCAI and US test sets, which is relatively small.
Our case studies have demonstrated that training the network for segmentation using ellipse box areas in place of fully annotated tumors is feasible and can be considered an alternative, trading a small drop in segmentation performance for a saving in medical experts' annotation time.
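The dice scores reported above follow the standard definition 2|P ∩ G| / (|P| + |G|) for binary masks. A minimal sketch on toy masks (illustrative only, not the study's evaluation code):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [1, 0]])
score = dice_score(pred, gt)    # overlap 1, sizes 2 + 2 -> 0.5
```

A drop of 0.0594 in dice, as reported for the MICCAI test set, is thus a drop on a 0-to-1 overlap scale.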
Editing Conceptual Knowledge for Large Language Models
Recently, there has been a growing interest in knowledge editing for Large
Language Models (LLMs). Current approaches and evaluations merely explore instance-level editing, while whether LLMs possess the capability to modify
concepts remains unclear. This paper pioneers the investigation of editing
conceptual knowledge for LLMs, by constructing a novel benchmark dataset
ConceptEdit and establishing a suite of new metrics for evaluation. The
experimental results reveal that, although existing editing methods can
efficiently modify concept-level definitions to some extent, they also have the
potential to distort the related instantial knowledge in LLMs, leading to poor
performance. We anticipate this can inspire further progress in better
understanding LLMs. Our project homepage is available at https://zjunlp.github.io/project/ConceptEdit.
Comment: Work in progress. Code: https://github.com/zjunlp/EasyEdit Dataset: https://huggingface.co/datasets/zjunlp/ConceptEdi