Learning and Prediction Theory of Distributed Least Squares
With the rapid development of sensor and network technology, distributed
estimation has attracted increasing attention, owing to its capability to
secure communication, sustain scalability, and enhance safety and privacy. In
this paper, we consider a least-squares (LS)-based distributed algorithm built
on a sensor network to estimate an unknown parameter vector of
a dynamical system, where each sensor in the network has partial information
only but is allowed to communicate with its neighbors. Our main task is to
generalize the well-known theoretical results on the traditional LS to the
current distributed case by establishing both the upper bound of the
accumulated regrets of the adaptive predictor and the convergence of the
distributed LS estimator, with the following key features compared with the
existing literature on distributed estimation: Firstly, our theory does not
need the previously imposed independence, stationarity or Gaussian property on
the system signals, and hence is applicable to stochastic systems with feedback
control. Secondly, the cooperative excitation condition introduced and used in
this paper for the convergence of the distributed LS estimate is the weakest
possible one, and it shows that even if no individual sensor can estimate the
unknown parameter by the traditional LS on its own, the whole network can still
fulfill the estimation task by the distributed LS. Moreover, our theoretical
analysis is also different from the existing ones for distributed LS, because
it is an integration of several powerful techniques including stochastic
Lyapunov functions, martingale convergence theorems, and some inequalities on
convex combinations of nonnegative definite matrices.
Comment: 14 pages, submitted to IEEE Transactions on Automatic Control
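As a rough illustration of the setting (not the paper's exact algorithm), the following Python sketch shows a diffusion-style distributed recursive LS step in which each sensor updates its local estimate from its own observation and then fuses estimates and covariance-like matrices with its neighbors through a convex combination; the weight matrix A, the regressors, and the fusion rule are illustrative assumptions.

    import numpy as np

    def distributed_ls_step(P, theta, phi, y, A):
        # P:     list of m x m covariance-like matrices, one per sensor
        # theta: list of current parameter estimates (length-m vectors)
        # phi:   list of regressor vectors observed at the current time
        # y:     list of scalar observations y_i = phi_i' theta* + noise
        # A:     row-stochastic fusion weights; A[i, j] > 0 only for neighbors
        n = len(theta)
        P_loc, th_loc = [], []
        for i in range(n):
            Pi, th, ph = P[i], theta[i], phi[i]
            gain = Pi @ ph / (1.0 + ph @ Pi @ ph)          # standard RLS gain
            th_loc.append(th + gain * (y[i] - ph @ th))    # local LS update
            P_loc.append(Pi - np.outer(gain, ph @ Pi))     # local covariance update
        # Neighborhood fusion: convex combinations over each sensor's neighbors.
        theta_new = [sum(A[i, j] * th_loc[j] for j in range(n)) for i in range(n)]
        P_new = [sum(A[i, j] * P_loc[j] for j in range(n)) for i in range(n)]
        return P_new, theta_new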
Generative Adversarial Mapping Networks
Generative Adversarial Networks (GANs) have shown impressive performance in
generating photo-realistic images. They fit generative models by minimizing
a certain distance measure between the real image distribution and the
generated data distribution. Several distance measures have been used, such as
the Jensen-Shannon divergence, f-divergence, and Wasserstein distance, and
choosing an appropriate distance measure is very important for training the
generative network. In this paper, we choose to use the maximum mean
discrepancy (MMD) as the distance metric, which has several nice theoretical
guarantees. In fact, generative moment matching network (GMMN) (Li, Swersky,
and Zemel 2015) is such a generative model which contains only one generator
network trained by directly minimizing MMD between the real and generated
distributions. However, it fails to generate meaningful samples on challenging
benchmark datasets, such as CIFAR-10 and LSUN. To improve on GMMN, we propose
to add an extra network, called the mapper, that maps both the real data
distribution and the generated data distribution from the original data space
to a feature representation space and is trained to maximize the MMD between
the two mapped distributions in that feature space, while the generator tries
to minimize the MMD. We call the new model generative adversarial mapping
networks (GAMNs). We demonstrate that the adversarial mapper can help the
generator to better capture the underlying data distribution. We also show that GAMN
significantly outperforms GMMN, and is also superior to or comparable with
other state-of-the-art GAN based methods on MNIST, CIFAR-10 and LSUN-Bedrooms
datasets.
Comment: 9 pages, 7 figures
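For orientation on the distance involved, here is a minimal NumPy sketch of a biased (V-statistic) estimate of the squared MMD under a Gaussian kernel; the bandwidth, batch shapes, and synthetic data are illustrative assumptions, and in GAMN this quantity would be computed on the mapper's outputs rather than on raw data.

    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        # Pairwise Gaussian (RBF) kernel values between rows of X and rows of Y.
        sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
        return np.exp(-sq / (2.0 * sigma**2))

    def mmd2(X, Y, sigma=1.0):
        # Biased (V-statistic) estimate of squared MMD between sample sets X and Y.
        return (gaussian_kernel(X, X, sigma).mean()
                + gaussian_kernel(Y, Y, sigma).mean()
                - 2.0 * gaussian_kernel(X, Y, sigma).mean())

    # Illustrative usage on synthetic "real" and "generated" feature batches.
    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=(256, 32))
    fake = rng.normal(0.5, 1.0, size=(256, 32))
    print(mmd2(real, fake))   # larger when the two distributions differ
    print(mmd2(real, real))   # exactly zero against itself by construction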
MUGC: Machine Generated versus User Generated Content Detection
As advanced modern systems like deep neural networks (DNNs) and generative AI
continue to enhance their capabilities in producing convincing and realistic
content, the need to distinguish between user-generated and machine-generated
content is becoming increasingly evident. In this research, we undertake a
comparative evaluation of eight traditional machine-learning algorithms to
distinguish between machine-generated and human-generated data across three
diverse datasets: Poems, Abstracts, and Essays. Our results indicate that
traditional methods demonstrate a high level of accuracy in identifying
machine-generated data, reflecting the documented effectiveness of popular
pre-trained models such as RoBERTa. We note that machine-generated texts tend to be
shorter and exhibit less word variety compared to human-generated content.
While specific domain-related keywords that humans commonly use, but that
current Large Language Models (LLMs) tend to disregard, may contribute to this
high detection accuracy, we show that deeper word representations such as
word2vec can capture subtle semantic differences. Furthermore, comparisons of
readability, bias, morality, and affect reveal a discernible contrast between
machine-generated and human-generated content. There are variations in
expression styles and
potentially underlying biases in the data sources (human and
machine-generated). This study provides valuable insights into the advancing
capacities and challenges associated with machine-generated content across
various domains.
Comment: 11 pages, 16 figures
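As a concrete example of the kind of traditional pipeline such a study might evaluate, the sketch below fits a TF-IDF plus logistic-regression classifier to separate human-written from machine-generated texts; the toy corpus, labels, and hyperparameters are placeholders rather than the paper's actual data or configuration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score
    from sklearn.pipeline import make_pipeline

    # Placeholder corpus: texts paired with labels (0 = human, 1 = machine).
    texts = ["an ode written by a poet ...", "a fluent model-written abstract ..."] * 50
    labels = [0, 1] * 50

    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, random_state=42, stratify=labels)

    # Simple lexical features + linear classifier, one of many "traditional" choices.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                        LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))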
ShapeGrasp: Zero-Shot Task-Oriented Grasping with Large Language Models through Geometric Decomposition
Task-oriented grasping of unfamiliar objects is a necessary skill for robots
in dynamic in-home environments. Inspired by the human capability to grasp such
objects through intuition about their shape and structure, we present a novel
zero-shot task-oriented grasping method leveraging a geometric decomposition of
the target object into simple, convex shapes that we represent in a graph
structure, including geometric attributes and spatial relationships. Our
approach employs minimal essential information - the object's name and the
intended task - to facilitate zero-shot task-oriented grasping. We utilize the
commonsense reasoning capabilities of large language models to dynamically
assign semantic meaning to each decomposed part and subsequently reason over
the utility of each part for the intended task. Through extensive experiments
on a real-world robotics platform, we demonstrate that our grasping approach's
decomposition and reasoning pipeline is capable of selecting the correct part
in 92% of the cases and successfully grasping the object in 82% of the tasks we
evaluate. Additional videos, experiments, code, and data are available on our
project website: https://shapegrasp.github.io/.
Comment: 8 pages
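To make the decomposition-and-reasoning idea concrete, here is a hypothetical Python sketch of a part graph and the sort of prompt an LLM could be asked to reason over; the attribute names, prompt wording, and the query_llm placeholder are assumptions, not the authors' implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Part:
        # One convex part of the decomposed object, with simple geometric attributes.
        name: str            # semantic label to be assigned by the LLM, e.g. "handle"
        shape: str           # coarse primitive, e.g. "cylinder", "box"
        size_cm: float       # characteristic dimension
        neighbors: list = field(default_factory=list)  # indices of adjacent parts

    def build_prompt(obj_name, task, parts):
        # Serialize the part graph into a prompt asking which part to grasp.
        lines = [f"Object: {obj_name}. Task: {task}.",
                 "Parts (index, shape, size, adjacent parts):"]
        for i, p in enumerate(parts):
            lines.append(f"  {i}: {p.shape}, {p.size_cm:.1f} cm, adjacent to {p.neighbors}")
        lines.append("Which part index should the robot grasp for this task? Answer with one index.")
        return "\n".join(lines)

    parts = [Part("?", "cylinder", 12.0, neighbors=[1]),   # e.g. a handle-like part
             Part("?", "box", 6.0, neighbors=[0])]         # e.g. a head-like part
    prompt = build_prompt("hammer", "hand it over", parts)
    # response = query_llm(prompt)  # placeholder for a call to an LLM of choice
    print(prompt)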
Modulation of the Meridional Structures of the Indo-Pacific Warm Pool on the Response of the Hadley Circulation to Tropical SST
By decomposing the variations of the Hadley circulation (HC) and tropical zonal-mean sea surface temperature (SST) into equatorially asymmetric (HEA for HC, SEA for SST) and symmetric (HES for HC, SES for SST) components, the varying response of the HC to different SST meridional structures under warm and cold conditions of the Indo-Pacific warm pool (IPWP) is investigated over the period 1979–2016. The response of the HC to SST exhibits an asymmetric variation between warm and cold IPWP conditions; that is, the response ratio of HEA to SEA relative to that of HES to SES is ~5 under warm conditions and ~2 under cold conditions. This asymmetry is primarily due to a decrease in the HEA-to-SEA ratio under cold IPWP conditions, and is driven by changes in the meridional distribution of SST anomalies. Equatorially asymmetric (symmetric) SST anomalies are dominated by warm (cold) IPWP conditions. Thus, variations of SEA are suppressed under cold IPWP conditions, contributing to the observed weakening of the HEA-to-SEA ratio. The results presented here indicate that the HC is more sensitive to the underlying SST when the IPWP is warmer, during which the variation of SEA is enhanced. This suggests a recent strengthening of the response of the HC to SST as the IPWP has warmed over the past several decades, and highlights the importance of the meridional structure of the IPWP, rather than its overall warming, for the response of the HC.
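For reference, a standard way to split a zonal-mean field F(phi), with phi the latitude, into equatorially symmetric and asymmetric components, of the kind underlying the HES/HEA and SES/SEA indices above, can be written as follows; the exact definitions and sign conventions used in the paper may differ.

    F_{\mathrm{S}}(\varphi) = \tfrac{1}{2}\bigl[F(\varphi) + F(-\varphi)\bigr], \qquad
    F_{\mathrm{A}}(\varphi) = \tfrac{1}{2}\bigl[F(\varphi) - F(-\varphi)\bigr], \qquad
    F = F_{\mathrm{S}} + F_{\mathrm{A}}.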
Large-scale Point Cloud Registration Based on Graph Matching Optimization
Point Clouds Registration is a fundamental and challenging problem in 3D
computer vision. It has been shown that the isometric transformation is an
essential property in rigid point cloud registration, but the existing methods
only utilize it in the outlier rejection stage. In this paper, we emphasize
that the isometric transformation is also important in the feature learning
stage for improving registration quality. We propose a Graph Matching
Optimization based Network (denoted as GMONet for short), which utilizes the
graph matching method to explicitly exert isometry-preserving constraints in
the point feature learning stage to improve the point representation.
Specifically, we exploit the partial graph matching constraint to enhance the
overlap region detection abilities of super points (i.e., down-sampled key points)
and the full graph matching constraint to refine the registration accuracy at the fine-level
overlap region. Meanwhile, we leverage mini-batch sampling to improve the
efficiency of the full graph matching optimization. Given the highly discriminative
point features in the evaluation stage, we utilize the RANSAC approach to
estimate the transformation between the scanned pairs. The proposed method has
been evaluated on the 3DMatch/3DLoMatch benchmarks and the KITTI benchmark. The
experimental results show that our method achieves competitive performance
compared with the existing state-of-the-art baselines.
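Independent of the learned features, the RANSAC step mentioned above repeatedly fits a rigid transform to sampled correspondences; below is a sketch of the standard SVD-based (Kabsch) solver that such a loop would call, shown without the RANSAC outer loop and not taken from the GMONet implementation.

    import numpy as np

    def rigid_transform_svd(P, Q):
        # Least-squares rigid transform (R, t) with R @ P[i] + t ~= Q[i].
        # P, Q: (N, 3) arrays of corresponding source/target points.
        cp, cq = P.mean(0), Q.mean(0)
        H = (P - cp).T @ (Q - cq)              # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        return R, t

    # Illustrative check on synthetic correspondences.
    rng = np.random.default_rng(1)
    src = rng.normal(size=(100, 3))
    angle = np.pi / 6
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
    R_est, t_est = rigid_transform_svd(src, dst)
    print(np.allclose(R_est, R_true), t_est)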