TROM: A Testing-based Method for Finding Transcriptomic Similarity of Biological Samples
Comparative transcriptomics has gained increasing popularity in genomic research thanks to high-throughput technologies, including microarrays and next-generation RNA sequencing, that have generated large volumes of transcriptomic data. An important question is to understand the conservation and differentiation of biological processes across species. We propose a
testing-based method TROM (Transcriptome Overlap Measure) for comparing
transcriptomes within or between different species, and provide a different perspective for interpreting transcriptomic similarity, in contrast to traditional
correlation analyses. Specifically, the TROM method focuses on identifying
associated genes that capture molecular characteristics of biological samples,
and subsequently comparing the biological samples by testing the overlap of
their associated genes. We use simulation and real data studies to demonstrate
that TROM is more powerful in identifying similar transcriptomes and more
robust to stochastic gene expression noise than Pearson and Spearman
correlations. We apply TROM to compare the developmental stages of six
Drosophila species, C. elegans, S. purpuratus, D. rerio and mouse liver, and
find interesting correspondence patterns that imply conserved gene expression
programs in the development of these species. The TROM method is available as an R package on CRAN (http://cran.r-project.org/), with the manual and source code available at http://www.stat.ucla.edu/~jingyi.li/software-and-data/trom.html.
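The core of the method, declaring two samples similar when their associated gene sets overlap more than chance allows, can be illustrated with a hypergeometric overlap test. The following is a minimal Python sketch, assuming a z-score cutoff for calling a gene associated with a sample; the cutoff and scoring details are illustrative rather than the package's defaults.

```python
# Minimal sketch of TROM's core idea: score the similarity of two samples
# by testing the overlap of their associated gene sets with a
# hypergeometric test. The z-score threshold and score capping are
# illustrative, not the R package's actual defaults.
from scipy.stats import hypergeom
import numpy as np

def associated_genes(expr, z_thresh=1.5):
    """Return, per sample (column), the set of associated genes:
    genes whose expression z-score in that sample exceeds z_thresh."""
    z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
    return [set(np.flatnonzero(z[:, j] > z_thresh)) for j in range(expr.shape[1])]

def trom_score(genes_a, genes_b, n_genes):
    """-log10 p-value of a hypergeometric overlap test of two gene sets."""
    overlap = len(genes_a & genes_b)
    # P(overlap >= observed) when drawing |genes_b| genes out of n_genes,
    # of which |genes_a| are "successes"
    p = hypergeom.sf(overlap - 1, n_genes, len(genes_a), len(genes_b))
    return -np.log10(max(p, 1e-300))

rng = np.random.default_rng(0)
expr = rng.lognormal(size=(2000, 10))   # toy matrix: genes x samples
sets = associated_genes(expr)
print(trom_score(sets[0], sets[1], expr.shape[0]))
```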
Accelerating federated learning via momentum gradient descent
Federated learning (FL) provides a communication-efficient approach to solving machine learning problems over distributed data without sending raw data to a central server. However, existing work on FL uses only first-order gradient descent (GD) and does not exploit preceding iterations in the gradient update, which could potentially accelerate convergence. In this article, we introduce a momentum term that incorporates the previous iteration. The proposed momentum federated learning (MFL) uses momentum gradient descent (MGD) in the local update step of the FL system. We establish global convergence properties of MFL and derive an upper bound on its convergence rate. Comparing the upper bounds on the MFL and FL convergence rates, we provide conditions under which MFL accelerates convergence. For different machine learning models, the convergence performance of MFL is evaluated in experiments on the MNIST and CIFAR-10 datasets. Simulation results confirm that MFL is globally convergent and reveal a significant convergence improvement over FL.
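The local update at the heart of MFL replaces plain GD with momentum gradient descent. A minimal sketch, assuming a fixed learning rate eta, momentum coefficient gamma, and local-step count tau (all names illustrative, not the paper's notation or code); averaging both the parameters and the momentum terms at the server is one natural design choice:

```python
# Minimal sketch of a momentum local update in a federated round,
# contrasted with plain first-order GD. Hyperparameters are illustrative.
import numpy as np

def local_mgd(w, d, grad_fn, eta=0.1, gamma=0.9, tau=5):
    """Run tau local momentum-GD steps: the momentum term d carries
    information from preceding iterations into each update."""
    for _ in range(tau):
        d = gamma * d + grad_fn(w)   # momentum accumulates past gradients
        w = w - eta * d
    return w, d

def federated_round(weights, moments, grad_fns):
    """One round: each client runs local MGD, then the server averages
    both the parameters and the momentum terms."""
    updates = [local_mgd(w, d, g) for w, d, g in zip(weights, moments, grad_fns)]
    w_avg = np.mean([w for w, _ in updates], axis=0)
    d_avg = np.mean([d for _, d in updates], axis=0)
    return w_avg, d_avg

# toy usage: two clients minimizing quadratics with different optima
grads = [lambda w: 2 * (w - 1.0), lambda w: 2 * (w + 1.0)]
w, d = np.array([5.0]), np.zeros(1)
for _ in range(50):
    w, d = federated_round([w, w], [d, d], grads)
print(w)  # approaches the consensus optimum near 0
```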
Partitioned Sampling of Public Opinions Based on Their Social Dynamics
Public opinion polling is usually done by random sampling from the entire
population, treating individual opinions as independent. In the real world,
individuals' opinions are often correlated, e.g., among friends in a social
network. In this paper, we explore the idea of partitioned sampling, which
partitions individuals with high opinion similarities into groups and then
samples every group separately to obtain an accurate estimate of the population
opinion. We rigorously formulate the above idea as an optimization problem. We
then show that simple partitions, in which each group contains exactly one sample, are always better, and we reduce finding the optimal simple partition to the well-studied Min-r-Partition problem. We adapt an approximation algorithm and a
heuristic algorithm to solve the optimization problem. Moreover, to obtain
opinion similarity efficiently, we adapt a well-known opinion evolution model
to characterize social interactions, and provide an exact computation of
opinion similarities based on the model. We use both synthetic and real-world
datasets to demonstrate that the partitioned sampling method yields a significant improvement in sampling quality and is robust when some opinion similarities are inaccurate or even missing.
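Operationally, partitioned sampling with a simple partition reduces to stratified sampling with one respondent per group, each response weighted by its group's size. A minimal sketch, with an illustrative grouping standing in for the similarity-based partition:

```python
# Minimal sketch of partitioned sampling: group individuals by opinion
# similarity, draw one respondent per group (a "simple partition"), and
# weight each response by its group's size. The grouping rule here is
# a toy stand-in; the paper derives partitions via Min-r-Partition.
import numpy as np

def partitioned_sample_mean(opinions, labels, rng):
    """Estimate the population mean opinion from one sample per group."""
    total = len(opinions)
    est = 0.0
    for g in np.unique(labels):
        members = np.flatnonzero(labels == g)
        pick = rng.choice(members)               # one sample per group
        est += (len(members) / total) * opinions[pick]
    return est

rng = np.random.default_rng(1)
opinions = rng.normal(size=1000)
labels = (opinions > 0).astype(int)              # toy similarity grouping
print(partitioned_sample_mean(opinions, labels, rng))
```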
Enabling Multi-level Trust in Privacy Preserving Data Mining
Privacy Preserving Data Mining (PPDM) addresses the problem of developing
accurate models of aggregated data without access to precise information in individual data records. A widely studied \emph{perturbation-based PPDM}
approach introduces random perturbation to individual values to preserve
privacy before data is published. Previous solutions under this approach are limited by their tacit assumption of a single level of trust in data miners.
In this work, we relax this assumption and expand the scope of
perturbation-based PPDM to Multi-Level Trust (MLT-PPDM). In our setting, a more trusted data miner can access a less perturbed copy of the data. Under this setting, a malicious data miner may have access to
differently perturbed copies of the same data through various means, and may
combine these diverse copies to jointly infer additional information about the
original data that the data owner does not intend to release. Preventing such
\emph{diversity attacks} is the key challenge of providing MLT-PPDM services.
We address this challenge by properly correlating perturbation across copies at
different trust levels. We prove that our solution is robust against diversity
attacks with respect to our privacy goal. That is, for data miners who have
access to an arbitrary collection of the perturbed copies, our solution prevents
them from jointly reconstructing the original data more accurately than the
best effort using any individual copy in the collection. Our solution allows a
data owner to generate perturbed copies of its data for arbitrary trust levels
on demand. This feature offers data owners maximum flexibility.
Comment: 20 pages, 5 figures. Accepted for publication in IEEE Transactions on Knowledge and Data Engineering.
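The key construction, perturbation that is correlated across trust levels, can be sketched with nested Gaussian noise: the noise in a less trusted copy contains the noise of every more trusted copy, so combining copies cannot beat the least perturbed copy alone. This is a minimal sketch of that idea, not the paper's exact covariance construction; the variances are illustrative.

```python
# Minimal sketch of correlated perturbation across trust levels, in the
# spirit of MLT-PPDM: nested Gaussian noise means jointly combining
# copies cannot reconstruct the data better than the least perturbed
# copy alone. Noise levels are illustrative assumptions.
import numpy as np

def perturbed_copies(x, sigmas, rng):
    """Generate copies of x with nested Gaussian noise.
    sigmas: increasing noise std devs, one per (decreasing) trust level."""
    copies, noise, var = [], np.zeros_like(x, dtype=float), 0.0
    for s in sorted(sigmas):
        # add just enough fresh noise to reach total variance s**2
        noise += rng.normal(scale=np.sqrt(s**2 - var), size=x.shape)
        var = s**2
        copies.append(x + noise)
    return copies

rng = np.random.default_rng(2)
data = rng.normal(loc=50, scale=10, size=10_000)
c_hi, c_lo = perturbed_copies(data, sigmas=[1.0, 4.0], rng=rng)
# averaging the two copies is no better than the most trusted copy alone
print(np.std(c_hi - data), np.std(0.5 * (c_hi + c_lo) - data))
```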