A distributed adaptive steplength stochastic approximation method for monotone stochastic Nash Games
We consider a distributed stochastic approximation (SA) scheme for computing
an equilibrium of a stochastic Nash game. Standard SA schemes employ
diminishing steplength sequences that are square summable but not summable.
Such requirements provide little or no guidance on how to leverage
Lipschitzian and monotonicity properties of the problem, and naive choices
generally do not perform uniformly well across a breadth of problems. While a
centralized adaptive stepsize SA scheme is proposed in [1] for the optimization
framework, such a scheme provides no freedom for the agents in choosing their
own stepsizes. Thus, a direct application of centralized stepsize schemes is
impractical in solving Nash games. Furthermore, extensions to game-theoretic
regimes where players may independently choose steplength sequences are limited
to recent work by Koshal et al. [2]. Motivated by these shortcomings, we
present a distributed algorithm in which each player updates his steplength
based on the previous steplength and some problem parameters. The steplength
rules are derived from minimizing an upper bound of the errors associated with
players' decisions. It is shown that these rules generate sequences that
converge almost surely to an equilibrium of the stochastic Nash game.
Importantly, variants of this rule are suggested where players independently
select steplength sequences while abiding by an overall coordination
requirement. Preliminary numerical results are seen to be promising.
Comment: 8 pages, Proceedings of the American Control Conference, Washington, 201
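The recursive steplength idea can be sketched in a few lines. This is a minimal sketch, not the paper's exact scheme: it assumes an update of the form gamma_{k+1} = gamma_k * (1 - c * gamma_k), applied by each player with its own initial steplength, on an illustrative two-player quadratic game with a strongly monotone gradient map; the game, noise model, and constant c are assumptions.

```python
import numpy as np

# Minimal sketch of distributed stochastic approximation with a recursive
# steplength rule gamma_{k+1} = gamma_k * (1 - c * gamma_k).  The quadratic
# game, noise level, and constant c below are illustrative assumptions.
rng = np.random.default_rng(0)

# Gradient map F(x) = A @ x of a two-player quadratic game; A is positive
# definite, so F is strongly monotone and the unique equilibrium is x* = 0.
A = np.array([[2.0, 0.5],
              [0.5, 2.0]])
c = 1.5                        # assumed monotonicity constant (min eigenvalue of A)
gamma = np.array([0.5, 0.4])   # each player starts from its own steplength
x = np.array([5.0, -3.0])

for k in range(20000):
    noisy_grad = A @ x + rng.normal(scale=0.1, size=2)  # sampled gradients
    x = x - gamma * noisy_grad          # each player uses its own steplength
    gamma = gamma * (1.0 - c * gamma)   # recursive (adaptive) steplength update

print(np.linalg.norm(x))  # distance to the equilibrium x* = 0
```

The rule generates a positive, diminishing sequence behaving like 1/(c k), which is square summable but not summable, so the standard SA convergence requirements are met without hand-tuning a schedule.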
Distributed Gradient Tracking Methods with Guarantees for Computing a Solution to Stochastic MPECs
We consider a class of hierarchical multi-agent optimization problems over
networks where agents seek to compute an approximate solution to a single-stage
stochastic mathematical program with equilibrium constraints (MPEC). MPECs
subsume several important problem classes including Stackelberg games, bilevel
programs, and traffic equilibrium problems, to name a few. Our goal in this
work is to provably resolve stochastic MPECs in distributed regimes where the
agents only have access to their local objectives and an inexact best-response
to the lower-level equilibrium problem. To this end, we devise a new method
called randomized smoothed distributed zeroth-order gradient tracking
(rs-DZGT). This is a novel gradient tracking scheme where agents employ a
zeroth-order implicit scheme to approximate their (unavailable) local
gradients. Leveraging the properties of a randomized smoothing technique, we
establish the convergence of the method and derive complexity guarantees for
computing a stationary point of an optimization problem with a smoothed
implicit global objective. We also provide preliminary numerical experiments
where we compare the performance of rs-DZGT on networks under different
settings with that of its centralized counterpart.
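The zeroth-order ingredient can be illustrated in isolation. The sketch below is not rs-DZGT itself; it shows the standard two-point randomized-smoothing gradient estimator g = (d / 2u) * (f(x + u*delta) - f(x - u*delta)) * delta with delta uniform on the unit sphere, where the function f, the radius u, and the test point are illustrative assumptions.

```python
import numpy as np

# Two-point zeroth-order gradient estimator based on randomized smoothing:
# g = (d / (2u)) * (f(x + u*delta) - f(x - u*delta)) * delta, with delta
# drawn uniformly from the unit sphere.  f, u, and x are assumptions made
# for illustration; this is not the rs-DZGT scheme from the paper.
rng = np.random.default_rng(1)

def f(x):
    return float(x @ x)          # f(x) = ||x||^2, so the true gradient is 2x

def zo_grad(x, u=1e-3):
    delta = rng.normal(size=x.size)
    delta /= np.linalg.norm(delta)   # uniform random direction on the sphere
    return (x.size / (2 * u)) * (f(x + u * delta) - f(x - u * delta)) * delta

x = np.array([1.0, -2.0, 0.5])
estimate = np.mean([zo_grad(x) for _ in range(20000)], axis=0)
print(estimate)                  # close to the true gradient 2x
```

Averaging many such samples recovers the gradient of a smoothed surrogate of f using only function evaluations, which is what allows agents with no access to local gradients to run a gradient-tracking iteration.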
Zeroth-Order Methods for Nondifferentiable, Nonconvex, and Hierarchical Federated Optimization
Federated learning (FL) has emerged as an enabling framework for
communication-efficient decentralized training. In this paper, we study three
broadly applicable problem classes in FL: (i) Nondifferentiable nonconvex
optimization, e.g., in training of ReLU neural networks; (ii) Federated bilevel
optimization, e.g., in hyperparameter learning; (iii) Federated minimax
problems, e.g., in adversarial training. Research on such problems has been
limited and afflicted by reliance on strong assumptions, including
differentiability and L-smoothness of the implicit function in (ii)-(iii).
Unfortunately, such assumptions may fail to hold in practical settings. We
bridge this gap by making the following contributions. In (i), by leveraging
convolution-based smoothing and Clarke's subdifferential calculus, we devise a
randomized smoothing-enabled zeroth-order FL method and derive communication
and iteration complexity guarantees for computing an approximate Clarke
stationary point. Notably, our scheme allows for local functions that are both
nonconvex and nondifferentiable. In (ii) and (iii), we devise a unifying
randomized implicit zeroth-order FL framework, equipped with explicit
communication and iteration complexities. Importantly, this method employs
single-timescale local steps, resulting in significant reduction in
communication overhead when addressing hierarchical problems. We validate the
theory using numerical experiments on nonsmooth and hierarchical ML problems.
Comment: Accepted at The 37th Annual Conference on Neural Information Processing Systems (NeurIPS 2023).
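For problem class (i), a toy single-node sketch conveys how smoothing-enabled zeroth-order descent handles a nondifferentiable objective. The objective (a simple nonsmooth function with ReLU-like kinks), the smoothing radius, and the step rule are assumptions, and no federation or communication is modeled.

```python
import numpy as np

# Toy sketch of zeroth-order descent on a nondifferentiable objective via
# randomized smoothing.  Single node only (no federation/communication);
# the objective, smoothing radius u, and step rule are assumptions.
rng = np.random.default_rng(2)

def f(x):
    return float(np.sum(np.abs(x)))   # nonsmooth at the kinks, like ReLU losses

def zo_grad(x, u=1e-2):
    delta = rng.normal(size=x.size)
    delta /= np.linalg.norm(delta)
    # Two-point estimator of the gradient of the smoothed surrogate f_u.
    return (x.size / (2 * u)) * (f(x + u * delta) - f(x - u * delta)) * delta

x = np.array([2.0, -3.0])
for k in range(2000):
    step = 0.1 / np.sqrt(k + 1)       # diminishing steps, a standard choice
    x = x - step * zo_grad(x)

print(f(x))   # the iterate approaches the minimizer at the origin
```

The point of the smoothing is that f itself has no gradient at the kinks, yet the estimator is always well defined and, in expectation, follows the gradient of a differentiable surrogate whose stationary points approximate Clarke-stationary points of f.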
MetaCRAM: an integrated pipeline for metagenomic taxonomy identification and compression
Background: Metagenomics is a genomics research discipline devoted to the study of microbial communities in environmental samples and in human and animal organs and tissues. Sequenced metagenomic samples usually comprise reads from a large number of different bacterial communities and hence tend to result in large file sizes, typically ranging between 1–10 GB. This leads to challenges in analyzing, transferring, and storing metagenomic data. To overcome these data-processing issues, we introduce MetaCRAM, the first de novo, parallelized software suite specialized for FASTA- and FASTQ-format metagenomic read processing and lossless compression.
Results: MetaCRAM integrates algorithms for taxonomy identification and assembly, and introduces parallel execution methods; furthermore, it enables genome reference selection and CRAM-based compression. MetaCRAM also uses novel reference-based compression methods designed through extensive studies of integer compression techniques and through fitting of empirical distributions of metagenomic read-reference positions. MetaCRAM is a lossless method compatible with standard CRAM formats, and it allows for fast selection of relevant files in the compressed domain via maintenance of taxonomy information. The performance of MetaCRAM as a stand-alone compression platform was evaluated on various metagenomic samples from the NCBI Sequence Read Archive, suggesting 2- to 4-fold compression ratio improvements compared to gzip. On average, the compressed file sizes were 2–13 percent of the original raw metagenomic file sizes.
Conclusions: We described the first architecture for reference-based, lossless compression of metagenomic data. The proposed compression scheme offers significantly improved compression ratios compared to off-the-shelf methods such as zip programs. Furthermore, it enables running different components in parallel, and it provides the user with taxonomic and assembly information generated during execution of the compression pipeline.
Availability: The MetaCRAM software is freely available at http://web.engr.illinois.edu/~mkim158/metacram.html. The website also contains a README file and other relevant instructions for running the code. Note that running the code requires a minimum of 16 GB of RAM. In addition, a VirtualBox setup on a 4 GB RAM machine is provided for users to run a simple demonstration.
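The integer-compression idea behind reference-based position encoding can be illustrated with a generic delta-plus-varint scheme; this is a sketch of the general technique only, not MetaCRAM's actual codec.

```python
# Generic sketch of reference-based position compression: sorted alignment
# positions are delta-encoded and the resulting small gaps are packed as
# LEB128-style varints.  Illustrative only; not MetaCRAM's actual codec.

def encode_varint(n: int) -> bytes:
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | 0x80 if n else byte)  # high bit marks continuation
        if not n:
            return bytes(out)

def encode_positions(positions):
    # Delta-encode sorted read-reference positions, then varint-pack the gaps.
    prev, out = 0, bytearray()
    for p in sorted(positions):
        out += encode_varint(p - prev)
        prev = p
    return bytes(out)

positions = [1045, 1102, 1103, 20488, 20561]
blob = encode_positions(positions)
# Consecutive alignment positions tend to be close together, so most gaps
# fit in one or two bytes instead of a fixed-width 4-byte integer each.
print(len(blob), "bytes vs", 4 * len(positions), "bytes uncompressed")
```

Fitting the empirical distribution of the gaps, as the abstract describes, is what guides the choice of integer code; the varint here simply stands in for whichever code matches that distribution best.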
Path analysis of grain yield with its components in durum wheat under drought stress
This experiment was conducted to study the path analysis of grain yield with its components in durum
wheat under potential and drought-stress conditions during the 2005-2006 cropping season at the
Agriculture Research Station of Tabriz Islamic Azad University. Forty-nine durum wheat lines (6 lines
from Iran and 43 lines from other sources) were used for this purpose. Two separate simple lattice
designs (7 × 7) with two replications were conducted. In one experiment, the plants were irrigated as
usual until physiological maturity, while in the other, drought stress was imposed at four different
stages: tillering, stem elongation, anthesis, and grain filling. Correlations among traits after
combining the two experiments were calculated with SPSS software. Harvest index (r = 0.849**),
plant height (r = 0.695**), and number of tillers (r = 0.689**) had high correlations with grain
yield. Backward regression was used for regressing grain yield on its components. Number of seeds
per spike (0.432), length of spike (0.407), and 1000-seed weight (0.385) had the highest direct
positive effects on grain yield. Path analysis for 1000-seed weight, number of tillers per plant,
and number of seeds per spike showed that plant height (0.452), length of spike (0.857), and days to
flowering (0.345) were the most effective component traits, respectively. Therefore, traits such as
number of seeds per spike, spike length, and 1000-seed weight could be used as suitable indices in
irrigated and dry-farming conditions for obtaining durum wheat genotypes with high yield.
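The computation underlying path analysis can be sketched directly: direct effects are the standardized partial-regression coefficients p = Rxx^{-1} rxy, and each trait-yield correlation decomposes into a direct effect plus indirect effects through the other traits. The data below are simulated for illustration, not the study's measurements.

```python
import numpy as np

# Sketch of path analysis: direct effects (path coefficients) are the
# standardized partial-regression coefficients p = Rxx^{-1} rxy, and each
# trait-yield correlation splits into a direct plus indirect effects via
# the other traits.  The data here are simulated, not the study's.
rng = np.random.default_rng(3)

n = 200
seeds_per_spike = rng.normal(size=n)
spike_length = 0.6 * seeds_per_spike + rng.normal(scale=0.8, size=n)
seed_weight = rng.normal(size=n)
yield_ = (0.4 * seeds_per_spike + 0.3 * spike_length
          + 0.35 * seed_weight + rng.normal(scale=0.5, size=n))

X = np.column_stack([seeds_per_spike, spike_length, seed_weight])
R = np.corrcoef(np.column_stack([X, yield_]), rowvar=False)
Rxx, rxy = R[:3, :3], R[:3, 3]

p = np.linalg.solve(Rxx, rxy)       # direct effects (path coefficients)
indirect = Rxx @ p - p              # indirect effects via the other traits
print("direct effects:", p.round(3))
# Path identity: correlation = direct + indirect for every trait.
print(np.allclose(p + indirect, rxy))
```

This decomposition is why a trait can show a high simple correlation with yield yet a modest direct effect, as seen in the abstract's contrast between the correlation and path-coefficient rankings.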