An empirical learning-based validation procedure for simulation workflow
A simulation workflow is a top-level model for the design and control of a
simulation process. It connects multiple simulation components with time and
interaction restrictions to form a complete simulation system. Before the
construction and evaluation of the component models, the validation of
the upper-layer simulation workflow is of utmost importance in a simulation
system. However, methods specifically for validating simulation workflows are
very limited. Many existing validation techniques are domain-dependent, with
cumbersome questionnaire design and expert scoring. Therefore, this paper
presents an empirical learning-based validation procedure that implements a
semi-automated evaluation for simulation workflow. First, representative
features of general simulation workflow and their relations with validation
indices are proposed. The calculation process of workflow credibility based on
Analytic Hierarchy Process (AHP) is then introduced. In order to make full use
of the historical data and implement more efficient validation, four learning
algorithms, including backpropagation neural network (BPNN), extreme learning
machine (ELM), evolving neo-fuzzy neuron (eNFN), and fast incremental Gaussian mixture
model (FIGMN), are introduced for constructing the empirical relation between
the workflow credibility and its features. A case study on a landing-process
simulation workflow is conducted to test the feasibility of the proposed
procedure. The experimental results also provide a useful overview of how
state-of-the-art learning algorithms perform in the credibility evaluation of
simulation models.
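The AHP-based credibility calculation mentioned above can be sketched concretely. The pairwise-comparison matrix and the per-index scores below are hypothetical illustrations, not values from the paper: the weights are the normalized principal eigenvector of the comparison matrix, and the consistency ratio checks whether the expert judgments are acceptably coherent.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Derive priority weights from an AHP pairwise-comparison matrix
    via its normalized principal eigenvector."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = vecs[:, np.argmax(vals.real)].real
    return principal / principal.sum()

def consistency_ratio(pairwise: np.ndarray) -> float:
    """CR = CI / RI, where CI = (lambda_max - n) / (n - 1) and RI is the
    standard random index for matrices of size n."""
    n = pairwise.shape[0]
    lam_max = np.max(np.linalg.eigvals(pairwise).real)
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
    return ci / ri

# Hypothetical judgments comparing three validation indices:
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w = ahp_weights(A)            # priority weights, sum to 1
cr = consistency_ratio(A)     # CR < 0.1 means judgments are acceptably consistent
# Workflow credibility as the weighted sum of (hypothetical) index scores:
credibility = float(w @ np.array([0.9, 0.8, 0.7]))
```

The learning algorithms (BPNN, ELM, eNFN, FIGMN) would then be trained to map workflow features directly to such credibility values, bypassing the manual scoring step.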
Bayesian Local Smoothing Modeling and Inference for Pre-surgical FMRI Data.
There is a growing interest in using fMRI measurements and analyses as tools for pre-surgical planning. For such applications, spatial precision and control over false negatives and false positives are vital, requiring careful design of an image smoothing method and a classification procedure. This dissertation seeks computationally efficient approaches to overcome the limitation of existing methods and address new challenges in pre-surgical fMRI analyses.
In the first study, we develop a Bayesian solution for the pre-surgical analysis of a single fMRI brain image. Specifically, we propose a novel spatially adaptive conditionally autoregressive model (CWAS) that adaptively and locally smooths the fMRI data. We introduce a Bayesian decision-theoretic approach that allows control of both false positives and false negatives to identify activated and deactivated brain regions. We benchmark the proposed solution against two existing spatially adaptive smoothing models through simulation studies and two patients' pre-surgical fMRI datasets.
In the second study, we extend the idea of spatially adaptive smoothing to multiple fMRI brain images in order to leverage spatial correlations across images. In particular, we propose three spatially adaptive multivariate conditional autoregressive models that can be considered extensions of the multivariate conditional autoregressive (MCAR) model (Gelfand and Vounatsou, 2003), the CWAS model, and the model of Reich and Hodges (2008), respectively, along with one mixed-effects model assuming that all observed fMRI images originate from one common image. We compare the performance of the proposed models with those of the MCAR and CWAS models using simulation studies and two sets of fMRI brain images, acquired either from the same patient with the same paradigm, or from the same patient under different paradigms.
The last study is motivated by fMRI brain images acquired at two different spatial resolutions from the same patient. We develop a Bayesian hierarchical model with spatially varying coefficients to retain the spatial precision from the high-resolution image while utilizing information from the low-resolution image to improve estimation and inference. Comparisons between the proposed model and the CWAS model, which operates at a single spatial resolution, are performed on simulated data and a patient's multi-resolution pre-surgical fMRI data.
PhD, Biostatistics, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/133339/1/zhuqingl_1.pd
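The locally adaptive smoothing idea can be illustrated in one dimension. This is a minimal sketch, not the CWAS model itself: the neighbour-weighting rule below is an assumption chosen to mimic adaptive CAR behaviour, shrinking each site toward its neighbour mean while damping the shrinkage where the signal jumps, so edges between activated and non-activated regions survive the smoothing.

```python
import numpy as np

def adaptive_car_smooth(y: np.ndarray, n_iter: int = 50,
                        tau2: float = 1.0, sigma2: float = 1.0) -> np.ndarray:
    """One-dimensional illustration of locally adaptive CAR-style smoothing.
    Each site is shrunk toward the mean of its two neighbours, but the
    shrinkage is damped where the local signal jumps, preserving edges."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        left, right = np.roll(x, 1), np.roll(x, -1)
        left[0], right[-1] = x[0], x[-1]         # reflect at the boundaries
        nbr_mean = 0.5 * (left + right)
        jump = np.abs(left - right)              # local-discontinuity measure
        prec = 1.0 / (tau2 * (1.0 + jump ** 2))  # adaptive neighbour precision
        x = (y / sigma2 + 2.0 * prec * nbr_mean) / (1.0 / sigma2 + 2.0 * prec)
    return x
```

On a noisy piecewise-constant signal, this reduces noise within flat regions while leaving the step between regions largely intact, which is the behaviour pre-surgical analysis needs.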
SAGDA: Achieving Communication Complexity in Federated Min-Max Learning
To lower the communication complexity of federated min-max learning, a
natural approach is to utilize the idea of infrequent communications (through
multiple local updates), as in conventional federated learning. However,
due to the more complicated inner-outer problem structure in federated min-max
learning, theoretical understandings of communication complexity for federated
min-max learning with infrequent communications remain very limited in the
literature. This is particularly true for settings with non-i.i.d. datasets and
partial client participation. To address this challenge, in this paper, we
propose a new algorithmic framework called stochastic sampling averaging
gradient descent ascent (SAGDA), which i) assembles stochastic gradient
estimators from randomly sampled clients as control variates and ii) leverages
two learning rates on both server and client sides. We show that SAGDA achieves
a linear speedup in terms of both the number of clients and local update steps,
which yields a communication complexity that is
orders of magnitude lower than the state of the art. Interestingly, by noting
that the standard federated stochastic gradient descent ascent (FSGDA) is in
fact a control-variate-free special version of SAGDA, we immediately arrive at
a communication complexity result for FSGDA.
Therefore, through the lens of SAGDA, we also advance the current understanding
on communication complexity of the standard FSGDA method for federated min-max
learning.
Comment: Published as a conference paper at NeurIPS 202
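The infrequent-communication structure with separate server- and client-side learning rates can be sketched on a toy problem. This is a minimal FSGDA-style sketch under assumed synthetic quadratic client objectives; SAGDA's control variates assembled from sampled clients' stochastic gradient estimators are omitted for brevity.

```python
import numpy as np

# Toy non-i.i.d. client objectives f_i(x, y) = 0.5*a_i*x^2 + b_i*x*y - 0.5*c_i*y^2,
# strongly convex in x and strongly concave in y; a_i, b_i, c_i are synthetic.
rng = np.random.default_rng(0)
n_clients = 10
a = rng.uniform(1.0, 2.0, n_clients)
b = rng.uniform(-1.0, 1.0, n_clients)
c = rng.uniform(1.0, 2.0, n_clients)

def grads(i, x, y):
    """(df_i/dx, df_i/dy) at (x, y)."""
    return a[i] * x + b[i] * y, b[i] * x - c[i] * y

x, y = 5.0, -5.0                     # server iterates
eta_c, eta_s = 0.1, 1.0              # client-side and server-side learning rates
for _round in range(200):
    deltas_x, deltas_y = [], []
    sampled = rng.choice(n_clients, size=5, replace=False)  # partial participation
    for i in sampled:
        xi, yi = x, y
        for _step in range(5):       # local updates: infrequent communication
            gx, gy = grads(i, xi, yi)
            xi -= eta_c * gx         # descent on the min variable
            yi += eta_c * gy         # ascent on the max variable
        deltas_x.append(xi - x)
        deltas_y.append(yi - y)
    x += eta_s * float(np.mean(deltas_x))   # server averages pseudo-gradients
    y += eta_s * float(np.mean(deltas_y))
# (x, y) approaches the saddle point of the average objective at the origin
```

The two learning rates play distinct roles: eta_c controls the local drift during the unsynchronized steps, while eta_s rescales the averaged pseudo-gradient at aggregation time.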
Federated Multi-Objective Learning
In recent years, multi-objective optimization (MOO) has emerged as a foundational
problem underpinning many multi-agent multi-task learning applications.
However, existing algorithms in the MOO literature remain limited to centralized
learning settings, which cannot accommodate the distributed nature and data-privacy
needs of such multi-agent multi-task learning applications. This motivates us
to propose a new federated multi-objective learning (FMOL) framework with
multiple clients distributively and collaboratively solving an MOO problem
while keeping their training data private. Notably, our FMOL framework allows a
different set of objective functions across different clients to support a wide
range of applications, which advances and generalizes the MOO formulation to
the federated learning paradigm for the first time. For this FMOL framework, we
propose two new federated multi-objective optimization (FMOO) algorithms called
federated multi-gradient descent averaging (FMGDA) and federated stochastic
multi-gradient descent averaging (FSMGDA). Both algorithms allow local updates
to significantly reduce communication costs, while achieving the {\em same}
convergence rates as those of their algorithmic counterparts in
single-objective federated learning. Our extensive experiments also corroborate
the efficacy of our proposed FMOO algorithms.
Comment: Accepted at NeurIPS 202
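The multi-gradient step that gives FMGDA and FSMGDA their names can be sketched for two objectives, where the min-norm convex combination of the gradients has a closed form (as in the classical MGDA formulation); the federated-averaging wrapper around it is omitted here.

```python
import numpy as np

def mgda_direction(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Min-norm point in the convex hull of two gradients (closed form for
    the two-objective case). The result is a common descent direction: its
    inner product with each of g1 and g2 is non-negative, so a step along
    -d does not increase either objective to first order."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:          # identical gradients: any convex combination works
        return g1
    lam = float(np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0))
    return lam * g1 + (1.0 - lam) * g2
```

When the two gradients directly conflict, the min-norm combination is the zero vector, signalling a Pareto-stationary point; in a federated variant, clients would take local steps along such directions before the server averages the resulting models.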
Uncertainty-Induced Transferability Representation for Source-Free Unsupervised Domain Adaptation
Source-free unsupervised domain adaptation (SFUDA) aims to learn a target
domain model using unlabeled target data and the knowledge of a well-trained
source domain model. Most previous SFUDA works focus on inferring semantics of
target data based on the source knowledge. Without measuring the
transferability of the source knowledge, these methods insufficiently exploit
the source knowledge, and fail to identify the reliability of the inferred
target semantics. However, existing transferability measurements require either
source data or target labels, which are infeasible in SFUDA. To this end,
firstly, we propose a novel Uncertainty-induced Transferability Representation
(UTR), which leverages uncertainty as the tool to analyse the channel-wise
transferability of the source encoder in the absence of the source data and
target labels. The domain-level UTR unravels how transferable the encoder
channels are to the target domain and the instance-level UTR characterizes the
reliability of the inferred target semantics. Secondly, based on the UTR, we
propose a novel Calibrated Adaption Framework (CAF) for SFUDA, including: i) the
source knowledge calibration module that guides the target model to learn the
transferable source knowledge and discard the non-transferable one, and ii) the
target semantics calibration module that calibrates the unreliable semantics.
With the help of the calibrated source knowledge and the target semantics, the
model adapts to the target domain safely and ultimately performs better. We verify
the effectiveness of our method experimentally and demonstrate that the proposed
method achieves state-of-the-art performance on three SFUDA benchmarks. Code is
available at https://github.com/SPIresearch/UTR.
DIAMOND: Taming Sample and Communication Complexities in Decentralized Bilevel Optimization
Decentralized bilevel optimization has received increasing attention recently
due to its foundational role in many emerging multi-agent learning paradigms
(e.g., multi-agent meta-learning and multi-agent reinforcement learning) over
peer-to-peer edge networks. However, to work with the limited computation and
communication capabilities of edge networks, a major challenge in developing
decentralized bilevel optimization techniques is to lower sample and
communication complexities. This motivates us to develop a new decentralized
bilevel optimization algorithm called DIAMOND (decentralized single-timescale stochastic
approximation with momentum and gradient-tracking). The contributions of this
paper are as follows: i) our DIAMOND algorithm adopts a single-loop structure
rather than following the natural double-loop structure of bilevel
optimization, which offers low computation and implementation complexity; ii)
compared to existing approaches, the DIAMOND algorithm does not require any
full gradient evaluations, which further reduces both sample and computational
complexities; iii) through a careful integration of momentum information and
gradient tracking techniques, we show that the sample and communication
complexities of the DIAMOND algorithm for achieving an ε-stationary solution
are independent of the dataset sizes and significantly outperform existing
works. Extensive experiments also verify our theoretical findings.
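The gradient-tracking ingredient can be illustrated on a single-level toy problem. This sketch omits DIAMOND's momentum and bilevel structure and shows only the tracking mechanism: each agent mixes its iterate with its neighbours' and maintains a tracker y_i that follows the network-average gradient, so all agents reach the global minimiser despite purely local communication. The ring topology, mixing weights, and step size are assumptions for illustration.

```python
import numpy as np

# Four agents on a ring, each holding a local quadratic f_i(x) = 0.5*(x - t_i)^2;
# the global minimiser of the average objective is mean(t) = 2.0.
t = np.array([1.0, 3.0, -2.0, 6.0])
W = np.array([[0.50, 0.25, 0.00, 0.25],   # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
eta = 0.2
x = np.zeros(4)          # each agent's iterate
g = x - t                # local gradients at the current iterates
y = g.copy()             # gradient trackers, initialised to the local gradients
for _ in range(200):
    x_new = W @ x - eta * y          # consensus mixing plus tracked-gradient step
    g_new = x_new - t
    y = W @ y + (g_new - g)          # tracker follows the network-average gradient
    x, g = x_new, g_new
# every agent's iterate converges to the global minimiser 2.0
```

The key invariant is that the trackers' sum always equals the sum of the current local gradients, so the average iterate descends along the true average gradient even though no agent ever sees it directly.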
Localization of BEN1-LIKE protein and nuclear degradation during development of metaphloem sieve elements in Triticum aestivum L.
Metaphloem sieve elements (MSEs) in the developing caryopsis of Triticum aestivum L. undergo a unique type of programmed cell death (PCD): cell organelles gradually degrade during MSE differentiation, while mature sieve elements remain active. This study focuses on localizing the BEN1-LIKE protein and nuclear degradation in differentiating MSEs of wheat. Transmission electron microscopy (TEM) showed that nuclei degraded during MSE development. The degradation started at 2–3 days after flowering (DAF); the degraded fragments were then engulfed by phagocytic vacuoles at 4 DAF; finally, the nuclei had almost completely degraded by 5 DAF. We measured BEN1-LIKE protein expression in differentiating MSEs. In situ hybridization showed that BEN1-LIKE mRNA gave a stronger hybridization signal at 3–4 DAF at the microscopic level. Immuno-electron microscopy further revealed that the BEN1-LIKE protein was mainly localized in MSE nuclei. Furthermore, MSE differentiation was examined using a TSQ Zn2+ fluorescence probe, which showed that the dynamic change in Zn2+ accumulation was similar to that of BEN1-LIKE protein expression. These results suggest that nuclear degradation in wheat MSEs is associated with the BEN1-LIKE protein and that the expression of this protein may be regulated by variation in Zn2+ accumulation.