Three-Dimensional Modelling and Simulation of the Ice Accretion Process on Aircraft Wings
In this article, a new computational method for three-dimensional (3D) ice accretion analysis on an aircraft wing is formulated and validated. The two-phase flow field is calculated with a Eulerian-Eulerian approach, using the standard dispersed turbulence model and second-order upwind differencing with the aid of the commercial software Fluent, and the corresponding local droplet collection efficiency, convective heat transfer coefficient, freezing fraction, and surface temperature are obtained. The classical Messinger model is modified to describe the 3D thermodynamic characteristics of ice accretion: to account for runback water flowing in both the chordwise and spanwise directions, an extended Messinger method is employed to predict the 3D ice accretion rates. The newly developed model is validated through comparisons with available experimental ice shapes and LEWICE results for a GLC-305 wing under both rime and glaze icing conditions, and the computed ice shapes show good agreement with the reference results. Further calculations with the proposed method for an M6 wing under different test conditions are also demonstrated.
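To make the Messinger-style balance concrete, the following minimal Python sketch (our illustration, not the authors' code) computes a per-cell freezing fraction from a simplified steady energy balance and marches unfrozen runback water downstream; the constants, dictionary keys, and 1-D marching are assumptions, whereas the paper resolves runback in both the chordwise and spanwise directions.

```python
# Minimal sketch of a Messinger-style mass/energy balance per surface cell.
L_F = 3.34e5  # latent heat of fusion of water, J/kg (standard value)

def freezing_fraction(m_w, q_conv, q_kin):
    """Fraction n of the incoming water mass flux m_w (kg/m^2/s) that
    freezes, from a simplified steady energy balance: the net heat
    removed by convection (q_conv - q_kin, W/m^2) is supplied by the
    latent heat released by the freezing fraction of the water."""
    if m_w <= 0.0:
        return 0.0
    n = (q_conv - q_kin) / (m_w * L_F)
    return min(max(n, 0.0), 1.0)  # rime ice: n -> 1; glaze ice: n < 1

def march_runback(cells):
    """March downstream over surface cells, carrying unfrozen runback
    water into the next cell; each cell dict holds the local impinging
    water flux and heat terms taken from the flow-field solution."""
    m_runback = 0.0
    for c in cells:
        m_w = c["m_imp"] + m_runback          # impingement + runback in
        n = freezing_fraction(m_w, c["q_conv"], c["q_kin"])
        c["ice_rate"] = n * m_w               # accreted ice mass flux
        m_runback = (1.0 - n) * m_w           # water passed downstream
    return cells
```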
Suppression of tar and char formation in supercritical water gasification of sewage sludge by additive addition
A Comprehensive Survey on Data-Efficient GANs in Image Generation
Generative Adversarial Networks (GANs) have achieved remarkable success in image synthesis, but these successes rely on large-scale datasets that are costly to collect. With limited training data, how to stabilize the training of GANs and generate realistic images has attracted increasing attention. The challenges of Data-Efficient GANs (DE-GANs) mainly arise from three aspects: (i) the mismatch between training and target distributions, (ii) overfitting of the discriminator, and (iii) the imbalance between latent and data spaces. Although many augmentation and pre-training strategies have been proposed to alleviate these issues, a systematic survey summarizing the properties, challenges, and solutions of DE-GANs has been lacking. In this paper, we revisit and define DE-GANs from the perspective of distribution optimization, analyze their challenges, and propose a taxonomy that classifies existing methods into three categories: Data Selection, GANs Optimization, and Knowledge Sharing. Finally, we highlight open problems and future directions.
Comment: Under review
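As a concrete instance of the augmentation strategies such a survey covers, a DiffAugment-style scheme applies the same differentiable augmentations to both real and generated batches before the discriminator, so neither network ever sees the un-augmented data directly. The PyTorch sketch below is our illustration; the chosen ops, hyperparameters, and function names are assumptions.

```python
import torch
import torch.nn.functional as F

def diff_augment(x, brightness=0.3, shift_frac=0.125):
    """Differentiable brightness jitter and random translation applied
    identically to real and fake NCHW batches; gradients flow through,
    so the generator is also trained under the augmentation."""
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5) * brightness
    max_shift = int(x.size(-1) * shift_frac)
    dx, dy = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
    return torch.roll(x, shifts=(dx, dy), dims=(2, 3))

def d_loss(D, G, real, z):
    """Non-saturating discriminator loss with augmented inputs."""
    fake = G(z).detach()
    loss_real = F.softplus(-D(diff_augment(real))).mean()
    loss_fake = F.softplus(D(diff_augment(fake))).mean()
    return loss_real + loss_fake
```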
Distributed Pruning Towards Tiny Neural Networks in Federated Learning
Neural network pruning is an essential technique for reducing the size and
complexity of deep neural networks, enabling large-scale models on devices with
limited resources. However, existing pruning approaches heavily rely on
training data for guiding the pruning strategies, making them ineffective for
federated learning over distributed and confidential datasets. Additionally,
the memory- and computation-intensive pruning process becomes infeasible for
resource-constrained devices in federated learning. To address these
challenges, we propose FedTiny, a distributed pruning framework for federated
learning that generates specialized tiny models for memory- and
computing-constrained devices. We introduce two key modules in FedTiny to
adaptively search for coarsely and finely pruned specialized models that fit deployment scenarios, using sparse and cheap local computation. First, an adaptive batch
normalization selection module is designed to mitigate biases in pruning caused
by the heterogeneity of local data. Second, a lightweight progressive pruning
module prunes the models at a finer granularity under strict memory and computational budgets, allowing the pruning policy for each layer to be determined gradually rather than by a one-shot evaluation of the overall model structure. The experimental results
demonstrate the effectiveness of FedTiny, which outperforms state-of-the-art
approaches, particularly when compressing deep models to extremely sparse tiny
models. FedTiny achieves an accuracy improvement of 2.61% while significantly
reducing the computational cost by 95.91% and the memory footprint by 94.01%
compared to state-of-the-art methods.
Comment: This paper has been accepted to ICDCS 202
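The progressive idea can be sketched generically: instead of pruning every layer to the final sparsity at once, the sparsity target is raised stepwise so each layer's pruning policy emerges gradually. The magnitude-pruning sketch below is a generic stand-in under that reading, not FedTiny's exact module (which additionally selects batch-normalization statistics to counter data heterogeneity across clients).

```python
import torch
import torch.nn as nn

def progressive_prune(model, target_sparsity=0.95, steps=5):
    """Raise the magnitude-pruning sparsity stepwise so each layer's
    pruning policy emerges gradually instead of from one-shot scoring."""
    for step in range(1, steps + 1):
        sparsity = target_sparsity * step / steps
        for m in model.modules():
            if isinstance(m, (nn.Linear, nn.Conv2d)):
                w = m.weight.data
                k = int(w.numel() * sparsity)
                if k < 1:
                    continue
                # The k-th smallest magnitude becomes the pruning threshold.
                thresh = w.abs().flatten().kthvalue(k).values
                w.mul_((w.abs() > thresh).float())
        # A brief local fine-tuning pass between steps would normally
        # recover accuracy; omitted to keep the sketch self-contained.
    return model
```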
On-Device Model Fine-Tuning with Label Correction in Recommender Systems
To meet the practical requirements of low latency, low cost, and good privacy
in online intelligent services, more and more deep learning models are
offloaded from the cloud to mobile devices. To further deal with cross-device
data heterogeneity, the offloaded models normally need to be fine-tuned with
each individual user's local samples before being put into real-time inference.
In this work, we focus on the fundamental click-through rate (CTR) prediction
task in recommender systems and study how to effectively and efficiently
perform on-device fine-tuning. We first identify the bottleneck issue that each
individual user's local CTR (i.e., the ratio of positive samples in the local
dataset for fine-tuning) tends to deviate from the global CTR (i.e., the ratio
of positive samples in all the users' mixed datasets on the cloud used to train the initial model). We further demonstrate that such a CTR drift problem
makes on-device fine-tuning even harmful to item ranking. We thus propose a
novel label correction method, which requires each user only to adjust the labels of the local samples before on-device fine-tuning, and which aligns the local prior CTR well with the global CTR. The offline evaluation results over
three datasets and five CTR prediction models as well as the online A/B testing
results in Mobile Taobao demonstrate the necessity of label correction in
on-device fine-tuning and also reveal the improvement over cloud-based learning
without fine-tuning.
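As an illustration of why a purely label-side fix can suffice, the sketch below rescales local labels so their mean (the local prior CTR) matches the global CTR; this correction rule is our simplified stand-in, not necessarily the exact rule proposed in the paper.

```python
import numpy as np

def correct_labels(y_local, global_ctr):
    """Rescale local binary labels so that their mean (the local prior
    CTR) matches the global CTR the offloaded model was trained with.
    Illustrative stand-in, assuming global_ctr lies in [0, 1]."""
    y = np.asarray(y_local, dtype=float)
    local_ctr = y.mean()
    if local_ctr > global_ctr:
        # Too many local positives: soften the positive labels.
        y[y == 1.0] = global_ctr / local_ctr
    elif local_ctr < global_ctr:
        # Too few local positives: lift the negative labels just enough.
        y[y == 0.0] = (global_ctr - local_ctr) / (1.0 - local_ctr)
    return y
```

For instance, a user whose local CTR is 0.5 under a global CTR of 0.05 would have positive labels softened to 0.1, restoring the prior that the initial model saw during cloud training.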
Distributed Non-Convex Optimization with Sublinear Speedup under Intermittent Client Availability
Federated learning is a distributed machine learning framework in which many heterogeneous clients collaboratively train a model without sharing
training data. In this work, we consider a practical and ubiquitous issue when
deploying federated learning in mobile environments: intermittent client
availability, where the set of eligible clients may change during the training
process. Such intermittent client availability would seriously deteriorate the
performance of the classical Federated Averaging algorithm (FedAvg for short).
Thus, we propose a simple distributed non-convex optimization algorithm, called
Federated Latest Averaging (FedLaAvg for short), which leverages the latest
gradients of all clients, even when the clients are not available, to jointly
update the global model in each iteration. Our theoretical analysis shows that FedLaAvg converges and achieves a sublinear speedup with respect to the total number of clients. We
implement FedLaAvg along with several baselines and evaluate them over the
benchmarking MNIST and Sentiment140 datasets. The evaluation results
demonstrate that FedLaAvg achieves more stable training than FedAvg in both
convex and non-convex settings and indeed reaches a sublinear speedup.
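The core of latest averaging can be sketched in a few lines: the server caches each client's most recent gradient and averages over all clients every round, refreshing only the gradients of the currently available ones. The sketch below is our simplification (full gradients, fixed learning rate); the scheduling and step-size choices behind the convergence guarantee follow the paper, not this sketch.

```python
import numpy as np

def fedlaavg(w0, grad_fns, available, rounds=100, lr=0.1):
    """Federated Latest Averaging sketch: average the latest cached
    gradient of ALL clients each round, reusing stale gradients from
    clients that are currently unavailable. `grad_fns` maps client id
    to a gradient function of the model w; `available(t)` returns the
    ids of clients eligible in round t."""
    w = np.array(w0, dtype=float)
    latest = {cid: np.zeros_like(w) for cid in grad_fns}
    for t in range(rounds):
        for cid in available(t):
            latest[cid] = grad_fns[cid](w)   # refresh available clients
        w -= lr * sum(latest.values()) / len(latest)
    return w
```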