On the Importance and Applicability of Pre-Training for Federated Learning
Pre-training is prevalent in modern deep learning as a way to improve model
performance. In the literature on federated learning (FL), however, neural
networks are mostly initialized with random weights. This motivated us to
conduct a systematic study of pre-training for FL.
Across multiple visual recognition benchmarks, we find that pre-training not
only improves FL but also closes its accuracy gap to centralized learning,
especially in the challenging case of non-IID client data. To make our
findings applicable to situations where pre-trained models
are not directly available, we explore pre-training with synthetic data, or
even with clients' data in a decentralized manner, and find that both can
already improve FL notably. Interestingly, many of the techniques we explore
are complementary to each other and can be combined to further boost
performance, and we view this as a critical result toward scaling up deep FL
for real-world applications. We
conclude our paper with an attempt to understand the effect of pre-training on
FL. We found that pre-training enables the learned global models under
different clients' data conditions to converge to the same loss basin, and
makes global aggregation in FL more stable. Nevertheless, pre-training does
not seem to alleviate local model drift, a fundamental problem in FL under
non-IID data.
Comment: Preprint
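The aggregation step at the heart of the setup this abstract studies can be sketched as standard federated averaging, with each round starting from the current global model, which may be pre-trained rather than randomly initialized. This is a minimal illustration, not the paper's exact protocol; `local_update` stands in for whatever local training the clients perform:

```python
def fed_avg(client_weights, client_sizes):
    """Size-weighted average of client parameter vectors (flat lists)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

def federated_round(global_w, client_data, local_update, client_sizes):
    """One FL round: every client trains starting from the shared global model."""
    local_models = [local_update(list(global_w), data) for data in client_data]
    return fed_avg(local_models, client_sizes)
```

The only change pre-training introduces is the choice of the initial `global_w`; the round structure itself is unchanged.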
Fast minimum variance wavefront reconstruction for extremely large telescopes
We present a new algorithm, FRiM (FRactal Iterative Method), aiming at the
reconstruction of the optical wavefront from measurements provided by a
wavefront sensor. As our application is adaptive optics on extremely large
telescopes, our algorithm was designed with speed and best quality in mind. The
latter is achieved thanks to a regularization which enforces prior statistics.
To solve the regularized problem, we use the conjugate gradient method which
takes advantage of the sparsity of the wavefront sensor model matrix and avoids
the storage and inversion of a huge matrix. The prior covariance matrix is
however non-sparse and we derive a fractal approximation to the Karhunen-Loeve
basis thanks to which the regularization by Kolmogorov statistics can be
computed in O(N) operations, N being the number of phase samples to estimate.
Finally, we propose an effective preconditioning which also scales as O(N) and
yields the solution in 5-10 conjugate gradient iterations for any N. The
resulting algorithm is therefore O(N). As an example, for a 128 x 128
Shack-Hartmann wavefront sensor, FRiM appears to be more than 100 times faster
than the classical vector-matrix multiplication method.
Comment: to appear in the Journal of the Optical Society of America
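The conjugate gradient iteration the abstract relies on can be sketched in a few lines. This is a generic matrix-free CG solver, with the operator supplied as a `matvec` callback so that sparsity can be exploited, as the abstract describes; it is not FRiM itself, and the fractal preconditioner is omitted:

```python
def conjugate_gradient(matvec, b, x0, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A given only matvec(v) = A v."""
    x = list(x0)
    r = [bi - ai for bi, ai in zip(b, matvec(x))]   # initial residual
    p = list(r)                                     # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:                            # squared residual small enough
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x
```

Because only `matvec` is needed, the huge matrix never has to be stored or inverted, which is the point the abstract makes; the preconditioner's job is to keep the iteration count near-constant in N.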
Map online system using internet-based image catalogue
Digital maps carry geodata, such as coordinates, that is important in topographic and thematic maps. This geodata is especially meaningful in the military field. Because the maps carry this information, the image files are large; the bigger the file, the more storage is required and the longer the loading time. These conditions make such maps unsuitable for an image catalogue approach in an Internet environment. With compression techniques, the image size can be reduced while the quality of the image is still largely preserved. This report focuses on one image compression technique based on wavelet technology, which performs much better than many other current image compression techniques. The compressed images are applied to a system called Map Online, which uses an Internet-based image catalogue approach. The system allows users to buy maps online and to download the maps they have bought, in addition to searching for maps based on several meaningful keywords. This system is expected to be used by Jabatan Ukur dan Pemetaan Malaysia (JUPEM) to help realize the organization's vision.
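The wavelet compression idea the report describes can be illustrated with a single-level Haar transform: small fine-detail coefficients are discarded, shrinking the data, while smooth regions reconstruct exactly. This is a toy 1-D sketch under simplifying assumptions (real map codecs use multi-level 2-D transforms plus quantisation and entropy coding):

```python
import math

def haar_forward(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    h = 1 / math.sqrt(2)
    avg = [(a + b) * h for a, b in zip(x[::2], x[1::2])]   # coarse averages
    det = [(a - b) * h for a, b in zip(x[::2], x[1::2])]   # fine details
    return avg, det

def haar_inverse(avg, det):
    """Exact inverse of haar_forward."""
    h = 1 / math.sqrt(2)
    out = []
    for a, d in zip(avg, det):
        out.extend([(a + d) * h, (a - d) * h])
    return out

def compress(x, thresh):
    """Zero out detail coefficients below thresh, then reconstruct."""
    avg, det = haar_forward(x)
    det = [0.0 if abs(d) < thresh else d for d in det]
    return haar_inverse(avg, det)
```

Coefficients set to zero compress very well, and in smooth regions (where details are already near zero) the reconstruction is visually indistinguishable from the original, which is the quality-preserving property claimed above.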
Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective
Bibliography: p. 208-225.
Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation.
The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
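The affine block representation described above can be sketched as a search: for each range block, find the domain block and the least-squares scale/offset pair under which the domain block best reproduces it. This is a 1-D toy version under simplifying assumptions (real codecs use 2-D blocks and downsampled, possibly rotated or reflected, domain pools):

```python
def encode_range_block(range_block, domain_blocks):
    """Return (error, domain_index, scale, offset) for the best affine match."""
    n = len(range_block)
    rm = sum(range_block) / n
    best = None
    for idx, d in enumerate(domain_blocks):
        dm = sum(d) / n
        var = sum((di - dm) ** 2 for di in d)
        # least-squares fit of r ~ s*d + o
        s = 0.0 if var == 0 else sum(
            (di - dm) * (ri - rm) for di, ri in zip(d, range_block)) / var
        o = rm - s * dm
        err = sum((s * di + o - ri) ** 2 for di, ri in zip(d, range_block))
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best
</antml_never>```

A signal is "self-affine" in the sense discussed above exactly when, for most range blocks, some domain block drawn from the same signal yields a small residual `err`; the dissertation's question is why (and to what extent) natural images have this property.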
Connected component identification and cluster update on GPU
Cluster identification tasks occur in a multitude of contexts in physics and
engineering such as, for instance, cluster algorithms for simulating spin
models, percolation simulations, segmentation problems in image processing, or
network analysis. While it has been shown that graphics processing units (GPUs)
can result in speedups of two to three orders of magnitude as compared to
serial codes on CPUs for the case of local and thus naturally parallelized
problems such as single-spin flip update simulations of spin models, the
situation is considerably more complicated for the non-local problem of cluster
or connected component identification. I discuss the suitability of different
approaches of parallelization of cluster labeling and cluster update algorithms
for calculations on GPU and compare to the performance of serial
implementations.
Comment: 15 pages, 14 figures, one table, submitted to PR
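The serial baseline that GPU variants are measured against is essentially union-find connected-component labeling; a minimal sketch for 4-connected cells on a grid (the non-local difficulty the abstract points to is that GPU versions must parallelize the equivalent label-propagation and merging steps):

```python
def find(parent, i):
    """Find the root of i with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def label_components(grid):
    """Label 4-connected components of truthy cells; empty cells get -1."""
    h, w = len(grid), len(grid[0])
    parent = list(range(h * w))
    def union(a, b):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[rb] = ra
    for y in range(h):
        for x in range(w):
            if not grid[y][x]:
                continue
            i = y * w + x
            if x + 1 < w and grid[y][x + 1]:   # merge with right neighbour
                union(i, i + 1)
            if y + 1 < h and grid[y + 1][x]:   # merge with lower neighbour
                union(i, i + w)
    return [[find(parent, y * w + x) if grid[y][x] else -1
             for x in range(w)] for y in range(h)]
```

Each cell touches only its immediate neighbours, but the resulting labels encode global connectivity, which is why the problem does not parallelize as naturally as single-spin flip updates.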
From Malware Samples to Fractal Images: A New Paradigm for Classification. (Version 2.0, Previous version paper name: Have you ever seen malware?)
To date, a large number of research papers have been written on malware
classification: identifying malware, classifying it into families, and
distinguishing malware from goodware. These works have
been based on captured malware samples and have attempted to analyse malware
and goodware using various techniques, including techniques from the field of
artificial intelligence. For example, neural networks have played a significant
role in these classification methods. Some of this work also deals with
analysing malware using its visualisation. These works usually convert malware
samples capturing the structure of malware into image structures, which are
then the object of image processing. In this paper, we propose an
unconventional and novel approach to malware visualisation based on dynamic
behaviour analysis, in which the resulting images, which are visually very
interesting, are used to classify malware against goodware. Our
approach opens an extensive topic for future discussion and provides many new
directions for research in malware analysis and classification, as discussed in
the conclusion. The results of the presented experiments are based on a database of
6 589 997 goodware, 827 853 potentially unwanted applications and 4 174 203
malware samples provided by ESET and selected experimental data (images,
generating polynomial formulas and software generating images) are available on
GitHub for interested readers. Thus, this paper is not a comprehensive compact
study that reports the results obtained from comparative experiments but rather
attempts to show a new direction in the field of visualisation with possible
applications in malware analysis.
Comment: This paper is under review; the section describing conversion from
malware structure to fractal figure is temporarily erased here to protect our
idea. It will be replaced by a full version when accepted.