Some comments on C. S. Wallace's random number generators
We outline some of Chris Wallace's contributions to pseudo-random number
generation. In particular, we consider his idea for generating normally
distributed variates without relying on a source of uniform random numbers, and
compare it with more conventional methods for generating normal random numbers.
Implementations of Wallace's idea can be very fast (approximately as fast as
good uniform generators). We discuss the statistical quality of the output, and
mention how certain pitfalls can be avoided.
Comment: 13 pages. For further information, see http://wwwmaths.anu.edu.au/~brent/pub/pub213.htm
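The abstract contrasts Wallace's uniform-free approach with conventional methods that transform uniform variates into normals. As a reference point, here is a minimal sketch of one such conventional method, the Box-Muller transform (this illustrates the baseline, not Wallace's method):

```python
import math
import random

def box_muller(n, rng=random.random):
    """Generate n standard normal variates from uniform randoms via the
    Box-Muller transform, one of the conventional uniform-based methods
    the abstract contrasts with Wallace's approach."""
    out = []
    while len(out) < n:
        u1 = 1.0 - rng()          # shift from [0, 1) to (0, 1] so log(u1) is defined
        u2 = rng()
        r = math.sqrt(-2.0 * math.log(u1))
        # Each uniform pair yields two independent normal variates.
        out.append(r * math.cos(2.0 * math.pi * u2))
        if len(out) < n:
            out.append(r * math.sin(2.0 * math.pi * u2))
    return out
```

Note each iteration consumes two uniforms and a logarithm, square root, and two trigonometric calls, which is why a well-implemented Wallace-style generator, which avoids uniforms entirely, can approach the speed of the uniform generator itself.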
Biosignal Generation and Latent Variable Analysis with Recurrent Generative Adversarial Networks
The effectiveness of biosignal generation and data augmentation with
biosignal generative models based on generative adversarial networks (GANs),
which are a type of deep learning technique, was demonstrated in our previous
paper. GAN-based generative models only learn the projection between a random
distribution as input data and the distribution of training data. Therefore, the
relationship between input and generated data is unclear, and the
characteristics of the data generated from this model cannot be controlled.
This study proposes a method for generating time-series data based on GANs and
explores their ability to generate biosignals with certain classes and
characteristics. Moreover, in the proposed method, latent variables are
analyzed using canonical correlation analysis (CCA) to represent the
relationship between input and generated data as canonical loadings. Using
these loadings, we can control the characteristics of the data generated by the
proposed method. The influence of class labels on generated data is analyzed by
feeding the data interpolated between two class labels into the generator of
the proposed GANs. The CCA of the latent variables is shown to be an effective
method of controlling the generated data characteristics. We are able to model
the distribution of the time-series data without requiring domain-dependent
knowledge using the proposed method. Furthermore, it is possible to control the
characteristics of these data by analyzing the model trained using the proposed
method. To the best of our knowledge, this work is the first to generate
biosignals using GANs while controlling the characteristics of the generated
data.
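The CCA step described above relates the generator's latent inputs to features of the generated data via canonical loadings. A self-contained sketch of that analysis (with `Z` and `X` as hypothetical stand-ins for the latent inputs and generated-data features; the paper's actual pipeline differs):

```python
import numpy as np

def cca_loadings(Z, X, n_components=2, reg=1e-6):
    """Canonical correlation analysis between latent inputs Z (n x dz)
    and generated-data features X (n x dx). Returns the canonical
    correlations and the loadings of each latent dimension on the
    canonical variates, as used in the abstract to interpret and
    control generated-data characteristics."""
    Z = Z - Z.mean(axis=0)
    X = X - X.mean(axis=0)
    n = Z.shape[0]
    # Regularized block covariances keep the Cholesky factors well defined.
    Czz = Z.T @ Z / n + reg * np.eye(Z.shape[1])
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Czx = Z.T @ X / n
    # Whiten each block; the SVD of the whitened cross-covariance gives
    # the canonical directions (U) and correlations (singular values).
    Lz = np.linalg.cholesky(Czz)
    Lx = np.linalg.cholesky(Cxx)
    M = np.linalg.solve(Lz, Czx) @ np.linalg.inv(Lx).T
    U, s, Vt = np.linalg.svd(M)
    Wz = np.linalg.solve(Lz.T, U[:, :n_components])
    corr = s[:n_components]
    # Loadings: correlation of each latent dimension with each variate.
    variates = Z @ Wz
    loadings = np.corrcoef(Z.T, variates.T)[:Z.shape[1], Z.shape[1]:]
    return corr, loadings
```

Latent dimensions with large loadings on a canonical variate are the ones to perturb when steering the corresponding characteristic of the generated signals.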
Multi-dimensional key generation of ICMetrics for cloud computing
Despite the rapid expansion and uptake of cloud based services, lack of trust in the provenance of such services represents a significant inhibiting factor in their further expansion. This paper explores an approach to assure trust and provenance in cloud based services via the generation of digital signatures using properties or features derived from their own construction and software behaviour. The resulting system removes the need for a server to store a private key in a typical Public/Private-Key Infrastructure for data sources. Rather, keys are generated at run-time from features obtained as service execution proceeds. In this paper we investigate the suitability of several potential software features for use in a cloud service identification system. Generating a stable and unique digital identity from features in cloud computing is challenging because the unstable operating environment implies that the employed features are likely to vary under normal operating conditions. To address this, we introduce a multi-dimensional key generation technology which maps from a multi-dimensional feature space directly to a key space. Subsequently, a smooth entropy algorithm is developed to evaluate the entropy of the key space.
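The core difficulty the abstract identifies is mapping noisy run-time features to a stable key. A toy sketch of one way to do this, quantizing each feature into a coarse bin so small variation maps to the same key, then hashing the bin indices (`derive_key` and its bin widths are hypothetical; the paper's multi-dimensional mapping and smooth-entropy evaluation are more involved):

```python
import hashlib

def derive_key(features, bin_widths):
    """Map a multi-dimensional feature vector to a key. Each noisy
    feature is quantized into a coarse bin (so small run-time variation
    lands in the same bin), and the tuple of bin indices is hashed.
    A simplified illustration only -- not the paper's algorithm."""
    bins = tuple(int(f // w) for f, w in zip(features, bin_widths))
    return hashlib.sha256(repr(bins).encode()).hexdigest()
```

The trade-off is visible even in this sketch: wider bins tolerate more feature drift but shrink the key space (and hence its entropy), which is what a smooth-entropy evaluation of the resulting key space would quantify.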
Random Latin squares and Sudoku designs generation
Uniform random generation of Latin squares is a classical problem. In this
paper we prove that both Latin squares and Sudoku designs are maximum cliques
of properly defined graphs. We have developed a simple algorithm for uniform
random sampling of Latin squares and Sudoku designs. It makes use of recent
tools for graph analysis. The corresponding SAS code is annexed.
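To make the uniformity target concrete, here is a toy baseline in Python (the paper's own implementation is in SAS and uses the clique formulation): drawing each row as an independent random permutation and rejecting non-Latin results is exactly uniform over Latin squares, just exponentially slow, which is why a structured algorithm like the clique-based one is needed.

```python
import random

def is_latin_square(sq):
    """Check that every row and every column of sq is a permutation
    of the same n symbols 0..n-1."""
    n = len(sq)
    symbols = set(range(n))
    return (all(set(row) == symbols for row in sq)
            and all(set(col) == symbols for col in zip(*sq)))

def random_latin_square(n, rng=random):
    """Uniform sampling of n x n Latin squares by rejection: draw each
    row as a random permutation and retry until the columns also work.
    Every Latin square is equally likely, but the acceptance rate
    collapses as n grows -- a toy illustration of the uniformity
    target, not a practical algorithm."""
    while True:
        sq = [rng.sample(range(n), n) for _ in range(n)]
        if is_latin_square(sq):
            return sq
```

Each candidate row-tuple has probability (1/n!)^n, so conditioning on acceptance gives every Latin square identical probability; the point of the clique-based method is to achieve the same uniform law without the rejection blow-up.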
Adversarially Tuned Scene Generation
Generalization performance of trained computer vision systems that use
computer graphics (CG) generated data is limited by the 'domain shift'
between virtual and real data. Although simulated data
augmented with a few real world samples has been shown to mitigate domain shift
and improve transferability of trained models, guiding or bootstrapping the
virtual data generation with the distributions learnt from target real world
domain is desired, especially in fields where annotating even a few real
images is laborious (such as semantic labeling and intrinsic images). In
order to address this problem in an unsupervised manner, our work combines
recent advances in CG (which aims to generate stochastic scene layouts coupled
with large collections of 3D object models) and generative adversarial training
(which aims to train generative models by measuring discrepancy between generated
and real data in terms of their separability in the space of a deep
discriminatively-trained classifier). Our method uses iterative estimation of
the posterior density of prior distributions for a generative graphical model.
This is done within a rejection sampling framework. Initially, we assume
uniform distributions as priors on the parameters of a scene described by a
generative graphical model. As iterations proceed the prior distributions get
updated to distributions that are closer to the (unknown) distributions of
target data. We demonstrate the utility of adversarially tuned scene generation
on two real-world benchmark datasets (CityScapes and CamVid) for traffic scene
semantic labeling with a deep convolutional net (DeepLab). We observed
performance improvements of 2.28 and 3.14 points (using the IoU metric) between
the DeepLab models trained on simulated sets prepared from the scene generation
models before and after tuning to CityScapes and CamVid, respectively.
Comment: 9 pages, accepted at CVPR 201