7,448 research outputs found
Solving ℓp-norm regularization with tensor kernels
In this paper, we discuss how a suitable family of tensor kernels can be used
to efficiently solve nonparametric extensions of regularized learning
methods. Our main contribution is a fast dual algorithm, which we show solves
the problem efficiently. Our results contrast with recent findings suggesting
that kernel methods cannot be extended beyond the Hilbert space setting.
Numerical experiments confirm the effectiveness of the method.
Modeling Time Series of Real Systems using Genetic Programming
Analytic models of two computer generated time series (Logistic map and
Rossler system) and two real time series (ion saturation current in Aditya
Tokamak plasma and NASDAQ composite index) are constructed using Genetic
Programming (GP) framework. In each case, the optimal map that results from
fitting part of the data set also provides a very good description of the rest
of the data. Predictions made by iterating the map range from very good to fair.
Comment: 10 pages, 9 figures, submitted to Physical Review
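The fitness evaluation at the heart of such a GP search can be sketched as follows: generate a logistic-map series, fit a candidate analytic map on the first part, and score one-step-ahead error on the held-out rest. The function names and the candidate maps are illustrative, not taken from the paper.

```python
# Sketch of GP-style fitness evaluation: score a candidate analytic map
# x_{t+1} = F(x_t) by its one-step-ahead MSE on held-out data.

def logistic_series(r=4.0, x0=0.3, n=200):
    """Generate a logistic-map time series x_{t+1} = r x_t (1 - x_t)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def mse_on_split(series, candidate, train_frac=0.5):
    """One-step-ahead MSE of a candidate map on the held-out tail."""
    split = int(len(series) * train_frac)
    test = series[split:]
    errs = [(candidate(a) - b) ** 2 for a, b in zip(test, test[1:])]
    return sum(errs) / len(errs)

series = logistic_series()
good = lambda x: 4.0 * x * (1.0 - x)  # matches the true map
bad = lambda x: x                     # identity map as a poor competitor
assert mse_on_split(series, good) < 1e-12
assert mse_on_split(series, bad) > mse_on_split(series, good)
```

A GP framework would evolve a population of candidate expressions and use a score like `mse_on_split` as the fitness to minimize.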
Information, complexity and entropy: a new approach to theory and measurement methods
In this paper, we present some results on information, complexity and entropy
as defined below and we discuss their relations with the Kolmogorov-Sinai
entropy, which is the most important invariant of a dynamical system. These
results have the following features and motivations:
- we give a new computable definition of information and complexity, which
allows us to give a computable characterization of the K-S entropy;
- these definitions make sense even for a single orbit and can be measured by
suitable data compression algorithms; hence they can be used in simulations and
in the analysis of experimental data;
- the asymptotic behavior of these quantities allows us to compute not only the
Kolmogorov-Sinai entropy but also other quantities that give a measure of the
chaotic behavior of a dynamical system even in the case of null entropy.
Comment: 30 pages, 6 figures
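The compression-based measurement of orbit complexity described above can be sketched with a standard compressor: a symbolic orbit of the chaotic logistic map (r = 4, positive entropy) should compress far worse than one from a periodic regime. The partition and parameters here are illustrative.

```python
# Estimate orbit complexity by compressing its symbolic sequence.
import zlib

def symbolic_orbit(r, x0=0.3, n=5000):
    """0/1 symbols from the logistic map via the x < 0.5 partition."""
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append('0' if x < 0.5 else '1')
    return ''.join(bits).encode()

def compressed_size(data):
    """Length in bytes after maximum-level DEFLATE compression."""
    return len(zlib.compress(data, 9))

chaotic = compressed_size(symbolic_orbit(4.0))   # positive-entropy regime
periodic = compressed_size(symbolic_orbit(3.2))  # period-2, near-zero entropy
assert periodic < chaotic
```

The growth rate of the compressed size with orbit length plays the role of the entropy estimate; a null-entropy orbit yields sublinear growth.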
Separating a mixture of chaotic signals
Chaos is popularly associated with its property of sensitivity to initial
conditions. In this paper we will show that there can be a flip side to this
property which is quite fascinating and highly useful in many applications. As
a result, we can mix a large number of chaotic signals and one completely
arbitrary signal and later a recipient of this transformed and weighted mixture
can separate each of the signals, one by one. The chaotic signals could be
generated by various maps which belong to the logistic family. The arbitrary
signal could be a message, some random noise, some periodic signal or a
chaotic signal generated by a source, either belonging or not belonging to the
family. The key behind this procedure is a family of maps which can dovetail
into each other without altering each of their predecessor's symbolic sequence.
Comment: 9 pages, 7 figures. This paper was presented at the International
Conference on Non-linear Dynamics and Chaos: Advances and Perspectives,
17–21 September 2007, Aberdeen, UK.
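The mixing side of the scheme can be sketched as a weighted sum of logistic-family signals plus one arbitrary signal; the parameters and weights below are illustrative, and the separation step, which depends on the paper's dovetailing family of maps, is not shown.

```python
# Sketch of the mixing side only: several logistic-family signals plus one
# arbitrary message signal combined into a weighted mixture.
import math

def logistic_signal(r, x0, n):
    """Iterate x_{t+1} = r x_t (1 - x_t) and collect the orbit."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

n = 100
chaotic = [logistic_signal(r, 0.3, n) for r in (3.7, 3.8, 3.9)]
message = [math.sin(0.1 * t) for t in range(n)]  # the arbitrary signal
signals = chaotic + [message]
weights = [0.4, 0.3, 0.2, 0.1]                   # mixing weights
mixture = [sum(w * s[t] for w, s in zip(weights, signals)) for t in range(n)]
assert len(mixture) == n
```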
Exploiting ergodicity of the logistic map using deep-zoom to improve security of chaos-based cryptosystems
This paper explores the deep-zoom properties of the chaotic k-logistic map,
in order to propose an improved chaos-based cryptosystem. This map was shown to
enhance the random features of the Logistic map, while at the same time
reducing the predictability of its orbits. We incorporate its security
strengths into a previously published cryptosystem to provide an optimal
pseudo-random number generator (PRNG) as its core operation. The result is a
reliable method that does not have the weaknesses previously reported for the
original cryptosystem.
Comment: 11 pages, 6 figures
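The deep-zoom idea can be sketched as follows, assuming the k-logistic map discards the first k decimal digits of each orbit point, i.e. keeps frac(x · 10^k); the parameter names and thresholding are illustrative.

```python
# Sketch of a deep-zoom PRNG, assuming the k-logistic map drops the first
# k decimal digits of each logistic orbit point.

def k_logistic_bits(r=3.99, x0=0.4, k=4, n=64):
    """Pseudo-random bits from deep-zoomed logistic orbit points."""
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        zoomed = (x * 10 ** k) % 1.0   # keep digits after the k-th decimal
        bits.append(1 if zoomed >= 0.5 else 0)
    return bits

stream = k_logistic_bits()
assert set(stream) <= {0, 1} and len(stream) == 64
```

The zoom strips the coarse, more predictable leading digits of the orbit, which is what improves the randomness of the resulting stream.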
Learning and inference in knowledge-based probabilistic model for medical diagnosis
Based on a weighted knowledge graph to represent first-order knowledge and
combining it with a probabilistic model, we propose a methodology for the
creation of a medical knowledge network (MKN) in medical diagnosis. When a set
of symptoms is activated for a specific patient, we can generate a ground
medical knowledge network composed of symptom nodes and potential disease
nodes. By Incorporating a Boltzmann machine into the potential function of a
Markov network, we investigated the joint probability distribution of the MKN.
In order to deal with numerical symptoms, a multivariate inference model is
presented that uses conditional probability. In addition, the weights for the
knowledge graph were efficiently learned from manually annotated Chinese
Electronic Medical Records (CEMRs). In our experiments, we found numerically
that suitable choices of the disease-node quality and the symptom-variable
expression can improve the effectiveness of medical diagnosis. Our
experimental results comparing a Markov logic network and the logistic
regression algorithm on an actual CEMR database indicate that our method holds
promise and that MKN can facilitate studies of intelligent diagnosis.
Comment: 32 pages, 8 figures
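The Boltzmann-machine-style potential over symptom and disease nodes can be sketched as follows. This is an illustrative toy, not the paper's model: the weights, node names, and the restriction to two disease nodes are made up.

```python
# Toy Boltzmann-style potential over binary symptom/disease nodes, with the
# joint probability proportional to exp(-E). Weights are invented.
import itertools
import math

weights = {('fever', 'flu'): 1.2, ('cough', 'flu'): 0.8,
           ('fever', 'cold'): 0.4, ('cough', 'cold'): 0.9}

def energy(symptoms, diseases):
    """Negative sum of weights over co-active symptom/disease pairs."""
    return -sum(w for (s, d), w in weights.items()
                if symptoms.get(s) and diseases.get(d))

def disease_posterior(symptoms):
    """Normalize exp(-E) over all on/off assignments of the disease nodes."""
    names = ['flu', 'cold']
    states = [dict(zip(names, bits))
              for bits in itertools.product([0, 1], repeat=len(names))]
    scores = [math.exp(-energy(symptoms, st)) for st in states]
    z = sum(scores)  # partition function over disease assignments
    return {tuple(st.items()): sc / z for st, sc in zip(states, scores)}

post = disease_posterior({'fever': 1, 'cough': 1})
assert abs(sum(post.values()) - 1.0) < 1e-12
```

In a real MKN the ground network is built per patient from the activated symptoms, and exact enumeration is replaced by approximate inference.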
Image Encryption Algorithm Using Natural Interval Extensions
It is known that chaotic systems have been widely used in cryptography.
Generally, floating-point simulations are used to generate pseudo-random
sequences of numbers. Although it is possible to find some works on the
degradation of chaotic systems due to the finite precision of digital computers,
little attention has been paid to exploiting this limitation to formulate an
efficient image-encryption process. This article proposes a novel image
encryption method using natural interval extensions. The sequence of arithmetic
operations is different in each natural interval extension. This is what we
need to produce two different sequences; the difference between these sequences
is used to generate the lower bound error, which has been shown to present
satisfactory pseudo-random properties. The approach has been successfully
tested using Chua's circuit as the chaotic system. The secret key has
presented good properties for encrypting the Lena image.
Comment: BTSym'18 - Brazilian Technology Symposium, 2018, Campinas. 5 pages
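The lower-bound-error idea can be sketched with the logistic map standing in for Chua's circuit: two natural interval extensions of the same map differ only in the order of floating-point operations, and the gap between their orbits, amplified by chaos, serves as a pseudo-random stream. Parameters below are illustrative.

```python
# Two natural interval extensions of the logistic map diverge in floating
# point; their growing difference is the "lower bound error" stream.

def lower_bound_error_stream(r=3.99, x0=0.6, n=64, burn=200):
    """Collect |x - y| after a burn-in, for two orderings of the same map."""
    x = y = x0
    out = []
    for i in range(burn + n):
        x = r * x * (1.0 - x)       # extension 1: r*x*(1-x)
        y = r * y - r * y * y       # extension 2: r*x - r*x*x
        if i >= burn:
            out.append(abs(x - y))  # the lower bound error
    return out

stream = lower_bound_error_stream()
# Rounding differences are amplified by the chaotic dynamics, so after the
# burn-in the two orbits have decorrelated.
assert max(stream) > 1e-6
```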
Functional Dynamics I : Articulation Process
The articulation process of dynamical networks is studied with a functional
map, a minimal model for the dynamic change of relationships through iteration.
The model is a dynamical system of a function f, not of variables, having a
self-reference term f(f(x)), introduced by recalling that an operation in a
biological system is often applied to itself, as is typically seen in the rules
of natural language or in genes. Starting from an inarticulate network, two types
of fixed points are formed as an invariant structure with iterations. The
function is folded with time, until it has finite or infinite piecewise-flat
segments of fixed points, regarded as articulation. For an initial logistic
map, attracted functions are classified into step, folded step, fractal, and
random phases, according to the degree of folding. Oscillatory dynamics are
also found, where function values are mapped to several fixed points
periodically. The significance of our results to prototype categorization in
language is discussed.
Comment: 48 pages, 15 figures (5 gif files)
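A functional map with a self-reference term can be sketched on a discretized grid, assuming the common form f_{n+1}(x) = (1 − ε) f_n(x) + ε f_n(f_n(x)); the paper's exact definition may differ in detail, and the grid size, ε, and initial condition are illustrative.

```python
# Minimal functional-map sketch: iterate f <- (1-eps) f + eps f(f(x))
# on a grid over [0, 1], starting from a sampled logistic map.

N = 200    # grid points on [0, 1]
eps = 0.5  # strength of the self-reference term

def idx(v):
    """Clamp a value into [0, 1] and map it to its nearest grid index."""
    v = min(max(v, 0.0), 1.0)
    return min(int(v * N), N - 1)

# Initial condition: the logistic map sampled on the grid.
f = [3.7 * (i / N) * (1.0 - i / N) for i in range(N)]

for _ in range(50):  # iterate the functional dynamics
    f = [(1.0 - eps) * f[i] + eps * f[idx(f[i])] for i in range(N)]

assert len(f) == N and all(0.0 <= v <= 1.0 for v in f)
```

After many iterations the function develops piecewise-flat segments at fixed points, which is the articulation described above.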
Compressive sampling with chaotic dynamical systems
We investigate the possibility of using different chaotic sequences to
construct measurement matrices in compressive sampling. In particular, we
consider sequences generated by Chua, Lorenz and Rossler dynamical systems and
investigate the accuracy of reconstruction when using each of them to construct
measurement matrices. Chua and Lorenz sequences appear to be suitable to
construct measurement matrices. We compare the recovery rate of the original
sequence with that obtained by using Gaussian, Bernoulli and uniformly
distributed random measurement matrices. We also investigate the impact of
correlation on the recovery rate. It appears that correlation does not
influence the probability of exact reconstruction significantly.
Comment: 19th Telecommunications Forum TELFOR 2011, November 2011, Belgrade, Serbia
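The construction can be sketched with a logistic-map sequence standing in for the Chua/Lorenz/Rossler sequences above: fill a measurement matrix with (subsampled, centered) chaotic samples, take compressive measurements of a sparse signal, and verify recovery by least squares on the known support. A full sparse solver (OMP, basis pursuit) is omitted, and the construction details are illustrative, not the paper's.

```python
# Chaotic measurement matrix for compressive sampling, with a known-support
# least-squares recovery check.
import numpy as np

def logistic_matrix(m, n, r=3.99, x0=0.33, skip=5):
    """m x n matrix of centered, column-normalized logistic samples."""
    vals, x = [], x0
    for _ in range(m * n * skip):
        x = r * x * (1.0 - x)
        vals.append(x)
    a = np.array(vals[::skip]).reshape(m, n)  # subsample to cut correlation
    a = 2.0 * a - 1.0                         # center around zero
    return a / np.linalg.norm(a, axis=0)      # unit-norm columns

m, n = 40, 100
phi = logistic_matrix(m, n)
x_true = np.zeros(n)
x_true[[3, 50, 77]] = [1.0, -2.0, 0.5]        # a 3-sparse signal
y = phi @ x_true                              # m << n compressive measurements

support = [3, 50, 77]
coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
x_hat = np.zeros(n)
x_hat[support] = coef
assert np.allclose(x_hat, x_true, atol=1e-8)
```

Swapping `logistic_matrix` for a Gaussian or Bernoulli random matrix reproduces the comparison the abstract describes.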
AX-DBN: An Approximate Computing Framework for the Design of Low-Power Discriminative Deep Belief Networks
The power budget for embedded hardware implementations of Deep Learning
algorithms can be extremely tight. To address implementation challenges in such
domains, new design paradigms, like Approximate Computing, have drawn
significant attention. Approximate Computing exploits the innate
error-resilience of Deep Learning algorithms, a property that makes them
amenable for deployment on low-power computing platforms. This paper describes
an Approximate Computing design methodology, AX-DBN, for an architecture
belonging to the class of stochastic Deep Learning algorithms known as Deep
Belief Networks (DBNs). Specifically, we consider procedures for efficiently
implementing the Discriminative Deep Belief Network (DDBN), a stochastic neural
network used for classification tasks, extending Approximate Computing from
the analysis of deterministic neural networks to stochastic ones. For
the purpose of optimizing the DDBN for hardware implementations, we explore the
use of: (a) Limited precision of neurons and functional approximations of
activation functions; (b) Criticality analysis to identify nodes in the network
which can operate at reduced precision while allowing the network to maintain
target accuracy levels; and (c) A greedy search methodology with incremental
retraining to determine the optimal reduction in precision for all neurons to
maximize power savings. Using the AX-DBN methodology proposed in this paper, we
present experimental results across several network architectures that show
significant power savings under a user-specified accuracy loss constraint with
respect to ideal full-precision implementations.
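Steps (a) and (c) can be sketched as a uniform weight quantizer plus a greedy per-weight precision search. This is an illustrative toy: the weights, accuracy proxy, and bit-width bounds are made up, and the incremental retraining step is omitted.

```python
# Toy sketch of limited-precision quantization (a) and a greedy precision
# search (c): lower each weight's bit width while accuracy stays acceptable.

def quantize(w, bits, w_max=1.0):
    """Round w to the nearest level of a signed uniform bits-wide grid."""
    levels = 2 ** (bits - 1) - 1
    step = w_max / levels
    return round(w / step) * step

def greedy_precisions(weights, evaluate, min_acc, start_bits=8):
    """Per-weight bit widths, greedily reduced under an accuracy floor."""
    bits = [start_bits] * len(weights)
    for i in range(len(weights)):
        while bits[i] > 2:
            trial = bits[:]
            trial[i] -= 1
            q = [quantize(w, b) for w, b in zip(weights, trial)]
            if evaluate(q) >= min_acc:
                bits = trial    # keep the cheaper precision
            else:
                break           # this weight is at its critical precision
    return bits

weights = [0.31, -0.75, 0.05, 0.9]
target = [quantize(w, 8) for w in weights]  # proxy for full-precision output
acc = lambda q: 1.0 - max(abs(a - b) for a, b in zip(q, target))
bits = greedy_precisions(weights, acc, min_acc=0.95)
assert all(2 <= b <= 8 for b in bits)
```

In AX-DBN terms, `evaluate` would be the network's classification accuracy after retraining, and weights whose bit width cannot be lowered are the critical nodes.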