Genetic algorithms
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. This work introduces basic genetic algorithm concepts and applications, and presents results from a project to develop a software tool intended to enable the widespread use of genetic algorithm technology.
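The loop of selection, recombination, and mutation that the abstract describes can be sketched in a few lines. The following is a minimal toy example (not the tool the abstract refers to), maximizing the number of 1-bits in a binary string; all parameter values are illustrative choices.

```python
import random

random.seed(0)

TARGET_LEN = 20          # bits per individual
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(ind):
    """OneMax toy fitness: count of 1-bits (the 'fittest' have more ones)."""
    return sum(ind)

def select(pop):
    """Tournament selection: the fitter of two random individuals survives."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    """Single-point crossover, loosely mimicking genetic recombination."""
    cut = random.randint(1, TARGET_LEN - 1)
    return p1[:cut] + p2[cut:]

def mutate(ind):
    """Flip each bit with small probability (mutation)."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print(fitness(best))     # typically close to the optimum of 20
```

Each generation replaces the whole population with offspring of tournament winners, which is one of several standard replacement schemes.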
Symmetric tensor decomposition
We present an algorithm for decomposing a symmetric tensor of dimension n and order d as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables of total degree d as a sum of powers of linear forms (Waring's problem), incidence properties on secant varieties of the Veronese variety, and the representation of linear forms as a linear combination of evaluations at distinct points. We then reformulate Sylvester's approach from the dual point of view. Exploiting this duality, we propose necessary and sufficient conditions for the existence of such a decomposition of a given rank, using the properties of Hankel (and quasi-Hankel) matrices derived from multivariate polynomials and normal form computations. This leads to the resolution of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition based on this characterization and on linear algebra computations with these Hankel matrices. The impact of this contribution is twofold. First, it permits an efficient computation of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g., Alternating Least Squares or gradient descent). Second, it gives tools for understanding uniqueness conditions and for detecting the rank.
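The decomposition the abstract refers to can be written explicitly. Under Waring's formulation, a homogeneous polynomial f of degree d in n variables is expressed as a sum of d-th powers of linear forms:

```latex
% Waring decomposition of a symmetric tensor, viewed as a homogeneous
% polynomial f of degree d in x_1, ..., x_n:
f(\mathbf{x}) \;=\; \sum_{i=1}^{r} \lambda_i \,\bigl(\ell_i(\mathbf{x})\bigr)^{d},
\qquad
\ell_i(\mathbf{x}) \;=\; k_{i,1}\,x_1 + \cdots + k_{i,n}\,x_n ,
```

where the smallest such r is the symmetric rank of the tensor. The symbols lambda_i, ell_i, and k_{i,j} here are generic notation for the weights, linear forms, and their coefficients, not the paper's specific choices.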
Gravitational waves: search results, data analysis and parameter estimation
The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.
Astrophysical Data Analytics based on Neural Gas Models, using the Classification of Globular Clusters as Playground
In astrophysics, the identification of candidate globular clusters through deep, wide-field, single-band HST images is a typical data analytics problem, where methods based on machine learning have shown high efficiency and reliability, demonstrating the capability to improve on traditional approaches. Here we experimented with some variants of the known Neural Gas model, exploring both supervised and unsupervised paradigms of machine learning, on the classification of globular clusters extracted from the NGC1399 HST data. The main focus of this work was to use a well-tested playground to scientifically validate such models for further extended experiments in astrophysics, using other standard machine learning methods (for instance, Random Forest and Multi-Layer Perceptron neural networks) for a comparison of performance in terms of purity and completeness.

Comment: Proceedings of the XIX International Conference "Data Analytics and Management in Data Intensive Domains" (DAMDID/RCDL 2017), Moscow, Russia, October 10-13, 2017; 8 pages, 4 figures.
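The core of the (unsupervised) Neural Gas model is a rank-based prototype update: on each sample, every prototype moves toward it with a strength that decays with the prototype's distance rank. A minimal NumPy sketch on synthetic two-cluster data (standing in for two object classes; all parameters and the annealing schedule are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: two well-separated clusters.
data = np.vstack([rng.normal(0.0, 0.1, (200, 2)),
                  rng.normal(1.0, 0.1, (200, 2))])

n_units, n_epochs = 4, 20
prototypes = rng.uniform(-0.5, 1.5, (n_units, 2))

for epoch in range(n_epochs):
    frac = epoch / n_epochs
    eps = 0.5 * (0.01 / 0.5) ** frac      # decaying learning rate
    lam = 2.0 * (0.1 / 2.0) ** frac       # shrinking neighbourhood range
    for x in rng.permutation(data):
        dists = np.linalg.norm(prototypes - x, axis=1)
        ranks = np.argsort(np.argsort(dists))     # rank 0 = closest unit
        h = np.exp(-ranks / lam)                  # rank-based neighbourhood
        prototypes += eps * h[:, None] * (x - prototypes)

# After training, the prototypes should sit near the two cluster centres.
print(np.round(prototypes, 2))
```

The double argsort converts raw distances into ranks; this rank-based (rather than distance-based) neighbourhood is what distinguishes Neural Gas from, e.g., a self-organizing map.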
Locking of correlated neural activity to ongoing oscillations
Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity. In these network states a global oscillatory cycle modulates the propensity of neurons to fire. Synchronous activation of neurons has been hypothesized to be a separate channel of information processing in the brain. A salient question is therefore whether and how oscillations interact with spike synchrony, and to what extent these channels can be considered separate. Experiments have indeed shown that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations tightly lock to an oscillatory cycle. Here we demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically driven random networks. We identify the mechanisms by which covariances depend on a driving periodic stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single-unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even if mean activities show non-resonant behavior. Our analytical results open the question of time-modulated synchronous activity to a quantitative analysis.

Comment: 57 pages, 12 figures, published version.
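The basic setting, a binary recurrent network whose cyclostationary mean activity locks to a periodic drive, can be illustrated with a much-simplified simulation. This sketch uses homogeneous all-to-all inhibitory coupling, a sigmoidal activation probability, and a sinusoidal stimulus; all parameters are illustrative and the model is a cartoon of the paper's setup, not its mean-field theory.

```python
import numpy as np

rng = np.random.default_rng(1)

N, T, period = 200, 4000, 100       # neurons, time steps, drive period
J = -0.3 / N                        # effectively inhibitory recurrent coupling
amp, beta = 0.5, 2.0                # drive amplitude, noise "inverse temperature"

state = rng.integers(0, 2, N)
phase_mean = np.zeros(period)       # cyclostationary mean activity per phase
counts = np.zeros(period)

for t in range(T):
    drive = amp * np.sin(2 * np.pi * t / period)
    h = J * state.sum() + drive                   # feedback + periodic stimulus
    p_on = 1.0 / (1.0 + np.exp(-beta * h))        # probability of being active
    state = (rng.random(N) < p_on).astype(int)    # parallel stochastic update
    phase_mean[t % period] += state.mean()
    counts[t % period] += 1

phase_mean /= counts
# The population activity locks to the drive: high near the peak of the
# sine, low near its trough, i.e. the modulation depth is clearly nonzero.
print(phase_mean.max() - phase_mean.min())
```

Averaging the population activity by phase of the drive (rather than by absolute time) is what makes the cyclostationary mean activity visible; the paper's analysis additionally derives the phase-dependent pairwise covariances, which this sketch omits.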
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential in terms of supporting a broad range of complex and compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.

Comment: 46 pages, 22 figures.
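Of the four ML families the article surveys, reinforcement learning maps most directly onto wireless decision making. A minimal, hypothetical example (not from the article): an agent learns which of three channels to transmit on, where each channel succeeds with a different unknown probability; this is the stateless (bandit) special case of Q-learning.

```python
import random

random.seed(0)

# Hypothetical channel-selection problem: each channel succeeds (reward 1)
# with a different unknown probability; the agent must discover the best one.
success_prob = [0.2, 0.5, 0.9]
n_channels = len(success_prob)

q = [0.0] * n_channels
alpha, epsilon = 0.1, 0.1            # learning rate, exploration rate

for _ in range(5000):
    if random.random() < epsilon:    # explore a random channel
        a = random.randrange(n_channels)
    else:                            # exploit the current best estimate
        a = max(range(n_channels), key=lambda i: q[i])
    r = 1.0 if random.random() < success_prob[a] else 0.0
    q[a] += alpha * (r - q[a])       # stateless Q-update (bandit case)

# The frequently exploited channel's estimate approaches its success rate.
print([round(v, 2) for v in q])
best = max(range(n_channels), key=lambda i: q[i])
print(best)                          # the agent settles on the best channel
```

The epsilon-greedy rule balances trying out channels against using what has been learned; full Q-learning adds a state (e.g., observed interference level) and a discounted bootstrap term to the update.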
Damage identification in structural health monitoring: a brief review from its implementation to the Use of data-driven applications
The damage identification process provides relevant information about the current state of a structure under inspection, and it can be approached from two different points of view. The first approach uses data-driven algorithms, which are usually associated with the collection of data using sensors. Data are subsequently processed and analyzed. The second approach uses models to analyze information about the structure. In the latter case, the overall performance of the approach is associated with the accuracy of the model and the information that is used to define it. Although both approaches are widely used, data-driven algorithms are preferred in most cases because they afford the ability to analyze data acquired from sensors and to provide a real-time solution for decision making; however, these approaches require high-performance processors due to their high computational cost. As a contribution to researchers working with data-driven algorithms and applications, this work presents a brief review of data-driven algorithms for damage identification in structural health-monitoring applications. This review covers damage detection, localization, classification, extension, and prognosis, as well as the development of smart structures. The literature is systematically reviewed according to the natural steps of a structural health-monitoring system. This review also includes information on the types of sensors used as well as on the development of data-driven algorithms for damage identification.
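A common data-driven damage-detection scheme of the kind such reviews cover is subspace-based novelty detection: fit a principal subspace to sensor data from the healthy structure, then flag readings whose residual outside that subspace grows. A self-contained sketch on synthetic data (the three-channel model and all thresholds are illustrative assumptions, not from the review):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sensor readings: 3 correlated channels driven by one structural
# mode; "damage" decorrelates the third channel.
def readings(n, damaged=False):
    mode = rng.normal(size=(n, 1))
    X = mode @ np.array([[1.0, 0.8, 0.6]]) + 0.05 * rng.normal(size=(n, 3))
    if damaged:
        X[:, 2] += 0.5 * rng.normal(size=n)
    return X

train = readings(500)                      # baseline data, healthy structure
mean = train.mean(axis=0)
# Principal subspace of the healthy data (top component via SVD).
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
P = Vt[:1]                                 # 1 x 3 projection basis

def novelty(X):
    """Mean squared residual after projecting onto the healthy subspace."""
    C = X - mean
    R = C - C @ P.T @ P
    return float((R ** 2).sum(axis=1).mean())

healthy_score = novelty(readings(200))
damaged_score = novelty(readings(200, damaged=True))
print(healthy_score, damaged_score)        # damage raises the novelty score
```

In practice the novelty score would be thresholded against its baseline distribution; localization and prognosis, which the review also covers, require richer models than this detection-only sketch.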
Toward single particle reconstruction without particle picking: Breaking the detection limit
Single-particle cryo-electron microscopy (cryo-EM) has recently joined X-ray crystallography and NMR spectroscopy as a high-resolution structural method for biological macromolecules. In a cryo-EM experiment, the microscope produces images called micrographs. Projections of the molecule of interest are embedded in the micrographs at unknown locations, and under unknown viewing directions. Standard imaging techniques first locate these projections (detection) and then reconstruct the 3-D structure from them. Unfortunately, high noise levels hinder detection. When reliable detection is rendered impossible, the standard techniques fail. This is a problem especially for small molecules, which can be particularly hard to detect. In this paper, we propose a radically different approach: we contend that the structure could, in principle, be reconstructed directly from the micrographs, without intermediate detection. As a result, even small molecules should be within reach for cryo-EM. To support this claim, we set up a simplified mathematical model and demonstrate how our autocorrelation analysis technique allows us to go directly from the micrographs to the sought signals. This involves only one pass over the micrographs, which is desirable for large experiments. We show numerical results and discuss challenges that lie ahead to turn this proof of concept into a competitive alternative to state-of-the-art algorithms.
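The key observation behind detection-free reconstruction can be seen in one dimension: autocorrelations of a long, noisy micrograph containing many well-separated copies of a short signal estimate the signal's own autocorrelations, without ever locating the copies. A toy illustration of that principle (a 1-D caricature with evenly spaced copies, not the paper's model or algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

L, n_copies, length, sigma = 5, 2000, 200000, 0.5
signal = rng.normal(size=L)                 # the unknown 1-D "molecule"

# A long 1-D micrograph: well-separated copies of the signal plus noise.
micrograph = np.zeros(length)
positions = np.arange(n_copies) * (length // n_copies)   # spacing >> L
for p in positions:
    micrograph[p:p + L] += signal
micrograph += sigma * rng.normal(size=length)

# Empirical micrograph autocorrelations at lags 1..L-1, in one pass over
# the data (lag 0 is biased by the noise variance, so it is skipped here).
est = np.array([(micrograph[:-k] * micrograph[k:]).mean()
                for k in range(1, L)])
# Prediction: copy density times the signal's own autocorrelations.
true = np.array([(signal[:-k] * signal[k:]).sum()
                 for k in range(1, L)]) * n_copies / length

print(np.round(est, 4))
print(np.round(true, 4))                    # the two rows should be close
```

Because the zero-mean noise averages out of lagged products, the estimates match the prediction despite a low signal-to-noise ratio; recovering the signal itself from such autocorrelations is the harder inverse problem the paper addresses.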
Photometric redshift estimation based on data mining with PhotoRApToR
Photometric redshifts (photo-z) are crucial to the scientific exploitation of modern panchromatic digital surveys. In this paper we present PhotoRApToR (Photometric Research Application To Redshift): a Java/C++ based desktop application capable of solving non-linear regression and multivariate classification problems, specialized in particular for photo-z estimation. It embeds a machine learning algorithm, namely a multilayer neural network trained by the Quasi-Newton learning rule, and special tools dedicated to pre- and post-processing of data. PhotoRApToR has been successfully tested on several scientific cases. The application is available for free download from the DAME Program web site.

Comment: To appear in Experimental Astronomy, Springer; 20 pages, 15 figures.
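The underlying regression task, a multilayer perceptron trained with a quasi-Newton rule mapping photometry to redshift, can be sketched with off-the-shelf tools. This toy example uses scikit-learn's MLPRegressor with the L-BFGS solver on a synthetic catalogue whose colours encode redshift; it is only in the spirit of the application described, not its actual code or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic photometric catalogue: 5 magnitudes whose colours grow with z.
n = 2000
z = rng.uniform(0.0, 2.0, n)                        # "true" redshifts
base = rng.normal(20.0, 1.0, n)
mags = np.stack([base + c * z + 0.05 * rng.normal(size=n)
                 for c in (0.0, 0.3, 0.6, 0.9, 1.2)], axis=1)
colors = mags[:, 1:] - mags[:, :1]                  # standard photo-z features

# Multilayer perceptron trained with a quasi-Newton rule (L-BFGS).
model = MLPRegressor(hidden_layer_sizes=(32,), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(colors[:1500], z[:1500])                  # training set

pred = model.predict(colors[1500:])                 # held-out test set
rmse = float(np.sqrt(np.mean((pred - z[1500:]) ** 2)))
print(round(rmse, 3))                               # small residual scatter
```

Using colours (magnitude differences) rather than raw magnitudes as features removes the overall brightness offset, a standard preprocessing step in empirical photo-z estimation.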