Learning Local Feature Aggregation Functions with Backpropagation
This paper introduces a family of local feature aggregation functions and a
novel method to estimate their parameters, such that they generate optimal
representations for classification (or any task that can be expressed as a cost
function minimization problem). To achieve that, we compose the local feature
aggregation function with the classifier cost function and we backpropagate the
gradient of this cost function in order to update the local feature aggregation
function parameters. Experiments on synthetic datasets indicate that our method
discovers parameters that model the class-relevant information in addition to
the local feature space. Further experiments on a variety of motion and visual
descriptors, both on image and video datasets, show that our method outperforms
other state-of-the-art local feature aggregation functions, such as Bag of
Words, Fisher Vectors, and VLAD, by a large margin.
Comment: In Proceedings of the 25th European Signal Processing Conference (EUSIPCO 2017).
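The core idea of the abstract above, composing a differentiable aggregation function with a classifier loss and backpropagating through both, can be sketched as follows. This is a minimal illustration under our own assumptions: the aggregation here is a hypothetical attention-style weighted average with a learnable vector `u`, not the paper's actual aggregation family, and the toy bag-of-features data is invented for the demo.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(X, u, w, b):
    a = softmax(X @ u)        # per-feature aggregation weights (learnable)
    pooled = X.T @ a          # aggregated representation of the bag
    p = 1.0 / (1.0 + np.exp(-(w @ pooled + b)))  # logistic classifier
    return a, pooled, p

def backward(X, y, u, w, b):
    """Backpropagate the classifier loss into the aggregation parameters u."""
    a, pooled, p = forward(X, u, w, b)
    ds = p - y                           # d loss / d logit (binary cross-entropy)
    dw, db = ds * pooled, ds
    g = ds * (X @ w)                     # d loss / d a
    dz = a * (g - a @ g)                 # softmax backward
    du = X.T @ dz                        # gradient into the aggregator
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return du, dw, db, loss

# Toy bags of local features: positive bags contain one strong feature
# along the first axis; negative bags are pure noise.
rng = np.random.default_rng(0)
d, n = 4, 6
bags, labels = [], []
for k in range(40):
    B = rng.normal(scale=0.5, size=(n, d))
    y = k % 2
    if y:
        B[0] = [3.0, 0.0, 0.0, 0.0]
    bags.append(B)
    labels.append(y)

u = rng.normal(size=d) * 0.1
w = rng.normal(size=d) * 0.1
b = 0.0
losses = []
for epoch in range(200):
    total = 0.0
    for B, y in zip(bags, labels):
        du, dw, db, loss = backward(B, y, u, w, b)
        u -= 0.1 * du
        w -= 0.1 * dw
        b -= 0.1 * db
        total += loss
    losses.append(total / len(bags))
```

Because the aggregation weights depend on the learnable `u`, the gradient steps adapt the pooling itself to the classification task, which is the mechanism the abstract describes.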
Second-order Democratic Aggregation
Aggregated second-order features extracted from deep convolutional networks
have been shown to be effective for texture generation, fine-grained
recognition, material classification, and scene understanding. In this paper,
we study a class of orderless aggregation functions designed to minimize
interference or equalize contributions in the context of second-order features
and we show that they can be computed just as efficiently as their first-order
counterparts and they have favorable properties over aggregation by summation.
Another line of work has shown that matrix power normalization after
aggregation can significantly improve the generalization of second-order
representations. We show that matrix power normalization implicitly equalizes
contributions during aggregation thus establishing a connection between matrix
normalization techniques and prior work on minimizing interference. Based on
the analysis, we present γ-democratic aggregators that interpolate
between sum pooling (γ=1) and democratic pooling (γ=0), outperforming both
on several classification tasks. Moreover, unlike power normalization, the
γ-democratic aggregations can be computed in a low-dimensional space via
sketching, which allows the use of very high-dimensional second-order
features. This results in state-of-the-art performance on several datasets.
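The pieces discussed above can be sketched in a few lines: sum pooling of second-order (outer-product) features, a Sinkhorn-style iteration that equalizes per-feature contributions (democratic pooling, the γ=0 case), and matrix power normalization on the eigenvalues. This is a simplified illustration under our own assumptions, not the paper's exact γ-democratic solver.

```python
import numpy as np

def sum_pool_second_order(X):
    """Sum of outer products x_i x_i^T over local features X of shape (n, d)."""
    return X.T @ X

def democratic_weights(X, n_iter=100):
    """Multiplicative updates driving each contribution a_i * (K a)_i
    toward a common value, where K is the match-kernel matrix."""
    K = X @ X.T
    a = np.ones(len(X))
    for _ in range(n_iter):
        c = a * (K @ a)                      # current per-feature contributions
        a = a / np.sqrt(np.clip(c, 1e-12, None))
    return a

def matrix_power_normalize(A, p=0.5):
    """Raise the eigenvalues of a symmetric PSD matrix to the power p."""
    vals, vecs = np.linalg.eigh(A)
    vals = np.clip(vals, 0.0, None)
    return (vecs * vals**p) @ vecs.T

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(8, 5)))          # nonnegative toy local features

K = X @ X.T
c_before = np.ones(8) * (K @ np.ones(8))     # contributions under plain sum pooling
a = democratic_weights(X)
c_after = a * (K @ a)                        # contributions after equalization
A_sum = sum_pool_second_order(X)
A_sqrt = matrix_power_normalize(A_sum, 0.5)  # matrix square root of the pooled matrix
```

The equalization step is what the abstract connects to matrix power normalization: both shrink the influence of frequently matching (bursty) features relative to plain summation.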
Fuzzy set methods for object recognition in space applications
Progress on the following tasks is reported: (1) fuzzy set-based decision-making methodologies; (2) feature calculation; (3) clustering for curve and surface fitting; and (4) acquisition of images. The general structure of networks based on fuzzy set connectives, which are being used for information fusion and decision making in space applications, is described, along with the structure and training techniques for such networks consisting of generalized means and gamma-operators. The use of other hybrid operators in multicriteria decision making is currently being examined. Numerous classical features on image regions, such as gray-level statistics, edge and curve primitives, texture measures from the co-occurrence matrix, and size and shape parameters, were implemented. Several fractal geometric features that may have a considerable impact on characterizing cluttered backgrounds, such as clouds, dense star patterns, or some planetary surfaces, were also used. A new approach to a fuzzy c-shells algorithm is addressed. NASA personnel are in the process of acquiring suitable simulation data and, it is hoped, videotaped shuttle imagery. Photographs have been digitized for use in the algorithms. In addition, a model of the shuttle was assembled, and a mechanism to orient this model in 3-D and digitize it for pose-estimation experiments is being constructed.
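Two of the connectives named above can be written down compactly: the weighted generalized mean, and the Zimmermann-Zysno gamma-operator, which blends a product (intersection-like) term with an algebraic-sum (union-like) term. This is a sketch of the standard textbook forms of these operators, not of the specific trained networks the report describes.

```python
import numpy as np

def generalized_mean(x, p, w=None):
    """Weighted generalized mean: (sum w_i x_i^p)^(1/p).
    p=1 gives the arithmetic mean, p->0 the geometric mean,
    and large |p| approaches max/min."""
    x = np.asarray(x, dtype=float)
    w = np.full(x.size, 1.0 / x.size) if w is None else np.asarray(w, float)
    if p == 0:
        return float(np.exp(np.sum(w * np.log(x))))   # geometric-mean limit
    return float(np.sum(w * x**p) ** (1.0 / p))

def gamma_operator(x, gamma):
    """Zimmermann-Zysno hybrid connective:
    (prod x_i)^(1-gamma) * (1 - prod(1 - x_i))^gamma."""
    x = np.asarray(x, dtype=float)
    t = np.prod(x)               # conjunctive (product) part
    s = 1.0 - np.prod(1.0 - x)   # disjunctive (algebraic sum) part
    return float(t ** (1.0 - gamma) * s ** gamma)
```

At gamma=0 the operator reduces to the product and at gamma=1 to the algebraic sum, so training gamma (and the mean's exponent p) tunes how strictly criteria must agree, which is the role such connectives play in the fusion networks described above.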
Modeling Financial Time Series with Artificial Neural Networks
Financial time series convey the decisions and actions of a population of human actors over time. Econometric and regressive models have been developed over the past decades for analyzing these time series. More recently, biologically inspired artificial neural network models have been shown to overcome some of the main challenges of traditional techniques by better exploiting the non-linear, non-stationary, and oscillatory nature of noisy, chaotic human interactions. This review paper explores the options, benefits, and weaknesses of the various forms of artificial neural networks as compared with regression techniques in the field of financial time series analysis.
CELEST, a National Science Foundation Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001).
Comparative performance of some popular ANN algorithms on benchmark and function approximation problems
We report an inter-comparison of some popular algorithms within the
artificial neural network domain (viz., local search algorithms, global
search algorithms, higher-order algorithms, and hybrid algorithms) by
applying them to standard benchmarking problems such as the IRIS data,
XOR/N-bit parity, and Two Spiral. Apart from giving a brief description of
these algorithms, the results obtained for the above benchmark problems are
presented in the paper.
The results suggest that while the Levenberg-Marquardt algorithm yields the
lowest RMS error for the N-bit parity and Two Spiral problems, the Higher
Order Neurons algorithm gives the best results for the IRIS data problem.
The best results for the XOR problem are obtained with the Neuro Fuzzy
algorithm. The above algorithms were also applied to several regression
problems, such as cos(x), and a few special functions, such as the Gamma
function, the complementary Error function, and the upper tail cumulative
-distribution function. The results of these regression problems indicate
that, among all the ANN algorithms used in the present study, the
Levenberg-Marquardt algorithm yields the best results. Keeping in view the
highly non-linear behaviour and the wide dynamic range of these functions,
it is suggested that they can also be considered standard benchmark
problems for function approximation using artificial neural networks.
Comment: 18 pages, 5 figures. Accepted in Pramana - Journal of Physics.
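The XOR benchmark mentioned above is small enough to reproduce in a few lines. The sketch below trains a tiny multilayer perceptron with plain gradient-descent backpropagation; the paper compares Levenberg-Marquardt and other trainers, so this is only an illustration of the benchmark setup, not of the algorithms under comparison.

```python
import numpy as np

# XOR truth table: output is 1 iff exactly one input is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 4))   # 2 inputs -> 4 hidden units
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # 4 hidden -> 1 output
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.5
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)           # hidden layer
    out = sigmoid(h @ W2 + b2)         # output layer
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation of the mean-squared-error loss.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h**2)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(0)
```

XOR is a standard test precisely because it is not linearly separable, so any network that solves it must exploit its hidden layer.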