Attractor neural networks storing multiple space representations: a model for hippocampal place fields
A recurrent neural network model storing multiple spatial maps, or
``charts'', is analyzed. A network of this type has been suggested as a model
for the origin of place cells in the hippocampus of rodents. The extremely
diluted and fully connected limits are studied, and the storage capacity and
the information capacity are found. The important parameters determining the
performance of the network are the sparsity of the spatial representations and
the degree of connectivity, as found already for the storage of individual
memory patterns in the general theory of auto-associative networks. Such
results suggest a quantitative parallel between theories of hippocampal
function in different animal species, such as primates (episodic memory) and
rodents (memory for space).
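To make the stored-chart construction concrete, here is a rough numpy sketch rather than the paper's exact equations: recurrent weights are built as a sum of Hebbian (covariance) terms over several independent spatial maps, and a noisy cue from one map relaxes toward a bump of activity in that map. The place-field width, the threshold-linear update and the normalization step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400          # neurons
n_charts = 3     # independent spatial maps ("charts")
n_pos = 100      # discretised positions along a 1-D periodic track
width = 0.05     # place-field width (fraction of track length)

# Each chart assigns every neuron a random preferred position.
prefs = rng.random((n_charts, N))

def activity(chart, x):
    """Sparse place-field activity of all neurons at position x in a chart."""
    d = np.abs(prefs[chart] - x)
    d = np.minimum(d, 1 - d)              # periodic boundary
    return np.exp(-(d / width) ** 2)

# Hebbian-style weights: sum of covariance terms over charts and positions.
W = np.zeros((N, N))
xs = np.linspace(0, 1, n_pos, endpoint=False)
for c in range(n_charts):
    for x in xs:
        v = activity(c, x)
        W += np.outer(v - v.mean(), v - v.mean())
W /= n_charts * n_pos
np.fill_diagonal(W, 0.0)

# Cue the network with a noisy pattern from chart 0 and let it relax.
state = activity(0, 0.3) + 0.3 * rng.standard_normal(N)
for _ in range(30):
    state = np.maximum(W @ state, 0.0)         # simple threshold-linear update
    state /= np.linalg.norm(state) + 1e-12     # keep overall activity bounded

# Overlap with stored patterns: the retrieved bump should align with chart 0.
overlaps = [max(np.dot(state, activity(c, x)) for x in xs) for c in range(n_charts)]
print("best overlap per chart:", np.round(overlaps, 3))
```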
Radars for Autonomous Driving: A Review of Deep Learning Methods and Challenges
Radar is a key component of the suite of perception sensors used for safe and
reliable navigation of autonomous vehicles. Its unique capabilities include
high-resolution velocity imaging, detection of agents in occlusion and over
long ranges, and robust performance in adverse weather conditions. However, the
usage of radar data presents some challenges: it is characterized by low
resolution, sparsity, clutter, high uncertainty, and lack of good datasets.
These challenges have limited radar deep learning research. As a result,
current radar models are often influenced by lidar and vision models, which are
focused on optical features that are relatively weak in radar data, thus
resulting in under-utilization of radar's capabilities and diminishing its
contribution to autonomous perception. This review seeks to encourage further
deep learning research on autonomous radar data by 1) identifying key research
themes, and 2) offering a comprehensive overview of current opportunities and
challenges in the field. Topics covered include early and late fusion,
occupancy flow estimation, uncertainty modeling, and multipath detection. The
paper also discusses radar fundamentals and data representation, presents a
curated list of recent radar datasets, and reviews state-of-the-art lidar and
vision models relevant for radar research. For a summary of the paper and more
results, visit the website: autonomous-radars.github.io
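Among the topics listed above, early versus late fusion is simple to illustrate in isolation. The sketch below is a hypothetical example using plain numpy in place of real radar and camera networks: early fusion concatenates sensor features before a shared head, while late fusion merges per-sensor outputs afterwards. The feature dimensions, random linear heads and averaging rule are assumptions, not part of the review.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-frame features (dimensions are illustrative only).
radar_feat = rng.standard_normal(64)    # e.g. pooled range-Doppler features
camera_feat = rng.standard_normal(128)  # e.g. pooled image features

def linear_head(x, n_out, seed):
    """Stand-in for a learned model: a fixed random linear layer."""
    w = np.random.default_rng(seed).standard_normal((n_out, x.size))
    return w @ x

# Early fusion: concatenate sensor features, then apply one shared head.
early_scores = linear_head(np.concatenate([radar_feat, camera_feat]), 10, seed=2)

# Late fusion: per-sensor heads produce scores that are merged afterwards.
radar_scores = linear_head(radar_feat, 10, seed=3)
camera_scores = linear_head(camera_feat, 10, seed=4)
late_scores = 0.5 * radar_scores + 0.5 * camera_scores   # simple averaging

print("early fusion scores:", np.round(early_scores[:3], 2))
print("late fusion scores: ", np.round(late_scores[:3], 2))
```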
A survey of outlier detection methodologies
Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise due to mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can also identify errors and remove their contaminating effect on the data set, thereby purifying the data for processing. The original outlier detection methods were arbitrary, but now principled and systematic techniques are used, drawn from the full gamut of Computer Science and Statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
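As a minimal example of the kind of principled statistical technique such surveys cover, the sketch below flags observations that lie more than three standard deviations from the sample mean; the data and the 3-sigma threshold are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(50.0, 5.0, 200),   # normal observations
                       np.array([120.0, -15.0])])    # injected anomalies

z = np.abs(data - data.mean()) / data.std()
outliers = data[z > 3.0]          # 3-sigma rule; the threshold is a convention
print("flagged outliers:", outliers)
```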
Deep Active Learning Explored Across Diverse Label Spaces
Deep learning architectures have been widely explored in computer vision and have
depicted commendable performance in a variety of applications. A fundamental challenge
in training deep networks is the requirement of large amounts of labeled training
data. While gathering large quantities of unlabeled data is cheap and easy, annotating
the data is an expensive process in terms of time, labor and human expertise.
Thus, developing algorithms that minimize the human effort in training deep models
is of immense practical importance. Active learning algorithms automatically identify
salient and exemplar samples from large amounts of unlabeled data and can provide
maximal information to supervised learning models, thereby reducing the human annotation
effort in training machine learning models. The goal of this dissertation is to
fuse ideas from deep learning and active learning and design novel deep active learning
algorithms. The proposed learning methodologies explore diverse label spaces to
solve different computer vision applications. Three major contributions have emerged
from this work: (i) a deep active framework for multi-class image classification, (ii)
a deep active model with and without label correlation for multi-label image
classification and (iii) a deep active paradigm for regression. Extensive empirical studies
on a variety of multi-class, multi-label and regression vision datasets corroborate the
potential of the proposed methods for real-world applications. Additional contributions
include: (i) a multimodal emotion database consisting of recordings of facial
expressions, body gestures, vocal expressions and physiological signals of actors enacting
various emotions, (ii) four multimodal deep belief network models and (iii)
an in-depth analysis of the effect of transfer of multimodal emotion features between
source and target networks on classification accuracy and training time. These related
contributions help comprehend the challenges involved in training deep learning
models and motivate the main goal of this dissertation.
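A minimal sketch of the pool-based active learning loop that this line of work builds on, using uncertainty (least-confidence) sampling with a scikit-learn logistic regression as a stand-in for a deep model; the data set, seed-set size and number of query rounds are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy pool-based active learning with uncertainty sampling (illustrative only;
# the dissertation's methods operate on deep models and richer label spaces).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Seed the labeled set with a few examples from each class.
labeled = list(np.concatenate([np.where(y == c)[0][:5] for c in np.unique(y)]))
pool = [i for i in range(len(X)) if i not in labeled]

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])
    uncertainty = 1.0 - probs.max(axis=1)          # least-confidence criterion
    query = pool[int(np.argmax(uncertainty))]      # ask the oracle to label it
    labeled.append(query)
    pool.remove(query)
    print(f"round {round_}: labeled={len(labeled)}, accuracy={clf.score(X, y):.3f}")
```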
Associative learning on imbalanced environments: An empirical study
Associative memories have emerged as a powerful computational neural network model for several pattern classification problems. Like most traditional classifiers, these models assume that the classes share similar prior probabilities. However, in many real-life applications the ratios of prior probabilities between classes are extremely skewed. Although the literature has provided numerous studies that examine the performance degradation of renowned classifiers on different imbalanced scenarios, this effect has not yet been examined in a thorough empirical study in the context of associative memories. In this paper, we focus on the applicability of associative neural networks to the classification of imbalanced data. The key questions addressed here are whether these models perform better, the same or worse than other popular classifiers, how the level of imbalance affects their performance, and whether distinct resampling strategies produce a different impact on the associative memories. In order to answer these questions and gain further insight into the feasibility and efficiency of the associative memories, a large-scale experimental evaluation with 31 databases, seven classification models and four resampling algorithms is carried out here, along with a non-parametric statistical test to discover any significant differences between each pair of classifiers.
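One of the resampling strategies that studies like this compare is random oversampling of the minority class. Below is a minimal sketch under the assumption of a synthetic two-class numeric data set (the paper itself evaluates 31 databases and four resampling algorithms):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced data set: 200 majority samples, 20 minority samples.
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 200 + [1] * 20)

# Random oversampling: duplicate minority samples until the classes balance.
minority = np.where(y == 1)[0]
extra = rng.choice(minority, size=200 - len(minority), replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])

print("class counts before:", np.bincount(y), "after:", np.bincount(y_bal))
```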
PARSNIP: A Connectionist Network that Learns Natural Language Grammar from Exposure to Natural Language Sentences
Linguists have pointed out that exposure to language is probably not sufficient for a general, domain-independent learning mechanism to acquire natural language grammar. This "poverty of the stimulus" argument has prompted linguists to invoke a large innate component in language acquisition as well as to discourage views of a general learning device (GLD) for language acquisition. We describe a connectionist non-supervised learning model (PARSNIP) that "learns" on the basis of exposure to natural language sentences from a million-word machine-readable text corpus (the Brown corpus). PARSNIP, an auto-associator, was shown three separate samples consisting of 10, 100 or 1000 syntactically tagged sentences, each 15 words or less. The network learned to produce correct syntactic category labels corresponding to each position of the sentence originally presented to it, and it was able to generalize to another 1000 sentences which were distinct from all three training samples. PARSNIP does sentence completion on sentence fragments, prefers syntactically correct sentences, and also recognizes novel sentence patterns absent from the presented corpus. One interesting parallel between PARSNIP and human language users is the fact that PARSNIP correctly reproduces test sentences reflecting one-level-deep center-embedded patterns which it has never seen before while failing to reproduce multiply center-embedded patterns.
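A rough sketch of the auto-association idea behind PARSNIP, not the original architecture: sentences are encoded as concatenated one-hot syntactic tags, a linear auto-associator is fit by least squares, and a fragment is completed by reading back the reconstruction. The toy tag set and sentence length are illustrative assumptions.

```python
import numpy as np

tags = ["DET", "ADJ", "NOUN", "VERB", "PREP"]   # toy tag set; PARSNIP used Brown-corpus tags
max_len = 6                                      # toy sentence length (PARSNIP used 15)

def encode(tag_seq):
    """Concatenate one-hot tag vectors, padding short sentences with zeros."""
    v = np.zeros(max_len * len(tags))
    for pos, t in enumerate(tag_seq):
        v[pos * len(tags) + tags.index(t)] = 1.0
    return v

train = [["DET", "NOUN", "VERB"],
         ["DET", "ADJ", "NOUN", "VERB"],
         ["DET", "NOUN", "VERB", "PREP", "DET", "NOUN"]]
X = np.stack([encode(s) for s in train])

# Linear auto-associator: learn W so that X @ W reconstructs X (least squares).
W = np.linalg.pinv(X) @ X

# Sentence completion: present a fragment and read back the reconstructed tags.
fragment = encode(["DET", "ADJ"])
recon = fragment @ W
for pos in range(max_len):
    scores = recon[pos * len(tags):(pos + 1) * len(tags)]
    if scores.max() > 0.1:
        print(pos, tags[int(np.argmax(scores))])
```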