Learning and generalization of compositional representations of visual scenes
Complex visual scenes composed of multiple objects, each with attributes
such as object name, location, pose, and color, are challenging to describe
for the purpose of training neural networks. Usually, deep learning networks
are trained with supervision from categorical scene descriptions. The common
categorical description of a scene contains the names of individual objects but
lacks information about other attributes. Here, we use distributed
representations of object attributes and vector operations in a vector symbolic
architecture to create a full compositional description of a scene in a
high-dimensional vector. To control the scene composition, we use artificial
images composed of multiple, translated and colored MNIST digits. In contrast
to learning category labels, here we train deep neural networks to output the
full compositional vector description of an input image. The output of the deep
network can then be interpreted by a VSA resonator network to extract the
identity or other properties of individual objects. We evaluate the performance
and generalization properties of the system on randomly generated scenes.
Specifically, we show that the network is able to learn the task and generalize
to unseen digit shapes and scene configurations. However, the
generalization ability of the trained model is limited: for example, when there
is a gap in the training data, such as an object never shown in a particular
image location during training, learning does not automatically fill this gap.
Comment: 10 pages, 6 figures
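The compositional encoding described above can be illustrated with a small sketch. This is not the paper's implementation; the codebooks and the use of bipolar vectors with elementwise-product binding and majority bundling are illustrative assumptions, chosen because they are a common VSA instantiation. The example builds a scene vector from attribute hypervectors and then recovers one object's digit identity by unbinding and codebook cleanup (a simplified stand-in for the resonator network):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def hv():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

# Attribute codebooks (hypothetical names, for illustration only)
digits = {d: hv() for d in range(10)}
colors = {c: hv() for c in ["red", "green", "blue"]}
positions = {p: hv() for p in ["left", "center", "right"]}

def bind(*xs):
    """Binding: elementwise product (self-inverse for bipolar vectors)."""
    return np.prod(np.stack(xs), axis=0)

def bundle(*xs):
    """Bundling: elementwise majority via the sign of the sum."""
    return np.sign(np.sum(np.stack(xs), axis=0))

# Scene: a red '3' on the left and a blue '7' on the right
obj1 = bind(digits[3], colors["red"], positions["left"])
obj2 = bind(digits[7], colors["blue"], positions["right"])
scene = bundle(obj1, obj2)

# Query: which digit is red and on the left?  Unbind the known
# factors, then clean up against the digit codebook by similarity.
probe = bind(scene, colors["red"], positions["left"])
best = max(digits, key=lambda d: digits[d] @ probe)
```

Because binding with a bipolar vector is its own inverse, multiplying the scene by the known color and position vectors leaves (a noisy copy of) the digit vector, and the high dimensionality makes the codebook lookup reliable despite the noise from the other bundled object.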
Hardware-Aware Static Optimization of Hyperdimensional Computations
Binary spatter code (BSC)-based hyperdimensional computing (HDC) is a highly
error-resilient approximate computational paradigm suited for error-prone,
emerging hardware platforms. In BSC HDC, the basic datatype is a hypervector, a
typically large binary vector, where the size of the hypervector has a
significant impact on the fidelity and resource usage of the computation.
Typically, the hypervector size is dynamically tuned to deliver the desired
accuracy; this process is time-consuming and often produces hypervector sizes
that lack accuracy guarantees and perform poorly even when reused on very
similar workloads. We present Heim, a hardware-aware static analysis and
optimization framework for BSC HD computations. Heim analytically derives the
minimum hypervector size that minimizes resource usage and meets the target
accuracy requirement. Heim guarantees the optimized computation converges to
the user-provided accuracy target in expectation, even in the presence of
hardware error. Heim deploys a novel static analysis procedure that unifies
theoretical results from the neuroscience community to systematically optimize
HD computations.
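The size-accuracy tradeoff Heim optimizes can be seen in a toy experiment. This sketch is not Heim's static analysis; it is an assumed, minimal BSC setup (XOR binding, majority bundling, Hamming-distance cleanup) that empirically measures how query accuracy on a key-value bundle grows with hypervector size:

```python
import numpy as np

rng = np.random.default_rng(1)

def query_accuracy(D, n_items=15, trials=50):
    """Empirical accuracy of recovering a bound value from a BSC
    key-value bundle, as a function of hypervector size D."""
    hits = 0
    for _ in range(trials):
        keys = rng.integers(0, 2, size=(n_items, D), dtype=np.uint8)
        vals = rng.integers(0, 2, size=(n_items, D), dtype=np.uint8)
        pairs = keys ^ vals                              # binding: XOR
        bundle = (pairs.sum(axis=0) > n_items / 2).astype(np.uint8)  # majority
        probe = bundle ^ keys[0]                         # unbind key 0
        # nearest-neighbour cleanup over the value codebook
        dists = (probe ^ vals).sum(axis=1)               # Hamming distances
        hits += dists.argmin() == 0
    return hits / trials

# Larger hypervectors give higher query accuracy for a fixed bundle size
accs = {D: query_accuracy(D) for D in (64, 256, 1024, 4096)}
```

Dynamic tuning amounts to sweeping `D` in loops like this until the measured accuracy is acceptable; Heim instead derives the minimum sufficient `D` analytically from the computation and the target accuracy.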
We evaluate Heim against dynamic tuning-based optimization on 25 benchmark
data structures. Given a 99% accuracy requirement, Heim-optimized computations
achieve 99.2%-100.0% median accuracy, up to 49.5% higher than dynamic
tuning-based optimization, while attaining 1.15x-7.14x reductions in
hypervector size compared to HD computations with comparable query
accuracy, and find parametrizations 30.0x-100167.4x faster than dynamic
tuning-based approaches. We also use Heim to systematically evaluate the
performance benefits of using analog CAMs and multiple-bit-per-cell ReRAM over
conventional hardware while maintaining iso-accuracy. For both emerging
technologies, we find usages where the emerging hardware imparts significant
benefits.