15,219 research outputs found
Computing with Capsules
Capsules provide a clean algebraic representation of the state of a computation in higher-order functional and imperative languages. They play the same role as closures or heap- or stack-allocated environments but are much simpler. A capsule is essentially a finite coalgebraic representation of a regular closed lambda-coterm. One can give an operational semantics based on capsules for a higher-order programming language with functional and imperative features, including mutable bindings. Lexical scoping is captured purely algebraically without stacks, heaps, or closures. All operations of interest are typable with simple types, yet the language is Turing complete. Recursive functions are represented directly as capsules without the need for unnatural and untypable fixpoint combinators.
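As an illustrative sketch only (not the paper's formal semantics), the capsule idea of evaluating against a single flat, mutable environment of fresh variables, with recursion expressed as a self-referential binding rather than a fixpoint combinator, can be mocked up in Python. All term constructors and helper names here are invented for the example:

```python
import itertools

_fresh = (f"_v{i}" for i in itertools.count())

def subst(t, x, s):
    """Substitute term s for variable x in term t (fresh names keep this capture-safe)."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "const":
        return t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, s))
    return (tag,) + tuple(subst(c, x, s) if isinstance(c, tuple) else c for c in t[1:])

def ev(t, env):
    """Evaluate term t against the capsule environment env (one flat, mutable map)."""
    tag = t[0]
    if tag in ("const", "lam"):
        return t
    if tag == "var":
        v = ev(env[t[1]], env)
        env[t[1]] = v                            # mutable binding, updated in place
        return v
    if tag == "prim":
        op, a, b = t[1], ev(t[2], env), ev(t[3], env)
        return ("const", a[1] * b[1] if op == "*" else a[1] - b[1])
    if tag == "if0":
        return ev(t[2] if ev(t[1], env)[1] == 0 else t[3], env)
    if tag == "app":                             # beta step: bind a fresh variable
        f, a = ev(t[1], env), ev(t[2], env)      # in env; no closure, stack, or heap
        y = next(_fresh)                         # frame is created
        env[y] = a
        return ev(subst(f[2], f[1], ("var", y)), env)

# Recursion as a self-referential binding -- no fixpoint combinator needed:
env = {"fact": ("lam", "n",
                ("if0", ("var", "n"), ("const", 1),
                 ("prim", "*", ("var", "n"),
                  ("app", ("var", "fact"),
                   ("prim", "-", ("var", "n"), ("const", 1))))))}
result = ev(("app", ("var", "fact"), ("const", 5)), env)   # ("const", 120)
```

The binding `fact` refers to itself directly inside the environment, which is exactly the capsule-style treatment of recursion the abstract describes.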
CapProNet: Deep Feature Learning via Orthogonal Projections onto Capsule Subspaces
In this paper, we formalize the idea behind capsule nets of using a capsule
vector rather than a neuron activation to predict the label of samples. To this
end, we propose to learn a group of capsule subspaces onto which an input
feature vector is projected. Then the lengths of resultant capsules are used to
score the probability of belonging to different classes. We train such a
Capsule Projection Network (CapProNet) by learning an orthogonal projection
matrix for each capsule subspace, and show that each capsule subspace is
updated until it contains input feature vectors corresponding to the associated
class. We will also show that the capsule projection can be viewed as
normalizing the multiple columns of the weight matrix simultaneously to form an
orthogonal basis, which makes it more effective in incorporating novel
components of input features to update capsule representations. In other words,
the capsule projection can be viewed as a multi-dimensional weight
normalization in capsule subspaces, where the conventional weight normalization
is simply a special case of the capsule projection onto 1D lines. Only a
negligible computing overhead is incurred to train the network in
low-dimensional capsule subspaces or through an alternative hyper-power
iteration to estimate the normalization matrix. Experiment results on image
datasets show that the presented model significantly improves the performance
of state-of-the-art ResNet and DenseNet backbones at the same level of
computing and memory expense. The
CapProNet establishes the competitive state-of-the-art performance for the
family of capsule nets by significantly reducing test errors on the benchmark
datasets.
Comment: Liheng Zhang, Marzieh Edraki, Guo-Jun Qi. CapProNet: Deep Feature
Learning via Orthogonal Projections onto Capsule Subspaces, in Proceedings of
the Thirty-second Conference on Neural Information Processing Systems (NIPS
2018), Palais des Congrès de Montréal, Montréal, Canada, December 3-8, 2018
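The scoring rule described above, projecting the feature vector onto each class's learned capsule subspace and taking the length of the projection, can be sketched with NumPy. The shapes, names, and random subspaces here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def capsule_scores(x, subspaces):
    """Score each class as the length of x's orthogonal projection onto the
    capsule subspace spanned by that class's (d, k) weight matrix."""
    scores = []
    for W in subspaces:
        P = W @ np.linalg.inv(W.T @ W) @ W.T   # orthogonal projector onto span(W)
        scores.append(np.linalg.norm(P @ x))
    return np.array(scores)

rng = np.random.default_rng(0)
d, k, n_classes = 64, 4, 10                    # k = 1 recovers plain weight normalization
subspaces = [rng.standard_normal((d, k)) for _ in range(n_classes)]
x = subspaces[3] @ rng.standard_normal(k)      # a feature lying exactly in class 3's subspace
pred = int(np.argmax(capsule_scores(x, subspaces)))   # → 3
```

With k = 1 the projector reduces to w wᵀ/‖w‖², whose projection length is |wᵀx|/‖w‖, which is the conventional weight-normalization special case the abstract mentions.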
VideoCapsuleNet: A Simplified Network for Action Detection
The recent advances in Deep Convolutional Neural Networks (DCNNs) have shown
extremely good results for video human action classification; however, action
detection is still a challenging problem. The current action detection
approaches follow a complex pipeline which involves multiple tasks such as tube
proposals, optical flow, and tube classification. In this work, we present a
more elegant solution for action detection based on the recently developed
capsule network. We propose a 3D capsule network for videos, called
VideoCapsuleNet: a unified network for action detection which can jointly
perform pixel-wise action segmentation along with action classification. The
proposed network is a generalization of capsule network from 2D to 3D, which
takes a sequence of video frames as input. The 3D generalization drastically
increases the number of capsules in the network, making capsule routing
computationally expensive. We introduce capsule-pooling in the convolutional
capsule layer to address this issue, making the voting algorithm tractable.
The routing-by-agreement in the network inherently models the action
representations, and various action characteristics are captured by the
predicted capsules. This inspired us to utilize the capsules for action
localization: the class-specific capsules predicted by the network are used
to determine a pixel-wise localization of actions. The localization is further
improved by parameterized skip connections with the convolutional capsule
layers and the network is trained end-to-end with a classification as well as
localization loss. The proposed network achieves state-of-the-art performance on
multiple action detection datasets including UCF-Sports, J-HMDB, and UCF-101
(24 classes) with an impressive ~20% improvement on UCF-101 and ~15%
improvement on J-HMDB in terms of v-mAP scores.
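A minimal stand-in for the capsule-pooling idea, averaging capsule vectors over a local spatio-temporal block so that routing-by-agreement sees far fewer inputs, might look like this in NumPy. The tensor layout is an assumption for the sketch; the paper pools votes inside convolutional receptive fields:

```python
import numpy as np

def capsule_pooling(capsules, k=2):
    """Mean-pool capsule vectors over non-overlapping k*k*k spatio-temporal blocks.
    capsules: (T, H, W, n_types, dim); returns (T//k, H//k, W//k, n_types, dim)."""
    T, H, W, n, d = capsules.shape
    assert T % k == 0 and H % k == 0 and W % k == 0
    blocks = capsules.reshape(T // k, k, H // k, k, W // k, k, n, d)
    return blocks.mean(axis=(1, 3, 5))          # average within each block

caps = np.random.default_rng(1).standard_normal((8, 16, 16, 4, 8))
pooled = capsule_pooling(caps)                  # shape (4, 8, 8, 4, 8): 8x fewer votes
```

Each pooled capsule then casts a single vote in routing, which is what keeps the 3D generalization tractable in spirit.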
UKC ANSAware Survival Guide (for Modula-3)
The ANSAware platform is a suite of libraries and tools which facilitates the building of distributed applications. The documentation shipped with the release forms little more than a reference manual to the language and does not aid the first-time user. This document provides a simple introduction to distributed systems concepts and, through the use of an example, demonstrates how to build applications with ANSAware.
Efficient and accurate simulations of deformable particles immersed in a fluid using a combined immersed boundary lattice Boltzmann finite element method
The deformation of an initially spherical capsule, freely suspended in simple
shear flow, can be computed analytically in the limit of small deformations [D.
Barthes-Biesel, J. M. Rallison, The Time-Dependent Deformation of a Capsule
Freely Suspended in a Linear Shear Flow, J. Fluid Mech. 113 (1981) 251-267].
Those analytic approximations are used to study the influence of the mesh
tessellation method, the spatial resolution, and the discrete delta function of
the immersed boundary method on the numerical results obtained by a coupled
immersed boundary lattice Boltzmann finite element method. For the description
of the capsule membrane, a finite element method and the Skalak constitutive
model [R. Skalak et al., Strain Energy Function of Red Blood Cell Membranes,
Biophys. J. 13 (1973) 245-264] have been employed. Our primary goal is the
investigation of the presented model for small resolutions to provide a sound
basis for efficient but accurate simulations of multiple deformable particles
immersed in a fluid. We come to the conclusion that details of the membrane
mesh, such as the tessellation method and resolution, play only a minor role. The
hydrodynamic resolution, i.e., the width of the discrete delta function, can
significantly influence the accuracy of the simulations. The discretization of
the delta function introduces an artificial length scale, which effectively
changes the radius and the deformability of the capsule. We discuss
possibilities of reducing the computing time of simulations of deformable
objects immersed in a fluid while maintaining high accuracy.
Comment: 23 pages, 14 figures, 3 tables
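The artificial length scale discussed above comes from the regularized delta kernel that couples the membrane nodes to the fluid grid. A common choice in immersed boundary methods is Peskin's 4-point kernel, shown here as an illustration; the abstract does not state which kernel family the study used:

```python
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point discrete delta function phi(r); its support of 4 grid
    spacings sets the hydrodynamic width discussed above."""
    r = np.abs(np.atleast_1d(np.asarray(r, dtype=float)))
    phi = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    phi[inner] = (3 - 2 * r[inner] + np.sqrt(1 + 4 * r[inner] - 4 * r[inner] ** 2)) / 8
    phi[outer] = (5 - 2 * r[outer] - np.sqrt(-7 + 12 * r[outer] - 4 * r[outer] ** 2)) / 8
    return phi

# The kernel sums to 1 over the grid for any membrane-node offset x in [0, 1):
x = 0.3
offsets = x - np.arange(-1, 3)         # signed distances to the 4 nearest grid points
total = peskin_delta(offsets).sum()    # → 1.0
```

Widening or narrowing this kernel changes how far membrane forces are spread into the fluid, which is exactly the effective change in capsule radius and deformability the abstract describes.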
Making history: intentional capture of future memories
'Lifelogging' technology makes it possible to amass digital data about every aspect of our everyday lives. Instead of focusing on such technical possibilities, here we investigate the way people compose long-term mnemonic representations of their lives. We asked 10 families to create a time capsule, a collection of objects used to trigger remembering in the distant future. Our results show that, contrary to the lifelogging view, people are less interested in exhaustively digitally recording their past than in reconstructing it from carefully selected cues that are often physical objects. Time capsules were highly expressive and personal; many objects were made explicitly for inclusion, though with little annotation. We use these findings to propose principles for designing technology that supports the active reconstruction of our future past.
The Parallel Persistent Memory Model
We consider a parallel computational model that consists of a set of processors,
each with a fast local ephemeral memory of limited size, and sharing a large
persistent memory. The model allows for each processor to fault with bounded
probability, and possibly restart. On faulting all processor state and local
ephemeral memory are lost, but the persistent memory remains. This model is
motivated by upcoming non-volatile memories that are as fast as existing random
access memory, are accessible at the granularity of cache lines, and have the
capability of surviving power outages. It is further motivated by the
observation that in large parallel systems, failure of processors and their
caches is not unusual.
Within the model we develop a framework for developing locality efficient
parallel algorithms that are resilient to failures. There are several
challenges, including the need to recover from failures, the desire to do this
in an asynchronous setting (i.e., not blocking other processors when one
fails), and the need for synchronization primitives that are robust to
failures. We describe approaches to solve these challenges based on breaking
computations into what we call capsules, which have certain properties, and
developing a work-stealing scheduler that functions properly within the context
of failures. The scheduler guarantees an expected time bound expressed in
terms of the work and depth of the computation (in the absence of failures),
the average number of processors available during the computation, and the
probability that a capsule fails. Within the model and using the proposed
methods, we develop efficient algorithms for parallel sorting and other
primitives.
Comment: This paper is the full version of a paper at SPAA 2018 with the same name.
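The capsule idea, a section of code that can safely be re-executed from its start after a fault and that commits to persistent memory only at its boundary, can be illustrated with a toy single-processor simulation. The names and fault model are invented for this sketch; the paper's treatment additionally covers work-stealing and synchronization under concurrent failures:

```python
import random

persistent = {}                  # survives simulated faults
FAULT_PROB = 0.3

def run_capsule(cid, fn, *args):
    """Run fn as a capsule: a fault discards all ephemeral state and restarts
    the capsule from its beginning; the write to persistent memory happens
    only at the capsule boundary, so re-execution is idempotent."""
    while True:
        result = fn(*args)                   # ephemeral computation
        if random.random() < FAULT_PROB:     # simulated fault before the commit
            continue                         # restart the capsule from scratch
        persistent[cid] = result             # commit at the capsule boundary
        return result

random.seed(42)
total = run_capsule("sum", lambda: sum(range(100)))
# total == 4950 and persistent["sum"] == 4950, however many faults occurred
```

Because all externally visible effects happen atomically at the capsule boundary, repeated restarts change only the running time, not the result, which is the property the failure-resilient scheduler relies on.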