An Alternate Construction of an Access-Optimal Regenerating Code with Optimal Sub-Packetization Level
Given the scale of today's distributed storage systems, the failure of an
individual node is a common phenomenon. Various metrics have been proposed to
measure the efficacy of the repair of a failed node, such as the amount of data
that must be downloaded for the repair (also known as the repair bandwidth), the amount of
data accessed at the helper nodes, and the number of helper nodes contacted.
Clearly, the amount of data accessed can never be smaller than the repair
bandwidth. In the case of a help-by-transfer code, the amount of data accessed
is equal to the repair bandwidth. It follows that a help-by-transfer code
possessing optimal repair bandwidth is access optimal. The focus of the present
paper is on help-by-transfer codes that employ the minimum possible bandwidth to
repair the systematic nodes and are thus access optimal for the repair of a
systematic node.
The zigzag construction by Tamo et al. in which both systematic and parity
nodes are repaired is access optimal. But the sub-packetization level required
is $r^k$, where $r$ is the number of parities and $k$ is the number of
systematic nodes. To date, the best known achievable sub-packetization level
for access-optimal codes is $r^{k/r}$, in a MISER-code-based construction by
Cadambe et al. in which only the systematic nodes are repaired and where the
location of symbols transmitted by a helper node depends only on the failed
node and is the same for all helper nodes. Under this set-up, it turns out that
this sub-packetization level cannot be improved upon. In the present paper, we
present an alternate construction under the same setup, of an access-optimal
code repairing systematic nodes, that is inspired by the zigzag code
construction and that also achieves a sub-packetization level of $r^{k/r}$.
Comment: To appear in National Conference on Communications 201
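A quick numerical comparison of the two sub-packetization levels mentioned above may be helpful. The following sketch simply evaluates $r^k$ against $r^{k/r}$ for a few illustrative $(r, k)$ pairs, assuming for simplicity that $r$ divides $k$; the parameter choices are not taken from the paper.

```python
# Illustrative only: compare the zigzag-style sub-packetization r**k with the
# access-optimal level r**(k/r) for a few (r, k) pairs.  Assumes r divides k.
def subpacketization_levels(r: int, k: int):
    assert k % r == 0, "this sketch assumes r divides k"
    return r**k, r**(k // r)

for r, k in [(2, 4), (2, 8), (3, 6), (4, 8)]:
    zigzag_level, access_optimal_level = subpacketization_levels(r, k)
    print(f"r={r}, k={k}: r^k = {zigzag_level}, r^(k/r) = {access_optimal_level}")
```

Even at these small parameters the gap is substantial, which is why lowering the sub-packetization level matters in practice.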
Why my photos look sideways or upside down? Detecting Canonical Orientation of Images using Convolutional Neural Networks
Image orientation detection requires high-level scene understanding. Humans
use object recognition and contextual scene information to correctly orient
images. In the literature, the problem of image orientation detection is mostly
addressed using low-level vision features, while some approaches
incorporate a few easily detectable semantic cues to gain minor improvements. The
vast amount of semantic content in images makes orientation detection
challenging, and therefore there is a large semantic gap between existing
methods and human behavior. Also, existing methods in literature report highly
discrepant detection rates, which is mainly due to large differences in
datasets and limited variety of test images used for evaluation. In this work,
for the first time, we leverage the power of deep learning and adapt
pre-trained convolutional neural networks, using the largest training dataset
to date, to the image orientation detection task. An extensive evaluation of
our model on different public datasets shows that it remarkably generalizes to
correctly orient a large set of unconstrained images; it also significantly
outperforms the state-of-the-art and achieves accuracy very close to that of
humans.
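As a rough illustration of the adaptation step described above, the sketch below fine-tunes a pre-trained CNN for four-way orientation classification (0, 90, 180, 270 degrees). The ResNet-18 backbone, optimizer, and layer-freezing strategy are stand-ins chosen for brevity, not the architecture or training setup used in the paper.

```python
# Minimal sketch: adapt a pre-trained CNN to 4-way image orientation
# classification.  Backbone, optimizer and hyper-parameters are illustrative.
import torch
import torch.nn as nn
from torchvision import models

NUM_ORIENTATIONS = 4  # 0, 90, 180, 270 degrees

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_ORIENTATIONS)  # new classifier head

# Freeze the backbone and train only the new head (one common transfer-learning choice).
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (B, 3, 224, 224); labels: orientation class indices in {0..3}."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```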
Mass-Transport Models with Fragmentation and Aggregation
We present a review of nonequilibrium phase transitions in mass-transport
models with kinetic processes like fragmentation, diffusion, aggregation, etc.
These models have been used extensively to study a wide range of physical
problems. We provide a detailed discussion of the analytical and numerical
techniques used to study mass-transport phenomena.
Comment: 29 pages, 4 figure
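For concreteness, one widely studied family of such models combines diffusion of whole masses, aggregation on contact, and chipping (fragmentation of a single unit of mass). The Monte Carlo sketch below implements generic rules of this kind on a one-dimensional ring; the rates and update scheme are illustrative choices, not taken from the review.

```python
# Illustrative Monte Carlo sketch of a 1D mass-transport model with diffusion,
# aggregation and chipping (single-unit fragmentation) on a ring.
import random

L, steps = 100, 100_000
w = 1.0                      # chipping rate relative to whole-mass hopping
mass = [1] * L               # one unit of mass per site initially

for _ in range(steps):
    i = random.randrange(L)
    if mass[i] == 0:
        continue
    j = (i + random.choice((-1, 1))) % L   # random neighbour on the ring
    if random.random() < w / (1.0 + w):
        mass[i] -= 1                       # chip off a single unit ...
        mass[j] += 1                       # ... which aggregates at the target site
    else:
        mass[j] += mass[i]                 # whole stack hops and aggregates
        mass[i] = 0

print("total mass conserved:", sum(mass) == L)
```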
Compact Environment-Invariant Codes for Robust Visual Place Recognition
Robust visual place recognition (VPR) requires scene representations that are
invariant to various environmental challenges such as seasonal changes and
variations due to ambient lighting conditions during day and night. Moreover, a
practical VPR system necessitates compact representations of environmental
features. To satisfy these requirements, in this paper we suggest a
modification to the existing pipeline of VPR systems to incorporate supervised
hashing. The modified system learns (in a supervised setting) compact binary
codes from image feature descriptors. These binary codes become robust to
the visual variations seen during the training phase, thereby making
the system adaptive to severe environmental changes. Also, incorporating
supervised hashing makes VPR computationally more efficient and easy to
implement on simple hardware, because binary embeddings can be learned
over simple-to-compute features and the distance computation takes place in the
low-dimensional Hamming space of binary codes. We have performed experiments on
several challenging data sets covering seasonal, illumination and viewpoint
variations. We also compare two widely used supervised hashing methods,
CCAITQ and MLH, and show that this new pipeline outperforms or closely matches
the state-of-the-art deep learning VPR methods that are based on
high-dimensional features extracted from pre-trained deep convolutional neural
networks.
Comment: Conference on Computer and Robot Vision (CRV) 201
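To make the Hamming-space matching concrete, here is a minimal sketch of the retrieval side of such a pipeline: real-valued image descriptors are projected and binarized into compact codes, and places are matched by Hamming distance. The projection matrix is a random stand-in for what CCAITQ or MLH would learn, and the dimensions are arbitrary.

```python
# Minimal sketch of the matching side of a hashing-based VPR pipeline:
# binarize projected descriptors into compact codes, compare by Hamming distance.
import numpy as np

rng = np.random.default_rng(0)
dim, bits = 4096, 64                     # descriptor size and code length (illustrative)
W = rng.standard_normal((dim, bits))     # stand-in for a learned hashing projection

def encode(descriptors: np.ndarray) -> np.ndarray:
    """Map real-valued descriptors (N, dim) to packed binary codes (N, bits // 8)."""
    signs = (descriptors @ W) > 0
    return np.packbits(signs, axis=1)

def hamming(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise Hamming distances between packed codes a (N, B) and b (M, B)."""
    xor = np.bitwise_xor(a[:, None, :], b[None, :, :])
    return np.unpackbits(xor, axis=2).sum(axis=2)

db = encode(rng.standard_normal((1000, dim)))   # reference traverse
query = encode(rng.standard_normal((1, dim)))   # query image descriptor
best_match = int(hamming(query, db).argmin())   # index of the closest place
```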
Codes With Hierarchical Locality
In this paper, we study the notion of {\em codes with hierarchical locality}
that is identified as another approach to local recovery from multiple
erasures. The well-known class of {\em codes with locality} is said to possess
hierarchical locality with a single level. In a {\em code with two-level
hierarchical locality}, every symbol is protected by an inner-most local code,
and another middle-level code of larger dimension containing the local code. We
first consider codes with two levels of hierarchical locality, derive an upper
bound on the minimum distance, and provide optimal code constructions with low
field size for certain parameter sets. Subsequently, we generalize both the
bound and the constructions to hierarchical locality of arbitrary levels.
Comment: 12 pages, submitted to ISIT 201
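The nesting of local and middle-level codes can be illustrated with a deliberately simple toy example (XOR parities only, nothing like the distance-optimal constructions of the paper): each symbol lies in a small inner local group, and the two groups are jointly covered by a middle-level parity that can step in when an inner repair fails.

```python
# Toy illustration of two-level hierarchical locality: inner local parities
# per group, plus one middle-level parity covering both groups.
from functools import reduce
from operator import xor

data = [3, 7, 1, 4, 9, 2, 8, 5]          # 8 data symbols (integers as a stand-in alphabet)
group_a, group_b = data[:4], data[4:]

p_a = reduce(xor, group_a)               # inner local parity of group A
p_b = reduce(xor, group_b)               # inner local parity of group B
p_mid = reduce(xor, data)                # middle-level parity over both groups

# Single erasure of d0: repaired inside its inner local code (access 4 symbols).
d0_local = p_a ^ data[1] ^ data[2] ^ data[3]
assert d0_local == data[0]

# If d0 and its local parity p_a are both lost, fall back to the middle level,
# which accesses the remaining 7 data symbols.
d0_middle = p_mid ^ reduce(xor, data[1:])
assert d0_middle == data[0]
```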
Tuning density profiles and mobility of inhomogeneous fluids
Density profiles are the most common measure of inhomogeneous structure in
confined fluids, but their connection to transport coefficients is poorly
understood. We explore via simulation how tuning particle-wall interactions to
flatten or enhance the particle layering of a model confined fluid impacts its
self-diffusivity, viscosity, and entropy. Interestingly, interactions that
eliminate particle layering significantly reduce confined fluid mobility,
whereas those that enhance layering can have the opposite effect. Excess
entropy helps to understand and predict these trends.
Comment: 5 pages, 3 figure
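Since the density profile is the central observable here, a minimal sketch of how one is typically estimated from simulation data may be useful: histogram the particle coordinates along the confinement direction and normalize by the bin volume. The slit geometry, bin count, and synthetic coordinates below are illustrative, not taken from the paper.

```python
# Minimal sketch: estimate the density profile rho(z) of a slit-confined fluid
# from particle z-coordinates.
import numpy as np

def density_profile(z, box_xy_area, z_min, z_max, n_bins=100):
    """Histogram particle z-coordinates into a number-density profile rho(z)."""
    counts, edges = np.histogram(z, bins=n_bins, range=(z_min, z_max))
    bin_volume = box_xy_area * (edges[1] - edges[0])
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, counts / bin_volume

# Example with synthetic coordinates (a real study would average over many frames).
rng = np.random.default_rng(1)
z = rng.uniform(0.0, 5.0, size=10_000)
centers, rho = density_profile(z, box_xy_area=10.0 * 10.0, z_min=0.0, z_max=5.0)
```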
Perturbations of the Kerr spacetime in horizon penetrating coordinates
We derive the Teukolsky equation for perturbations of a Kerr spacetime when
the spacetime metric is written in either ingoing or outgoing Kerr-Schild form.
We also write explicit formulae for setting up the initial data for the
Teukolsky equation in the time domain in terms of a three-metric and an
extrinsic curvature. The motivation for this work is to have in place a
formalism to study the evolution in the ``close limit'' of two recently
proposed solutions to the initial value problem in general relativity that are
based on Kerr-Schild slicings. A perturbative formalism in horizon penetrating
coordinates is also very desirable in connection with numerical relativity
simulations using black hole ``excision''.
Comment: 8 pages, RevTex, 2 figures, final version to appear in CQ
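For reference, the Kerr-Schild form referred to above writes the full metric as the flat metric plus a term built from a scalar $H$ and a vector $\ell_\mu$ that is null with respect to both metrics; the specific $H$ and $\ell_\mu$ for the Kerr solution in ingoing or outgoing form are not reproduced here:
\[
  g_{\mu\nu} \;=\; \eta_{\mu\nu} + 2H\,\ell_\mu \ell_\nu,
  \qquad
  g^{\mu\nu}\ell_\mu\ell_\nu \;=\; \eta^{\mu\nu}\ell_\mu\ell_\nu \;=\; 0 .
\]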
