Deterministic Sampling and Range Counting in Geometric Data Streams
We present memory-efficient deterministic algorithms for constructing
epsilon-nets and epsilon-approximations of streams of geometric data. Unlike
probabilistic approaches, these deterministic samples provide guaranteed bounds
on their approximation factors. We show how our deterministic samples can be
used to answer approximate online iceberg geometric queries on data streams. We
use these techniques to approximate several robust statistics of geometric data
streams, including Tukey depth, simplicial depth, regression depth, the
Theil-Sen estimator, and the least median of squares. Our algorithms use only a
polylogarithmic amount of memory, provided the desired approximation factors
are inverse-polylogarithmic. We also include a lower bound for non-iceberg
geometric queries.
Comment: 12 pages, 1 figure
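To illustrate the kind of guaranteed-error sample this abstract refers to (a toy sketch of the general idea, not the paper's construction; all names are ours): in one dimension, keeping every k-th element of the sorted input, with k = ceil(eps*n), gives a deterministic sample whose interval-counting error is bounded by roughly eps*n in the worst case, not merely with high probability. A streaming version would maintain such samples via merge-and-reduce; only the static sample is shown here.

```python
# Toy deterministic epsilon-approximation for 1-D interval range
# counting (illustrative only; not the paper's algorithm).
import math

def deterministic_sample(points, eps):
    """Keep every k-th point of the sorted input, k = ceil(eps * n)."""
    pts = sorted(points)
    k = max(1, math.ceil(eps * len(pts)))
    return pts[k - 1 :: k], k      # points at ranks k, 2k, 3k, ...

def approx_range_count(sample, k, a, b):
    """Estimate |P intersect [a, b]| as k times the sample hits."""
    return k * sum(1 for p in sample if a <= p <= b)

# Usage: the error is deterministically at most k - 1 < eps*n + 1,
# since sampled ranks are k apart -- a guarantee a random sample
# provides only in expectation.
points = [i * 0.37 % 1.0 for i in range(1000)]
sample, k = deterministic_sample(points, eps=0.05)
true_count = sum(1 for p in points if 0.2 <= p <= 0.6)
est = approx_range_count(sample, k, 0.2, 0.6)
assert abs(est - true_count) <= 2 * k
```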
Archiving scientific data
We present an archiving technique for hierarchical data with key structure. Our approach is based on the notion of timestamps whereby an element appearing in multiple versions of the database is stored only once along with a compact description of versions in which it appears. The basic idea of timestamping was discovered by Driscoll et al. in the context of persistent data structures, where one wishes to track the sequence of changes made to a data structure. We extend this idea to develop an archiving tool for XML data that is capable of providing meaningful change descriptions and can also efficiently support a variety of basic functions concerning the evolution of data, such as retrieval of any specific version from the archive and querying the temporal history of any element. This is in contrast to diff-based approaches, where such operations may require undoing a large number of changes or significant reasoning with the deltas. Surprisingly, our archiving technique does not incur any significant space overhead when contrasted with other approaches. Our experimental results support this and also show that the compacted archive file interacts well with other compression techniques. Finally, another useful property of our approach is that the resulting archive is also in XML and hence can directly leverage existing XML tools.
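The timestamping idea can be sketched in a few lines (a toy model of ours, not the paper's tool: the real archive keeps compact version descriptions rather than explicit lists and works on hierarchical XML keys). Each (key, value) element is stored once, tagged with the versions in which it occurs, so any version is retrieved directly, with no delta replay:

```python
# Toy timestamp-based archive: each element is stored once with the
# set of versions in which it appears (illustrative names throughout).

def archive(versions):
    """versions: list of dicts mapping element key -> value.
    Returns {(key, value): list of version numbers containing it}."""
    store = {}
    for t, snapshot in enumerate(versions):
        for key, value in snapshot.items():
            store.setdefault((key, value), []).append(t)
    return store

def retrieve(store, t):
    """Reconstruct version t directly -- no undoing of deltas."""
    return {k: v for (k, v), stamps in store.items() if t in stamps}

def history(store, key):
    """Temporal history of one element: (version, value) pairs."""
    return sorted((t, v) for (k, v), stamps in store.items()
                  if k == key for t in stamps)

v0 = {"a": 1, "b": 2}
v1 = {"a": 1, "b": 3}           # b changed
v2 = {"a": 1, "b": 3, "c": 4}   # c added
arc = archive([v0, v1, v2])
assert retrieve(arc, 1) == v1
assert history(arc, "b") == [(0, 2), (1, 3), (2, 3)]
```

Note how the unchanged element ("a", 1) is stored once across all three versions, which is the source of the space savings the abstract describes.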
Web bases for sl(3) are not dual canonical
We compare two natural bases for the invariant space of a tensor product of
irreducible representations of A_2, or sl(3). One basis is the web basis,
defined from a skein theory called the combinatorial A_2 spider. The other
basis is the dual canonical basis, the dual of the basis defined by Lusztig and
Kashiwara. For sl(2) or A_1, the web bases have been discovered many times and
were recently shown to be dual canonical by Frenkel and Khovanov.
We prove that for sl(3), the two bases eventually diverge even though they
agree in many small cases. The first disagreement comes in the invariant space
Inv((V^+ tensor V^+ tensor V^- tensor V^-)^{tensor 3}), where V^+ and V^- are
the two 3-dimensional representations of sl(3). If the tensor factors are
listed in the indicated order, only 511 of the 512 invariant basis vectors
coincide.
Comment: 18 pages. This version has very minor corrections
Dynamic Steerable Blocks in Deep Residual Networks
Filters in convolutional networks are typically parameterized in a pixel
basis, which does not take prior knowledge about the visual world into account.
We investigate the generalized notion of frames designed with image properties
in mind, as alternatives to this parametrization. We show that frame-based
ResNets and DenseNets can consistently improve performance on CIFAR-10+, while
having additional pleasant properties like steerability. By exploiting these
transformation properties explicitly, we arrive at dynamic steerable blocks.
They are an extension of residual blocks that can seamlessly transform
filters under pre-defined transformations, conditioned on the input at training
and inference time. Dynamic steerable blocks learn the degree of invariance
from data and locally adapt filters, allowing them to apply a different
geometrical variant of the same filter to each location of the feature map.
When evaluated on the Berkeley Segmentation contour detection dataset, our
approach outperforms all competing approaches that do not utilize pre-training.
Our results highlight the benefits of image-based regularization to deep
networks.
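The core idea of a steerable, frame-based parameterization can be shown in miniature (our illustration, not the paper's architecture): a filter is a linear combination of fixed basis kernels, and for a derivative basis the combination coefficients rotate the filter exactly. Here the basis is the pair of 3x3 Sobel-like x/y-derivative kernels; the directional derivative at angle theta is cos(theta)*Gx + sin(theta)*Gy.

```python
# Minimal steerable-filter sketch: a filter synthesized from a fixed
# two-kernel derivative basis (illustrative; a dynamic block would
# predict theta from the input at each location).
import math

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # d/dx basis kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # d/dy basis kernel

def steer(theta):
    """Directional-derivative filter at angle theta, built by linearly
    combining the basis kernels -- no resampling of pixels."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * gx + s * gy for gx, gy in zip(rx, ry)]
            for rx, ry in zip(GX, GY)]

# Steering by 90 degrees turns the x-derivative into the y-derivative
# exactly, which is the transformation property exploited above.
f90 = steer(math.pi / 2)
assert all(abs(a - b) < 1e-9
           for ra, rb in zip(f90, GY) for a, b in zip(ra, rb))
```

Parameterizing filters by such coefficients, instead of free per-pixel weights, is what allows the degree of invariance to be learned and the filter variant to change per location.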
The Projective Line Over the Finite Quotient Ring GF(2)[x]/<x^3 - x> and Quantum Entanglement I. Theoretical Background
The paper deals with the projective line over the finite factor ring
GF(2)[x]/<x^3 - x>. The line is endowed with 18 points, spanning the
neighbourhoods of three pairwise distant points. As GF(2)[x]/<x^3 - x> is not
a local ring, the neighbour (or parallel) relation is not an equivalence
relation, so that the sets of neighbour points to two distant
points overlap. There are nine neighbour points to any point of the line,
forming three disjoint families under the reduction modulo either of two
maximal ideals of the ring. Two of the families contain four points each and
they swap their roles when switching from one ideal to the other; the points of
the one family merge with (the image of) the point in question, while the
points of the other family go in pairs into the remaining two points of the
associated ordinary projective line of order two. The single point of the
remaining family is sent to the reference point under both the mappings and its
existence stems from a non-trivial character of the Jacobson radical, J, of
the ring. The factor ring modulo J is isomorphic to GF(2) ⊗ GF(2). The
projective line over GF(2) ⊗ GF(2) features nine
points, each of them being surrounded by four neighbour and the same number of
distant points, and any two distant points share two neighbours. These
remarkable ring geometries are surmised to be of relevance for modelling
entangled qubit states, to be discussed in detail in Part II of the paper.
Comment: 8 pages, 2 figures
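The geometry can be enumerated directly. We read the garbled quotient as GF(2)[x]/<x^3 - x> (an assumption on our part, though it is the reading consistent with the 18-point count quoted above, which the brute-force enumeration below reproduces): elements are the eight polynomials of degree < 3 over GF(2), encoded as coefficient bitmasks, and points of the projective line are unimodular pairs up to multiplication by a unit.

```python
# Enumerate the projective line over GF(2)[x]/<x^3 - x> (assumed
# ring, see lead-in).  Arithmetic is mod 2 and mod x^3 - x, i.e.
# x^3 ≡ x; elements are bitmasks of polynomial coefficients.

def mul(a, b):
    """Multiply two ring elements (carry-less multiply, then reduce)."""
    r = 0
    for i in range(3):
        if (a >> i) & 1:
            r ^= b << i
    if (r >> 4) & 1: r ^= (1 << 4) ^ (1 << 2)   # x^4 -> x^2
    if (r >> 3) & 1: r ^= (1 << 3) ^ (1 << 1)   # x^3 -> x
    return r

R = range(8)
UNITS = [u for u in R if any(mul(u, v) == 1 for v in R)]

def unimodular(a, b):
    """(a, b) defines a point iff u*a + v*b = 1 for some u, v."""
    return any(mul(u, a) ^ mul(v, b) == 1 for u in R for v in R)

# A point is a unimodular pair up to unit multiples; take the
# lexicographically least representative of each orbit.
points = {min((mul(u, a), mul(u, b)) for u in UNITS)
          for a in R for b in R if unimodular(a, b)}
assert len(points) == 18    # the 18 points stated in the abstract
```

The ring indeed has exactly two units (1 and 1 + x + x^2), and the 18-point count matches the abstract, which supports the assumed reading of the quotient.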
An Algorithm to Simplify Tensor Expressions
The problem of simplifying tensor expressions is addressed in two parts. The
first part presents an algorithm designed to put tensor expressions into a
canonical form, taking into account the symmetries with respect to index
permutations and the renaming of dummy indices. The tensor indices are split
into classes and a natural place for them is defined. The canonical form is the
closest configuration to the natural configuration. In the second part, the
Groebner basis method is used to simplify tensor expressions which obey the
linear identities that come from cyclic symmetries (or more general tensor
identities, including non-linear identities). The algorithm is suitable for
implementation in general purpose computer algebra systems. Some timings of an
experimental implementation over the Riemann package are shown.
Comment: 15 pages, LaTeX2e, submitted to Computer Physics Communications:
Thematic Issue on "Computer Algebra in Physics Research"
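A toy version of the first step, canonicalization under index-permutation symmetries, can be sketched as follows (our illustration, far simpler than the paper's algorithm, which also assigns indices natural places and renames dummy indices). Using the Riemann-tensor slot symmetries, every signed permutation of the indices is generated and the lexicographically least form is chosen; if a form and its negation coincide, the term vanishes.

```python
# Toy canonicalizer for a single Riemann-tensor monomial R_{abcd}:
# antisymmetric in slots (0,1) and (2,3), symmetric under exchanging
# the two pairs (illustrative only).

GENERATORS = [((1, 0, 2, 3), -1),   # swap first pair: sign flips
              ((0, 1, 3, 2), -1),   # swap second pair: sign flips
              ((2, 3, 0, 1), +1)]   # exchange the pairs: no sign change

def orbit(indices):
    """All (sign, index-tuple) forms reachable via the symmetries."""
    seen = {(1, tuple(indices))}
    frontier = list(seen)
    while frontier:
        sign, idx = frontier.pop()
        for perm, s in GENERATORS:
            img = (sign * s, tuple(idx[p] for p in perm))
            if img not in seen:
                seen.add(img)
                frontier.append(img)
    return seen

def canonical(indices):
    """Lexicographically least form, or 0 if the term vanishes."""
    forms = orbit(indices)
    sign, idx = min(forms, key=lambda si: (si[1], si[0]))
    if (-sign, idx) in forms:
        return 0                     # e.g. R_{aacd} = -R_{aacd} = 0
    return sign, idx

assert canonical(("b", "a", "c", "d")) == (-1, ("a", "b", "c", "d"))
assert canonical(("a", "a", "c", "d")) == 0
```

Mapping every equivalent index configuration to one representative in this way is what lets syntactically different but equal tensor terms be combined or cancelled.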