
### A remark on the multipliers on spaces of weak products of functions

If $\mathcal{H}$ denotes a Hilbert space of analytic functions on a region
$\Omega \subseteq \mathbb{C}^d$, then the weak product is defined by
$\mathcal{H}\odot\mathcal{H}=\left\{h=\sum_{n=1}^\infty f_n g_n :
\sum_{n=1}^\infty \|f_n\|_{\mathcal{H}}\|g_n\|_{\mathcal{H}} <\infty\right\}.$
We prove that if $\mathcal{H}$ is a first order holomorphic Besov Hilbert space
on the unit ball of $\mathbb{C}^d$, then the multiplier algebras of
$\mathcal{H}$ and of $\mathcal{H}\odot\mathcal{H}$ coincide.

Comment: v1: 6 pages. To appear Concr. Ope
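For context, a standard convention in the weak-product literature (not stated explicitly in the abstract): $\mathcal{H}\odot\mathcal{H}$ carries the norm given by the infimum over all admissible representations,

$$\|h\|_{\mathcal{H}\odot\mathcal{H}} = \inf\left\{ \sum_{n=1}^\infty \|f_n\|_{\mathcal{H}} \|g_n\|_{\mathcal{H}} \;:\; h = \sum_{n=1}^\infty f_n g_n \right\},$$

and the multiplier algebra of a function space consists of all $\varphi$ such that $\varphi h$ stays in the space for every $h$ in the space. The theorem thus says that $\varphi\mathcal{H}\subseteq\mathcal{H}$ holds exactly when $\varphi(\mathcal{H}\odot\mathcal{H})\subseteq\mathcal{H}\odot\mathcal{H}$.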

### The Structure of Inner Multipliers on Spaces with Complete Nevanlinna Pick Kernels

We establish some multivariate generalizations of the Beurling-Lax-Halmos
theorem.

Comment: 21 pages

### Main Memory Adaptive Indexing for Multi-core Systems

Adaptive indexing is a concept that considers index creation in databases as
a by-product of query processing; as opposed to traditional full index creation
where the indexing effort is performed up front before answering any queries.
Adaptive indexing has received a considerable amount of attention, and several
algorithms have been proposed over the past few years, including a recent
experimental study comparing a large number of existing methods. Until now,
however, most adaptive indexing algorithms have been designed to be
single-threaded; yet with multi-core systems already well established, the idea
of designing parallel algorithms for adaptive indexing is very natural. In this
regard, only one parallel algorithm for adaptive indexing has recently appeared
in the literature: the parallel version of standard cracking. In this paper we
describe three alternative parallel algorithms for adaptive indexing, including
a second variant of a parallel standard cracking algorithm. Additionally, we
describe a hybrid parallel sorting algorithm and a NUMA-aware method based on
sorting. We then thoroughly compare all these algorithms experimentally, along
with a variant of a recently published parallel version of radix sort. Parallel
sorting algorithms serve as a realistic baseline for multi-threaded adaptive
indexing techniques. In total we experimentally compare seven parallel
algorithms. Additionally, we extensively profile all considered algorithms. The
initial set of experiments considered in this paper indicates that our parallel
algorithms significantly improve over previously known ones. Our results
suggest that, although adaptive indexing algorithms are a good design choice in
single-threaded environments, the rules change considerably in the parallel
case. That is, in future highly-parallel environments, sorting algorithms could
be serious alternatives to adaptive indexing.

Comment: 26 pages, 7 figures
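To make the baseline concrete, here is a minimal single-threaded sketch of standard cracking, the technique behind the parallel variants discussed above. The class name and data layout are ours and purely illustrative; the actual algorithms in the paper operate on in-memory columns, not Python lists.

```python
import bisect

class CrackedColumn:
    """Toy single-threaded standard cracking (illustrative sketch)."""

    def __init__(self, values):
        self.data = list(values)
        # Sorted list of (pivot, pos) pairs: everything in data[:pos] is < pivot.
        self.cracks = []

    def _crack_at(self, pivot):
        keys = [p for p, _ in self.cracks]
        i = bisect.bisect_left(keys, pivot)
        if i < len(keys) and keys[i] == pivot:
            return self.cracks[i][1]  # already cracked on this pivot
        lo = self.cracks[i - 1][1] if i > 0 else 0
        hi = self.cracks[i][1] if i < len(self.cracks) else len(self.data)
        # Partition only the piece the pivot falls into, not the whole column.
        piece = self.data[lo:hi]
        less = [v for v in piece if v < pivot]
        rest = [v for v in piece if v >= pivot]
        self.data[lo:hi] = less + rest
        self.cracks.insert(i, (pivot, lo + len(less)))
        return lo + len(less)

    def range_query(self, low, high):
        """Return all values v with low <= v < high. Cracking the column
        around both bounds is the side effect that builds the index."""
        a = self._crack_at(low)
        b = self._crack_at(high)
        return self.data[a:b]
```

Each query does only the partitioning work its own predicate requires, so the column drifts toward sorted order as a by-product of answering queries; this is exactly the property that makes full parallel sorting a natural competitor.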

### Only Aggressive Elephants are Fast Elephants

Yellow elephants are slow. A major reason is that they consume their inputs
entirely before responding to an elephant rider's orders. Some clever riders
have trained their yellow elephants to only consume parts of the inputs before
responding. However, the teaching time to make an elephant do that is high. So
high that the teaching lessons often do not pay off. We take a different
approach. We make elephants aggressive; only this will make them very fast. We
propose HAIL (Hadoop Aggressive Indexing Library), an enhancement of HDFS and
Hadoop MapReduce that dramatically improves runtimes of several classes of
MapReduce jobs. HAIL changes the upload pipeline of HDFS in order to create
different clustered indexes on each data block replica. An interesting feature
of HAIL is that we typically create a win-win situation: we improve both data
upload to HDFS and the runtime of the actual Hadoop MapReduce job. In terms of
data upload, HAIL improves over HDFS by up to 60% with the default replication
factor of three. In terms of query execution, we demonstrate that HAIL runs up
to 68x faster than Hadoop. In our experiments, we use six clusters including
physical and EC2 clusters of up to 100 nodes. A series of scalability
experiments also demonstrates the superiority of HAIL.

Comment: VLDB201
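The per-replica idea can be illustrated with a toy sketch. Function names and the list-based layout are ours; the real system works inside the HDFS upload pipeline, not on Python lists.

```python
def upload_block(records, sort_keys):
    """Toy version of HAIL's upload idea: instead of storing three identical
    copies of a data block, cluster (sort) each replica on a different
    attribute. Illustrative only -- not the actual HDFS/Hadoop code."""
    return {key: sorted(records, key=lambda r: r[key]) for key in sort_keys}

def pick_replica(replicas, predicate_key):
    """A job filtering on predicate_key reads the replica clustered on that
    attribute, so it can binary-search instead of scanning the whole block."""
    return replicas.get(predicate_key)
```

With the default replication factor of three, this yields three different clustered indexes at no extra storage cost beyond what replication already pays, which is why HAIL can improve both the upload and the query side at once.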

### Weak products of complete Pick spaces

Let $\mathcal H$ be the Drury-Arveson or Dirichlet space of the unit ball of
$\mathbb C^d$. The weak product $\mathcal H\odot\mathcal H$ of $\mathcal H$ is
the collection of all functions $h$ that can be written as $h=\sum_{n=1}^\infty
f_n g_n$, where $\sum_{n=1}^\infty \|f_n\|\|g_n\|<\infty$. We show that
$\mathcal H\odot\mathcal H$ is contained in the Smirnov class of $\mathcal H$,
i.e., every function in $\mathcal H\odot\mathcal H$ is a quotient of two
multipliers of $\mathcal H$, where the function in the denominator can be
chosen to be cyclic in $\mathcal H$. As a consequence we show that the map
$\mathcal N \to \operatorname{clos}_{\mathcal H\odot\mathcal H} \mathcal N$
establishes a one-to-one and onto correspondence between the multiplier
invariant subspaces of $\mathcal H$ and of $\mathcal H\odot\mathcal H$.
The results hold for many weighted Besov spaces $\mathcal H$ in the unit ball
of $\mathbb C^d$ provided the reproducing kernel has the complete Pick
property. One of our main technical lemmas states that for weighted Besov
spaces $\mathcal H$ that satisfy what we call the multiplier inclusion
condition, any bounded column multiplication operator $\mathcal H \to
\oplus_{n=1}^\infty \mathcal H$ induces a bounded row multiplication operator
$\oplus_{n=1}^\infty \mathcal H \to \mathcal H$. For the Drury-Arveson space
$H^2_d$ this leads to an alternate proof of the characterization of
interpolating sequences in terms of weak separation and Carleson measure
conditions.

Comment: minor change
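In symbols (notation ours, not from the abstract), the technical lemma can be read as follows: for a sequence of multipliers $\varphi_1, \varphi_2, \dots$ of $\mathcal H$,

$$f \mapsto (\varphi_n f)_{n\ge 1} \colon \mathcal H \to \bigoplus_{n=1}^\infty \mathcal H \ \text{bounded} \;\Longrightarrow\; (f_n)_{n\ge 1} \mapsto \sum_{n=1}^\infty \varphi_n f_n \colon \bigoplus_{n=1}^\infty \mathcal H \to \mathcal H \ \text{bounded,}$$

i.e. boundedness of the column operator already forces boundedness of the corresponding row operator.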

- …