A Network Coding Approach to Loss Tomography
Network tomography aims at inferring internal network characteristics based
on measurements at the edge of the network. In loss tomography, in particular,
the characteristic of interest is the loss rate of individual links and
multicast and/or unicast end-to-end probes are typically used. Independently,
recent advances in network coding have shown that there are advantages to
allowing intermediate nodes to process and combine packets, in addition to
simply forwarding them. In this paper, we study the problem of loss tomography in
networks with network coding capabilities. We design a framework for estimating
link loss rates, which leverages network coding capabilities, and we show that
it improves several aspects of tomography including the identifiability of
links, the trade-off between estimation accuracy and bandwidth efficiency, and
the complexity of probe path selection. We discuss the cases of inferring link
loss rates in a tree topology and in a general topology. In the latter case,
the benefits of our approach are even more pronounced compared to standard
techniques, but we also face novel challenges, such as dealing with cycles and
multiple paths between sources and receivers. Overall, this work makes the
connection between active network tomography and network coding.
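As a toy illustration of why coding at an intermediate node helps identifiability, the following sketch simulates an inverted-Y network in which a coding node XORs probes from two sources; the receiver's outcome frequencies then identify all three link loss rates from a single experiment. The topology, link names, and the moment-style estimator are assumptions for illustration, not the paper's actual framework.

```python
# Minimal sketch (not the paper's estimator): loss tomography with a single
# XOR-coding node on an inverted-Y topology.
#
#   S1 --link A-->\
#                  C --link D--> R
#   S2 --link B-->/
#
# Node C forwards x1 XOR x2 when both probes arrive, or whichever probe
# survived; R classifies what it receives.

import random

def run_trials(loss_a, loss_b, loss_d, n_trials=200_000, seed=1):
    counts = {"x1^x2": 0, "x1": 0, "x2": 0, "none": 0}
    rng = random.Random(seed)
    for _ in range(n_trials):
        got1 = rng.random() > loss_a          # probe from S1 survives link A
        got2 = rng.random() > loss_b          # probe from S2 survives link B
        if not (got1 or got2):
            counts["none"] += 1
            continue
        if rng.random() <= loss_d:            # coded/forwarded packet lost on D
            counts["none"] += 1
            continue
        counts["x1^x2" if (got1 and got2) else ("x1" if got1 else "x2")] += 1
    return {k: v / n_trials for k, v in counts.items()}

def estimate_losses(freq):
    """Invert the outcome probabilities to per-link loss rates."""
    p12, p1, p2 = freq["x1^x2"], freq["x1"], freq["x2"]
    b = p1 / (p1 + p12)                       # P(loss on B | forwarded packet seen)
    a = p2 / (p2 + p12)
    d = 1 - p12 / ((1 - a) * (1 - b))
    return a, b, d

freq = run_trials(loss_a=0.05, loss_b=0.10, loss_d=0.02)
print("estimated (A, B, D) loss rates:", estimate_losses(freq))
```

With a plain multicast probe over the same topology, losses on links A and B would be harder to tell apart at the receiver; the coded outcomes (x1, x2, or their XOR) make all three links identifiable in one experiment.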
Striking a Balance Between Physical and Digital Resources
In its various configurations, be they academic, archival, county, juvenile, monastic, national, personal, public, reference, or research, the library has long been a fixture in human affairs. The digital, meaning content or communication delivered through the internet, is about 20 years old (younger in parts). Both approaches to organizing serve to structure information for access. However, digital content is multiplying very fast, and libraries everywhere contemplate an existential crisis; even the more hopeful librarians fret about physical versus digital space.
Yet the crux of the matter is not physical versus digital: without doubt, the digital space of content and communication transforms all walks of life and cannot be wished away; but the physical space of libraries is time-tested, extremely valuable, and can surely offer more than currently meets the eye. Except for entirely virtual libraries, the symbiotic relationship between the physical and the digital is innately powerful: for superior outcomes, it must be recognized, nurtured, and leveraged; striking a balance between physical and digital resources can be accomplished. This paper examines the subject of delivering digital from macro, meso, and micro perspectives: it looks into complexity theory, digital strategy, and digitization.
Crossing the Logarithmic Barrier for Dynamic Boolean Data Structure Lower Bounds
This paper proves the first super-logarithmic lower bounds on the cell probe
complexity of dynamic boolean (a.k.a. decision) data structure problems, a
long-standing milestone in data structure lower bounds.
We introduce a new method for proving dynamic cell probe lower bounds and use
it to prove a super-logarithmic lower bound on the operational
time of a wide range of boolean data structure problems, most notably, on the
query time of dynamic range counting over $\mathbb{F}_2$ ([Pat07]). Proving a
super-logarithmic lower bound for this problem was explicitly posed as one of
five important open problems in the late Mihai P\v{a}tra\c{s}cu's obituary
[Tho13]. This result also implies the first super-logarithmic lower bound for the
classical 2D range counting problem, one of the most fundamental data structure
problems in computational geometry and spatial databases. We derive similar
lower bounds for boolean versions of dynamic polynomial evaluation and 2D
rectangle stabbing, and for the (non-boolean) problems of range selection and
range median.
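To make concrete the kind of boolean problem these bounds apply to, here is a naive sketch of dynamic 2D range counting over $\mathbb{F}_2$ (range parity): updates toggle points, and queries return the parity of the points dominated by the query corner. The class name and the O(n)-per-query implementation are illustrative only; the lower bound concerns how well any cell-probe data structure can balance update and query costs.

```python
# Illustration only: the boolean problem behind the lower bound, dynamic
# 2D range counting over F_2 (range parity).  This naive structure pays
# O(1) per update and O(n) per query.

class RangeParity2D:
    def __init__(self):
        self.points = set()

    def toggle(self, x, y):
        """Update: insert the point, or remove it if already present (addition over F_2)."""
        self.points.symmetric_difference_update({(x, y)})

    def query(self, qx, qy):
        """Parity (mod 2) of the number of points dominated by (qx, qy)."""
        return sum(1 for (x, y) in self.points if x <= qx and y <= qy) % 2

ds = RangeParity2D()
ds.toggle(1, 2); ds.toggle(3, 1); ds.toggle(3, 1)   # second toggle cancels the first
print(ds.query(2, 3))   # -> 1 (only (1, 2) lies in the query range)
```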
Our technical centerpiece is a new way of "weakly" simulating dynamic data
structures using efficient one-way communication protocols with small advantage
over random guessing. This simulation involves a surprising excursion to
low-degree (Chebyshev) polynomials, which may be of independent interest, and
offers an entirely new algorithmic angle on the "cell sampling" method of
Panigrahy et al. [PTW10].
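For readers who have not met them, the Chebyshev polynomials of the first kind referred to above are the standard family given by the recurrence below (a textbook definition, not the paper's specific construction); their fast growth outside [-1, 1] at low degree is the kind of property typically exploited when amplifying a small advantage over random guessing.

```latex
\[
  T_0(x) = 1, \qquad T_1(x) = x, \qquad
  T_{k+1}(x) = 2x\,T_k(x) - T_{k-1}(x),
  \qquad T_k(\cos\theta) = \cos(k\theta).
\]
```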
Error, reproducibility and sensitivity: a pipeline for data processing of Agilent oligonucleotide expression arrays
Background
Expression microarrays are increasingly used to obtain large scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples.
Results
We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (around 6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables defining the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years by a variety of operators.
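As a rough illustration of the replicate-based error estimates quoted above, the following numpy sketch computes a per-gene inter-array SD from simulated replicate log2 signals and a simple detection threshold. The data layout, simulated noise level, and threshold rule are assumptions for illustration, not the paper's R implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# rows = genes, columns = replicate arrays hybridised with the same reference RNA
gene_mean = rng.normal(loc=8.0, scale=2.0, size=(20_000, 1))   # per-gene log2 level
noise = rng.normal(scale=0.5, size=(20_000, 6))                # ~0.5 log2-unit replicate noise
log_signal = gene_mean + noise

# Inter-array variability: per-gene SD across replicate arrays, summarised by the median
inter_array_sd = np.median(log_signal.std(axis=1, ddof=1))
mean_log = log_signal.mean()
print(f"inter-array SD ~ {inter_array_sd:.2f} log2 units "
      f"(~{100 * inter_array_sd / mean_log:.0f}% of the mean log signal)")

# A simple 'expressed' threshold: mean of the dimmest 1,000 genes plus two SDs
low = np.sort(log_signal.mean(axis=1))[:1000]
threshold = low.mean() + 2 * low.std(ddof=1)
print(f"suggested expression threshold ~ {threshold:.2f} (log2 scale)")
```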
Conclusions
This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and of a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
Pooling Faces: Template based Face Recognition with Pooled Face Images
We propose a novel approach to template based face recognition. Our dual goal
is to both increase recognition accuracy and reduce the computational and
storage costs of template matching. To do this, we leverage an approach
which was proven effective in many other domains, but, to our knowledge, never
fully explored for face images: average pooling of face photos. We show how
(and why!) the space of a template's images can be partitioned and then pooled
based on image quality and head pose, and the effect this has on accuracy and
template size. We perform extensive tests on the IJB-A and Janus CS2 template
based face identification and verification benchmarks. These show that not only
does our approach outperform the published state of the art despite requiring far
fewer cross-template comparisons, but also, surprisingly, that image pooling
performs on par with deep feature pooling.
Comment: Appeared in the IEEE Computer Society Workshop on Biometrics, IEEE
Conf. on Computer Vision and Pattern Recognition (CVPR), June 201
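The partition-then-pool idea can be sketched as follows; the pose bins, quality cut-off, and cosine matcher are placeholder assumptions rather than the authors' exact pipeline.

```python
# Schematic sketch of partition-then-pool template matching (bin boundaries,
# quality scores and the cosine matcher are illustrative assumptions).
import numpy as np
from collections import defaultdict

def pool_template(images, poses, qualities, pose_edges=(-15, 15), quality_cut=0.5):
    """Average-pool a template's face images within (pose bin, quality bin) cells."""
    bins = defaultdict(list)
    for img, yaw, q in zip(images, poses, qualities):
        pose_bin = int(np.digitize(yaw, pose_edges))   # left / frontal / right
        bins[(pose_bin, q >= quality_cut)].append(img)
    # One pooled image per non-empty cell -> far fewer cross-template comparisons
    return [np.mean(stack, axis=0) for stack in bins.values()]

def match(template_a, template_b):
    """Max cosine similarity over pairs of pooled (flattened) images."""
    def cos(u, v):
        u, v = u.ravel(), v.ravel()
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
    return max(cos(a, b) for a in template_a for b in template_b)

# Toy usage with random stand-ins for aligned face crops
rng = np.random.default_rng(0)
imgs = [rng.random((64, 64)) for _ in range(10)]
tmpl = pool_template(imgs, poses=rng.uniform(-40, 40, 10), qualities=rng.random(10))
print(len(tmpl), "pooled images represent the 10-image template")
```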
Visualizing and Interacting with Concept Hierarchies
Concept Hierarchies and Formal Concept Analysis are theoretically
well-grounded and extensively tested methods. They rely on line diagrams called
Galois lattices for visualizing and analysing object-attribute sets. Galois
lattices are visually appealing and conceptually rich for experts. However, they
present important drawbacks due to their concept-oriented overall structure:
analysing what they show is difficult for non-experts, navigation is
cumbersome, interaction is poor, and scalability is a deep bottleneck for
visual interpretation even for experts. In this paper we introduce semantic
probes as a means to overcome many of these problems and extend usability and
application possibilities of traditional FCA visualization methods. Semantic
probes are visual, user-centred objects which extract and organize reduced
Galois sub-hierarchies. They are simpler, clearer, and they provide a better
navigation support through a rich set of interaction possibilities. Since
probe-driven sub-hierarchies are limited to the user's focus, scalability is
kept under control and interpretation is facilitated. After some successful
experiments, several applications are being developed; the remaining challenge
is to find a compromise between simplicity and conceptual expressivity.
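For readers unfamiliar with the underlying machinery, the sketch below shows the Galois connection on a toy object-attribute context, together with a crude probe-style query that derives only the concept generated by a user's focus set. The context and the probe heuristic are illustrative, not the authors' implementation.

```python
# Minimal sketch of the Galois connection behind a concept lattice, and a
# crude probe-like query restricted to a user focus (illustrative only).

context = {                      # object -> set of attributes
    "duck":    {"flies", "swims", "lays_eggs"},
    "penguin": {"swims", "lays_eggs"},
    "eagle":   {"flies", "lays_eggs", "predator"},
    "otter":   {"swims", "predator"},
}

def intent(objects):
    """Attributes shared by all given objects (the derivation operator on object sets)."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else {a for s in context.values() for a in s}

def extent(attributes):
    """Objects having all given attributes (the derivation operator on attribute sets)."""
    return {o for o, attrs in context.items() if attributes <= attrs}

def probe(focus):
    """Concept generated by a focus set of objects: (closed object set, shared attributes)."""
    shared = intent(focus)
    return extent(shared), shared

objs, attrs = probe({"duck", "penguin"})
print(objs)    # {'duck', 'penguin'}  -- objects in the focus concept
print(attrs)   # {'swims', 'lays_eggs'}
```

A probe in this spirit keeps the displayed structure limited to what the focus generates, rather than drawing the full Galois lattice of the context.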