Assessment of the effectiveness of head only and back-of-the-head electrical stunning of chickens
The study assesses the effectiveness of reversible head-only and back-of-the-head electrical stunning of chickens using 130–950 mA per bird at 50 Hz AC.
Graph Sparsification in the Semi-streaming Model
Analyzing massive data sets has been one of the key motivations for studying
streaming algorithms. In recent years, there has been significant progress in
analysing distributions in a streaming setting, but the progress on graph
problems has been limited. A main reason for this has been the existence of
linear space lower bounds for even simple problems such as determining the
connectedness of a graph. However, in many new scenarios that arise from social
and other interaction networks, the number of vertices is significantly less
than the number of edges. This has led to the formulation of the semi-streaming
model where we assume that the space is (near) linear in the number of vertices
(but not necessarily the edges), and the edges appear in an arbitrary (and
possibly adversarial) order.
In this paper we focus on graph sparsification, which is one of the major
building blocks in a variety of graph algorithms. There has been a long history
of (non-streaming) sampling algorithms that provide sparse graph approximations
and it is a natural question to ask whether sparsification can be achieved
using a small space and, in addition, a single pass over the data. The
question is interesting from the standpoint of both theory and practice and we
answer the question in the affirmative, by providing a one-pass
$\tilde{O}(n/\epsilon^2)$-space algorithm that produces a sparsification that
approximates each cut to a $(1+\epsilon)$ factor. We also show that
$\Omega(n\log(1/\epsilon))$ space is necessary for a one-pass streaming
algorithm to approximate the min-cut, improving upon the $\Omega(n)$ lower
bound that arises from lower bounds for testing connectivity.
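A minimal, non-streaming Python sketch may help fix intuition for the sampling idea: keep each edge independently with probability p and reweight survivors by 1/p, so every cut retains its weight in expectation. The uniform rate p, the toy graph, and the helper names below are illustrative assumptions; the paper's one-pass algorithm samples edges non-uniformly to obtain the $(1+\epsilon)$ guarantee within $\tilde{O}(n/\epsilon^2)$ space.

```python
import random
from itertools import combinations

def sparsify(edges, p, seed=0):
    """Keep each edge independently with probability p and reweight the
    survivors by 1/p, so every cut keeps its weight in expectation."""
    rng = random.Random(seed)
    return {e: w / p for e, w in edges.items() if rng.random() < p}

def cut_weight(edges, s_side):
    """Total weight of edges crossing the cut (s_side, rest)."""
    return sum(w for (u, v), w in edges.items()
               if (u in s_side) != (v in s_side))

# Toy input: complete graph on 12 vertices with unit weights.
n = 12
edges = {(u, v): 1.0 for u, v in combinations(range(n), 2)}
sparse = sparsify(edges, p=0.5)

s = set(range(n // 2))
print("original cut weight:", cut_weight(edges, s))
print("sparsified estimate:", cut_weight(sparse, s))
```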
Information scraps: how and why information eludes our personal information management tools
In this paper we describe information scraps -- a class of personal information whose content is scribbled on Post-it notes, scrawled on corners of random sheets of paper, buried inside the bodies of e-mail messages sent to ourselves, or typed haphazardly into text files. Information scraps hold our great ideas, sketches, notes, reminders, driving directions, and even our poetry. We define information scraps to be the body of personal information that is held outside of its natural or expected place.

We have much still to learn about these loose forms of information capture. Why are they so often held outside of our traditional PIM locations and instead on Post-its or in text files? Why must we sometimes go around our traditional PIM applications to hold on to our scraps, such as by e-mailing ourselves? What is information scraps' role in the larger space of personal information management, and what do they uniquely offer that we find so appealing? If these unorganized bits truly indicate the failure of our PIM tools, how might we begin to build better tools?

We have pursued these questions by undertaking a study of 27 knowledge workers. In our findings we describe information scraps from several angles: their content, their location, and the factors that lead to their use, which we identify as ease of capture, flexibility of content and organization, and availability at the time of need. We also consider the personal emotive responses around scrap management. We present a set of design considerations that we have derived from the analysis of our study results, and we present our work on an application platform, jourknow, to test some of these design and usability findings.
Distributed Minimum Cut Approximation
We study the problem of computing approximate minimum edge cuts by
distributed algorithms. We use a standard synchronous message passing model
where in each round, $O(\log n)$ bits can be transmitted over each edge (a.k.a.
the CONGEST model). We present a distributed algorithm that, for any weighted
graph and any $\epsilon \in (0,1)$, with high probability finds a cut of size
at most $O(\epsilon^{-1}\lambda)$ in $O(D) + \tilde{O}(n^{1/2+\epsilon})$
rounds, where $\lambda$ is the size of the minimum cut. This algorithm is based
on a simple approach for analyzing random edge sampling, which we call the
random layering technique. In addition, we also present another distributed
algorithm, which is based on a centralized algorithm due to Matula [SODA '93],
that with high probability computes a cut of size at most $(2+\epsilon)\lambda$
in $\tilde{O}((D+\sqrt{n})\,\epsilon^{-5})$ rounds for any $\epsilon > 0$.
The time complexities of both of these algorithms almost match the
$\tilde{\Omega}(D+\sqrt{n})$ lower bound of Das Sarma et al. [STOC '11], thus
leading to an answer to an open question raised by Elkin [SIGACT-News '04] and
Das Sarma et al. [STOC '11].
Furthermore, we also strengthen the lower bound of Das Sarma et al. by
extending it to unweighted graphs. We show that the same lower bound also holds
for unweighted multigraphs (or equivalently for weighted graphs in which
$O(w\log n)$ bits can be transmitted in each round over an edge of weight $w$),
even if the diameter is $D = O(\log n)$. For unweighted simple graphs, we show
that even for networks of diameter
$\tilde{O}(\frac{1}{\lambda}\cdot\sqrt{\frac{n}{\alpha\lambda}})$, finding an
$\alpha$-approximate minimum cut in networks of edge connectivity $\lambda$ or
computing an $\alpha$-approximation of the edge connectivity requires
$\tilde{\Omega}(D + \sqrt{\frac{n}{\alpha\lambda}})$ rounds.
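The random edge sampling underlying the first algorithm can be illustrated centrally, outside the CONGEST model: after keeping each edge with probability p, the minimum cut of the sample roughly tracks $p\lambda$, which is what the random layering analysis makes precise. Everything below (the brute-force min-cut helper, the two-clique toy network, the sampling rate) is an illustrative assumption, not the distributed algorithm itself.

```python
import random
from itertools import combinations

def min_cut_size(n, edges):
    """Brute-force global min cut on vertices 0..n-1 (exponential in n,
    fine only for toy sizes). Vertex 0 is fixed on one side."""
    best = float("inf")
    others = list(range(1, n))
    for r in range(len(others)):  # proper subsets containing vertex 0
        for side in combinations(others, r):
            s = {0, *side}
            best = min(best, sum(1 for u, v in edges
                                 if (u in s) != (v in s)))
    return best

# Toy network: two 5-cliques joined by 3 bridges, so lambda = 3.
n = 10
left, right = range(5), range(5, 10)
edges = ([(u, v) for u, v in combinations(left, 2)]
         + [(u, v) for u, v in combinations(right, 2)]
         + [(0, 5), (1, 6), (2, 7)])

rng = random.Random(1)
p = 0.5
sampled = [min_cut_size(n, [e for e in edges if rng.random() < p])
           for _ in range(100)]
print("lambda:", min_cut_size(n, edges))            # 3
print("mean sampled min cut:", sum(sampled) / 100)  # roughly tracks p * lambda
```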
End-users publishing structured information on the web: an observational study of what, why, and how
End-users are accustomed to filtering and browsing styled collections of data on professional web sites, but they have few ways to create and publish such information architectures for themselves. This paper presents a full-lifecycle analysis of the Exhibit framework -- an end-user tool that provides such functionality -- to understand the needs, capabilities, and practices of this class of users. We include interviews, as well as analysis of over 1,800 visualizations and 200,000 web interactions with these visualizations. Our analysis reveals important findings about this user population which generalize to the task of providing better end-user structured content publication tools.
On $r$-Simple $k$-Path
An $r$-simple $k$-path is a path in the graph of length $k$ that passes
through each vertex at most $r$ times. The $r$-SIMPLE $k$-PATH problem, given a
graph $G$ as input, asks whether there exists an $r$-simple $k$-path in $G$. We
first show that this problem is NP-Complete. We then show that there is a graph
$G$ that contains an $r$-simple $k$-path and no simple path of length greater
than $4\log k/\log r$. So this, in a sense, motivates this problem, especially
when one's goal is to find a short path that visits many vertices in the graph
while bounding the number of visits at each vertex.
We then give a randomized algorithm that runs in time
$\mathrm{poly}(n)\cdot 2^{O(k\cdot\log r/r)}$ that solves the $r$-SIMPLE
$k$-PATH problem on a graph with $n$ vertices with one-sided error. We also
show that a randomized algorithm with running time
$\mathrm{poly}(n)\cdot 2^{ck/r}$ with $c<1$ gives a
randomized algorithm with running time $\mathrm{poly}(n)\cdot 2^{cn}$ for the
Hamiltonian path problem in a directed graph - an outstanding open problem. So
in a sense our algorithm is optimal up to an $O(\log r)$ factor.
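To make the object concrete, here is a brute-force sketch in Python. It assumes "length $k$" counts edges (so a $k$-path visits $k+1$ vertices, with repeats), which is one common reading, and it is of course the trivial exponential search rather than the paper's randomized algorithm.

```python
from collections import Counter

def find_r_simple_k_path(graph, r, k):
    """Exhaustive backtracking search for an r-simple k-path, taking
    'length k' to mean k edges (k+1 visits). Exponential time; the
    paper's randomized algorithm achieves poly(n) * 2^{O(k log r / r)}."""
    def extend(walk, counts):
        if len(walk) == k + 1:
            return list(walk)
        for nxt in graph.get(walk[-1], ()):
            if counts[nxt] < r:           # respect the per-vertex budget r
                counts[nxt] += 1
                walk.append(nxt)
                found = extend(walk, counts)
                if found:
                    return found
                walk.pop()
                counts[nxt] -= 1
        return None

    for start in graph:
        result = extend([start], Counter({start: 1}))
        if result:
            return result
    return None

# A directed triangle: its longest simple path has 2 edges, yet for r = 2
# it contains a 2-simple 5-path visiting every vertex twice.
g = {0: [1], 1: [2], 2: [0]}
print(find_r_simple_k_path(g, r=2, k=5))   # [0, 1, 2, 0, 1, 2]
```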
Addition of Bevacizumab to Temsirolimus in Kidney Cancer Patients
Treatment of metastatic kidney cancer has changed dramatically in recent years with the use of VEGF-targeted therapies and mTOR inhibitors. However, resistance occurs. We report here two cases of patients who benefited, in terms of both disease control and side effects, from the addition of bevacizumab to temsirolimus after progression on the mTOR inhibitor alone.
Almost-Tight Distributed Minimum Cut Algorithms
We study the problem of computing the minimum cut in a weighted distributed
message-passing network (the CONGEST model). Let $\lambda$ be the minimum cut,
$n$ be the number of nodes in the network, and $D$ be the network diameter. Our
algorithm can compute $\lambda$ exactly in
$O((\sqrt{n}\log^{*}n + D)\lambda^4\log^2 n)$ time. To the best of our
knowledge, this is the first paper that
explicitly studies computing the exact minimum cut in the distributed setting.
Previously, non-trivial sublinear time algorithms for this problem were known
only for unweighted graphs when $\lambda \le 3$, due to Pritchard and
Thurimella's $O(D)$-time and $O(D + n^{1/2}\log^{*}n)$-time algorithms for
computing $2$-edge-connected and $3$-edge-connected components.
By using Karger's edge sampling technique, we can convert this
algorithm into a $(1+\epsilon)$-approximation
$O((\sqrt{n}\log^{*}n + D)\epsilon^{-5}\log^3 n)$-time algorithm for any
$\epsilon > 0$. This improves over the previous $(2+\epsilon)$-approximation
$O((\sqrt{n}\log^{*}n + D)\epsilon^{-5}\log^2 n\log\log n)$-time algorithm and
$O(\epsilon^{-1})$-approximation
$O(D + n^{1/2+\epsilon}\,\mathrm{poly}\log n)$-time algorithm of Ghaffari and
Kuhn. Due to the lower bound of $\tilde{\Omega}(D+\sqrt{n})$ by Das Sarma et
al., which holds for any approximation algorithm, this running time is tight
up to a $\mathrm{poly}\log n$ factor.
To get the stated running time, we developed an approximation algorithm which
combines the ideas of Thorup's algorithm and Matula's contraction algorithm. It
saves an $\epsilon^{-9}\log^{7}n$ factor as compared to applying Thorup's tree
packing theorem directly. Then, we combine Kutten and Peleg's tree partitioning
algorithm and Karger's dynamic programming to achieve an efficient distributed
algorithm that finds the minimum cut when we are given a spanning tree that
crosses the minimum cut exactly once.
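Since the abstract leans on contraction (Matula) and Karger's sampling, a compact centralized sketch of Karger's random contraction may help orient the reader; the toy cycle graph and the repetition count are assumptions for illustration, and the distributed algorithms above are far more structured.

```python
import random

def karger_contract(n, edges, rng):
    """One run of Karger's random contraction: merge the endpoints of a
    uniformly random edge until two super-nodes remain; the edges still
    crossing between them form a cut, which is a minimum cut with
    probability at least 2/(n(n-1))."""
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    while components > 2:
        u, v = edges[rng.randrange(len(edges))]
        ru, rv = find(u), find(v)
        if ru != rv:                  # self-loops are skipped
            parent[ru] = rv
            components -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

# Toy graph: an 8-cycle, whose minimum cut is 2.
n = 8
edges = [(i, (i + 1) % n) for i in range(n)]
rng = random.Random(0)
print(min(karger_contract(n, edges, rng) for _ in range(100)))  # 2 w.h.p.
```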
A $2k$-Vertex Kernel for Maximum Internal Spanning Tree
We consider the parameterized version of the maximum internal spanning tree
problem, which, given an $n$-vertex graph and a parameter $k$, asks for a
spanning tree with at least $k$ internal vertices. Fomin et al. [J. Comput.
System Sci., 79:1-6] crafted a very ingenious reduction rule, and showed that a
simple application of this rule is sufficient to yield a $3k$-vertex kernel.
Here we propose a novel way to use the same reduction rule, resulting in an
improved $2k$-vertex kernel. Our algorithm first applies a greedy procedure
consisting of a sequence of local exchange operations, which ends with a
local-optimal spanning tree, and then uses this special tree to find a
reducible structure. As a corollary of our kernel, we obtain a deterministic
algorithm for the problem running in time $4^{k}\cdot n^{O(1)}$.
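As a small aid to reading the abstract, the sketch below pins down the objective (internal vertices of a spanning tree, i.e. vertices of tree-degree at least 2) on a toy graph. The DFS starting tree and the graph are assumptions; the reduction rule and local exchange operations themselves are not reproduced here.

```python
from collections import defaultdict

def dfs_spanning_tree(graph, root=0):
    """Any spanning tree serves as a starting point; the paper's greedy
    phase would then improve such a tree by local exchange operations."""
    seen, stack, tree = {root}, [root], []
    while stack:
        u = stack.pop()
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                stack.append(v)
    return tree

def internal_vertices(n, tree_edges):
    """The quantity being maximized: vertices of tree-degree >= 2."""
    deg = defaultdict(int)
    for u, v in tree_edges:
        deg[u] += 1
        deg[v] += 1
    return [v for v in range(n) if deg[v] >= 2]

# Toy graph: a 6-cycle with one chord (0-3).
g = {0: [1, 5, 3], 1: [0, 2], 2: [1, 3], 3: [2, 4, 0], 4: [3, 5], 5: [4, 0]}
tree = dfs_spanning_tree(g)
print(tree, "->", internal_vertices(6, tree))
```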
Climatologies at high resolution for the earth's land surface areas
High resolution information on climatic conditions is essential to many applications in environmental sciences. Here we present the CHELSA algorithm to downscale temperature and precipitation estimates from the European Centre for Medium-Range Weather Forecasts (ECMWF) interim climatic reanalysis (ERA-Interim) to a high resolution of 30 arc sec. The algorithm for temperature is based on a statistical downscaling of atmospheric temperature from the ERA-Interim climatic reanalysis. The precipitation algorithm incorporates orographic predictors such as wind fields, valley exposition, and boundary layer height, and a bias correction using Global Precipitation Climatology Centre (GPCC) gridded data and Global Historical Climatology Network (GHCN) station data. The resulting data consist of a monthly temperature and precipitation climatology for the years 1979-2013. We present a comparison of data derived from the CHELSA algorithm with two other high resolution gridded products with overlapping temporal resolution (Tropical Rainfall Measuring Mission (TRMM) for precipitation, Moderate Resolution Imaging Spectroradiometer (MODIS) for temperature) and station data from the Global Historical Climatology Network (GHCN). We show that the climatological data from CHELSA have a similar accuracy to other products for temperature, but that the predictions of orographic precipitation patterns are both better and at a higher spatial resolution.
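A minimal sketch of the statistical-downscaling idea for temperature, assuming a constant lapse-rate correction against a fine-resolution elevation grid: the lapse-rate constant, the nearest-neighbour upsampling, and the toy arrays are illustrative assumptions, since CHELSA itself derives the vertical temperature gradient from the ERA-Interim atmosphere rather than fixing it.

```python
import numpy as np

LAPSE_RATE_K_PER_M = -0.0065  # standard-atmosphere value; an assumption here

def downscale_temperature(t_coarse, elev_coarse, elev_fine):
    """Project coarse-grid temperature onto a fine DEM via a lapse-rate
    correction: T_fine = T_coarse + gamma * (z_fine - z_coarse)."""
    # Nearest-neighbour upsampling of the coarse fields (2x per axis here).
    reps = elev_fine.shape[0] // t_coarse.shape[0]
    t_up = np.repeat(np.repeat(t_coarse, reps, axis=0), reps, axis=1)
    z_up = np.repeat(np.repeat(elev_coarse, reps, axis=0), reps, axis=1)
    return t_up + LAPSE_RATE_K_PER_M * (elev_fine - z_up)

# Toy 2x2 coarse grid downscaled to a 4x4 fine grid.
t_coarse = np.array([[288.0, 286.0], [290.0, 284.0]])     # K
z_coarse = np.array([[500.0, 1500.0], [200.0, 2500.0]])   # m
z_fine = np.array([[400, 600, 1200, 1800],
                   [500, 550, 1400, 1600],
                   [150, 250, 2200, 2800],
                   [180, 300, 2400, 2600]], dtype=float)  # m
print(downscale_temperature(t_coarse, z_coarse, z_fine).round(2))
```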