
    Assessment of the effectiveness of head only and back-of-the-head electrical stunning of chickens

    The study assesses the effectiveness of reversible head-only and back-of-the-head electrical stunning of chickens using 130–950 mA per bird at 50 Hz AC.

    Graph Sparsification in the Semi-streaming Model

    Analyzing massive data sets has been one of the key motivations for studying streaming algorithms. In recent years, there has been significant progress in analyzing distributions in a streaming setting, but progress on graph problems has been limited. A main reason for this has been the existence of linear-space lower bounds for even simple problems such as determining the connectedness of a graph. However, in many new scenarios that arise from social and other interaction networks, the number of vertices is significantly less than the number of edges. This has led to the formulation of the semi-streaming model, where we assume that the space is (near) linear in the number of vertices (but not necessarily the edges), and the edges appear in an arbitrary (and possibly adversarial) order. In this paper we focus on graph sparsification, which is one of the major building blocks in a variety of graph algorithms. There is a long history of (non-streaming) sampling algorithms that provide sparse graph approximations, and it is natural to ask whether sparsification can be achieved in small space and, in addition, in a single pass over the data. The question is interesting from the standpoint of both theory and practice, and we answer it in the affirmative by providing a one-pass $\tilde{O}(n/\epsilon^{2})$-space algorithm that produces a sparsification approximating each cut to a $(1+\epsilon)$ factor. We also show that $\Omega(n \log \frac{1}{\epsilon})$ space is necessary for a one-pass streaming algorithm to approximate the min-cut, improving upon the $\Omega(n)$ lower bound that arises from lower bounds for testing connectivity.
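    The cut-preservation guarantee rests on sampling edges and reweighting each survivor by the inverse sampling probability, so that every cut keeps its weight in expectation. The sketch below illustrates only that reweighting idea, using uniform sampling; the paper's algorithm samples non-uniformly and runs in a single streaming pass, neither of which is attempted here.

```python
import random

def sparsify_uniform(edges, p, seed=0):
    """Keep each weighted edge independently with probability p and
    scale its weight by 1/p, so every cut keeps its weight in
    expectation. Uniform sampling is a simplification: the paper
    samples edges non-uniformly and works in one streaming pass.
    """
    rng = random.Random(seed)
    return [(u, v, w / p) for (u, v, w) in edges if rng.random() < p]

def cut_weight(edges, side):
    """Total weight of edges with exactly one endpoint in `side`."""
    return sum(w for (u, v, w) in edges if (u in side) != (v in side))

# Toy check: a 6-cycle with unit weights; the cut {0, 1, 2} has weight 2,
# and the sparsifier should be close to that on average.
edges = [(i, (i + 1) % 6, 1.0) for i in range(6)]
sparse = sparsify_uniform(edges, p=0.5)
print(cut_weight(edges, {0, 1, 2}), cut_weight(sparse, {0, 1, 2}))
```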

    Information scraps: how and why information eludes our personal information management tools

    In this paper we describe information scraps -- a class of personal information whose content is scribbled on Post-it notes, scrawled on corners of random sheets of paper, buried inside the bodies of e-mail messages sent to ourselves, or typed haphazardly into text files. Information scraps hold our great ideas, sketches, notes, reminders, driving directions, and even our poetry. We define information scraps to be the body of personal information that is held outside of its natural or traditional PIM location. We have much still to learn about these loose forms of information capture. Why are they so often held outside of our traditional PIM locations and instead on Post-its or in text files? Why must we sometimes go around our traditional PIM applications to hold on to our scraps, such as by e-mailing ourselves? What is information scraps' role in the larger space of personal information management, and what do they uniquely offer that we find so appealing? If these unorganized bits truly indicate the failure of our PIM tools, how might we begin to build better tools? We have pursued these questions by undertaking a study of 27 knowledge workers. In our findings we describe information scraps from several angles: their content, their location, and the factors that lead to their use, which we identify as ease of capture, flexibility of content and organization, and availability at the time of need. We also consider the personal emotive responses around scrap management. We present a set of design considerations derived from the analysis of our study results, and describe our work on an application platform, Jourknow, to test some of these design and usability findings.

    Distributed Minimum Cut Approximation

    We study the problem of computing approximate minimum edge cuts by distributed algorithms. We use a standard synchronous message-passing model where in each round, $O(\log n)$ bits can be transmitted over each edge (a.k.a. the CONGEST model). We present a distributed algorithm that, for any weighted graph and any $\epsilon \in (0, 1)$, with high probability finds a cut of size at most $O(\epsilon^{-1}\lambda)$ in $O(D) + \tilde{O}(n^{1/2+\epsilon})$ rounds, where $\lambda$ is the size of the minimum cut. This algorithm is based on a simple approach for analyzing random edge sampling, which we call the random layering technique. In addition, we also present another distributed algorithm, based on a centralized algorithm due to Matula [SODA '93], that with high probability computes a cut of size at most $(2+\epsilon)\lambda$ in $\tilde{O}((D+\sqrt{n})/\epsilon^{5})$ rounds for any $\epsilon>0$. The time complexities of both of these algorithms almost match the $\tilde{\Omega}(D + \sqrt{n})$ lower bound of Das Sarma et al. [STOC '11], thus leading to an answer to an open question raised by Elkin [SIGACT News '04] and Das Sarma et al. [STOC '11]. Furthermore, we also strengthen the lower bound of Das Sarma et al. by extending it to unweighted graphs. We show that the same lower bound also holds for unweighted multigraphs (or equivalently for weighted graphs in which $O(w\log n)$ bits can be transmitted in each round over an edge of weight $w$), even if the diameter is $D=O(\log n)$. For unweighted simple graphs, we show that even for networks of diameter $\tilde{O}(\frac{1}{\lambda}\cdot\sqrt{\frac{n}{\alpha\lambda}})$, finding an $\alpha$-approximate minimum cut in networks of edge connectivity $\lambda$ or computing an $\alpha$-approximation of the edge connectivity requires $\tilde{\Omega}(D + \sqrt{\frac{n}{\alpha\lambda}})$ rounds.
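    The random layering analysis builds on a basic fact about random edge sampling: if each edge survives with probability p, a cut of size well below 1/p is likely to lose all of its edges, so the sampled graph falls apart along some small cut. The sketch below is a minimal centralized illustration of that fact, not the CONGEST algorithm; the graph and parameters are invented for the demo.

```python
import random

def sampled_components(n, edges, p, seed=0):
    """Sample each edge with probability p and return the connected
    components of the sampled subgraph (union-find). If the minimum
    cut is much smaller than 1/p, it probably keeps no sampled edge,
    so some component boundary exposes a small cut.
    """
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (u, v) in edges:
        if rng.random() < p:
            parent[find(u)] = find(v)

    comps = {}
    for v in range(n):
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())

def boundary_size(edges, side):
    return sum(1 for (u, v) in edges if (u in side) != (v in side))

# Two 5-cliques joined by a single bridge, so the minimum cut is 1.
clique = [(u, v) for u in range(5) for v in range(u + 1, 5)]
edges = clique + [(u + 5, v + 5) for (u, v) in clique] + [(0, 5)]
for side in sampled_components(10, edges, p=0.4):
    print(sorted(side), "boundary:", boundary_size(edges, side))
```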

    End-users publishing structured information on the web: an observational study of what, why, and how

    End-users are accustomed to filtering and browsing styled collections of data on professional web sites, but they have few ways to create and publish such information architectures for themselves. This paper presents a full-lifecycle analysis of the Exhibit framework - an end-user tool which provides such functionality - to understand the needs, capabilities, and practices of this class of users. We include interviews, as well as analysis of over 1,800 visualizations and 200,000 web interactions with these visualizations. Our analysis reveals important findings about this user population which generalize to the task of providing better end-user structured content publication tools.
    Intel Science & Technology Center for Big Data

    On $r$-Simple $k$-Path

    An $r$-simple $k$-path is a path in the graph of length $k$ that passes through each vertex at most $r$ times. The $r$-SIMPLE $k$-PATH problem, given a graph $G$ as input, asks whether there exists an $r$-simple $k$-path in $G$. We first show that this problem is NP-complete. We then show that there is a graph $G$ that contains an $r$-simple $k$-path and no simple path of length greater than $4\log k/\log r$. This, in a sense, motivates the problem, especially when one's goal is to find a short path that visits many vertices in the graph while bounding the number of visits to each vertex. We then give a randomized algorithm that runs in time $\mathrm{poly}(n)\cdot 2^{O(k\cdot\log r/r)}$ and solves the $r$-SIMPLE $k$-PATH problem on a graph with $n$ vertices with one-sided error. We also show that a randomized algorithm with running time $\mathrm{poly}(n)\cdot 2^{(c/2)k/r}$ with $c<1$ would give a randomized algorithm with running time $\mathrm{poly}(n)\cdot 2^{cn}$ for the Hamiltonian path problem in a directed graph - an outstanding open problem. So, in a sense, our algorithm is optimal up to an $O(\log r)$ factor.
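    For intuition about the problem itself, here is a minimal exhaustive-search reference implementation of the $r$-simple $k$-path predicate (exponential time, nothing like the paper's randomized algorithm); the directed triangle in the demo has no simple path longer than 2, yet contains a 2-simple 4-path.

```python
def has_r_simple_k_path(adj, r, k):
    """Exhaustive search for a walk with k edges that visits each
    vertex at most r times (an r-simple k-path). This is only an
    exponential-time reference check; the paper's randomized
    algorithm is far faster and is not attempted here.
    """
    n = len(adj)

    def extend(v, counts, steps_left):
        if steps_left == 0:
            return True
        for u in adj[v]:
            if counts[u] < r:
                counts[u] += 1
                if extend(u, counts, steps_left - 1):
                    return True
                counts[u] -= 1
        return False

    for start in range(n):
        counts = [0] * n
        counts[start] = 1
        if extend(start, counts, k):
            return True
    return False

# A directed triangle: the longest simple path has length 2, yet a
# 2-simple 4-path exists (0 -> 1 -> 2 -> 0 -> 1).
adj = {0: [1], 1: [2], 2: [0]}
print(has_r_simple_k_path(adj, r=1, k=3))  # False: no simple 3-path
print(has_r_simple_k_path(adj, r=2, k=4))  # True
```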

    Addition of Bevacizumab to Temsirolimus in Kidney Cancer Patients

    Treatment of metastatic kidney cancer has changed dramatically in recent years with the use of VEGF-targeted therapies and mTOR inhibitors. However, resistance occurs. We report here two cases of patients who benefited, in terms of both disease control and side effects, from the addition of bevacizumab to temsirolimus after progression on the mTOR inhibitor alone.

    Almost-Tight Distributed Minimum Cut Algorithms

    We study the problem of computing the minimum cut in weighted distributed message-passing networks (the CONGEST model). Let $\lambda$ be the minimum cut, $n$ be the number of nodes in the network, and $D$ be the network diameter. Our algorithm can compute $\lambda$ exactly in $O((\sqrt{n}\log^{*} n+D)\lambda^{4}\log^{2} n)$ time. To the best of our knowledge, this is the first paper that explicitly studies computing the exact minimum cut in the distributed setting. Previously, non-trivial sublinear-time algorithms for this problem were known only for unweighted graphs when $\lambda\leq 3$, due to Pritchard and Thurimella's $O(D)$-time and $O(D+n^{1/2}\log^{*} n)$-time algorithms for computing $2$-edge-connected and $3$-edge-connected components. By using Karger's edge sampling technique, we can convert this algorithm into a $(1+\epsilon)$-approximation $O((\sqrt{n}\log^{*} n+D)\epsilon^{-5}\log^{3} n)$-time algorithm for any $\epsilon>0$. This improves over the previous $(2+\epsilon)$-approximation $O((\sqrt{n}\log^{*} n+D)\epsilon^{-5}\log^{2} n\log\log n)$-time algorithm and the $O(\epsilon^{-1})$-approximation $O(D+n^{\frac{1}{2}+\epsilon}\,\mathrm{poly}\log n)$-time algorithm of Ghaffari and Kuhn. Due to the lower bound of $\Omega(D+n^{1/2}/\log n)$ by Das Sarma et al., which holds for any approximation algorithm, this running time is tight up to a $\mathrm{poly}\log n$ factor. To obtain the stated running time, we develop an approximation algorithm that combines the ideas of Thorup's algorithm and Matula's contraction algorithm. It saves an $\epsilon^{-9}\log^{7} n$ factor compared to applying Thorup's tree packing theorem directly. We then combine Kutten and Peleg's tree partitioning algorithm with Karger's dynamic programming to achieve an efficient distributed algorithm that finds the minimum cut when we are given a spanning tree that crosses the minimum cut exactly once.
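    The final step rests on the observation that cuts crossing a given spanning tree exactly once are easy to enumerate: deleting one tree edge splits the vertices in two, and each of the n-1 splits can be scored. The sketch below is a sequential illustration of that one-crossing case only; the paper's distributed version uses tree partitioning and dynamic programming, which this does not attempt.

```python
from collections import deque

def min_one_crossing_cut(n, tree_edges, edges, root=0):
    """Score every cut induced by deleting a single spanning-tree edge
    and return the cheapest. If some minimum cut crosses the tree
    exactly once, it appears among these n-1 candidates.
    """
    adj = {v: [] for v in range(n)}
    for (u, v) in tree_edges:
        adj[u].append(v)
        adj[v].append(u)

    # Orient the tree away from the root with a BFS.
    parent = {root: None}
    children = {v: [] for v in range(n)}
    queue = deque([root])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                children[x].append(y)
                queue.append(y)

    best = (float("inf"), None)
    for (u, v) in tree_edges:
        child = u if parent[u] == v else v
        side, stack = set(), [child]      # subtree under the cut edge
        while stack:
            x = stack.pop()
            side.add(x)
            stack.extend(children[x])
        w = sum(wt for (a, b, wt) in edges if (a in side) != (b in side))
        if w < best[0]:
            best = (w, side)
    return best

# Two weighted triangles joined by one light edge; the minimum cut
# (weight 1) crosses the path-shaped spanning tree exactly once.
edges = [(0, 1, 2), (1, 2, 2), (0, 2, 2),
         (3, 4, 2), (4, 5, 2), (3, 5, 2), (2, 3, 1)]
tree = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
print(min_one_crossing_cut(6, tree, edges))  # (1, {3, 4, 5})
```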

    A $2k$-Vertex Kernel for Maximum Internal Spanning Tree

    We consider the parameterized version of the maximum internal spanning tree problem, which, given an $n$-vertex graph and a parameter $k$, asks for a spanning tree with at least $k$ internal vertices. Fomin et al. [J. Comput. System Sci., 79:1-6] crafted a very ingenious reduction rule, and showed that a simple application of this rule is sufficient to yield a $3k$-vertex kernel. Here we propose a novel way to use the same reduction rule, resulting in an improved $2k$-vertex kernel. Our algorithm first applies a greedy procedure consisting of a sequence of local exchange operations, which ends with a local-optimal spanning tree, and then uses this special tree to find a reducible structure. As a corollary of our kernel, we obtain a deterministic algorithm for the problem running in time $4^{k} \cdot n^{O(1)}$.
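    To make the local exchange idea concrete, here is a sketch of one simplified exchange rule (an assumption for illustration, not the paper's exact operations): if two tree leaves are joined by a graph edge, add that edge and delete an edge on the created cycle whose endpoints both have tree degree at least 3; nothing becomes a leaf, and the two former leaves become internal, for a net gain of two.

```python
from collections import deque

def internal_count(n, tree):
    """Number of vertices with tree degree at least 2."""
    deg = [0] * n
    for (u, v) in tree:
        deg[u] += 1
        deg[v] += 1
    return sum(1 for d in deg if d >= 2)

def one_exchange(n, graph_edges, tree):
    """Try one leaf-to-leaf exchange (hypothetical simplified rule):
    add a graph edge joining two tree leaves, then delete a cycle edge
    both of whose endpoints have tree degree >= 3, so the two old
    leaves turn internal. Returns the improved tree, or None if the
    rule does not apply anywhere.
    """
    deg = [0] * n
    adj = {v: set() for v in range(n)}
    tset = {frozenset(e) for e in tree}
    for (u, v) in tree:
        deg[u] += 1; deg[v] += 1
        adj[u].add(v); adj[v].add(u)
    for (u, v) in graph_edges:
        if frozenset((u, v)) in tset or deg[u] != 1 or deg[v] != 1:
            continue
        # The unique tree path u..v closes a cycle with edge (u, v).
        par = {u: None}
        queue = deque([u])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in par:
                    par[y] = x
                    queue.append(y)
        path = [v]
        while par[path[-1]] is not None:
            path.append(par[path[-1]])
        for x, y in zip(path, path[1:]):
            if deg[x] >= 3 and deg[y] >= 3:   # both stay internal
                tset.discard(frozenset((x, y)))
                tset.add(frozenset((u, v)))
                return [tuple(e) for e in tset]
    return None

# A small instance where the rule fires: leaves 0 and 4 are adjacent
# in G, and cycle edge (2, 3) has degree-3 endpoints on both sides.
G = [(0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (0, 4)]
T = [(0, 2), (1, 2), (2, 3), (3, 4), (3, 5)]
T2 = one_exchange(6, G, T)
print(internal_count(6, T), "->", internal_count(6, T2))  # 2 -> 4
```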

    Climatologies at high resolution for the earth's land surface areas

    High-resolution information on climatic conditions is essential to many applications in environmental sciences. Here we present the CHELSA algorithm to downscale temperature and precipitation estimates from the European Centre for Medium-Range Weather Forecasts (ECMWF) interim climatic reanalysis (ERA-Interim) to a high resolution of 30 arc sec. The algorithm for temperature is based on a statistical downscaling of atmospheric temperature from the ERA-Interim climatic reanalysis. The precipitation algorithm incorporates orographic predictors such as wind fields, valley exposition, and boundary layer height, and a bias correction using Global Precipitation Climatology Centre (GPCC) gridded data and Global Historical Climatology Network (GHCN) station data. The resulting data consist of a monthly temperature and precipitation climatology for the years 1979-2013. We present a comparison of data derived from the CHELSA algorithm with two other high-resolution gridded products with overlapping temporal resolution (the Tropical Rainfall Measuring Mission (TRMM) for precipitation and the Moderate Resolution Imaging Spectroradiometer (MODIS) for temperature) and with station data from the Global Historical Climatology Network (GHCN). We show that the climatological data from CHELSA have accuracy similar to other products for temperature, but that the predictions of orographic precipitation patterns are both better and at a higher spatial resolution.
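    As a rough illustration of the temperature part, statistical downscaling can be reduced to its simplest form: resample the coarse field onto the fine grid and correct each fine cell for elevation with a lapse rate. The sketch below assumes a constant standard-atmosphere lapse rate, whereas CHELSA derives lapse rates from the ERA-Interim atmospheric column; all grids and numbers are invented for the demo.

```python
import numpy as np

# Standard-atmosphere lapse rate, assumed constant for this sketch;
# CHELSA instead derives lapse rates from the reanalysis column, so
# this shows only the general shape of the computation.
LAPSE_RATE = -6.5e-3  # K per metre

def downscale_temperature(t_coarse, dem_coarse, dem_fine, zoom):
    """Block-replicate a coarse temperature grid onto a fine grid and
    correct each fine cell for the elevation difference between the
    fine DEM and the replicated coarse reference surface."""
    t_ref = np.kron(t_coarse, np.ones((zoom, zoom)))
    z_ref = np.kron(dem_coarse, np.ones((zoom, zoom)))
    return t_ref + LAPSE_RATE * (dem_fine - z_ref)

# Invented 2x2 reanalysis cells refined 2x, with a ridge in the fine
# DEM that the coarse grid cannot resolve.
t_coarse = np.array([[15.0, 14.0], [16.0, 15.0]])        # deg C
dem_coarse = np.array([[500.0, 800.0], [400.0, 700.0]])  # metres
dem_fine = np.array([[500, 600, 800, 1500],
                     [500, 700, 900, 1600],
                     [400, 500, 700, 1400],
                     [400, 600, 800, 1500]], dtype=float)
print(downscale_temperature(t_coarse, dem_coarse, dem_fine, zoom=2))
```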