
    Distributed Minimum Cut Approximation

    We study the problem of computing approximate minimum edge cuts by distributed algorithms. We use a standard synchronous message-passing model where, in each round, $O(\log n)$ bits can be transmitted over each edge (a.k.a. the CONGEST model). We present a distributed algorithm that, for any weighted graph and any $\epsilon \in (0, 1)$, with high probability finds a cut of size at most $O(\epsilon^{-1}\lambda)$ in $O(D) + \tilde{O}(n^{1/2+\epsilon})$ rounds, where $\lambda$ is the size of the minimum cut and $D$ is the network diameter. This algorithm is based on a simple approach for analyzing random edge sampling, which we call the random layering technique. In addition, we present another distributed algorithm, based on a centralized algorithm due to Matula [SODA '93], that with high probability computes a cut of size at most $(2+\epsilon)\lambda$ in $\tilde{O}((D+\sqrt{n})/\epsilon^5)$ rounds for any $\epsilon>0$. The time complexities of both of these algorithms almost match the $\tilde{\Omega}(D + \sqrt{n})$ lower bound of Das Sarma et al. [STOC '11], thus answering an open question raised by Elkin [SIGACT News '04] and Das Sarma et al. [STOC '11]. Furthermore, we strengthen the lower bound of Das Sarma et al. by extending it to unweighted graphs. We show that the same lower bound also holds for unweighted multigraphs (or equivalently for weighted graphs in which $O(w\log n)$ bits can be transmitted in each round over an edge of weight $w$), even if the diameter is $D=O(\log n)$. For unweighted simple graphs, we show that even for networks of diameter $\tilde{O}(\frac{1}{\lambda}\cdot\sqrt{\frac{n}{\alpha\lambda}})$, finding an $\alpha$-approximate minimum cut in networks of edge connectivity $\lambda$, or computing an $\alpha$-approximation of the edge connectivity, requires $\tilde{\Omega}(D + \sqrt{\frac{n}{\alpha\lambda}})$ rounds.
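
    The random layering analysis rests on uniform random edge sampling. As a rough, centralized illustration only (not the paper's CONGEST algorithm; the function names, the sampling probability p, and the trial count are illustrative assumptions), the Python sketch below samples edges independently and, whenever the sampled subgraph disconnects, reports the cut that the resulting component partition induces in the original graph.

```python
# Minimal centralized sketch of the random-edge-sampling idea: a disconnected
# edge sample witnesses a cut of the original graph. Illustrative only; this
# is not the distributed algorithm from the paper.
import random

def connected_components(nodes, edges):
    """Union-find over `nodes` using only `edges`; returns node -> component id."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return {v: find(v) for v in nodes}

def sampled_cut(nodes, edges, p, trials=50):
    """Sample each edge with probability p; whenever the sample is disconnected,
    evaluate the induced cut in the *original* graph and keep the smallest seen."""
    best = None
    for _ in range(trials):
        sample = [e for e in edges if random.random() < p]
        comp = connected_components(nodes, sample)
        if len(set(comp.values())) == 1:
            continue                         # sample still connected: no cut witnessed
        target = next(iter(comp.values()))   # pick one component as one side of the cut
        side = {v for v in nodes if comp[v] == target}
        cut = [(u, v) for u, v in edges if (u in side) != (v in side)]
        if best is None or len(cut) < len(best):
            best = cut
    return best

if __name__ == "__main__":
    nodes = list(range(8))
    edges = [(i, (i + 1) % 8) for i in range(8)] + [(0, 4), (2, 6)]  # min cut = 2
    print("cut found:", sampled_cut(nodes, edges, p=0.5))
```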

    Globally Optimal Crowdsourcing Quality Management

    We study crowdsourcing quality management: given worker responses to a set of tasks, our goal is to jointly estimate the true answers for the tasks as well as the quality of the workers. Prior work on this problem relies primarily on applying Expectation-Maximization (EM) to the underlying maximum likelihood problem to estimate true answers as well as worker quality. Unfortunately, EM only provides a locally optimal solution rather than a globally optimal one. Other solutions to the problem (that do not leverage EM) fail to provide global optimality guarantees as well. In this paper, we focus on filtering, where tasks require the evaluation of a yes/no predicate, and rating, where tasks elicit integer scores from a finite domain. We design algorithms for finding the globally optimal estimates of correct task answers and worker quality for the underlying maximum likelihood problem, and characterize the complexity of these algorithms. Our algorithms conceptually consider all mappings from tasks to true answers (typically a very large number), leveraging two key ideas to reduce, by several orders of magnitude, the number of mappings under consideration, while preserving optimality. We also demonstrate that these algorithms often find more accurate estimates than EM-based algorithms. This paper makes an important contribution towards understanding the inherent complexity of globally optimal crowdsourcing quality management.
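
    For intuition about what "conceptually considering all mappings from tasks to true answers" means in the filtering setting, the sketch below computes a globally optimal maximum-likelihood estimate by brute force under a simple one-error-rate-per-worker model. It is exponential in the number of tasks and is not the paper's pruned algorithm; the worker model and all names are illustrative assumptions.

```python
# Brute-force globally optimal ML estimation for yes/no (filtering) tasks,
# under an assumed one-parameter-per-worker error model. Illustrative only.
from itertools import product
from math import log

def log_likelihood(truth, responses):
    """truth: dict task -> 0/1; responses: list of (worker, task, answer).
    Each worker w is assigned its ML error rate e_w = errors_w / answers_w."""
    errs, tot = {}, {}
    for w, t, a in responses:
        tot[w] = tot.get(w, 0) + 1
        errs[w] = errs.get(w, 0) + (a != truth[t])
    ll = 0.0
    for w, t, a in responses:
        e = min(max(errs[w] / tot[w], 1e-9), 1 - 1e-9)  # clamp away from 0 and 1
        ll += log(e) if a != truth[t] else log(1 - e)
    return ll

def global_ml(tasks, responses):
    """Enumerate every mapping from tasks to true answers and keep the best."""
    best, best_ll = None, float("-inf")
    for bits in product([0, 1], repeat=len(tasks)):     # all 2^|tasks| mappings
        truth = dict(zip(tasks, bits))
        ll = log_likelihood(truth, responses)
        if ll > best_ll:
            best, best_ll = truth, ll
    return best, best_ll

if __name__ == "__main__":
    tasks = ["t1", "t2", "t3"]
    responses = [("w1", "t1", 1), ("w2", "t1", 1), ("w3", "t1", 0),
                 ("w1", "t2", 0), ("w2", "t2", 0), ("w3", "t2", 0),
                 ("w1", "t3", 1), ("w2", "t3", 0), ("w3", "t3", 1)]
    print(global_ml(tasks, responses))
```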

    On $r$-Simple $k$-Path

    An $r$-simple $k$-path is a path in a graph of length $k$ that passes through each vertex at most $r$ times. The $r$-SIMPLE $k$-PATH problem, given a graph $G$ as input, asks whether there exists an $r$-simple $k$-path in $G$. We first show that this problem is NP-complete. We then show that there is a graph $G$ that contains an $r$-simple $k$-path and no simple path of length greater than $4\log k/\log r$. This, in a sense, motivates the problem, especially when one's goal is to find a short path that visits many vertices in the graph while bounding the number of visits at each vertex. We then give a randomized algorithm with one-sided error that runs in time $\mathrm{poly}(n)\cdot 2^{O(k\cdot\log r/r)}$ and solves $r$-SIMPLE $k$-PATH on a graph with $n$ vertices. We also show that a randomized algorithm with running time $\mathrm{poly}(n)\cdot 2^{(c/2)k/r}$ with $c<1$ would give a randomized algorithm with running time $\mathrm{poly}(n)\cdot 2^{cn}$ for the Hamiltonian path problem in a directed graph, an outstanding open problem. So, in a sense, our algorithm is optimal up to an $O(\log r)$ factor.
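
    As a concrete (but exponential-time) reference point, the sketch below decides $r$-SIMPLE $k$-PATH by exploring walks and pruning as soon as some vertex would be visited more than $r$ times; it interprets "length $k$" as $k$ edges. This only illustrates the problem definition and is not the paper's randomized $\mathrm{poly}(n)\cdot 2^{O(k\log r/r)}$ algorithm.

```python
# Brute-force decision procedure for r-SIMPLE k-PATH with pruning on the
# per-vertex visit bound r. Illustrative only; "length k" taken as k edges.
def has_r_simple_k_path(adj, r, k):
    """adj: dict vertex -> iterable of neighbours (directed or undirected)."""
    def extend(v, visits, edges_used):
        if edges_used == k:
            return True
        for w in adj.get(v, ()):
            if visits.get(w, 0) + 1 > r:
                continue                      # would exceed r visits: prune
            visits[w] = visits.get(w, 0) + 1
            if extend(w, visits, edges_used + 1):
                return True
            visits[w] -= 1
        return False

    for start in adj:
        if extend(start, {start: 1}, 0):
            return True
    return False

if __name__ == "__main__":
    triangle = {0: [1], 1: [2], 2: [0]}               # directed 3-cycle
    print(has_r_simple_k_path(triangle, r=2, k=5))    # True: loop around twice
    print(has_r_simple_k_path(triangle, r=1, k=3))    # False: only 3 vertices
```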

    Almost-Tight Distributed Minimum Cut Algorithms

    We study the problem of computing the minimum cut in weighted distributed message-passing networks (the CONGEST model). Let $\lambda$ be the minimum cut, $n$ be the number of nodes in the network, and $D$ be the network diameter. Our algorithm can compute $\lambda$ exactly in $O((\sqrt{n}\log^{*}n+D)\lambda^4\log^2 n)$ time. To the best of our knowledge, this is the first paper that explicitly studies computing the exact minimum cut in the distributed setting. Previously, non-trivial sublinear-time algorithms for this problem were known only for unweighted graphs when $\lambda\leq 3$, due to Pritchard and Thurimella's $O(D)$-time and $O(D+n^{1/2}\log^{*}n)$-time algorithms for computing 2-edge-connected and 3-edge-connected components. By using Karger's edge sampling technique, we can convert this algorithm into a $(1+\epsilon)$-approximation $O((\sqrt{n}\log^{*}n+D)\epsilon^{-5}\log^3 n)$-time algorithm for any $\epsilon>0$. This improves over the previous $(2+\epsilon)$-approximation $O((\sqrt{n}\log^{*}n+D)\epsilon^{-5}\log^2 n\log\log n)$-time algorithm and the $O(\epsilon^{-1})$-approximation $O(D+n^{\frac{1}{2}+\epsilon}\,\mathrm{poly}\log n)$-time algorithm of Ghaffari and Kuhn. Due to the lower bound of $\Omega(D+n^{1/2}/\log n)$ by Das Sarma et al., which holds for any approximation algorithm, this running time is tight up to a $\mathrm{poly}\log n$ factor. To get the stated running time, we developed an approximation algorithm which combines the ideas of Thorup's algorithm and Matula's contraction algorithm. It saves an $\epsilon^{-9}\log^{7}n$ factor as compared to applying Thorup's tree packing theorem directly. Then, we combine Kutten and Peleg's tree partitioning algorithm and Karger's dynamic programming to achieve an efficient distributed algorithm that finds the minimum cut when we are given a spanning tree that crosses the minimum cut exactly once.
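
    To make the final step concrete: if a spanning tree crosses the minimum cut exactly once, that cut is obtained by removing a single tree edge and separating the two resulting components. The naive centralized sketch below simply evaluates this induced cut weight for every tree edge; Karger's dynamic programming, and the distributed version described in the paper, achieve this far more efficiently. The names and the example graph are illustrative assumptions.

```python
# Naive centralized version of "find the best cut crossed exactly once by a
# given spanning tree": for each tree edge, weigh the cut between the two
# components of T minus that edge. Illustrative only.
from collections import defaultdict

def best_one_edge_tree_cut(edges, tree_edges):
    """edges: list of (u, v, w); tree_edges: list of (u, v) forming a spanning
    tree. Returns (cut_weight, tree_edge) minimized over tree edges."""
    tree_adj = defaultdict(list)
    for u, v in tree_edges:
        tree_adj[u].append(v)
        tree_adj[v].append(u)

    def side_of(removed):
        """Vertices reachable from removed[0] in T without using `removed`."""
        a, b = removed
        seen, stack = {a}, [a]
        while stack:
            x = stack.pop()
            for y in tree_adj[x]:
                if {x, y} == {a, b} or y in seen:
                    continue
                seen.add(y)
                stack.append(y)
        return seen

    best = (float("inf"), None)
    for te in tree_edges:
        side = side_of(te)
        w = sum(w_ for u, v, w_ in edges if (u in side) != (v in side))
        best = min(best, (w, te))
    return best

if __name__ == "__main__":
    edges = [(0, 1, 5), (2, 3, 5), (1, 2, 1), (3, 0, 1)]   # min cut {0,1}|{2,3} = 2
    tree = [(0, 1), (1, 2), (2, 3)]                        # crosses that cut once
    print(best_one_edge_tree_cut(edges, tree))             # expect (2, (1, 2))
```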

    Quantum effect induced reverse kinetic molecular sieving in microporous materials

    We report kinetic molecular sieving of hydrogen and deuterium in zeolite rho at low temperatures, using atomistic molecular dynamics simulations incorporating quantum effects via the Feynman-Hibbs approach. We find that diffusivities of confined molecules decrease when quantum effects are considered, in contrast with bulk fluids, which show an increase. Indeed, at low temperatures, a reverse kinetic sieving effect is demonstrated in which the heavier isotope, deuterium, diffuses faster than hydrogen. At 65 K, the flux selectivity is as high as 46, indicating good potential for isotope separation.
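
    For reference, a commonly used way to incorporate quantum effects along these lines is the quadratic Feynman-Hibbs effective pair potential (the specific variant adopted in the paper may differ):

```latex
% Quadratic Feynman-Hibbs effective pair potential (a commonly used variant;
% the exact form used in the paper may differ). U is the classical pair
% potential, mu the reduced mass of the interacting pair, beta = 1/(k_B T).
\[
  U_{\mathrm{FH}}(r) \;=\; U(r) \;+\; \frac{\beta\hbar^{2}}{24\,\mu}
  \left( U''(r) + \frac{2}{r}\,U'(r) \right),
  \qquad \beta = \frac{1}{k_{B}T}, \quad
  \frac{1}{\mu} = \frac{1}{m_{1}} + \frac{1}{m_{2}}.
\]
```

    Because the correction scales as $1/(\mu T)$, it is larger for the lighter isotope and grows as the temperature drops, which is qualitatively consistent with the reverse kinetic sieving reported here.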

    Tunable Electron Multibunch Production in Plasma Wakefield Accelerators

    Synchronized, independently tunable and focused μJ-class laser pulses are used to release multiple electron populations via photo-ionization inside an electron-beam driven plasma wave. By varying the laser foci in the laboratory frame and the position of the underdense photocathodes in the co-moving frame, the delays between the produced bunches and their energies are adjusted. The resulting multibunches have ultra-high quality and brightness, allowing for hitherto impossible bunch configurations such as spatially overlapping bunch populations with strictly separated energies, which opens up a new regime for light sources such as free-electron lasers.

    The future of social is personal: the potential of the personal data store

    This chapter argues that technical architectures that facilitate the longitudinal, decentralised and individual-centric personal collection and curation of data will be an important, but partial, response to the pressing problem of the autonomy of the data subject, and the asymmetry of power between the subject and large-scale service providers/data consumers. Towards framing the scope and role of such Personal Data Stores (PDSes), the legalistic notion of personal data is examined, and it is argued that a more inclusive, intuitive notion expresses more accurately what individuals require in order to preserve their autonomy in a data-driven world of large aggregators. Six challenges towards realising the PDS vision are set out: the requirement to store data for long periods; the difficulties of managing data for individuals; the need to reconsider the regulatory basis for third-party access to data; the need to comply with international data handling standards; the need to integrate privacy-enhancing technologies; and the need to future-proof data gathering against the evolution of social norms. The open experimental PDS platform INDX is introduced and described, as a means of beginning to address at least some of these six challenges.

    Biogenic Nitrogen Gas Production at the Oxic–Anoxic Interface in the Cariaco Basin, Venezuela

    Excess nitrogen gas (N₂xs) was measured in samples collected at six locations in the eastern and western sub-basins of the Cariaco Basin, Venezuela, in September 2008 (non-upwelling conditions) and March 2009 (upwelling conditions). During both sampling periods, N₂xs concentrations were below detection in surface waters, increasing to ~22 μmol N kg⁻¹ at the oxic–anoxic interface ([O₂] < ~4 μmol kg⁻¹, ~250 m). Below the oxic–anoxic interface (300–400 m), the average concentration of N₂xs was 24.7 ± 1.9 μmol N kg⁻¹ in September 2008 and 27.5 ± 2.0 μmol N kg⁻¹ in March 2009, i.e., N₂xs concentrations within this depth interval were ~3 μmol N kg⁻¹ higher (p < 0.001) during the upwelling season compared to the non-upwelling period. These results suggest that N-loss in the Cariaco Basin may vary seasonally in response to changes in the flux of sinking particulate organic matter. We attribute the increase in N₂xs concentrations, or N-loss, observed during upwelling to: (1) higher availability of fixed nitrogen derived from suspended and sinking particles at the oxic–anoxic interface and/or (2) enhanced ventilation at the oxic–anoxic interface during upwelling.

    Variability of Surface Pigment Concentrations in the South Atlantic Bight

    A 1-year time sequence (November 1978 through October 1979) of surface pigment images from the South Atlantic Bight (SAB) was derived from the Nimbus 7 coastal zone color scanner. This data set is augmented with in situ observations of hydrographic parameters, freshwater discharge, sea level, coastal winds, and currents for the purpose of examining the coupling between physical processes and the spatial and temporal variability of the surface pigment fields. The SAB is divided into three regions: the east Florida shelf, the Georgia-South Carolina shelf, and the Carolina Capes. Six-month seasonal mean pigment fields and time series of mean values within subregions were generated. While the seasonal mean isopleths were closely oriented along isobaths, significant differences between seasons were found in each region. These differences are explained by correlating the pigment time series with physical parameters and processes known to be important in the SAB. Specifically, summertime concentrations between Cape Romain and Cape Canaveral were greater than those in winter, but the opposite was true north of Cape Romain. It is suggested that during the abnormally high freshwater discharge in the winter-spring of 1979, Cape Romain and Cape Fear were the major sites of cross-shelf transport, while the cross-shelf exchange during the fall of 1979 occurred just north of Cape Canaveral. Finally, the alongshore band of high pigment concentrations increased in width throughout the year in the vicinity of Charleston, but near Jacksonville it exhibited a minimum width in the summer and a maximum width in the fall of 1979.