
    Mechanical and abrasion wear properties of hydrogenated nitrile butadiene rubber of identical hardness filled with carbon black and silica

    The mechanical and abrasive wear properties of a hydrogenated nitrile butadiene rubber (HNBR) filled with 35 parts per hundred rubber of carbon black, or of silica with and without silane surface treatment (SI-si and SI, respectively), were investigated. Specimens were subjected to dynamic mechanical thermal analysis (also to study the Payne effect), mechanical tests (hardness, tensile modulus, ultimate tensile strength and strain, Mullins effect, and tear strength), and fracture mechanical (J-integral) tests. The abrasive coefficient of friction and the wear (specific wear rate, Ws) of the HNBR compounds of identical hardness were measured against abrasive papers of different grit sizes (P600-P5000). The worn surfaces of the HNBR systems were inspected by scanning electron microscopy, and the typical wear mechanisms were deduced and discussed. The coefficient of friction did not change with grit size, in contrast to Ws, which was markedly reduced with decreasing surface roughness of the abrasive paper. Ws of the compounds did not vary between the P3000 and P5000 abrasive papers, whose mean surface roughness values are 7 and 5 μm, respectively. This was attributed to a change from abrasion-type to sliding-type wear. The carbon black-filled HNBR outperformed the silica-filled versions with respect to Ws, though it exhibited the highest coefficient of friction. No definite correlation could be found between the abrasive wear and the studied dynamic mechanical thermal and (fracture) mechanical properties.
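    As an aside on the reported quantity: the specific wear rate Ws is conventionally defined as volume loss per unit normal load and sliding distance, in mm^3/(N·m). A minimal sketch of that arithmetic in Python follows; all numbers are purely illustrative assumptions, not data from the paper.

    ```python
    # Sketch of the conventional specific wear rate calculation:
    # Ws = volume loss / (normal load * sliding distance).
    # All numbers below are illustrative, not data from the paper.

    def specific_wear_rate(mass_loss_g, density_g_cm3, normal_load_N, sliding_distance_m):
        """Return Ws in mm^3/(N*m)."""
        volume_loss_mm3 = (mass_loss_g / density_g_cm3) * 1000.0  # cm^3 -> mm^3
        return volume_loss_mm3 / (normal_load_N * sliding_distance_m)

    # Example: 5 mg mass loss, assumed density ~1.2 g/cm^3, 10 N load, 100 m sliding.
    print(specific_wear_rate(0.005, 1.2, 10.0, 100.0))  # ~4.2e-3 mm^3/(N*m)
    ```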

    Distributed Minimum Cut Approximation

    We study the problem of computing approximate minimum edge cuts by distributed algorithms. We use a standard synchronous message-passing model where in each round, $O(\log n)$ bits can be transmitted over each edge (a.k.a. the CONGEST model). We present a distributed algorithm that, for any weighted graph and any $\epsilon \in (0, 1)$, with high probability finds a cut of size at most $O(\epsilon^{-1}\lambda)$ in $O(D) + \tilde{O}(n^{1/2 + \epsilon})$ rounds, where $\lambda$ is the size of the minimum cut and $D$ is the network diameter. This algorithm is based on a simple approach for analyzing random edge sampling, which we call the random layering technique. In addition, we present another distributed algorithm, based on a centralized algorithm due to Matula [SODA '93], that with high probability computes a cut of size at most $(2+\epsilon)\lambda$ in $\tilde{O}((D+\sqrt{n})/\epsilon^5)$ rounds for any $\epsilon>0$. The time complexities of both of these algorithms almost match the $\tilde{\Omega}(D + \sqrt{n})$ lower bound of Das Sarma et al. [STOC '11], thus leading to an answer to an open question raised by Elkin [SIGACT News '04] and Das Sarma et al. [STOC '11]. Furthermore, we strengthen the lower bound of Das Sarma et al. by extending it to unweighted graphs. We show that the same lower bound also holds for unweighted multigraphs (or equivalently for weighted graphs in which $O(w\log n)$ bits can be transmitted in each round over an edge of weight $w$), even if the diameter is $D=O(\log n)$. For unweighted simple graphs, we show that even for networks of diameter $\tilde{O}(\frac{1}{\lambda}\cdot \sqrt{\frac{n}{\alpha\lambda}})$, finding an $\alpha$-approximate minimum cut in networks of edge connectivity $\lambda$, or computing an $\alpha$-approximation of the edge connectivity, requires $\tilde{\Omega}(D + \sqrt{\frac{n}{\alpha\lambda}})$ rounds.
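    To give a flavor of the random edge sampling underlying the first algorithm, here is a minimal, centralized sketch in Python. It is an illustration under our own assumptions (Karger-style sampling on an explicit edge list), not the paper's distributed procedure: if each edge survives independently with probability $p$, the sampled subgraph stays connected with high probability roughly once $p\lambda = \Omega(\log n)$, so the smallest $p$ that reliably preserves connectivity hints at the edge connectivity $\lambda$.

    ```python
    # Centralized illustration of Karger-style random edge sampling (our own
    # simplification, not the paper's distributed algorithm). Vertices are
    # 0..n-1 and edges is a list of (u, v) pairs.
    import random

    def connected(n, edges):
        """Union-find connectivity check for the sampled subgraph."""
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        components = n
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
        return components == 1

    def sampled_connectivity_rate(n, edges, p, trials=200):
        """Fraction of trials in which the p-sampled subgraph stays connected.
        Connectivity becomes likely roughly once p * lambda = Omega(log n)."""
        hits = 0
        for _ in range(trials):
            sample = [e for e in edges if random.random() < p]
            hits += connected(n, sample)
        return hits / trials
    ```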

    Efficient crowdsourcing for multi-class labeling

    Crowdsourcing systems like Amazon's Mechanical Turk have emerged as an effective large-scale human-powered platform for performing tasks in domains such as image classification, data entry, recommendation, and proofreading. Since workers are low-paid (a few cents per task) and the tasks are monotonous, the answers obtained are noisy and hence unreliable. To obtain reliable estimates, it is essential to utilize appropriate inference algorithms (e.g., majority voting) coupled with structured redundancy through task assignment. Our goal is to obtain the best possible trade-off between reliability and redundancy. In this paper, we consider a general probabilistic model for noisy observations in crowdsourcing systems and pose the problem of minimizing the total price (i.e., redundancy) that must be paid to achieve a target overall reliability. Concretely, we show that it is possible to obtain the correct answer to each task with probability 1-ε as long as the redundancy per task is O((K/q) log (K/ε)), where each task can have any of K distinct answers, all equally likely, and q is the crowd-quality parameter defined through the probabilistic model. Moreover, this is effectively the best redundancy-accuracy trade-off any system design can achieve. Such a single-parameter, crisp characterization of the (order-)optimal trade-off between redundancy and reliability has various useful operational consequences. Further, we analyze the robustness of our approach in the presence of adversarial workers and provide a bound on their influence on the redundancy-accuracy trade-off. Unlike recent prior work [GKM11, KOS11, KOS11], our result applies to non-binary (i.e., K>2) tasks. In effect, we utilize algorithms for binary tasks (with an inhomogeneous error model, unlike that in [GKM11, KOS11, KOS11]) as a key subroutine to obtain answers for K-ary tasks. Technically, the algorithm is based on a low-rank approximation of the weighted adjacency matrix of a random regular bipartite graph, weighted according to the answers provided by the workers.
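    To illustrate the low-rank idea behind the binary-task subroutine, here is a hedged sketch in Python/NumPy. It is our own simplification in the spirit of KOS-style spectral methods, not the paper's exact algorithm: arrange ±1 worker answers in a task-by-worker matrix and read estimated labels off its top singular vector.

    ```python
    # Hedged sketch of the low-rank idea for binary tasks, in the spirit of
    # KOS-style spectral methods; this is our own simplification, not the
    # paper's exact algorithm.
    import numpy as np

    def spectral_labels(A):
        """A[i, j] in {+1, -1, 0}: worker j's answer on task i (0 = no answer).
        Returns estimated +/-1 labels, one per task."""
        # The top singular vector captures the dominant rank-1 structure
        # (true labels times worker reliabilities).
        u, s, vt = np.linalg.svd(A, full_matrices=False)
        x = u[:, 0]
        # Resolve the global sign ambiguity by aligning with majority vote.
        if np.sign(x) @ A.sum(axis=1) < 0:
            x = -x
        return np.sign(x)
    ```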

    Globally Optimal Crowdsourcing Quality Management

    We study crowdsourcing quality management: given worker responses to a set of tasks, our goal is to jointly estimate the true answers for the tasks as well as the quality of the workers. Prior work on this problem relies primarily on applying Expectation-Maximization (EM) to the underlying maximum likelihood problem to estimate true answers as well as worker quality. Unfortunately, EM only provides a locally optimal solution rather than a globally optimal one. Other solutions to the problem (that do not leverage EM) fail to provide global optimality guarantees as well. In this paper, we focus on filtering, where tasks require the evaluation of a yes/no predicate, and rating, where tasks elicit integer scores from a finite domain. We design algorithms for finding the globally optimal estimates of correct task answers and worker quality for the underlying maximum likelihood problem, and we characterize the complexity of these algorithms. Our algorithms conceptually consider all mappings from tasks to true answers (typically a very large number), leveraging two key ideas to reduce, by several orders of magnitude, the number of mappings under consideration while preserving optimality. We also demonstrate that these algorithms often find more accurate estimates than EM-based algorithms. This paper makes an important contribution towards understanding the inherent complexity of globally optimal crowdsourcing quality management.
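    For contrast with the globally optimal search the paper develops, below is a minimal "one-coin" EM sketch for filtering (yes/no) tasks. EM of this kind converges only to a local optimum, which is exactly the gap the paper addresses; the model and variable names are our own simplification, not the paper's baseline.

    ```python
    # Minimal "one-coin" EM sketch for filtering (yes/no) tasks, for contrast:
    # it alternates answer posteriors and worker accuracies and converges only
    # to a local optimum. The model and names are our own simplification.
    import numpy as np

    def one_coin_em(A, iters=50):
        """A[i, j] in {1, 0, -1}: worker j's yes/no answer on task i (-1 = none).
        Returns (posterior P(yes) per task, estimated accuracy per worker)."""
        answered = A >= 0
        q = np.full(A.shape[1], 0.7)              # initial worker accuracies
        for _ in range(iters):
            # E-step: log-odds of "yes" per task under current accuracies.
            w = np.log(q / (1 - q))
            score = ((A == 1) * w - (A == 0) * w).sum(axis=1)
            p = 1.0 / (1.0 + np.exp(-score))
            # M-step: accuracy = expected fraction of answers matching the truth.
            agree = (A == 1) * p[:, None] + (A == 0) * (1 - p)[:, None]
            q = agree.sum(axis=0) / np.maximum(answered.sum(axis=0), 1)
            q = np.clip(q, 0.01, 0.99)            # keep log-odds finite
        return p, q
    ```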

    On $r$-Simple $k$-Path

    An $r$-simple $k$-path is a path in a graph of length $k$ that passes through each vertex at most $r$ times. The $r$-SIMPLE $k$-PATH problem, given a graph $G$ as input, asks whether there exists an $r$-simple $k$-path in $G$. We first show that this problem is NP-complete. We then show that there is a graph $G$ that contains an $r$-simple $k$-path but no simple path of length greater than $4\log k/\log r$. This, in a sense, motivates the problem, especially when one's goal is to find a short path that visits many vertices in the graph while bounding the number of visits at each vertex. We then give a randomized algorithm that runs in time $\mathrm{poly}(n)\cdot 2^{O(k\cdot \log r/r)}$ and solves $r$-SIMPLE $k$-PATH on a graph with $n$ vertices with one-sided error. We also show that a randomized algorithm with running time $\mathrm{poly}(n)\cdot 2^{(c/2)k/r}$ with $c<1$ would give a randomized algorithm with running time $\mathrm{poly}(n)\cdot 2^{cn}$ for the Hamiltonian path problem in a directed graph, an outstanding open problem. So, in a sense, our algorithm is optimal up to an $O(\log r)$ factor.
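    As a concrete rendering of the definition, here is a small helper in Python that checks whether a given walk is an $r$-simple $k$-path. It assumes the path's length is counted in edges, and it is a definition checker only, not the paper's randomized algorithm.

    ```python
    # Checks the definition of an r-simple k-path: a walk with k edges whose
    # steps all follow graph edges and that visits no vertex more than r times.
    # Assumption: "length k" counts edges. Not the paper's algorithm.
    from collections import Counter

    def is_r_simple_k_path(walk, adj, r, k):
        """walk: vertex sequence; adj: dict mapping vertex -> set of neighbors."""
        if len(walk) != k + 1:                       # a length-k path has k edges
            return False
        if any(v not in adj.get(u, set()) for u, v in zip(walk, walk[1:])):
            return False                             # every step must be an edge
        return max(Counter(walk).values()) <= r      # each vertex used <= r times
    ```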