
    Shape memory performance of asymmetrically reinforced epoxy/carbon fibre fabric composites in flexure

    In this study, asymmetrically reinforced epoxy (EP)/carbon fibre (CF) fabric composites were prepared and their shape memory properties were quantified in both unconstrained and fully constrained flexural tests performed in a dynamic mechanical analyser (DMA). Asymmetric layering was achieved by incorporating two and four CF fabric layers, thereby setting resin-rich to reinforcement-rich layer ratios of 1/4 and 1/2, respectively. The recovery stress increased markedly with increasing CF content, and it was always higher when the CF-rich layer was loaded locally in tension. Specimens with CF-rich layers on the tension side yielded a better shape fixity ratio than those with the reinforcement layering on the compression side. Cyclic unconstrained shape memory tests were also run for up to five cycles on specimens having the CF-rich layer under local tension; this resulted in only marginal changes in the shape fixity and recovery ratios.
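
    For reference, the shape fixity and recovery ratios discussed above are usually defined per thermomechanical cycle N as sketched below; these are the standard definitions, stated here as an assumption rather than quoted from the paper, with \varepsilon_m the programmed (maximum) strain, \varepsilon_u(N) the strain retained after unloading and \varepsilon_p(N) the residual strain after recovery.

        % standard cyclic definitions (assumed, not quoted from the paper)
        R_f(N) = \frac{\varepsilon_u(N)}{\varepsilon_m} \times 100\,\% , \qquad
        R_r(N) = \frac{\varepsilon_m - \varepsilon_p(N)}{\varepsilon_m - \varepsilon_p(N-1)} \times 100\,\%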

    The Lazy Bureaucrat Scheduling Problem

    We introduce a new class of scheduling problems in which the optimization is performed by the worker (a single ``machine'') who performs the tasks. A typical worker's objective is to minimize the amount of work he does (he is ``lazy''), or more generally, to schedule as inefficiently (in some sense) as possible. The worker is subject to the constraint that he must be busy when there is work that he can do; we make this notion precise in both the preemptive and nonpreemptive settings. The resulting class of ``perverse'' scheduling problems, which we denote ``Lazy Bureaucrat Problems,'' gives rise to a rich set of new questions that explore the distinction between maximization and minimization in computing optimal schedules. Comment: 19 pages, 2 figures, LaTeX. To appear in Information and Computation.
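
    As a concrete illustration of the busy requirement in the nonpreemptive setting, the toy Python sketch below brute-forces the laziest feasible schedule for tasks given as (release time, processing time, deadline) triples; this task model and the objective of minimizing total time worked are illustrative assumptions, not the paper's formal definitions.

        from itertools import permutations

        def simulate(order, tasks):
            """Follow a priority order while obeying the busy rule: whenever some
            task is executable (released and still able to meet its deadline),
            the bureaucrat must start one. Returns the total time worked."""
            t, worked = 0.0, 0.0
            remaining = list(order)
            while remaining:
                runnable = [i for i in remaining
                            if tasks[i][0] <= t and t + tasks[i][1] <= tasks[i][2]]
                if runnable:
                    i = runnable[0]                      # highest-priority runnable task
                    t += tasks[i][1]
                    worked += tasks[i][1]
                    remaining.remove(i)
                else:
                    # idle until the next release of a task that can still be finished
                    future = [tasks[i][0] for i in remaining
                              if tasks[i][0] > t and tasks[i][0] + tasks[i][1] <= tasks[i][2]]
                    if not future:
                        break                            # nothing can ever run: go home
                    t = min(future)
            return worked

        def laziest_schedule(tasks):
            """Minimum total work over all priority orders (toy instance sizes only)."""
            return min(simulate(p, tasks) for p in permutations(range(len(tasks))))

        # Two tasks released at time 0: doing the long one first makes the short one
        # miss its deadline, so the lazy optimum is 5 rather than 1 + 5 = 6.
        tasks = [(0.0, 1.0, 2.0), (0.0, 5.0, 10.0)]
        print(laziest_schedule(tasks))  # -> 5.0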

    Information scraps: how and why information eludes our personal information management tools

    In this paper we describe information scraps -- a class of personal information whose content is scribbled on Post-it notes, scrawled on corners of random sheets of paper, buried inside the bodies of e-mail messages sent to ourselves, or typed haphazardly into text files. Information scraps hold our great ideas, sketches, notes, reminders, driving directions, and even our poetry. We define information scraps to be the body of personal information that is held outside of its natural or intended place. We have much still to learn about these loose forms of information capture. Why are they so often held outside of our traditional PIM locations and instead on Post-its or in text files? Why must we sometimes go around our traditional PIM applications to hold on to our scraps, such as by e-mailing ourselves? What is the role of information scraps in the larger space of personal information management, and what do they uniquely offer that we find so appealing? If these unorganized bits truly indicate the failure of our PIM tools, how might we begin to build better tools? We have pursued these questions by undertaking a study of 27 knowledge workers. In our findings we describe information scraps from several angles: their content, their location, and the factors that lead to their use, which we identify as ease of capture, flexibility of content and organization, and availability at the time of need. We also consider the personal emotive responses around scrap management. We present a set of design considerations derived from the analysis of our study results, and describe our work on an application platform, jourknow, to test some of these design and usability findings.

    Decremental All-Pairs ALL Shortest Paths and Betweenness Centrality

    We consider the all pairs all shortest paths (APASP) problem, which maintains the shortest path DAG rooted at every vertex in a directed graph G=(V,E) with positive edge weights. For this problem we present a decremental algorithm that supports the deletion of a vertex, or weight increases on edges incident to a vertex. Our algorithm runs in amortized O((\nu^*)^2 \cdot \log n) time per update, where n=|V| and \nu^* bounds the number of edges that lie on shortest paths through any given vertex. Our APASP algorithm can be used for the decremental computation of betweenness centrality (BC), a graph parameter that is widely used in the analysis of large complex networks. No nontrivial decremental algorithm for either problem was known prior to our work. Our method is a generalization of the decremental algorithm of Demetrescu and Italiano [DI04] for unique shortest paths, and for graphs with \nu^* = O(n) we match the bound in [DI04]. Thus, for graphs with a constant number of shortest paths between any pair of vertices, our algorithm maintains APASP and BC scores in amortized O(n^2 \log n) time under decremental updates, regardless of the number of edges in the graph. Comment: An extended abstract of this paper will appear in Proc. ISAAC 201
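
    The decremental algorithm itself is not reproduced here, but the objects it maintains are easy to state: the shortest path DAG and path counts from every source, from which BC follows by Brandes-style dependency accumulation. The static Python baseline below illustrates exactly that (a dict-of-dicts adjacency with positive weights is an assumed input format); a decremental algorithm avoids recomputing all of this from scratch after every update.

        import heapq
        from collections import defaultdict

        def sssp_dag(graph, s):
            """Dijkstra from s; returns distances, shortest-path counts and the
            predecessor lists forming the shortest path DAG rooted at s."""
            dist = {v: float("inf") for v in graph}
            sigma = defaultdict(float)        # number of shortest s-v paths
            preds = defaultdict(list)         # predecessors in the shortest path DAG
            dist[s], sigma[s] = 0.0, 1.0
            pq, order = [(0.0, s)], []
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist[u]:
                    continue                  # stale queue entry
                order.append(u)
                for v, w in graph[u].items():
                    nd = d + w
                    if nd < dist[v]:
                        dist[v], sigma[v], preds[v] = nd, sigma[u], [u]
                        heapq.heappush(pq, (nd, v))
                    elif nd == dist[v]:
                        sigma[v] += sigma[u]
                        preds[v].append(u)
            return dist, sigma, preds, order

        def betweenness(graph):
            """Betweenness centrality from the per-source DAGs (Brandes accumulation)."""
            bc = defaultdict(float)
            for s in graph:
                _, sigma, preds, order = sssp_dag(graph, s)
                delta = defaultdict(float)
                for w in reversed(order):     # accumulate dependencies bottom-up
                    for v in preds[w]:
                        delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
                    if w != s:
                        bc[w] += delta[w]
            return dict(bc)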

    The Transcriptional Landscape of Marek’s Disease Virus in Primary Chicken B Cells Reveals Novel Splice Variants and Genes

    Marek’s disease virus (MDV) is an oncogenic alphaherpesvirus that infects chickens and poses a serious threat to poultry health. In infected animals, MDV efficiently replicates in B cells in various lymphoid organs. Despite many years of research, the viral transcriptome in primary target cells of MDV remained unknown. In this study, we uncovered the transcriptional landscape of the very virulent RB1B strain and the attenuated CVI988/Rispens vaccine strain in primary chicken B cells using high-throughput RNA-sequencing. Our data confirmed the expression of known genes, but also identified a novel spliced MDV gene in the unique short region of the genome. Furthermore, de novo transcriptome assembly revealed extensive splicing of viral genes, resulting in coding and non-coding RNA transcripts. A novel splicing isoform of MDV UL15 could also be confirmed by mass spectrometry and RT-PCR. In addition, we demonstrate that the associated transcriptional motifs are highly conserved and closely resemble those of the host transcriptional machinery. Taken together, our data allow a comprehensive re-annotation of the MDV genome with novel genes and splice variants that could be targeted in further research on MDV replication and tumorigenesis.

    Almost-Tight Distributed Minimum Cut Algorithms

    We study the problem of computing the minimum cut in a weighted distributed message-passing network (the CONGEST model). Let \lambda be the minimum cut, n be the number of nodes in the network, and D be the network diameter. Our algorithm can compute \lambda exactly in O((\sqrt{n} \log^{*} n + D)\lambda^4 \log^2 n) time. To the best of our knowledge, this is the first paper that explicitly studies computing the exact minimum cut in the distributed setting. Previously, non-trivial sublinear-time algorithms for this problem were known only for unweighted graphs when \lambda \leq 3, due to Pritchard and Thurimella's O(D)-time and O(D + n^{1/2}\log^* n)-time algorithms for computing 2-edge-connected and 3-edge-connected components. By using Karger's edge sampling technique, we can convert this algorithm into a (1+\epsilon)-approximation O((\sqrt{n}\log^{*} n + D)\epsilon^{-5}\log^3 n)-time algorithm for any \epsilon > 0. This improves over the previous (2+\epsilon)-approximation O((\sqrt{n}\log^{*} n + D)\epsilon^{-5}\log^2 n\log\log n)-time algorithm and the O(\epsilon^{-1})-approximation O(D + n^{1/2+\epsilon}\mathrm{poly}\log n)-time algorithm of Ghaffari and Kuhn. Due to the lower bound of \Omega(D + n^{1/2}/\log n) by Das Sarma et al., which holds for any approximation algorithm, this running time is tight up to a \mathrm{poly}\log n factor. To get the stated running time, we developed an approximation algorithm which combines the ideas of Thorup's algorithm and Matula's contraction algorithm. It saves an \epsilon^{-9}\log^{7} n factor compared to applying Thorup's tree packing theorem directly. Then, we combine Kutten and Peleg's tree partitioning algorithm and Karger's dynamic programming to achieve an efficient distributed algorithm that finds the minimum cut when we are given a spanning tree that crosses the minimum cut exactly once.
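
    The distributed algorithm is not reproduced here, but the contraction idea it borrows from Matula and the sampling idea it borrows from Karger have a simple centralized ancestor: Karger's random contraction. The Python sketch below is only that sequential, unweighted baseline (it assumes a connected graph given as an edge list), not the CONGEST algorithm described in the abstract.

        import random

        def contract_once(n, edges):
            """One contraction run: repeatedly merge the endpoints of a uniformly
            random remaining edge until two super-nodes are left; return the number
            of original edges crossing the resulting cut."""
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            components, pool = n, list(edges)
            while components > 2:
                u, v = random.choice(pool)            # pool holds no self-loops
                parent[find(u)] = find(v)
                components -= 1
                pool = [(a, b) for a, b in pool if find(a) != find(b)]
            return sum(1 for a, b in edges if find(a) != find(b))

        def karger_min_cut(n, edges, trials=200):
            """Best cut over independent trials; success probability grows with trials."""
            return min(contract_once(n, edges) for _ in range(trials))

        # Toy example: two triangles joined by a single bridge, so the minimum cut is 1.
        edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
        print(karger_min_cut(6, edges))  # typically prints 1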

    Speeding up shortest path algorithms

    Given an arbitrary, non-negatively weighted, directed graph G=(V,E), we present an algorithm that computes all pairs shortest paths in time \mathcal{O}(m^* n + m \lg n + n T_\psi(m^*, n)), where m^* is the number of different edges contained in shortest paths and T_\psi(m^*, n) is the running time of an algorithm \psi that solves the single-source shortest path problem (SSSP). This is a substantial improvement over the trivial n-fold application of \psi, which runs in \mathcal{O}(n T_\psi(m, n)). Our algorithm uses \psi as a black box, hence any improvement to \psi also improves our algorithm. Furthermore, a combination of our method, Johnson's reweighting technique and topological sorting yields an \mathcal{O}(m^* n + m \lg n) all-pairs shortest path algorithm for arbitrarily-weighted directed acyclic graphs. In addition, we point out a connection between the complexity of a certain sorting problem defined on shortest paths and SSSP. Comment: 10 pages
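
    The paper's own algorithm is not reproduced here, but its second result leans on Johnson's reweighting, which can be sketched independently: one Bellman-Ford pass yields potentials h that make all edge weights non-negative while preserving shortest paths, after which any SSSP routine \psi (e.g. Dijkstra) can be run from each source. The Python sketch below covers only that reweighting step, under the assumption that vertices are numbered 0..n-1 and edges come as (u, v, w) triples.

        def johnson_reweight(n, edges):
            """Return (reweighted_edges, h) with all new weights non-negative,
            or None if the graph contains a negative cycle."""
            # Bellman-Ford from a virtual source linked to every vertex by a 0-weight edge.
            h = [0.0] * n
            for _ in range(n):                   # n relaxation rounds suffice
                changed = False
                for u, v, w in edges:
                    if h[u] + w < h[v]:
                        h[v] = h[u] + w
                        changed = True
                if not changed:
                    break
            else:
                # still improving after n rounds -> negative cycle
                for u, v, w in edges:
                    if h[u] + w < h[v]:
                        return None
            reweighted = [(u, v, w + h[u] - h[v]) for u, v, w in edges]  # all >= 0
            return reweighted, h

        # Distances computed on the reweighted graph map back via
        #   dist(s, t) = dist'(s, t) - h[s] + h[t].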

    Distributed Minimum Cut Approximation

    We study the problem of computing approximate minimum edge cuts by distributed algorithms. We use a standard synchronous message-passing model where in each round, O(\log n) bits can be transmitted over each edge (a.k.a. the CONGEST model). We present a distributed algorithm that, for any weighted graph and any \epsilon \in (0, 1), with high probability finds a cut of size at most O(\epsilon^{-1}\lambda) in O(D) + \tilde{O}(n^{1/2 + \epsilon}) rounds, where \lambda is the size of the minimum cut and D is the network diameter. This algorithm is based on a simple approach for analyzing random edge sampling, which we call the random layering technique. In addition, we present another distributed algorithm, based on a centralized algorithm due to Matula [SODA '93], that with high probability computes a cut of size at most (2+\epsilon)\lambda in \tilde{O}((D+\sqrt{n})/\epsilon^5) rounds for any \epsilon > 0. The time complexities of both of these algorithms almost match the \tilde{\Omega}(D + \sqrt{n}) lower bound of Das Sarma et al. [STOC '11], thus answering an open question raised by Elkin [SIGACT News '04] and Das Sarma et al. [STOC '11]. Furthermore, we strengthen the lower bound of Das Sarma et al. by extending it to unweighted graphs. We show that the same lower bound also holds for unweighted multigraphs (or equivalently for weighted graphs in which O(w\log n) bits can be transmitted in each round over an edge of weight w), even if the diameter is D = O(\log n). For unweighted simple graphs, we show that even for networks of diameter \tilde{O}(\frac{1}{\lambda}\cdot \sqrt{\frac{n}{\alpha\lambda}}), finding an \alpha-approximate minimum cut in networks of edge connectivity \lambda or computing an \alpha-approximation of the edge connectivity requires \tilde{\Omega}(D + \sqrt{\frac{n}{\alpha\lambda}}) rounds.
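
    To give a feel for the random edge sampling that the random layering technique analyzes, the sequential Python sketch below keeps each edge independently with probability p and tests whether the sampled subgraph stays connected; by Karger-style sampling arguments this happens with high probability roughly when p\lambda = \Omega(\log n). This is an intuition-level, centralized illustration only, not the distributed algorithm of the abstract.

        import random

        def sampled_subgraph_connected(n, edges, p):
            """Keep each edge with probability p and test connectivity via union-find."""
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            components = n
            for u, v in edges:
                if random.random() < p:
                    ru, rv = find(u), find(v)
                    if ru != rv:
                        parent[ru] = rv
                        components -= 1
            return components == 1

        def connectivity_rate(n, edges, p, trials=100):
            """Fraction of trials in which the p-sampled subgraph remains connected;
            a sharp drop as p decreases hints at the size of the minimum cut."""
            hits = sum(sampled_subgraph_connected(n, edges, p) for _ in range(trials))
            return hits / trials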

    Thermal, viscoelastic and mechanical behavior of polypropylene with synthetic boehmite alumina nanoparticles

    Effects of nanofiller concentration and surface treatments on the morphology, thermal, viscoelastic and mechanical behavior of polypropylene copolymer (PP)/boehmite alumina (BA) nanocomposites were investigated. Both untreated BA particles and particles treated with octylsilane (OS) or with a sulphonic acid compound (OS2) were added at up to 10 wt% to produce nanocomposites by melt mixing followed by film blow molding and hot pressing. The dispersion of BA was studied by scanning electron microscopy. Differential scanning calorimetry and wide-angle X-ray scattering were adopted to detect changes in the crystalline structure of PP, and the thermooxidative degradation of the nanocomposites was assessed by thermogravimetric analysis. Dynamic mechanical analysis was used to study the viscoelastic behavior, whereas quasi-static tensile, creep and Elmendorf tear tests served to detect changes in mechanical performance. The BA nanoparticles were finely dispersed in PP up to 10 wt%, even when they were not surface modified. Resistance to thermal degradation was markedly improved by the BA nanomodification. Changes observed in the mechanical properties were attributed to BA dispersion, filler/matrix interactions and related effects, because the crystalline characteristics of the PP matrix remained practically unchanged upon BA modification.

    Incentivizing High Quality Crowdwork

    We study the causal effects of financial incentives on the quality of crowdwork. We focus on performance-based payments (PBPs), bonus payments awarded to workers for producing high quality work. We design and run randomized behavioral experiments on the popular crowdsourcing platform Amazon Mechanical Turk with the goal of understanding when, where, and why PBPs help, identifying properties of the payment, payment structure, and the task itself that make them most effective. We provide examples of tasks for which PBPs do improve quality. For such tasks, the effectiveness of PBPs is not too sensitive to the threshold for quality required to receive the bonus, while the magnitude of the bonus must be large enough to make the reward salient. We also present examples of tasks for which PBPs do not improve quality. Our results suggest that for PBPs to improve quality, the task must be effort-responsive: the task must allow workers to produce higher quality work by exerting more effort. We also give a simple method to determine a priori whether a task is effort-responsive. Furthermore, our experiments suggest that all payments on Mechanical Turk are, to some degree, implicitly performance-based, in that workers believe their work may be rejected if their performance is sufficiently poor. Finally, we propose a new model of worker behavior that extends the standard principal-agent model from economics to include a worker's subjective beliefs about his likelihood of being paid, and show that the predictions of this model are in line with our experimental findings. This model may be useful as a foundation for theoretical studies of incentives in crowdsourcing markets. Comment: This is a preprint of an article accepted for publication in WWW. Copyright 2015 International World Wide Web Conference Committee.
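
    The extended principal-agent view can be made concrete with a small numeric sketch: the worker chooses an effort level to maximize a subjectively weighted expected payment minus the cost of effort. All functional forms, parameter values and names below are illustrative assumptions, not the paper's calibrated model.

        def optimal_effort(base_pay, bonus, bonus_threshold, belief_accept,
                           cost_per_effort, efforts=(0.0, 0.25, 0.5, 0.75, 1.0)):
            """Effort level maximizing the worker's subjective expected utility."""
            def utility(e):
                quality = e                              # effort maps directly to quality
                p_accept = belief_accept(quality)        # subjective P(work not rejected)
                p_bonus = 1.0 if quality >= bonus_threshold else 0.0
                return p_accept * (base_pay + p_bonus * bonus) - cost_per_effort * e
            return max(efforts, key=utility)

        # Workers believe sloppy work may be rejected even without an explicit bonus.
        belief = lambda q: 0.5 + 0.5 * q

        print(optimal_effort(0.50, 0.00, 0.75, belief, 0.40))  # -> 0.0 (no bonus, low effort)
        print(optimal_effort(0.50, 0.50, 0.75, belief, 0.40))  # -> 1.0 (bonus raises effort)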