Shape memory performance of asymmetrically reinforced epoxy/carbon fibre fabric composites in flexure
In this study, asymmetrically reinforced epoxy (EP)/carbon fibre (CF) fabric composites were prepared and their shape memory properties were quantified in both unconstrained and fully constrained flexural tests performed in a dynamic mechanical analyser (DMA). Asymmetric layering was achieved by incorporating two and four CF fabric layers, thereby setting resin- to reinforcement-rich layer ratios of 1/4 and 1/2, respectively. The recovery stress increased markedly with increasing CF content, and it was always higher when the CF-rich layer was loaded locally in tension. Specimens with the CF-rich layer on the tension side also yielded a better shape fixity ratio than those with the reinforcement layering on the compression side. Cyclic unconstrained shape memory tests, run for up to five cycles on specimens having the CF-rich layer under local tension, resulted in only marginal changes in the shape fixity and recovery ratios.
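For reference, the shape fixity ratio ($R_f$) and shape recovery ratio ($R_r$) quantified in such cyclic flexural tests are conventionally computed from the programmed, fixed and residual strains; the cycle-wise definitions below are the ones commonly used in the shape memory polymer literature and are given here as background, not quoted from this paper:

$R_f(N) = \dfrac{\varepsilon_u(N)}{\varepsilon_m} \times 100\%, \qquad R_r(N) = \dfrac{\varepsilon_m - \varepsilon_p(N)}{\varepsilon_m - \varepsilon_p(N-1)} \times 100\%$

where $\varepsilon_m$ is the maximum strain imposed during programming, $\varepsilon_u(N)$ the strain retained after unloading in cycle $N$, and $\varepsilon_p(N)$ the residual strain after recovery in cycle $N$.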
The Lazy Bureaucrat Scheduling Problem
We introduce a new class of scheduling problems in which the optimization is
performed by the worker (single ``machine'') who performs the tasks. A typical
worker's objective is to minimize the amount of work he does (he is ``lazy''),
or more generally, to schedule as inefficiently (in some sense) as possible.
The worker is subject to the constraint that he must be busy when there is work
that he can do; we make this notion precise both in the preemptive and
nonpreemptive settings. The resulting class of ``perverse'' scheduling
problems, which we denote ``Lazy Bureaucrat Problems,'' gives rise to a rich
set of new questions that explore the distinction between maximization and
minimization in computing optimal schedules. Comment: 19 pages, 2 figures, LaTeX. To appear, Information and Computation.
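As a concrete illustration of how the must-stay-busy constraint interacts with the minimization objective, the following small brute-force sketch (our own illustration, not code from the paper) computes the minimum total work for the nonpreemptive variant; the job encoding and function names are invented for the example:

# Hypothetical brute-force sketch of the nonpreemptive Lazy Bureaucrat objective
# "minimize total time spent working": whenever at least one released job can
# still be finished by its deadline, the bureaucrat must start one of them.
# Jobs are (release_time, processing_time, deadline); all names are illustrative.
from functools import lru_cache

def min_work(jobs):
    jobs = tuple(jobs)

    @lru_cache(maxsize=None)
    def go(done, now):
        while True:
            # Jobs that are released, not yet done, and still completable now.
            executable = [i for i, (r, p, d) in enumerate(jobs)
                          if not (done >> i) & 1 and r <= now and now + p <= d]
            if executable:
                break
            # Idling is allowed only while nothing is executable; jump ahead to
            # the next release of a job that could still meet its deadline.
            future = [r for i, (r, p, d) in enumerate(jobs)
                      if not (done >> i) & 1 and r > now and r + p <= d]
            if not future:
                return 0  # nothing can ever be started again: the bureaucrat goes home
            now = min(future)
        # Busy constraint: one of the currently executable jobs must be chosen.
        return min(jobs[i][1] + go(done | (1 << i), now + jobs[i][1])
                   for i in executable)

    return go(0, 0)

# Starting the 5-unit job first lets the 1-unit job's deadline pass:
# total work 5 instead of 6, so the "lazy" optimum is 5.
print(min_work([(0, 1, 1), (0, 5, 10)]))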
Information scraps: how and why information eludes our personal information management tools
In this paper we describe information scraps -- a class of personal information whose content is scribbled on Post-it notes, scrawled on corners of random sheets of paper, buried inside the bodies of e-mail messages sent to ourselves, or typed haphazardly into text files. Information scraps hold our great ideas, sketches, notes, reminders, driving directions, and even our poetry. We define information scraps to be the body of personal information that is held outside of its natural or usual organizational home. We have much still to learn about these loose forms of information capture. Why are they so often held outside of our traditional PIM locations and instead on Post-its or in text files? Why must we sometimes go around our traditional PIM applications to hold on to our scraps, such as by e-mailing ourselves? What role do information scraps play in the larger space of personal information management, and what do they uniquely offer that we find so appealing? If these unorganized bits truly indicate the failure of our PIM tools, how might we begin to build better tools? We have pursued these questions by undertaking a study of 27 knowledge workers. In our findings we describe information scraps from several angles: their content, their location, and the factors that lead to their use, which we identify as ease of capture, flexibility of content and organization, and availability at the time of need. We also consider the personal emotive responses around scrap management. We present a set of design considerations derived from the analysis of our study results, and we describe our work on an application platform, jourknow, built to test some of these design and usability findings.
Decremental All-Pairs ALL Shortest Paths and Betweenness Centrality
We consider the all pairs all shortest paths (APASP) problem, which maintains
the shortest path dag rooted at every vertex in a directed graph G=(V,E) with
positive edge weights. For this problem we present a decremental algorithm
(that supports the deletion of a vertex, or weight increases on edges incident
to a vertex). Our algorithm runs in amortized $O((\nu^{*})^2 \cdot \log n)$ time per
update, where $n = |V|$, and $\nu^{*}$ bounds the number of edges that lie on shortest
paths through any given vertex. Our APASP algorithm can be used for the
decremental computation of betweenness centrality (BC), a graph parameter that
is widely used in the analysis of large complex networks. No nontrivial
decremental algorithm for either problem was known prior to our work. Our
method is a generalization of the decremental algorithm of Demetrescu and
Italiano [DI04] for unique shortest paths, and for graphs with $\nu^{*} = O(n)$, we
match the bound in [DI04]. Thus for graphs with a constant number of shortest
paths between any pair of vertices, our algorithm maintains APASP and BC scores
in amortized time $O(n^2 \log n)$ under decremental updates, regardless of the
number of edges in the graph. Comment: An extended abstract of this paper will appear in Proc. ISAAC 2014.
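For readers unfamiliar with the parameter, the betweenness centrality of a vertex $v$ is standardly defined (this is the textbook definition, not notation specific to this paper) as

$BC(v) = \sum_{s \neq v \neq t} \dfrac{\sigma_{st}(v)}{\sigma_{st}}$,

where $\sigma_{st}$ is the number of shortest paths from $s$ to $t$ and $\sigma_{st}(v)$ is the number of those paths that pass through $v$. Maintaining the shortest path DAGs rooted at every vertex under deletions is what allows these path counts, and hence the BC scores, to be updated after each vertex deletion or weight increase.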
The Transcriptional Landscape of Marek’s Disease Virus in Primary Chicken B Cells Reveals Novel Splice Variants and Genes
Marek’s disease virus (MDV) is an oncogenic alphaherpesvirus that infects chickens and poses a serious threat to poultry health. In infected animals, MDV efficiently replicates in B cells in various lymphoid organs. Despite many years of research, the viral transcriptome in primary target cells of MDV remained unknown. In this study, we uncovered the transcriptional landscape of the very virulent RB1B strain and the attenuated CVI988/Rispens vaccine strain in primary chicken B cells using high-throughput RNA-sequencing. Our data confirmed the expression of known genes, but also identified a novel spliced MDV gene in the unique short region of the genome. Furthermore, de novo transcriptome assembly revealed extensive splicing of viral genes resulting in coding and non-coding RNA transcripts. A novel splicing isoform of MDV UL15 could also be confirmed by mass spectrometry and RT-PCR. In addition, we could demonstrate that the associated transcriptional motifs are highly conserved and closely resemble those of the host transcriptional machinery. Taken together, our data allow a comprehensive re-annotation of the MDV genome with novel genes and splice variants that could be targeted in further research on MDV replication and tumorigenesis.
Almost-Tight Distributed Minimum Cut Algorithms
We study the problem of computing the minimum cut in a weighted distributed
message-passing network (the CONGEST model). Let $\lambda$ be the minimum cut,
$n$ be the number of nodes in the network, and $D$ be the network diameter. Our
algorithm can compute $\lambda$ exactly in $\tilde{O}\big((\sqrt{n}\,\log^{*} n + D)\,\lambda^4\big)$ time. To the best of our knowledge, this is the first paper that
explicitly studies computing the exact minimum cut in the distributed setting.
Previously, non-trivial sublinear time algorithms for this problem were known
only for unweighted graphs when $\lambda \le 3$, due to Pritchard and
Thurimella's $O(D)$-time and $O(D + \sqrt{n}\,\log^{*} n)$-time algorithms for
computing $2$-edge-connected and $3$-edge-connected components.
By using the edge sampling technique of Karger, we can convert this
algorithm into a $(1+\epsilon)$-approximation algorithm running in
$\tilde{O}\big((\sqrt{n}\,\log^{*} n + D)\,\mathrm{poly}(1/\epsilon)\big)$ time for any $\epsilon > 0$. This improves
over the previous $(2+\epsilon)$-approximation and $O(\epsilon^{-1})$-approximation
algorithms of Ghaffari and Kuhn. Due to the $\tilde{\Omega}(D + \sqrt{n})$ lower
bound of Das Sarma et al., which holds for any
approximation algorithm, this running time is tight up to polylogarithmic (and $\mathrm{poly}(1/\epsilon)$) factors.
To get the stated running time, we developed an approximation algorithm which
combines the ideas of Thorup's algorithm and Matula's contraction algorithm. It
saves a substantial factor as compared to applying Thorup's tree
packing theorem directly. Then, we combine Kutten and Peleg's tree partitioning
algorithm and Karger's dynamic programming to achieve an efficient distributed
algorithm that finds the minimum cut when we are given a spanning tree that
crosses the minimum cut exactly once.
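The edge-sampling conversion mentioned above rests on Karger's sampling theorem: sampling each edge independently with probability $p \approx c \log n / (\epsilon^2 \lambda)$ preserves every cut value to within a $(1 \pm \epsilon)$ factor with high probability, so an exact algorithm can be run on a skeleton whose minimum cut is only $O(\epsilon^{-2} \log n)$. The following centralized, purely illustrative sketch (not the paper's distributed algorithm; the networkx calls and the guessed value of $\lambda$ are our assumptions) shows the idea:

# Hypothetical, centralized illustration of Karger-style edge sampling for
# approximate minimum cut; NOT the distributed algorithm from the paper.
import math
import random
import networkx as nx

def sampled_min_cut(G, eps, lam_guess, c=3.0, seed=0):
    """Sample each edge with probability p ~ c*log(n)/(eps^2 * lam_guess), compute
    the exact min cut of the sampled skeleton, and rescale it by 1/p."""
    rng = random.Random(seed)
    n = G.number_of_nodes()
    p = min(1.0, c * math.log(n) / (eps ** 2 * lam_guess))
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from(e for e in G.edges() if rng.random() < p)
    if not nx.is_connected(H):
        return 0.0                      # the skeleton is already disconnected
    cut_value, _ = nx.stoer_wagner(H)   # exact min cut of the (small-cut) skeleton
    return cut_value / p                # rescale back to the original graph's scale

# Toy usage: two cliques of 20 nodes joined by a single bridge edge (min cut 1).
G = nx.barbell_graph(20, 0)
print(sampled_min_cut(G, eps=0.2, lam_guess=1))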
Speeding up shortest path algorithms
Given an arbitrary, non-negatively weighted, directed graph $G=(V,E)$ we
present an algorithm that computes all pairs shortest paths in time
$O\big(m^{*} n + m \log n + n \cdot T_{\psi}(m^{*}, n)\big)$, where $m^{*}$ is the number of
different edges contained in shortest paths and $T_{\psi}(m, n)$ is the running
time of an algorithm $\psi$ that solves the single-source shortest path problem (SSSP).
This is a substantial improvement over a trivial $n$-fold application of
$\psi$, which runs in $O\big(n \cdot T_{\psi}(m, n)\big)$. In our algorithm we use $\psi$
as a black box and hence any improvement on $\psi$ results also in an improvement
of our algorithm.
Furthermore, a combination of our method, Johnson's reweighting technique and
topological sorting results in an $O(m^{*} n + m \log n)$ all-pairs
shortest path algorithm for arbitrarily-weighted directed acyclic graphs.
In addition, we also point out a connection between the complexity of a
certain sorting problem defined on shortest paths and SSSP. Comment: 10 pages.
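The parameter $m^{*}$ (the edges that actually lie on shortest paths) is the quantity driving the bound above. The brute-force sketch below (our own illustration, not the paper's speed-up technique) simply computes this edge set with one Dijkstra run per source, to show how much smaller than $m$ it can be:

# Brute-force illustration of the parameter m*: the set of edges that lie on
# at least one shortest path (not the paper's speed-up technique itself).
import heapq

def dijkstra(adj, s):
    """Standard Dijkstra over an adjacency dict {u: [(v, w), ...]} with w >= 0."""
    dist = {u: float("inf") for u in adj}
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def shortest_path_edges(adj):
    """Edges (u, v, w) with dist(s, u) + w == dist(s, v) for some source s."""
    used = set()
    for s in adj:
        dist = dijkstra(adj, s)
        for u in adj:
            if dist[u] == float("inf"):
                continue
            for v, w in adj[u]:
                if dist[u] + w == dist[v]:
                    used.add((u, v, w))
    return used

# Toy digraph: the direct a->c edge is a detour and never lies on a shortest path.
adj = {"a": [("b", 1.0), ("c", 10.0)],
       "b": [("c", 1.0), ("d", 5.0)],
       "c": [("d", 1.0)],
       "d": []}
E_star = shortest_path_edges(adj)
print(len(E_star), "of", sum(len(v) for v in adj.values()), "edges lie on shortest paths")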
Distributed Minimum Cut Approximation
We study the problem of computing approximate minimum edge cuts by
distributed algorithms. We use a standard synchronous message passing model
where in each round, $O(\log n)$ bits can be transmitted over each edge (a.k.a.
the CONGEST model). We present a distributed algorithm that, for any weighted
graph and any $\epsilon \in (0, 1)$, with high probability finds a cut of size
at most $O(\epsilon^{-1}\lambda)$ in $O(D) + \tilde{O}(n^{1/2+\epsilon})$
rounds, where $\lambda$ is the size of the minimum cut. This algorithm is based
on a simple approach for analyzing random edge sampling, which we call the
random layering technique. In addition, we also present another distributed
algorithm, which is based on a centralized algorithm due to Matula [SODA '93],
that with high probability computes a cut of size at most $(2+\epsilon)\lambda$
in $\tilde{O}\big((D+\sqrt{n})\,\epsilon^{-5}\big)$ rounds for any $\epsilon > 0$.
The time complexities of both of these algorithms almost match the
$\tilde{\Omega}(D + \sqrt{n})$ lower bound of Das Sarma et al. [STOC '11], thus
leading to an answer to an open question raised by Elkin [SIGACT-News '04] and
Das Sarma et al. [STOC '11].
Furthermore, we also strengthen the lower bound of Das Sarma et al. by
extending it to unweighted graphs. We show that the same lower bound also holds
for unweighted multigraphs (or equivalently for weighted graphs in which
$O(w \log n)$ bits can be transmitted in each round over an edge of weight $w$),
even if the diameter is $D = O(\log n)$. For unweighted simple graphs, we show
that even for networks of small diameter, finding an $\alpha$-approximate minimum cut
in networks of edge connectivity $\lambda$ or computing an
$\alpha$-approximation of the edge connectivity requires
$\tilde{\Omega}\big(D + \sqrt{n/(\alpha\lambda)}\big)$ rounds.
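The equivalence invoked in the last paragraph, between weighted graphs whose edges carry weight-proportional bandwidth and unweighted multigraphs, uses the standard observation that an edge of integer weight $w$ behaves, for every cut, exactly like $w$ parallel unit edges. A minimal centralized sketch of that reduction (our own illustration, with invented helper names):

# Cut-preserving reduction from an integer-weighted graph to an unweighted
# multigraph: an edge of weight w becomes w parallel unit edges, so every cut
# keeps its value. Purely illustrative helper code, not from the paper.
def weighted_to_multigraph(weighted_edges):
    """weighted_edges: iterable of (u, v, w) with positive integer weight w."""
    multi_edges = []
    for u, v, w in weighted_edges:
        multi_edges.extend((u, v) for _ in range(w))
    return multi_edges

def cut_value(edges, side_a):
    """Number of (unit) edges crossing the cut (side_a, rest)."""
    side_a = set(side_a)
    return sum((u in side_a) != (v in side_a) for u, v in edges)

weighted = [("a", "b", 3), ("b", "c", 1), ("a", "c", 2)]
multi = weighted_to_multigraph(weighted)
w_cut = sum(w for u, v, w in weighted if (u == "a") != (v == "a"))  # weighted cut {a} vs {b, c}
m_cut = cut_value(multi, {"a"})                                     # the same cut in the multigraph
print(w_cut, m_cut)  # both print 5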
Thermal, viscoelastic and mechanical behavior of polypropylene with synthetic boehmite alumina nanoparticles
Effects of nanofiller concentration and surface treatments on the morphology, thermal, viscoelastic and mechanical behaviors of polypropylene copolymer (PP)/boehmite alumina (BA) nanocomposites were investigated. Untreated BA particles and BA particles treated with octylsilane (OS) or with a sulphonic acid compound (OS2) were added in amounts of up to 10 wt% to produce nanocomposites by melt mixing, followed by film blow molding and hot pressing. The dispersion of BA was studied by scanning electron microscopy. Differential scanning calorimetry and wide-angle X-ray scattering were adopted to detect changes in the crystalline structure of PP. Thermo-oxidative degradation of the nanocomposites was assessed by thermogravimetric analysis. Dynamic mechanical analysis served for studying the viscoelastic behavior, whereas quasi-static tensile, creep and Elmendorf tear tests were used to detect changes in the mechanical performance. The BA nanoparticles were finely dispersed in PP up to 10 wt%, even when they were not surface modified. The resistance to thermal degradation was markedly improved by BA nanomodification. Changes observed in the mechanical properties were attributed to BA dispersion, filler/matrix interactions and related effects, because the crystalline characteristics of the PP matrix practically did not change with BA modification.
Incentivizing High Quality Crowdwork
We study the causal effects of financial incentives on the quality of
crowdwork. We focus on performance-based payments (PBPs), bonus payments
awarded to workers for producing high quality work. We design and run
randomized behavioral experiments on the popular crowdsourcing platform Amazon
Mechanical Turk with the goal of understanding when, where, and why PBPs help,
identifying properties of the payment, payment structure, and the task itself
that make them most effective. We provide examples of tasks for which PBPs do
improve quality. For such tasks, the effectiveness of PBPs is not too sensitive
to the threshold for quality required to receive the bonus, while the magnitude
of the bonus must be large enough to make the reward salient. We also present
examples of tasks for which PBPs do not improve quality. Our results suggest
that for PBPs to improve quality, the task must be effort-responsive: the task
must allow workers to produce higher quality work by exerting more effort. We
also give a simple method to determine if a task is effort-responsive a priori.
Furthermore, our experiments suggest that all payments on Mechanical Turk are,
to some degree, implicitly performance-based in that workers believe their work
may be rejected if their performance is sufficiently poor. Finally, we propose
a new model of worker behavior that extends the standard principal-agent model
from economics to include a worker's subjective beliefs about his likelihood of
being paid, and show that the predictions of this model are in line with our
experimental findings. This model may be useful as a foundation for theoretical
studies of incentives in crowdsourcing markets. Comment: This is a preprint of an Article accepted for publication in WWW.
© 2015 International World Wide Web Conference Committee.
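The extended principal-agent model mentioned at the end can be summarized, in notation of our own choosing rather than the paper's, as a worker picking an effort level $e$ to maximize subjectively expected pay minus effort cost:

$e^{*} = \arg\max_{e} \; \hat{p}(e)\,\big(b + \hat{q}(e)\,B\big) - c(e)$,

where $b$ is the base payment, $B$ the bonus offered under a performance-based payment, $c(e)$ an increasing cost of effort, $\hat{p}(e)$ the worker's subjective probability that the work is accepted (and thus paid at all) given effort $e$, and $\hat{q}(e)$ the subjective probability of earning the bonus. The finding that all payments on Mechanical Turk are implicitly performance-based corresponds to $\hat{p}(e)$ being strictly increasing in $e$ even when no explicit bonus ($B = 0$) is offered.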