Cascades: A View from Audience
Cascades on online networks have been a popular subject of study in the past
decade, and there is a considerable literature on phenomena such as diffusion
mechanisms, virality, cascade prediction, and peer network effects. However, a
basic question has received comparatively little attention: how desirable are
cascades on a social media platform from the point of view of users? While
versions of this question have been considered from the perspective of the
producers of cascades, any answer to this question must also take into account
the effect of cascades on their audience. In this work, we seek to fill this
gap by providing a consumer perspective on cascades.
Users on online networks play the dual role of producers and consumers.
First, we perform an empirical study of the interaction of Twitter users with
retweet cascades. We measure how often users observe retweets in their home
timeline, and observe a phenomenon that we term the "Impressions Paradox": the
share of impressions for cascades of size k decays much more slowly than the
frequency of cascades of size k. Thus, the audience for cascades can be quite large even
for rare large cascades. We also measure audience engagement with retweet
cascades in comparison to non-retweeted content. Our results show that cascades
often rival or exceed organic content in engagement received per impression.
This result is perhaps surprising, given that consumers did not opt in to see tweets
from these authors. Furthermore, although cascading content is widely popular,
one would expect it to eventually reach parts of the audience that may not be
interested in the content. Motivated by our findings, we posit a theoretical
model that focuses on the effect of cascades on the audience. Our results on
this model highlight the balance between retweeting as a high-quality content
selection mechanism and the role of network users in filtering irrelevant
content.
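The "Impressions Paradox" can be illustrated with a toy calculation. Assuming, purely for illustration, that cascade-size frequency follows a power law f(k) ~ k^(-alpha) (the exponent below is hypothetical), the impressions generated by size-k cascades scale like k * f(k) ~ k^(1-alpha), which decays more slowly:

```python
import numpy as np

# Toy illustration of the "Impressions Paradox": if the frequency of
# cascades of size k follows a power law f(k) ~ k^(-alpha), the share
# of impressions from size-k cascades scales like k * f(k), which
# decays more slowly in k.  alpha is hypothetical, for illustration.
alpha = 2.5
k = np.arange(1, 10_001)

freq = k ** -alpha
freq /= freq.sum()             # fraction of cascades that have size k

impressions = k * freq         # each size-k cascade yields ~k impressions
impressions /= impressions.sum()

# Compare the tails: cascades of size >= 100 are rare, yet they account
# for a much larger share of impressions than of cascades.
tail = k >= 100
print(f"share of cascades of size >= 100:      {freq[tail].sum():.4f}")
print(f"share of impressions from size >= 100: {impressions[tail].sum():.4f}")
```

Even though large cascades are a vanishing fraction of all cascades, their impression share stays substantial, which is the sense in which the audience for rare large cascades remains large.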
Performance Evaluation of Rarem Dam
The 28.0 m high zoned Rarem dam in Indonesia was instrumented with hydraulic piezometers, electrical Carlson-type piezometers, Casagrande-type vertical standpipe piezometers, inclinometers, and surface settlement points. The analysis of the observational data indicated that settlement took place almost simultaneously with construction of the dam and reservoir filling. Very low construction pore pressures were observed, and the phreatic line developed almost simultaneously with reservoir filling. The results on the efficiency of the grout curtain, based on electrical analogy model studies, are also discussed in the paper.
Decremental All-Pairs ALL Shortest Paths and Betweenness Centrality
We consider the all pairs all shortest paths (APASP) problem, which maintains
the shortest path dag rooted at every vertex in a directed graph G=(V,E) with
positive edge weights. For this problem we present a decremental algorithm
(that supports the deletion of a vertex, or weight increases on edges incident
to a vertex). Our algorithm runs in amortized O(\vstar^2 \cdot \log n) time per
update, where n=|V|, and \vstar bounds the number of edges that lie on shortest
paths through any given vertex. Our APASP algorithm can be used for the
decremental computation of betweenness centrality (BC), a graph parameter that
is widely used in the analysis of large complex networks. No nontrivial
decremental algorithm for either problem was known prior to our work. Our
method is a generalization of the decremental algorithm of Demetrescu and
Italiano [DI04] for unique shortest paths, and for graphs with \vstar =O(n), we
match the bound in [DI04]. Thus for graphs with a constant number of shortest
paths between any pair of vertices, our algorithm maintains APASP and BC scores
in amortized time O(n^2 \log n) under decremental updates, regardless of the
number of edges in the graph.
Comment: An extended abstract of this paper will appear in Proc. ISAAC 201
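The shortest-path DAGs that APASP maintains per source have a simple static characterization: after a single-source shortest-path computation from s, an edge (u, v) lies in the DAG rooted at s iff dist(u) + w(u, v) = dist(v). A static sketch (not the paper's decremental algorithm; the example graph is hypothetical):

```python
import heapq
from collections import defaultdict

# Static baseline for the structure the decremental APASP algorithm
# maintains: the shortest-path DAG rooted at a source s.  Edge (u, v)
# is in the DAG iff dist[u] + w(u, v) == dist[v] after Dijkstra from s.
def sp_dag(graph, s):
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    dag = defaultdict(list)
    for u, edges in graph.items():
        for v, w in edges:
            if u in dist and dist[u] + w == dist.get(v, float("inf")):
                dag[u].append(v)
    return dist, dict(dag)

# Hypothetical graph with two shortest a->d paths (a-b-d and a-c-d),
# so the DAG rooted at "a" branches at the source.
graph = {
    "a": [("b", 1), ("c", 1)],
    "b": [("d", 1)],
    "c": [("d", 1)],
    "d": [],
}
dist, dag = sp_dag(graph, "a")
print(dist)   # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
print(dag)
```

The decremental algorithm's contribution is keeping all n such DAGs (and the derived BC scores) up to date under deletions without recomputing from scratch; the sketch above only shows the invariant being maintained.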
Efficacy of Grout Curtain at Ramganga Dam
The analysis of foundation piezometer records at the main dam and saddle dam of the Ramganga Project has indicated that the single-row grout curtain at the main dam is ineffective as far as hydrostatic pressure reduction in the foundation is concerned, whereas under similar conditions the upstream impervious blanket at the saddle dam is more effective in pressure reduction. Experimental test results by the electrical analogy technique using graphite paper have indicated that, in the case of the Ramganga main dam, the total net pressure reduction for a fully effective grout curtain would have been only 25%. A design curve of efficiency versus different openings in the grout curtain is also given.
First exit times and residence times for discrete random walks on finite lattices
In this paper, we derive explicit formulas for the surface averaged first
exit time of a discrete random walk on a finite lattice. We consider a wide
class of random walks and lattices, including random walks in a non-trivial
potential landscape. We also compute quantities of interest for modelling
surface reactions and other dynamic processes, such as the residence time in a
subvolume, the joint residence time of several particles and the number of hits
on a reflecting surface.
Comment: 19 pages, 2 figure
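The simplest instance of such a first exit time is a symmetric walk on the 1D lattice {0, ..., L} with absorbing endpoints, where the mean exit times satisfy T_i = 1 + (T_{i-1} + T_{i+1}) / 2 with T_0 = T_L = 0 and have the closed form T_i = i(L - i). A toy check of that recurrence (the paper treats far more general walks, lattices, and potential landscapes):

```python
import numpy as np

# Mean first exit time of a symmetric random walk on {0, ..., L} with
# absorbing endpoints.  The recurrence T_i = 1 + (T_{i-1} + T_{i+1})/2,
# T_0 = T_L = 0, is solved as a linear system and compared with the
# closed form T_i = i * (L - i).  L is chosen arbitrarily.
L = 10
n = L - 1                        # interior sites 1 .. L-1
A = np.zeros((n, n))
b = np.ones(n)
for row, i in enumerate(range(1, L)):
    A[row, row] = 1.0
    if i - 1 >= 1:
        A[row, row - 1] = -0.5   # hop to the left neighbour
    if i + 1 <= L - 1:
        A[row, row + 1] = -0.5   # hop to the right neighbour
T = np.linalg.solve(A, b)

exact = np.array([i * (L - i) for i in range(1, L)])
print(np.allclose(T, exact))     # True
```

The explicit formulas in the paper generalize this kind of closed-form answer to higher-dimensional lattices, biased walks, and potentials.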
Colbond Drains for Rapid Consolidation at Manggar Besar Dam
7. 3m high and 280 m long Manggar Besar homogeneous earthen dam resting on 12. 0 m thick soft silty clay, is under construction to supply water to the city of Balikpapan in Kalimantan island of Indonesia. To accelerate the anticipated 1. 6 m settlement of dam, 30 cm wide strip type drains (Colbond CX 1000) using polyester no-woven fabric are being used 3m centre to centre. It is expected that 70 percent consolidation shall take place within thirteen months of construction by these drains
Deep Learning and Image data-based surface cracks recognition of laser nitrided Titanium alloy
Laser nitriding, a high-precision surface modification process, enhances the hardness, wear resistance and corrosion resistance of materials. However, the laser nitriding process is prone to cracking when performed at high laser energy levels. Traditional techniques for detecting the cracks are time-consuming, costly and lack standardization. Thus, this research puts forth deep learning-based crack recognition for the laser nitriding of Ti–6Al–4V alloy. Laser nitriding was performed with varying duty cycles and other process parameters. The laser-nitrided samples were then processed through optical 3D surface measurements (Alicona Infinite Focus G5), creating high-resolution images. The images were then pre-processed, which included 2D conversion, patchification, image augmentation and subsequent removal of anomalies. After preprocessing, the investigation focused on employing a robust binary classification method based on CNN models and their variants, including ResNet-50, VGG-19, VGG-16, GoogLeNet (Inception V3) and DenseNet-121, to recognize surface cracks. The performance of these models was optimized by fine-tuning different hyperparameters, and it was found that the base CNN model, along with models having fewer trainable parameters such as VGG-19 and VGG-16, exhibits better performance, with an accuracy of more than 98% in recognizing cracks. From the achieved results, VGG-19 is the most suitable model for this crack recognition problem, effectively recognizing surface cracks on laser-nitrided Ti–6Al–4V material owing to its best accuracy and fewer parameters compared with more complex models such as ResNet-50 and Inception-V3.
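The "patchification" step in the preprocessing pipeline can be sketched as follows: a high-resolution surface image is split into fixed-size tiles that a CNN then classifies individually as cracked or not cracked. Image and patch sizes below are hypothetical, not taken from the paper:

```python
import numpy as np

# Sketch of the patchification preprocessing step: split a 2D surface
# image into non-overlapping fixed-size patches for CNN classification.
# Sizes are hypothetical; the paper's actual patch size is not stated.
def patchify(image, patch):
    h, w = image.shape
    ph, pw = patch
    return (
        image[: h - h % ph, : w - w % pw]   # drop any ragged border
        .reshape(h // ph, ph, w // pw, pw)
        .swapaxes(1, 2)                     # group patches row by row
        .reshape(-1, ph, pw)
    )

image = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
patches = patchify(image, (16, 16))
print(patches.shape)   # (16, 16, 16): a 4x4 grid of 16x16 patches
```

Each patch would then be augmented (flips, rotations) and fed to the binary classifier; patch-level predictions can be stitched back into a crack map of the full surface.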
Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
Automated data-driven decision making systems are increasingly being used to
assist, or even replace humans in many settings. These systems function by
learning from historical decisions, often taken by humans. In order to maximize
the utility of these systems (or, classifiers), their training involves
minimizing the errors (or, misclassifications) over the given historical data.
However, it is quite possible that the optimally trained classifier makes
decisions for people belonging to different social groups with different
misclassification rates (e.g., misclassification rates for females are higher
than for males), thereby placing these groups at an unfair disadvantage. To
account for and avoid such unfairness, in this paper, we introduce a new notion
of unfairness, disparate mistreatment, which is defined in terms of
misclassification rates. We then propose intuitive measures of disparate
mistreatment for decision boundary-based classifiers, which can be easily
incorporated into their formulation as convex-concave constraints. Experiments
on synthetic as well as real world datasets show that our methodology is
effective at avoiding disparate mistreatment, often at a small cost in terms of
accuracy.
Comment: To appear in Proceedings of the 26th International World Wide Web
Conference (WWW), 2017. Code available at:
https://github.com/mbilalzafar/fair-classificatio
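Disparate mistreatment can be measured by comparing misclassification rates across groups. A minimal sketch (per-group false-positive and false-negative rate gaps, not the paper's convex-concave training constraints; the data below are hypothetical):

```python
import numpy as np

# Sketch of measuring disparate mistreatment: the gap in false-positive
# and false-negative rates between two sensitive groups.  This is an
# evaluation-time measure, not the paper's in-training constraint.
def rates(y_true, y_pred):
    fpr = np.mean(y_pred[y_true == 0] == 1)   # false-positive rate
    fnr = np.mean(y_pred[y_true == 1] == 0)   # false-negative rate
    return fpr, fnr

# Hypothetical labels, predictions, and group membership.
y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

fp0, fn0 = rates(y_true[group == 0], y_pred[group == 0])
fp1, fn1 = rates(y_true[group == 1], y_pred[group == 1])
print(f"FPR gap: {abs(fp0 - fp1):.2f}, FNR gap: {abs(fn0 - fn1):.2f}")
```

A classifier free of disparate mistreatment would drive both gaps toward zero; the paper's contribution is expressing such constraints in a form a decision-boundary classifier can be trained under.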
Inferring individual attributes from search engine queries and auxiliary information
Internet data has surfaced as a primary source for investigation of different
aspects of human behavior. A crucial step in such studies is finding a suitable
cohort (i.e., a set of users) that shares a common trait of interest to
researchers. However, direct identification of users sharing this trait is
often impossible, as the data available to researchers is usually anonymized to
preserve user privacy. To facilitate research on specific topics of interest,
especially in medicine, we introduce an algorithm for identifying a trait of
interest in anonymous users. We illustrate how a small set of labeled examples,
together with statistical information about the entire population, can be
aggregated to obtain labels on unseen examples. We validate our approach using
labeled data from the political domain.
We provide two applications of the proposed algorithm to the medical domain.
In the first, we demonstrate how to identify users whose search patterns
indicate they might be suffering from certain types of cancer. In the second,
we detail an algorithm to predict the distribution of diseases given their
incidence in a subset of the population at study, making it possible to predict
disease spread from partial epidemiological data.
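One simple way population-level statistics can calibrate per-user predictions, in the spirit of the approach described above (a sketch under assumed inputs, not the paper's algorithm): if external data say a trait occurs in p% of the population, threshold the anonymized users' scores at the (100 - p)th percentile so predicted prevalence matches the known one.

```python
import numpy as np

# Sketch: combine weak per-user scores (from a handful of labeled
# examples) with a known population-level prevalence by thresholding
# scores at the matching quantile.  Scores and prevalence are
# hypothetical placeholders.
rng = np.random.default_rng(0)
scores = rng.random(1000)          # classifier scores for unseen users
known_prevalence = 0.10            # e.g., trait occurs in 10% of population

threshold = np.quantile(scores, 1 - known_prevalence)
labels = scores >= threshold
print(labels.mean())               # ~0.10 by construction
```

This kind of prior matching is one building block; the paper's algorithm additionally aggregates the small labeled set with the population statistics to assign the labels themselves.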