Using Text Similarity to Detect Social Interactions not Captured by Formal Reply Mechanisms
In modeling social interaction online, it is important to understand when
people are reacting to each other. Many systems have explicit indicators of
replies, such as threading in discussion forums or replies and retweets in
Twitter. However, these explicit indicators likely capture only part of
people's reactions to each other; computational social science approaches
that rely on them to infer relationships or influence are therefore likely to miss the mark.
This paper explores the problem of detecting non-explicit responses, presenting
a new approach that uses tf-idf similarity between a user's own tweets and
recent tweets by people they follow. Based on a month's worth of posting data
from 449 ego networks in Twitter, this method demonstrates that it is likely
that at least 11% of reactions are not captured by the explicit reply and
retweet mechanisms. Further, these uncaptured reactions are not evenly
distributed between users: some users, who create replies and retweets without
using the official interface mechanisms, are much more responsive to followees
than they appear. This suggests that detecting non-explicit responses is an
important consideration in mitigating biases and building more accurate models
when using these markers to study social interaction and information diffusion.
Comment: A final version of this work was published in the 2015 IEEE 11th International Conference on e-Science (e-Science 2015).
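A minimal sketch of the kind of tf-idf comparison the abstract describes, comparing a user's tweets against recent tweets by followees with cosine similarity. The threshold, function name, and data structures are illustrative assumptions, not the paper's actual parameters or pipeline.

```python
# Hypothetical sketch: flag a user's tweets whose tf-idf cosine similarity to a
# recent followee tweet exceeds a threshold, suggesting a non-explicit response.
# Threshold and structures are illustrative, not the paper's settings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def candidate_responses(user_tweets, followee_tweets, threshold=0.5):
    """Return (user_tweet, followee_tweet, score) triples above the threshold."""
    corpus = user_tweets + followee_tweets
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    user_vecs = tfidf[:len(user_tweets)]
    followee_vecs = tfidf[len(user_tweets):]
    sims = cosine_similarity(user_vecs, followee_vecs)
    hits = []
    for i, row in enumerate(sims):
        for j, score in enumerate(row):
            if score >= threshold:
                hits.append((user_tweets[i], followee_tweets[j], float(score)))
    return hits

# Toy usage
user_tweets = ["Loving the new release of the graph library, great docs"]
followee_tweets = ["Just shipped a new release of our graph library with better docs",
                   "Coffee first, code later"]
print(candidate_responses(user_tweets, followee_tweets, threshold=0.3))
```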
Automatic Face Recognition System Based on Local Fourier-Bessel Features
We present an automatic face verification system inspired by known properties
of biological systems. In the proposed algorithm the whole image is converted
from the spatial to polar frequency domain by a Fourier-Bessel Transform (FBT).
The use of the whole image is compared with the case where only face image regions
(local analysis) are considered. The resulting representations are embedded in
a dissimilarity space, where each image is represented by its distance to all
the other images, and a Pseudo-Fisher discriminator is built. Verification test
results on the FERET database showed that the local-based algorithm outperforms
the global-FBT version. The local-FBT algorithm performed on par with state-of-the-art
methods under different testing conditions, indicating that the proposed system
is highly robust to expression, age, and illumination variations. We also
evaluated the performance of the proposed system under strong occlusion
conditions and found that it is highly robust to up to 50% face occlusion.
Finally, we fully automated the verification system by implementing face
and eye detection algorithms. Under this condition, the local approach was only
slightly superior to the global approach.
Comment: 2005, Brazilian Symposium on Computer Graphics and Image Processing, 18 (SIBGRAPI 2005).
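A rough sketch of the dissimilarity-space idea mentioned above: each image is re-represented by its distances to a set of prototype images, and a Fisher-style linear direction is computed with a pseudo-inverse (one common reading of "Pseudo-Fisher"; not necessarily the authors' exact implementation). The Fourier-Bessel features are assumed to be precomputed, so the inputs here are plain feature arrays.

```python
# Dissimilarity-space embedding plus a pseudo-inverse Fisher direction.
# Stand-in random features replace the paper's FBT representations.
import numpy as np

def dissimilarity_embedding(features, prototypes):
    """Rows of `features` become vectors of Euclidean distances to `prototypes`."""
    diffs = features[:, None, :] - prototypes[None, :, :]
    return np.linalg.norm(diffs, axis=2)

def pseudo_fisher_direction(X, y):
    """Two-class Fisher direction w = pinv(Sw) (m1 - m0), usable when Sw is singular."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
    return np.linalg.pinv(Sw) @ (m1 - m0)

# Toy example: 40 "images" with 64-dimensional stand-in feature vectors.
rng = np.random.default_rng(0)
feats = rng.normal(size=(40, 64))
labels = np.array([0] * 20 + [1] * 20)
feats[labels == 1] += 0.5                  # separate the two classes slightly
D = dissimilarity_embedding(feats, feats)  # distance of every image to every other
w = pseudo_fisher_direction(D, labels)     # pinv handles the singular scatter matrix
scores = D @ w                             # projection onto the discriminant
print(scores[:3], scores[-3:])
```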
Design of an RSFQ Control Circuit to Observe MQC on an rf-SQUID
We believe that the best chance to observe macroscopic quantum coherence
(MQC) in a rf-SQUID qubit is to use on-chip RSFQ digital circuits for
preparing, evolving and reading out the qubit's quantum state. This approach
allows experiments to be conducted on a very short time scale (sub-nanosecond)
without the use of large bandwidth control lines that would couple
environmental degrees of freedom to the qubit, thus contributing to its
decoherence. In this paper we present our design of an RSFQ digital control
circuit for demonstrating MQC in an rf-SQUID. We assess some of the key
practical issues in the circuit design including the achievement of the
necessary flux bias stability. We present an "active" isolation structure to be
used to increase coherence times. The structure decouples the SQUID from
external degrees of freedom, and then couples it to the output measurement
circuitry when required, all under the active control of RSFQ circuits.
Research supported in part by ARO grant # DAAG55-98-1-0367.
Comment: 4 pages. More information and publications at
http://www.ece.rochester.edu:8080/users/sde/research/publications/index.htm
Retinal Vessel Segmentation Using the 2-D Morlet Wavelet and Supervised Classification
We present a method for automated segmentation of the vasculature in retinal
images. The method produces segmentations by classifying each image pixel as
vessel or non-vessel, based on the pixel's feature vector. Feature vectors are
composed of the pixel's intensity and continuous two-dimensional Morlet wavelet
transform responses taken at multiple scales. The Morlet wavelet is capable of
tuning to specific frequencies, thus allowing noise filtering and vessel
enhancement in a single step. We use a Bayesian classifier with
class-conditional probability density functions (likelihoods) described as
Gaussian mixtures, yielding fast classification while being able to model
complex decision surfaces, and compare its performance with the linear minimum
squared error classifier. The probability distributions are estimated based on
a training set of labeled pixels obtained from manual segmentations. The
method's performance is evaluated on publicly available DRIVE and STARE
databases of manually labeled non-mydriatic images. On the DRIVE database, it
achieves an area under the receiver operating characteristic (ROC) curve of
0.9598, slightly superior to that reported for the method of Staal et al.
Comment: 9 pages, 7 figures and 1 table. Accepted for publication in IEEE Trans Med Imag; added copyright notice.
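A small sketch in the spirit of the pixel-wise scheme described above: per-pixel features combine intensity with multiscale, multi-orientation wavelet-filter magnitudes (a Gabor filter is used here as a stand-in for the 2-D Morlet wavelet, to which it is closely related), and class-conditional likelihoods are modeled with Gaussian mixtures. Frequencies, mixture sizes, and the toy data are illustrative assumptions, not the paper's settings.

```python
# Pixel features -> per-class Gaussian mixtures -> maximum-posterior labeling.
import numpy as np
from skimage.filters import gabor
from sklearn.mixture import GaussianMixture

def pixel_features(image, frequencies=(0.1, 0.2, 0.3), n_thetas=4):
    """Stack intensity with the max-over-orientation filter magnitude per scale."""
    feats = [image]
    for f in frequencies:
        responses = []
        for theta in np.linspace(0, np.pi, n_thetas, endpoint=False):
            real, imag = gabor(image, frequency=f, theta=theta)
            responses.append(np.hypot(real, imag))
        feats.append(np.max(responses, axis=0))
    return np.stack(feats, axis=-1).reshape(-1, len(frequencies) + 1)

def fit_bayes_gmm(X, y, n_components=3):
    """One Gaussian mixture per class; returns the models and class priors."""
    models, priors = {}, {}
    for c in np.unique(y):
        models[c] = GaussianMixture(n_components=n_components, random_state=0).fit(X[y == c])
        priors[c] = np.mean(y == c)
    return models, priors

def predict(models, priors, X):
    """Maximum-posterior rule: argmax_c log p(x|c) + log P(c)."""
    classes = sorted(models)
    log_post = np.column_stack([models[c].score_samples(X) + np.log(priors[c]) for c in classes])
    return np.array(classes)[np.argmax(log_post, axis=1)]

# Toy example: a synthetic image with a bright "vessel" stripe and matching labels.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (64, 64))
img[30:34, :] += 1.0                    # fake vessel
labels = np.zeros((64, 64), dtype=int)
labels[30:34, :] = 1
X = pixel_features(img)
models, priors = fit_bayes_gmm(X, labels.ravel())
pred = predict(models, priors, X).reshape(img.shape)
print("pixels labeled vessel:", int(pred.sum()))
```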
