DeepInf: Social Influence Prediction with Deep Learning
Social and information networking activities, such as those on Facebook, Twitter,
WeChat, and Weibo have become an indispensable part of our everyday life, where
we can easily access friends' behaviors and are in turn influenced by them.
Consequently, an effective social influence prediction for each user is
critical for a variety of applications such as online recommendation and
advertising.
Conventional social influence prediction approaches typically design various
hand-crafted rules to extract user- and network-specific features. However,
their effectiveness heavily relies on the knowledge of domain experts. As a
result, it is usually difficult to generalize them into different domains.
Inspired by the recent success of deep neural networks in a wide range of
computing applications, we design an end-to-end framework, DeepInf, to learn
users' latent feature representation for predicting social influence. In
general, DeepInf takes a user's local network as the input to a graph neural
network for learning her latent social representation. We design strategies to
incorporate both network structures and user-specific features into
convolutional neural and attention networks. Extensive experiments on Open
Academic Graph, Twitter, Weibo, and Digg, representing different types of
social and information networks, demonstrate that the proposed end-to-end
model, DeepInf, significantly outperforms traditional feature engineering-based
approaches, suggesting the effectiveness of representation learning for social
applications.
Comment: 10 pages, 5 figures, to appear in KDD 2018 proceedings.
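The core idea described above (feed a user's ego network into an attention-based aggregator, then score influence from the resulting representation) can be sketched in miniature. This is an illustrative pure-Python sketch, not the authors' implementation; the feature dimensions, weights, and function names are invented for the example:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_aggregate(ego_feat, neighbor_feats):
    """One attention-style aggregation step over a user's ego network:
    weight each neighbor by its similarity to the ego user, then average."""
    scores = [sum(e * n for e, n in zip(ego_feat, nf)) for nf in neighbor_feats]
    weights = softmax(scores)
    dim = len(ego_feat)
    return [sum(w * nf[i] for w, nf in zip(weights, neighbor_feats))
            for i in range(dim)]

def influence_probability(ego_feat, neighbor_feats, w, b=0.0):
    """Logistic score on the aggregated latent representation."""
    h = attention_aggregate(ego_feat, neighbor_feats)
    z = sum(wi * hi for wi, hi in zip(w, h)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy ego network: 2-dimensional features, two active neighbors.
p = influence_probability([1.0, 0.0], [[1.0, 1.0], [0.0, 1.0]], w=[2.0, 2.0])
```

A real model would learn the weights end-to-end and stack several aggregation layers; the sketch only shows the shape of one attention step.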
Representation Learning for Attributed Multiplex Heterogeneous Network
Network embedding (or graph embedding) has been widely used in many
real-world applications. However, existing methods mainly focus on networks
with single-typed nodes/edges and cannot scale well to handle large networks.
Many real-world networks consist of billions of nodes and edges of multiple
types, and each node is associated with different attributes. In this paper, we
formalize the problem of embedding learning for the Attributed Multiplex
Heterogeneous Network and propose a unified framework to address this problem.
The framework supports both transductive and inductive learning. We also give
the theoretical analysis of the proposed framework, showing its connection with
previous works and proving its better expressiveness. We conduct systematical
evaluations for the proposed framework on four different genres of challenging
datasets: Amazon, YouTube, Twitter, and Alibaba. Experimental results
demonstrate that with the learned embeddings from the proposed framework, we
can achieve statistically significant improvements (e.g., a 5.99-28.23% lift
in F1 score; p<<0.01, t-test) over previous state-of-the-art methods for link
prediction. The framework has also been successfully deployed on the
recommendation system of a worldwide leading e-commerce company, Alibaba Group.
Results of the offline A/B tests on product recommendation further confirm the
effectiveness and efficiency of the framework in practice.
Comment: Accepted to KDD 2019. Website: https://sites.google.com/view/gatn
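One common way to realise an embedding for a node that participates in several edge types, sketched here purely for illustration (the framework's actual parameterisation may differ), is to combine a shared base vector with weighted edge-type-specific offsets:

```python
def combined_embedding(base, type_specific, weights):
    """Sketch of a multiplex node embedding: a shared base vector plus a
    weighted sum of edge-type-specific vectors (one per edge type)."""
    dim = len(base)
    out = list(base)
    for etype, vec in type_specific.items():
        w = weights.get(etype, 0.0)
        for i in range(dim):
            out[i] += w * vec[i]
    return out

# Hypothetical e-commerce edge types; all numbers are invented.
emb = combined_embedding(
    base=[0.5, 0.5],
    type_specific={"click": [1.0, 0.0], "purchase": [0.0, 1.0]},
    weights={"click": 0.2, "purchase": 0.8},
)
```

The shared base lets information flow across edge types, while the per-type components preserve what is specific to each interaction.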
Markov-chain reconstruction of the 2dF Galaxy Redshift Survey real-space power spectrum
The real-space power spectrum of L* galaxies measured from the 2dF Galaxy
Redshift Survey (2dFGRS) is presented. Markov-Chain Monte-Carlo (MCMC) sampling
was used to fit radial and angular modes resulting from a Spherical Harmonics
decomposition of the 2dFGRS overdensity field (described in Percival et al.
2004) with 16 real-space power spectrum values and linear redshift-space
distortion parameter \beta(L*,0). The recovered marginalised band-powers are
compared to previous estimates of galaxy power spectra. Additionally, we
provide a simple model for the 17 dimensional likelihood hyper-surface in order
to allow the likelihood to be quickly estimated given a set of model
band-powers and \beta(L*,0). The likelihood surface is not well approximated by
a multi-variate Gaussian distribution with model-independent covariances.
Instead, a model is presented in which the distribution of each band-power has
a Gaussian distribution in a combination of the band-power and its logarithm.
The relative contribution of each component was determined by fitting the MCMC
output. Using these distributions, we demonstrate how the likelihood of a given
cosmological model can be quickly and accurately estimated, and use a simple
set of models to compare estimated likelihoods with likelihoods calculated
using the full Spherical Harmonics procedure. All of the data are made
publicly available (from http://www.roe.ac.uk/~wjp/), enabling the Spherical
Harmonics decomposition of the 2dFGRS of Percival et al. (2004) to be easily
used as a cosmological constraint.
Comment: 9 pages, 5 figures, MNRAS accepted
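The per-band likelihood model described above (a Gaussian in a combination of the band-power and its logarithm) can be sketched as follows. The weight w and width sigma are placeholders that would be fitted to the MCMC output, and treating the bands as independent is a simplification of the full 17-dimensional surface:

```python
import math

def band_likelihood(P_model, P_fit, sigma, w):
    """Per-band likelihood sketch: Gaussian in a weighted combination of the
    band-power and its logarithm, x = w*P + (1 - w)*ln(P)."""
    x = w * P_model + (1.0 - w) * math.log(P_model)
    x0 = w * P_fit + (1.0 - w) * math.log(P_fit)
    return math.exp(-0.5 * ((x - x0) / sigma) ** 2)

def total_log_likelihood(model_bands, fit_bands, sigmas, ws):
    """Sum of per-band log-likelihoods, assuming independent bands."""
    return sum(math.log(band_likelihood(Pm, Pf, s, w))
               for Pm, Pf, s, w in zip(model_bands, fit_bands, sigmas, ws))
```

The logarithmic component lets the distribution skew toward large band-powers, which a pure multivariate Gaussian cannot capture.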
Equivalence Principle Implications of Modified Gravity Models
Theories that attempt to explain the observed cosmic acceleration by
modifying general relativity all introduce a new scalar degree of freedom that
is active on large scales, but is screened on small scales to match
experiments. We show that if such screening occurs via the chameleon mechanism
such as in f(R), it is possible to have order one violation of the equivalence
principle, despite the absence of explicit violation in the microscopic action.
Namely, extended objects such as galaxies or constituents thereof do not all
fall at the same rate. The chameleon mechanism can screen the scalar charge for
large objects but not for small ones (large/small is defined by the
gravitational potential and controlled by the scalar coupling). This leads to
order one fluctuations in the inertial to gravitational mass ratio. In Jordan
frame, it is no longer true that all objects move on geodesics. In contrast, if
the scalar screening occurs via strong coupling, such as in the DGP braneworld
model, equivalence principle violation occurs at a much reduced level. We
propose several observational tests of the chameleon mechanism: 1. small
galaxies should fall faster than large galaxies, even when dynamical friction
is negligible; 2. voids defined by small galaxies would be larger compared to
standard expectations; 3. stars and diffuse gas in small galaxies should have
different velocities, even on the same orbits; 4. lensing and dynamical mass
estimates should agree for large galaxies but disagree for small ones. We
discuss possible pitfalls in some of these tests. The cleanest is the third one
where mass estimate from HI rotational velocity could exceed that from stars by
30 % or more. To avoid blanket screening of all objects, the most promising
place to look is in voids.
Comment: 27 pages, 4 figures, minor revisions, references added. Accepted for
publication in Phys. Rev.
Ricci Nilsoliton Black Holes
We follow a constructive approach and find higher-dimensional black holes
with Ricci nilsoliton horizons. The spacetimes are solutions to Einstein's
equation with a negative cosmological constant and therefore generalise
anti-de Sitter black hole spacetimes. The approach combines a work by Lauret --
which relates so-called Ricci nilsolitons and Einstein solvmanifolds -- and an
earlier work by the author. The resulting black hole spacetimes are
asymptotically Einstein solvmanifolds and thus, are examples of solutions which
are not asymptotically anti-de Sitter. We show that any nilpotent group of
dimension n has a corresponding Ricci nilsoliton black hole solution in
dimension (n+2). Furthermore, we show that in dimensions (n+2)>8, there exists
an infinite number of locally distinct Ricci nilsoliton black hole metrics.
Comment: 19 pages; fixed formatting problem
Verbal fluency as a quick and simple tool to help in deciding when to refer patients with a possible brain tumour
BACKGROUND: Patients with brain tumours often present with non-specific symptoms. Correctly identifying who to prioritise for urgent brain imaging is challenging. Brain tumours are amongst the commonest cancers diagnosed as an emergency presentation. A verbal fluency task (VFT) is a rapid triage test affected by disorders of executive function, language and processing speed. We tested whether a VFT could support identification of patients with a brain tumour. METHODS: This proof-of-concept study examined whether a VFT can help differentiate patients with a brain tumour from those with similar symptoms (i.e. headache) without a brain tumour. Two patient populations were recruited: (a) patients with known brain tumour, and (b) patients with headache referred for Direct-Access Computed-Tomography (DACT) from primary care with a suspicion of a brain tumour. Semantic and phonemic verbal fluency data were collected prospectively. RESULTS: 180 brain tumour patients and 90 DACT patients were recruited. Semantic verbal fluency score was significantly worse for patients with a brain tumour than those without (P < 0.001), whether comparing patients with headache, or patients without headache. Phonemic fluency showed a similar but weaker difference. Raw and incidence-weighted positive and negative predictive values were calculated. CONCLUSION: We have demonstrated the potential role of adding semantic VFT score performance into clinical decision making to support triage of patients for urgent brain imaging. A relatively small improvement in the true positive rate in patients referred for DACT has the potential to increase the timeliness and efficiency of diagnosis and improve patient outcomes. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12883-022-02655-9
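Raw and incidence-weighted predictive values of the kind reported above follow from a 2x2 table and Bayes' rule. The counts and rates below are invented for illustration, not the study's data:

```python
def predictive_values(tp, fp, tn, fn):
    """Raw positive and negative predictive values from a 2x2 table
    of true/false positives and true/false negatives."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

def incidence_weighted_ppv(sensitivity, specificity, prevalence):
    """PPV re-weighted to a target population prevalence (Bayes' rule),
    useful when the study sample is enriched for tumour cases."""
    tp_rate = sensitivity * prevalence
    fp_rate = (1.0 - specificity) * (1.0 - prevalence)
    return tp_rate / (tp_rate + fp_rate)
```

Because brain tumours are rare among headache presentations in primary care, the incidence-weighted PPV is typically much lower than the raw PPV computed in an enriched study sample.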
Measuring Redshift-Space Distortions using Photometric Surveys
We outline how redshift-space distortions (RSD) can be measured from the
angular correlation function w({\theta}), of galaxies selected from photometric
surveys. The natural degeneracy between RSD and galaxy bias can be minimized by
comparing results from bins with top-hat galaxy selection in redshift, and bins
based on the radial position of galaxy pair centres. This comparison can also
be used to test the accuracy of the photometric redshifts. The presence of RSD
will be clearly detectable with the next generation of photometric redshift
surveys. We show that the Dark Energy Survey (DES) will be able to measure
f(z){\sigma}_8(z) to a 1{\sigma} accuracy of (17 {\times} b)%, using galaxies
drawn from a single narrow redshift slice centered at z = 1. Here b is the
linear bias, and f is the logarithmic rate of change of the linear growth rate
with respect to the scale factor. Extending to measurements of w({\theta}) for
a series of bins of width 0.02(1 + z) over 0.5 < z < 1.4 will measure {\gamma}
to a 1{\sigma} accuracy of 25%, given the model f = {\Omega}_m(z)^{\gamma}, and
assuming a linear bias model that evolves such that b = 0.5 + z (and fixing
other cosmological parameters). The accuracy of our analytic predictions is
confirmed using mock catalogs drawn from simulations conducted by the MICE
collaboration.
Comment: Accepted by MNRAS, revisions include fixing of typos and
clarification of the text.
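The growth model used in the forecast above can be evaluated directly. A minimal sketch, assuming flat LCDM with a fiducial Omega_m = 0.3 (a value chosen here for illustration; the abstract fixes other cosmological parameters but does not quote this one):

```python
def omega_m_z(z, om0=0.3):
    """Matter density parameter at redshift z for flat LCDM."""
    a3 = (1.0 + z) ** 3
    return om0 * a3 / (om0 * a3 + (1.0 - om0))

def growth_rate(z, gamma=0.55, om0=0.3):
    """Growth rate under the model f = Omega_m(z)^gamma."""
    return omega_m_z(z, om0) ** gamma

def bias(z):
    """The evolving linear bias model assumed in the text, b = 0.5 + z."""
    return 0.5 + z
```

At high redshift Omega_m(z) approaches 1, so f approaches 1 regardless of gamma; measurements at lower redshift therefore carry most of the constraining power on gamma.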
Forecasting Cosmological Constraints from Redshift Surveys
Observations of redshift-space distortions in spectroscopic galaxy surveys
offer an attractive method for observing the build-up of cosmological
structure, which depends both on the expansion rate of the Universe and our
theory of gravity. In this paper we present a formalism for forecasting the
constraints on the growth of structure which would arise in an idealized
survey. This Fisher matrix based formalism can be used to study the power and
aid in the design of future surveys.
Comment: 7 pages, 5 figures, minor revisions to match version accepted by
MNRAS.
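For independent Gaussian observables, a Fisher-matrix forecast of the kind described reduces to a simple sum over data bins. A minimal sketch (the derivative and error values in the usage are illustrative, not taken from the paper):

```python
def fisher_matrix(derivs, sigmas):
    """Fisher matrix for independent Gaussian observables:
    F_ij = sum_k (dO_k/dp_i)(dO_k/dp_j) / sigma_k^2,
    where derivs[k][i] is the derivative of observable k w.r.t. parameter i."""
    n = len(derivs[0])
    F = [[0.0] * n for _ in range(n)]
    for row, s in zip(derivs, sigmas):
        for i in range(n):
            for j in range(n):
                F[i][j] += row[i] * row[j] / s ** 2
    return F

def marginalised_errors_2d(F):
    """1-sigma marginalised errors for a 2-parameter Fisher matrix:
    square roots of the diagonal of F^{-1}."""
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return (F[1][1] / det) ** 0.5, (F[0][0] / det) ** 0.5

# Two parameters, two bins, uncorrelated for simplicity.
F = fisher_matrix([[1.0, 0.0], [0.0, 1.0]], [0.1, 0.1])
errors = marginalised_errors_2d(F)
```

Inverting the full matrix (rather than its diagonal) is what accounts for parameter degeneracies in the forecast errors.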
Contrasting effects of high-starch and high-sugar diets on ruminal function in cattle
The experiment reported in this research paper aimed to determine whether clinical and subclinical effects on cattle were similar if provided with isoenergetic and isonitrogenous challenge diets in which carbohydrate sources were predominantly starch or sugar. The study was a 3 × 3 Latin square using six adult Jersey cows with rumen cannulae, over 9 weeks. In the first 2 weeks of each 3 week experimental period cows were fed with a maintenance diet and, in the last week, each animal was assigned to one of three diets: a control diet (CON), being a continuation of the maintenance diet; a high starch (HSt) or a high sugar (HSu) diet. Reticuloruminal pH and motility were recorded throughout the study period. Blood and ruminal samples were taken on day-1 (TP-1), day-2 (TP-2) and day-7 (TP-7) of each challenge week. Four clinical variables were recorded daily: diarrhoea, inappetence, depression and ruminal tympany. The effects of treatment, hour of day and day after treatment on clinical parameters were analysed using linear mixed effects (LME) models. Although both challenge diets resulted in a decline in pH, an increase in the absolute pH residuals and an increase in the number of minutes per day under pH 5.8, systemic inflammation was only detected with the HSt diet. The challenge diets differentially modified amplitude and period of reticuloruminal contractions compared with CON diet and both were associated with an increased probability of diarrhoea. The HSu diet reduced the probability of an animal consuming its complete allocation. Because the challenge diets were derived from complex natural materials (barley and molasses respectively), it is not possible to assign all the differential effects to the difference in starch and sugar concentration: non-starch components of barley or non-sugar components of molasses might have contributed to some of the observations.
In conclusion, substituting much of the starch with sugar caused no substantial reduction in the acidosis load, but the inflammatory response was reduced while feed rejection was increased.
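A 3 × 3 Latin square of the kind mentioned above can be generated cyclically. This sketch only illustrates the balancing property (each diet appears once per period and once per sequence); it is not the study's actual assignment of cows to sequences:

```python
def latin_square(treatments):
    """Cyclic Latin square: row p gives the treatment order for period p,
    so each treatment appears exactly once in every row and every column."""
    n = len(treatments)
    return [[treatments[(p + c) % n] for c in range(n)] for p in range(n)]

# Rows = experimental periods, columns = animal sequences.
design = latin_square(["CON", "HSt", "HSu"])
```

This balance is what lets the design separate diet effects from period and animal effects with only six cows.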