Dynamics of Social Balance on Networks
We study the evolution of social networks that contain both friendly and
unfriendly pairwise links between individual nodes. The network is endowed with
dynamics in which the sense of a link in an imbalanced triad--a triangular loop
with 1 or 3 unfriendly links--is reversed to make the triad balanced. With this
dynamics, an infinite network undergoes a dynamic phase transition from a
steady state to "paradise"--all links are friendly--as the propensity p for
friendly links in an update event passes through 1/2. A finite network always
falls into a socially-balanced absorbing state where no imbalanced triads
remain. If the additional constraint that the number of imbalanced triads in
the network does not increase in an update is imposed, then the network quickly
reaches a balanced final state.
Comment: 10 pages, 7 figures, 2-column revtex4 format
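The balance condition in the abstract above (a triad is imbalanced when it has 1 or 3 unfriendly links, i.e. its sign product is -1) can be checked with a short sketch. The function below is my own illustration, not the paper's code:

```python
# Count imbalanced triads in a signed complete graph.
# links[(i, j)] is +1 (friendly) or -1 (unfriendly) for i < j.
from itertools import combinations

def imbalanced_triads(links, n):
    """A triad (i, j, k) is imbalanced when its sign product is -1,
    i.e. it contains 1 or 3 unfriendly links."""
    return sum(
        1
        for i, j, k in combinations(range(n), 3)
        if links[(i, j)] * links[(j, k)] * links[(i, k)] == -1
    )

# Triangle with exactly one unfriendly link: imbalanced.
links = {(0, 1): 1, (0, 2): 1, (1, 2): -1}
print(imbalanced_triads(links, 3))  # 1
```

A socially balanced state (e.g. two mutually hostile friendly cliques) gives a count of zero, which is the absorbing state the dynamics drives a finite network into.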
A Precision Measurement of pp Elastic Scattering Cross Sections at Intermediate Energies
We have measured differential cross sections for pp elastic scattering with
internal fiber targets in the recirculating beam of the proton synchrotron
COSY. Measurements were made continuously during acceleration for projectile
kinetic energies between 0.23 and 2.59 GeV in the angular range 30°-90° (c.m.). Details of the apparatus and the data analysis are
given and the resulting excitation functions and angular distributions
presented. The precision of each data point is typically better than 4%, and a
relative normalization uncertainty of only 2.5% within an excitation function
has been reached. The impact on phase shift analysis is discussed, as are upper
bounds on possible resonant contributions in lower partial waves.
Comment: 23 pages, 29 figures
A Relational Event Approach to Modeling Behavioral Dynamics
This chapter provides an introduction to the analysis of relational event
data (i.e., actions, interactions, or other events involving multiple actors
that occur over time) within the R/statnet platform. We begin by reviewing the
basics of relational event modeling, with an emphasis on models with piecewise
constant hazards. We then discuss estimation for dyadic and more general
relational event models using the relevent package, with an emphasis on
hands-on applications of the methods and interpretation of results. Statnet is
a collection of packages for the R statistical computing system that supports
the representation, manipulation, visualization, modeling, simulation, and
analysis of relational data. Statnet packages are contributed by a team of
volunteer developers, and are made freely available under the GNU Public
License. These packages are written for the R statistical computing
environment, and can be used with any computing platform that supports R
(including Windows, Linux, and Mac).
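The chapter fits these models in R with the relevent package; as a language-neutral sketch of the underlying likelihood (a toy example of mine, not statnet code), a relational event model with piecewise constant hazards reduces, on any interval where all rates are constant, to the following:

```python
# Relational-event log-likelihood sketch with piecewise-constant hazards:
# each ordered dyad (sender, receiver) has a constant rate lam[(s, r)];
# the log-likelihood of an observed event sequence over window [0, T] is
#   sum(log lam[event dyad]) - T * sum(all dyad rates).
import math

def rem_loglik(lam, events, T):
    """Log-likelihood of events (sender, receiver, time) under constant rates."""
    total_rate = sum(lam.values())
    return sum(math.log(lam[(s, r)]) for s, r, _t in events) - T * total_rate

# Toy data: three actors, all dyadic rates equal to 0.5.
lam = {(a, b): 0.5 for a in "ABC" for b in "ABC" if a != b}
events = [("A", "B", 0.4), ("B", "A", 1.1), ("A", "C", 1.9)]
print(round(rem_loglik(lam, events, T=2.0), 4))  # -8.0794
```

In the piecewise-constant case, the full likelihood is a product of such terms over the intervals between events, which is the structure the relevent estimators exploit.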
Dark Matter Spin-Dependent Limits for WIMP Interactions on 19-F by PICASSO
The PICASSO experiment at SNOLAB reports new results for spin-dependent WIMP
interactions on 19F using the superheated droplet technique. A new generation
of detectors and new features which enable background discrimination via the
rejection of non-particle-induced events are described. First results are
presented for a subset of two detectors with 19F target masses of 65 g and
69 g, respectively, and a total exposure of 13.75 ± 0.48 kgd. No dark matter
signal was found, and for WIMP masses around 24 GeV/c² new limits have been
obtained on the spin-dependent cross section on 19F of σ_F = 13.9 pb
(90% C.L.), which can be converted into cross-section limits on protons and
neutrons of σ_p = 0.16 pb and σ_n = 2.60 pb, respectively (90% C.L.). The
obtained limits on protons restrict recent interpretations of the DAMA/LIBRA
annual modulations in terms of spin-dependent interactions.
Comment: Revised version, accepted for publication in Phys. Lett. B, 20 pages, 7 figures
Incentives as connectors: insights into a breastfeeding incentive intervention in a disadvantaged area of North-West England
PMID: 22458841 [PubMed - indexed for MEDLINE] PMCID: PMC3414740
Improved Bevirimat resistance prediction by combination of structural and sequence-based classifiers
Background: Maturation inhibitors such as Bevirimat are a new class of antiretroviral drugs that hamper the cleavage of HIV-1 proteins into their functional, active forms. They bind to these precursor proteins and inhibit their cleavage by the HIV-1 protease, resulting in non-functional virus particles. Nevertheless, mutations in this region can lead to resistance against Bevirimat. Highly specific and accurate tools to predict resistance to maturation inhibitors can help to identify patients who might benefit from these new drugs.
Results: We tested several methods to improve Bevirimat resistance prediction in HIV-1. Combining structural and sequence-based information in classifier ensembles led to accurate and reliable predictions. Moreover, we were able to computationally identify the regions most crucial for Bevirimat resistance, which are in line with experimental results from other studies.
Conclusions: Our analysis demonstrates the use of machine learning techniques to predict HIV-1 resistance against maturation inhibitors such as Bevirimat. New maturation inhibitors are already under development and might enlarge the arsenal of antiretroviral drugs in the future. Accurate prediction tools are thus very useful for enabling personalized therapy.
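The classifier-ensemble combination described above can be sketched minimally (the toy base classifiers below are hypothetical placeholders, not the paper's structural and sequence-based models): each base classifier votes, and the ensemble takes the majority label.

```python
# Minimal majority-vote ensemble sketch: combine predictions from
# independent classifiers (e.g., one structure-based, one sequence-based).
from collections import Counter

def majority_vote(predictions):
    """Return the most common label among the base classifiers' votes."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(classifiers, sample):
    """Apply every base classifier to the sample and combine by majority."""
    return majority_vote([clf(sample) for clf in classifiers])

# Toy base classifiers standing in for structural / sequence-based models.
structural = lambda s: "resistant" if s.count("V") > 1 else "susceptible"
sequence_a = lambda s: "resistant" if "VA" in s else "susceptible"
sequence_b = lambda s: "resistant" if len(s) > 6 else "susceptible"

label = ensemble_predict([structural, sequence_a, sequence_b], "VAVA")
print(label)  # "resistant" (2 of 3 toy classifiers vote resistant)
```

Real ensembles typically weight members by validated accuracy or average predicted probabilities, but the voting structure is the same.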
Machine learning on normalized protein sequences
Background: Machine learning techniques have been widely applied to biological sequences, e.g. to predict drug resistance in HIV-1 from sequences of drug target proteins and to predict protein functional classes. As deletions and insertions are frequent in biological sequences, a major limitation of current methods is the inability to handle varying sequence lengths.
Findings: We propose to normalize sequences to uniform length. To this end, we tested one linear and four non-linear interpolation methods for the normalization of sequence lengths in 19 classification datasets. Classification tasks included prediction of HIV-1 drug resistance from drug target sequences and sequence-based prediction of protein function. We applied random forests to the classification of sequences into "positive" and "negative" samples. Statistical tests showed that linear interpolation outperforms the non-linear interpolation methods in most of the analyzed datasets, while in a few cases the non-linear methods had a small but significant advantage. Compared to other published methods, our prediction scheme improves prediction accuracy by up to 14%.
Conclusions: We found that machine learning on sequences normalized by simple linear interpolation gave better, or at least competitive, results compared with state-of-the-art procedures, and is thus a promising alternative to existing methods, especially for protein sequences of variable length.
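The linear interpolation step can be sketched as follows (a minimal pure-Python version; the function name and numeric encoding are illustrative, not the paper's implementation): a numeric sequence of arbitrary length is resampled onto a fixed grid so that a fixed-width classifier can consume it.

```python
# Normalize a numeric sequence to a fixed target length by linear
# interpolation, so classifiers can handle inputs of varying length.

def normalize_length(values, target_len):
    """Resample `values` onto `target_len` evenly spaced positions."""
    if target_len == 1 or len(values) == 1:
        return [float(values[0])] * target_len
    out = []
    step = (len(values) - 1) / (target_len - 1)
    for i in range(target_len):
        x = i * step                         # position in the original grid
        lo = min(int(x), len(values) - 2)    # left neighbor index
        frac = x - lo                        # distance past the left neighbor
        out.append(values[lo] * (1 - frac) + values[lo + 1] * frac)
    return out

# E.g. hydrophobicity-like scores for sequences of different lengths:
print(normalize_length([0.0, 10.0], 3))           # [0.0, 5.0, 10.0]
print(normalize_length([1.0, 2.0, 3.0, 4.0], 2))  # [1.0, 4.0]
```

After normalization, every sequence has the same number of features regardless of insertions or deletions, which is what makes a fixed-input model such as a random forest applicable.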
An Adaptive Sublinear-Time Block Sparse Fourier Transform
The problem of approximately computing the k dominant Fourier coefficients of a vector quickly, and using few samples in the time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with O(k log n log(n/k)) runtime [Hassanieh et al., STOC'12] and O(k log n) sample complexity [Indyk et al., FOCS'14]. These results are proved using non-adaptive algorithms, and the latter sample complexity result is essentially the best possible under the sparsity assumption alone: it is known that even adaptive algorithms must use Ω((k log(n/k))/log log n) samples [Hassanieh et al., STOC'12]. By adaptive, we mean being able to exploit previous samples in guiding the selection of further samples. This paper revisits the sparse FFT problem with the added twist that the sparse coefficients approximately obey a (k_0, k_1)-block sparse model. In this model, signal frequencies are clustered in k_0 intervals of width k_1 in Fourier space, and k = k_0 k_1 is the total sparsity. Signals arising in applications are often well approximated by this model with k_0 ≪ k. Our main result is the first sparse FFT algorithm for (k_0, k_1)-block sparse signals with a sample complexity of O*(k_0 k_1 + k_0 log(1 + k_0) log n) at constant signal-to-noise ratios, and sublinear runtime. A similar sample complexity was previously achieved in the works on model-based compressive sensing using random Gaussian measurements, but used Ω(n) runtime. To the best of our knowledge, our result is the first sublinear-time algorithm for model-based compressed sensing, and the first sparse FFT result that goes below the O(k log n) sample complexity bound. Interestingly, the aforementioned model-based compressive sensing result that relies on Gaussian measurements is non-adaptive, whereas our algorithm crucially uses adaptivity to achieve the improved sample complexity bound.
We prove that adaptivity is in fact necessary in the Fourier setting: any non-adaptive algorithm must use Ω(k_0 k_1 log(n/(k_0 k_1))) samples for the (k_0, k_1)-block sparse model, ruling out improvements over the vanilla sparsity assumption. Our main technical innovation for adaptivity is a new randomized energy-based importance sampling technique that may be of independent interest.
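The block sparse model can be illustrated numerically (a toy construction of mine, not the paper's algorithm): build a length-n signal whose spectrum is supported on k_0 intervals of width k_1, and verify that exactly k = k_0 * k_1 Fourier coefficients are non-negligible.

```python
# Construct a (k0, k1)-block sparse signal: its Fourier spectrum is
# supported on k0 intervals of width k1, for total sparsity k = k0*k1.
import cmath

n, k0, k1 = 64, 2, 4
block_starts = [5, 40]            # arbitrary, non-overlapping intervals
support = {s + j for s in block_starts for j in range(k1)}

# Time-domain signal via an inverse DFT of the block-sparse spectrum.
x = [
    sum(cmath.exp(2j * cmath.pi * f * t / n) for f in support) / n
    for t in range(n)
]

# Forward DFT; count coefficients with non-negligible magnitude.
X = [
    sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
    for f in range(n)
]
heavy = [f for f in range(n) if abs(X[f]) > 0.5]
print(len(heavy), len(heavy) == k0 * k1)  # 8 True
```

A sparse FFT algorithm for this model aims to locate those k_0 clusters with far fewer than n samples of x, which is where the paper's adaptive sampling pays off.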
Health-related quality of life in patients with surgically treated lumbar disc herniation: 2- and 7-year follow-up of 117 patients
BACKGROUND AND PURPOSE: Health-related quality of life (HRQoL) instruments have been of increasing interest for the evaluation of medical treatments over the past 10-15 years. In this prospective, long-term follow-up study we investigated the influence of preoperative factors and the change in HRQoL over time after lumbar disc herniation surgery.
METHODS: 117 patients surgically treated for lumbar disc herniation (L4-L5 or L5-S1) were evaluated with a self-completion HRQoL instrument (EQ-5D) preoperatively, after 2 years (96 patients), and after 7 years (89 patients). Baseline data (age, sex, duration of leg pain, surgical level) and degree of leg and back pain (VAS) were obtained preoperatively. The mean age was 39 (18-66) years, 54% were men, and the surgical level was L5-S1 in 58% of the patients. The change in EQ-5D score at the 2-year follow-up was analyzed by testing for correlation and by using a multiple regression model including all baseline factors (age, sex, duration of pain, degree of leg and back pain, and baseline EQ-5D score) as potential predictors.
RESULTS: 85% of the patients reported improvement in EQ-5D two years after surgery, and this result remained at the long-term follow-up. The mean difference (change) between the preoperative EQ-5D score and the 2-year and 7-year scores was 0.59 (p < 0.001) and 0.62 (p < 0.001), respectively. However, the HRQoL of this patient group did not reach the mean level previously reported for a normal population of the same age range at any of the follow-ups. The change in EQ-5D score between the 2- and 7-year follow-ups was not statistically significant (mean change 0.03, p = 0.2). There was a correlation between baseline leg pain and the change in EQ-5D at the 2-year (r = 0.33, p = 0.002) and 7-year follow-up (r = 0.23, p = 0.04). However, in the regression analysis the only statistically significant predictor of change in EQ-5D was baseline EQ-5D score.
INTERPRETATION: Our findings suggest that HRQoL (as measured by EQ-5D) improved 2 years after lumbar disc herniation surgery, but there was no further improvement after 5 more years. Low quality of life and severe leg pain at baseline are important predictors of improvement in quality of life after lumbar disc herniation surgery.
Funding: Marianne och Marcus Wallenberg Foundation, ALF Västra Götaland, Gothenburg Medical Association, Swedish Society of Medicine, Felix Neubergh Foundation.