Pharmacist intervention in primary care to improve outcomes in patients with left ventricular systolic dysfunction
<b>Background</b> Meta-analysis of small trials suggests that pharmacist-led collaborative review and revision of medical treatment may improve outcomes in heart failure.
<b>Methods and results</b> We studied patients with left ventricular systolic dysfunction in a cluster-randomized, controlled, event-driven trial in primary care. We allocated 87 practices (1090 patients) to pharmacist intervention and 87 practices (1074 patients) to usual care. The intervention was delivered by non-specialist pharmacists working with family doctors to optimize medical treatment. The primary outcome was a composite of death or hospital admission for worsening heart failure. This trial is registered, number ISRCTN70118765. The median follow-up was 4.7 years. At baseline, 86% of patients in both groups were treated with an angiotensin-converting enzyme inhibitor or an angiotensin receptor blocker. In patients not receiving one or other of these medications, or receiving less than the recommended dose, treatment was started, or the dose increased, in 33.1% of patients in the intervention group and in 18.5% of the usual care group [odds ratio (OR) 2.26, 95% CI 1.64–3.10; P < 0.001]. At baseline, 62% of each group were treated with a β-blocker, and the proportions starting or having an increase in the dose were 17.9% in the intervention group and 11.1% in the usual care group (OR 1.76, 95% CI 1.31–2.35; P < 0.001). The primary outcome occurred in 35.8% of patients in the intervention group and 35.4% in the usual care group (hazard ratio 0.97, 95% CI 0.83–1.14; P = 0.72). There was no difference in any secondary outcome.
<b>Conclusion</b> A low-intensity, pharmacist-led collaborative intervention in primary care resulted in modest improvements in prescribing of disease-modifying medications but did not improve clinical outcomes in a population that was relatively well treated at baseline.
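The prescribing effect sizes can be roughly sanity-checked from the quoted proportions. Note that the published ORs are adjusted (e.g. for clustering), so the crude values computed below differ slightly from the reported ones:

```python
def odds_ratio(p_treat: float, p_control: float) -> float:
    """Crude (unadjusted) odds ratio from two event proportions."""
    return (p_treat / (1 - p_treat)) / (p_control / (1 - p_control))

# ACE inhibitor/ARB started or dose increased: 33.1% vs 18.5%
print(round(odds_ratio(0.331, 0.185), 2))  # 2.18 (reported adjusted OR: 2.26)

# Beta-blocker started or dose increased: 17.9% vs 11.1%
print(round(odds_ratio(0.179, 0.111), 2))  # 1.75 (reported adjusted OR: 1.76)
```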
Effects of multiple-dose ponesimod, a selective S1P1 receptor modulator, on lymphocyte subsets in healthy humans
This study investigated the effects of ponesimod, a selective S1P1 receptor modulator, on T lymphocyte subsets in 16 healthy subjects. Lymphocyte subset proportions and absolute numbers were determined at baseline and on Day 10, after once-daily administration of ponesimod (10 mg, 20 mg, and 40 mg, each consecutively for 3 days) or placebo (ratio 3:1). The overall change from baseline in lymphocyte count was −1,292 ± 340×10⁶ cells/L and 275 ± 486×10⁶ cells/L in ponesimod- and placebo-treated subjects, respectively. This included a decrease in both T and B lymphocytes following ponesimod treatment. A decrease in naive CD4⁺ T cells (CD45RA⁺CCR7⁺) from baseline was observed only after ponesimod treatment (−113 ± 98×10⁶ cells/L; placebo: 0 ± 18×10⁶ cells/L). The numbers of T-cytotoxic (CD3⁺CD8⁺) and T-helper (CD3⁺CD4⁺) cells were significantly altered following ponesimod treatment compared with placebo. Furthermore, ponesimod treatment resulted in marked decreases in CD4⁺ T-central memory (CD45RA⁻CCR7⁺) cells (−437 ± 164×10⁶ cells/L) and CD4⁺ T-effector memory (CD45RA⁻CCR7⁻) cells (−131 ± 57×10⁶ cells/L). In addition, ponesimod treatment led to a decrease of 228 ± 90×10⁶ cells/L in gut-homing T cells (CLA⁻ integrin β7⁺). In contrast, when compared with placebo, CD8⁺ T-effector memory and natural killer (NK) cells were not significantly reduced following multiple-dose administration of ponesimod. In summary, ponesimod treatment led to a marked reduction in overall T and B cells. Further investigation revealed that the number of CD4⁺ cells was dramatically reduced, whereas CD8⁺ and NK cells were less affected, allowing the body to preserve critical viral-clearing functions.
Regular realizability problems and context-free languages
We investigate regular realizability (RR) problems: the problem of
verifying whether the intersection of a regular language -- the input of the
problem -- and a fixed language, called the filter, is non-empty. In this paper we
focus on the case of context-free filters. The algorithmic complexity of the RR
problem is a very coarse measure of the complexity of context-free languages. This
characteristic is compatible with rational dominance. We present examples of
P-complete RR problems as well as examples of RR problems in the class NL. We also
discuss RR problems with context-free filters that might have intermediate
complexity. Possible candidates are languages with polynomially bounded
rational indices.
Comment: conference DCFS 201
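As a toy illustration of the RR setting (not the decision procedures studied in the paper), one can fix the context-free filter {aⁿbⁿ : n ≥ 1} and test, by bounded enumeration, whether the language of an input DFA meets the filter. The DFA encoding and the bound `max_len` here are illustrative assumptions:

```python
from itertools import product

def dfa_accepts(word, delta, start, finals):
    """Run a DFA given as a transition dict {(state, symbol): state}."""
    state = start
    for ch in word:
        state = delta.get((state, ch))
        if state is None:
            return False
    return state in finals

def in_filter(word):
    """Membership in the fixed context-free filter {a^n b^n : n >= 1}."""
    n = len(word) // 2
    return n >= 1 and word == "a" * n + "b" * n

def rr_nonempty(delta, start, finals, alphabet="ab", max_len=10):
    """Bounded brute-force check: does the DFA's language meet the filter?"""
    for length in range(1, max_len + 1):
        for letters in product(alphabet, repeat=length):
            word = "".join(letters)
            if in_filter(word) and dfa_accepts(word, delta, start, finals):
                return True
    return False

# DFA over {a, b} that tracks word-length parity: state 0 = even, 1 = odd
delta = {(0, "a"): 1, (0, "b"): 1, (1, "a"): 0, (1, "b"): 0}
print(rr_nonempty(delta, 0, finals={0}))  # True: "ab" is even-length and in the filter
print(rr_nonempty(delta, 0, finals={1}))  # False: filter words all have even length
```

The actual complexity results concern the exact (unbounded) problem, where emptiness of the intersection of a regular language with a fixed context-free filter is decidable via standard CFL closure properties.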
Assessing adherence to dermatology treatments: a review of self-report and electronic measures
Nonadherence to prescribed medications is a common problem in dermatology, and assessing adherence can be difficult. Electronic monitors are not always practical, but self-report measures may be less reliable. To review the literature for self-report instruments and electronic monitors used to measure medication adherence in patients with chronic disease, a PubMed literature search was conducted using the terms ‘scale,’ ‘measure,’ ‘self-report,’ ‘electronic,’ and ‘medication adherence.’ Relevant articles were reviewed and selected if they addressed self-report or electronic measures of adherence in chronic disease. Eleven self-report instruments for the measurement of adherence were identified. Four were validated using electronic monitors. All produced an estimate of adherence that correlated with actual behavior, although this correlation was not strong for any of the measures. None of the scales was tested in patients who had dermatologic disease and/or used topical medications. Several electronic monitoring systems were identified, including pill counts, pharmacy refill logs, and the Medication Event Monitoring System (MEMS®). Validity was higher among electronic monitoring systems compared with self-report measures. While several self-report measures of adherence have been validated in chronic disease populations, their relevance in dermatology patients has not been studied. A dermatology-specific instrument for the measurement of adherence would contribute to improved outcomes; until such a tool exists, researchers and clinicians should consider nonadherence as a possible factor in skin disease that is not responsive to treatment. Electronic monitoring provides the most reliable means of measuring adherence, and may provide additional clues to identify barriers to adherence.
Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/79087/1/j.1600-0846.2010.00431.x.pd
Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable
There has been significant recent interest in parallel graph processing due
to the need to quickly analyze the large graphs available today. Many graph
codes have been designed for distributed memory or external memory. However,
today even the largest publicly-available real-world graph (the Hyperlink Web
graph with over 3.5 billion vertices and 128 billion edges) can fit in the
memory of a single commodity multicore server. Nevertheless, most experimental
work in the literature reports results on much smaller graphs, and the work on
the Hyperlink graph uses distributed or external memory. Therefore, it is
natural to ask whether we can efficiently solve a broad class of graph problems
on this graph in memory.
This paper shows that theoretically-efficient parallel graph algorithms can
scale to the largest publicly-available graphs using a single machine with a
terabyte of RAM, processing them in minutes. We give implementations of
theoretically-efficient parallel algorithms for 20 important graph problems. We
also present the optimizations and techniques that we used in our
implementations, which were crucial in enabling us to process these large
graphs quickly. We show that the running times of our implementations
outperform existing state-of-the-art implementations on the largest real-world
graphs. For many of the problems that we consider, this is the first time they
have been solved on graphs at this scale. We have made the implementations
developed in this work publicly available as the Graph-Based Benchmark Suite
(GBBS).
Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 201
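To give a flavor of the frontier-based algorithms such suites implement, here is a minimal round-synchronous breadth-first search sketch. This is an illustrative assumption about the general pattern, not GBBS code (GBBS itself is parallel C++); in a parallel implementation each round's frontier expansion runs in parallel with atomic visited-checks:

```python
from collections import defaultdict

def frontier_bfs(adj, source):
    """Round-synchronous BFS: each round expands the whole frontier.
    In a shared-memory parallel implementation the per-round edge
    processing is parallelized; here it is sequential for clarity."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:          # visited check (a CAS in parallel code)
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier
    return dist

# Small undirected example graph
adj = defaultdict(list)
for u, v in [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]:
    adj[u].append(v)
    adj[v].append(u)
print(frontier_bfs(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```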
Natural Complexity, Computational Complexity and Depth
Depth is a complexity measure for natural systems of the kind studied in
statistical physics and is defined in terms of computational complexity. Depth
quantifies the length of the shortest parallel computation required to
construct a typical system state or history starting from simple initial
conditions. The properties of depth are discussed and it is compared to other
complexity measures. Depth can only be large for systems with embedded
computation.
Comment: 21 pages, 1 figure
Parallel Algorithm and Dynamic Exponent for Diffusion-limited Aggregation
A parallel algorithm for ``diffusion-limited aggregation'' (DLA) is described
and analyzed from the perspective of computational complexity. The dynamic
exponent z of the algorithm is defined with respect to the probabilistic
parallel random-access machine (PRAM) model of parallel computation according
to T ~ L^z, where L is the cluster size, T is the running time, and the
algorithm uses a number of processors polynomial in L. It is argued that
z=D-D_2/2, where D is the fractal dimension and D_2 is the second generalized
dimension. Simulations of DLA are carried out to measure D_2 and to test
scaling assumptions employed in the complexity analysis of the parallel
algorithm. It is plausible that the parallel algorithm attains the minimum
possible value of the dynamic exponent in which case z characterizes the
intrinsic history dependence of DLA.
Comment: 24 pages RevTeX and 2 figures. A major improvement to the algorithm and smaller dynamic exponent in this version
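For readers unfamiliar with the growth process being analyzed, here is a minimal serial on-lattice DLA sketch (random walkers launched from the boundary stick on first contact with the cluster). This only illustrates the model; the paper's contribution is a PRAM-parallel algorithm for the same process, and the lattice size and particle count below are arbitrary choices:

```python
import random

def dla(n_particles=50, size=41, seed=1):
    """Minimal on-lattice DLA: walkers from the box edge random-walk
    until they either escape the box or land next to the cluster,
    where they stick. Serial illustration only."""
    random.seed(seed)
    c = size // 2
    cluster = {(c, c)}                      # seed particle at the center
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while len(cluster) < n_particles:
        # launch a walker from a uniformly random edge site
        x, y = random.randrange(size), random.choice([0, size - 1])
        if random.random() < 0.5:
            x, y = y, x
        while True:
            dx, dy = random.choice(moves)
            x, y = x + dx, y + dy
            if not (0 <= x < size and 0 <= y < size):
                break                       # walker escaped; relaunch
            if any((x + mx, y + my) in cluster for mx, my in moves):
                cluster.add((x, y))         # stick adjacent to the cluster
                break
    return cluster

print(len(dla()))  # 50 occupied sites, including the seed
```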
Space Efficient Algorithms for Breadth-Depth Search
Continuing the recent trend, in this article we design several
space-efficient algorithms for two well-known graph search methods. Both these
search methods share the same name {\it breadth-depth search} (henceforth {\sf
BDS}), although they work in entirely different fashions. The classical
implementations for these graph search methods take O(m+n) time and O(n lg n)
bits of space in the standard word RAM model (with word size being Θ(lg n)
bits), where m and n denote the number of edges and vertices of the input graph
respectively. Our goal here is to beat the space bound of the classical
implementations and design O(n)-bit space algorithms for these search methods
by paying little to no penalty in the running time. Note that our space bounds
(i.e., with O(n) bits of space) do not even allow us to explicitly store the
required information to implement the classical algorithms, yet our algorithms
visit and report all the vertices of the input graph in the correct order.
Comment: 12 pages. This work will appear in FCT 201
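One classical variant of breadth-depth search (there are two methods sharing the name, which differ in when vertices are marked and pushed) can be sketched as follows; this is the textbook stack-based formulation, not the space-efficient algorithm from the article:

```python
def breadth_depth_search(adj, source):
    """Breadth-depth search, one classical variant: when a vertex is
    popped, *all* of its unvisited neighbors are pushed onto a stack
    (breadth-like expansion), and the most recently pushed vertex is
    explored next (depth-like order)."""
    order = []
    visited = {source}
    stack = [source]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj.get(u, []):
            if v not in visited:
                visited.add(v)             # marked on push, O(n lg n) bits overall
                stack.append(v)
    return order

# 4-cycle: 0-1, 0-2, 1-3, 2-3
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(breadth_depth_search(adj, 0))  # [0, 2, 3, 1]
```

The stack and visited set here use Θ(n lg n) bits in the worst case, which is exactly the bound the article's O(n)-bit algorithms are designed to beat.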