305 research outputs found
Teachers' Read-Aloud Preferences: Perpetuating Sex-Role Stereotypes
Just what influence does children's literature, sexist or not, have upon the socialization of children?
Effects of multiple-dose ponesimod, a selective S1P1 receptor modulator, on lymphocyte subsets in healthy humans
This study investigated the effects of ponesimod, a selective S1P1 receptor modulator, on T lymphocyte subsets in 16 healthy subjects. Lymphocyte subset proportions and absolute numbers were determined at baseline and on Day 10, after once-daily administration of ponesimod (10 mg, 20 mg, and 40 mg, each consecutively for 3 days) or placebo (ratio 3:1). The overall change from baseline in lymphocyte count was −1,292 ± 340 × 10^6 cells/L and 275 ± 486 × 10^6 cells/L in ponesimod- and placebo-treated subjects, respectively. This included a decrease in both T and B lymphocytes following ponesimod treatment. A decrease in naive CD4+ T cells (CD45RA+CCR7+) from baseline was observed only after ponesimod treatment (−113 ± 98 × 10^6 cells/L; placebo: 0 ± 18 × 10^6 cells/L). The numbers of T-cytotoxic (CD3+CD8+) and T-helper (CD3+CD4+) cells were significantly altered following ponesimod treatment compared with placebo. Furthermore, ponesimod treatment resulted in marked decreases in CD4+ T-central memory (CD45RA−CCR7+) cells (−437 ± 164 × 10^6 cells/L) and CD4+ T-effector memory (CD45RA−CCR7−) cells (−131 ± 57 × 10^6 cells/L). In addition, ponesimod treatment led to a decrease of 228 ± 90 × 10^6 cells/L in gut-homing T cells (CLA−integrin β7+). In contrast, when compared with placebo, CD8+ T-effector memory and natural killer (NK) cells were not significantly reduced following multiple-dose administration of ponesimod. In summary, ponesimod treatment led to a marked reduction in overall T and B cells. Further investigation revealed that the number of CD4+ cells was dramatically reduced, whereas CD8+ and NK cells were less affected, allowing the body to preserve critical virus-clearing functions.
Regular realizability problems and context-free languages
We investigate regular realizability (RR) problems: the problems of verifying whether the intersection of a regular language (the input of the problem) and a fixed language, called the filter, is non-empty. In this paper we focus on the case of context-free filters. The algorithmic complexity of the RR problem is a very coarse measure of the complexity of context-free languages. This characteristic is compatible with rational dominance. We present examples of P-complete RR problems as well as examples of RR problems in the class NL. We also discuss RR problems with context-free filters that might have intermediate complexity. Possible candidates are the languages with polynomially bounded rational indices. Comment: conference DCFS 201
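To make the RR setup concrete, here is a minimal Python sketch of the non-emptiness test in the simplest case, where the filter is itself regular: intersect the input DFA with the filter DFA via the product construction and search for a reachable pair of accepting states. The DFA encoding and all names are illustrative; the filters studied in the paper are context-free, which would require a product with a pushdown automaton or grammar instead.

```python
from collections import deque

def rr_nonempty(dfa, filter_dfa):
    """Is L(dfa) ∩ L(filter_dfa) non-empty? (RR problem with a regular filter.)

    A DFA is a dict: 'start' state, 'accept' set, and 'delta' mapping
    (state, symbol) -> state. BFS over the product automaton looks for a
    reachable pair of accepting states.
    """
    start = (dfa['start'], filter_dfa['start'])
    alphabet = {sym for (_, sym) in dfa['delta']}
    seen, queue = {start}, deque([start])
    while queue:
        p, q = queue.popleft()
        if p in dfa['accept'] and q in filter_dfa['accept']:
            return True   # some word is accepted by both automata
        for a in alphabet:
            step = (dfa['delta'].get((p, a)), filter_dfa['delta'].get((q, a)))
            if None not in step and step not in seen:
                seen.add(step)
                queue.append(step)
    return False

# Input language (ab)*, filter = all words of even length: the intersection
# is non-empty (it contains the empty word and "ab").
dfa  = {'start': 0, 'accept': {0}, 'delta': {(0, 'a'): 1, (1, 'b'): 0}}
filt = {'start': 0, 'accept': {0},
        'delta': {(0, 'a'): 1, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 0}}
print(rr_nonempty(dfa, filt))   # True
```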
The Computational Complexity of Generating Random Fractals
In this paper we examine a number of models that generate random fractals.
The models are studied using the tools of computational complexity theory from
the perspective of parallel computation. Diffusion limited aggregation and
several widely used algorithms for equilibrating the Ising model are shown to
be highly sequential; it is unlikely they can be simulated efficiently in
parallel. This is in contrast to Mandelbrot percolation, which can be simulated
in constant parallel time. Our research helps shed light on the intrinsic
complexity of these models relative to each other and to different growth
processes that have been recently studied using complexity theory. In addition,
the results may serve as a guide to simulation physics. Comment: 28 pages, LaTeX, 8 Postscript figures available from [email protected]
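Of the models above, Mandelbrot percolation is the simplest to state. Below is a short Python sketch of the sequential version, with illustrative parameter values: at each level every surviving cell is split into b × b subcells, each kept independently with probability p. Because all retention decisions at a level are independent, they can be made in one parallel step, which is the intuition behind the constant-parallel-time result.

```python
import numpy as np

def mandelbrot_percolation(levels=4, b=3, p=0.7, seed=0):
    """Mandelbrot (fractal) percolation on a b**levels x b**levels grid.

    At each level every surviving cell is split into b x b subcells, each
    retained independently with probability p. All decisions at a level are
    independent, so a PRAM can perform each level in one parallel step.
    Parameter values are illustrative.
    """
    rng = np.random.default_rng(seed)
    grid = np.ones((1, 1), dtype=bool)
    for _ in range(levels):
        # Blow each cell up into a b x b block, then apply fresh coin flips.
        grid = np.kron(grid, np.ones((b, b), dtype=bool)).astype(bool)
        grid &= rng.random(grid.shape) < p
    return grid

frac = mandelbrot_percolation()
print(frac.shape, frac.mean())  # (81, 81); surviving density near p**levels
```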
The Computational Complexity of the Lorentz Lattice Gas
The Lorentz lattice gas is studied from the perspective of computational
complexity theory. It is shown that using massive parallelism, particle
trajectories can be simulated in a time that scales logarithmically in the
length of the trajectory. This result characterizes the "logical depth" of the Lorentz lattice gas and allows us to compare it to other models in statistical physics. Comment: 9 pages, LaTeX, to appear in J. Stat. Phys.
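For readers unfamiliar with the model, the following is a minimal sequential Python sketch of a Lorentz lattice gas trajectory, with illustrative parameters: a particle hops on a periodic square lattice whose sites hold fixed right- or left-rotating scatterers that deterministically turn its velocity. The paper's contribution is simulating such trajectories in parallel in logarithmic time; this sketch only shows the dynamics being simulated.

```python
import random

def lorentz_trajectory(size=50, steps=200, p_right=0.5, seed=1):
    """One particle trajectory in a Lorentz lattice gas (illustrative sketch).

    Sites of a periodic square lattice hold fixed scatterers: a right-rotator
    turns the velocity (dx, dy) -> (dy, -dx), a left-rotator the other way.
    Given the scatterer configuration, the trajectory is deterministic.
    """
    rng = random.Random(seed)
    right = [[rng.random() < p_right for _ in range(size)] for _ in range(size)]
    x = y = size // 2
    dx, dy = 1, 0
    path = [(x, y)]
    for _ in range(steps):
        x, y = (x + dx) % size, (y + dy) % size   # hop, periodic boundaries
        dx, dy = (dy, -dx) if right[x][y] else (-dy, dx)
        path.append((x, y))
    return path

path = lorentz_trajectory()
print(len(path), path[:4])
```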
Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable
There has been significant recent interest in parallel graph processing due
to the need to quickly analyze the large graphs available today. Many graph
codes have been designed for distributed memory or external memory. However,
today even the largest publicly-available real-world graph (the Hyperlink Web
graph with over 3.5 billion vertices and 128 billion edges) can fit in the
memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller graphs, and the work on the Hyperlink graph uses distributed or external memory. Therefore, it is
natural to ask whether we can efficiently solve a broad class of graph problems
on this graph in memory.
This paper shows that theoretically-efficient parallel graph algorithms can
scale to the largest publicly-available graphs using a single machine with a
terabyte of RAM, processing them in minutes. We give implementations of
theoretically-efficient parallel algorithms for 20 important graph problems. We
also present the optimizations and techniques that we used in our
implementations, which were crucial in enabling us to process these large
graphs quickly. We show that our implementations outperform existing state-of-the-art implementations on the largest real-world graphs. For many of the problems that we consider, this is the first time they have been solved on graphs at this scale. We have made the implementations developed in this work publicly available as the Graph-Based Benchmark Suite (GBBS). Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 201
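As one example of the kind of theoretically-efficient algorithm such a suite implements, here is a frontier-based BFS sketch in Python (GBBS itself is written in C++; this shows only the algorithmic pattern, not its API). The loop over the frontier is the unit of parallelism: expanding all frontier vertices simultaneously gives O(diameter) rounds and total work proportional to the edges touched.

```python
def bfs_levels(adj, source):
    """Frontier-based BFS, the pattern behind work-efficient parallel BFS.

    `adj` is a list of neighbor lists. Each iteration of the outer loop
    processes one frontier; in a parallel implementation every vertex of the
    frontier is expanded simultaneously, with the visited check done via
    compare-and-swap.
    """
    level = [-1] * len(adj)       # -1 marks unvisited vertices
    level[source] = 0
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:        # parallel-for over the frontier
            for v in adj[u]:
                if level[v] == -1:
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

adj = [[1, 2], [0, 3], [0, 3], [1, 2, 4], [3]]
print(bfs_levels(adj, 0))  # [0, 1, 1, 2, 3]
```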
Natural Complexity, Computational Complexity and Depth
Depth is a complexity measure for natural systems of the kind studied in
statistical physics and is defined in terms of computational complexity. Depth
quantifies the length of the shortest parallel computation required to
construct a typical system state or history starting from simple initial
conditions. The properties of depth are discussed and it is compared to other
complexity measures. Depth can only be large for systems with embedded
computation. Comment: 21 pages, 1 figure
Parallel Algorithm and Dynamic Exponent for Diffusion-limited Aggregation
A parallel algorithm for "diffusion-limited aggregation" (DLA) is described
and analyzed from the perspective of computational complexity. The dynamic
exponent z of the algorithm is defined with respect to the probabilistic parallel random-access machine (PRAM) model of parallel computation according to $T \sim L^z$, where $L$ is the cluster size, $T$ is the running time, and the algorithm uses a number of processors polynomial in $L$. It is argued that $z = D - D_2/2$, where $D$ is the fractal dimension and $D_2$ is the second generalized
dimension. Simulations of DLA are carried out to measure D_2 and to test
scaling assumptions employed in the complexity analysis of the parallel
algorithm. It is plausible that the parallel algorithm attains the minimum
possible value of the dynamic exponent in which case z characterizes the
intrinsic history dependence of DLA. Comment: 24 pages, RevTeX, and 2 figures. A major improvement to the algorithm and a smaller dynamic exponent in this version
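For context, a minimal sequential DLA sketch in Python (parameters and launch geometry are illustrative): walkers are released one at a time on a ring around the cluster and stick on first contact with it. The strictly one-walker-after-another dynamics visible here is exactly the history dependence that the parallel algorithm and the exponent z quantify.

```python
import math
import random

def dla_cluster(n_particles=300, size=201, seed=2):
    """Grow a small DLA cluster with sequential random walkers.

    Walkers are launched on a ring just outside the cluster and stick to the
    first occupied neighbour they touch; walkers that stray far beyond the
    cluster are relaunched. Parameters are illustrative, not from the paper.
    """
    rng = random.Random(seed)
    c = size // 2
    occupied = {(c, c)}          # seed particle at the centre
    radius = 2                   # current cluster radius estimate
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_particles):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x = c + int((radius + 2) * math.cos(theta))
        y = c + int((radius + 2) * math.sin(theta))
        while True:
            dx, dy = rng.choice(moves)
            x, y = x + dx, y + dy
            if (x - c) ** 2 + (y - c) ** 2 > (radius + 10) ** 2:
                # Wandered too far: relaunch on the ring.
                theta = rng.uniform(0.0, 2.0 * math.pi)
                x = c + int((radius + 2) * math.cos(theta))
                y = c + int((radius + 2) * math.sin(theta))
            if any((x + a, y + b) in occupied for a, b in moves):
                occupied.add((x, y))
                radius = max(radius, int(math.hypot(x - c, y - c)) + 1)
                break
    return occupied

cluster = dla_cluster()
print(len(cluster), "occupied sites")   # 301 including the seed
```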
The Parallel Complexity of Growth Models
This paper investigates the parallel complexity of several non-equilibrium
growth models. Invasion percolation, Eden growth, ballistic deposition and
solid-on-solid growth are all seemingly highly sequential processes that yield
self-similar or self-affine random clusters. Nonetheless, we present fast
parallel randomized algorithms for generating these clusters. The running times of the algorithms scale polylogarithmically in the system size $N$, and the number of processors required scales polynomially in $N$. The algorithms are
based on fast parallel procedures for finding minimum weight paths; they
illuminate the close connection between growth models and self-avoiding paths
in random environments. In addition to their potential practical value, our
algorithms serve to classify these growth models as less complex than other
growth models, such as diffusion-limited aggregation, for which fast parallel
algorithms probably do not exist. Comment: 20 pages, LaTeX, submitted to J. Stat. Phys., UNH-TR94-0
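To illustrate the connection to minimum-weight paths, here is a sequential heap-based Python sketch of invasion percolation (names and parameter values are ours): the cluster repeatedly invades the cheapest site on its boundary, so the invaded region is governed by minimum-weight paths in the random environment, which is precisely the structure the parallel algorithms exploit.

```python
import heapq
import random

def invasion_percolation(n_sites=500, seed=3):
    """Grow an invasion-percolation cluster on the (unbounded) square lattice.

    Every site carries a fixed random weight, assigned lazily on first sight;
    the cluster repeatedly invades the cheapest site on its boundary.
    Parameters are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    weight = {}

    def w(site):
        # Lazily assign each site its quenched random weight.
        if site not in weight:
            weight[site] = rng.random()
        return weight[site]

    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    cluster = {(0, 0)}
    frontier = [(w(m), m) for m in moves]   # boundary sites keyed by weight
    heapq.heapify(frontier)
    while len(cluster) < n_sites and frontier:
        _, (x, y) = heapq.heappop(frontier)
        if (x, y) in cluster:
            continue  # already invaded via a cheaper boundary entry
        cluster.add((x, y))
        for dx, dy in moves:
            nxt = (x + dx, y + dy)
            if nxt not in cluster:
                heapq.heappush(frontier, (w(nxt), nxt))
    return cluster

print(len(invasion_percolation()), "sites invaded")
```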
Dental attendance, restoration and extractions in adults with intellectual disabilities compared with the general population: a record linkage study
Background:
Oral health may be poorer in adults with intellectual disabilities (IDs) who rely on carer support and medications with increased dental risks.
Methods:
Record linkage study of dental outcomes, and their associations with anticholinergic (e.g. antipsychotic) and sugar-containing liquid medications, in adults with IDs compared with age-, sex- and neighbourhood-deprivation-matched general population controls.
Results:
A total of 2933/4305 (68.1%) adults with IDs and 7761/12,915 (60.1%) without IDs attended dental care: odds ratio (OR) = 1.42 [1.32, 1.53]; 1359 (31.6%) with IDs versus 5233 (40.5%) without IDs had restorations: OR = 0.68 [0.63, 0.73]; and 567 (13.2%) with IDs versus 2048 (15.9%) without IDs had dental extractions: OR = 0.80 [0.73, 0.89]. Group differences in attendance were greatest at younger ages, and differences in restorations/extractions were greatest at older ages. Adults with IDs were more likely to be prescribed anticholinergics (2493 (57.9%) vs. 6235 (48.3%): OR = 1.49 [1.39, 1.59]) and sugar-containing liquids (1641 (38.1%) vs. 2315 (17.9%): OR = 2.89 [2.67, 3.12]).
Conclusion:
Carers support dental appointments, but dentists may be less likely to restore teeth, possibly extracting multiple teeth at individual appointments instead.
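The odds ratios above follow directly from the reported counts. As a check, a short Python sketch recomputes the attendance OR and a 95% Wald confidence interval from the 2 × 2 table (a textbook approximation; the study's own matched analysis may have used a different method, such as conditional logistic regression).

```python
from math import exp, log, sqrt

def odds_ratio(a, b, c, d):
    """Odds ratio with a 95% Wald confidence interval for a 2x2 table.

    a, b: outcome present/absent in the exposed group;
    c, d: outcome present/absent in the comparison group.
    """
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    return or_, exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)

# Dental attendance: 2933 of 4305 adults with IDs, 7761 of 12,915 controls.
print(odds_ratio(2933, 4305 - 2933, 7761, 12915 - 7761))
# -> approximately (1.42, 1.32, 1.53), matching the reported OR above.
```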
- …