
    Persistent currents in a circular array of Bose-Einstein condensates

    A ring-shaped array of Bose-Einstein condensed atomic gases can display circular currents if the relative phase of neighboring condensates becomes locked to certain values. It is shown that, irrespective of the mechanism responsible for generating these states, only a restricted set of currents is stable, depending on the number of condensates, on the interaction and tunneling energies, and on the total number of particles. Different instabilities due to quasiparticle excitations are characterized, and possible experimental setups for testing the stability prediction are also discussed. (Comment: 7 pages, REVTeX)
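    As background for the phase-locking picture in this abstract, the relations below are a standard textbook sketch for a ring of N coupled condensates, not results quoted from the paper; the symbols φ_j, q and J are generic notation introduced here for illustration.

```latex
% Single-valuedness of the condensate phase around a ring of N sites quantises
% the total phase winding, so a uniformly phase-locked, current-carrying state
% satisfies
\[
  \phi_{j+1} - \phi_j = \frac{2\pi q}{N}, \qquad q \in \mathbb{Z},
\]
% with a Josephson-like tunnelling current between neighbouring condensates of
\[
  I_q \propto J \sin\!\left(\frac{2\pi q}{N}\right),
\]
% where J is the tunnelling energy. Which winding numbers q remain stable
% against quasiparticle excitations is the question the abstract addresses.
```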

    Probabilistic Guarded P Systems, A New Formal Modelling Framework

    Multienvironment P systems constitute a general, formal framework for modelling the dynamics of population biology, which consists of two main approaches: stochastic and probabilistic. The framework has been successfully used to model biological systems at both the micro level (e.g. bacteria colonies) and the macro level (e.g. real ecosystems). In this paper, we extend the general framework to include a new case study related to the P. oleracea species. The extension is made through a new variant within the probabilistic approach, called Probabilistic Guarded P systems (PGP systems, for short). We provide a formal definition, a simulation algorithm to capture the dynamics, and a survey of the associated software. (Funding: Ministerio de Economía y Competitividad TIN2012-37434; Junta de Andalucía P08-TIC-0420)
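    The abstract mentions a simulation algorithm for PGP systems but gives no detail here. The following is a minimal, hedged Python sketch of one evolution step of a probabilistic guarded rewriting system over a multiset of objects, assuming a simplified rule format (guard predicate, left-hand multiset, right-hand multiset, probability); it illustrates the general idea only and is not the paper's algorithm or its associated software.

```python
from collections import Counter
import random

# A rule is modelled here (an assumption for illustration) as a 4-tuple:
# (guard predicate over the state, left-hand multiset, right-hand multiset, probability).

def applicable(state: Counter, guard, lhs: Counter) -> bool:
    """A rule can fire only if its guard holds and the state contains its left-hand side."""
    return guard(state) and all(state[obj] >= n for obj, n in lhs.items())

def step(state: Counter, rules, rng: random.Random) -> Counter:
    """Apply one probabilistically chosen applicable rule; return the state unchanged if none applies."""
    candidates = [r for r in rules if applicable(state, r[0], r[1])]
    if not candidates:
        return state
    weights = [r[3] for r in candidates]
    _, lhs, rhs, _ = rng.choices(candidates, weights=weights, k=1)[0]
    new_state = state - lhs   # consume the left-hand side objects
    new_state.update(rhs)     # produce the right-hand side objects
    return new_state

# Toy example: an 'a' turns into a 'b' with probability 0.7 or disappears with
# probability 0.3, but only while fewer than five 'b' objects exist (the guard).
rules = [
    (lambda s: s['b'] < 5, Counter({'a': 1}), Counter({'b': 1}), 0.7),
    (lambda s: s['b'] < 5, Counter({'a': 1}), Counter(), 0.3),
]
state = Counter({'a': 10})
rng = random.Random(0)
for _ in range(10):
    state = step(state, rules, rng)
print(dict(state))
```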

    The normalization of online campaigning in the web.2.0 era

    This article is based on a comparative study of online campaigning and its effects by country and over time, using four of the largest European Union member states (France, Germany, Poland and the United Kingdom) as case studies. Our research explores the extent of embeddedness of online campaigning, the strategic uses of the whole online environment and, in particular, the use of the interactive features associated with the web.2.0 era. Our research also goes beyond studies of online campaigning in that we determine whether online campaigning across platforms matters in electoral terms. Our data support the normalization hypothesis, showing overall low levels of innovation but that the parties with the highest resources tend to develop online campaigns with the highest functionality. We find that there is a vote dividend for those parties which utilized web.2.0 features the most and so offered visitors to their web presence a more interactive experience.

    Evaluation of the health-related quality of life of children in Schistosoma haematobium-endemic communities in Kenya: a cross-sectional study.

    BACKGROUND: Schistosomiasis remains a global public health challenge, with 93% of the ~237 million infections occurring in sub-Saharan Africa. Though rarely fatal, its recurring nature makes it a lifetime disorder with significant chronic health burdens. Much of its negative health impact is due to non-specific conditions such as anemia, undernutrition, pain, exercise intolerance, poor school performance, and decreased work capacity. This makes it difficult to estimate the disease burden specific to schistosomiasis using the standard DALY metric.
    METHODOLOGY/PRINCIPAL FINDINGS: In our study, we used the Pediatric Quality of Life Inventory (PedsQL), a modular instrument available for ages 2-18 years, to assess health-related quality of life (HrQoL) among children living in a Schistosoma haematobium-endemic area in coastal Kenya. The PedsQL questionnaires were administered by interview to children aged 5-18 years (and their parents) in five villages spread across three districts. HrQoL (total score) was significantly lower in villages with high prevalence of S. haematobium (-4.0%, p<0.001) and among the lower socioeconomic quartiles (-2.0%, p<0.05). A greater effect was seen in the psychosocial scales than in the physical function scale. In moderate-prevalence villages, detection of any parasite eggs in the urine was associated with a significant 2.1% (p<0.05) reduction in total score. The PedsQL reliabilities were generally high (Cronbach alphas ≥0.70), floor effects were acceptable, and identification of children from low socioeconomic standing was valid.
    CONCLUSIONS/SIGNIFICANCE: We conclude that exposure to urogenital schistosomiasis is associated with a 2-4% reduction in HrQoL. Further research is warranted to determine the reproducibility and responsiveness properties of QoL testing in relation to schistosomiasis. We anticipate that a case definition based on more sensitive parasitological diagnosis among younger children will better define the immediate and long-term HrQoL impact of Schistosoma infection.
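    The abstract reports instrument reliability via Cronbach's alpha (≥0.70). For readers unfamiliar with the statistic, here is a small self-contained sketch of how it is computed; the item matrix below is simulated for illustration and is not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item across respondents
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated 0-100 PedsQL-style scores: 200 children, 8 items (an assumption).
rng = np.random.default_rng(1)
latent = rng.normal(70, 15, size=(200, 1))                      # shared "true" HrQoL
scores = np.clip(latent + rng.normal(0, 10, size=(200, 8)), 0, 100)
print(round(cronbach_alpha(scores), 2))
```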

    P-splines with derivative based penalties and tensor product smoothing of unevenly distributed data

    The P-splines of Eilers and Marx (1996) combine a B-spline basis with a discrete quadratic penalty on the basis coefficients to produce a reduced-rank, spline-like smoother. P-splines have three properties that make them very popular as reduced-rank smoothers: i) the basis and the penalty are sparse, enabling efficient computation, especially for Bayesian stochastic simulation; ii) it is possible to flexibly 'mix-and-match' the order of the B-spline basis and penalty, rather than the order of the penalty controlling the order of the basis as in spline smoothing; iii) it is very easy to set up the B-spline basis functions and penalties. The discrete penalties are somewhat less interpretable in terms of function shape than the traditional derivative-based spline penalties, but they tend towards penalties proportional to traditional spline penalties in the limit of large basis size. However, part of the point of P-splines is not to use a large basis size. In addition, the spline basis functions arise from solving functional optimization problems involving derivative-based penalties, so moving to discrete penalties for smoothing may not always be desirable. The purpose of this note is to point out that the three properties of basis-penalty sparsity, mix-and-match penalization and ease of setup are readily obtainable with B-splines subject to derivative-based penalization. The penalty setup typically requires a few lines of code, rather than the two lines typically required for P-splines, but this one-off disadvantage seems to be the only one associated with using derivative-based penalties. As an example application, it is shown how basis-penalty sparsity enables efficient computation with tensor product smoothers of scattered data.
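    The abstract notes that the derivative-based penalty "typically requires a few lines of code". The sketch below illustrates the contrast for a cubic B-spline basis: the discrete second-difference (P-spline) penalty versus a second-derivative penalty assembled by simple midpoint quadrature. The clamped knot construction, grid size and function names are illustrative assumptions, not code from the note, and exact per-knot-span quadrature could be used instead of the midpoint rule.

```python
import numpy as np
from scipy.interpolate import BSpline

def clamped_knots(a, b, n_basis, degree=3):
    """Clamped uniform knot vector giving n_basis B-spline basis functions on [a, b]."""
    inner = np.linspace(a, b, n_basis - degree + 1)
    return np.concatenate([[a] * degree, inner, [b] * degree])

def design_matrix(knots, degree, x, deriv=0):
    """Rows index evaluation points; columns index basis functions (or their derivatives)."""
    n_basis = len(knots) - degree - 1
    spl = BSpline(knots, np.eye(n_basis), degree, extrapolate=False)
    if deriv:
        spl = spl.derivative(deriv)
    return spl(x)

a, b, n_basis, degree = 0.0, 1.0, 20, 3
knots = clamped_knots(a, b, n_basis, degree)

# (i) P-spline penalty: squared second differences of the basis coefficients.
D = np.diff(np.eye(n_basis), n=2, axis=0)
S_diff = D.T @ D

# (ii) Derivative-based penalty S[i, j] = integral of B_i''(x) * B_j''(x) dx,
# approximated here with a midpoint rule on a fine grid (an assumption).
m = 2000
h = (b - a) / m
xg = a + h * (np.arange(m) + 0.5)
B2 = design_matrix(knots, degree, xg, deriv=2)
S_deriv = (B2.T @ B2) * h

# Either S can then be used in a penalised least-squares fit:
# solve (X.T @ X + lam * S) beta = X.T @ y, with X = design_matrix(knots, degree, x).
```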

    Estimating the robustness and uncertainty of animal social networks using different observational methods

    Social network analysis is quickly becoming an established framework to study the structure of animal social systems. To explore the social network of a population, observers must capture data on the interactions or associations between individuals. Sampling decisions significantly impact the outcome of data collection, notably the amount of data available from which to construct social networks. However, little is known about how different sampling methods, and more generally the extent of sampling effort, impact the robustness of social network analyses. Here, we generate proximity networks from data obtained via nearly continuous GPS tracking of members of a wild baboon troop (Papio anubis). These data allow us to produce networks based on complete observations of inter-individual distances between group members. We then mimic several widely used focal animal sampling and group scanning methods by subsampling the complete dataset to simulate observational data comparable to that produced by human observers. We explore how sampling effort, sampling methods, network definitions, and levels and types of sampling error affect the correlation between the estimated and complete networks. Our results suggest that for some scenarios, even low levels of sampling effort (5-10 samples/individual) can provide the same information as high sampling effort (>64 samples/individual). However, we find that insufficient data collected across all potentially interacting individuals, certain network definitions (how edge weights and distance thresholds are calculated), and misidentification of individuals in the network can generate spurious network structure with little or no correlation to the underlying, or "real", social structure. Our results suggest that data collection methods should be designed to maximize the number of potential interactions (edges) recorded per observation. We discuss the relative trade-offs between maximizing the amount of data collected across as many individuals as possible and the potential for erroneous observations.
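    To make the subsampling idea concrete, here is a hedged Python sketch that builds a proximity network from simulated distance snapshots, subsamples the snapshots to mimic reduced observation effort, and correlates the subsampled network's edge weights with those of the complete network. The group size, distance threshold and sample counts are invented for illustration, and the data are simulated rather than the study's GPS tracks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_snapshots, threshold = 25, 1000, 5.0   # individuals, snapshots, proximity threshold (assumed)

# Fake positions for every snapshot, and the resulting pairwise distance stack.
positions = rng.normal(scale=10.0, size=(n_snapshots, n_ind, 2))
dists = np.linalg.norm(positions[:, :, None, :] - positions[:, None, :, :], axis=-1)

def proximity_network(dist_stack):
    """Edge weight = proportion of snapshots in which a pair is within the threshold."""
    close = (dist_stack < threshold).astype(float)
    W = close.mean(axis=0)
    np.fill_diagonal(W, 0.0)
    return W

def upper(W):
    """Flatten the upper triangle (unique edges) of a symmetric weight matrix."""
    i, j = np.triu_indices_from(W, k=1)
    return W[i, j]

full_net = proximity_network(dists)   # the "complete" network from all snapshots

# Correlate networks built from fewer snapshots with the complete network.
for n_samples in (10, 50, 200):
    idx = rng.choice(n_snapshots, size=n_samples, replace=False)
    sub_net = proximity_network(dists[idx])
    r = np.corrcoef(upper(full_net), upper(sub_net))[0, 1]
    print(f"{n_samples:4d} snapshots: edge-weight correlation r = {r:.2f}")
```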