3,276 research outputs found

    Forgiveness and interpersonal relationships: A Nepalese investigation

    Get PDF
    The present study examined the practice of forgiveness in Nepal. A model relating collectivism and forgiveness was examined. Participants (N = 221) completed measures of collectivism, individualism, forgiveness, conciliatory behavior, and motivations for avoidance and revenge toward the offender. Collectivism was positively related to forgiveness. Forgiveness was strongly related to conciliatory behavior and motivations for avoidance and revenge toward the offender. Decisional forgiveness was a stronger predictor of motivations for revenge than was emotional forgiveness. © Taylor & Francis Group, LLC.

    Consistency for 0-1 Programming

    Full text link
    Concepts of consistency have long played a key role in constraint programming but have never been developed in integer programming (IP). Consistency nonetheless plays a role in IP as well. For example, cutting planes can reduce backtracking by achieving various forms of consistency as well as by tightening the linear programming (LP) relaxation. We introduce a type of consistency that is particularly suited to 0-1 programming and develop the associated theory. We define a 0-1 constraint set as LP-consistent when any partial assignment that is consistent with its linear programming relaxation is consistent with the original 0-1 constraint set. We prove basic properties of LP-consistency, including its relationship with Chvátal-Gomory cuts and the integer hull. We show that a weak form of LP-consistency can reduce or eliminate backtracking in a way analogous to k-consistency, but is easier to achieve. In so doing, we identify a class of valid inequalities that can be more effective than traditional cutting planes at cutting off infeasible 0-1 partial assignments.
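    As a toy illustration (not taken from the paper), the sketch below shows a 0-1 constraint that fails LP-consistency: the partial assignment x1 = 0 extends to a feasible point of the LP relaxation but to no 0-1 solution, so a solver that trusts the relaxation alone would have to backtrack. The constraint and the coarse grid check are invented for illustration only.

```python
from itertools import product

# Hypothetical toy 0-1 constraint: x1 + 2*x2 - 2*x3 == 1 over x in {0,1}^3.
def sat(x1, x2, x3):
    return x1 + 2 * x2 - 2 * x3 == 1

# Can the partial assignment x1 = 0 be completed to a 0-1 solution?
int_ok = any(sat(0, x2, x3) for x2, x3 in product([0, 1], repeat=2))

# Coarse check of the LP relaxation under x1 = 0: scan a grid over [0,1]^2
# (step 0.25 suffices to witness the fractional point x2 = 0.5, x3 = 0).
grid = [i / 4 for i in range(5)]
lp_ok = any(abs(0 + 2 * x2 - 2 * x3 - 1) < 1e-9
            for x2, x3 in product(grid, repeat=2))

print(int_ok, lp_ok)  # False True: LP-feasible partial assignment, no 0-1 completion
```

    Because x1 = 0 survives the relaxation but not the integer set, a cut excluding x1 = 0 (of the kind the paper studies) would prune this branch before any search.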

    Nuclear reactions in the Sun after SNO and KamLAND

    Full text link
    In this brief review we discuss the possibility of studying the solar interior by means of neutrinos, in the light of the enormous progress of neutrino physics in the last few years. The temperature near the solar center can be extracted from boron neutrino experiments as T = (1.57 ± 0.01) × 10^7 K. The energy production rate in the Sun from the pp chain and the CNO cycle, as deduced from neutrino measurements, agrees with the observed solar luminosity to about twenty per cent. Progress in extracting astrophysical information from solar neutrinos requires improvement in the measurements of ^3He + ^4He → ^7Be + γ and p + ^14N → ^15O + γ. Comment: To appear in the Proceedings of Beyond the Desert '03, Fourth International Conference on Physics Beyond the Standard Model, Schloss Ringberg, Germany, June 9-14, 2003

    Unforeseen Costs of Cutting Mosquito Surveillance Budgets

    Get PDF
    A budget proposal to stop U.S. Centers for Disease Control and Prevention (CDC) funding for surveillance and research on mosquito-borne diseases such as dengue and West Nile virus has the potential to leave the country ill-prepared to handle newly emerging diseases and manage existing ones. To demonstrate the consequences of such a measure, if implemented, we evaluated the impact of delayed control responses to dengue epidemics (a likely scenario under the proposed CDC budget cut) in an economically developed urban environment. We used a mathematical model to generate hypothetical scenarios of delayed response to a dengue introduction (a consequence of halted mosquito surveillance) in the City of Cairns, Queensland, Australia. We then coupled the results of this model with mosquito surveillance and case management costs to estimate the cumulative costs of each response scenario. Our study shows that halting mosquito surveillance can increase the management costs of epidemics by up to an order of magnitude compared with a strategy of sustained surveillance and early case detection. Our analysis shows that the total costs of preparedness through surveillance are far lower than those needed to respond to the introduction of vector-borne pathogens, even without considering the cost in human lives and well-being. More specifically, our findings provide a science-based justification for re-assessment of the current proposal to slash the budget of the CDC vector-borne diseases program, and emphasize the need for improved and sustainable systems for vector-borne disease surveillance.
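    The delay effect can be sketched with a deliberately simplified model (not the paper's actual dengue model): a discrete-time SIR outbreak in which vector control halves transmission only after a detection delay, with response cost scaling with cumulative cases. Every parameter value, including the cost per case, is an illustrative assumption.

```python
# Hypothetical sketch: later detection -> larger outbreak -> higher response cost.
def outbreak_cost(delay_days, beta=0.4, gamma=0.1, pop=150_000,
                  control_factor=0.5, cost_per_case=1_000.0, days=365):
    # Discrete-time SIR; control multiplies transmission by control_factor
    # once the detection delay has elapsed. All parameters are assumed.
    s, i, r = pop - 1.0, 1.0, 0.0
    cum_cases = 1.0
    for t in range(days):
        b = beta * control_factor if t >= delay_days else beta
        new_inf = b * s * i / pop
        s -= new_inf
        i += new_inf - gamma * i
        r += gamma * i  # bookkeeping only
        cum_cases += new_inf
    return cum_cases * cost_per_case

early = outbreak_cost(delay_days=14)   # surveillance in place: fast response
late = outbreak_cost(delay_days=90)    # surveillance halted: slow response
print(late > early)  # delayed response yields more cases, hence higher cost
```

    The point of the sketch is qualitative only: the cost gap between the two scenarios comes entirely from the detection delay, mirroring the paper's argument that preparedness is cheaper than response.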

    Research, evidence and policymaking: the perspectives of policy actors on improving uptake of evidence in health policy development and implementation in Uganda

    Get PDF
    Background: Use of evidence in health policymaking plays an important role, especially in resource-constrained settings where informed decisions on resource allocation are paramount. Several knowledge translation (KT) models have been developed, but few have been applied to health policymaking in low-income countries. If KT models are expected to explain evidence uptake and implementation, or the lack of it, they must be contextualized and take into account the specificities of low-income countries, for example the strong influence of donors. The main objective of this research is to elaborate a Middle Range Theory (MRT) of KT in Uganda that can also serve as a reference for other low- and middle-income countries.
    Methods: This two-step study employed qualitative approaches to examine the principal barriers to and facilitators of KT. Step 1 involved a literature review and identification of common themes. The results informed the development of the initial MRT, which details the facilitating factors and barriers to KT at the different stages of research and policy development. In Step 2, these were further refined through key informant interviews with policymakers and researchers in Uganda. Deductive content and thematic analysis was carried out to assess the degree of convergence with the elements of the initial MRT and to identify other emerging issues.
    Results: Review of the literature revealed that the most common facilitating factors could be grouped under institutional strengthening for KT, research characteristics, dissemination, partnerships, and political context. The analysis of interviews, however, showed that policymakers and researchers ranked institutional strengthening for KT, research characteristics, and partnerships as the most important. New factors emphasized by respondents were the use of mainstreamed structures within the Ministry of Health (MoH) to coordinate and disseminate research, the separation of roles between researchers and policymakers, and the role of the community and civil society in KT.
    Conclusions: This study refined an initial MRT on KT in health-sector policymaking in Uganda that was based on a literature review. It provides a framework that can be used in empirical research on the process of KT for specific policy issues.

    Evaluating QBF Solvers: Quantifier Alternations Matter

    Full text link
    We present an experimental study of the effects of quantifier alternations on the evaluation of quantified Boolean formula (QBF) solvers. The number of quantifier alternations in a QBF in prenex conjunctive normal form (PCNF) is directly related to the theoretical hardness of the respective QBF satisfiability problem in the polynomial hierarchy. We show empirically that the performance of solvers based on different solving paradigms varies substantially depending on the number of alternations in PCNFs. In related theoretical work, quantifier alternations have become the focus of efforts to understand the strengths and weaknesses of various QBF proof systems implemented in solvers. Our results motivate the development of methods to evaluate orthogonal solving paradigms by taking quantifier alternations into account. This is necessary to showcase the broad range of existing QBF solving paradigms for practical QBF applications. Moreover, we highlight the potential of combining different approaches and QBF proof systems in solvers. Comment: preprint of a paper to be published at CP 2018, LNCS, Springer, including appendix
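    For readers unfamiliar with the prefix structure, here is a minimal sketch (not from the paper) of counting quantifier alternations in a PCNF prefix, with the prefix represented as a list of ('a' or 'e', variables) blocks in the spirit of the QDIMACS format.

```python
# Count alternations between adjacent universal ('a') and existential ('e')
# quantifier blocks of a prenex prefix. The list-of-tuples encoding is assumed.
def alternations(prefix):
    quants = [q for q, _ in prefix]
    return sum(1 for a, b in zip(quants, quants[1:]) if a != b)

# forall x1 x2 . exists y . forall z . phi  ->  2 alternations
print(alternations([('a', [1, 2]), ('e', [3]), ('a', [4])]))  # 2
```

    A purely existential (or purely universal) prefix has zero alternations, which is the SAT-like base case; each additional alternation climbs one level of the polynomial hierarchy mentioned in the abstract.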

    Large Scale Structure of the Universe

    Full text link
    Galaxies are not uniformly distributed in space. On large scales the Universe displays coherent structure, with galaxies residing in groups and clusters on scales of ~1-3 Mpc/h, which lie at the intersections of long filaments of galaxies that are >10 Mpc/h in length. Vast regions of relatively empty space, known as voids, contain very few galaxies and span the volume in between these structures. This observed large scale structure depends both on cosmological parameters and on the formation and evolution of galaxies. Using the two-point correlation function, one can trace the dependence of large scale structure on galaxy properties such as luminosity, color, and stellar mass, and track its evolution with redshift. Comparison of the observed galaxy clustering signatures with dark matter simulations allows one to model and understand the clustering of galaxies and their formation and evolution within their parent dark matter halos. Clustering measurements can determine the parent dark matter halo mass of a given galaxy population, connect observed galaxy populations at different epochs, and constrain cosmological parameters and galaxy evolution models. This chapter describes the methods used to measure the two-point correlation function in both redshift and real space, presents the current results of how the clustering amplitude depends on various galaxy properties, and discusses quantitative measurements of the structures of voids and filaments. The interpretation of these results with current theoretical models is also presented. Comment: Invited contribution to be published in Vol. 8 of the book "Planets, Stars, and Stellar Systems", Springer, series editor T. D. Oswalt, volume editor W. C. Keel; v2 includes additional references, updated to match the published version
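    The two-point correlation function mentioned above is commonly estimated with the Landy-Szalay estimator, ξ(r) = (DD − 2DR + RR) / RR, built from normalized data-data, data-random, and random-random pair counts. The toy 2-D sketch below is illustrative only (both point sets are uniform, so ξ should be near zero) and is not drawn from this chapter.

```python
import random

def pair_counts(a, b, r_lo, r_hi):
    # Count point pairs with separation in [r_lo, r_hi);
    # when a and b are the same list, count unordered auto-pairs once.
    n = 0
    for i, p in enumerate(a):
        for j, q in enumerate(b):
            if a is b and j <= i:
                continue
            d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
            if r_lo <= d < r_hi:
                n += 1
    return n

random.seed(0)
data = [(random.random(), random.random()) for _ in range(200)]
rand = [(random.random(), random.random()) for _ in range(200)]

# Normalized pair counts in one separation bin, then the Landy-Szalay estimate.
dd = pair_counts(data, data, 0.1, 0.2) / (200 * 199 / 2)
rr = pair_counts(rand, rand, 0.1, 0.2) / (200 * 199 / 2)
dr = pair_counts(data, rand, 0.1, 0.2) / (200 * 200)
xi = (dd - 2 * dr + rr) / rr
print(abs(xi) < 0.3)  # unclustered data: xi consistent with zero
```

    In practice one computes ξ in many separation bins against a much larger random catalog that encodes the survey geometry; a clustered galaxy sample would give ξ(r) > 0 on the group and cluster scales quoted above.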

    Prospective Investigation of Markers of Elevated Delirium Risk (PRIMED Risk) study protocol: a prospective, observational cohort study investigating blood and cerebrospinal fluid biomarkers for delirium and cognitive dysfunction in older patients [version 1; peer review: awaiting peer review]

    Get PDF
    BACKGROUND: Delirium is a common post-operative complication, particularly in older adults undergoing major or emergency procedures. It is associated with increased length of intensive care and hospital stay, post-operative mortality, and subsequent dementia risk. Current methods of predicting delirium incidence, duration and severity have limitations. Investigation of blood and cerebrospinal fluid (CSF) biomarkers linked to delirium may improve understanding of the underlying pathophysiology, particularly with regard to the extent to which this is shared with, or distinct from, that of underlying dementia. Together, these have the potential to support the development of better risk stratification tools and perioperative interventions. / METHODS: 200 patients over the age of 70 scheduled for surgery under routine spinal anaesthesia will be recruited from UK hospitals. Their cognitive and functional baseline status will be assessed pre-operatively by telephone. Time-matched CSF and blood samples will be taken at the time of surgery and analysed for known biomarkers of neurodegeneration and neuroinflammation. Patients will be assessed daily for delirium until hospital discharge and will have regular cognitive follow-up for two years. Primary outcomes will be change in modified Telephone Interview for Cognitive Status (TICS-m) score at 12 months and rate of change of TICS-m score. Delirium severity, duration and biomarker levels will be treated as exposures in random-effects linear regression models. PRIMED Risk has received regulatory approvals from the Health Research Authority and the London – South East Research Ethics Committee. / DISCUSSION: The main anticipated output from this study will be the quantification of biomarkers of acute and chronic contributors to cognitive impairment after surgery. In addition, we aim to develop better risk prediction models for adverse cognitive outcomes
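    As a hedged sketch only, a random-effects (random-intercept) regression of the kind described could take the following form; the specific covariates and interaction term below are assumptions for illustration, not the protocol's specified model.

```latex
% Hypothetical form: TICS-m for patient i at follow-up time t_ij,
% with delirium severity as exposure and a patient-level random intercept u_i.
\[
  \mathrm{TICSm}_{ij} = \beta_0 + \beta_1 t_{ij}
    + \beta_2\,\mathrm{DelirSev}_i
    + \beta_3\,(t_{ij} \times \mathrm{DelirSev}_i)
    + u_i + \varepsilon_{ij},
  \qquad u_i \sim \mathcal{N}(0, \sigma_u^2),\quad
  \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)
\]
```

    Here β3 captures whether post-operative cognitive trajectory (rate of change of TICS-m) differs by delirium severity, matching the stated primary outcomes.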