
    Conceptual graph-based knowledge representation for supporting reasoning in African traditional medicine

    Although African patients use both conventional (modern) and traditional healthcare simultaneously, an estimated 80% of people are reported to rely on African traditional medicine (ATM). ATM comprises medical activities stemming from practices, customs and traditions that were integral to the distinctive African cultures. It is based mainly on the oral transfer of knowledge, with the attendant risk of losing critical knowledge. Moreover, practices differ across regions and with the availability of medicinal plants. It is therefore necessary to compile the tacit, dispersed and complex knowledge of various Tradi-Practitioners (TP) in order to identify effective patterns for treating a given disease. Knowledge engineering methods for traditional medicine help to model complex information needs, formalize the knowledge of domain experts and highlight effective practices for their integration into conventional medicine. The work described in this paper addresses two issues. First, it proposes a formal representation model of ATM knowledge and practices to facilitate their sharing and reuse. Second, it provides a visual reasoning mechanism for selecting the best available procedures and medicinal plants to treat a given disease. The approach uses the Delphi method to capture knowledge from various experts, which requires reaching a consensus. Conceptual graph formalism is used to model ATM knowledge with visual reasoning capabilities and processes. Nested conceptual graphs are used to visually express the semantics of Computational Tree Logic (CTL) constructs, which are useful for the formal specification of temporal properties of ATM domain knowledge. Our approach has the advantage of mitigating knowledge loss and providing conceptual development assistance, improving both the quality of ATM care (medical diagnosis and therapeutics) and patient safety (drug monitoring).
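    As a rough, hypothetical illustration of the conceptual-graph idea (not the paper's actual model or vocabulary), a single ATM fact can be encoded as typed concept nodes linked by labelled relations; the concept types and the relation name below are invented for the sketch:

    ```python
    # Minimal sketch of a conceptual-graph assertion; labels are hypothetical.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Concept:
        type_: str      # concept type, e.g. "MedicinalPlant"
        referent: str   # individual marker, e.g. "Moringa oleifera"

    @dataclass
    class ConceptualGraph:
        # Each relation links an ordered tuple of concept nodes.
        relations: list = field(default_factory=list)

        def add_relation(self, label, *args):
            self.relations.append((label, args))

    # Toy assertion: "the plant Moringa oleifera treats malaria".
    plant = Concept("MedicinalPlant", "Moringa oleifera")
    disease = Concept("Disease", "malaria")
    g = ConceptualGraph()
    g.add_relation("treats", plant, disease)

    for label, args in g.relations:
        print(label, "->", [f"{c.type_}:{c.referent}" for c in args])
    ```

    The nested graphs and CTL annotations described in the abstract would layer temporal structure on top of such basic assertions.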

    Incorporation of genuine prior information in cost-effectiveness analysis of clinical trial data

    The Bayesian approach to statistics has been growing rapidly in popularity as an alternative to the frequentist approach in the appraisal of healthcare technologies in clinical trials. Bayesian methods have significant advantages over classical frequentist methods, both in the analysis itself and in the presentation of evidence to decision makers. A fundamental feature of a Bayesian analysis is the use of prior information alongside the clinical trial data in the final analysis. However, the incorporation of prior information remains a controversial subject and is a potential barrier to the acceptance of Bayesian methods in practice. The purpose of this paper is to stimulate a debate on the use of prior information in evidence submitted to decision makers. We discuss the advantages of incorporating genuine prior information in cost-effectiveness analyses of clinical trial data and explore mechanisms to safeguard scientific rigor in the use of such prior information.
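    To make the mechanics concrete, here is a minimal sketch of the standard conjugate normal-normal update, combining an elicited prior on mean incremental net benefit (INB) with a trial estimate; all numbers are purely illustrative assumptions, not values from the paper:

    ```python
    # Precision-weighted combination of a normal prior and a normal likelihood.
    def posterior_normal(prior_mean, prior_sd, data_mean, data_se):
        w_prior = 1.0 / prior_sd**2
        w_data = 1.0 / data_se**2
        post_var = 1.0 / (w_prior + w_data)
        post_mean = post_var * (w_prior * prior_mean + w_data * data_mean)
        return post_mean, post_var**0.5

    # Hypothetical elicited prior: INB ~ N(500, 400^2).
    # Hypothetical trial data: estimated INB 900 with standard error 300.
    mean, sd = posterior_normal(500.0, 400.0, 900.0, 300.0)
    print(f"posterior INB: mean={mean:.0f}, sd={sd:.0f}")
    ```

    The posterior mean (756 here) sits between prior and data, weighted by their precisions, which is exactly why the choice of prior can be contentious in evidence submitted to decision makers.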

    A Formal Treatment of Sequential Ignorability

    Taking a rigorous formal approach, we consider sequential decision problems involving observable variables, unobservable variables, and action variables. We can typically assume the property of extended stability, which allows identification (by means of G-computation) of the consequence of a specified treatment strategy if the unobserved variables are, in fact, observed, but not generally otherwise. However, under certain additional special conditions we can infer simple stability (or sequential ignorability), which supports G-computation based on the observed variables alone. One such additional condition is sequential randomization, where the unobserved variables essentially behave as random noise in their effects on the actions. Another is sequential irrelevance, where the unobserved variables do not influence future observed variables. In the latter case, deducing sequential ignorability in full generality requires additional positivity conditions. We show here that these positivity conditions are not required when all variables are discrete.
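    As an illustration of what G-computation under sequential ignorability evaluates, the following sketch computes the expected outcome of a two-stage strategy over a toy discrete model; all distributions and the outcome function are hypothetical assumptions, not taken from the paper:

    ```python
    # G-formula for a two-stage discrete problem, assuming sequential
    # ignorability given observed covariates L0, L1. Numbers are hypothetical.
    from itertools import product

    p_L0 = {0: 0.6, 1: 0.4}                          # P(L0)
    p_L1 = {(l0, a0): {0: 0.7 - 0.2 * a0, 1: 0.3 + 0.2 * a0}
            for l0 in (0, 1) for a0 in (0, 1)}       # P(L1 | L0, A0)

    def mean_Y(l0, a0, l1, a1):                      # E[Y | history]
        return 0.2 + 0.1 * l0 + 0.15 * a0 + 0.1 * l1 + 0.2 * a1

    def g_formula(strategy):
        """E[Y] under a strategy that reacts to the latest covariate."""
        total = 0.0
        for l0, l1 in product((0, 1), repeat=2):
            a0 = strategy(l0)
            a1 = strategy(l1)
            total += p_L0[l0] * p_L1[(l0, a0)][l1] * mean_Y(l0, a0, l1, a1)
        return total

    print(g_formula(lambda l: 1))   # always treat
    print(g_formula(lambda l: l))   # treat only when the covariate is 1
    ```

    Sequential ignorability is what licenses plugging observed-data conditionals into this sum; without it, the same arithmetic need not identify the strategy's consequence.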

    Impacts of household credit on education and healthcare spending by the poor in peri-urban areas in Vietnam

    There is debate about whether microfinance has positive impacts on education and health for borrowing households in developing countries. To provide evidence for this debate, we use a new survey designed to meet the conditions for propensity score matching (PSM) and examine the impact of household credit on education and healthcare spending by the poor in peri-urban areas of Ho Chi Minh City, Vietnam. In addition to matching statistically identical non-borrowers with borrowers, our estimates control for household pre-treatment income and assets, which may be associated with unobservable factors affecting both credit participation and the outcomes of interest. The PSM estimates of the binary treatment effect show significant and positive impacts of borrowing on education and healthcare spending. However, multiple ordered treatment effect estimates reveal that only formal credit has significant positive impacts on education and healthcare spending, while the impacts of informal credit are insignificant.
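    For readers unfamiliar with PSM, a minimal sketch of the matching step on synthetic data follows; the variable names, data-generating process, and effect size are illustrative assumptions, not the authors' survey design or results:

    ```python
    # Propensity score matching on synthetic data (illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 2))                  # pre-treatment income, assets
    p = 1 / (1 + np.exp(-(0.5 * X[:, 0] + 0.5 * X[:, 1])))
    d = rng.binomial(1, p)                       # credit participation
    y = 100 + 50 * d + 30 * X[:, 0] + rng.normal(0, 10, n)  # spending

    # 1. Estimate propensity scores from pre-treatment covariates.
    ps = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]

    # 2. Match each borrower to the nearest non-borrower on the score.
    nn = NearestNeighbors(n_neighbors=1).fit(ps[d == 0].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[d == 1].reshape(-1, 1))

    # 3. Average treatment effect on the treated: borrower minus matched control.
    att = (y[d == 1] - y[d == 0][idx.ravel()]).mean()
    print(f"ATT estimate: {att:.1f} (true effect: 50)")
    ```

    Matching on the estimated score balances the observed covariates between borrowers and non-borrowers, which is why controlling for pre-treatment income and assets matters in this design.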

    Randomization does not help much, comparability does

    Following Fisher, it is widely believed that randomization "relieves the experimenter from the anxiety of considering innumerable causes by which the data may be disturbed." In particular, it is said to control for known and unknown nuisance factors that may considerably challenge the validity of a result. Looking for quantitative advice, we study a number of straightforward, mathematically simple models. They all demonstrate, however, that this optimism about randomization is wishful thinking rather than fact: in small to medium-sized samples, random allocation of units to treatments typically yields considerable imbalance between the groups, i.e., confounding due to randomization is the rule rather than the exception. In the second part of this contribution, we extend the reasoning to a number of traditional arguments for and against randomization. This discussion is rather non-technical, and at times even "foundational" (frequentist vs. Bayesian), but its conclusion is quite similar: while randomization's contribution remains questionable, comparability contributes much to a compelling conclusion. Summing up, classical experimentation based on sound background theory and the systematic construction of exchangeable groups seems advisable.
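    The small-sample imbalance point is easy to check by simulation. The sketch below (with arbitrary sample sizes and a single standard-normal covariate, not the authors' models) records how often randomization leaves a baseline covariate noticeably imbalanced, using the standardized mean difference (SMD) with 0.2 as a common imbalance threshold:

    ```python
    # Simulate random 50/50 allocation and record covariate imbalance.
    import numpy as np

    rng = np.random.default_rng(1)

    def smd_after_randomization(n, reps=10_000):
        out = np.empty(reps)
        for i in range(reps):
            x = rng.normal(size=n)
            g = rng.permutation(n) < n // 2      # random 50/50 split
            pooled = np.sqrt((x[g].var(ddof=1) + x[~g].var(ddof=1)) / 2)
            out[i] = abs(x[g].mean() - x[~g].mean()) / pooled
        return out

    for n in (20, 50, 200):
        s = smd_after_randomization(n)
        print(f"n={n:4d}: P(|SMD| > 0.2) = {(s > 0.2).mean():.2f}")
    ```

    With n = 20 the imbalance probability is well over one half, and it shrinks only slowly as n grows, consistent with the abstract's claim that confounding due to randomization is the rule rather than the exception in small samples.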
