51 research outputs found

    The oil-dispersion bath in anthroposophic medicine – an integrative review

    Abstract. Background: Anthroposophic medicine offers a variety of treatments, among them the oil-dispersion bath, developed in the 1930s by Werner Junge. Based on the phenomenon that oil and water do not mix, and on recommendations of Rudolf Steiner, Junge developed a vortex mechanism which churns water and essential oils into a fine mist. The oil-covered water droplets empty into a tub, in which the patient immerses for 15–30 minutes. We review the current literature on oil-dispersion baths. Methods: The following databases were searched: Medline, PubMed, Embase, AMED and CAMbase. The search terms were 'oil-dispersion bath' and 'oil bath', and their translations in German and French. An Internet search was also performed using Google Scholar, adding the search terms 'study' and 'case report' to the terms above. Finally, we asked several experts for gray literature not listed in the above-mentioned databases. We included only articles which met the criterion of a clinical study or case report, and excluded theoretical contributions. Results: Among the articles found in books, journals and other publications, we identified 1 prospective clinical study, 3 experimental studies (enrolling healthy individuals), 5 case reports, and 3 field reports. In almost all cases the studies described beneficial effects, although the methodological quality of most studies was weak. The main indications were internal/metabolic diseases and psychiatric/neurological disorders. Conclusion: Beyond the obvious beneficial effects of warm baths on subjective well-being, it remains to be clarified what unique contribution the particular essential oils dispersed in the water make. There is a lack of clinical studies exploring the efficacy of oil-dispersion baths; such studies are recommended for the future.

    The adaptive significance of chromosomal inversion polymorphisms in Drosophila melanogaster

    Chromosomal inversions, structural mutations that reverse a segment of a chromosome, cause suppression of recombination in the heterozygous state. Several studies have shown that inversion polymorphisms can form clines or fluctuate predictably in frequency over seasonal time spans. These observations prompted the hypothesis that chromosomal rearrangements might be subject to spatially and/or temporally varying selection. Here, we review what has been learned about the adaptive significance of inversion polymorphisms in the vinegar fly Drosophila melanogaster, the species in which they were first discovered by Sturtevant in 1917. A large body of work provides compelling evidence that several inversions in this system are adaptive; however, the precise selective mechanisms that maintain these polymorphisms in natural populations remain poorly understood. Recent advances in population genomics, modelling and functional genetics promise to greatly improve our understanding of this long-standing and fundamental problem in the near future.

    Exploring, exploiting and evolving diversity of aquatic ecosystem models: a community perspective


    Towards a theory of Evidence.

    In this paper we present our work towards a theory of evidence. The need for a theoretical framework can be appreciated through the recent empirical work on combining the models produced by data mining algorithms, [LSM99] and [CS99]. Our theory permits this communication and comparison of models to take place in a consistent manner. Before going into the details of our theory, it is worth spending a little extra time explaining the KDD requirements and hence the motivation for our work. We regard an enterprise's database as an accurate sample of the empirical world. This simplification allows us to ignore issues such as overfitting. We wish to discover implied knowledge about this world via a KDD modeling process which is both efficient and effective. KDD efficiency is concerned with computational complexity and the practical overheads incurred when handling commercially credible volumes of data. KDD effectiveness is concerned with predictive accuracy (e.g. classification error rate) and the comprehensibility of the revealed knowledge. The notions of both efficiency and effectiveness are relative ones. For example, failure to achieve 100% classification accuracy arises from intrinsic uncertainty in the database and from deficiencies of the algorithm. It is therefore not surprising that several KDD paradigms exist, within each of which there are several candidate KDD algorithms [FPSSU96]. Examples of KDD algorithms are: rule induction; instance-based learning; neural networks; genetic algorithms; and genetic programming. It can be shown that there is a degree of complementarity between the strengths and weaknesses of, for example, the rule-induction and instance-based paradigms, and that certain algorithms perform better on certain kinds of data sets. The data mining practitioner therefore prefers to have...
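    The complementarity claim above is easiest to appreciate with a concrete, purely illustrative comparison. The sketch below (Python, not from the paper) measures the held-out classification error rate of a rule-induction-style learner and an instance-based learner on a small public dataset; the dataset and scikit-learn estimators are stand-ins for the paradigms named in the abstract, not the evidence-theoretic framework the authors develop.

```python
# Illustrative only: the learners and dataset are stand-ins for the
# rule-induction and instance-based paradigms discussed in the abstract;
# this is not the evidence-theoretic framework proposed in the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier      # rule-induction-style learner
from sklearn.neighbors import KNeighborsClassifier   # instance-based learner

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

# KDD "effectiveness" measured here as classification error rate on held-out data.
for name, model in [("rule induction (tree)", tree), ("instance-based (k-NN)", knn)]:
    error = 1.0 - model.score(X_te, y_te)
    print(f"{name}: error rate = {error:.3f}")
```

    The two error rates typically differ from dataset to dataset, which is the sense in which the paradigms are complementary and why a consistent framework for comparing their evidence is useful.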

    Interfacing knowledge discovery algorithms to large database management systems

    The efficient mining of large, commercially credible databases requires a solution to at least two problems: (a) better integration between existing Knowledge Discovery algorithms and popular DBMSs; (b) the ability to exploit opportunities for computational speedup such as data parallelism. Both problems need to be addressed in a generic manner, since the stated requirements of end-users cover a range of data mining paradigms, DBMSs, and (parallel) platforms. In this paper we present a family of generic, set-based, primitive operations for Knowledge Discovery in Databases (KDD). We show how a number of well-known KDD classification metrics, drawn from paradigms such as Bayesian classifiers, Rule-Induction/Decision Tree algorithms, Instance-Based Learning methods, and Genetic Programming, can all be computed via our generic primitives. We then show how these primitives may be mapped into SQL and, where appropriate, optimised for good performance in respect of practical factors such as client-server communication overheads. We demonstrate how our primitives can support C4.5, a widely used rule-induction system. Performance evaluation figures are presented for commercially available parallel platforms, such as the IBM SP/2.
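    As a reading aid only: the abstract does not list the primitives themselves, but a typical set-based primitive of this kind is a grouped class-count aggregate pushed down to the DBMS, with the split metric folded together client-side. The sketch below is a guess at that flavour, in Python over SQLite; the table, columns, and entropy calculation are hypothetical illustrations, not the paper's actual primitives or their SQL mapping.

```python
# Hypothetical sketch: a set-based "count by (attribute value, class)" aggregate,
# the kind of quantity a C4.5-style information-gain computation consumes.
# Table/column names and the exact primitive are assumptions, not the paper's API.
import sqlite3
from math import log2

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE examples (outlook TEXT, play TEXT);
    INSERT INTO examples VALUES
        ('sunny','no'), ('sunny','no'), ('sunny','yes'),
        ('overcast','yes'), ('overcast','yes'),
        ('rain','yes'), ('rain','no');
""")

# The "primitive": one SQL aggregate returning counts grouped by value and class,
# so only a small summary crosses the client-server boundary.
rows = con.execute(
    "SELECT outlook, play, COUNT(*) FROM examples GROUP BY outlook, play"
).fetchall()

# Client side: fold the counts into the conditional entropy of 'play' given 'outlook'.
by_value: dict[str, dict[str, int]] = {}
for value, cls, n in rows:
    by_value.setdefault(value, {})[cls] = n

total = sum(sum(counts.values()) for counts in by_value.values())
cond_entropy = 0.0
for counts in by_value.values():
    n_v = sum(counts.values())
    h = -sum((c / n_v) * log2(c / n_v) for c in counts.values())
    cond_entropy += (n_v / total) * h
print(f"H(play | outlook) = {cond_entropy:.3f} bits")
```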

    Testing validity inferences for Genetic Drift Inventory scores using Rasch modeling and item order analyses

    Abstract. Background: Concept inventories (CIs) are commonly used tools for assessing student understanding of scientific and naive ideas, yet the body of empirical evidence supporting the inferences drawn from CI scores is often limited in scope and remains deeply rooted in Classical Test Theory. The Genetic Drift Inventory (GeDI) is a relatively new CI designed for use in diagnosing undergraduate students’ conceptual understanding of genetic drift. This study seeks to expand the sources of evidence examining validity and reliability inferences produced by GeDI scores. Specifically, our research focused on: (1) GeDI instrument and item properties as revealed by Rasch modeling, (2) item order effects on response patterns, and (3) generalization to a new geographic sample. Methods: A sample of 336 advanced undergraduate biology majors completed four equivalent versions of the GeDI. Rasch analysis was used to examine instrument dimensionality, item fit properties, person and item reliability, and alignment of item difficulty with person ability. To investigate whether the presentation order of GeDI item suites influenced overall student performance, scores were compared from randomly assigned, equivalent test versions varying in item-suite presentation order. Scores from this sample were also compared with scores from similar but geographically distinct samples to examine the generalizability of score patterns. Results: Rasch analysis indicated that the GeDI was unidimensional, with good fit to the Rasch model. Items had high reliability and were well matched to the ability of the sample. Person reliability was low. Rotating the GeDI’s item suites had no significant impact on scores, suggesting each suite functioned independently. Scores from our new sample from the NE United States were comparable to those from other geographic regions and provide evidence in support of score generalizability. Overall, most instrument features were robust. Suggestions for improvement include: (1) incorporation of additional items to differentiate high-ability persons and improve person reliability, and (2) re-examination of items with redundant or low difficulty levels. Conclusions: Rasch analyses of the GeDI instrument and item order effects expand the range and quality of evidence in support of validity claims and illustrate changes that are likely to improve the quality of this (and other) evolution education instruments.
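    For readers unfamiliar with the method, the dichotomous Rasch model underlying this analysis gives the probability that a person of ability θ answers an item of difficulty b correctly as exp(θ − b) / (1 + exp(θ − b)). The short Python sketch below uses invented ability and difficulty values to show what "items well matched to the ability of the sample" means; it is not the GeDI data or the authors' analysis.

```python
# Dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)).
# The ability (theta) and item difficulty (b) values below are invented
# purely to illustrate what "items matched to person ability" looks like.
from math import exp

def rasch_p(theta: float, b: float) -> float:
    """Probability that a person with ability theta answers an item of difficulty b."""
    return 1.0 / (1.0 + exp(-(theta - b)))

person_ability = 0.4                       # logits, hypothetical
item_difficulties = [-1.0, 0.3, 0.5, 2.0]  # logits, hypothetical

for b in item_difficulties:
    print(f"difficulty {b:+.1f}: P(correct) = {rasch_p(person_ability, b):.2f}")
# Items near the person's ability (0.3, 0.5) yield probabilities near 0.5 and are
# the most informative; the very easy (-1.0) and very hard (+2.0) items are less so.
```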

    DDB Graph Operations for the IFS/2

    The IFS/2 is an add-on hardware unit which provides whole-structure operations on data such as sets, relations and graphs. This report, a sequel to CSM-164 and CSM-168, describes 16 procedures which perform operations on graphs held in the IFS/2's persistent memory. The graph operations are especially intended to support recursive query evaluation in deductive databases (DDB), though some of the operations clearly have a wider application. Additional IFS/2 graph operations may be implemented at a later date.
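    As context for "recursive query evaluation in deductive databases": the canonical such query is transitive closure (e.g. a Datalog ancestor relation), and whole-graph operations of the kind described are the building blocks used to compute it. The sketch below is a plain, software-only semi-naive transitive closure in Python; it is illustrative and is not one of the 16 IFS/2 procedures, which this report summary does not list.

```python
# Semi-naive transitive closure over an edge set: the canonical recursive query
# (e.g. Datalog ancestor/2) that deductive-database graph operations support.
# Software illustration only, not the IFS/2's hardware procedures.
def transitive_closure(edges: set[tuple[str, str]]) -> set[tuple[str, str]]:
    closure = set(edges)
    delta = set(edges)                      # facts derived in the previous round
    while delta:
        new = {(a, d)
               for (a, b) in delta
               for (c, d) in edges
               if b == c} - closure         # join new facts against the base edges
        closure |= new
        delta = new                         # iterate only over newly derived facts
    return closure

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(edges)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```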