
    A complementary method for detecting qi vacuity

    BACKGROUND: Qi vacuity (QV) is defined in traditional Chinese medicine as a loss of energy in the human body. Until recently, however, no objective method for detecting QV was available. The automatic reflective diagnosis system (ARDK) is a device that estimates human bioenergy by measuring skin conductance at 24 specific acupoints on the wrists and ankles. METHODS: This study used the ARDK to measure skin conductance in 193 patients with QV and 89 sex- and age-matched healthy controls, to investigate whether the device is useful in detecting QV. Patients diagnosed with QV had three or more of five symptoms or signs; symptom severity was graded on 5 levels and scored from 0 to 4 points. We compared the mean ARDK values of patients with QV and healthy controls, and further used linear regression analysis to investigate the correlation between the mean ARDK values and QV scores in patients diagnosed with QV. RESULTS: The mean ARDK values in patients with QV (30.2 ± 16.8 μA) were significantly lower than those of healthy controls (37.7 ± 10.8 μA; P < 0.001). A negative correlation was found between the mean ARDK values and QV scores (r = -0.61; P < 0.001). After adjusting for age, the decreased mean ARDK values in patients with QV remained significantly correlated with the QV scores. CONCLUSION: These results suggest that the mean ARDK values reflect the severity of QV in patients diagnosed with the disorder, and that the bioenergy level of the human body can be measured through skin conductance. The ARDK is a safe and effective complementary method for detecting and diagnosing QV.
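    The analysis described in this abstract combines a two-group mean comparison with a linear regression of scores on measured values. A minimal sketch of that workflow, using entirely synthetic data (the variable names and the constructed score relationship are assumptions for illustration, not the study's data):

    ```python
    # Sketch of the abstract's analysis: compare group means, then regress
    # symptom scores on measured values. All data below are synthetic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Synthetic mean ARDK values (muA), loosely matching the reported group statistics.
    ardk_qv = rng.normal(30.2, 16.8, 193)    # 193 patients with qi vacuity
    ardk_ctrl = rng.normal(37.7, 10.8, 89)   # 89 matched healthy controls

    # Group comparison: Welch's t-test (does not assume equal variances).
    t, p = stats.ttest_ind(ardk_qv, ardk_ctrl, equal_var=False)

    # Synthetic QV scores constructed to correlate negatively with ARDK values,
    # mimicking the direction of the reported r = -0.61.
    qv_scores = 20 - 0.3 * ardk_qv + rng.normal(0, 3, 193)
    reg = stats.linregress(ardk_qv, qv_scores)

    print(f"t = {t:.2f}, p = {p:.4f}")
    print(f"r = {reg.rvalue:.2f}, slope = {reg.slope:.3f}")
    ```

    With real data one would also fit the age-adjusted model the abstract mentions, e.g. a multiple regression with age as a covariate.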

    Erythropoietin in amyotrophic lateral sclerosis: a multicentre, randomised, double blind, placebo controlled, phase III study

    OBJECTIVE: To assess the efficacy of recombinant human erythropoietin (rhEPO) in amyotrophic lateral sclerosis (ALS). METHODS: Patients with probable laboratory-supported, probable or definite ALS were enrolled by 25 Italian centres and randomly assigned (1:1) to receive intravenous rhEPO 40,000 IU or placebo fortnightly as add-on treatment to riluzole 100 mg daily for 12 months. The primary composite outcome was survival, tracheotomy or >23 h non-invasive ventilation (NIV). Secondary outcomes were decline in ALSFRS-R, slow vital capacity (sVC) and quality of life (ALSAQ-40). Tolerability was evaluated by analysing adverse events (AEs) causing withdrawal. The randomisation sequence was computer-generated by blocks, stratified by centre, disease severity (ALSFRS-R cut-off score of 33) and site of onset (spinal or bulbar). The main outcome analysis was performed by intention-to-treat in all randomised patients, for the entire population and for patients stratified by severity and onset. The study is registered, EudraCT 2009-016066-91. RESULTS: We randomly assigned 208 patients, of whom 5 (1 rhEPO and 4 placebo) withdrew consent and 3 (placebo) became ineligible (retinal thrombosis, respiratory insufficiency, SOD1 mutation) before receiving treatment; 103 patients receiving rhEPO and 97 receiving placebo were eligible for analysis. At 12 months, the annualised rates of death (rhEPO 0.11, 95% CI 0.06 to 0.20; placebo 0.08, CI 0.04 to 0.17) and of tracheotomy or >23 h NIV (rhEPO 0.16, CI 0.10 to 0.27; placebo 0.18, CI 0.11 to 0.30) did not differ between groups, even after stratification by onset and baseline ALSFRS-R. Withdrawal due to AEs was 16.5% with rhEPO and 8.3% with placebo. No differences were found for the secondary outcomes. CONCLUSIONS: RhEPO 40,000 IU fortnightly did not change the course of ALS.
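    The annualised event rates with 95% confidence intervals reported above are of the kind commonly computed from event counts and person-years of follow-up. A hedged sketch of one standard approach (an exact Poisson interval via the chi-square distribution); the event count and person-years below are illustrative assumptions, not the trial's actual denominators:

    ```python
    # Annualised event rate with an exact Poisson 95% CI, computed from an
    # event count and total person-years. Inputs are illustrative only.
    from scipy import stats

    def annualised_rate(events, person_years, alpha=0.05):
        """Return (rate, lower, upper) using the exact Poisson (Garwood) CI."""
        rate = events / person_years
        lo = (stats.chi2.ppf(alpha / 2, 2 * events) / (2 * person_years)
              if events > 0 else 0.0)
        hi = stats.chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * person_years)
        return rate, lo, hi

    # e.g. 11 deaths over 100 person-years of follow-up (hypothetical numbers)
    r, lo, hi = annualised_rate(11, 100.0)
    print(f"rate = {r:.2f}/yr (95% CI {lo:.2f} to {hi:.2f})")
    ```

    Comparing two such rates between arms is then typically done with a rate-ratio test or a Poisson regression model.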

    Testo e computer. Introduzione alla linguistica computazionale

    How can the computer help us understand how our language works? What does it mean to analyse a text with the aid of a computer? To what extent can we extend the computer's capabilities by making it able to interact with human users in their own language? These and other questions are the object of inquiry of computational linguistics, a discipline whose core concern is precisely the relationship between language and computers. The book provides the fundamentals of computational linguistics, starting from a primary interest in the text, its structure and its content. The volume offers a balanced and accessible synthesis of theory and practice, of basic notions and their application.

    Acquiring and Representing Meaning. Theoretical and Computational Perspectives

    Modelling the way word meanings dynamically function and combine in communicative contexts, evolve through learning and are categorised through high-level semantic classes presents one of the most difficult challenges for cognitive science, and is a large stumbling block on the way to developing advanced artificial systems for full text understanding. The problem of the form of semantic knowledge typically presents itself as a representation issue: i.e. what is the aptest way of representing the meaning of words and of the complex expressions they enter into? Providing satisfactory answers to these questions is an essential requirement for explaining the effective use of semantic knowledge in concrete cognitive abilities. This is extremely important also from an engineering and computational perspective, as a key to a deeper understanding of the constructive principles underpinning the design of intelligent artifacts like robots and other artificial intelligent agents. Similarly, the issue of semantic dynamics has a crucial role in modelling human cognition, since cognitive agents constantly update and revise their knowledge, acquire new words, assign new meanings to already known words, etc.

    CHUNK-IT. An Italian Shallow Parser for Robust Syntactic Annotation

    This paper reports on the experience of developing and applying a shallow parsing scheme, "chunking", to unrestricted Italian texts, with a view to the prospective definition of further, more complex levels of syntactic analysis. A text is chunked into structured units which can be identified with certainty on the basis of an empty syntactic lexicon. The chunking process stops at the level of granularity beyond which the analysis becomes undecidable. We argue that a chunked syntactic representation can usefully be exploited as such for non-trivial NLP applications which do not require full text understanding, such as automatic lexical acquisition and information retrieval. The first part of the paper illustrates the adopted annotation scheme in detail, relating it to some specific issues of Italian syntactic analysis. In the second part, after giving some theoretical justification of the notion of chunking, we describe some applications of this shallow parsing technique to robust syntactic annotation of texts.
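    The core idea of chunking — grouping POS-tagged tokens into flat, non-recursive units without attempting full parsing — can be illustrated with a toy example. This is not the CHUNK-IT grammar; the tagset, chunk labels (N_C, FV_C, P_C) and rules below are invented for illustration only:

    ```python
    # Toy left-to-right chunker over POS-tagged tokens: flat nominal (N_C),
    # finite-verbal (FV_C) and prepositional (P_C) chunks, no recursion.
    # Tagset and rules are hypothetical, not the CHUNK-IT scheme.

    # A tiny POS-tagged Italian sentence: "Il gatto dorme sul divano"
    tagged = [("Il", "ART"), ("gatto", "NOUN"), ("dorme", "VERB"),
              ("sul", "PREP"), ("divano", "NOUN")]

    def chunk(tokens):
        """Greedy rules: PREP ART? NOUN -> P_C, ART? NOUN -> N_C, VERB -> FV_C."""
        chunks, i = [], 0
        while i < len(tokens):
            word, pos = tokens[i]
            if pos == "PREP":
                span = [word]
                i += 1
                # absorb an optional article and the head noun, then stop
                while i < len(tokens) and tokens[i][1] in ("ART", "NOUN"):
                    span.append(tokens[i][0])
                    i += 1
                    if tokens[i - 1][1] == "NOUN":
                        break
                chunks.append(("P_C", span))
            elif pos == "ART":
                span = [word]
                i += 1
                if i < len(tokens) and tokens[i][1] == "NOUN":
                    span.append(tokens[i][0])
                    i += 1
                chunks.append(("N_C", span))
            elif pos == "NOUN":
                chunks.append(("N_C", [word]))
                i += 1
            elif pos == "VERB":
                chunks.append(("FV_C", [word]))
                i += 1
            else:
                chunks.append(("UNK_C", [word]))
                i += 1
        return chunks

    print(chunk(tagged))
    ```

    The analysis stops at chunk boundaries: attaching "sul divano" to the verb (a PP-attachment decision) is exactly the kind of choice a shallow parser deliberately leaves undecided.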

    Annotazione sintattica di corpora: aspetti metodologici

    An increasingly shared assumption in studies of both L1 and L2 acquisition is that the privileged empirical evidence should consist of corpora of learners' written or spoken productions, extensively annotated at multiple levels of linguistic representation. More generally, lemmatised and morphosyntactically annotated corpora are by now part of the linguist's standard toolkit. Alongside them, however, there is a growing need for textual resources that are more sophisticated in the modes of linguistic exploration they support, such as syntactically annotated corpora (so-called treebanks). These make it possible to observe learners' processes of convergence towards the target language, including at the level of specific abstract grammatical features or linguistic macro-structures.