
    Response to combination therapy with interferon alfa-2a and ribavirin in chronic hepatitis C according to a TNF-alpha promoter polymorphism

    Background: Tumor necrosis factor-alpha (TNF-alpha) is involved in the pathogenesis of chronic active hepatitis C. Polymorphisms in the promoter region of the TNF-alpha gene can alter TNF-alpha expression and modify the host immune response. The present study aimed to correlate the G308A TNF-alpha promoter polymorphism with the response to antiviral combination therapy in chronic hepatitis C. Patients and Methods: 62 patients with chronic hepatitis C and 119 healthy unrelated controls were genotyped for the G308A TNF-alpha promoter polymorphism. The patients received 3 x 3 million units of interferon alfa-2a and 1,000-1,200 mg ribavirin daily according to their body weight. Response was defined as absence of HCV-RNA and normalization of serum ALT after 6 months of combination therapy. Results: With respect to allele and genotype frequencies, no significant difference was observed between controls and patients with chronic hepatitis C. Nor was a difference observed when responders and non-responders to antiviral therapy were compared. Conclusions: The TNF-alpha promoter polymorphism investigated here is equally distributed in healthy individuals and patients with hepatitis C and does not appear to predict the response to therapy with interferon alfa-2a and ribavirin. Copyright (C) 2003 S. Karger AG, Basel.

    Human Computation and Convergence

    Humans are the most effective integrators and producers of information, directly and through the use of information-processing inventions. As these inventions become increasingly sophisticated, the substantive role of humans in processing information will tend toward capabilities that derive from our most complex cognitive processes, e.g., abstraction, creativity, and applied world knowledge. Through the advancement of human computation - methods that leverage the respective strengths of humans and machines in distributed information-processing systems - formerly discrete processes will combine synergistically into increasingly integrated and complex information-processing systems. These new, collective systems will exhibit an unprecedented degree of predictive accuracy in modeling physical and techno-social processes, and may ultimately coalesce into a single unified predictive organism, with the capacity to address society's most wicked problems and achieve planetary homeostasis.

    A Model for Language Annotations on the Web

    Several annotation models have been proposed to enable a multilingual Semantic Web. Such models home in on the word and its morphology and assume that the language tag and URI come from external resources. These resources, such as ISO 639 and Glottolog, have limited coverage of the world's languages and, at best, only a very limited thesaurus-like structure, which hampers language annotation and thus constrains research in Digital Humanities and other fields. To resolve this 'outsourced' task of the current models, we developed a model for representing information about languages, the Model for Language Annotation (MoLA), so that basic language information can be recorded consistently and then queried and analyzed. This covers the various types of languages and language families and the relations among them. MoLA is formalized in OWL so that it can integrate with Linguistic Linked Data resources. Sufficient coverage of MoLA is demonstrated with the use case of French.
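    The abstract describes MoLA as an OWL model for recording languages, language families, and the relations among them, designed to integrate with Linguistic Linked Data. The snippet below is a minimal sketch of that idea in Python with rdflib; the namespace URI, the class names Language and LanguageFamily, and the property memberOf are illustrative assumptions, not the published MoLA vocabulary.

```python
# Minimal sketch of recording language information as Linked Data in the
# spirit of MoLA. The namespace, class names, and property names below are
# illustrative assumptions, not the published MoLA vocabulary.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

MOLA = Namespace("http://example.org/mola#")   # hypothetical namespace
EX = Namespace("http://example.org/lang/")

g = Graph()
g.bind("mola", MOLA)

# A language family and one of its member languages (assumed class/property names).
g.add((EX.Romance, RDF.type, MOLA.LanguageFamily))
g.add((EX.French, RDF.type, MOLA.Language))
g.add((EX.French, RDFS.label, Literal("French", lang="en")))
g.add((EX.French, MOLA.memberOf, EX.Romance))

# Query the graph for every language and the family it belongs to.
q = """
SELECT ?lang ?family WHERE {
    ?lang a mola:Language ;
          mola:memberOf ?family .
}
"""
for row in g.query(q, initNs={"mola": MOLA}):
    print(row.lang, row.family)
```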

    An architecture for the autonomic curation of crowdsourced knowledge

    Human knowledge curators are intrinsically better than their digital counterparts at providing relevant answers to queries. This is mainly because an experienced biological brain accounts for relevant community expertise and exploits the underlying connections between knowledge pieces when offering suggestions pertinent to a specific question, whereas most automated database managers do not. We address this problem by proposing an architecture for the autonomic curation of crowdsourced knowledge that is underpinned by semantic technologies. The architecture is instantiated in the career data domain, yielding Aviator, a collaborative platform capable of producing complete, intuitive, and relevant answers to career-related queries in a time-effective manner. In addition to providing numeric and use-case-based evidence to support these research claims, this extended work also contains a detailed architectural analysis of Aviator to outline its suitability for automatically curating knowledge to a high standard of quality.

    Data Work in a Knowledge-Broker Organization: How Cross-Organizational Data Maintenance Shapes Human Data Interactions


    FOLCOM or the costs of tagging

    This paper introduces FOLCOM, a FOLksonomy Cost estimatiOn Method that uses a story-points approach to quantitatively estimate the cumulative effort of tagging a collection of information objects by a community of users. The method was evaluated through individual, face-to-face structured interviews with eight knowledge management experts from several large ICT enterprises interested either in adopting tagging internally as a knowledge management solution or simply in tangible evidence of its added value. As a second part of the evaluation, we calibrated the parameters of the method on data collected from a series of six user experiments, reaching a promising prediction accuracy within a margin of ±25% in 75% of the cases.
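    To make the story-points idea concrete, the toy sketch below converts a per-object story-point judgment into a tagging-effort estimate with the ±25% band reported above. The function name, the velocity parameter, and its default value are invented for illustration and are not FOLCOM's calibrated model.

```python
# Toy sketch of a story-points-style tagging-effort estimate. The parameter
# names and default values are illustrative assumptions, not FOLCOM's
# calibrated values.
def estimate_tagging_effort(num_objects: int,
                            points_per_object: float,
                            minutes_per_point: float = 1.5) -> dict:
    """Return an effort estimate (in person-hours) with a +/-25% interval,
    mirroring the prediction margin reported in the abstract."""
    total_points = num_objects * points_per_object
    hours = total_points * minutes_per_point / 60.0
    return {
        "expected_hours": hours,
        "lower_bound": hours * 0.75,   # -25%
        "upper_bound": hours * 1.25,   # +25%
    }

# Example: a community tagging 10,000 documents, each judged at 2 story points.
print(estimate_tagging_effort(10_000, points_per_object=2.0))
```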

    ONTOCOM: a cost estimation model for ontology engineering

    The technical challenges associated with the development and deployment of ontologies have been the subject of a considerable number of research initiatives since the beginning of the nineties. The economic aspects of these processes, however, remain poorly explored, impeding the dissemination of ontology-driven technologies beyond the boundaries of the academic community. This paper aims to help remedy this situation by proposing ONTOCOM (Ontology Cost Model), a model for predicting the costs arising in ontology engineering processes. We introduce a methodology to generate a cost model adapted to a particular ontology development strategy, together with an inventory of cost drivers that influence the amount of effort invested in the activities performed during an ontology life cycle. We further present the results of the model validation procedure, which covered an expert-driven evaluation and a statistical calibration on 36 data points collected from real-world projects. The validation revealed that ontology engineering processes have a high learning rate, indicating that building very large ontologies is feasible from an economic point of view. Moreover, the complexity of ontology evaluation, domain analysis, and conceptualization activities proved to have a major impact on the overall duration of the ontology engineering process.
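    The abstract presents ONTOCOM as a parametric cost model whose prediction combines ontology size with an inventory of cost drivers and is then statistically calibrated. The sketch below assumes a COCOMO-style form, effort = a * size^b * (product of cost-driver multipliers); the constants, driver names, and multiplier values are placeholders chosen for illustration, not ONTOCOM's calibrated parameters.

```python
# Sketch of a COCOMO-style parametric effort estimate in the spirit of
# ONTOCOM: effort = a * size**b * product(cost-driver multipliers).
# All constants and driver multipliers below are placeholder assumptions,
# not ONTOCOM's calibrated values.
from math import prod

def estimate_effort_person_months(size_kilo_entities: float,
                                  cost_drivers: dict[str, float],
                                  a: float = 2.5,
                                  b: float = 1.0) -> float:
    """Estimate effort in person-months from ontology size (in thousands of
    ontology entities) and a set of effort multipliers for the cost drivers."""
    return a * (size_kilo_entities ** b) * prod(cost_drivers.values())

# Example: a 4,000-entity ontology with above-average domain complexity and
# an experienced team (multipliers > 1 increase effort, < 1 decrease it).
drivers = {
    "domain_analysis_complexity": 1.2,
    "conceptualization_complexity": 1.1,
    "ontologist_capability": 0.9,
}
print(round(estimate_effort_person_months(4.0, drivers), 1), "person-months")
```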