
    Response to combination therapy with interferon alfa-2a and ribavirin in chronic hepatitis C according to a TNF-alpha promoter polymorphism

    Background: Tumor necrosis factor-alpha (TNF-alpha) is involved in the pathogenesis of chronic active hepatitis C. Polymorphisms in the promoter region of the TNF-alpha gene can alter TNF-alpha expression and modify the host immune response. The present study aimed to correlate the G308A TNF-alpha polymorphism with the response to antiviral combination therapy in chronic hepatitis C. Patients and Methods: 62 patients with HCV and 119 healthy unrelated controls were genotyped for the G308A TNF-alpha promoter polymorphism. The patients received 3 x 3 million units of interferon alfa-2a and 1,000-1,200 mg ribavirin daily according to their body weight. A response was defined as absence of HCV-RNA and normalization of S-ALT after 6 months of combination therapy. Results: With respect to allele and genotype frequencies, no significant difference was observed between controls and patients with chronic hepatitis C. Nor was such a difference observed when responders and non-responders to antiviral therapy were compared. Conclusions: The promoter polymorphism of the TNF-alpha gene investigated here is equally distributed in healthy individuals and patients with hepatitis C and does not appear to predict the response to therapy with interferon alfa-2a and ribavirin. Copyright © 2003 S. Karger AG, Basel.
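    To illustrate the kind of analysis this abstract describes, the sketch below runs a chi-square test on a genotype contingency table, the standard way to compare genotype frequencies between two groups. All counts are hypothetical placeholders, not the study's data, and scipy is assumed to be available.

```python
# Minimal sketch of a genotype-frequency comparison, assuming hypothetical counts.
from scipy.stats import chi2_contingency

# Rows: patients vs. controls; columns: GG, GA, AA genotype counts (hypothetical).
observed = [
    [40, 18, 4],   # chronic hepatitis C patients (placeholder data)
    [80, 33, 6],   # healthy unrelated controls (placeholder data)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with the paper's finding of
# no significant difference in genotype distribution between the groups.
```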

    Human Computation and Convergence

    Humans are the most effective integrators and producers of information, directly and through the use of information-processing inventions. As these inventions become increasingly sophisticated, the substantive role of humans in processing information will tend toward capabilities that derive from our most complex cognitive processes, e.g., abstraction, creativity, and applied world knowledge. Through the advancement of human computation - methods that leverage the respective strengths of humans and machines in distributed information-processing systems - formerly discrete processes will combine synergistically into increasingly integrated and complex information-processing systems. These new, collective systems will exhibit an unprecedented degree of predictive accuracy in modeling physical and techno-social processes, and may ultimately coalesce into a single unified predictive organism, with the capacity to address society's most wicked problems and achieve planetary homeostasis.

    Data Work in a Knowledge-Broker Organization: How Cross-Organizational Data Maintenance Shapes Human Data Interactions

    The term Human-Data Interaction (HDI) conceptualizes the growing importance of understanding how people need and desire to use and interact with data. Previous HDI cases have mainly focused on the interface between personal health data and the healthcare sector. This paper argues that it is relevant to consider HDI at an organisational level and examines what HDI can look like in such a context, where data and data maintenance are core assets and activities. We report on initial findings of a study of a knowledge-broker organisation, where we follow how data are produced, shared, and maintained in a cross-organisational context. We discuss similarities and differences between HDI around personal health data and HDI in cross-organisational data maintenance. We propose to extend the notion of HDI to include the complexity of cross-organisational data work.

    Data Quality Barriers for Transparency in Public Procurement

    Governments need to be accountable and transparent in their public spending decisions, both to prevent losses through fraud and corruption and to build healthy and sustainable economies. Open data act as a major instrument in this respect by enabling public administrations, service providers, data journalists, transparency activists, and regular citizens to identify fraud or uncompetitive markets by connecting related, heterogeneous, and originally unconnected data sources. To this end, in this article, we present our experience in the case of Slovenia, where we successfully applied a number of anomaly detection techniques over a set of open, disparate data sets integrated into a Knowledge Graph, including procurement, company, and spending data, through a linked-data-based platform called TheyBuyForYou. We then report a set of guidelines for publishing high-quality procurement data for better procurement analytics, since our experience has shown that there are significant shortcomings in the quality of the data being published. This article contributes to enhanced policy making in three ways: by guiding public administrations at local, regional, and national levels on how to improve the way they publish and use procurement-related data; by developing technologies and solutions that buyers in the public and private sectors can use and adapt to become more transparent, make markets more competitive, and reduce waste and fraud; and by providing a Knowledge Graph, a data resource designed to facilitate integration across multiple data silos by adding context and domain knowledge to machine-learning-based procurement analytics.
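    As a rough illustration of the "anomaly detection techniques" mentioned above, the sketch below flags unusual contract records with an Isolation Forest. The features and records are hypothetical placeholders; this is not the TheyBuyForYou pipeline or its Knowledge Graph schema, just one generic technique of the kind the abstract refers to.

```python
# Minimal sketch: flagging anomalous procurement records, assuming hypothetical
# per-contract features (award value in EUR, number of bidders).
import numpy as np
from sklearn.ensemble import IsolationForest

contracts = np.array([
    [12_000, 5],
    [15_500, 4],
    [13_200, 6],
    [980_000, 1],   # unusually large single-bidder award (placeholder)
    [14_100, 5],
])

model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(contracts)  # -1 marks potential anomalies

for row, label in zip(contracts, labels):
    if label == -1:
        print(f"flagged for review: value={row[0]}, bidders={row[1]}")
```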

    A Model for Language Annotations on the Web

    Several annotation models have been proposed to enable a multilingual Semantic Web. Such models home in on the word and its morphology and assume the language tag and URI come from external resources. These resources, such as ISO 639 and Glottolog, have limited coverage of the world's languages and have, at best, a very limited thesaurus-like structure, which hampers language annotation and thus constrains research in Digital Humanities and other fields. To resolve this 'outsourced' task of the current models, we developed a model for representing information about languages, the Model for Language Annotation (MoLA), such that basic language information can be recorded consistently and thereby queried and analyzed as well. This includes the various types of languages and language families and the relations among them. MoLA is formalized in OWL so that it can integrate with Linguistic Linked Data resources. Sufficient coverage of MoLA is demonstrated with the use case of French.
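    For a flavor of how such language information can be recorded as Linked Data, the sketch below builds a tiny RDF graph with rdflib. The namespace, class names, and property names are illustrative placeholders, not MoLA's actual OWL vocabulary.

```python
# Minimal sketch: recording a language, a family, and a relation between them,
# assuming a placeholder vocabulary (not MoLA's real one).
from rdflib import Graph, Literal, Namespace, RDF, RDFS

MOLA = Namespace("http://example.org/mola#")  # placeholder namespace
g = Graph()
g.bind("mola", MOLA)

g.add((MOLA.French, RDF.type, MOLA.Language))
g.add((MOLA.French, RDFS.label, Literal("French", lang="en")))
g.add((MOLA.Romance, RDF.type, MOLA.LanguageFamily))
g.add((MOLA.French, MOLA.memberOf, MOLA.Romance))  # hypothetical property

print(g.serialize(format="turtle"))
```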

    An architecture for the autonomic curation of crowdsourced knowledge

    Human knowledge curators are intrinsically better than their digital counterparts at providing relevant answers to queries. That is mainly because an experienced biological brain will account for relevant community expertise and exploit the underlying connections between knowledge pieces when offering suggestions pertinent to a specific question, whereas most automated database managers will not. We address this problem by proposing an architecture for the autonomic curation of crowdsourced knowledge that is underpinned by semantic technologies. The architecture is instantiated in the career data domain, yielding Aviator, a collaborative platform capable of producing complete, intuitive, and relevant answers to career-related queries in a time-effective manner. In addition to providing numerical and use-case-based evidence to support these research claims, this extended work also contains a detailed architectural analysis of Aviator to outline its suitability for automatically curating knowledge to a high standard of quality.

    Trusts, co-ops, and crowd workers: Could we include crowd data workers as stakeholders in data trust design?

    Data trusts have been proposed as a mechanism through which data can be more readily exploited for a variety of aims, including economic development and social-benefit goals such as medical research or policy-making. Data trusts, and similar data governance mechanisms such as data co-ops, aim to facilitate the use and re-use of datasets across organizational boundaries and, in the process, to protect the interests of stakeholders such as data subjects. However, the current discourse on data trusts does not acknowledge another common stakeholder in the data value chain: the crowd workers who are employed to collect, validate, curate, and transform data. In this paper, we report on a preliminary qualitative investigation into how crowd data workers themselves feel datasets should be used and governed. We find that while overall remuneration is important to those workers, they also value public-benefit data use but have reservations about delayed remuneration and about the trustworthiness of both administrative processes and the crowd itself. We discuss the implications of our findings for how data trusts could be designed, and how data trusts could be used to give crowd workers a more enduring stake in the product of their work.

    FOLCOM or the costs of tagging

    This paper introduces FOLCOM, a FOLksonomy Cost estimatiOn Method that uses a story-points approach to quantitatively assess the effort cumulatively associated with tagging a collection of information objects by a community of users. The method was evaluated through individual, face-to-face structured interviews with eight knowledge management experts from several large ICT enterprises interested either in adopting tagging internally as a knowledge management solution or in tangible evidence of its added value. As a second theme of our evaluation, we calibrated the parameters of the method based on data collected from a series of six user experiments, reaching a promising prediction accuracy within a margin of ±25% in 75% of the cases.
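    As a sketch of what a story-points-style cost estimate can look like, the snippet below scales an object count by per-point effort. The formula and parameter values are illustrative assumptions, not FOLCOM's calibrated model.

```python
# Minimal sketch of a story-points-style tagging-effort estimate,
# assuming a hypothetical linear model and placeholder parameters.

def estimate_tagging_effort(num_objects: int,
                            points_per_object: float,
                            minutes_per_point: float) -> float:
    """Return the estimated total tagging effort in person-hours."""
    return num_objects * points_per_object * minutes_per_point / 60.0

# Hypothetical collection: 10,000 documents, 2 story points each,
# 1.5 minutes of tagging work per point.
effort_hours = estimate_tagging_effort(10_000, 2.0, 1.5)
print(f"estimated effort: {effort_hours:.0f} person-hours")
# The paper reports calibrated predictions within ±25% of actual effort
# in 75% of the evaluated cases; this toy formula makes no such claim.
```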