
    An informatics model for guiding assembly of telemicrobiology workstations for malaria collaborative diagnostics using commodity products and open-source software

    Background: Deficits in clinical microbiology infrastructure exacerbate global infectious disease burdens. This paper examines how commodity computation, communication, and measurement products, combined with open-source analysis and communication applications, can be incorporated into laboratory medicine microbiology protocols. These commodity components are now all sourceable globally. An informatics model is presented for guiding the use of low-cost commodity components and free software in the assembly of clinically useful and usable telemicrobiology workstations.

    Methods: The model incorporates two general principles: 1) collaborative diagnostics, where free and open communication and networking applications are used to link distributed collaborators for reciprocal assistance in organizing and interpreting digital diagnostic data; and 2) commodity engineering, which leverages globally available consumer electronics and open-source informatics applications to build generic open systems that measure needed information in ways substantially equivalent to more complex proprietary systems. Routine microscopic examination of Giemsa- and fluorescently stained blood smears for diagnosing malaria is used as an example to validate the model.

    Results: The model is used as a constraint-based guide for the design, assembly, and testing of a functioning, open, and commoditized telemicroscopy system that supports distributed acquisition, exploration, analysis, interpretation, and reporting of digital microscopy images of stained malarial blood smears, while also supporting remote diagnostic tracking, quality assessment, and diagnostic process development.

    Conclusion: The open telemicroscopy workstation design and use process described here can address clinical microbiology infrastructure deficits in an economically sound and sustainable manner. It can boost capacity to deal with comprehensive measurement of disease and care outcomes in individuals and groups in a distributed and collaborative fashion. The workstation enables local control over the creation and use of diagnostic data, while allowing for remote collaborative support of diagnostic data interpretation and tracking. It can enable global pooling of malaria disease information and the development of open, participatory, and adaptable laboratory medicine practices. The informatics model highlights how the larger issue of access to generic commoditized measurement, information-processing, and communication technology in both high- and low-income countries can enable diagnostic services that are much less expensive than, but substantially equivalent to, those currently in use in high-income countries.

    Informatics for devices within telehealth systems for monitoring chronic diseases

    Preliminary investigation at the beginning of this research showed that informatics on point-of-care (POC) devices was limited to basic data generation and processing. This thesis is based on publications of several studies carried out during the course of the research. The aim of the research is to model and analyse information generation and exchange in telehealth systems and to identify and analyse the capabilities of these systems in managing chronic diseases which utilise point-of-care devices. The objectives to meet this aim are: (i) to review the state of the art in informatics and decision support on point-of-care devices; (ii) to assess the current level of servitization of POC devices used within the home environment; (iii) to identify current models of information generation and exchange for POC devices from a telehealth perspective; (iv) to identify the capabilities of telehealth systems; (v) to evaluate key components of telehealth systems (i.e. POC devices and intermediate devices); and (vi) to analyse the capabilities of telehealth systems as enablers of a healthcare policy. The literature review showed that data transfer from devices is an important part of generating information. The implication is that future designs of devices should have efficient ways of transferring data, to minimise the errors that may be introduced through manual data entry or transfer. The full impact of a servitized model for point-of-care devices is possible within a telehealth system, since capabilities for interpreting data for the patient can be offered as a service (cf. NHS Direct). This research helped to deduce the components of telehealth systems which are important in supporting informatics and decision making for actors of the system; these included actors and devices. Telehealth systems also help facilitate the exchange of data so that decision making is faster for all actors concerned. This research has shown that a large number of capability categories existed for patients and health professionals. There were no capabilities related to the caregiver that had a direct impact on the patient and health professional, which was not surprising since the number of caregivers in current telehealth systems was low. Two types of intermediate devices were identified in telehealth systems: generic and proprietary. Patients and caregivers used both types, while health professionals only used generic devices. However, there was a higher incidence of proprietary devices used by patients. Proprietary devices possess features to support patients better, thus promoting their independence in managing their chronic condition. This research developed a six-step methodology for working from government objectives to appropriate telehealth capability categories, which helped to determine the objectives for which a telehealth system is suitable.

    Recommendation vs sentiment analysis: A text-driven latent factor model for rating prediction with cold-start awareness

    Review rating prediction is an important research topic. The problem has been approached from either the perspective of recommender systems (RS) or that of sentiment analysis (SA). Recent SA research using deep neural networks (DNNs) has recognized the importance of user-product interaction for better interpreting the sentiment of reviews. However, the complexity of DNN models in terms of the scale of parameters is very high, and their performance is not always satisfactory, especially when user-product interaction is sparse. In this paper, we propose a simple, extensible RS-based model, called the Text-driven Latent Factor Model (TLFM), to capture the semantics of reviews, user preferences, and product characteristics by jointly optimizing two components, a user-specific LFM and a product-specific LFM, each of which decomposes text into a specific low-dimensional representation. Furthermore, we address the cold-start issue by developing a novel Pairwise Rating Comparison strategy (PRC), which utilizes the differences between ratings on common users/products as supplementary information to calibrate parameter estimation. Experiments conducted on IMDB and Yelp datasets validate the advantage of our approach over state-of-the-art baseline methods.
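
    The sketch below illustrates only the generic latent-factor side of such a rating predictor: a global mean plus user and item biases plus a dot product of low-dimensional factor vectors, fitted by stochastic gradient descent. All names and the toy data are assumptions for illustration; this is not the authors' TLFM, which derives the factors from review text and adds the PRC calibration.

```python
# Minimal latent-factor rating predictor, sketched on assumed toy data;
# illustrative only, not the TLFM implementation described above.
import numpy as np

rng = np.random.default_rng(0)

# toy (user, item, rating) triples
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 2.0)]
n_users, n_items, n_factors = 3, 2, 4

mu = np.mean([r for _, _, r in ratings])              # global rating mean
b_u = np.zeros(n_users)                               # user biases
b_i = np.zeros(n_items)                               # item biases
P = 0.1 * rng.standard_normal((n_users, n_factors))   # user factors
Q = 0.1 * rng.standard_normal((n_items, n_factors))   # item factors

lr, reg = 0.01, 0.05
for _ in range(200):
    for u, i, r in ratings:
        err = r - (mu + b_u[u] + b_i[i] + P[u] @ Q[i])
        b_u[u] += lr * (err - reg * b_u[u])
        b_i[i] += lr * (err - reg * b_i[i])
        # tuple assignment so both updates use the pre-update factor vectors
        P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                      Q[i] + lr * (err * P[u] - reg * Q[i]))

print(round(mu + b_u[0] + b_i[1] + P[0] @ Q[1], 2))   # predicted rating for (user 0, item 1)
```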

    BiGG: a Biochemical Genetic and Genomic knowledgebase of large scale metabolic reconstructions

    Background: Genome-scale metabolic reconstructions under the Constraint-Based Reconstruction and Analysis (COBRA) framework are valuable tools for analyzing the metabolic capabilities of organisms and interpreting experimental data. As the number of such reconstructions and analysis methods increases, there is a greater need for data uniformity and ease of distribution and use.

    Description: We describe BiGG, a knowledgebase of Biochemically, Genetically and Genomically structured genome-scale metabolic network reconstructions. BiGG integrates several published genome-scale metabolic networks into one resource with standard nomenclature, which allows components to be compared across different organisms. BiGG can be used to browse model content, visualize metabolic pathway maps, and export SBML files of the models for further analysis by external software packages. Users may follow links from BiGG to several external databases to obtain additional information on genes, proteins, reactions, metabolites, and citations of interest.

    Conclusions: BiGG addresses a need in the systems biology community for access to high-quality curated metabolic models and reconstructions. It is freely available for academic use at http://bigg.ucsd.edu.
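
    As a rough illustration of what the SBML export mentioned above contains, the sketch below counts the metabolites (species) and reactions in an SBML file using only the Python standard library. The file name and the assumption that the SBML namespace can be read from the root tag are illustrative choices, not part of BiGG's documented interface.

```python
# Sketch: summarize an SBML model export using only the standard library.
# 'e_coli_core.xml' is a hypothetical file name for a downloaded model.
import xml.etree.ElementTree as ET

def summarize_sbml(path):
    root = ET.parse(path).getroot()
    # Take the SBML namespace from the root tag so the sketch works for
    # either Level 2 or Level 3 documents.
    ns = root.tag.split('}')[0].strip('{')
    model = root.find(f'{{{ns}}}model')
    species = model.findall(f'{{{ns}}}listOfSpecies/{{{ns}}}species')
    reactions = model.findall(f'{{{ns}}}listOfReactions/{{{ns}}}reaction')
    return {
        'model_id': model.get('id'),
        'n_metabolites': len(species),
        'n_reactions': len(reactions),
    }

if __name__ == '__main__':
    print(summarize_sbml('e_coli_core.xml'))
```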

    Grey Box Data Refinement

    We introduce the concepts of grey box and display box data types. These make explicit the idea that state variables in abstract data types are not always hidden. Programming languages have visibility rules which make representations observable and modifiable. Specifications in model-based notations may have implicit assumptions about visible state components, or are used in contexts where the representation does matter. Grey box data types are like the "standard" black box data types, except that they contain explicit subspaces of the state which are modifiable and observable. Display boxes indirectly observe the state by adding displays to a black box. Refinement rules for both these alternative data types are given, based on their interpretations as black boxes.
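
    As a toy illustration of the distinction drawn above (and not the paper's formal treatment), the sketch below contrasts a data type whose state is fully hidden with one that exposes a designated, modifiable state component; the class names and the choice of visible component are hypothetical.

```python
# Illustrative only: black box vs. grey box data types as plain Python classes.
class BlackBoxCounter:
    """All state is hidden; it is observed only through operations."""
    def __init__(self):
        self._count = 0        # hidden representation, free to change under refinement

    def inc(self):
        self._count += 1

    def report(self):
        return self._count


class GreyBoxCounter:
    """One state component is an explicit, visible subspace: clients may read
    and assign `count` directly, so a refinement must preserve its
    representation, while `_audit` remains hidden as in a black box."""
    def __init__(self):
        self.count = 0         # visible, modifiable state component
        self._audit = []       # hidden state, refinable freely

    def inc(self):
        self.count += 1
        self._audit.append("inc")
```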

    Applying formal methods to standard development: the open distributed processing experience

    Since their introduction, formal methods have been applied in various ways to different standards. This paper gives an account of these applications, focusing on one application in particular: the development of a framework for creating standards for Open Distributed Processing (ODP). Following an introduction to ODP, the paper gives an insight into the current work on formalising the architecture of the Reference Model of ODP (RM-ODP), highlighting the advantages to be gained. The different approaches currently being taken are shown, together with their associated advantages and disadvantages. The paper concludes that there is no single all-purpose approach which can be used in preference to all others, but that a combination of approaches is desirable to best fulfil the potential of formal methods in developing an architectural semantics for ODP.

    Using Qualitative Hypotheses to Identify Inaccurate Data

    Identifying inaccurate data has long been regarded as a significant and difficult problem in AI. In this paper, we present a new method for identifying inaccurate data on the basis of qualitative correlations among related data. First, we introduce the definitions of related data and of qualitative correlations among related data. Then we put forward a new concept called the support coefficient function (SCF). SCF can be used to extract, represent, and calculate qualitative correlations among related data within a dataset. We propose an approach to determining dynamic shift intervals of inaccurate data, and an approach to calculating the possibility of identifying inaccurate data, both based on SCF. Finally, we present an algorithm for identifying inaccurate data by using qualitative correlations among related data as confirmatory or disconfirmatory evidence. We have developed a practical system for interpreting infrared spectra by applying the method, and have fully tested the system against several hundred real spectra. The experimental results show that the method is significantly better than the conventional methods used in many similar systems.
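
    To make the general idea concrete, the sketch below flags the value in a series that is least supported by its related (neighbouring) data. The simple trend-based support score is an assumed stand-in for the paper's support coefficient function, not its actual definition.

```python
# Illustrative sketch: flag the datum least supported by its related data.
# The support measure (agreement with a linear trend fitted to the
# neighbours) is an assumption, not the paper's SCF.
import numpy as np

def support(values, i, window=2):
    """Support of values[i]: negated distance from the value predicted by a
    line fitted to its neighbours (the point itself is excluded)."""
    idx = [j for j in range(max(0, i - window), min(len(values), i + window + 1))
           if j != i]
    coeffs = np.polyfit(idx, [values[j] for j in idx], deg=1)
    return -abs(values[i] - np.polyval(coeffs, i))   # closer to 0 = better supported

series = [1.0, 1.1, 1.2, 5.0, 1.4, 1.5, 1.6]         # 5.0 breaks the local trend
scores = [support(series, i) for i in range(len(series))]
print(int(np.argmin(scores)))                        # index 3: the suspect value
```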

    Automatic Structural Scene Digitalization

    In this paper, we present an automatic system for the analysis and labeling of structural scenes, namely floor plan drawings in Computer-Aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize the various components of CAD floor plans, such as walls, doors, windows, and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing-based recovery method is employed to correct the information extracted in the first step. Our proposed method is fully automatic and real-time. The analysis system provides high accuracy and has also been evaluated on a public website that, on average, archives more than ten thousand effective uses per day and reaches a relatively high satisfaction rate.