Towards an interoperable healthcare information infrastructure - working from the bottom up
Historically, the healthcare system has not made effective use of information technology. On the face of it, healthcare would seem to provide a natural and richly varied domain in which to target benefit from IT solutions, but history shows that it is one of the most difficult domains in which to bring them to fruition. This paper provides an overview of the changing context and information requirements of healthcare that help to explain these characteristics. First and foremost, the disciplines and professions that healthcare encompasses face immense complexity and diversity in structuring knowledge about what medicine and healthcare are, how they function, and what differentiates good practice and good performance. The need to maintain macro-economic stability of the health service, in the face of this and many other uncertainties, means that management bottom lines predominate over the choices and decisions that have to be made within everyday individual patient services. Individual practice and care, the bedrock of healthcare, is for this and other reasons increasingly subject to professional and managerial control and regulation. One characteristic of organisations shown to be good at making effective use of IT is their capacity to devolve decisions to where they can best be made, for the purpose of meeting their customers' needs. IT should, in this context, act as an enabler, not an enforcer, of good information services. The information infrastructure must work effectively, both top down and bottom up, to accommodate these countervailing pressures.
This issue is explored in the context of infrastructure to support electronic health records. Because of the diverse and changing requirements of the huge healthcare sector, and the need to sustain health records over many decades, standardised systems must concentrate on doing the easier things well and as simply as possible, while accommodating immense diversity of requirements and practice. The manner in which the healthcare information infrastructure can be formulated and implemented to meet useful practical goals is explored through two case studies of research in CHIME at UCL and their user communities. Healthcare has severe problems both as a provider of information and as a purchaser of information systems, and this affects both its customer and its supplier relationships. Healthcare needs to become a better purchaser: more aware and realistic about what technology can and cannot do, and about where research is needed. Industry needs a greater awareness of the complexity of the healthcare domain, of the subtle ways in which information is part of the basic contract between healthcare professionals and patients, and of the trust and understanding that must exist between them. It is an ideal domain for deeper collaboration between academic institutions and industry.
Computerization of workflows, guidelines and care pathways: a review of implementation challenges for process-oriented health information systems
There is a need to integrate the various theoretical frameworks and formalisms for modeling clinical guidelines, workflows, and pathways, in order to move beyond support for individual clinical decisions and toward process-oriented, patient-centered health information systems (HIS). In this review, we analyze the challenges in developing process-oriented HIS that formally model guidelines, workflows, and care pathways. A qualitative meta-synthesis was performed on studies published in English between 1995 and 2010 that addressed the modeling process and presented a new methodology, model, system implementation, or system architecture. Thematic analysis, principal component analysis (PCA), and data visualisation techniques were used to identify and cluster the underlying implementation ‘challenge’ themes. One hundred and eight relevant studies were selected for review, and twenty-five underlying ‘challenge’ themes were identified. These were clustered into 10 distinct groups, from which a conceptual model of the implementation process was developed. We found that the development of systems supporting individual clinical decisions is evolving toward the implementation of adaptable care pathways on the semantic web, incorporating formal clinical and organizational ontologies and the use of workflow management systems. These architectures now need to be implemented and evaluated on a wider scale within clinical settings.
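The clustering step described in this abstract can be sketched in a few lines. The sketch below uses a randomly generated, hypothetical study-by-theme coding matrix (the paper's actual coded data are not reproduced here), reduces each theme's profile across studies with PCA, and groups the 25 themes into 10 clusters with k-means:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical coding matrix: 108 studies x 25 themes (1 = theme reported in study)
coding = rng.integers(0, 2, size=(108, 25)).astype(float)

# Describe each theme by the pattern of studies it appears in (25 x 108)
theme_vectors = coding.T

# Project the theme vectors onto their first principal components
reduced = PCA(n_components=5, random_state=0).fit_transform(theme_vectors)

# Group the 25 themes into 10 clusters
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(reduced)

print(labels.shape)  # one cluster label per theme
```

This is only a methodological illustration; the review combined this kind of quantitative clustering with qualitative thematic analysis to arrive at its 10 groups.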
The Neuroscience Information Framework: A Data and Knowledge Environment for Neuroscience
With support from the Institutes and Centers forming the NIH Blueprint for Neuroscience Research, we have designed and implemented a new initiative for integrating access to and use of Web-based neuroscience resources: the Neuroscience Information Framework. The Framework arises from the expressed need of the neuroscience community for neuroinformatic tools and resources to aid scientific inquiry, builds upon prior development of neuroinformatics by the Human Brain Project and others, and derives directly from the Society for Neuroscience’s Neuroscience Database Gateway. Partnered with the Society, its Neuroinformatics Committee, and volunteer consultant-collaborators, our multi-site consortium has developed: (1) a comprehensive, dynamic inventory of Web-accessible neuroscience resources, (2) an extended and integrated terminology describing resources and contents, and (3) a framework accepting and aiding concept-based queries. Evolving instantiations of the Framework may be viewed at http://nif.nih.gov, http://neurogateway.org, and other sites as they come online.
From Offshore Operation to Onshore Simulator: Using Visualized Ethnographic Outcomes to Work with Systems Developers
This paper focuses on the process of translating insights from a Computer Supported Cooperative Work (CSCW) study, conducted on a vessel at sea, into a model that can assist systems developers working with simulators, which vessel operators use for training purposes on land. The empirical study at sea yielded rich insights into cooperation that systems developers need to know about and consider in their designs. In the paper, we establish a model that primarily consists of a ‘computational artifact’. The model is designed to support researchers working with systems developers. Drawing on marine examples, we focus on the translation process and investigate how the model serves to visualize work activities, and how it addresses relations between technical and computational artifacts, as well as between functions in technical systems and functionalities in cooperative systems. In turn, we link the design back to the fieldwork studies.
Educating the educators: Incorporating bioinformatics into biological science education in Malaysia
Bioinformatics can be defined as a fusion of the computational and biological sciences. The urgency of processing and analysing the deluge of data created by proteomics and genomics studies has given bioinformatics prominence and importance. However, its multidisciplinary nature has created a unique demand for specialists trained in both biology and computing. In this review, we describe the components that constitute the bioinformatics field and the distinctive education criteria required to produce individuals with bioinformatics training. This paper also provides an introduction to and overview of bioinformatics in Malaysia. The existing bioinformatics scenario in Malaysia was surveyed to gauge its advancement and to plan future bioinformatics education strategies. For comparison, we surveyed the methods and strategies used in education by other countries, so that lessons can be learnt to further improve the implementation of bioinformatics education in Malaysia. It is believed that accurate and sufficient guidance from academia and industry will enable Malaysia to produce quality bioinformaticians in the future.
Validating archetypes for the Multiple Sclerosis Functional Composite
Background: Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects have not yet been sufficiently addressed. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions.

Methods: A standard archetype development approach was applied to a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process.

Results: Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. These four archetypes were then validated by domain experts in a team review, a formalised process organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming, mostly owing to difficult choices between alternative modelling approaches. The archetype review itself was a straightforward team process with the goal of validating the archetypes pragmatically.

Conclusions: The quality of medical information models is crucial to guarantee standardised semantic representation and thereby improve interoperability. The validation process is a practical way to harmonise models that diverge because of the flexibility necessarily left open by the underlying formal reference model definitions. This case study provides evidence that community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic and feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.
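To give a flavour of what such a model looks like, the fragment below is a hypothetical, heavily simplified archetype sketch in openEHR ADL 1.4 for one component of the Multiple Sclerosis Functional Composite, the Timed 25-Foot Walk. The archetype identifier, node ids, and structure are illustrative assumptions for this sketch, not the published archetype from the Clinical Knowledge Manager:

```adl
archetype (adl_version=1.4)
    openEHR-EHR-OBSERVATION.timed_walk_sketch.v1

concept
    [at0000]  -- Timed 25-Foot Walk (illustrative)

language
    original_language = <[ISO_639-1::en]>

definition
    OBSERVATION[at0000] matches {  -- Timed 25-Foot Walk
        data matches {
            HISTORY[at0001] matches {
                events cardinality matches {1..*} matches {
                    EVENT[at0002] matches {  -- Any event
                        data matches {
                            ITEM_TREE[at0003] matches {
                                items cardinality matches {0..*} matches {
                                    ELEMENT[at0004] matches {  -- Time taken
                                        value matches {
                                            DV_DURATION matches {*}
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }

ontology
    term_definitions = <
        ["en"] = <
            items = <
                ["at0000"] = <
                    text = <"Timed 25-Foot Walk (illustrative)">
                    description = <"Hypothetical sketch of a timed walking test record">
                >
                ["at0004"] = <
                    text = <"Time taken">
                    description = <"Duration of the walk">
                >
            >
        >
    >
```

Even at this toy scale, the sketch shows where the modelling choices discussed in the abstract arise: which archetype class to use (here OBSERVATION), how deeply to nest the data structure, and which data types to constrain.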