
    Enforcement of Quality Attributes for Net-Centric Systems through Modeling and Validation with Architecture Description Languages

    In this paper we discuss and demonstrate how data quality attributes, e.g., security, data accuracy, data confidence, and temporal correctness, can be modeled and validated using an architecture description language such as AADL. We focus on security, specifically confidentiality.
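    A minimal, hypothetical sketch of the kind of confidentiality rule such an architectural model could enforce, mirrored in Python rather than AADL; the component names, the security_level property, and the rule that data may only flow to components of equal or higher level are illustrative assumptions, not the paper's model.

```python
# Hypothetical illustration only: a toy stand-in for the kind of confidentiality
# rule an AADL model checker could enforce (component names and levels are invented).
from dataclasses import dataclass

LEVELS = {"public": 0, "confidential": 1, "secret": 2}

@dataclass(frozen=True)
class Component:
    name: str
    security_level: str  # analogous to a security property attached to a component

@dataclass(frozen=True)
class Connection:
    source: Component
    target: Component

def violations(connections):
    """Flag data flows from a higher to a lower security level."""
    return [
        c for c in connections
        if LEVELS[c.source.security_level] > LEVELS[c.target.security_level]
    ]

if __name__ == "__main__":
    sensor = Component("patient_sensor", "confidential")
    logger = Component("public_logger", "public")
    archive = Component("secure_archive", "secret")
    flows = [Connection(sensor, logger), Connection(sensor, archive)]
    for v in violations(flows):
        print(f"confidentiality violation: {v.source.name} -> {v.target.name}")
```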

    Trust and Risk Relationship Analysis on a Workflow Basis: A Use Case

    Trust and risk are often seen as proportional to each other: high trust may imply low risk and vice versa. Recent research, however, argues that the relationship between trust and risk is implicit rather than proportional. Building on that view, this paper proposes a novel approach to analysing trust and risk on the basis of the W3C PROV provenance data model, applied in a healthcare domain. We argue that high trust can be placed in healthcare data despite its high risk, and that low-trust data can carry low risk, depending on data quality attributes and the data's provenance. This is demonstrated by our trust and risk models applied to the BII case study data. The proposed approach first calculates risk values at each workflow step using PROV concepts and then aggregates a final risk score for the whole provenance chain. Unlike the risk model, the trust of a workflow is derived by applying the DS/AHP method. The results support our assumption that the trust and risk relationship is implicit.
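    The paper's own models are not reproduced here, but a toy sketch of the first part of the approach (per-step risk scores aggregated over a provenance chain) might look as follows; the step names, weights, and the simple weighted-average aggregation are assumptions made for illustration, and the DS/AHP trust derivation is omitted.

```python
# Illustrative toy only: per-step risk scores over a provenance chain,
# aggregated into a single score (weights and aggregation rule are invented).
workflow = [
    # (step, risk in [0, 1], weight reflecting how much the step matters)
    ("collect_sample",    0.20, 1.0),
    ("lab_analysis",      0.10, 2.0),
    ("transcribe_result", 0.40, 1.5),
    ("publish_to_ehr",    0.15, 1.0),
]

def chain_risk(steps):
    """Weighted average of per-step risk across the whole provenance chain."""
    total_weight = sum(w for _, _, w in steps)
    return sum(r * w for _, r, w in steps) / total_weight

if __name__ == "__main__":
    for name, risk, _ in workflow:
        print(f"{name}: step risk = {risk:.2f}")
    print(f"aggregated chain risk = {chain_risk(workflow):.3f}")
```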

    Toward a framework for data quality in cloud-based health information system

    Cloud computing is a promising platform for health information systems because it can reduce costs and improve accessibility. It represents a shift away from computing purchased as a product towards a service delivered over the Internet to customers, and the cloud computing paradigm is becoming a popular IT infrastructure for facilitating Electronic Health Record (EHR) integration and sharing. An EHR is a repository of patient data in digital form; the record is stored and exchanged securely and is accessible to different levels of authorized users. Its key purpose is to support continuity of care and to allow the exchange and integration of medical information about a patient. However, this cannot be achieved without ensuring the quality of the data populated in healthcare clouds, since data quality can have a great impact on the overall effectiveness of any system. Assuring the quality of data used in healthcare systems is therefore a pressing need for supporting the continuity and quality of care. Identifying data quality dimensions in healthcare clouds is challenging because cloud-based health information systems raise issues such as appropriateness of use and provenance. Some research has proposed frameworks of data quality dimensions without taking the nature of cloud-based healthcare systems into consideration. In this paper, we propose an initial framework of data quality attributes that reflects the main elements of cloud-based healthcare systems and the functionality of the EHR.
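    As a purely illustrative companion to this kind of framework, a record-level check against a couple of generic data quality dimensions might be sketched as below; the field names, dimensions, and thresholds are invented examples, not the authors' framework.

```python
# Hypothetical sketch: scoring an EHR record against generic data quality
# dimensions (fields, dimensions, and thresholds are invented examples).
from datetime import datetime, timezone

def assess_record(record, required_fields=("patient_id", "diagnosis", "updated_at")):
    """Return simple completeness and timeliness indicators for one record."""
    present = [f for f in required_fields if record.get(f) not in (None, "")]
    completeness = len(present) / len(required_fields)

    updated = record.get("updated_at")
    age_days = (datetime.now(timezone.utc) - updated).days if updated else None
    timely = age_days is not None and age_days <= 365  # arbitrary threshold

    return {"completeness": completeness, "timely": timely}

if __name__ == "__main__":
    record = {
        "patient_id": "P-001",
        "diagnosis": "hypertension",
        "updated_at": datetime(2024, 5, 1, tzinfo=timezone.utc),
    }
    print(assess_record(record))
```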

    Is There an App for That? Electronic Health Records (EHRs) and a New Environment of Conflict Prevention and Resolution

    Katsh discusses the new problems that are a consequence of a new technological environment in healthcare, one that has an array of elements that makes the emergence of disputes likely. Novel uses of technology have already addressed both the problem and its source in other contexts, such as e-commerce, where large numbers of transactions have generated large numbers of disputes. If technology-supported healthcare is to improve the field of medicine, a similar effort at dispute prevention and resolution will be necessary

    MSUO Information Technology and Geographical Information Systems: Common Protocols & Procedures. Report to the Marine Safety Umbrella Operation

    The Marine Safety Umbrella Operation (MSUO) facilitates cooperation between Interreg-funded marine safety projects and maritime stakeholders. The main aim of MSUO is to permit efficient operation of new projects through Project Cooperation Initiatives; these include the review of common protocols and procedures for Information Technology (IT) and Geographical Information Systems (GIS). This study, carried out by CSA Group and the National Centre for Geocomputation (NCG), reviews current spatial information standards in Europe and the data management methodologies associated with different marine safety projects. International best practice was reviewed based on the combined experience of spatial data research at NCG and initiatives in the US, Canada and the UK relating to marine security service information and the acquisition and integration of large marine datasets for ocean management purposes. This report identifies the most appropriate international data management practices that could be adopted for future MSUO projects.

    Competences of IT Architects

    The field of architecture in the digital world uses a plethora of terms to refer to different kinds of architects, and recognises a confusing variety of competences that these architects are required to have. Different service providers use different terms for similar architects, and even if they use the same term, they may mean something different. This makes it hard for customers to know what competences an architect can be expected to have.

    This book combines competence profiles of the NGI Platform for IT Professionals, The Open Group Architecture Framework (TOGAF), as well as a number of Dutch IT service providers in a comprehensive framework. Using this framework, the book shows that notwithstanding a large variety in terminology, there is convergence towards a common set of competence profiles. In other words, when looking beyond terminological differences by using the framework, one sees that organizations recognize similar types of architects, and that similar architects in different organisations have similar competence profiles. The framework presented in this book thus provides an instrument to position architecture services as offered by IT service providers and as used by their customers.

    The framework and the competence profiles presented in this book are the main results of the special interest group "Professionalisation" of the Netherlands Architecture Forum for the Digital World (NAF). Members of this group, as well as students of the universities of Twente and Nijmegen, have contributed to the research on which this book is based.

    RISK MANAGEMENT AND DATA QUALITY SELECTION: AN INFORMATION ECONOMICS APPROACH

    Data quality has been shown to be a major determinant of the value of systems that take input data feeds and transform them into valuable information under a variety of business contexts. For this study, we chose a financial risk management context to investigate the relationship between data quality and the value of risk management forecasting systems. Three attributes of data quality (frequency, response time, and accuracy), along with the cost of data, are considered, as are the joint impacts of these attributes. It is shown that an increase in report frequency results in an increase in the utility of a risk management forecasting system, but this increase is limited by the responsiveness of the hedging scheme. Frequency is shown to improve the utility of forecasting systems in two ways: first, an increase in frequency pushes the predicted states closer to the actual states, and second, an increase in frequency increases the reliability of the forecasting model. A delay in the response time of reports is predicted to have a greater impact on utility for high-frequency reports than for low-frequency reports. Finally, data inaccuracies are recommended to be the first concern of a portfolio manager before an attempt is made to increase the reporting frequency.
    Information Systems Working Papers Series
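    The abstract does not give the underlying formal model; the toy function below only illustrates the qualitative claim that utility rises with reporting frequency but is capped by how fast the hedging scheme can respond. The functional form, parameter names, and numbers are invented for illustration.

```python
# Toy illustration only: utility grows with report frequency but saturates
# once frequency exceeds what the hedging scheme can act on.
def forecast_utility(reports_per_day, hedge_responsiveness_per_day, accuracy=0.9):
    """Diminishing-returns utility, capped by the hedging scheme's responsiveness."""
    effective = min(reports_per_day, hedge_responsiveness_per_day)
    return accuracy * effective / (1.0 + effective)

if __name__ == "__main__":
    for freq in (1, 2, 4, 8, 16):
        u = forecast_utility(freq, hedge_responsiveness_per_day=4)
        print(f"{freq:2d} reports/day -> utility {u:.3f}")
```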

    Oikean datan löytämisen tärkeys: Case terveydenhuollon operaatioiden kehitysprojektit (The importance of finding the right data: the case of healthcare operations improvement projects)

    The utilization of data in healthcare improvement projects is currently a very topical subject. Several public and private companies have shown the value of utilizing data to improve operational efficiency. Not all datasets are, however, equally useful; an understanding of data quality is therefore required to ensure correct decision-making. Currently, two streams of literature exist to guide improvement teams: the literature on operational improvement, e.g. through methods such as Total Quality Management, Lean, and Six Sigma, and the literature on data quality. From the point of view of an improvement project team, a linkage between these two streams of literature is missing. This paper aims to bridge that gap by helping healthcare improvement teams assess whether data quality is sufficient to support decision-making. The academic framework illustrates how the view of data quality has shifted from an intrinsic focus in the 1970s, to fitness for use in the 1990s, and finally to the specifics of new trends, such as big data and unstructured data, from 2010 onwards. Using the case study method, the findings were expanded by observing an improvement project in a private Finnish healthcare company. Together with the project team, I went through an iterative process of five steps, each of which was guided by a distinct new set of data. Ultimately, the actual improvement was gained by gathering data manually: a dataset that was highly relevant for the end users, but likely intrinsically less robust than the previous datasets. In conclusion, the current data quality literature offers only modest guidance to improvement teams in choosing the right dataset. Consequently, a new model for data quality in healthcare operational improvement was created. The model suggests that teams should first consider whether a dataset is relevant to the goal of the improvement project, and then whether it can add value toward reaching that goal. After these two steps, the other key data quality attributes, grouped into four dimensions, come into play: accessibility, intrinsic, representational, and contextual quality.
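    A minimal sketch of the screening sequence the proposed model implies (relevance first, then added value, then the four generic quality dimensions); the question wording, thresholds, and pass/fail structure are illustrative assumptions, not the thesis's actual instrument.

```python
# Illustrative sketch of the suggested screening order for a candidate dataset:
# 1) relevance to the project goal, 2) added value, 3) generic quality dimensions.
QUALITY_DIMENSIONS = ("accessibility", "intrinsic", "representational", "contextual")

def screen_dataset(is_relevant, adds_value, dimension_scores):
    """Return a decision and a reason; dimension_scores maps dimension -> 0..1."""
    if not is_relevant:
        return "reject", "dataset is not relevant to the improvement goal"
    if not adds_value:
        return "reject", "dataset does not add value toward the project goal"
    weak = [d for d in QUALITY_DIMENSIONS if dimension_scores.get(d, 0.0) < 0.5]
    if weak:
        return "review", f"weak dimensions: {', '.join(weak)}"
    return "accept", "dataset passes all checks"

if __name__ == "__main__":
    decision, reason = screen_dataset(
        is_relevant=True,
        adds_value=True,
        dimension_scores={"accessibility": 0.9, "intrinsic": 0.4,
                          "representational": 0.8, "contextual": 0.7},
    )
    print(decision, "-", reason)
```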

    Adding dimensions to the analysis of the quality of health information of websites returned by Google. Cluster analysis identifies patterns of websites according to their classification and the type of intervention described.

    Background and aims: Most of the instruments used to assess the quality of health information on the Web (e.g. the JAMA criteria) analyze only one dimension of information quality, trustworthiness. We try to compare these characteristics with the types of treatment the websites describe, whether evidence-based medicine or not, and to correlate this with the established criteria. Methods: We searched Google for "migraine cure" and analyzed the first 200 websites for: 1) JAMA criteria (authorship, attribution, disclosure, currency); 2) class of website (commercial, health portal, professional, patient group, non-profit); and 3) type of intervention described (approved drugs, alternative medicine, food, procedures, lifestyle, drugs still at the research stage). We used hierarchical cluster analysis to assess associations between classes of websites and types of intervention described. A subgroup analysis of the first 10 websites returned was performed. Results: Google returned health portals (44%), followed by commercial websites (31%) and journalism websites (11%). The type of intervention mentioned most often was alternative medicine (55%), followed by procedures (49%), lifestyle (42%), food (41%) and approved drugs (35%). Cluster analysis indicated that health portals are more likely to describe more than one type of treatment, while commercial websites most often describe only one. The average JAMA score of commercial websites was significantly lower than that of health portals or journalism websites, mainly due to a lack of information on the authors of the text and on the date the information was written. Looking at the first 10 websites from Google, commercial websites are under-represented and approved drugs over-represented. Conclusions: This approach allows the appraisal of the quality of health-related information on the Internet, focusing on the types of therapy/prevention methods that are shown to the patient.
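    The analysis pipeline could be approximated roughly as below; the sample data, the binary website-by-intervention matrix, and the use of SciPy's hierarchical clustering are assumptions about how such an analysis might be reproduced, not the authors' actual code or data.

```python
# Rough, hypothetical reproduction of the analysis idea: cluster websites by which
# intervention types they mention (data and method details are invented examples).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

INTERVENTIONS = ["approved_drugs", "alt_medicine", "food", "procedures", "lifestyle"]
sites = ["portal_a", "portal_b", "commercial_a", "commercial_b", "journalism_a"]

# 1 = the website mentions that intervention type, 0 = it does not.
mentions = np.array([
    [1, 1, 1, 1, 1],   # portals tend to cover several intervention types
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 0],   # commercial sites often focus on a single one
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
])

# Hierarchical clustering on the binary profiles, then cut into two clusters.
Z = linkage(mentions, method="average", metric="hamming")
labels = fcluster(Z, t=2, criterion="maxclust")

for site, label in zip(sites, labels):
    print(f"{site}: cluster {label}")
```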