
    Assessing and Improving the Usability of the Medical Data Models Portal

    Case report forms (CRFs) specify data definitions and encodings for the data to be collected in clinical trials. To enable the exchange of data definitions and thus avoid the creation of CRF variants for similar study designs, the Medical Data Models portal (MDM) has been developed since 2011. This work studies the usability of the MDM portal. We identify issues that hamper its adoption by researchers in order to derive measures for improving it. We selected relevant tools (e.g. Nibbler, Hotjar, SUPR-Q) for usability testing and generated a structured test protocol. More specifically, the portal was assessed by means of a static analysis, a user analysis (n=10), a usability test (n=10) and statistical evaluations. Regarding accessibility and technology, the static code analysis resulted in high scores. The presentation of information and functions, as well as interaction with the portal, still has to be improved: the results show that only a limited set of the webpage's functions is used regularly and that some user navigation errors occur due to the portal's design. In total, six major problems were identified, which will be addressed in the future. A continuous evaluation using the same structured test protocol makes it possible to measure website quality continuously, to compare it after changes have been implemented, and in this way to realise continuous improvement. The effort for repeating the same evaluation with 10 persons is estimated at 10 hours.
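
    The abstract does not spell out the scoring protocol; as a loose illustration of how questionnaire scores from repeated evaluation rounds can be compared, consider the sketch below. The 1-5 scale, dimension names, and all ratings are invented, not the study's actual data or method.

```python
# Loose illustration: aggregate hypothetical 1-5 Likert ratings per
# quality dimension so that repeated evaluation rounds of the portal
# can be compared like-for-like. Dimension names and all ratings are
# invented; this is not the study's actual data or scoring protocol.
from statistics import mean

def dimension_scores(responses):
    """responses: one dict per participant, mapping dimension -> rating."""
    dimensions = responses[0].keys()
    return {d: round(mean(r[d] for r in responses), 2) for d in dimensions}

# Hypothetical ratings from one 10-person evaluation round.
round_1 = [
    {"usability": 4, "trust": 3, "appearance": 4, "loyalty": 3},
    {"usability": 5, "trust": 4, "appearance": 3, "loyalty": 4},
] + [{"usability": 4, "trust": 4, "appearance": 4, "loyalty": 3}] * 8

print(dimension_scores(round_1))
# Re-running the same aggregation after each portal change yields a
# comparable score series, supporting continuous improvement.
```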

    Assessing the Flexibility of a Service Oriented Architecture to that of the Classic Data Warehouse

    The flexibility of a service-oriented architecture (SOA) is compared to that of the classic data warehouse across three categories: (1) source system access, (2) integration and transformation, and (3) end-user access. The findings suggest that an SOA allows better upgrade and migration flexibility if back-end systems expose their source data via adapters. However, the providers of such adapters must deal with the complexity of maintaining consistent interfaces. An SOA also appears to provide more flexibility at the integration tier due to its ability to merge batch with real-time source system data. This has the potential to retain source system data semantics (e.g., code translations and business rules) without having to reproduce such logic in a transformation tier. Additionally, the tight coupling of operational metadata and source system data within XML in an SOA allows more flexibility in downstream analysis and auditing of output. The SOA does lag behind the classic data warehouse at the end-user level, mainly due to the latter's use of mature SQL and relational database technology. Users of all technical levels can easily work with these technologies in the classic data warehouse environment to query data in a number of ways. The SOA end user likely requires developer support for such activities.
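
    To make the metadata-coupling argument concrete, here is a minimal sketch: operational metadata and business data travel together in one XML payload, so a downstream consumer can audit provenance directly. All element names and values are invented for illustration; this is not the message format studied in the paper.

```python
# Minimal sketch: operational metadata (source system, extraction time,
# delivery mode) travels together with the business data in one XML
# payload, so a consumer can audit provenance without a separate
# transformation tier. All element names and values are invented.
import xml.etree.ElementTree as ET

payload = """
<orderMessage>
  <metadata>
    <sourceSystem>ERP-EU</sourceSystem>
    <extractedAt>2024-01-15T10:32:00Z</extractedAt>
    <mode>real-time</mode>
  </metadata>
  <order id="42">
    <status>SHIPPED</status>
    <amount currency="EUR">199.90</amount>
  </order>
</orderMessage>
"""

root = ET.fromstring(payload)
meta, order = root.find("metadata"), root.find("order")
print(f"order {order.get('id')}: {order.findtext('status')}, "
      f"reported by {meta.findtext('sourceSystem')} "
      f"at {meta.findtext('extractedAt')} ({meta.findtext('mode')})")
```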

    Semantic data integration for supply chain management: with a specific focus on applications in the semiconductor industry

    Supply Chain Management (SCM) is essential to monitor, control, and enhance the performance of SCs. Increasing globalization and diversity of Supply Chains (SCs) lead to complex SC structures, limited visibility among SC partners, and challenging collaboration caused by dispersed data silos. Digitalization is driving and transforming the SCs of fundamental sectors such as the semiconductor industry. This is further accelerated by the inevitable role that semiconductor products play in electronics, IoT, and security systems. Semiconductor SCM is unique in that its SC operations exhibit special features, e.g., long production lead times and short product life. Hence, systematic SCM is required to establish information exchange, overcome inefficiency resulting from incompatibility, and adapt to industry-specific challenges. The Semantic Web is designed for linking data and establishing information exchange. Semantic models provide high-level descriptions of the domain that enable interoperability. Semantic data integration consolidates heterogeneous data into meaningful and valuable information. The main goal of this thesis is to investigate Semantic Web Technologies (SWT) for SCM with a specific focus on applications in the semiconductor industry. As part of SCM, end-to-end SC modeling ensures visibility of SC partners and flows. Existing models are limited in the way they represent operational SC relationships beyond one-to-one structures. The scarcity of empirical data from multiple SC partners hinders the analysis of the impact of supply network partners on each other and the benchmarking of overall SC performance. In our work, we investigate (i) how semantic models can be used to standardize and benchmark SCs. Moreover, in a volatile and unpredictable environment, SC experts require methodical and efficient approaches to integrate various data sources for informed decision-making about SC behavior. Thus, this work addresses (ii) how semantic data integration can help make SCs more efficient and resilient. Furthermore, to secure a good position in a competitive market, semiconductor SCs strive to implement operational strategies that control demand variation, i.e., the bullwhip effect, while maintaining sustainable relationships with customers. We examine (iii) how we can apply semantic technologies to specifically support semiconductor SCs. In this thesis, we provide semantic models that integrate, in a standardized way, SC processes, structure, and flows, ensuring an elaborate understanding of holistic SCs while including granular operational details. We demonstrate that these models enable the instantiation of a synthetic SC for benchmarking. We contribute semantic data integration applications that enable interoperability and make SCs more efficient and resilient. Moreover, we leverage ontologies and Knowledge Graphs (KGs) to implement customer-oriented bullwhip-taming strategies. We create semantic-based approaches intertwined with Artificial Intelligence (AI) algorithms to address semiconductor industry specifics and ensure operational excellence. The results prove that relying on semantic technologies contributes to achieving rigorous and systematic SCM. We deem that better standardization, simulation, benchmarking, and analysis, as elaborated in the contributions, will help master more complex SC scenarios. SC stakeholders can increasingly understand the domain and are thus better equipped with effective control strategies to restrain disruption accelerators such as the bullwhip effect. In essence, the proposed Semantic Web Technology-based strategies unlock the potential to increase the efficiency, resilience, and operational excellence of supply networks in general and the semiconductor SC in particular.
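
    To give a flavour of what such semantic SC models look like in code, the following sketch builds a toy supply-chain knowledge graph and queries it with SPARQL. The sc: vocabulary and all entities are invented for illustration; they are not the ontology developed in the thesis.

```python
# Toy supply-chain knowledge graph built with rdflib. The sc: vocabulary
# and all entities are invented for illustration; they are not the
# ontology developed in the thesis.
from rdflib import Graph, Namespace, RDF, Literal

SC = Namespace("http://example.org/sc#")
g = Graph()
g.bind("sc", SC)

# Partners along the chain and who supplies whom (the property allows
# one-to-many relationships, not just linear chains).
g.add((SC.WaferFab, RDF.type, SC.Supplier))
g.add((SC.Assembly, RDF.type, SC.Manufacturer))
g.add((SC.Distributor, RDF.type, SC.Customer))
g.add((SC.WaferFab, SC.supplies, SC.Assembly))
g.add((SC.Assembly, SC.supplies, SC.Distributor))
g.add((SC.WaferFab, SC.leadTimeWeeks, Literal(14)))

# End-to-end visibility query: list all supply relationships.
q = """
PREFIX sc: <http://example.org/sc#>
SELECT ?src ?dst WHERE { ?src sc:supplies ?dst . }
"""
for row in g.query(q):
    print(row.src, "->", row.dst)
```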

    Front-Line Physicians' Satisfaction with Information Systems in Hospitals

    Day-to-day operations management in hospital units is difficult due to continuously varying situations, the several actors involved, and the vast number of information systems in use. The aim of this study was to describe front-line physicians' satisfaction with the existing information systems needed to support day-to-day operations management in hospitals. A cross-sectional survey was used, and data selected by stratified random sampling were collected in nine hospitals. Data were analyzed with descriptive and inferential statistical methods. The response rate was 65% (n = 111). The physicians reported that information systems support their decision making to some extent, but that the systems neither improve access to information nor are tailored for physicians. The respondents also reported that they need to use several information systems to support decision making and that they would prefer a single information system for accessing important information. Improved information access would better support physicians' decision making and has the potential to improve the quality of decisions and speed up the decision-making process.

    Designing Data Spaces

    This open access book provides a comprehensive view of data ecosystems and platform economics, from methodical and technological foundations up to reports on practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I, “Foundations and Contexts”, provides a general overview of building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II, “Data Space Technologies”, subsequently details various implementation aspects of IDS and GAIA-X, including, e.g., data usage control, the usage of blockchain technologies, and semantic data integration and interoperability. Next, Part III describes various “Use Cases and Data Ecosystems” from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV finally offers an overview of several “Solutions and Applications”, e.g., products and experiences from companies like Google, SAP, Huawei, T-Systems, Innopay and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook on future developments. In doing so, it aims at proliferating the vision of a social data market economy based on data spaces that embrace trust and data sovereignty.

    Toward the Inter-organizational Product Information Supply Chain – Evidence from the Retail and Consumer Goods Industries

    Since the 1980s, the retail and consumer goods industries have been making very extensive use of EDI-based data exchange and subsequently developed the vision of Efficient Consumer Response (ECR). In the meantime, a growing number of studies report that poor data quality, in particular outdated or wrong product information, negatively impacts demand and supply chain performance. Whereas prior literature has intensively studied the positive effects of information sharing on the coordination of supply and demand, this research aims at establishing a basis for understanding the phenomena of the underlying inter-organizational product information supply chain. Using coordination theory as an overarching framework, the main research contribution is a set of dependencies, coordination problems, and coordination mechanisms that characterize the product information supply chain. From an analysis of two retailer-manufacturer relationships, we conclude that flow and sharing dependencies evolve into reciprocal dependencies as the intensity of demand and supply collaboration increases. We also find that industry standards, notably Global Data Synchronization (GDS), do not yet fully cover the inter-organizational coordination requirements that result from the identified set of sharing and flow dependencies.

    Advanced Methods for Entity Linking in the Life Sciences

    The amount of knowledge increases rapidly due to the growing number of available data sources. However, the autonomy of data sources and the resulting heterogeneity prevent comprehensive data analysis and applications. Data integration aims to overcome heterogeneity by unifying different data sources and enriching unstructured data. The enrichment of data consists of different subtasks, among them the annotation process. The annotation process links document phrases to terms of a standardized vocabulary. Annotated documents enable effective retrieval methods, comparability of different documents, and comprehensive data analysis, such as finding adverse drug effects based on patient data. A vocabulary enables comparability through standardized terms. An ontology can also serve as a vocabulary, with concepts, relationships, and logical constraints additionally defining it. The annotation process is applicable in different domains. Nevertheless, generic and specialized domains differ with respect to the annotation process. This thesis emphasizes the differences between the domains and addresses the identified challenges. The majority of annotation approaches focus on the evaluation of generic domains, such as Wikipedia. This thesis evaluates the developed annotation approaches with case report forms, which are medical documents for examining clinical trials. Natural language poses various challenges, such as expressing similar meanings with different phrases. The proposed annotation method, AnnoMap, considers the fuzziness of natural language. A further challenge is the reuse of verified annotations. Existing annotations represent knowledge that can be reused in further annotation processes. AnnoMap includes a reuse strategy that utilizes verified annotations to link new documents to appropriate concepts. Due to the broad spectrum of areas in the biomedical domain, different tools exist, and they perform differently depending on the particular domain. This thesis proposes a combination approach to unify the results of different tools. The method utilizes existing tool results to build a classification model that can classify new annotations as correct or incorrect. The results show that the reuse strategy and the machine learning-based combination improve the annotation quality compared to existing approaches focusing on the biomedical domain. A further part of data integration is entity resolution, which builds unified knowledge bases from different data sources. A data source consists of a set of records characterized by attributes. The goal of entity resolution is to identify records that represent the same real-world entity. Many methods focus on linking data sources whose records are characterized by attributes; only a few can handle graph-structured knowledge bases or consider temporal aspects. Temporal aspects are essential to identify the same entities across different time intervals, since these aspects underlie certain conditions. Moreover, records can be related to other records, so that a small graph structure exists for each record. These small graphs can be linked to each other if they represent the same entity. This thesis proposes an entity resolution approach for census data consisting of person records for different time intervals. The approach also considers the graph structure of persons given by family relationships. To achieve high-quality results, current methods apply machine learning techniques to classify record pairs as representing the same entity or not. The classification task uses a model generated from training data, in this case a set of record pairs labeled as duplicates or non-duplicates. Nevertheless, generating training data is time-consuming, so active learning techniques are relevant for reducing the number of training examples. The entity resolution method for temporal graph-structured data shows an improvement over previous collective entity resolution approaches. The developed active learning approach achieves results comparable to supervised learning methods and outperforms other limited-budget active learning methods. Besides the entity resolution approach, the thesis introduces the concept of evolution operators for communities. These operators can express the dynamics of communities and individuals; for instance, we can express that two communities merged or split over time. Moreover, the operators allow observing the history of individuals. Overall, the presented annotation approaches generate high-quality annotations for medical forms. The annotations enable comprehensive analysis across different data sources as well as accurate queries. The proposed entity resolution approaches improve on existing ones and thus contribute to the generation of high-quality knowledge graphs and to data analysis tasks.
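
    To illustrate the basic idea of fuzzy annotation described above, the sketch below links document phrases to the most similar vocabulary term by string similarity. It is a conceptual illustration only, not the AnnoMap method, and the concept identifiers are merely exemplary.

```python
# Conceptual illustration of fuzzy annotation: link a document phrase
# to the most similar term of a standardized vocabulary. This shows
# the general idea only; it is not the AnnoMap method, and the concept
# identifiers are merely exemplary.
from difflib import SequenceMatcher

vocabulary = {
    "C0020538": "hypertension",
    "C0011849": "diabetes mellitus",
    "C0004096": "asthma",
}

def annotate(phrase, vocab, threshold=0.6):
    """Return (concept_id, term, score) of the best match, or None."""
    best = max(
        ((cid, term, SequenceMatcher(None, phrase.lower(), term).ratio())
         for cid, term in vocab.items()),
        key=lambda cand: cand[2],
    )
    return best if best[2] >= threshold else None

print(annotate("diabetes melitus", vocabulary))        # tolerates the typo
print(annotate("blood pressure reading", vocabulary))  # no match -> None
```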

    GeoHealth: a location-based service for home healthcare workers

    We describe a map-based location-based service, ‘GeoHealth’, for home healthcare workers who attend to patients at home within a large geographical area. Informed by field studies of work activities and interviews with care providers, we have designed a mobile location-based service prototype supporting collaboration through information sharing and distributed electronic patient records. The GeoHealth prototype gives its users live contextual information about patients, co-workers, current and scheduled work activities, and alarms, adapted to their geographical location. The application is web-based and uses Google Maps, the Global Positioning System (GPS) and Web 2.0 technology to provide a lightweight, dynamic and interactive representation of the work domain, supporting distributed collaboration, communication and peripheral awareness among nomadic workers. In a user-based evaluation, we found that the healthcare workers were positive towards the use of location-based services in their work, and that the dynamic and interactive geospatial representation of the work domain provided by GeoHealth supported distributed collaboration, communication and peripheral awareness. We also identified areas for improvement.
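
    As a small illustration of the location logic such a service relies on (the actual GeoHealth prototype is a Google Maps web application, so this is not its code), the sketch below ranks patients by great-circle distance from a worker's GPS position; all coordinates are invented.

```python
# Illustrative location logic: rank patients by great-circle distance
# from a worker's GPS fix using the haversine formula. Not the actual
# GeoHealth implementation; all coordinates are invented.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

worker = (55.680, 12.570)          # worker's current GPS position
patients = {
    "patient A": (55.676, 12.568),
    "patient B": (55.662, 12.610),
    "patient C": (55.700, 12.520),
}

# Nearest patients first, e.g. for peripheral awareness on the map.
for name, coords in sorted(patients.items(),
                           key=lambda kv: haversine_km(*worker, *kv[1])):
    print(f"{name}: {haversine_km(*worker, *coords):.1f} km away")
```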