
    A manifesto on explainability for artificial intelligence in medicine.

    The rapid increase of interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, output to users. This concern is especially legitimate in biomedical contexts, where patient safety is of paramount importance. This position paper brings together seven researchers working in the field with different roles and perspectives, to explore in depth the concept of explainable AI, or XAI, offering a functional definition and conceptual framework or model that can be used when considering XAI. This is followed by a series of desiderata for attaining explainability in AI, each of which touches upon a key domain in biomedicine

    Process-Driven Data Quality Management Through Integration of Data Quality into Existing Process Models - Application of Complexity-Reducing Patterns and the Impact on Complexity Metrics

    The importance of high data quality and the need to consider data quality in the context of business processes are well acknowledged. Process modeling is mandatory for process-driven data quality management, which seeks to improve and sustain data quality by redesigning the processes that create or modify data. A variety of process modeling languages exist, which organizations apply heterogeneously. The purpose of this article is to present a context-independent approach to integrating data quality into the variety of existing process models. The authors aim to improve communication of data quality issues across stakeholders while considering process model complexity. They build on a keyword-based literature review of 74 IS journals and three conferences, screening 1,555 articles from 1995 onwards; 26 articles, including 46 process models, were examined in detail. The literature review reveals the need for a context-independent and visible integration of data quality into process models. First, the authors present the enhancement of existing process models with data quality characteristics. Second, they present the integration of a data-quality-centric process model with existing process models. Since process models are mainly used for communicating processes, they consider the impact of integrating data quality, and of applying patterns for complexity reduction, on the models' complexity metrics. Further research on complexity metrics is needed to improve the applicability of complexity-reduction patterns: lacking knowledge about the interdependencies between metrics, together with missing complexity metrics, impedes the assessment and prediction of process model complexity and thus understandability. Finally, the proposed context-independent approach can be used complementarily for data quality integration with specific process modeling languages.
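    The abstract does not name specific complexity metrics. As one hedged illustration of what such a metric measures, the sketch below computes Cardoso's Control-Flow Complexity (CFC), a widely cited process model metric, over an assumed toy representation of a model's split gateways; adding data quality checks to a model typically introduces new splits and therefore raises scores like this one.

    # A minimal sketch, not the authors' tooling: Cardoso's Control-Flow
    # Complexity (CFC) for a toy process model. The dictionary-based
    # representation of split gateways is an illustrative assumption.
    process_model_splits = [
        {"type": "XOR", "branches": 3},  # e.g. a data quality check with three outcomes
        {"type": "AND", "branches": 2},  # parallel validation and enrichment steps
        {"type": "OR",  "branches": 2},  # optional cleansing activities
    ]

    def control_flow_complexity(splits):
        """CFC: an XOR-split adds its fan-out, an AND-split adds 1,
        and an OR-split adds 2^n - 1 for n outgoing branches."""
        cfc = 0
        for split in splits:
            n = split["branches"]
            if split["type"] == "XOR":
                cfc += n
            elif split["type"] == "AND":
                cfc += 1
            elif split["type"] == "OR":
                cfc += 2 ** n - 1
        return cfc

    print(control_flow_complexity(process_model_splits))  # 3 + 1 + 3 = 7

    A complexity-reduction pattern that, for example, collapses the optional cleansing activities into a single subprocess would lower such a score, which is the kind of trade-off the article weighs against the visibility of data quality in the model.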

    Converging transnational financial reporting standards: Validating the joint FASB/IASB concept of information quality

    Accelerating cross-border investing activity transformed global financial markets during the latter part of the 20th century. Due to a lack of trans-cultural consistency, comparability in financial reporting was compromised, hindering multinational investment. In light of this, there is a movement afoot among international authorities to converge national financial reporting standards into a single international financial reporting system. In September 2010, the Financial Accounting Standards Board and the International Accounting Standards Board agreed on a concept of information quality to guide the formulation of internationally acceptable financial reporting standards. The Boards' goal is to sustain local relevance while achieving transnational comparability. Toward that end, instead of the trade-offs among qualities of information assumed by previous concepts, the new concept posits faithful representation working in concert with relevance in a sequential approach to information quality. The purpose of this dissertation was to determine the validity of this concept, variously referred to as Framework 2010. The concept was tested using Partial Least Squares methodology on a survey of US accountants. The fundamental qualities of relevance and faithful representation were found to be significant predictors of decision usefulness, as were the enhancing qualities of verifiability and comparability. Faithful representation was found to be a significant partial mediator of the relationship between relevance (predictor) and decision usefulness (outcome). The final model predicted 43.1% of the variance in decision usefulness.
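    The dissertation estimates its model with Partial Least Squares; purely as a hedged illustration of the mediation logic it reports (relevance -> faithful representation -> decision usefulness), the sketch below runs simple OLS regressions on simulated composite scores with statsmodels. The variable names, effect sizes, and simulated data are assumptions for illustration, not the study's instrument or estimator.

    # Illustrative sketch of partial mediation: regress the outcome on the
    # predictor alone, then with the mediator, and compare the coefficients.
    # Simulated data; the real study used PLS on a survey of US accountants.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    relevance = rng.normal(size=n)
    faithful_rep = 0.5 * relevance + rng.normal(scale=0.8, size=n)        # mediator
    usefulness = 0.4 * relevance + 0.3 * faithful_rep + rng.normal(scale=0.7, size=n)

    def fit(y, *xs):
        return sm.OLS(y, sm.add_constant(np.column_stack(xs))).fit()

    total = fit(usefulness, relevance)                  # total effect of relevance
    partial = fit(usefulness, relevance, faithful_rep)  # direct effect plus mediator path
    print("total effect:", round(total.params[1], 3))
    print("direct effect:", round(partial.params[1], 3),
          "| mediator path:", round(partial.params[2], 3))
    print("R^2 with both predictors:", round(partial.rsquared, 3))
    # Partial mediation appears as a direct effect that shrinks but remains
    # nonzero once the mediator is added, matching the pattern in the abstract.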

    Management of data quality when integrating data with known provenance

    Abstract unavailable; please refer to the PDF.

    Model-Driven Information Security Risk Assessment of Socio-Technical Systems


    ICSEA 2021: the sixteenth international conference on software engineering advances

    The Sixteenth International Conference on Software Engineering Advances (ICSEA 2021), held on October 3-7, 2021 in Barcelona, Spain, continued a series of events covering a broad spectrum of software-related topics. The conference covered fundamentals of designing, implementing, testing, validating and maintaining various kinds of software. The tracks treated the topics from theory to practice, in terms of methodologies, design, implementation, testing, use cases, tools, and lessons learnt. The conference topics covered classical and advanced methodologies, open source, agile software, as well as software deployment and software economics and education. The conference had the following tracks:
    - Advances in fundamentals for software development
    - Advanced mechanisms for software development
    - Advanced design tools for developing software
    - Software engineering for service computing (SOA and Cloud)
    - Advanced facilities for accessing software
    - Software performance
    - Software security, privacy, safeness
    - Advances in software testing
    - Specialized software advanced applications
    - Web Accessibility
    - Open source software
    - Agile and Lean approaches in software engineering
    - Software deployment and maintenance
    - Software engineering techniques, metrics, and formalisms
    - Software economics, adoption, and education
    - Business technology
    - Improving productivity in research on software engineering
    - Trends and achievements
    Similar to the previous edition, this event continued to be very competitive in its selection process and very well perceived by the international software engineering community. As such, it attracted excellent contributions and active participation from all over the world. We were very pleased to receive a large number of top-quality contributions. We take this opportunity to warmly thank all the members of the ICSEA 2021 technical program committee as well as the numerous reviewers. The creation of such a broad and high-quality conference program would not have been possible without their involvement. We also kindly thank all the authors who dedicated much of their time and effort to contributing to ICSEA 2021. We truly believe that, thanks to all these efforts, the final conference program consists of top-quality contributions. This event could also not have been a reality without the support of many individuals, organizations, and sponsors. We also gratefully thank the members of the ICSEA 2021 organizing committee for their help in handling the logistics and for their work in making this professional meeting a success. We hope ICSEA 2021 was a successful international forum for the exchange of ideas and results between academia and industry, and that it promoted further progress in software engineering research.

    Multidimensional process discovery


    Information Systems and Health Care XI: Public Health Knowledge Management Architecture Design: A Case Study

    This paper presents the results of a case study based on the creation of a knowledge management program architecture in the public health domain. Data were gathered in the study using the program logic model as a framework for conducting a series of six focus groups. Results illustrate major elements and branches of the final design with commentary on the knowledge management implications of outcomes of this design effort. The methodology used provides an artifact in the form of an information requirement process that may be suited to other contexts. Discussion of findings focuses on six themes regarding knowledge management systems, particularly in the public health context and during the design process

    An approach for testing the extract-transform-load process in data warehouse systems

    Enterprises use data warehouses to accumulate data from multiple sources for data analysis and research. Since organizational decisions are often made based on the data stored in a data warehouse, all its components must be rigorously tested. In this thesis, we first present a comprehensive survey of data warehouse testing approaches, and then develop and evaluate an automated testing approach for validating the Extract-Transform-Load (ETL) process, which is a common activity in data warehousing. In the survey we present a classification framework that categorizes the testing and evaluation activities applied to the different components of data warehouses. These approaches include dynamic analysis as well as static evaluation and manual inspections. The classification framework uses information related to what is tested, in terms of the data warehouse component that is validated, and how it is tested, in terms of the various types of testing and evaluation approaches. We discuss the specific challenges and open problems for each component and propose research directions.

    The ETL process involves extracting data from source databases, transforming it into a form suitable for research and analysis, and loading it into a data warehouse. ETL processes can use complex one-to-one, many-to-one, and many-to-many transformations involving sources and targets that use different schemas, databases, and technologies. Since faulty implementations in any of the ETL steps can result in incorrect information in the target data warehouse, ETL processes must be thoroughly validated. In this thesis, we propose automated balancing tests that check for discrepancies between the data in the source databases and that in the target warehouse. Balancing tests ensure that the data obtained from the source databases is not lost or incorrectly modified by the ETL process.

    First, we categorize and define a set of properties to be checked in balancing tests. We identify the various types of discrepancies that may exist between the source and the target data, and formalize three categories of properties, namely completeness, consistency, and syntactic validity, that must be checked during testing. Next, we automatically identify source-to-target mappings from the ETL transformation rules provided in the specifications. We identify one-to-one, many-to-one, and many-to-many mappings for the tables, records, and attributes involved in the ETL transformations. We then automatically generate test assertions to verify the properties for balancing tests: the source-to-target mappings are used to generate assertions corresponding to each property, and the assertions compare the data in the target data warehouse with the corresponding data in the sources.

    We evaluate our approach on a health data warehouse that uses data sources with different data models running on different platforms. We demonstrate that our approach can find previously undetected real faults in the ETL implementation. We also provide an automatic mutation testing approach to evaluate the fault-finding ability of our balancing tests. Using mutation analysis, we demonstrate that our auto-generated assertions can detect faults in the data inside the target data warehouse when faulty ETL scripts execute on mock source data.
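    As a minimal sketch of the balancing-test idea described above, the code below checks a completeness property (no records lost) and a consistency property (a numeric measure preserved) by comparing a source table with the corresponding target fact table. The table and column names, the one-to-one mapping, and the use of SQLite in place of the real source and warehouse platforms are all illustrative assumptions.

    # Hedged sketch of balancing assertions: compare source data with the
    # target warehouse after a (mocked) ETL run. Names and the SQLite
    # backend are assumptions, not the thesis's actual systems.
    import sqlite3

    source = sqlite3.connect(":memory:")
    target = sqlite3.connect(":memory:")

    source.executescript("""
        CREATE TABLE patient_visits (visit_id INTEGER, charge REAL);
        INSERT INTO patient_visits VALUES (1, 100.0), (2, 250.5), (3, 75.25);
    """)
    # Pretend the ETL process has already loaded the warehouse fact table.
    target.executescript("""
        CREATE TABLE fact_visits (visit_key INTEGER, charge_amount REAL);
        INSERT INTO fact_visits VALUES (1, 100.0), (2, 250.5), (3, 75.25);
    """)

    def scalar(conn, query):
        return conn.execute(query).fetchone()[0]

    # Completeness: no source record should be lost by the ETL process.
    src_count = scalar(source, "SELECT COUNT(*) FROM patient_visits")
    tgt_count = scalar(target, "SELECT COUNT(*) FROM fact_visits")
    assert src_count == tgt_count, f"record count mismatch: {src_count} vs {tgt_count}"

    # Consistency: an aggregated measure should survive the one-to-one mapping.
    src_sum = scalar(source, "SELECT ROUND(SUM(charge), 2) FROM patient_visits")
    tgt_sum = scalar(target, "SELECT ROUND(SUM(charge_amount), 2) FROM fact_visits")
    assert src_sum == tgt_sum, f"charge total mismatch: {src_sum} vs {tgt_sum}"

    print("balancing assertions passed")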