
    IUPC: Identification and Unification of Process Constraints

    Business Process Compliance (BPC) has gained significant momentum in research and practice in recent years. Although many approaches address BPC, they mostly assume the existence of some kind of unified base of process constraints and focus on their verification over the business processes. However, it remains unclear how such an integrated process constraint base can be built up, even though this constitutes the essential prerequisite for all further compliance checks. In addition, the heterogeneity of process constraints has been neglected so far. Without identification and separation of process constraints from domain rules, as well as unification of process constraints, successful IT support of BPC will not be possible. In this technical report we introduce a unified representation framework that enables the identification of process constraints from domain rules and their later unification within a process constraint base. Separating process constraints from domain rules can significantly reduce compliance checking effort; unification, in turn, enables consistency checks and optimizations as well as maintenance and evolution of the constraint base.
    Comment: 13 pages, 4 figures, technical report
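    The two steps the abstract names, identifying process constraints among domain rules and unifying them in a constraint base, can be sketched as follows. This is a minimal illustration under assumed names and a toy rule format, not the report's actual framework; `Rule`, `Constraint`, and the `parse` callback are all hypothetical.

```python
# Hypothetical sketch: separate process constraints from domain rules,
# unify them into one representation, and run a simple consistency check.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    text: str
    refers_to_process: bool  # does the rule constrain process behaviour?

@dataclass(frozen=True)
class Constraint:
    kind: str    # e.g. "order", "presence", "absence"
    source: str  # activity the constraint starts from
    target: str  # activity the constraint points to

def identify(rules):
    """Keep only the rules that constrain the process (not pure domain rules)."""
    return [r for r in rules if r.refers_to_process]

def unify(process_rules, parse):
    """Map heterogeneous rule texts into one unified constraint representation."""
    return {parse(r) for r in process_rules}

def inconsistent(base):
    """Flag a simple contradiction: A-before-B together with B-before-A."""
    orders = {(c.source, c.target) for c in base if c.kind == "order"}
    return any((b, a) in orders for (a, b) in orders)
```

    Checking compliance only against the (smaller) unified base, rather than against all domain rules, is where the reduction in checking effort described above would come from.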

    Sub-grid modelling for two-dimensional turbulence using neural networks

    In this investigation, a data-driven turbulence closure framework is introduced and deployed for the sub-grid modelling of Kraichnan turbulence. The novelty of the proposed method lies in the fact that snapshots from high-fidelity numerical data are used to inform artificial neural networks for predicting the turbulence source term through localized grid-resolved information. In particular, our proposed methodology successfully establishes a map between inputs given by stencils of the vorticity and the streamfunction, along with information from two well-known eddy-viscosity kernels, and the sub-grid vorticity forcing, which we predict in a temporally and spatially dynamic fashion. Our study is both a-priori and a-posteriori in nature. In the former, we present an extensive hyper-parameter optimization analysis in addition to learning quantification through probability-density-function-based validation of sub-grid predictions. In the latter, we analyse the performance of our framework for flow evolution in a classical decaying two-dimensional turbulence test case in the presence of errors related to temporal and spatial discretization. Statistical assessments in the form of angle-averaged kinetic energy spectra demonstrate the promise of the proposed methodology for sub-grid quantity inference. In addition, it is also observed that some measure of a-posteriori error must be considered during optimal model selection for greater accuracy. The results in this article thus represent a promising development in the formalization of a framework for generation of heuristic-free turbulence closures from data.
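    The input-output map the abstract describes, local stencils of vorticity and streamfunction plus two eddy-viscosity-style scalars in, pointwise sub-grid forcing out, can be sketched with a tiny feed-forward network. The architecture, feature choices, and (random, untrained) weights below are illustrative assumptions, not the paper's trained model.

```python
# Toy numpy-only sketch of an ANN sub-grid closure: 3x3 stencils of vorticity
# (omega) and streamfunction (psi) plus two crude eddy-viscosity-like scalars
# are fed to a small MLP that returns a pointwise sub-grid vorticity forcing.
import numpy as np

rng = np.random.default_rng(0)

def features(omega, psi, i, j):
    """Stack the local 3x3 stencils and two gradient-magnitude scalars."""
    w = omega[i-1:i+2, j-1:j+2].ravel()               # 9 vorticity values
    p = psi[i-1:i+2, j-1:j+2].ravel()                 # 9 streamfunction values
    gx = np.abs(omega[i+1, j] - omega[i-1, j])        # crude x-gradient proxy
    gy = np.abs(omega[i, j+1] - omega[i, j-1])        # crude y-gradient proxy
    return np.concatenate([w, p, [gx, gy]])           # 20 inputs total

# One hidden layer, 20 -> 16 -> 1; weights are random stand-ins for training.
W1, b1 = rng.standard_normal((16, 20)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((1, 16)) * 0.1, np.zeros(1)

def predict_forcing(x):
    """Forward pass: grid-resolved features -> sub-grid vorticity forcing."""
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h + b2)
```

    In the actual framework the weights would be fit to high-fidelity snapshots, and the prediction would be evaluated at every interior grid point of the coarse solver at every time step.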

    Toward Semantics-aware Representation of Digital Business Processes

    An extended enterprise (EE) can be described by a set of models, each representing a specific aspect of the EE. Aspects include, for example, the process flow or the value description. However, the different models are created by different people, who may use different terminology, which prevents relating the models to one another. We therefore propose a framework consisting of process-flow and value aspects together with a static domain model comprising structural and relational components, and we outline how the static domain model can be used to relate the different aspects.
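    The role of the shared static domain model can be sketched as follows: elements of each aspect model are annotated with concepts from one common domain model, and elements from different models are then related through those shared concepts. All concept and element names here are hypothetical illustrations.

```python
# Sketch: a shared static domain model (structural concepts + relations)
# relates elements of two aspect models built by different people.
domain_model = {
    "concepts": {"Order", "Payment", "Customer"},          # structural part
    "relations": {("Order", "paid_by", "Payment")},        # relational part
}

# Each aspect model annotates its elements with shared domain concepts.
process_flow = {"ReceiveOrder": "Order", "CollectPayment": "Payment"}
value_model = {"OrderValue": "Order", "Revenue": "Payment"}

def related_elements(flow, value, domain):
    """Pair process activities with value elements that share a domain concept."""
    pairs = []
    for act, c1 in flow.items():
        for val, c2 in value.items():
            if c1 == c2 and c1 in domain["concepts"]:
                pairs.append((act, val))
    return pairs
```

    The point of the shared model is that the two aspect models never need to agree on terminology directly; agreement is delegated to the domain concepts.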

    Audit-based Compliance Control (AC2) for EHR Systems

    Traditionally, medical data is stored and processed using paper-based files. Recently, medical facilities have started to store, access and exchange medical data in digital form. The drivers for this change are mainly demands for cost reduction and higher quality of health care. The main concerns when dealing with medical data are availability and confidentiality. Unavailability (even temporary) of medical data is expensive. Physicians may not be able to diagnose patients correctly, or they may have to repeat exams, adding to the overall costs of health care. In extreme cases, availability of medical data can even be a matter of life or death. On the other hand, confidentiality of medical data is also important. Legislation requires medical facilities to observe the privacy of the patients, and states that patients have a final say on whether or not their medical data can be processed. Moreover, if physicians, or their EHR systems, are not trusted by the patients, for instance because of frequent privacy breaches, then patients may refuse to submit (correct) information, complicating the work of the physicians greatly.

    In traditional data protection systems, confidentiality and availability are conflicting requirements. The more data protection methods are applied to shield data from outsiders, the more likely it becomes that authorized persons will not get access to the data in time. Consider, for example, a password verification service that is temporarily not available, or an access pass that someone forgot to bring. In this report we discuss a novel approach to data protection, Audit-based Compliance Control (AC2), and we argue that it is particularly suited for application in EHR systems. In AC2, a-priori access control is minimized to the mere authentication of users and objects and their basic authorizations. More complex security procedures, such as checking user compliance to policies, are performed a-posteriori by a formal and automated auditing mechanism. To support our claim we discuss legislation concerning the processing of health records, and we formalize a scenario involving medical personnel and a basic EHR system to show how AC2 can be used in practice.

    This report is based on previous work (Dekker & Etalle 2006), in which we assessed the applicability of a-posteriori access control in a health care scenario. A more technically detailed article about AC2 recently appeared in the IJIS journal, where we focused, however, on collaborative work environments (Cederquist, Corin, Dekker, Etalle, & Hartog, 2007). In this report we first provide background and related work before explaining the principal components of the AC2 framework. Moreover, we model a detailed EHR case study to show its operation in practice. We conclude by discussing how this framework meets current trends in healthcare and by highlighting the main advantages and drawbacks of using an a-posteriori access control mechanism as opposed to more traditional access control mechanisms.
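    The a-posteriori idea described above, grant access after mere authentication, log everything, and check compliance against policy afterwards, can be sketched as follows. The policy format and field names are illustrative assumptions, not the AC2 framework's actual formalism.

```python
# Sketch of audit-based (a-posteriori) compliance control: accesses are only
# logged at access time; an automated audit later checks each log entry
# against a role-based policy and reports the non-compliant ones.
def audit(log, policy):
    """Return the log entries that violate the policy, checked after the fact."""
    violations = []
    for entry in log:
        allowed = policy.get(entry["role"], set())
        if entry["record_type"] not in allowed:
            violations.append(entry)
    return violations

# Hypothetical policy: which record types each role may process.
policy = {"physician": {"diagnosis", "medication"},
          "clerk": {"billing"}}

# Access log collected at runtime; nothing was blocked up front.
log = [
    {"user": "dr_a", "role": "physician", "record_type": "diagnosis"},
    {"user": "clerk_b", "role": "clerk", "record_type": "medication"},  # breach
]
```

    Availability is preserved because no access is blocked at request time; confidentiality is enforced retrospectively by holding users accountable for the violations the audit surfaces.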

    Fatores que afetam a adoção de análises de Big Data em empresas

    With the total quantity of data doubling every two years and the low price of computing and data storage, Big Data analytics (BDA) adoption is desirable for companies as a tool to gain competitive advantage. Given the availability of free software, why have some companies failed to adopt these techniques? To answer this question, we extend the unified theory of acceptance and use of technology (UTAUT) model, adapted for the BDA context, adding two variables: resistance to use and perceived risk. We used the level of implementation of these techniques to divide companies into users and non-users of BDA. The structural models were evaluated by partial least squares (PLS). The results show that the importance of good infrastructure exceeds the difficulties companies face in implementing it. While companies planning to use Big Data expect strong results, current users are more skeptical about its performance.

    Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    Big data research has attracted great attention in science, technology, industry and society. It is developing with the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenge have not been recognized, and its own methodology has not been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and the science paradigm? What is the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing.
    Comment: 59 pages