70 research outputs found

    Utilizing Algorithms for Decision Mining Discovery

    Organizations execute operational decisions in fast-changing environments, which increases the need to manage these decisions adequately. Information systems store information about such decisions in decision logs and event logs that can be used to analyse them. This study aims to find algorithms suitable for mining decisions from such decision and event logs, an activity called decision mining. Through a literature review, together with interviews with experts with a scientific background and participants with a commercial background, relevant classifier algorithms and requirements for mining decisions are identified and mapped in order to find algorithms that could be used for the discovery of decisions. Five of the twelve algorithms identified show strong potential for decision mining with small adaptations, while six have potential but would require too many alterations to their core design. One of the twelve was not suitable for the discovery of decisions.
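As an illustration of the classifier-based discovery the abstract describes, here is a minimal sketch: a one-level decision stump learned from case attributes recorded at a decision point in an event log. The attribute names, log rows, and stump approach are invented for illustration and are not taken from the study.

```python
# Minimal decision-point analysis: learn a one-level decision stump that
# predicts which branch a process takes at a decision point, from case
# attributes recorded in the event log. All names and data are illustrative.

def best_stump(rows, label_key, attr_keys):
    """Return (attribute, threshold, accuracy) of the best single split."""
    best = (None, None, 0.0)
    for attr in attr_keys:
        for thr in sorted({r[attr] for r in rows}):
            # Predict the majority branch on each side of the threshold.
            left = [r[label_key] for r in rows if r[attr] <= thr]
            right = [r[label_key] for r in rows if r[attr] > thr]
            correct = 0
            for side in (left, right):
                if side:
                    majority = max(set(side), key=side.count)
                    correct += side.count(majority)
            acc = correct / len(rows)
            if acc > best[2]:
                best = (attr, thr, acc)
    return best

log = [  # one row per case: attributes at the XOR split + branch taken
    {"amount": 500,  "risk": 1, "branch": "approve"},
    {"amount": 900,  "risk": 2, "branch": "approve"},
    {"amount": 4000, "risk": 7, "branch": "review"},
    {"amount": 6500, "risk": 9, "branch": "review"},
]
attr, thr, acc = best_stump(log, "branch", ["amount", "risk"])
print(attr, thr, acc)  # -> amount 900 1.0
```

A real decision miner would use a full decision-tree classifier over many attributes; the stump keeps the idea visible in a few lines.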

    Decision as a Service (DaaS): A service-oriented architecture approach for decisions in processes

    Separating the decision modelling concern from the process modelling concern has recently gained significant support in the literature, as incorporating both concerns into a single model impairs the scalability, maintainability, flexibility, and understandability of both processes and decisions. Most notably, the introduction of the Decision Model and Notation (DMN) standard by the Object Management Group provides a suitable solution for externalising decisions from processes and automating decision enactment for processes. This paper introduces a systematic way of separating the decision modelling concern from process modelling by providing a Decision as a Service (DaaS) layered Service-Oriented Architecture (SOA), which approaches decisions as automated, externalised services that processes invoke on demand to obtain the decision outcome. The DaaS mechanism is elucidated by a formalisation of DMN constructs and the relevant layer elements. Furthermore, DaaS is evaluated against the fundamental characteristics of the SOA paradigm, demonstrating its contribution in terms of abstraction, reusability, loose coupling, and other pertinent SOA principles. Additionally, the benefits of the DaaS design for process-decision modelling and mining are discussed. Finally, the DaaS design is illustrated on a real-life event log of a bank loan application and approval process, and the SOA maturity of DaaS is assessed.
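The core DaaS idea — decision logic externalised behind a service the process merely invokes — can be sketched in a few lines. The decision table, hit policy, and loan rules below are invented for illustration; they are not the bank's actual rules or the paper's formalisation.

```python
# Sketch of the DaaS separation: the decision logic lives behind a service
# interface, and the process only invokes it and routes on the outcome.

# A DMN-style decision table: first matching rule wins (non-overlapping
# rules are assumed). All rules here are illustrative.
LOAN_DECISION_TABLE = [
    # (predicate over inputs, output)
    (lambda d: d["amount"] <= 10_000 and d["score"] >= 600, "approved"),
    (lambda d: d["score"] < 600, "rejected"),
    (lambda d: True, "manual_review"),  # default rule
]

def decision_service(inputs):
    """The externalised decision: processes call this, never the table."""
    for predicate, outcome in LOAN_DECISION_TABLE:
        if predicate(inputs):
            return outcome

def loan_process(application):
    # The process model contains no decision logic: it invokes the decision
    # service at the decision point and routes on the returned outcome.
    outcome = decision_service(application)
    return f"notify_customer({outcome})"

print(loan_process({"amount": 5_000, "score": 700}))  # -> notify_customer(approved)
```

Because the table lives only inside the service, the decision can be changed or redeployed without touching any process model — the loose coupling the SOA evaluation in the paper argues for.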

    Improving data preparation for the application of process mining

    Immersed in what is already known as the fourth industrial revolution, automation and data exchange are taking on a particularly relevant role in complex environments such as industrial manufacturing or logistics. This digitisation and transition to the Industry 4.0 paradigm is leading experts to analyse business processes from new perspectives. Consequently, where management and business intelligence used to dominate, process mining appears as a bridge that unites and improves both disciplines. This new perspective on process analysis helps to improve strategic decision-making and competitive capabilities. Process mining brings together the data and process perspectives in a single discipline that covers the entire spectrum of process management. Through process mining, and based on observations of their actual operations, organisations can understand the state of their operations, detect deviations, and improve their performance based on what they observe. In this way, process mining has become an ally, occupying a large part of current academic and industrial research. However, although this discipline is receiving more and more attention, it presents severe problems when applied in real environments. The variety of input data in terms of form, content, semantics, and level of abstraction makes executing process mining tasks in industry an iterative, tedious, and manual process, requiring multidisciplinary experts with extensive knowledge of the domain, process management, and data processing. Currently, although there are numerous academic proposals, there are no industrial solutions capable of automating these tasks.
    For this reason, in this thesis by compendium we address the problem of improving business processes in complex environments through a study of the state of the art and a set of proposals that improve relevant aspects of the process life cycle, from log creation and log preparation to process quality assessment and business process improvement. Firstly, a systematic study of the literature was carried out in order to gain in-depth knowledge of the state of the art in this field, as well as of the different challenges the discipline faces. This analysis allowed us to detect a number of challenges that have not been addressed or have received insufficient attention, of which three were selected as the objectives of this thesis. The first challenge concerns assessing the quality of the input data, known as event logs: whether techniques for improving an event log should be applied must be decided based on the quality level of the initial data. This thesis therefore presents a methodology and a set of metrics that support the expert in selecting which technique to apply according to the quality estimated at each moment, another challenge identified in our analysis of the literature. Likewise, a set of metrics for evaluating the quality of the resulting process models is also proposed, with the aim of assessing whether improving the quality of the input data has a direct impact on the final results. The second challenge identified is the need to improve the input data used in the analysis of business processes. As in any data-driven discipline, the quality of the results strongly depends on the quality of the input data, so the second challenge addressed is improving the preparation of event logs.
    The contribution in this area is the application of natural language processing techniques to relabel activities from textual descriptions of process activities, as well as the application of clustering techniques to simplify the results, generating models that are more understandable from a human point of view. Finally, the third challenge detected relates to process optimisation, so we contribute an approach for optimising the resources associated with business processes which, by incorporating decision-making into the creation of flexible processes, enables significant cost reductions. Furthermore, all the proposals made in this thesis were designed and validated in collaboration with experts from different fields of industry and have been evaluated through real case studies in public and private projects with the aeronautical industry and the logistics sector.
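The event-log quality assessment that the first challenge describes can be illustrated with a toy metric: attribute completeness, i.e. the fraction of events that carry every field a process-mining algorithm needs. The field names, the log, and the single-metric design are assumptions for illustration, not the thesis's actual metrics.

```python
# Illustrative event-log quality metric: completeness of the required
# attributes (case id, activity, timestamp) over a small log.
# Field names and data are assumptions, not the thesis's metric suite.

REQUIRED = ("case_id", "activity", "timestamp")

def completeness(event_log):
    """Share of events with every required attribute present and non-empty."""
    if not event_log:
        return 0.0
    ok = sum(1 for e in event_log if all(e.get(f) for f in REQUIRED))
    return ok / len(event_log)

log = [
    {"case_id": "c1", "activity": "register", "timestamp": "2021-01-05T09:00"},
    {"case_id": "c1", "activity": "check",    "timestamp": None},  # incomplete
    {"case_id": "c2", "activity": "register", "timestamp": "2021-01-06T10:30"},
    {"case_id": "c2", "activity": "approve",  "timestamp": "2021-01-06T11:00"},
]
score = completeness(log)
print(score)  # 3 of 4 events are complete -> 0.75
```

A score like this is the kind of signal the thesis proposes for deciding whether, and which, log-improvement techniques to apply before mining.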

    AI-Enhanced Hybrid Decision Management

    The Decision Model and Notation (DMN) modeling language allows the precise specification of business decisions and business rules. DMN is readily understandable by business users involved in decision management. However, as models become complex, the cognitive limits of humans threaten manual maintainability and comprehensibility. Proper design of the decision logic thus requires comprehensive automated analysis of, e.g., all possible cases the decision shall cover, correlations between inputs and outputs, and the importance of inputs for deriving the output. In this paper, the authors explore the mutual benefits of combining human-driven DMN decision modeling with the computational power of Artificial Intelligence for DMN model analysis and improved comprehension. The authors propose a model-driven approach that uses DMN models to generate Machine Learning (ML) training data and show how the trained ML models can inform human decision modelers by superimposing feature importance onto the original DMN models. An evaluation with multiple real DMN models from an insurance company demonstrates the feasibility and utility of the approach.
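The paper's pipeline — execute a decision model over generated inputs, then measure how much each input matters for the output — can be sketched without ML libraries by using a flip-based importance score instead of a trained model's feature importances. The decision function, input names, and scoring rule below are invented stand-ins, not the paper's method.

```python
# Toy version of input-importance analysis for an executable decision:
# enumerate all input combinations (the generated "training data"), and
# score each input as the share of cases where flipping only that input
# changes the decision outcome. The decision itself is illustrative.

from itertools import product

def decision(age_ok, has_claims, premium_paid):
    # Stand-in for an executable DMN decision.
    return premium_paid and (age_ok or not has_claims)

INPUTS = ["age_ok", "has_claims", "premium_paid"]

def importance(fn, n_inputs):
    scores = [0] * n_inputs
    cases = list(product([False, True], repeat=n_inputs))
    for case in cases:
        base = fn(*case)
        for i in range(n_inputs):
            flipped = list(case)
            flipped[i] = not flipped[i]
            if fn(*flipped) != base:
                scores[i] += 1
    return [s / len(cases) for s in scores]

for name, score in zip(INPUTS, importance(decision, 3)):
    print(name, score)  # premium_paid dominates the outcome here
```

The paper instead trains ML models on the generated data and superimposes their learned feature importances back onto the DMN diagram; the flip-count above is only a dependency-free proxy for that signal.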

    On the enhancement of Big Data Pipelines through Data Preparation, Data Quality, and the distribution of Optimisation Problems

    Nowadays, data are fundamental for companies, providing operational support by facilitating daily transactions. Data have also become the cornerstone of strategic decision-making processes in businesses. For this purpose, there are numerous techniques that make it possible to extract knowledge and value from data. For example, optimisation algorithms excel at supporting decision-making processes to improve the use of resources, time, and costs in the organisation. In the current industrial context, organisations usually rely on business processes to orchestrate their daily activities while collecting large amounts of information from heterogeneous sources. The support of Big Data technologies (which are based on distributed environments) is therefore required, given the volume, variety, and velocity of the data. Then, in order to extract value from the data, a set of techniques or activities is applied in an orderly way at different stages. This set of techniques or activities, which facilitate the acquisition, preparation, and analysis of data, is known in the literature as a Big Data pipeline. This thesis tackles the improvement of three stages of Big Data pipelines: Data Preparation, Data Quality assessment, and Data Analysis. These improvements can be addressed from an individual perspective, by focusing on each stage, or from a more complex and global perspective that implies coordinating these stages to create data workflows. The first stage to improve is Data Preparation, by supporting the preparation of data with complex structures (i.e., data with various levels of nested structures, such as arrays). Shortcomings have been found in the literature and in current technologies for transforming complex data in a simple way. This thesis therefore aims to improve the Data Preparation stage through Domain-Specific Languages (DSLs). Specifically, two DSLs are proposed for different use cases.
    While one of them is a general-purpose data transformation language, the other is a DSL aimed at extracting event logs in a standard format for process mining algorithms. The second area for improvement relates to the assessment of Data Quality. Depending on the type of Data Analysis algorithm, poor-quality data can seriously skew the results. Optimisation algorithms are a clear example: if the data are not sufficiently accurate and complete, the search space can be severely affected. This thesis therefore formulates a methodology for modelling Data Quality rules adjusted to the context of use, as well as a tool that facilitates the automation of their assessment. This makes it possible to discard data that do not meet the quality criteria defined by the organisation. In addition, the proposal includes a framework that helps to select actions to improve the usability of the data. The third and last proposal involves the Data Analysis stage. In this case, the thesis faces the challenge of supporting the use of optimisation problems in Big Data pipelines. There is a lack of methodological solutions for computing exhaustive optimisation problems in distributed environments (i.e., optimisation problems that guarantee finding an optimal solution by exploring the whole search space). Solving this type of problem in the Big Data context is computationally complex, and can be NP-complete, for two different reasons. On the one hand, the search space can increase significantly with the amount of data to be processed by the optimisation algorithms; this challenge is addressed through a technique for generating and grouping problems with distributed data. On the other hand, processing optimisation problems with complex models and large search spaces in distributed environments is not trivial, so a proposal is presented for a particular case of this type of scenario.
    As a result, this thesis develops methodologies that have been published in scientific journals and conferences. The methodologies have been implemented in software tools that are integrated with the Apache Spark data processing engine, and the solutions have been validated through tests and use cases with real datasets.
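The idea of "Data Quality rules adjusted to the context of use" can be sketched as named predicates that a context selects, with failing records discarded before analysis. The rule names, contexts, thresholds, and records below are assumptions for illustration, not the thesis's methodology or its Spark-based tooling.

```python
# Toy context-dependent data-quality rules: each rule is a named predicate,
# a context of use selects which rules apply, and records failing any
# applicable rule are discarded before analysis. All names are illustrative.

RULES = {
    "non_null_id":  lambda r: r.get("id") is not None,
    "positive_qty": lambda r: r.get("qty", 0) > 0,
    "recent_year":  lambda r: r.get("year", 0) >= 2020,
}

CONTEXTS = {  # which rules matter in each context of use
    "billing":   ["non_null_id", "positive_qty"],
    "reporting": ["non_null_id", "recent_year"],
}

def assess(records, context):
    """Split records into usable and rejected under the context's rules."""
    checks = [RULES[name] for name in CONTEXTS[context]]
    usable = [r for r in records if all(c(r) for c in checks)]
    rejected = [r for r in records if r not in usable]
    return usable, rejected

data = [
    {"id": 1, "qty": 3,  "year": 2019},
    {"id": 2, "qty": -1, "year": 2022},
    {"id": None, "qty": 5, "year": 2023},
    {"id": 4, "qty": 2,  "year": 2024},
]
usable, rejected = assess(data, "billing")
print(len(usable), len(rejected))  # billing keeps 2 of the 4 records
```

The same record set passes or fails depending on the context — the point of tying quality rules to the context of use rather than fixing them globally.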

    Decision-making support for the alignment of business-process-driven organization with strategic plans.

    Business plans are documents in which organisations' business executive teams (BETs) specify each and every aspect of the organisation. Two very important components of business plans are the operational plan and the strategic plan. The operational plan captures both the activities/tasks that can be performed in the organisation to provide the products or services offered, and the way in which these activities/tasks are to be carried out; the strategic plan specifies the direction and goals of the organisation, elaborates objectives, and identifies strategies for achieving them. Organisations follow the direction established in their strategic plans, but due to various factors, maintaining that direction is often difficult. One of these factors is the influence of people, who make decisions sometimes based on their local knowledge of the organisation, their previous experience, and/or their intuition, rather than on a quantitative analysis of how their decisions may affect the organisation and, therefore, of how aligned those decisions are with the established direction. As a result, decisions are sometimes not aligned with the direction set by the organisation, and moreover this misalignment goes unnoticed. This doctoral thesis proposes methodologies and mechanisms to help people make decisions aligned with the direction established by the organisation. The consulting firm Gartner considers the ability to support decision-making to be crucial for the systems that underpin a company's operations (BPMSs). For this reason, the methodologies and mechanisms proposed in this thesis are integrated with BPMSs as Decision Support Systems (DSSs).
    From a systematic analysis of the existing literature, several proposals for improving DSSs were derived, and three types of decisions made in business processes were identified that are not widely supported by current DSSs: (1) decisions that route the business process instance (BPI); (2) decisions about the values of input variables; and (3) decisions about which business process (BP) to execute. This thesis proposes three DSSs, each aligned with one of the aforementioned decision types. DSSs for routing BPIs constitute one of the best-known fields of study in the context of decision-making in BPs; however, the proposals found in the literature do not make it possible to consider the context in which the BPI is executing, i.e., they only consider information related to the running BPI (local information) and not the global state of the organisation. The DSS for routing BPIs presented in this thesis proposes a language for defining variables that represent the global state of the organisation, together with mechanisms for using these variables in BPI routing decisions. Thanks to this, decisions can be made at the level of the whole organisation. Another type of decision made in BPs involves choosing the input values of BPs (for example, the amount to invest or the number of employees assigned to a task). The choice of input values in a BP can directly influence whether or not the company achieves its stated objectives. To determine the most suitable values for BP input variables, both past instances and business process models must be analysed.
    In the DSS proposed in this thesis for deciding on input values, the information extracted from past instances is used to suggest the range of values within which a variable's value is aligned with the objectives set by the organisation. Since the information employed to extract knowledge from finished BPIs is stored in databases, a methodology is also proposed to validate the alignment of the data of these previous instances with the BP. The DSSs described above concern decisions made about BPIs, that is, BPs that are already executing; however, the choice of which BP to execute also constitutes a decision in itself. This decision can likewise affect the state of the organisation and, therefore, the achievement of the objectives specified in the organisation's strategic plans. Such decisions are known as governance decisions, and they must also be aligned with the strategic plans. To achieve this alignment, this thesis proposes a methodology for modelling both the BPs and the extent to which their execution affects the organisation's indicators. This modelling is done by people (business experts), so the thesis also proposes mechanisms for validating it against the organisation's past activity. The proposed DSS for governance decisions is based on the ability to simulate these models in order to predict the final state of the organisation if one or several business processes were executed at a given moment. The DSSs and techniques proposed in this thesis improve decision-making capability in four respects: 1. They help users make decisions aligned with the direction set by the organisation, based on the overall state of the company and on what happened in the past. 2.
    They ensure that the decisions made are aligned with the strategic plans, so that everyone involved in the organisation makes decisions in accordance with the objectives defined by the organisation. 3. They leverage information from the company's past BP executions to improve the organisation. 4. They leverage the knowledge that the people involved in the organisation have of how it operates, while making it possible to reason about why one action is taken rather than another. Moreover, these techniques are designed to be used by business experts, i.e., people without technical training; to contribute to a better understanding of how actions carried out in the organisation can affect the achievement of the defined objectives; and to allow information on the state of the organisation to be used by third-party applications. Finally, it should be noted that the proposals developed in the context of this thesis, and the examples used to illustrate them, have been drawn from real company cases.
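The input-value DSS described above — suggesting, from finished instances, the range of an input variable within which the organisation's objective was historically met — can be sketched as follows. The variable names, the past-instance data, and the min/max range heuristic are invented for illustration, not the thesis's actual mechanism.

```python
# Sketch of the input-value DSS idea: derive from past process instances
# the range of an input variable over which the objective was met, then
# flag a proposed value that falls outside it. Names and data are invented.

def aligned_range(instances, var, goal_key="goal_met"):
    """Min/max of `var` over past instances that met the objective."""
    good = [i[var] for i in instances if i[goal_key]]
    if not good:
        return None
    return min(good), max(good)

past = [  # finished instances: chosen input value + whether the goal was met
    {"invest": 10_000, "goal_met": True},
    {"invest": 25_000, "goal_met": True},
    {"invest": 60_000, "goal_met": False},
    {"invest": 18_000, "goal_met": True},
]
lo, hi = aligned_range(past, "invest")
proposed = 40_000
print((lo, hi), lo <= proposed <= hi)  # 40k falls outside the aligned range
```

A production DSS would of course weigh more evidence than a raw min/max, but the sketch shows how past-instance data turns a gut-feel input choice into one checkable against the organisation's objectives.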

    Data-driven planned maintenance for MRI-Scanners


    A Smart Products Lifecycle Management (sPLM) Framework - Modeling for Conceptualization, Interoperability, and Modularity

    Autonomy and intelligence have been built into many of today’s mechatronic products, taking advantage of low-cost sensors and advanced data analytics technologies. Designing product intelligence (enabled by analytics capabilities) is no longer a trivial or optional part of product development. This research addresses the challenges raised by the new data-driven design paradigm for smart products development, in which the product itself and its smartness need to be carefully co-constructed. A smart product can be seen as specific compositions and configurations of its physical components, which form the body, and its analytics models, which implement the intelligence, evolving along its lifecycle stages. Based on this view, the contribution of this research is to expand the “Product Lifecycle Management (PLM)” concept, traditionally applied to physical products, to data-based products. As a result, a Smart Products Lifecycle Management (sPLM) framework is conceptualized, based on a high-dimensional Smart Product Hypercube (sPH) representation and decomposition. First, the sPLM addresses interoperability issues by developing a Smart Component data model to uniformly represent and compose physical component models created by engineers and analytics models created by data scientists. Second, the sPLM implements an NPD3 process model that incorporates a formal data analytics process into the new product development (NPD) process model, in order to support the transdisciplinary information flows and team interactions between engineers and data scientists. Third, the sPLM addresses issues related to product definition, modular design, product configuration, and lifecycle management of analytics models by adapting the theoretical frameworks and methods for traditional product design and development. An sPLM proof-of-concept platform has been implemented to validate the concepts and methodologies developed throughout the research work.
    The sPLM platform provides a shared data repository to manage the product-, process-, and configuration-related knowledge for smart products development. It also provides a collaborative environment to facilitate transdisciplinary collaboration between product engineers and data scientists.
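The Smart Component idea — one data model that uniformly represents both physical component models and analytics models so they compose in a single product structure — might be sketched like this. The field names, the `kind` tag, and the tree shape are hypothetical, not the sPLM schema.

```python
# Hypothetical shape of the Smart Component idea: one record type holds
# either a physical component model (from engineers) or an analytics model
# (from data scientists), so both compose uniformly in one product tree.
# Field names are assumptions, not the sPLM data model.

from dataclasses import dataclass, field

@dataclass
class SmartComponent:
    name: str
    kind: str                      # "physical" or "analytics"
    version: str = "1.0"
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return self

    def count(self, kind):
        """Number of components of the given kind in this subtree."""
        n = 1 if self.kind == kind else 0
        return n + sum(c.count(kind) for c in self.children)

# A smart product composes both kinds in a single structure.
scanner = SmartComponent("mri_scanner", "physical")
scanner.add(SmartComponent("magnet", "physical"))
scanner.add(SmartComponent("wear_predictor", "analytics"))
print(scanner.count("analytics"))  # -> 1
```

Treating analytics models as first-class components is what lets PLM-style versioning and configuration management apply to them alongside the physical bill of materials.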

    Secure and Reliable Resource Allocation and Caching in Aerial-Terrestrial Cloud Networks (ATCNs)

    Aerial-terrestrial cloud networks (ATCNs), the global integration of air and ground communication systems, pave the way for a large set of applications such as surveillance, on-demand transmissions, data acquisition, and navigation. However, such networks face the crucial challenges of secure and reliable resource allocation and content caching, as the involved entities are highly dynamic and there is no fine-tuned strategy to accommodate their connectivity. To resolve this quandary, cog-chain, a novel paradigm for secure and reliable resource allocation and content caching in ATCNs, is presented. Various requirements, key concepts, and issues of ATCNs are also presented, along with the basic concepts for establishing a cog-chain in ATCNs. Feed and fetch modes are utilized depending on the involved entities and caching servers. In addition, a cog-chain communication protocol is presented which helps to evaluate the formation of a virtual cog-chain between the nodes and the content-caching servers. The efficacy of the proposed solution is demonstrated through significant gains observed in signaling overheads, computational time, reliability, and resource allocation growth. The proposed approach operates with signaling overheads ranging between 30.36 and 303.6 bytes·hops/sec and a formation time between 186 and 195 ms. Furthermore, the overall time consumption is 83.33% lower than the sequential-verification model, and the resource allocation growth is 27.17% better than the sequential-verification model. © 2019 IEEE. This work was supported in part by the Institute for Information and Communications Technology Promotion (IITP) grant through the Korean Government (MSIT) (Rule Specification-Based Misbehavior Detection for IoT-Embedded Cyber-Physical Systems) under Grant 2017-0-00664, and in part by the Soonchunhyang University Research Fund.