
    Can biological quantum networks solve NP-hard problems?

    There is a widespread view that the human brain is so complex that it cannot be efficiently simulated by universal Turing machines. Over the last decades, the question has therefore been raised whether we need to consider quantum effects to explain the imagined cognitive power of a conscious mind. This paper presents a personal view of several fields of philosophy and computational neurobiology in an attempt to suggest a realistic picture of how the brain might work as a basis for perception, consciousness, and cognition. The purpose is to identify and evaluate instances where quantum effects might play a significant role in cognitive processes. Not surprisingly, the conclusion is that quantum-enhanced cognition and intelligence are very unlikely to be found in biological brains. Quantum effects may certainly influence the functionality of various components and signalling pathways at the molecular level in the brain network, such as ion ports, synapses, sensors, and enzymes. This might influence the functionality of some nodes, and perhaps even the overall intelligence of the brain network, but it would hardly give the network any dramatically enhanced functionality. The conclusion is therefore that biological quantum networks can only approximately solve small instances of NP-hard problems. On the other hand, artificial intelligence and machine learning implemented in complex dynamical systems based on genuine quantum networks can certainly be expected to show enhanced performance and quantum advantage compared with classical networks. Nevertheless, even quantum networks can only be expected to efficiently solve NP-hard problems approximately. In the end it is a question of precision: Nature is approximate.
    Comment: 38 pages
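    The abstract's distinction between exact and merely approximate solving of NP-hard problems can be made concrete with a small heuristic. The sketch below (entirely illustrative, not from the paper; the graph, cooling schedule, and parameters are invented) approximately solves a MAX-CUT instance with simulated annealing: it finds a good cut quickly but comes with no optimality guarantee.

    ```python
    import math
    import random

    random.seed(0)

    # Toy MAX-CUT instance: a random graph on 12 vertices.
    n = 12
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if random.random() < 0.4]

    def cut_value(side):
        """Number of edges crossing the cut defined by the 0/1 labelling."""
        return sum(1 for i, j in edges if side[i] != side[j])

    side = [random.randint(0, 1) for _ in range(n)]  # random initial cut
    cur = cut_value(side)
    best_val, best = cur, side[:]

    for step in range(5000):
        temp = 2.0 * (1 - step / 5000) + 1e-3        # simple cooling schedule
        k = random.randrange(n)
        side[k] ^= 1                                 # tentatively flip a vertex
        new = cut_value(side)
        if new >= cur or random.random() < math.exp((new - cur) / temp):
            cur = new                                # accept the move
            if cur > best_val:
                best_val, best = cur, side[:]
        else:
            side[k] ^= 1                             # reject: undo the flip

    print(f"approximate max cut: {best_val} of {len(edges)} edges")
    ```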

    Estimation and Inference about Heterogeneous Treatment Effects in High-Dimensional Dynamic Panels

    This paper provides estimation and inference methods for a large number of heterogeneous treatment effects in a panel data setting with many potential controls. We assume that heterogeneous treatment is the result of a low-dimensional base treatment interacting with many heterogeneity-relevant controls, but that only a small number of these interactions have a non-zero heterogeneous effect relative to the average. The method has two stages. First, we use modern machine learning techniques to estimate the expectation functions of the outcome and base treatment given the controls, and take the residuals of each variable. Second, we estimate the treatment effect by an l1-regularized regression (i.e., Lasso) of the outcome residuals on the base treatment residuals interacted with the controls. We debias this estimator to conduct pointwise inference about a single coefficient of the treatment effect vector and simultaneous inference about the whole vector. To account for the unobserved unit effects inherent in panel data, we use an extension of the correlated random effects approach of Mundlak (1978) and Chamberlain (1982) to a high-dimensional setting. As an empirical application, we estimate a large number of heterogeneous demand elasticities based on a novel dataset from a major European food distributor.
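    A minimal sketch of the two-stage procedure described above, in Python with scikit-learn. The simulated data, the random-forest first stage, and all variable names are assumptions for illustration; cross-fitting and the debiasing step needed for valid inference are omitted for brevity.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    n, p = 1000, 20
    X = rng.normal(size=(n, p))          # heterogeneity-relevant controls
    d = X[:, 0] + rng.normal(size=n)     # base treatment, depends on controls
    y = 2.0 * d + 1.5 * d * X[:, 1] + X.sum(axis=1) + rng.normal(size=n)

    # Stage 1: residualize outcome and base treatment on the controls
    # (proper double ML would use cross-fitting here).
    y_res = y - RandomForestRegressor().fit(X, y).predict(X)
    d_res = d - RandomForestRegressor().fit(X, d).predict(X)

    # Stage 2: Lasso of outcome residuals on treatment residuals interacted
    # with the controls; sparsity picks out the few non-zero interactions.
    Z = np.column_stack([d_res, d_res[:, None] * X])
    lasso = LassoCV(cv=5).fit(Z, y_res)
    print("average effect:", lasso.coef_[0])
    print("non-zero heterogeneous effects:", np.nonzero(lasso.coef_[1:])[0])
    ```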

    Modeling, Executing and Monitoring IoT-Driven Business Rules in BPMN and DMN: Current Support and Challenges

    The involvement of the Internet of Things (IoT) in Business Process Management (BPM) solutions is continuously increasing. While BPM enables the modeling, implementation, execution, monitoring, and analysis of business processes, IoT fosters the collection and exchange of data over the Internet. By enriching BPM solutions with real-world IoT data, both process automation and process monitoring can be improved. Furthermore, IoT data can be utilized during process execution to realize IoT-driven business rules that consider the state of the physical environment. The aggregation of low-level IoT data into process-relevant, high-level IoT data is a paramount step towards IoT-driven business processes and business rules, respectively. In this context, Business Process Model and Notation (BPMN) and Decision Model and Notation (DMN) provide support to model, execute, and monitor IoT-driven business rules, but some challenges remain. This paper derives the challenges that emerge when modeling, executing, and monitoring IoT-driven business rules using the BPMN 2.0 and DMN standards.
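    A toy illustration (not from the paper) of the aggregation step: low-level sensor readings are condensed into a process-relevant, high-level value, on which an IoT-driven business rule is evaluated in the style of a single DMN decision-table row. All names and thresholds are invented.

    ```python
    from statistics import mean

    # Low-level IoT data: raw temperature readings from several sensors.
    readings = [21.8, 22.4, 25.1, 24.7, 23.9]

    # Aggregation: low-level readings -> high-level process variable.
    avg_temp = mean(readings)

    # IoT-driven business rule, expressed like one DMN decision-table row:
    # "if the average temperature exceeds a threshold, escalate the process".
    def decide(avg_temp: float, threshold: float = 23.0) -> str:
        return "escalate" if avg_temp > threshold else "proceed"

    print(decide(avg_temp))  # -> "escalate"
    ```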

    DMN for Data Quality Measurement and Assessment

    Data quality assessment is aimed at evaluating the suitability of a dataset for an intended task. The extensive literature on data quality describes various methodologies for assessing data quality by means of data profiling techniques applied to whole datasets. Our investigations aim to provide solutions to the need to automatically assess the level of quality of the individual records of a dataset, where data profiling tools do not provide an adequate level of information. Since it is usually easier to describe when a record has sufficient quality than to calculate a quality indicator, we propose a semi-automatic, business-rule-guided data quality assessment methodology for every record. This involves first listing the business rules that describe the data (data requirements), then those describing how to produce measures (business rules for data quality measurement), and finally those defining how to assess the level of data quality of a dataset (business rules for data quality assessment). The main contribution of this paper is the adoption of the OMG standard DMN (Decision Model and Notation) to support the description of data quality requirements and their automatic assessment using existing DMN engines.
    Funding: Ministerio de Ciencia y Tecnología RTI2018-094283-B-C33; Ministerio de Ciencia y Tecnología RTI2018-094283-B-C31; European Regional Development Fund SBPLY/17/180501/00029
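    A minimal sketch of the record-level idea, assuming nothing about the authors' tooling: business rules for data quality measurement are checked per record, and a business rule for assessment (here, an assumed all-rules-must-pass policy) decides whether the record is usable. Field names and rules are invented.

    ```python
    from typing import Callable, Dict

    # Business rules for data quality measurement: each rule checks one
    # data requirement on a single record.
    rules: Dict[str, Callable[[dict], bool]] = {
        "age_in_range":  lambda r: 0 <= r.get("age", -1) <= 120,
        "email_present": lambda r: bool(r.get("email")),
    }

    # Business rule for data quality assessment (an assumed policy):
    # a record is usable only if every measurement rule passes.
    def assess(record: dict) -> dict:
        measures = {name: rule(record) for name, rule in rules.items()}
        return {"measures": measures, "usable": all(measures.values())}

    print(assess({"age": 34, "email": "ana@example.com"}))
    print(assess({"age": 150}))  # violates both rules -> not usable
    ```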

    On the enhancement of Big Data Pipelines through Data Preparation, Data Quality, and the distribution of Optimisation Problems

    Nowadays, data are fundamental for companies, providing operational support by facilitating daily transactions. Data have also become the cornerstone of strategic decision-making processes in businesses. For this purpose, there are numerous techniques that make it possible to extract knowledge and value from data. For example, optimisation algorithms excel at supporting decision-making processes to improve the use of resources, time, and costs in an organisation. In the current industrial context, organisations usually rely on business processes to orchestrate their daily activities while collecting large amounts of information from heterogeneous sources. The support of Big Data technologies (which are based on distributed environments) is therefore required, given the volume, variety, and velocity of the data. To extract value from the data, a set of techniques or activities is then applied in an orderly way and at different stages. This set of techniques or activities, which facilitates the acquisition, preparation, and analysis of data, is known in the literature as Big Data pipelines.

    In this thesis, the improvement of three stages of Big Data pipelines is tackled: Data Preparation, Data Quality assessment, and Data Analysis. These improvements can be addressed from an individual perspective, by focusing on each stage, or from a more complex and global perspective, implying the coordination of these stages to create data workflows. The first stage to improve is Data Preparation, by supporting the preparation of data with complex structures (i.e., data with various levels of nested structures, such as arrays). Shortcomings have been found in the literature and in current technologies for transforming complex data in a simple way. This thesis therefore aims to improve the Data Preparation stage through Domain-Specific Languages (DSLs). Specifically, two DSLs are proposed for different use cases: one is a general-purpose data transformation language, while the other is a DSL aimed at extracting event logs in a standard format for process mining algorithms.

    The second area for improvement is the assessment of Data Quality. Depending on the type of Data Analysis algorithm, poor-quality data can seriously skew the results; optimisation algorithms are a clear example, since insufficiently accurate and complete data can severely affect the search space. This thesis therefore formulates a methodology for modelling Data Quality rules adjusted to the context of use, as well as a tool that facilitates the automation of their assessment. This makes it possible to discard data that do not meet the quality criteria defined by the organisation. In addition, the proposal includes a framework that helps to select actions to improve the usability of the data.

    The third and last proposal involves the Data Analysis stage. Here, the thesis faces the challenge of supporting the use of optimisation problems in Big Data pipelines. There is a lack of methodological solutions that allow exhaustive optimisation problems (i.e., those that guarantee finding an optimal solution by exploring the whole search space) to be computed in distributed environments. Solving this type of problem in the Big Data context is computationally complex, and can be NP-complete, for two reasons. On the one hand, the search space can grow significantly as the amount of data to be processed by the optimisation algorithms increases; this challenge is addressed through a technique to generate and group problems with distributed data. On the other hand, processing optimisation problems with complex models and large search spaces in distributed environments is not trivial, so a proposal is presented for a particular case of this type of scenario. As a result, this thesis develops methodologies that have been published in scientific journals and conferences. The methodologies have been implemented in software tools integrated with the Apache Spark data processing engine, and the solutions have been validated through tests and use cases with real datasets.
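    As a rough illustration of the distributed-optimisation idea (assumed, not the thesis' actual technique), the following sketch exhaustively solves a toy problem by partitioning its search space across Spark workers and reducing to the global optimum. The problem, grid, and partition count are invented.

    ```python
    from itertools import product
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("exhaustive-opt").getOrCreate()
    sc = spark.sparkContext

    # Toy problem: minimise f over a discrete grid (the whole search space).
    def f(point):
        x, y = point
        return (x - 3) ** 2 + (y + 1) ** 2

    search_space = list(product(range(-10, 11), repeat=2))  # 441 candidates

    # Distribute candidate solutions, evaluate in parallel, keep the best.
    best = (sc.parallelize(search_space, numSlices=8)
              .map(lambda p: (f(p), p))
              .min())  # min over (cost, point) pairs

    print("optimum:", best)  # -> (0, (3, -1))
    spark.stop()
    ```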

    Enhancing the modelling perspective of process performance management

    Process Performance Management, which comprises the stages of planning, monitoring, and controlling the performance of business processes, is considered an essential part of business process management and provides details of how business processes can be designed and redesigned to improve their performance. The performance of business processes is usually measured by means of process performance indicators (PPIs), which are evaluated and monitored to determine whether strategic and operational goals are being achieved. Like business processes, and in fact together with them, these PPIs must be defined and modelled, so in this thesis we focus on their modelling perspective. This perspective allows us to describe in detail all the PPIs associated with a business process, specifying the set of attributes that define them and the information that needs to be obtained from the business process to compute them. Most proposals related to measuring business process performance have focused on structured, repetitive, and highly defined business processes. Changes and new business requirements have given rise to new needs in business process management that call for more flexibility, for instance to manage collections of business process alternatives and highly dynamic, knowledge-intensive processes. However, current process performance management techniques have not evolved at the same pace as business processes. Those proposals cannot be used "as is" in scenarios with business processes that require more flexibility, because such processes have a different nature. For example, they cannot define PPIs over collections of business processes, or define PPIs that take into account process elements not usually present in traditional processes, such as decisions, interactions, or collaborations between participants, thus creating a gap between business processes and the measurement of their performance.

    To address this challenge, this thesis aims to push the current boundaries of process performance measurement. To do so, we build on existing, well-founded techniques, mainly applicable to structured business processes whose behaviour is mostly known a priori, and extend them to meet the new requirements identified. Specifically, we propose a set of artefacts focused on the definition and modelling perspective of performance measures and indicators. First, we propose extending the PPINOT metamodel, a metamodel previously used for the definition and modelling of process performance indicators, in four directions: the total or partial reuse of PPI definitions; the modelling of PPIs taking into account the variability of processes and of the PPIs themselves; the definition of PPIs over knowledge-intensive processes; and the relationship of PPI definitions with a particular set of process elements, such as decisions. The metamodel extensions are designed to work with relevant proposals in each of the areas addressed, such as KIPO (Knowledge-Intensive Process Ontology), an ontology that provides constructs for the definition of knowledge-intensive processes; DMN (Decision Model and Notation), a standard for the definition and modelling of decisions; and Provop and C-iEPC, two business process modelling languages associated with variability. To facilitate the representation of PPI definitions and models, we also propose extending the two PPINOT notations: Visual PPINOT, a graphical notation, and the template-based notation, which uses linguistic patterns for PPI definitions. Beyond these extensions, we make two further contributions: a methodology, based on the integration of PPINOT and KIPO and on the concepts of lead and lag indicators, aimed at guiding participants in implementing PPIs according to business goals; and guidelines, in the form of a set of steps, that can be used to identify the decisions that affect process performance. Finally, we have taken the first steps towards implementing the metamodel extensions in the PPINOT modelling tools; specifically, we have modified the graphical editor to facilitate the modelling of PPIs through the reuse of definitions.

    We have validated the proposals through different scenarios and case studies: analysing the processes and performance measures of the SCOR (Supply Chain Operations Reference) model, using measures and indicators provided by the Servicio Andaluz de Salud (Spain), and through a case study in an information and communication technology outsourcing company in Brazil. Our proposal to "Enhance the Modelling Perspective of Process Performance Management" allows us to: reuse total or partial PPI definitions in different PPIs or business processes; define PPIs over a collection of business processes (a process family) taking into account the variability of the processes and of the PPIs themselves; define PPIs over knowledge-intensive processes; use a methodology to guide participants in implementing PPIs according to business goals; analyse the impact of business-process-related decisions on PPIs; define decision performance indicators (DPIs) to measure the performance of business-process-related decisions; and use process performance information in the definition of decisions.
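    A hypothetical illustration, not the PPINOT metamodel itself: a PPI definition captured as a small data structure, showing the kind of attributes the modelling perspective records and how a definition could be partially reused for a process variant. All attribute names and values are invented.

    ```python
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class PPI:
        name: str
        process: str    # business process the PPI is attached to
        measure: str    # measure definition, e.g. a duration between events
        target: str     # target condition the indicator should satisfy
        scope: str      # process instances considered when computing it

    avg_resolution = PPI(
        name="Average resolution time",
        process="Incident management",
        measure="avg(time between 'incident registered' and 'incident closed')",
        target="<= 2 working days",
        scope="instances of the last month",
    )

    # Partial reuse of a PPI definition, one of the proposed extensions:
    # the same measure redefined over a process variant with another target.
    variant_ppi = replace(avg_resolution,
                          process="Incident management (VIP)",
                          target="<= 4 hours")
    print(variant_ppi)
    ```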

    Effects of Quantitative Measures on Understanding Inconsistencies in Business Rules

    Business rules have matured into an important aspect of the development of organizations, encoding company knowledge as declarative constraints aimed at ensuring compliant business. The management of business rules is widely acknowledged as a challenging task. One problem here is the potential inconsistency of business rules, as business rules are often created collaboratively. To support companies in managing inconsistency, many works have suggested that a quantification of inconsistencies could provide valuable insights. However, the actual effects of quantitative insights in business rules management have not yet been evaluated. In this work, we present the results of an empirical experiment using eye-tracking and other performance measures to analyze the effects of quantitative measures on understanding inconsistencies in business rules. Our results indicate that quantitative measures are associated with higher understanding accuracy, higher understanding efficiency, and less mental effort in business rules management.
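    The abstract does not spell out which measures were used, but a standard way to quantify inconsistency in a rule base is to count its minimal inconsistent subsets (the MI measure from the inconsistency-measurement literature). A brute-force sketch over toy implication rules, with all rules and atoms invented:

    ```python
    from itertools import combinations, product

    # Rules are (body, head) implications over propositional atoms; a body
    # of None marks a fact. Rules 1 and 2 conflict for gold customers.
    rules = [
        (None, "gold"),               # fact: the customer is gold
        ("gold", "discount"),         # gold customer => discount
        ("gold", "not discount"),     # gold customer => no discount
        ("new", "welcome_mail"),      # unrelated, consistent rule
    ]

    def holds(lit, v):
        """Truth value of a possibly negated literal under assignment v."""
        return v[lit.removeprefix("not ")] ^ lit.startswith("not ")

    def consistent(subset):
        """Brute force: does some truth assignment satisfy every rule?"""
        atoms = sorted({l.removeprefix("not ") for r in subset for l in r if l})
        for values in product([True, False], repeat=len(atoms)):
            v = dict(zip(atoms, values))
            if all(holds(h, v) if b is None else (not holds(b, v)) or holds(h, v)
                   for b, h in subset):
                return True
        return False

    def mi_count(rules):
        """Count subsets that are inconsistent but whose proper subsets are not."""
        return sum(
            1
            for k in range(1, len(rules) + 1)
            for sub in combinations(rules, k)
            if not consistent(sub)
            and all(consistent(s) for s in combinations(sub, k - 1))
        )

    print("inconsistency value:", mi_count(rules))  # -> 1
    ```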