
    MERGING EPISTEMIC AND TEMPORAL MODELS: A HISTORY-FREE APPROACH

    There are two approaches to merging temporal and epistemic models. The first starts with a temporal model and enriches it with an epistemic dimension (as in temporal epistemic logic), while the second starts with an epistemic model and introduces a temporal dimension (dynamic epistemic logic, epistemic temporal logic). The proposed evolutionary epistemic model (EEM) is based on the standard epistemic model extended with an evolutionary relation. EEM captures knowledge change in terms of the evolution of worlds included in different epistemic contexts. Unlike other temporal-epistemic models, EEM is free from the concept of history and is enriched with quantification operators over the worlds' evolution stages.
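    The abstract describes the model only at a high level; the following Python sketch shows one plausible way such a structure could be encoded (worlds, per-agent epistemic relations, an evolutionary relation, and a quantifier over evolution stages). All names and the reflexive treatment of knowledge are assumptions for illustration, not the paper's definitions.

```python
class EvolutionaryEpistemicModel:
    """Illustrative encoding: a standard epistemic (Kripke) model plus an
    evolutionary relation linking worlds to their successor evolution stages."""

    def __init__(self, worlds, epistemic, evolution, valuation):
        self.worlds = worlds          # e.g. {"w0", "w1"}
        self.epistemic = epistemic    # agent -> set of (w, v) indistinguishability pairs
        self.evolution = evolution    # set of (w, w') pairs: w evolves into w'
        self.valuation = valuation    # world -> set of atomic propositions true there

    def knows(self, agent, atom, world):
        """Agent knows atom at world: it holds in every world the agent cannot
        distinguish from the current one (the current world included)."""
        accessible = {v for (u, v) in self.epistemic[agent] if u == world}
        return all(atom in self.valuation[v] for v in accessible | {world})

    def knows_at_some_stage(self, agent, atom, world):
        """A 'some evolution stage' quantifier (assumed semantics): the agent
        knows atom at some world reachable via the evolutionary relation."""
        frontier, seen = [world], {world}
        while frontier:
            w = frontier.pop()
            if self.knows(agent, atom, w):
                return True
            for (u, v) in self.evolution:
                if u == w and v not in seen:
                    seen.add(v)
                    frontier.append(v)
        return False


# Toy usage: agent "a" does not know p at stage w0 but does at the later stage w1.
m = EvolutionaryEpistemicModel(
    worlds={"w0", "w1"},
    epistemic={"a": {("w0", "w0"), ("w1", "w1")}},
    evolution={("w0", "w1")},
    valuation={"w0": set(), "w1": {"p"}},
)
print(m.knows_at_some_stage("a", "p", "w0"))  # True
```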

    State of Büchi Complementation

    Complementation of Büchi automata has been studied for over five decades since the formalism was introduced in 1960. Known complementation constructions can be classified into Ramsey-based, determinization-based, rank-based, and slice-based approaches. Regarding the performance of these approaches, there have been several complexity analyses but very few experimental results. What is especially lacking is a comparative experiment on all four approaches to see how they perform in practice. In this paper, we review the four approaches, propose several optimization heuristics, and perform comparative experimentation on four representative constructions that are considered the most efficient in each approach. The experimental results show that (1) the determinization-based Safra-Piterman construction outperforms the other three in producing smaller complements and finishing more tasks in the allocated time and (2) the proposed heuristics substantially improve the Safra-Piterman and the slice-based constructions.
    Comment: 28 pages, 4 figures; a preliminary version of this paper appeared in the Proceedings of the 15th International Conference on Implementation and Application of Automata (CIAA
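    The four complementation constructions are too involved for a short excerpt, but a hedged sketch of the underlying formalism may help: the Python code below encodes a nondeterministic Büchi automaton and decides acceptance of an ultimately periodic ("lasso") word prefix·loop^ω, using the standard criterion that some state reachable on the prefix followed by finitely many copies of the loop can return to itself on one or more copies of the loop while visiting an accepting state. The encoding and names are illustrative, not taken from the paper.

```python
class BuchiAutomaton:
    """Minimal nondeterministic Buchi automaton (illustrative encoding)."""

    def __init__(self, states, delta, initial, accepting):
        self.states = states        # set of states
        self.delta = delta          # (state, letter) -> set of successor states
        self.initial = initial      # set of initial states
        self.accepting = accepting  # set of Buchi (accepting) states

    def _step(self, sources, word):
        """States reachable from 'sources' after reading the finite word."""
        current = set(sources)
        for letter in word:
            current = {q for p in current for q in self.delta.get((p, letter), set())}
        return current

    def _loop_reach(self, loop):
        """(p, q) -> flag: can p reach q on one copy of 'loop', and if so,
        does some such run visit an accepting state (endpoints included)?"""
        reach = {}
        for p in self.states:
            configs = {(p, p in self.accepting)}
            for letter in loop:
                configs = {(q, seen or q in self.accepting)
                           for (s, seen) in configs
                           for q in self.delta.get((s, letter), set())}
            for (q, seen) in configs:
                reach[(p, q)] = reach.get((p, q), False) or seen
        return reach

    def accepts_lasso(self, prefix, loop):
        """Does the automaton accept the infinite word prefix . loop^omega?"""
        one = self._loop_reach(loop)
        # Transitive closure over one or more copies of 'loop', keeping the flag.
        closure = dict(one)
        changed = True
        while changed:
            changed = False
            for (p, q), f1 in list(closure.items()):
                for (q2, r), f2 in one.items():
                    if q2 != q:
                        continue
                    flag = f1 or f2
                    if (p, r) not in closure or (flag and not closure[(p, r)]):
                        closure[(p, r)] = flag
                        changed = True
        # States reachable on 'prefix' followed by any number of copies of 'loop'.
        reachable = frontier = self._step(self.initial, prefix)
        while frontier:
            frontier = {r for p in frontier for (p2, r) in one if p2 == p} - reachable
            reachable = reachable | frontier
        # Accept iff some such state can return to itself through an accepting state.
        return any(closure.get((q, q), False) for q in reachable)


# Toy usage: this automaton accepts exactly the words with infinitely many b's.
aut = BuchiAutomaton(
    states={"q0", "q1"},
    delta={("q0", "a"): {"q0"}, ("q0", "b"): {"q1"},
           ("q1", "a"): {"q0"}, ("q1", "b"): {"q1"}},
    initial={"q0"},
    accepting={"q1"},
)
print(aut.accepts_lasso("a", "ab"))  # True:  a . (ab)^omega has infinitely many b's
print(aut.accepts_lasso("a", "a"))   # False: a^omega has none
```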

    Supporting the grow-and-prune model for evolving software product lines

    Software Product Lines (SPLs) aim at supporting the development of a whole family of software products through the systematic reuse of shared assets. To this end, SPL development is separated into two interrelated processes: (1) domain engineering (DE), where the scope and variability of the system are defined and reusable core assets are developed; and (2) application engineering (AE), where products are derived by selecting core assets and resolving variability. Evolution in SPLs is considered more challenging than in traditional systems, as both core assets and products need to co-evolve. The so-called grow-and-prune model has proven to be a flexible way to evolve an SPL incrementally: products are first allowed to grow, and the product functionalities deemed useful are later pruned by refactoring and merging them back into the reusable SPL core-asset base. This thesis aims at supporting the grow-and-prune model with respect to initiating and enacting the pruning. Initiating the pruning requires SPL engineers to conduct customization analysis, i.e. analyzing how products have changed the core assets. Customization analysis aims at identifying interesting product customizations to be ported to the core-asset base. However, existing tools do not fulfill engineers' needs for this practice. To address this issue, this thesis elaborates on SPL engineers' needs when conducting customization analysis and proposes a data-warehouse approach to help SPL engineers with the analysis. Once the interesting customizations have been identified, the pruning needs to be enacted. This means that product code needs to be ported to the core-asset realm, while products are upgraded with newer functionalities and bug fixes available in newer core-asset releases. Synchronizing both parties through sync paths is therefore required. However, state-of-the-art tools are not tailored to SPL sync paths, and this hinders synchronizing core assets and products. To address this issue, this thesis proposes to leverage existing Version Control Systems (i.e. git/GitHub) to provide sync operations as first-class constructs
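    The thesis' concrete sync operators are not described in the abstract; as a rough, assumed illustration, the sketch below expresses two sync paths with plain git commands driven from Python: porting selected product commits back to a core-asset branch (pruning) and merging a newer core-asset release into a product branch (growing). Branch names, repository layout, and the choice of cherry-pick and merge are assumptions, not the thesis' actual operations.

```python
import subprocess


def git(*args, repo="."):
    """Thin wrapper: run a git command inside 'repo' and fail loudly on error."""
    subprocess.run(["git", *args], cwd=repo, check=True)


def prune_customization(repo, core_branch, commits):
    """Port selected product commits (a customization deemed useful) back to
    the core-asset branch by cherry-picking them."""
    git("checkout", core_branch, repo=repo)
    for sha in commits:                 # commits identified during customization analysis
        git("cherry-pick", sha, repo=repo)


def grow_product(repo, product_branch, core_branch):
    """Upgrade a product with the latest core-asset functionalities and bug
    fixes by merging the core-asset branch into the product branch."""
    git("checkout", product_branch, repo=repo)
    git("merge", "--no-ff", core_branch, repo=repo)
```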

    Logics of Temporal-Epistemic Actions

    We present Dynamic Epistemic Temporal Logic, a framework for reasoning about operations on multi-agent Kripke models that contain a designated temporal relation. These operations are natural extensions of the well-known "action models" from Dynamic Epistemic Logic. Our "temporal action models" may be used to define a number of informational actions that can modify the "objective" temporal structure of a model along with the agents' basic and higher-order knowledge and beliefs about this structure, including their beliefs about the time. In essence, this approach provides one way to extend the domain of action model-style operations from atemporal Kripke models to temporal Kripke models in a manner that allows actions to control the flow of time. We present a number of examples to illustrate the subtleties involved in interpreting the effects of our extended action models on temporal Kripke models. We also study preservation of important epistemic-temporal properties of temporal Kripke models under temporal action model-induced operations, provide complete axiomatizations for two theories of temporal action models, and connect our approach with previous work on time in Dynamic Epistemic Logic
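    For readers unfamiliar with action models, the sketch below shows the standard Dynamic Epistemic Logic product update in Python, extended with one naive, assumed clause for composing the designated temporal relation (both the world and the event components must be related). The paper's actual temporal action models and their update semantics may well differ; this is only meant to fix intuitions about the shape of the operation.

```python
from itertools import product


def product_update(model, action):
    """Standard DEL product update plus an assumed temporal clause.

    model:  {"worlds", "epistemic" (agent -> set of pairs),
             "temporal" (set of pairs), "val" (world -> set of atoms)}
    action: {"events", "epistemic", "temporal",
             "pre" (event -> predicate over (model, world))}
    """
    # Keep only pairs (world, event) whose precondition holds.
    worlds = {(w, e) for w, e in product(model["worlds"], action["events"])
              if action["pre"][e](model, w)}
    # Two pairs are indistinguishable iff both components are.
    epistemic = {
        agent: {(we, vf) for we, vf in product(worlds, worlds)
                if (we[0], vf[0]) in model["epistemic"][agent]
                and (we[1], vf[1]) in action["epistemic"][agent]}
        for agent in model["epistemic"]
    }
    # Assumed temporal clause: (w, e) precedes (v, f) iff w precedes v and e precedes f.
    temporal = {(we, vf) for we, vf in product(worlds, worlds)
                if (we[0], vf[0]) in model["temporal"]
                and (we[1], vf[1]) in action["temporal"]}
    # Events are purely informational here: valuations are inherited from the world.
    val = {we: model["val"][we[0]] for we in worlds}
    return {"worlds": worlds, "epistemic": epistemic, "temporal": temporal, "val": val}
```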

    Automata Techniques for Epistemic Protocol Synthesis

    In this work we aim at applying automata techniques to problems studied in Dynamic Epistemic Logic, such as epistemic planning. To do so, we first remark that repeatedly executing a propositional event model ad infinitum from an initial epistemic model yields a relational structure that can be finitely represented with automata. This correspondence, together with recent results on uniform strategies, allows us to give an alternative decidability proof of the epistemic planning problem for propositional events, with, as by-products, accurate upper bounds on its time complexity and the possibility to synthesize a finite word automaton that describes the set of all solution plans. In fact, using automata techniques enables us to solve a much more general problem, which we introduce and call epistemic protocol synthesis
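    The automata construction itself is beyond a short excerpt, but the planning problem it decides can be illustrated with a brute-force search: enumerate event words, apply product updates, and keep the words whose resulting model satisfies the goal. The sketch below is a bounded breadth-first enumeration under assumed callbacks `update` and `goal`; it is not the paper's uniform-strategy or automaton-based method.

```python
from collections import deque


def solution_plans(initial_model, events, update, goal, max_len=6):
    """Enumerate solution plans (finite event words) up to a bounded length.

    update(model, event) -> successor model, or None if the precondition fails
    goal(model)          -> True when the epistemic goal holds
    """
    queue = deque([(initial_model, ())])
    while queue:
        model, plan = queue.popleft()
        if goal(model):
            yield plan
        if len(plan) < max_len:
            for event in events:
                successor = update(model, event)
                if successor is not None:
                    queue.append((successor, plan + (event,)))
```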

    The Monarch Initiative in 2024: an analytic platform integrating phenotypes, genes and diseases across species.

    Bridging the gap between genetic variations, environmental determinants, and phenotypic outcomes is critical for supporting clinical diagnosis and understanding mechanisms of diseases. It requires integrating open data at a global scale. The Monarch Initiative advances these goals by developing open ontologies, semantic data models, and knowledge graphs for translational research. The Monarch App is an integrated platform combining data about genes, phenotypes, and diseases across species. Monarch's APIs enable access to carefully curated datasets and advanced analysis tools that support the understanding and diagnosis of disease for diverse applications such as variant prioritization, deep phenotyping, and patient profile-matching. We have migrated our system into a scalable, cloud-based infrastructure; simplified Monarch's data ingestion and knowledge graph integration systems; enhanced data mapping and integration standards; and developed a new user interface with novel search and graph navigation features. Furthermore, we advanced Monarch's analytic tools by developing a customized plugin for OpenAI's ChatGPT to increase the reliability of its responses about phenotypic data, allowing us to interrogate the knowledge in the Monarch graph using state-of-the-art Large Language Models. The resources of the Monarch Initiative can be found at monarchinitiative.org and its corresponding code repository at github.com/monarch-initiative/monarch-app
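    As a hedged illustration of programmatic access, the snippet below queries a search endpoint of the Monarch App API with the requests library. The base URL, path, parameters, and response fields are assumptions made for the example; the current API documentation at monarchinitiative.org should be consulted before relying on them.

```python
import requests

# Assumed base URL and endpoint; check the Monarch App documentation for the
# actual API surface before using this in practice.
BASE = "https://api-v3.monarchinitiative.org/v3/api"


def search_entities(query, limit=5):
    """Free-text search for genes, diseases, and phenotypes (illustrative)."""
    response = requests.get(f"{BASE}/search",
                            params={"q": query, "limit": limit},
                            timeout=30)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    for item in search_entities("Marfan syndrome").get("items", []):
        print(item.get("id"), item.get("name"))
```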

    A family of experiments to validate measures for UML activity diagrams of ETL processes in data warehouses

    In data warehousing, Extract, Transform, and Load (ETL) processes are in charge of extracting the data from the data sources that will be contained in the data warehouse. Their design and maintenance is thus a cornerstone in any data warehouse development project. Due to their relevance, the quality of these processes should be formally assessed early in the development in order to avoid populating the data warehouse with incorrect data. To this end, this paper presents a set of measures with which to evaluate the structural complexity of ETL process models at the conceptual level. This study is, moreover, accompanied by the application of formal frameworks and a family of experiments whose aim is to theoretically and empirically validate the proposed measures, respectively. Our experiments show that the use of these measures can aid designers to predict the effort associated with the maintenance tasks of ETL processes and to make ETL process models more usable. Our work is based on Unified Modeling Language (UML) activity diagrams for modeling ETL processes, and on the Framework for the Modeling and Evaluation of Software Processes (FMESP) framework for the definition and validation of the measures.
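    The abstract does not list the measures themselves; the sketch below computes a few plausible structural-complexity counts (activity nodes, decision nodes, control flows) over a minimal stand-in for a UML activity diagram. The measure names and definitions are illustrative assumptions, not the validated set from the paper.

```python
from dataclasses import dataclass


@dataclass
class ActivityDiagram:
    """Minimal stand-in for a UML activity diagram of an ETL process."""
    nodes: dict   # node id -> kind: "activity", "decision", "merge", "fork", "join"
    edges: set    # set of (source id, target id) control-flow edges


def structural_measures(diagram: ActivityDiagram) -> dict:
    """Illustrative structural-complexity measures for an ETL process model."""
    kinds = list(diagram.nodes.values())
    n_nodes = max(1, len(diagram.nodes))             # avoid division by zero
    return {
        "activities": kinds.count("activity"),       # number of activity nodes
        "decisions": kinds.count("decision"),        # number of decision nodes
        "control_flows": len(diagram.edges),         # number of control flows
        "avg_fan_out": len(diagram.edges) / n_nodes  # average outgoing flows per node
    }


# Toy usage: a three-step ETL flow with one decision point.
etl = ActivityDiagram(
    nodes={"extract": "activity", "check": "decision",
           "transform": "activity", "load": "activity"},
    edges={("extract", "check"), ("check", "transform"),
           ("check", "load"), ("transform", "load")},
)
print(structural_measures(etl))
```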

    Geographic Information System for Business Intelligence: A Case Study at the Superintendencia de Economía Popular y Solidaria

    The relationship between Business Intelligence and Geographic Information Systems is closer than ever, which has led many business intelligence suites to include a geographic component. This research work presents the advantages of integrating Business Intelligence (BI) with Geographic Information Systems (GIS). Implementing a decision-support system that combines a Business Intelligence suite with a Geographic Information System gives a comprehensive view of the key indicators of both internal and external information about the organization, which helps reduce empiricism in decision making. For this work we used Business Intelligence suites with a geographic component, in order to show the close relationship between the two technologies and how useful it is for the end user to generate information through static reports, interactive reports, and dashboards that use transactional data together with spatial relations. Spatial relationships help identify characteristics that are not very explicit in tabular data views, which is why they provide a holistic view of the information. When integrating Business Intelligence with Geographic Information Systems, we are no longer talking exclusively about online analytical processing (OLAP), data mining, extract-transform-load (ETL) processes, tabular reports, and tabular dashboards, but rather about spatial online analytical processing (SOLAP), spatial data mining, spatial ETL processes, reports integrated with maps, and dashboards that include maps. The integration of these technologies and the proper management of their components is carried out through a Business Intelligence Competency Center, a multidisciplinary team of employees whose purpose is to support the strategic decision-making process in the organization.
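    As a small, hedged illustration of the kind of spatial roll-up the abstract contrasts with purely tabular OLAP, the sketch below joins point transactions to region polygons with geopandas and sums a measure per region. The file paths and column names ('amount', 'region_name') are assumptions for the example, not artifacts of the case study.

```python
import geopandas as gpd


def sales_by_region(transactions_path, regions_path):
    """Aggregate a transactional measure by spatial containment: assign each
    point transaction to the region polygon that contains it, then roll up."""
    points = gpd.read_file(transactions_path)    # point geometries + an 'amount' column
    regions = gpd.read_file(regions_path)        # polygon geometries + a 'region_name' column
    joined = gpd.sjoin(points, regions, predicate="within")
    return joined.groupby("region_name")["amount"].sum()


# Example (assumed files): print(sales_by_region("sales_points.gpkg", "regions.gpkg"))
```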