
    Simulation of complex environments: the Fuzzy Cognitive Agent

    The world is becoming increasingly competitive through the liberalisation of national and global markets. In parallel, these markets have become increasingly complex, making it difficult for participants to optimise their trading actions. In response, many different computer simulation techniques have been investigated, either to develop a deeper understanding of these evolving markets or to create effective system support tools. In this paper we report our efforts to develop a novel simulation platform using fuzzy cognitive agents (FCA). Our approach encapsulates fuzzy cognitive maps (FCM) generated on the Matlab Simulink platform within commercially available agent software. We first present our implementation of Matlab Simulink FCMs and then show how such FCMs can be integrated within a conceptual FCA architecture. Finally, we report on our efforts to realise an FCA by integrating a Matlab Simulink based FCM with the Jack Intelligent Agent Toolkit.
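    As background for the FCM machinery the paper builds on, the following is a minimal sketch of the standard FCM inference rule such a Simulink block would implement: each concept aggregates the weighted activations of its causal neighbours and squashes the result through a sigmoid. The three market concepts and the weight matrix are invented for illustration, not taken from the paper's model.

```python
import numpy as np

def sigmoid(x, steepness=1.0):
    """Standard FCM squashing function mapping activations into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-steepness * x))

def fcm_step(activations, weights, steepness=1.0):
    """One synchronous FCM update: each concept aggregates the weighted
    activations of its causal neighbours, then squashes the result."""
    return sigmoid(weights.T @ activations, steepness)

# Illustrative 3-concept market map (names and weights are hypothetical):
# 0: market demand, 1: price, 2: trading volume
W = np.array([[ 0.0,  0.6,  0.8],   # demand raises price and volume
              [-0.7,  0.0, -0.4],   # price dampens demand and volume
              [ 0.3,  0.2,  0.0]])  # volume feeds back weakly

state = np.array([0.8, 0.5, 0.3])
for _ in range(20):                  # iterate toward a (near) fixed point
    state = fcm_step(state, W)
print(state)
```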

    Refactorings of Design Defects using Relational Concept Analysis

    Software engineers often need to identify and correct design defects, i.e., recurring design problems that hinder development and maintenance by making programs harder to comprehend and/or evolve. While detection of design defects is an actively researched area, their correction, mainly a manual and time-consuming activity, is yet to be extensively investigated for automation. In this paper, we propose an automated approach for suggesting defect-correcting refactorings using relational concept analysis (RCA). The added value of RCA consists in exploiting the links between formal objects, which abound in a software re-engineering context. We validated our approach on instances of the Blob design defect taken from four different open-source programs.
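    For readers new to concept analysis, the sketch below enumerates the formal concepts of a toy object-attribute context by brute-force closure; RCA extends this plain FCA setting with relations between several contexts, which the sketch omits. The classes and properties are hypothetical.

```python
from itertools import combinations

# Toy formal context: objects (classes) x attributes (properties).
context = {
    "ClassA": {"large", "many_fields"},
    "ClassB": {"large", "controller"},
    "ClassC": {"many_fields"},
}
attributes = set().union(*context.values())

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by all objects in objs."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

# Brute-force enumeration of formal concepts as (extent, intent) pairs:
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(sorted(attributes), r):
        e = extent(set(attrs))
        concepts.add((frozenset(e), frozenset(intent(e))))

for e, i in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(e), "->", sorted(i))
```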

    Attribute Exploration of Gene Regulatory Processes

    This thesis aims at the logical analysis of discrete processes, in particular those generated by gene regulatory networks. States, transitions, and operators from temporal logics are expressed in the language of Formal Concept Analysis. By the attribute exploration algorithm, an expert or a computer program is enabled to validate a minimal and complete set of implications, e.g. by comparing predictions derived from the literature with observed data. Here, these rules represent temporal dependencies within gene regulatory networks, including coexpression of genes, reachability of states, invariants, or possible causal relationships. This new approach is embedded into the theory of universal coalgebras, particularly automata, Kripke structures, and Labelled Transition Systems. A comparison with the temporal expressivity of Description Logics is made. The main theoretical results concern the integration of background knowledge into the successive exploration of the defined data structures (formal contexts). Applying the method, a Boolean network from the literature modelling sporulation of Bacillus subtilis is examined. Finally, we developed an asynchronous Boolean network for extracellular matrix formation and destruction in the context of rheumatoid arthritis. Comment: 111 pages, 9 figures, file size 2.1 MB, PhD thesis, University of Jena, Germany, Faculty of Mathematics and Computer Science, 2011. Online available at http://www.db-thueringen.de/servlets/DocumentServlet?id=1960
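    The heart of attribute exploration is the expert step: confronting a proposed implication with observed data and returning a counterexample if one exists. A minimal sketch of that check, with invented state and attribute names rather than the actual B. subtilis model, might look as follows.

```python
# Observed states of a (hypothetical) regulatory network, each described
# by the set of attributes that hold in it.
states = {
    "s0": {"geneA_on", "geneB_on"},
    "s1": {"geneA_on"},
    "s2": {"geneB_on", "reachable_spore"},
}

def check_implication(premise, conclusion):
    """Return None if 'premise -> conclusion' holds in every state,
    otherwise the name of a counterexample state."""
    for name, attrs in states.items():
        if premise <= attrs and not conclusion <= attrs:
            return name
    return None

print(check_implication({"geneA_on"}, {"geneB_on"}))  # -> 's1'
```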

    Algorithmic Regulation using AI and Blockchain Technology

    This thesis investigates the application of AI and blockchain technology to the domain of Algorithmic Regulation. Algorithmic Regulation refers to the use of intelligent systems for the enabling and enforcement of regulation (often referred to as RegTech in financial services). The research work focuses on three problems: a) machine interpretability of regulation; b) regulatory reporting of data; and c) federated analytics with data compliance. Uniquely, this research was designed, implemented, tested and deployed in collaboration with the Financial Conduct Authority (FCA), Santander and RegulAItion, and part funded by the InnovateUK RegNet project. I am a co-founder of RegulAItion.

    Using AI to Automate the Regulatory Handbook: In this investigation we propose the use of reasoning systems for encoding financial regulation as machine-readable and executable rules. We argue that our rules-based "white-box" approach is needed, as opposed to a "black-box" machine learning approach, because regulators need explainability, and we outline the theoretical foundation needed to encode regulation from the FCA Handbook into machine-readable semantics. We then present the design and implementation of a production-grade regulatory reasoning system built on top of the Java Expert System Shell (JESS) and use it to encode a subset of regulation (consumer credit regulation) from the FCA Handbook. We then perform an empirical evaluation, with the regulator, of the system's performance and accuracy in handling 600 "real-world" queries, and compare it with its human equivalent. The findings suggest that the proposed approach of using reasoning systems not only provides quicker responses, but also more accurate and explainable answers to queries.

    SmartReg: Using Blockchain for Regulatory Reporting: In this investigation we explore the use of distributed ledgers for real-time reporting of data for compliance between firms and regulators. Regulators and firms recognise the growing burden and complexity of regulatory reporting resulting from the lack of data standardisation, the increasing complexity of regulation, and the lack of machine-executable rules. The investigation presents a) the design and implementation of a permissioned Quorum-Ethereum based regulatory reporting network that makes use of an off-chain reporting service to execute machine-readable rules on banks' data through smart contracts; b) a means for cross-border regulators to share reporting data with each other, giving them a true global view of systemic risk; and c) a means to carry out regulatory reporting using a novel pull-based approach, where the regulator is able to directly "pull" relevant data out of the banks' environments on an ad-hoc basis, enabling regulators to become more active when addressing risk. We validate the approach and implementation of our system through a pilot use case with a bank and regulator. The outputs of this investigation have informed the Digital Regulatory Reporting initiative, an FCA and UK Government led project to improve regulatory reporting in financial services.

    RegNet: Using Federated Learning and Blockchain for Privacy Preserving Data Access: In this investigation we explore the use of federated machine learning and trusted data access for analytics. With the development of stricter data regulation (e.g. GDPR), it is increasingly difficult to share data for collective analytics in a compliant manner. We argue that for data compliance, data does not need to be shared; rather, trusted data access is needed. The investigation presents a) the design and implementation of RegNet, an infrastructure for trusted data access in a secure and privacy-preserving manner for a singular algorithmic purpose, where the algorithms (such as Federated Learning) are orchestrated to run within the infrastructure of data owners; b) a taxonomy for Federated Learning; and c) the tokenization and orchestration of Federated Learning through smart contracts for auditable governance. We validate our approach and the infrastructure (RegNet) through a real-world use case, involving a number of banks, that makes use of Federated Learning with epsilon-differential privacy for improving the performance of an anti-money-laundering classification model.
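    To make the "white-box" rules idea concrete, here is a hypothetical Python sketch of a regulation clause encoded as an executable, explainable rule; the thesis itself uses JESS, and the threshold and field names below are invented, not taken from the FCA Handbook.

```python
from dataclasses import dataclass

@dataclass
class CreditAgreement:
    apr: float
    term_months: int
    risk_warning_given: bool

def rule_high_cost_warning(agreement):
    """If APR exceeds a (hypothetical) high-cost threshold, a risk
    warning must have been given; returns (compliant, explanation)
    so every verdict carries its own justification."""
    if agreement.apr > 100.0 and not agreement.risk_warning_given:
        return False, "APR > 100% requires a risk warning (rule HC-1)"
    return True, "No rule violated"

ok, why = rule_high_cost_warning(CreditAgreement(120.0, 12, False))
print(ok, "-", why)
```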

    Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data

    Constraint Programming (CP) has proved an effective paradigm to model and solve difficult combinatorial satisfaction and optimisation problems from disparate domains. Many such problems arising from the commercial world are permeated by data uncertainty. Existing CP approaches that accommodate uncertainty are less suited to uncertainty arising from incomplete and erroneous data, because they do not build reliable models and solutions guaranteed to address the user's genuine problem as she perceives it. Other fields, such as reliable computation, offer combinations of models and associated methods to handle these types of uncertain data, but lack an expressive framework characterising the resolution methodology independently of the model. We present a unifying framework that extends the CP formalism in both model and solutions to tackle ill-defined combinatorial problems with incomplete or erroneous data. The certainty closure framework brings together modelling and solving methodologies from different fields into the CP paradigm to provide reliable and efficient approaches for uncertain constraint problems. We demonstrate the applicability of the framework on a case study in network diagnosis. We define resolution forms that give generic templates, and their associated operational semantics, to derive practical solution methods for reliable solutions. Comment: Revised version
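    As a generic illustration of reliable reasoning over uncertain data (plain interval arithmetic, not the full certainty closure framework), the sketch below asks whether a constraint certainly holds for every value the data may take, or only possibly holds for some value.

```python
# Uncertain measurements represented as closed intervals (lo, hi).
def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def certainly_leq(iv, bound):
    return iv[1] <= bound      # worst case still satisfies the constraint

def possibly_leq(iv, bound):
    return iv[0] <= bound      # best case satisfies the constraint

x, y = (1.0, 2.0), (0.5, 1.5)  # e.g. two uncertain network delays
s = interval_add(x, y)
print(certainly_leq(s, 4.0))   # True: even the worst case 3.5 <= 4
print(certainly_leq(s, 3.0))   # False: worst case 3.5 > 3
print(possibly_leq(s, 3.0))    # True: best case 1.5 <= 3
```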

    The CUBIST Project: Combining and Uniting Business Intelligence with Semantic Technologies

    As a preface to this special 'CUBIST' edition of the International Journal of Intelligent Information Technologies (IJIIT), this article describes the European Framework Seven Combining and Uniting Business Intelligence with Semantic Technologies (CUBIST) project, which ran from October 2010 to September 2013. The project aimed to combine the best elements of traditional BI with the newer semantic technologies of the Semantic Web, in the form of the Resource Description Framework (RDF) and Formal Concept Analysis (FCA). CUBIST's purpose was to provide end-users with "conceptually relevant and user friendly visual analytics" to allow them to explore their data in new ways, discovering hidden meaning and solving hitherto difficult problems. To this end, three of the partners in CUBIST were use-cases: recruitment consultancy, computational biology, and the space industry. Each use-case provided its own requirements and problems, which were finally addressed by the prototype CUBIST visual analytics developed in the project.
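    A minimal sketch of the kind of bridge CUBIST builds between RDF and FCA: read triples with rdflib and derive a binary object-attribute context from them. The namespace and triples are invented for illustration.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")

# A tiny RDF graph, e.g. from the recruitment use-case domain.
g = Graph()
g.add((EX.alice, EX.hasSkill, EX.java))
g.add((EX.alice, EX.hasSkill, EX.sql))
g.add((EX.bob,   EX.hasSkill, EX.java))

# Formal context: objects = subjects, attributes = skill values.
context = {}
for s, p, o in g.triples((None, EX.hasSkill, None)):
    context.setdefault(s, set()).add(o)

for subj, skills in context.items():
    print(subj, "->", sorted(str(x) for x in skills))
```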

    A framework for evaluating the quality of modelling languages in MDE environments

    This thesis presents the Multiple Modelling Quality Evaluation Framework method (hereinafter MMQEF), which is a conceptual, methodological, and technological framework for evaluating quality issues in modelling languages and modelling elements through the application of a taxonomic analysis. It derives analytic procedures that support the detection of quality issues in model-driven projects, such as the suitability of modelling languages, traces between abstraction levels, specification of model transformations, and integration between modelling proposals. MMQEF also suggests metrics to perform analytic procedures based on the classification obtained for the modelling languages and artifacts under evaluation. MMQEF uses a taxonomy that is extracted from the Zachman framework for Information Systems (Zachman, 1987; Sowa and Zachman, 1992), which proposed a visual language to classify elements that are part of an Information System (IS), ranging from organizational to technical artifacts. The visual language contains a bi-dimensional matrix for classifying IS elements (generally expressed as models) and a set of seven rules to perform the classification. As an evaluation method, MMQEF defines activities in order to derive quality analytics based on the classification applied to modelling languages and elements. The Zachman framework was chosen because it was one of the first and most precise proposals for a reference architecture for IS, as recognized by important standards such as ISO 42010 (612, 2011). This thesis presents the conceptual foundation of the evaluation framework, which is based on the definition of quality for model-driven engineering (MDE). The methodological and technological support of MMQEF is also described. Finally, some validations of MMQEF are reported. Giraldo Velásquez, FD. (2017). A framework for evaluating the quality of modelling languages in MDE environments [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90628
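    A hypothetical sketch of MMQEF's core classification idea: place modelling artifacts into a bi-dimensional (perspective x aspect) matrix in the style of the Zachman framework, from which a simple coverage analytic can be read off. The artifacts and the chosen rows and columns are illustrative.

```python
# Zachman-style rows (perspectives) and columns (aspects), abridged.
perspectives = ["scope", "business", "system", "technology"]
aspects = ["data", "function", "network"]

# Classification matrix: each cell collects the artifacts placed there.
matrix = {(p, a): [] for p in perspectives for a in aspects}
matrix[("business", "data")].append("conceptual ER model")
matrix[("system", "function")].append("UML activity diagram")

# A simple analytic: empty cells hint at modelling-language
# coverage gaps in the project under evaluation.
gaps = [cell for cell, artifacts in matrix.items() if not artifacts]
print(f"{len(gaps)} of {len(matrix)} cells uncovered, e.g. {gaps[0]}")
```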

    Requirements for Information Extraction for Knowledge Management

    Knowledge Management (KM) systems inherently suffer from the knowledge acquisition bottleneck: the difficulty of modeling and formalizing knowledge relevant to specific domains. A potential solution to this problem is Information Extraction (IE) technology. However, IE was originally developed for database population, and there is a mismatch between what is required to successfully perform KM and what current IE technology provides. In this paper we begin to address this issue by outlining requirements for IE-based KM.

    EEG motor imagery decoding: A framework for comparative analysis with channel attention mechanisms

    The objective of this study is to investigate the application of various channel attention mechanisms within the domain of brain-computer interfaces (BCI) for motor imagery decoding. Channel attention mechanisms can be seen as a powerful evolution of the spatial filters traditionally used for motor imagery decoding. This study systematically compares such mechanisms by integrating them into a lightweight architecture framework to evaluate their impact. We carefully construct a straightforward and lightweight baseline architecture designed to seamlessly integrate different channel attention mechanisms. This approach contrasts with previous works, which investigate only one attention mechanism and usually build a very complex, sometimes nested, architecture. Our framework allows us to evaluate and compare the impact of different attention mechanisms under the same circumstances. The easy integration of different channel attention mechanisms, together with the low computational complexity, enables us to conduct a wide range of experiments on four datasets to thoroughly assess the effectiveness of the baseline model and the attention mechanisms. Our experiments demonstrate the strength and generalizability of our architecture framework, as well as how channel attention mechanisms can improve performance while maintaining the small memory footprint and low computational complexity of our baseline architecture. Our architecture emphasizes simplicity, offering easy integration of channel attention mechanisms, while maintaining a high degree of generalizability across datasets, making it a versatile and efficient solution for EEG motor imagery decoding within brain-computer interfaces. Comment: Submitted to Journal of Neural Engineering
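    As one concrete example of the kind of mechanism being compared, the sketch below implements squeeze-and-excitation style channel attention over EEG channels in PyTorch; the hyperparameters and tensor sizes are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: squeeze the time
    axis, learn per-channel weights, and reweight the EEG channels."""
    def __init__(self, n_channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(n_channels // reduction, n_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):            # x: (batch, channels, time)
        w = self.fc(x.mean(dim=-1))  # squeeze over time, excite per channel
        return x * w.unsqueeze(-1)   # reweight EEG channels

x = torch.randn(8, 22, 1000)          # e.g. 22 EEG channels, 1000 samples
print(ChannelAttention(22)(x).shape)  # torch.Size([8, 22, 1000])
```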