
    Impact of Fuzzy Logic in Object-Oriented Database Through Blockchain

    In this article, we show that applying fuzzy reasoning to an object-oriented database produces noticeably better results than applying it to a relational database, by applying it to both. A Relational Database Management System (RDBMS) offers a practical and efficient way to locate, store, and retrieve the data held in a collection. In practice, however, users often need to pose vague, ambiguous, or imprecise queries. Our work gives users the freedom to use a Fuzzy Relational Database (FRDB) to query the database in everyday language, enabling a range of answers that benefit users in a variety of ways. The term "fuzzy" reflects the fact that attribute membership degrees in a fuzzy knowledge base range from 0 to 1, because the formalisation of the base relies on fuzzy reasoning. Given the abundance of uncertainty and imprecision in clinical healthcare information, a fuzzy object-oriented database is designed here for the healthcare domain to reduce the fuzziness of the fuzzy relational database. To validate the performance and adequacy of fuzzy logic on both databases, a set of fuzzy queries is posed to the fuzzy relational database and to the fuzzy object-oriented database.
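
    To make the graded-membership idea concrete, here is a minimal sketch of a fuzzy query over a relational-style table. The patient records, the ramp membership function for "high blood pressure", and the alpha cut-off are all invented for illustration; none of them come from the article.

```python
# Minimal sketch of a fuzzy query over a relational-style table.
# The patient records and the membership thresholds are illustrative
# assumptions, not taken from the article.

def high_bp_membership(systolic: float) -> float:
    """Degree in [0, 1] to which a systolic reading counts as 'high'."""
    if systolic <= 120:
        return 0.0
    if systolic >= 160:
        return 1.0
    return (systolic - 120) / (160 - 120)  # linear ramp between the bounds

patients = [
    {"name": "A", "systolic": 118},
    {"name": "B", "systolic": 135},
    {"name": "C", "systolic": 172},
]

# Fuzzy SELECT: return rows whose membership exceeds a cut-off (alpha-cut).
alpha = 0.3
for row in patients:
    mu = high_bp_membership(row["systolic"])
    if mu >= alpha:
        print(f"{row['name']}: membership {mu:.2f}")
```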

    High Performance Frequent Subgraph Mining on Transactional Datasets

    Graph data mining has been a crucial and unavoidable area of research. Large amounts of graph data are produced in many areas, such as bioinformatics, cheminformatics, social networks, and the Web. Scalable graph data mining methods are becoming increasingly popular and necessary as graph complexity grows. Frequent subgraph mining is one such area, where the task is to find frequently recurring patterns/subgraphs. To tackle this problem, many main-memory-based methods were proposed, which proved inefficient as data sizes grew exponentially over time. In the past few years several research groups have attempted to handle the frequent subgraph mining (FSM) problem in multiple ways. Many authors have achieved better performance using Graphics Processing Units (GPUs), which offer a multi-fold improvement over in-memory approaches when dealing with large datasets. Later, Google's MapReduce model with the Hadoop framework proved to be a major breakthrough in high-performance large-batch processing. Although MapReduce came with many benefits, its disk I/O and non-iterative model could not help much in the FSM domain, since subgraph mining is an iterative process. In recent years, Spark has emerged as the de facto industry standard with its distributed in-memory computing capability, which also makes it a good fit for iterative programming. In this work, we cover how high-performance computing has dramatically improved performance on transactional directed and undirected graphs, and we compare various FSM techniques based on experimental results.
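
    Since the abstract turns on FSM being inherently iterative, a level-wise sketch may help. The following is a deliberately simplified Apriori-style loop in which a "subgraph" is approximated by a set of labelled edges and isomorphism testing is reduced to set containment; real algorithms such as gSpan are far more involved, and the toy molecule graphs are invented.

```python
# Level-wise (Apriori-style) sketch of frequent subgraph mining over a
# transactional dataset. To stay short, a "subgraph" is approximated by a
# set of labelled edges and isomorphism checking is reduced to subset
# containment -- a deliberate simplification of real FSM algorithms.

from itertools import combinations

# Each transaction graph is a frozenset of labelled edges (u_label, v_label, bond).
graphs = [
    frozenset({("C", "C", 1), ("C", "O", 1), ("C", "N", 1)}),
    frozenset({("C", "C", 1), ("C", "O", 1)}),
    frozenset({("C", "O", 1), ("C", "N", 1)}),
]
min_support = 2

def support(pattern):
    """Number of transaction graphs containing the pattern."""
    return sum(1 for g in graphs if pattern <= g)

# Level 1: frequent single edges.
edges = {e for g in graphs for e in g}
frequent = [frozenset({e}) for e in edges if support(frozenset({e})) >= min_support]
level = 1
while frequent:
    print(f"level {level}: {[sorted(p) for p in frequent]}")
    # Candidate generation: join patterns that differ by exactly one edge,
    # then prune by support -- this is the iteration MapReduce handles poorly.
    candidates = {a | b for a, b in combinations(frequent, 2) if len(a | b) == level + 1}
    frequent = [c for c in candidates if support(c) >= min_support]
    level += 1
```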

    Error management in ATLAS TDAQ : an intelligent systems approach

    This thesis is concerned with the use of intelligent system techniques (IST) within a large distributed software system, specifically the ATLAS TDAQ system, which has been developed and is currently in use at the European Laboratory for Particle Physics (CERN). The overall aim is to investigate and evaluate a range of ISTs in order to improve the error management system (EMS) currently used within the TDAQ system via error detection and classification. The thesis work will provide a reference for future research and development of such methods in the TDAQ system. The thesis begins by describing the TDAQ system and the existing EMS, with a focus on the underlying expert system approach, in order to identify areas where improvements can be made using IST techniques. It then discusses measures for evaluating error detection and classification techniques and the factors specific to the TDAQ system. Error conditions are then simulated in a controlled manner using an experimental setup, and datasets are gathered from two different sources. Analysis and processing of the datasets using statistical and IST techniques show that clusters exist in the data corresponding to the different simulated errors. Different IST techniques are applied to the gathered datasets in order to realise an error detection model. These techniques include Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Cartesian Genetic Programming (CGP), and a comparison of their respective advantages and disadvantages is made. The principal conclusions from this work are that ISTs can be successfully used to detect errors in the ATLAS TDAQ system and can thus provide a tool to improve the overall error management system. It is of particular importance that ISTs can be used without detailed knowledge of the system, as the ATLAS TDAQ is too complex for a single person to understand completely. The results of this research will benefit researchers developing and evaluating IST techniques in similar large-scale distributed systems.
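
    As an illustration of the classification step, here is a hedged sketch of training one of the named techniques (an SVM, via scikit-learn) on simulated monitoring data. The features, error classes, and distributions are invented stand-ins for the thesis' actual TDAQ datasets.

```python
# Hedged sketch of the error-classification step: an SVM trained on
# simulated monitoring data. The features and error classes are invented
# for illustration; the thesis' datasets came from the TDAQ setup.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Simulated per-node metrics: [cpu_load, queue_length, msg_rate].
normal = rng.normal([0.3, 10, 100], [0.1, 3, 10], size=(200, 3))
overload = rng.normal([0.9, 80, 100], [0.05, 10, 10], size=(200, 3))
net_fault = rng.normal([0.3, 60, 20], [0.1, 10, 5], size=(200, 3))

X = np.vstack([normal, overload, net_fault])
y = np.array([0] * 200 + [1] * 200 + [2] * 200)  # 0=ok, 1=overload, 2=network fault

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```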

    A type-2 fuzzy logic based goal-driven simulation for optimising field service delivery

    This thesis develops an intelligent system capable of incorporating the conditions that drive operational activity while implementing the means to handle unexpected factors and protect business sustainability. The solution aims to optimise field service operations in the utility industry, especially within one of the world's leading communications services companies, BT (British Telecom), which operates in highly regulated and competitive markets. Because the telecommunication sector is an essential driver of economic activity, intelligent solutions must be able to explain to humans the underlying algorithms that power their final decisions. In this regard, this thesis addresses the following research gaps: the lack of integrated solutions that go beyond isolated monolithic architectures; the lack of agile end-to-end frameworks for handling uncertainty while business targets are defined; the absence of explainable methodologies in current solutions to target-oriented problems, whose limited explainability makes them inapplicable in highly regulated industries; and the lack of scalability of most tools in real-world scenarios. Hence the need for an integrated, intelligent solution to these target-oriented simulation problems. This thesis aims to close the gaps above by exploiting fuzzy logic capabilities such as mimicking human reasoning and handling uncertainty. It also draws on the field of Explainable AI, particularly the strategies and characteristics for deploying more transparent intelligent solutions that humans can understand. These foundations allow the thesis to unlock explainability, transparency and interpretability. The thesis develops a series of techniques with the following features: the formalisation of an end-to-end framework that dynamically learns from data; a novel fuzzy membership correlation analysis approach to enhance performance; a novel fuzzy logic-based method to evaluate the relevance of inputs; a robust optimisation method for operational sustainability in the telecommunications sector; an agile modelling approach for scalability and consistency; a novel fuzzy logic system for goal-driven simulation that achieves specific business targets before implementation under real-life conditions; and a novel simulation environment with visual tools that enhance interpretability while moving from conventional simulation to a target-oriented model. The proposed tool was developed on data from BT reflecting real-world operational conditions; the data was protected and anonymised in compliance with BT's information-sharing regulations. The techniques presented in this thesis yield significant improvements aligned with institutional targets. As detailed in Section 9.5, the proposed system can model a reduction of between 3.78% and 5.36% in carbon emissions from travel for job completion on customer premises in specific geographical areas, and the proposed framework generates simulation scenarios 13 times faster than conventional approaches.
    As described in Section 9.6, these improvements contribute an estimated 2.6% to productivity and customer satisfaction metrics for keeping appointment times, completing orders in the promised timeframe, and fixing faults when agreed. The proposed tool makes it possible to evaluate decisions before acting; as detailed in Section 9.7, this contributes an estimated 1% to the 'promoters' minus 'detractors' measure across business units.
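
    A minimal sketch of the interval type-2 construct underlying the thesis may help: membership is an interval rather than a single value, giving a footprint of uncertainty that absorbs noisy inputs. The "long travel time" variable and its triangular parameters are illustrative assumptions, not BT's models.

```python
# Minimal sketch of an interval type-2 fuzzy set. Membership is an
# interval [lower, upper], so the footprint of uncertainty can absorb
# noisy inputs. The parameters below are illustrative only.

def tri(x, a, b, c):
    """Type-1 triangular membership with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x):
    """Interval type-2 membership for 'long travel time' (minutes)."""
    upper = tri(x, 10, 40, 70)        # upper membership function
    lower = 0.8 * tri(x, 20, 40, 60)  # lower membership function, narrower
    return lower, upper

for travel_time in (15, 40, 55):
    lo, hi = it2_membership(travel_time)
    print(f"{travel_time} min -> membership in [{lo:.2f}, {hi:.2f}]")
```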

    Knowledge Based Systems: A Critical Survey of Major Concepts, Issues, and Techniques

    This Working Paper Series entry presents a detailed survey of knowledge based systems. After many years in a relatively dormant state, Artificial Intelligence (AI) - that branch of computer science that attempts to have machines emulate intelligent behavior - has only recently begun accomplishing practical results. Most of these results can be attributed to the design and use of Knowledge-Based Systems, KBSs (or expert systems) - problem-solving computer programs that can reach a level of performance comparable to that of a human expert in some specialized problem domain. These systems can act as consultants for various needs such as medical diagnosis, military threat analysis, and project risk assessment. They possess knowledge that enables them to make intelligent decisions, but they are not meant to replace the human specialists in any particular domain. A critical survey of recent work in interactive KBSs is reported. A case study (MYCIN) of a KBS, a list of existing KBSs, and an introduction to the Japanese Fifth Generation Computer Project are provided as appendices. Finally, an extensive set of KBS-related references is provided at the end of the report.
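
    As a flavour of how such systems reason, below is a toy forward-chaining loop with MYCIN-style certainty factors. The rules, facts, and the simple combination scheme are invented for illustration and are far cruder than MYCIN's actual certainty-factor calculus.

```python
# Toy forward-chaining inference loop in the style of the rule-based KBSs
# surveyed above. Rules and certainty factors are invented for illustration.

rules = [
    # (premises, conclusion, certainty factor of the rule)
    ({"fever", "stiff_neck"}, "meningitis_suspected", 0.7),
    ({"meningitis_suspected"}, "order_lumbar_puncture", 0.9),
]

facts = {"fever": 1.0, "stiff_neck": 0.8}  # fact -> certainty

changed = True
while changed:
    changed = False
    for premises, conclusion, cf in rules:
        if premises <= facts.keys() and conclusion not in facts:
            # Combine: rule CF scaled by the weakest premise certainty.
            facts[conclusion] = cf * min(facts[p] for p in premises)
            changed = True

for fact, certainty in facts.items():
    print(f"{fact}: {certainty:.2f}")
```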

    Socio-Cognitive and Affective Computing

    Social cognition focuses on how people process, store, and apply information about other people and social situations, and on the role that cognitive processes play in social interactions. The term cognitive computing, on the other hand, generally refers to new hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making. In this sense, it is a type of computing whose goal is to discover more accurate models of how the human brain/mind senses, reasons, and responds to stimuli. Socio-Cognitive Computing should be understood as a set of interdisciplinary theoretical frameworks, methodologies, methods and hardware/software tools for modelling how the human brain mediates social interactions. In addition, Affective Computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects, a fundamental aspect of socio-cognitive neuroscience. It is an interdisciplinary field spanning computer science, electrical engineering, psychology, and cognitive science. Physiological Computing is a category of technology in which electrophysiological data recorded directly from human activity are used to interface with a computing device. This technology becomes even more relevant when computing can be integrated pervasively into everyday life environments. Thus, Socio-Cognitive and Affective Computing systems should be able to adapt their behavior according to the Physiological Computing paradigm. This book integrates proposals from researchers who use signals from the brain and/or body to infer people's intentions and psychological state in smart computing systems. The design of such systems combines knowledge and methods of ubiquitous and pervasive computing, as well as physiological data measurement and processing, with those of socio-cognitive and affective computing.

    Development of a toolkit for component-based automation systems

    From the earliest days of mass production in the automotive industry there has been a progressive move towards flexible manufacturing systems that cater for product variants to meet market demands. In recent years this market has become more demanding, with pressures from legislation, globalisation and increased customer expectations, leading to the current trend of mass customisation in production. To support this, manufacturing systems are not only becoming more flexible, to cope with the increased product variants, but also more agile, so that they may respond more rapidly to market changes. Modularisation is widely used to increase the agility of automation systems so that they may be more readily reconfigured. Also, with globalisation into India and Asia, semi-automatic machines (machines that interact with human operators) are more frequently used to reduce capital outlay and increase flexibility. There is an increasing need for tools and methodologies that support this, in order to improve design robustness, reduce design time and gain a competitive edge in the market. The research presented in this thesis builds upon the work from COMPAG/COMPANION (COMponent-based Paradigm for AGile automation, and COmmon Model for PArtNers in automatION), conducted as part of the BDA (Business Driven Automation), SOCRADES (Service Oriented Cross-layer infrastructure for Distributed smart Embedded deviceS), and IMC-AESOP (ArchitecturE for Service-Oriented Process monitoring and control) projects at Loughborough University, UK. This research details the design and implementation of a toolkit for building and simulating automation systems comprising components whose behaviour is described using Finite State Machines (FSMs). The research focus is the development of an engineering toolkit that can support the automation system lifecycle from initial design through commissioning to maintenance and reconfiguration, as well as the integration of a virtual human. This is achieved using a novel data structure that supports component definitions for control, simulation and maintenance, and the novel integration of a virtual human into the automation system operation.
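
    To illustrate a component with FSM-described behaviour, here is a minimal sketch of a single component modelled as a finite state machine. The clamp actuator, its states, and its events are hypothetical and not taken from the toolkit.

```python
# Minimal sketch of an automation component whose behaviour is a finite
# state machine. States, events, and the clamp actuator are illustrative.

class FSMComponent:
    def __init__(self, name, transitions, initial):
        self.name = name
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def handle(self, event):
        """Apply an event; ignore it if no transition is defined."""
        key = (self.state, event)
        if key in self.transitions:
            old, self.state = self.state, self.transitions[key]
            print(f"{self.name}: {old} --{event}--> {self.state}")
        else:
            print(f"{self.name}: event '{event}' ignored in state '{self.state}'")

clamp = FSMComponent(
    "clamp",
    transitions={
        ("open", "close_cmd"): "closing",
        ("closing", "closed_sensor"): "closed",
        ("closed", "open_cmd"): "opening",
        ("opening", "open_sensor"): "open",
    },
    initial="open",
)

for ev in ("close_cmd", "closed_sensor", "close_cmd", "open_cmd"):
    clamp.handle(ev)
```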

    Big data analytics for large-scale wireless networks: Challenges and opportunities

    The wide proliferation of wireless communication systems and wireless devices has led to the arrival of the big data era in large-scale wireless networks. Big data in large-scale wireless networks has the key features of wide variety, high volume, real-time velocity, and huge value, leading to unique research challenges that differ from those of existing computing systems. In this article, we present a survey of state-of-the-art big data analytics (BDA) approaches for large-scale wireless networks. In particular, we categorize the life cycle of BDA into four consecutive stages: Data Acquisition, Data Preprocessing, Data Storage, and Data Analytics. We then present a detailed survey of the technical solutions to the challenges in BDA for large-scale wireless networks according to each stage in the life cycle of BDA. Moreover, we discuss the open research issues and outline future directions in this promising area.
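
    The four-stage life cycle can be sketched end to end in miniature. The following reduces each stage to plain Python so the flow is visible; the record format is invented, and a real deployment would back these stages with distributed ingestion, storage, and analytics engines.

```python
# Schematic of the four-stage BDA life cycle named in the survey, reduced
# to plain Python so each stage is visible. The record format is invented.

raw_stream = [
    {"cell": "A1", "rssi": -71, "ts": 1},
    {"cell": "A1", "rssi": None, "ts": 2},  # corrupted reading
    {"cell": "B2", "rssi": -95, "ts": 3},
]

# 1. Data Acquisition: ingest records from the (simulated) network feed.
acquired = list(raw_stream)

# 2. Data Preprocessing: drop corrupted readings.
clean = [r for r in acquired if r["rssi"] is not None]

# 3. Data Storage: append to a stand-in store keyed by cell.
store: dict[str, list] = {}
for r in clean:
    store.setdefault(r["cell"], []).append(r)

# 4. Data Analytics: per-cell mean signal strength.
for cell, rows in store.items():
    mean_rssi = sum(r["rssi"] for r in rows) / len(rows)
    print(f"{cell}: mean RSSI {mean_rssi:.1f} dBm over {len(rows)} samples")
```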

    Skill-based reconfiguration of industrial mobile robots

    Caused by rising mass customisation and the high variety of equipment versions, the flexibility of manufacturing systems in car production has to be increased. In addition to flexible handling of production load changes or hardware breakdowns, which are established research areas in the literature, this thesis presents a skill-based reconfiguration mechanism for industrial mobile robots to enhance functional reconfigurability. The proposed holonic multi-agent system is able to react to functional process changes while missing functionalities are created by self-organisation. Applied to a mobile commissioning system provided by AUDI AG, the suggested mechanism is validated in a real-world environment, including the on-line verification of the reconfigured robot functionality in a Validity Check. The present thesis includes an original contribution in three aspects: First, a reconfiguration mechanism is presented that reacts in a self-organised way to functional process changes. The application layer of a hardware system converts a semantic description into functional requirements for a new robot skill. The result of this mechanism is the on-line integration of a new functionality into the running process. Second, the proposed system allows maintaining the productivity of the running process and flexibly changing the robot hardware through provision of a hardware-abstraction layer. An encapsulated Reconfiguration Holon dynamically includes the actual configuration each time a reconfiguration is started. This allows reacting to changed environment settings. As the resulting agent that contains the new functionality is identical in shape and behaviour to the existing skills, its integration into the running process is conducted without a considerable loss of productivity. Third, the suggested mechanism is composed of a novel agent design that allows implementing self-organisation during the encapsulated reconfiguration and dependability for standard process executions. The selective assignment of behaviour-based and cognitive agents is the basis for the flexibility and effectiveness of the proposed reconfiguration mechanism.
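
    A hedged sketch of the skill-based idea: when a requested skill is missing, it is composed from registered primitives and checked before joining the running process. The skill names, the composition rule, and the validity check are invented for illustration; the thesis' holonic agent architecture is far richer.

```python
# Sketch of skill-based reconfiguration: a missing skill is composed
# on-line from registered primitives and validated before it joins the
# running process. All names here are illustrative assumptions.

class SkillRegistry:
    def __init__(self):
        self.skills = {}  # name -> callable

    def register(self, name, fn):
        self.skills[name] = fn

    def request(self, name, compose_from=None):
        if name in self.skills:
            return self.skills[name]
        if compose_from:
            # Self-organised reconfiguration: chain existing primitives.
            parts = [self.skills[p] for p in compose_from]

            def composed(item):
                for part in parts:
                    item = part(item)
                return item

            # Stand-in for the Validity Check before on-line integration.
            assert composed("test-part") is not None
            self.register(name, composed)
            return composed
        raise LookupError(f"skill '{name}' unavailable and not composable")

registry = SkillRegistry()
registry.register("pick", lambda item: f"picked({item})")
registry.register("place", lambda item: f"placed({item})")

# 'transfer' is missing, so it is composed on-line from pick + place.
transfer = registry.request("transfer", compose_from=["pick", "place"])
print(transfer("part-7"))  # -> placed(picked(part-7))
```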