
    Evolving Graphs by Graph Programming

    Graphs are a ubiquitous data structure in computer science and can be used to represent solutions to difficult problems in many distinct domains. This motivates the use of Evolutionary Algorithms to search over graphs and efficiently find approximate solutions. However, existing techniques often represent and manipulate graphs in an ad-hoc manner. In contrast, rule-based graph programming offers a formal mechanism for describing relations over graphs. This thesis proposes the use of rule-based graph programming for representing and implementing genetic operators over graphs. We present the Evolutionary Algorithm Evolving Graphs by Graph Programming and a number of its extensions, which are capable of learning stateful and stateless digital circuits, symbolic expressions and Artificial Neural Networks. We demonstrate that rule-based graph programming may be used to implement new and effective constraint-respecting mutation operators, and show that these operators may strictly generalise others found in the literature. Through our proposal of Semantic Neutral Drift, we accelerate the search process by building plateaus into the fitness landscape using domain knowledge of equivalence. We also present Horizontal Gene Transfer, a mechanism whereby graphs may be passively recombined without disrupting their fitness. Through rigorous evaluation and analysis of over 20,000 independent executions of Evolutionary Algorithms, we establish numerous benefits of our approach. We find that on many problems, Evolving Graphs by Graph Programming and its variants may significantly outperform other approaches from the literature. Additionally, our empirical results provide further evidence that neutral drift aids the efficiency of evolutionary search.
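
    To make the idea of a constraint-respecting mutation concrete, here is a minimal sketch of an acyclicity-preserving edge mutation over a directed graph. The edge-list encoding and the helper names (`mutate_edge`, `is_acyclic`) are assumptions for illustration only; the thesis itself expresses such operators as rule-based graph programs, not ad-hoc Python code.

```python
import random

# Sketch: mutate one edge of a directed graph, accepting the rewrite only if
# the result stays acyclic. Encoding and names are illustrative assumptions.

def is_acyclic(nodes, edges):
    """Kahn's algorithm: a topological order exists iff the graph has no cycle."""
    indeg = {n: 0 for n in nodes}
    for _, d in edges:
        indeg[d] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for s, d in edges:
            if s == n:
                indeg[d] -= 1
                if indeg[d] == 0:
                    queue.append(d)
    return seen == len(nodes)

def mutate_edge(nodes, edges):
    """Redirect one edge to a new target, keeping the graph acyclic."""
    src, old_dst = random.choice(edges)
    options = [n for n in nodes if n != src and (src, n) not in edges]
    if not options:
        return edges  # no applicable rewrite, leave the individual unchanged
    new_dst = random.choice(options)
    candidate = [e for e in edges if e != (src, old_dst)] + [(src, new_dst)]
    return candidate if is_acyclic(nodes, candidate) else edges  # reject cycles

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
print(mutate_edge(nodes, edges))
```

    In a rule-based setting the acyclicity check is not a post-hoc filter as above: the application conditions of the rewrite rule can guarantee the constraint by construction.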

    Situation Management with Complex Event Processing

    With the broader dissemination of digital technologies, visionary concepts like the Internet of Things increasingly affect use cases with interfaces to humans, e.g. manufacturing environments in which technical operators monitor the processes. This creates additional challenges: besides technical issues, human aspects must be considered for a successful implementation of strategic initiatives such as Industrie 4.0. From a technical perspective, complex event processing (CEP) has proven itself in practice to be capable of integrating and analyzing huge amounts of heterogeneous data and of establishing a basic level of situation awareness by detecting situations of interest. Whereas this reactive nature of complex event processing systems may be sufficient for machine-to-machine use cases, application fields in which humans remain in the control loop face an increasing action distance and delayed reactions. Taking human aspects into consideration leads to new requirements, the most important being transparency and comprehensibility of the processing of events. Improving the comprehensibility of complex event processing and extending its capabilities towards effective support of human operators allows technical and non-technical challenges to be tackled at the same time. The main contribution of this thesis answers the question of how to evolve state-of-the-art complex event processing from its reactive nature towards a transparent and holistic situation management system, with the goal of improving the interaction between systems and humans in use cases with interfaces between both worlds. Realizing holistic situation management requires three missing capabilities, introduced by the contributions of this thesis. First, building on the achieved transparency, retrospective analysis of situations is enabled by collecting information related to a situation's occurrence and development. To this end, CEP engine-specific situation descriptions are transformed into a common model, allowing the underlying patterns to be automatically decomposed into partial patterns that describe the intermediate states of processing. Second, by introducing the psychological model of situation awareness into complex event processing, human aspects of information processing are taken into consideration within the complex event processing paradigm. Based on this model, an extended situation life-cycle and a transition method are derived. The introduced concepts and methods implement the controlling function of situation management and enable the effective acquisition and maintenance of situation awareness, so that human operators can purposefully direct their attention towards upcoming situations. Finally, completing the set of capabilities for situation management, an approach is presented to support the generation and integration of prediction models for predictive situation management. To this end, methods are introduced to automatically label and extract relevant data for the generation of prediction models and to embed the resulting models for automatic evaluation and execution. The contributions are introduced, applied and evaluated along a scenario from the manufacturing domain.
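
    As a rough illustration of how a situation's development can be made observable to an operator, the sketch below tracks partial matches of a sequence pattern. The event names, the `SituationTracker` class and the reporting format are hypothetical; real CEP engines decompose far richer pattern languages than a plain sequence.

```python
# Sketch: decompose a sequence pattern into partial patterns so intermediate
# states of a developing situation become visible, not only the final match.
# Pattern, event names, and interface are illustrative assumptions.

SEQUENCE = ["overheat_warning", "pressure_spike", "emergency_stop"]  # hypothetical

class SituationTracker:
    def __init__(self, sequence):
        self.sequence = sequence
        self.progress = 0  # how many pattern steps have matched so far

    def on_event(self, event_type):
        # Events that do not advance the pattern are ignored in this simplification.
        if event_type == self.sequence[self.progress]:
            self.progress += 1
            # Reporting each partial match gives the operator transparency into
            # the situation's development rather than only its final detection.
            print(f"partial match {self.progress}/{len(self.sequence)}: {event_type}")
            if self.progress == len(self.sequence):
                print("situation detected: full pattern matched")
                self.progress = 0  # restart the life-cycle for the next occurrence

tracker = SituationTracker(SEQUENCE)
for e in ["overheat_warning", "normal_reading", "pressure_spike", "emergency_stop"]:
    tracker.on_event(e)
```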

    From distributed coordination to field calculus and aggregate computing

    This work has been partially supported by: EU Horizon 2020 project HyVar (www.hyvar-project.eu), GA No. 644298; ICT COST Action IC1402 ARVI (www.cost-arvi.eu); Ateneo/CSP D16D15000360005 project RunVar (runvar-project.di.unito.it). Aggregate computing is an emerging approach to the engineering of complex coordination for distributed systems, based on viewing system interactions in terms of information propagating through collectives of devices, rather than in terms of individual devices and their interaction with their peers and environment. The foundation of this approach is the distillation of a number of prior approaches, both formal and pragmatic, proposed under the umbrella of field-based coordination, culminating in the field calculus, a universal functional programming model for the specification and composition of collective behaviours with equivalent local and aggregate semantics. This foundation has been elaborated into a layered approach to engineering coordination of complex distributed systems, building up to pragmatic applications through intermediate layers encompassing reusable libraries of program components. Furthermore, some of these components are formally shown to satisfy properties such as self-stabilisation, which transfer to whole application services by functional composition. In this survey, we trace the development and antecedents of field calculus, review the field calculus itself and the current state of aggregate computing theory and practice, and discuss a roadmap of current research directions with implications for the development of a broad range of distributed systems. Viroli, Mirko; Beal, Jacob; Damiani, Ferruccio; Audrito, Giorgio; Casadei, Roberto; Pianini, Danilo
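
    A minimal sketch of the flavour of such reusable building blocks is the classic gradient field, where each device estimates its distance to a source by repeatedly taking the minimum over its neighbours' estimates. The synchronous rounds, the fixed line topology and the unit edge distances below are simplifying assumptions; the field calculus expresses this pattern with its `rep` and `nbr` constructs and composes it functionally with other blocks.

```python
# Sketch: a self-stabilising gradient field simulated over five devices.
# Topology and unit distances are assumptions for illustration.

INF = float("inf")

def gradient_round(values, neighbours, sources):
    """One synchronous round: sources hold 0, others relax over neighbour estimates."""
    new = {}
    for device, nbrs in neighbours.items():
        if device in sources:
            new[device] = 0.0
        else:
            new[device] = min((values[n] + 1.0 for n in nbrs), default=INF)
    return new

# Five devices in a line: 0 - 1 - 2 - 3 - 4, with device 0 as the source.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
values = {d: INF for d in neighbours}
for _ in range(6):  # iterate until the field self-stabilises
    values = gradient_round(values, neighbours, {0})
print(values)  # {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0}
```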

    Electronic security - risk mitigation in financial transactions : public policy issues

    This paper builds on a previous series of papers (see Claessens, Glaessner, and Klingebiel, 2001, 2002) that identified electronic security as a key component of the delivery of electronic finance benefits. This paper and its technical annexes (available separately at http://www1.worldbank.org/finance/) identify and discuss seven key pillars necessary to foster a secure electronic environment. It is thus intended for those formulating broad policies in the area of electronic security and those working with financial services providers (for example, executives and management). The detailed annexes are especially relevant for chief information and security officers responsible for establishing layered security. First, the paper provides definitions of electronic finance and electronic security and explains why these issues deserve attention. Next, it presents a picture of the burgeoning global electronic security industry. It then develops a risk-management framework for understanding the risks and tradeoffs inherent in the electronic security infrastructure, and provides examples of tradeoffs that may arise with respect to technological innovation, privacy, quality of service, and security in designing an electronic security policy framework. Finally, it outlines issues in seven interrelated areas that often need attention in building an adequate electronic security infrastructure: 1) the legal framework and enforcement; 2) electronic security of payment systems; 3) supervision and prevention challenges; 4) the role of private insurance as an essential monitoring mechanism; 5) certification, standards, and the role of the public and private sectors; 6) improving the accuracy of information on electronic security incidents and creating better arrangements for sharing this information; and 7) improving overall education on these issues as a key to enhancing prevention. Keywords: Knowledge Economy; Labor Policies; International Terrorism & Counterterrorism; Payment Systems & Infrastructure; Banks & Banking Reform; Education for the Knowledge Economy; Governance Indicators.

    Dynamic Upgrades for High Availability Systems

    In this thesis I show that it is possible to build general-purpose frameworks for efficient, on-line data transformation in support of flexible system services, especially dynamic software updates (DSU). This approach generalizes ideas from prior work on DSU, making them applicable to more situations. In particular, I generalize DSU's notion of in-memory state transformation, normally used to upgrade run-time state to be consistent with the new software, so that it can be applied to data not necessarily stored in memory, and for services other than DSU. To support this thesis, I present three artifacts. First, I present C-strider, a generic, type-aware C heap traversal framework. C-strider is flexible and easy to use, allowing developers to program reusable services that have a heap traversal at their core, e.g., serialization, profiling, invariant checking, and state transformation (in support of DSU). C-strider supports both parallel and single-threaded heap traversals, and I demonstrate that it requires little programmer effort while the resulting services are efficient and effective. Second, I present KVolve, a data transformation service for NoSQL databases. KVolve is notable in that transformations are carried out on-line and on-demand, as data is accessed, rather than off-line and all at once, which would reduce service availability. Experiments with on-line upgrades of services using KVolve show little overhead during normal operation, and only brief pauses at update time. Finally, I present Morpheus, a dynamically updatable software-defined network (SDN) controller. Morpheus' architecture is fundamentally distributed, with each service running as a separate process that accesses a shared KVolve instance. Morpheus can update multiple controller applications without loss of availability or degradation of performance.
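
    The on-line, on-demand style of transformation can be sketched as follows, assuming a toy key-value store with per-key version tags. The `LazyStore` class, the version chain and the `v1_to_v2` transformer are hypothetical stand-ins for illustration, not KVolve's actual interface.

```python
# Sketch: lazy, on-access value migration in a key-value store. Values are
# upgraded when touched rather than all at once offline, so the service
# stays available during an update. Names and schema are assumptions.

CURRENT_VERSION = 2

def v1_to_v2(value):
    """Hypothetical upgrade: split a 'name' field into 'first' and 'last'."""
    first, _, last = value.pop("name").partition(" ")
    value.update({"first": first, "last": last})
    return value

TRANSFORMERS = {1: v1_to_v2}  # chain of per-version upgrade functions

class LazyStore:
    def __init__(self):
        self.data = {}  # key -> (version, value)

    def put(self, key, value, version=CURRENT_VERSION):
        self.data[key] = (version, value)

    def get(self, key):
        version, value = self.data[key]
        while version < CURRENT_VERSION:  # upgrade only what is actually accessed
            value = TRANSFORMERS[version](value)
            version += 1
        self.data[key] = (version, value)  # write back so the cost is paid once
        return value

store = LazyStore()
store.put("user:1", {"name": "Ada Lovelace"}, version=1)
print(store.get("user:1"))  # {'first': 'Ada', 'last': 'Lovelace'}
```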

    Active provenance for data intensive research

    The role of provenance information in data-intensive research is a significant topic of discussion among technical experts and scientists. Typical use cases addressing traceability, versioning and reproducibility of research findings are extended with more interactive scenarios in support, for instance, of computational steering and results management. In this thesis we investigate the impact that lineage records can have on the early phases of the analysis, for instance when performed through near-real-time systems and Virtual Research Environments (VREs) tailored to the requirements of a specific community. By positioning provenance at the centre of the computational research cycle, we highlight the importance of mechanisms at the data scientists' side that, by integrating with the abstractions offered by processing technologies such as scientific workflows and data-intensive tools, facilitate the experts' contribution to the lineage at runtime. Ultimately, by encouraging tuning and use of provenance for rapid feedback, the thesis aims at improving the synergy between different user groups to increase productivity and understanding of their processes. We present a model of provenance, called S-PROV, that uses and further extends PROV and ProvONE. The relationships and properties characterising the workflow's abstractions and their concrete executions are re-elaborated to include aspects related to delegation, distribution and steering of stateful streaming operators. The model is supported by the Active framework for tuneable and actionable lineage, ensuring the user's engagement by fostering rapid exploitation. Here, concepts such as provenance types, configuration and explicit state management allow users to capture complex provenance scenarios and activate selective controls based on domain and user-defined metadata. We outline how the traces are recorded in a new comprehensive system, called S-ProvFlow, enabling different classes of consumers to explore the provenance data with services and tools for monitoring, in-depth validation and comprehensive visual analytics. The work of this thesis is discussed in the context of an existing computational framework and the experience matured in implementing provenance-aware tools for seismology and climate VREs. It will continue to evolve through newly funded projects, thereby providing generic and user-centred solutions for data-intensive research.
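
    As a loose illustration of runtime lineage capture, the sketch below wraps a streaming operator so that each invocation emits a record carrying domain metadata. The record schema, the `provenance` decorator and the `prov_type` argument are assumptions made for the example, not the actual S-PROV model or API.

```python
# Sketch: emit a lineage record per operator invocation, tagged with a
# provenance type and user-defined domain metadata. Schema is illustrative.

import time
import uuid

TRACE = []  # a real system would stream records to a store such as S-ProvFlow

def provenance(prov_type, **user_metadata):
    """Wrap an operator so each invocation appends a lineage record."""
    def wrap(op):
        def traced(*inputs):
            output = op(*inputs)
            TRACE.append({
                "id": str(uuid.uuid4()),
                "operator": op.__name__,
                "prov_type": prov_type,
                "inputs": inputs,
                "output": output,
                "metadata": user_metadata,  # domain metadata for selective control
                "timestamp": time.time(),
            })
            return output
        return traced
    return wrap

@provenance("filter", domain="seismology")
def above_threshold(sample):
    return sample if sample > 0.5 else None

for s in [0.2, 0.7, 0.9]:
    above_threshold(s)
print(len(TRACE), "lineage records captured")
```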

    An agent-based service-oriented approach to evolving legacy software systems into a pervasive computing environment.

    This thesis focuses on an Agent-Based Service-Oriented approach to evolving legacy systems into a Pervasive Computing environment. The methodology consists of multiple phases: using reverse engineering techniques to comprehend and decompose legacy systems, employing XML and Web Services to transform and represent a legacy system as pervasive services, and integrating these pervasive services into pervasive computing environments with agent-based integration technology. A legacy intelligent building system is used as a case study for experiments with the approach, which demonstrates that the proposed approach can evolve legacy systems into pervasive service environments seamlessly. Conclusions are drawn from this analysis, and further research directions are also discussed.
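
    As a rough sketch of the wrap-and-expose step, the following code fronts a hypothetical legacy routine with an HTTP endpoint speaking a small XML payload. The `legacy_set_temperature` function, the endpoint and the XML format are all invented for illustration; they stand in for whatever interfaces reverse engineering recovers from the legacy system.

```python
# Sketch: expose a legacy routine as a service with an XML payload.
# Legacy function, endpoint, and XML schema are hypothetical.

from http.server import BaseHTTPRequestHandler, HTTPServer
import xml.etree.ElementTree as ET

def legacy_set_temperature(zone, value):  # stands in for code in the legacy system
    return f"zone {zone} set to {value} degrees"

class ServiceWrapper(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        request = ET.fromstring(body)  # e.g. <setTemperature zone="lab" value="21"/>
        result = legacy_set_temperature(request.get("zone"), request.get("value"))
        reply = f"<response>{result}</response>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        self.wfile.write(reply)

# HTTPServer(("", 8080), ServiceWrapper).serve_forever()  # start the wrapped service
```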