4,663 research outputs found

    The roundtable: an abstract model of conversation dynamics

    Full text link
    Is it possible to abstract a formal mechanism originating schisms and governing the size evolution of social conversations? In this work a constructive solution to this problem is proposed: an abstract model of a generic N-party turn-taking conversation. The model develops from simple yet realistic assumptions derived from experimental evidence, abstracts from conversation content and semantics while including topological information, and is driven by stochastic dynamics. We find that a single mechanism - namely the dynamics of a conversational party's individual fitness, as related to conversation size - controls the development of the self-organized schisming phenomenon. Potential generalizations of the model - including individual traits and preferences, memory effects, and more elaborate conversational topologies - may also find important applications in other fields of research, where dynamically interacting and networked agents play a fundamental role.
    Comment: 18 pages, 4 figures, to be published in the Journal of Artificial Societies and Social Simulation
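
    As a rough illustration of the kind of model described in this abstract, the sketch below simulates an N-party turn-taking conversation in which a party's fitness shrinks as the conversation grows and low-fitness parties split off into a side conversation. The fitness rule, parameter values, and schism condition are illustrative assumptions, not the paper's actual dynamics.

        import random

        # Hypothetical parameters; the paper's actual fitness rule and thresholds differ.
        N_PARTIES = 8
        STEPS = 200
        SCHISM_THRESHOLD = 0.2
        GAIN, DECAY = 0.5, 0.1

        def simulate():
            # Each conversation is a dict: party id -> fitness.
            conversations = [{i: 1.0 for i in range(N_PARTIES)}]
            for _ in range(STEPS):
                for conv in list(conversations):
                    if len(conv) < 2:
                        continue
                    # Speaker chosen stochastically, weighted by current fitness.
                    parties, weights = zip(*conv.items())
                    speaker = random.choices(parties, weights=weights)[0]
                    for p in conv:
                        if p == speaker:
                            conv[p] = min(1.0, conv[p] + GAIN)
                        else:
                            # Larger conversations erode each listener's fitness faster.
                            conv[p] = max(0.0, conv[p] - DECAY * len(conv) / N_PARTIES)
                    # Schism: two or more low-fitness parties leave to start a side conversation.
                    leavers = [p for p, f in conv.items() if f < SCHISM_THRESHOLD]
                    if len(leavers) >= 2:
                        for p in leavers:
                            del conv[p]
                        conversations.append({p: 1.0 for p in leavers})
            return [sorted(c) for c in conversations]

        print(simulate())  # final conversation groupings; they vary from run to run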

    Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks

    Get PDF
    Biological plastic neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and developments are presented.
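
    To make the notion of an evolved plastic component concrete, the sketch below evolves the coefficients of a generalized Hebbian (ABCD) learning rule on a toy sign-copying task with a simple (1+4) evolution strategy. The rule, task, and hyper-parameters are illustrative assumptions, not a specific EPANN from the reviewed literature.

        import numpy as np

        rng = np.random.default_rng(0)

        def hebbian_update(w, pre, post, genome, lr=0.05):
            """Generalized Hebbian rule; the coefficients A..D are evolved rather than hand-designed."""
            A, B, C, D = genome
            return w + lr * (A * np.outer(post, pre) + B * pre + C * post[:, None] + D)

        def evaluate(genome, steps=50):
            """Toy lifetime: the plastic weights should learn to copy the input's sign."""
            w = np.zeros((1, 2))
            score = 0.0
            for _ in range(steps):
                x = np.array([rng.choice([-1.0, 1.0]), 1.0])   # input plus bias
                y = np.tanh(w @ x)                             # plastic forward pass
                w = hebbian_update(w, x, y, genome)
                score -= float((y[0] - x[0]) ** 2)             # reward matching the input
            return score

        # (1+4) evolution of the plasticity rule itself.
        best = rng.normal(size=4)
        for _ in range(30):
            offspring = [best + rng.normal(scale=0.2, size=4) for _ in range(4)]
            best = max(offspring + [best], key=evaluate)
        print("evolved Hebbian coefficients:", best)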

    Towards a Layered Architectural View for Security Analysis in SCADA Systems

    Full text link
    Supervisory Control and Data Acquisition (SCADA) systems support and control the operation of many critical infrastructures that our society depends on, such as power grids. Since SCADA systems have become a target for cyber attacks and the potential impact of a successful attack could lead to disastrous consequences in the physical world, ensuring the security of these systems is of vital importance. A fundamental prerequisite to securing a SCADA system is a clear understanding and a consistent view of its architecture. However, because of the complexity and scale of SCADA systems, this is challenging to acquire. In this paper, we propose a layered architectural view for SCADA systems, which aims at building a common ground among stakeholders and supporting the implementation of security analysis. In order to manage the complexity and scale, we define four interrelated architectural layers and use the concept of viewpoints to focus on a subset of the system. We indicate the applicability of our approach in the context of SCADA system security analysis.
    Comment: 7 pages, 4 figures
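
    One way to picture such a layered view is sketched below: components are assigned to layers, and a viewpoint selects the subset of the architecture a particular security analysis focuses on. The layer names and components are placeholders, since the abstract does not enumerate the paper's four layers.

        from dataclasses import dataclass, field

        # Placeholder layer names; the paper defines its own four interrelated layers.
        LAYERS = ["field devices", "control", "supervisory", "enterprise"]

        @dataclass
        class Component:
            name: str
            layer: str
            connects_to: list = field(default_factory=list)

        @dataclass
        class Viewpoint:
            """A viewpoint narrows the view to the layers a stakeholder cares about."""
            name: str
            layers: set

            def view(self, components):
                return [c for c in components if c.layer in self.layers]

        system = [
            Component("RTU-1", "field devices", ["PLC-1"]),
            Component("PLC-1", "control", ["SCADA-server"]),
            Component("SCADA-server", "supervisory", ["historian"]),
            Component("historian", "enterprise"),
        ]

        perimeter_vp = Viewpoint("perimeter analysis", {"supervisory", "enterprise"})
        print([c.name for c in perimeter_vp.view(system)])  # -> ['SCADA-server', 'historian']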

    Multi-criteria Evolution of Neural Network Topologies: Balancing Experience and Performance in Autonomous Systems

    Full text link
    The majority of Artificial Neural Network (ANN) implementations in autonomous systems use a fixed/user-prescribed network topology, leading to sub-optimal performance and low portability. The existing neuro-evolution of augmenting topology (NEAT) paradigm offers a powerful alternative by allowing the network topology and the connection weights to be simultaneously optimized through an evolutionary process. However, most NEAT implementations allow the consideration of only a single objective. There also persists the question of how to tractably introduce topological diversification that mitigates overfitting to training scenarios. To address these gaps, this paper develops a multi-objective neuro-evolution algorithm. While adopting the basic elements of NEAT, important modifications are made to the selection, speciation, and mutation processes. With the backdrop of small-robot path-planning applications, an experience-gain criterion is derived to encapsulate the amount of diverse local environment encountered by the system. This criterion facilitates the evolution of genes that support exploration, thereby seeking to generalize from a smaller set of mission scenarios than is possible with performance maximization alone. The effectiveness of the single-objective (performance-optimizing) and multi-objective (performance- and experience-gain-optimizing) neuro-evolution approaches is evaluated on two different small-robot cases, with ANNs obtained by the multi-objective optimization observed to provide superior performance in unseen scenarios.
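
    The sketch below isolates the multi-objective selection step: candidates are scored on task performance and an experience-gain criterion, and only Pareto non-dominated candidates are kept as parents. The objective values are random stand-ins, and the paper's specific changes to NEAT's selection, speciation, and mutation are not reproduced here.

        import random

        # Each candidate network is summarized by two objectives (both maximized):
        # task performance and an experience-gain score for the diversity of local
        # environments it encountered. The values below are random stand-ins.
        population = [{"id": i,
                       "performance": random.random(),
                       "experience_gain": random.random()} for i in range(20)]

        def dominates(a, b):
            """a Pareto-dominates b if it is no worse in both objectives and better in at least one."""
            keys = ("performance", "experience_gain")
            return (all(a[k] >= b[k] for k in keys)
                    and any(a[k] > b[k] for k in keys))

        def pareto_front(pop):
            return [a for a in pop if not any(dominates(b, a) for b in pop if b is not a)]

        parents = pareto_front(population)
        print("non-dominated parents:", sorted(p["id"] for p in parents))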

    Orchestrator conversation: distributed management of cloud applications

    Get PDF
    Managing cloud applications is complex, and the current state of the art does not address this issue. The ever-growing software ecosystem continues to increase the knowledge required to manage cloud applications at a time when there is already an IT skills shortage. Solving this issue requires capturing IT operations knowledge in software so that this knowledge can be reused by system administrators who do not have it. The presented research tackles this issue by introducing a new and fundamentally different way to approach cloud application management: a hierarchical collection of independent software agents, collectively managing the cloud application. Each agent encapsulates knowledge of how to manage specific parts of the cloud application, is driven by sending and receiving cloud models, and collaborates with other agents by communicating using conversations. The entirety of communication and collaboration in this collection is called the orchestrator conversation. A thorough evaluation shows that the orchestrator conversation makes it possible to encapsulate IT operations knowledge that current solutions cannot, reduces the complexity of managing a cloud application, and is inherently concurrent. The evaluation also shows that the conversation figures out how to deploy a single big data cluster in less than 100 milliseconds, which scales linearly to less than 10 seconds for 100 clusters, resulting in a minimal overhead compared with the deployment time of at least 20 minutes with the state of the art.
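
    A toy version of such an agent hierarchy is sketched below: each agent applies the part of a desired cloud model it understands and continues the conversation with its children, which report back. The agent interface, model format, and service names are illustrative assumptions, not the dissertation's actual design.

        class Agent:
            """One independent management agent in the hierarchy."""
            def __init__(self, name, children=()):
                self.name = name
                self.children = list(children)

            def handle(self, desired_model):
                # Apply the slice of the desired cloud model this agent knows about,
                # then carry the conversation on to child agents and merge their reports.
                my_part = desired_model.get(self.name, {})
                report = {self.name: {"applied": my_part}}
                for child in self.children:
                    report.update(child.handle(desired_model))
                return report

        # A small hierarchy: an application-level agent delegating to service agents.
        app = Agent("big-data-cluster", children=[Agent("hdfs"), Agent("spark")])
        desired = {
            "big-data-cluster": {"nodes": 5},
            "hdfs": {"replication": 3},
            "spark": {"executors": 8},
        }
        print(app.handle(desired))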

    Monitoring and analysis system for performance troubleshooting in data centers

    Get PDF
    It was not long ago. On Christmas Eve 2012, a war of troubleshooting began in Amazon's data centers. It started at 12:24 PM with a mistaken deletion of the state data of the Amazon Elastic Load Balancing service (ELB for short), which was not realized at the time. The mistake first led to a local issue in which a small number of ELB service APIs were affected. Within about six minutes, it evolved into a critical one in which EC2 customers were significantly affected. Netflix, for example, which was using hundreds of Amazon ELB services, experienced an extensive streaming outage in which many customers could not watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours and 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world's attention.

    As the Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time consuming. To address this challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers. VScope provides primitive operations that data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel software architecture for VScope so that the overlay networks can be generated, executed, and terminated automatically, on demand. On the troubleshooting side, we design novel anomaly detection algorithms and implement them in VScope; by running these algorithms, data center operators are notified when performance anomalies happen. We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze these interactions to find out which components are relevant to a performance issue.

    VScope's capabilities and performance are evaluated on a testbed with over 1000 virtual machines (VMs). Experimental results show that the VScope runtime negligibly perturbs system and application performance and requires mere seconds to deploy monitoring and analytics functions on over 1000 nodes. This demonstrates VScope's ability to support fast operation and online queries against a comprehensive set of application- to system/platform-level metrics, as well as a variety of representative analytics functions. When supporting algorithms with high computational complexity, VScope serves as a ‘thin layer’ that occupies no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found via application-level monitoring alone, and in one of the use cases explored in the dissertation it operates with levels of perturbation over 400% lower than those of brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces; the experimental results show that VFocus achieves a troubleshooting accuracy of 83% on average.
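
    As a generic example of the kind of analysis function such a system could deploy on its overlay, the sketch below flags metric samples that deviate strongly from a sliding window. This is a standard z-score detector on a made-up metric stream, not VScope's actual anomaly detection algorithm.

        from collections import deque
        from statistics import mean, stdev

        class SlidingZScoreDetector:
            """Flag a metric sample as anomalous when it deviates strongly from the recent window."""
            def __init__(self, window=30, threshold=3.0):
                self.samples = deque(maxlen=window)
                self.threshold = threshold

            def observe(self, value):
                anomalous = False
                if len(self.samples) >= 10:  # wait for enough history
                    mu, sigma = mean(self.samples), stdev(self.samples)
                    if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                        anomalous = True
                self.samples.append(value)
                return anomalous

        detector = SlidingZScoreDetector()
        latencies = [12, 11, 13, 12, 14, 12, 11, 13, 12, 13, 12, 95, 12]  # made-up latency metric
        print([t for t, v in enumerate(latencies) if detector.observe(v)])  # -> [11]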