279 research outputs found

    Guided self-organisation in open distributed systems

    [no abstract]

    Probabilistic grid scheduling based on job statistics and monitoring information

    This transfer thesis presents a novel, probabilistic approach to scheduling applications on computational Grids based on their historical behaviour, the current state of the Grid, and predictions of the future execution times and resource utilisation of such applications. The work lays a foundation for a more intuitive, user-friendly and effective scheduling technique termed deadline scheduling. Initial work has established the motivation and requirements for a more efficient Grid scheduler, able to adaptively handle the dynamic nature of Grid resources and the submitted workload. Preliminary scheduler research identified the need for detailed monitoring of Grid resources at the process level, and for a tool to simulate the non-deterministic behaviour and statistical properties of Grid applications. A simulation tool, GridLoader, has been developed to enable modelling of application loads similar to a number of typical Grid applications. GridLoader is able to simulate CPU utilisation, memory allocation and network transfers according to limits set through command line parameters or a configuration file. Its specific strength is in achieving set resource utilisation targets in a probabilistic manner, thus creating a dynamic environment suitable for testing the scheduler's adaptability and its prediction algorithm. To enable highly granular monitoring of Grid applications, a monitoring framework based on the Ganglia Toolkit was developed and tested. The suite is able to collect resource usage information for individual Grid applications, integrate it into a standard XML-based information flow, provide visualisation through a Web portal, and export data into a format suitable for off-line analysis. The thesis also presents an initial investigation of the utilisation of the University College London Central Computing Cluster facility running Sun Grid Engine middleware. The feasibility of basic prediction concepts based on historical information and process meta-data has been successfully established, and possible scheduling improvements using such predictions have been identified. The thesis is structured as follows: Section 1 introduces Grid computing and its major concepts; Section 2 presents open research issues and the specific focus of the author's research; Section 3 gives a survey of the related literature, schedulers, monitoring tools and simulation packages; Section 4 presents the platform for the author's work, the Self-Organising Grid Resource management project; Sections 5 and 6 give detailed accounts of the monitoring framework and simulation tool developed; Section 7 presents the initial data analysis; and Section 8.4 concludes the thesis, followed by appendices and references.
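
    GridLoader itself is not reproduced here; the sketch below is a minimal, hypothetical illustration (in Python, with invented names and parameters, not the actual tool) of the core idea the abstract describes: steering CPU utilisation towards a configured target in a probabilistic manner by randomly alternating busy and idle time slices.

        # Illustrative sketch only: a probabilistic CPU-load generator in the spirit of
        # the GridLoader tool described above (target utilisation set by a parameter).
        # All names and parameters here are hypothetical, not taken from the thesis.
        import random
        import time

        def generate_load(target_util=0.6, duration_s=30, slice_s=0.1):
            """Keep average CPU utilisation near target_util for duration_s seconds.

            In each time slice we decide probabilistically whether to spin (busy-wait)
            or sleep, so the achieved load fluctuates around the target rather than
            tracking it deterministically.
            """
            end = time.monotonic() + duration_s
            while time.monotonic() < end:
                if random.random() < target_util:
                    busy_until = time.monotonic() + slice_s
                    while time.monotonic() < busy_until:
                        pass                      # burn CPU for one slice
                else:
                    time.sleep(slice_s)           # stay idle for one slice

        if __name__ == "__main__":
            generate_load(target_util=0.6, duration_s=10)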

    Optimizing Tilera's process scheduling via reinforcement learning

    Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 45-48). As multicore processors become more prevalent, system complexities are increasing. It is no longer practical for an average programmer to balance all of the system constraints to ensure that the system will always perform optimally. One apparent solution for managing these resources efficiently is to design a self-aware system that utilizes machine learning to optimally manage its own resources and tune its own parameters. Tilera is a multicore processor architecture designed to be highly scalable. The aim of the proposed project is to use reinforcement learning to develop a reward function that will enable Tilera's scheduler to tune its own parameters. By having the parameters come from the system's "reward function," we aim to eliminate the burden on the programmer of producing these parameters. Our contribution to this aim is a library of reinforcement learning functions, borrowed from Sutton and Barto (1998) [35], and a lightweight benchmark capable of modifying processor affinities. When combined, these two tools should provide a sound basis for Tilera's scheduler to tune its own parameters. Furthermore, this thesis describes how this combination may effectively be done and explores several manually tuned processor affinities. The results of this exploration demonstrate the necessity of an autonomously tuned scheduler. by Deborah Hanus. M. Eng.
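
    As a hedged illustration of the idea of pairing a reward signal with processor-affinity tuning, the sketch below (Python, assuming a Linux host; every name is invented rather than taken from the thesis) uses epsilon-greedy action selection in the style of Sutton and Barto to pick an affinity mask, rewarding shorter benchmark runtimes.

        # Hedged illustration, not code from the thesis: epsilon-greedy choice of a
        # processor-affinity mask, rewarded by (negative) benchmark runtime.
        # os.sched_setaffinity / os.sched_getaffinity are Linux-specific.
        import os
        import random
        import time

        def benchmark():
            """A stand-in workload; the thesis uses its own lightweight benchmark."""
            return sum(i * i for i in range(500_000))

        def run_with_affinity(cpus):
            os.sched_setaffinity(0, cpus)          # pin this process to the given CPUs
            start = time.perf_counter()
            benchmark()
            return -(time.perf_counter() - start)  # shorter runtime => higher reward

        def epsilon_greedy(actions, episodes=50, epsilon=0.1, alpha=0.3):
            q = {a: 0.0 for a in actions}          # action-value estimates
            for _ in range(episodes):
                a = (random.choice(actions) if random.random() < epsilon
                     else max(q, key=q.get))
                r = run_with_affinity(set(a))
                q[a] += alpha * (r - q[a])         # incremental value update
            return max(q, key=q.get)

        if __name__ == "__main__":
            all_cpus = sorted(os.sched_getaffinity(0))
            candidate_masks = [(c,) for c in all_cpus] + [tuple(all_cpus)]
            print("best affinity mask:", epsilon_greedy(candidate_masks))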

    Artificial Collective Intelligence Engineering: a Survey of Concepts and Perspectives

    Collectiveness is an important property of many systems--both natural and artificial. By exploiting a large number of individuals, it is often possible to produce effects that go far beyond the capabilities of the smartest individuals, or even to produce intelligent collective behaviour out of not-so-intelligent individuals. Indeed, collective intelligence, namely the capability of a group to act collectively in a seemingly intelligent way, is increasingly often a design goal of engineered computational systems--motivated by recent techno-scientific trends like the Internet of Things, swarm robotics, and crowd computing, just to name a few. For several years, the collective intelligence observed in natural and artificial systems has served as a source of inspiration for engineering ideas, models, and mechanisms. Today, artificial and computational collective intelligence are recognised research topics, spanning various techniques, kinds of target systems, and application domains. However, there is still a lot of fragmentation in the research panorama of the topic within computer science, and the verticality of most communities and contributions makes it difficult to extract the core underlying ideas and frames of reference. The challenge is to identify, place in a common structure, and ultimately connect the different areas and methods addressing intelligent collectives. To address this gap, this paper considers a set of broad scoping questions providing a map of collective intelligence research, mostly from the point of view of computer scientists and engineers. Accordingly, it covers preliminary notions, fundamental concepts, and the main research perspectives, identifying opportunities and challenges for researchers on artificial and computational collective intelligence engineering. Comment: This is the author's final version of the article, accepted for publication in the Artificial Life journal. 34 pages, 2 figures.

    Smart Wireless Sensor Networks

    The recent development of communication and sensor technology has resulted in the growth of a new, attractive and challenging area: wireless sensor networks (WSNs). A wireless sensor network, which consists of a large number of sensor nodes, is deployed in environmental fields to serve various applications. Equipped with the ability to communicate wirelessly and compute intelligently, these nodes become smart sensors that not only perceive ambient physical parameters but are also able to process information, cooperate with each other and self-organize into the network. These new features help the sensor nodes, and the network as a whole, operate more efficiently in terms of both data acquisition and energy consumption. The special purposes of the applications require the design and operation of WSNs to differ from conventional networks such as the Internet. The network design must take into account the objectives of specific applications, and the nature of the deployment environment must be considered. The limited resources of sensor nodes, such as memory, computational ability, communication bandwidth and energy, are the main challenges in network design. A smart wireless sensor network must be able to deal with these constraints as well as guarantee the connectivity, coverage, reliability and security of the network's operation for a maximized lifetime. This book discusses various aspects of designing such smart wireless sensor networks. Main topics include: design methodologies, network protocols and algorithms, quality of service management, coverage optimization, time synchronization and security techniques for sensor networks.

    A specification method for the scalable self-governance of complex autonomic systems

    IBM, amongst many others, has sought to endow computer systems with self-management capabilities by delegating vital functions to the software itself, and has proposed the Autonomic Computing model. This induces the so-called self-* properties, including the system's ability to be self-configuring, self-optimising, self-healing and self-protecting. Initial attempts to realise such a vision have so far mostly relied on passive adaptation, whereby Design by Contract and Event-Condition-Action (ECA) type constructs are used to regulate the target system's behaviour: when a specific event makes a certain condition true, an action is triggered which executes either within the system or on its environment (a minimal sketch of such an ECA rule follows this abstract). Whilst such a model works well for closed systems, its effectiveness and applicability diminish as the size and complexity of the managed system increase, necessitating frequent updates to the ECA rule set to cater for new and/or unforeseen system behaviour. More recent research work adopts the parametric adaptation model, where the events, conditions and actions may be adjusted at runtime in response to the system's observed state. Such an improved control model works well up to a point, but for large-scale systems of systems, with very many component interactions, the predictability and traceability of the regulation and its impact on the whole system are intractable. Self-organising systems theory, however, offers a scalable alternative to systems control, utilising emergent behaviour, observed at a global level, that results from the low-level interactions of the distributed components. Here, for instance, key signals (signs) for ECA-style feedback control need no longer be recognised or understood in the context of the design-time system but are defined by their relevance to the runtime system. Nonetheless, this model still suffers from a usually inaccessible control model, with no intrinsic meaning assigned to data extracted from the system's operation. In other words, there is no grounded definition of particular observable events occurring in the system. This condition is termed the Signal Grounding Problem. The problem cannot usually be solved by analytical or algorithmic methods, as these solutions generally require precise problem formulations and a static operating domain. Rather, cognitive techniques will be needed that perform effectively in evaluating and improving performance in the presence of complex, incomplete, dynamic and evolving environments. In order to develop a specification method for scalable self-governance of autonomic systems of systems, this thesis presents a number of ways to alleviate, or circumvent, the Signal Grounding Problem through the utilisation of cognitive systems and the properties of complex systems. After reviewing the specification methods available for governance models, the Situation Calculus dialect of first-order logic is described, with the necessary modalities for the specification of deliberative monitoring in partially observable environments with stochastic actions. This permits a specification method that allows the depiction of system guards and norms, under central control, as well as the deliberative functions required for decentralised components to apply techniques around the Signal Grounding Problem, engineer emergence and generally utilise the properties of large complex systems for their own self-governance. It is shown how these large-scale behaviours may be implemented, and how their properties may be assessed and utilised by an Observer System, through fully functioning implementations and simulations. The work concludes with two case studies showing how the specification would be achieved in practice: an observer-based meta-system for a decision support system in medicine is described, specified and implemented up to parametric adaptation, and a NASA project is described with a specification given for the interactions and cooperative behaviour that lead to scale-free connectivity, which the observer system may then utilise for the previously described efficient monitoring strategy.
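
    To make the Event-Condition-Action model referred to above concrete, here is a minimal, generic sketch (Python); the rule engine shape and the example rule are invented for illustration and are not drawn from the thesis.

        # Minimal generic Event-Condition-Action (ECA) rule engine, purely illustrative
        # of the control model discussed above; rule contents are hypothetical.
        from dataclasses import dataclass
        from typing import Callable, Dict, List

        @dataclass
        class EcaRule:
            event: str                               # event type the rule listens for
            condition: Callable[[Dict], bool]        # guard evaluated on the event payload
            action: Callable[[Dict], None]           # effect on the system or its environment

        class RuleEngine:
            def __init__(self, rules: List[EcaRule]):
                self.rules = rules

            def on_event(self, event: str, payload: Dict) -> None:
                for rule in self.rules:
                    if rule.event == event and rule.condition(payload):
                        rule.action(payload)

        # Example: a (hypothetical) self-healing rule reacting to a monitoring signal.
        rules = [EcaRule(
            event="heartbeat_missed",
            condition=lambda p: p.get("missed", 0) >= 3,
            action=lambda p: print(f"restarting node {p['node']}"))]

        RuleEngine(rules).on_event("heartbeat_missed", {"node": "n42", "missed": 3})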

    Developing a global observer programming model for large-scale networks of autonomic systems

    Computing and software-intensive systems are now an inextricable part of the fabric of modern work, life and entertainment. This has consequently increased our reliance on their dependable operation. While much is known regarding software engineering practices for dependable software systems, the extreme scale, complexity and dynamics of modern software have pushed conventional software engineering tools and techniques to their acceptable limits. Consequently, over the last decade, this has accelerated research into non-conventional methods, many of which are inspired by social and/or biological system models. Exemplars are the DARPA-funded Self-Regenerative Systems (SRS) programme and Autonomic Computing, where a closed-loop feedback control model is essential to delivering the advocated cognitive immunity and self-management capabilities. While much research work has been conducted on various aspects of SRS and autonomy, it is typically based on the assumptions that the structural model (organisation) of managed elements is static and that exhaustive monitoring and feedback is computationally scalable. In addition, existing federated approaches to distributed computation and control, such as Multi-Agent Systems, fail to satisfactorily address how global control may be enacted upon the whole system and how an individual component may take on specified monitoring duties, although methods of interaction between federated individuals are well understood. Equally, organic-inspired computing looks to deal with event scale and complexity largely from a mining perspective, with observation concerns deferred to a suitably selective abstraction known as the "observation model". However, research in computing and the mathematical sciences, along with other fields, has developed problem-specific approaches to help manage complexity: abstraction-based approaches can simplify structural organisation, allowing the underlying meaning to be better understood, while statistical and graph-based approaches can both provide identifying features and selectively reduce the size of a modelled structure by selecting specific areas that conform to certain topological criteria. This research studies the engineering concerns relating to the observation of large-scale networks of autonomic systems. It examines methods that can be used to manage scale, and generalises and formalises them within a software engineering approach, guiding the development of an automated, adaptive observation subsystem: the Global Observer Model. This approach uses a model-based representation of the observed system, represented by appropriately attached modelled elements; these act as adapters between the underlying system and the observation subsystem. The concepts of Signature and Technique definitions describe large-scale or complex system characteristics and target-selection techniques respectively. Collections of these objects are then utilised throughout the framework, along with decision and deployment logic (collectively referred to as the Observer Behaviour Definition, an ECA-like observational control), to provide a runtime-adaptable observation overlay. The evaluation of this research is provided by demonstrations of the observation framework: first in experimental form, for assessment of the Signature and Technique approach, and then by application to the Email Exploration Tool (EET), a forensic investigation utility.
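
    The Signature and Technique concepts are described only abstractly above, so the sketch below (Python; every name and field is an assumption rather than the thesis' actual design) illustrates one plausible shape for such definitions: a Signature matches a characteristic of a modelled element, and a Technique selects which matching elements to observe.

        # Hypothetical sketch of Signature / Technique style definitions for an
        # observation overlay; the abstract describes these only abstractly, so every
        # name and field below is an assumption, not the thesis' actual design.
        from dataclasses import dataclass
        from typing import Callable, Dict, List

        Node = Dict[str, float]   # a modelled element, e.g. {"degree": 12}

        @dataclass
        class Signature:
            """A named characteristic of the observed system, e.g. a hub-like node."""
            name: str
            matches: Callable[[Node], bool]

        @dataclass
        class Technique:
            """A target-selection strategy: which matching elements to observe."""
            name: str
            select: Callable[[List[Node]], List[Node]]

        def plan_observation(nodes: List[Node], sig: Signature, tech: Technique) -> List[Node]:
            """Apply a Signature to find candidates, then a Technique to pick targets."""
            return tech.select([n for n in nodes if sig.matches(n)])

        # Example: observe the two highest-degree "hub" nodes (values invented).
        hub = Signature("hub", lambda n: n["degree"] >= 10)
        top_two = Technique("top-2-by-degree",
                            lambda ns: sorted(ns, key=lambda n: n["degree"], reverse=True)[:2])
        nodes = [{"degree": 3}, {"degree": 15}, {"degree": 11}, {"degree": 40}]
        print(plan_observation(nodes, hub, top_two))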

    Security techniques for sensor systems and the Internet of Things

    Sensor systems are becoming pervasive in many domains, and are recently being generalized by the Internet of Things (IoT). This wide deployment, however, presents significant security issues. We develop security techniques for sensor systems and the IoT, addressing all security management phases. Prior to deployment, the nodes need to be hardened. We develop nesCheck, a novel approach that combines static analysis and dynamic checking to efficiently enforce memory safety on TinyOS applications. As security guarantees come at a cost, determining which resources to protect becomes important. Our solution, OptAll, leverages game-theoretic techniques to determine the optimal allocation of security resources in IoT networks, taking into account fixed and variable costs, the criticality of different portions of the network, and risk metrics related to a specified security goal. Monitoring IoT devices and sensors during operation is necessary to detect incidents. We design Kalis, a knowledge-driven intrusion detection technique for IoT that does not target a single protocol or application, and adapts the detection strategy to the network features. As the scale of the IoT makes the devices good targets for botnets, we design Heimdall, a whitelist-based anomaly detection technique for detecting and protecting against IoT-based denial of service attacks. Once our monitoring tools detect an attack, determining its actual cause is crucial to an effective reaction. We design a fine-grained analysis tool for sensor networks that leverages resident packet parameters to determine whether a packet loss attack is node- or link-related and, in the second case, to locate the attack source. Moreover, we design a statistical model for determining optimal system thresholds by exploiting packet parameter variances. With our techniques' diagnosis information, we develop Kinesis, a security incident response system for sensor networks designed to recover from attacks without significant interruption, dynamically selecting response actions while being lightweight in communication and energy overhead.
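
    The internals of the tools named above are not given in the abstract; the fragment below is a simplified, hypothetical sketch (Python) of the general whitelist-based anomaly-detection idea attributed to Heimdall: traffic from a device to a destination not on its learned whitelist is flagged.

        # Simplified, hypothetical sketch of whitelist-based anomaly detection for IoT
        # gateways, in the spirit of the approach named above (not the actual Heimdall).
        from collections import defaultdict

        class WhitelistDetector:
            def __init__(self):
                self.whitelist = defaultdict(set)   # device -> set of allowed destinations
                self.learning = True                # learn benign behaviour first

            def observe(self, device: str, destination: str) -> bool:
                """Return True if the flow is anomalous (only after learning ends)."""
                if self.learning:
                    self.whitelist[device].add(destination)
                    return False
                return destination not in self.whitelist[device]

        detector = WhitelistDetector()
        detector.observe("camera-1", "firmware.vendor.example")   # benign, learned
        detector.learning = False
        print(detector.observe("camera-1", "203.0.113.9"))        # True -> flagged as anomalous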

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.