
    Decentralised Control Flow: A Computational Model for Distributed Systems

    PhD Thesis. This thesis presents two sets of principles for the organisation of distributed computing systems. Details of models of computation based on these principles are given, together with proposals for programming languages based on each model of computation. The recursive control flow principles are based on the concept of recursive computing system structuring. A recursive control flow computing system comprises a group of subordinate computing systems connected together by a communications medium. Each subordinate computing system may either be a component computing system, which consists of a processing unit, some memory components and input/output devices, or may itself be a recursive control flow computing system. The memory components of all the subordinate computing systems within a recursive control flow computing system are arranged in a hierarchy. Using suitable addresses, any part of the hierarchy is accessible to any sequence of instructions which may be executed by the processing unit of a subordinate computing system. This global accessibility gives rise to serious difficulties in understanding the meaning of programs written in a programming language based on the recursive control flow model of computation. Reasoning about a particular program in isolation is difficult because the potential interference between the execution of different programs cannot be ignored. The alternative principles, decentralised control flow, restrict the global accessibility of the memory components of the subordinate computing systems. The concept of objects forms the basis of these principles. Information may flow along unnamed channels between instances of these objects, this being the only way in which one instance of an object may communicate with some other instance of an object. Reasoning about a particular program written in a programming language based on the decentralised control flow model of computation is easier, since it is guaranteed that there will be no interference between the execution of different programs. Science and Engineering Research Council of Great Britain; International Computers Limited.
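    The channel-only communication rule at the heart of the decentralised model is easy to picture in modern terms. The following sketch is an illustration rather than anything from the thesis: two object instances share no memory and interact solely through a channel handed to both, so each can be reasoned about in isolation.

        # Minimal sketch (not from the thesis): object instances that share no
        # memory and communicate only over a channel passed to both of them.
        import threading, queue

        def producer(out_ch):
            for value in range(3):
                out_ch.put(value)          # the channel is the only communication path
            out_ch.put(None)               # sentinel: no more data

        def consumer(in_ch, results):
            while (item := in_ch.get()) is not None:
                results.append(item * 2)   # purely local state; no global memory access

        channel = queue.Queue()
        results = []
        workers = [threading.Thread(target=producer, args=(channel,)),
                   threading.Thread(target=consumer, args=(channel, results))]
        for w in workers: w.start()
        for w in workers: w.join()
        print(results)                     # [0, 2, 4]

    Because neither instance can address the other's memory, the non-interference property the thesis argues for holds by construction in this sketch.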

    An Agent-Based Distributed Coordination Mechanism for Wireless Visual Sensor Nodes Using Dynamic Programming

    The efficient management of the limited energy resources of a wireless visual sensor network is central to its successful operation. Within this context, this article focuses on the adaptive sampling, forwarding and routing actions of each node in order to maximise the information value of the data collected. These actions are inter-related in a multi-hop routing scenario because each node's energy consumption must be optimally allocated between sampling and transmitting its own data, receiving and forwarding the data of other nodes, and routing any data. Thus, we develop two optimal agent-based decentralised algorithms to solve this distributed constraint optimisation problem. The first assumes that the route by which data is forwarded to the base station is fixed, and calculates the optimal sampling, transmitting and forwarding actions that each node should perform. The second assumes flexible routing, and makes optimal decisions regarding both the set of actions that each node should choose and the route by which its data should be forwarded to the base station. The two algorithms represent a trade-off between optimality, communication cost and processing time. In an empirical evaluation on sensor networks (whose underlying communication networks exhibit loops), we show that the algorithm with flexible routing delivers approximately twice the quantity of information to the base station compared to the algorithm using fixed routing (where an arbitrary choice of route is made). However, this gain comes at a considerable communication and computational cost, increasing both by a factor of around 100. Thus, while the algorithm with flexible routing is suitable for networks with a small number of nodes, it scales poorly; as the size of the network increases, the algorithm with fixed routing is favoured.
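    To make the fixed-route trade-off concrete, the sketch below is our illustration, not the authors' algorithm, and every cost and value in it is invented. A single node enumerates ways to split an integer energy budget between sampling its own data and forwarding upstream data, keeping the split with the highest information value; this brute-force search stands in for the dynamic programme used in the article.

        # Hypothetical single-node energy split: all parameters are invented.
        def best_allocation(budget, sample_cost, forward_cost,
                            sample_value, forward_value, max_upstream):
            best = (0.0, 0, 0)             # (information value, samples, forwards)
            for samples in range(budget // sample_cost + 1):
                remaining = budget - samples * sample_cost
                forwards = min(remaining // forward_cost, max_upstream)
                value = samples * sample_value + forwards * forward_value
                best = max(best, (value, samples, forwards))
            return best

        # 20 energy units; sampling costs 3 and is worth 5, forwarding costs 2
        # and is worth 4; at most 6 upstream packets arrive to be forwarded.
        print(best_allocation(20, 3, 2, 5.0, 4.0, max_upstream=6))  # (36.0, 4, 4)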

    Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"

    According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient. The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes and control of risk of the entities in the system, and the overall design, description and performance of the system itself. Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
    • The systems are composed of autonomous computational entities whose activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
    • The computational entities are mobile, due to the movement of the physical platforms or the movement of the entities from one platform to another.
    • The configuration varies over time; for instance, the system is open to the introduction of new computational entities and likewise to their deletion. The behaviour of the entities may also vary over time.
    • The systems operate with incomplete information about the environment; for instance, information rapidly becomes out of date, and mobility requires information about the environment to be discovered.
    The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing them. This workshop covers aspects related to languages and programming environments as well as the analysis of systems and resources, involving 9 of the 13 projects funded under the initiative (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE). A year after the start of the projects, the goal of the workshop is to take stock of the state of the art on the topics covered by the two clusters, programming environments and analysis of systems, and to devise strategies and new ideas for profitably continuing the research effort towards the overall objective of the initiative. We acknowledge the Dipartimento di Informatica e Telecomunicazioni of the University of Trento, the Comune di Rovereto and the DEGAS project for partially funding the event, and the Events and Meetings Office of the University of Trento for their valuable collaboration.

    Reforming the implementation of European structural funds: A next development step

    The authors assess the performance of the Structural Funds' implementation system in six Member States of the European Union. Considering its strengths and weaknesses, they develop a reform model for the implementation of European structural policy after 1999. The strengths of the existing implementation system lie mainly in innovation effects triggered by the Structural Funds' model of policy implementation and its coupling to Member State administrative processes. Its main weaknesses, inter alia, are an interwoven structure of decision-making processes, insufficient time management and a lack of in-built improvement loops in the implementation process. To overcome these shortcomings, the authors propose a strategic management and decentralisation model. It demands a de-coupling of strategic programming on the one hand, and detailed programming and implementation on the other. Under this model, the Commission and the Member State would negotiate the strategic issues. Within the framework of the agreement, the Member State, supported by the monitoring committees, would be responsible for the detailed programming and implementation of the programmes. Strengthened feedback loops would help to assure the attainment of the strategic objectives.

    A Review of Traffic Signal Control.

    The aim of this paper is to provide a starting point for future research within the SERC-sponsored project "Gating and Traffic Control: The Application of State Space Control Theory". It will provide an introduction to state space control theory and state space applications in transportation in general, an in-depth review of congestion control (specifically traffic signal control in congested situations), a review of theoretical works and of existing systems, and will conclude with recommendations for the research to be undertaken within this project.
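    As a hint of what the state space formulation looks like in this setting, a minimal model (ours, not the paper's) of the queue on one signalised approach in discrete time is

        x_{k+1} = x_k + T (q_k - s u_k)

    where x_k is the queue length in cycle k, T the cycle time, q_k the arrival flow, s the saturation flow and u_k in [0, 1] the fraction of the cycle allocated to green. Gating then amounts to choosing u_k at upstream signals so that x_k stays below a target level.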

    A Dataflow Language for Decentralised Orchestration of Web Service Workflows

    Orchestrating centralised service-oriented workflows presents significant scalability challenges, including the consumption of network bandwidth, degradation of performance and single points of failure. This paper presents a high-level dataflow specification language that attempts to address these scalability challenges. The language provides simple abstractions for orchestrating large-scale web service workflows, and separates the workflow logic from its execution. It is based on a data-driven model that permits parallelism to improve workflow performance. We provide a decentralised architecture that allows the computation logic to be moved "closer" to the services involved in the workflow. This is achieved by partitioning the workflow specification into smaller fragments that may be sent to remote orchestration services for execution. The orchestration services rely on proxies that exploit connectivity to services in the workflow. These proxies perform service invocations and compositions on behalf of the orchestration services, and carry out data collection, retrieval and mediation tasks. The evaluation of our architecture implementation concludes that our decentralised approach reduces the execution time of workflows and scales with increasing data set sizes. Comment: To appear in Proceedings of the IEEE 2013 7th International Workshop on Scientific Workflows, in conjunction with IEEE SERVICES 2013
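    As an illustration of the partitioning idea, the sketch below is ours; the paper's actual specification language and partitioner are not reproduced here. A toy workflow is split by the site hosting each task's service, so each fragment can be shipped to an orchestration service running close to those services, leaving only cross-site edges as inter-fragment data flows.

        # Hypothetical workflow description: task names and sites are invented.
        from collections import defaultdict

        workflow = {
            "fetch":   {"site": "eu-1", "inputs": []},
            "filter":  {"site": "eu-1", "inputs": ["fetch"]},
            "analyse": {"site": "us-2", "inputs": ["filter"]},
            "report":  {"site": "us-2", "inputs": ["analyse"]},
        }

        def partition(wf):
            fragments = defaultdict(dict)
            for task, spec in wf.items():
                fragments[spec["site"]][task] = spec
            # edges crossing sites become channels between orchestration services
            edges = [(src, dst) for dst, spec in wf.items()
                     for src in spec["inputs"] if wf[src]["site"] != spec["site"]]
            return dict(fragments), edges

        fragments, cross_site = partition(workflow)
        print(cross_site)  # [('filter', 'analyse')] is the only inter-site flow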

    DALiuGE: A Graph Execution Framework for Harnessing the Astronomical Data Deluge

    The Data-Activated Liu Graph Engine (DALiuGE) is an execution framework for processing large astronomical datasets at the scale required by the Square Kilometre Array Phase 1 (SKA1). It includes an interface for expressing complex data reduction pipelines consisting of both data sets and algorithmic components, and an implementation run-time to execute such pipelines on distributed resources. By mapping the logical view of a pipeline to its physical realisation, DALiuGE separates the concerns of multiple stakeholders, allowing them to collectively optimise large-scale data processing solutions in a coherent manner. Execution in DALiuGE is data-activated: each individual data item autonomously triggers the processing on itself. This decentralisation also makes the execution framework very scalable and flexible, supporting pipeline sizes ranging from fewer than ten tasks running on a laptop to tens of millions of concurrent tasks on the second-fastest supercomputer in the world. DALiuGE has been used in production for reducing interferometry data sets from the Karl G. Jansky Very Large Array and the Mingantu Ultrawide Spectral Radioheliograph, and is being developed as the execution framework prototype for the Science Data Processor (SDP) consortium of the Square Kilometre Array (SKA) telescope. This paper presents a technical overview of DALiuGE and discusses case studies from the CHILES and MUSER projects that use DALiuGE to execute production pipelines. In a companion paper, we provide an in-depth analysis of DALiuGE's scalability to very large numbers of tasks on two supercomputing facilities. Comment: 31 pages, 12 figures, currently under review by Astronomy and Computing
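    The data-activated idea can be caricatured in a few lines. The sketch below is ours and does not use DALiuGE's real drop classes or APIs: each data item records its consumers and fires them itself once it is marked complete, so no central scheduler drives the graph.

        # Toy data-activated execution; class and method names are invented.
        class DataDrop:
            def __init__(self, name):
                self.name, self.value, self.consumers = name, None, []

            def complete(self, value):
                self.value = value
                for task in self.consumers:   # the data item activates its consumers
                    task.run(self)

        class TaskDrop:
            def __init__(self, fn, output):
                self.fn, self.output = fn, output

            def run(self, data):
                self.output.complete(self.fn(data.value))

        raw, squared = DataDrop("raw"), DataDrop("squared")
        raw.consumers.append(TaskDrop(lambda v: v * v, squared))
        raw.complete(7)
        print(squared.value)  # 49: execution propagated by the data itself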

    Working in decentralised service systems: challenges and choices for the Australian aid program

    The report examined Australia's support for service systems in decentralised contexts; the evaluation focussed on the health, education and infrastructure (water, sanitation and roads) sectors. Foreword: Public services have been decentralised in most countries where Australia provides aid. This means Australia, like other donors, must be willing and able to engage effectively with developing country governments at all levels to improve service delivery. To ensure sustainable improvements, this engagement should carefully coordinate support for governance reforms with assistance to strengthen or expand service delivery systems. As the World Bank has observed, done well, decentralisation can result in more efficient and effective services for communities. However, done poorly, or where the context is inappropriate, decentralisation may have negative effects. This evaluation builds on the Office of Development Effectiveness's 2009 evaluation of Australian aid for service delivery. It answers important questions about whether Australian aid has appropriately considered the role of subnational authorities, including specific issues identified in 2009. It assesses how well Australian aid has addressed the challenges of decentralisation, with a focus on the major sectors of education, health and infrastructure. This evaluation utilised a clear methodology, applied it consistently and drew together a range of evidence to provide a balanced account of Australian aid performance. It concludes that Australian aid is beginning to respond to the challenges of supporting service delivery in decentralised contexts, but notes that results are mixed and there is room for further improvement. The evaluation suggests Australia needs to improve its country-level analysis, program planning and design to better address decentralisation. In particular, there is a need to carefully assess short-term service delivery needs against long-term structures and incentives for governments to achieve sustainable service delivery and meet sovereign responsibilities. Australia needs to get the right balance of engagement with different levels of government, and appropriately address both supply and demand aspects of service delivery, especially to improve equity.

    Monkeys, typewriters and networks: the internet in the light of the theory of accidental excellence

    Viewed in the light of the theory of accidental excellence, there is much to suggest that the success of the Internet and its various protocols derives from a communications technology accident, or better, a series of accidents. In the early 1990s, many experts still saw the Internet as an academic toy that would soon vanish into thin air again. The Internet probably gained its reputation as an academic toy largely because it violated the basic principles of traditional communications networks. The quarrel about paradigms that erupted in the 1970s between the telephony world and the newly emerging Internet community was not, however, only about transmission technology doctrines. It was also about the question – still unresolved today – of who actually governs the flow of information: the operators or the users of the network? The paper first describes various network architectures in relation to the communication cultures expressed in their make-up. It then examines the creative environment found at the nodes of the network, whose coincidental importance for the Internet boom must not be forgotten. Finally, the example of Usenet is used to examine the kind of regulatory practices that have emerged in the communications services provided within the framework of a decentralised network architecture.