217 research outputs found

    Constructing provenance-aware distributed systems with data propagation

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 93-96).
    Is it possible to construct a heterogeneous distributed computing architecture capable of solving interesting complex problems? Can we easily use this architecture to maintain a detailed history, or provenance, of the data processed by it? Most existing distributed architectures can perform only one operation at a time. While they are capable of tracing possession of data, these architectures do not always track the network of operations used to synthesize new data. This thesis presents a distributed implementation of data propagation, a computational model that provides for concurrent processing that is not constrained to a single distributed operation. This system is capable of distributing computation across a heterogeneous network. It allows for the division of multiple simultaneous operations in a single distributed system. I also identify four constraints that may be placed on general-purpose data propagation to allow for deterministic computation in such a distributed propagation network. This thesis also presents an application of distributed propagation by illustrating how a generic transformation may be applied to existing propagator networks to allow for the maintenance of data provenance. I show that the modular structure of data propagation permits the simple modification of a propagator network design to maintain the histories of data. By Ian Campbell Jacobi. S.M.
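
    The provenance transformation described in this abstract can be pictured with a small sketch: each cell, in addition to its content, records which operation wrote the value and which input cells it was derived from, so a derivation history can be read back out of the network. The class and method names below are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch of provenance-carrying propagation (illustrative only;
# names and structure are assumptions, not the thesis's actual API).

class Cell:
    def __init__(self, name):
        self.name = name
        self.content = None        # the current value
        self.provenance = None     # (operation, [input cells]) that produced it
        self.neighbors = []        # propagators to re-run on change

    def add_content(self, value, provenance=None):
        if value is None or value == self.content:
            return
        self.content = value
        self.provenance = provenance
        for propagator in self.neighbors:
            propagator()

    def history(self):
        """Recursively unfold the derivation of this cell's value."""
        if self.provenance is None:
            return self.name
        op, inputs = self.provenance
        return (op, [c.history() for c in inputs])


def propagator(op_name, fn, inputs, output):
    """Wire a stateless operation between cells, recording provenance."""
    def run():
        if all(c.content is not None for c in inputs):
            result = fn(*[c.content for c in inputs])
            output.add_content(result, provenance=(op_name, inputs))
    for c in inputs:
        c.neighbors.append(run)
    run()


# Usage: celsius -> fahrenheit, with the derivation retained alongside the value.
c, f = Cell("celsius"), Cell("fahrenheit")
propagator("c-to-f", lambda x: x * 9 / 5 + 32, [c], f)
c.add_content(100)
print(f.content)    # 212.0
print(f.history())  # ('c-to-f', ['celsius'])
```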

    The Art of the Propagator

    We develop a programming model built on the idea that the basic computational elements are autonomous machines interconnected by shared cells through which they communicate. Each machine continuously examines the cells it is interested in, and adds information to some based on deductions it can make from information from the others. This model makes it easy to smoothly combine expression-oriented and constraint-based programming; it also easily accommodates implicit incremental distributed search in ordinary programs. This work builds on the original research of Guy Lewis Steele Jr. and was developed more recently with the help of Chris Hanson.
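
    The cells-and-machines model, and the way it blends expression-oriented and constraint-based programming, can be sketched in a few lines of code. This is an illustration of the idea under assumed names, not the authors' Scheme implementation: a sum constraint is built from three one-directional propagators, so information flows toward whichever cell is still empty.

```python
# Illustrative sketch of cells and autonomous propagators (assumed names;
# not the authors' Scheme implementation).

class Cell:
    def __init__(self):
        self.content = None
        self.neighbors = []

    def add_content(self, value):
        if value is None:
            return
        if self.content is None:
            self.content = value
            for run in self.neighbors:
                run()
        elif self.content != value:
            raise ValueError("contradiction: %r vs %r" % (self.content, value))


def propagator(fn, inputs, output):
    """An autonomous machine: whenever its inputs change, it deduces
    what it can and adds that information to the output cell."""
    def run():
        if all(c.content is not None for c in inputs):
            output.add_content(fn(*[c.content for c in inputs]))
    for c in inputs:
        c.neighbors.append(run)
    run()


def adder(a, b, total):
    """A constraint a + b = total, built from three directional propagators."""
    propagator(lambda x, y: x + y, [a, b], total)
    propagator(lambda t, x: t - x, [total, a], b)
    propagator(lambda t, y: t - y, [total, b], a)


# Expression-style use would compute 3 + 4 -> 7; here the same network is
# used constraint-style: given a = 3 and total = 10, it deduces b = 7.
a, b, total = Cell(), Cell(), Cell()
adder(a, b, total)
a.add_content(3)
total.add_content(10)
print(b.content)  # 7
```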

    Forensics Based SDN in Data Centers

    Recently, most data centers have adopted the Software-Defined Networking (SDN) architecture to meet the demand for scalable and cost-efficient computer networks. An SDN controller separates the data plane from the control plane and implements instructions instead of protocols, which improves Quality of Service (QoS) and enhances energy efficiency and protection mechanisms. However, this centralization gives attackers an opportunity to compromise the network controller and thereby take control of every device in the network, which makes the architecture vulnerable. Recent research efforts have attempted to address this security issue with minimal consideration of the forensic aspects. Accordingly, this research focuses on forensics for SDN networks in data-center environments. Among the diverse approaches for accurately identifying possible threats, a deep learning approach is used here to detect DDoS attacks, as it is regarded as the most suitable approach for threat detection. The proposed network consists of mobile nodes, a head controller, a detection engine, a domain controller, a source controller, a gateway and a cloud center. When the first stage of an attack is classified as serious, the traffic is recorded as criminal evidence in order to track the attacker, the source IP address of the packet is added to a blacklist, and all packets from that source are blocked and eliminated. When it is classified as not serious, all packets from the source node are blocked for the current session, or the non-malicious packets are transmitted using the proposed protocol. The study is evaluated by simulation in the OMNeT++ environment and shows better results than existing approaches.
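
    The two-stage handling of flagged traffic described above can be illustrated with a short sketch. The classifier output, packet fields and actions below are hypothetical placeholders standing in for the paper's detection engine and controllers, not its actual implementation.

```python
# Illustrative sketch of the serious / not-serious handling of flagged traffic
# (placeholder logic; not the paper's actual detection engine or protocol).

blacklist = set()          # sources blocked permanently
session_blocks = set()     # sources blocked for the current session only
evidence_log = []          # recorded traffic kept as forensic evidence

def handle_flagged_packet(packet, severity):
    """Apply the two-stage policy to a packet the detection engine flagged."""
    src = packet["src_ip"]
    if severity == "serious":
        evidence_log.append(packet)   # keep the traffic as criminal evidence
        blacklist.add(src)            # blacklist the source IP address
        return "drop"                 # eliminate all packets from this source
    else:                             # not serious
        session_blocks.add(src)       # block the source for this session only
        return "drop"

def forward_or_drop(packet):
    """Normal forwarding path: non-malicious traffic is transmitted."""
    src = packet["src_ip"]
    if src in blacklist or src in session_blocks:
        return "drop"
    return "forward"

# Example: a source flagged as serious is evidenced, blacklisted and dropped.
print(handle_flagged_packet({"src_ip": "10.0.0.7", "payload": b"..."}, "serious"))
print(forward_or_drop({"src_ip": "10.0.0.7", "payload": b"..."}))   # drop
print(forward_or_drop({"src_ip": "10.0.0.9", "payload": b"..."}))   # forward
```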

    Agoric computation: trust and cyber-physical systems

    In the past two decades advances in miniaturisation and economies of scale have led to the emergence of billions of connected components that have provided both a spur and a blueprint for the development of smart products acting in specialised environments which are uniquely identifiable, localisable, and capable of autonomy. Adopting the computational perspective of multi-agent systems (MAS) as a technological abstraction married with the engineering perspective of cyber-physical systems (CPS) has provided fertile ground for designing, developing and deploying software applications in smart automated contexts such as manufacturing, power grids, avionics, healthcare and logistics, capable of being decentralised, intelligent, reconfigurable, modular, flexible, robust, adaptive and responsive. Current agent technologies are, however, ill suited for information-based environments, making it difficult to formalise and implement multiagent systems based on inherently dynamical functional concepts such as trust and reliability, which present special challenges when scaling from small to large systems of agents. To overcome such challenges, it is useful to adopt a unified approach which we term agoric computation, integrating logical, mathematical and programming concepts towards the development of agent-based solutions based on recursive, compositional principles, where smaller systems feed via directed information flows into larger hierarchical systems that define their global environment. Considering information as an integral part of the environment naturally defines a web of operations where components of a system are wired in some way and each set of inputs and outputs is allowed to carry some value. These operations are stateless abstractions and procedures that act on stateful cells that accumulate partial information, and it is possible to compose such abstractions into higher-level ones, using a publish-and-subscribe interaction model that keeps track of update messages between abstractions and values in the data. In this thesis we review the logical and mathematical basis of such abstractions and take steps towards the software implementation of agoric modelling as a framework for simulation and verification of the reliability of increasingly complex systems, and report on experimental results related to a few select applications, such as stigmergic interaction in mobile robotics, integrating raw data into agent perceptions, trust and trustworthiness in orchestrated open systems, computing the epistemic cost of trust when reasoning in networks of agents seeded with contradictory information, and trust models for distributed ledgers in the Internet of Things (IoT); and provide a roadmap for future developments of our research.
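
    The idea of stateless operations acting on stateful, information-accumulating cells through a publish-and-subscribe model can be made concrete with a small sketch. The bus, cell and topic names below, and the use of trust reports as the accumulated information, are illustrative assumptions rather than the thesis's implementation.

```python
# Illustrative publish-and-subscribe sketch of stateless operations over
# stateful cells (assumed names and structure; not the thesis's code).

from collections import defaultdict

class Bus:
    """Routes update messages from cells to the handlers subscribed to them."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, value):
        for handler in self.subscribers[topic]:
            handler(value)


class TrustCell:
    """A stateful cell that accumulates partial information (trust reports)."""
    def __init__(self, bus, topic):
        self.bus, self.topic = bus, topic
        self.reports = []

    def add_report(self, score):
        self.reports.append(score)
        # Publish the cell's current aggregate so larger systems can react.
        self.bus.publish(self.topic, sum(self.reports) / len(self.reports))


# A smaller system (one agent's trust cell) feeds a larger hierarchical
# view of the whole system via directed information flow over the bus.
bus = Bus()
agent_trust = TrustCell(bus, "agent-1/trust")
system_view = {}
bus.subscribe("agent-1/trust", lambda avg: system_view.update({"agent-1": avg}))

agent_trust.add_report(0.5)
agent_trust.add_report(1.0)
print(system_view)  # {'agent-1': 0.75}
```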

    Computation with spin foam models of quantum gravity

    The focus of this thesis is the study of spin foam models of quantum gravity on a computer. These models include the standard Barrett-Crane (BC) spin foam model, as well as the new Engle-Pereira-Rovelli (EPR) and Freidel-Krasnov (FK) models. New numerical algorithms are developed and implemented, based on the existing Christensen-Egan (CE) algorithm, to allow computations with the BC model in the presence of a cosmological constant (implemented through q-deformation) and to allow computations with the recently proposed EPR and FK models. For the first time, we show that the inclusion of a positive cosmological constant, a long-standing open problem for spin foams, curiously changes the behavior of the BC model, rendering the expectation values of its observables discontinuous in the limit of zero cosmological constant. Also, unlike previous work, this investigation was carried out on large triangulations, which are closer to large semiclassical space-times. Efficient numerical algorithms are described and implemented, for the first time, allowing the evaluation of the EPR and FK spin foam vertex amplitudes. An initial application of these algorithms is the study of the effective single vertex large spin asymptotics of the new models. Their asymptotic behavior is found to be qualitatively similar to that of the BC model. The leading asymptotic behavior does not exhibit the oscillatory character expected by analogy with the Ponzano-Regge model. Two important tests of the spin foam semiclassical limit are wave packet propagation and evaluation of the graviton propagator matrix elements. These tests are generalized to encompass the three major spin foam models. The wave packet propagation test is carried out in greater generality than previously. The results indicate that conjectures about good semiclassical behavior of the new spin foam models may have been premature.

    Dynamic application of problem solving strategies : dependency-based flow control

    Thesis (Elec. E. in Computer Science)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 105-107).
    While humans may solve problems by applying any one of a number of different problem solving strategies, computerized problem solving is typically brittle, limited in the number of available strategies and ways of combining them to solve a problem. In this thesis, I present a method to flexibly select and combine problem solving strategies by using a constraint-propagation network, informed by higher-order knowledge about goals and what is known, to selectively control the activity of underlying problem solvers. Knowledge within each problem solver as well as the constraint-propagation network is represented as a network of explicit propositions, each described with respect to five interrelated axes of concrete and abstract knowledge about that proposition. Knowledge within each axis is supported by a set of dependencies that allow for both the adjustment of belief based on modifying supports for solutions and the production of justifications of that belief. I show that this method may be used to solve a variety of real-world problems and provide meaningful justifications for solutions to these problems, including decision-making based on numerical evaluation of risk and the evaluation of whether or not a document may be legally sent to a recipient in accordance with a policy controlling its dissemination. By Ian Campbell Jacobi. Elec. E. in Computer Science.
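
    The dependency structure described above (propositions whose belief rests on modifiable sets of supports, and which can produce justifications) resembles a truth maintenance system. The sketch below is a hypothetical illustration of that idea, with assumed names, not the thesis's actual representation.

```python
# Hypothetical sketch of propositions supported by dependencies, with
# belief recomputation and justification (not the thesis's representation).

class Proposition:
    def __init__(self, statement, supports=None):
        self.statement = statement
        self.supports = list(supports or [])  # propositions this one depends on
        self.asserted = not self.supports     # premises are believed outright

    def believed(self):
        """A supported proposition is believed only while all supports are."""
        if self.asserted:
            return True
        return bool(self.supports) and all(s.believed() for s in self.supports)

    def justification(self):
        """Trace the chain of supports that justifies the current belief."""
        if self.asserted:
            return self.statement + " (premise)"
        return (self.statement + " because: "
                + "; ".join(s.justification() for s in self.supports))


# A dissemination decision supported by two premises; retracting one
# support adjusts the belief in the conclusion.
policy_ok = Proposition("recipient is cleared by the policy")
doc_ok = Proposition("document is shareable")
may_send = Proposition("document may be sent", supports=[policy_ok, doc_ok])

print(may_send.believed())        # True
print(may_send.justification())   # document may be sent because: ...
policy_ok.asserted = False        # retract a support
print(may_send.believed())        # False
```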

    Flexible and expressive substrate for computation

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 167-174).
    In this dissertation I propose a shift in the foundations of computation. Modern programming systems are not expressive enough. The traditional image of a single computer that has global effects on a large memory is too restrictive. The propagation paradigm replaces this with computing by networks of local, independent, stateless machines interconnected with stateful storage cells. In so doing, it offers great flexibility and expressive power, and has therefore been much studied, but has not yet been tamed for general-purpose computation. The novel insight that should finally permit computing with general-purpose propagation is that a cell should not be seen as storing a value, but as accumulating information about a value. Various forms of the general idea of propagation have been used with great success for various special purposes; perhaps the most immediate example is constraint propagation in constraint satisfaction systems. This success is evidence both that traditional linear computation is not expressive enough, and that propagation is more expressive. These special-purpose systems, however, are all complex and all different, and neither compose well, nor interoperate well, nor generalize well. A foundational layer is missing. I present in this dissertation the design and implementation of a prototype general-purpose propagation system. I argue that the structure of the prototype follows from the overarching principle of computing by propagation and of storage by accumulating information; there are no important arbitrary decisions. I illustrate on several worked examples how the resulting organization supports arbitrary computation; recovers the expressivity benefits that have been derived from special-purpose propagation systems in a single general-purpose framework, allowing them to compose and interoperate; and offers further expressive power beyond what we have known in the past. I reflect on the new light the propagation perspective sheds on the deep nature of computation. By Alexey Andreyevich Radul. Ph.D.
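
    The key insight that a cell accumulates information about a value rather than storing the value can be made concrete using numeric intervals as the partial-information type. The sketch below is written in that spirit with assumed names; it is not the dissertation's Scheme code.

```python
# Sketch of a cell that accumulates partial information about a value,
# using intervals as the information type (assumed names; not the
# dissertation's Scheme implementation).

class Interval:
    def __init__(self, low, high):
        self.low, self.high = low, high

    def merge(self, other):
        """Combine two pieces of information about the same value."""
        low, high = max(self.low, other.low), min(self.high, other.high)
        if low > high:
            raise ValueError("contradictory information")
        return Interval(low, high)

    def __repr__(self):
        return "[%s, %s]" % (self.low, self.high)


class Cell:
    def __init__(self):
        self.content = None  # "nothing known yet"

    def add_content(self, info):
        # Each addition refines, rather than overwrites, what is known.
        self.content = info if self.content is None else self.content.merge(info)


# Two independent, imprecise measurements jointly narrow down the value.
c = Cell()
c.add_content(Interval(3.0, 8.0))
c.add_content(Interval(5.5, 10.0))
print(c.content)  # [5.5, 8.0]
```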