
    Supporting distributed computation over wide area gigabit networks

    The advent of high-bandwidth fibre-optic links that may be used over very large distances has led to much research and development in the field of wide area gigabit networking. One problem that needs to be addressed is how loosely coupled distributed systems may be built over these links, allowing many computers worldwide to take part in complex calculations in order to solve "Grand Challenge" problems. The research conducted as part of this PhD has looked at the practicality of implementing a communication mechanism proposed by Craig Partridge called Late-binding Remote Procedure Calls (LbRPC). LbRPC is intended to export both code and data over the network to remote machines for evaluation, as opposed to traditional RPC mechanisms that only send parameters to pre-existing remote procedures. The ability to send code as well as data means that LbRPC requests can overcome one of the biggest problems in Wide Area Distributed Computer Systems (WADCS): the fixed latency due to the speed of light. As machines get faster, the fixed multi-millisecond round-trip delay equates to ever-increasing numbers of CPU cycles. For a WADCS to be efficient, programs should minimise the number of network transits they incur; allowing the application programmer to export arbitrary code to the remote machine achieves this. This research has looked at the feasibility of supporting secure exportation of arbitrary code and data in heterogeneous, loosely coupled, distributed computing environments. It has investigated techniques for making placement decisions for the code in cases where there are a large number of widely dispersed remote servers that could be used. The latter has resulted in the development of a novel LbRPC prototype using multicast IP for implicit placement and a sequenced, multi-packet saturation multicast transport protocol. These prototypes show that it is possible to export code and data to multiple remote hosts, thereby removing the need to perform complex and error-prone explicit process placement decisions.
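
    To make the late-binding idea concrete, the following is a minimal Python sketch of the general pattern, with hypothetical helper names; it is not the thesis's prototype, and a real system would need a portable code representation, security checks, and a network transport across heterogeneous hosts, which are exactly the feasibility questions the thesis investigates.

        # Minimal sketch of late-binding RPC: ship code plus data in one
        # request, evaluate remotely, return the result (hypothetical names).
        import marshal
        import pickle
        import types

        def export_request(func, *args):
            # Package the function's compiled code together with its arguments.
            return pickle.dumps((marshal.dumps(func.__code__), args))

        def evaluate_request(payload):
            # "Remote" side: rebuild the function and run it next to the data.
            code_bytes, args = pickle.loads(payload)
            func = types.FunctionType(marshal.loads(code_bytes), {})
            return func(*args)

        def sum_of_squares(values):
            total = 0
            for v in values:
                total += v * v
            return total

        # One network transit carries the whole computation; a traditional RPC
        # design would pay the speed-of-light round trip per remote operation.
        request = export_request(sum_of_squares, list(range(1000)))
        print(evaluate_request(request))  # 332833500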

    Architecture, Services and Protocols for CRUTIAL

    This document describes the complete specification of the architecture, services and protocols of the project CRUTIAL. The CRUTIAL Architecture responds to a grand challenge of computer science and control engineering: how to achieve resilience of critical information infrastructures (CII), in particular in the electrical sector. In broad outline, the document starts by presenting the main architectural options and components of the architecture, with a special emphasis on a protection device called the CRUTIAL Information Switch (CIS). Given the various criticality levels of the equipment that has to be protected, and the cost of using a replicated device, we define a hierarchy of CIS designs that are incrementally more resilient. The different CIS designs offer various trade-offs in terms of capabilities to prevent and tolerate intrusions, both in the device itself and in the information infrastructure. The Middleware Services, APIs and Protocols chapter describes our approach to intrusion-tolerant middleware. The CRUTIAL middleware comprises several building blocks that are organized into a set of layers. The Multipoint Network layer is the lowest layer of the middleware, and features an abstraction of basic communication services, such as those provided by standard protocols like IP, IPsec, UDP, TCP and SSL/TLS. The Communication Support layer features three important building blocks: the Randomized Intrusion-Tolerant Services (RITAS), the CIS Communication service and the Fosel service for mitigating DoS attacks. The Activity Support layer comprises the CIS Protection service and the Access Control and Authorization service. The Access Control and Authorization service is implemented through PolyOrBAC, which defines the rules for information exchange and collaboration between sub-modules of the architecture, corresponding in fact to different facilities of the CII’s organizations. The Monitoring and Failure Detection layer contains a definition of the services devoted to monitoring and failure detection activities. The Runtime Support Services, APIs, and Protocols chapter features as a main component the Proactive-Reactive Recovery service, whose aim is to guarantee perpetual correct execution of any components it protects. Project co-funded by the European Commission within the Sixth Framework Programme (2002-2006).
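
    As a purely illustrative aside, the layering described above can be pictured as a stack in which each middleware layer wraps the services of the one below it. The Python sketch below uses entirely hypothetical class names and trivial behaviour; it mirrors only the layer ordering (multipoint network, communication support, activity support), not any actual CRUTIAL interface or protocol.

        # Toy layer stack mirroring the ordering described above; names and
        # behaviour are hypothetical, not CRUTIAL's actual interfaces.
        class MultipointNetwork:
            # Lowest layer: basic communication (IP, IPsec, UDP, TCP, SSL/TLS).
            def send(self, dst, payload):
                print(f"wire -> {dst}: {payload!r}")

        class CommunicationSupport:
            # Middle layer: e.g. intrusion-tolerant, DoS-resistant delivery.
            def __init__(self, lower):
                self.lower = lower
            def send(self, dst, payload):
                self.lower.send(dst, b"protected|" + payload)

        class ActivitySupport:
            # Upper layer: e.g. protection and access-control decisions.
            def __init__(self, lower):
                self.lower = lower
            def authorized(self, dst):
                return True  # stand-in for PolyOrBAC-style authorization rules
            def send(self, dst, payload):
                if self.authorized(dst):
                    self.lower.send(dst, payload)

        stack = ActivitySupport(CommunicationSupport(MultipointNetwork()))
        stack.send("substation-7", b"close breaker 4")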

    Anticipating and Enacting Worlds: Moods, Illness and Psychobehavioral Adaptation

    Predictive processing theorists have claimed that PTSD and depression are maladaptive and epistemically distorting because they entail undesirably wide gaps between top-down models and bottom-up information inflows. Without denying that this is sometimes so, the “maladaptive” label carries questionable normative assumptions. For instance, trauma survivors facing significant risk of subsequent attacks may overestimate threats to circumvent further trauma, “bringing forth” concretely safer personal spaces, to use enactive terminology, ensuring the desired gap between predicted worries and outcomes. The violation of predictions can go in the opposite direction too, as when depression coincides with energy depletion, and hence objectively strenuous situations in which things look farther away because they are (accurately anticipated to be) harder to reach. These examples partly encapsulate what predictive processing theorists call “active inference,” yet with differences. In the first case, actions fruitfully obviate predictions rather than fulfilling them. In the second, mental models do not dysregulate bodily processes, making coping harder. Instead, problems (e.g., personal obstacles, gastric illness) deplete energy, eliciting a depressive and adaptive slow-down. Some predictive processing proponents apply correspondence criteria when alleging mismatches between internal models and the world, while incongruously asserting that the brain did not evolve to see things veridically, but to execute actions. An alternative is to adopt pluralistic, pragmatic epistemologies suited to the complexity of mind. The upshot is that mental outlooks can depart from the norm without being epistemically distorted, and that mismatches between anticipatory worries and outcomes, when they actually exist, can be a measure of adaptive and epistemic success.

    The Pilot Land Data System: Report of the Program Planning Workshops

    An advisory report was developed for use by NASA in preparing a program plan for a Pilot Land Data System (PLDS). The purpose of the PLDS is to improve the ability of NASA and NASA-sponsored researchers to conduct land-related research. The goal of the planning workshops was to provide and coordinate planning and concept development between the land-related science and computer science disciplines, and to discuss the architecture of the PLDS, requirements for information science technology, and system evaluation. The findings and recommendations of the Working Group are presented. The pilot program establishes a limited-scale distributed information system to explore scientific, technical, and management approaches to satisfying the needs of the land science community. The PLDS paves the way for a land data system to improve data access, processing, transfer, and analysis, so that land sciences information synthesis can occur on a scale not previously permitted because of limits to data assembly and access.

    A common European Spectrum policy

    This briefing note considers the European Commission’s proposals for a common European spectrum policy by reviewing adopted legislation as well as recent communications and other initiatives. The report was produced against the background of the review of the regulatory framework for electronic communications and the recent World Radiocommunication Conference.

    Enhancements to the XNS authentication-by-proxy model

    Authentication is the secure network architecture mechanism by which a pair of mutually suspicious principals communicating over presumably insecure channels assure themselves that each is who it claims to be. The Xerox Network Systems (XNS) architecture proposes one such authentication scheme. This thesis examines the system consequences of the XNS model's unique proxy variant, by which a principal may temporarily commission a second network entity to assume its identity as a means of authority transfer. Specific attendant system failure modes are highlighted. The student's associated original contributions include proposed model revisions which rectify authentication shortfalls yet facilitate the temporal authority transfer motivating the proxy model. Consistent with the acknowledgement that no single solution is defensible as best under circumstances of such technical and administrative complexity, three such viable architectures are specified. Finally, the demand for a disciplined agent management mechanism within a distributed system such as XNS is resoundingly affirmed in the course of these first-order pursuits.
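
    Temporal authority transfer of this kind is commonly realized with time-limited, signed delegation credentials. The Python sketch below is a generic illustration of that pattern with hypothetical names; it is not the XNS wire format nor the thesis's proposed revisions.

        # Generic sketch of temporal authority transfer via a signed,
        # expiring delegation credential (hypothetical names, not XNS).
        import hashlib
        import hmac
        import json
        import time

        PRINCIPAL_KEY = b"principal-signing-key"  # stands in for a real key

        def issue_proxy_credential(principal, proxy, lifetime_s=300):
            # The principal signs a claim naming the proxy and an expiry time.
            claim = {"principal": principal, "proxy": proxy,
                     "expires": time.time() + lifetime_s}
            blob = json.dumps(claim, sort_keys=True).encode()
            tag = hmac.new(PRINCIPAL_KEY, blob, hashlib.sha256).hexdigest()
            return blob, tag

        def verify_proxy_credential(blob, tag):
            expected = hmac.new(PRINCIPAL_KEY, blob, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(tag, expected):
                return False  # forged or altered credential
            return time.time() < json.loads(blob)["expires"]  # authority lapses

        blob, tag = issue_proxy_credential("alice", "batch-server")
        print(verify_proxy_credential(blob, tag))  # True while unexpired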

    Game-Theoretic Frameworks and Strategies for Defense Against Network Jamming and Collocation Attacks

    Modern networks are becoming increasingly more complex, heterogeneous, and densely connected. While more diverse services are made available to an ever-increasing number of users through ubiquitous networking and pervasive computing, several important challenges have emerged. For example, densely connected networks are prone to higher levels of interference, which makes them more vulnerable to jamming attacks. Also, the utilization of software-based protocols to perform routing, load balancing and power management functions in Software-Defined Networks gives rise to more vulnerabilities that could be exploited by malicious users and adversaries. Moreover, the increased reliance on cloud computing services due to a growing demand for communication and computation resources poses formidable security challenges due to the shared nature and virtualization of cloud computing. In this thesis, we study two types of attacks: jamming attacks on wireless networks and side-channel attacks on cloud computing servers. The former attacks disrupt the natural network operation by exploiting the static topology and dynamic channel assignment in wireless networks, while the latter attacks seek to gain access to unauthorized data by co-residing with target virtual machines (VMs) on the same physical node in a cloud server. In both attacks, the adversary faces a static attack surface and achieves her illegitimate goal by exploiting a stationary aspect of the network functionality. Hence, this dissertation proposes and develops counter-approaches to both attacks using moving target defense strategies. We study the strategic interactions between the adversary and the network administrator within a game-theoretic framework. First, in the context of jamming attacks, we present and analyze a game-theoretic formulation between the adversary and the network defender. In this problem, the attack surface is the network connectivity (the static topology), as the adversary jams a subset of nodes to increase the level of interference in the network. On the other side, the defender makes judicious adjustments of the transmission footprint of the various nodes, thereby continuously adapting the underlying network topology to reduce the impact of the attack. The defender's strategy is based on playing Nash equilibrium strategies that secure a worst-case network utility. Moreover, decomposition-based approaches are developed, yielding a scalable defense strategy whose performance closely approaches that of the non-decomposed game for large-scale and dense networks. We study a class of games considering discrete as well as continuous power levels. In the second problem, we consider multi-tenant clouds, where a number of VMs are typically collocated on the same physical machine to optimize performance and power consumption and maximize profit. This increases the risk of a malicious virtual machine performing side-channel attacks and leaking sensitive information from neighboring VMs. The attack surface, in this case, is the static residency of VMs on a set of physical nodes, hence we develop a timed migration defense approach. Specifically, we analyze a timing game in which the cloud provider decides when to migrate a VM to a different physical machine to mitigate the risk of being compromised by a collocated malicious VM, while the adversary decides the rate at which she launches new VMs to collocate with the victim VMs. Our formulation captures a data leakage model in which the cost incurred by the cloud provider depends on the duration of collocation with malicious VMs. It also captures costs incurred by the adversary in launching new VMs and by the defender in migrating VMs. We establish sufficient conditions for the existence of Nash equilibria for general cost functions, as well as for specific instantiations, and characterize the best response for both players. Furthermore, we extend our model to characterize its impact on the attacker's payoff when the cloud utilizes intrusion detection systems that detect side-channel attacks. Our theoretical findings are corroborated with extensive numerical results in various settings as well as a proof-of-concept implementation in a realistic cloud setting.
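
    As a worked illustration of the worst-case (Nash) reasoning described above, the Python sketch below solves a tiny zero-sum jammer-versus-defender matrix game by fictitious play. The payoff matrix is invented purely for illustration; the dissertation's games, with continuous power levels and decomposition-based scaling, are far richer.

        # Toy zero-sum game: the defender picks a topology adjustment, the
        # jammer picks a target; entries are the defender's network utility
        # (invented numbers, purely illustrative).
        U = [[3.0, 1.0, 2.0],
             [1.0, 3.0, 1.5],
             [2.0, 1.0, 3.0]]

        d_counts = [0, 0, 0]  # empirical action frequencies for each player
        j_counts = [0, 0, 0]

        for t in range(20000):
            # Each side best-responds to the other's empirical mixture; in
            # two-player zero-sum games these mixtures converge to a Nash
            # equilibrium (fictitious play).
            d = max(range(3), key=lambda a: sum(U[a][b] * j_counts[b] for b in range(3)))
            j = min(range(3), key=lambda b: sum(U[a][b] * d_counts[a] for a in range(3)))
            d_counts[d] += 1
            j_counts[j] += 1

        n = sum(d_counts)
        print("defender mix:", [round(c / n, 3) for c in d_counts])
        print("jammer mix:  ", [round(c / n, 3) for c in j_counts])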

    Resource provision in object oriented distributed systems
