Survivability analogy for cloud computing
As cloud computing has become the most popular computing platform and cloud-based applications commonplace, the methods and mechanisms used to ensure their survivability are increasingly paramount. One prevalent recent trend is a turn to nature for inspiration in developing and supporting highly survivable environments. This paper aims to address the problem of survivability in cloud environments through inspiration from nature, in particular the community metaphor in nature's predator-prey systems, where autonomous individuals' local decisions collectively ensure the global survival of the community. We therefore develop analogies for survivability in cloud computing based on a range of mechanisms that we view as key determinants of prey's survival against predation, and we investigate several predator-prey systems that form the basis for our analogical designs. Furthermore, because survivability lacks a standardized definition, we propose a unified definition that emphasizes, as imperative, a high level of proactiveness to thwart black swan events, as well as a high capacity to respond to insecurity in a timely and appropriate manner, inspired by prey's avoidance and anti-predation approaches. © 2017 IEEE
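The predator-prey dynamics this paper draws on can be illustrated with a minimal Lotka-Volterra simulation. The model and all parameter values below are a generic textbook sketch, not taken from the paper, which builds qualitative analogies rather than a specific population model:

```python
# Minimal Lotka-Volterra predator-prey model, integrated with Euler steps.
# All parameters are illustrative.

def simulate(prey=40.0, predators=9.0, alpha=0.1, beta=0.02,
             delta=0.01, gamma=0.1, dt=0.01, steps=10000):
    """Return the (prey, predator) trajectory over `steps` Euler steps."""
    history = []
    for _ in range(steps):
        dprey = (alpha * prey - beta * prey * predators) * dt
        dpred = (delta * prey * predators - gamma * predators) * dt
        prey += dprey
        predators += dpred
        history.append((prey, predators))
    return history

history = simulate()
final_prey, final_pred = history[-1]
# Populations oscillate but neither species goes extinct -- the
# "community survival" property the survivability analogy draws on.
```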
Nature-inspired survivability: Prey-inspired survivability countermeasures for cloud computing security challenges
As cloud computing environments become more complex, adversaries have become highly sophisticated and unpredictable; moreover, they can easily increase attack power and persist longer before detection. Uncertain malicious actions, latent risks, and unobserved or unobservable risks (UUURs) characterise this new threat domain. This thesis proposes prey-inspired survivability to address unpredictable security challenges borne out of UUURs. While survivability is a well-addressed phenomenon in non-extinct prey animals, applying prey survivability to cloud computing directly is challenging due to contradicting end goals; managing evolving survivability goals and requirements under contradicting environmental conditions adds to the challenge. To address these challenges, this thesis proposes a holistic taxonomy which integrates multiple and disparate perspectives on cloud security challenges. In addition, it applies TRIZ (Teoriya Resheniya Izobretatelskikh Zadach, the theory of inventive problem solving) to derive prey-inspired solutions by resolving contradictions. First, it develops a 3-step process to facilitate interdomain transfer of concepts from nature to cloud; moreover, TRIZ's generic approach suggests specific solutions for cloud computing survivability. The thesis then presents the conceptual prey-inspired cloud computing survivability framework (Pi-CCSF), built upon the TRIZ-derived solutions. The framework's run-time is pushed to user-space to support evolving survivability design goals. Furthermore, a target-based decision-making technique (TBDM) is proposed to manage survivability decisions. To evaluate the prey-inspired survivability concept, a Pi-CCSF simulator is developed and implemented. Evaluation results show that escalating survivability actions improve the vitality of vulnerable and compromised virtual machines (VMs) by 5% and dramatically improve their overall survivability. Hypothesis testing supports the hypothesis that the escalation mechanisms can be applied to enhance the survivability of cloud computing systems. Numeric analysis of TBDM shows that by considering survivability preferences and attitudes (which directly impact survivability actions), the TBDM method brings unpredictable survivability information closer to decision processes. This enables efficient execution of variable escalating survivability actions, which in turn enables Pi-CCSF's decision system (DS) to focus upon decisions that achieve survivability outcomes under the unpredictability imposed by UUURs
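The escalation idea can be pictured as a simple state machine over VM health states. The state names, actions, and vitality metric below are hypothetical illustrations of the concept, not the Pi-CCSF implementation:

```python
# Hypothetical sketch of escalating survivability actions over VM states.
# States, actions, and the vitality score are illustrative only.

ESCALATION = {
    "healthy":     None,        # no action needed
    "vulnerable":  "harden",    # e.g. patch or reconfigure
    "compromised": "isolate",   # cut the VM off from the network
    "failed":      "migrate",   # respawn the workload elsewhere
}

def escalate(vm_state, vitality):
    """Pick an escalating action; return the (new_state, new_vitality) pair."""
    action = ESCALATION[vm_state]
    if action is None:
        return vm_state, vitality
    # Each escalating action recovers some vitality; the 5% figure here
    # simply echoes the order of magnitude reported in the evaluation.
    return "healthy", min(1.0, vitality * 1.05)

state, vitality = escalate("vulnerable", 0.60)
```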
Joint dimensioning of server and network infrastructure for resilient optical grids/clouds
We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: Under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that: 1) chooses a given number of sites in a given network topology where to install server infrastructure; and 2) determines the amount of both network and server capacity to cater for both the failure-free scenario and failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we adopt relocation (especially for a high number of server sites): Without exploiting relocation, potential savings of FD versus FID are not meaningful
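The paper's optimization is a full ILP; as a toy illustration of the relocation idea it exploits, the brute-force sketch below (topology, requests, and reachability sets all invented for the example) picks server sites so that every request can still be served, possibly at an alternate site, after any single site failure:

```python
from itertools import combinations

# Toy instance: 4 candidate sites, 3 requests. Which 3 sites should host
# servers so every request survives any single-site failure via relocation?
REACHABLE = {            # request -> sites it could be (re)located to
    "r1": {"A", "B"},
    "r2": {"B", "C"},
    "r3": {"A", "C", "D"},
}
SITES = {"A", "B", "C", "D"}

def survives(chosen):
    """True if every request still reaches a chosen site even after any
    single chosen site fails (None models the failure-free scenario)."""
    for failed in chosen | {None}:
        alive = chosen - {failed}
        if not all(REACHABLE[r] & alive for r in REACHABLE):
            return False
    return True

solutions = [set(c) for c in combinations(sorted(SITES), 3)
             if survives(set(c))]
# Only {A, B, C} works here; no 2-site choice survives a failure at all,
# which is the kind of trade-off the ILP resolves at scale.
```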
Hybrid Cloud Model Checking Using the Interaction Layer of HARMS for Ambient Intelligent Systems
Soon, humans will be co-living with and taking advantage of the help of multi-agent systems in a broader way than at present. Such systems will involve machines or devices of many varieties, including robots, and will adapt to the special needs of each individual. Of concern to this research effort, however, systems like the ones mentioned above may encounter situations that were not foreseen before execution time. Two possible outcomes could then materialize: either the system keeps working without corrective measures, which could lead to an entirely different end, or it stops working completely. Both results should be avoided, especially in cases where the end user depends on high-level guidance provided by the system, as in ambient intelligence applications.
This dissertation worked towards two specific goals: first, to ensure that the system will always work, independently of which agent performs the different tasks needed to accomplish a larger objective; and second, to provide initial steps towards autonomous survivable systems that can change their future actions in order to achieve their original final goals. To these ends, the use of the third layer of the HARMS model was proposed to ensure the indistinguishability of the actors accomplishing each task and sub-task, regardless of the intrinsic complexity of the activity. Additionally, a framework was proposed that applies model-checking methodology at run-time to provide possible solutions to issues encountered during execution, as part of the survivability feature of the system's final goals
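The run-time model-checking idea (after a failure, re-verify that the final goal is still reachable and replan if not) can be sketched as a reachability check over a task graph. The tasks, agents, and transitions below are invented for illustration; they are not the HARMS interaction layer itself:

```python
from collections import deque

# Toy task-transition system: each edge is a (next_task, agent) step.
# If an agent dies at run time, skip its edges and re-check that "goal"
# stays reachable -- the survivability check that triggers replanning.
EDGES = {
    "start":   [("fetch", "robot1"), ("fetch", "robot2")],
    "fetch":   [("deliver", "robot1")],
    "deliver": [("goal", "robot2")],
    "goal":    [],
}

def reachable(target, failed_agents=frozenset()):
    """Breadth-first search that skips edges handled by failed agents."""
    seen, queue = {"start"}, deque(["start"])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for nxt, agent in EDGES[state]:
            if agent not in failed_agents and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

ok_all = reachable("goal")                  # all agents alive: plan holds
ok_r1_down = reachable("goal", {"robot1"})  # robot1 failed: goal unreachable
```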
Interaction of Supernova Ejecta with Nearby Protoplanetary Disks
The early Solar System contained short-lived radionuclides such as 60Fe (half-life 1.5 Myr) whose most likely source was a nearby supernova. Previous models of Solar System formation considered a supernova shock that triggered the collapse of the Sun's nascent molecular cloud. We advocate an alternative hypothesis: that the Solar System's protoplanetary disk had already formed when a very close (< 1 pc) supernova injected radioactive material directly into the disk. We conduct the first numerical simulations designed to answer two questions related to this hypothesis: will the disk be destroyed by such a close supernova, and will any of the ejecta be mixed into the disk? Our simulations demonstrate that the disk does not absorb enough momentum from the shock to escape the protostar to which it is bound. Only low amounts (< 1%) of mass loss occur, due to stripping by Kelvin-Helmholtz instabilities across the top of the disk, which also mix about 1% of the intercepted ejecta into the disk. These low efficiencies of destruction and injection are due to the fact that the high disk pressures prevent the ejecta from penetrating far into the disk before stalling. Injection of gas-phase ejecta is too inefficient to be consistent with the abundances of radionuclides inferred from meteorites. On the other hand, the radionuclides found in meteorites would have condensed into dust grains in the supernova ejecta, and we argue that such grains will be injected directly into the disk with nearly 100% efficiency. The meteoritic abundances of short-lived radionuclides such as 60Fe are therefore consistent with injection of grains condensed from the ejecta of a nearby (< 1 pc) supernova into an already-formed protoplanetary disk. (Comment: 57 pages, 16 figures)
The Multiple Facets of Software Diversity: Recent Developments in Year 2000 and Beyond
Early experiments with software diversity in the mid-1970s investigated N-version programming and recovery blocks to increase the reliability of embedded systems. Four decades later, the literature on software diversity has expanded in multiple directions: goals (fault tolerance, security, software engineering), means (managed or automated diversity), and analytical studies (quantification of diversity and its impact). Our paper contributes to the field of software diversity as the first to adopt an inclusive vision of the area, with an emphasis on the most recent advances. This survey includes classical work on design and data diversity for fault tolerance, as well as the cybersecurity literature that investigates randomization at different system levels. It broadens this standard scope of diversity to include the study and exploitation of natural diversity and the management of diverse software products. Our survey covers the most recent works, with an emphasis on the period from 2000 to the present. The targeted audience is researchers and practitioners in one of the surveyed fields who may be missing the big picture of software diversity. Assembling the multiple facets of this fascinating topic sheds new light on the field
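The N-version programming idea the survey opens with can be illustrated by running independently written variants of the same function and majority-voting their results. The three toy variants below (one deliberately buggy) are invented for this example:

```python
import math
from collections import Counter

# Three independently written "versions" of square root (one buggy).
def sqrt_v1(x):
    return round(math.sqrt(x), 6)          # standard library

def sqrt_v2(x):
    g = x or 1.0                           # Newton's method
    for _ in range(50):
        g = 0.5 * (g + x / g)
    return round(g, 6)

def sqrt_v3(x):
    return round(x ** 0.5 * 1.1, 6)        # buggy: off by a factor

def n_version(x, versions=(sqrt_v1, sqrt_v2, sqrt_v3)):
    """Run all versions and return the majority answer."""
    results = [v(x) for v in versions]
    answer, _votes = Counter(results).most_common(1)[0]
    return answer

result = n_version(2.0)   # the two correct variants outvote the buggy one
```

Design and data diversity for fault tolerance generalize this pattern: failures are masked as long as a majority of independently developed versions agree.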
The Changing Patterns of Internet Usage
Symposium: Essays from Time Warner Cable's Research Program on Digital Communications
The Internet unquestionably represents one of the most important technological developments in recent history. It has revolutionized the way people communicate with one another and obtain information, and it has created an unimaginable variety of commercial and leisure activities. Interestingly, many members of the engineering community observe that the current network is ill-suited to handle the demands that end users are placing on it; indeed, engineering researchers often describe the network as ossified and impervious to significant architectural change. As a result, both the U.S. government and the European Commission are sponsoring “clean slate” projects to study how the Internet might be designed differently if it were designed from scratch today. This Essay explores emerging trends that are transforming the way end users use the Internet and examines their implications for both network architecture and public policy. These trends include Internet protocol video, wireless broadband, cloud computing, programmable networking, and pervasive computing and sensor networks. The Essay discusses how these changes in the way people use the network may require the network to evolve in new directions