Artificial table testing dynamically adaptive systems
Dynamically Adaptive Systems (DAS) are systems that modify their behavior and
structure in response to changes in their surrounding environment. Critical
mission systems increasingly incorporate adaptation and response to the
environment; examples include disaster relief and space exploration systems.
These systems can be decomposed into two parts: the adaptation policy, which
specifies how the system must react to environmental changes, and the set of
possible variants to which the system can be reconfigured. A major challenge
for testing these systems is the combinatorial explosion of variants and
environment conditions to which the system must react. In this paper we focus
on testing the adaptation policy and propose a strategy for the selection of
environmental variations that can reveal faults in the policy. Artificial
Shaking Table Testing (ASTT) is a strategy inspired by shaking table testing
(STT), a technique widely used in civil engineering to evaluate a building's
structural resistance to seismic events. ASTT makes use of artificial
earthquakes that simulate violent changes in the environmental conditions and
stress the system's adaptation capability. We model the generation of
artificial earthquakes as a search problem in which the goal is to optimize
different types of environmental variations.
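The search formulation above can be illustrated with a minimal sketch: random-restart hill climbing over a sequence of environmental variations, scored by how abruptly the sequence changes. The encoding, the fitness function, and all names here are illustrative assumptions, not the paper's actual formulation.

```python
import random

def fitness(sequence):
    """Score a variation sequence by how abruptly it changes between steps,
    a crude proxy for how strongly it 'shakes' the adaptation policy."""
    return sum(abs(b - a) for a, b in zip(sequence, sequence[1:]))

def mutate(sequence, rng):
    """Perturb one environmental variable, clamped to [0, 1]."""
    s = list(sequence)
    i = rng.randrange(len(s))
    s[i] = max(0.0, min(1.0, s[i] + rng.uniform(-0.3, 0.3)))
    return s

def search_earthquake(length=10, iterations=500, seed=0):
    """Hill-climb toward an 'artificial earthquake' maximizing the fitness."""
    rng = random.Random(seed)
    best = [rng.random() for _ in range(length)]
    best_score = fitness(best)
    for _ in range(iterations):
        candidate = mutate(best, rng)
        score = fitness(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```

In practice a search-based testing tool would replace the toy fitness with a measure of how much the generated variations stress the adaptation policy under test.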
Leveraging Semantic Web Technologies for Managing Resources in a Multi-Domain Infrastructure-as-a-Service Environment
This paper reports on experience with using semantically-enabled network
resource models to construct an operational multi-domain networked
infrastructure-as-a-service (NIaaS) testbed called ExoGENI, recently funded
through NSF's GENI project. A defining property of NIaaS is the deep
integration of network provisioning functions alongside the more common storage
and computation provisioning functions. Resource provider topologies and user
requests can be described using network resource models with common base
classes for fundamental cyber-resources (links, nodes, interfaces) specialized
via virtualization and adaptations between networking layers to specific
technologies.
This problem space gives rise to a number of application areas where semantic
web technologies become highly useful: common information models and resource
class hierarchies simplify resource descriptions from multiple providers,
while pathfinding and topology embedding algorithms rely on query abstractions
as building blocks.
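The idea of a common class hierarchy with specialization can be sketched with a toy model: base cyber-resource classes, technology-specific subclasses, and a query helper that walks the hierarchy. Class names and the helper are illustrative assumptions, not ExoGENI's actual ontology or API.

```python
# Toy resource class hierarchy: child class -> parent class.
CLASS_PARENT = {
    "VirtualMachine": "Node",
    "Node": "CyberResource",
    "VlanLink": "Link",
    "Link": "CyberResource",
    "Interface": "CyberResource",
}

def is_subclass(cls, base):
    """Walk the hierarchy upward, as a class-hierarchy query would."""
    while cls is not None:
        if cls == base:
            return True
        cls = CLASS_PARENT.get(cls)
    return False

def find_resources(catalog, base):
    """Return resources from any provider whose class specializes `base`."""
    return [r for r in catalog if is_subclass(r["class"], base)]
```

A real deployment would express this in RDF/OWL and answer such queries with a semantic query language rather than a hand-rolled walk, but the building-block role of the query is the same.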
The paper describes how the semantic resource description models enable
ExoGENI to autonomously instantiate on-demand virtual topologies of virtual
machines provisioned from cloud providers, linked by on-demand virtual
connections acquired from multiple autonomous network providers, to serve a
variety of applications ranging from distributed system experiments to
high-performance computing.
DEPAS: A Decentralized Probabilistic Algorithm for Auto-Scaling
The dynamic provisioning of virtualized resources offered by cloud computing
infrastructures allows applications deployed in a cloud environment to
automatically increase and decrease the amount of used resources. This
capability is called auto-scaling and its main purpose is to automatically
adjust the scale of the system that is running the application to satisfy the
varying workload with minimum resource utilization. The need for auto-scaling
is particularly important during workload peaks, in which applications may need
to scale up to extremely large-scale systems.
Both the research community and the main cloud providers have already
developed auto-scaling solutions. However, most research solutions are
centralized and not suitable for managing large-scale systems; moreover, cloud
providers' solutions are bound to the limitations of a specific provider in
terms of resource prices, availability, reliability, and connectivity.
In this paper we propose DEPAS, a decentralized probabilistic auto-scaling
algorithm integrated into a P2P architecture that is cloud provider
independent, thus allowing the auto-scaling of services over multiple cloud
infrastructures at the same time. Our simulations, which are based on real
service traces, show that our approach is capable of (i) keeping the overall
utilization of all the instantiated cloud resources in a target range, and
(ii) maintaining service response times close to the ones obtained using
optimal centralized auto-scaling approaches.
Comment: Submitted to Springer Computing
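The decentralized probabilistic decision can be sketched as follows: each node independently compares an estimated overall utilization against a target range and adds or removes capacity with a probability proportional to the imbalance. The thresholds and probability formulas are illustrative assumptions, not the published DEPAS algorithm.

```python
import random

def scale_decision(estimated_util, target_low=0.5, target_high=0.8, rng=random):
    """Return +1 (add an instance), -1 (remove self), or 0 (do nothing)."""
    if estimated_util > target_high:
        # Overloaded: add capacity with probability equal to the relative excess,
        # so only a fraction of nodes act and the system does not overshoot.
        p_add = (estimated_util - target_high) / estimated_util
        return 1 if rng.random() < p_add else 0
    if estimated_util < target_low:
        # Underloaded: remove self with probability equal to the relative slack.
        p_remove = (target_low - estimated_util) / target_low
        return -1 if rng.random() < p_remove else 0
    return 0
```

Because every node draws independently, the expected number of scaling actions across the system is proportional to the imbalance, which is what keeps overall utilization inside the target range without any central coordinator.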
Sensor function virtualization to support distributed intelligence in the internet of things
It is estimated that, by 2020, billions of devices will be connected to the Internet. This number not only includes TVs, PCs, tablets and smartphones, but also billions of embedded sensors that will make up the "Internet of Things" and enable a whole new range of intelligent services in domains such as manufacturing, health, smart homes, logistics, etc. To some extent, intelligence such as data processing or access control can be placed on the devices themselves. Alternatively, functionalities can be outsourced to the cloud. In reality, there is no single solution that fits all needs. Cooperation between devices, intermediate infrastructures (local networks, access networks, global networks) and/or cloud systems is needed in order to optimally support IoT communication and IoT applications. Through distributed intelligence the right communication and processing functionality will be available at the right place. The first part of this paper motivates the need for such distributed intelligence based on shortcomings in typical IoT systems. The second part focuses on the concept of sensor function virtualization, a potential enabler for distributed intelligence, and presents solutions on how to realize it.
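The placement question — device, intermediate infrastructure, or cloud — can be sketched as a constraint check over tiers ordered nearest-first. The tier names, capacities, and decision rule are illustrative assumptions, not the paper's design.

```python
# Tiers ordered nearest-first: prefer the lowest-latency feasible placement.
TIERS = [
    {"name": "device", "latency_ms": 1, "cpu": 1},
    {"name": "gateway", "latency_ms": 10, "cpu": 8},
    {"name": "cloud", "latency_ms": 100, "cpu": 1000},
]

def place_function(required_cpu, max_latency_ms):
    """Pick the first tier satisfying both the compute and latency constraints."""
    for tier in TIERS:
        if tier["cpu"] >= required_cpu and tier["latency_ms"] <= max_latency_ms:
            return tier["name"]
    return None  # no feasible placement
```

A virtualized sensor function would then be deployed on the chosen tier, moving between tiers as constraints change.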
Game Theoretic Approaches to Massive Data Processing in Wireless Networks
Wireless communication networks are becoming highly virtualized with
two-layer hierarchies, in which controllers at the upper layer with tasks to
achieve can ask a large number of agents at the lower layer to help realize
computation, storage, and transmission functions. Through offloading data
processing to the agents, the controllers can accomplish otherwise prohibitive
big data processing. Incentive mechanisms are needed for the agents to perform
the controllers' tasks in order to satisfy the corresponding objectives of
controllers and agents. In this article, a hierarchical game framework with
fast convergence and scalability is proposed to meet the demand for real-time
processing for such situations. Possible future research directions in this
emerging area are also discussed.
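The two-layer incentive interaction can be sketched as a Stackelberg-like exchange: the controller announces a per-unit reward, each agent picks the effort that maximizes its own utility, and the controller chooses the reward that maximizes its net benefit. The quadratic cost model and all parameters are illustrative assumptions, not the article's framework.

```python
def agent_effort(reward, cost=1.0):
    """Agent utility = reward*e - cost*e^2, maximized at e = reward / (2*cost)."""
    return reward / (2 * cost)

def controller_best_reward(value_per_unit, n_agents, cost=1.0, step=0.01):
    """Grid-search the reward maximizing the controller payoff
    n_agents * (value_per_unit - reward) * agent_effort(reward)."""
    best_r, best_payoff = 0.0, 0.0
    r = 0.0
    while r <= value_per_unit:
        payoff = n_agents * (value_per_unit - r) * agent_effort(r, cost)
        if payoff > best_payoff:
            best_r, best_payoff = r, payoff
        r += step
    return best_r
```

With this quadratic cost the controller's payoff is n(v - r)r/(2c), so the optimum sits at r = v/2, which the grid search recovers.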
Adaptive Mechanisms for Mobile Spatio-Temporal Applications
Mobile spatio-temporal applications play a key role in many mission critical fields, including Business Intelligence, Traffic Management and Disaster Management. They are characterized by high data volume, velocity and a large and variable number of mobile users. The design and implementation of these applications should not only consider this variability, but also support other quality requirements such as performance and cost. In this thesis we propose an architecture for mobile spatio-temporal applications, which enables multiple angles of adaptivity. We also introduce a two-level adaptation mechanism that ensures system performance while facilitating scalability and context-aware adaptivity. We validate the architecture and adaptation mechanisms by implementing a road quality assessment mobile application as a use case and by performing a series of experiments in a cloud environment. We show that our proposed architecture can adapt at runtime and maintain service level objectives while offering cost-efficiency and robustness.
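A two-level adaptation mechanism of this kind can be sketched as a fast, cheap local level (per-node parameter tuning) backed by a slower global level (scaling out) that acts only when local tuning is exhausted. All thresholds, parameters, and actions are illustrative assumptions, not the thesis design.

```python
def local_adapt(latency_ms, sampling_rate, slo_ms=200):
    """Level 1: cheap per-node reaction -- shed load by halving the rate."""
    if latency_ms > slo_ms:
        return max(1, sampling_rate // 2)
    return sampling_rate

def global_adapt(latency_ms, workers, sampling_rate, slo_ms=200, min_rate=2):
    """Level 2: scale out only when local adaptation has hit its floor."""
    if latency_ms > slo_ms and sampling_rate <= min_rate:
        return workers + 1
    return workers
```

Separating the levels keeps routine fluctuations local and reserves the expensive scaling action for sustained overload, which is what lets the system hold its service level objectives cost-efficiently.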
Dynamic Adaptive Point Cloud Streaming
High-quality point clouds have recently gained interest as an emerging form
of representing immersive 3D graphics. Unfortunately, these 3D media are bulky
and severely bandwidth intensive, which makes it difficult for streaming to
resource-limited and mobile devices. This has prompted researchers to propose
efficient and adaptive approaches for streaming high-quality point clouds.
In this paper, we run a pilot study towards dynamic adaptive point cloud
streaming, and extend the concept of dynamic adaptive streaming over HTTP
(DASH) towards DASH-PC, a dynamic adaptive bandwidth-efficient and view-aware
point cloud streaming system. DASH-PC can tackle the huge bandwidth demands of
dense point cloud streaming while at the same time can semantically link to
human visual acuity to maintain high visual quality when needed. In order to
describe the various quality representations, we propose multiple thinning
approaches to spatially sub-sample point clouds in the 3D space, and design a
DASH Media Presentation Description manifest specific for point cloud
streaming. Our initial evaluations show that we can achieve significant
bandwidth and performance improvements on dense point cloud streaming with
minor negative quality impact compared to the baseline scenario in which no
adaptation is applied.
Comment: 6 pages, 23rd ACM Packet Video (PV'18) Workshop, June 12--15, 2018,
Amsterdam, Netherlands
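One common way to spatially sub-sample a point cloud into multiple quality representations is voxel-grid thinning: keep one point per voxel, with coarser voxels yielding lower-rate representations that a DASH-style manifest could advertise. This sketch is an assumption for illustration, not the paper's actual thinning algorithms.

```python
def voxel_thin(points, voxel_size):
    """Keep the first point seen in each voxel of edge length `voxel_size`."""
    kept, seen = [], set()
    for x, y, z in points:
        voxel = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        if voxel not in seen:
            seen.add(voxel)
            kept.append((x, y, z))
    return kept

def build_representations(points, voxel_sizes=(0.1, 0.5, 1.0)):
    """One representation per voxel size; coarser sizes keep fewer points."""
    return {size: voxel_thin(points, size) for size in voxel_sizes}
```

A client could then switch between representations as bandwidth or viewing distance changes, analogous to bitrate switching in video DASH.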