440 research outputs found
Configurable node density generation with application to hotspot modelling
Mobility models are particularly relevant when studying the performance of wireless systems by means of computer simulations. The main problem arises when deciding on the best mobility model for a particular application. In some cases it is very important to emulate hotspots or, in general, zones with different user (or node) densities. Current models do not allow complete control over hotspots; in other words, they do not allow an arbitrary node density to be defined over the simulation area. Usually, when hotspots are modelled, closed zones are created with different numbers of users in each zone, thus ensuring a fixed node density in each zone. However, this approach results in an unfair comparison among users, since they cannot move across zones. This paper proposes a new mechanism to solve these drawbacks. Using this mechanism, any general node density can be emulated while allowing nodes to move around the entire simulation area. Any mobility model can be applied together with this density control mechanism, provided that the mobility model ensures a uniform node distribution. © 2010 Elsevier Ltd. This work has been funded by the Spanish Ministry of Science and Innovation under project TEC2008-06817-C02-01/TEC. Calabuig Soler, D.; Monserrat Del Río, J.F.; Cardona Marcet, N. (2011). Configurable node density generation with application to hotspot modelling. Mathematical and Computer Modelling, 53(11-12):2229-2237. https://doi.org/10.1016/j.mcm.2010.08.028
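The abstract does not detail the paper's density-control mechanism itself, but the underlying idea of imposing an arbitrary node density over a simulation area can be sketched with rejection sampling; the `hotspot` density function and all parameter values below are illustrative assumptions, not the paper's method.

```python
import random

def place_nodes(n, width, height, density, d_max):
    """Place n nodes so their spatial distribution follows the
    (unnormalised) target density function, via rejection sampling."""
    nodes = []
    while len(nodes) < n:
        x, y = random.uniform(0, width), random.uniform(0, height)
        # Accept the candidate with probability density(x, y) / d_max
        if random.random() < density(x, y) / d_max:
            nodes.append((x, y))
    return nodes

# Illustrative example: one hotspot in the centre of a 100 x 100 area
def hotspot(x, y):
    return 1.0 + 9.0 * max(0.0, 1.0 - ((x - 50)**2 + (y - 50)**2) / 400.0)

nodes = place_nodes(500, 100, 100, hotspot, d_max=10.0)
```

Nodes remain free to move over the whole area; the density shaping comes only from where positions are accepted, which is the property the paper's fixed-zone alternatives lack.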
An intelligent-agent approach for managing congestion in W-CDMA networks
PhD thesis. Resource management is a crucial aspect of next-generation cellular networks
since the use of W-CDMA technology gives an inherent flexibility in managing the
system capacity. The concept of a “Service Level Agreement” (SLA) also plays a
very important role as it is the means to guarantee the quality of service provided to
the customers in response to the level of service to which they have subscribed.
Hence there is a need to introduce effective SLA-based policies as part of the radio
resource management.
This work proposes the application of intelligent agents in SLA-based control in
resource management, especially when congestion occurs. The work demonstrates the
ability of intelligent agents in improving and maintaining the quality of service to
meet the required SLA when congestion occurs.
A particularly novel aspect of this work is the use of learning (here Case Based
Reasoning) to predict the control strategies to be imposed. As the system environment
changes, the most suitable policy will be implemented. When congestion occurs, the
system either proposes the solution by recalling from experience (if the event is
similar to what has been previously solved) or recalculates the solution from its
knowledge (if the event is new). With this approach, the system performance will be
monitored at all times and a suitable policy can be immediately applied as the system
environment changes, thereby maintaining the system's quality of service.
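The recall-or-recalculate loop described above is the classic case-based reasoning cycle. A minimal sketch, assuming a Manhattan distance over congestion-state feature vectors and a hypothetical `solver` fallback (neither is specified in the abstract):

```python
def distance(a, b):
    """Manhattan distance between two system-state feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

class CongestionCBR:
    """Minimal case-based reasoning loop: recall a past solution when a
    similar congestion event is found, otherwise compute and retain one."""
    def __init__(self, solver, threshold):
        self.cases = []          # case base: (state, policy) pairs
        self.solver = solver     # fallback: recalculate policy from knowledge
        self.threshold = threshold

    def resolve(self, state):
        if self.cases:
            best_state, policy = min(self.cases,
                                     key=lambda c: distance(c[0], state))
            if distance(best_state, state) <= self.threshold:
                return policy                  # reuse past experience
        policy = self.solver(state)            # new event: recalculate
        self.cases.append((state, policy))     # retain for next time
        return policy

# Illustrative use with a stand-in solver
solver = lambda state: ("throttle", max(state))
cbr = CongestionCBR(solver, threshold=2)
p1 = cbr.resolve((10, 3))   # new event: computed and retained
p2 = cbr.resolve((11, 3))   # similar event: recalled from the case base
```

Real SLA-based control would use richer state features and a retain/revise policy, but the recall-versus-recompute decision is the part the thesis highlights.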
A Survey of Phase Classification Techniques for Characterizing Variable Application Behavior
Adaptable computing is an increasingly important paradigm that specializes
system resources to variable application requirements, environmental
conditions, or user requirements. Adapting computing resources to variable
application requirements (or application phases) is otherwise known as
phase-based optimization. Phase-based optimization takes advantage of
application phases, or execution intervals of an application, that behave
similarly, to enable effective and beneficial adaptability. In order for
phase-based optimization to be effective, the phases must first be classified
to determine when application phases begin and end, and ensure that system
resources are accurately specialized. In this paper, we present a survey of
phase classification techniques that have been proposed to exploit the
advantages of adaptable computing through phase-based optimization. We focus on
recent techniques and classify these techniques with respect to several factors
in order to highlight their similarities and differences. We divide the
techniques by their major defining characteristics---online/offline and
serial/parallel. In addition, we discuss other characteristics such as
prediction and detection techniques, the characteristics used for prediction,
interval type, etc. We also identify gaps in the state-of-the-art and discuss
future research directions to enable and fully exploit the benefits of
adaptable computing.
Comment: To appear in IEEE Transactions on Parallel and Distributed Systems (TPDS).
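A minimal sketch of threshold-based phase classification, in the spirit of the interval-similarity techniques such surveys cover; the feature vectors and threshold below are illustrative (real classifiers typically use basic-block vectors or hardware performance counters):

```python
def classify_phases(intervals, threshold):
    """Assign each execution interval to a phase by comparing its feature
    vector against representatives of phases seen so far; a new phase is
    created when no representative is within the similarity threshold."""
    reps, labels = [], []
    for vec in intervals:
        for pid, rep in enumerate(reps):
            # Manhattan distance as the interval-similarity metric
            if sum(abs(a - b) for a, b in zip(rep, vec)) <= threshold:
                labels.append(pid)
                break
        else:
            reps.append(vec)             # unseen behaviour: new phase
            labels.append(len(reps) - 1)
    return labels

# Four intervals: the third behaves differently from the others
labels = classify_phases([(1, 0), (1, 0.1), (5, 5), (1, 0)], threshold=0.5)
```

This is an online, serial classifier in the survey's taxonomy; offline variants instead cluster all intervals after profiling completes.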
Operating Different Displays in Military Fast Jets Using Eye Gaze Tracker
This paper investigated the use of an eye-gaze-controlled interface in a military aviation environment. We set up a flight simulator and used the gaze-controlled interface in three different display configurations (head down, head up, and head mounted) for military fast jets. Our studies found that the gaze-controlled interface statistically significantly increased the speed of interaction for secondary mission control tasks compared to touchscreen- and joystick-based target designation systems. Finally, we tested a gaze-controlled system inside an aircraft, both on the ground and in different phases of flight, with military pilots. Results showed that they could undertake representative pointing and selection tasks in less than two seconds on average.
Autonomous management of cost, performance, and resource uncertainty for migration of applications to infrastructure-as-a-service (IaaS) clouds
Fall 2014. Includes bibliographical references. Infrastructure-as-a-Service (IaaS) clouds abstract physical hardware to provide computing resources on demand as a software service. This abstraction leads to the simplistic view that computing resources are homogeneous and that infinite scaling potential exists to easily resolve all performance challenges. In practice, however, adoption of cloud computing presents many resource management challenges, forcing practitioners to balance cost and performance tradeoffs to successfully migrate applications. These challenges can be broken down into three primary concerns: determining what, where, and when infrastructure should be provisioned. In this dissertation we address these challenges, including: (1) performance variance from resource heterogeneity, virtualization overhead, and the plethora of vaguely defined resource types; (2) virtual machine (VM) placement, component composition, service isolation, provisioning variation, and resource contention under multitenancy; and (3) dynamic scaling and resource elasticity to alleviate performance bottlenecks. These resource management challenges are addressed through the development and evaluation of autonomous algorithms and methodologies that result in demonstrably better performance and lower monetary costs for application deployments to both public and private IaaS clouds. This dissertation makes three primary contributions to advance cloud infrastructure management for application hosting. First, it includes the design of resource utilization models based on step-wise multiple linear regression and artificial neural networks that support prediction of better-performing component compositions. The total number of possible compositions is governed by Bell's number, which results in a combinatorially explosive search space.
Second, it includes algorithms to improve VM placements to mitigate resource heterogeneity and contention using a load-aware VM placement scheduler, and autonomous detection of under-performing VMs to spur replacement. Third, it describes a workload cost prediction methodology that harnesses regression models and heuristics to support determination of infrastructure alternatives that reduce hosting costs. Our methodology achieves infrastructure predictions with an average mean absolute error of only 0.3125 VMs for multiple workloads.
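The Bell-number growth of the component-composition search space mentioned above can be made concrete. A small sketch computing Bell numbers via the Bell triangle:

```python
def bell_numbers(n):
    """Bell numbers via the Bell triangle: B(k) counts the ways to
    partition k application components into composed groups."""
    row = [1]
    bells = [1]                # B(0) = 1
    for _ in range(n):
        new_row = [row[-1]]    # each row starts with the previous row's end
        for v in row:
            new_row.append(new_row[-1] + v)
        row = new_row
        bells.append(row[0])   # B(k) is the first entry of row k
    return bells

# Even 10 components already admit 115,975 distinct compositions
b10 = bell_numbers(10)[-1]
```

The super-exponential growth is why the dissertation resorts to regression and neural-network models rather than exhaustive search over compositions.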
Physical parameter-aware Networks-on-Chip design
PhD thesis. Networks-on-Chip (NoCs) have been proposed as a scalable, reliable
and power-efficient communication fabric for chip multiprocessors
(CMPs) and multiprocessor systems-on-chip (MPSoCs). NoCs determine
both the performance and the reliability of such systems, with a
significant power demand that is expected to increase due to developments
in both technology and architecture. In terms of architecture, an
important trend in many-core systems architecture is to increase the
number of cores on a chip while reducing their individual complexity.
This trend increases communication power relative to computation
power. Moreover, technology-wise, power-hungry wires are dominating
logic as power consumers as technology scales down. For these
reasons, the design of future very large scale integration (VLSI) systems
is moving from being computation-centric to communication-centric.
On the other hand, the integrity of a chip’s physical parameters, especially
power and thermal integrity, is crucial for reliable VLSI systems. However,
guaranteeing this integrity is becoming increasingly difficult with
the higher scale of integration due to increased power density and operating
frequencies that result in continuously increasing temperature
and voltage drops in the chip. This is a challenge that may prevent
further shrinking of devices. Thus, tackling the challenge of power
and thermal integrity of future many-core systems at only one level
of abstraction, the chip and package design for example, is no longer
sufficient to ensure the integrity of physical parameters. New design-time
and run-time strategies may need to work together at different
levels of abstraction, such as package, application, and network, to provide
the required physical parameter integrity for these large systems. This
necessitates strategies that work at the level of the on-chip network
with its rising power budget.
This thesis proposes models, techniques and architectures to improve
power and thermal integrity of Network-on-Chip (NoC)-based
many-core systems. The thesis is composed of two major parts: i)
minimization and modelling of power supply variations to improve
power integrity; and ii) dynamic thermal adaptation to improve thermal
integrity. This thesis makes four major contributions. The first is
a computational model of on-chip power supply variations in NoCs.
The proposed model embeds a power delivery model, an NoC activity
simulator and a power model. The model is verified with SPICE simulation
and employed to analyse power supply variations in synthetic
and real NoC workloads. Novel observations regarding power supply
noise correlation with different traffic patterns and routing algorithms
are found. The second is a new application mapping strategy aiming
to minimize power supply noise in NoCs. This is achieved by defining
a new metric, switching activity density, and employing a force-based
objective function that results in minimizing switching density. Significant
reductions in power supply noise (PSN) are achieved with a low
energy penalty. This reduction in PSN also results in a better link timing
accuracy. The third contribution is a new dynamic thermal-adaptive
routing strategy to effectively diffuse heat from the NoC-based three-dimensional
(3D) CMPs, using a dynamic programming (DP)-based distributed
control architecture. Moreover, a new approach for efficient extension
of two-dimensional (2D) partially-adaptive routing algorithms
to 3D is presented. This approach improves three-dimensional network-on-chip
(3D NoC) routing adaptivity while ensuring deadlock-freeness.
Finally, the proposed thermal-adaptive routing is implemented on a
field-programmable gate array (FPGA), and implementation challenges
for both thermal sensing and the dynamic control architecture are addressed.
The proposed routing implementation is evaluated in terms
of both functionality and performance.
The methodologies and architectures proposed in this thesis open a
new direction for improving the power and thermal integrity of future
NoC-based 2D and 3D many-core architectures.
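The dynamic-programming view of thermal-adaptive routing can be sketched as each router iteratively relaxing its cost-to-destination using its neighbours' temperatures. This illustrates only the DP relaxation; the thesis's distributed control architecture and deadlock-free turn restrictions are omitted here:

```python
def dp_route_costs(temps, dest):
    """Compute, for every router on a W x H mesh NoC, the minimum
    accumulated-temperature cost of reaching dest, by value iteration:
    the cost of forwarding through a neighbour is that neighbour's
    temperature plus the neighbour's own cost-to-destination."""
    H, W = len(temps), len(temps[0])
    INF = float("inf")
    cost = [[INF] * W for _ in range(H)]
    cost[dest[0]][dest[1]] = 0.0
    changed = True
    while changed:                      # relax until a fixpoint is reached
        changed = False
        for y in range(H):
            for x in range(W):
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        c = temps[ny][nx] + cost[ny][nx]
                        if c < cost[y][x] - 1e-12:
                            cost[y][x] = c
                            changed = True
    return cost

# Illustrative 2 x 2 mesh: traffic avoids the hot router at (0, 1)
temps = [[1.0, 9.0],
         [1.0, 1.0]]
cost = dp_route_costs(temps, dest=(0, 0))
```

Packets following the gradient of `cost` route around hot regions, which is the heat-diffusing behaviour the thesis seeks, here without any adaptivity or deadlock considerations.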
Degradation in FPGAs: Monitoring, Modeling and Mitigation
This dissertation targets transistor aging degradation, as well as the associated thermal challenges in FPGAs (since there is an exponential relation between aging and chip temperature). The main objectives are to perform experimentation, analysis, and device-level model abstraction for modelling degradation in FPGAs; to monitor the FPGA to keep track of aging rates; and ultimately to propose an aging-aware FPGA design flow that mitigates the aging.
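The exponential aging-temperature relation mentioned above is commonly captured by an Arrhenius-type acceleration factor. A sketch under that assumed model, with an illustrative activation energy not taken from the dissertation:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def aging_acceleration(t_celsius, t_ref_celsius=25.0, e_a=0.5):
    """Arrhenius-type acceleration factor: how much faster a device ages
    at temperature t relative to a reference temperature. The activation
    energy e_a (in eV) is illustrative, not a value from the dissertation."""
    t = t_celsius + 273.15
    t_ref = t_ref_celsius + 273.15
    return math.exp((e_a / K_B) * (1.0 / t_ref - 1.0 / t))

# A region running at 85 C ages over an order of magnitude faster than at 25 C
factor = aging_acceleration(85.0)
```

This is why on-chip hotspot monitoring matters for aging: even modest temperature differences across the FPGA fabric translate into large differences in degradation rate.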
Toward cascading failure mitigation in high voltage power system capacitors
As electrical power networks adapt to new challenges, advances in high voltage direct current interconnection offer one means to reinforce alternating current networks with flexibility and control, accordingly improving diversity to become a present-day, viable alternative to network flexibility and energy storage measures. High voltage capacitors support these links and offer simple means of voltage support, harmonic filtering, and are inherent to established and emerging converter designs.
While the research literature predominantly explores the use of modern dielectrics in efforts toward improved capacitor technologies, it reveals little about existing capacitor designs, associated failure modes or statistics, or avenues for monitoring and maintenance. Simulation modelling equips engineers with an approach to pre-emptively anticipate probable incipient fault locations, improving designs for systems yet to be commissioned.
This Dissertation presents a high-voltage capacitor simulation model, before exploring two questions about these hermetically sealed, highly modular assets: where are incipient faults most likely to arise; and how can internal faults be externally located? Nonlinear voltage distributions are found within each and among connected units, induced through parasitic effects with housings supported at rack potential. Consequent implications are considered on: stresses within unit dielectrics, susceptibility to cascading failure, and an ability to locate internal faults.
Corroboration of fault detection and location is additionally found possible using unit housing temperatures. A model is presented, developed to be scalable, configurable, and extensible, and made available for posterity. Opportunities in asset design, modelling, manufacture, and monitoring are proffered toward improvements not only in operational longevity, but in understanding and early awareness of incipient faults as they develop.
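The nonlinear voltage distribution induced by parasitic capacitances to the housing can be illustrated with a simple capacitive ladder, taking the housing/rack as the ground reference; all component values below are illustrative, not from the dissertation:

```python
def node_voltages(n_units, c_series, c_stray, v_top, iters=20000):
    """Solve the capacitive ladder for a string of n_units equal series
    capacitances, with a stray capacitance from every internal junction
    to the grounded housing, by Gauss-Seidel relaxation of the nodal
    charge balance. Component values are illustrative."""
    # v[0] = 0 (grounded end), v[n_units] = v_top (line end)
    v = [v_top * k / n_units for k in range(n_units + 1)]
    for _ in range(iters):
        for k in range(1, n_units):
            # charge balance at junction k:
            # c(v[k-1]-v[k]) + c(v[k+1]-v[k]) - c_stray*v[k] = 0
            v[k] = c_series * (v[k - 1] + v[k + 1]) / (2 * c_series + c_stray)
    return v

v = node_voltages(6, c_series=1.0, c_stray=0.2, v_top=100.0)
steps = [v[k + 1] - v[k] for k in range(6)]
# the stray capacitance skews stress toward the high-voltage end of the string
```

Without stray capacitance the steps would be equal; with it, the unit nearest the high-voltage end bears the largest share, which is consistent with the abstract's observation of nonlinear voltage distributions within and among connected units.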