A Survey of Techniques For Improving Energy Efficiency in Embedded Computing Systems
Recent technological advances have greatly improved the performance and
features of embedded systems. With the number of mobile devices alone now
approaching the population of Earth, embedded systems have truly become
ubiquitous. These trends, however, have also made the task of managing their
power consumption extremely challenging. In recent years, several techniques
have been proposed to address this issue. In this paper, we survey the
techniques for managing the power consumption of embedded systems. We discuss
the need for power management and classify the techniques along several
important parameters to highlight their similarities and differences. This
paper is intended to help researchers and application developers gain insight
into the working of power management techniques and design even more efficient
high-performance embedded systems of tomorrow.
Combined Time and Information Redundancy for SEU-Tolerance in Energy-Efficient Real-Time Systems
Recent works have highlighted the trade-off between energy consumption and fault tolerance in real-time systems. These works have focused on dynamic voltage scaling (DVS) to reduce dynamic energy dissipation and on time redundancy to achieve transient-fault tolerance. While the time redundancy technique exploits the available slack time to increase fault tolerance by performing recovery executions, DVS exploits slack time to save energy. Therefore, we believe there is a resource conflict between the time-redundancy technique and DVS. The first aim of this paper is to propose the use of information redundancy to solve this problem. We demonstrate through analytical and experimental studies that it is possible to achieve both higher transient-fault tolerance (tolerance to single event upsets (SEUs)) and lower energy consumption using a combination of information and time redundancy than using time redundancy alone. The second aim of this paper is to analyze the interplay of transient-fault tolerance (SEU tolerance) and adaptive body biasing (ABB), which is used to reduce static leakage energy and has not been addressed in previous studies. We show that the same technique (i.e., the combination of time and information redundancy) is applicable to ABB-enabled systems and provides more advantages than time redundancy alone.
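The resource conflict the abstract describes can be made concrete with a toy energy model. The sketch below (all constants and the power model are illustrative assumptions, not taken from the paper) contrasts spending slack on a recovery execution at full speed against letting an error-correcting code absorb the SEU in place so the slack can be spent on DVS. Dynamic energy is modelled with the classic CMOS assumption E ~ V² · cycles with V ~ f, so halving speed quarters the energy per cycle:

```python
def dvs_energy(cycles, speed, c=1.0):
    """Dynamic energy at a normalized speed in (0, 1]; V assumed ~ speed."""
    return c * speed ** 2 * cycles

def time_redundancy(wcet, deadline):
    # Slack is reserved for one full-speed recovery execution, so the
    # primary run cannot be slowed down at all.
    assert 2 * wcet <= deadline, "no room for a recovery execution"
    return dvs_energy(wcet, speed=1.0)

def info_redundancy_with_dvs(wcet, deadline, code_overhead=0.1):
    # An error-correcting code adds ~10% cycles (assumed), but because
    # faults are handled in place, all slack can go to voltage scaling.
    cycles = wcet * (1 + code_overhead)
    speed = cycles / deadline          # slowest speed meeting the deadline
    return dvs_energy(cycles, speed)

e_time = time_redundancy(wcet=100, deadline=250)
e_info = info_redundancy_with_dvs(wcet=100, deadline=250)
print(f"time redundancy: {e_time:.1f}, info redundancy + DVS: {e_info:.1f}")
```

Under these assumed numbers the combined scheme wins by a wide margin, because the quadratic speed term dominates the modest coding overhead.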
Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability
Internet-of-Things (IoT) envisions an intelligent infrastructure of networked
smart devices offering task-specific monitoring and control services. The
unique features of IoT include extreme heterogeneity, a massive number of
devices, and unpredictable dynamics partially due to human interaction. These
call for foundational innovations in network design and management. Ideally, it
should allow efficient adaptation to changing environments, and low-cost
implementation scalable to a massive number of devices, subject to stringent
latency constraints. To this end, the overarching goal of this paper is to
outline a unified framework for online learning and management policies in IoT
through joint advances in communication, networking, learning, and
optimization. From the network architecture vantage point, the unified
framework leverages a promising fog architecture that enables smart devices to
have proximity access to cloud functionalities at the network edge, along the
cloud-to-things continuum. From the algorithmic perspective, key innovations
target online approaches adaptive to different degrees of nonstationarity in
IoT dynamics, and their scalable model-free implementation under limited
feedback that motivates blind or bandit approaches. The proposed framework
aspires to offer a stepping stone that leads to systematic designs and analysis
of task-specific learning and management schemes for IoT, along with a host of
new research directions to build on.
Comment: Submitted on June 15 to the Proceedings of the IEEE Special Issue on Adaptive and Scalable Communication Networks
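The "limited feedback that motivates blind or bandit approaches" can be illustrated with a minimal epsilon-greedy bandit, a standard algorithm for exactly this setting (the specific arm-reward means and parameters below are invented for illustration, not from the paper): an IoT controller picks one of K configurations per slot and observes only the reward of the arm it pulled.

```python
import random

def epsilon_greedy(means, rounds=5000, eps=0.1, seed=0):
    """Epsilon-greedy arm selection under bandit (single-arm) feedback."""
    rng = random.Random(seed)
    k = len(means)
    counts, estimates = [0] * k, [0.0] * k
    total = 0.0
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.randrange(k)                           # explore
        else:
            arm = max(range(k), key=lambda a: estimates[a])  # exploit
        reward = means[arm] + rng.gauss(0, 0.1)  # only this arm is observed
        counts[arm] += 1
        # Incremental running-mean update of the pulled arm's estimate.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / rounds, counts

avg, counts = epsilon_greedy([0.2, 0.5, 0.8])
print(f"average reward {avg:.2f}; best arm pulled {counts[2]} times")
```

Despite never seeing the full reward vector, the controller concentrates its pulls on the best configuration; the paper's framework targets more adaptive variants of this idea for nonstationary IoT dynamics.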
Improved Handover Through Dual Connectivity in 5G mmWave Mobile Networks
The millimeter wave (mmWave) bands offer the possibility of orders of
magnitude greater throughput for fifth generation (5G) cellular systems.
However, since mmWave signals are highly susceptible to blockage, channel
quality on any one mmWave link can be extremely intermittent. This paper
implements a novel dual connectivity protocol that enables mobile user
equipment (UE) devices to maintain physical layer connections to 4G and 5G
cells simultaneously. A novel uplink control signaling system combined with a
local coordinator enables rapid path switching in the event of failures on any
one link. This paper provides the first comprehensive end-to-end evaluation of
handover mechanisms in mmWave cellular systems. The simulation framework
includes detailed measurement-based channel models to realistically capture
spatial dynamics of blocking events, as well as the full details of MAC, RLC
and transport protocols. Compared to conventional handover mechanisms, the
study reveals significant benefits of the proposed method under several
metrics.
Comment: 16 pages, 13 figures, to appear in the 2017 IEEE JSAC Special Issue on Millimeter Wave Communications for Future Mobile Networks
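The core of the dual-connectivity idea can be sketched in a few lines: the UE keeps both a 5G mmWave link and a 4G fallback alive, and the coordinator switches the data path as soon as the mmWave SINR drops below a threshold (blockage), instead of tearing the connection down and re-establishing it. The channel trace and threshold below are invented for illustration; the paper's actual protocol involves uplink control signaling and measurement-based channel models.

```python
def route_traffic(mmwave_sinr_trace, threshold_db=-5.0):
    """Return the per-slot path choice: '5G' whenever mmWave is usable."""
    return ["5G" if sinr >= threshold_db else "4G"
            for sinr in mmwave_sinr_trace]

# A blockage event around slots 3-5 drives the mmWave SINR down sharply.
trace = [20.0, 18.0, 15.0, -12.0, -15.0, -8.0, 10.0, 19.0]
paths = route_traffic(trace)
print(paths)  # fast fallback to 4G during the blockage, then back to 5G
```

Because both physical-layer connections already exist, the switch costs one routing decision rather than a full handover, which is what makes the path switching "rapid" in the intermittent mmWave channel.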
Fog-supported delay-constrained energy-saving live migration of VMs over multiPath TCP/IP 5G connections
The incoming era of fifth-generation fog-computing-supported radio access networks (5G FOGRANs, for short) aims at exploiting computing/networking resource virtualization in order to augment the limited resources of wireless devices through the seamless live migration of virtual machines (VMs) toward nearby fog data centers. For this purpose, the bandwidths of the multiple wireless network interface cards of the wireless devices may be aggregated under the control of the emerging MultiPath TCP (MPTCP) protocol. However, due to fading and mobility-induced phenomena, the energy consumption of current state-of-the-art VM migration techniques may still offset their expected benefits. Motivated by these considerations, in this paper we analytically characterize, implement in software, and numerically test the optimal minimum-energy settable-complexity bandwidth manager (SCBM) for the live migration of VMs over 5G FOGRAN MPTCP connections. The key features of the proposed SCBM are that: 1) its implementation complexity can be set online on the basis of the target energy consumption versus implementation complexity tradeoff; 2) it minimizes the network energy consumed by the wireless device to sustain the migration process under hard constraints on the tolerated migration times and downtimes; and 3) by leveraging a suitably designed adaptive mechanism, it is capable of quickly reacting to (possibly unpredicted) fading- and/or mobility-induced abrupt changes of the wireless environment without requiring forecasting. The actual effectiveness of the proposed SCBM is supported by extensive energy-versus-delay performance comparisons that cover: 1) a number of heterogeneous 3G/4G/WiFi FOGRAN scenarios; 2) synthetic and real-world workloads; and 3) MPTCP and wireless connections.
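The energy-versus-deadline tradeoff at the heart of the SCBM can be illustrated with a toy rate-selection model (the quadratic power model P ~ r² and the constant k below are assumptions for illustration, not the paper's actual formulation): since transmit power grows superlinearly with rate, the energy to move V bits, E = P · V/r, is minimized by the slowest rate that still meets the hard migration-time constraint.

```python
def min_energy_rate(volume_bits, max_migration_time, k=1e-17):
    """Slowest feasible rate and resulting energy under P = k * r**2."""
    rate = volume_bits / max_migration_time  # slowest rate meeting deadline
    power = k * rate ** 2                    # assumed convex power model
    energy = power * (volume_bits / rate)    # energy = power * transfer time
    return rate, energy

# Migrating a 1 Gb VM image with a relaxed (10 s) vs a tight (2 s) deadline:
_, e_relaxed = min_energy_rate(1e9, max_migration_time=10.0)
_, e_tight = min_energy_rate(1e9, max_migration_time=2.0)
print(f"relaxed deadline: {e_relaxed:.2f} J, tight deadline: {e_tight:.2f} J")
```

Tightening the tolerated migration time from 10 s to 2 s quintuples the energy under this model, which is why the SCBM treats migration times and downtimes as hard constraints while minimizing energy.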
Using agent based simulation to empirically examine complexity in carbon footprint business process
Through a critical analysis of the extant literature, it is observed that simulation is widely used as a research method in the Natural Sciences, Engineering, and the Social Sciences, and is regarded, alongside argumentation and formalisation, as a third way of carrying out research. Simulation is not as widely used in Business and Management research as it ought to be, though this is changing for the better with the technological advances in computers and their computational power. These advances enhance the capability of theoretical research models to define a problem and to empirically examine its solution in simulated reality like never before. Searches of management journals for “Simulation and Complexity Theory” returned no results, which suggests that this combination is not popular in management research, though each is used individually more often. The major objective of this paper is to analyse some of the conceptual (or theoretical) and methodological (or empirical) contributions that Agent Based Simulation and Complexity Theory can make to the business and management community in their business-process-related research. In view of this, some basic ideas are discussed for using Agent Based Simulation as a method in Business and Management Studies research, and for how an Agent Based Model can be applied to a business process as complex as the Carbon Footprint. It is in this context that the use of Complexity as the base theory to empirically examine a business process is discussed. Throughout this article, our research on complex adaptive systems (e.g., an Accounting Information System) in continuously changing organisations managing complex business processes (e.g., the Carbon Footprint business process) is considered as the basis for illustrating some of the concepts. Finally, avenues for further management research using these tools and this methodology are suggested.
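A minimal agent-based sketch in the spirit of the paper's approach (agent counts, initial footprints, and the imitation rule are invented for illustration and are not the paper's model): firms are agents that each step move part of the way toward the greenest performer, a simple local-interaction rule of the kind complexity theory studies.

```python
import random

def simulate(n_agents=20, steps=50, imitation=0.2, seed=1):
    """Agents imitate the lowest carbon footprint among them each step."""
    rng = random.Random(seed)
    footprints = [rng.uniform(50, 150) for _ in range(n_agents)]
    for _ in range(steps):
        best = min(footprints)
        # Each agent closes a fraction of its gap to the best performer.
        footprints = [f - imitation * (f - best) for f in footprints]
    return footprints

before_mean = sum(simulate(steps=0)) / 20
after_mean = sum(simulate(steps=50)) / 20
print(f"mean footprint: {before_mean:.1f} -> {after_mean:.1f}")
```

Even this toy model shows the kind of emergent system-level behaviour (convergence of the population toward a best practice) that motivates using agent-based simulation, rather than closed-form analysis, to examine a carbon footprint business process.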