Neverlast: Towards the Design and Implementation of the NVM-based Everlasting Operating System
Novel non-volatile memory (NVM) technologies allow for the efficient implementation of "intermittently-powered" smart dust and edge computing systems in a previously unfamiliar way. Operating under harsh environmental conditions where power-supply failures occur frequently requires adjustments to all parts of the system. This leads to an inevitable trade-off in operating-system design: the overhead of persisting computation progress across power failures reduces the amount of progress possible within the available energy budget. It is therefore crucial to minimize the overhead of ensuring persistence. This paper makes the case that persistence should be provided as an operating-system service to achieve everlasting operating capabilities. Triggered by power-failure interrupts, an implicit persistence service for the processor status of a process preserves progress at the CPU-instruction level. The interrupt fires only when necessary, so no power-state polling is needed. We outline architectures for everlasting systems and discuss their benefits and drawbacks compared to existing approaches. The operating system thereby provides persistence as a run-time service to the application, with minimal overhead. Our approach separates the application from energy-supply state estimation, as well as from the state-preserving logic for software and hardware components.
Rapid Recovery of Program Execution Under Power Failures for Embedded Systems with NVM
After power is restored, restarting the interrupted program from its initial
state can have a serious negative impact, and some programs are not
recoverable at all. To enable rapid recovery of program execution after power
failures, the execution state at checkpoints is backed up to NVM in embedded
systems equipped with NVM. However, frequent checkpoints shorten the lifetime
of the NVM and incur significant write overhead. In this paper, a technique
that triggers checkpoints on function calls is proposed to reduce writes to
NVM. The evaluation results show an average reduction in NVM backup size for
stack backup of 99.8% and 80.5%, compared to the log-based and step-based
methods respectively. To improve on this further, we also propose
pseudo-function calls, which add backup points to reduce recovery cost, and
an exponential incremental call-based backup method to reduce backup cost
inside loops. To keep the NVM contents from becoming cluttered and exhausting
NVM capacity, a method for cleaning NVM contents that are useless for
restoration is proposed. Building on these problems and techniques, a
recovery technique is proposed, and a case study analyzes how to recover
rapidly under different power failures.
Comment: This paper has been accepted for publication in Microprocessors and
Microsystems, March 15, 202
Energy Saving Techniques for Phase Change Memory (PCM)
In recent years, the energy consumption of computing systems has increased,
and a large fraction of this energy is consumed in main memory. In response,
researchers have proposed the use of non-volatile memory, such as phase
change memory (PCM), which has low read latency and power, and nearly zero
leakage power. However, the write latency and power of PCM are very high,
and this, along with PCM's limited write endurance, presents significant
challenges to its widespread adoption. To address this, several
architecture-level techniques have been proposed. In this report, we review
several techniques for managing the power consumption of PCM. We also
classify these techniques based on their characteristics to provide insight
into them. The aim of this work is to encourage researchers to propose even
better techniques for improving the energy efficiency of PCM-based main
memory.
Comment: Survey, phase change RAM (PCRAM)
Methodologies for Designing Power-Aware Smart Card Systems
Smart cards are among the smallest computing platforms in use today. They
have limited resources but a huge number of functional requirements. The
demand for multi-application cards raises the requirements for performance
and security even further, whereas the limits imposed by size and energy
consumption remain constant.

We describe new methodologies for designing and implementing entire systems
with regard to power awareness and required performance. To exploit this
power-saving potential, the higher layers of the system, the operating-system
layer and the application-domain layer, must also be designed together with
the rest of the system.

HW/SW co-design methodologies enable system-level optimization. The first
part presents the abstraction of smart cards used to optimize the system
architecture and memory system. Both functional and transaction-level models
are presented and discussed, and the proposed design flow and preliminary
evaluation results are depicted.

Another central part of this methodology is a cycle-accurate instruction-set
simulator for secure software development. The underlying energy model is
designed to decouple instruction-dependent and data-dependent energy
dissipation, which leads to an independent characterization process and
allows stepwise model refinement to increase estimation accuracy. The model
has been evaluated for a high-performance smart card CPU, and a use case for
secure software is given.
The "MIND" Scalable PIM Architecture
MIND (Memory, Intelligence, and Network Device) is an advanced parallel computer architecture for high performance computing and scalable embedded processing. It is a
Processor-in-Memory (PIM) architecture integrating both DRAM bit cells and CMOS logic devices on the same silicon die. MIND is multicore with multiple memory/processor nodes on
each chip and supports global shared memory across systems of MIND components. MIND is distinguished from other PIM architectures in that it incorporates mechanisms for efficient support of a global parallel execution model based on the semantics of message-driven multithreaded split-transaction processing. MIND is designed to operate either in conjunction with other conventional microprocessors or in standalone arrays of like devices. It also incorporates mechanisms for fault tolerance, real time execution, and active power management. This paper describes the major elements and operational methods of the MIND
architecture.
ETAP: Energy-aware Timing Analysis of Intermittent Programs
Energy harvesting battery-free embedded devices rely only on ambient energy
harvesting that enables stand-alone and sustainable IoT applications. These
devices execute programs when the harvested ambient energy in their energy
reservoir is sufficient to operate and stop execution abruptly (and start
charging) otherwise. These intermittent programs have varying timing behavior
under different energy conditions, hardware configurations, and program
structures. This paper presents Energy-aware Timing Analysis of intermittent
Programs (ETAP), a probabilistic symbolic execution approach that analyzes the
timing and energy behavior of intermittent programs at compile time. ETAP
symbolically executes the given program while taking time and energy cost
models for ambient energy and dynamic energy consumption into account. We
evaluated ETAP on several intermittent programs and compared the compile-time
analysis results with executions on real hardware. The results show that ETAP's
normalized prediction accuracy is 99.5%, and it speeds up the timing analysis
by at least two orders of magnitude compared to manual testing.
Comment: Corrected typos in the previous submission
ETAP: Energy-Aware Timing Analysis of Intermittent Programs
Energy harvesting battery-free embedded devices rely only on ambient energy harvesting that enables stand-alone and sustainable IoT applications. These devices execute programs when the harvested ambient energy in their energy reservoir is sufficient to operate and stop execution abruptly (and start charging) otherwise. These intermittent programs have varying timing behavior under different energy conditions, hardware configurations, and program structures. This article presents Energy-aware Timing Analysis of intermittent Programs (ETAP), a probabilistic symbolic execution approach that analyzes the timing and energy behavior of intermittent programs at compile time. ETAP symbolically executes the given program while taking time and energy cost models for ambient energy and dynamic energy consumption into account. We evaluate ETAP by comparing the compile-time analysis results of our benchmark codes and a real-world application with the results of their executions on real hardware. Our evaluation shows that ETAP's prediction error rate is between 0.0076% and 10.8%, and it speeds up the timing analysis by at least two orders of magnitude compared to manual testing.