When Do WOM Codes Improve the Erasure Factor in Flash Memories?
Flash memory is a write-once medium in which reprogramming cells requires
first erasing the block that contains them. The lifetime of the flash is a
function of the number of block erasures and can be as small as several
thousand. To reduce the number of block erasures, pages, which are the
smallest write unit, are rewritten out-of-place in the memory. A write-once
memory (WOM) code is a coding scheme that enables writing to a block multiple
times before an erasure. However, these codes come with a significant rate
loss. For example, the rate for writing twice (with the same rate on each
write) is at most 0.77.
In this paper, we study WOM codes and the tradeoff between their rate loss
and the reduction in the number of block erasures when pages are written
uniformly at random. First, we introduce a new measure, called the erasure
factor, which reflects both the number of block erasures and the amount of
data that can be written on each block. A key point in our analysis is that
this tradeoff depends upon the specific implementation of WOM codes in the
memory. We consider two systems that use WOM codes: a conventional scheme
that was commonly used, and a recent design that preserves the overall
storage capacity. While the first system can improve the erasure factor only
when the storage rate is at most 0.6442, we show that the second scheme
always improves this figure of merit.
Comment: to be presented at ISIT 201
The Potential of Servicizing as a Green Business Model
It has been argued that servicizing business models, under which a firm sells the use of a product rather than the product itself, are environmentally beneficial. The main arguments are: first, under servicizing the firm charges customers based on product usage; second, the quantity of products required to meet customer needs may be smaller because the firm may be able to pool customer needs; and third, the firm may have an incentive to offer products with higher efficiency. Motivated by these arguments, we investigate the economic and environmental potential of servicizing business models. We endogenize the firm's choice between a pure sales model, a pure servicizing model, and a hybrid model with both sales and servicizing options, as well as the pricing decisions and the resulting customer usage. We consider two extremes of pooling efficacy, viz., no versus strong pooling. We find that under no pooling, servicizing leads to a higher environmental impact due to production but a lower environmental impact due to use. In contrast, under strong pooling, when a hybrid business model is more profitable, it is also environmentally superior. However, a pure servicizing model is environmentally inferior when production costs are high, as it leads to a larger production quantity even under strong pooling. We also examine the product efficiency choice and find that the firm offers higher-efficiency products only under servicizing models with strong pooling.
Stochastic Modeling of Hybrid Cache Systems
In recent years, there has been an increasing demand for big-memory systems
to perform large-scale data analytics. Since DRAM is expensive, some
researchers have suggested using other memory technologies, such as
non-volatile memory (NVM), to build large-memory computing systems. However,
whether NVM technology can be a viable alternative (both economically and
technically) to DRAM remains an open question. To answer this question, it is
important to consider how to design a memory system from a "system
perspective", that is, one incorporating the different performance
characteristics and price ratios of hybrid memory devices.
This paper presents an analytical model of a "hybrid page cache system" to
capture the diverse design space and performance impact of a hybrid cache
system. We consider (1) various architectural choices, (2) design strategies,
and (3) configurations of different memory devices. Using this model, we
provide guidelines on how to design a hybrid page cache that reaches a good
trade-off between high system throughput (in I/Os per second, or IOPS) and
fast cache reactivity, which is defined by the time to fill the cache. We
also show how one can configure the DRAM and NVM capacities under a fixed
budget. We pick PCM as an example NVM and conduct a numerical analysis. Our
analysis indicates that incorporating PCM in a page cache system
significantly improves system performance, and that in some cases it is more
beneficial to allocate more PCM to the page cache. Moreover, for the common
performance-price ratio of PCM, a "flat architecture" is the better choice,
but a "layered architecture" outperforms it if PCM write performance can be
significantly improved in the future.
Comment: 14 pages; MASCOTS 201
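As a toy illustration of the budget-split question (with assumed prices, latencies, and a simple c/(c+K) hit-ratio model, none of which come from the paper's analytical model), one can sweep the DRAM fraction of a fixed budget and pick the split that minimizes average access latency:

```python
# Toy DRAM/PCM budget-split sweep (illustrative numbers, not the paper's
# model): hit ratio of a cache of capacity c is modeled as c / (c + K),
# and misses fall through DRAM -> PCM -> disk.
PRICE = {"dram": 8.0, "pcm": 2.0}                   # $/GB, assumed
LAT = {"dram": 0.1, "pcm": 1.0, "disk": 100.0}      # microseconds, assumed
K = 200.0                                           # working-set scale, assumed

def avg_latency(budget, dram_frac):
    """Average access latency for a given fraction of budget spent on DRAM."""
    dram_gb = budget * dram_frac / PRICE["dram"]
    pcm_gb = budget * (1 - dram_frac) / PRICE["pcm"]
    h_dram = dram_gb / (dram_gb + K)
    h_pcm = pcm_gb / (pcm_gb + K)   # share of DRAM misses caught by PCM
    return (h_dram * LAT["dram"]
            + (1 - h_dram) * h_pcm * LAT["pcm"]
            + (1 - h_dram) * (1 - h_pcm) * LAT["disk"])

def best_split(budget, steps=100):
    """Grid-search the DRAM fraction; returns (latency, fraction)."""
    return min((avg_latency(budget, f / steps), f / steps)
               for f in range(steps + 1))
```

Even this crude sketch shows the qualitative effect in the abstract: because PCM is cheaper per gigabyte, spending part of the budget on PCM can beat an all-DRAM configuration when disk misses dominate.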
A storage and access architecture for efficient query processing in spatial database systems
Due to the high complexity of objects and queries, as well as the extremely
large data volumes, geographic database systems impose stringent requirements
on their storage and access architecture with respect to efficient query
processing. Performance-improving concepts such as spatial storage and access
structures, approximations, object decompositions, and multi-phase query
processing have been suggested and analyzed as single building blocks. In
this paper, we describe a storage and access architecture composed of the
above building blocks in a modular fashion. Additionally, we incorporate into
our architecture a new ingredient, the scene organization, for efficiently
supporting set-oriented access in large-area region queries. An experimental
performance comparison demonstrates that the concept of scene organization
leads to considerable performance improvements for large-area region queries,
by a factor of up to 150.
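The multi-phase query processing mentioned above typically follows a filter-and-refine pattern: a cheap test on bounding-box approximations prunes candidates, and only the survivors undergo the exact geometry test. A generic Python sketch (hypothetical helpers, not the paper's implementation):

```python
# Generic filter-and-refine region query: a cheap bounding-box test
# prunes candidates before the expensive exact geometry test runs.
def mbr_intersects(a, b):
    """Axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def region_query(objects, query_box, exact_test):
    # Filter step: keep only objects whose MBR overlaps the query box.
    candidates = [o for o in objects if mbr_intersects(o["mbr"], query_box)]
    # Refinement step: run the exact (expensive) geometry predicate.
    return [o for o in candidates if exact_test(o, query_box)]
```

In a real system the filter step would be served by a spatial index (e.g., an R-tree) rather than a linear scan; the scene organization proposed in the paper additionally clusters spatially adjacent objects so that large-area queries read them set-oriented rather than one at a time.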
Synthesis and Stochastic Assessment of Cost-Optimal Schedules
We present a novel approach to synthesizing good schedules for a class of
scheduling problems that is slightly more general than the scheduling
problem FJm,a|gpr,r_j,d_j|early/tardy. The idea is to prime the schedule
synthesizer with stochastic information more meaningful than performance
factors, with the objective of minimizing the expected cost caused by
storage or delay. The priming information is obtained by stochastic
simulation of the system environment, and the generated schedules are
assessed again by simulation. The approach is demonstrated by means of a
non-trivial scheduling problem from lacquer production. The experimental
results show that our approach achieves better results than the extended
processing times approach in all considered scenarios.
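The assessment-by-simulation step can be illustrated with a toy Monte Carlo estimate: run a fixed job order many times under random processing-time disturbances (an assumed uniform ±20% disturbance, not the paper's environment model) and average the earliness/tardiness cost:

```python
import random

# Toy Monte Carlo assessment of a single-machine schedule: jobs run in
# the given order, processing times are disturbed randomly, and the cost
# is storage (earliness) plus delay (tardiness) against each due date.
def expected_cost(jobs, runs=10_000, seed=0):
    """jobs: list of (mean_proc_time, due_date, early_weight, tardy_weight)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        t, cost = 0.0, 0.0
        for mean_p, due, ew, tw in jobs:
            t += rng.uniform(0.8 * mean_p, 1.2 * mean_p)  # assumed disturbance
            cost += ew * max(due - t, 0.0) + tw * max(t - due, 0.0)
        total += cost
    return total / runs
```

A schedule synthesizer primed with such estimates can then prefer orderings whose *expected* cost is low, rather than orderings that look good only for nominal processing times.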
Optimal web-scale tiering as a flow problem
We present a fast online solver for large-scale parametric max-flow problems as they occur in portfolio optimization, inventory management, computer vision, and logistics. Our algorithm solves an integer linear program in an online fashion. It exploits the total unimodularity of the constraint matrix and a Lagrangian relaxation to solve the problem as a convex online game. The algorithm generates approximate solutions of max-flow problems by performing stochastic gradient descent on a set of flows. We apply the algorithm to optimize the tier arrangement of over 84 million web pages on a layered set of caches so as to serve an incoming query stream optimally.
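The Lagrangian-relaxation idea can be sketched at toy scale (invented values and capacity; the actual solver works on the full parametric max-flow structure): price the tier-capacity constraint with a multiplier lambda and adjust it by stochastic subgradient steps as page values are sampled.

```python
import random

# Toy Lagrangian relaxation for cache tiering: place page i in the fast
# tier iff value[i] > lam, where lam prices the capacity constraint
# sum(x_i) <= cap and is driven toward the dual optimum by stochastic
# subgradient steps (each step looks at one sampled page only).
def tier_online(values, cap, rounds=200, step=0.05, seed=0):
    rng = random.Random(seed)
    lam = 0.0
    n = len(values)
    for _ in range(rounds):
        v = rng.choice(values)           # sample one page per step
        # One-sample estimate of the tier load sum(x_i): n * 1[v > lam].
        load = n if v > lam else 0
        # Dual subgradient in lam is (cap - load); project lam onto >= 0.
        lam = max(0.0, lam - step * (cap - load) / n)
    placement = [v > lam for v in values]
    return lam, placement
```

At the fixed point, the fraction of pages priced into the fast tier matches cap / n in expectation; the real solver additionally handles the integer structure exactly thanks to total unimodularity.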