Time-Space Constrained Codes for Phase-Change Memories
Phase-change memory (PCM) is a promising non-volatile solid-state memory
technology. A PCM cell stores data by using its amorphous and crystalline
states. The cell changes between these two states using high temperature.
However, since the cells are sensitive to high temperature, it is important,
when programming cells, to balance the heat both in time and space.
In this paper, we study the time-space constraint for PCM, which was
originally proposed by Jiang et al. A code is called an
$(\alpha, \beta, p)$\emph{-constrained code} if for any $\alpha$ consecutive
rewrites and for any segment of $\beta$ contiguous cells, the total rewrite
cost of the cells over those rewrites is at most $p$. Here,
the cells are binary and the rewrite cost is defined to be the Hamming distance
between the current and next memory states. First, we show a general upper
bound on the achievable rate of these codes which extends the results of Jiang
et al. Then, we generalize their construction for $(\alpha \geq 1, \beta = 1,
p = 1)$-constrained codes and show another construction for $(\alpha = 1,
\beta \geq 1, p \geq 1)$-constrained codes. Finally, we show that these two
constructions can be used to construct codes for all values of $\alpha$,
$\beta$, and $p$.
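The constraint above can be checked directly. The sketch below is a minimal illustration, assuming binary states are given as lists of 0/1 values and using the $(\alpha, \beta, p)$ parameters and Hamming-distance rewrite cost as defined in the abstract; the function names are illustrative, not from the paper.

```python
def rewrite_cost(state, next_state):
    """Rewrite cost of one update: Hamming distance between
    the current and next binary memory states."""
    return sum(a != b for a, b in zip(state, next_state))

def satisfies_constraint(states, alpha, beta, p):
    """Check the time-space constraint on a sequence of states.

    `states` is the list of memory states visited; consecutive pairs
    are rewrites. For every window of `alpha` consecutive rewrites
    and every segment of `beta` contiguous cells, the total rewrite
    cost restricted to that segment must be at most `p`.
    """
    n = len(states[0])
    num_rewrites = len(states) - 1
    for t in range(num_rewrites - alpha + 1):
        for s in range(n - beta + 1):
            cost = sum(
                rewrite_cost(states[t + i][s:s + beta],
                             states[t + i + 1][s:s + beta])
                for i in range(alpha)
            )
            if cost > p:
                return False
    return True
```

For example, the rewrite sequence `[0,0,0,0] -> [0,1,0,0] -> [0,1,1,0]` flips two adjacent cells in two consecutive rewrites, so it violates the constraint for $\alpha = 2$, $\beta = 2$, $p = 1$ but satisfies it for $p = 2$.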
Developing numerical libraries in Java
The rapid and widespread adoption of Java has created a demand for reliable
and reusable mathematical software components to support the growing number of
compute-intensive applications now under development, particularly in science
and engineering. In this paper we address practical issues of the Java language
and environment which have an effect on numerical library design and
development. Benchmarks which illustrate the current levels of performance of
key numerical kernels on a variety of Java platforms are presented. Finally, a
strategy for the development of a fundamental numerical toolkit for Java is
proposed and its current status is described.

Comment: 11 pages. Revised version of paper presented to the 1998 ACM
Conference on Java for High Performance Network Computing. To appear in
Concurrency: Practice and Experience.
Optimal uncertainty quantification for legacy data observations of Lipschitz functions
We consider the problem of providing optimal uncertainty quantification (UQ)
--- and hence rigorous certification --- for partially-observed functions. We
present a UQ framework within which the observations may be small or large in
number, and need not carry information about the probability distribution of
the system in operation. The UQ objectives are posed as optimization problems,
the solutions of which are optimal bounds on the quantities of interest; we
consider two typical settings, namely parameter sensitivities (McDiarmid
diameters) and output deviation (or failure) probabilities. The solutions of
these optimization problems depend non-trivially (even non-monotonically and
discontinuously) upon the specified legacy data. Furthermore, the extreme
values are often determined by only a few members of the data set; in our
principal physically-motivated example, the bounds are determined by just 2 out
of 32 data points, and the remainder carry no information and could be
neglected without changing the final answer. We propose an analogue of the
simplex algorithm from linear programming that uses these observations to offer
efficient and rigorous UQ for high-dimensional systems with high-cardinality
legacy data. These findings suggest natural methods for selecting optimal
(maximally informative) next experiments.

Comment: 38 pages.
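The McDiarmid diameter mentioned above measures parameter sensitivity via componentwise oscillations. The brute-force sketch below illustrates the quantity only, assuming the function is evaluated on finite product grids; it is not the paper's optimization algorithm, and all names are illustrative.

```python
import itertools
import math

def mcdiarmid_subdiameters(f, grids):
    """j-th subdiameter: the worst-case oscillation of f as
    coordinate j varies over its grid, with all other coordinates
    held fixed (maximized over those fixed values)."""
    subdiams = []
    for j, grid_j in enumerate(grids):
        others = [g for k, g in enumerate(grids) if k != j]
        worst = 0.0
        for fixed in itertools.product(*others):
            vals = []
            for xj in grid_j:
                # Reinsert the varying coordinate at position j.
                point = list(fixed[:j]) + [xj] + list(fixed[j:])
                vals.append(f(point))
            worst = max(worst, max(vals) - min(vals))
        subdiams.append(worst)
    return subdiams

def mcdiarmid_diameter(f, grids):
    """Euclidean norm of the vector of subdiameters."""
    return math.sqrt(sum(d * d for d in mcdiarmid_subdiameters(f, grids)))
```

For $f(x, y) = x + 2y$ on $\{0,1\}^2$ the subdiameters are $1$ and $2$, giving diameter $\sqrt{5}$; McDiarmid's inequality then bounds deviation probabilities for independent inputs in terms of this diameter.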