Computing Naturally in the Billiard Ball Model
Fredkin's Billiard Ball Model (BBM) is one of the fundamental models of
collision-based computing; it is based on elastic collisions between moving
billiard balls. In addition, fixed mirrors, or reflectors, are introduced into
the model to deflect balls and complete computations. However, the use of fixed
mirrors is "physically unrealistic": it makes the BBM imperfectly
momentum-conserving from a physical point of view, and it imposes an external
architecture onto the computing substrate, which is inconsistent with the
"architectureless" principle of collision-based computing. As an initial
attempt to reduce the number of mirrors in the BBM, we present a class of
gates, the m-counting gates, and show that certain circuits can be realized
with few mirrors using these gates. We envisage that our findings will be
useful in future research on collision-based computing in novel chemical and
optical computing substrates.
Comment: 10 pages, 7 figures
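The m-counting gate construction itself is not detailed in this abstract. As a minimal logical sketch of how collision-based gates compute, the classic two-input BBM interaction gate (a standard primitive of the model, not the paper's new gate) can be written as a truth table over ball trajectories:

```python
# Logical sketch of the classic BBM "interaction gate". Two balls enter on
# trajectories a and b. If both are present they collide elastically and
# leave on two deflected paths (each carrying the signal a AND b); a lone
# ball passes through undeflected. The output names here are illustrative.

def interaction_gate(a: bool, b: bool) -> dict:
    """Return the presence of a ball on each of the four output paths."""
    return {
        "a_and_b_upper": a and b,      # deflected path 1
        "a_and_b_lower": a and b,      # deflected path 2
        "a_and_not_b":   a and not b,  # a passes undeflected
        "b_and_not_a":   b and not a,  # b passes undeflected
    }

# The gate is conservative: the number of outgoing balls always equals the
# number of incoming balls, reflecting momentum/particle conservation.
for a in (False, True):
    for b in (False, True):
        out = interaction_gate(a, b)
        assert sum(out.values()) == int(a) + int(b)
```

Mirrors enter the model when the deflected outputs of one such gate must be routed to the inputs of another; reducing them is the problem the paper addresses.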
A serial-parallel architecture for two-dimensional discrete cosine and inverse discrete cosine transforms
The Discrete Cosine Transform (DCT) and Inverse Discrete Cosine Transform (IDCT) are widely used tools in many digital signal and image processing applications. The complexity of these algorithms often requires dedicated hardware support to satisfy the performance requirements of hard real-time applications. This paper presents the architecture of an efficient implementation of a two-dimensional DCT/IDCT transform processor as a serial-parallel systolic array that does not require transposition.
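The systolic-array architecture itself is hardware; as a software reference for what it computes, the separable 2-D DCT can be sketched as a matrix triple product, which folds the row and column passes into one expression with no explicit transposition stage between them (a simplified illustration, not the paper's processor design):

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]            # frequency index (rows)
    i = np.arange(n)[None, :]            # sample index (columns)
    c = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)            # scale the DC row for orthonormality
    return c * np.sqrt(2 / n)

def dct2(x: np.ndarray) -> np.ndarray:
    """2-D DCT as Y = C X C^T: column pass and row pass in one expression."""
    c = dct_matrix(x.shape[0])
    return c @ x @ c.T

def idct2(y: np.ndarray) -> np.ndarray:
    """Inverse 2-D DCT; since C is orthogonal, its inverse is its transpose."""
    c = dct_matrix(y.shape[0])
    return c.T @ y @ c
```

A hardware serial-parallel array evaluates the same triple product by streaming partial products, which is how the intermediate transpose is avoided.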
A parallel implementation of the 2-D discrete wavelet transform without interprocessor communications
The discrete wavelet transform is currently attracting much interest among researchers and practitioners as a powerful tool for a wide variety of digital signal and image processing applications. This article presents an efficient approach to computing the two-dimensional (2-D) discrete wavelet transform in standard form on parallel general-purpose computers. This approach does not require transposition of intermediate results and avoids interprocessor communication. Since it is based on matrix-vector multiplication, our technique imposes no restriction on the size of the input data or on the transform parameters. Complete use of the available processor parallelism, modularity, and scalability are achieved. Theoretical and experimental evaluations and comparisons with traditional parallelizations are given.
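The matrix formulation the abstract refers to can be illustrated with a one-level 2-D Haar transform written as W X W^T, where one matrix expression covers both the row and the column pass with no intermediate transpose (a minimal sketch assuming the Haar wavelet; the paper's parallel decomposition and communication scheme are not reproduced here):

```python
import numpy as np

def haar_matrix(n: int) -> np.ndarray:
    """Orthonormal one-level Haar analysis matrix (n must be even)."""
    assert n % 2 == 0
    w = np.zeros((n, n))
    s = 1 / np.sqrt(2)
    for k in range(n // 2):
        w[k, 2 * k] = s                   # lowpass row: pairwise averages
        w[k, 2 * k + 1] = s
        w[n // 2 + k, 2 * k] = s          # highpass row: pairwise differences
        w[n // 2 + k, 2 * k + 1] = -s
    return w

def dwt2(x: np.ndarray) -> np.ndarray:
    """One-level 2-D DWT: LL, LH, HL, HH subbands land in the four quadrants."""
    w = haar_matrix(x.shape[0])
    return w @ x @ w.T

def idwt2(y: np.ndarray) -> np.ndarray:
    """Inverse transform; W is orthogonal, so its inverse is its transpose."""
    w = haar_matrix(y.shape[0])
    return w.T @ y @ w
```

Because each output element is an independent inner product, rows of the product can be assigned to different processors without exchanging intermediate results, which is the intuition behind the communication-free formulation.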