A single-photon sampling architecture for solid-state imaging
Advances in solid-state technology have enabled the development of silicon
photomultiplier sensor arrays capable of sensing individual photons. Combined
with high-frequency time-to-digital converters (TDCs), this technology opens up
the prospect of sensors capable of recording with high accuracy both the time
and location of each detected photon. Such a capability could lead to
significant improvements in imaging accuracy, especially for applications
operating with low photon fluxes such as LiDAR and positron emission
tomography.
The demands placed on on-chip readout circuitry impose stringent trade-offs
between fill factor and spatio-temporal resolution, causing many contemporary
designs to severely underutilize the technology's full potential. Concentrating
on the low photon flux setting, this paper leverages results from group testing
and proposes an architecture for a highly efficient readout of pixels using
only a small number of TDCs, thereby also reducing both cost and power
consumption. The design relies on a multiplexing technique based on binary
interconnection matrices. We provide optimized instances of these matrices for
various sensor parameters and give explicit upper and lower bounds on the
number of TDCs required to uniquely decode a given maximum number of
simultaneous photon arrivals.
To illustrate the strength of the proposed architecture, we note a typical
instance that digitizes a 120x120 photodiode sensor on a 30um x 30um pitch with
a 40ps time resolution and an estimated fill factor of approximately 70%, using
only 161 TDCs. The design guarantees registration and unique recovery of up to
4 simultaneous photon arrivals using a fast decoding algorithm. In a series of
realistic simulations of scintillation events in clinical positron emission
tomography the design was able to recover the spatio-temporal location of 98.6%
of all photons that caused pixel firings.
Comment: 24 pages, 3 figures, 5 tables
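The multiplexed readout can be sketched with a toy superimposed code. In the sketch below (names and the specific matrix are illustrative, not the optimized instances from the paper), 12 pixels share only 9 TDC lines: the columns of the interconnection matrix are the blocks of the Steiner triple system of order 9, any two of which share at most one point, so the matrix is 2-disjunct and the standard cover test uniquely decodes up to 2 simultaneous photon arrivals.

```python
# Toy group-testing readout: 12 pixels share 9 TDC lines.
# Columns are the blocks of the Steiner triple system STS(9) (lines of AG(2,3));
# any two blocks meet in at most one point, so the matrix is 2-disjunct.
BLOCKS = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows of the 3x3 grid
    (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
    (0, 4, 8), (1, 5, 6), (2, 3, 7),   # diagonals
    (0, 5, 7), (1, 3, 8), (2, 4, 6),   # anti-diagonals
]

def measure(fired_pixels):
    """TDC lines triggered when the given set of pixels fires (boolean OR)."""
    lines = set()
    for p in fired_pixels:
        lines.update(BLOCKS[p])
    return lines

def decode(triggered_lines):
    """Cover test: declare a pixel fired iff all of its TDC lines triggered.
    Exact for up to 2 simultaneous arrivals because the matrix is 2-disjunct."""
    return sorted(p for p, b in enumerate(BLOCKS) if set(b) <= triggered_lines)

print(decode(measure({3, 10})))  # -> [3, 10]
```

The paper's matrices play the same role at scale: 161 TDCs for 14,400 pixels with unique recovery of up to 4 arrivals, versus 9 lines for 12 pixels and 2 arrivals here.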
On Optimal Binary One-Error-Correcting Codes of Lengths 2^m-4 and 2^m-3
Best and Brouwer [Discrete Math. 17 (1977), 235-245] proved that
triply-shortened and doubly-shortened binary Hamming codes (which have lengths
2^m-4 and 2^m-3, respectively) are optimal. Properties of such codes are
here studied, determining among other things parameters of certain subcodes. A
utilization of these properties makes a computer-aided classification of the
optimal binary one-error-correcting codes of lengths 12 and 13 possible; there
are 237610 and 117823 such codes, respectively (with 27375 and 17513
inequivalent extensions). This completes the classification of optimal binary
one-error-correcting codes for all lengths up to 15. Some properties of the
classified codes are further investigated. Finally, it is proved that for any
m >= 4, there are optimal binary one-error-correcting codes of lengths 2^m-4
and 2^m-3 that cannot be lengthened to perfect codes of length 2^m-1.
Comment: Accepted for publication in IEEE Transactions on Information Theory.
Data available at http://www.iki.fi/opottone/code
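For m = 4 the objects in question are small enough to reproduce by brute force: start from the [15,11,3] Hamming code and shorten it (fix a coordinate to zero and delete it) two or three times. A minimal sketch, with function names of my own choosing:

```python
def hamming15():
    """All 2048 codewords of the [15,11,3] Hamming code, by brute force.
    Column i of the parity-check matrix is the binary expansion of i+1,
    so a vector is a codeword iff the XOR of the labels of its 1s is 0."""
    code = []
    for v in range(1 << 15):
        bits = [(v >> i) & 1 for i in range(15)]
        s = 0
        for i, b in enumerate(bits):
            if b:
                s ^= i + 1
        if s == 0:
            code.append(bits)
    return code

def shorten(code, times):
    """Keep codewords ending in 0 and delete that coordinate, `times` times."""
    for _ in range(times):
        code = [c[:-1] for c in code if c[-1] == 0]
    return code

ham = hamming15()
d2, d3 = shorten(ham, 2), shorten(ham, 3)
print(len(ham), len(d2), len(d3))  # -> 2048 512 256
```

The resulting sizes, 512 codewords at length 13 and 256 at length 12, are exactly the optimal values whose codes the paper classifies.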
The Perfect Binary One-Error-Correcting Codes of Length 15: Part II--Properties
A complete classification of the perfect binary one-error-correcting codes of
length 15 as well as their extensions of length 16 was recently carried out in
[P. R. J. Östergård and O. Pottonen, "The perfect binary
one-error-correcting codes of length 15: Part I--Classification," IEEE Trans.
Inform. Theory vol. 55, pp. 4657--4660, 2009]. In the current accompanying
work, the classified codes are studied in great detail, and their main
properties are tabulated. The results include the fact that 33 of the 80
Steiner triple systems of order 15 occur in such codes. Further understanding
is gained on full-rank codes via switching, as it turns out that all but two
full-rank codes can be obtained through a series of such transformations from
the Hamming code. Other topics studied include (non)systematic codes, embedded
one-error-correcting codes, and defining sets of codes. A classification of
certain mixed perfect codes is also obtained.
Comment: v2: fixed two errors (extension of nonsystematic codes, table of
coordinates fixed by symmetries of codes), added and extended many other
results
Characterisation of a three-dimensional Brownian motor in optical lattices
We present here a detailed study of the behaviour of a three dimensional
Brownian motor based on cold atoms in a double optical lattice [P. Sjolund et
al., Phys. Rev. Lett. 96, 190602 (2006)]. This includes both experiments and
numerical simulations of a Brownian particle. The potentials used are spatially
and temporally symmetric, but combined spatiotemporal symmetry is broken by
phase shifts and asymmetric transfer rates between potentials. The diffusion of
atoms in the optical lattices is rectified and controlled both in direction and
speed along three dimensions. We explore a large range of experimental
parameters, where irradiances and detunings of the optical lattice lights are
varied within the dissipative regime. Induced drift velocities on the order of
one atomic recoil velocity have been achieved.
Comment: 8 pages, 14 figures
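Rectified diffusion can be demonstrated with a far cruder model than the double optical lattice: the classical flashing ratchet, which breaks spatial symmetry directly rather than the combined spatiotemporal symmetry used in the paper. The sketch below (an idealized on/off sawtooth; all names and parameters are invented for illustration) shows diffusing particles acquiring a net drift once the symmetry is broken:

```python
import math
import random

def flashing_ratchet(asym=0.2, sigma=0.3, cycles=100, particles=500, seed=1):
    """Idealized flashing ratchet on a period-1 sawtooth whose minimum sits at
    `asym` within each well (asym=0.5 is the symmetric, non-rectifying case).
    OFF phase: free Gaussian diffusion with step size `sigma`.
    ON phase: each particle snaps to the minimum of the well it landed in.
    Returns the mean displacement per particle after all cycles."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(particles):
        x = asym
        for _ in range(cycles):
            x += rng.gauss(0.0, sigma)    # OFF: diffuse freely
            x = math.floor(x) + asym      # ON: relax to the local minimum
        total += x - asym
    return total / particles

# Broken symmetry rectifies the diffusion; the symmetric potential does not.
print(flashing_ratchet(), flashing_ratchet(asym=0.5))
```

With asym=0.2 the nearer barrier is easier to diffuse past, so the drift points toward the steep side of the sawtooth; setting asym=0.5 restores symmetry and the drift vanishes on average.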
Wet paper codes and the dual distance in steganography
In 1998 Crandall introduced a method based on coding theory to secretly embed
a message in a digital support such as an image. Later Fridrich et al. improved
this method to minimize the distortion introduced by the embedding; a process
called wet paper. However, as previously emphasized in the literature, this
method can fail during the embedding step. Here we find sufficient and
necessary conditions to guarantee a successful embedding by studying the dual
distance of a linear code. Since these results are essentially of combinatorial
nature, they can be generalized to systematic codes, a large family containing
all linear codes. We also compute the exact number of solutions and point out
the relationship between wet paper codes and orthogonal arrays.
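Crandall's embedding idea is compact enough to sketch. With the [7,4] Hamming code, 3 message bits are hidden in 7 cover bits by flipping at most one bit (syndrome coding); a "wet" (locked) position, as in wet paper codes, makes the embedding fail exactly when the required flip lands on it. The function names and the wet-set interface below are mine:

```python
def syndrome(bits):
    """Hamming [7,4] syndrome: XOR of the labels 1..7 of the positions set to 1."""
    s = 0
    for i, b in enumerate(bits):
        if b:
            s ^= i + 1
    return s

def embed(cover, msg, wet=frozenset()):
    """Return a stego vector with syndrome `msg`, flipping at most one bit of
    `cover`; return None if the required flip is a wet (locked) position."""
    d = syndrome(cover) ^ msg      # label of the position to flip (0 = none)
    if d == 0:
        return list(cover)
    if d - 1 in wet:
        return None                # the failure mode analyzed in the paper
    stego = list(cover)
    stego[d - 1] ^= 1
    return stego

extract = syndrome                 # the recipient just computes the syndrome

cover = [1, 0, 1, 1, 0, 0, 1]
print(extract(embed(cover, 5)))    # -> 5
```

The paper's contribution is to characterize, via the dual distance, exactly when such an embedding is guaranteed to succeed despite the wet positions.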
Low-density MDS codes and factors of complete graphs
We present a class of array codes of size n×l, where l=2n or 2n+1, called B-Codes. The distances of the B-Code and its dual are 3 and l-1, respectively. The B-Code and its dual are optimal in the sense that i) they are maximum-distance separable (MDS), ii) they have an optimal encoding property, i.e., the number of parity bits affected by a change of a single information bit is minimal, and iii) they have optimal length. Using a new graph description of the codes, we prove an equivalence between the construction of the B-Code (or its dual) and a combinatorial problem known as perfect one-factorization of complete graphs, thus obtaining constructions of two families of the B-Code and its dual, one of which is new. Efficient decoding algorithms are also given, both for erasure correction and for error correction. The existence of a perfect one-factorization for every complete graph with an even number of nodes is a 35-year-old conjecture in graph theory. The construction of B-Codes of arbitrary odd length would provide an affirmative answer to the conjecture.
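The combinatorial object behind the conjecture is easy to verify directly for small cases. The sketch below builds the classical "starter" one-factorization of K_8 (vertex 7 playing the point at infinity) and checks that it is perfect, i.e., that the union of any two one-factors is a Hamiltonian cycle. This construction is known to be perfect whenever the number of vertices is p+1 for a prime p; it illustrates the notion only, not the B-Code construction itself, and the names are mine:

```python
from collections import defaultdict

M = 7  # K_8 on vertices 0..6 plus the "infinity" vertex M

def one_factor(i):
    """i-th starter one-factor of K_8: the edge (i, infinity) plus the
    pairs (i+j, i-j) taken mod 7."""
    edges = [(i, M)]
    for j in range(1, (M + 1) // 2):
        edges.append(tuple(sorted(((i + j) % M, (i - j) % M))))
    return edges

def is_hamiltonian_cycle(edges, n):
    """True iff `edges` (the union of two one-factors) form a single n-cycle."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    if len(adj) != n or any(len(nb) != 2 for nb in adj.values()):
        return False
    prev, cur, steps = None, 0, 0      # walk the cycle starting from vertex 0
    while True:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        prev, cur, steps = cur, nxt, steps + 1
        if cur == 0:
            return steps == n

factors = [one_factor(i) for i in range(M)]
all_edges = {e for f in factors for e in f}
perfect = all(is_hamiltonian_cycle(factors[i] + factors[k], M + 1)
              for i in range(M) for k in range(i + 1, M))
print(len(all_edges), perfect)         # -> 28 True
```

The 7 factors of 4 edges each partition all 28 edges of K_8, and every pair of factors closes into a single 8-cycle, which is exactly the "perfect" property the B-Code construction is equivalent to.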