Nonuniform Fast Fourier Transforms Using Min-Max Interpolation
The fast Fourier transform (FFT) is used widely in signal processing for efficient computation of the FT of finite-length signals over a set of uniformly spaced frequency locations. However, in many applications, one requires nonuniform sampling in the frequency domain, i.e., a nonuniform FT. Several papers have described fast approximations for the nonuniform FT based on interpolating an oversampled FFT. This paper presents an interpolation method for the nonuniform FT that is optimal in the min-max sense of minimizing the worst-case approximation error over all signals of unit norm. The proposed method easily generalizes to multidimensional signals. Numerical results show that the min-max approach provides substantially lower approximation errors than conventional interpolation methods. The min-max criterion is also useful for optimizing the parameters of interpolation kernels such as the Kaiser-Bessel function.
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85840/1/Fessler70.pd
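The general recipe the abstract builds on can be sketched in a few lines: compute an oversampled FFT, then interpolate between its uniform samples at the desired nonuniform frequencies. The sketch below uses plain linear interpolation as the baseline that min-max-optimized kernels improve on; the signal length, oversampling factor, and frequency count are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                         # signal length (illustrative)
x = rng.random(N)

# Reference: direct nonuniform DFT at arbitrary frequencies, O(M*N).
omegas = rng.uniform(0, 2 * np.pi, 50)
n = np.arange(N)
X_exact = np.exp(-1j * np.outer(omegas, n)) @ x

# Fast approximation: oversampled FFT + interpolation between grid samples.
K = 16 * N                     # oversampling factor 16 (illustrative)
X_grid = np.fft.fft(x, K)      # X(omega) sampled at omega_k = 2*pi*k / K
grid = 2 * np.pi * np.arange(K + 1) / K
X_grid = np.append(X_grid, X_grid[0])   # periodic endpoint for interpolation
X_approx = (np.interp(omegas, grid, X_grid.real)
            + 1j * np.interp(omegas, grid, X_grid.imag))

err = np.max(np.abs(X_approx - X_exact)) / np.max(np.abs(X_exact))
print(f"worst-case relative error: {err:.2e}")
```

Linear interpolation already gives small errors at high oversampling; the paper's contribution is choosing the interpolation coefficients to minimize the worst-case error over unit-norm signals, which allows much lower oversampling for the same accuracy.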
Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters
Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k×k kernel requires k²−1 comparisons per sample for a direct implementation; thus, the cost scales expensively with the kernel size k. Faster computation can be achieved by kernel decomposition and constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture uses fewer computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture computes max/min filters on 1024×1024 images with kernels up to 255×255 in around 8.4 milliseconds (about 120 frames per second) at a clock frequency of 250 MHz. The implementation is highly scalable in the kernel size, with a performance/area tradeoff suitable for embedded applications. The applicability of the architecture is demonstrated on local adaptive image thresholding.
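The constant-time property of the HGW algorithm comes from a simple trick: split the signal into blocks of length k, compute running maxima forward (prefix) and backward (suffix) within each block, and combine one value from each pass per output window, roughly three comparisons per sample regardless of k. A minimal 1D software sketch (function name and example data are mine, not from the paper):

```python
def running_max_hgw(a, k):
    """1D van Herk/Gil-Werman running max over windows a[i : i+k]."""
    n = len(a)
    # Forward pass: prefix max within each block of size k.
    s = list(a)
    for i in range(n):
        if i % k != 0:
            s[i] = max(s[i], s[i - 1])
    # Backward pass: suffix max within the same blocks.
    r = list(a)
    for i in range(n - 2, -1, -1):
        if (i + 1) % k != 0:
            r[i] = max(r[i], r[i + 1])
    # Each window spans a suffix of one block and a prefix of the next.
    return [max(r[i], s[i + k - 1]) for i in range(n - k + 1)]

print(running_max_hgw([4, 2, 7, 1, 5, 3, 9, 6], 3))  # [7, 7, 7, 5, 9, 9]
```

For 2D rectangular kernels, the kernel decomposition mentioned in the abstract amounts to applying this 1D filter along rows and then along columns; the min filter is the same with `max` replaced by `min`.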
Polynomial-delay Enumeration Kernelizations for Cuts of Bounded Degree
Enumeration kernelization was first proposed by Creignou et al. [TOCS 2017]
and was later refined by Golovach et al. [JCSS 2022] into two different
variants: fully-polynomial enumeration kernelization and polynomial-delay
enumeration kernelization. In this paper, we consider the d-CUT problem from
the perspective of (polynomial-delay) enumeration kernelization. Given an
undirected graph G = (V, E), a cut F = E(A, B) is a d-cut of G if every u in A
has at most d neighbors in B and every v in B has at most d neighbors in A.
Checking the existence of a d-cut in a graph is a well-known NP-hard problem
and is well-studied in parameterized complexity [Algorithmica 2021, IWOCA
2021]. This problem also generalizes a well-studied problem MATCHING CUT (set d
= 1) that has been a central problem in the literature of polynomial-delay
enumeration kernelization. In this paper, we study three enumeration variants
of this problem, ENUM d-CUT, ENUM MIN-d-CUT and ENUM MAX-d-CUT, which aim to
enumerate all the d-cuts, all the minimal d-cuts and all the maximal d-cuts,
respectively. We consider various structural parameters of the input and
provide polynomial-delay enumeration kernels for ENUM d-CUT and ENUM MAX-d-CUT
and fully-polynomial enumeration kernels of polynomial size for ENUM MIN-d-CUT.
Comment: 25 pages
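For intuition, the d-cut condition is easy to check directly, and the three enumeration variants can be made concrete with a brute-force enumerator (exponential in the number of vertices, unlike the kernelizations the paper provides). The graph encoding and example values below are illustrative assumptions:

```python
from itertools import combinations

def is_d_cut(adj, A, B, d):
    """(A, B) is a d-cut iff every vertex has at most d neighbors
    on the other side of the cut."""
    return all(sum(v in other for v in adj[u]) <= d
               for side, other in ((A, B), (B, A)) for u in side)

def enum_d_cuts(adj, d):
    """Brute-force ENUM d-CUT: list all d-cuts (A, B) with A, B nonempty.
    Fixing one vertex in A avoids listing each cut twice."""
    V = list(adj)
    cuts = []
    for r in range(1, len(V)):
        for rest in combinations(V[1:], r - 1):
            A = frozenset((V[0],) + rest)
            B = frozenset(V) - A
            if is_d_cut(adj, A, B, d):
                cuts.append((A, B))
    return cuts

# 4-cycle 0-1-2-3-0; with d = 1 this is the MATCHING CUT setting.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(len(enum_d_cuts(c4, 1)))  # 2: the two matching cuts of C4
print(len(enum_d_cuts(c4, 2)))  # 7: every bipartition works when d = 2
```

Restricting the output to cuts whose edge set E(A, B) is inclusion-minimal or inclusion-maximal would give the ENUM MIN-d-CUT and ENUM MAX-d-CUT variants, respectively.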