Simplification of the generalized adaptive neural filter and comparative studies with other nonlinear filters
Recently, a new class of adaptive filters called Generalized Adaptive Neural Filters (GANFs) has emerged. They share many characteristics with stack filters and include all stack filters as a subset. GANFs allow a very efficient hardware implementation once they are trained. However, GANFs have some problems; three of these are slow training speed, the difficulty of choosing a filter structure, and the difficulty of choosing a neural operator.
This thesis begins with a tutorial on filtering and traces the development of the GANF from its origin, the stack filter. After the GANF is covered in reasonable depth, its use as an image processing filter is examined, and its usefulness is assessed through simulation comparisons with other common filters. Some problems of GANFs are also investigated: a brief study examines different types of neural networks and their applicability to GANFs, and some ideas for increasing the speed of the GANF are discussed. While these improvements do not completely solve the GANF's problems, they make a measurable difference and bring the filter closer to practical use.
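Since GANFs contain all stack filters as a subset, the stacking idea is worth a concrete illustration. The sketch below (hypothetical helper names, not code from the thesis) realizes a median filter as a stack filter: the integer signal is threshold-decomposed into binary slices, each slice is filtered by a positive Boolean function (here, the binary median/majority), and the filtered slices are summed back up. The result matches the direct median filter.

```python
import numpy as np

def stack_median(x, width=3):
    """Median filter realized as a stack filter via threshold decomposition.

    Illustrative sketch: decompose an integer signal into binary threshold
    slices, filter each slice with a positive Boolean function (the binary
    majority), and sum ("stack") the filtered slices.
    """
    M = int(x.max())
    half = width // 2
    out = np.zeros_like(x)
    for t in range(1, M + 1):
        b = (x >= t).astype(int)          # binary slice at threshold t
        p = np.pad(b, half, mode='edge')
        # binary median: output 1 iff a majority of the window is 1
        f = np.array([int(p[i:i + width].sum() > half) for i in range(len(x))])
        out += f                           # stack the filtered slices
    return out

def direct_median(x, width=3):
    p = np.pad(x, width // 2, mode='edge')
    return np.array([int(np.median(p[i:i + width])) for i in range(len(x))])

x = np.array([1, 5, 2, 8, 3, 9, 4])
print(stack_median(x))   # equals direct_median(x): [1 2 5 3 8 4 4]
```

The equality holds because the median commutes with thresholding, which is exactly the stacking property that defines stack filters.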
Surface remeshing in arbitrary codimensions
We present a method for remeshing surfaces that is both general and efficient. Existing efficient methods are restrictive in the types of remeshings they produce, while methods able to produce general remeshings are typically based on iteration, which prevents them from running at interactive rates. In our method, the input surface is directly mapped to an arbitrary (possibly high-dimensional) range space and uniformly remeshed in this space. Because the mesh is uniform in the range space, all the quantities encoded in the mapping are bounded, resulting in a mesh that is simultaneously adapted to all criteria encoded in the map; thus we can obtain remeshings of arbitrary characteristics. Because the core operation is a uniform remeshing of a surface embedded in range space, and this operation is direct and local, the remeshing is efficient and can run at interactive rates.
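As a toy numpy illustration of the range-space idea (hypothetical names and a simplified embedding, not the authors' code): mapping each vertex to [position, s · normal] makes edges that span high normal variation longer in range space, so a uniform remeshing there automatically refines curved regions.

```python
import numpy as np

def range_space(positions, normals, s=1.0):
    # Hypothetical range-space embedding phi(v) = [position, s * normal].
    # Uniform edge lengths in this 6-D space adapt the mesh to normal
    # variation (a proxy for curvature).
    return np.hstack([positions, s * normals])

def edge_length(embed, i, j):
    return np.linalg.norm(embed[i] - embed[j])

positions = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0.0]])
normals   = np.array([[0, 0, 1], [0, 0, 1], [1, 0, 0.0]])  # sharp turn at v2

emb = range_space(positions, normals, s=1.0)
flat_edge  = edge_length(emb, 0, 1)   # same normal: range-space length 1.0
sharp_edge = edge_length(emb, 1, 2)   # normal flips: length sqrt(3) > 1
print(flat_edge, sharp_edge)
```

A uniform target edge length in range space would therefore split the second edge but not the first, giving the curvature-adapted behavior described above.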
The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Sharp Minima and Regularization Effects
Understanding the behavior of stochastic gradient descent (SGD) in the context of deep neural networks has attracted much attention recently. Along this line, we study a general form of gradient-based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. Through investigating this general optimization dynamics, we analyze the behavior of SGD in escaping from minima and its regularization effects. A novel indicator is derived to characterize the efficiency of escaping from minima by measuring the alignment of the noise covariance with the curvature of the loss function. Based on this indicator, two conditions are established to show which types of noise structure are superior to isotropic noise in terms of escaping efficiency. We further show that the anisotropic noise in SGD satisfies the two conditions, and thus helps to escape from sharp and poor minima effectively, towards more stable and flat minima that typically generalize well. We systematically design various experiments to verify the benefits of the anisotropic noise, compared with full gradient descent plus isotropic diffusion (i.e., Langevin dynamics).
Comment: ICML 2019 camera-ready
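The paper derives a specific escaping-efficiency indicator; as a toy numpy sketch (not the paper's exact formula), a Tr(HΣ)-style alignment score already shows why noise whose covariance is aligned with the sharp curvature direction escapes faster than isotropic noise of the same magnitude:

```python
import numpy as np

# Hypothetical 2-D loss landscape near a minimum: one sharp direction
# (large curvature) and one flat direction. Hessian at the minimum:
H = np.diag([100.0, 1.0])

# Two noise covariances with equal magnitude (equal trace):
sigma_iso   = np.eye(2)                # isotropic (Langevin-like) noise
sigma_aniso = np.diag([1.9, 0.1])      # aligned with the sharp direction

# Tr(H @ Sigma): expected second-order increase of the loss per unit noise
# step, used here as a proxy for how efficiently noise escapes the minimum.
score_iso   = np.trace(H @ sigma_iso)     # 100*1 + 1*1   = 101
score_aniso = np.trace(H @ sigma_aniso)   # 100*1.9 + 0.1 = 190.1

print(score_aniso > score_iso)   # True: aligned noise escapes sharp minima faster
```

With equal total noise power, concentrating the covariance along the high-curvature eigendirection nearly doubles the score, which is the intuition behind preferring SGD's anisotropic noise over isotropic diffusion.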
The Euclidean distance degree of an algebraic variety
The nearest point map of a real algebraic variety with respect to Euclidean distance is an algebraic function. For instance, for varieties of low-rank matrices, the Eckart-Young Theorem states that this map is given by the singular value decomposition. This article develops a theory of such nearest point maps from the perspective of computational algebraic geometry. The Euclidean distance degree of a variety is the number of critical points of the squared distance to a generic point outside the variety. Focusing on varieties seen in applications, we present numerous tools for exact computations.
Comment: to appear in Foundations of Computational Mathematics
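For a concrete instance, the ED degree of an ellipse is 4. A sympy sketch (the data point u is an arbitrary generic choice, not from the article) counts the critical points of the squared distance from u to the ellipse x²/4 + y² = 1, using the condition that (x, y) − u is parallel to the gradient of the defining equation:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 / 4 + y**2 - 1    # the ellipse, a plane curve of ED degree 4
u = (1, 2)                  # a generic data point outside the variety

# Critical points of ||(x, y) - u||^2 on f = 0: the displacement
# (x - u1, y - u2) must be parallel to grad f, and (x, y) must lie on f.
eqs = [
    (x - u[0]) * sp.diff(f, y) - (y - u[1]) * sp.diff(f, x),  # parallelism
    f,                                                        # on the variety
]
sols = sp.solve(eqs, [x, y])
print(len(sols))   # 4 critical points (real and complex), the ED degree
```

Both equations are conics, so Bézout's theorem predicts 4 intersection points; for a generic u these are distinct, two real and two complex, matching the ED degree.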
Hierarchy of surface models and irreducible triangulations
Given a triangulated closed surface, the problem of constructing a hierarchy of surface models of decreasing level of detail has attracted much attention in computer graphics. A hierarchy provides view-dependent refinement and facilitates the computation of parameterization. For a triangulated closed surface of n vertices and genus g, we prove that there is a constant c>0 such that if n>c·g, a greedy strategy can identify Θ(n) topology-preserving edge contractions that do not interfere with each other. Further, each of them affects only a constant number of triangles. Repeatedly identifying and contracting such edges produces a topology-preserving hierarchy of O(n+g²) size and O(log n+g) depth. Although several implementations exist for constructing hierarchies, our work is the first to show that a greedy algorithm can efficiently compute a hierarchy of provably small size and low depth. When no contractible edge exists, the triangulation is irreducible. Nakamoto and Ota showed that any irreducible triangulation of an orientable 2-manifold has at most max{342g−72,4} vertices. Using our proof techniques we obtain a new bound of max{240g,4}.
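The standard test for whether an edge contraction preserves topology is the link condition: edge ab may be contracted iff the links of a and b intersect exactly in the link of ab. A minimal sketch (hypothetical helper names; the paper's greedy algorithm additionally requires the chosen contractions not to interfere with each other):

```python
def vertex_link(T, v):
    # Link of a vertex: the opposite edges of its incident triangles,
    # together with the endpoints of those edges.
    elems = set()
    for t in T:
        if v in t:
            opp = tuple(sorted(u for u in t if u != v))
            elems.add(frozenset(opp))   # the opposite edge
            elems.update(opp)           # its two endpoints
    return elems

def edge_link(T, a, b):
    # Link of an edge: the vertices opposite to it in its two triangles.
    return {v for t in T if a in t and b in t for v in t if v not in (a, b)}

def contractible(T, a, b):
    # Link condition: contracting (a, b) preserves topology iff
    # link(a) ∩ link(b) == link(ab).
    return vertex_link(T, a) & vertex_link(T, b) == edge_link(T, a, b)

tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]   # irreducible sphere
octa = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
        (5, 1, 2), (5, 2, 3), (5, 3, 4), (5, 4, 1)]    # octahedral sphere
print(contractible(tetra, 0, 1), contractible(octa, 0, 1))  # False True
```

On the tetrahedron, the links of any two vertices share a whole edge that is not in the edge link, so no edge is contractible, which is exactly what "irreducible" means; the octahedron still admits contractions.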