Adaptive two-pass rank order filter to remove impulse noise in highly corrupted images
© 2004 IEEE. In this paper, we present an adaptive two-pass rank order filter to remove impulse noise in highly corrupted images.
When the noise ratio is high, rank order filters, such as the median filter, can produce unsatisfactory results. Better results can be obtained by applying the filter twice, which we call two-pass filtering. To further improve the performance, we develop an adaptive two-pass rank order filter. Between the passes of
filtering, an adaptive process is used to detect irregularities in the spatial distribution of the estimated impulse noise. The adaptive process then selectively replaces some pixels changed by the first
pass of filtering with their original observed pixel values. These pixels are then kept unchanged during the second filtering. In combination, the adaptive process and the second filter eliminate more impulse noise and restore some pixels that are mistakenly
altered by the first filtering. As a final result, the reconstructed image maintains a higher degree of fidelity and has a smaller
amount of noise. The idea of adaptive two-pass processing can be applied to many rank order filters, such as a center-weighted
median filter (CWMF), adaptive CWMF, lower-upper-middle filter, and soft-decision rank-order-mean filter. Results from computer simulations are used to demonstrate the performance of this type of adaptation using a number of basic rank order filters. This work was supported in part by CenSSIS, the Center for Subsurface Sensing and Imaging Systems, under the Engineering Research Centers Program of the National Science Foundation (NSF) under Award EEC-9986821, by an ARO MURI on Demining under Grant DAAG55-97-1-0013, and by the NSF under Award 0208548.
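The two-pass idea described above can be sketched in a few lines. The following is a simplified illustration, not the paper's exact algorithm: the irregularity test here (restoring changed pixels with few changed neighbours, on the grounds that scattered detections are more likely genuine detail) and the parameter names are assumptions for the sketch.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def adaptive_two_pass_median(img, size=3, max_neighbours=2):
    """Simplified sketch of adaptive two-pass rank order filtering."""
    # First pass: plain median filtering.
    first = median_filter(img, size=size)
    changed = first != img  # pixels the filter altered (estimated impulses)

    # Adaptive step (illustrative): in a highly corrupted image, impulse
    # noise tends to appear densely; a changed pixel with very few changed
    # neighbours is more likely genuine detail, so restore its observed value.
    n_changed = uniform_filter(changed.astype(float), size=size) * size * size
    restore = changed & (n_changed <= max_neighbours)
    mid = np.where(restore, img, first)

    # Second pass: filter again, keeping the restored pixels unchanged.
    second = median_filter(mid, size=size)
    return np.where(restore, img, second)
```

Any rank order filter (CWMF, LUM, etc.) could replace `median_filter` in both passes without changing the surrounding adaptive logic.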
On enforcing non-negativity in polynomial approximations in high dimensions
Polynomial approximations of functions are widely used in scientific
computing. In certain applications, it is often desired to require the
polynomial approximation to be non-negative (resp. non-positive), or bounded
within a given range, due to constraints posed by the underlying physical
problems. Efficient numerical methods are thus needed to enforce such
conditions. In this paper, we discuss effective numerical algorithms for
polynomial approximation under non-negativity constraints. We first formulate
the constrained optimization problem, its primal and dual forms, and then
discuss efficient first-order convex optimization methods, with a particular
focus on high dimensional problems. Numerical examples are provided in high
dimensions to demonstrate the effectiveness and scalability of the
methods.
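As a small illustration of enforcing non-negativity in a polynomial fit: one simple, conservative route (not the first-order primal-dual methods the paper develops) is to expand in the Bernstein basis on [0, 1] and require non-negative coefficients via non-negative least squares. Non-negative Bernstein coefficients are a sufficient, though not necessary, condition for the polynomial itself to be non-negative, so this sketch trades some approximation accuracy for a guaranteed constraint.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.special import comb

def bernstein_basis(x, n):
    # B_{k,n}(x) = C(n,k) x^k (1-x)^(n-k): each basis function is
    # non-negative on [0, 1].
    k = np.arange(n + 1)
    return comb(n, k) * x[:, None] ** k * (1.0 - x[:, None]) ** (n - k)

def nonneg_poly_fit(x, f, n):
    # Least-squares fit with non-negative coefficients (NNLS); since the
    # basis is non-negative, the fitted polynomial is non-negative too.
    B = bernstein_basis(x, n)
    c, _ = nnls(B, f)
    return c

x = np.linspace(0.0, 1.0, 200)
f = np.sin(2 * np.pi * x)       # negative on (1/2, 1)
c = nonneg_poly_fit(x, f, n=10)
p = bernstein_basis(x, 10) @ c  # non-negative by construction
```

The exact constrained formulation in the paper instead imposes non-negativity directly on the approximant, which is what the discussed convex optimization machinery is for.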
Modeling Unknown Stochastic Dynamical System via Autoencoder
We present a numerical method to learn an accurate predictive model for an
unknown stochastic dynamical system from its trajectory data. The method seeks
to approximate the unknown flow map of the underlying system. It employs the
idea of autoencoder to identify the unobserved latent random variables. In our
approach, we design an encoding function to discover the latent variables,
which are modeled as unit Gaussian, and a decoding function to reconstruct the
future states of the system. Both the encoder and decoder are expressed as deep
neural networks (DNNs). Once the DNNs are trained by the trajectory data, the
decoder serves as a predictive model for the unknown stochastic system. Through
an extensive set of numerical examples, we demonstrate that the method is able
to produce long-term system predictions by using short bursts of trajectory
data. It is also applicable to systems driven by non-Gaussian noises.
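The prediction loop described above iterates the trained decoder with fresh unit-Gaussian latent draws. As a hedged sketch, the snippet below uses the exact one-step flow map of an Ornstein-Uhlenbeck process as a stand-in for a trained decoder (the paper would learn this map as a DNN from trajectory data); the function names are illustrative.

```python
import numpy as np

def ou_flow_map(x, z, dt, theta=1.0, sigma=np.sqrt(2.0)):
    # Exact one-step flow map of the OU SDE dX = -theta*X dt + sigma dW,
    # standing in for a trained decoder D(x, z), where z ~ N(0, 1) plays
    # the role of the latent variable the encoder identifies in training.
    mean = x * np.exp(-theta * dt)
    std = sigma * np.sqrt((1.0 - np.exp(-2.0 * theta * dt)) / (2.0 * theta))
    return mean + std * z

def predict(x0, n_steps, dt, rng):
    # Long-term prediction: iterate the one-step decoder with fresh
    # unit-Gaussian latents, exactly as in the abstract's prediction loop.
    xs = [x0]
    for _ in range(n_steps):
        xs.append(ou_flow_map(xs[-1], rng.standard_normal(), dt))
    return np.array(xs)
```

With theta = 1 and sigma = sqrt(2), long trajectories generated this way should reproduce the OU stationary statistics (mean 0, variance 1), which is the kind of long-term behaviour the learned decoder is meant to capture.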
Boosting Continuous Control with Consistency Policy
Due to its training stability and strong expressiveness, the diffusion model has
attracted considerable attention in offline reinforcement learning. However,
several challenges have also come with it: 1) The demand for a large number of
diffusion steps makes the diffusion-model-based methods time inefficient and
limits their applications in real-time control; 2) How to achieve policy
improvement with accurate guidance for diffusion model-based policy is still an
open problem. Inspired by the consistency model, we propose a novel
time-efficient method named Consistency Policy with Q-Learning (CPQL), which
derives action from noise by a single step. By establishing a mapping from the
reverse diffusion trajectories to the desired policy, we simultaneously address
the issues of time efficiency and inaccurate guidance when updating diffusion
model-based policy with the learned Q-function. We demonstrate that CPQL can
achieve policy improvement with accurate guidance for offline reinforcement
learning, and can be seamlessly extended for online RL tasks. Experimental
results indicate that CPQL achieves new state-of-the-art performance on 11
offline and 21 online tasks, significantly improving inference speed by nearly
45 times compared to Diffusion-QL. We will release our code later.
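The core contrast with multi-step diffusion sampling can be sketched very roughly. Everything below is a toy stand-in, not CPQL itself: the one-step policy is a small linear-plus-tanh map instead of a trained consistency model, the critic is a known quadratic instead of a learned Q-function, and the update uses numerical gradients for a single fixed noise draw purely for brevity.

```python
import numpy as np

def consistency_policy(W, state, eps):
    # One-step map from Gaussian noise to a bounded action, standing in
    # for the consistency model (a trained network in the paper).
    return np.tanh(W @ np.concatenate([state, eps]))

def q_value(action, target=0.5):
    # Toy critic: a known Q-function peaked at `target`.
    return -np.sum((action - target) ** 2)

def improve(W, state, eps, steps=500, lr=0.05):
    # Policy improvement: ascend Q through the one-step policy
    # (numerical gradients for brevity; in practice one backpropagates).
    h = 1e-5
    for _ in range(steps):
        base = q_value(consistency_policy(W, state, eps))
        grad = np.zeros_like(W)
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                Wp = W.copy()
                Wp[i, j] += h
                grad[i, j] = (q_value(consistency_policy(Wp, state, eps)) - base) / h
        W = W + lr * grad
    return W

rng = np.random.default_rng(0)
state, eps = rng.standard_normal(3), rng.standard_normal(2)
W = improve(0.01 * rng.standard_normal((2, 5)), state, eps)
action = consistency_policy(W, state, eps)  # single call: noise -> action
```

The point of the sketch is the inference cost: the action comes from a single forward call rather than a long chain of reverse-diffusion steps, which is where the reported speedup over Diffusion-QL comes from.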