Neural Nearest Neighbors Networks
Non-local methods exploiting the self-similarity of natural signals have been
well studied, for example in image analysis and restoration. Existing
approaches, however, rely on k-nearest neighbors (KNN) matching in a fixed
feature space. The main hurdle in optimizing this feature space w.r.t.
application performance is the non-differentiability of the KNN selection rule.
To overcome this, we propose a continuous deterministic relaxation of KNN
selection that maintains differentiability w.r.t. pairwise distances, but
retains the original KNN as the limit of a temperature parameter approaching
zero. To exploit our relaxation, we propose the neural nearest neighbors block
(N3 block), a novel non-local processing layer that leverages the principle of
self-similarity and can be used as a building block in modern neural network
architectures. We show its effectiveness for the set reasoning task of
correspondence classification as well as for image restoration, including image
denoising and single image super-resolution, where we outperform strong
convolutional neural network (CNN) baselines and recent non-local models that
rely on KNN selection in hand-chosen feature spaces.
Comment: to appear at NIPS*2018, code available at
https://github.com/visinf/n3net
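The core of the relaxation can be made concrete for a single neighbour: replace the hard argmin with a softmax over negative pairwise distances scaled by a temperature. A minimal NumPy sketch (illustrative only; `soft_nearest_neighbour` is our name, not the paper's, and the paper extends this to k neighbours via a continuous relaxation of sampling without replacement):

```python
import numpy as np

def soft_nearest_neighbour(dists, temperature):
    """Continuous relaxation of 1-nearest-neighbour selection.

    Returns softmax weights over candidates; as temperature -> 0 the
    weights approach the one-hot indicator of the true nearest
    neighbour, while remaining differentiable w.r.t. the distances.
    """
    logits = -np.asarray(dists, dtype=float) / temperature
    logits -= logits.max()          # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()

dists = np.array([3.0, 0.5, 2.0])
for t in (5.0, 1.0, 0.05):
    print(t, soft_nearest_neighbour(dists, t).round(3))
# as t shrinks, the weight vector concentrates on index 1 (the argmin)
```

Because the weights stay differentiable in the pairwise distances, the feature space in which the distances are computed can be trained end-to-end, which is exactly the hurdle the abstract identifies.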
Manifold Graph Signal Restoration using Gradient Graph Laplacian Regularizer
In the graph signal processing (GSP) literature, graph Laplacian regularizer
(GLR) was used for signal restoration to promote piecewise smooth / constant
reconstruction with respect to an underlying graph. However, for signals slowly
varying across graph kernels, GLR suffers from an undesirable "staircase"
effect. In this paper, focusing on manifold graphs -- collections of uniform
discrete samples on low-dimensional continuous manifolds -- we generalize GLR
to gradient graph Laplacian regularizer (GGLR) that promotes planar / piecewise
planar (PWP) signal reconstruction. Specifically, for a graph endowed with
sampling coordinates (e.g., 2D images, 3D point clouds), we first define a
gradient operator, using which we construct a gradient graph for nodes'
gradients in sampling manifold space. This maps to a gradient-induced nodal
graph (GNG) and a positive semi-definite (PSD) Laplacian matrix with planar
signals as the 0 frequencies. For manifold graphs without explicit sampling
coordinates, we propose a graph embedding method to obtain node coordinates via
fast eigenvector computation. We derive the mean-square-error-minimizing
weight parameter for GGLR efficiently, trading off bias and variance of the
signal estimate. Experimental results show that GGLR outperformed previous
graph signal priors like GLR and graph total variation (GTV) in a range of
graph signal restoration tasks.
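As a point of reference for what GLR itself does, here is a minimal NumPy sketch of GLR-based denoising on a path graph; the graph, test signal, and weight mu are invented for illustration, and GGLR would replace L with the gradient-induced nodal graph Laplacian described above:

```python
import numpy as np

# Graph-Laplacian-regularized denoising on a 6-node path graph:
#   x* = argmin_x ||x - y||^2 + mu * x^T L x   =>   x* = (I + mu L)^{-1} y
n = 6
W = np.zeros((n, n))
for i in range(n - 1):            # unit-weight path graph
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W    # combinatorial graph Laplacian (PSD)

rng = np.random.default_rng(0)
clean = np.array([0., 0., 0., 1., 1., 1.])    # piecewise-constant signal
y = clean + 0.3 * rng.standard_normal(n)      # noisy observation
mu = 2.0
x_hat = np.linalg.solve(np.eye(n) + mu * L, y)
```

Constant signals lie in the nullspace of L, which is why GLR favours piecewise-constant reconstructions and exhibits the "staircase" effect on slowly varying signals; GGLR instead places planar signals in the nullspace of its Laplacian.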
Adaptive Optimized Discriminative Learning based Image Deblurring using Deep CNN
Image degradation is a major problem in many image processing applications. Blurring degrades image quality and reduces the effective bandwidth. Blur in an image arises from variations in atmospheric turbulence, focal length, camera settings, etc. Common blur types include Gaussian blur, motion blur, and out-of-focus blur. Noise on top of blur further corrupts the captured image. Many techniques have evolved to deblur degraded images. The leading approaches are based either on discriminative learning models or on optimization models, each with its own advantages and disadvantages: discriminative learning is fast but restricted to a specific task, whereas optimization models are flexible but consume more time. Suitably integrating optimization models with discriminative learning yields effective image restoration. In this paper, a set of effective and fast convolutional neural networks (CNNs) is employed to deblur Gaussian, motion, and out-of-focus blurred images, integrated with optimization models to further suppress noise effects. The proposed methods work efficiently for low-level vision applications.
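The degradation model underlying all of these blur types is y = k * x + n. A hedged NumPy sketch of the Gaussian case, with a classical Tikhonov/Wiener-style frequency-domain restorer standing in for the optimization side (the kernel size, sigma, and lambda are illustrative choices, not the paper's, which uses CNNs for this step):

```python
import numpy as np

def gaussian_kernel(size=9, sigma=1.5):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()             # normalize to unit DC gain

def psf2otf(kernel, shape):
    # zero-pad the point-spread function to image size and centre it at
    # the origin so FFT-based convolution is not spatially shifted
    pad = np.zeros(shape)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

# degradation model: y = k * x (noise-free here for clarity)
x = np.zeros((32, 32)); x[12:20, 12:20] = 1.0
K = psf2otf(gaussian_kernel(), x.shape)
y = np.real(np.fft.ifft2(K * np.fft.fft2(x)))

# classical optimization-based restoration:  X = conj(K) Y / (|K|^2 + lam)
lam = 1e-3
x_hat = np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(y) / (np.abs(K) ** 2 + lam)))
```

The regularizer lam here is a fixed prior weight; the hybrid methods the abstract describes replace this hand-set term with a learned discriminative component.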
Applications and Advances in Similarity-based Machine Learning
Similarity-based machine learning methods differ from traditional machine learning methods in that they also use pairwise similarity relations between objects to infer the labels of unlabeled objects. A recent comparative study for classification problems by Baumann et al. [2019] demonstrated that similarity-based techniques have superior performance and robustness when compared to well-established machine learning techniques. Similarity-based machine learning methods benefit from two advantages that could explain their superior performance: they can make use of the pairwise relations between unlabeled objects, and they are robust due to the transitive property of pairwise similarities. A challenge for similarity-based machine learning methods on large datasets is that the number of pairwise similarities grows quadratically with the size of the dataset. For large datasets, it thus becomes practically impossible to compute all possible pairwise similarities. In 2016, Hochbaum and Baumann proposed the technique of sparse computation to address this growth by computing only those pairwise similarities that are relevant. Their proposed implementation of sparse computation is still difficult to scale to millions of objects. This dissertation focuses on advancing the practical implementations of sparse computation to larger datasets and on two applications for which similarity-based machine learning was particularly effective. The applications studied here are cell identification in calcium-imaging movies and detecting aberrant linking behavior in directed networks. For sparse computation, we present faster geometric algorithms and a technique, named sparse-reduced computation, that combines sparse computation with compression.
The geometric algorithms compute the exact same output as the original implementation of sparse computation, but identify the relevant pairwise similarities faster by using the concept of data shifting for identifying objects in the same or neighboring blocks. Empirical results on datasets with up to 10 million objects show a significant reduction in running time. Sparse-reduced computation combines sparse computation with a technique for compressing highly similar or identical objects, enabling the use of similarity-based machine learning on massively large datasets. The computational results demonstrate that sparse-reduced computation provides a significant reduction in running time with a minute loss in accuracy.
A major problem facing neuroscientists today is cell identification in calcium-imaging movies. These movies are in-vivo recordings of thousands of neurons at cellular resolution. There is a great need for automated approaches to extract the activity of single neurons from these movies since manual post-processing takes tens of hours per dataset. We present the HNCcorr algorithm for cell identification in calcium-imaging movies. The name HNCcorr is derived from its use of the similarity-based Hochbaum's Normalized Cut (HNC) model with pairwise similarities derived from correlation. In HNCcorr, the task of cell detection is approached as a clustering problem. HNCcorr utilizes HNC to detect cells in these movies as coherent clusters of pixels that are highly distinct from the remaining pixels. HNCcorr guarantees, unlike existing methodologies for cell identification, a globally optimal solution to the underlying optimization problem. Of independent interest is a novel method, named similarity-squared, that we devised for measuring similarity between pixels.
We provide an experimental study and demonstrate that HNCcorr is a top performer on the Neurofinder cell identification benchmark and that it improves over algorithms based on matrix factorization.
The second application is detecting aberrant agents, such as fake news sources or spam websites, based on their link behavior in networks. Across contexts, a distinguishing characteristic between normal and aberrant agents is that normal agents rarely link to aberrant ones. We refer to this phenomenon as aberrant linking behavior. We present a Markov Random Field (MRF) formulation, with links as the pairwise similarities, that detects aberrant agents based on aberrant linking behavior and any prior information (if given). This MRF formulation is solved optimally and in polynomial time. We compare the optimal solution of the MRF formulation to well-known algorithms based on random walks. In our empirical experiments with twenty-three different datasets, the MRF method outperforms the other detection algorithms. This work represents the first use of optimization methods for detecting aberrant agents, as well as the first time that MRF is applied to directed graphs.
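The blocking idea behind sparse computation can be sketched in a few lines: overlay a grid on a low-dimensional projection of the data and evaluate similarities only for pairs of objects falling in the same or adjacent blocks. A hedged sketch (2-D inputs assumed for simplicity; `sparse_pairs` and `resolution` are our names, and the dissertation's method first projects high-dimensional data and uses faster geometric enumeration than this toy version):

```python
import numpy as np
from itertools import combinations, product

def sparse_pairs(points, resolution):
    """Restrict pairwise-similarity evaluation to objects in the same
    or neighbouring blocks of a resolution x resolution grid."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    cells = np.floor((pts - lo) / (hi - lo + 1e-12) * resolution).astype(int)
    cells = np.minimum(cells, resolution - 1)      # clamp the max point into the grid
    buckets = {}
    for idx, c in enumerate(map(tuple, cells)):
        buckets.setdefault(c, []).append(idx)
    pairs = set()
    for (cx, cy), members in buckets.items():
        pairs.update(combinations(sorted(members), 2))   # same-block pairs
        for dx, dy in product((-1, 0, 1), repeat=2):     # neighbouring-block pairs
            if (dx, dy) == (0, 0):
                continue
            for j in buckets.get((cx + dx, cy + dy), []):
                for i in members:
                    if i < j:
                        pairs.add((i, j))
    return pairs
```

For well-separated data the returned pair set is far smaller than the n(n-1)/2 all-pairs set, which is the quadratic growth the dissertation sets out to avoid.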
DiffusionMat: Alpha Matting as Sequential Refinement Learning
In this paper, we introduce DiffusionMat, a novel image matting framework
that employs a diffusion model for the transition from coarse to refined alpha
mattes. Diverging from conventional methods that utilize trimaps merely as
loose guidance for alpha matte prediction, our approach treats image matting as
a sequential refinement learning process. This process begins with the addition
of noise to trimaps and iteratively denoises them using a pre-trained diffusion
model, which incrementally guides the prediction towards a clean alpha matte.
The key innovation of our framework is a correction module that adjusts the
output at each denoising step, ensuring that the final result is consistent
with the input image's structures. We also introduce the Alpha Reliability
Propagation, a novel technique designed to maximize the utility of available
guidance by selectively enhancing the trimap regions with confident alpha
information, thus simplifying the correction task. To train the correction
module, we devise specialized loss functions that target the accuracy of the
alpha matte's edges and the consistency of its opaque and transparent regions.
We evaluate our model across several image matting benchmarks, and the results
indicate that DiffusionMat consistently outperforms existing methods. Project
page at https://cnnlstm.github.io/DiffusionMa
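Structurally, the refinement described above alternates a denoising step with a correction step. A schematic sketch with toy stand-ins (`refine_alpha`, `pin_known`, and the toy denoiser are our illustrations, not the paper's networks; the real denoiser is a pre-trained diffusion model and the correction module is learned):

```python
import numpy as np

def refine_alpha(trimap, denoise_step, correct, num_steps=10, noise_scale=1.0, seed=0):
    """DiffusionMat-style loop: noise the trimap, then alternately denoise
    and correct, keeping confident trimap regions fixed throughout."""
    rng = np.random.default_rng(seed)
    x = trimap + noise_scale * rng.standard_normal(trimap.shape)
    for t in range(num_steps, 0, -1):
        x = denoise_step(x, t)   # pre-trained diffusion model in the paper
        x = correct(x, trimap)   # correction module keeps result image-consistent
    return np.clip(x, 0.0, 1.0)

# toy stand-ins: pull values toward {0, 1}; re-pin confident trimap pixels
denoise = lambda x, t: 0.5 * x + 0.5 * np.round(np.clip(x, 0, 1))
def pin_known(x, trimap):
    known = (trimap == 0.0) | (trimap == 1.0)   # Alpha Reliability Propagation idea:
    return np.where(known, trimap, x)           # trust definite fg/bg, refine the rest
```

The point of the sketch is the control flow, not the components: uncertainty is concentrated in the unknown trimap band, and each denoising step only has to refine that band.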
Survey analysis for optimization algorithms applied to electroencephalogram
This paper presents a survey of optimization approaches that analyze and classify electroencephalogram (EEG) signals. The automatic analysis of EEG presents a significant challenge due to the high-dimensional data volume. Optimization algorithms seek to achieve better accuracy by selecting practical features and removing unwanted ones. Forty-seven reputable research papers are reviewed in this work, emphasizing the developed and executed techniques, divided into seven groups based on the applied optimization algorithm (particle swarm optimization (PSO), ant colony optimization (ACO), artificial bee colony (ABC), grey wolf optimizer (GWO), Bat, Firefly, and other optimizer approaches). The main measures used for assessment are accuracy, precision, recall, and F1-score. Several datasets have been utilized in the included papers, such as the EEG Bonn University and CHB-MIT datasets, an electrocardiography (ECG) dataset, and others. The results show that the PSO and GWO algorithms achieved the highest accuracy rates, around 99%, compared with other techniques.
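Feature selection with PSO, one of the survey's best performers, is typically run in its binary form. A minimal sketch on a toy objective (the objective, dimensions, and hyper-parameters are invented for illustration; in the surveyed papers the fitness would be a classifier's accuracy on the selected EEG features):

```python
import numpy as np

def binary_pso(fitness, dim, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal binary PSO: each particle is a 0/1 mask over features;
    velocities are squashed by a sigmoid into bit-probability values."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, size=(n_particles, dim)).astype(float)
    vel = np.zeros((n_particles, dim))
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))                 # sigmoid transfer function
        pos = (rng.random((n_particles, dim)) < prob).astype(float)
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest, pbest_fit.max()

# toy objective: reward selecting the first 3 (informative) of 10 features,
# penalise every extra feature -- a stand-in for classifier accuracy
informative = np.zeros(10); informative[:3] = 1
score = lambda mask: (mask * informative).sum() - 0.2 * (mask * (1 - informative)).sum()
best_mask, best_score = binary_pso(score, dim=10)
```

The same loop applies to the other swarm optimizers in the survey by swapping the velocity update; the shared idea is searching the 2^dim space of feature masks without enumerating it.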