Pushing the Boundaries of Boundary Detection using Deep Learning
In this work we show that adapting Deep Convolutional Neural Network training
to the task of boundary detection can result in substantial improvements over
the current state-of-the-art in boundary detection.
Our first contribution is to combine a carefully designed loss for boundary
detection, a multi-resolution architecture, and training with external data
to improve on the detection accuracy of the current state of
the art. When measured on the standard Berkeley Segmentation Dataset, we
improve the optimal dataset scale F-measure from 0.780 to 0.808, while human
performance is at 0.803. We further improve performance to 0.813 by combining
deep learning with grouping, integrating the Normalized Cuts technique within a
deep network.
We also examine the potential of our boundary detector in conjunction with
the task of semantic segmentation and demonstrate clear improvements over
state-of-the-art systems. Our detector is fully integrated in the popular Caffe
framework and processes a 320x420 image in less than a second.
Comment: The previous version reported large improvements w.r.t. the LPO
region proposal baseline, which turned out to be due to a wrong computation
for the baseline. The improvements are currently less important, and are
omitted. We are sorry if the reported results caused any confusion. We have
also integrated reviewer feedback regarding human performance on the BSD
benchmark.
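The ODS (optimal dataset scale) F-measure quoted above is the harmonic mean of boundary precision and recall, evaluated at the single threshold that works best over the whole dataset. As a minimal illustration of the metric itself (not code from the paper, and with made-up precision/recall pairs):

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (the F-score used in BSDS-style
    boundary benchmarks)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# ODS picks the one threshold maximizing F across the dataset; here we just
# evaluate a few hypothetical (precision, recall) operating points.
candidates = [(0.85, 0.72), (0.82, 0.79), (0.78, 0.83)]
best = max(f_measure(p, r) for p, r in candidates)
```

The numbers in the abstract (0.780, 0.808, 0.803) are values of exactly this quantity at each method's best dataset-wide threshold.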
Semantic Photo Manipulation with a Generative Image Prior
Despite the recent success of GANs in synthesizing images conditioned on
inputs such as a user sketch, text, or semantic labels, manipulating the
high-level attributes of an existing natural photograph with GANs is
challenging for two reasons. First, it is hard for GANs to precisely reproduce
an input image. Second, after manipulation, the newly synthesized pixels often
do not fit the original image. In this paper, we address these issues by
adapting the image prior learned by GANs to image statistics of an individual
image. Our method can accurately reconstruct the input image and synthesize new
content, consistent with the appearance of the input image. We demonstrate our
interactive system on several semantic image editing tasks, including
synthesizing new objects consistent with background, removing unwanted objects,
and changing the appearance of an object. Quantitative and qualitative
comparisons against several existing methods demonstrate the effectiveness of
our method.
Comment: SIGGRAPH 201
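The two-stage idea described above (first invert the GAN to reconstruct the photo, then adapt the generator's own weights, i.e. its image prior, to that individual image) can be sketched on a toy linear "generator". Everything below, the linear map, the sizes, and the learning rates, is a made-up stand-in for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator": a fixed linear map from latent z to image x (a stand-in
# for a pretrained GAN generator).
W = rng.normal(size=(16, 4))      # hypothetical generator weights
x_target = rng.normal(size=16)    # the photograph to reconstruct

# Stage 1: invert the generator -- find z minimizing ||W z - x_target||^2
# by gradient descent.
z = np.zeros(4)
for _ in range(500):
    z -= 0.01 * 2 * W.T @ (W @ z - x_target)

residual_before = np.linalg.norm(W @ z - x_target)

# Stage 2 (the paper's key idea, sketched): adapt the generator itself to the
# individual image so the remaining reconstruction error can be absorbed.
W_adapted = W.copy()
for _ in range(500):
    W_adapted -= 0.01 * 2 * np.outer(W_adapted @ z - x_target, z)

residual_after = np.linalg.norm(W_adapted @ z - x_target)
```

With a low-dimensional latent, stage 1 alone cannot reproduce an arbitrary target exactly (the first difficulty named in the abstract); letting the generator weights move closes that gap, which mirrors why adapting the prior to one image helps.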
VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera
We present the first real-time method to capture the full global 3D skeletal
pose of a human in a stable, temporally consistent manner using a single RGB
camera. Our method combines a new convolutional neural network (CNN) based pose
regressor with kinematic skeleton fitting. Our novel fully-convolutional pose
formulation regresses 2D and 3D joint positions jointly in real time and does
not require tightly cropped input frames. A real-time kinematic skeleton
fitting method uses the CNN output to yield temporally stable 3D global pose
reconstructions on the basis of a coherent kinematic skeleton. This makes our
approach the first monocular RGB method usable in real-time applications such
as 3D character control---thus far, the only monocular methods for such
applications employed specialized RGB-D cameras. Our method's accuracy is
quantitatively on par with the best offline 3D monocular RGB pose estimation
methods. Our results are qualitatively comparable to, and sometimes better
than, results from monocular RGB-D approaches, such as the Kinect. However, we
show that our approach is more broadly applicable than RGB-D solutions, i.e. it
works for outdoor scenes, community videos, and low quality commodity RGB
cameras.
Comment: Accepted to SIGGRAPH 201
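The pipeline described above couples per-frame CNN joint predictions with kinematic skeleton fitting to get temporally stable poses with a consistent skeleton. A toy stand-in for that second stage (not VNect's actual solver): exponential temporal smoothing of the per-frame 3D joint predictions, followed by re-imposing fixed bone lengths along the kinematic chain:

```python
import numpy as np

def smooth_and_fit(raw_joints, bone_parent, bone_len, alpha=0.5):
    """Toy kinematic fitting: temporally smooth noisy per-frame 3D joint
    predictions (exponential filter) and re-impose fixed bone lengths.
    raw_joints: (T, J, 3) per-frame predictions (stand-in for CNN output).
    bone_parent: parent joint index per joint (-1 for the root).
    bone_len: target length of each joint's bone to its parent.
    Assumes joints are ordered so every parent precedes its children."""
    T, J, _ = raw_joints.shape
    out = np.empty_like(raw_joints)
    state = raw_joints[0].copy()
    for t in range(T):
        # temporal smoothing for frame-to-frame stability
        state = alpha * raw_joints[t] + (1 - alpha) * state
        fitted = state.copy()
        for j in range(J):
            p = bone_parent[j]
            if p < 0:
                continue  # root joint: no bone to renormalize
            d = fitted[j] - fitted[p]
            n = np.linalg.norm(d)
            if n > 0:
                # keep the bone direction, enforce the fixed bone length
                fitted[j] = fitted[p] + d / n * bone_len[j]
        out[t] = fitted
    return out
```

The real method solves a richer fitting energy (2D reprojection, 3D CNN evidence, temporal terms) over joint angles, but the output property is the same: every frame respects one coherent skeleton.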
Variational methods and their applications to computer vision
Many computer vision applications such as image segmentation can be formulated in a ''variational'' way as energy minimization problems. Unfortunately, minimizing these energies is usually difficult: they generally involve non-convex functions in spaces with thousands of dimensions, and the associated combinatorial problems are often NP-hard. Furthermore, they are ill-posed inverse problems and therefore extremely sensitive to perturbations (e.g. noise). For this reason, in order to compute a physically reliable approximation from given noisy data, it is necessary to incorporate into the mathematical model appropriate regularizations, which require complex computations.
The main aim of this work is to describe variational segmentation methods that are particularly effective for curvilinear structures. Due to their complex geometry, classical regularization techniques cannot be adopted, because they lead to the loss of most low-contrast details. In contrast, the proposed method not only better preserves curvilinear structures, but also reconnects parts that may have been disconnected by noise. Moreover, it can easily be extended to graphs and successfully applied to different types of data, such as medical imagery (e.g. vessels, heart coronaries), material samples (e.g. concrete), and satellite signals (e.g. streets, rivers). In particular, we will show results and performance figures for an implementation targeting a new generation of High Performance Computing (HPC) architectures in which different types of coprocessors cooperate. The dataset involved consists of approximately 200 images of cracks, captured in three different tunnels by a robotic machine designed for the European ROBO-SPECT project.
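The data-term-plus-regularization structure of such variational models can be illustrated with a deliberately simple convex stand-in (not the thesis's curvilinear-structure energy): denoising a 1-D signal by gradient descent on a fidelity term plus a quadratic smoothness term. All names and parameter values here are illustrative:

```python
import numpy as np

def denoise(f, lam=2.0, step=0.05, iters=500):
    """Minimize E(u) = ||u - f||^2 + lam * ||D u||^2 by gradient descent,
    where D is the forward-difference operator. This quadratic energy is a
    convex stand-in for the harder non-convex energies discussed above."""
    u = f.copy()
    for _ in range(iters):
        g = 2 * (u - f)            # gradient of the data-fidelity term
        d = np.diff(u)             # forward differences D u
        # gradient of the smoothness term: 2 * lam * D^T D u
        g[:-1] -= 2 * lam * d
        g[1:] += 2 * lam * d
        u -= step * g
    return u
```

The regularizer trades fidelity to the noisy data against smoothness, which is exactly the role regularization plays in stabilizing the ill-posed inverse problems mentioned above; the thesis's contribution is choosing regularizers that do this without erasing thin, low-contrast curvilinear detail.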
Pixel Adapter: A Graph-Based Post-Processing Approach for Scene Text Image Super-Resolution
Current scene text image super-resolution approaches primarily focus on
extracting robust features, acquiring text information, and complex training
strategies to generate super-resolution images. However, the upsampling module,
which is crucial in the process of converting low-resolution images to
high-resolution ones, has received little attention in existing works. To
address this, we propose the Pixel Adapter Module (PAM), based on graph
attention, which mitigates the pixel distortion caused by upsampling. The PAM effectively
captures local structural information by allowing each pixel to interact with
its neighbors and update features. Unlike previous graph attention mechanisms,
our approach achieves 2-3 orders of magnitude improvement in efficiency and
memory utilization by eliminating the dependency on sparse adjacency matrices
and introducing a sliding window approach for efficient parallel computation.
Additionally, we introduce the MLP-based Sequential Residual Block (MSRB) for
robust feature extraction from text images, and a Local Contour Awareness
loss to enhance the model's perception of details.
Comprehensive experiments on TextZoom demonstrate that our proposed method
generates high-quality super-resolution images, surpassing existing methods in
recognition accuracy. For single-stage and multi-stage strategies, we achieved
improvements of 0.7% and 2.6%, respectively, increasing performance from
52.6% and 53.7% to 53.3% and 56.3%. The code is available at
https://github.com/wenyu1009/RTSRN
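The sliding-window idea above, letting each pixel attend only to a local neighborhood gathered by array slicing rather than through a sparse adjacency matrix, can be sketched in a 1-D toy form. The shapes and the dot-product scoring below are illustrative assumptions, not the released RTSRN code:

```python
import numpy as np

def sliding_window_attention(feat, radius=1):
    """Each position attends only to its (2*radius + 1)-neighborhood: a toy
    1-D analogue of window-restricted graph attention with no adjacency matrix.
    feat: (N, C) per-pixel features; returns (N, C) updated features."""
    N, C = feat.shape
    out = np.empty_like(feat)
    for i in range(N):
        lo, hi = max(0, i - radius), min(N, i + radius + 1)
        window = feat[lo:hi]                    # neighbors gathered by slicing
        scores = window @ feat[i] / np.sqrt(C)  # dot-product attention scores
        w = np.exp(scores - scores.max())       # numerically stable softmax
        w /= w.sum()
        out[i] = w @ window                     # weighted neighbor aggregation
    return out
```

Because the neighbor set is implicit in the slice, no adjacency structure is ever materialized, which is the source of the memory savings the abstract claims; a real implementation would batch the windows for parallelism instead of looping.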