LW-ISP: A Lightweight Model with ISP and Deep Learning
Deep learning (DL)-based methods for low-level vision tasks have many advantages
over the traditional camera pipeline in terms of hardware requirements, error
accumulation, and imaging quality. Recently, attempts to replace the image
signal processing (ISP) pipeline with deep learning have appeared one after
another; however, real-world deployment remains a long way off. In this paper,
we show that a learning-based method can achieve real-time, high-performance
processing in the ISP pipeline. We propose LW-ISP, a novel architecture
designed to implicitly learn the image mapping from RAW data to RGB image.
Building on the U-Net architecture, we propose a fine-grained attention module
and a plug-and-play upsampling block suitable for low-level tasks. In particular, we
design a heterogeneous distillation algorithm to distill the implicit features
and reconstruction information of the clean image, so as to guide the learning
of the student model. Our experiments demonstrate that LW-ISP achieves a
0.38 dB improvement in PSNR over the previous best method, while reducing
model parameters and computation by factors of 23 and 81, respectively.
Inference efficiency is accelerated by at least 15 times. Without
bells and whistles, LW-ISP also achieves quite competitive results on ISP
subtasks including image denoising and enhancement.
Comment: 16 pages, accepted as a conference paper at BMVC 202
Asymmetric Actor Critic for Image-Based Robot Learning
Deep reinforcement learning (RL) has proven a powerful technique in many
sequential decision making domains. However, robotics poses many challenges for
RL; most notably, training on a physical system can be expensive and dangerous,
which has sparked significant interest in learning control policies using a
physics simulator. While several recent works have shown promising results in
transferring policies trained in simulation to the real world, they often do
not fully utilize the advantage of working with a simulator. In this work, we
exploit the full state observability in the simulator to train better policies
which take as input only partial observations (RGBD images). We do this by
employing an actor-critic training algorithm in which the critic is trained on
full states while the actor (or policy) gets rendered images as input. We show
experimentally on a range of simulated tasks that using these asymmetric inputs
significantly improves performance. Finally, we combine this method with domain
randomization and show real robot experiments for several tasks like picking,
pushing, and moving a block. We achieve this simulation to real world transfer
without training on any real world data.
Comment: Videos of experiments can be found at http://www.goo.gl/b57WT
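The asymmetry described above can be sketched with two function approximators of mismatched input spaces: the critic reads the simulator's full low-dimensional state, while the actor reads only the rendered RGBD observation. This is a minimal sketch under assumed dimensions; the linear/tanh parameterizations and all names are illustrative, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: the simulator exposes a 12-D full state to the
# critic, while the actor only sees a rendered 16x16 RGBD image.
STATE_DIM, IMG_SHAPE, ACTION_DIM = 12, (16, 16, 4), 3

W_critic = rng.normal(size=(STATE_DIM, 1)) * 0.1
W_actor = rng.normal(size=(np.prod(IMG_SHAPE), ACTION_DIM)) * 0.01

def critic_value(full_state):
    # Critic consumes the privileged full state (positions, velocities, ...),
    # which is only available inside the simulator.
    return float(full_state @ W_critic)

def actor_action(rgbd_image):
    # Actor consumes only the partial observation that would also be
    # available on the real robot, so the policy transfers without full state.
    return np.tanh(rgbd_image.reshape(-1) @ W_actor)
```

During training, temporal-difference targets for the critic use `critic_value` on full states, while policy gradients flow through `actor_action` on images; at deployment only the actor is needed.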
Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning
Reinforcement learning (RL) algorithms for real-world robotic applications
need a data-efficient learning process and the ability to handle complex,
unknown dynamical systems. These requirements are handled well by model-based
and model-free RL approaches, respectively. In this work, we aim to combine the
advantages of these two types of methods in a principled manner. By focusing on
time-varying linear-Gaussian policies, we enable a model-based algorithm based
on the linear quadratic regulator (LQR) that can be integrated into the
model-free framework of path integral policy improvement (PI2). We can further
combine our method with guided policy search (GPS) to train arbitrary
parameterized policies such as deep neural networks. Our simulation and
real-world experiments demonstrate that this method can solve challenging
manipulation tasks with comparable or better performance than model-free
methods while maintaining the sample efficiency of model-based methods. A video
presenting our results is available at
https://sites.google.com/site/icml17pilqr
Comment: Paper accepted to the International Conference on Machine Learning
(ICML) 201
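The model-free side of the combination above, path integral policy improvement (PI2), updates a policy by weighting sampled trajectories with an exponentiated negative cost. A minimal sketch of that weighting step follows; the function name and temperature parameter are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def pi2_weights(costs, temperature=1.0):
    """Hypothetical sketch of the PI2 trajectory weighting: a soft-max
    over negative costs, so cheaper trajectories get larger weight."""
    costs = np.asarray(costs, dtype=float)
    # Shift by the minimum cost for numerical stability before exponentiating.
    shifted = -(costs - costs.min()) / temperature
    w = np.exp(shifted)
    return w / w.sum()
```

The policy parameters are then updated toward the weighted average of the sampled perturbations, which is where a model-based LQR-style update can be blended in.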