Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light
One solution for depth imaging of a moving scene is to project a static
pattern onto the object and use just a single image for reconstruction. However,
if the motion of the object is too fast with respect to the exposure time of
the image sensor, patterns on the captured image are blurred and reconstruction
fails. In this paper, we impose multiple projection patterns into each single
captured image to realize temporal super resolution of the depth image
sequences. With our method, multiple patterns are projected onto the object
with higher fps than possible with a camera. In this case, the observed pattern
varies depending on the depth and motion of the object, so we can extract
temporal information of the scene from each single image. The decoding process
is realized using a learning-based approach where no geometric calibration is
needed. Experiments confirm the effectiveness of our method where sequential
shapes are reconstructed from a single image. Both quantitative evaluations and
comparisons with recent techniques were also conducted.
Comment: 9 pages, published at the International Conference on Computer Vision
(ICCV 2017)
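The encoding idea above can be illustrated with a toy 1-D simulation (a hedged sketch, not the paper's code; the pattern values, shift amounts, and the `observe` helper are all assumptions): the sensor integrates several high-fps patterns within one exposure, and each pattern's shift depends on the object's depth at that sub-frame instant, so the single integrated image carries temporal depth information.

```python
import numpy as np

# Hypothetical 1-D model of intra-frame motion encoding: a projector shows
# K distinct patterns within one camera exposure, and the sensor integrates
# them. Each pattern's shift on the sensor depends on the object's depth at
# that sub-frame time.

rng = np.random.default_rng(0)
K, W = 4, 64
patterns = rng.random((K, W))          # K high-fps projected patterns

def observe(disparities):
    """Integrate K patterns, each shifted by its sub-frame disparity."""
    return sum(np.roll(patterns[k], disparities[k]) for k in range(K)) / K

static = observe([3, 3, 3, 3])         # object at rest during the exposure
moving = observe([3, 4, 5, 6])         # object moving during the exposure

# The two single-image observations differ, so per-sub-frame depth leaves
# a recoverable trace in one captured frame.
print(np.abs(static - moving).max() > 0)
```

A learning-based decoder, as in the paper, would then map one such mixed observation back to the sequence of sub-frame depths.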
Diverse Roles of JNK and MKK Pathways in the Brain
The c-Jun NH2-terminal protein kinase (JNK) plays important roles in a broad range of physiological processes. JNK is controlled by two upstream regulators, mitogen-activated protein kinase kinase (MKK) 4 and MKK7, which are activated by various MAPKKKs. Studies employing knockout mice have demonstrated that the JNK signaling pathway is involved in diverse phenomena in the brain, regulating brain development and maintenance as well as animal metabolism and behavior. Furthermore, examination of single or combined knockout mice of Jnk1, Jnk2, and Jnk3 has revealed both functional differences and redundancy among JNK1, JNK2, and JNK3. Phenotypic differences between knockouts of MKK4 and MKK7 have also been observed, suggesting that the JNK signaling pathway in the brain has a complex nature and is intricately regulated. This paper summarizes the functional properties of the major JNK signaling components in the developing and adult brain.
TIDE: Temporally Incremental Disparity Estimation via Pattern Flow in Structured Light System
We introduce the Temporally Incremental Disparity Estimation Network (TIDE-Net),
a learning-based technique for disparity computation in mono-camera structured
light systems. In our hardware setting, a static pattern is projected onto a
dynamic scene and captured by a monocular camera. Different from most former
disparity estimation methods that operate in a frame-wise manner, our network
acquires disparity maps in a temporally incremental way. Specifically, we
exploit the deformation of projected patterns (named pattern flow) on captured
image sequences, to model the temporal information. Notably, this newly
proposed pattern flow formulation reflects the disparity changes along the
epipolar line, which is a special form of optical flow. Tailored for pattern
flow, the TIDE-Net, a recurrent architecture, is proposed and implemented. For
each incoming frame, our model fuses correlation volumes (from current frame)
and disparity (from former frame) warped by pattern flow. From fused features,
the final stage of TIDE-Net estimates the residual disparity rather than the
full disparity, as conducted by many previous methods. Interestingly, this
design brings clear empirical advantages in terms of efficiency and
generalization ability. Using only synthetic data for training, our extensive
evaluation results (w.r.t. both accuracy and efficiency metrics) show superior
performance over several SOTA models on unseen real data. The code is available
at https://github.com/CodePointer/TIDENet
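The incremental update described above can be sketched in a few lines (an illustrative 1-D toy, not the released TIDE-Net; the helper names and the interpolation-based warp are assumptions): the previous frame's disparity is warped along the epipolar line by the pattern flow, and the network only needs to add a residual on top.

```python
import numpy as np

# Hypothetical 1-D sketch of TIDE-Net's core idea: rather than regressing
# the full disparity for every frame, warp the former frame's disparity by
# the pattern flow and estimate only a residual correction.

def warp_by_pattern_flow(prev_disp, flow):
    """Shift the previous disparity along the epipolar line by the flow."""
    x = np.arange(prev_disp.size, dtype=float)
    # Sample the previous estimate at flow-displaced positions.
    return np.interp(x, x + flow, prev_disp)

def incremental_update(prev_disp, flow, residual):
    """One incremental step: warped previous estimate plus a residual."""
    return warp_by_pattern_flow(prev_disp, flow) + residual

prev = np.linspace(10.0, 12.0, 8)   # disparity from the former frame
flow = np.full(8, 1.0)              # pattern flow: disparity change per pixel
res = np.zeros(8)                   # residual a trained network would predict
curr = incremental_update(prev, flow, res)
```

With a unit flow and zero residual, each pixel simply inherits its left neighbor's previous disparity; in the real system the residual head absorbs whatever the warp cannot explain, which is what makes the per-frame regression problem small.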
Online Adaptive Disparity Estimation for Dynamic Scenes in Structured Light Systems
In recent years, deep neural networks have shown remarkable progress in dense
disparity estimation from dynamic scenes in monocular structured light systems.
However, their performance significantly drops when applied in unseen
environments. To address this issue, self-supervised online adaptation has been
proposed as a solution to bridge this performance gap. Unlike traditional
fine-tuning processes, online adaptation performs test-time optimization to
adapt networks to new domains. Therefore, achieving fast convergence during the
adaptation process is critical for attaining satisfactory accuracy. In this
paper, we propose an unsupervised loss function based on long sequential
inputs. It ensures better gradient directions and faster convergence. Our loss
function is designed using a multi-frame pattern flow, which comprises a set of
sparse trajectories of the projected pattern along the sequence. We estimate
the sparse pseudo ground truth with a confidence mask using a filter-based
method, which guides the online adaptation process. Our proposed framework
significantly improves the online adaptation speed and achieves superior
performance on unseen data.
Comment: Accepted by the 36th IEEE/RSJ International Conference on Intelligent
Robots and Systems, 202
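The confidence-masked loss described in this abstract can be sketched as follows (a minimal illustration; the function name, threshold, and array shapes are assumptions, not the paper's API): sparse pattern-flow trajectories provide pseudo ground truth at a few pixels, and a confidence mask restricts the adaptation loss to the trustworthy ones.

```python
import numpy as np

# Hedged sketch of a confidence-masked unsupervised adaptation loss:
# L1 error against sparse pseudo ground truth, gated by a per-pixel
# confidence mask derived from the multi-frame pattern flow.

def adaptation_loss(pred_disp, pseudo_gt, confidence, tau=0.5):
    """Mean L1 loss over pixels whose pseudo label is confident enough."""
    mask = confidence > tau
    if not mask.any():
        return 0.0          # nothing trustworthy in this frame
    return float(np.abs(pred_disp[mask] - pseudo_gt[mask]).mean())

pred = np.array([5.0, 6.0, 7.0, 8.0])   # network disparity prediction
gt   = np.array([5.5, 6.0, 0.0, 8.0])   # sparse pseudo labels from trajectories
conf = np.array([0.9, 0.8, 0.1, 0.7])   # low confidence at the bad label
loss = adaptation_loss(pred, gt, conf)  # averages |error| over confident pixels
```

Gating the loss this way keeps unreliable pseudo labels from pushing the gradient in the wrong direction, which is what lets test-time optimization converge quickly on unseen scenes.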