6,985 research outputs found
Video Upright Adjustment and Stabilization
Keywords: Upright adjustment, Video stabilization, Camera path

We propose a novel video upright adjustment method that can reliably correct slanted video contents that are often found in casual videos. Our approach combines deep learning and Bayesian inference to estimate accurate rotation angles from video frames. We train a convolutional neural network to obtain initial estimates of the rotation angles of input video frames. The initial estimates from the network are temporally inconsistent and inaccurate. To resolve this, we use Bayesian inference. We analyze estimation errors of the network, and derive an error model. We then use the error model to formulate video upright adjustment as a maximum a posteriori problem where we estimate consistent rotation angles from the initial estimates, while respecting relative rotations between consecutive frames. Finally, we propose a joint approach to video stabilization and upright adjustment, which minimizes information loss caused by separately handling stabilization and upright adjustment. Experimental results show that our video upright adjustment method can effectively correct slanted video contents, and its combination with video stabilization can achieve visually pleasing results from shaky and slanted videos.

I. INTRODUCTION
1.1. Related work
II. ROTATION ESTIMATION NETWORK
III. ERROR ANALYSIS
IV. VIDEO UPRIGHT ADJUSTMENT
4.1. Initial angle estimation
4.2. Robust angle estimation
4.3. Optimization
4.4. Warping
V. JOINT UPRIGHT ADJUSTMENT AND STABILIZATION
5.1. Bundled camera paths for video stabilization
5.2. Joint approach
VI. EXPERIMENTS
VII. CONCLUSION
References
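The maximum a posteriori formulation described in the abstract above can be illustrated with a small sketch. Assuming, as a simplification of the paper's error model, Gaussian noise on both the per-frame angle estimates and the relative rotations, the posterior maximization reduces to a quadratic energy whose minimizer solves a tridiagonal linear system. The function name `smooth_angles`, the weight `lam`, and the degree units are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

def smooth_angles(theta_hat, rel, lam=10.0):
    """MAP-style smoothing of per-frame rotation angles (degrees).

    theta_hat : noisy per-frame angle estimates, length T
    rel       : relative rotations between consecutive frames, length T-1
    lam       : weight of the temporal-consistency term

    Minimizes  sum_t (theta_t - theta_hat_t)^2
             + lam * sum_t (theta_{t+1} - theta_t - rel_t)^2,
    a tridiagonal least-squares problem solved in closed form.
    """
    T = len(theta_hat)
    A = np.eye(T)
    b = np.asarray(theta_hat, dtype=float).copy()
    for t in range(T - 1):
        # the temporal term couples theta_t and theta_{t+1}
        A[t, t] += lam
        A[t, t + 1] -= lam
        A[t + 1, t + 1] += lam
        A[t + 1, t] -= lam
        b[t] -= lam * rel[t]
        b[t + 1] += lam * rel[t]
    return np.linalg.solve(A, b)
```

When the initial estimates already agree with the relative rotations, both energy terms vanish and the input is returned unchanged; otherwise the solution trades per-frame fidelity against temporal consistency as `lam` grows.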
The highD Dataset: A Drone Dataset of Naturalistic Vehicle Trajectories on German Highways for Validation of Highly Automated Driving Systems
Scenario-based testing for the safety validation of highly automated vehicles
is a promising approach that is being examined in research and industry. This
approach heavily relies on data from real-world scenarios to derive the
necessary scenario information for testing. Measurement data should be
collected at a reasonable effort, contain naturalistic behavior of road users
and include all data relevant for a description of the identified scenarios in
sufficient quality. However, the current measurement methods fail to meet at
least one of the requirements. Thus, we propose a novel method to measure data
from an aerial perspective for scenario-based validation fulfilling the
mentioned requirements. Furthermore, we provide a large-scale naturalistic
vehicle trajectory dataset from German highways called highD. We evaluate the
data in terms of quantity, variety and contained scenarios. Our dataset
consists of 16.5 hours of measurements from six locations with 110 000
vehicles, a total driven distance of 45 000 km and 5600 recorded complete lane
changes. The highD dataset is available online at: http://www.highD-dataset.com
Comment: IEEE International Conference on Intelligent Transportation Systems (ITSC) 2018
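As a rough illustration of how scenario statistics such as lane-change counts can be derived from trajectory data, the sketch below detects lane changes from per-frame lane IDs. The `tracks` structure and the per-frame lane-ID representation are hypothetical simplifications, not the actual highD file schema.

```python
def count_lane_changes(lane_ids):
    """Count lane changes in one vehicle's track, given its per-frame
    lane IDs. A change is any frame whose lane ID differs from the
    previous frame's."""
    return sum(1 for a, b in zip(lane_ids, lane_ids[1:]) if a != b)

# Hypothetical tracks: vehicle id -> per-frame lane IDs.
tracks = {
    1: [2, 2, 2, 3, 3, 3],   # one lane change
    2: [1, 1, 1, 1],         # no lane change
    3: [3, 3, 2, 2, 3, 3],   # change and change back: two
}
total = sum(count_lane_changes(t) for t in tracks.values())
```

Aggregating such per-track counts over all recordings is one way a dataset-level figure like "5600 recorded complete lane changes" could be computed.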
Long-Term Visual Object Tracking Benchmark
We propose a new long video dataset (called Track Long and Prosper - TLP) and
benchmark for single object tracking. The dataset consists of 50 HD videos from
real world scenarios, encompassing a duration of over 400 minutes (676K
frames), making it more than 20-fold larger in average duration per sequence
and more than 8-fold larger in total covered duration, as compared to
existing generic datasets for visual tracking. The proposed dataset paves the way
to suitably assess long term tracking performance and train better deep
learning architectures (avoiding/reducing augmentation, which may not reflect
real-world behaviour). We benchmark the dataset on 17 state-of-the-art trackers
and rank them according to tracking accuracy and runtime speed. We further
present thorough qualitative and quantitative evaluation highlighting the
importance of the long-term aspect of tracking. Our most interesting observations
are (a) existing short-sequence benchmarks fail to bring out the inherent
differences between tracking algorithms, which widen while tracking on long
sequences, and (b) the accuracy of trackers drops abruptly on challenging long
sequences, suggesting the need for research efforts in the direction
of long-term tracking.
Comment: ACCV 2018 (Oral)
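Tracking accuracy in such benchmarks is commonly measured as the fraction of frames whose predicted box overlaps the ground truth above an intersection-over-union (IoU) threshold. A minimal sketch of that metric, not the benchmark's exact evaluation code:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(preds, gts, threshold=0.5):
    """Fraction of frames where the predicted box overlaps the ground
    truth with IoU above `threshold`."""
    hits = sum(1 for p, g in zip(preds, gts) if iou(p, g) > threshold)
    return hits / len(gts)
```

Sweeping `threshold` from 0 to 1 and averaging the resulting success rates yields the area-under-curve score often used to rank trackers.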
VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera
We present the first real-time method to capture the full global 3D skeletal
pose of a human in a stable, temporally consistent manner using a single RGB
camera. Our method combines a new convolutional neural network (CNN) based pose
regressor with kinematic skeleton fitting. Our novel fully-convolutional pose
formulation regresses 2D and 3D joint positions jointly in real time and does
not require tightly cropped input frames. A real-time kinematic skeleton
fitting method uses the CNN output to yield temporally stable 3D global pose
reconstructions on the basis of a coherent kinematic skeleton. This makes our
approach the first monocular RGB method usable in real-time applications such
as 3D character control---thus far, the only monocular methods for such
applications employed specialized RGB-D cameras. Our method's accuracy is
quantitatively on par with the best offline 3D monocular RGB pose estimation
methods. Our results are qualitatively comparable to, and sometimes better
than, results from monocular RGB-D approaches, such as the Kinect. However, we
show that our approach is more broadly applicable than RGB-D solutions, i.e. it
works for outdoor scenes, community videos, and low quality commodity RGB
cameras.
Comment: Accepted to SIGGRAPH 2017