Example-based video color grading
In most professional cinema productions, the color palette of the movie is painstakingly adjusted by a team of skilled colorists -- through a process referred to as color grading -- to achieve a certain visual look. The time and expertise required to grade a video make it difficult for amateurs to manipulate the colors of their own video clips. In this work, we present a method that allows a user to transfer the color palette of a model video clip to their own video sequence. We estimate a per-frame color transform that maps the color distributions in the input video sequence to those of the model video clip. Applying this transformation naively leads to artifacts such as bleeding and flickering. Instead, we propose a novel differential-geometry-based scheme that interpolates these transformations in a manner that minimizes their curvature, similar to curvature flows. In addition, we automatically determine a set of keyframes that best represent this interpolated transformation curve and can subsequently be used to manually refine the color grade. We show how our method successfully transfers color palettes between videos for a range of visual styles and a number of input video clips.
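The per-frame color transform described above can be illustrated with a much simpler baseline: per-channel mean/standard-deviation matching. This is a minimal sketch, not the paper's differential-geometry scheme; the function name `match_color_stats` and the plain per-channel statistics are assumptions for illustration, and applying such a transform independently per frame is exactly what causes the flicker the paper's curvature-minimizing interpolation is designed to remove.

```python
import numpy as np

def match_color_stats(frame, model):
    """Per-channel affine transform that matches the mean and
    standard deviation of `frame` to those of `model`. A crude
    stand-in for a per-frame color transform; the paper instead
    estimates richer transforms and interpolates them smoothly
    over time to avoid bleeding and flickering."""
    out = np.empty(frame.shape, dtype=np.float64)
    for c in range(frame.shape[-1]):
        f = frame[..., c].astype(np.float64)
        m = model[..., c].astype(np.float64)
        s = max(f.std(), 1e-8)  # guard against flat channels
        out[..., c] = (f - f.mean()) / s * m.std() + m.mean()
    return out

# Toy data standing in for one input frame and one model-clip frame.
rng = np.random.default_rng(1)
frame = rng.uniform(0.0, 1.0, (8, 8, 3))
model = rng.uniform(0.2, 0.9, (8, 8, 3))
out = match_color_stats(frame, model)
```

After the transform, each channel of `out` has exactly the model frame's mean and standard deviation, which is the sense in which the color distributions are "mapped".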
HDRFusion:HDR SLAM using a low-cost auto-exposure RGB-D sensor
We describe a new method for comparing frame appearance in a frame-to-model 3-D mapping and tracking system that uses a low dynamic range (LDR) RGB-D camera and is robust to brightness changes caused by auto exposure. The method is based on a normalised radiance measure that is invariant to exposure changes; it not only robustifies tracking under changing lighting conditions but also enables the subsequent exposure compensation to perform accurately enough to allow online building of high dynamic range (HDR) maps. The HDR maps in turn help the frame-to-model tracking minimise drift and better capture light variation within the scene. Results from experiments with synthetic and real data demonstrate that the method provides both improved tracking and maps with far greater dynamic range of luminosity.
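The idea of an exposure-invariant comparison can be sketched as follows. Assuming a linear camera response (the paper handles the general radiometric case), dividing pixel values by the exposure time recovers radiance up to a gain, and z-scoring removes that remaining gain; the function name `normalised_radiance` and the z-score normalisation are illustrative assumptions, not the authors' exact measure.

```python
import numpy as np

def normalised_radiance(frame, exposure_time):
    """Map raw pixel values to an exposure-invariant quantity:
    divide out the exposure time (assumes a linear camera
    response) and z-score the result so any residual global
    gain cancels as well."""
    rad = np.asarray(frame, dtype=np.float64) / exposure_time
    return (rad - rad.mean()) / (rad.std() + 1e-12)

# The same scene captured at two auto-exposure settings maps to
# (numerically) the same measure, so frames remain comparable.
rng = np.random.default_rng(0)
frame = rng.uniform(0.0, 1.0, (16, 16))
a = normalised_radiance(frame, 0.01)        # short exposure
b = normalised_radiance(frame * 3.0, 0.03)  # same scene, 3x exposure
```

This invariance is what lets a frame-to-model photometric error stay meaningful while the camera's auto exposure changes between frames.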
Remote Sensing of Giant Reed with QuickBird Satellite Imagery
QuickBird high resolution (2.8 m) satellite imagery was evaluated for distinguishing giant reed (Arundo donax L.) infestations along the Rio Grande in southwest Texas.
The Large Area Crop Inventory Experiment (LACIE). An application of remote sensing by multispectral scanners
There are no author-identified significant results in this report.
Perceptual evidence for protracted development in monosyllabic Mandarin lexical tone production in preschool children in Taiwan
This study used the same methodology as Wong [J. Speech Lang. Hear. Res. 55, 1423–1437 (2012b)] to examine the perceived accuracy of monosyllabic Mandarin tones produced by 4- and 5-year-old Mandarin-speaking children growing up in Taiwan, and combined the findings with those of the 3-year-olds reported in Wong [J. Speech Lang. Hear. Res. 55, 1423–1437 (2012b)] to track the development of monosyllabic tone production in preschool children. Tone productions of adults and children were collected in a picture naming task and low-pass filtered to remove lexical information while preserving tone information. Five native speakers categorized the target tones in the filtered productions. Children's tone accuracy was compared to adults' to determine mastery and developmental changes. The results showed that preschool children in Taiwan have not fully mastered the production of monosyllabic Mandarin tones. None of the tones produced by the children in the three age groups reached adult-like accuracy. Little developmental change was found in children's tone accuracy during the preschool years. A similar order of accuracy of the tones was observed across the three age groups, and the order appeared to follow the order of articulatory complexity in producing the tones. The findings suggest a protracted course of development in children's acquisition of Mandarin tones and that tone development may be constrained by physiological factors.
High dynamic range video merging, tone mapping, and real-time implementation
Although High Dynamic Range (HDR) imaging has been the subject of significant research over the past fifteen years, the goal of cinema-quality HDR video has not yet been achieved. This work references an optical method patented by Contrast Optical which is used to capture sequences of Low Dynamic Range (LDR) images that can be used to form HDR images as the basis for HDR video. Because of the large difference in exposure spacing of the LDR images captured by this camera, present methods of merging LDR images are insufficient to produce cinema-quality HDR images and video without significant visible artifacts. Thus the focus of the research presented is twofold. The first contribution is a new method of combining LDR images with exposure differences of greater than 3 stops into an HDR image. The second contribution is a method of tone mapping HDR video which solves potential problems of HDR video flicker and automates parameter control of the tone mapping operator. A prototype of this HDR video capture technique, along with the combining and tone mapping algorithms, has been implemented in a high-definition HDR-video system. Additionally, Field Programmable Gate Array (FPGA) hardware implementation details are given to support real-time HDR video. Still frames from the acquired HDR video, merged using the merging and tone mapping techniques, are presented.
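The two stages above, merging a bracketed LDR stack into a radiance map and tone mapping it back to a displayable range, can be sketched with textbook techniques: a hat-weighted radiance average (in the spirit of Debevec-Malik merging) and a global Reinhard-style operator. This is a minimal sketch assuming a linear camera response with values in [0, 1]; it is not the dissertation's artifact-free wide-spacing merge or its flicker-stabilised tone mapper, and the function names are illustrative.

```python
import numpy as np

def merge_hdr(ldr_stack, exposure_times):
    """Merge a bracketed stack of linear LDR images (values in
    [0, 1]) into one radiance map: each pixel is a weighted
    average of per-exposure radiance estimates, with a hat
    weight that discounts near-black and near-saturated pixels."""
    num = np.zeros_like(np.asarray(ldr_stack[0], dtype=np.float64))
    den = np.zeros_like(num)
    for img, t in zip(ldr_stack, exposure_times):
        img = np.asarray(img, dtype=np.float64)
        w = 1.0 - np.abs(2.0 * img - 1.0)   # peak weight at mid-grey
        num += w * img / t                   # img / t estimates radiance
        den += w
    return num / np.maximum(den, 1e-8)

def tone_map(hdr, a=0.18):
    """Global Reinhard-style operator: scale by the log-average
    luminance (the 'key'), then compress with x / (1 + x)."""
    key = np.exp(np.log(hdr + 1e-8).mean())
    scaled = a * hdr / key
    return scaled / (1.0 + scaled)

# Two simulated exposures of the same 1-D scene, 2 stops apart;
# the longer exposure clips where radiance * t exceeds 1.
radiance = np.linspace(5.0, 50.0, 100)
stack = [np.clip(radiance * t, 0.0, 1.0) for t in (0.01, 0.04)]
recovered = merge_hdr(stack, [0.01, 0.04])
ldr_out = tone_map(recovered)
```

Because the hat weight falls to zero at saturation, clipped pixels in the long exposure contribute nothing and the short exposure fills them in, which is the basic mechanism any HDR merge relies on.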
Computational Re-Photography
Rephotographers aim to recapture an existing photograph from the same viewpoint. A historical photograph paired with a well-aligned modern rephotograph can serve as a remarkable visualization of the passage of time. However, the task of rephotography is tedious and often imprecise, because reproducing the viewpoint of the original photograph is challenging. The rephotographer must disambiguate between the six degrees of freedom of 3D translation and rotation, and the confounding similarity between the effects of camera zoom and dolly. We present a real-time estimation and visualization technique for rephotography that helps users reach a desired viewpoint during capture. The input to our technique is a reference image taken from the desired viewpoint. The user moves through the scene with a camera and follows our visualization to reach the desired viewpoint. We employ computer vision techniques to compute the relative viewpoint difference. We guide 3D movement using two 2D arrows. We demonstrate the success of our technique by rephotographing historical images and conducting user studies.
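The "relative viewpoint difference" between the reference image and the current camera is the kind of quantity the classic two-view machinery recovers. As one standard illustration (not the paper's actual pipeline), the linear 8-point algorithm estimates the essential matrix from point correspondences in normalised image coordinates; the rotation and translation between the views can then be factored out of it. The synthetic setup below is an assumption for demonstration.

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the essential matrix E from >= 8 correspondences
    in normalised homogeneous image coordinates: solve
    x2^T E x1 = 0 in the least-squares sense via SVD, then
    enforce the rank-2 constraint."""
    A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)           # null vector -> 3x3 matrix
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt  # rank-2 projection

# Synthetic two-view setup: camera 2 is camera 1 rotated 0.1 rad
# about the y-axis and translated by t.
th = 0.1
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([0.5, 0.0, 0.1])

rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], (20, 3))  # 3-D scene points
x1 = X / X[:, 2:3]                                 # view 1 projections
X2 = X @ R.T + t
x2 = X2 / X2[:, 2:3]                               # view 2 projections

E_est = eight_point(x1, x2)
residual = np.einsum('ni,ij,nj->n', x2, E_est, x1)  # epipolar errors
```

With noiseless correspondences the epipolar residuals vanish to machine precision; in a live system like the one described, such a pose estimate would then drive the two on-screen 2D arrows.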