59 research outputs found
Where and Who? Automatic Semantic-Aware Person Composition
Image compositing is a method for generating realistic yet fake imagery by
inserting content from one image into another. Previous work in compositing has
focused on improving the appearance compatibility of a user-selected foreground
segment and a background image (i.e., color and illumination consistency). In
this work, we instead develop a fully automated compositing model that
additionally learns to select and transform compatible foreground segments from
a large collection given only an input image background. To simplify the task,
we restrict our problem by focusing on human instance composition, because
human segments exhibit strong correlations with their background and because of
the availability of large annotated datasets. We develop a novel branching
Convolutional Neural Network (CNN) that jointly predicts candidate person
locations given a background image. We then use pre-trained deep feature
representations to retrieve person instances from a large segment database.
Experimental results show that our model can generate composite images that
look visually convincing. We also develop a user interface to demonstrate the
potential application of our method.
Comment: 10 pages, 9 figures
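The retrieval step described in this abstract can be pictured as nearest-neighbour search over deep-feature embeddings. The following is a minimal sketch, not the paper's actual pipeline: the feature extractor is assumed to run upstream, and the function and variable names are hypothetical.

```python
import numpy as np

def retrieve_segments(query_feat, db_feats, k=3):
    """Return indices of the k database segments whose (hypothetical)
    deep-feature embeddings are most similar to the query, ranked by
    cosine similarity. Feature extraction is assumed to happen upstream."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                 # cosine similarity per database entry
    return np.argsort(-sims)[:k]  # best matches first

# Toy example: 4 database "segments" with 2-D features
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [-1.0, 0.0]])
print(retrieve_segments(np.array([1.0, 0.1]), db, k=2))
```

In practice the embeddings would come from a pre-trained CNN rather than toy 2-D vectors, but the ranking logic is the same.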
Layered Neural Rendering for Retiming People in Video
We present a method for retiming people in an ordinary, natural video:
manipulating and editing the time at which different motions of
individuals in the video occur. We can temporally align different motions,
change the speed of certain actions (speeding up/slowing down, or entirely
"freezing" people), or "erase" selected people from the video altogether. We
achieve these effects computationally via a dedicated learning-based layered
video representation, where each frame in the video is decomposed into separate
RGBA layers, representing the appearance of different people in the video. A
key property of our model is that it not only disentangles the direct motions
of each person in the input video, but also correlates each person
automatically with the scene changes they generate, e.g., shadows,
reflections, and motion of loose clothing. The layers can be individually
retimed and recombined into a new video, allowing us to achieve realistic,
high-quality renderings of retiming effects for real-world videos depicting
complex actions and involving multiple individuals, including dancing,
trampoline jumping, or group running.
Comment: To appear in SIGGRAPH Asia 2020. Project webpage: https://retiming.github.io
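Recombining per-frame RGBA layers into a new video, as described above, reduces at each frame to back-to-front "over" compositing. A minimal sketch, assuming straight (non-premultiplied) RGB and an alpha channel in [0, 1]; the function name is hypothetical:

```python
import numpy as np

def composite_layers(layers):
    """Recombine per-frame RGBA layers into one RGB frame with the
    standard back-to-front "over" operator. `layers` is ordered back
    to front; each entry is an (H, W, 4) float array: straight RGB in
    [0, 1] plus alpha in the last channel."""
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3))
    for layer in layers:          # nearer layers painted over farther ones
        rgb, a = layer[..., :3], layer[..., 3:4]
        out = a * rgb + (1.0 - a) * out
    return out

# 1x1 toy frame: opaque red background, half-transparent green person layer
bg = np.array([[[1.0, 0.0, 0.0, 1.0]]])
fg = np.array([[[0.0, 1.0, 0.0, 0.5]]])
print(composite_layers([bg, fg]))  # blends red and green 50/50
```

Retiming then amounts to shifting which frame each person's layer is sampled from before this recombination.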
FactorMatte: Redefining Video Matting for Re-Composition Tasks
We propose "factor matting", an alternative formulation of the video matting
problem in terms of counterfactual video synthesis that is better suited for
re-composition tasks. The goal of factor matting is to separate the contents of
video into independent components, each visualizing a counterfactual version of
the scene where contents of other components have been removed. We show that
factor matting maps well to a more general Bayesian framing of the matting
problem that accounts for complex conditional interactions between layers.
Based on this observation, we present a method for solving the factor matting
problem that produces useful decompositions even for video with complex
cross-layer interactions like splashes, shadows, and reflections. Our method is
trained per-video and requires neither pre-training on external large datasets,
nor knowledge about the 3D structure of the scene. We conduct extensive
experiments, and show that our method not only can disentangle scenes with
complex interactions, but also outperforms top methods on existing tasks such
as classical video matting and background subtraction. In addition, we
demonstrate the benefits of our approach on a range of downstream tasks. Please
refer to our project webpage for more details: https://factormatte.github.io
Plant Seed Identification
Plant seed identification is routinely performed for seed certification in seed trade, phytosanitary certification for the import and export of agricultural commodities, and regulatory monitoring, surveillance, and enforcement. Current identification is performed manually by seed analysts with limited aiding tools. Extensive expertise and time are required, especially for small, morphologically similar seeds. Computers are, however, especially good at recognizing subtle differences that humans find difficult to perceive. In this thesis, a 2D, image-based, computer-assisted approach is proposed.
The size of plant seeds is extremely small compared with everyday objects. Microscopic images of plant seeds are usually degraded by defocus blur due to the high magnification of the imaging equipment. It is necessary and beneficial to differentiate the in-focus and blurred regions, given that only sharp regions carry the distinctive information needed for identification. If the object of interest, the plant seed in this case, is in-focus in a single image frame, the amount of defocus blur can be employed as a cue to separate the object from the cluttered background. If the defocus blur is too strong and obscures the object itself, sharp regions of multiple image frames acquired at different focal distances can be merged into an all-in-focus image. This thesis describes a novel no-reference sharpness metric that exploits the difference in the distribution of uniform LBP patterns between blurred and non-blurred image regions. It runs in real time on a single CPU core and responds much better to low-contrast sharp regions than competing metrics. Its benefits are shown in both defocus segmentation and focal stacking.
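The uniform-LBP cue mentioned above rests on a simple test: a pixel's 8-neighbour LBP code is "uniform" when its circular bit string has at most two 0/1 transitions, and smooth (blurred) regions are dominated by such patterns. A minimal sketch of that test, not the thesis's full metric; function names are hypothetical:

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour LBP code of a 3x3 patch: each neighbour contributes
    a bit set when it is >= the centre pixel."""
    c = patch[1, 1]
    # Neighbours listed in circular order around the centre
    nbrs = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return [1 if v >= c else 0 for v in nbrs]

def is_uniform(bits):
    """Uniform pattern: at most two 0/1 transitions in the circular
    bit string. Blurred regions yield mostly uniform patterns."""
    transitions = sum(bits[i] != bits[(i + 1) % len(bits)]
                      for i in range(len(bits)))
    return transitions <= 2

flat = np.full((3, 3), 5.0)       # flat patch -> all-ones code, uniform
edge = np.array([[0, 9, 0],
                 [9, 5, 9],
                 [0, 9, 0]])      # alternating neighbours -> non-uniform
print(is_uniform(lbp_code(flat)), is_uniform(lbp_code(edge)))
```

A sharpness score for a region could then compare the histogram of uniform versus non-uniform codes, which is presumably where the distribution difference the thesis exploits comes in.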
With the obtained all-in-focus seed image, a scale-wise pooling method is proposed to construct its feature representation. Since the imaging settings in lab testing are well constrained, the seed objects in the acquired image can be assumed to have measurable scale and controllable scale variance. The proposed method utilizes real pixel-scale information and allows for accurate comparison of seeds across scales. By cross-validation on our high-quality seed image dataset, a better identification rate (95%) was achieved compared with pre-trained convolutional-neural-network-based models (93.6%). It offers an alternative method for image-based identification with all-in-focus object images of limited scale variance.
The very first digital seed identification tool of its kind was built and deployed for testing in the seed laboratory of the Canadian Food Inspection Agency (CFIA). The proposed focal stacking algorithm was employed to create all-in-focus images, whereas the scale-wise pooling feature representation was used as the image signature. Throughput, workload, and identification rate were evaluated, and seed analysts reported significantly lower mental demand (p = 0.00245) when using the provided tool compared with manual identification. Although the identification rate in the practical test is only around 50%, I have demonstrated common mistakes made in the imaging process and possible ways to deploy the tool to improve the recognition rate.
MegaPortraits: One-shot Megapixel Neural Head Avatars
In this work, we advance the neural head avatar technology to the megapixel
resolution while focusing on the particularly challenging task of cross-driving
synthesis, i.e., when the appearance of the driving image is substantially
different from the animated source image. We propose a set of new neural
architectures and training methods that can leverage both medium-resolution
video data and high-resolution image data to achieve the desired levels of
rendered image quality and generalization to novel views and motion. We
demonstrate that the suggested architectures and methods produce convincing
high-resolution neural avatars, outperforming the competitors in the
cross-driving scenario. Lastly, we show how a trained high-resolution neural
avatar model can be distilled into a lightweight student model which runs in
real-time and locks the identities of neural avatars to several dozen
pre-defined source images. Real-time operation and identity lock are essential
for many practical applications of head avatar systems.
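The distillation step mentioned above follows the usual teacher-student pattern: the heavy model's outputs become regression targets for a lightweight student. A toy sketch under strong assumptions (both models reduced to linear maps, fitted by least squares), nothing like the paper's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the (hypothetical) teacher: a fixed linear map
W_teacher = rng.normal(size=(4, 8))

def teacher(x):
    return x @ W_teacher.T

# Distillation as least squares: fit student weights so that
# student(x) ~= teacher(x) on a batch of driving inputs.
X = rng.normal(size=(64, 8))
Y = teacher(X)                          # teacher outputs are the targets
W_student, *_ = np.linalg.lstsq(X, Y, rcond=None)

err = np.abs(X @ W_student - Y).mean()  # distillation residual
print(err < 1e-8)
```

In the real setting the student would be a small neural renderer trained with image-space losses against the teacher's renders, but the supervision structure is the same.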