214 research outputs found
Radiance interpolants for interactive scene editing and ray tracing
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 189-197). By Kavita Bala.
Deep Photo Style Transfer
This paper introduces a deep-learning approach to photographic style transfer
that handles a large variety of image content while faithfully transferring the
reference style. Our approach builds upon the recent work on painterly transfer
that separates style from the content of an image by considering different
layers of a neural network. However, as is, this approach is not suitable for
photorealistic style transfer. Even when both the input and reference images
are photographs, the output still exhibits distortions reminiscent of a
painting. Our contribution is to constrain the transformation from the input to
the output to be locally affine in colorspace, and to express this constraint
as a custom fully differentiable energy term. We show that this approach
successfully suppresses distortion and yields satisfying photorealistic style
transfers in a broad variety of scenarios, including transfer of the time of
day, weather, season, and artistic edits.
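As a rough, hedged illustration of the locally affine colorspace constraint described above, the following numpy sketch fits an affine map from input colors to output colors in every small window and reports the mean fitting residual; a photorealistic result should keep this residual low. The window size and per-window least-squares formulation are illustrative assumptions and only stand in for, rather than reproduce, the paper's differentiable energy term.

import numpy as np

def local_affine_residual(inp, out, win=3):
    """Mean residual of per-window affine fits mapping input RGB to output RGB.

    inp, out: float arrays of shape (H, W, 3). Illustration only, not optimized.
    """
    H, W, _ = inp.shape
    r = win // 2
    residuals = []
    for y in range(r, H - r):
        for x in range(r, W - r):
            I = inp[y - r:y + r + 1, x - r:x + r + 1].reshape(-1, 3)
            O = out[y - r:y + r + 1, x - r:x + r + 1].reshape(-1, 3)
            A = np.hstack([I, np.ones((I.shape[0], 1))])    # 3x3 linear part plus bias
            coef = np.linalg.lstsq(A, O, rcond=None)[0]     # best affine fit in this window
            residuals.append(np.mean((A @ coef - O) ** 2))  # deviation from "locally affine"
    return float(np.mean(residuals))

An output that is merely a locally affine recoloring of the input (say, a global white-balance shift) yields a residual near zero, while painterly warping and texture distortion drive it up.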
Software management techniques for translation lookaside buffers
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (leaves 67-70). By Kavita Bala.
Computational rim illumination of dynamic subjects using aerial robots
Lighting plays a major role in photography. Professional photographers use elaborate installations to light their subjects and achieve sophisticated styles. However, lighting moving subjects performing dynamic tasks presents significant challenges and requires expensive manual intervention. A skilled assistant may be needed to reposition lights as the subject changes pose or moves, and the extra logistics significantly raise cost and time. The latency while the assistant relights the subject, and the communication required from the photographer to achieve optimal lighting, can mean missing a critical shot.
We present a new approach to lighting dynamic subjects where an aerial robot equipped with a portable light source lights the subject to automatically achieve a desired lighting effect. We focus on rim lighting, a particularly challenging effect to achieve with dynamic subjects, and allow the photographer to specify a required rim width. Our algorithm processes the images from the photographer's camera and provides the necessary motion commands to the aerial robot to achieve the desired rim width in the resulting photographs. With an indoor setup, we demonstrate a control approach that localizes the aerial robot with reference to the subject and tracks the subject to achieve the necessary motion. In addition to indoor experiments, we perform open-loop outdoor experiments in a realistic photo-shooting scenario to understand lighting ergonomics. Our proof-of-concept results demonstrate the utility of robots in computational lighting.
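To make the feedback idea concrete, the following self-contained Python sketch runs a proportional control loop that nudges the light's azimuth around the subject until a simulated rim width matches the requested width. The toy width model, gain, and angle convention are hypothetical stand-ins, not the authors' vision pipeline or robot controller.

import math

def simulated_rim_width(light_azimuth_rad, max_width_px=40.0):
    """Toy model: the rim is widest when the light sits near the camera axis
    and vanishes as the light moves directly behind the subject."""
    return max_width_px * max(0.0, math.cos(light_azimuth_rad / 2.0))

def control_loop(target_width_px, azimuth_rad=0.5, k_p=0.02, steps=50):
    """Proportional controller on the light's azimuth about the subject."""
    for _ in range(steps):
        error = target_width_px - simulated_rim_width(azimuth_rad)
        azimuth_rad -= k_p * error  # rim too narrow -> swing the light toward the camera axis
    return azimuth_rad, simulated_rim_width(azimuth_rad)

print(control_loop(target_width_px=15.0))  # settles near the requested 15 px rim

In the real system the measured rim width would come from processing the photographer's frames rather than from a closed-form model, but the control structure is the same: measure, compare to the requested width, and command a small motion of the aerial light.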
Learning Visual Clothing Style with Heterogeneous Dyadic Co-occurrences
With the rapid proliferation of smart mobile devices, users now take millions
of photos every day. These include large numbers of clothing and accessory
images. We would like to answer questions like "What outfit goes well with this
pair of shoes?" To answer these types of questions, one has to go beyond
learning visual similarity and learn a visual notion of compatibility across
categories. In this paper, we propose a novel learning framework to help answer
these types of questions. The main idea of this framework is to learn a feature
transformation from images of items into a latent space that expresses
compatibility. For the feature transformation, we use a Siamese Convolutional
Neural Network (CNN) architecture, where training examples are pairs of items
that are either compatible or incompatible. We model compatibility based on
co-occurrence in large-scale user behavior data; in particular co-purchase data
from Amazon.com. To learn cross-category fit, we introduce a strategic method
to sample training data, where pairs of items are heterogeneous dyads, i.e.,
the two elements of a pair belong to different high-level categories. While
this approach is applicable to a wide variety of settings, we focus on the
representative problem of learning compatible clothing style. Our results
indicate that the proposed framework is capable of learning semantic
information about visual style and is able to generate outfits of clothes, with
items from different categories, that go well together. Comment: ICCV 2015.
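As a hedged sketch of the training setup the abstract describes, the PyTorch snippet below embeds two item images with a weight-sharing (Siamese) CNN and applies a contrastive loss that pulls co-purchased (compatible) pairs together in the latent space and pushes incompatible pairs apart. The tiny backbone, margin, input size, and random tensors are illustrative assumptions rather than the paper's exact architecture or sampling strategy.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Maps an item image to a point in the compatibility (latent) space."""
    def __init__(self, dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def contrastive_loss(z1, z2, compatible, margin=1.0):
    """compatible: 1.0 for co-purchased pairs, 0.0 for sampled incompatible pairs."""
    d = F.pairwise_distance(z1, z2)
    return (compatible * d.pow(2) +
            (1.0 - compatible) * F.relu(margin - d).pow(2)).mean()

net = EmbeddingNet()                                     # both branches share these weights
img_a = torch.randn(8, 3, 64, 64)                        # e.g., shoes
img_b = torch.randn(8, 3, 64, 64)                        # e.g., tops (a heterogeneous dyad)
labels = torch.randint(0, 2, (8,)).float()               # 1 = compatible, 0 = not
loss = contrastive_loss(net(img_a), net(img_b), labels)
loss.backward()

Because both branches share weights, cross-category compatibility is carried entirely by the learned embedding and by how the heterogeneous dyads are sampled, not by any category-specific head.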