Continuous-time Mean-Variance Portfolio Selection with Stochastic Parameters
This paper studies a continuous-time market under a stochastic environment
where an agent, having specified an investment horizon and a target terminal
mean return, seeks to minimize the variance of the return of a portfolio of
multiple stocks and a bond. In the considered model, first proposed in [3], the mean returns
of individual assets are explicitly affected by underlying Gaussian economic
factors. Using past and present information of the asset prices, a
partial-information stochastic optimal control problem with random coefficients
is formulated. Here, the partial information arises because the economic
factors cannot be directly observed. Via dynamic programming theory,
the optimal portfolio strategy can be constructed by solving a deterministic
forward Riccati-type ordinary differential equation and two linear
deterministic backward ordinary differential equations.
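As a rough numerical illustration of the kind of recipe the abstract describes, a scalar forward Riccati-type ODE and a linear backward ODE can be integrated with standard solvers. This is only a sketch: the coefficient functions, boundary data, and dimensions below are hypothetical placeholders, not the paper's actual equations.

```python
# Hedged sketch: integrating a scalar forward Riccati-type ODE and a linear
# backward ODE of the kind referenced above. Coefficients are illustrative only.
from scipy.integrate import solve_ivp

T = 1.0                       # investment horizon (hypothetical)
a = lambda t: 0.5 + 0.1 * t   # placeholder coefficient functions
b = lambda t: -0.2
c = lambda t: 0.05

# Forward Riccati-type ODE: P'(t) = a(t) P(t)^2 + b(t) P(t) + c(t), P(0) = P0.
def riccati_rhs(t, P):
    return a(t) * P**2 + b(t) * P + c(t)

P_sol = solve_ivp(riccati_rhs, (0.0, T), [1.0], dense_output=True)

# Linear backward ODE: g'(t) = -b(t) g(t) with terminal condition g(T) = gT,
# solved by integrating forward in the reversed time variable s = T - t.
def backward_rhs(s, g):
    t = T - s
    return b(t) * g           # sign flips under the change of variable

g_sol = solve_ivp(backward_rhs, (0.0, T), [0.3], dense_output=True)

print("P(T) =", P_sol.y[0, -1], " g(0) =", g_sol.y[0, -1])
```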
Advances in 3D Neural Stylization: A Survey
Modern artificial intelligence provides a novel way of producing stylized
digital art. The expressive power of neural networks has opened up a range of visual
style transfer methods, which can be used to edit images, videos, and 3D data
to make them more artistic and diverse. This paper reports on recent advances
in neural stylization for 3D data. We provide a taxonomy for neural stylization
by considering several important design choices, including scene
representation, guidance data, optimization strategies, and output styles.
Building on this taxonomy, our survey first revisits the background of neural
stylization on 2D images, and then provides in-depth discussions on recent
neural stylization methods for 3D data, where we also provide a mini-benchmark
on artistic stylization methods. Based on the insights gained from the survey,
we then discuss open challenges, future research, and potential applications
and impacts of neural stylization.
Comment: 26 pages
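Since the survey revisits 2D neural stylization as background, a minimal sketch of the classic Gram-matrix style loss may help fix ideas. The feature-map shapes and layer choices are assumptions for illustration, not the survey's benchmark setup.

```python
# Minimal sketch of a Gram-matrix style loss, the building block of classic
# 2D neural style transfer. Feature extractor and weighting are assumptions.
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-wise correlations of a feature map: (B, C, H, W) -> (B, C, C)."""
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

def style_loss(generated_feats, style_feats):
    """Sum of squared Gram-matrix differences over a list of feature maps."""
    return sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
               for g, s in zip(generated_feats, style_feats))
```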
Time-of-Day Neural Style Transfer for Architectural Photographs
Architectural photography is a genre of photography that focuses on capturing
a building or structure in the foreground with dramatic lighting in the
background. Inspired by recent successes in image-to-image translation methods,
we aim to perform style transfer for architectural photographs. However, the
special composition in architectural photography poses great challenges for
style transfer in this type of photograph. Existing neural style transfer
methods treat the architectural image as a single entity, generating mismatched
chrominance and destroying geometric features of the original architecture,
which yields unrealistic lighting, incorrect color rendition, and visual
artifacts such as ghosting and appearance distortion. In
this paper, we specialize a neural style transfer method for architectural
photography. Our method addresses the foreground and background composition of
an architectural photograph with a two-branch neural network that handles
style transfer for the foreground and the background separately. Our method
comprises a segmentation module, a learning-based
image-to-image translation module, and an image blending optimization module.
We trained our image-to-image translation neural network with a new dataset of
unconstrained outdoor architectural photographs captured at different magic
times of the day, utilizing additional semantic information for better
chrominance matching and geometry preservation. Our experiments show that our
method can produce photorealistic lighting and color rendition on both the
foreground and background, and outperforms general image-to-image translation
and arbitrary style transfer baselines quantitatively and qualitatively. Our
code and data are available at
https://github.com/hkust-vgd/architectural_style_transfer.
Comment: Updated version with corrected equations. Paper published at the
International Conference on Computational Photography (ICCP) 2022. 12 pages
of content with 6 pages of supplementary material.
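To make the two-branch design concrete, the following is a rough sketch of how a segmentation mask could route an image through separate foreground and background translators before blending. The module names and interfaces are hypothetical placeholders, not the released code at the GitHub link above.

```python
# Hedged sketch of the two-branch idea: translate foreground and background
# separately, then blend with the segmentation mask.
import torch
import torch.nn as nn

class TwoBranchStyleTransfer(nn.Module):
    def __init__(self, segmenter: nn.Module,
                 fg_translator: nn.Module, bg_translator: nn.Module):
        super().__init__()
        self.segmenter = segmenter          # predicts foreground (building) logits
        self.fg_translator = fg_translator  # image-to-image network for the building
        self.bg_translator = bg_translator  # image-to-image network for the sky/background

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W); mask in [0, 1] with 1 = foreground architecture.
        mask = torch.sigmoid(self.segmenter(image))
        fg = self.fg_translator(image)
        bg = self.bg_translator(image)
        # Simple alpha blend; the paper additionally optimizes the blend,
        # which is omitted here for brevity.
        return mask * fg + (1.0 - mask) * bg
```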
Language-driven Object Fusion into Neural Radiance Fields with Pose-Conditioned Dataset Updates
A neural radiance field is an emerging rendering method that generates
high-quality, multi-view-consistent images from a neural scene representation
via volume rendering. Although neural radiance field-based techniques are
robust for scene reconstruction, their ability to add or remove objects remains
limited. This paper proposes a new language-driven approach for object
manipulation with neural radiance fields through dataset updates. Specifically,
to insert a new foreground object represented by a set of multi-view images
into a background radiance field, we use a text-to-image diffusion model to
learn and generate combined images that fuse the object of interest into the
given background across views. These combined images are then used for refining
the background radiance field so that we can render view-consistent images
containing both the object and the background. To ensure view consistency, we
propose a dataset update strategy that prioritizes radiance field training
with camera views close to the already-trained views prior to propagating the
training to the remaining views. We show that, under the same dataset update
strategy, we can easily adapt our method for object insertion using data from
text-to-3D models as well as object removal. Experimental results show that our
method generates photorealistic images of the edited scenes, and outperforms
state-of-the-art methods in 3D reconstruction and neural radiance field
blending.
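As an illustration of prioritizing camera views close to already-trained ones, a greedy ordering by camera-center distance could look like the sketch below. The distance metric and data layout are simplifying assumptions, not the paper's exact pose-conditioned procedure.

```python
# Hedged sketch of a pose-conditioned update order: repeatedly pick the
# untrained view whose camera center is closest to any already-trained view.
import numpy as np

def pose_conditioned_order(camera_centers: np.ndarray, trained_idx: list) -> list:
    """camera_centers: (N, 3) camera positions; trained_idx: indices of seed views."""
    trained = list(trained_idx)
    remaining = [i for i in range(len(camera_centers)) if i not in trained]
    order = []
    while remaining:
        # Distance from each remaining view to its nearest trained view.
        dists = [min(np.linalg.norm(camera_centers[i] - camera_centers[j])
                     for j in trained) for i in remaining]
        nxt = remaining[int(np.argmin(dists))]
        order.append(nxt)
        trained.append(nxt)
        remaining.remove(nxt)
    return order

# Example: five random camera positions, starting from view 0.
centers = np.random.rand(5, 3)
print(pose_conditioned_order(centers, trained_idx=[0]))
```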