252 research outputs found
Microsatellites within genes and ESTs of the Pacific oyster Crassostrea gigas and their transferability in five other Crassostrea species
We developed 15 novel polymorphic microsatellites for the Pacific oyster Crassostrea gigas by screening genes and expressed sequence tags (ESTs) found in GenBank. The number of alleles per locus ranged from 2 to 24 with an average of 8.7, and the values of observed heterozygosity (Ho) and expected heterozygosity (He) ranged from 0.026 to 0.750 and from 0.120 to 0.947, respectively. No significant pairwise linkage disequilibrium was detected among loci, and eight loci conformed to Hardy-Weinberg equilibrium. Transferability of the markers was examined in five other Crassostrea species, and all markers amplified successfully in at least one species. These new microsatellites should be useful for population genetics, parentage analysis and genome mapping studies of C. gigas and closely related species. The nine markers identified from known genes are expected to be especially valuable for comparative mapping as type I markers.
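The Ho and He statistics reported above can be computed from diploid genotype calls at a single locus. As a minimal sketch (not the authors' pipeline), Ho is the fraction of heterozygous individuals, and He uses Nei's unbiased estimator 2n/(2n-1) * (1 - sum of squared allele frequencies):

```python
from collections import Counter

def heterozygosity(genotypes):
    """Observed (Ho) and expected (He) heterozygosity for one locus.

    genotypes: list of (allele_a, allele_b) tuples, one per diploid
    individual. He uses Nei's unbiased estimator
    2n/(2n-1) * (1 - sum_i p_i**2), with n individuals (2n gene copies).
    """
    n = len(genotypes)
    # Ho: fraction of individuals carrying two different alleles
    ho = sum(a != b for a, b in genotypes) / n
    # Allele frequencies pooled over both gene copies per individual
    alleles = Counter(a for pair in genotypes for a in pair)
    total = 2 * n
    he = (total / (total - 1)) * (1 - sum((c / total) ** 2
                                          for c in alleles.values()))
    return ho, he
```

For example, four individuals genotyped as (1,2), (1,1), (2,2), (1,2) give Ho = 0.5 and allele frequencies of 0.5 each, so He = (8/7) * 0.5.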
3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping
We present 3DHumanGAN, a 3D-aware generative adversarial network that
synthesizes photorealistic images of full-body humans with consistent
appearances under different view-angles and body-poses. To tackle the
representational and computational challenges in synthesizing the articulated
structure of human bodies, we propose a novel generator architecture in which a
2D convolutional backbone is modulated by a 3D pose mapping network. The 3D
pose mapping network is formulated as a renderable implicit function
conditioned on a posed 3D human mesh. This design has several merits: i) it
leverages the strength of 2D GANs to produce high-quality images; ii) it
generates consistent images under varying view-angles and poses; iii) the model
can incorporate the 3D human prior and enable pose conditioning. Project page:
https://3dhumangan.github.io/
Comment: 9 pages, 8 figures
Dynamic Tensor Decomposition via Neural Diffusion-Reaction Processes
Tensor decomposition is an important tool for multiway data analysis. In
practice, the data is often sparse yet associated with rich temporal
information. Existing methods, however, often under-use the time information
and ignore the structural knowledge within the sparsely observed tensor
entries. To overcome these limitations and to better capture the underlying
temporal structure, we propose Dynamic EMbedIngs fOr dynamic Tensor
dEcomposition (DEMOTE). We develop a neural diffusion-reaction process to
estimate dynamic embeddings for the entities in each tensor mode. Specifically,
based on the observed tensor entries, we build a multi-partite graph to encode
the correlation between the entities. We construct a graph diffusion process to
co-evolve the embedding trajectories of the correlated entities and use a
neural network to construct a reaction process for each individual entity. In
this way, our model can capture both the commonalities and personalities during
the evolution of the embeddings for different entities. We then use a neural
network to model the entry value as a nonlinear function of the embedding
trajectories. For model estimation, we develop a stochastic mini-batch
learning algorithm built on ODE solvers, together with a stratified sampling
method that balances the cost of processing each mini-batch and improves
overall efficiency. We show the advantage of our approach in both simulation
study and real-world applications. The code is available at
https://github.com/wzhut/Dynamic-Tensor-Decomposition-via-Neural-Diffusion-Reaction-Processes
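The core dynamics described in the abstract are a diffusion term that co-evolves correlated entities over the multi-partite graph plus a per-entity reaction term. A minimal numerical sketch, assuming an adjacency matrix `A` and using a fixed random nonlinearity as a stand-in for the learned reaction network (the real DEMOTE model trains these components end-to-end):

```python
import numpy as np

def evolve_embeddings(U0, A, steps=50, dt=0.01, rng=None):
    """Euler integration of a diffusion-reaction ODE on entity embeddings:

        dU/dt = -L @ U + f(U)

    where L is the combinatorial graph Laplacian of the entity graph A
    (diffusion: correlated entities pull their embeddings together) and
    f is a random-feature nonlinearity standing in for the learned
    per-entity reaction network (individual dynamics).
    """
    rng = rng or np.random.default_rng(0)
    d = U0.shape[1]
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                    # graph Laplacian
    W = rng.standard_normal((d, d)) * 0.1   # stand-in reaction weights
    U = U0.copy()
    for _ in range(steps):
        diffusion = -L @ U                  # smooths over graph neighbors
        reaction = np.tanh(U @ W)           # entity-specific nonlinearity
        U = U + dt * (diffusion + reaction)
    return U
```

In the full model, the resulting embedding trajectories are fed to a neural network that predicts each observed tensor entry; here the sketch only illustrates the forward dynamics.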
Analysis of Multivariate Scoring Functions for Automatic Unbiased Learning to Rank
Leveraging biased click data for optimizing learning to rank systems has been
a popular approach in information retrieval. Because click data is often noisy
and biased, a variety of methods have been proposed to construct unbiased
learning to rank (ULTR) algorithms for the learning of unbiased ranking models.
Among them, automatic unbiased learning to rank (AutoULTR) algorithms that
jointly learn user bias models (i.e., propensity models) with unbiased rankers
have received a lot of attention due to their superior performance and low
deployment cost in practice. Despite their differences in theories and
algorithm design, existing studies on ULTR usually use uni-variate ranking
functions to score each document or result independently. On the other hand,
recent advances in context-aware learning-to-rank models have shown that
multivariate scoring functions, which read multiple documents together and
predict their ranking scores jointly, are more powerful than uni-variate
ranking functions in ranking tasks with human-annotated relevance labels.
Whether such superior performance would hold in ULTR with noisy data, however,
is mostly unknown. In this paper, we investigate existing multivariate scoring
functions and AutoULTR algorithms in theory and prove that permutation
invariance is a crucial factor that determines whether a context-aware
learning-to-rank model could be applied to existing AutoULTR framework. Our
experiments with synthetic clicks on two large-scale benchmark datasets show
that AutoULTR models with permutation-invariant multivariate scoring functions
significantly outperform those with uni-variate scoring functions and
permutation-variant multivariate scoring functions.
Comment: 4 pages, 2 figures. Accepted to appear in Proceedings of the 29th
ACM International Conference on Information and Knowledge Management
(CIKM '20), October 19--23, 2020
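The permutation-invariance property the paper identifies can be illustrated with a DeepSets-style scorer: each document is scored from its own features plus an order-independent pooled context over the whole list, so reordering the input list permutes the output scores identically. This is a hedged sketch with random weights standing in for learned parameters, not the paper's actual architecture:

```python
import numpy as np

def multivariate_scores(X, rng=None):
    """Permutation-invariant multivariate scoring function.

    X: (n_docs, d) feature matrix for one ranked list. Each document's
    score depends on its own features and a mean-pooled context vector,
    so permuting the rows of X permutes the scores the same way
    (permutation equivariance of the output, invariance of the context).
    Weights are random stand-ins for learned parameters.
    """
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    W_ctx = rng.standard_normal((d, d))
    w_out = rng.standard_normal(2 * d)
    context = np.tanh(X @ W_ctx).mean(axis=0)   # order-independent pooling
    feats = np.concatenate([X, np.tile(context, (n, 1))], axis=1)
    return feats @ w_out
```

A permutation-variant scorer (e.g., one that reads documents through an order-sensitive RNN) would fail this property, which is why, per the paper's analysis, it cannot be plugged into the existing AutoULTR framework.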
UnitedHuman: Harnessing Multi-Source Data for High-Resolution Human Generation
Human generation has achieved significant progress. Nonetheless, existing
methods still struggle to synthesize specific regions such as faces and hands.
We argue that the main reason is rooted in the training data. A holistic human
dataset inevitably has insufficient and low-resolution information on local
parts. Therefore, we propose to use multi-source datasets with various
resolution images to jointly learn a high-resolution human generative model.
However, multi-source data inherently a) contains different parts that do not
spatially align into a coherent human, and b) comes with different scales. To
tackle these challenges, we propose an end-to-end framework, UnitedHuman, that
empowers continuous GAN with the ability to effectively utilize multi-source
data for high-resolution human generation. Specifically, 1) we design a
Multi-Source Spatial Transformer that spatially aligns multi-source images to
full-body space with a human parametric model. 2) Next, a continuous GAN is
proposed with global-structural guidance and CutMix consistency. Patches from
different datasets are then sampled and transformed to supervise the training
of this scale-invariant generative model. Extensive experiments demonstrate
that our model, jointly learned from multi-source data, achieves higher
quality than models learned from a holistic dataset.
Comment: Accepted by ICCV 2023. Project page: https://unitedhuman.github.io/
GitHub: https://github.com/UnitedHuman/UnitedHuman