32 research outputs found
Predicting college basketball match outcomes using machine learning techniques: some results and lessons learned
Most existing work on predicting NCAAB matches has been developed in a
statistical context. Trusting the capabilities of ML techniques, particularly
classification learners, to uncover the importance of features and learn their
relationships, we evaluated a number of different paradigms on this task. In
this paper, we summarize our work, pointing out that attributes seem to be more
important than models, and that there seems to be an upper limit to predictive
quality.
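The abstract's two takeaways (features dominate model choice, and predictive quality has a ceiling) can be illustrated with a toy sketch. The synthetic data and the single "efficiency margin" feature below are hypothetical stand-ins, not the paper's actual attributes or learners:

```python
import math
import random

def make_matches(n, seed=0):
    """Synthetic NCAAB-style matches: the label (home win) is driven by
    one hypothetical feature, an adjusted-efficiency margin, plus noise."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        margin = rng.uniform(-10.0, 10.0)
        p_home_win = 1.0 / (1.0 + math.exp(-0.3 * margin))
        data.append((margin, 1 if rng.random() < p_home_win else 0))
    return data

def majority_baseline(train):
    """Ignore the features entirely; always predict the majority class."""
    ones = sum(y for _, y in train)
    pred = 1 if 2 * ones >= len(train) else 0
    return lambda x: pred

def stump(train):
    """One-feature decision stump: predict a home win iff margin > 0."""
    return lambda x: 1 if x > 0 else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

train_set, test_set = make_matches(2000), make_matches(500, seed=1)
for name, fit in [("majority", majority_baseline), ("stump", stump)]:
    print(name, round(accuracy(fit(train_set), test_set), 2))
```

Because the labels are noisy by construction, even the optimal decision rule cannot exceed a fixed accuracy ceiling here, mirroring the upper limit on predictive quality the abstract describes.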
Deep Generative Models on 3D Representations: A Survey
Generative models, as an important family of statistical modeling, target
learning the observed data distribution via generating new instances. Along
with the rise of neural networks, deep generative models, such as variational
autoencoders (VAEs) and generative adversarial network (GANs), have made
tremendous progress in 2D image synthesis. Recently, researchers have shifted their attention from 2D to 3D, since 3D data better aligns with our physical world and thus holds great practical potential. However, unlike a 2D image, which by nature has an efficient representation (i.e., a pixel grid), 3D data poses far greater representational challenges.
Concretely, an ideal 3D representation should be expressive enough to model shapes and appearances in detail, and efficient enough to handle high-resolution data quickly and with low memory cost. However,
existing 3D representations, such as point clouds, meshes, and recent neural
fields, usually fail to meet these requirements simultaneously. In this survey, we thoroughly review the development of 3D generation, including 3D shape generation and 3D-aware image synthesis, from the perspectives of both algorithms and, more importantly, representations. We hope our discussion helps the community track the evolution of this field and sparks innovative ideas to advance this challenging task.
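The expressiveness-versus-efficiency trade-off the survey highlights can be made concrete with a back-of-the-envelope count of stored values per representation. The formulas below are deliberate simplifications for illustration; real systems store per-element features rather than single scalars:

```python
def voxel_floats(res):
    """Dense occupancy grid: one value per cell, cubic in resolution."""
    return res ** 3

def pointcloud_floats(n_points):
    """Unordered point set: an xyz triple per point, linear in count."""
    return 3 * n_points

def mesh_floats(n_verts, n_faces):
    """Triangle mesh: xyz per vertex plus 3 vertex indices per face."""
    return 3 * n_verts + 3 * n_faces

# Grids blow up cubically with resolution; point sets and meshes
# grow only linearly with element count:
for res in (32, 128, 512):
    print(f"{res:4d}^3 grid: {voxel_floats(res):>12,d} values")
print("100k-point cloud:", f"{pointcloud_floats(100_000):,d}", "values")
```

This is why high-resolution dense grids are rarely practical, while sparser representations trade away easy neighborhood structure for memory efficiency.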
Improving 3D-aware Image Synthesis with A Geometry-aware Discriminator
3D-aware image synthesis aims at learning a generative model that can render
photo-realistic 2D images while capturing decent underlying 3D shapes. A
popular solution is to adopt the generative adversarial network (GAN) and
replace the generator with a 3D renderer, where volume rendering with neural
radiance field (NeRF) is commonly used. Despite the advancement of synthesis
quality, existing methods fail to obtain moderate 3D shapes. We argue that,
considering the two-player game in the formulation of GANs, only making the
generator 3D-aware is not enough. In other words, displacing the generative
mechanism only offers the capability, but not the guarantee, of producing
3D-aware images, because the supervision of the generator primarily comes from
the discriminator. To address this issue, we propose GeoD, which learns a geometry-aware discriminator to improve 3D-aware GANs. Concretely, besides
differentiating real and fake samples from the 2D image space, the
discriminator is additionally asked to derive the geometry information from the
inputs, which is then applied as the guidance of the generator. Such a simple
yet effective design facilitates learning substantially more accurate 3D
shapes. Extensive experiments on various generator architectures and training
datasets verify the superiority of GeoD over state-of-the-art alternatives.
Moreover, our approach establishes a general framework in which a more capable discriminator (i.e., with a third task of novel view synthesis beyond domain classification and geometry extraction) can further assist the generator with better multi-view consistency.
Comment: Accepted by NeurIPS 2022. Project page: https://vivianszf.github.io/geo
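The multi-task discriminator idea can be sketched numerically: one shared trunk, a real/fake head, and a geometry head whose prediction supplies guidance to the generator's loss. All shapes, random weights, and the loss weighting below are illustrative assumptions, not GeoD's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(img, w_shared, w_adv, w_geo):
    """Shared trunk with two heads: a real/fake logit (domain
    classification) and a geometry (depth) estimate."""
    h = np.maximum(img @ w_shared, 0.0)   # ReLU trunk
    logit = float(h @ w_adv)              # adversarial head
    depth = h @ w_geo                     # geometry head
    return logit, depth

def generator_loss(logit_fake, depth_from_disc, depth_rendered, lam=1.0):
    """Adversarial term plus geometry guidance: the generator is pushed
    to make its rendered depth agree with the discriminator's estimate."""
    adv = float(np.log1p(np.exp(-logit_fake)))          # softplus(-logit)
    geo = float(np.mean((depth_from_disc - depth_rendered) ** 2))
    return adv + lam * geo

# Toy sizes: a 64-dim "image", 16-dim trunk, 8-dim "depth map".
img = rng.normal(size=64)
w_shared = rng.normal(size=(64, 16)) * 0.1
w_adv = rng.normal(size=16) * 0.1
w_geo = rng.normal(size=(16, 8)) * 0.1

logit, depth = discriminator(img, w_shared, w_adv, w_geo)
loss = generator_loss(logit, depth, rng.normal(size=8))
```

The key point is structural: geometry supervision reaches the generator through the same discriminator that already provides the adversarial signal, so no external 3D labels are needed.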
LinkGAN: Linking GAN Latents to Pixels for Controllable Image Synthesis
This work presents an easy-to-use regularizer for GAN training, which helps
explicitly link some axes of the latent space to a set of pixels in the
synthesized image. Establishing such a connection facilitates a more convenient
local control of GAN generation, where users can alter the image content only
within a spatial area simply by partially resampling the latent code.
Experimental results confirm four appealing properties of our regularizer,
which we call LinkGAN. (1) The latent-pixel linkage is applicable to either a
fixed region (i.e., the same for all instances) or a particular semantic category (i.e., varying across instances), such as the sky. (2) Two or more
regions can be independently linked to different latent axes, which further
supports joint control. (3) Our regularizer can improve the spatial
controllability of both 2D and 3D-aware GAN models, barely sacrificing the
synthesis performance. (4) The models trained with our regularizer are
compatible with GAN inversion techniques and maintain editability on real
images.
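The local control described in property (2) can be mimicked with a toy linear "generator" whose weight matrix is hard-wired to be block-structured. In a real LinkGAN this linkage is encouraged by the training regularizer rather than imposed; here it is built in so that resampling only the linked axes demonstrably changes only the linked pixels:

```python
import numpy as np

rng = np.random.default_rng(0)
LINKED_AXES = slice(0, 4)     # latent axes tied to a pixel region
LINKED_PIXELS = slice(0, 8)   # the region they control (toy 16-pixel "image")

# Block-structured weights: the first 4 axes write only into the
# first 8 pixels, the remaining 12 axes only into the rest.
W = np.zeros((16, 16))
W[LINKED_PIXELS, LINKED_AXES] = rng.normal(size=(8, 4))
W[8:, 4:] = rng.normal(size=(8, 12))

def generate(z):
    return W @ z

z = rng.normal(size=16)
img = generate(z)

# Local edit: partially resample the latent code on the linked axes only.
z_edit = z.copy()
z_edit[LINKED_AXES] = rng.normal(size=4)
img_edit = generate(z_edit)

# Only the linked region changes; the rest is bit-identical.
changed = np.abs(img_edit - img) > 1e-12
```

Because the difference `img_edit - img` equals `W @ (z_edit - z)` and the unlinked rows of `W` are zero on the resampled axes, the untouched region is exactly preserved, which is the behavior the regularizer aims for in a trained GAN.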
Gaussian Shell Maps for Efficient 3D Human Generation
Efficient generation of 3D digital humans is important in several industries,
including virtual reality, social media, and cinematic production. 3D
generative adversarial networks (GANs) have demonstrated state-of-the-art
(SOTA) quality and diversity for generated assets. Current 3D GAN
architectures, however, typically rely on volume representations, which are
slow to render, thereby hampering the GAN training and requiring
multi-view-inconsistent 2D upsamplers. Here, we introduce Gaussian Shell Maps
(GSMs) as a framework that connects SOTA generator network architectures with
emerging 3D Gaussian rendering primitives using an articulable multi-shell-based scaffold. In this setting, a CNN generates a 3D texture stack with
features that are mapped to the shells. The latter represent inflated and
deflated versions of a template surface of a digital human in a canonical body
pose. Instead of rasterizing the shells directly, we sample 3D Gaussians on the
shells whose attributes are encoded in the texture features. These Gaussians
are efficiently and differentiably rendered. The ability to articulate the
shells is important during GAN training and, at inference time, to deform a
body into arbitrary user-defined poses. Our efficient rendering scheme bypasses
the need for view-inconsistent upsamplers and achieves high-quality multi-view
consistent renderings at a native resolution of pixels. We
demonstrate that GSMs successfully generate 3D humans when trained on
single-view datasets, including SHHQ and DeepFashion.
Comment: Project page: https://rameenabdal.github.io/GaussianShellMaps
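The shell-scaffold construction can be sketched as follows. A unit sphere stands in for the canonical human body template, and the shell offsets, vertex count, and feature dimension are made-up values for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere_template(n):
    """Toy template surface: n points on the unit sphere with outward
    normals (a stand-in for a canonical body mesh in rest pose)."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return v, v  # on a unit sphere, position == outward normal

def shell_scaffold(verts, normals, offsets):
    """Inflate/deflate the template along its normals to build shells,
    then anchor one Gaussian per shell vertex."""
    return np.stack([verts + d * normals for d in offsets])

verts, normals = sphere_template(256)
shells = shell_scaffold(verts, normals, offsets=(-0.05, 0.0, 0.05))

# Gaussian attributes (opacity, color, scale, ...) would be read off a
# CNN-generated texture stack mapped to the shells; random placeholders here.
features = rng.normal(size=shells.shape[:2] + (8,))
```

Because the Gaussians ride on the scaffold, articulating the template (e.g., with a skinned body model) moves every shell and its Gaussians consistently, which is what allows posing at inference time.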
DMV3D: Denoising Multi-View Diffusion using 3D Large Reconstruction Model
We propose DMV3D, a novel 3D generation approach that uses a transformer-based 3D large reconstruction model to denoise multi-view diffusion. Our reconstruction model incorporates a triplane NeRF representation and can denoise noisy multi-view images via NeRF reconstruction and rendering, achieving single-stage 3D generation in about 30 seconds on a single A100 GPU. We train DMV3D on large-scale multi-view image datasets of highly diverse
objects using only image reconstruction losses, without accessing 3D assets. We
demonstrate state-of-the-art results for the single-image reconstruction
problem where probabilistic modeling of unseen object parts is required for
generating diverse reconstructions with sharp textures. We also show
high-quality text-to-3D generation results outperforming previous 3D diffusion
models. Our project website is at: https://justimyhxu.github.io/projects/dmv3d/
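The core single-stage idea, using a reconstructor as the denoiser so every denoised view is rendered from one shared 3D state, can be caricatured in a few lines. The mean-fusion "reconstruction model" and the DDIM-style update below are gross simplifications assumed for illustration, not DMV3D's triplane NeRF:

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruct_and_render(noisy_views):
    """Toy stand-in for the reconstruction model: fuse all noisy views
    into one shared 'scene' (here simply their mean) and re-render it
    to every viewpoint, so the clean estimate is 3D-consistent by
    construction."""
    scene = noisy_views.mean(axis=0)
    return np.broadcast_to(scene, noisy_views.shape)

def denoise_step(x_t, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM-style update that uses the rendered views
    as the x0 estimate instead of a per-image noise predictor."""
    x0 = reconstruct_and_render(x_t)
    eps = (x_t - np.sqrt(alpha_bar_t) * x0) / np.sqrt(1.0 - alpha_bar_t)
    return np.sqrt(alpha_bar_prev) * x0 + np.sqrt(1.0 - alpha_bar_prev) * eps

# 4 toy 8x8 "views" of the same scene, diffused to noise level alpha_bar=0.3.
clean = np.full((4, 8, 8), 0.5)
x_t = np.sqrt(0.3) * clean + np.sqrt(0.7) * rng.normal(size=clean.shape)
x_next = denoise_step(x_t, alpha_bar_t=0.3, alpha_bar_prev=0.6)
```

The design point this sketch isolates is that multi-view consistency is not a loss term but a structural property: the denoiser's output is, by construction, a rendering of a single underlying scene.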
Identification of hub genes associated with hepatitis B virus-related hepatocellular cancer using weighted gene co-expression network analysis and protein-protein interaction network analysis
Background. Chronic hepatitis B virus (HBV) infection is the main cause of hepatocellular carcinoma. However, the mechanisms of HBV-related hepatocellular carcinoma (HCC) progression remain largely unknown. Materials and Methods. RNA-sequencing and clinical data for GSE121248 and GSE17548 were obtained from the Gene Expression Omnibus database. Differentially expressed genes (DEGs) were screened using Sangerbox 3.0. Weighted gene co-expression network analysis (WGCNA) was employed to select core modules and hub genes, together with protein-protein interaction network module analysis. Validation was performed using RNA-sequencing data from cancer and normal tissues of HBV-related HCC patients in The Cancer Genome Atlas liver hepatocellular carcinoma database (TCGA-LIHC). Results. 787 DEGs were identified from GSE121248 and 772 from GSE17548. WGCNA indicated that the black module (99 genes) and the grey module (105 genes) were significantly associated with HBV-related HCC. Gene Ontology analysis showed that the DEGs were related to the regulation of cell movement and adhesion; intrinsic components and external encapsulating structures of the plasma membrane; and signaling receptor binding and calcium ion binding. Kyoto Encyclopedia of Genes and Genomes pathway analysis indicated that cytokine receptors, cytokine-cytokine receptor interactions, and viral protein interactions with cytokines were important in HBV-related HCC. Finally, we further validated 6 key genes, including C7, EGR1, EGR3, FOS, FOSB, and prostaglandin-endoperoxide synthase 2, using TCGA-LIHC. Conclusions. We identified 6 hub genes as candidate biomarkers for HBV-related HCC. These hub genes may play an essential role in HBV-related HCC progression.
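The DEG-screening step of such pipelines typically combines a fold-change cutoff with a significance cutoff. The paper used Sangerbox 3.0; the thresholds, toy expression values, and plain Welch statistic below are illustrative assumptions only (real pipelines also control the false discovery rate):

```python
import math

def welch_t(a, b):
    """Welch's t statistic between two expression samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def screen_degs(expr_tumor, expr_normal, lfc_cut=1.0, t_cut=2.0):
    """Flag genes with |log2 fold change| >= lfc_cut AND |t| >= t_cut."""
    degs = []
    for gene in expr_tumor:
        t_vals, n_vals = expr_tumor[gene], expr_normal[gene]
        lfc = math.log2((sum(t_vals) / len(t_vals)) /
                        (sum(n_vals) / len(n_vals)))
        if abs(lfc) >= lfc_cut and abs(welch_t(t_vals, n_vals)) >= t_cut:
            degs.append(gene)
    return degs

# Toy linear-scale expression values for two of the reported hub genes;
# the numbers are invented purely to show the thresholding logic.
tumor = {"FOS": [2.0, 2.2, 1.9, 2.1], "EGR1": [5.0, 5.1, 4.9, 5.2]}
normal = {"FOS": [8.0, 8.3, 7.8, 8.1], "EGR1": [5.1, 5.0, 5.2, 4.9]}
print(screen_degs(tumor, normal))
```

Here the invented FOS values show a roughly 2-fold down-regulation and pass both cutoffs, while the invented EGR1 values do not change and are filtered out.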
A new species of forest hedgehog (Mesechinus, Erinaceidae, Eulipotyphla, Mammalia) from eastern China
The hedgehog genus Mesechinus (Erinaceidae, Eulipotyphla) currently comprises four species: M. dauuricus, M. hughi, M. miodon, and M. wangi. Except for M. wangi, which is found in southwestern China, the other three species are mainly distributed in northern China and adjacent Mongolia and Russia. From 2018 to 2023, we collected seven Mesechinus specimens from Anhui and Zhejiang provinces, eastern China. Here, we evaluate the taxonomic and phylogenetic status of these specimens by integrating molecular, morphometric, and karyotypic approaches. Our results indicate that the Anhui and Zhejiang specimens are distinct from the four previously recognized species and represent a new species, which we formally describe here as Mesechinus orientalis sp. nov. It is the only Mesechinus species occurring in eastern China and is geographically distant from all known congeners. Morphologically, the new species is most similar to M. hughi, but is distinguishable from it by the combination of its smaller size, shorter spines, and several cranial characteristics. Mesechinus orientalis sp. nov. is sister to the lineage composed of M. hughi and M. wangi, from which it diverged approximately 1.10 Ma.