
    SEGA: Structural Entropy Guided Anchor View for Graph Contrastive Learning

    In contrastive learning, the choice of "view" controls the information that the representation captures and influences the performance of the model. However, leading graph contrastive learning methods generally produce views via random corruption or learning, which can lead to the loss of essential information and the alteration of semantic information. An anchor view that maintains the essential information of the input graph for contrastive learning has hardly been investigated. In this paper, based on the theory of the graph information bottleneck, we deduce the definition of this anchor view; put differently, the anchor view with the essential information of the input graph is supposed to have minimal structural uncertainty. Furthermore, guided by structural entropy, we implement the anchor view, termed SEGA, for graph contrastive learning. We extensively validate the proposed anchor view on various graph classification benchmarks under unsupervised, semi-supervised, and transfer learning, and achieve significant performance boosts compared to state-of-the-art methods. Comment: ICML'2
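    The sketch below is a rough illustration of two ingredients the abstract mentions, not the authors' implementation: the one-dimensional structural entropy of a graph (SEGA itself relies on high-dimensional structural entropy over encoding trees) and a generic InfoNCE-style contrastive objective between anchor-view and augmented-view graph embeddings. All function names are hypothetical.

```python
# Illustrative sketch only; names and choices here are assumptions, not SEGA's code.
import math
import torch
import torch.nn.functional as F


def one_dim_structural_entropy(degrees):
    """One-dimensional structural entropy of a graph from its degree sequence."""
    two_m = float(sum(degrees))                    # 2|E| for an undirected graph
    probs = [d / two_m for d in degrees if d > 0]
    return -sum(p * math.log2(p) for p in probs)


def info_nce(z_anchor, z_view, temperature=0.2):
    """Contrast each anchor-view graph embedding with its augmented view."""
    z_anchor = F.normalize(z_anchor, dim=1)
    z_view = F.normalize(z_view, dim=1)
    logits = z_anchor @ z_view.t() / temperature   # [batch, batch] similarities
    targets = torch.arange(z_anchor.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


# Toy usage: a 4-node path graph and random graph-level embeddings.
print(one_dim_structural_entropy([1, 2, 2, 1]))
print(info_nce(torch.randn(8, 64), torch.randn(8, 64)))
```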

    MagicPony: Learning Articulated 3D Animals in the Wild

    We consider the problem of learning a function that can estimate the 3D shape, articulation, viewpoint, texture, and lighting of an articulated animal like a horse, given a single test image. We present a new method, dubbed MagicPony, that learns this function purely from in-the-wild single-view images of the object category, with minimal assumptions about the topology of deformation. At its core is an implicit-explicit representation of articulated shape and appearance, combining the strengths of neural fields and meshes. In order to help the model understand an object's shape and pose, we distil the knowledge captured by an off-the-shelf self-supervised vision transformer and fuse it into the 3D model. To overcome common local optima in viewpoint estimation, we further introduce a new viewpoint sampling scheme that comes at no added training cost. Compared to prior works, we show significant quantitative and qualitative improvements on this challenging task. The model also demonstrates excellent generalisation in reconstructing abstract drawings and artefacts, despite the fact that it is only trained on real images. Comment: Project Page: https://3dmagicpony.github.io
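    As a rough sketch of the feature-distillation idea described above (an assumption, not MagicPony's code), one can match per-pixel features rendered from the 3D model against patch features from a frozen self-supervised vision transformer inside the object mask; the tensor names and shapes below are hypothetical.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn.functional as F


def feature_distillation_loss(rendered_feats, vit_feats, mask):
    """Cosine-distance loss between rendered and ViT features inside the mask.

    rendered_feats, vit_feats: [B, C, H, W]; mask: [B, 1, H, W] in {0, 1}.
    """
    r = F.normalize(rendered_feats, dim=1)
    t = F.normalize(vit_feats, dim=1)
    cos = (r * t).sum(dim=1, keepdim=True)          # per-pixel cosine similarity
    return ((1.0 - cos) * mask).sum() / mask.sum().clamp(min=1.0)


# Toy usage with random tensors standing in for real renders and ViT features.
B, C, H, W = 2, 16, 32, 32
print(feature_distillation_loss(torch.randn(B, C, H, W),
                                torch.randn(B, C, H, W),
                                torch.ones(B, 1, H, W)))
```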

    Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields

    Capitalizing on the recent advances in image generation models, existing controllable face image synthesis methods are able to generate high-fidelity images with some level of controllability, e.g., controlling the shapes, expressions, textures, and poses of the generated face images. However, these methods focus on 2D image generative models, which are prone to producing inconsistent face images under large expression and pose changes. In this paper, we propose a new NeRF-based conditional 3D face synthesis framework, which enables 3D controllability over the generated face images by imposing explicit 3D conditions from 3D face priors. At its core is a conditional Generative Occupancy Field (cGOF) that effectively enforces the shape of the generated face to commit to a given 3D Morphable Model (3DMM) mesh. To achieve accurate control over the fine-grained 3D face shapes of the synthesized images, we additionally incorporate a 3D landmark loss as well as a volume warping loss into our synthesis algorithm. Experiments validate the effectiveness of the proposed method, which is able to generate high-fidelity face images and shows more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods. Find code and demo at https://keqiangsun.github.io/projects/cgof
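    The following is a minimal sketch of what a 3D landmark loss of the kind mentioned above could look like (an illustration, not the paper's implementation): landmarks predicted on the synthesized face are pulled toward the corresponding vertices of the conditioning 3DMM mesh. The landmark indices and tensor names are hypothetical.

```python
# Illustrative sketch only; not cGOF's actual loss code.
import torch


def landmark_loss(pred_landmarks, mesh_vertices, landmark_idx):
    """L2 distance between predicted 3D landmarks and 3DMM landmark vertices.

    pred_landmarks: [B, K, 3]; mesh_vertices: [B, V, 3]; landmark_idx: [K].
    """
    target = mesh_vertices[:, landmark_idx, :]      # [B, K, 3] 3DMM landmark vertices
    return ((pred_landmarks - target) ** 2).sum(dim=-1).mean()


# Toy usage: 68 landmarks selected from a mesh with 5000 vertices.
B, V, K = 2, 5000, 68
idx = torch.randint(0, V, (K,))
print(landmark_loss(torch.randn(B, K, 3), torch.randn(B, V, 3), idx))
```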

    CGOF++: Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields

    Capitalizing on the recent advances in image generation models, existing controllable face image synthesis methods are able to generate high-fidelity images with some level of controllability, e.g., controlling the shapes, expressions, textures, and poses of the generated face images. However, previous methods focus on controllable 2D image generative models, which are prone to producing inconsistent face images under large expression and pose changes. In this paper, we propose a new NeRF-based conditional 3D face synthesis framework, which enables 3D controllability over the generated face images by imposing explicit 3D conditions from 3D face priors. At its core is a conditional Generative Occupancy Field (cGOF++) that effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh, built on top of EG3D [1], a recent tri-plane-based generative model. To achieve accurate control over the fine-grained 3D face shapes of the synthesized images, we additionally incorporate a 3D landmark loss as well as a volume warping loss into our synthesis framework. Experiments validate the effectiveness of the proposed method, which is able to generate high-fidelity face images and shows more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods. Comment: This article is an extension of the NeurIPS'22 paper arXiv:2206.0836
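    As a loose illustration of mesh conditioning (an assumption, not cGOF++'s actual formulation), one simple way to bias a generative density field toward a conditioning 3DMM mesh is to down-weight density placed far from the mesh surface; the point-to-mesh distances are assumed precomputed and all names below are hypothetical.

```python
# Illustrative sketch only; not the paper's implementation.
import torch


def surface_proximity_penalty(densities, dist_to_mesh, sigma=0.02):
    """Penalize density mass placed far from the 3DMM surface.

    densities: [N] non-negative field values at sample points;
    dist_to_mesh: [N] point-to-mesh distances; sigma: tolerance band.
    """
    weight = 1.0 - torch.exp(-(dist_to_mesh / sigma) ** 2)   # ~0 near the surface
    return (densities * weight).mean()


# Toy usage with random samples standing in for points along camera rays.
print(surface_proximity_penalty(torch.rand(1024), torch.rand(1024) * 0.1))
```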

    Sparse Dense Fusion for 3D Object Detection

    With the prevalence of multimodal learning, camera-LiDAR fusion has gained popularity in 3D object detection. Although multiple fusion approaches have been proposed, they can be classified as either sparse-only or dense-only based on the feature representation in the fusion module. In this paper, we analyze them in a common taxonomy and observe two challenges: 1) sparse-only solutions preserve the 3D geometric prior yet lose rich semantic information from the camera, and 2) dense-only alternatives retain semantic continuity but miss the accurate geometric information from LiDAR. By analyzing these two formulations, we conclude that the information loss is inevitable due to their design schemes. To compensate for the information loss in either manner, we propose Sparse Dense Fusion (SDF), a complementary framework that incorporates both sparse-fusion and dense-fusion modules via the Transformer architecture. This simple yet effective sparse-dense fusion structure enriches semantic texture and exploits spatial structure information simultaneously. Through our SDF strategy, we assemble two popular methods with moderate performance, outperform the baseline by 4.3% mAP and 2.5% NDS, and rank first on the nuScenes benchmark. Extensive ablations demonstrate the effectiveness of our method and empirically align with our analysis.
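    The sketch below illustrates the general idea of running a sparse (query-based) and a dense (BEV-style) fusion branch side by side and merging their outputs; it uses hypothetical module and tensor names and is not the SDF codebase.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn


class ToySparseDenseFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # Sparse branch: object queries attend to flattened camera features.
        self.sparse_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # Dense branch: fuse stacked LiDAR and camera BEV maps channel-wise.
        self.dense_conv = nn.Conv2d(2 * dim, dim, kernel_size=3, padding=1)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, queries, cam_tokens, lidar_bev, cam_bev):
        # queries: [B, Q, C]; cam_tokens: [B, N, C]; *_bev: [B, C, H, W]
        sparse_out, _ = self.sparse_attn(queries, cam_tokens, cam_tokens)
        dense = self.dense_conv(torch.cat([lidar_bev, cam_bev], dim=1))
        dense_pooled = dense.mean(dim=(2, 3))                 # [B, C] global dense context
        dense_per_query = dense_pooled.unsqueeze(1).expand_as(sparse_out)
        return self.merge(torch.cat([sparse_out, dense_per_query], dim=-1))


# Toy usage with random tensors in place of real detector features.
m = ToySparseDenseFusion()
out = m(torch.randn(2, 10, 128), torch.randn(2, 200, 128),
        torch.randn(2, 128, 16, 16), torch.randn(2, 128, 16, 16))
print(out.shape)   # torch.Size([2, 10, 128])
```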

    Evolutionary origin of a tetraploid allium species in the Qinghai-Tibet Plateau

    Extinct taxa may be detectable if they were ancestors to extant hybrid species, which retain their genetic signature. In this study, we combined phylogenomics, population genetics, and genomic and fluorescence in situ hybridization (GISH and FISH) analyses to trace the origin of the alpine tetraploid Allium tetraploideum (2n = 4x = 32), one of the five known members of the subgenus Cyathophora. We found that A. tetraploideum was an obvious allotetraploid derived from ancestors including at least two closely related diploid species, A. farreri and A. cyathophorum, from which it differs in multiple ecological and genomic attributes. However, these two species cannot account for the full genome of A. tetraploideum, indicating that at least one extinct diploid is also involved in its ancestry. Furthermore, A. tetraploideum appears to have arisen via homoploid hybrid speciation (HHS) from two extinct allotetraploid parents, which in turn derived from the aforementioned diploids. Other modes of origin were possible, but all were even more complex and involved additional extinct ancestors. Our study highlights how some polyploid species may have very complex origins, involving both HHS and polyploid speciation as well as extinct ancestors.

    Protection and Development of Traditional Villages from the Perspective of Territorial Spatial Planning: Taking Baisi Village, Henan Province as an Example

    Over the course of historical development, the history and culture of villages have constantly evolved, and villages have become powerful carriers of cultural heritage. Taking Baisi Village, Xun County, Henan Province as an example, this article studies the basic principles and development mode of traditional village protection from the aspects of traditional space renovation, architectural features, and folk culture protection, with a view to providing a reference for the protection and development of other traditional villages.