
    The Halo Occupation Distribution of SDSS Quasars

    We present an estimate of the projected two-point correlation function (2PCF) of quasars in the Sloan Digital Sky Survey (SDSS) over the full range of one- and two-halo scales, 0.02-120 Mpc/h. This was achieved by combining data from SDSS DR7 on large scales and Hennawi et al. (2006; with appropriate statistical corrections) on small scales. Our combined clustering sample is the largest spectroscopic quasar clustering sample to date, containing ~48,000 quasars in the redshift range 0.4 < z < 2.5 with median redshift 1.4. We interpret these precise 2PCF measurements within the halo occupation distribution (HOD) framework and constrain the occupation functions of central and satellite quasars in dark matter halos. In order to explain the small-scale clustering, the HOD modeling requires that a small fraction of z ~ 1.4 quasars, f_sat = (7.4 +/- 1.4) x 10^-4, be satellites in dark matter halos. At z ~ 1.4, the median masses of the host halos of central and satellite quasars are constrained to be M_cen = (4.1 +0.3/-0.4) x 10^12 Msun/h and M_sat = (3.6 +0.8/-1.0) x 10^14 Msun/h, respectively. To investigate the redshift evolution of the quasar-halo relationship, we also perform HOD modeling of the projected 2PCF measured by Shen et al. (2007) for SDSS quasars with median redshift 3.2. We find tentative evidence for an increase in the mass scale of quasar host halos: the inferred median mass of halos hosting central quasars at z ~ 3.2 is M_cen = (14.1 +5.8/-6.9) x 10^12 Msun/h. The cutoff profiles of the mean occupation functions of central quasars reveal that quasar luminosity is more tightly correlated with halo mass at higher redshifts. The average quasar duty cycle around the median host halo mass is inferred to be f_q = (7.3 +0.6/-1.5) x 10^-4 at z ~ 1.4 and f_q = (8.6 +20.4/-7.2) x 10^-2 at z ~ 3.2. We discuss the implications of our results for quasar evolution and quasar-galaxy co-evolution.

    Comment: matches the ApJ published version.
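    The mean occupation functions mentioned above can be illustrated with a minimal sketch of a standard HOD parameterization: a softened step in halo mass for centrals and a power law for satellites. The functional forms and parameter values below are illustrative textbook choices, not the fits reported in this abstract.

```python
import math

def n_central(M, log_Mmin=12.6, sigma=0.5):
    # Mean occupation of central quasars: a softened step (error function)
    # in log halo mass. Illustrative parameters, not the paper's fit.
    return 0.5 * (1 + math.erf((math.log10(M) - log_Mmin) / sigma))

def n_satellite(M, M1=1e14, alpha=1.0):
    # Mean occupation of satellites: a power law above a characteristic mass M1.
    return (M / M1) ** alpha

# Example: mean occupation numbers for a 10^13 Msun/h halo
M = 1e13
print(n_central(M), n_satellite(M))
```

    A duty-cycle factor such as f_q would multiply n_central in a quasar-specific model, suppressing the fraction of halos hosting an active quasar at a given time.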

    Crystal structure of human muscle creatine kinase

    This is the publisher's version, also available electronically from http://scripts.iucr.org. The crystal structure of human muscle creatine kinase has been determined by the molecular-replacement method and refined at 3.5 Å resolution. The structures of both the monomer and the dimer closely resemble those of the other known structures in the creatine kinase family. Two types of dimers, one with a non-crystallographic twofold symmetry axis and the other with a crystallographic twofold symmetry axis, were found to occur simultaneously in the crystal. These dimers form an infinite 'double-helix'-like structure along an unusually long crystallographic 3_1 axis.

    MLF-DET: Multi-Level Fusion for Cross-Modal 3D Object Detection

    In this paper, we propose a novel and effective Multi-Level Fusion network, named MLF-DET, for high-performance cross-modal 3D object DETection, which integrates feature-level and decision-level fusion to fully utilize the information in the image. For the feature-level fusion, we present the Multi-scale Voxel Image fusion (MVI) module, which densely aligns multi-scale voxel features with image features. For the decision-level fusion, we propose the lightweight Feature-cued Confidence Rectification (FCR) module, which further exploits image semantics to rectify the confidence of detection candidates. In addition, we design an effective data augmentation strategy termed Occlusion-aware GT Sampling (OGS) to retain more sampled objects in the training scenes and thus reduce overfitting. Extensive experiments on the KITTI dataset demonstrate the effectiveness of our method. Notably, on the highly competitive KITTI car 3D object detection benchmark, our method reaches 82.89% moderate AP, achieving state-of-the-art performance without bells and whistles.
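    The decision-level idea of rectifying detection confidence with image semantics can be sketched in simplified form. This is a hypothetical toy version, not the paper's FCR module: it merely blends each candidate's LiDAR detection score with an image-derived semantic score, whereas FCR learns the rectification from features.

```python
import numpy as np

def rectify_confidence(det_scores, image_cues, weight=0.5):
    # Toy decision-level fusion: blend each detection candidate's
    # confidence with a hypothetical image-based semantic score in [0, 1].
    det_scores = np.asarray(det_scores, dtype=float)
    image_cues = np.asarray(image_cues, dtype=float)
    return (1 - weight) * det_scores + weight * image_cues

# Two candidates: the image cue boosts trust in the first, lowers the second
print(rectify_confidence([0.9, 0.4], [0.8, 0.1]))
```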

    De novo protein design using geometric vector field networks

    Innovations like protein diffusion have enabled significant progress in de novo protein design, a vital topic in the life sciences. These methods typically depend on protein structure encoders to model residue backbone frames, a setting in which atoms do not exist. Most prior encoders rely on atom-wise features, such as angles and distances between atoms, which are not available in this context. Thus far, only a few simple encoders, such as IPA, have been proposed for this scenario, leaving frame modeling as a bottleneck. In this work, we propose the Vector Field Network (VFN), which enables network layers to perform learnable vector computations between coordinates of frame-anchored virtual atoms, thus achieving a higher capacity for modeling frames. The vector computation operates in a manner similar to a linear layer, with each input channel receiving 3D virtual atom coordinates instead of scalar values. The multiple feature vectors output by the vector computation are then used to update the residue representations and virtual atom coordinates via attention aggregation. Remarkably, VFN also excels at modeling both frames and atoms, as real atoms can be treated as virtual atoms, positioning VFN as a potential universal encoder. In protein diffusion (frame modeling), VFN exhibits an impressive performance advantage over IPA, excelling in terms of both designability (67.04% vs. 53.58%) and diversity (66.54% vs. 51.98%). In inverse folding (frame and atom modeling), VFN outperforms the previous SoTA model, PiFold (54.7% vs. 51.66%), on sequence recovery rate. We also propose a method of equipping VFN with the ESM model, which surpasses the previous ESM-based SoTA, LM-Design, by a substantial margin (62.67% vs. 55.65%).
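    The "linear layer over 3D coordinates" idea can be sketched minimally: instead of mixing scalar channels, a learned weight matrix mixes a set of 3D points into a new set of 3D feature vectors. The shapes and the absence of nonlinearity here are illustrative simplifications, not VFN's exact formulation.

```python
import numpy as np

def vector_linear(atoms, W):
    # A linear layer over vectors rather than scalars: each of the M output
    # feature vectors is a learned linear combination of the N input
    # virtual-atom coordinates expressed in the residue's local frame.
    # atoms: (N, 3) virtual atom coordinates
    # W:     (M, N) learnable mixing weights
    return W @ atoms  # -> (M, 3)

rng = np.random.default_rng(0)
atoms = rng.normal(size=(4, 3))   # 4 virtual atoms for one residue
W = rng.normal(size=(8, 4))       # 8 output channels
out = vector_linear(atoms, W)
print(out.shape)  # (8, 3)
```

    In the full network, the resulting feature vectors would then feed attention aggregation to update residue representations and the virtual atom positions themselves.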

    COST-EFF: Collaborative Optimization of Spatial and Temporal Efficiency with Slenderized Multi-exit Language Models

    Transformer-based pre-trained language models (PLMs) mostly suffer from excessive overhead despite their advanced capacity. For resource-constrained devices, there is an urgent need for a spatially and temporally efficient model that retains the major capacity of PLMs. However, existing statically compressed models are unaware of the diverse complexities of input instances, potentially resulting in redundancy for simple inputs and inadequacy for complex ones. Also, miniature models with early exiting face a trade-off between making early predictions and serving the deeper layers. Motivated by these considerations, we propose a collaborative optimization for PLMs that integrates static model compression and dynamic inference acceleration. Specifically, the PLM is slenderized in width while the depth remains intact, complementing layer-wise early exiting to speed up inference dynamically. To address the early-exiting trade-off, we propose a joint training approach that calibrates slenderization and preserves contributive structures for each exit, instead of only the final layer. Experiments conducted on the GLUE benchmark verify the Pareto optimality of our approach at high compression and acceleration rates, with 1/8 the parameters and 1/19 the FLOPs of BERT.

    Comment: Accepted at the EMNLP 2022 main conference.
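    Layer-wise early exiting, as used above, can be sketched in a few lines: after each layer, an exit classifier predicts, and inference stops as soon as its confidence clears a threshold. The identity layers and fixed-output exit heads below are toy stand-ins, not the paper's model.

```python
def early_exit_forward(x, layers, exits, threshold=0.9):
    # Run layers in order; after each one, consult its exit classifier.
    # If the max class probability clears the threshold, stop early and
    # report the depth reached, skipping all deeper layers.
    for depth, (layer, exit_head) in enumerate(zip(layers, exits), start=1):
        x = layer(x)
        probs = exit_head(x)
        if max(probs) >= threshold:
            return probs, depth
    return probs, depth  # fell through to the final exit

# Toy stand-ins: identity layers, exit heads with rising confidence
layers = [lambda x: x] * 3
exits = [lambda x: [0.6, 0.4], lambda x: [0.95, 0.05], lambda x: [0.99, 0.01]]
print(early_exit_forward("input", layers, exits))  # exits at depth 2
```

    The trade-off the abstract describes arises because an exit head trained too aggressively can starve the deeper layers of useful intermediate representations, which is what the joint training approach calibrates.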

    VisorGPT: Learning Visual Prior via Generative Pre-Training

    Various stuff and things in visual data possess specific traits, which can be learned by deep neural networks and are implicitly represented as a visual prior, e.g., object location and shape, in the model. Such a prior potentially impacts many vision tasks. For example, in conditional image synthesis, spatial conditions that fail to adhere to the prior can result in visually inaccurate synthetic results. This work aims to learn the visual prior explicitly and enable customized sampling from it. Inspired by advances in language modeling, we propose to learn the Visual prior via Generative Pre-Training, dubbed VisorGPT. By discretizing the visual locations of objects, e.g., bounding boxes, human poses, and instance masks, into sequences, VisorGPT can model the visual prior through likelihood maximization. In addition, prompt engineering is investigated to unify various visual locations and enable customized sampling of sequential outputs from the learned prior. Experimental results demonstrate that VisorGPT can effectively model the visual prior, which can be employed in many vision tasks, such as customizing accurate human poses for conditional image synthesis models like ControlNet. Code will be released at https://github.com/Sierkinhane/VisorGPT.

    Comment: Project web-page: https://sierkinhane.github.io/visor-gpt
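    The discretization step above can be sketched for the bounding-box case: continuous coordinates are quantized into integer bins so that spatial layout becomes a token sequence a generative language model can maximize the likelihood of. The bin count and serialization order below are illustrative choices, not necessarily VisorGPT's.

```python
def boxes_to_tokens(boxes, num_bins=256, img_size=512):
    # Quantize each (x1, y1, x2, y2) box into integer bins in
    # [0, num_bins - 1], serialized corner by corner into one sequence.
    tokens = []
    for x1, y1, x2, y2 in boxes:
        for c in (x1, y1, x2, y2):
            tokens.append(min(int(c / img_size * num_bins), num_bins - 1))
    return tokens

print(boxes_to_tokens([(0, 0, 256, 512)]))  # [0, 0, 128, 255]
```

    Sequences like this, with prompts identifying the annotation type, are what a GPT-style model can be pre-trained on to capture the prior over plausible object layouts.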