
    Searching for New Physics in Rare $B \to \tau$ Decays

    The rare decays $B^- \to \tau \bar\nu$, $B \to \tau^+ \tau^-$, $b \to X \nu \bar\nu$ and $b \to X \tau^+ \tau^-$ all contain third-generation leptons in the final state, and hence are sensitive to new physics that couples more strongly to the third family. We present model-independent expressions for these decays that can be useful for studying several types of new physics effects. We concentrate on supersymmetric models without R-parity and without lepton number. We also assume a horizontal U(1) symmetry with fermion horizontal charges chosen to explain the magnitude of the fermion masses and quark mixing angles. This allows us to estimate the order of magnitude of the new effects, and to derive numerical predictions for the various decay rates and for the forward-backward asymmetry and the $\tau$ polarization components measurable in $b \to X \tau^+ \tau^-$. In some cases the branching ratios are enhanced by more than one order of magnitude, making their detection at upcoming B-factories foreseeable. We also discuss how a measurement of asymmetries in $b \to X \tau^+ \tau^-$ can be crucial in distinguishing between different sources of new physics.
    Comment: 30 pages, LaTeX, 8 ps-figures (uses epsfig.sty). Equations (2.7), (3.10), (3.14), (3.18), (3.19), (3.20), and (4.6) corrected; conclusions unmodified. To be published in Phys. Rev.
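    For orientation, the benchmark that such enhancements are compared against is the standard tree-level Standard Model rate for the purely leptonic mode (a textbook expression, quoted here for context rather than taken from the paper):

        \mathrm{BR}(B^- \to \tau^- \bar\nu_\tau) = \frac{G_F^2}{8\pi}\, f_B^2\, |V_{ub}|^2\, m_B\, m_\tau^2 \left(1 - \frac{m_\tau^2}{m_B^2}\right)^{2} \tau_B

    where $f_B$ is the B-meson decay constant and $\tau_B$ its lifetime; any new short-distance contribution of the kind discussed above rescales this rate directly.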

    DINAR: Diffusion Inpainting of Neural Textures for One-Shot Human Avatars

    We present DINAR, an approach for creating realistic rigged full-body avatars from single RGB images. As in previous work, our method combines neural textures with the SMPL-X body model to achieve photo-realistic avatars that remain easy to animate and fast to infer. To restore the texture, we use a latent diffusion model and show how such a model can be trained in the neural texture space. The diffusion model allows us to realistically reconstruct large unseen regions, such as the back of a person, given only the frontal view. The models in our pipeline are trained using 2D images and videos only. In our experiments, the approach achieves state-of-the-art rendering quality and good generalization to new poses and viewpoints; in particular, it improves on the state of the art on the SnapshotPeople public benchmark.
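    To make the texture-inpainting step concrete, here is a minimal Python sketch of masked sampling with a latent diffusion model, where the observed (frontal) texels are re-imposed at every reverse step, in the spirit of RePaint-style inpainting. The linear noise schedule and the paste-back rule are generic assumptions, not DINAR's actual procedure, and eps_model stands in for a trained noise-prediction network.

        import torch

        def make_schedule(T=1000):
            # Linear beta schedule (a standard DDPM choice; an assumption here).
            betas = torch.linspace(1e-4, 0.02, T)
            alphas = 1.0 - betas
            abar = torch.cumprod(alphas, dim=0)
            return betas, alphas, abar

        @torch.no_grad()
        def inpaint_texture(z_known, mask, eps_model, T=1000):
            """Masked reverse diffusion in a latent texture space (a sketch,
            not DINAR's exact sampler). z_known: latents of the observed region;
            mask: 1 = observed, 0 = to generate; eps_model(z, t) predicts noise."""
            betas, alphas, abar = make_schedule(T)
            z = torch.randn_like(z_known)                   # start from pure noise
            for t in reversed(range(T)):
                # Forward-diffuse the known texels to the current noise level ...
                z_obs = abar[t].sqrt() * z_known + (1 - abar[t]).sqrt() * torch.randn_like(z_known)
                # ... and overwrite the observed region so generation stays consistent.
                z = mask * z_obs + (1 - mask) * z
                # One reverse DDPM step on the full latent.
                eps = eps_model(z, t)
                mean = (z - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
                z = mean + (betas[t].sqrt() * torch.randn_like(z) if t > 0 else 0.0)
            return mask * z_known + (1 - mask) * z          # paste back exact known texels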

    SynBody: Synthetic Dataset with Layered Human Models for 3D Human Perception and Modeling

    Synthetic data has emerged as a promising source for 3D human research, as it offers low-cost access to large-scale human datasets. To advance the diversity and annotation quality of human models, we introduce a new synthetic dataset, SynBody, with three appealing features: 1) a clothed parametric human model that can generate a diverse range of subjects; 2) a layered human representation that naturally provides high-quality 3D annotations to support multiple tasks; 3) a scalable system for producing realistic data to facilitate real-world tasks. The dataset comprises 1.2M images with corresponding accurate 3D annotations, covering 10,000 human body models, 1,187 actions, and various viewpoints. It includes two subsets, for human pose and shape estimation and for human neural rendering. Extensive experiments on SynBody indicate that it substantially enhances both SMPL and SMPL-X estimation. Furthermore, the layered annotations offer a valuable training resource for investigating Human Neural Radiance Fields (NeRF).
    Comment: Accepted by ICCV 2023. Project webpage: https://synbody.github.io

    Progressive refinement rendering of implicit surfaces

    The visualisation of implicit surfaces can be an inefficient task when such surfaces are complex and highly detailed. Visualising a surface by first converting it to a polygon mesh may lead to an excessive polygon count, while visualising it by direct ray casting is often slow. In this paper we present a progressive refinement renderer for implicit surfaces that are Lipschitz continuous. The renderer first displays a low-resolution estimate of what the final image is going to be and, as the computation progresses, increases the quality of this estimate at an interactive frame rate. This provides a quick previewing facility that significantly shortens the design cycle of a new and complex implicit surface. The renderer is also capable of completing an image faster than a conventional implicit surface rendering algorithm based on ray casting.
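    The Lipschitz condition is what makes safe ray stepping possible: if |f(p) - f(q)| <= L * |p - q|, then a point where the field value is f(p) is at least |f(p)|/L away from the surface f = 0. Below is a minimal sphere-tracing sketch in Python of this kind of Lipschitz ray casting; it illustrates the principle only and is not the paper's progressive-refinement renderer.

        import math

        def sphere_trace(f, origin, direction, L=1.0, t_max=100.0, eps=1e-4):
            """March a ray p(t) = origin + t*direction against f(p) = 0.
            If f is Lipschitz with constant L, then |f(p)|/L is a provably
            safe step: the surface cannot lie closer to p than that."""
            t = 0.0
            while t < t_max:
                p = tuple(o + t * d for o, d in zip(origin, direction))
                v = f(p)
                if abs(v) < eps:
                    return t              # hit: surface reached within tolerance
                t += abs(v) / L           # largest step guaranteed not to overshoot
            return None                   # no intersection within t_max

        # Example: the unit sphere, f(p) = |p| - 1, is 1-Lipschitz.
        hit = sphere_trace(lambda p: math.sqrt(sum(c * c for c in p)) - 1.0,
                           origin=(0.0, 0.0, -3.0), direction=(0.0, 0.0, 1.0))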

    VolumeEVM: A new surface/volume integrated model

    Volume visualization is a very active research area in the field of scientific visualization. The Extreme Vertices Model (EVM) has proven to be a complete intermediate model for visualizing and manipulating volume data using a surface rendering approach. However, integrating the advantages of surface rendering with the superior visual exploration offered by volume rendering would produce a very complete visualization and editing system for volume data. We therefore define an enhanced EVM-based model that incorporates the volumetric information required to achieve a nearly direct volume visualization technique. VolumeEVM maintains the same EVM-based data structure plus a sorted list of density values corresponding to the interior voxels of the EVM-encoded volume of interest (VOI). A function relating the interior voxels of the EVM to this set of densities therefore had to be defined. This report presents the definition of this new surface/volume integrated model, based on the well-known EVM encoding, and proposes implementations of the main software-based direct volume rendering techniques through the proposed model.
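    As a rough illustration of the surface/volume pairing just described (the report defines the actual layout; the canonical scan order here is an assumption), interior densities can be stored in a flat array ordered by a fixed traversal, so that the voxel-to-density relation becomes a rank lookup:

        import numpy as np

        class VolumeEVMSketch:
            """Hypothetical pairing of a boundary encoding with a flat density
            list; not the report's actual data structure. interior_mask marks
            the VOI's interior voxels (in the real model this information is
            derivable from the EVM itself)."""
            def __init__(self, interior_mask, densities):
                # rank[v] = position of voxel v in the C-order scan of
                # interior voxels; -1 marks exterior voxels.
                self.rank = np.full(interior_mask.shape, -1, dtype=np.int64)
                self.rank[interior_mask] = np.arange(int(interior_mask.sum()))
                assert len(densities) == int(interior_mask.sum())
                self.densities = densities    # stored in the same scan order

            def density(self, x, y, z):
                r = self.rank[x, y, z]
                return self.densities[r] if r >= 0 else None   # None: outside the VOI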

    Adaptive transfer functions: improved multiresolution visualization of medical models

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00371-016-1253-9
    Medical datasets are continuously increasing in size. Although larger models may be available for certain research purposes, in common clinical practice the models are usually up to 512x512x2000 voxels. These resolutions exceed the capabilities of conventional GPUs, the ones typically found in medical doctors' desktop PCs. Commercial solutions typically reduce the data by downsampling the dataset iteratively until it fits the available target specifications. The data loss reduces the visualization quality, and this is not commonly compensated for by other actions that might alleviate its effects. In this paper, we propose adaptive transfer functions, an algorithm that improves the transfer function in downsampled multiresolution models so that the quality of the renderings is greatly improved. The technique is simple and lightweight, and it is suitable not only for visualizing huge models that would not fit in a GPU, but also for rendering not-so-large models on mobile GPUs, which are less capable than their desktop counterparts. Moreover, it can also be used to accelerate rendering frame rates by using lower levels of the multiresolution hierarchy while still maintaining high-quality results in a focus-and-context approach. We also show an evaluation of these results based on perceptual metrics.
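    The mismatch that such an adaptation must correct can be seen in a small sketch: classifying a downsampled density (filter-then-classify) is not the same as filtering the classified colours (classify-then-filter). The Python sketch below contrasts the two; it is a generic illustration of the problem, not the paper's algorithm, and tf stands for any vectorized density-to-RGBA transfer function.

        import numpy as np

        def downsample_then_classify(volume, tf, f=2):
            """The lossy baseline: average densities into blocks, then classify."""
            s = tuple(d // f for d in volume.shape)
            v = volume[:s[0]*f, :s[1]*f, :s[2]*f]
            v = v.reshape(s[0], f, s[1], f, s[2], f).mean(axis=(1, 3, 5))
            return tf(v)                        # RGBA of the averaged density

        def classify_then_downsample(volume, tf, f=2):
            """The reference: classify at full resolution, then average RGBA.
            A transfer function adapted to the low-resolution model aims to
            approximate this result without storing the full-resolution data."""
            rgba = tf(volume)                   # shape (X, Y, Z, 4)
            s = tuple(d // f for d in volume.shape)
            rgba = rgba[:s[0]*f, :s[1]*f, :s[2]*f]
            blocks = rgba.reshape(s[0], f, s[1], f, s[2], f, 4)
            return blocks.mean(axis=(1, 3, 5))  # low-resolution RGBA volume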