203 research outputs found

    Borel lemma: geometric progression vs. Riemann zeta-function

    In the proof of the classical Borel lemma [6] given by Hayman [11], any positive increasing continuous function $T(r)$ satisfies $T\big(r+\frac{1}{T(r)}\big)<2T(r)$ outside a possible exceptional set of linear measure $2$. This result is pivotal in the value distribution theory of entire and meromorphic functions; accordingly, exceptional sets appear throughout Nevanlinna theory, most of them concerning the second main theorem. In this work, we show that $T(r)$ satisfies the smaller inequality $T\big(r+\frac{1}{T(r)}\big)<\big(\sqrt{T(r)}+1\big)^2<2T(r)$ outside a possible exceptional set of linear measure $\zeta(2)=\frac{\pi^2}{6}<2$, where $\zeta(s)$ is the Riemann zeta-function. The sharp-form second main theorem of Hinkkanen [12] is used, and a comparison with Nevanlinna [17] and an extension to Arias [2] are given.
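    The two purely numerical claims in the abstract are easy to sanity-check: the refined bound $(\sqrt{T}+1)^2$ really is smaller than $2T$ once $T$ is large enough (a short computation shows the crossover is at $T=(\sqrt{2}+1)^2\approx 5.83$), and $\zeta(2)=\pi^2/6\approx 1.64$ is indeed below the classical measure $2$. A minimal sketch (an illustration, not part of the paper):

```python
import math

def improved_bound(T):
    # The refined Borel-type bound (sqrt(T) + 1)^2 from the abstract.
    return (math.sqrt(T) + 1) ** 2

# (sqrt(T)+1)^2 < 2T  <=>  (sqrt(2)-1)*sqrt(T) > 1  <=>  T > (sqrt(2)+1)^2 ~ 5.83
threshold = (math.sqrt(2) + 1) ** 2
for T in [6, 10, 100, 1e6]:
    assert improved_bound(T) < 2 * T

# Partial sums of zeta(2) = sum 1/n^2 converge to pi^2/6 < 2,
# the linear measure of the exceptional set in the refined lemma.
zeta2 = sum(1.0 / n**2 for n in range(1, 100001))
print(abs(zeta2 - math.pi**2 / 6) < 1e-4)  # True
print(math.pi**2 / 6 < 2)                  # True
```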

    Ultrafast phonon and spin dynamics studies in magnetic heterostructure systems


    Spintronic Sources of Ultrashort Terahertz Electromagnetic Pulses

    Spintronic terahertz emitters are novel, broadband, and efficient sources of terahertz radiation that emerged at the intersection of ultrafast spintronics and terahertz photonics. They are based on efficient spin-current generation, spin-to-charge-current conversion, and current-to-field conversion at terahertz rates. In this review, we address recent developments and applications, the current understanding of the underlying physical processes, and the future challenges and perspectives of broadband spintronic terahertz emitters.
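    The conversion chain sketched in the abstract (spin current → charge current → emitted field) can be caricatured in a few lines. A toy sketch, assuming an ultrashort Gaussian spin-current burst, an illustrative spin Hall angle `theta_SH` for the spin-to-charge step, and a far field proportional to the time derivative of the charge current; all names and numbers here are assumptions for illustration, not values from the review:

```python
import numpy as np

# Toy model of a spintronic THz emitter: an ultrafast spin current j_s(t)
# is converted into a transverse charge current j_c = theta_SH * j_s
# (inverse spin Hall effect); the radiated far field is ~ d(j_c)/dt.
theta_SH = 0.1                           # illustrative spin Hall angle
t = np.linspace(-1e-12, 1e-12, 2001)     # time axis, +-1 ps
tau = 100e-15                            # 100 fs spin-current burst
j_s = np.exp(-(t / tau) ** 2)            # normalized Gaussian spin current
j_c = theta_SH * j_s                     # spin-to-charge conversion
E_thz = np.gradient(j_c, t)              # current-to-field conversion

# The derivative of a Gaussian is a single-cycle, zero-mean transient:
# one positive lobe before t = 0, one negative lobe after.
print(np.argmax(E_thz) < len(t) // 2)    # True: positive lobe at t < 0
```

The single-cycle shape is why such emitters are intrinsically broadband: a ~100 fs current burst has spectral content extending over several THz.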

    HumanRef: Single Image to 3D Human Generation via Reference-Guided Diffusion

    Generating a 3D human model from a single reference image is challenging because it requires inferring textures and geometries in invisible views while maintaining consistency with the reference image. Previous methods utilizing 3D generative models are limited by the availability of 3D training data. Optimization-based methods that lift text-to-image diffusion models to 3D generation often fail to preserve the texture details of the reference image, resulting in inconsistent appearances in different views. In this paper, we propose HumanRef, a 3D human generation framework from a single-view input. To ensure the generated 3D model is photorealistic and consistent with the input image, HumanRef introduces a novel method called reference-guided score distillation sampling (Ref-SDS), which effectively incorporates image guidance into the generation process. Furthermore, we introduce region-aware attention to Ref-SDS, ensuring accurate correspondence between different body regions. Experimental results demonstrate that HumanRef outperforms state-of-the-art methods in generating 3D clothed humans with fine geometry, photorealistic textures, and view-consistent appearances.
    Homepage: https://eckertzhang.github.io/HumanRef.github.io
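    Ref-SDS builds on vanilla score distillation sampling, where a rendering of the 3D model is diffused with noise and the difference between the denoiser's noise prediction and the true noise drives the update. The abstract does not specify how the reference image conditions the denoiser, so the sketch below uses a placeholder `dummy_denoiser` and a toy noise schedule; it illustrates only the generic SDS step, not HumanRef's actual Ref-SDS:

```python
import numpy as np

rng = np.random.default_rng(0)

def dummy_denoiser(x_t, t, ref_image):
    # Placeholder for a diffusion model's noise prediction conditioned on
    # the reference image (the real Ref-SDS conditioning is not described
    # in the abstract); here it merely nudges x_t toward the reference.
    return (x_t - ref_image) * 0.5

def sds_gradient(render, ref_image, t=0.5, w=1.0):
    # Generic score distillation step: diffuse the rendered image, query
    # the denoiser, and use (predicted noise - injected noise) as the
    # gradient on the rendered pixels (backpropagated to the 3D model).
    eps = rng.standard_normal(render.shape)
    alpha = 1.0 - t                              # toy noise schedule
    x_t = np.sqrt(alpha) * render + np.sqrt(1.0 - alpha) * eps
    eps_pred = dummy_denoiser(x_t, t, ref_image)
    return w * (eps_pred - eps)

ref = np.zeros((8, 8, 3))        # hypothetical reference image
render = np.ones((8, 8, 3))      # hypothetical current rendering
g = sds_gradient(render, ref)
print(g.shape)                   # (8, 8, 3): per-pixel gradient
```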