
    Primordial Black Holes from Sound Speed Resonance during Inflation

    We report on a novel resonance effect in primordial density perturbations, arising from a sound speed parameter with oscillatory behavior, which can generically lead to the formation of primordial black holes in the early Universe. A general inflaton field seeds primordial density fluctuations, and their propagation is governed by the sound speed squared. If this parameter oscillates for a period during inflation, a significant non-perturbative resonance of the inflaton field fluctuations takes place around a critical length scale, resulting in significant peaks in the primordial power spectrum. By virtue of this robust mechanism, primordial black holes with a specific mass function can be produced with sufficient abundance to account for dark matter over sizable parameter ranges.
    Comment: 6 pages, 4 figures; v2: figures replotted with corrections, analysis extended, version accepted by Phys. Rev. Lett.
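The resonance mechanism can be sketched schematically; the oscillatory parametrization below is illustrative ($\xi$, the oscillation scale $k_*$, and conformal time $\tau$ are assumed notation, not necessarily the paper's exact form):

```latex
u_k'' + \left( c_s^2 k^2 - \frac{z''}{z} \right) u_k = 0,
\qquad
c_s^2(\tau) = 1 - 2\xi \left[ 1 - \cos(2 k_* \tau) \right].
```

With an oscillating $c_s^2$, the mode equation takes a Mathieu-like form: modes with $k \simeq k_*$ fall into the instability band and grow exponentially, producing the claimed peak in the primordial power spectrum at that critical scale.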

    Is $f_1(1420)$ the partner of $f_1(1285)$ in the $^3P_1$ $q\bar{q}$ nonet?

    Based on a $2\times 2$ mass matrix, the mixing angle of the axial-vector states $f_1(1420)$ and $f_1(1285)$ is determined to be $51.5^{\circ}$, and theoretical results for the decay and production of the two states are presented. These results are in good agreement with present experimental data, which suggests that $f_1(1420)$ can be assigned as the partner of $f_1(1285)$ in the $^3P_1$ $q\bar{q}$ nonet. We also suggest that the existence of $f_1(1510)$ needs further experimental confirmation.
    Comment: LaTeX, 6 pages, to be published in Chin. Phys. Lett.
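For orientation, diagonalizing a generic $2\times 2$ mass matrix yields the mixing angle as follows (a schematic sketch; the basis states $n\bar n$, $s\bar s$ and matrix elements are generic placeholders, not the paper's fitted values, and sign conventions for $\theta$ vary):

```latex
M = \begin{pmatrix} M_{n\bar n} & \Delta \\ \Delta & M_{s\bar s} \end{pmatrix},
\qquad
\begin{pmatrix} f_1(1285) \\ f_1(1420) \end{pmatrix}
= \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} n\bar n \\ s\bar s \end{pmatrix},
\qquad
\tan 2\theta = \frac{2\Delta}{M_{s\bar s} - M_{n\bar n}}.
```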

    Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading

    Brain tumors are among the most fatal cancers worldwide and are common in children and the elderly. Accurate identification of the type and grade of a tumor in its early stages plays an important role in choosing a precise treatment plan. MRI protocols with different sequences provide clinicians with important complementary information for identifying tumor regions. However, manual assessment is time-consuming and error-prone due to the large amount of data and the diversity of brain tumor types. Hence, there is an unmet need for automated MRI brain tumor diagnosis. We observe that the predictive capability of uni-modality models is limited, that their performance varies widely across modalities, and that commonly used modality fusion methods introduce potential noise, resulting in significant performance degradation. To overcome these challenges, we propose a novel cross-modality guidance-aided multi-modal learning framework with dual attention for MRI brain tumor grading. To balance the tradeoff between model efficiency and efficacy, we employ ResNet Mix Convolution as the backbone network for feature extraction. In addition, dual attention is applied to capture the semantic interdependencies in the spatial and slice dimensions, respectively. To facilitate information interaction among modalities, we design a cross-modality guidance-aided module in which the primary modality guides the secondary modalities during training, which can effectively leverage the complementary information of different MRI modalities while alleviating the impact of possible noise.
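The dual attention described above (semantic interdependencies along the spatial and slice dimensions) can be sketched as plain self-attention applied over each axis in turn. This is a minimal NumPy illustration, not the paper's implementation; the tensor layout and the residual connections are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(feat):
    """Self-attention over spatial positions within each slice.
    feat: (S, P, C) = (slices, spatial positions, channels)."""
    attn = softmax(feat @ feat.transpose(0, 2, 1) / np.sqrt(feat.shape[-1]))
    return attn @ feat + feat              # residual connection (assumed)

def slice_attention(feat):
    """Self-attention over the slice dimension at each spatial position."""
    f = feat.transpose(1, 0, 2)            # (P, S, C)
    attn = softmax(f @ f.transpose(0, 2, 1) / np.sqrt(f.shape[-1]))
    return (attn @ f).transpose(1, 0, 2) + feat
```

Both branches preserve the feature shape, so their outputs can be fused (e.g. summed) before the grading head.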

    DynaComm: Accelerating Distributed CNN Training between Edges and Clouds through Dynamic Communication Scheduling

    To reduce uploading bandwidth and address privacy concerns, deep learning at the network edge has become an emerging topic. Typically, edge devices collaboratively train a shared model on real-time generated data through the Parameter Server framework. Although all edge devices can share the computing workload, distributed training over edge networks is still time-consuming due to the parameter and gradient transmission procedures between parameter servers and edge devices. Focusing on accelerating distributed Convolutional Neural Network (CNN) training at the network edge, we present DynaComm, a novel scheduler that dynamically decomposes each transmission procedure into several segments to achieve optimal layer-wise overlapping of communication and computation at run-time. Through experiments, we verify that DynaComm achieves optimal layer-wise scheduling in all cases compared with competing strategies, while model accuracy remains untouched.
    Comment: 16 pages, 12 figures. IEEE Journal on Selected Areas in Communications.
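A toy cost model shows why overlapping transmission with computation helps: compare a fully serial schedule against one where each layer's transmission is queued on the link as soon as the layer finishes computing, while later layers keep computing. This is an illustrative model, not DynaComm's actual scheduler; the per-layer times are made up:

```python
def serial_time(comp, comm):
    """All layers: compute, then transmit, strictly one after another."""
    return sum(comp) + sum(comm)

def overlapped_time(comp, comm):
    """Each layer's transmission starts as soon as that layer finishes
    computing (and the link is free), overlapping later layers' compute."""
    finish_comp = 0.0   # when the current layer's computation ends
    link_free = 0.0     # when the communication link becomes free
    for c, t in zip(comp, comm):
        finish_comp += c
        link_free = max(link_free, finish_comp) + t
    return link_free
```

With `comp = [3, 2, 4]` and `comm = [2, 2, 1]` (arbitrary units), the serial schedule takes 14 units while the overlapped one takes 10; segmenting transmissions refines this further by letting partial tensors start moving earlier.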

    Quasinormal modes in the background of charged Kaluza-Klein black hole with squashed horizons

    We study scalar perturbations in the background of charged Kaluza-Klein black holes with squashed horizons. We find that the positions of the infinite discontinuities of the heat capacities are reflected in the quasinormal spectrum. This suggests a possible non-trivial relation between the thermodynamic and dynamical properties of black holes.
    Comment: revised version, accepted for publication in Phys. Lett.
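For reference, the heat capacity whose divergences are meant here is the standard thermodynamic quantity (generic definition, with $X$ the quantity held fixed, e.g. the charge; not specific to this background):

```latex
C_X = T \left( \frac{\partial S}{\partial T} \right)_X ,
```

which exhibits infinite discontinuities where $(\partial T/\partial S)_X$ passes through zero; the claim is that the quasinormal spectrum changes behavior at precisely these points.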

    OPT: One-shot Pose-Controllable Talking Head Generation

    One-shot talking head generation produces lip-synced talking heads from arbitrary audio and a single source face. To guarantee naturalness and realism, recent methods pursue free pose control rather than simply editing the mouth area. However, existing methods do not preserve the accurate identity of the source face when generating head motions. To solve this identity mismatch problem and achieve high-quality free pose control, we present the One-shot Pose-controllable Talking head generation network (OPT). Specifically, the Audio Feature Disentanglement Module separates content features from audio, eliminating the influence of speaker-specific information contained in arbitrary driving audio. Next, the mouth expression feature is extracted from the content feature and the source face, with a landmark loss designed to enhance the accuracy of the facial structure and the identity-preserving quality. Finally, to achieve free pose control, controllable head pose features from reference videos are fed into the Video Generator along with the expression feature and the source face to generate new talking heads. Extensive quantitative and qualitative experimental results verify that OPT generates high-quality pose-controllable talking heads with no identity mismatch problem, outperforming previous SOTA methods.
    Comment: Accepted by ICASSP202
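The data flow the abstract describes (disentangle content, extract expression, inject pose, generate) can be sketched as function composition. Every function below is a hypothetical stub standing in for a learned module; the names follow the abstract, but the bodies are placeholders (vector slicing and concatenation in place of real networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stubs for OPT's learned modules (illustrative only).
def audio_feature_disentangle(audio):
    return audio[:64]                                   # "content" part of the audio feature

def mouth_expression(content, source_face):
    return np.concatenate([content, source_face[:32]])  # expression feature

def head_pose_features(reference_video):
    return reference_video.mean(axis=0)                 # one pose vector per reference clip

def video_generator(source_face, expression, pose):
    return np.concatenate([source_face, expression, pose])  # stand-in for synthesis

def opt_pipeline(audio, source_face, reference_video):
    content = audio_feature_disentangle(audio)          # strip speaker-specific information
    expression = mouth_expression(content, source_face)
    pose = head_pose_features(reference_video)
    return video_generator(source_face, expression, pose)
```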

    OSM-Net: One-to-Many One-shot Talking Head Generation with Spontaneous Head Motions

    One-shot talking head generation has no explicit head movement reference, so it is difficult to generate talking heads with head motions. Some existing works only edit the mouth area and generate still talking heads, leading to unrealistic results. Other works construct a one-to-one mapping between the audio signal and head motion sequences, which introduces ambiguous correspondences, since people can move their heads differently when speaking the same content. This one-to-one mapping fails to model the diversity and produces either nearly static or exaggerated head motions, which look unnatural and strange. One-shot talking head generation is therefore actually a one-to-many ill-posed problem: people present diverse head motions when speaking. Based on this observation, we propose OSM-Net, a \textit{one-to-many} one-shot talking head generation network with natural head motions. OSM-Net constructs a motion space containing rich and varied clip-level head motion features. Each basis of the space represents a meaningful head motion in a clip rather than in a single frame, thus providing more coherent and natural motion changes in talking heads. The driving audio is mapped into the motion space, around which various motion features can be sampled within a reasonable range to achieve the one-to-many mapping. In addition, the landmark constraint and time-window feature input improve the accuracy of expression feature extraction and video generation. Extensive experiments show that OSM-Net generates more natural and realistic head motions under a reasonable one-to-many mapping paradigm compared with other methods.
    Comment: Paper Under Review
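The one-to-many sampling idea can be sketched as mapping audio to a point in a learned motion space and then sampling within a radius around it. Here `motion_basis` and the Gaussian perturbation are illustrative assumptions, not OSM-Net's actual parametrization:

```python
import numpy as np

def sample_head_motions(audio_embed, motion_basis, rng, radius=0.1, n_samples=3):
    """One-to-many mapping sketch: project the audio embedding into a
    clip-level motion space, then sample nearby points within `radius`.
    motion_basis: (K, D) assumed learned basis of clip-level motion features."""
    coeffs = motion_basis @ audio_embed      # (K,) coordinates in motion space
    perturbed = coeffs + rng.normal(scale=radius, size=(n_samples, coeffs.size))
    return perturbed @ motion_basis          # (n_samples, D) distinct motion features
```

Each sampled point decodes to a different, but plausibly related, head motion clip for the same driving audio, which is precisely the one-to-many behavior the paper argues for.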