
    Dark Matter as a Possible New Energy Source for Future Rocket Technology

    Current rocket technology cannot send a spaceship very far, because the amount of chemical fuel it can carry is limited. We explore using dark matter (DM) as fuel to overcome this limitation. In this work, we give an example of a DM engine that uses dark matter annihilation products as propulsion. The acceleration is proportional to the velocity, which makes the velocity increase exponentially with time in the non-relativistic regime. The key factors for the acceleration are the DM density and the size of the saturation region. The parameters of the spaceship may also have a great influence on the results. We show that (sub)halos can accelerate the spaceship to velocities of $10^{-5}c \sim 10^{-3}c$. Moreover, if there is a central black hole in the halo, as in the galactic center, the radius of the dense spike can be large enough to accelerate the spaceship close to the speed of light. Comment: 7 pages, 6 figures; v2, minor correction, added discussion of annihilation speed
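    For concreteness, the exponential growth claimed above follows from a one-line integration (a sketch assuming a constant proportionality factor $k$ set by the DM density and the engine parameters): $dv/dt = k\,v$ gives $v(t) = v_0\, e^{k t}$, so the speed grows exponentially with time until relativistic corrections become important.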

    An improved EM algorithm for solving MLE in constrained diffusion kurtosis imaging of human brain

    The displacement distribution of a water molecule is often characterized mathematically as Gaussian, without considering potential diffusion barriers and compartments. However, this is not true in realistic scenarios: most biological tissues are comprised of cell membranes, various intracellular and extracellular spaces, and other compartments, where the water diffusion has a non-Gaussian distribution. Diffusion kurtosis imaging (DKI), recently considered a sensitive biomarker, is an extension of diffusion tensor imaging that quantifies the degree of non-Gaussianity of the diffusion. This work proposes an efficient scheme for maximum likelihood estimation (MLE) in DKI: we start from the Rician noise model of the signal intensities. By augmenting a von Mises distributed latent phase variable, the Rician likelihood is transformed into a tractable joint density without loss of generality. A fast computational method, an expectation-maximization (EM) algorithm for the MLE, is proposed for DKI. To guarantee the physical relevance of the diffusion kurtosis, we apply the ternary quartic (TQ) parametrization to exploit its positivity, which imposes an upper bound on the kurtosis. A Fisher-scoring method is used to achieve fast convergence of the individual diffusion compartments. In addition, we use the barrier method to enforce a lower bound on the kurtosis. The proposed estimation scheme is evaluated on both synthetic and real data from healthy human brain. We compare the method with other popular approaches, and the results show promising performance.
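    To make the Rician/von Mises EM idea concrete, here is a minimal toy sketch (an assumption of this write-up, not the paper's constrained DKI estimator): a single noise-free amplitude is estimated from Rician magnitude data with known noise variance; the E-step reduces to a Bessel-function ratio (the posterior mean of the cosine of the latent phase) and the M-step is a closed-form weighted average.

```python
import numpy as np
from scipy.special import i0e, i1e

def em_rician_amplitude(m, sigma2, n_iter=100):
    """Toy EM for the MLE of a constant noise-free amplitude under Rician noise.
    E-step: posterior mean of cos(latent phase) is the Bessel ratio I1/I0.
    M-step: closed-form refit of the amplitude to the phase-corrected data."""
    nu = m.mean()                              # crude initialisation
    for _ in range(n_iter):
        z = m * nu / sigma2
        w = i1e(z) / i0e(z)                    # I1(z)/I0(z) via scaled Bessels
        nu = np.mean(w * m)                    # M-step update
    return nu

# quick check on synthetic Rician data
rng = np.random.default_rng(0)
true_nu, sigma = 3.0, 1.0
noisy = np.abs(true_nu + sigma * (rng.standard_normal(2000)
                                  + 1j * rng.standard_normal(2000)))
print(em_rician_amplitude(noisy, sigma**2))    # should be close to 3.0
```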

    Large Margin Softmax Loss for Speaker Verification

    In neural network based speaker verification, the speaker embedding is expected to be discriminative between speakers while the intra-speaker distance remains small. A variety of loss functions have been proposed to achieve this goal. In this paper, we investigate the large margin softmax loss with different configurations in speaker verification. Ring loss and the minimum hyperspherical energy criterion are introduced to further improve the performance. Results on VoxCeleb show that our best system outperforms the baseline approach by 15\% in EER, and by 13\% and 33\% in minDCF08 and minDCF10, respectively. Comment: submitted to Interspeech 2019. The code and models have been released
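    For reference, below is a minimal numpy sketch of an additive-margin softmax loss on L2-normalised embeddings (a common large-margin softmax variant, used here purely for illustration; the scale `s` and margin `m` values are placeholder assumptions, not the paper's configuration).

```python
import numpy as np

def am_softmax_loss(emb, W, labels, s=30.0, m=0.2):
    """Additive-margin softmax loss.
    emb: (N, D) speaker embeddings, W: (D, C) class weights, labels: (N,) ints."""
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)   # normalise embeddings
    w = W / np.linalg.norm(W, axis=0, keepdims=True)       # normalise class weights
    cos = e @ w                                            # cosine to each class
    cos[np.arange(len(labels)), labels] -= m               # margin on target class
    logits = s * cos
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()    # mean cross entropy
```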

    Adaptive Stochastic Gradient Langevin Dynamics: Taming Convergence and Saddle Point Escape Time

    In this paper, we propose a new adaptive stochastic gradient Langevin dynamics (ASGLD) algorithmic framework and its two specialized versions, namely adaptive stochastic gradient (ASG) and adaptive gradient Langevin dynamics (AGLD), for non-convex optimization problems. All proposed algorithms can escape from saddle points in at most $O(\log d)$ iterations, which is nearly dimension-free. Further, we show that ASGLD and ASG converge to a local minimum in at most $O(\log d/\epsilon^4)$ iterations. Also, ASGLD with full gradients, or ASGLD with a slowly linearly increasing batch size, converges to a local minimum in a number of iterations bounded by $O(\log d/\epsilon^2)$, which outperforms existing first-order methods. Comment: 24 pages, 13 figures
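    For intuition, here is a generic sketch of an adaptively preconditioned SGLD update (an RMSProp-style preconditioner plus injected Gaussian noise, illustrating the two ingredients named above; this is an assumption of this write-up, not the paper's exact ASGLD update or constants).

```python
import numpy as np

def adaptive_sgld(grad_fn, theta0, lr=1e-2, noise=1e-2, beta=0.999,
                  eps=1e-8, n_steps=1000, rng=None):
    """Generic adaptive SGLD sketch: preconditioned gradient step plus Langevin noise."""
    rng = rng or np.random.default_rng(0)
    theta = theta0.astype(float).copy()
    v = np.zeros_like(theta)
    for _ in range(n_steps):
        g = grad_fn(theta)                          # stochastic gradient
        v = beta * v + (1 - beta) * g * g           # adaptive second-moment estimate
        precond = 1.0 / (np.sqrt(v) + eps)
        theta -= lr * precond * g                   # preconditioned gradient step
        theta += np.sqrt(2 * lr * noise) * rng.standard_normal(theta.shape)  # Langevin noise
    return theta
```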

    The Origin of Weak Lensing Convergence Peaks

    Weak lensing convergence peaks are a promising tool to probe nonlinear structure evolution at late times, providing additional cosmological information beyond second-order statistics. Previous theoretical and observational studies have shown that the cosmological constraints on $\Omega_m$ and $\sigma_8$ are improved by a factor of up to ~2 when peak counts and second-order statistics are combined, compared to using the latter alone. We study the origin of lensing peaks using observational data from the 154 deg$^2$ Canada-France-Hawaii Telescope Lensing Survey. We find that while high peaks (with height $\kappa > 3.5\,\sigma_\kappa$, where $\sigma_\kappa$ is the r.m.s. of the convergence $\kappa$) are typically due to a single massive halo of ~$10^{15}M_\odot$, low peaks ($\kappa \lesssim \sigma_\kappa$) are associated with constellations of 2-8 smaller halos ($\lesssim 10^{13}M_\odot$). In addition, halos responsible for forming low peaks are found to be significantly offset from the line of sight towards the peak center (impact parameter $\gtrsim$ their virial radii), compared with ~0.25 virial radii for halos linked with high peaks, hinting that low peaks are more immune to baryonic processes whose impact is confined to the inner regions of dark matter halos. Our findings are in good agreement with results from the simulation work by Yang et al. (2011). Comment: 10 pages, 10 figures; v2 matches PRD accepted version, results unchanged
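    For readers who want to mimic the peak selection, a minimal sketch of peak finding on a convergence map is shown below (assumptions of this write-up: peaks are taken as 3x3 local maxima and heights are quoted in units of the map r.m.s., mirroring the $\kappa > 3.5\,\sigma_\kappa$ cut; the survey's actual smoothing and masking steps are omitted).

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks(kappa, nu=3.5):
    """Return (row, col, height/sigma) for local maxima above nu * map r.m.s."""
    sigma = kappa.std()
    local_max = (kappa == maximum_filter(kappa, size=3))   # 3x3 local maxima
    ys, xs = np.where(local_max & (kappa > nu * sigma))
    return list(zip(ys, xs, kappa[ys, xs] / sigma))        # heights in units of sigma
```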

    Bayesian model-based spatiotemporal survey design for log-Gaussian Cox process

    In geostatistics, the design for data collection is central to accurate prediction and parameter inference. One important class of geostatistical models is the log-Gaussian Cox process (LGCP), which is used extensively, for example, in ecology. However, there are no formal analyses of optimal designs for LGCP models. In this work, we develop a novel model-based experimental design for LGCP modeling of spatiotemporal point process data. We propose a new spatially balanced rejection sampling design which directs sampling to spatiotemporal locations that are a priori expected to provide the most information. We compare the rejection sampling design to traditional balanced and uniform random designs using the average predictive variance loss function and the Kullback-Leibler divergence between the prior and posterior of the LGCP intensity function. Our results show that the rejection sampling method outperforms the corresponding balanced and uniform random sampling designs for LGCPs, whereas the latter work better for Gaussian models. We perform a case study applying our new sampling design to plan a survey for species distribution modeling on larval areas of two commercially important fish stocks in Finnish coastal areas. The case study results show that rejection sampling designs give considerable benefit compared to traditional designs. The results also show that the best-performing designs may vary considerably between target species.
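    Below is a minimal sketch of the rejection-sampling idea described above (assumptions of this write-up: candidate locations and an a priori information score, e.g. a prior predictive variance, are supplied, and candidates are accepted with probability proportional to that score; this is not necessarily the paper's exact acceptance criterion or its spatial-balance mechanism).

```python
import numpy as np

def rejection_design(candidates, info_score, n_sites, rng=None):
    """Select n_sites survey locations, keeping proposals in proportion to info_score."""
    rng = rng or np.random.default_rng(1)
    score = np.asarray(info_score, dtype=float)
    score = score / score.max()                 # acceptance probability in [0, 1]
    chosen = []
    while len(chosen) < n_sites:
        i = rng.integers(len(candidates))       # propose a candidate uniformly
        if rng.random() < score[i]:             # accept with prob ~ prior information
            chosen.append(candidates[i])
    return np.array(chosen)
```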

    Scanning Tunneling Microscope Nanolithography on SrRuO3 Thin Film Surfaces

    Nanoscale lithography on SrRuO3 (SRO) thin film surfaces has been performed by scanning tunneling microscopy under ambient conditions. The depth of the etched lines increases with increasing bias voltage but does not change significantly with increasing tunneling current. The dependence of line width on bias voltage obtained from the experimental data is in agreement with a theoretical calculation based on field-induced evaporation. Moreover, a three-square nanostructure was successfully created, demonstrating the capability of fabricating nanodevices in SRO thin films. Comment: 10 pages, 6 figures

    Vanilla Lasso for sparse classification under single index models

    This paper studies sparse classification problems. We show that under single-index models, vanilla Lasso can give a good estimate of the unknown parameters. With this result, we see that even if the model is not linear, and even if the response is not continuous, we can still use vanilla Lasso to train classifiers. Simulations confirm that vanilla Lasso yields a good estimate when the data are generated from a logistic regression model.
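    The claim is easy to probe numerically. Below is a small sketch (an illustration under assumed settings, not the paper's experiments): binary responses are drawn from a logistic single-index model and a plain linear Lasso is fit directly to them, recovering the support and, up to scale, the direction of the index vector.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 500, 50
beta = np.zeros(p)
beta[:5] = 1.0                                         # sparse index vector
X = rng.standard_normal((n, p))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta))).astype(float)  # logistic responses
fit = Lasso(alpha=0.05).fit(X, y)                      # vanilla (linear) Lasso on 0/1 labels
print(np.nonzero(fit.coef_)[0])                        # recovered support, ideally [0..4]
```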

    On the Hausdorff dimension faithfulness connected with $Q_\infty$-expansion

    In this paper, we show that the family of all possible finite unions of consecutive cylinders of the same rank of the $Q_\infty$-expansion is faithful for Hausdorff dimension calculation. Applying this result, we give a necessary and sufficient condition for the family of all cylinders of the $Q_\infty$-expansion to be faithful for Hausdorff dimension calculation on the unit interval, which answers an open problem mentioned in a paper of S. Albeverio et al. Comment: 10 pages
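    For context, faithfulness is usually formalized roughly as follows (a paraphrase of the standard notion in this literature, stated here as an aid to the reader): a fine family $\Phi$ of coverings of $[0,1]$ is faithful for Hausdorff dimension calculation if, for every $E \subseteq [0,1]$, the dimension computed using only coverings by members of $\Phi$ coincides with the usual one, i.e. $\dim_H(E,\Phi) = \dim_H(E)$.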

    Pre-equilibrium dynamics and heavy-ion observables

    To bracket the importance of the pre-equilibrium stage for relativistic heavy-ion collision observables, we compare simulations where it is modeled by either free-streaming partons or fluid dynamics. These cases implement the assumptions of extremely weak vs. extremely strong coupling in the initial collision stage. Accounting for the flow generated in the pre-equilibrium stage, we study the sensitivity of radial, elliptic and triangular flow to the switching time at which the hydrodynamic description becomes valid. Using the hybrid code iEBE-VISHNU we perform a multi-parameter search, constrained by particle ratios, integrated elliptic and triangular charged hadron flow, the mean transverse momenta of pions, kaons and protons, and the second moment $\langle p_T^2\rangle$ of the proton transverse momentum spectrum, to identify optimized values for the switching time $\tau_s$ from pre-equilibrium to hydrodynamics, the specific shear viscosity $\eta/s$, the normalization factor of the temperature-dependent specific bulk viscosity $(\zeta/s)(T)$, and the switching temperature $T_\mathrm{sw}$ from viscous hydrodynamics to the hadron cascade UrQMD. With the optimized parameters, we predict and compare with experiment the $p_T$-distributions of $\pi$, $K$, $p$, $\Lambda$, $\Xi$ and $\Omega$ yields and their elliptic flow coefficients, focusing specifically on the mass ordering of the elliptic flow for protons and Lambda hyperons, which is incorrectly described by VISHNU without pre-equilibrium flow. Comment: 4 pages, 1 figure. Talk presented at Quark Matter 2015, Kobe, Sep. 27 - Oct. 3, 2015, to appear in the proceedings published by Nuclear Physics A. v2 corrects originally mislabeled curves in Figs. 2a,b
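    As a reminder of what the flow observables measure, here is a minimal event-plane-style sketch for estimating $v_n$ from a single event's particle azimuths (an illustrative assumption of this write-up; real analyses of the kind compared against iEBE-VISHNU use more careful methods with resolution and autocorrelation corrections).

```python
import numpy as np

def flow_vn(phi, n):
    """Estimate the n-th anisotropic flow coefficient v_n from the azimuthal angles
    phi (radians) of one event's particles, using the event-plane angle Psi_n built
    from the same particles' Q-vector (no resolution/autocorrelation correction)."""
    qx, qy = np.cos(n * phi).sum(), np.sin(n * phi).sum()
    psi_n = np.arctan2(qy, qx) / n                 # event-plane angle Psi_n
    return np.cos(n * (phi - psi_n)).mean()        # v_n = <cos n(phi - Psi_n)>
```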