
    Improved Direct Counterfactual Quantum Communication

    Recently, a novel direct counterfactual quantum communication protocol was proposed using the chained quantum Zeno effect. We find that this protocol is far from practical in real channels, because the chaining has the side effect of dramatically increasing the equivalent optical distance between Alice and Bob. As a result, not only does the transmission time of a single bit increase severalfold, but the protocol also becomes more sensitive to noise. Here we propose an improved protocol in which quantum interference is employed to break the nested structure induced by the chaining. Moreover, we prove that better counterfactuality is easier to achieve, and show that our protocol outperforms the original one in the presence of noise. Comment: 6 pages, 4 figures
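
    As a rough guide to the mechanism such protocols rely on (a textbook sketch, not the paper's own derivation): the quantum Zeno effect keeps the photon in the transmission-free arm because N weak-rotation-plus-measurement cycles succeed with probability

        P_N = \cos^{2N}\!\left(\frac{\pi}{2N}\right) \;\to\; 1 \quad (N \to \infty).

    In the chained scheme this cycle is nested, with M outer cycles each containing N inner cycles, so the light makes on the order of MN passes through the apparatus; this is the growth of the equivalent optical distance that the improved protocol aims to avoid.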

    Gravitational wave as probe of superfluid dark matter

    In recent years, superfluid dark matter (SfDM) has become a competitive model of the emergent modified Newtonian dynamics (MOND) scenario: MOND phenomena emerge naturally as a derived concept due to an extra force mediated between baryons by phonons, a result of axionlike particles condensing into a superfluid at galactic scales. Beyond galactic scales, these axionlike particles behave as a normal fluid without the phonon-mediated MOND-like force between baryons, so SfDM also retains the usual success of $\Lambda$CDM at cosmological scales. In this paper, we use gravitational waves (GWs) to probe the relevant parameter space of SfDM. GWs propagating through the Bose-Einstein condensate (BEC) travel at a speed that deviates slightly from the speed of light, owing to a change in the effective refractive index that depends on the SfDM parameters and the GW-source properties. We find that the Five-hundred-meter Aperture Spherical Telescope (FAST), the Square Kilometre Array (SKA) and the International Pulsar Timing Array (IPTA) are the most promising GW probes of the relevant SfDM parameter space. Future space-based GW detectors are also capable of probing SfDM if a multimessenger approach is adopted. Comment: v1, 10 pages, 2 figures, two columns; v2, 12 pages, 2 figures, two columns, references added, a summary of GW velocity constraints added, a discussion of the Shapiro time delay added; v3, 13 pages, 2 figures, two columns, final version to match the published version
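
    A back-of-the-envelope relation (not the paper's detailed calculation) shows how a small effective refractive index n for GWs becomes observable: over a propagation distance D, the GW arrival lags a luminal electromagnetic counterpart by

        \Delta t \simeq (n - 1)\,\frac{D}{c},

    so timing a multimessenger event, or the correlated timing residuals in a pulsar timing array, bounds n - 1 and hence the SfDM parameters it depends on.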

    The GWs from the S-stars revolving around the SMBH at Sgr A*

    A preliminary estimate of the gravitational waves (GWs) from the extreme-mass-ratio-inspiral (EMRI) system in the Galactic Centre (GC) is given for the 37 observed S-stars revolving around the supermassive black hole (SMBH) at Sagittarius (Sgr) A*. Within this century, the total strain of the gravitational waveform, calculated with the post-Newtonian (PN) method including eccentricity, is well below the currently planned sensitivity of pulsar timing arrays (PTAs). New technology might be required to extract the GW signal from this EMRI system in future PTA detections. Comment: v1, 16 pages, 3 figures, 1 table, two columns; v2, reference added, numerical calculation improved, submitted to Phys.Rev.
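
    For a sense of scale (a textbook quadrupole estimate, not the paper's PN waveform), the characteristic strain of a binary with chirp mass $\mathcal{M}_c$ emitting at GW frequency $f$ at distance $d$ is roughly

        h \sim \frac{4}{d}\,\frac{(G\mathcal{M}_c)^{5/3}\,(\pi f)^{2/3}}{c^{4}},

    with $f$ set by the orbital frequency; the long orbital periods of the S-stars place the signal at very low frequencies with correspondingly small amplitude.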

    Dual Iterative Hard Thresholding: From Non-convex Sparse Minimization to Non-smooth Concave Maximization

    Iterative Hard Thresholding (IHT) is a class of projected gradient descent methods for optimizing sparsity-constrained minimization models, with the best known efficiency and scalability in practice. As far as we know, existing IHT-style methods are designed for sparse minimization in primal form; it remains open to explore duality theory and algorithms in such a non-convex and NP-hard problem setting. In this paper, we bridge this gap by establishing a duality theory for sparsity-constrained minimization with an $\ell_2$-regularized loss function and proposing an IHT-style algorithm for dual maximization. Our sparse duality theory provides a set of necessary and sufficient conditions under which the original NP-hard/non-convex problem can be equivalently solved in a dual formulation. The proposed dual IHT algorithm is a super-gradient method for maximizing the non-smooth dual objective. An interesting finding is that the sparse recovery performance of dual IHT is invariant to the Restricted Isometry Property (RIP), which is required by virtually all existing primal IHT algorithms without sparsity relaxation. Moreover, a stochastic variant of dual IHT is proposed for large-scale stochastic optimization. Numerical results demonstrate the superiority of dual IHT algorithms over the state-of-the-art primal IHT-style algorithms in model estimation accuracy and computational efficiency.
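
    For readers unfamiliar with the primal method the paper builds on, here is a minimal sketch of plain IHT for sparse least squares (illustrative only; function names are ours, and this is not the paper's dual algorithm):

        import numpy as np

        def hard_threshold(x, k):
            """Keep the k largest-magnitude entries of x; zero out the rest."""
            z = np.zeros_like(x)
            idx = np.argsort(np.abs(x))[-k:]
            z[idx] = x[idx]
            return z

        def iht(A, y, k, n_iter=200):
            """Primal IHT for min ||Ax - y||^2 subject to ||x||_0 <= k."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step from the spectral norm
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)             # gradient direction of the least-squares loss
                x = hard_threshold(x - step * grad, k)  # gradient step, then projection onto k-sparse vectors
            return x

        # Example: recover a 5-sparse signal from 100 random measurements
        rng = np.random.default_rng(0)
        A = rng.standard_normal((100, 256))
        x_true = np.zeros(256)
        x_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
        x_hat = iht(A, A @ x_true, k=5)

    The dual IHT of the paper instead ascends a non-smooth dual objective with super-gradients, which is what removes the RIP requirement mentioned above.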

    Probing cosmic anisotropy with gravitational waves as standard sirens

    A gravitational wave (GW) standard siren directly determines the luminosity distance from the gravitational waveform without reference to a specific cosmological model, while the redshift can be obtained separately from an electromagnetic counterpart, as for GW events from binary neutron stars and massive black hole binaries (MBHBs). To see to what extent standard sirens can recover a presumed dipole anisotropy injected into simulated standard siren events for typical GW detector configurations, we find that (1) for the Laser Interferometer Space Antenna with different MBHB models during a five-year observation, cosmic isotropy can be ruled out at $3\sigma$ confidence level (C.L.) and the dipole direction can be constrained to within roughly $20\%$ at $2\sigma$ C.L., as long as the dipole amplitude is larger than 0.03, 0.06 and 0.025 for the MBHB models Q3d, pop III and Q3nod (in order of increasing constraining ability), respectively; (2) for the Einstein Telescope with no fewer than 200 standard siren events, cosmic isotropy can be ruled out at $3\sigma$ C.L. if the dipole amplitude is larger than 0.06, and the dipole direction can be constrained to within $20\%$ at $3\sigma$ C.L. if the dipole amplitude is near 0.1; (3) for the Deci-Hertz Interferometer Gravitational wave Observatory with no fewer than 100 standard siren events, cosmic isotropy can be ruled out at $3\sigma$ C.L. for a dipole amplitude larger than 0.03, and the dipole direction can even be constrained to within $10\%$ at $3\sigma$ C.L. if the dipole amplitude is larger than 0.07. Our work demonstrates the promising prospects of constraining cosmic anisotropy with the standard siren approach. Comment: v1, 10 pages, 4 figures, two columns; v2, 10 pages, 4 figures, Phys.Rev.D accepted, to match the published version, added discussion on the effect of detectors' rotations for LISA
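
    For context, the dipole modulation these amplitudes refer to is commonly parametrized as (the paper's exact convention may differ slightly)

        d_\mathrm{L}(z, \hat{n}) = d_\mathrm{L}^{\rm iso}(z)\left[1 + A_d\,(\hat{n}\cdot\hat{n}_d)\right],

    where $d_\mathrm{L}^{\rm iso}(z)$ is the isotropic luminosity distance, $A_d$ is the dipole amplitude (the 0.03-0.1 values quoted above), $\hat{n}$ is the line of sight to the source and $\hat{n}_d$ is the preferred direction to be fitted.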

    Probing cosmic anisotropy with GW/FRB as upgraded standard sirens

    Recently it was shown that cosmic anisotropy can be well tested using either the standard siren measurement of the luminosity distance $d_\mathrm{L}(z)$ from gravitational-wave (GW) observations or the dispersion measure $\mathrm{DM}(z)$ from fast radio bursts (FRBs). It was also observed that the combined measurement $d_\mathrm{L}(z)\cdot\mathrm{DM}(z)$ from GW/FRB association systems, as suggested in some FRB models, is more effective at constraining cosmological parameters than $d_\mathrm{L}(z)$ or $\mathrm{DM}(z)$ separately, owing to its independence from the Hubble constant. In this paper, we show both theoretically and with simulations that these upgraded sirens from combined GW/FRB observations can test cosmic anisotropy with roughly twice the relative sensitivity of the usual standard siren from GW observations alone. Comment: 11 pages and 1 figure; matches the published version in JCAP
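
    The Hubble-constant independence follows from the schematic scalings (prefactors omitted; a sketch of the standard argument, not the paper's derivation)

        d_\mathrm{L}(z) \propto \frac{c\,(1+z)}{H_0}\int_0^z \frac{dz'}{E(z')}, \qquad
        \mathrm{DM}(z) \propto \Omega_b H_0 \int_0^z \frac{f_{\rm IGM}(z')\,(1+z')}{E(z')}\,dz',

    so the product $d_\mathrm{L}(z)\cdot\mathrm{DM}(z)$ depends on $\Omega_b$ and on integrals of $E(z) = H(z)/H_0$ only, with $H_0$ cancelling between the two factors.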

    Internal X-ray plateau in short GRBs: Signature of supramassive fast-rotating quark stars?

    A supramassive, strongly magnetized millisecond neutron star (NS) has been proposed as the candidate central engine of at least some short gamma-ray bursts (SGRBs), based on the "internal plateau" commonly observed in the early X-ray afterglow. While a previous analysis showed a qualitative consistency between this suggestion and the Swift SGRB data, the distribution of the observed break time $t_b$ is much narrower than the distribution of the collapse time of supramassive NSs for the several NS equations of state (EoSs) investigated. In this paper, we study four recently constructed "unified" NS EoSs, as well as three strange quark star (QS) EoSs developed within the new confinement density-dependent mass model. All the EoSs chosen here satisfy the recent observational constraints from the two massive pulsars whose masses are precisely measured. We construct sequences of rigidly rotating NS/QS configurations with increasing spin frequency $f$, from non-rotating ($f = 0$) to the Keplerian frequency ($f = f_{\rm K}$), and provide convenient analytical parametrizations of the results. Assuming that the cosmological NS-NS merger systems have the same mass distribution as the Galactic NS-NS systems, we demonstrate that all except the BCPM NS EoS can reproduce the current $22\%$ supramassive NS/QS fraction constraint derived from the SGRB data. We simultaneously simulate the observed quantities of SGRBs (the break time $t_b$, the break-time luminosity $L_b$ and the total energy in the electromagnetic channel $E_{\rm total}$), and find that, while reproducing the other observational constraints equally well, QS EoSs predict a much narrower $t_b$ distribution than the NS EoSs, better matching the data. We therefore suggest that the post-merger product of NS-NS mergers might be fast-rotating supramassive QSs rather than NSs. Comment: 6 pages, 5 figures, 2 tables, Phys. Rev. D (2016) accepted
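
    One commonly used analytical parametrization of this kind (quoted only to illustrate the form such fits take; the paper provides its own fits and coefficients) writes the maximum gravitational mass of a rigidly rotating star as

        M_{\rm max}(f) \simeq M_{\rm TOV}\left[1 + a\left(\frac{f}{f_{\rm K}}\right)^{b}\right],

    with $M_{\rm TOV}$ the non-rotating maximum mass and $(a, b)$ EoS-dependent fit constants; a supramassive remnant then collapses, producing the break at $t_b$, once magnetic spin-down lowers $f$ to the point where $M_{\rm max}(f)$ falls below the remnant mass.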

    Meta-Learning with Network Pruning

    Meta-learning is a powerful paradigm for few-shot learning. Despite the remarkable success witnessed in many applications, existing optimization-based meta-learning models with over-parameterized neural networks have been shown to overfit on training tasks. To remedy this deficiency, we propose a network-pruning-based meta-learning approach that reduces overfitting by explicitly controlling the capacity of the network. A uniform concentration analysis reveals the benefit of the network-capacity constraint for reducing the generalization gap of the proposed meta-learner. We have implemented our approach on top of Reptile, assembled with two network pruning routines: Dense-Sparse-Dense (DSD) and Iterative Hard Thresholding (IHT). Extensive experimental results on benchmark datasets with different over-parameterized deep networks demonstrate that our method not only effectively alleviates meta-overfitting but also, in many cases, improves overall generalization performance when applied to few-shot classification tasks.
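
    A minimal sketch of the hard-thresholding pruning step such routines rely on (illustrative only; function and argument names are ours, and the Reptile outer loop and DSD retraining schedule are omitted):

        import numpy as np

        def prune_by_magnitude(weights, keep_ratio):
            """IHT-style magnitude pruning: keep the largest-|w| fraction of each layer, zero the rest."""
            pruned = {}
            for name, w in weights.items():
                k = max(1, int(keep_ratio * w.size))
                # threshold = k-th largest absolute value within this layer
                thresh = np.sort(np.abs(w), axis=None)[-k]
                pruned[name] = np.where(np.abs(w) >= thresh, w, 0.0)
            return pruned

        # Example: prune a toy two-layer weight dictionary to 30% density
        weights = {"layer1": np.random.randn(64, 32), "layer2": np.random.randn(32, 10)}
        sparse_weights = prune_by_magnitude(weights, keep_ratio=0.3)

    In DSD- or IHT-style training, such a pruning step alternates with further training of the surviving weights, which is how the network's effective capacity is constrained during meta-training.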

    TStarBots: Defeating the Cheating Level Builtin AI in StarCraft II in the Full Game

    StarCraft II (SC2) is widely considered the most challenging Real Time Strategy (RTS) game. The underlying challenges include a large observation space, a huge (continuous and infinite) action space, partial observations, simultaneous moves by all players, and long-horizon delayed rewards for local decisions. To push the frontier of AI research, DeepMind and Blizzard jointly developed the StarCraft II Learning Environment (SC2LE) as a testbench for complex decision-making systems. SC2LE provides a few mini-games such as MoveToBeacon, CollectMineralShards, and DefeatRoaches, in which some AI agents have reached the performance level of professional human players. For full games, however, current AI agents are still far from human professional performance. To bridge this gap, we present two full-game AI agents in this paper: TStarBot1, which is based on deep reinforcement learning over a flat action structure, and TStarBot2, which is based on hard-coded rules over a hierarchical action structure. Both TStarBot1 and TStarBot2 are able to defeat the built-in AI agents from level 1 to level 10 in a full game (1v1 Zerg-vs-Zerg on the AbyssalReef map), noting that level 8, level 9, and level 10 are cheating agents with unfair advantages such as full vision of the whole map and resource harvesting boosts. To the best of our knowledge, this is the first public work to investigate AI agents that can defeat the built-in AI in the StarCraft II full game. Comment: add link for source code
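
    To illustrate the difference between the two action structures (a purely illustrative sketch in our own notation, not code from the TStarBots release): a flat agent picks one action from a single large set each step, whereas a hierarchical agent first picks a macro action, which then expands into a sequence of lower-level commands.

        import random

        # Hypothetical macro actions, each expanding into low-level commands.
        MACRO_ACTIONS = {
            "build_drone":  ["select_hatchery", "train_drone"],
            "expand_base":  ["select_drone", "move_to_expansion", "build_hatchery"],
            "attack_enemy": ["select_army", "attack_move_enemy_base"],
        }

        def flat_policy(observation, low_level_actions):
            """Flat structure: choose directly among all low-level actions."""
            return random.choice(low_level_actions)

        def hierarchical_policy(observation):
            """Hierarchical structure: choose a macro action, then expand it."""
            macro = random.choice(list(MACRO_ACTIONS))
            return MACRO_ACTIONS[macro]

    A hierarchical structure of this kind shrinks the decision space seen by the top-level controller, which is one way to cope with SC2's huge action space.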

    RoeNets: Predicting Discontinuity of Hyperbolic Systems from Continuous Data

    We introduce Roe Neural Networks (RoeNets), which can predict discontinuities of hyperbolic conservation laws (HCLs) from short-term discontinuous, and even continuous, training data. Our methodology is inspired by the Roe approximate Riemann solver (P. L. Roe, J. Comput. Phys., vol. 43, 1981, pp. 357--372), one of the most fundamental numerical solvers for HCLs. To solve HCLs accurately, Roe argued for constructing a Roe matrix fulfilling "Property U": it should be diagonalizable with real eigenvalues, consistent with the exact Jacobian, and preserve conserved quantities. However, constructing such a matrix is not achievable by any general numerical method. Our model makes a breakthrough improvement in solving HCLs by recasting the Roe solver in a neural network framework. To enhance the expressiveness of our model, we incorporate pseudoinverses in a novel way to enable a hidden dimension, giving us flexibility in the number of parameters. The ability of our model to predict long-term discontinuities from a short window of continuous training data is generally considered impossible with traditional machine learning approaches. We demonstrate that our model can generate highly accurate predictions of the evolution of convection without dissipation and of the discontinuities of hyperbolic systems from smooth training data.
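
    For reference, "Property U" for a flux $f(u)$ is usually stated in terms of a Roe matrix $\tilde{A}(u_L, u_R)$ satisfying (our summary of the standard conditions, not text from the paper)

        \tilde{A}(u_L, u_R)\,(u_R - u_L) = f(u_R) - f(u_L), \qquad
        \tilde{A}(u, u) = \frac{\partial f}{\partial u}(u),

    together with the requirement that $\tilde{A}$ be diagonalizable with real eigenvalues; the first condition guarantees that conserved quantities are preserved across discontinuities, and the second is consistency with the exact Jacobian.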