
    The Flatness of Mass-to-Light Ratio on Large Scales

    It has been suggested that the mass-to-light (M/L) ratio of gravitationally clustering objects is scale-independent on scales beyond galaxy clusters, and may also be independent of the mass of the objects. In this paper, we show that the scale behavior of the M/L ratio is closely related to the scaling of cosmic structures larger than clusters. The scale dependence of the M/L ratio can be determined by comparing the observed scaling of the richness function (RF) of multi-scale identified objects with the model-predicted scaling of the mass function (MF) of large-scale structures. Using the multi-scale identified clusters from the IRAS 1.2 Jy galaxy survey, we have compared the observed RF scaling of IRAS r_{cl}-clusters with the MF scalings given by simulations of three popular models: SCDM, LCDM, and OCDM. We find that the M/L ratio is essentially scale-independent from the Abell radius up to about 24 h^{-1} Mpc, although it seems to show a slight but systematic increase over this scale range. This result is weakly dependent on the cosmological parameters.
    Comment: AAS LaTeX file, 8 pages + 4 figures, accepted for publication in ApJ
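    The RF-to-MF comparison described in the abstract amounts to abundance matching: at each scale, the mass assigned to objects of a given richness is the one whose model abundance equals the observed abundance. A minimal sketch, assuming illustrative power-law forms for both cumulative functions (the forms, the function name, and all numbers are assumptions for illustration, not taken from the paper):

    ```python
    # Toy abundance matching between a cumulative richness function (RF)
    # and a cumulative mass function (MF), both modeled as power laws:
    #   n_RF(>N) = A * N**(-a)   (observed at a given scale)
    #   n_MF(>M) = B * M**(-b)   (model prediction at the same scale)
    # Setting n_RF(>N) = n_MF(>M) and solving for M gives the mass matched
    # to richness N. If luminosity scales with richness, the M/L ratio is
    # scale-independent exactly when this matched mass keeps the same
    # normalization from one smoothing scale to the next.

    def mass_from_richness(N, A, a, B, b):
        """Solve B * M**(-b) = A * N**(-a) for the matched mass M."""
        return (B / (A * N ** (-a))) ** (1.0 / b)

    # Illustrative check: with identical power laws (A = B, a = b),
    # abundance matching maps each richness onto the same mass value,
    # i.e. a flat (scale-independent) M/L normalization.
    m1 = mass_from_richness(50.0, A=1.0, a=2.0, B=1.0, b=2.0)
    m2 = mass_from_richness(50.0, A=1.0, a=2.0, B=1.0, b=2.0)
    ```

    In this degenerate case the matched mass simply equals the richness (m1 = m2 = 50), which is the flat-M/L limit; any scale dependence of the real RF/MF pair would show up as a drift of this normalization with scale.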

    Tunneling Qubit Operation on a Protected Josephson Junction Array

    We discuss a protected quantum computation process based on a hexagonal Josephson junction array. Qubits are encoded in the punctured array, which is topologically protected. The degeneracy is related to the number of holes. The topological degeneracy is slightly lifted by tuning the flux through specific hexagons. We also show how to perform single-qubit operations and basic quantum gate operations in this system.
    Comment: 8 pages, 4 figures. The published version in Phys. Rev., A81(2010)01232

    On the exactness of soft theorems

    Soft behaviours of the S-matrix for massless theories reflect the underlying symmetry principle that enforces their masslessness. As an expansion in soft momenta, sub-leading soft theorems can arise either due to (I) the unique structure of the fundamental vertex or (II) the presence of enhanced broken symmetries. While the former is expected to be modified by infrared or ultraviolet divergences, the latter should remain exact to all orders in perturbation theory. Using current algebra, we clarify this distinction for spontaneously broken (super) Poincar\'e and (super) conformal symmetry. We compute the UV divergences of DBI, conformal DBI, and A-V theory to verify the exactness of type (II) soft theorems, while type (I) soft theorems are shown to be broken, and the soft-modifying higher-dimensional operators are identified. As further evidence for the exactness of type (II) soft theorems, we consider the alpha' expansion of both super and bosonic open string amplitudes, and verify the validity of the translation symmetry breaking soft theorems up to O(alpha'^6). Thus the massless S-matrix of string theory "knows" about the presence of D-branes.
    Comment: 35 pages. Additional Mathematica notebook with the UV-divergence of the 6-point amplitude in AV/KS theory

    JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution

    Recent years have witnessed a rapid growth of deep-network based services and applications. A practical and critical problem has thus emerged: how to effectively deploy deep neural network models so that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data center servers, causing large latency because a significant amount of data has to be transferred from the edge of the network to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework, which decouples a deep neural network so that one part of it runs at edge devices and the other part inside the conventional cloud, while only a minimal amount of data has to be transferred between them. Though the idea seems straightforward, we face challenges including i) how to find the best partition of a deep structure; ii) how to deploy the component at an edge device that has only limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD, including 1) a normalization-based in-layer data compression strategy that jointly considers compression rate and model accuracy; 2) a latency-aware deep decoupling strategy to minimize the overall execution latency; and 3) an edge-cloud structure adaptation strategy that dynamically changes the decoupling for different network conditions. Experiments demonstrate that our solution can significantly reduce the execution latency: it speeds up the overall inference execution with a guaranteed model accuracy loss.
    Comment: conference, copyright transferred to IEEE
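    The latency-aware decoupling in challenge i) can be sketched as a one-dimensional search over cut points: for each candidate layer boundary, total latency is edge compute up to the cut, plus the time to transfer that layer's output, plus cloud compute for the rest. A minimal sketch under stated assumptions (the function name, the additive latency model, and all numbers are illustrative, not JALAD's actual algorithm, which additionally compresses the transferred data):

    ```python
    # Hypothetical brute-force search for the layer boundary that minimizes
    # end-to-end latency of an edge/cloud split of a sequential network.
    # cut k means layers 0..k-1 run on the edge device, layers k.. in the
    # cloud; k = 0 ships the raw input straight to the cloud.

    def best_cut(edge_ms, cloud_ms, out_bytes, bandwidth_bps):
        """Return (cut_index, latency_ms) for the lowest-latency split.

        edge_ms[i]  -- per-layer latency on the edge device (ms)
        cloud_ms[i] -- per-layer latency in the cloud (ms)
        out_bytes[k] -- size of the tensor crossing cut k (len == n + 1;
                        out_bytes[0] is the raw input)
        bandwidth_bps -- uplink bandwidth in bits per second
        """
        n = len(edge_ms)
        best = None
        for k in range(n + 1):
            transfer_ms = out_bytes[k] * 8 / bandwidth_bps * 1000
            latency = sum(edge_ms[:k]) + transfer_ms + sum(cloud_ms[k:])
            if best is None or latency < best[1]:
                best = (k, latency)
        return best

    # Illustrative numbers: a slow edge device, a fast cloud, and activations
    # that shrink deeper in the network, so an intermediate cut wins.
    cut, latency = best_cut(
        edge_ms=[10.0, 10.0, 10.0],
        cloud_ms=[1.0, 1.0, 1.0],
        out_bytes=[1e6, 1e5, 1e4, 1e3],
        bandwidth_bps=1e7,  # 10 Mbit/s uplink
    )
    ```

    With these numbers the search picks cut = 2: shipping the raw input costs ~800 ms of transfer alone, while running two layers at the edge shrinks the tensor enough that transfer drops to 8 ms. JALAD's actual strategy layers in-layer compression and dynamic re-adaptation on top of this basic trade-off.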