
    The similarity metric

    A new class of distances appropriate for measuring similarity relations between sequences, say one type of similarity per distance, is studied. We propose a new "normalized information distance", based on the noncomputable notion of Kolmogorov complexity, and show that it is in this class and that it minorizes every computable distance in the class (that is, it is universal in that it discovers all computable similarities). We demonstrate that it is a metric and call it the similarity metric. This theory forms the foundation for a new practical tool. To show generality and robustness, we give two distinctive applications in widely divergent areas, using standard compression programs like gzip and GenCompress. First, we compare whole mitochondrial genomes and infer their evolutionary history. This yields the first completely automatically computed whole mitochondrial phylogeny tree. Second, we fully automatically compute the language tree of 52 different languages. Comment: 13 pages, LaTeX, 5 figures. Part of this work appeared in Proc. 14th ACM-SIAM Symp. Discrete Algorithms, 2003. This is the final, corrected version, to appear in IEEE Trans. Inform. Theory.
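    In practice, the noncomputable Kolmogorov complexity C(x) is approximated by the length of a compressed string, giving the normalized compression distance. Below is a minimal sketch of that approximation in Python, using zlib as a stand-in for the gzip-style compressors mentioned above; the function names and example strings are illustrative, not taken from the paper's code.

        import os
        import zlib

        def approx_complexity(data: bytes) -> int:
            # Approximate the (noncomputable) Kolmogorov complexity C(x)
            # by the length of the zlib-compressed string (level 9).
            return len(zlib.compress(data, 9))

        def ncd(x: bytes, y: bytes) -> float:
            # Normalized compression distance:
            #   NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
            # Close to 0 for very similar inputs, close to 1 for unrelated ones.
            cx = approx_complexity(x)
            cy = approx_complexity(y)
            cxy = approx_complexity(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        a = b"the quick brown fox jumps over the lazy dog" * 20
        b = b"the quick brown fox leaps over the lazy cat" * 20
        print(ncd(a, b))                   # small: the two texts share most structure
        print(ncd(a, os.urandom(len(a))))  # near 1: random data shares nothing

    The same pairwise distances, fed into a standard clustering or tree-building routine, are what drive the phylogeny and language-tree experiments described above.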

    Distinguishing f(R) theories from general relativity by gravitational lensing effect

    The post-Newtonian formulation of a general class of f(R) theories is set up to the 3rd-order approximation. It turns out that the information on the specific form of the f(R) gravity is encoded in a Yukawa potential, which is contained in the perturbative expansion of the metric components. Although the Yukawa potential cancels in the 2nd-order expression of the effective refraction index of light, detailed analysis shows that the difference in the lensing effect between f(R) gravity and general relativity does appear at the 3rd order when $\sqrt{f''(0)/f'(0)}$ is larger than the distance $d_0$ to the gravitational source. However, the difference between the two kinds of theories disappears in an axially symmetric spacetime region. Therefore, only in very rare cases are f(R) theories distinguishable from general relativity by the gravitational lensing effect at the 3rd-order post-Newtonian approximation. Comment: 14 pages
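    For orientation, the weak-field limit of f(R) gravity is usually written with a Yukawa-corrected Newtonian potential. The display below is that standard textbook form, given here as an assumption for illustration; the O(1) factors and sign conventions vary between papers and are not quoted from this abstract:

        \Phi(r) = -\frac{GM}{r}\left(1 + \frac{1}{3}\,e^{-r/\lambda}\right),
        \qquad
        \lambda \sim \sqrt{\frac{f''(0)}{f'(0)}} \quad \text{(up to an $O(1)$ factor)}.

    The Yukawa term, and with it any lensing deviation from general relativity, is only appreciable when the range $\lambda$ exceeds the distance $d_0$ to the source, matching the condition stated above.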

    Planck Constraints on Holographic Dark Energy

    We perform a detailed investigation of the cosmological constraints on the holographic dark energy (HDE) model using the Planck data. HDE can provide a good fit to the Planck high-l (l > 40) temperature power spectrum, while the discrepancy at l = 20-40 found in LCDM remains unsolved in HDE. The Planck data alone lead to a strong and reliable constraint on the HDE parameter c. At 68% CL, we get c = 0.508 ± 0.207 with Planck+WP+lensing, favoring a present-day phantom HDE at more than 2σ CL. By comparison, WMAP9 alone cannot give an interesting constraint on c. Combining Planck+WP with the BAO measurements from 6dFGS+SDSS DR7(R)+BOSS DR9, the H0 measurement from HST, and the SNLS3 and Union2.1 SNIa data sets, we get the 68% CL constraints c = 0.484 ± 0.070, 0.474 ± 0.049, 0.594 ± 0.051 and 0.642 ± 0.066, respectively. These constraints improve by 2%-15% if we further add the Planck lensing data. Compared with the WMAP9 results, the Planck results reduce the error by 30%-60% and prefer a phantom-like HDE at higher CL. We find no evident tension between Planck and BAO/HST. In particular, the strong correlation between Ω_m h^3 and the dark energy parameters helps relieve the tension between Planck and HST: the residual χ²_{Planck+WP+HST} - χ²_{Planck+WP} is 7.8 in LCDM, and is reduced to 1.0 or 0.3 if we switch dark energy to the w model or the holographic model. We find that SNLS3 is in tension with all other data sets; for Planck+WP, WMAP9 and BAO+HST, the corresponding Δχ² is 6.4, 3.5 and 4.1, respectively. By comparison, Union2.1 is consistent with these data sets, but the combination Union2.1+BAO+HST is in tension with Planck+WP+lensing, corresponding to Δχ² = 8.6 (1.4% probability). Thus, it is not reasonable to perform an all-combined (CMB+SNIa+BAO+HST) analysis for HDE when using the Planck data. Our tightest self-consistent constraint is c = 0.495 ± 0.039, obtained from Planck+WP+BAO+HST+lensing. Comment: 29 pages, 11 figures, 3 tables; version accepted for publication in JCAP
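    In the HDE model the parameter c fixes the dark-energy equation of state through w(z) = -1/3 - (2/(3c))√Ω_de(z), so w drops below -1 whenever c < √Ω_de; that is why best-fit values near c ≈ 0.5 favor a phantom HDE today. The sketch below integrates the standard flat-universe HDE evolution equation from the literature; it is not code from the paper, and the values of c and Ω_de0 are illustrative.

        import numpy as np
        from scipy.integrate import solve_ivp

        c_hde = 0.5      # illustrative, close to the best-fit values quoted above
        omega_de0 = 0.7  # assumed present-day dark energy fraction

        def dOmega_dz(z, y):
            # Standard flat-universe HDE evolution:
            # dOmega_de/dz = -Omega_de (1 - Omega_de) (1 + (2/c) sqrt(Omega_de)) / (1 + z)
            om = y[0]
            return [-om * (1.0 - om) * (1.0 + 2.0 * np.sqrt(om) / c_hde) / (1.0 + z)]

        def w_of(om):
            # HDE equation of state: w = -1/3 - (2 / (3c)) sqrt(Omega_de)
            return -1.0 / 3.0 - 2.0 * np.sqrt(om) / (3.0 * c_hde)

        sol = solve_ivp(dOmega_dz, (0.0, 3.0), [omega_de0], dense_output=True)
        for z in (0.0, 0.5, 1.0, 2.0):
            om = sol.sol(z)[0]
            # With c = 0.5 < sqrt(0.7), w(0) < -1: phantom today.
            print(f"z = {z:.1f}: Omega_de = {om:.3f}, w = {w_of(om):.3f}")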

    Polyethylenimine-Modified Multiwalled Carbon Nanotubes for Plasmid DNA Gene Delivery

    An efficient molecular delivery technique based on transporting high-molecular-weight polyethylenimine-modified multiwalled carbon nanotubes (PEI 600K-MWCNTs) across cell membranes is reported. The PEI 600K-MWCNTs exhibit low cytotoxicity, their associated plasmid DNA (pDNA) is delivered into cells efficiently, and green fluorescent protein (GFP) expression levels up to 18 times higher than those of naked DNA were observed.

    Sampling Online Social Networks via Heterogeneous Statistics

    Most sampling techniques for online social networks (OSNs) are based on a particular sampling method applied to a single graph, which we refer to as a statistic. However, several such methods, on different graphs, may be usable in the same OSN, and they can lead to different sampling efficiencies, i.e., asymptotic variances. To utilize multiple statistics for accurate measurements, we formulate a mixture sampling problem, through which we construct a mixture unbiased estimator that minimizes the asymptotic variance. Given fixed sampling budgets for the different statistics, we derive the optimal weights for combining the individual estimators; given a fixed total budget, we show that a greedy allocation towards the most efficient statistic is optimal. In practice, the sampling efficiencies of the statistics can differ widely across measurement targets and are unknown before sampling. To solve this problem, we design a two-stage framework that adaptively spends a partial budget to test the different statistics and then allocates the remaining budget to the inferred best statistic. We show that our two-stage framework generalizes both 1) randomly choosing a statistic and 2) evenly allocating the total budget among all available statistics, and that our adaptive algorithm achieves higher efficiency than these benchmark strategies in theory and experiment.
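    For fixed per-statistic budgets, the minimum-variance unbiased combination of independent unbiased estimators is the classical inverse-variance weighting; the sketch below assumes that standard form (not the paper's exact derivation) together with a simple explore-then-exploit two-stage budget split. All names and numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        true_mean = 1.0

        # Two hypothetical statistics: unbiased samplers of the same target,
        # with different (unknown in advance) per-sample variances.
        samplers = [
            lambda n: rng.normal(true_mean, 2.0, n),  # inefficient statistic
            lambda n: rng.normal(true_mean, 0.5, n),  # efficient statistic
        ]

        def mixture_estimate(samples):
            # Inverse-variance weights w_i ∝ n_i / s_i^2 minimize the variance
            # of an unbiased convex combination of per-statistic sample means.
            means = np.array([s.mean() for s in samples])
            w = np.array([len(s) / s.var(ddof=1) for s in samples])
            w /= w.sum()
            return float(w @ means)

        def two_stage(total_budget, pilot_frac=0.2):
            # Stage 1: spend a small pilot budget on every statistic.
            pilot = max(2, int(pilot_frac * total_budget / len(samplers)))
            samples = [s(pilot) for s in samplers]
            # Stage 2: greedily give the remaining budget to the statistic
            # whose pilot sample variance (an efficiency proxy) is smallest.
            best = int(np.argmin([s.var(ddof=1) for s in samples]))
            rest = total_budget - pilot * len(samplers)
            samples[best] = np.concatenate([samples[best], samplers[best](rest)])
            return mixture_estimate(samples)

        print(two_stage(1000))  # close to true_mean, dominated by the efficient statistic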