
    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning and have been utilized by other sciences such as space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new. These are generally unknown to most of the astronomical community, but are vital to the analysis and visualization of complex data sets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other. Comment: 24 pages, 8 figures, 1 table. Accepted for publication in Advances in Astronomy, special issue "Robotic Astronomy".

    Standard Model in multiscale theories and observational constraints

    We construct and analyze the Standard Model of electroweak and strong interactions in multiscale spacetimes with (i) weighted derivatives and (ii) $q$-derivatives. Both theories can be formulated in two different frames, called fractional and integer picture. By definition, the fractional picture is where physical predictions should be made. (i) In the theory with weighted derivatives, it is shown that gauge invariance and the requirement of having constant masses in all reference frames make the Standard Model in the integer picture indistinguishable from the ordinary one. Experiments involving only weak and strong forces are insensitive to a change of spacetime dimensionality also in the fractional picture, and only the electromagnetic and gravitational sectors can break the degeneracy. For the simplest multiscale measures with only one characteristic time, length and energy scale $t_*$, $\ell_*$ and $E_*$, we compute the Lamb shift in the hydrogen atom and constrain the multiscale correction to the ordinary result, getting the absolute upper bound $t_*<10^{-23}\,{\rm s}$. For the natural choice $\alpha_0=1/2$ of the fractional exponent in the measure, this bound is strengthened to $t_*<10^{-29}\,{\rm s}$, corresponding to $\ell_*<10^{-20}\,{\rm m}$ and $E_*>28\,{\rm TeV}$. Stronger bounds are obtained from the measurement of the fine-structure constant. (ii) In the theory with $q$-derivatives, considering the muon decay rate and the Lamb shift in light atoms, we obtain the independent absolute bounds $t_*<10^{-13}\,{\rm s}$ and $E_*>35\,{\rm MeV}$. For $\alpha_0=1/2$, the Lamb shift alone yields a bound on $t_*$ corresponding to $E_*>450\,{\rm GeV}$. Comment: 25 pages. v2: authors' metadata corrected; v3: references added, new material added including a comparison with varying-couplings and effective field theories, a section on predictivity and falsifiability of multiscale theories, a discussion on classical CPT, expanded conclusions, and new QED constraints from the fine-structure constant; v3: minor typos corrected to match the published version.
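    As a rough consistency check on how the quoted bounds relate to one another, the sketch below converts a bound on the characteristic time $t_*$ into length and energy scales using the ordinary natural-unit relations $\ell_*\approx c\,t_*$ and $E_*\approx\hbar/t_*$. This is only an order-of-magnitude illustration under that assumption; the multiscale measures in the paper may define the scales with different numerical factors, and the function name and constants below are ours, not the authors'.

        # Order-of-magnitude conversion of a time-scale bound into length and
        # energy scales, assuming l_* ~ c * t_* and E_* ~ hbar / t_*.
        # The paper's multiscale measures may introduce O(1) factors, so this
        # only reproduces the quoted bounds at the order-of-magnitude level.

        C = 2.998e8            # speed of light [m/s]
        HBAR_GEV_S = 6.582e-25 # reduced Planck constant [GeV s]

        def scales_from_time(t_star_s):
            """Return (length in m, energy in GeV) for a time scale in seconds."""
            length_m = C * t_star_s
            energy_gev = HBAR_GEV_S / t_star_s
            return length_m, energy_gev

        # Bound quoted for alpha_0 = 1/2 in the weighted-derivative theory: t_* < 1e-29 s
        l_star, e_star = scales_from_time(1e-29)
        print(f"l_* ~ {l_star:.1e} m, E_* ~ {e_star:.1e} GeV")
        # -> l_* ~ 3.0e-21 m, E_* ~ 6.6e+04 GeV, i.e. the 10^-20 m and
        #    tens-of-TeV range quoted in the abstract.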

    Multiscale Geometric Methods for Data Sets II: Geometric Multi-Resolution Analysis

    Data sets are often modeled as point clouds in $\mathbb{R}^D$, for $D$ large. It is often assumed that the data has some interesting low-dimensional structure, for example that of a $d$-dimensional manifold $M$, with $d$ much smaller than $D$. When $M$ is simply a linear subspace, one may exploit this assumption for encoding efficiently the data by projecting onto a dictionary of $d$ vectors in $\mathbb{R}^D$ (for example found by SVD), at a cost $(n+D)d$ for $n$ data points. When $M$ is nonlinear, there are no "explicit" constructions of dictionaries that achieve a similar efficiency: typically one uses either random dictionaries, or dictionaries obtained by black-box optimization. In this paper we construct data-dependent multi-scale dictionaries that aim at efficient encoding and manipulating of the data. Their construction is fast, and so are the algorithms that map data points to dictionary coefficients and vice versa. In addition, data points are guaranteed to have a sparse representation in terms of the dictionary. We think of dictionaries as the analogue of wavelets, but for approximating point clouds rather than functions. Comment: Re-formatted using AMS style.
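    A minimal sketch of the linear-subspace baseline described in the abstract: when the data lie near a $d$-dimensional subspace of $\mathbb{R}^D$, an SVD yields a dictionary of $d$ vectors and each point is stored as $d$ coefficients, for roughly $(n+D)d$ numbers in total. This illustrates only that linear case, not the paper's multiscale (GMRA) construction for nonlinear manifolds; the toy dimensions and variable names are ours.

        # Linear-subspace encoding via SVD: dictionary of d vectors (D*d numbers)
        # plus d coefficients per point (n*d numbers), i.e. ~(n+D)*d storage.
        import numpy as np

        rng = np.random.default_rng(0)
        n, D, d = 1000, 50, 3   # n points in R^D, intrinsic dimension d (toy values)

        # Synthetic data concentrated near a random d-dimensional subspace, plus noise.
        basis = rng.standard_normal((d, D))
        X = rng.standard_normal((n, d)) @ basis + 0.01 * rng.standard_normal((n, D))

        # Dictionary: top-d right singular vectors of the centered data matrix.
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        dictionary = Vt[:d]                      # shape (d, D): D*d numbers

        # Encoding: d coefficients per point -> n*d numbers.
        coeffs = (X - mean) @ dictionary.T       # shape (n, d)

        # Decoding and relative approximation error.
        X_hat = coeffs @ dictionary + mean
        err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
        print(f"storage ~ (n + D) * d = {(n + D) * d} numbers, relative error = {err:.2e}")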