
    Map online system using internet-based image catalogue

    Digital maps carry geodata, such as coordinates, that is essential to any topographic or thematic map, and this information is especially meaningful in the military field. Because the maps carry this information, the image files become very large: the larger the image, the more storage is required and the longer the loading time. These conditions make raw map images unsuitable for an image catalogue approach delivered over the Internet. With compression techniques, the image size can be reduced while the image quality is preserved with little change. This report focuses on one image compression technique based on wavelet technology, which currently outperforms other image compression techniques. The compressed images are applied in a system called Map Online, which uses an Internet-based image catalogue approach. The system allows users to buy maps online, download the maps they have bought, and search for maps using several meaningful keywords. This system is expected to be used by Jabatan Ukur dan Pemetaan Malaysia (JUPEM) to help realise the organisation's vision.
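    As a rough illustration of the wavelet idea only, the sketch below keeps the largest wavelet coefficients of an image and reconstructs it; it assumes the PyWavelets (pywt) library and a single-channel image array, and is not the thesis's actual Map Online pipeline.

    # Hypothetical wavelet-compression sketch (PyWavelets assumed, not the
    # thesis's implementation): keep only a small fraction of coefficients.
    import numpy as np
    import pywt

    def compress_image(image, wavelet="haar", level=3, keep_ratio=0.05):
        """Zero out all but the largest wavelet coefficients, then reconstruct."""
        # Multi-level 2D wavelet decomposition of the image.
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        # Flatten the coefficients so a single global threshold can be chosen.
        arr, slices = pywt.coeffs_to_array(coeffs)
        flat = np.sort(np.abs(arr).ravel())
        # Keep only the top `keep_ratio` fraction of coefficients by magnitude.
        threshold = flat[int((1 - keep_ratio) * flat.size)]
        arr[np.abs(arr) < threshold] = 0.0
        # Rebuild the coefficient structure and reconstruct the image.
        coeffs_thresh = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs_thresh, wavelet)

    # Usage on a synthetic 256x256 "map" tile.
    if __name__ == "__main__":
        img = np.random.rand(256, 256)
        print("reconstruction shape:", compress_image(img).shape)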

    Entropy-scaling search of massive biological data

    Many datasets exhibit a well-defined structure that can be exploited to design faster search tools, but it is not always clear when such acceleration is possible. Here, we introduce a framework for similarity search based on characterizing a dataset's entropy and fractal dimension. We prove that searching scales in time with metric entropy (number of covering hyperspheres), if the fractal dimension of the dataset is low, and scales in space with the sum of metric entropy and information-theoretic entropy (randomness of the data). Using these ideas, we present accelerated versions of standard tools, with no loss in specificity and little loss in sensitivity, for use in three domains: high-throughput drug screening (Ammolite, 150x speedup), metagenomics (MICA, 3.5x speedup of DIAMOND [3,700x BLASTX]), and protein structure search (esFragBag, 10x speedup of FragBag). Our framework can be used to achieve "compressive omics," and the general theory can be readily applied to data science problems outside of biology.
    Comment: Including supplement: 41 pages, 6 figures, 4 tables, 1 box
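    The coarse-to-fine idea behind this scaling can be illustrated with a small range-search sketch in Python: the data are covered by clusters of radius cluster_radius, and a query only descends into clusters whose representative lies within query_radius + cluster_radius (triangle inequality). The greedy covering and all names are illustrative and are not the implementations of Ammolite, MICA, or esFragBag.

    import numpy as np

    def build_cover(points, cluster_radius):
        """Greedy covering: assign each point to the first representative within
        cluster_radius, otherwise promote it to a new representative."""
        reps, members = [], []
        for i, p in enumerate(points):
            for j, r in enumerate(reps):
                if np.linalg.norm(p - points[r]) <= cluster_radius:
                    members[j].append(i)
                    break
            else:
                reps.append(i)
                members.append([i])
        return reps, members

    def range_search(points, reps, members, query, query_radius, cluster_radius):
        """Return the indices of all points within query_radius of the query."""
        hits = []
        for j, r in enumerate(reps):
            # Coarse stage: prune whole clusters via the triangle inequality.
            if np.linalg.norm(query - points[r]) <= query_radius + cluster_radius:
                # Fine stage: check individual members of surviving clusters.
                hits += [i for i in members[j]
                         if np.linalg.norm(query - points[i]) <= query_radius]
        return hits

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        data = rng.normal(size=(1000, 8))
        reps, members = build_cover(data, cluster_radius=2.0)
        print(len(reps), "representatives cover", len(data), "points")
        print("hits:", range_search(data, reps, members, data[0], 1.0, 2.0))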

    Algorithmic complexity for psychology: A user-friendly implementation of the coding theorem method

    Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings, but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology.
    Comment: to appear in "Behavioral Research Methods", 14 pages in journal format, R package at http://cran.r-project.org/web/packages/acss/index.htm
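    The coding theorem method itself can be sketched in a few lines: the complexity of a short string s is approximated by -log2(D(s)), where D(s) is the frequency with which s appears as the output of a large sample of small Turing machines. The frequency table below is entirely made up for illustration; the real tables ship with the acss package.

    import math

    # Hypothetical output frequencies for a few binary strings (illustration
    # only; these are not the values distributed with the acss package).
    HYPOTHETICAL_D = {
        "0000": 2.5e-3,
        "0101": 1.2e-3,
        "0110": 9.0e-4,
    }

    def acss_estimate(s, table=HYPOTHETICAL_D):
        """Coding theorem method: K(s) is approximated by -log2(D(s))."""
        if s not in table:
            raise KeyError(f"no output frequency recorded for {s!r}")
        return -math.log2(table[s])

    if __name__ == "__main__":
        for s in HYPOTHETICAL_D:
            print(s, round(acss_estimate(s), 2), "bits")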

    From Caenorhabditis elegans to the Human Connectome: A Specific Modular Organisation Increases Metabolic, Functional, and Developmental Efficiency

    The connectome, the entire connectivity of a neural system represented as a network, spans scales ranging from synaptic connections between individual neurons to fibre tract connections between brain regions. Although the modularity these networks commonly show has been extensively studied, it is unclear whether their connection specificity can be fully explained by modularity alone. To answer this question, we study two networks: the neuronal network of C. elegans and the fibre tract network of human brains obtained through diffusion spectrum imaging (DSI). We compare them to their respective benchmark networks with varying modularities, generated by link swapping to reach desired modularity values while remaining otherwise maximally random. We find several network properties that are specific to the neural networks and cannot be fully explained by modularity alone. First, the clustering coefficient and the characteristic path length of the C. elegans and human connectomes are both higher than those of the benchmark networks with similar modularity. A high clustering coefficient indicates efficient local information distribution, and a high characteristic path length suggests reduced global integration. Second, the total wiring length is smaller than for the alternative configurations with similar modularity. This is due to lower dispersion of connections: each neuron in the C. elegans connectome, or each region of interest (ROI) in the human connectome, reaches fewer ganglia or cortical areas, respectively. Third, both neural networks show lower algorithmic entropy compared to the alternative arrangements, implying that fewer rules are needed to encode the organisation of neural systems.
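    A minimal sketch of this kind of comparison, assuming the networkx library and a synthetic small-world graph as a stand-in connectome, is given below; the paper's benchmarks additionally constrain modularity, which the sketch omits.

    import networkx as nx

    def summary(g):
        """Clustering coefficient and characteristic path length of a graph."""
        return {
            "clustering": nx.average_clustering(g),
            "path_length": nx.average_shortest_path_length(g),
        }

    if __name__ == "__main__":
        # Stand-in connectome: a small-world graph instead of real C. elegans data.
        g = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)
        print("original :", summary(g))

        # Benchmark: same degree sequence, edges randomised by connectivity-
        # preserving double-edge swaps (link swapping).
        bench = g.copy()
        nx.connected_double_edge_swap(bench, nswap=500, seed=1)
        print("benchmark:", summary(bench))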

    On the complexity and the information content of cosmic structures

    The emergence of cosmic structure is commonly considered one of the most complex phenomena in Nature. However, this complexity has never been defined nor measured in a quantitative and objective way. In this work we propose a method, based on Information Theory, to measure the information content of cosmic structure and to quantify the complexity that emerges from it. The emergence of complex evolutionary patterns is studied with a statistical symbolic analysis of the data stream produced by state-of-the-art cosmological simulations of forming galaxy clusters. This powerful approach allows us to measure how many bits of information are necessary to predict the evolution of energy fields in a statistical way, and it offers a simple way to quantify when, where and how the cosmic gas behaves in complex ways. The most complex behaviors are found in the peripheral regions of galaxy clusters, where supersonic flows drive shocks and large energy fluctuations over a few tens of millions of years. Describing the evolution of magnetic energy requires at least twice as many bits as the other energy fields. When radiative cooling and feedback from galaxy formation are considered, the cosmic gas is overall found to double its degree of complexity. In the future, Cosmic Information Theory can significantly increase our understanding of the emergence of cosmic structure, as it represents an innovative framework to design and analyze complex simulations of the Universe in a simple, yet powerful way.
    Comment: 15 pages, 14 figures. MNRAS accepted, in press
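    An illustrative sketch, not the paper's pipeline, of how a statistical symbolic analysis attaches a bit count to an evolving quantity: a synthetic time series stands in for an energy field history, is coarse-grained into a small alphabet, and its block entropy measures how many bits are needed to describe symbol words of increasing length.

    from collections import Counter
    import math
    import numpy as np

    def symbolize(series, n_symbols=4):
        """Map each value to a symbol by equal-population (quantile) binning."""
        edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
        return list(np.digitize(series, edges))

    def block_entropy(symbols, block_len):
        """Shannon entropy (in bits) of length-`block_len` words in the stream."""
        blocks = [tuple(symbols[i:i + block_len])
                  for i in range(len(symbols) - block_len + 1)]
        counts = Counter(blocks)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        energy = np.cumsum(rng.normal(size=5000))  # stand-in energy history
        sym = symbolize(energy)
        for L in (1, 2, 4):
            print(f"H({L}) = {block_entropy(sym, L):.2f} bits")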