
    On-the-fly memory compression for multibody algorithms.

    Memory and bandwidth demands challenge developers of particle-based codes that have to scale on new architectures, as the growth of concurrency outpaces improvements in memory access facilities, as the memory per core tends to stagnate, and as communication networks cannot increase bandwidth arbitrarily. We propose to analyse each particle of such a code to find out whether a hierarchical data representation storing data with reduced precision caps the memory demands without exceeding given error bounds. For admissible candidates, we perform this compression and thus reduce the pressure on the memory subsystem, lower the total memory footprint and reduce the data to be exchanged via MPI. Notably, our analysis and transformation change the data compression dynamically, i.e. the choice of data format follows the solution characteristics, and it does not require us to alter the core simulation code.
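
    The decision the abstract describes — keep a particle attribute in reduced precision only if the round-trip error stays within a prescribed bound — can be sketched as follows. This is a minimal illustration under assumed names (`admissible_dtype`, `compress`), with NumPy's float types standing in for the hierarchical data representation; it is not the authors' implementation.

```python
import numpy as np

def admissible_dtype(values: np.ndarray, err_bound: float) -> np.dtype:
    """Pick the narrowest float type whose round-trip error stays in bounds."""
    for dtype in (np.float16, np.float32):
        roundtrip = values.astype(dtype).astype(np.float64)
        if np.max(np.abs(values - roundtrip)) <= err_bound:
            return np.dtype(dtype)
    return np.dtype(np.float64)               # not admissible: keep full precision

def compress(values: np.ndarray, err_bound: float) -> np.ndarray:
    return values.astype(admissible_dtype(values, err_bound))

positions = np.random.rand(1024) * 1e-3       # mock per-particle attribute
stored = compress(positions, err_bound=1e-6)
print(stored.dtype, stored.nbytes, "bytes instead of", positions.nbytes)
```

    Because the admissibility test can be re-run as the solution evolves, the chosen format changes dynamically without touching the simulation kernel itself.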

    Efficient Data Compression with Error Bound Guarantee in Wireless Sensor Networks

    We present a data compression and dimensionality reduction scheme for data fusion and aggregation applications to prevent data congestion and reduce energy consumption at network connecting points such as cluster heads and gateways. Our in-network approach can be easily tuned to analyze the temporal or spatial correlation of the data using an unsupervised neural network scheme, namely autoencoders. In particular, our algorithm extracts intrinsic data features from previously collected historical samples to transform the raw data into a low-dimensional representation. Moreover, the proposed framework provides an error bound guarantee mechanism. We evaluate the proposed solution using real-world data sets and compare it with traditional methods for temporal and spatial data compression. The experimental validation reveals that our approach outperforms several existing wireless sensor network data compression methods in terms of compression efficiency and signal reconstruction.
    Comment: ACM MSWiM 201
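
    One way to realise the error bound guarantee the abstract mentions is to fall back to the raw sample whenever the decoded reading would violate the bound. The sketch below uses a linear, PCA-style codec built from historical samples as a simplified stand-in for the paper's trained autoencoder; all names and the fallback rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
history = rng.normal(size=(500, 32))            # previously collected samples
history += history[:, :1]                       # inject correlation to exploit

mean = history.mean(axis=0)
_, _, vt = np.linalg.svd(history - mean, full_matrices=False)
basis = vt[:8]                                  # 8-dimensional latent code

def encode(x):  return basis @ (x - mean)
def decode(z):  return mean + basis.T @ z

def transmit(x, err_bound):
    z = encode(x)
    if np.max(np.abs(decode(z) - x)) <= err_bound:
        return ("compressed", z)                # 8 floats instead of 32
    return ("raw", x)                           # guarantee: bound never exceeded

kind, payload = transmit(history[0], err_bound=0.5)
print(kind, payload.shape)
```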

    Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality

    We survey diverse approaches to the notion of information, from Shannon entropy to Kolmogorov complexity, and present two of the main applications of Kolmogorov complexity: randomness and classification. The survey is divided into two parts published in the same volume. Part II is dedicated to the relation between logic and information systems, within the scope of Kolmogorov algorithmic information theory. We present a recent application of Kolmogorov complexity: classification using compression, an idea with provocative implementations by authors such as Bennett, Vitanyi and Cilibrasi. This stresses how Kolmogorov complexity, besides being a foundation for randomness, is also related to classification. Another approach to classification is also considered: the so-called "Google classification". It uses another original and attractive idea which is connected to classification using compression and to Kolmogorov complexity from a conceptual point of view. We present and unify these different approaches to classification in terms of Bottom-Up versus Top-Down operational modes, whose fundamental principles and underlying duality we point out. We look at the way these two dual modes are used in different approaches to information systems, particularly the relational model for databases introduced by Codd in the 1970s. This allows us to point out diverse forms of a fundamental duality. These operational modes are also reinterpreted in the context of the comprehension schema of axiomatic set theory ZF. This leads us to develop how Kolmogorov complexity is linked to intensionality, abstraction, classification and information systems.
    Comment: 43 pages
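
    The "classification using compression" idea credited to Bennett, Vitanyi and Cilibrasi is usually realised through the Normalized Compression Distance, which replaces the incomputable Kolmogorov complexity K(x) with the length C(x) produced by a real compressor. A minimal sketch with zlib:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length as a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = c(x), c(y)
    return (c(x + y) - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 20
b = b"the quick brown fox leaps over the lazy cat" * 20
z = bytes(range(256)) * 4                       # unrelated, hard-to-share data
print(ncd(a, b), ncd(a, z))   # similar inputs score lower than unrelated ones
```

    Objects are then clustered or classified by this distance alone, with the compressor, rather than any domain-specific feature set, supplying the notion of shared information.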

    Reconstruction of the Antenna Near-Field

    The aim of this dissertation thesis is to design a highly effective algorithm that is able to reconstruct the antenna near-field, and the far-field radiation patterns respectively, from amplitude-only measurements. To this end, the properties of the minimization algorithm were researched: the minimization approach, the optimization technique and the appropriate functional were investigated and suitably chosen. To reveal the global-minimum area faster, initial estimates for accelerating the minimization algorithm were also considered. Finally, the idea of representing the unknown electric field distribution by a few coefficients was incorporated into the minimization algorithm. The designed near-field phaseless approach for antenna far-field characterization combines a global optimization, an image compression method and a local optimization in conjunction with conventional two-surface amplitude measurements. The global optimization method is used to minimize the functional, the image compression method is used to reduce the number of unknown variables, and the local optimization method is used to improve the estimate achieved by the previous method. The proposed algorithm is very robust and faster than comparable algorithms available. Further investigations focused on the possibility of using amplitudes from only a single scanning surface for the reconstruction of radiation patterns, and on the application of the novel phase retrieval algorithm to cylindrical geometry.
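
    A schematic of the functional such a two-surface phaseless method minimizes: the aperture field is parameterized by a few compression coefficients, and only amplitude mismatches on the two scanning surfaces enter the cost. Everything below — the DCT basis, the random mock propagators `T1`/`T2`, and the single local Nelder-Mead refinement — is an illustrative assumption, not the thesis code (which couples a global optimizer with the local one).

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import minimize

n, k = 64, 8                                     # field samples, coefficients
rng = np.random.default_rng(1)
T1 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # mock aperture ->
T2 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # surface operators

def field_from_coeffs(c):
    """A few DCT coefficients -> aperture field (the 'image compression' step)."""
    full = np.zeros(n)
    full[:k] = c
    return idct(full, norm="ortho")

def functional(c, amp1, amp2):
    e = field_from_coeffs(c)
    r1 = np.abs(T1 @ e) - amp1                   # amplitude-only: no phase used
    r2 = np.abs(T2 @ e) - amp2
    return r1 @ r1 + r2 @ r2

true_c = rng.normal(size=k)
e_true = field_from_coeffs(true_c)
amp1, amp2 = np.abs(T1 @ e_true), np.abs(T2 @ e_true)

res = minimize(functional, np.zeros(k), args=(amp1, amp2), method="Nelder-Mead")
print(res.fun)                                   # mismatch after local refinement
```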

    Optimising Spatial and Tonal Data for PDE-based Inpainting

    Some recent methods for lossy signal and image compression store only a few selected pixels and fill in the missing structures by inpainting with a partial differential equation (PDE). Suitable operators include the Laplacian, the biharmonic operator, and edge-enhancing anisotropic diffusion (EED). The quality of such approaches depends substantially on the selection of the data that is kept. Optimising this data in the domain and codomain gives rise to challenging mathematical problems that shall be addressed in our work. In the 1D case, we prove results that provide insights into the difficulty of this problem, and we give evidence that a splitting into spatial and tonal (i.e. function value) optimisation hardly deteriorates the results. In the 2D setting, we present generic algorithms that achieve a high reconstruction quality even if the specified data is very sparse. To optimise the spatial data, we use a probabilistic sparsification, followed by a nonlocal pixel exchange that avoids getting trapped in bad local optima. After this spatial optimisation we perform a tonal optimisation that modifies the function values in order to reduce the global reconstruction error. For homogeneous diffusion inpainting, this comes down to a least squares problem for which we prove that it has a unique solution. We demonstrate that it can be found efficiently with a gradient descent approach that is accelerated with fast explicit diffusion (FED) cycles. Our framework allows us to specify the desired density of the inpainting mask a priori. Moreover, it is more generic than other data optimisation approaches for the sparse inpainting problem, since it can also be extended to nonlinear inpainting operators such as EED. This is exploited to achieve reconstructions with state-of-the-art quality. We also give an extensive literature survey on PDE-based image compression methods.
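
    For homogeneous diffusion inpainting, the reconstruction from a sparse mask is itself a linear system — the same structure that makes the tonal optimisation a least squares problem. A minimal 1D sketch (an illustration of the inpainting step only, not the paper's 2D solver or its FED-accelerated optimisation):

```python
import numpy as np

def diffusion_inpaint_1d(signal, mask):
    """Kept pixels (mask True) are fixed; the rest solve -u[i-1]+2u[i]-u[i+1]=0."""
    n = len(signal)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if mask[i]:
            A[i, i] = 1.0
            b[i] = signal[i]                 # Dirichlet condition at stored pixel
        else:
            A[i, i] = 2.0
            A[i, max(i - 1, 0)] -= 1.0       # reflecting boundaries at the ends
            A[i, min(i + 1, n - 1)] -= 1.0
    return np.linalg.solve(A, b)

f = np.linspace(0.0, 1.0, 11) ** 2
mask = np.zeros(11, dtype=bool)
mask[[0, 5, 10]] = True                      # store only three of eleven pixels
print(np.round(diffusion_inpaint_1d(f, mask), 3))  # linear fill-in between them
```

    With the Laplacian, the fill-in between kept pixels is piecewise linear; the spatial optimisation then asks which pixels to keep, and the tonal optimisation which values to store at them.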

    CONCISE: Compressed 'n' Composable Integer Set

    Bit arrays, or bitmaps, are used to significantly speed up set operations in several areas, such as data warehousing, information retrieval, and data mining, to cite a few. However, bitmaps usually use a large storage space, thus requiring compression. Nevertheless, there is a space-time tradeoff among compression schemes. The Word Aligned Hybrid (WAH) bitmap compression trades some space to allow for bitwise operations without first decompressing bitmaps. WAH has been recognized as the most efficient scheme in terms of computation time. In this paper we present CONCISE (Compressed 'n' Composable Integer Set), a new scheme that achieves significantly better performance than WAH. In particular, when compared to WAH, our algorithm is able to reduce the required memory by up to 50%, while offering similar or better computation time. Further, we show that CONCISE can be efficiently used to manipulate bitmaps representing sets of integers in lieu of well-known data structures such as arrays, lists, hashtables, and self-balancing binary search trees. Extensive experiments over synthetic data show the effectiveness of our approach.
    Comment: Preprint submitted to Information Processing Letters, 7 pages
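
    The word-aligned trick that lets WAH-style schemes run bitwise operations without decompressing can be illustrated with a toy run-length coder over 31-bit chunks: homogeneous chunks collapse into counted fill words, mixed chunks stay literal. This is a deliberately simplified illustration, not the CONCISE format itself (which packs fills and positional information more compactly):

```python
WORD = 31

def wah_compress(bits: str):
    """bits: '0'/'1' string -> list of ('lit', int) and ('fill', bit, count)."""
    chunks = [bits[i:i + WORD].ljust(WORD, "0") for i in range(0, len(bits), WORD)]
    out = []
    for w in chunks:
        if w == "0" * WORD or w == "1" * WORD:   # homogeneous chunk: fill word
            if out and out[-1][0] == "fill" and out[-1][1] == w[0]:
                out[-1] = ("fill", w[0], out[-1][2] + 1)   # extend the run
            else:
                out.append(("fill", w[0], 1))
        else:                                    # mixed chunk: literal word
            out.append(("lit", int(w, 2)))
    return out

bitmap = "1" * 93 + "0" * 62 + "0110" * 7
print(wah_compress(bitmap))   # [('fill','1',3), ('fill','0',2), ('lit', ...)]
```

    Bitwise AND/OR can then walk two such word lists directly, pairing fills with fills or literals without ever expanding the runs.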