
    The Thermodynamics of Network Coding, and an Algorithmic Refinement of the Principle of Maximum Entropy

    The principle of maximum entropy (Maxent) is often used to obtain prior probability distributions, as a method of obtaining a Gibbs measure under some restriction that gives the probability that a system will be in a certain state relative to the other elements of the distribution. Because classical entropy-based Maxent collapses cases, confounding all distinct degrees of randomness and pseudo-randomness, here we take into consideration the generative mechanism of the systems in the ensemble in order to separate objects that comply with the principle under some restriction, and whose entropy is maximal but which may be generated recursively, from those that are actually algorithmically random. This offers a refinement of classical Maxent. We take advantage of a causal algorithmic calculus to derive a thermodynamic-like result based on how difficult it is to reprogram a computer code. Using the distinction between computable and algorithmic randomness, we quantify the cost in information loss associated with reprogramming. To illustrate this, we apply the algorithmic refinement of Maxent to graphs and introduce a Maximal Algorithmic Randomness Preferential Attachment (MARPA) algorithm, a generalisation of previous approaches. We discuss the practical implications of evaluating network randomness. Our analysis provides the insight that the reprogrammability asymmetry appears to originate from a non-monotonic relationship to algorithmic probability, and it motivates further study of the origin and consequences of these asymmetries, of reprogrammability, and of computation.
    Comment: 30 pages
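
    As a rough sketch of a MARPA-style step (an illustrative assumption, not the authors' implementation), the Python below uses zlib compression length as a crude stand-in for the algorithmic-complexity estimators the refinement relies on, and greedily attaches the edge whose addition maximises that estimate:

```python
# Hypothetical sketch of a "maximal algorithmic randomness" edge-attachment step.
# zlib compression length is only a rough proxy for algorithmic complexity; the
# paper's estimators (e.g. algorithmic probability / BDM) are not reproduced here.
import itertools
import zlib

import networkx as nx


def complexity_estimate(graph, nodes):
    """Approximate the complexity of a graph by compressing its adjacency bits."""
    bits = "".join(
        "1" if graph.has_edge(u, v) else "0"
        for u, v in itertools.combinations(nodes, 2)
    )
    return len(zlib.compress(bits.encode()))


def marpa_like_step(graph):
    """Add the single non-edge whose insertion maximises the complexity estimate."""
    nodes = sorted(graph.nodes())
    best_edge, best_score = None, -1
    for u, v in itertools.combinations(nodes, 2):
        if graph.has_edge(u, v):
            continue
        graph.add_edge(u, v)                       # try the candidate edge
        score = complexity_estimate(graph, nodes)
        graph.remove_edge(u, v)
        if score > best_score:
            best_edge, best_score = (u, v), score
    if best_edge is not None:
        graph.add_edge(*best_edge)
    return graph


if __name__ == "__main__":
    g = nx.path_graph(20)      # start from a simple, highly compressible graph
    for _ in range(5):
        marpa_like_step(g)     # each step attaches the most "randomness-increasing" edge
    print(sorted(g.edges()))
```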

    Distance-regular graphs

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN' [Brouwer, A.E., Cohen, A.M., Neumaier, A., Distance-Regular Graphs, Springer-Verlag, Berlin, 1989] was written.
    Comment: 156 pages
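
    As a concrete illustration of the objects being surveyed, the sketch below (assuming networkx is available) tests distance-regularity by checking that the intersection numbers c_i, a_i, b_i depend only on the distance i, using the Petersen graph as an example:

```python
# Minimal distance-regularity check: for every ordered pair (u, v) at distance i,
# count v's neighbours at distance i-1, i, i+1 from u and verify the counts
# (c_i, a_i, b_i) depend only on i.  Cross-checked against networkx's built-in test.
import networkx as nx


def intersection_numbers(graph):
    """Return {i: (c_i, a_i, b_i)}, or None if the graph is not distance-regular."""
    dist = dict(nx.all_pairs_shortest_path_length(graph))
    numbers = {}
    for u in graph:
        for v in graph:
            i = dist[u][v]
            c = sum(1 for w in graph[v] if dist[u][w] == i - 1)
            a = sum(1 for w in graph[v] if dist[u][w] == i)
            b = sum(1 for w in graph[v] if dist[u][w] == i + 1)
            if numbers.setdefault(i, (c, a, b)) != (c, a, b):
                return None
    return numbers


if __name__ == "__main__":
    petersen = nx.petersen_graph()
    print(intersection_numbers(petersen))    # expect {0: (0, 0, 3), 1: (1, 0, 2), 2: (1, 2, 0)}
    print(nx.is_distance_regular(petersen))  # expect True
```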

    Foundational principles for large scale inference: Illustrations through correlation mining

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far smaller than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity, however, has received relatively less attention, especially in the setting where the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime, where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime, where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high-dimensional asymptotic regime, where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the last applies to exa-scale data dimensions. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
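
    A toy numerical illustration of the sample-starved regime described above (our own example, not the paper's framework): with n fixed and p growing, the largest spurious sample correlation under an independence null keeps increasing, which is why correlation-mining thresholds must account for sample complexity:

```python
# Fixed sample size n, growing dimension p: even when every true correlation is
# zero, the maximum sample correlation grows with p, so a fixed threshold will
# eventually produce false discoveries.
import numpy as np

rng = np.random.default_rng(0)
n = 20                                    # fixed number of samples (statistical replicates)
for p in (10, 100, 1000, 2000):           # growing number of observed variables
    x = rng.standard_normal((n, p))       # independent columns: all true correlations are 0
    r = np.corrcoef(x, rowvar=False)      # p-by-p sample correlation matrix
    np.fill_diagonal(r, 0.0)
    print(f"p = {p:4d}   max |sample correlation| = {np.abs(r).max():.3f}")
```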

    Experimental Synthetic Aperture Radar with Dynamic Metasurfaces

    We investigate the use of a dynamic metasurface as the transmitting antenna of a synthetic aperture radar (SAR) imaging system. The dynamic metasurface consists of a one-dimensional microstrip waveguide with complementary electric resonator (cELC) elements patterned into the upper conductor. Integrated into each cELC are two diodes that can be used to shift the cELC resonance out of band with an applied voltage. The aperture is designed to operate at K-band frequencies (17.5 to 20.3 GHz), with a bandwidth of 2.8 GHz. We experimentally demonstrate imaging with a fabricated metasurface aperture using existing SAR modalities, showing image quality comparable to that of traditional antennas. The agility of this aperture allows it to operate in spotlight and stripmap SAR modes, as well as in a third modality inspired by computational imaging strategies. We describe its operation in detail, demonstrate high-quality imaging in both 2D and 3D, and examine various trade-offs governing the integration of dynamic metasurfaces in future SAR imaging platforms.
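
    For readers unfamiliar with SAR focusing, the toy backprojection sketch below (an illustrative example with an assumed geometry and a single-frequency signal model, not the reconstruction code used with the metasurface aperture) simulates two point scatterers observed along a synthetic aperture at a K-band carrier and focuses them by coherent summation:

```python
# Toy single-frequency SAR backprojection: simulate round-trip phase histories for
# point scatterers along a 1 m aperture, then form an image by phase-compensated
# coherent summation over an (x, y) grid.  The image should peak near the scatterers.
import numpy as np

c = 3e8
f0 = 19.0e9                 # assumed carrier inside the 17.5-20.3 GHz band
k = 2 * np.pi * f0 / c      # wavenumber

aperture_x = np.linspace(-0.5, 0.5, 201)            # antenna positions along the track (m)
targets = np.array([[0.1, 1.0], [-0.2, 1.5]])       # (x, y) point scatterers (m)

echoes = np.zeros(len(aperture_x), dtype=complex)
for tx, ty in targets:
    r = np.hypot(aperture_x - tx, ty)
    echoes += np.exp(-2j * k * r)                    # round-trip phase per aperture position

xs = np.linspace(-0.5, 0.5, 101)
ys = np.linspace(0.5, 2.0, 151)
image = np.zeros((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        r = np.hypot(aperture_x - x, y)
        image[i, j] = np.abs(np.sum(echoes * np.exp(2j * k * r)))   # matched phase, coherent sum

iy, ix = np.unravel_index(image.argmax(), image.shape)
print("brightest pixel near (x, y) =", xs[ix], ys[iy])   # expect roughly the nearer scatterer
```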

    Configurable pseudo noise radar imaging system enabling synchronous MIMO channel extension

    In this article, we propose an evolved system design approach to ultra-wideband (UWB) radar based on pseudo-random noise (PRN) sequences, the key features of which are its user-adaptability to the demands of the desired microwave imaging application and its multichannel scalability. With a view to providing a fully synchronized multichannel radar imaging system for short-range imaging applications such as mine detection, non-destructive testing (NDT) or medical imaging, the advanced system architecture is presented with a special focus on the implemented synchronization mechanism and clocking scheme. The core of the targeted adaptivity is provided by means of hardware, such as variable clock generators and dividers as well as programmable PRN generators. In addition to the adaptive hardware, the customization of the signal processing is feasible within an extensive open-source framework using the Red Pitaya® data acquisition platform. A system benchmark in terms of signal-to-noise ratio (SNR), jitter, and synchronization stability is conducted to determine the achievable performance of the prototype system put into practice. Furthermore, an outlook on planned future development and performance improvements is provided.
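
    The sketch below (a toy signal model, not the prototype's firmware or the Red Pitaya framework mentioned above) illustrates the core PRN-radar operation: a maximal-length sequence is transmitted, and the channel impulse response is recovered by circular cross-correlation of the received signal with the transmitted sequence:

```python
# PRN radar in miniature: an LFSR-generated m-sequence is "transmitted" through a
# two-target channel, and a matched filter (circular cross-correlation) compresses
# the returns back into sharp peaks at the target delays.
import numpy as np


def m_sequence(length=7):
    """+/-1 maximal-length sequence (period 2**length - 1) from a simple Fibonacci LFSR."""
    state = [1] * length
    seq = []
    for _ in range(2 ** length - 1):
        seq.append(state[-1])
        feedback = state[-1] ^ state[-2]   # for length 7, these taps give a maximal-length sequence
        state = [feedback] + state[:-1]
    return 2 * np.array(seq) - 1


prn = m_sequence()                             # 127-chip transmit sequence
h = np.zeros(len(prn))
h[[5, 40]] = [1.0, 0.4]                        # two targets at chip delays 5 and 40

rx = np.real(np.fft.ifft(np.fft.fft(prn) * np.fft.fft(h)))          # circular convolution
rx += 0.1 * np.random.default_rng(1).standard_normal(len(rx))       # receiver noise

est = np.real(np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(prn)))) / len(prn)
print("strongest delays:", np.sort(np.argsort(est)[-2:]))            # expect [5 40]
```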

    High Lundquist Number Simulations of Parker's Model of Coronal Heating: Scaling and Current Sheet Statistics Using Heterogeneous Computing Architectures

    Parker's model [Parker, Astrophys. J., 174, 499 (1972)] is one of the most discussed mechanisms for coronal heating and has generated much debate. We have recently obtained new scaling results for a 2D version of this problem suggesting that the heating rate becomes independent of resistivity in a statistical steady state [Ng and Bhattacharjee, Astrophys. J., 675, 899 (2008)]. Our numerical work has now been extended to 3D using high-resolution MHD simulations. Random photospheric footpoint motion is applied for a time much longer than the correlation time of the motion to obtain converged average coronal heating rates. Simulations are done for different values of the Lundquist number to determine the scaling. In the high-Lundquist-number limit (S > 1000), the coronal heating rate obtained is consistent with a trend that is independent of the Lundquist number, as predicted by previous analysis and 2D simulations. We present a scaling analysis showing that when the dissipation time is comparable to or larger than the correlation time of the random footpoint motion, the heating rate tends to become independent of the Lundquist number, and the magnetic energy production is also reduced significantly. We also present a comprehensive reprogramming of our simulation code to run on NVIDIA graphics processing units using the Compute Unified Device Architecture (CUDA) and report code performance on several large-scale heterogeneous machines.
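
    For reference, the scaling parameter varied in these runs is the Lundquist number; its standard definition (a textbook relation, not a result specific to this work) compares the resistive diffusion time to the Alfvén crossing time:

```latex
% Standard definition of the Lundquist number S (not specific to this paper):
% L is a characteristic length, v_A the Alfven speed, and \eta the magnetic diffusivity.
\[
  S \;=\; \frac{\tau_\eta}{\tau_A}
    \;=\; \frac{L^2/\eta}{L/v_A}
    \;=\; \frac{L\, v_A}{\eta}
\]
```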

    EIT Reconstruction Algorithms: Pitfalls, Challenges and Recent Developments

    We review developments, issues and challenges in Electrical Impedance Tomography (EIT) for the 4th Workshop on Biomedical Applications of EIT, Manchester 2003. We focus on the necessity of three-dimensional data collection and reconstruction, on efficient solution of the forward problem, and on present and future reconstruction algorithms. We also suggest common pitfalls or "inverse crimes" to avoid.
    Comment: A review paper for the 4th Workshop on Biomedical Applications of EIT, Manchester, UK, 2003
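
    As a minimal example of the linearized reconstruction algorithms such a review covers, the sketch below performs a one-step Tikhonov-regularized update, delta_sigma = (J^T J + lambda^2 I)^(-1) J^T delta_v, with a random matrix standing in for the forward-model Jacobian (an assumption made purely for illustration):

```python
# One-step linearized EIT reconstruction with Tikhonov regularization.  The Jacobian
# here is random; a real implementation would compute it from a finite-element
# forward model of the electrode/conductivity problem.
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_elem = 208, 576                   # e.g. a 16-electrode protocol, coarse mesh
J = rng.standard_normal((n_meas, n_elem))   # stand-in for the sensitivity (Jacobian) matrix

true_change = np.zeros(n_elem)
true_change[100:110] = 1.0                  # localized conductivity perturbation
delta_v = J @ true_change + 0.01 * rng.standard_normal(n_meas)   # noisy voltage changes

lam = 1.0                                   # regularization strength: resolution vs. noise trade-off
delta_sigma = np.linalg.solve(J.T @ J + lam**2 * np.eye(n_elem), J.T @ delta_v)

# The largest reconstructed entries should fall inside the true perturbation (100-109).
print("largest reconstructed elements:", np.sort(np.argsort(np.abs(delta_sigma))[-5:]))
```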

    Origin of negative differential resistance in molecular junctions of Rose Bengal

    Negative differential resistance (NDR) is tuned at junctions of an electronically different dimer and trimer of Rose Bengal on an atomically flat gold (111) surface. The isolated molecule did not show any NDR, but it could reproducibly be induced to show double NDR with a large peak-to-valley ratio (1.8-3.1) at room temperature by charging its neighbor with an electrical pulse. In some sections of the junction, applying a pulse could destroy the phenomenon, and it could be regenerated by STM manipulation of the molecules. The NDR was also independent of polaronic nature. It was possible to write bits 1 and 0 for the cationic NDR (in the dimer) and 00, 01, 10 and 11 for the di-anionic NDR (in the trimer), which generated a 2/4-bit memory in an atomic-scale junction, showing the importance of junction electronics for the future of moletronics.
    Comment: 14 pages, 3 figures, communicate