
    Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability

    Previously referred to as 'miraculous' in the scientific literature because of its powerful properties and its wide application as an optimal solution to the problem of induction/inference, (approximations to) Algorithmic Probability (AP) and the associated Universal Distribution are (or should be) of the greatest importance in science. Here we investigate the emergence, the rates of emergence and convergence, and the Coding-theorem-like behaviour of AP in Turing-subuniversal models of computation. We investigate empirical distributions of computing models in the Chomsky hierarchy. We introduce measures of algorithmic probability and algorithmic complexity based upon resource-bounded computation, in contrast to the previously thoroughly investigated distributions produced from the output distribution of Turing machines. This approach allows for numerical approximations to algorithmic (Kolmogorov-Chaitin) complexity-based estimations at each level of a computational hierarchy. We demonstrate that all these estimations are correlated in rank and that they converge both in rank and in value as a function of computational power, despite fundamental differences between the computational models. In the context of natural processes that operate below the Turing-universal level because of finite resources and physical degradation, the investigation of natural biases stemming from algorithmic rules may shed light on the distribution of outcomes. We show that up to 60% of the simplicity/complexity bias in distributions produced even by the weakest of the computational models can be accounted for by Algorithmic Probability in its approximation to the Universal Distribution. (Comment: 27 pages main text, 39 pages including supplement. Online complexity calculator: http://complexitycalculator.com)
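    For context, the Coding theorem that the title alludes to relates the Universal Distribution to Kolmogorov-Chaitin complexity. A standard statement (background, not quoted from the paper) is

        m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}, \qquad K(x) = -\log_2 m(x) + O(1),

    where U is a prefix-free universal Turing machine, p ranges over the programs that output x, m is the Universal Distribution, and K is prefix Kolmogorov complexity. The paper's resource-bounded measures replace U with sub-universal models at each level of the Chomsky hierarchy.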

    Identification of the neighborhood and CA rules from spatio-temporal CA patterns

    Extracting the rules from spatio-temporal patterns generated by the evolution of cellular automata (CA) usually produces a CA rule table without providing a clear understanding of the structure of the neighborhood or of the CA rule. In this paper, a new identification method is proposed, based on a modified orthogonal least squares (CA-OLS) algorithm that detects the neighborhood structure and the underlying polynomial form of the CA rules. The Quine-McCluskey method is then applied to extract minimum Boolean expressions from the polynomials. Spatio-temporal patterns produced by the evolution of 1D, 2D, and higher-dimensional binary CAs are used to illustrate the new algorithm, and simulation results show that the CA-OLS algorithm can quickly select both the correct neighborhood structure and the corresponding rule.
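    As a minimal illustration of the identification problem (not the CA-OLS algorithm itself, which fits a polynomial model and prunes terms by orthogonal least squares), the following sketch recovers a 1D binary CA rule table by direct enumeration of observed neighborhood transitions; all names and the periodic-boundary assumption are illustrative:

        import numpy as np

        def recover_rule_table(pattern, radius=1):
            """Recover a 1D binary CA rule table from a spatio-temporal pattern.

            pattern: 2D array (time x cells); periodic boundaries are assumed.
            Returns a dict mapping neighborhood tuples to the observed next state.
            """
            T, N = pattern.shape
            table = {}
            for t in range(T - 1):
                for i in range(N):
                    hood = tuple(pattern[t, (i + k) % N] for k in range(-radius, radius + 1))
                    nxt = pattern[t + 1, i]
                    if hood in table and table[hood] != nxt:
                        raise ValueError("inconsistent transitions: not a radius-%d CA" % radius)
                    table[hood] = nxt
            return table

        # Example: evolve elementary rule 110, then recover its table.
        rng = np.random.default_rng(0)
        state = rng.integers(0, 2, size=64)
        rows = [state]
        for _ in range(63):
            left, right = np.roll(state, 1), np.roll(state, -1)
            idx = 4 * left + 2 * state + right   # neighborhood as a 3-bit index
            state = (110 >> idx) & 1             # Wolfram rule-110 lookup
            rows.append(state)
        table = recover_rule_table(np.array(rows))
        print(len(table), "neighborhood configurations observed")

    Direct enumeration only tabulates transitions; the paper's contribution is that CA-OLS additionally identifies which cells belong to the neighborhood and yields a compact polynomial, which Quine-McCluskey then reduces to a minimal Boolean expression.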

    Cellular Automata Model of Macroevolution

    In this paper I describe a cellular automaton model of a multi-species ecosystem, suitable for the study of emergent properties of macroevolution. Unlike the majority of ecological models, the number of coexisting species is not fixed. Starting from one common ancestor, species appear by "mutations" of existing species and then survive or go extinct depending on the balance of local ecological interactions. Monte-Carlo numerical simulations show that this model is able to qualitatively reproduce phenomena that have been observed in other models and in nature. (Comment: 8 pages, 3 figures, Fourteenth National Conference on Application of Mathematics in Biology and Medicine, Leszno 2008, Poland)
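    A minimal sketch in the spirit of the abstract, not the author's exact model: species live on a 2D lattice, new species arise by rare mutation, and a site's survival depends on randomly drawn pairwise interactions with its neighborhood. The lattice size, thresholds, and interaction distribution below are all assumptions:

        import numpy as np

        rng = np.random.default_rng(1)
        L, STEPS, P_MUT = 32, 20000, 0.002
        grid = np.ones((L, L), dtype=int)   # one common ancestor, species 1
        interactions = {}                   # lazily drawn pairwise strengths

        def interaction(a, b):
            # Fixed random interaction in [-1, 1] per species pair (an assumption).
            key = (min(a, b), max(a, b))
            if key not in interactions:
                interactions[key] = rng.uniform(-1, 1)
            return interactions[key]

        next_species = 2
        for _ in range(STEPS):
            x, y = rng.integers(L, size=2)  # Monte-Carlo: pick one random site
            s = grid[x, y]
            neigh = [grid[(x + dx) % L, (y + dy) % L]
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            fitness = sum(interaction(s, n) for n in neigh if n != s)
            if fitness < -1.0:              # local extinction threshold (assumed)
                grid[x, y] = rng.choice(neigh)  # site taken over by a neighbour
            if rng.random() < P_MUT:        # rare mutation founds a new species
                grid[x, y] = next_species
                next_species += 1
        print("coexisting species:", len(np.unique(grid)))

    As in the abstract's setup, the species count is an outcome of the dynamics rather than a fixed parameter: mutations add species, and unfavourable local interactions remove them.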

    Systems of Preventive Cardiological Monitoring: Models, Algorithms, First Results, and Perspectives

    The results of work on creating methods, models, and computational algorithms for remote preventive health-monitoring systems are presented, in particular for preventive cardiac monitoring. The main attention is paid to the models and computational algorithms of preventive monitoring and to the interaction of the computing kernels of a remote cluster with portable ECG recorders, implantable devices, and sensors. The computational kernels of preventive monitoring are a set of several thousand interacting automata, analogous to Turing machines, that recognize the characteristic features and evolution of hidden predictors of atrial fibrillation (AF), ventricular tachycardia or fibrillation (VT-VF), sudden cardiac death, and heart failure (HF). The time to reach the boundary of a heart event (HE) is estimated from evolution equations for the ECG multi-trajectories determined by the recognizing automata. Ultimately, the computational kernels reconstruct a forecast ECG and give time estimates for reaching it. The cloud computing cluster supports low-cost ultra-portable ECG recorders and does not limit the use of more complex patient telemetry with wearable and implantable devices: CRT and ICD, the CardioMEMS HF System, and so on.
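    To make the idea of "recognizing automata" scanning ECG-derived features concrete, here is a toy finite-state recognizer over a stream of RR intervals. It is a sketch under assumptions (thresholds, run length, and the irregularity criterion are invented for illustration), not the paper's kernels and not a clinical detector:

        def rr_irregularity_automaton(rr_intervals, rel_tol=0.15, run_len=6):
            """Accept when run_len consecutive RR intervals each differ from
            their predecessor by more than rel_tol (a crude irregularity cue)."""
            state = 0  # count of consecutive irregular beats (automaton state)
            for prev, cur in zip(rr_intervals, rr_intervals[1:]):
                irregular = abs(cur - prev) > rel_tol * prev
                state = state + 1 if irregular else 0
                if state >= run_len:
                    return True            # accepting state reached
            return False

        print(rr_irregularity_automaton([0.8, 0.81, 0.79, 0.8, 0.82, 0.8]))      # False
        print(rr_irregularity_automaton([0.8, 1.1, 0.7, 1.0, 0.6, 1.05, 0.7]))   # True

    The system described in the abstract runs several thousand such recognizers in parallel on a remote cluster, each tracking a different candidate predictor.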

    An Ensemble Approach to Space-Time Interpolation

    There has been much excitement and activity in recent years related to the relatively sudden availability of earth-related data and the computational capabilities to visualize and analyze these data. Despite the increased ability to collect and store large volumes of data, few individual data sets exist that provide both the requisite spatial and temporal observational frequency for many urban and/or regional-scale applications. The motivating view of this paper, however, is that the relative temporal richness of one data set can be leveraged with the relative spatial richness of another to fill in the gaps. We also note that any single interpolation technique has advantages and disadvantages. Particularly when focusing on the spatial or on the temporal dimension, this means that different techniques are more appropriate than others for specific types of data. We therefore propose a space-time interpolation approach whereby two interpolation methods – one for the temporal and one for the spatial dimension – are used in tandem in order to maximize the quality of the result. We call our ensemble approach the Space-Time Interpolation Environment (STIE). The primary steps within this environment include a spatial interpolator, a time-step processor, and a calibration step that enforces phenomenon-related behavioral constraints. The specific interpolation techniques used within the STIE can be chosen on the basis of suitability for the data and application at hand. In the current paper, we describe STIE conceptually, including the structure of the data inputs and output, details of the primary steps (the STIE processors), and the mechanism for coordinating the data and the processors. We then describe a case study focusing on urban land cover in Phoenix, Arizona. Our empirical results show that STIE was effective as a space-time interpolator for urban land cover, with an accuracy of 85.2%, and furthermore that it was more effective than a single technique.
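    A minimal sketch of the ensemble idea, not the STIE implementation: interpolate temporally rich point series through time, then spread the resulting values over space with a simple spatial interpolator. The choice of linear temporal interpolation and inverse-distance weighting, and all data and names below, are assumptions for illustration:

        import numpy as np

        def temporal_step(times_obs, values_obs, t):
            # Time-step processor: linear interpolation to the target time t.
            return np.interp(t, times_obs, values_obs)

        def spatial_step(pts_obs, vals_obs, pts_query, power=2.0):
            # Spatial interpolator: inverse-distance weighting (IDW).
            d = np.linalg.norm(pts_query[:, None, :] - pts_obs[None, :, :], axis=2)
            w = 1.0 / np.maximum(d, 1e-9) ** power
            return (w * vals_obs).sum(axis=1) / w.sum(axis=1)

        # Stations with dense time series; query a sparse grid at time t = 2.5.
        stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        series = {0: ([0, 1, 2, 3], [10, 12, 13, 15]),
                  1: ([0, 2, 3],    [8, 9, 11]),
                  2: ([1, 3],       [20, 24])}
        t = 2.5
        vals_t = np.array([temporal_step(*series[i], t) for i in range(3)])
        grid = np.array([[0.5, 0.5], [0.9, 0.1]])
        print(spatial_step(stations, vals_t, grid))

    In the STIE design described above, a third, calibration step would then post-process such interpolated surfaces to enforce phenomenon-related behavioral constraints, and each processor can be swapped for a technique better suited to the data at hand.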
