
    Are there new models of computation? Reply to Wegner and Eberbach

    Wegner and Eberbach [Weg04b] have argued that there are fundamental limitations to Turing machines as a foundation of computability, and that these can be overcome by so-called super-Turing models such as interaction machines, the π-calculus and the $-calculus. In this paper we contest Wegner and Eberbach's claims.

    Topology Control Multi-Objective Optimisation in Wireless Sensor Networks: Connectivity-Based Range Assignment and Node Deployment

    The distinguishing characteristic that sets topology control apart from other methods aiming at energy minimisation and increased network capacity is its network-wide perspective: local choices made at the node level always serve a certain global, network-wide property, while still allowing more localised factors to be taken into account. Accordingly, our approach is a centralised computation over the available location-based data, reducing it to a set of non-homogeneous transmitting-range assignments which together yield a network-wide property, namely strong connectedness and/or biconnectedness. To this end, we propose a GA variant that is multi-morphic by design: depending on model parameters that the user can set dynamically, the algorithm acts upon either a single objective function or multiple ones, in either case exploiting the ability of GAs to find multiple optimal solutions in a single pass; it is then up to the designer to select the single solution that best meets requirements. By means of simulation, we evaluate its performance against an optimisation typifying a standard topology control technique from the literature, measured as the proportion of time the network exhibits strong connectedness. The analysis indicates that this proportion is highly sensitive to the effective maximum transmitting range, the node density, and the mobility scenario under observation. We derive an estimate of the optimal combination of these factors for the specific conditions of a wireless sensor network (WSN) application, and conclude that only the GA optimising for the biconnected components of the network achieves the stated objective of sustained connectivity throughout the observation period.
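    The abstract leaves the GA's encoding and fitness function unspecified; the following Python sketch shows one plausible shape for the single-objective case, assuming a candidate is a vector of per-node transmitting ranges and fitness combines an energy cost with a strong-connectedness penalty. networkx is used for the graph test; all function names and parameter values here are hypothetical, not taken from the paper.

    ```python
    import math
    import random
    import networkx as nx

    def build_digraph(positions, ranges):
        """Directed communication graph: edge i -> j iff node j lies within
        node i's assigned transmitting range."""
        g = nx.DiGraph()
        g.add_nodes_from(range(len(positions)))
        for i, (xi, yi) in enumerate(positions):
            for j, (xj, yj) in enumerate(positions):
                if i != j and math.hypot(xi - xj, yi - yj) <= ranges[i]:
                    g.add_edge(i, j)
        return g

    def fitness(positions, ranges, alpha=2.0, penalty=1e6):
        """Energy cost (sum of range^alpha) plus a large penalty whenever
        the network fails to be strongly connected."""
        energy = sum(r ** alpha for r in ranges)
        connected = nx.is_strongly_connected(build_digraph(positions, ranges))
        return energy + (0.0 if connected else penalty)

    def next_generation(population, positions, r_max, sigma=2.0):
        """One GA step: binary tournaments, uniform crossover, Gaussian
        mutation clipped to [0, r_max]."""
        def tournament():
            a, b = random.sample(population, 2)
            return min(a, b, key=lambda ind: fitness(positions, ind))
        children = []
        for _ in population:
            p1, p2 = tournament(), tournament()
            child = [min(r_max, max(0.0, random.choice(pair) + random.gauss(0, sigma)))
                     for pair in zip(p1, p2)]
            children.append(child)
        return children
    ```

    A biconnectedness variant would presumably swap the strong-connectedness test for a check on the biconnected components of the symmetric (undirected) subgraph, and the multi-objective mode would keep the energy and connectivity terms separate rather than summing them.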

    New Insights on Learning Rules for Hopfield Networks: Memory and Objective Function Minimisation

    Hopfield neural networks are a possible basis for modelling associative memory in living organisms. After summarising previous studies in the field, we take a new look at learning rules, exhibiting them as descent-type algorithms for various cost functions. We also propose several new cost functions suitable for learning. We discuss the role of biases (the external inputs) in the learning process in Hopfield networks. Furthermore, we apply Newton's method for learning memories, and experimentally compare the performance of various learning rules. Finally, to add to the debate on whether allowing connections of a neuron to itself enhances memory capacity, we numerically investigate the effects of self-coupling. Keywords: Hopfield networks, associative memory, content-addressable memory, learning rules, gradient descent, attractor networks. Comment: 8 pages, IEEE Xplore, 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow.
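    For concreteness, here is a minimal numpy sketch of the baseline against which such rules are compared: the classic Hebbian rule with an optional self-coupling flag and bias terms, plus asynchronous recall that descends the usual Hopfield energy. The descent-type and Newton-based rules studied in the paper are not reproduced here; this is only the textbook setup.

    ```python
    import numpy as np

    def hebbian_weights(patterns, self_coupling=False):
        """Classic Hebbian rule: W = (1/N) * sum over the stored +/-1
        patterns of their outer products; the diagonal is zeroed unless
        self-coupling is allowed."""
        n = patterns.shape[1]
        w = patterns.T @ patterns / n
        if not self_coupling:
            np.fill_diagonal(w, 0.0)
        return w

    def recall(w, state, bias=None, sweeps=20):
        """Asynchronous sign updates, which descend the Hopfield energy
        E(s) = -0.5 * s^T W s - b^T s."""
        b = np.zeros(len(state)) if bias is None else bias
        s = state.copy()
        for _ in range(sweeps):
            for i in np.random.permutation(len(s)):
                s[i] = 1 if w[i] @ s + b[i] >= 0 else -1
        return s

    # Store two 4-neuron patterns, then recall from a corrupted probe.
    pats = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]])
    w = hebbian_weights(pats)
    print(recall(w, np.array([1, -1, 1, 1])))
    ```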

    Consciousness in Cognitive Architectures. A Principled Analysis of RCS, Soar and ACT-R

    This report analyses the applicability of the principles of consciousness developed in the ASys project to three of the most relevant cognitive architectures. This is done in relation to their applicability to building integrated control systems, and by studying their support for general mechanisms of real-time consciousness. To analyse these architectures, the ASys Framework is employed: a conceptual framework based on an extension of General Systems Theory (GST) for cognitive autonomous systems. General qualitative evaluation criteria for cognitive architectures are established based upon: a) requirements for a cognitive architecture, b) the theoretical framework based on the GST, and c) core design principles for integrated cognitive conscious control systems.

    Risk-Averse Model Predictive Operation Control of Islanded Microgrids

    In this paper we present a risk-averse model predictive control (MPC) scheme for the operation of islanded microgrids with a very high share of renewable energy sources. The proposed scheme mitigates the effect of errors in the determination of the probability distribution of renewable infeed and load. This allows the use of less complex and less accurate forecasting methods, and the formulation of low-dimensional scenario-based optimisation problems suitable for control applications. Additionally, the designer may trade performance for safety by interpolating between the conventional stochastic and worst-case MPC formulations. The presented risk-averse MPC problem is formulated as a mixed-integer quadratically constrained quadratic problem, and its favourable characteristics are demonstrated in a case study, including a sensitivity analysis that illustrates robustness to load and renewable power prediction errors.
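    The paper's actual formulation is a mixed-integer QCQP; purely to illustrate the interpolation idea at the objective level, here is a hypothetical numpy sketch in which a parameter theta blends the expected and worst-case scenario costs (function and parameter names are invented):

    ```python
    import numpy as np

    def risk_averse_cost(scenario_costs, probs, theta):
        """Blend of expected and worst-case cost over load/renewable
        scenarios. theta = 0 recovers the stochastic MPC objective,
        theta = 1 the worst-case one; intermediate values trade
        performance for safety."""
        c = np.asarray(scenario_costs, dtype=float)
        p = np.asarray(probs, dtype=float)
        return (1.0 - theta) * float(p @ c) + theta * float(c.max())

    # e.g. three scenarios with estimated probabilities:
    # risk_averse_cost([12.0, 15.5, 40.0], [0.5, 0.4, 0.1], theta=0.3)
    ```

    Because the worst-case term ignores the estimated probabilities entirely, raising theta reduces sensitivity to errors in the forecast distribution, which is the robustness property the case study examines.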

    Computational Multiscale Methods

    Many physical processes in materials science or geophysics are characterized by inherently complex interactions across a large range of non-separable scales in space and time. Resolving all features on all scales in a computer simulation easily exceeds today's computing resources by multiple orders of magnitude. The observation and prediction of physical phenomena from multiscale models hence requires insightful numerical multiscale techniques that adaptively select relevant scales and effectively represent unresolved ones. This workshop advanced the development of such methods and the mathematics behind them, so that the reliable and efficient numerical simulation of some challenging multiscale problems eventually becomes feasible in high-performance computing environments.

    Computation of Electromagnetic Fields Scattered From Objects With Uncertain Shapes Using Multilevel Monte Carlo Method

    Computational tools for characterizing electromagnetic scattering from objects with uncertain shapes are needed in various applications ranging from remote sensing at microwave frequencies to Raman spectroscopy at optical frequencies. Often, such computational tools use the Monte Carlo (MC) method to sample a parametric space describing geometric uncertainties. For each sample, which corresponds to a realization of the geometry, a deterministic electromagnetic solver computes the scattered fields. However, for an accurate statistical characterization, the number of MC samples has to be large. In this work, to address this challenge, the continuation multilevel Monte Carlo (CMLMC) method is used together with a surface integral equation solver. The CMLMC method optimally balances statistical errors due to sampling of the parametric space and numerical errors due to the discretization of the geometry, using a hierarchy of discretizations from coarse to fine. The number of realizations on finer discretizations can be kept low, with most samples computed on coarser discretizations to minimize computational cost. Consequently, the total execution time is significantly reduced in comparison to the standard MC scheme. Comment: 25 pages, 10 figures.
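    The continuation machinery (adaptively tightening tolerances and sample counts) is the paper's contribution; the plain telescoping MLMC estimator it builds on can be sketched in a few lines of Python. Here `level_sampler` stands in for paired solver runs on consecutive mesh levels; the name, interface, and sample counts are assumptions for illustration.

    ```python
    import numpy as np

    def mlmc_estimate(level_sampler, samples_per_level):
        """Telescoping MLMC estimator E[Q_L] = sum_l E[Y_l], where
        Y_0 = Q_0 and Y_l = Q_l - Q_{l-1} couples two consecutive
        discretization levels. Most samples sit on the cheap coarse
        levels; only a few fine-level corrections are needed."""
        return sum(np.mean(level_sampler(l, n))
                   for l, n in enumerate(samples_per_level))

    # Hypothetical usage: level_sampler(l, n) would run the surface
    # integral equation solver on n random geometry realizations at mesh
    # levels l and l - 1 and return the n differences of the
    # scattered-field quantity of interest.
    # estimate = mlmc_estimate(level_sampler, [4096, 512, 64, 8])
    ```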

    07101 Abstracts Collection -- Quantitative Aspects of Embedded Systems

    From March 5 to March 9, 2007, the Dagstuhl Seminar 07101 "Quantitative Aspects of Embedded Systems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Natural Language Syntax Complies with the Free-Energy Principle

    Natural language syntax yields an unbounded array of hierarchically structured expressions. We claim that these are used in the service of active inference in accord with the free-energy principle (FEP). While conceptual advances alongside modelling and simulation work have attempted to connect speech segmentation and linguistic communication with the FEP, we extend this program to the underlying computations responsible for generating syntactic objects. We argue that recently proposed principles of economy in language design - such as "minimal search" criteria from theoretical syntax - adhere to the FEP. This affords a greater degree of explanatory power to the FEP - with respect to higher language functions - and offers linguistics a grounding in first principles with respect to computability. We show how both tree-geometric depth and a Kolmogorov complexity estimate (recruiting a Lempel-Ziv compression algorithm) can be used to accurately predict legal operations on syntactic workspaces, directly in line with formulations of variational free energy minimization. This is used to motivate a general principle of language design that we term Turing-Chomsky Compression (TCC). We use TCC to align concerns of linguists with the normative account of self-organization furnished by the FEP, by marshalling evidence from theoretical linguistics and psycholinguistics to ground core principles of efficient syntactic computation within active inference
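    As a rough illustration of the complexity estimate, a standard LZ77-based compressor (zlib's DEFLATE) can serve as a stand-in for the Lempel-Ziv algorithm the authors recruit. The bracketed encodings of syntactic workspaces below are invented for the sketch, not taken from the paper.

    ```python
    import zlib

    def lz_complexity(s: str) -> float:
        """Crude Kolmogorov-complexity proxy: compressed length of the
        string under DEFLATE (an LZ77-based scheme), normalised by its
        raw length."""
        raw = s.encode("utf-8")
        return len(zlib.compress(raw, 9)) / len(raw)

    # Invented bracketed encodings of two candidate derivations of the
    # same sentence; the Turing-Chomsky Compression idea predicts that
    # the legal operation yields the lower complexity estimate.
    print(lz_complexity("[[the dog] [bit [the man]]]"))
    print(lz_complexity("[[[the] [dog]] [[bit] [[the] [man]]]]"))
    ```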