    Consciousness in Cognitive Architectures. A Principled Analysis of RCS, Soar and ACT-R

    This report analyses the applicability of the principles of consciousness developed in the ASys project to three of the most relevant cognitive architectures. This is done in relation to their applicability to building integrated control systems and their support for general mechanisms of real-time consciousness. To analyse these architectures the ASys Framework is employed: a conceptual framework based on an extension of the General Systems Theory (GST) to cognitive autonomous systems. General qualitative evaluation criteria for cognitive architectures are established based upon: a) requirements for a cognitive architecture, b) the theoretical framework based on the GST and c) core design principles for integrated cognitive conscious control systems.

    Nonlinear time-series analysis revisited

    In 1980 and 1981, two pioneering papers laid the foundation for what became known as nonlinear time-series analysis: the analysis of observed data---typically univariate---via dynamical systems theory. Based on the concept of state-space reconstruction, this set of methods allows us to compute characteristic quantities such as Lyapunov exponents and fractal dimensions, to predict the future course of the time series, and even to reconstruct the equations of motion in some cases. In practice, however, there are a number of issues that restrict the power of this approach: whether the signal accurately and thoroughly samples the dynamics, for instance, and whether it contains noise. Moreover, the numerical algorithms that we use to instantiate these ideas are not perfect; they involve approximations, scale parameters, and finite-precision arithmetic, among other things. Even so, nonlinear time-series analysis has been used to great advantage on thousands of real and synthetic data sets from a wide variety of systems ranging from roulette wheels to lasers to the human heart. Even in cases where the data do not meet the mathematical or algorithmic requirements to assure full topological conjugacy, the results of nonlinear time-series analysis can be helpful in understanding, characterizing, and predicting dynamical systems
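
    As an illustration of the state-space reconstruction step this abstract refers to, the following is a minimal sketch of delay-coordinate (Takens-style) embedding of a univariate signal; the embedding dimension and delay used here are arbitrary illustrative choices, not values taken from the paper.

```python
# Minimal sketch of delay-coordinate state-space reconstruction.
# The embedding dimension m and delay tau below are illustrative, not tuned.
import numpy as np

def delay_embed(x, m=3, tau=25):
    """Stack m delayed copies of x into reconstructed state vectors."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Example with a noisy periodic signal standing in for observed data.
t = np.linspace(0, 40 * np.pi, 5000)
x = np.sin(t) + 0.01 * np.random.default_rng(0).normal(size=t.size)
states = delay_embed(x, m=3, tau=25)
print(states.shape)  # each row is one reconstructed state vector
```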

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
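
    A hedged illustration of the kind of supervised-learning task such a survey covers, e.g. deciding whether a lightpath configuration yields acceptable signal quality. The data, feature names and labelling rule below are entirely synthetic and hypothetical, not drawn from the paper.

```python
# Toy, hypothetical example of ML applied to network telemetry: a classifier
# that predicts whether a (synthetic) lightpath configuration passes a quality
# check. Features, labels and thresholds are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.uniform(50, 2000, n),   # path length [km]
    rng.choice([32, 64], n),    # symbol rate [GBd]
    rng.choice([2, 4, 6], n),   # modulation order [bits/symbol]
    rng.uniform(-2, 3, n),      # launch power [dBm]
])
# Invented labelling rule: long paths with dense modulation tend to fail.
y = (X[:, 0] * X[:, 2] < 4000 * rng.normal(1.0, 0.1, n)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```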

    Direct determination of the solar neutrino fluxes from solar neutrino data

    We determine the solar neutrino fluxes from a global analysis of the solar and terrestrial neutrino data in the framework of three-neutrino mixing. Using a Bayesian approach, we reconstruct the posterior probability distribution function for the eight normalization parameters of the solar neutrino fluxes plus the relevant masses and mixing, with and without imposing the luminosity constraint. This is done by means of a Markov Chain Monte Carlo employing the Metropolis-Hastings algorithm. We also describe how these results can be applied to test the predictions of the Standard Solar Models. Our results show that, at present, both models with low and high metallicity can describe the data with good statistical agreement. Comment: 24 pages, 1 table, 7 figures. Acknowledgments corrected.
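
    A minimal sketch of a random-walk Metropolis-Hastings sampler of the kind the abstract mentions; the Gaussian target below is a stand-in for the real posterior over flux normalizations and oscillation parameters, which is not reproduced here.

```python
# Minimal random-walk Metropolis-Hastings sampler. The two-dimensional
# Gaussian log-posterior is a stand-in target, not the paper's likelihood.
import numpy as np

def metropolis_hastings(log_post, x0, n_steps=20000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + step * rng.normal(size=x.size)  # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:    # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

chain = metropolis_hastings(lambda v: -0.5 * np.sum(v ** 2), x0=[1.0, -1.0])
print(chain[5000:].mean(axis=0))  # posterior means after discarding burn-in
```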

    Variable selection with Random Forests for missing data

    Variable selection has been suggested for Random Forests to improve the efficiency of data prediction and interpretation. However, its basic element, i.e. variable importance measures, cannot be computed straightforwardly when there is missing data. Therefore, an extensive simulation study has been conducted to explore possible solutions, i.e. multiple imputation, complete case analysis and a newly suggested importance measure, for several missing-data-generating processes. The ability to distinguish relevant from non-relevant variables has been investigated for these procedures in combination with two popular variable selection methods. Findings and recommendations: Complete case analysis should not be applied, as it led to inaccurate variable selection and models with the worst prediction accuracy. Multiple imputation is a good means to select variables that would be of relevance in fully observed data; it produced the best prediction accuracy. By contrast, the application of the new importance measure causes a selection of variables that reflects the actual data situation, i.e. that takes the occurrence of missing values into account. Its error was only negligibly worse than that of imputation.
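
    A simplified sketch of the comparison described above: Random Forest importances computed after (a) complete-case analysis and (b) imputation. Single mean imputation and synthetic data stand in for the multiple imputation and simulation designs used in the study.

```python
# Synthetic comparison: Random Forest permutation importances after
# (a) complete-case analysis and (b) imputation. Mean imputation is a
# simple stand-in for the multiple imputation used in the study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 5))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n)  # columns 2-4 irrelevant
X_miss = X.copy()
X_miss[rng.random((n, 5)) < 0.2] = np.nan  # 20% of values missing at random

# (a) Complete-case analysis: keep only rows without any missing value.
cc = ~np.isnan(X_miss).any(axis=1)
rf_cc = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_miss[cc], y[cc])

# (b) Impute first, then fit on all rows.
X_imp = SimpleImputer(strategy="mean").fit_transform(X_miss)
rf_imp = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_imp, y)

print(permutation_importance(rf_cc, X_miss[cc], y[cc], random_state=0).importances_mean)
print(permutation_importance(rf_imp, X_imp, y, random_state=0).importances_mean)
```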

    Channel Flow of a Tensorial Shear-Thinning Maxwell Model: Lattice Boltzmann Simulations

    We introduce a nonlinear generalized tensorial Maxwell-type constitutive equation to describe shear-thinning glass-forming fluids, motivated by a recent microscopic approach to the nonlinear rheology of colloidal suspensions. The model captures a nonvanishing dynamical yield stress at the glass transition and incorporates normal-stress differences. A modified lattice-Boltzmann (LB) simulation scheme is presented that includes non-Newtonian contributions to the stress tensor and deals with flow-induced pressure differences. We test this scheme in pressure-driven 2D Poiseuille flow of the nonlinear generalized Maxwell fluid. In the steady state, comparison with an analytical solution shows good agreement. The transient dynamics after startup and cessation of the pressure gradient are studied; the simulation reproduces a finite stopping time for the cessation flow of the yield-stress fluid in agreement with previous analytical estimates
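
    A heavily simplified sketch of how a non-Newtonian closure can enter a lattice-Boltzmann collision step: a D2Q9 BGK channel flow whose local relaxation time depends on the shear rate through a generic shear-thinning law. This is not the paper's tensorial Maxwell scheme or its modified stress handling; it only illustrates the general mechanism, with arbitrary parameters.

```python
# Heavily simplified D2Q9 BGK lattice-Boltzmann channel flow with a local,
# shear-rate-dependent relaxation time (a generic shear-thinning closure).
# NOT the paper's tensorial Maxwell scheme; all parameters are illustrative.
import numpy as np

nx, ny = 32, 64                          # periodic in x, solid walls in y
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]        # opposite directions (bounce-back)
cx, cy = c[:, 0, None, None], c[:, 1, None, None]
Fx = 1e-6                                # constant body force driving the flow
nu0, n_exp, gd0 = 0.1, 0.7, 1e-5         # shear-thinning parameters (made up)

solid = np.zeros((nx, ny), dtype=bool)
solid[:, 0] = solid[:, -1] = True        # wall nodes

def feq(rho, ux, uy):
    cu = cx * ux + cy * uy
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2
                                     - 1.5 * (ux ** 2 + uy ** 2))

rho = np.ones((nx, ny))
f = feq(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))
tau = np.full((nx, ny), 3 * nu0 + 0.5)

for step in range(3000):
    rho = f.sum(axis=0)
    ux = (f * cx).sum(axis=0) / rho
    uy = (f * cy).sum(axis=0) / rho
    fe = feq(rho, ux, uy)
    # Local shear rate estimated from the non-equilibrium shear stress.
    Pxy = ((f - fe) * cx * cy).sum(axis=0)
    gdot = 3 * np.abs(Pxy) / (rho * tau) + 1e-12
    nu = nu0 * (1 + (gdot / gd0) ** 2) ** ((n_exp - 1) / 2)  # thins with gdot
    tau = 3 * nu + 0.5
    fpost = f - (f - fe) / tau + 3 * w[:, None, None] * cx * Fx  # BGK + force
    fpost[:, solid] = f[opp][:, solid]   # full-way bounce-back at the walls
    for i in range(9):                   # streaming step (periodic shifts)
        f[i] = np.roll(np.roll(fpost[i], c[i, 0], axis=0), c[i, 1], axis=1)

print("mean centreline velocity:", ux[:, ny // 2].mean())
```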