651 research outputs found

    Decoding Trace Peak Behaviour - A Neuro-Fuzzy Approach

    Get PDF

    Machine learned regression for abductive DNA sequencing

    No full text

    Computational cognitive modelling of action awareness: prior and retrospective

    Get PDF
This paper presents a computational cognitive model of action awareness, focusing on action preparation and performance and considering their cognitive effects both prior and retrospective to the action execution. How action selection and execution contribute to awareness, or vice versa, is an open research question, and findings from brain imaging and recording techniques have made more information available on it. Some evidence supports the hypothesis that awareness of action selection does not directly cause the action execution (or behaviour) but arises afterwards as an effect of unconscious action-preparation processes. In contrast, another hypothesis claims that both predictive and inferential processes related to action preparation and execution may contribute to conscious awareness of the action, and furthermore that this awareness is a dynamic combination of prior awareness (through predictive motor control processes) and retrospective awareness (through inferential sense-making processes) relative to the action execution. The proposed model integrates the findings of both conscious and unconscious explanations of action awareness and ownership, and acts as a generic computational cognitive model explaining agent behaviour through the interplay between conscious and unconscious processes. The model is validated through simulations of suitable scenarios covering actions that are prepared without the agent being conscious of them at any point in time, as well as actions for which the agent develops prior and/or retrospective awareness. Having selected an interrelated set of scenarios, a systematic approach is used to find a suitable yet generic parameter value set, used throughout all simulations, which highlights the strength of the design of this cognitive model.
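The dynamic combination of prior and retrospective awareness described above can be sketched as a toy simulation. This is not the authors' model: the weighting, decay, and build-up parameters below are all illustrative assumptions, meant only to show how a predictive signal fading after execution can blend with an inferential signal building up afterwards.

```python
# Minimal sketch (not the paper's model): awareness as a weighted,
# time-evolving blend of a prior (predictive) and a retrospective
# (inferential) component. All names and values are illustrative.

def awareness_trace(prior, retro, w_prior=0.6, decay=0.9, steps=10):
    """Combine prior and retrospective awareness over discrete time steps.

    prior, retro -- initial awareness levels in [0, 1]
    w_prior      -- relative weight of the predictive component
    decay        -- per-step decay of the prior signal after execution
    """
    trace = []
    for t in range(steps):
        combined = w_prior * prior + (1.0 - w_prior) * retro
        trace.append(round(combined, 3))
        prior *= decay                      # predictive signal fades
        retro = min(1.0, retro + 0.05)     # inferential sense-making builds up
    return trace

print(awareness_trace(prior=0.8, retro=0.2))
```

With these particular parameters the predictive component dominates early and the inferential one catches up over time, loosely mirroring the prior-versus-retrospective distinction in the abstract.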

    NILM techniques for intelligent home energy management and ambient assisted living: a review

    Get PDF
The ongoing deployment of smart meters and various commercial devices has made electricity disaggregation feasible in buildings and households, based on a single measurement of the current and, sometimes, of the voltage. Energy disaggregation aims to separate the total power consumption into specific appliance loads, which can be achieved by applying Non-Intrusive Load Monitoring (NILM) techniques with minimal invasion of privacy. NILM techniques have become increasingly widespread in recent years, as a consequence of the interest of companies and consumers in efficient energy consumption and management. This work presents a detailed review of NILM methods, focusing on recent proposals and their applications, particularly in the areas of Home Energy Management Systems (HEMS) and Ambient Assisted Living (AAL), where the ability to determine the on/off status of certain devices can provide key information for further decisions. As well as complementing previous reviews of the NILM field and discussing the applications of NILM in HEMS and AAL, this paper provides guidelines for future research on these topics.
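The core NILM idea of recovering appliance on/off status from a single aggregate measurement can be illustrated with the simplest event-based approach: detecting step changes that match a known appliance's power signature. Real NILM methods (HMMs, deep networks, and the other techniques the review surveys) are far more robust; the threshold and signature values below are made-up assumptions.

```python
# Illustrative sketch of event-based NILM: detect on/off transitions of a
# known appliance from step changes in the aggregate power signal.
# Signature and tolerance values are invented for this example.

def detect_events(power, signature, tol=50.0):
    """Return (index, 'on'/'off') pairs where the aggregate power steps
    by approximately +/- signature watts."""
    events = []
    for i in range(1, len(power)):
        delta = power[i] - power[i - 1]
        if abs(delta - signature) <= tol:
            events.append((i, "on"))
        elif abs(delta + signature) <= tol:
            events.append((i, "off"))
    return events

# Aggregate readings (W); a ~2000 W appliance switches on at t=2, off at t=5.
readings = [300, 310, 2320, 2330, 2325, 330, 320]
print(detect_events(readings, signature=2000.0))  # → [(2, 'on'), (5, 'off')]
```

The on/off status recovered this way is exactly the kind of signal the review highlights as useful input for HEMS and AAL decision-making.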

    Evolutionary and Reinforcement Fuzzy Control

    Get PDF
Many modern and classical techniques exist for the design of control systems. However, many real-world applications are inherently complex, and the applicability of traditional design and control techniques is limited. In addition, no single design method exists that can be applied to all types of system. Due to this deficiency, recent years have seen an exponential increase in the use of methods loosely termed 'computational intelligence techniques' or 'soft-computing techniques'. Such techniques tend to solve problems using a population of individual elements or potential solutions, or the flexibility of a network, as opposed to a rigid, single point of computing. Through computational redundancy, soft computing allows unmatched tractability in practical problem solving. The intelligent paradigm most successfully applied to control engineering is fuzzy logic, in the form of fuzzy control. The motivation for using fuzzy control is twofold. First, it allows heuristics, such as modelled operator actions, to be incorporated into the control strategy. Second, it allows nonlinearities to be defined in an intuitive way using rules and interpolation. Although it is an attractive tool, many problems in fuzzy control remain to be solved. To date, most applications have been limited to relatively simple problems of low dimensionality. This is primarily because the design process is very much one of trial and error, heavily dependent on the quality of expert knowledge provided by the operator. In addition, fuzzy control design is largely ad hoc, lacking a systematic design procedure. Other problems include the curse of dimensionality and the inability to learn and improve from experience. While much work has been carried out to alleviate most of these difficulties, the last of these points lacks drive and exploration.
The objective of this thesis is to develop an automated, systematic procedure for optimally learning fuzzy logic controllers (FLCs), providing for autonomous and simple implementations. In pursuit of this goal, a hybrid method is developed that combines the advantages of artificial neural networks (ANNs), evolutionary algorithms (EAs) and reinforcement learning (RL). This overcomes the deficiencies of conventional EAs, which may omit representation of part of a variable's operating range and in practice do not achieve fine learning; the method also allows backpropagation when necessary or feasible. It is termed the Evolutionary NeuroFuzzy Learning Intelligent Control Technique (ENFLICT) model. Unlike other hybrids, ENFLICT permits global structural learning and local offline or online learning. The global EA and local neural learning processes are not separated: the EA learns and optimises the ENFLICT structure, while ENFLICT learns the network parameters. The EA used here is an improved version of the messy genetic algorithm (mGA), which uses flexible cellular chromosomes for structural optimisation. Compared with other flexible-length EAs, the mGA can address issues such as the curse of dimensionality and redundant genetic information. The enhancements to the algorithm lie in the coding and decoding of the genetic information to represent a growing and shrinking network; in defining network properties such as neuron activation type and network connectivity; and in representing all of this information in a single gene. Another step forward taken in this thesis on neurofuzzy learning is learning online, which here refers to learning unsupervised and adapting to real-time system parameter changes.
Online learning is much more attractive because the alternative, supervised offline learning, demands quality learning data, which is often expensive to obtain and may be unrepresentative of and inaccurate about the real environment. First, the learning algorithm is developed for the case of a given model of the system, where the system dynamics are available or can be obtained through, for example, system identification. This naturally leads to the development of a method for learning by interacting directly with the environment. The motivation is that real-world applications tend to be large and complex, and obtaining a mathematical model of the plant is not always possible. For this purpose the reinforcement learning paradigm is utilised, the primary learning method of biological systems, which can adapt to their environment and experiences. In this thesis, the reinforcement learning algorithm is based on the advantage learning method and has been extended to deal with continuous-time systems and online implementations without using a lookup table. This means that large databases of system behaviour need not be constructed, and the procedure can work online using only the information of the immediate situation. For complex, higher-order systems, and where identifying the system model is difficult, a hierarchical method has been developed, based on a hybrid of all the other methods. In particular, the procedure makes use of a method developed to work directly with the plant step response, thus avoiding mathematical model fitting, which may be time-consuming and inaccurate. All techniques developed and all contributions of the thesis are illustrated by several case studies and validated through simulations.
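The idea of evolutionarily learning a fuzzy controller can be sketched in miniature. This is not ENFLICT: the sketch evolves only the rule consequents of a tiny one-input Sugeno-style controller, and a simple mutation hill-climb stands in for the messy GA; the membership functions, target mapping, and parameter values are all invented for illustration.

```python
import random

# Toy illustration (not ENFLICT): evolve the rule consequents of a tiny
# one-input fuzzy controller so it approximates a target mapping.

def memberships(x):
    """Triangular memberships for 'low', 'mid', 'high' on [0, 1]."""
    low = max(0.0, 1.0 - 2.0 * x)
    mid = max(0.0, 1.0 - abs(x - 0.5) * 2.0)
    high = max(0.0, 2.0 * x - 1.0)
    return low, mid, high

def fuzzy_out(x, rule_outs):
    """Weighted-average (Sugeno-style) defuzzification."""
    ms = memberships(x)
    s = sum(ms)
    return sum(m * r for m, r in zip(ms, rule_outs)) / s if s else 0.0

def fitness(rule_outs, target=lambda x: x * x):
    """Negative sum of squared errors against the target mapping."""
    xs = [i / 20.0 for i in range(21)]
    return -sum((fuzzy_out(x, rule_outs) - target(x)) ** 2 for x in xs)

random.seed(0)
best = [random.random() for _ in range(3)]
for _ in range(500):                       # simple mutation hill-climb
    cand = [max(0.0, min(1.0, r + random.gauss(0, 0.1))) for r in best]
    if fitness(cand) > fitness(best):
        best = cand

print([round(r, 2) for r in best])         # learned rule consequents
```

A real mGA would additionally grow and shrink the rule base and network structure, which is what the single-gene encoding described in the abstract is for; here the structure is fixed and only parameters are learned.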

    Drawing, Handwriting Processing Analysis: New Advances and Challenges

    No full text
Drawing and handwriting are communicational skills that have been fundamental in geopolitical, ideological and technological evolutions of all time. Drawing and handwriting are still useful for defining innovative applications in numerous fields. In this regard, researchers have to solve new problems, such as those related to the way in which drawing and handwriting can become an efficient means of commanding various connected objects, or to validating graphomotor skills as evident and objective sources of data useful in the study of human beings, their capabilities and their limits from birth to decline.

An Integrated Model of Context, Short-Term, and Long-Term Memory

    Get PDF
I present the context-unified encoding (CUE) model, a large-scale spiking neural network model of human memory. It combines and integrates activity-based short-term memory with weight-based long-term memory. The implementation with spiking neurons ensures biological plausibility and allows for predictions at the neural level. At the same time, the model produces behavioural outputs that have been matched to human data from serial and free recall experiments. In particular, well-known results such as primacy, recency, transposition error gradients, and forward recall bias have been reproduced with good quantitative matches. Additionally, the model accounts for the effects of the acetylcholine antagonist scopolamine and for the Hebb repetition effect. The CUE model combines and extends the ordinal serial encoding (OSE) model, a spiking neuron model of short-term memory, and the temporal context model (TCM), a mathematical model of free recall. To the former, a neural mechanism for tracking the list position is added. The latter is converted into a spiking neural network, preserving its main features and simplifying equations where appropriate. Previous models of the recall process in the TCM are replaced by a new, independent accumulator recall process that is better suited to integration into a large-scale network. To implement the modification of the required association matrices, a novel learning rule, the association matrix learning rule (AML), is derived that allows for one-shot learning without catastrophic forgetting. Its biological plausibility is discussed, and it is shown to account for changes in neural firing observed in human recordings from an association learning experiment. Furthermore, I discuss a recent proposal of an optimal fuzzy temporal memory as a replacement for the TCM context signal and show that it would likely require more neurons than there are in the human brain.
To construct the CUE model, I have used the Neural Engineering Framework (NEF) and the Semantic Pointer Architecture (SPA). This thesis makes novel contributions to both. I propose to distribute NEF intercepts according to the distribution of cosine similarities of random uniformly distributed unit vectors. This leads to a uniform distribution of active neurons and considerably reduces the error introduced by spiking noise in high-dimensional neuronal representations, improving the asymptotic scaling of the noise error with the dimensionality d from O(d) to O(d^(3/4)). These results are applied to achieve improved Semantic Pointer representations in neural networks that are on par with or better than those of previous methods for optimizing neural representations in the Semantic Pointer Architecture. Furthermore, vector-derived transformation binding (VTB) is investigated as an alternative to circular convolution in the SPA, with promising results.
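The intercept proposal above can be sketched directly: the cosine similarity between two random uniformly distributed unit vectors in d dimensions has the same distribution as a single coordinate of one such vector, which can be sampled by normalizing a Gaussian vector. This is a minimal illustration, not the thesis's implementation; the dimensionality and neuron count are illustrative assumptions.

```python
import random, math

# Sketch of the intercept idea: draw intercepts from the distribution of
# cosine similarities between random uniformly distributed unit vectors.
# In d dimensions this equals the distribution of one coordinate of a
# random unit vector, sampled by normalizing a Gaussian vector.

def cosine_similarity_sample(d):
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    return v[0] / norm            # one coordinate of a uniform unit vector

def sample_intercepts(n_neurons, d):
    return [cosine_similarity_sample(d) for _ in range(n_neurons)]

random.seed(1)
intercepts = sample_intercepts(1000, d=64)
# In high dimensions the samples concentrate near zero (std ≈ 1/sqrt(d)),
# matching the intuition that random high-dimensional vectors are nearly
# orthogonal to any given encoder.
mean = sum(intercepts) / len(intercepts)
print(round(mean, 3))
```

Because most representable points have near-zero similarity to any encoder in high dimensions, placing intercepts this way keeps the fraction of active neurons roughly uniform across the represented space, which is the property the thesis exploits.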