22 research outputs found

    The hippocampal formation from a machine learning perspective

    Nowadays, sensor devices are able to generate huge amounts of data in short periods of time. In many situations, the data collected by different sensors reflects a specific phenomenon but is presented in very different types and formats. In these cases, it is hard to determine how the distinct types of data are related to each other, or whether they reflect a certain condition. In this context, it would be of great importance to develop a system capable of analysing such data in the smallest amount of time to produce valid information. The animal brain is a biological organ capable of something similar with the information obtained by the senses. Inside the brain there is a structure called the Hippocampus, situated in the Temporal Lobe. Its main function is to analyse the sensorial data encoded by the Entorhinal Cortex in order to create new memories. Since the Hippocampus has evolved over a long time to perform these tasks, it is important to understand how it works and to model it, i.e. to define a set of computer algorithms that approximates it. Since the removal of the Hippocampus from a patient suffering from seizures, the scientific community has believed that the Hippocampus is crucial for memory formation and spatial navigation: without it, it would not be possible to memorize places and events that happened at a specific time or place. Such functionality is achieved with the help of a set of cells called Grid Cells, present in the Entorhinal Cortex area, together with Place Cells, Head Direction Cells and Boundary Vector Cells. The combined information analysed by those cells allows the unique identification of places or events.
The main objective of the work developed in this Thesis consists of describing the biological mechanisms present in the Hippocampus area and defining potential computer models that allow the simulation of all, or the most critical, functions of both the Hippocampus and the Entorhinal Cortex areas.
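A common idealisation of the Grid Cell firing map mentioned above, used in the computational-modelling literature, writes the rate as a sum of three cosine gratings oriented 60 degrees apart, which peaks on a hexagonal lattice. The sketch below is illustrative only and does not reproduce the thesis's own models; the spacing, orientation and phase parameters are assumptions:

```python
import numpy as np

def grid_cell_rate(pos, spacing=0.5, orientation=0.0, phase=np.zeros(2)):
    """Idealised grid-cell firing rate at 2D position `pos` (metres):
    a sum of three plane waves whose directions are 60 degrees apart."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)   # wave number for the chosen spacing
    total = 0.0
    for i in range(3):
        theta = orientation + i * np.pi / 3
        u = np.array([np.cos(theta), np.sin(theta)])  # grating direction
        total += np.cos(k * np.dot(u, pos - phase))
    # Rectify so the rate is non-negative; maxima fall on a hexagonal lattice
    return max(total / 3.0, 0.0)
```

Evaluating this function over a grid of positions produces the hexagonal firing fields characteristic of Grid Cell recordings; the rate peaks at 1.0 wherever the animal's position matches the cell's phase offset.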

    Learning manipulative skills using an artificial intelligence approach.

    The aim of this research was to design a non-linear controller, based on Artificial Neural Network and Reinforcement Learning algorithms, able to perform intelligent robotic assembly of mechanical components. Different sources of information were combined to develop a fully unsupervised, intelligent controller. In the author's design no class labelling or geometry feature pretraining takes place; only force and torque signals, together with the direction of insertion, were supplied to the controller. A unique sandwich structure of the intelligent controller was proposed. It featured two major layers: a State Recognition module, where the detection and localisation of the contact points were performed, and a Decision Making subsystem, where the decision about the next action took place. All the algorithms were implemented and tested on simulated data before being applied to the real-life peg-in-hole insertion. The results are presented in the form of graphs and tables. Evaluation of the environmental uncertainty was accomplished: the signal from the force and torque sensor was acquired under controlled conditions, and all the data was collected to establish the area and level of uncertainty (e.g. signal errors) the artificial controller would need to learn to cope with and compensate for. The empirical part of the thesis includes an investigation into the effects of different learning methods applied to the same geometry, and the influence of action-selection methods on AI agent performance was analysed. The proposed controller was applied to a set of real-life peg-in-hole experiments. Both circular and square peg geometries were used, and insertions into chamfered and non-chamfered holes were performed. Materials with different friction factors were used for mating parts. Fast and stable knowledge acquisition was clearly present in all the cases investigated.
A significant reduction in contact force value during the initial stage of the learning process was recorded. The force was usually reduced to one tenth of the initial value. Some fluctuations were recorded, but when the cylindrical peg was considered the value of contact forces never exceeded 0.5 N during the steady state.
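The two-layer structure described above (state recognition feeding decision making) can be sketched with tabular Q-learning. Everything here is hypothetical: the action names, the sign-pattern encoding of the force signal, and the learning constants are illustrative stand-ins, not the author's design:

```python
import random
from collections import defaultdict

# Illustrative corrective motions; the real controller's action set differs.
ACTIONS = ["move_down", "tilt_x+", "tilt_x-", "tilt_y+", "tilt_y-"]

def recognise_state(fx, fy, fz, threshold=0.1):
    """State-recognition sketch: reduce force readings (N) to a sign pattern,
    a crude stand-in for contact-point detection and localisation."""
    sign = lambda v: 0 if abs(v) < threshold else (1 if v > 0 else -1)
    return (sign(fx), sign(fy), sign(fz))

class QController:
    """Decision-making sketch: tabular Q-learning with epsilon-greedy selection."""
    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Explore with probability epsilon, otherwise pick the best-known action
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step temporal-difference (Q-learning) update
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td
```

In a training loop, a reward shaped to penalise large contact forces would drive the table toward the force reductions the abstract reports.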

    Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS 1994), volume 1

    The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and job creation and preservation can partly be solved by the use of intelligent robotics, which are also required for human space exploration missions. Individual sessions addressed the nuclear industry, agile manufacturing, security/building monitoring, on-orbit applications, vision and sensing technologies, situated control and low-level control, robotic systems architecture, environmental restoration and waste management, robotic remanufacturing, and healthcare applications.

    Machine Learning

    Machine Learning can be defined in various ways, but it broadly refers to a scientific domain concerned with the design and development of theoretical and implementation tools that allow building systems with some human-like intelligent behaviour. More specifically, machine learning addresses the ability of such systems to improve automatically through experience.
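A minimal, self-contained illustration of "improving automatically through experience" is a perceptron that learns the logical AND function from labelled examples; the learning rate and epoch count below are arbitrary choices, not prescribed by the text:

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Train a two-input perceptron: each misclassified example nudges the
    weights, so performance improves with experience."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred           # 0 when the example is already correct
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labelled experience: the truth table of logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the learned weights classify all four cases correctly, which the untrained (all-zero) weights did not.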

    Diagnostic and adaptive redundant robotic planning and control

    Neural networks and fuzzy logic are combined into a hierarchical structure capable of planning, diagnosis, and control for a redundant, nonlinear robotic system in a real-world scenario. Throughout this work, the levels of this overall approach are demonstrated for a redundant robot-and-hand combination as it is commanded to approach, grasp, and successfully manipulate objects for a wheelchair-bound user in a crowded, unpredictable environment. Four levels of hierarchy are developed and demonstrated, from the lowest level upward: diagnostic individual motor control; optimal redundant joint allocation for trajectory planning; grasp planning with tip and slip control; and high-level task planning for multiple arms and manipulated objects. Given the expectations of the user and the constantly changing nature of processes, the robot hierarchy learns from its experiences in order to execute the next related task more efficiently, and allocates this knowledge to the appropriate levels of planning and control. The above approaches are then extended to automotive and space applications.
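The tip-and-slip control mentioned at the grasp-planning level can be illustrated with a minimal fuzzy controller: triangular membership functions map a measured slip rate to a grip-force correction. All rule breakpoints and output values below are hypothetical, not taken from the work itself:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def grip_correction(slip_rate):
    """Map a normalised slip rate to a grip-force increase (N) via three
    fuzzy rules and weighted-average defuzzification. Breakpoints are
    illustrative assumptions."""
    rules = [
        (tri(slip_rate, -0.5, 0.0, 0.5), 0.0),  # no slip     -> hold force
        (tri(slip_rate, 0.0, 0.5, 1.0), 1.0),   # slight slip -> small increase
        (tri(slip_rate, 0.5, 1.0, 1.5), 3.0),   # severe slip -> large increase
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0
```

Because adjacent membership functions overlap, the output varies smoothly between the rule consequents rather than switching abruptly, which is the usual motivation for fuzzy control at this level.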

    Adaptive hypertext and hypermedia : workshop : proceedings, 3rd, Sonthofen, Germany, July 14, 2001 and Aarhus, Denmark, August 15, 2001

    This paper presents two empirical usability studies based on techniques from Human-Computer Interaction (HCI) and software engineering, which were used to elicit requirements for the design of a hypertext generation system. Here we discuss the findings of these studies, which were used to motivate the choice of adaptivity techniques. The results showed dependencies between different ways to adapt the explanation content and the document length and formatting, so the system's architecture had to be modified to cope with this requirement. In addition, the system had to be made adaptable, as well as adaptive, in order to satisfy the elicited users' preferences.

    Machine Learning Based Detection and Evasion Techniques for Advanced Web Bots.

    Web bots are programs that can be used to browse the web and perform different types of automated actions, both benign and malicious. Such web bots vary in sophistication based on their purpose, ranging from simple automated scripts to advanced web bots that have a browser fingerprint and exhibit humanlike behaviour. Advanced web bots are especially appealing to malicious web bot creators, due to their browser-like fingerprint and humanlike behaviour, which reduce their detectability. Several effective behaviour-based web bot detection techniques have been proposed in the literature. However, the performance of these detection techniques when targeting malicious web bots that try to evade detection has not been examined in depth. Such evasive web bot behaviour is achieved by different techniques, including simple heuristics and statistical distributions, or more advanced machine learning based techniques. Motivated by the above, in this thesis we research novel web bot detection techniques and how effective these are against evasive web bots that try to evade detection using, among others, recent advances in machine learning. To this end, we initially evaluate state-of-the-art web bot detection techniques against web bots of different sophistication levels and show that, while the existing approaches achieve very high performance in general, they are not very effective when faced only with advanced web bots that try to remain undetected. Thus, we propose a novel web bot detection framework that can effectively detect bots of varying levels of sophistication, including advanced web bots. This framework comprises and combines two detection modules: (i) a detection module that extracts several features from web logs and uses them as input to several well-known machine learning algorithms, and (ii) a detection module that uses mouse trajectories as input to Convolutional Neural Networks (CNNs).
Moreover, we examine the case where advanced web bots themselves utilise recent advances in machine learning to evade detection. Specifically, we propose two novel evasive advanced web bot types: (i) web bots that use Reinforcement Learning (RL) to update their browsing behaviour based on whether they have been detected or not, and (ii) web bots that possess data from human behaviours and use it as input to Generative Adversarial Networks (GANs) to generate images of humanlike mouse trajectories. We show that both approaches increase the evasiveness of the web bots by reducing the performance of the detection framework utilised in each case. We conclude that malicious web bots can exhibit high sophistication levels and combine different techniques that increase their evasiveness. Even though web bot detection frameworks can combine different methods to effectively detect such bots, web bots can update their behaviours using, among others, recent advances in machine learning to increase their evasiveness. Thus, detection techniques should be continuously updated to keep up with new techniques introduced by malicious web bots to evade detection.
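The first detection module described above, which extracts features from web logs, might compute session-level statistics such as those sketched below. The exact feature set is an assumption, chosen as typical of behaviour-based bot detection, not the thesis's actual feature list:

```python
from statistics import mean

def session_features(requests):
    """Derive simple per-session features from web-log entries.

    requests: list of (timestamp_seconds, url) tuples for one session.
    Bots often show very regular inter-arrival times and low URL diversity,
    which these features are intended to surface for a downstream classifier.
    """
    times = sorted(t for t, _ in requests)
    gaps = [b - a for a, b in zip(times, times[1:])] or [0.0]
    urls = [u for _, u in requests]
    return {
        "n_requests": len(requests),                       # session volume
        "mean_interarrival": mean(gaps),                   # pacing of requests
        "unique_url_ratio": len(set(urls)) / len(urls),    # browsing diversity
    }
```

Feature vectors like these would then be fed to the well-known machine learning classifiers the framework mentions, alongside the mouse-trajectory CNN module.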

    Proceedings of the NASA Conference on Space Telerobotics, volume 1

    The theme of the Conference was man-machine collaboration in space. Topics addressed include: redundant manipulators; man-machine systems; telerobot architecture; remote sensing and planning; navigation; neural networks; fundamental AI research; and reasoning under uncertainty.

    Robot environment learning with a mixed-linear probabilistic state-space model

    This thesis proposes the use of a probabilistic state-space model with mixed-linear dynamics for learning to predict a robot's experiences. It is motivated by a desire to bridge the gap between traditional models with predefined objective semantics on the one hand, and the biologically-inspired "black box" behavioural paradigm on the other. A novel EM-type algorithm for the model is presented, which is less computationally demanding than the Monte Carlo techniques developed for use in (for example) visual applications. The algorithm's E-step is slightly approximative, but an extension is described which would in principle make it asymptotically correct. Investigation using synthetically sampled data shows that the uncorrected E-step can in any case make correct inferences about quite complicated systems. Results collected from two simulated mobile robot environments support the claim that mixed-linear models can capture both discontinuous and continuous structure in the world in an intuitively natural manner; while they proved to perform only slightly better than simpler autoregressive hidden Markov models on these simple tasks, it is possible to claim tentatively that they might scale more effectively to environments in which trends over time play a larger role. Bayesian confidence regions, which are easily computed from a mixed-linear model, proved to be an effective guard preventing it from making over-confident predictions outside its area of competence. A section on future extensions discusses how the model's easy invertibility could be harnessed to the ultimate aim of choosing actions, from a continuous space of possibilities, which maximise the robot's expected payoff over several steps into the future.
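The linear-Gaussian building block of such a mixed-linear state-space model is the Kalman recursion that the E-step generalises. A single predict/update step, for assumed dynamics x_{t+1} = A x_t + w and observations y_t = C x_t + v with Gaussian noise covariances Q and R, can be sketched as follows (this is the standard filter, not the thesis's EM algorithm):

```python
import numpy as np

def kalman_step(mu, P, y, A, C, Q, R):
    """One Kalman filter step: propagate the Gaussian belief N(mu, P)
    through the linear dynamics, then condition on observation y."""
    # Predict: push the belief through x' = A x + w, w ~ N(0, Q)
    mu_pred = A @ mu
    P_pred = A @ P @ A.T + Q
    # Update: condition on y = C x + v, v ~ N(0, R)
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    mu_new = mu_pred + K @ (y - C @ mu_pred)
    P_new = (np.eye(len(mu)) - K @ C) @ P_pred
    return mu_new, P_new
```

A mixed-linear model runs a bank of such steps, one per discrete regime, and mixes their beliefs; the posterior covariance P also supplies the Bayesian confidence regions the abstract mentions.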