59 research outputs found

    Hash Functions for Episodic Recognition and Retrieval

    Get PDF
    Episodic memory systems for artificially intelligent agents must cope with an ever-growing episodic memory store. This paper presents an approach for minimizing the size of the store by using specialized hash functions to convert each memory into a relatively short binary code. A set of desiderata for such hash functions is presented, including locality sensitivity and reversibility. The paper then introduces multiple approaches for such functions and compares their effectiveness.
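    The locality-sensitivity desideratum can be illustrated with random-hyperplane hashing, a standard LSH scheme (shown here as a generic sketch, not necessarily the paper's construction): similar memory vectors should receive binary codes that differ in few bits.

```python
import numpy as np

def make_lsh_hasher(dim, n_bits, seed=0):
    """Build a random-hyperplane LSH: maps a feature vector to n_bits bits."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, dim))  # one random hyperplane per bit
    def hasher(v):
        # Each bit records which side of a hyperplane the vector falls on.
        return tuple((planes @ v > 0).astype(int))
    return hasher

def hamming(a, b):
    """Number of differing bit positions between two codes."""
    return sum(x != y for x, y in zip(a, b))

hasher = make_lsh_hasher(dim=8, n_bits=16)
v = np.ones(8)
near = v + 0.01 * np.ones(8)   # nearly identical episode
far = -v                        # very different episode
assert hamming(hasher(v), hasher(near)) <= hamming(hasher(v), hasher(far))
```

    Because nearby vectors fall on the same side of most hyperplanes, their codes collide in most bit positions, which is what makes a short binary code usable for recognition and retrieval.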

    Towards Lifelong Reasoning with Sparse and Compressive Memory Systems

    Get PDF
    Humans have a remarkable ability to remember information over long time horizons. When reading a book, we build up a compressed representation of the past narrative, such as the characters and events that have shaped the story so far. We can do this even when they are separated from the current text by thousands of words, or by long stretches of time between readings. Over our lives, we build up and retain memories that tell us where we live, what we have experienced, and who we are. Adding memory to artificial neural networks has been transformative in machine learning, allowing models to extract structure from temporal data and model the future more accurately. However, the capacity for long-range reasoning in current memory-augmented neural networks is considerably limited in comparison to humans, despite access to powerful modern computers. This thesis explores two prominent approaches to scaling artificial memories to lifelong capacity: sparse access and compressive memory structures. With sparse access, only a very small subset of pertinent memory is inspected, retrieved, and updated. It is found that sparse memory access is beneficial for learning, allowing for improved data-efficiency and improved generalisation. From a computational perspective, sparsity allows scaling to memories with millions of entities on a simple CPU-based machine. It is shown that memory systems that compress the past to a smaller set of representations reduce redundancy, can speed up the learning of rare classes, and can improve upon classical data structures in database systems. Compressive memory architectures are also devised for sequence prediction tasks and are observed to significantly advance the state of the art in modelling natural language.
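    The sparse-access idea can be sketched as a top-k read over a memory matrix: only the k most similar slots are scored, softmax-weighted, and combined. This is an illustrative NumPy sketch; the slot contents and dot-product similarity are assumptions, not the thesis's exact architecture.

```python
import numpy as np

def sparse_read(memory, query, k=2):
    """Attend over only the top-k most similar memory slots (sparse access)."""
    scores = memory @ query                      # similarity of query to each slot
    topk = np.argpartition(-scores, k - 1)[:k]   # indices of the k best slots
    weights = np.exp(scores[topk] - scores[topk].max())
    weights /= weights.sum()                     # softmax over the k slots only
    return weights @ memory[topk], topk

memory = np.eye(4)                       # four one-hot memory slots
query = np.array([0.9, 0.1, 0.0, 0.0])
read, used = sparse_read(memory, query, k=2)
assert set(used) == {0, 1}               # only the two most relevant slots touched
```

    Because the softmax and the weighted read run over k slots rather than the whole store, the cost per read stays constant as the memory grows, which is what lets this style of access scale on a CPU-based machine.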

    Episodic Memory for Cognitive Robots in Dynamic, Unstructured Environments

    Full text link
    Elements from cognitive psychology have been applied in a variety of ways to artificial intelligence. One of the lesser studied areas is how episodic memory can assist learning in cognitive robots. In this dissertation, we investigate how episodic memories can assist a cognitive robot in learning which behaviours are suited to different contexts. We demonstrate the learning system in a domestic robot designed to assist human occupants of a house. People are generally good at anticipating the intentions of others. When around people we are familiar with, we can predict what they are likely to do next, based on what we have observed them doing before. Our ability to record different types of events, and to recall the memories relevant to those types of events, is one reason our cognition is so powerful. For a robot to assist rather than hinder a person, artificial agents too require this functionality. This work makes three main contributions. Since episodic memory requires context, we first propose a novel approach to segmenting a metric map into a collection of rooms and corridors. Our approach is based on identifying critical points on a Generalised Voronoi Diagram and creating regions around these critical points. Our results show state-of-the-art accuracy, with 98% precision and 96% recall. Our second contribution is our approach to event recall in episodic memory. We take a novel approach in which events in memory are typed and a unique recall policy is learned for each type of event. These policies are learned incrementally, using only information presented to the agent and without any need to take the agent offline. Ripple Down Rules provide a suitable learning mechanism. Our results show that, when trained appropriately, we achieve near-perfect recall of episodes that match an observation. Finally, we propose a novel approach to how recall policies are trained. Commonly, an RDR policy is trained with a human guide, where the instructor has the option to discard information that is irrelevant to the situation. However, we show that by using Inductive Logic Programming it is possible to train a recall policy for a given type of event after only a few observations of that type of event.
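    A minimal sketch of the Ripple Down Rules structure behind such recall policies (illustrative only; the condition functions and conclusions are hypothetical, not the dissertation's rules): each rule carries an exception branch, so a policy is corrected incrementally by attaching a new rule where a case was misclassified, rather than by retraining.

```python
class RDRNode:
    """One Ripple Down Rule: if cond fires, conclude, unless an exception fires."""
    def __init__(self, cond, conclusion):
        self.cond, self.conclusion = cond, conclusion
        self.except_child = None   # consulted when this rule fired but was wrong
        self.else_child = None     # consulted when this rule does not fire

    def classify(self, case):
        if self.cond(case):
            if self.except_child:
                refined = self.except_child.classify(case)
                if refined is not None:
                    return refined          # a more specific exception applies
            return self.conclusion
        return self.else_child.classify(case) if self.else_child else None

# Default rule: recall nothing unless a more specific rule fires.
root = RDRNode(lambda c: True, "ignore")
# Incremental correction added after a misclassified case (hypothetical condition):
root.except_child = RDRNode(lambda c: c.get("room") == "kitchen", "recall")

assert root.classify({"room": "kitchen"}) == "recall"
assert root.classify({"room": "hall"}) == "ignore"
```

    The appeal for online learning is that each correction is local: adding an exception never disturbs the conclusions the tree already gives for previously seen cases.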

    Perceiving is Believing. Authentication with Behavioural and Cognitive Factors

    Get PDF
    Most computer users have experienced login problems such as forgetting passwords, losing token cards and authentication dongles, failing that complicated screen pattern once again, as well as usability difficulties in interaction. Facing the difficulties of inflexible strong authentication solutions, users tend to react with poor acceptance, or to relax the assumed correct use of authentication procedures and devices, rendering the intended security useless. Biometrics can solve some of those problems, but despite the vast research there is no perfect recipe for designing a secure strong authentication procedure: every design falls into a trade-off between intrusiveness, effectiveness, contextual adequacy, and security guarantees. Taking advantage of new technology, recent research on multi-modal, behavioural, and cognitively oriented authentication has sought to optimize this trade-off towards precision and convenience, reducing intrusiveness for the same amount of security. But these solutions also fall short with respect to different scenarios. Users currently perform multiple authentications every day, through multiple devices, in a panoply of different situations, involving different resources and diverse usage contexts, with no single "best authentication solution" for all possible purposes. The proposed framework enhances recent research in user authentication services with a broader view of the problems involving each solution, towards a usable, secure authentication methodology that combines and explores the strengths of each method. It is then used to prototype instances of new dynamic multifactor models (including novel models of behavioural and cognitive biometrics), materializing PiB (perceiving is believing) authentication. Ultimately, we show how the proposed framework can be smoothly integrated into applications and other authentication services and protocols, namely in the context of SSO Authentication Services and OAuth.
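    The dynamic multifactor idea can be sketched as a weighted fusion of per-factor match scores. This is a hedged illustration only; the factor names, weights, and threshold below are hypothetical and do not represent the PiB framework's actual model.

```python
def fuse_scores(scores, weights, threshold=0.7):
    """Weighted fusion of per-factor match scores (each score in [0, 1])."""
    assert len(scores) == len(weights)
    fused = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    return fused, fused >= threshold

# Hypothetical factors: keystroke dynamics, gait, and a cognitive challenge.
fused, accept = fuse_scores(scores=[0.9, 0.8, 0.95], weights=[0.5, 0.2, 0.3])
```

    Adjusting the weights per usage context is one simple way such a scheme can trade intrusiveness against assurance: a low-risk context can lean on passive behavioural factors, while a high-risk one raises the threshold or the weight of stronger factors.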

    Behaviour Profiling using Wearable Sensors for Pervasive Healthcare

    Get PDF
    In recent years, sensor technology has advanced in terms of hardware sophistication and miniaturisation. This has led to the incorporation of unobtrusive, low-power sensors into networks centred on human participants, called Body Sensor Networks. Amongst the most important applications of these networks is their use in healthcare and healthy living. The technology has the possibility of decreasing burden on the healthcare systems by providing care at home, enabling early detection of symptoms, monitoring recovery remotely, and avoiding serious chronic illnesses by promoting healthy living through objective feedback. In this thesis, machine learning and data mining techniques are developed to estimate medically relevant parameters from a participant's activity and behaviour parameters, derived from simple, body-worn sensors. The first abstraction from raw sensor data is the recognition and analysis of activity. Machine learning analysis is applied to a study of activity profiling to detect impaired limb and torso mobility. One of the advances in this thesis to activity recognition research is in the application of machine learning to the analysis of 'transitional activities': transient activity that occurs as people change their activity. A framework is proposed for the detection and analysis of transitional activities. To demonstrate the utility of transition analysis, we apply the algorithms to a study of participants undergoing and recovering from surgery. We demonstrate that it is possible to see meaningful changes in the transitional activity as the participants recover. Assuming long-term monitoring, we expect a large historical database of activity to quickly accumulate. We develop algorithms to mine temporal associations to activity patterns. This gives an outline of the user's routine. Methods for visual and quantitative analysis of routine using this summary data structure are proposed and validated. 
The activity and routine mining methodologies developed for specialised sensors are adapted to a smartphone application, enabling large-scale use. Validation of the algorithms is performed using datasets collected in laboratory settings and free-living scenarios. Finally, future research directions and potential improvements to the techniques developed in this thesis are outlined.
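    The first abstraction step, turning a raw sensor stream into features for activity recognition, is commonly done with sliding windows. The sketch below is a generic illustration under assumed window sizes and feature choices, not the thesis's exact feature set.

```python
import numpy as np

def window_features(signal, win=50, step=25):
    """Mean, variance, and mean absolute delta over sliding windows of a 1-D stream."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([w.mean(), w.var(), np.abs(np.diff(w)).mean()])
    return np.array(feats)

rng = np.random.default_rng(0)
still = rng.normal(0.0, 0.05, 200)    # low-movement segment (e.g. sitting)
active = rng.normal(0.0, 1.0, 200)    # high-movement segment (e.g. walking)
f = window_features(np.concatenate([still, active]))
# Windows from the active segment show much higher variance than still ones.
assert f[-1, 1] > f[0, 1]
```

    A classifier trained on such window features labels each window with an activity; transitional activities then appear as short runs of windows whose features lie between two stable activity classes.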

    A privacy-aware and secure system for human memory augmentation

    Get PDF
    The ubiquity of digital sensors embedded in today's mobile and wearable devices (e.g., smartphones, wearable cameras, wristbands) has made technology more intertwined with our life. Among many other things, this allows us to seamlessly log our daily experiences in increasing numbers and quality, a process known as "lifelogging". This practice produces a great amount of pictures and videos that can potentially improve human memory. Consider how a single photograph can bring back distant childhood memories, or how a song can help us reminisce about our last vacation. Such a vision of a "memory augmentation system" can offer considerable benefits, but it also raises new security and privacy challenges. Perhaps obviously, a system that captures everywhere we go, and everything we say, see, and do, greatly increases the danger to our privacy. Any data breach of such a memory repository, whether accidental or malicious, could negatively impact both our professional and private reputation. In addition, the threat of memory manipulation might be the most worrisome aspect of a memory augmentation system: if an attacker is able to remove, add, or change our captured information, the resulting data may implant memories in our heads that never took place, or, in turn, accelerate the loss of other memories. Starting from such key challenges, this thesis investigates how to design secure memory augmentation systems. In the course of this research, we develop tools and prototypes that can be applied by researchers and system engineers to develop pervasive applications that help users capture and later recall episodic memories in a secure fashion. We build trusted sensors and protocols to securely capture and store experience data, and software for the secure and privacy-aware exchange of experience data with others. We explore the suitability of various access control models to put users in control of the plethora of data that the system captures on their behalf. 
We also explore the possibility of using in situ physical gestures to control different aspects of the capturing and sharing of experience data. Ultimately, this thesis contributes to the design and development of secure systems for memory augmentation.
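    One standard building block for tamper-evident capture, of the kind the trusted-sensor protocols aim at, is a message authentication code over each stored record. The sketch below uses HMAC-SHA256 as a hedged illustration; the record fields and key handling are hypothetical, not the system's actual protocol.

```python
import hashlib
import hmac
import json

def sign_capture(record, key):
    """Attach an HMAC tag so later tampering with a memory record is detectable."""
    payload = json.dumps(record, sort_keys=True).encode()  # canonical serialization
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}

def verify_capture(signed, key):
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(signed["record"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])

key = b"device-secret"   # in practice, held in trusted hardware, not in code
entry = sign_capture({"time": "2021-05-01T10:00", "event": "lunch"}, key)
assert verify_capture(entry, key)

entry["record"]["event"] = "meeting"   # simulated memory manipulation
assert not verify_capture(entry, key)  # tampering is detected
```

    An integrity tag of this kind addresses manipulation of individual records; detecting deletion or reordering would additionally require chaining records together, for example with a hash chain.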

    Computer Aided Verification

    Get PDF
    The open access two-volume set LNCS 12224 and 12225 constitutes the refereed proceedings of the 32nd International Conference on Computer Aided Verification, CAV 2020, held in Los Angeles, CA, USA, in July 2020.* The 43 full papers presented together with 18 tool papers and 4 case studies were carefully reviewed and selected from 240 submissions. The papers are organized in the following topical sections: Part I: AI verification; blockchain and security; concurrency; hardware verification and decision procedures; and hybrid and dynamic systems. Part II: model checking; software verification; stochastic systems; and synthesis. *The conference was held virtually due to the COVID-19 pandemic.

    Pretrained Transformers for Text Ranking: BERT and Beyond

    Get PDF
    The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing applications. This survey provides an overview of text ranking with neural network architectures known as transformers, of which BERT is the best-known example. The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in natural language processing (NLP), information retrieval (IR), and beyond. In this survey, we provide a synthesis of existing work as a single point of entry for practitioners who wish to gain a better understanding of how to apply transformers to text ranking problems and researchers who wish to pursue work in this area. We cover a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures and dense retrieval techniques that perform ranking directly. There are two themes that pervade our survey: techniques for handling long documents, beyond typical sentence-by-sentence processing in NLP, and techniques for addressing the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size). Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques. However, there remain many open research questions, and thus in addition to laying out the foundations of pretrained transformers for text ranking, this survey also attempts to prognosticate where the field is heading.
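    The multi-stage architecture described above can be sketched generically: a cheap first-stage scorer selects a candidate set, and a more expensive scorer reranks only those candidates. The scorers here are toy stand-ins for BM25 and a cross-encoder, purely for illustration.

```python
def two_stage_rank(query, corpus, cheap_score, expensive_score, k=10):
    """Multi-stage ranking: cheap first-stage retrieval, then rerank the top-k."""
    candidates = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:k]
    return sorted(candidates, key=lambda d: expensive_score(query, d), reverse=True)

# Toy first-stage scorer: term overlap between query and document.
def overlap(q, d):
    return len(set(q.split()) & set(d.split()))

# Toy reranker standing in for a cross-encoder: overlap weighted by brevity.
def rerank_score(q, d):
    return overlap(q, d) / (1 + len(d.split()))

docs = ["bert for ranking", "ranking with transformers and bert", "cooking pasta"]
top = two_stage_rank("bert ranking", docs, overlap, rerank_score, k=2)
assert top[0] == "bert for ranking"
```

    The design point this captures is the effectiveness/efficiency trade-off from the survey: the expensive model is applied to only k candidates per query, so its cost does not grow with corpus size, while dense retrieval techniques instead make the first stage itself learned.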