    Agoric computation: trust and cyber-physical systems

    In the past two decades, advances in miniaturisation and economies of scale have led to the emergence of billions of connected components that have provided both a spur and a blueprint for the development of smart products acting in specialised environments, uniquely identifiable, localisable, and capable of autonomy. Adopting the computational perspective of multi-agent systems (MAS) as a technological abstraction, married with the engineering perspective of cyber-physical systems (CPS), has provided fertile ground for designing, developing and deploying software applications in smart automated contexts such as manufacturing, power grids, avionics, healthcare and logistics, capable of being decentralised, intelligent, reconfigurable, modular, flexible, robust, adaptive and responsive. Current agent technologies are, however, ill suited for information-based environments, making it difficult to formalise and implement multi-agent systems based on inherently dynamical functional concepts such as trust and reliability, which present special challenges when scaling from small to large systems of agents. To overcome such challenges, it is useful to adopt a unified approach, which we term agoric computation, integrating logical, mathematical and programming concepts towards the development of agent-based solutions built on recursive, compositional principles, where smaller systems feed via directed information flows into larger hierarchical systems that define their global environment. Considering information as an integral part of the environment naturally defines a web of operations in which the components of a system are wired together and each set of inputs and outputs is allowed to carry some value. These operations are stateless abstractions and procedures that act on stateful cells accumulating partial information, and it is possible to compose such abstractions into higher-level ones, using a publish-and-subscribe interaction model that keeps track of update messages between abstractions and values in the data. In this thesis we review the logical and mathematical basis of such abstractions and take steps towards the software implementation of agoric modelling as a framework for simulation and verification of the reliability of increasingly complex systems. We report on experimental results from a few select applications, such as stigmergic interaction in mobile robotics, integrating raw data into agent perceptions, trust and trustworthiness in orchestrated open systems, computing the epistemic cost of trust when reasoning in networks of agents seeded with contradictory information, and trust models for distributed ledgers in the Internet of Things (IoT); and we provide a roadmap for future developments of our research.
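
    The cell-and-operation model described in the abstract lends itself to a compact sketch. The following C++ fragment is a minimal illustration under stated assumptions, not code from the thesis: the Cell and wire constructs are hypothetical names. Stateful cells accumulate values, stateless operations subscribe to their input cells, and a publish triggers update messages downstream, so abstractions compose into higher-level ones.

    #include <functional>
    #include <iostream>
    #include <optional>
    #include <vector>

    // A stateful cell that accumulates partial information and notifies
    // subscribed operations whenever new information is published into it.
    struct Cell {
        std::optional<double> value;
        std::vector<std::function<void()>> subscribers;  // downstream operations

        void publish(double v) {
            if (value && *value == v) return;            // no new information
            value = v;
            for (auto &op : subscribers) op();           // send update messages
        }
    };

    // Wire a stateless operation between cells: out = f(a, b). The operation
    // subscribes to its inputs and re-fires whenever either input updates.
    void wire(Cell &a, Cell &b, Cell &out,
              std::function<double(double, double)> f) {
        auto op = [&a, &b, &out, f] {
            if (a.value && b.value) out.publish(f(*a.value, *b.value));
        };
        a.subscribers.push_back(op);
        b.subscribers.push_back(op);
    }

    int main() {
        Cell x, y, sum;
        wire(x, y, sum, [](double p, double q) { return p + q; });
        x.publish(2.0);
        y.publish(3.0);                                  // triggers sum = 5
        std::cout << *sum.value << "\n";
    }

    Larger hierarchical systems then arise by wiring the output cells of one such web into the inputs of another, matching the recursive, compositional principle the abstract describes.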

    Automating C++ Execution Exploration to Solve the Out-of-thin-air Problem

    Modern computers are marvels of engineering: customisable reasoning engines which can be programmed to complete complex mathematical tasks at incredible speed. Decades of engineering have taken computers from room-sized machines to near-invisible devices in all aspects of life. With this engineering has come more complex and ornate design, a substantial leap forward being multiprocessing. Modern processors can execute threads of program logic in parallel, coordinating shared resources like memory and device access. Parallel computation leads to significant scaling of compute power, but carries a substantial complexity cost for both processor designers and programmers. Parallel access to shared memory requires coordination over which thread can use a particular fragment of memory at a given time. Simple mechanisms like locks and mutexes, which ensure that only one thread at a time can access memory, give an easy-to-use programming model, but they eschew the benefits of parallel computation. Instead, processors today have complex mechanisms to permit concurrent shared-memory access. These mechanisms prevent simple programmer reasoning and require complex formal descriptions to define: memory models. Early memory-model research focused on weak memory behaviours which are observable because of hardware design; over time it has become obvious that not only hardware but also compilers are capable of making new weak behaviours observable. Substantial and rapid success has been achieved formalising the behaviour of these machines: researchers refined new specifications for shared-memory concurrency and used mechanisation to automate validation of their models. As the models were refined and new behaviours of the hardware were discovered, researchers also began working with processor vendors, helping to inform design choices in new processor designs to keep the weak behaviours within some sensible bounds. Unfortunately, when reasoning about shared-memory accesses of highly optimised programming languages like C and C++, deep questions are still left open about how best to describe the behaviour of shared-memory accesses in the presence of dependency-removing compiler optimisations. Until very recently it has not been possible to properly specify the behaviours of these programs without forbidding optimisations which are used and observable, or allowing program behaviours which are nonsense and never observable. In this thesis I explore the development of memory models through the lens of tooling: taking at first an industrial approach, and then exploring memory models for highly optimised programming languages. I show that taming the complexity of these models with automated tools aids bug finding even where formal evaluation has not. Further, building tools creates a focus on the computational complexity of the memory model, which in turn can steer development of the model towards simpler designs. We will look at three case studies: the first is an industrial hardware model of NVIDIA GPUs which we extend to encompass more hardware features than before. This extension was validated using an automated testing process generating tests of finite size, and then verified against the original memory model in Coq. The second case study is an exploration of the first memory model for an optimised programming language which takes proper account of dependencies. We built a tool to automate execution of this model over a series of tests, and in the process discovered unexpected subtleties in the definitions, leading to refinement of the model. In the final case study, we develop a memory model that gives a direct definition for compiler-preserved dependencies. This model is the first that can be integrated with relative ease into the C/C++ programming language standard. We built this model alongside its own tooling, yielding a fast tool for giving determinations on a large number of litmus tests, a novelty for this sort of memory model. The model fits well with the existing C/C++ specifications, and we are working with the International Standards Organisation to understand how best to fit it into the standard.
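
    To make the out-of-thin-air problem concrete, the classic load-buffering litmus test is sketched below with C++11 relaxed atomics. This example is standard in the memory-model literature rather than taken from the thesis itself: each thread stores exactly the value it loaded, so the outcome r1 == r2 == 42 could only arise if 42 appeared out of thin air, yet the formal C++ model does not forbid it, because the model cannot distinguish real dependencies from ones a compiler might remove.

    #include <atomic>
    #include <thread>

    std::atomic<int> x{0}, y{0};
    int r1, r2;

    // Load-buffering (LB) shape: each store depends on its own thread's load.
    void thread1() {
        r1 = x.load(std::memory_order_relaxed);   // read x
        y.store(r1, std::memory_order_relaxed);   // write back what was read
    }
    void thread2() {
        r2 = y.load(std::memory_order_relaxed);   // read y
        x.store(r2, std::memory_order_relaxed);   // write back what was read
    }

    int main() {
        std::thread a(thread1), b(thread2);
        a.join(); b.join();
        // Real hardware and compilers only ever produce r1 == r2 == 0 here,
        // but the C++ axiomatic model also admits r1 == r2 == 42: the
        // out-of-thin-air executions that dependency-aware models rule out.
    }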

    Separation logic for high-level synthesis

    High-level synthesis (HLS) promises a significant shortening of the digital hardware design cycle by raising the abstraction level of the design entry to high-level languages such as C/C++. However, applications using dynamic, pointer-based data structures remain difficult to implement well, yet such constructs are widely used in software. Automated optimisations that leverage the memory bandwidth of dedicated hardware implementations by distributing the application data over separate on-chip memories and parallelise the implementation are often ineffective in the presence of dynamic data structures, due to the lack of an automated analysis that disambiguates pointer-based memory accesses. This thesis takes a step towards closing this gap. We explore recent advances in separation logic, a rigorous mathematical framework that enables formal reasoning about the memory accesses of heap-manipulating programs. We develop a static analysis that automatically splits heap-allocated data structures into provably disjoint regions. Our algorithm focuses on dynamic data structures accessed in loops and is accompanied by automated source-to-source transformations which enable loop parallelisation and physical memory partitioning by off-the-shelf HLS tools. We then extend the scope of our technique to pointer-based, memory-intensive implementations that require access to an off-chip memory. The extended HLS design aid generates parallel on-chip multi-cache architectures. It uses the disjointness property of memory accesses to support non-overlapping memory regions with private caches. It also identifies regions which are shared after parallelisation and which are supported by parallel caches with a coherency mechanism and synchronisation, resulting in automatically specialised memory systems. We show up to 15x acceleration from heap partitioning, parallelisation and the insertion of the custom cache system in demonstrably practical applications.
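
    A small example illustrates the kind of transformation such an analysis enables. The C-style fragment below is an illustrative sketch, not a benchmark from the thesis: if separation logic proves that the two lists occupy disjoint heap regions, an HLS tool can place them in separate on-chip memories and run the two traversals in parallel.

    #include <stddef.h>

    typedef struct node { int data; struct node *next; } node_t;

    /* Pointer-chasing traversal: without disjointness information, an HLS
       tool must conservatively serialise all accesses through one memory. */
    int sum(const node_t *p) {
        int s = 0;
        for (; p != NULL; p = p->next)
            s += p->data;
        return s;
    }

    /* If the analysis proves listA and listB are disjoint heap regions,
       the two traversals can execute in parallel on partitioned memories. */
    int total_sums(const node_t *listA, const node_t *listB) {
        return sum(listA) + sum(listB);
    }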

    Interactive gesture controller for a motorised wheelchair

    This paper explores in detail the design and testing of a gesture controller for a motorised wheelchair. For some, motorised wheelchairs are part of everyday life, and those who depend on them do so for a vast range of reasons; it is therefore reasonable to assume that modifying and improving upon the standard joystick controller can benefit a person’s way of life significantly. The design of the gesture controller is based heavily on the user’s needs, so as to benefit them and complement their strengths, giving them more control. For individuals with limited movement and dexterity, the user interface, system responsiveness, ergonomics and safety were all considered when engineering a system intended for people to use. A device capable of recognising a hand gesture was carefully chosen. The technology readily available for this application is relatively new and not extensively documented. The LEAP Motion sensor was chosen as the hand-gesture recognition device to act as the controller for a wheelchair. The device ships with hand-recognition software, but that software lacks the predictability and accuracy required for a motorised wheelchair controller. Through testing, the controller’s accuracy was improved. Although the controller is adequate for a laboratory environment, further testing and development will be required for this alternative wheelchair controller to evolve into a commercial product. The gesture-triggered controller was designed around the capabilities of the developer’s hand, but the method outlined in this paper is transferable to any individual’s hand size and, more importantly, the limitations of their hand gestures. The outcome of this thesis is a customised, non-invasive hand-gesture controller for a motorised wheelchair that can be fully tailored to a person’s capability without losing its responsiveness or accuracy.
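
    As a rough illustration of how such a controller might translate hand pose into motion, the C++ sketch below maps palm pitch and roll to differential wheel speeds, with a dead zone for safety near the neutral position. The function name, thresholds and gains are hypothetical; the paper's actual mapping, tuned to the LEAP sensor's output and the user's range of motion, may differ.

    #include <algorithm>
    #include <cmath>

    struct WheelSpeeds { double left, right; };   // normalised to [-1, 1]

    // Hypothetical mapping: palm pitch drives forward speed, palm roll
    // drives turning. The dead zone suppresses sensor jitter near neutral.
    WheelSpeeds gestureToSpeeds(double pitchRad, double rollRad) {
        auto deadZone = [](double v) { return std::fabs(v) < 0.1 ? 0.0 : v; };
        double forward = std::clamp(deadZone(pitchRad) * 2.0, -1.0, 1.0);
        double turn    = std::clamp(deadZone(rollRad)  * 2.0, -1.0, 1.0);
        return { std::clamp(forward + turn, -1.0, 1.0),    // left wheel
                 std::clamp(forward - turn, -1.0, 1.0) };  // right wheel
    }

    Tailoring the controller to an individual then amounts to adjusting the dead-zone width and gains to the limitations of that person's hand gestures.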

    A framework for AI-driven neurorehabilitation training: the profiling challenge

    Cognitive decline is a common sign that a person is ageing. Abnormal cases, however, can lead to dementia, affecting activities of daily living and independent functioning. Dementia is a leading cause of disability and death, making its prevention a global health priority. One way to address cognitive decline is to undergo cognitive rehabilitation, which aims to restore or mitigate the symptoms of a cognitive disability, increasing the patient’s quality of life. However, cognitive rehabilitation is tied to clinical environments and their logistics, leading to a suboptimal set of expensive tools that are hard to adapt to every patient’s needs. The BRaNT project aims to create a tool that mitigates this problem. NeuroAIreh@b is a rehabilitation tool developed within a framework that combines neuropsychological assessment, neurorehabilitation procedures, artificial intelligence and game design, composing a tool that is easy to set up in a clinical environment and can adapt to every patient’s needs. Among the challenges within NeuroAIreh@b, one focuses on representing a cognitive profile through the aggregation of multiple neuropsychological assessments. Testing this possibility requires patient data that is currently unavailable. In the first part of this master’s project, we therefore study the feasibility of aggregating neuropsychological assessments for the case of Alzheimer’s disease using the Alzheimer’s Disease Neuroimaging Initiative database. This database contains a vast collection of images and neuropsychological assessments that will serve as a baseline for NeuroAIreh@b when the time comes. In the second part of the project, we set up a computational system to run all the artificial intelligence models and simulations required by the BRaNT project. The system also hosts a database and a web server that serves all the pages required by the project.
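
    One way to picture the profiling challenge is as the combination of several test scores into a single comparable representation. The C++ sketch below is a hypothetical illustration only, not the project's method: each score is z-scored against assumed normative data, the results form a profile vector, and a simple composite index summarises it.

    #include <vector>

    // Hypothetical representation of one neuropsychological test result
    // together with its normative mean and standard deviation.
    struct Assessment { double score, normMean, normSd; };

    // Z-score each assessment against its norms so tests become comparable.
    std::vector<double> cognitiveProfile(const std::vector<Assessment> &tests) {
        std::vector<double> profile;
        for (const auto &t : tests)
            profile.push_back((t.score - t.normMean) / t.normSd);
        return profile;
    }

    // A crude composite index: the mean of the z-scores in the profile.
    double compositeIndex(const std::vector<double> &profile) {
        double sum = 0.0;
        for (double z : profile) sum += z;
        return profile.empty() ? 0.0 : sum / profile.size();
    }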