5 research outputs found

    Neuromorphic Computing Applications in Robotics

    Get PDF
    Deep learning achieves remarkable success by training on massive labeled datasets. However, this heavy reliance on labeled data makes deep learning impractical in edge-computing scenarios, which suffer from data scarcity. Rather than relying on labeled data, animals learn by interacting with their surroundings and memorizing the relationships between events and objects, a learning paradigm referred to as associative learning. Implementing associative learning successfully gives artificial systems an animal-like self-learning scheme that sidesteps these challenges of deep learning. Current state-of-the-art implementations of associative memory, however, are limited to small-scale, offline simulations. This work therefore implements associative memory on an Unmanned Ground Vehicle (UGV) with neuromorphic hardware, specifically Intel’s Loihi, in an online learning scenario. The system emulates classic associative learning in rats, with the UGV taking the place of the rat. Specifically, it reproduces fear conditioning with no pretraining procedure or labeled datasets. The UGV autonomously learns the cause-and-effect relationship between a light stimulus and a vibration stimulus and exhibits a movement response to demonstrate the memorization. Hebbian learning dynamics are used to update the synaptic weights during the associative learning process. The Intel Loihi chip is integrated into this online learning system to process visual signals with a specialized neural assembly. While processing, Loihi’s average power usage is 30 mW for computing logic and 29 mW for memory.
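
    A minimal rate-based sketch of the fear-conditioning setup described above, assuming a single plastic light-to-movement synapse alongside a fixed vibration-to-movement synapse; the learning rate, threshold, and trial count are illustrative and not taken from the paper's Loihi implementation.

```python
# Hypothetical rate-based sketch of fear conditioning: a light (conditioned
# stimulus) is repeatedly paired with a vibration (unconditioned stimulus)
# that innately drives a movement response. Hebbian updates strengthen the
# light->movement synapse until the light alone triggers the response.
# All parameter values are illustrative assumptions.

LEARNING_RATE = 0.2       # Hebbian step size (assumed)
RESPONSE_THRESHOLD = 0.5  # activation needed to trigger movement (assumed)
W_VIBRATION = 1.0         # innate (unconditioned) synapse, fixed
w_light = 0.0             # learned (conditioned) synapse, plastic


def response(light: float, vibration: float) -> float:
    """Activation of the movement neuron for one trial."""
    return w_light * light + W_VIBRATION * vibration


def hebbian_update(pre: float, post: float) -> None:
    """Strengthen the plastic synapse when pre- and post-activity co-occur."""
    global w_light
    w_light += LEARNING_RATE * pre * post


# Conditioning phase: light and vibration are presented together.
for trial in range(5):
    post = response(light=1.0, vibration=1.0)
    hebbian_update(pre=1.0, post=post)
    print(f"trial {trial}: w_light = {w_light:.2f}")

# Test phase: the light alone now drives the movement response.
moves = response(light=1.0, vibration=0.0) > RESPONSE_THRESHOLD
print("light alone triggers movement:", moves)
```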

    An overview of research on human-centered design in the development of artificial general intelligence

    Full text link
    Abstract: This article offers a comprehensive analysis of Artificial General Intelligence (AGI) development through a humanistic lens. Utilizing a wide array of academic and industry resources, it dissects the technological and ethical complexities inherent in AGI's evolution. Specifically, the paper underlines the societal and individual implications of AGI and argues for its alignment with human values and interests. Purpose: The study aims to explore the role of human-centered design in AGI's development and governance. Design/Methodology/Approach: Employing content analysis and literature review, the research evaluates major themes and concepts in human-centered design within AGI development. It also scrutinizes relevant academic studies, theories, and best practices. Findings: Human-centered design is imperative for ethical and sustainable AGI, emphasizing human dignity, privacy, and autonomy. Incorporating values like empathy, ethics, and social responsibility can significantly influence AGI's ethical deployment. Talent development is also critical, warranting interdisciplinary initiatives. Research Limitations/Implications: There is a need for additional empirical studies focusing on ethics, social responsibility, and talent cultivation within AGI development. Practical Implications: Implementing human-centered values in AGI development enables ethical and sustainable utilization, thus promoting human dignity, privacy, and autonomy. Moreover, a concerted effort across industry, academia, and research sectors can secure a robust talent pool, essential for AGI's stable advancement. Originality/Value: This paper contributes original research to the field by highlighting the necessity of a human-centered approach in AGI development, and discusses its practical ramifications. Comment: 20 pages.

    On Reward Structures of Markov Decision Processes

    Full text link
    A Markov decision process can be parameterized by a transition kernel and a reward function. Both play essential roles in the study of reinforcement learning, as evidenced by their presence in the Bellman equations. In our inquiry into the various kinds of "costs" associated with reinforcement learning, inspired by the demands of robotic applications, rewards are central to understanding the structure of a Markov decision process, and reward-centric notions can elucidate important concepts in reinforcement learning. Specifically, we study the sample complexity of policy evaluation and develop a novel estimator with an instance-specific error bound of $\tilde{O}(\sqrt{\tau_s/n})$ for estimating a single state value. Under the online regret minimization setting, we refine the transition-based MDP constant, the diameter, into a reward-based constant, the maximum expected hitting cost, and with it provide a theoretical explanation for how a well-known technique, potential-based reward shaping, can accelerate learning with expert knowledge. In an attempt to study safe reinforcement learning, we model hazardous environments with irrecoverability and propose a quantitative notion of safe learning via reset efficiency. In this setting, we modify a classic algorithm to account for resets, achieving promising preliminary numerical results. Lastly, for MDPs with multiple reward functions, we develop a planning algorithm that efficiently computes Pareto-optimal stochastic policies. Comment: This PhD thesis draws heavily from arXiv:1907.02114 and arXiv:2002.06299; minor edits.
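
    For background on the reward-shaping result mentioned above, the sketch below shows the textbook potential-based shaping transformation rather than the thesis's own code; the grid-world potential function at the end is a made-up example of injecting expert knowledge.

```python
# Potential-based reward shaping: the shaped reward
# r'(s, a, s') = r(s, a, s') + gamma * phi(s') - phi(s)
# leaves optimal policies unchanged while steering exploration with a
# potential function phi chosen from expert knowledge. General background
# sketch; names and the example potential are illustrative.

from typing import Callable, Hashable

State = Hashable


def shaped_reward(
    reward: float,                   # original reward r(s, a, s')
    state: State,                    # current state s
    next_state: State,               # successor state s'
    phi: Callable[[State], float],   # potential function over states
    gamma: float = 0.99,             # discount factor
) -> float:
    """Return the potential-based shaped reward r'(s, a, s')."""
    return reward + gamma * phi(next_state) - phi(state)


# Example: in a small grid world, a potential equal to the negative Manhattan
# distance to the goal rewards steps that move toward the goal (assumed layout).
goal = (3, 3)
phi = lambda s: -(abs(s[0] - goal[0]) + abs(s[1] - goal[1]))
print(shaped_reward(0.0, state=(0, 0), next_state=(0, 1), phi=phi))
```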

    The Integration of Neuromorphic Computing in Autonomous Robotic Systems

    Get PDF
    Deep Neural Networks (DNNs) have achieved strong performance on many cognitive tasks by training on large, labeled datasets. However, this approach struggles in settings with limited data and energy, such as planetary robotics or edge computing [1]. In contrast to this data-heavy approach, animals demonstrate an innate ability to learn by interacting with their environment and forming associative memories between events and entities, a process known as associative learning [2-4]. For instance, rats in a T-maze learn to associate different stimuli with outcomes through exploration, without needing labeled data [5]. This learning paradigm is crucial to overcoming the challenges of deep learning in environments where data and energy are limited. Taking inspiration from this natural learning process, recent advances [6, 7] have implemented associative learning in artificial systems. This work introduces a pioneering approach that integrates associative learning on an Unmanned Ground Vehicle (UGV) with neuromorphic hardware, specifically the XyloA2TestBoard from SynSense, for online learning scenarios. The system reproduces classic associative learning, such as the spatial and memory learning observed in rats in a T-maze, without any pretraining or labeled datasets. The UGV, akin to the rat in a T-maze, autonomously learns the cause-and-effect relationships between different stimuli, such as visual cues and vibration or audio and visual cues, and demonstrates learned responses through movement. The neuromorphic robot in this system, equipped with SynSense's neuromorphic chip, processes audio signals with a specialized Spiking Neural Network (SNN) and neural assembly, employing the Hebbian learning rule to adjust synaptic weights throughout the learning period. The XyloA2TestBoard consumes little power (on average, 17.96 µW for the Analog Front End (AFE) logic and 213.94 µW for the IO circuitry), indicating that neuromorphic chips are well suited to energy-constrained environments and offering a promising direction for advancing associative learning in artificial systems.
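
    As a rough illustration of the Hebbian rule mentioned above (a simplified stand-in, not the SNN actually deployed on the XyloA2TestBoard or SynSense's toolchain), the sketch below strengthens a plastic synapse from a cue channel to a response neuron whenever their spikes coincide; the spike trains and all parameters are invented for illustration.

```python
# Simplified spike-based Hebbian association between an audio-cue channel
# and a response neuron. The cue is paired with a vibration stimulus that
# innately drives the response; coincident pre/post spikes potentiate the
# plastic cue synapse. Parameters and spike statistics are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_steps = 200
eta = 0.05            # Hebbian learning rate (assumed)
threshold = 1.0       # response-neuron firing threshold (assumed)

cue = (rng.random(n_steps) < 0.2).astype(float)   # audio-cue spike train
vibration = cue.copy()                            # paired stimulus spikes
w_cue, w_vib = 0.0, 1.5                           # plastic / innate weights

for t in range(n_steps):
    drive = w_cue * cue[t] + w_vib * vibration[t]
    post_spike = float(drive >= threshold)
    # Hebbian update: potentiate when cue and response spike together.
    w_cue += eta * cue[t] * post_spike

print(f"learned cue weight after pairing: {w_cue:.2f}")
print("cue alone drives response:", w_cue >= threshold)
```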

    Neuromorphic Computing: A Path to Artificial Intelligence Through Emulating Human Brains

    No full text
    The human brain is the most powerful computational machine in the world and has inspired artificial intelligence for many years. One of the latest outcomes of reverse engineering the nervous system is deep learning, which emulates the multi-layer structure of biological neural networks. Deep learning has achieved unprecedented success across a wide range of cognitive tasks. Alongside these achievements, however, its shortcomings are becoming increasingly severe. These drawbacks include the demand for massive amounts of data, energy inefficiency, and lack of interpretability. One innate drawback of deep learning is that it implements artificial intelligence through algorithms and software alone, with no consideration of the limitations of the underlying computational resources. In contrast, neuromorphic computing, also known as brain-inspired computing, emulates biological neural networks through a software and hardware co-design approach and aims to break free from the von Neumann architecture and its digital representation of information. Thus, neuromorphic computing offers an alternative approach for next-generation AI that balances computational complexity, energy efficiency, biological plausibility, and intellectual competence. This chapter comprehensively introduces neuromorphic computing, from the fundamentals of biological neural systems and neuron models to hardware implementations. Lastly, critical challenges and opportunities in neuromorphic computing are discussed.
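
    Among the neuron models such a chapter covers, the leaky integrate-and-fire (LIF) neuron is the most common building block of neuromorphic hardware; a minimal discrete-time simulation, with illustrative parameter values, is sketched below.

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) neuron, the kind of
# model neuromorphic chips implement in silicon. Parameter values are
# illustrative, not tied to any particular hardware.

import numpy as np

tau_m = 20.0      # membrane time constant (ms)
dt = 1.0          # simulation time step (ms)
v_rest = 0.0      # resting potential
v_thresh = 1.0    # spike threshold
v_reset = 0.0     # reset potential after a spike


def lif_run(input_current: np.ndarray) -> np.ndarray:
    """Simulate the membrane potential and return a binary spike train."""
    v = v_rest
    spikes = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, then add the input drive.
        v += (-(v - v_rest) + i_in) * dt / tau_m
        if v >= v_thresh:        # threshold crossing emits a spike
            spikes[t] = 1.0
            v = v_reset          # hard reset after the spike
    return spikes


# A constant supra-threshold current produces a regular spike train.
current = np.full(200, 1.5)
print("spike count:", int(lif_run(current).sum()))
```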