171 research outputs found

    Framework and Implications of Virtual Neurorobotics

    Get PDF
    Despite decades of societal investment in artificial learning systems, truly "intelligent" systems have yet to be realized. These traditional models are based on input-output pattern optimization and/or cognitive production rule modeling. One response has been social robotics, using the interaction of human and robot to capture important cognitive dynamics such as cooperation and emotion; to date, these systems still incorporate traditional learning algorithms. More recently, investigators are focusing on the core assumptions of the brain "algorithm" itself, trying to replicate uniquely "neuromorphic" dynamics such as action potential spiking and synaptic learning. Only now are large-scale neuromorphic models becoming feasible, due to the availability of powerful supercomputers and an expanding supply of parameters derived from research into the brain's interdependent electrophysiological, metabolomic, and genomic networks. Personal computer technology has also led to the acceptance of computer-generated humanoid images, or "avatars", to represent intelligent actors in virtual realities. In a recent paper, we proposed a method of virtual neurorobotics (VNR) in which the approaches above (social-emotional robotics, neuromorphic brain architectures, and virtual reality projection) are hybridized to rapidly forward-engineer and develop increasingly complex, intrinsically intelligent systems. In this paper, we synthesize our research and related work in the field and provide a framework for VNR, with wider implications for research and practical applications.

    Virtual Neurorobotics (VNR) to Accelerate Development of Plausible Neuromorphic Brain Architectures

    Get PDF
    Traditional research in artificial intelligence and machine learning has viewed the brain as a specially adapted information-processing system. More recently the field of social robotics has been advanced to capture the important dynamics of human cognition and interaction. An overarching societal goal of this research is to incorporate the resultant knowledge about intelligence into technology for prosthetic, assistive, security, and decision support applications. However, despite many decades of investment in learning and classification systems, this paradigm has yet to yield truly "intelligent" systems. For this reason, many investigators are now attempting to incorporate more realistic neuromorphic properties into machine learning systems, encouraged by over two decades of neuroscience research that has provided parameters that characterize the brain's interdependent genomic, proteomic, metabolomic, anatomic, and electrophysiological networks. Given the complexity of neural systems, developing tenable models to capture the essence of natural intelligence for real-time application requires that we discriminate features underlying information processing and intrinsic motivation from those reflecting biological constraints (such as maintaining structural integrity and transporting metabolic products). We propose herein a conceptual framework and an iterative method of virtual neurorobotics (VNR) intended to rapidly forward-engineer and test progressively more complex putative neuromorphic brain prototypes for their ability to support intrinsically intelligent, intentional interaction with humans. The VNR system is based on the viewpoint that a truly intelligent system must be driven by emotion rather than programmed tasking, incorporating intrinsic motivation and intentionality. We report pilot results of a closed-loop, real-time interactive VNR system with a spiking neural brain, and provide a video demonstration as online supplemental material.
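The closed-loop system described above couples a spiking neural brain to real-time interaction. As a minimal illustrative sketch (not the authors' implementation; all parameters and names are assumptions for demonstration), the loop below drives a leaky integrate-and-fire (LIF) population with a noisy sensory signal and low-pass filters its population spike rate into a motor/affective readout that feeds back as input:

```python
import numpy as np

def lif_step(v, i_in, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire population.
    Returns updated membrane potentials and a boolean spike vector."""
    v = v + dt * (-v / tau + i_in)
    spikes = v >= v_thresh
    v[spikes] = v_reset
    return v, spikes

def closed_loop(steps=500, n=100, seed=0):
    """Toy sensorimotor loop: a noisy stimulus excites the population,
    and the population's spike rate is fed back as a motor command."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n)
    motor = 0.0
    rates = []
    for _ in range(steps):
        sensory = 60.0 + 10.0 * rng.standard_normal(n)  # noisy stimulus
        v, spikes = lif_step(v, sensory + 5.0 * motor)
        rate = spikes.mean()          # instantaneous population spike rate
        motor = 0.9 * motor + rate    # low-pass "motor/affective" readout
        rates.append(rate)
    return float(np.mean(rates))
```

In a full VNR system this readout would drive an avatar's behavior toward the human participant; here it simply closes the loop numerically.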

    Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform

    Get PDF
    Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because these brain models are so complex that, at the current stage, they cannot meet real-time constraints, it is not possible to embed them in a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the level of programming skill required, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). In its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models.
We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases make it possible to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 604102 (Human Brain Project) and from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 720270 (HBP SGA1).
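The Braitenberg task mentioned above follows a classic scheme: two light sensors cross-coupled to two wheel motors, so the vehicle steers toward a stimulus. A minimal differential-drive sketch of such a controller (function names, gains, and geometry are illustrative assumptions, not the platform's API):

```python
import math

def light_intensity(sx, sy, light):
    """Intensity at sensor (sx, sy), falling off with squared distance."""
    d2 = (light[0] - sx) ** 2 + (light[1] - sy) ** 2
    return 1.0 / (1.0 + d2)

def braitenberg_step(pose, light, base=0.1, gain=2.0,
                     offset=0.2, axle=0.2, dt=0.1):
    """One control step of a Braitenberg 2b vehicle: the left sensor
    drives the right wheel and vice versa, turning it toward the light."""
    x, y, th = pose
    # sensors mounted left and right of the heading direction
    lsx = x + offset * math.cos(th + 0.5)
    lsy = y + offset * math.sin(th + 0.5)
    rsx = x + offset * math.cos(th - 0.5)
    rsy = y + offset * math.sin(th - 0.5)
    # cross-coupling: left sensor -> right wheel, right sensor -> left wheel
    v_right = base + gain * light_intensity(lsx, lsy, light)
    v_left = base + gain * light_intensity(rsx, rsy, light)
    # differential-drive kinematics
    v = 0.5 * (v_left + v_right)
    w = (v_right - v_left) / axle
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)
```

On the Neurorobotics Platform the sensor-to-motor coupling would be realized by a spiking network and a brain-body connector rather than by this direct arithmetic.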

    The coming decade of digital brain research - A vision for neuroscience at the intersection of technology and computing

    Get PDF
    Brain research has in recent years indisputably entered a new epoch, driven by substantial methodological advances and digitally enabled data integration and modeling at multiple scales, from molecules to the whole system. Major advances are emerging at the intersection of neuroscience with technology and computing. This new science of the brain integrates high-quality basic research, systematic data integration across multiple scales, a new culture of large-scale collaboration, and translation into applications. A systematic approach, as pioneered in Europe's Human Brain Project (HBP), will be essential in meeting the pressing medical and technological challenges of the coming decade. The aims of this paper are: to develop a concept for the coming decade of digital brain research; to discuss it with the research community at large, identifying points of convergence and common goals; to provide a scientific framework for the current and future development of EBRAINS; to inform and engage stakeholders, funding organizations, and research institutions regarding future digital brain research; and to identify and address key ethical and societal issues. While we do not claim that there is a 'one size fits all' approach to addressing these aspects, we are convinced that discussions around the theme of digital brain research will help drive progress in the broader field of neuroscience.

    Artificial Intelligence Librarian as Promotion of IAIN Lhokseumawe Library in the Revolutionary Era 4.0

    Get PDF
    Era 4.0 is a revolution in the industrial world, in an era referred to as the phenomenon of disruptive innovation. In the Industry 4.0 era, the emphasis lies on the digital economy, artificial intelligence, big data, robotics, and automation. The impact of the Industry 4.0 era is felt in many fields of work, and librarians are no exception. A librarian is someone with librarianship skills and expertise; librarians must prepare themselves to face this era by equipping themselves with information technology and library analysis capabilities so that users can be served effectively. An AI (artificial intelligence) librarian can then be applied to guide users in using integrated library information. The presence of an AI (artificial intelligence) librarian at IAIN Lhokseumawe is evidence that the library has entered the era of disruption 4.0 and will serve as a promotion strategy for the college library.

    Progress and Prospects of the Human-Robot Collaboration

    Get PDF
    Recent technological advances in the hardware design of robotic platforms have enabled the implementation of various control modalities for improved interactions with humans and unstructured environments. An important application area for the integration of robots with such advanced interaction capabilities is human-robot collaboration. This area carries high socio-economic impact and maintains the sense of purpose of the people involved, as the robots do not completely replace humans in the work process. The research community's recent surge of interest in this area has been devoted to the implementation of various methodologies to achieve intuitive and seamless human-robot-environment interactions by incorporating the collaborative partners' superior capabilities, e.g. the human's cognitive abilities and the robot's physical power generation capacity. The main purpose of this paper is to review the state of the art on intermediate (bi-directional) human-robot interfaces, robot control modalities, system stability, benchmarking, and relevant use cases, and to extend views on the required future developments in the realm of human-robot collaboration.

    RealTHASCā€”a cyber-physical XR testbed for AI-supported real-time human autonomous systems collaborations

    Get PDF
    Today's research on human-robot teaming requires the ability to test artificial intelligence (AI) algorithms for perception and decision-making in complex real-world environments. Field experiments, also referred to as experiments "in the wild," do not provide the level of detailed ground truth necessary for thorough performance comparisons and validation. Experiments on pre-recorded real-world data sets are also significantly limited in their usefulness because they do not allow researchers to test the effectiveness of active robot perception and control or decision strategies in the loop. Additionally, research on large human-robot teams requires tests and experiments that are too costly even for industry and may result in considerable time losses when experiments go awry. The novel Real-Time Human Autonomous Systems Collaborations (RealTHASC) facility at Cornell University interfaces real and virtual robots and humans with photorealistic simulated environments by implementing new concepts for the seamless integration of wearable sensors, motion capture, physics-based simulations, robot hardware and virtual reality (VR). The result is an extended reality (XR) testbed by which real robots and humans in the laboratory are able to experience virtual worlds, inclusive of virtual agents, through real-time visual feedback and interaction. VR body tracking by DeepMotion is employed in conjunction with the OptiTrack motion capture system to transfer every human subject and robot in the real physical laboratory space into a synthetic virtual environment, thereby constructing corresponding human/robot avatars that not only mimic the behaviors of the real agents but also experience the virtual world through virtual sensors and transmit the sensor data back to the real human/robot agent, all in real time.
New cross-domain synthetic environments are created in RealTHASC using Unreal Engine™, bridging the simulation-to-reality gap and allowing for the inclusion of underwater/ground/aerial autonomous vehicles, each equipped with a multi-modal sensor suite. The experimental capabilities offered by RealTHASC are demonstrated through three case studies showcasing mixed real/virtual human/robot interactions in diverse domains, leveraging and complementing the benefits of experimentation in simulation and in the real world.
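A core operation in such an XR testbed is mapping poses tracked in the laboratory's motion-capture frame into the synthetic environment's frame via a fixed calibration transform. A minimal homogeneous-coordinates sketch (the helper names and calibration values are illustrative assumptions, not RealTHASC's interface):

```python
import numpy as np

def make_transform(yaw, tx, ty, tz):
    """Homogeneous 4x4 transform: rotation about z by `yaw`, then translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = [tx, ty, tz]
    return T

def lab_to_virtual(p_lab, calib):
    """Map a 3D point tracked in the motion-capture (lab) frame into the
    virtual-world frame using a fixed lab-to-virtual calibration transform."""
    p = np.append(np.asarray(p_lab, dtype=float), 1.0)  # homogeneous coords
    return (calib @ p)[:3]
```

In practice the calibration would be estimated once from known markers, and the same transform applied per frame to every tracked human and robot rigid body.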

    Procedural Aesthetics and the Emergence of NeuroArt

    Get PDF
    Although Neuroart is related to the concept of Neuroaesthetics (S. Zeki), which is based on a scientific approach to the aesthetic perception of art, and to the concepts of Neuroplastic arts (G. Novakovic) and Neuromedia (J. Scott), which endorse collaboration between artists and neuroscientists, it is at the same time distinct from them. We use the term literally, to refer to artworks that are based on neural/brain-wave signals and on the use of brain-computer interfaces (BCI), or more specifically EEG headsets, in the production and display of artworks. We focus on EEG-based sound art, visual art, interactive installations, and performance art, and we identify Neuroart as a novel, emerging form or sub-genre of new media art. However, we do not limit Neuroart to human-generated artworks only. Given that Neuroart involves the detection or inspection of neural electric signals, we claim that the electric nature of those signals also applies to processes inherent in machine processing or neural computing, such as Google Deep Dream and other generic platforms that lay the foundations for computer- and/or AI-generated art forms including database art, software art, visualization art, and sonification art, as well as artworks that result in material artifacts presented in a traditional exhibition format. We additionally claim that regardless of whether the artworks of Neuroart are driven by a human or a machine, they can have the same aesthetic discursive value, but within the context of a newly defined discipline of aesthetics: Procedural Aesthetics. Procedural Aesthetics (or the aesthetics of signal) can be understood as the discursiveness of the very process of signal (intensity) emission before those signals enter the sphere of conscious cognition. It is a pre-receptive and pre-semantic phenomenon.
It deals with processes otherwise not available to the human perceptive apparatus, trying to reveal and unmask them by offering them up for interpretation as cultural artifacts. To do this, it relies heavily on technology and technical equipment that give us access to these 'invisible' processes through visualization, sonification, textualization, mapping, and other forms of interpretable representation displayed as artworks.
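The EEG-based artworks discussed above share a common signal pathway: extract a band-limited feature from the raw neural signal, then map it onto a perceptible parameter such as pitch. A minimal sonification sketch (band edges, reference power, and pitch range are assumptions for illustration):

```python
import numpy as np

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Mean spectral power of `signal` in the [lo, hi] Hz band (here: alpha)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

def power_to_pitch(p, ref_power=1.0, base_hz=220.0, span_octaves=2.0):
    """Map relative band power onto a pitch: log-compress the power ratio,
    then scale it into a `span_octaves`-wide range above `base_hz`."""
    x = np.clip(np.log10(1.0 + p / ref_power), 0.0, 1.0)
    return base_hz * 2.0 ** (span_octaves * x)
```

A live installation would apply this mapping to a sliding window of headset samples, so that changes in the performer's alpha activity continuously modulate the generated sound.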