Brain-Switches for Asynchronous Brain−Computer Interfaces: A Systematic Review
A brain–computer interface (BCI) has been extensively studied to develop a novel communication system for disabled people using their brain activities. An asynchronous BCI system is more realistic and practical than a synchronous one, in that BCI commands can be generated whenever the user wants. However, the relatively low performance of asynchronous BCI systems is problematic, because redundant BCI commands are required to correct false-positive operations. To significantly reduce the number of false-positive operations, a two-step approach has been proposed: a brain-switch first determines whether the user intends to operate the BCI before the main asynchronous BCI system runs. This study presents a systematic review of state-of-the-art brain-switch techniques and future research directions. To this end, we reviewed brain-switch research articles published from 2000 to 2019 in terms of their (a) neuroimaging modality, (b) paradigm, (c) operation algorithm, and (d) performance.
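The two-step approach described above can be sketched as a gating pipeline: a brain-switch decides whether the user is in an intentional-control state, and only then is the main classifier allowed to emit a command. This is a minimal illustrative sketch, not the reviewed systems' implementation; the band-power feature, the threshold, and the two-class decoder are all hypothetical placeholders.

```python
def brain_switch(power, threshold=0.6):
    """Step 1: detect intentional control. `power` is a hypothetical
    band-power feature of the ongoing signal; values above `threshold`
    are treated as 'the user wants to issue a command'."""
    return power > threshold

def decode_command(features):
    """Step 2: a stand-in two-class decoder, run only when the
    brain-switch is on, so idle-state noise cannot trigger commands."""
    return "left" if features[0] > features[1] else "right"

def asynchronous_bci(stream):
    """Process (power, features) segments; idle segments are gated out,
    which is how the brain-switch reduces false-positive operations."""
    commands = []
    for power, features in stream:
        if brain_switch(power):  # gate: skip segments with low intent evidence
            commands.append(decode_command(features))
    return commands

# Two low-power (idle) segments produce no command; only the third
# segment passes the gate and is decoded.
stream = [(0.2, (0.9, 0.1)), (0.4, (0.1, 0.9)), (0.8, (0.9, 0.1))]
print(asynchronous_bci(stream))  # → ['left']
```

The gate trades sensitivity for specificity: raising the threshold suppresses more false positives at the cost of occasionally ignoring genuine commands, which is the performance trade-off the reviewed studies evaluate.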
Autonomously managed electrical power systems
The electric power systems for future spacecraft such as the Space Station will necessarily be more sophisticated and will exhibit more nearly autonomous operation than those of earlier spacecraft. These new power systems will be more reliable and flexible than their predecessors, offering greater utility to the users. Automation approaches implemented on various power system breadboards are investigated. These breadboards include the Hubble Space Telescope power system test bed, the Common Module Power Management and Distribution system breadboard, the Autonomously Managed Power System (AMPS) breadboard, and the 20 kilohertz power system breadboard. Particular attention is given to the AMPS breadboard. Future plans for these breadboards, including the employment of artificial intelligence techniques, are addressed.
AER Building Blocks for Multi-Layer Multi-Chip Neuromorphic Vision Systems
A 5-layer neuromorphic vision processor whose components communicate spike events asynchronously using the address-event representation (AER) is demonstrated. The system includes a retina chip, two convolution chips, a 2D winner-take-all chip, a delay line chip, a learning classifier chip, and a set of PCBs for computer interfacing and address space remappings. The components use a mixture of analog and digital computation and will learn to classify trajectories of a moving object. A complete experimental setup and measurement results are shown.
Unión Europea IST-2001-34124 (CAVIAR); Ministerio de Ciencia y Tecnología TIC-2003-08164-C0
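In AER, each spike is transmitted as the address of the neuron that fired it, in firing order, and interfacing hardware can reroute events by rewriting addresses. The sketch below illustrates that idea only; the flat row-major addressing, the 64-pixel width, and the lookup-table remapping are illustrative assumptions, not the CAVIAR chips' actual encoding.

```python
def encode_events(spikes, width=64):
    """Encode (x, y) pixel spikes as flat AER addresses: each spike is
    sent on the bus as the address of the neuron that fired."""
    return [y * width + x for (x, y) in spikes]

def remap(addresses, table):
    """Address-space remapping, conceptually what the interfacing PCBs
    do: a lookup table routes events from one chip's address space to
    another's (identity for addresses not in the table)."""
    return [table.get(a, a) for a in addresses]

def decode_events(addresses, width=64):
    """Recover (x, y) coordinates on the receiving chip."""
    return [(a % width, a // width) for a in addresses]

spikes = [(3, 1), (10, 2)]          # two spikes, in firing order
bus = encode_events(spikes)         # [67, 138] on the shared bus
received = decode_events(remap(bus, {67: 0}))
print(received)  # → [(0, 0), (10, 2)]
```

Because only addresses travel on the bus, timing is implicit in arrival order, which is what lets many chips share one asynchronous channel.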
A versatile all-channel stimulator for electrode arrays, with real-time control
Over the last few decades, technology to record through ever increasing numbers of electrodes has become available to electrophysiologists. For the study of distributed neural processing, however, the ability to stimulate through equal numbers of electrodes, and thus to attain bidirectional communication, is of paramount importance. Here, we present a stimulation system for multi-electrode arrays which interfaces with existing commercial recording hardware, and allows stimulation through any electrode in the array, with rapid switching between channels. The system is controlled through real-time Linux, making it extremely flexible: stimulation sequences can be constructed on-the-fly, and arbitrary stimulus waveforms can be used if desired. A key feature of this design is that it can be readily and inexpensively reproduced in other labs, since it interfaces to standard PC parallel ports and uses only off-the-shelf components. Moreover, adaptation for use with in vivo multi-electrode probes would be straightforward. In combination with our freely available data-acquisition software, MeaBench, this system can provide feedback stimulation in response to recorded action potentials within 15 ms.
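The on-the-fly sequence construction and arbitrary waveforms described above can be sketched as building per-channel stimulus events and ordering them for a real-time loop. This is a hypothetical illustration, not the paper's software: the charge-balanced biphasic pulse shape is a common convention in electrode stimulation, and the function names and event format are invented here.

```python
def biphasic_pulse(amplitude, phase_samples):
    """A charge-balanced biphasic waveform: a negative phase followed by
    an equal positive phase, so no net charge is injected. (Illustrative
    shape; parameters are not taken from the paper.)"""
    return [-amplitude] * phase_samples + [amplitude] * phase_samples

def schedule(events):
    """On-the-fly sequence construction: sort (time_ms, channel, waveform)
    events by time so the output loop can switch rapidly between
    channels as each event comes due."""
    return sorted(events, key=lambda e: e[0])

pulse = biphasic_pulse(0.5, 2)      # [-0.5, -0.5, 0.5, 0.5]
seq = schedule([(10.0, 7, pulse), (2.5, 3, pulse)])
print([(t, ch) for (t, ch, _) in seq])  # → [(2.5, 3), (10.0, 7)]
```

A real implementation would hand this schedule to a real-time task so that closed-loop latencies (here, the reported 15 ms from spike to stimulus) stay bounded.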
Prototyping a process-centered environment
This paper describes an experimental system developed and used as a vehicle for prototyping the Arcadia-1 software development environment. Prototyping is viewed as a knowledge acquisition process and is used to reduce risks in software development by gaining rapid feedback about the suitability of a production system before the system is completed. Prototyping a software development environment is particularly important due to the lack of experience with them. There is an acute need to acquire knowledge about user interaction requirements for software environments. These needs are especially important for the Arcadia project, as it is one of the first attempts to construct a process-centered environment. Our prototyping effort addresses questions about effective interaction with a process-centered environment by simulating how Arcadia-1 would interact with users in a representative range of usage scenarios. We built a prototyping system, called PRODUCER, and used it to generate a variety of prototypes simulating user interactions with Arcadia-1 process programs. Experience with PRODUCER indicates that our approach is effective at risk reduction. The prototypes greatly improved communication with our customer. They confirmed some of our design decisions but also redirected our research efforts as a result of unexpected insight. We also found that prototyping usage scenarios provides conceptual guides and design information for process programmers. Most of the benefits of our prototyping effort derive from developing and interacting with usage scenarios, so our approach is generalizable to other prototyping systems. This paper reports on our prototyping approach and our experience in prototyping a process-centered environment.
Explorations in engagement for humans and robots
This paper explores the concept of engagement, the process by which individuals in an interaction start, maintain, and end their perceived connection to one another. The paper reports on one aspect of engagement among human interactors: the effect of tracking faces during an interaction. It also describes the architecture of a robot that can participate in conversational, collaborative interactions with engagement gestures. Finally, the paper reports on findings of experiments with human participants who interacted with a robot when it either performed or did not perform engagement gestures. Results of the human-robot studies indicate that people become engaged with robots: they direct their attention to the robot more often in interactions where engagement gestures are present, and they find interactions more appropriate when engagement gestures are present than when they are not.