
    Sustained Negative BOLD Response in Human fMRI Finger Tapping Task

    In this work, we investigated the sustained negative blood oxygen level-dependent (BOLD) response (sNBR) using functional magnetic resonance imaging during a finger tapping task. We observed that the sNBR in this task was more extensive than previously reported. The cortical regions involved in the sNBR fall into three groups: frontal, somatosensory and occipital. By comparing the spatial structure, area, amplitude and dynamics of the sNBR with those of its positive BOLD response (PBR) counterpart, we made the following observations. First, among the three groups, the somatosensory group contained the greatest number of activated voxels and the fewest deactivated voxels, and the amplitude of its sNBR was the smallest of the three groups. Second, the onset and peak times of the sNBR are both later than those of the PBR, whereas the falling edge of the sNBR is shorter than that of the PBR. Third, the long distance between most sNBR foci and their corresponding PBR foci makes it unlikely that they share the same supplying artery. Fourth, the couplings between the sNBR and its PBR counterpart differ across regions and thus should be investigated separately. These findings imply that the origin of most sNBR foci in the finger tapping task is much more likely to be suppression of neuronal activity than “blood steal”.

    Spatio-Temporal Calibration for Omni-Directional Vehicle-Mounted Event Cameras

    We present a solution to the problem of spatio-temporal calibration for event cameras mounted on an omni-directional vehicle. Unlike traditional methods, which typically determine the camera's pose with respect to the vehicle's body frame by aligning trajectories, our approach leverages the kinematic correlation between two sets of linear velocity estimates obtained from event data and wheel odometers, respectively. The overall calibration task consists of estimating the temporal offset between the two heterogeneous sensors and then recovering the extrinsic rotation that defines the linear relationship between the two sets of velocity estimates. The first sub-problem is formulated as an optimization that seeks the temporal offset maximizing a correlation measure invariant to arbitrary linear transformations. Once the temporal offset is compensated, the extrinsic rotation is obtained with an iterative closed-form solver that incrementally registers associated linear velocity estimates. The proposed algorithm proves effective on both synthetic and real data, outperforming traditional trajectory-alignment methods.
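The temporal-offset sub-problem described above admits a compact illustration. Below is a minimal sketch (not the paper's implementation) of offset estimation by correlation maximization on synthetic data: speed magnitudes are used as a simple quantity invariant to the unknown rotation between the two sensor frames (the paper's actual invariant measure may differ), and the integer sample shift maximizing their Pearson correlation is returned.

```python
import numpy as np

def estimate_offset(speeds_a, speeds_b, max_shift):
    """Grid-search the integer shift of stream b relative to stream a
    that maximizes the Pearson correlation of the overlapping samples."""
    best_shift, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = speeds_a[s:], speeds_b[:len(speeds_b) - s]
        else:
            a, b = speeds_a[:len(speeds_a) + s], speeds_b[-s:]
        n = min(len(a), len(b))
        c = np.corrcoef(a[:n], b[:n])[0, 1]
        if c > best_corr:
            best_corr, best_shift = c, s
    return best_shift

# Synthetic check: the "odometer" stream b lags the "camera" stream a
# by 5 samples, so the recovered shift under this convention is -5.
t = np.linspace(0, 10, 500)
speed = 1.0 + 0.5 * np.sin(2 * np.pi * 0.4 * t)
a = speed[5:]   # speed magnitudes from event-based velocity estimates
b = speed[:-5]  # speed magnitudes from wheel odometry, delayed
recovered = estimate_offset(a, b, 20)
```

In a real pipeline the two streams would first be resampled to a common rate, and the integer shift refined to sub-sample precision; both steps are omitted here.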

    The Potential of Using Brain Images for Authentication

    Biometric recognition (biometrics) refers to the automated recognition of individuals based on their biological or behavioral traits; examples include fingerprints, palmprints, the iris, and the face. The brain is the most important and complex organ in the human body. Can it serve as a biometric trait? In this study, we analyze the uniqueness of the brain and attempt to use it for identity authentication. The proposed brain-based verification system operates in two stages: gray matter extraction and gray matter matching. A modified brain segmentation algorithm extracts gray matter from an input brain image, and an alignment-based matching algorithm then matches brains. Experimental results on two data sets show that the proposed system meets the high accuracy requirement of identity authentication. Although brain image acquisition is still time consuming and expensive, brain images are highly unique and, from a pattern recognition perspective, show real potential for authentication.
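As a toy illustration of the verification stage only, the sketch below scores two pre-aligned binary gray-matter masks with the Dice overlap coefficient and accepts the identity claim above a threshold. This is a stand-in for the paper's alignment-based matching algorithm, whose details are not given here; the masks are random stand-ins for segmented gray matter.

```python
import numpy as np

def gray_matter_score(mask_a, mask_b):
    """Dice overlap of two pre-aligned binary gray-matter masks
    (1.0 = identical, ~0.5 for unrelated masks of similar density)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Same subject (identical mask) versus a different, unrelated mask:
rng = np.random.default_rng(1)
enrolled = rng.random((64, 64, 64)) > 0.5
probe_same = enrolled.copy()
probe_other = rng.random((64, 64, 64)) > 0.5
score_same = gray_matter_score(enrolled, probe_same)
score_other = gray_matter_score(enrolled, probe_other)
accept = score_same >= 0.8  # hypothetical decision threshold
```

A genuine system would precede this with registration of the probe image to the enrolled one; the score here assumes that alignment has already been done.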

    Growing Locally Linear Embedding for Manifold Learning

    Locally linear embedding is an effective nonlinear dimensionality reduction method for exploring the intrinsic characteristics of high-dimensional data. This paper proposes a new manifold learning method, termed growing locally linear embedding (GLLE), which combines locally linear embedding with growing neural gas. GLLE overcomes the major limitations of the original locally linear embedding: intrinsic dimensionality estimation, selection of the number of nearest neighbors, and computational complexity. By embedding the topology learning mechanism of growing neural gas, the proposed GLLE algorithm preserves the global topological structure and geometric characteristics of the input patterns, which makes the projections more stable. Theoretical analysis and experimental simulations show that GLLE yields a faster learning procedure and a lower reconstruction error, which widens the applicability of manifold learning.
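For context, the classic locally linear embedding that GLLE builds on can be sketched in a few lines: find each point's neighbors, solve for reconstruction weights that sum to one, then take bottom eigenvectors of the resulting quadratic form. The growing-neural-gas neighborhood mechanism that distinguishes GLLE is not reproduced here.

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal classic locally linear embedding."""
    n = X.shape[0]
    # Step 1: k nearest neighbors (index 0 is the point itself, skipped).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    # Step 2: weights that best reconstruct each point from its
    # neighbors, constrained to sum to one (regularized local Gram).
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[knn[i]] - X[i]
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(n_neighbors)
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, knn[i]] = w / w.sum()
    # Step 3: bottom eigenvectors of M = (I - W)^T (I - W),
    # skipping the constant eigenvector.
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

# Usage: embed a noisy 1-D arc living in 3-D down to one dimension.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3, 200))
X = np.c_[np.cos(t), np.sin(t), 0.01 * rng.standard_normal(200)]
Y = lle(X, n_neighbors=8, n_components=1)
```

The pairwise-distance step above is O(n^2) in memory, which is one of the complexity limitations GLLE targets.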

    Angle Measurement of Objects outside the Linear Field of View of a Strapdown Semi-Active Laser Seeker

    Accurately measuring the angles of objects outside the linear field of view (FOV) is a challenging task for a strapdown semi-active laser seeker and is not yet well resolved. Since a strapdown semi-active laser seeker is carried on a missile equipped with GPS and an inertial navigation system (INS), in this work we present an angle measurement method based on fusing the seeker's data with the GPS and INS data. When an object is in the nonlinear FOV or outside the FOV, by solving the problems of spatial and temporal consistency, the pitch and yaw angles of the object can be calculated by fusing the last valid angles measured by the seeker with the corresponding GPS and INS data. Numerical simulation results demonstrate the correctness and effectiveness of the proposed method.
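The geometric core of such a fusion can be sketched as follows: once the target position (reconstructed from the last valid seeker angles plus GPS data) and the INS attitude are expressed in a consistent frame, the out-of-FOV pitch and yaw follow from the line-of-sight vector rotated into the body frame. The frame conventions below (x forward, y right, z down) are assumptions for illustration, not the paper's.

```python
import numpy as np

def los_angles(p_missile, p_target, R_nav_to_body):
    """Pitch and yaw of the target line of sight in the body frame,
    given both positions in the navigation frame and the INS attitude
    as a navigation-to-body rotation matrix."""
    los_nav = np.asarray(p_target, float) - np.asarray(p_missile, float)
    x, y, z = R_nav_to_body @ los_nav  # body frame: x fwd, y right, z down
    yaw = np.arctan2(y, x)
    pitch = np.arctan2(-z, np.hypot(x, y))
    return pitch, yaw

# Level attitude, target 100 m ahead and 100 m below:
# yaw is 0 and pitch is -45 degrees in this convention.
pitch, yaw = los_angles([0.0, 0.0, 0.0], [100.0, 0.0, 100.0], np.eye(3))
```

The temporal-consistency step the abstract mentions (interpolating GPS/INS samples to the seeker's last valid measurement time) is omitted here.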

    Weighted Policy Constraints for Offline Reinforcement Learning

    Offline reinforcement learning (RL) aims to learn a policy from a passively collected offline dataset. Directly applying existing RL methods to the static dataset induces distribution shift, causing these unconstrained methods to fail. To cope with distribution shift, a common practice in offline RL is to constrain the learned policy, explicitly or implicitly, to stay close to the behavior policy. However, the available dataset usually contains sub-optimal or inferior actions, and constraining the policy near all of them inevitably makes the policy learn inferior behaviors, limiting the algorithm's performance. Based on this observation, we propose a weighted policy constraints (wPC) method that constrains the learned policy only toward desirable behaviors, leaving room for policy improvement elsewhere. Our algorithm outperforms existing state-of-the-art offline RL algorithms on the D4RL offline gym datasets. Moreover, the proposed algorithm is simple to implement with few hyper-parameters, making wPC a robust offline RL method with low computational complexity.
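One plausible form of such a weighted constraint (an assumption for illustration, not necessarily the paper's exact weighting) is an advantage-weighted behavior cloning term that zeroes out sub-optimal dataset actions, so only desirable actions pull the learned policy toward the behavior policy:

```python
import numpy as np

def weighted_bc_loss(logp_actions, advantages, beta=1.0):
    """Hypothetical weighted policy-constraint term: exponentiated,
    clipped advantage weights, with non-positive-advantage actions
    dropped entirely so they impose no constraint on the policy."""
    w = np.exp(np.clip(advantages / beta, -5.0, 5.0))
    w = w * (advantages > 0)
    return -(w * logp_actions).mean()

# Actions with negative advantage contribute nothing to the constraint:
logp = np.log(np.array([0.5, 0.5, 0.5]))  # log pi(a|s) for 3 dataset actions
adv = np.array([-1.0, 2.0, -3.0])         # estimated advantages
loss = weighted_bc_loss(logp, adv)
```

In a full algorithm this term would be added to a standard policy-improvement objective; the hard zeroing versus soft down-weighting of bad actions is the kind of design choice such a method must make.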

    Towards BCI-actuated smart wheelchair system

    Background: Electroencephalogram-based brain-computer interfaces (BCIs) are a novel human-machine interaction technology that allows people to communicate and interact with the external world without relying on their peripheral muscles and nerves. Among BCI systems, brain-actuated wheelchairs are promising for the rehabilitation of severely motor-disabled individuals who cannot control a wheelchair through conventional interfaces. Previous studies achieved easy-to-use brain-actuated wheelchairs that let users navigate through simple commands, but those systems rely on offline calibration of the environment; other systems require no prior knowledge, but controlling them is time consuming. In this paper, we propose an improved mobile platform equipped with an omnidirectional wheelchair, a lightweight robotic arm, a target recognition module and an auto-control module. Based on the you only look once (YOLO) algorithm, the system recognizes and locates targets in the environment in real time, and the user confirms one target through a P300-based BCI. An expert system plans a proper solution for the specific target (for example, the planned solution for a door is to open it and then pass through it), and the auto-control system then jointly controls the wheelchair and robotic arm to complete the operation. During task execution, the target is also tracked with an image tracking technique. The result is an easy-to-use system that provides accurate services to satisfy user requirements and accommodates different environments. Results: To validate and evaluate the system, an experiment simulating daily application was performed. The tasks included driving the system closer to a walking man and having a conversation with him; going to another room through a door; and picking up a bottle of water on the desk and drinking. Three patients (cerebral infarction, spinal injury and stroke) and four healthy subjects participated in the test, and all completed the tasks. Conclusion: This article presents a brain-actuated smart wheelchair system that provides efficient and considerate services for users. The results demonstrate that the system works smartly and efficiently: users only need to issue a few simple commands to receive considerate services. This system is significant for accelerating the application of BCIs in practical environments, especially for patients who will use a BCI for rehabilitation.