2,359 research outputs found

    Feature extraction using extrema sampling of discrete derivatives for spike sorting in implantable upper-limb neural prostheses

    Next-generation neural interfaces for upper-limb (and other) prostheses aim to develop implantable interfaces for one or more nerves, each interface having many neural signal channels that work reliably in the stump without harming the nerves. To achieve real-time multi-channel processing, it is important to integrate spike sorting on-chip to overcome limitations in transmission bandwidth. This requires computationally efficient algorithms for feature extraction and clustering suitable for low-power hardware implementation. This paper describes a new feature extraction method for real-time spike sorting based on extrema analysis (namely positive and negative peaks) of spike shapes and their discrete derivatives at different frequency bands. Using simulations across different datasets, the accuracy and computational complexity of the proposed method are assessed and compared with other methods. The average classification accuracy of the proposed method in conjunction with online sorting (O-Sort) is 91.6%, outperforming all the other methods tested with the O-Sort clustering algorithm. The proposed method offers a better tradeoff between classification error and computational complexity, making it a particularly strong choice for on-chip spike sorting.
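The core idea of the abstract (features built from the extrema of a spike waveform and its discrete derivatives) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the choice of first and second derivatives, and the feature layout are assumptions.

```python
import numpy as np

def extrema_derivative_features(spike):
    """Build a small feature vector from the positive and negative peaks
    (extrema) of a spike waveform and of its discrete derivatives.

    Illustrative sketch only: the paper additionally samples extrema at
    different frequency bands, which is omitted here.
    """
    d1 = np.diff(spike)   # first discrete derivative
    d2 = np.diff(d1)      # second discrete derivative
    feats = []
    for sig in (spike, d1, d2):
        feats.append(sig.max())  # positive peak
        feats.append(sig.min())  # negative peak
    return np.array(feats)

# Toy spike: the feature vector is just six peak values, cheap to compute
# on-chip compared to, e.g., PCA-based feature extraction.
spike = np.array([0.0, 1.0, 3.0, 1.0, -2.0, -1.0, 0.0])
features = extrema_derivative_features(spike)
```

Because the features are simple maxima and minima of differences, they need only comparisons and subtractions per sample, which is what makes this family of methods attractive for low-power hardware.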

    DeepNav: Joint View Learning for Direct Optimal Path Perception in Cochlear Surgical Platform Navigation

    Although much research has been conducted in the field of automated cochlear implant navigation, the problem remains challenging. Deep learning techniques have recently achieved impressive results in a variety of computer vision problems, raising expectations that they might be applied in other domains, such as identifying the optimal navigation zone (OPZ) in the cochlea. In this paper, a 2.5D joint-view convolutional neural network (2.5D CNN) is proposed and evaluated for the identification of the OPZ in the cochlear segments. The proposed network consists of two complementary sagittal and bird-view (or top-view) networks for 3D OPZ recognition, each utilizing a ResNet-8 architecture consisting of 5 convolutional layers with rectified linear unit (ReLU) activations, followed by average pooling with size equal to the size of the final feature maps. The last fully connected layer of each network has 4 indicators, corresponding to the classes considered: the distance to the adjacent left and right walls, collision probability, and heading angle. To demonstrate this, the 2.5D CNN was trained using a parametric data generation model, and then evaluated using anatomically constructed cochlea models from the micro-CT images of different cases. Prediction of the indicators demonstrates the effectiveness of the 2.5D CNN; for example, the heading angle has less than 1° error with computation delays of less than 1 millisecond.
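The joint-view head described in the abstract (global average pooling over the final feature maps, then a fully connected layer with four indicator outputs per view) can be sketched as below. This is a hedged sketch, not the paper's network: the convolutional backbone is omitted, and the feature-map sizes, weight values, and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def view_head(feature_maps, w, b):
    """One view branch's output stage: average pooling with window equal
    to the final feature-map size (i.e. global average pooling), then a
    fully connected layer producing 4 indicators (left-wall distance,
    right-wall distance, collision probability, heading angle).

    Weights here are random placeholders standing in for a trained model.
    """
    pooled = feature_maps.mean(axis=(1, 2))  # one scalar per feature map
    return pooled @ w + b                    # 4 indicator outputs

# Toy final feature maps for the two complementary branches: 16 maps of 8x8
# from the (omitted) sagittal and top-view ResNet-8 backbones.
sagittal_maps = rng.standard_normal((16, 8, 8))
topview_maps = rng.standard_normal((16, 8, 8))
w, b = rng.standard_normal((16, 4)), np.zeros(4)

out_sagittal = view_head(sagittal_maps, w, b)
out_topview = view_head(topview_maps, w, b)
```

The two branches each regress the same four indicators from complementary 2D views, which is what makes the overall network "2.5D": it reasons about 3D pose without a full volumetric model.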