
    Fast vision through frameless event-based sensing and convolutional processing: Application to texture recognition

    Address-event representation (AER) is an emerging hardware technology that shows high potential to provide, in the near future, a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level, event-based, frameless manner. As a result, vision processing is practically simultaneous with vision sensing, since there is no need to wait for full frames to be sensed. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolution chips, which have demonstrated very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates, with AER hardware, Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded.
Moreover, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.

Funding: Ministerio de Educación y Ciencia TEC-2006-11730-C03-01; Junta de Andalucía P06-TIC-01417; European Union IST-2001-34124, 21677
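The event-driven convolution principle behind these AER chips can be sketched in a few lines. The kernel, array size, and firing threshold below are invented for illustration; real chips process each address-event asynchronously in hardware rather than in a Python loop:

```python
import numpy as np

def event_convolution(events, kernel, shape, threshold=4.0):
    """Event-driven convolution sketch: each input event (x, y) adds the
    kernel to a neighbourhood of a persistent accumulator array; any cell
    that crosses the threshold emits an output event and is reset.
    No frame is ever assembled."""
    acc = np.zeros(shape)
    k = kernel.shape[0] // 2
    out_events = []
    for (x, y) in events:
        # clip the kernel footprint to the array borders
        x0, x1 = max(0, x - k), min(shape[0], x + k + 1)
        y0, y1 = max(0, y - k), min(shape[1], y + k + 1)
        acc[x0:x1, y0:y1] += kernel[k - (x - x0):k + (x1 - x),
                                    k - (y - y0):k + (y1 - y)]
        # every cell over threshold fires an output event and resets
        for fx, fy in np.argwhere(acc >= threshold):
            out_events.append((int(fx), int(fy)))
            acc[fx, fy] = 0.0
    return out_events
```

Because output events are produced as soon as the accumulator crosses threshold, downstream recognition can begin while input events are still arriving, which is the speed advantage the abstract describes.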

    Computer-Assisted Planning and Robotics in Epilepsy Surgery

    Epilepsy is a severe and devastating condition that affects ~1% of the population. Around 30% of these patients are drug-refractory. Epilepsy surgery may provide a cure in selected individuals with drug-resistant focal epilepsy if the epileptogenic zone can be identified and safely resected or ablated. Stereoelectroencephalography (SEEG) is a diagnostic procedure performed to aid in the delineation of the seizure onset zone when non-invasive investigations are not sufficiently informative or are discordant. Utilizing a multi-modal imaging platform, a novel computer-assisted planning (CAP) algorithm was adapted, applied and clinically validated for optimizing safe SEEG trajectory planning. In an initial retrospective validation study, 13 patients with 116 electrodes were enrolled, and safety parameters between automated CAP trajectories and expert manual plans were compared. The automated CAP trajectories returned statistically significant improvements in all of the compared clinical metrics, including overall risk score (CAP 0.57 +/- 0.39 (mean +/- SD) and manual 1.00 +/- 0.60, p < 0.001). Assessment of inter-rater variability revealed no difference in external expert surgeon ratings. Both manual and CAP electrodes were rated as feasible in 42.8% (42/98) of cases. CAP was able to provide feasible electrodes in 19.4% (19/98) of cases where manual planning was not, whereas manual planning generated a feasible electrode in 26.5% (26/98) of cases where CAP was not. Based on the encouraging results from the retrospective analysis, a prospective validation study including an additional 125 electrodes in 13 patients was then undertaken to compare CAP to expert manual plans from two neurosurgeons. The manual plans were performed separately and blindly from the CAP.
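The risk score compared above is computed in this line of work from surgical heuristics such as distance to segmented vasculature along the trajectory. The following is a toy sketch of one plausible metric of that kind; the sampling scheme, safety margin, and linear penalty are assumptions for illustration, not the validated algorithm:

```python
import numpy as np

def trajectory_risk(entry, target, vessel_points, samples=50, safety_mm=3.0):
    """Illustrative SEEG-style risk metric: sample points along the straight
    entry-to-target line, measure the distance from each to the nearest
    segmented vessel point, and accumulate a penalty only where that
    distance falls inside the safety margin."""
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    # evenly spaced sample points along the trajectory
    pts = entry + np.linspace(0.0, 1.0, samples)[:, None] * (target - entry)
    # distance from every sample to its nearest vessel point
    d = np.min(np.linalg.norm(pts[:, None, :] - vessel_points[None, :, :],
                              axis=2), axis=1)
    # linear penalty inside the safety margin, zero outside, averaged
    return float(np.mean(np.clip((safety_mm - d) / safety_mm, 0.0, 1.0)))
```

A CAP optimizer can then rank candidate entry/target pairs by such a score, which is how automated plans end up with lower aggregate risk than manual ones.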
Computer-generated trajectories were found to carry lower risk scores (absolute difference of 0.04 mm (95% CI = -0.42-0.01), p = 0.04) and were subsequently implanted in all cases without complication. The pipeline has been fully integrated into the clinical service and has now replaced manual SEEG planning at our institution. Further efforts were then focused on the distillation of optimal entry and target points for common SEEG trajectories and on applying machine learning methods to develop an active learning algorithm that adapts to individual surgeon preferences. Thirty-two patients were prospectively enrolled in the study. The first 12 patients underwent prospective CAP planning and implantation following the pipeline outlined in the previous study. These patients were used as a training set, and all 108 electrodes were, after successful implantation, normalized to atlas space to generate 'spatial priors' using a K-Nearest Neighbour (K-NN) classifier. A subsequent test set of 20 patients (210 electrodes) was then used to prospectively validate the spatial priors. From the test set, 78% (123/157) of the implanted trajectories passed through both the entry and target spatial priors defined from the training set. To improve the generalizability of the spatial priors to other neurosurgical centres undertaking SEEG, and to take into account the potential for changing institutional practices, an active learning algorithm was implemented. The K-NN classifier was shown to dynamically learn and refine the spatial priors. The progressive refinement of CAP SEEG planning outlined in this and previous studies has culminated in an algorithm that not only optimizes the surgical heuristics and risk scores related to SEEG planning but can also learn from previous experience. Overall, safe and feasible trajectory schemata were returned in 30% of the time required for manual SEEG planning.
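The spatial-prior idea can be illustrated with a minimal K-NN vote over previously implanted, atlas-normalized points. The coordinates, labels, and choice of k below are hypothetical; this is not the thesis's implementation:

```python
import numpy as np

def knn_prior_match(candidate, prior_points, prior_labels, k=3):
    """Toy K-NN classifier: vote among the k nearest atlas-normalized
    training points; label 1 means the candidate point falls within an
    established spatial prior, label 0 means it does not."""
    d = np.linalg.norm(prior_points - np.asarray(candidate, float), axis=1)
    nearest = np.argsort(d)[:k]           # indices of the k closest points
    return int(np.round(np.asarray(prior_labels)[nearest].mean()))
```

Active learning in this setting amounts to appending each newly implanted electrode's normalized entry/target points (with their feasibility label) to `prior_points`/`prior_labels`, so the classifier's decision boundary shifts with institutional practice.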
Computer-assisted planning was then applied to optimize laser interstitial thermal therapy (LITT) trajectory planning, a minimally invasive alternative to open mesial temporal resection, focal lesion ablation and anterior two-thirds corpus callosotomy. We describe and validate the first CAP algorithm for mesial temporal LITT ablations for epilepsy treatment. Twenty-five patients who had previously undergone LITT ablations at a single institution, with a median follow-up of 2 years, were included. Trajectory parameters for the CAP algorithm were derived from expert consensus to maximize distance from vasculature and ablation of the amygdalohippocampal complex, and to minimize collateral damage to adjacent brain structures whilst avoiding transgression of the ventricles and sulci. Trajectory parameters were also optimized to reduce the drilling angle to the skull and the overall catheter length. Simulated cavities attributable to the CAP trajectories were calculated using a 5-15 mm ablation diameter. In comparison to manually planned and implemented LITT trajectories, CAP resulted in a significant increase in the percentage ablation of the amygdalohippocampal complex (manual 57.82 +/- 15.05% (mean +/- S.D.)) and a reduction in unablated medial hippocampal head depth (manual 4.45 +/- 1.58 mm, CAP 1.19 +/- 1.37 mm (mean +/- S.D.), p = 0.0001). As LITT ablation of the mesial temporal structures is a novel procedure, there are no established standards for trajectory planning. A data-driven machine learning approach was, therefore, applied to identify hitherto unknown CAP trajectory parameter combinations. All possible combinations of planning parameters were calculated, culminating in 720 unique combinations per patient. Linear regression and random forest machine learning algorithms were trained on half of the data set (3800 trajectories) and tested on the remaining unseen trajectories (3800 trajectories).
The linear regression and random forest methods returned good predictive accuracies, both achieving Pearson correlations of ρ = 0.7, with root mean squared errors of 0.13 and 0.12 respectively. The machine learning analysis revealed that the optimal entry points were centred over the junction of the inferior occipital, middle temporal and middle occipital gyri. The optimal target points were anterior and medial translations of the centre of the amygdala. A large multicentre external validation study of 95 patients was then undertaken, comparing the manually planned and implemented trajectories, CAP trajectories targeting the centre of the amygdala, CAP trajectories using the parameters derived from expert consensus, and CAP trajectories utilizing the machine-learning-derived parameters. Three external blinded expert surgeons were then selected to undertake feasibility ratings and preference rankings of the trajectories. CAP-generated trajectories resulted in a significant improvement in many of the planning metrics, notably the risk score (manual 1.3 +/- 0.1 (mean +/- S.D.), CAP 1.1 +/- 0.2 (mean +/- S.D.), p < 0.001) and overall ablation of the amygdala (manual 45.3 +/- 22.2% (mean +/- S.D.), CAP 64.2 +/- 20% (mean +/- S.D.), p < 0.001). Blinded external feasibility ratings revealed that manual trajectories were less preferable than CAP-planned trajectories, with an estimated probability of being ranked 4th (lowest) of 0.62. Traditional open corpus callosotomy requires a midline craniotomy, interhemispheric dissection and disconnection of the rostrum, genu and body of the corpus callosum. In cases where drop attacks persist, a completion corpus callosotomy is then performed to disrupt the remaining fibres in the splenium. The emergence of LITT technology has raised the possibility of undertaking this procedure in a minimally invasive fashion, without the need for a craniotomy, using two or three individual trajectories.
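The exhaustive sweep that yields 720 unique combinations per patient can be reproduced combinatorially. The parameter names and value grids below are invented for illustration; only the product structure and the total of 720 follow the text:

```python
from itertools import product

# Hypothetical discrete planning-parameter grids (4 x 5 x 6 x 6 = 720);
# the concrete names and values are illustrative, not from the thesis.
drill_angle_limits = [15, 20, 25, 30]          # degrees from skull normal
vessel_margins = [1, 2, 3, 4, 5]               # mm safety distance
catheter_lengths = [60, 70, 80, 90, 100, 110]  # mm maximum length
entry_regions = list(range(6))                 # candidate gyral entry zones

# one candidate trajectory is planned and scored per combination
combinations = list(product(drill_angle_limits, vessel_margins,
                            catheter_lengths, entry_regions))
```

Each combination's planned trajectory then supplies one training row (parameters in, achieved ablation/risk out) for the regression and random forest models described above.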
Early case series have shown LITT anterior two-thirds corpus callosotomy to be safe and efficacious. Whole-brain probabilistic tractography connectomes were generated utilizing 3-Tesla multi-shell imaging data and constrained spherical deconvolution (CSD). Two independent blinded expert neurosurgeons with experience of performing the procedure using LITT then planned the trajectories in each patient following their current clinical practice. Automated trajectories returned a significant reduction in the risk score (manual 1.3 +/- 0.1 (mean +/- S.D.), CAP 1.1 +/- 0.1 (mean +/- S.D.), p < 0.001). Finally, we investigated the different methods of surgical implantation for SEEG electrodes. As an initial study, a systematic review and meta-analysis of the literature to date was performed. This revealed that a wide variety of implantation methods, including traditional frame-based, frameless, robotic and custom 3D-printed jigs, are being used in clinical practice. Of concern, all comparative reports from institutions that had changed from one implantation method to another, such as following the introduction of robotic systems, did not undertake parallel-group comparisons. This suggests that patients may have been exposed to risks associated with learning curves and potential harms related to the new device until its efficacy was known. A pragmatic randomized controlled trial of a novel non-CE-marked robotic trajectory guidance system (iSYS1) was then devised. Before clinical implantations began, a series of pre-clinical investigations utilizing 3D-printed phantom heads from previously implanted patients was performed to provide pilot data and to assess the surgical learning curve. The surgeons had comparatively little clinical experience with the new robotic device, which replicates the introduction of such novel technologies into clinical practice.
The study confirmed that the learning curve with the iSYS1 device was minimal, and that accuracies and workflow were similar to the conventional manual method. The randomized controlled trial represents the first of its kind for stereotactic neurosurgical procedures. Thirty-two patients were enrolled, with 16 patients randomized to the iSYS1 intervention arm and 16 patients to the manual implantation arm. The intervention allocation was concealed from the patients; the surgical and research teams could not be blinded. Trial management, independent data monitoring and trial steering committees were convened at four points during the trial (after every 8 patients implanted). Given the high level of accuracy required for both methods, the main distinguishing factor would be the time taken to achieve alignment to the prespecified trajectory. The primary outcome for comparison, therefore, was the time for individual SEEG electrode implantation. Secondary outcomes included the implantation accuracy derived from the post-operative CT scan, and infection, intracranial haemorrhage and neurological deficit rates. Overall, 32 patients (328 electrodes) completed the trial (16 in each arm), and the baseline demographics were broadly similar between the two groups. The time for individual electrode implantation was significantly less with the iSYS1 device (median of 6.36 minutes (95% CI 5.72 to 7.07)) than with the PAD (median of 9.06 minutes (95% CI 8.16 to 10.06), p=0.0001). Target point accuracy differed significantly between the PAD (median error of 1.58 mm (95% CI 1.38 to 1.82)) and the iSYS1 (median error of 1.16 mm (95% CI 1.01 to 1.33), p=0.004). The difference between the target point accuracies is not clinically significant for SEEG but may have implications for procedures, such as deep brain stimulation, that require higher placement accuracy. All of the electrodes achieved their respective intended anatomical targets.
In 12 of 16 patients following robotic implantation, and 10 of 16 following manual PAD implantation, a seizure onset zone was identified and resection recommended. The aforementioned systematic review and meta-analysis were updated to include additional studies published during the trial. In this context, the iSYS1 device entry and target point accuracies were similar to those reported in other published studies of robotic devices, including the ROSA, Neuromate and iSYS1. The PAD accuracies, however, outperformed the previously published results for other frameless stereotaxy methods. In conclusion, the presented studies report the integration and validation of complex clinical decision support software into the clinical neurosurgical workflow for SEEG planning. The stereotactic planning platform was further refined by integrating machine learning techniques, and extended towards optimisation of LITT trajectories for ablation of mesial temporal structures and corpus callosotomy. The platform was then seamlessly integrated with novel trajectory guidance software to effectively and safely guide the implantation of the SEEG electrodes. Through a single-blinded randomised controlled trial, the iSYS1 device was shown to reduce the time taken for individual electrode insertion. Taken together, this work presents and validates the first fully integrated stereotactic trajectory planning platform that can be used for both SEEG and LITT trajectory planning, followed by surgical implantation through the use of a novel trajectory guidance system.

    Previous, current, and future stereotactic EEG techniques for localising epileptic foci

    INTRODUCTION: Drug-resistant focal epilepsy presents a significant morbidity burden globally, and epilepsy surgery has been shown to be an effective treatment modality. Accurate identification of the epileptogenic zone is therefore crucial for surgery, and in those with unclear non-invasive data, stereoelectroencephalography (SEEG) is required. AREAS COVERED: This review covers the history and current practices in the field of intracranial EEG, particularly analyzing how stereotactic image guidance, robot-assisted navigation, and improved imaging techniques have increased the accuracy, scope, and use of SEEG globally. EXPERT OPINION: We provide a perspective on future directions in the field, reviewing improvements in predicting electrode bending, image acquisition, machine learning and artificial intelligence, and advances in surgical planning and visualization software and hardware. We also foresee the development of EEG analysis tools based on machine learning algorithms that are likely to work synergistically with neurophysiology experts and improve the efficiency of EEG and SEEG analysis and 3D visualization. Improving computer-assisted planning to minimize manual input from the surgeon, and seamless integration into an ergonomic and adaptive operating theatre incorporating hybrid microscopes and virtual and augmented reality, is likely to be a significant area of improvement in the near future.

    EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras

    Event-based cameras have shown great promise in a variety of situations where frame-based cameras suffer, such as high-speed motions and high dynamic range scenes. However, developing algorithms for event measurements requires a new class of hand-crafted algorithms. Deep learning has shown great success in providing model-free solutions to many problems in the vision community, but existing networks have been developed with frame-based images in mind, and there does not exist the wealth of labeled data for events that there does for images for supervised training. To address these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event-based cameras. In particular, we introduce an image-based representation of a given event stream, which is fed into a self-supervised neural network as the sole input. The corresponding grayscale images, captured from the same camera at the same time as the events, are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events alone in a variety of different scenes, with performance competitive to image-based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain.
    Comment: 9 pages, 5 figures, 1 table. Accompanying video: https://youtu.be/eMHZBSoq0sE. Dataset: https://daniilidis-group.github.io/mvsec/. Robotics: Science and Systems 201
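The image-based representation of the event stream can be sketched as a 4-channel tensor of per-polarity event counts and most-recent timestamps; treat the exact channel layout and normalization below as an approximation of the paper's input, not its reference implementation:

```python
import numpy as np

def event_image(events, height, width):
    """Build a 4-channel image from an event stream of (x, y, t, polarity)
    tuples: channels 0/1 hold per-pixel counts of positive/negative events,
    channels 2/3 hold the normalized timestamp of the most recent event of
    each polarity (an approximation of EV-FlowNet's input encoding)."""
    img = np.zeros((4, height, width), dtype=np.float32)
    if not events:
        return img
    t0, t1 = events[0][2], events[-1][2]
    span = max(t1 - t0, 1e-9)            # avoid division by zero
    for x, y, t, p in events:
        c = 0 if p > 0 else 1            # count channel chosen by polarity
        img[c, y, x] += 1.0
        img[2 + c, y, x] = (t - t0) / span  # keep only the latest timestamp
    return img
```

Collapsing the asynchronous stream into this fixed-size tensor is what lets a conventional convolutional encoder-decoder consume events as "the sole input" while the grayscale frames supply only the training-time photometric loss.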