13,872 research outputs found

    An Equivariant Observer Design for Visual Localisation and Mapping

    This paper builds on recent work on Simultaneous Localisation and Mapping (SLAM) in the non-linear observer community by framing the visual localisation and mapping problem as a continuous-time equivariant observer design problem on the symmetry group of a kinematic system. The state-space is a quotient of the robot pose, expressed on SE(3), and multiple copies of real projective space, used to represent both points in space and bearings in a single unified framework. An observer with decoupled Riccati gains for each landmark is derived, and we show that its error system is almost globally asymptotically stable and exponentially stable in-the-large. Comment: 12 pages, 2 figures, published in 2019 IEEE CD
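
    As an illustrative aside (not taken from the paper), the decoupled Riccati-gain structure mentioned above can be sketched as one independent continuous-time Riccati update per landmark; the matrices A, C, Q, R below are generic placeholders, not the paper's system matrices on SE(3) and real projective space.

        # Illustrative sketch only: decoupled per-landmark Riccati gain updates.
        # A, C, Q, R are placeholder matrices, not the paper's actual model.
        import numpy as np

        def riccati_step(P, A, C, Q, R, dt):
            """One Euler step of the continuous-time filter Riccati equation."""
            Pdot = A @ P + P @ A.T + Q - P @ C.T @ np.linalg.solve(R, C @ P)
            return P + dt * Pdot

        def decoupled_gains(P_list, A_list, C_list, Q, R, dt):
            """Update each landmark covariance independently and return its gain."""
            gains = []
            for i, (P, A, C) in enumerate(zip(P_list, A_list, C_list)):
                P_list[i] = riccati_step(P, A, C, Q, R, dt)
                gains.append(P_list[i] @ C.T @ np.linalg.inv(R))
            return gains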

    The spatio-temporal mapping of epileptic networks: Combination of EEG–fMRI and EEG source imaging

    Simultaneous EEG–fMRI acquisitions in patients with epilepsy often reveal distributed patterns of Blood Oxygen Level Dependent (BOLD) change correlated with epileptiform discharges. We investigated whether electrical source imaging (ESI) performed on the interictal epileptiform discharges (IED) acquired during fMRI acquisition could be used to study the dynamics of the networks identified by the BOLD effect, thereby avoiding the limitations of combining results from separate recordings. Nine selected patients (13 IED types identified) with focal epilepsy underwent EEG–fMRI. Statistical analysis was performed using SPM5 to create BOLD maps. ESI was performed on the IED recorded during fMRI acquisition using a realistic head model (SMAC) and a distributed linear inverse solution (LAURA). ESI could not be performed in one case. In 10/12 remaining studies, ESI at IED onset (ESIo) was anatomically close to one BOLD cluster. Interestingly, ESIo was closest to the positive BOLD cluster with maximal statistical significance in only 4/12 cases, and closest to negative BOLD responses in 4/12 cases. Very small BOLD clusters could also have clinical relevance in some cases. ESI at a later time frame (ESIp) showed propagation to remote sources co-localised with other BOLD clusters in half of the cases. In concordant cases, the distance between the maxima of ESI and the closest EEG–fMRI cluster was less than 33 mm, in agreement with previous studies. We conclude that simultaneous ESI and EEG–fMRI analysis may be able to distinguish areas of BOLD response related to the initiation of IED from propagation areas. This combination provides new opportunities for investigating epileptic networks.
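
    As a hypothetical illustration of the concordance measure reported above (not part of the study's pipeline), the distance between an ESI maximum and the nearest BOLD cluster maximum can be checked against the quoted 33 mm bound; all coordinates below are invented.

        # Hypothetical example: distance from an ESI maximum to the closest
        # BOLD cluster maximum, in millimetre coordinates. Values are made up.
        import numpy as np

        esi_max = np.array([-42.0, -18.0, 55.0])            # ESI source maximum (mm)
        bold_maxima = np.array([[-38.0, -22.0, 50.0],       # cluster 1 maximum
                                [ 12.0,  64.0,   8.0]])     # cluster 2 maximum

        dists = np.linalg.norm(bold_maxima - esi_max, axis=1)
        closest = dists.min()
        print(f"closest BOLD cluster at {closest:.1f} mm -> concordant: {closest < 33}")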

    EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging without External Trackers

    Ultrasound (US) is the most widely used fetal imaging technique. However, US images have a limited capture range and suffer from view-dependent artefacts such as acoustic shadows. Compounding of overlapping 3D US acquisitions into a high-resolution volume can extend the field of view and remove image artefacts, which is useful for retrospective analysis, including population-based studies. However, such volume reconstructions require information about the relative transformations between the probe positions from which the individual volumes were acquired. In prenatal US scans, the fetus can move independently from the mother, so external trackers such as electromagnetic or optical tracking systems cannot capture the motion between the probe and the moving fetus. We provide a novel methodology for image-based tracking and volume reconstruction by combining recent advances in deep learning and simultaneous localisation and mapping (SLAM). Tracking semantics are established through the use of a Residual 3D U-Net, and the output is fed to the SLAM algorithm. As a proof of concept, experiments are conducted on US volumes taken from a whole-body fetal phantom and from the heads of real fetuses. For the fetal head segmentation, we also introduce a novel weak annotation approach to minimise the manual effort required for ground truth annotation. We evaluate our method qualitatively, and quantitatively with respect to tissue discrimination accuracy and tracking robustness. Comment: MICCAI Workshop on Perinatal, Preterm and Paediatric Image Analysis (PIPPI), 201
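
    For readers unfamiliar with the architecture named above, the sketch below shows a generic residual 3D convolution block of the kind a Residual 3D U-Net encoder is assembled from (PyTorch); the channel counts and normalisation choice are assumptions, not the configuration used in the paper.

        # Generic residual 3D convolution block; layer sizes are assumptions.
        import torch
        import torch.nn as nn

        class ResBlock3D(nn.Module):
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.conv1 = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
                self.conv2 = nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1)
                self.norm1 = nn.InstanceNorm3d(out_ch)
                self.norm2 = nn.InstanceNorm3d(out_ch)
                self.act = nn.ReLU(inplace=True)
                # 1x1x1 projection so the skip connection matches the channels
                self.skip = (nn.Conv3d(in_ch, out_ch, kernel_size=1)
                             if in_ch != out_ch else nn.Identity())

            def forward(self, x):
                y = self.act(self.norm1(self.conv1(x)))
                y = self.norm2(self.conv2(y))
                return self.act(y + self.skip(x))

        # e.g. features from a single-channel US volume patch:
        # feats = ResBlock3D(1, 64)(torch.randn(1, 1, 32, 32, 32))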

    Simultaneous intracranial EEG and fMRI of interictal epileptic discharges in humans

    Simultaneous scalp EEG–fMRI measurements allow the study of epileptic networks and, more generally, of the coupling between neuronal activity and haemodynamic changes in the brain. Intracranial EEG (icEEG) has greater sensitivity and spatial specificity than scalp EEG but limited spatial sampling. We performed simultaneous icEEG and functional MRI recordings in epileptic patients to study the haemodynamic correlates of intracranial interictal epileptic discharges (IED). Two patients undergoing icEEG with subdural and depth electrodes as part of the presurgical assessment of their pharmaco-resistant epilepsy participated in the study. They were scanned on a 1.5 T MR scanner following a strict safety protocol. Simultaneous recordings of fMRI and icEEG were obtained at rest. IED were subsequently visually identified on icEEG and their fMRI correlates were mapped using a general linear model (GLM). On scalp EEG–fMRI recordings performed prior to the implantation, no IED were detected. icEEG–fMRI was well tolerated and no adverse health effect was observed. The quality of the intra-MR icEEG was comparable to that obtained outside the scanner. In both cases, significant haemodynamic changes were revealed in relation to IED, both close to the most active electrode contacts and at distant sites. In one case, the results showed an epileptic network including regions that could not be sampled by icEEG, in agreement with findings from magneto-encephalography, offering some explanation for the persistence of seizures after surgery. Hence, icEEG–fMRI allows the study of whole-brain human epileptic networks with unprecedented sensitivity and specificity. This could help improve our understanding of epileptic networks, with possible implications for epilepsy surgery.
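
    As a schematic of the GLM step described above (not the SPM5 pipeline used in the study), IED onsets can be convolved with a canonical-style haemodynamic response function and fitted to each voxel time course by least squares; the TR, HRF parameters, and data below are assumptions.

        # Schematic GLM of IED-related BOLD change: convolve IED onsets with a
        # canonical-style double-gamma HRF, then fit per voxel by least squares.
        import numpy as np
        from scipy.stats import gamma

        TR, n_scans = 2.0, 300                              # assumed repetition time / scan count
        t = np.arange(0, 32, TR)
        hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0      # canonical-style HRF shape
        hrf /= hrf.sum()

        ied = np.zeros(n_scans)
        ied[[40, 95, 180, 240]] = 1.0                       # hypothetical IED onsets (in scans)
        regressor = np.convolve(ied, hrf)[:n_scans]

        X = np.column_stack([regressor, np.ones(n_scans)])  # design: IED effect + constant
        Y = np.random.randn(n_scans, 5000)                  # placeholder voxel time courses
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)        # beta[0] maps the IED-related effect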

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
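
    For reference, the "de-facto standard formulation" usually refers to maximum a posteriori estimation over a factor graph, which (under Gaussian measurement noise and a flat prior) can be written as follows; this is a standard statement of the problem, paraphrased rather than quoted from the paper.

        % X: robot poses and landmarks, Z: measurements, h_k: measurement model,
        % \Omega_k: information matrix of factor k (Mahalanobis-weighted norm).
        \mathcal{X}^{\star}
          = \operatorname*{arg\,max}_{\mathcal{X}} \; p(\mathcal{X} \mid \mathcal{Z})
          = \operatorname*{arg\,min}_{\mathcal{X}} \sum_{k}
            \bigl\lVert h_k(\mathcal{X}_k) - z_k \bigr\rVert^{2}_{\Omega_k}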

    Increasing the Efficiency of 6-DoF Visual Localization Using Multi-Modal Sensory Data

    Localization is a key requirement for mobile robot autonomy and human-robot interaction. Vision-based localization is accurate and flexible; however, it incurs a high computational burden which limits its application on many resource-constrained platforms. In this paper, we address the problem of performing real-time localization in large-scale 3D point cloud maps of ever-growing size. While most systems using multi-modal information reduce localization time by employing side-channel information in a coarse manner (e.g., WiFi for a rough prior position estimate), we propose to inter-weave the map with rich sensory data. This multi-modal approach achieves two key goals simultaneously. First, it enables us to harness additional sensory data to localise against a map covering a vast area in real time; second, it allows us to roughly localise devices which are not equipped with a camera. The key to our approach is a localization policy based on a sequential Monte Carlo estimator. The localiser uses this policy to attempt point-matching only in nodes where it is likely to succeed, significantly increasing the efficiency of the localization process. The proposed multi-modal localization system is evaluated extensively in a large museum building. The results show that our multi-modal approach not only increases the localization accuracy but also significantly reduces computational time. Comment: Presented at IEEE-RAS International Conference on Humanoid Robots (Humanoids) 201
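
    A toy sketch (not the paper's implementation) of the sequential Monte Carlo machinery such a localization policy rests on: particles are propagated by odometry, reweighted by a coarse side-channel estimate such as WiFi, and resampled when the effective sample size collapses; all noise models and numbers below are invented.

        # Toy sequential Monte Carlo (particle filter) position estimator.
        # Motion noise and the side-channel likelihood are illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        particles = rng.uniform(0, 100, size=(500, 2))      # 2D positions in the map (m)
        weights = np.full(500, 1 / 500)

        def predict(particles, odom, sigma=0.3):
            """Propagate particles by odometry with Gaussian noise."""
            return particles + odom + rng.normal(0, sigma, particles.shape)

        def update(particles, weights, side_pos, sigma=5.0):
            """Reweight by a coarse side-channel (e.g. WiFi) position estimate."""
            d = np.linalg.norm(particles - side_pos, axis=1)
            w = weights * np.exp(-0.5 * (d / sigma) ** 2)
            w /= w.sum()
            if 1.0 / np.sum(w ** 2) < len(w) / 2:           # effective sample size check
                idx = rng.choice(len(w), size=len(w), p=w)
                return particles[idx], np.full(len(w), 1 / len(w))
            return particles, w

        particles = predict(particles, odom=np.array([0.5, 0.1]))
        particles, weights = update(particles, weights, side_pos=np.array([42.0, 17.0]))
        estimate = np.average(particles, axis=0, weights=weights)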