17,020 research outputs found

    Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models

    The health and function of tissue rely on its vascular network to provide reliable blood perfusion. Volumetric imaging approaches, such as multiphoton microscopy, can generate detailed 3D images of blood vessels that could contribute to our understanding of the role of vascular structure in normal physiology and in disease mechanisms. The segmentation of vessels, a core image analysis problem, is a bottleneck that has prevented the systematic comparison of 3D vascular architecture across experimental populations. We explored the use of convolutional neural networks to segment 3D vessels within volumetric in vivo images acquired by multiphoton microscopy. We evaluated different network architectures and machine learning techniques in the context of this segmentation problem. We show that our optimized convolutional neural network architecture, which we call DeepVess, yielded a segmentation accuracy that was better than both the current state of the art and a trained human annotator, while also being orders of magnitude faster. To explore the effects of aging and Alzheimer's disease on capillaries, we applied DeepVess to 3D images of cortical blood vessels in young and old mouse models of Alzheimer's disease and wild-type littermates. We found little difference in the distribution of capillary diameter or tortuosity between these groups, but did note a decrease in the number of longer capillary segments (>75 μm) in aged animals compared to young ones, in both wild-type and Alzheimer's disease mouse models. Comment: 34 pages, 9 figures
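The abstract reports segmentation accuracy compared against a trained human annotator. A standard voxel-overlap metric for such comparisons is the Dice coefficient; the sketch below is a generic illustration of that metric, not necessarily the exact measure used by DeepVess.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary 3D segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 3D volumes: two overlapping "vessel" masks
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[1:3, 1:3, :] = True        # 2*2*4 = 16 voxels
b[1:3, 1:3, 1:4] = True      # 2*2*3 = 12 voxels, all inside a
print(round(dice_coefficient(a, b), 3))  # 2*12/(16+12) = 0.857
```

A Dice of 1.0 means perfect voxel-wise agreement; comparing an automated mask against each human annotator, and annotators against each other, gives the kind of accuracy comparison the abstract describes.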

    Tackling 3D ToF Artifacts Through Learning and the FLAT Dataset

    Scene motion, multiple reflections, and sensor noise introduce artifacts in the depth reconstruction performed by time-of-flight (ToF) cameras. We propose a two-stage, deep-learning approach to address all of these sources of artifacts simultaneously. We also introduce FLAT, a synthetic dataset of 2000 ToF measurements that captures all of these nonidealities and allows simulation of different camera hardware. Using the Kinect 2 camera as a baseline, we show improved reconstruction errors over state-of-the-art methods on both simulated and real data. Comment: ECCV 201
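For context on why motion and multipath corrupt depth: continuous-wave ToF cameras such as the Kinect 2 infer depth from the phase shift of amplitude-modulated light, so any disturbance of the measured phase propagates directly into depth error. A minimal sketch of the ideal phase-to-depth relation (the modulation frequency below is illustrative, not taken from the FLAT dataset):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_to_depth(phase, f_mod):
    """Depth from the measured phase shift of a CW ToF camera.

    phase: phase shift in radians (0..2*pi); f_mod: modulation frequency in Hz.
    Depth is ambiguous beyond c / (2 * f_mod) because the phase wraps.
    """
    return C * phase / (4.0 * np.pi * f_mod)

# At 16 MHz modulation, a full 2*pi wrap corresponds to ~9.37 m of range
f = 16e6
print(round(phase_to_depth(np.pi, f), 3))  # half-wrap -> ~4.684 m
```

Multipath adds extra phase-shifted returns and motion changes the phase between raw captures, which is why these nonidealities appear as depth artifacts rather than simple noise.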

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: do robots need SLAM, and is SLAM solved?
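The "de facto standard formulation" the survey refers to is maximum a posteriori (MAP) estimation over a factor graph. A hedged sketch of that formulation (the symbols below are the conventional generic ones, not notation taken from this abstract):

```latex
% MAP estimate of robot poses and landmarks X given measurements Z
X^{*} = \arg\max_{X}\, p(X \mid Z)
      = \arg\max_{X}\, p(X) \prod_{k} p(z_{k} \mid X_{k})

% Assuming Gaussian measurement noise, this reduces to nonlinear least squares:
X^{*} = \arg\min_{X} \sum_{k} \bigl\| h_{k}(X_{k}) - z_{k} \bigr\|^{2}_{\Omega_{k}}
```

Here \(h_{k}\) is the measurement model for the subset of variables \(X_{k}\) involved in measurement \(z_{k}\), and \(\Omega_{k}\) is its information matrix; each term is one factor in the factor graph.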

    Change blindness: eradication of gestalt strategies

    Arrays of eight, texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Action Recognition: From Static Datasets to Moving Robots

    Deep learning models have achieved state-of-the-art performance in recognizing human activities, but often rely on background cues present in typical computer vision datasets, which predominantly have a stationary camera. If these models are to be employed by autonomous robots in real-world environments, they must be adapted to perform independently of background cues and camera motion effects. To address these challenges, we propose a new method that first generates generic action region proposals with good potential to locate one human action in unconstrained videos regardless of camera motion, and then uses these action proposals to extract and classify effective shape and motion features with a ConvNet framework. In a range of experiments, we demonstrate that by actively proposing action regions during both training and testing, state-of-the-art or better performance is achieved on benchmarks. Our approach outperforms the state of the art on two new datasets: one emphasizes irrelevant background, the other camera motion. We also validate our action recognition method in an abnormal behavior detection scenario to improve workplace safety. The results verify a higher success rate for our method, owing to the ability of our system to recognize human actions regardless of environment and camera motion.

    Anisotropic diffusion of surface normals for feature preserving surface reconstruction

    Journal article. For 3D surface reconstruction problems with noisy and incomplete range data measured from complex scenes with arbitrary topologies, a low-level representation, such as level set surfaces, is used. Such surface reconstruction is typically accomplished by minimizing a weighted sum of data-model discrepancy and model smoothness terms. This paper introduces a new nonlinear model smoothness term for surface reconstruction based on variations of the surface normals. A direct solution requires solving a fourth-order partial differential equation (PDE), which is very difficult with conventional numerical techniques. Our solution is based on processing the normals separately from the surface, which allows us to separate the problem into two second-order PDEs. The proposed method can smooth complex, noisy surfaces while preserving sharp geometric features, and it is a natural generalization of edge-preserving methods in image processing, such as anisotropic diffusion.
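A hedged sketch of the splitting the abstract describes, in generic notation (the energy and functionals below are an illustration of the idea, not the paper's exact equations): varying a normal-based smoothness energy directly with respect to the surface gives a fourth-order flow, whereas processing the normal field first and then refitting the surface gives two second-order problems.

```latex
% Smoothness energy on the unit normal field N of surface S
% (g penalizes variation of the normals, as in anisotropic diffusion)
E(N) = \int_{S} g\!\left(\lVert \nabla N \rVert\right)\, dA

% Stage 1: diffuse the normals with S held fixed (second-order in N)
\partial_{t} N = -\,\delta E / \delta N

% Stage 2: evolve S to refit the processed normals N' (second-order in S)
\partial_{t} S = -\,\delta D / \delta S,
\qquad D(S) = \int_{S} \lVert N(S) - N' \rVert^{2}\, dA
```

Alternating the two stages approximates the original fourth-order minimization while only ever solving second-order PDEs, which is what makes the scheme tractable with conventional level set solvers.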


    Thermal dosimetry for bladder hyperthermia treatment. An overview.

    The urinary bladder is a fluid-filled organ. On the one hand, this makes the internal surface of the bladder wall relatively easy to heat and in most cases ensures a relatively homogeneous temperature distribution; on the other hand, the variable volume, organ motion, and moving fluid cause artefacts for most non-invasive thermometry methods and require additional effort in planning accurate thermal treatment of bladder cancer. We give an overview of the thermometry methods currently used and investigated for hyperthermia treatments of bladder cancer, and discuss their advantages and disadvantages within the context of the specific disease (muscle-invasive or non-muscle-invasive bladder cancer) and the heating technique used. The role of treatment simulation in determining the thermal dose delivered is also discussed. Generally speaking, invasive measurement methods are more accurate than non-invasive methods but provide more limited spatial information; therefore, a combination of both is desirable, preferably supplemented by simulations. Current efforts at research and clinical centres continue to improve non-invasive thermometry methods and the reliability of treatment planning and control software. Due to the challenges in measuring temperature across the non-stationary bladder wall and surrounding tissues, more research is needed to increase our knowledge about the penetration depth and typical heating pattern of the various hyperthermia devices, in order to further improve treatments. The ability to better determine the delivered thermal dose will enable clinicians to investigate the optimal treatment parameters and, consequently, to give better controlled, and thus more reliable and effective, thermal treatments.
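A widely used way to quantify the "thermal dose delivered" that the overview discusses is cumulative equivalent minutes at 43 °C (CEM43, the Sapareto–Dey formulation). A minimal sketch, assuming a uniformly sampled temperature trace; the constants R = 0.5 / 0.25 are the conventional literature values, not parameters taken from this overview:

```python
def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 C (Sapareto-Dey thermal dose).

    temps_c: sampled tissue temperatures in degrees C
    dt_min:  sampling interval in minutes
    R is 0.5 at or above the 43 C breakpoint and 0.25 below it.
    """
    dose = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        dose += dt_min * r ** (43.0 - t)
    return dose

# 30 min at a steady 41 C: 30 * 0.25**2 = 1.875 equivalent minutes at 43 C
print(cem43([41.0] * 30, 1.0))  # 1.875
```

The exponential dependence on temperature is why accurate thermometry across the bladder wall matters: an error of 1 °C changes the computed dose by a factor of 2 to 4.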