42,694 research outputs found

    Neural Models of Motion Integration, Segmentation, and Probabilistic Decision-Making

    How do brain mechanisms carry out motion integration and segmentation processes that compute unambiguous global motion percepts from ambiguous local motion signals? Consider, for example, a deer running at variable speeds behind forest cover. The forest cover is an occluder that creates apertures through which fragments of the deer's motion signals are intermittently experienced. The brain coherently groups these fragments into a trackable percept of the deer and its trajectory. Form and motion processes are needed to accomplish this, using feedforward and feedback interactions both within and across cortical processing streams. All the cortical areas V1, V2, MT, and MST are involved in these interactions. Figure-ground processes in the form stream through V2, such as the separation of occluding boundaries of the forest cover from the boundaries of the deer, select the motion signals which determine global object motion percepts in the motion stream through MT. Sparse, but unambiguous, feature-tracking signals are amplified before they propagate across position and are integrated with far more numerous ambiguous motion signals. Figure-ground and integration processes together determine the global percept. A neural model predicts the processing stages that embody these form and motion interactions. Model concepts and data are summarized about motion grouping across apertures in response to a wide variety of displays, and about probabilistic decision-making in parietal cortex in response to random-dot displays.
    National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
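    The aperture problem described above can be illustrated with a toy computation: detectors along a contour report only the velocity component normal to the contour, while a few feature-tracking detectors (line ends, corners) report the true velocity. Below is a minimal sketch, assuming a simple weighted vote with a hypothetical amplification gain; it is not the published model's dynamics.

```python
import numpy as np

# Ambiguous detectors see only the projection of the true velocity onto
# their contour normal (the aperture problem); sparse feature-tracking
# detectors see the full velocity. Amplifying the sparse signals lets them
# dominate the integrated global-motion estimate. All names and parameter
# values here are illustrative, not taken from the published model.
true_velocity = np.array([1.0, 0.5])                 # actual object motion (vx, vy)

normals = np.stack([[np.cos(a), np.sin(a)]
                    for a in np.linspace(0, np.pi, 50)])
ambiguous = (normals @ true_velocity)[:, None] * normals   # 50 ambiguous votes
features = np.tile(true_velocity, (3, 1))                  # 3 unambiguous votes

gain = 40.0   # hypothetical amplification of feature-tracking signals
estimate = (ambiguous.sum(0) + gain * features.sum(0)) / (50 + gain * 3)
print(estimate)   # ~[0.85, 0.43], pulled toward true_velocity;
                  # gain = 0 gives the biased vector average ~[0.5, 0.25]
```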

    Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments

    Existing simultaneous localization and mapping (SLAM) algorithms are not robust in challenging low-texture environments because there are only a few salient features. The resulting sparse or semi-dense map also conveys little information for motion planning. Though some works utilize planes or scene layout for dense map regularization, they require decent state estimation from other sources. In this paper, we propose a real-time monocular plane SLAM to demonstrate that scene understanding can improve both state estimation and dense mapping, especially in low-texture environments. The plane measurements come from a pop-up 3D plane model applied to each single image. We also combine planes with point-based SLAM to improve robustness. On a public TUM dataset, our algorithm generates a dense semantic 3D model with a pixel depth error of 6.2 cm while existing SLAM algorithms fail. On a 60 m long dataset with loops, our method creates a much better 3D model with a state estimation error of 0.67%.
    Comment: International Conference on Intelligent Robots and Systems (IROS) 201
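    As a concrete illustration of how plane measurements can be folded into a point-based SLAM cost, here is a minimal sketch of a point-to-plane residual term. The function name, plane values, and points are hypothetical; the paper's actual formulation may differ.

```python
import numpy as np

# Alongside the usual point reprojection residuals, each detected plane
# (unit normal n, offset d, with n.x + d = 0) can contribute point-to-plane
# distances for the map points assigned to it.
def point_to_plane_residuals(points, normal, d):
    """Signed distances of Nx3 points to the plane n.x + d = 0."""
    normal = normal / np.linalg.norm(normal)
    return points @ normal + d

# Hypothetical example: a wall plane popped up from a single image.
wall_normal = np.array([1.0, 0.0, 0.0])
wall_offset = -2.0                      # plane x = 2 m
map_points = np.array([[2.01, 0.3, 1.2],
                       [1.98, -0.4, 0.9],
                       [2.05, 0.1, 1.5]])

residuals = point_to_plane_residuals(map_points, wall_normal, wall_offset)
cost = np.sum(residuals ** 2)           # added to the point-based SLAM cost
print(residuals, cost)
```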

    Quantitative magnetic resonance image analysis via the EM algorithm with stochastic variation

    Quantitative Magnetic Resonance Imaging (qMRI) provides researchers with insight into pathological and physiological alterations of living tissue, with the help of which researchers hope to predict (local) therapeutic efficacy early and determine optimal treatment schedules. However, the analysis of qMRI has been limited to ad hoc heuristic methods. Our research provides a powerful statistical framework for image analysis and sheds light on future localized adaptive treatment regimes tailored to the individual's response. We assume that, in an imperfect world, we only observe a blurred and noisy version of the underlying pathological/physiological changes via qMRI, due to measurement errors or unpredictable influences. We use a hidden Markov random field to model the spatial dependence in the data and develop a maximum likelihood approach via the Expectation-Maximization algorithm with stochastic variation. An important improvement over previous work is the assessment of variability in parameter estimation, which is the valid basis for statistical inference. More importantly, we focus on the expected changes rather than on image segmentation. Our research shows that the approach is powerful in both simulation studies and on a real dataset, and quite robust in the presence of some violations of the model assumptions.
    Comment: Published at http://dx.doi.org/10.1214/07-AOAS157 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
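    The stochastic variation on EM mentioned above can be sketched for a much simpler model than the paper's hidden Markov random field: a two-component Gaussian mixture, where the E-step's soft expectations are replaced by labels sampled from the posterior before the M-step. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from two Gaussians, deliberately poor initial parameters.
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 200)])
mu, sigma, pi = np.array([-1.0, 4.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior probability of component 1 for every observation
    # (the 1/sqrt(2*pi) constants cancel in the ratio).
    p0 = pi[0] * np.exp(-0.5 * ((x - mu[0]) / sigma[0]) ** 2) / sigma[0]
    p1 = pi[1] * np.exp(-0.5 * ((x - mu[1]) / sigma[1]) ** 2) / sigma[1]
    r1 = p1 / (p0 + p1)
    # Stochastic step: sample hard labels instead of keeping soft weights.
    z = rng.random(x.size) < r1
    # M-step: closed-form updates as if the sampled labels were observed.
    for k, mask in enumerate([~z, z]):
        mu[k], sigma[k] = x[mask].mean(), x[mask].std() + 1e-6
        pi[k] = mask.mean()

print(mu, sigma, pi)   # should hover near (0, 3), (1, 1), (0.6, 0.4)
```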

    Automated quantification and evaluation of motion artifact on coronary CT angiography images

    Purpose: This study developed and validated a Motion Artifact Quantification algorithm to automatically quantify the severity of motion artifacts on coronary computed tomography angiography (CCTA) images. The algorithm was then used to develop a Motion IQ Decision method to automatically identify whether a CCTA dataset is of sufficient diagnostic image quality or requires further correction.
    Method: The developed Motion Artifact Quantification algorithm includes steps to identify the right coronary artery (RCA) regions of interest (ROIs), segment the vessel and shading artifacts, and calculate the motion artifact score (MAS) metric. The segmentation algorithms were verified against ground-truth manual segmentations, and further verified by comparing the MAS calculated from ground-truth segmentations with the MAS from algorithm-generated segmentations. The Motion IQ Decision algorithm first identifies slices with unsatisfactory image quality using a MAS threshold, then uses an artifact-length threshold to determine whether the degraded vessel segment is large enough to render the dataset nondiagnostic. An observer study on 30 clinical CCTA datasets was performed to obtain ground-truth decisions on whether each dataset was of sufficient image quality. A five-fold cross-validation was used to identify the thresholds and to evaluate the Motion IQ Decision algorithm.
    Results: The automated segmentation algorithms resulted in Dice coefficients of 0.84 for the segmented vessel regions and 0.75 for the segmented shading-artifact regions. The MAS calculated using the automated algorithm was within 10% of the values obtained using ground-truth segmentations. The MAS and artifact-length thresholds were determined by ROC analysis to be 0.6 and 6.25 mm in all folds. The Motion IQ Decision algorithm demonstrated 100% sensitivity, 66.7% ± 27.9% specificity, and a total accuracy of 86.7% ± 12.5% for identifying datasets in which the RCA required correction, and 91.3% sensitivity, 71.4% specificity, and a total accuracy of 86.7% for identifying CCTA datasets that need correction for any of the three main vessels.
    Conclusion: The Motion Artifact Quantification algorithm calculated accurate motion artifact scores.
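    The two-threshold decision rule is straightforward to express in code. Below is a minimal sketch using the MAS threshold (0.6) and artifact-length threshold (6.25 mm) reported above; the function shape, slice bookkeeping, and example values are assumptions, not the authors' implementation.

```python
# A slice is flagged when its motion artifact score (MAS) exceeds the MAS
# threshold; the dataset needs correction when the flagged vessel segment
# is longer than the artifact-length threshold.
MAS_THRESHOLD = 0.6
ARTIFACT_LENGTH_THRESHOLD_MM = 6.25

def needs_correction(slice_mas, slice_thickness_mm):
    """slice_mas: per-slice MAS values along the vessel, in order."""
    run_mm, worst_run_mm = 0.0, 0.0
    for mas in slice_mas:
        if mas > MAS_THRESHOLD:
            run_mm += slice_thickness_mm       # extend the degraded segment
            worst_run_mm = max(worst_run_mm, run_mm)
        else:
            run_mm = 0.0                       # segment broken by a good slice
    return worst_run_mm > ARTIFACT_LENGTH_THRESHOLD_MM

# Hypothetical RCA with 0.5 mm slices: a 7 mm degraded stretch triggers review.
print(needs_correction([0.2] * 10 + [0.8] * 14 + [0.3] * 10, 0.5))   # True
```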

    Laminar Cortical Dynamics of Visual Form and Motion Interactions During Coherent Object Motion Perception

    How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature-tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses.
    Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (BCS-02-35398, SBE-0354378); Office of Naval Research (N00014-95-1-0409, N00014-01-1-0624)
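    The anisotropic motion grouping stage can be caricatured in one dimension: a sparse feature-tracking signal spreads across space through repeated filtering with a kernel skewed along the motion direction, so unambiguous motion gradually captures ambiguous regions. A toy sketch with hypothetical kernel weights and iteration count, not the published model's parameters:

```python
import numpy as np

signal = np.zeros(40)
signal[5] = 1.0                                    # one feature-tracking signal

# Kernel mass skewed rightward, i.e. elongated along the motion direction.
kernel = np.array([0.05, 0.10, 0.35, 0.30, 0.20])
for _ in range(15):
    spread = np.convolve(signal, kernel, mode="same")
    # Propagate while keeping the source at full strength.
    signal = np.maximum(signal, spread / spread.max())

print(np.round(signal, 2))   # activity has spread mostly rightward of the source
```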