
    Motion sequence analysis in the presence of figural cues

    Published in final edited form as: Neurocomputing. 2015 January 5, 147: 485–491. The perception of 3-D structure in dynamic sequences is believed to be subserved primarily through the use of motion cues. However, real-world sequences contain many figural shape cues besides the dynamic ones. We hypothesize that if figural cues are perceptually significant during sequence analysis, then inconsistencies in these cues over time would lead to percepts of non-rigidity in sequences showing physically rigid objects in motion. We develop an experimental paradigm to test this hypothesis and present results with two patients with impairments in motion perception due to focal neurological damage, as well as two control subjects. Consistent with our hypothesis, the data suggest that figural cues strongly influence the perception of structure in motion sequences, even to the extent of inducing non-rigid percepts in sequences where motion information alone would yield rigid structures. Beyond helping to probe the issue of shape perception, our experimental paradigm might also serve as a possible perceptual assessment tool in a clinical setting. The authors wish to thank all observers who participated in the experiments reported here. This research and the preparation of this manuscript were supported by the National Institutes of Health RO1 NS064100 grant to LMV. (RO1 NS064100 - National Institutes of Health) Accepted manuscript.

    An intelligent real time 3D vision system for robotic welding tasks

    MARWIN is a top-level robot control system designed for automatic robot welding tasks. It extracts welding parameters and calculates robot trajectories directly from CAD models; these are then verified by real-time 3D scanning and registration. MARWIN's 3D computer vision provides a user-centred robot environment in which a task is specified by the user simply by confirming and/or adjusting suggested parameters and welding sequences. The focus of this paper is on describing a mathematical formulation for fast 3D reconstruction using structured light, together with the mechanical design and testing of the 3D vision system, and on showing how such technologies can be exploited in robot welding tasks.
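    The paper's exact formulation is not reproduced here, but the minimal sketch below illustrates the core geometric step in structured-light reconstruction: intersecting the camera ray through a detected stripe pixel with the known light plane. The function name and the calibration values are assumptions for illustration, not MARWIN's implementation.

    import numpy as np

    def triangulate_stripe_pixel(pixel, K, plane_normal, plane_d):
        """Intersect the camera ray through `pixel` with the projected light plane.

        pixel        : (u, v) coordinates of a point detected on the light stripe
        K            : 3x3 camera intrinsic matrix
        plane_normal : unit normal of the light plane in camera coordinates
        plane_d      : plane offset, so points X on the plane satisfy n.X + d = 0
        """
        u, v = pixel
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in the camera frame
        t = -plane_d / (plane_normal @ ray)             # ray/plane intersection parameter
        return t * ray                                  # 3D point in camera coordinates

    # Example with assumed (illustrative) calibration values
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    n = np.array([0.8, 0.0, -0.6])  # assumed light-plane normal
    d = 0.25                        # assumed plane offset in metres
    print(triangulate_stripe_pixel((350.0, 260.0), K, n, d))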

    Localization of magnetic sources underground by a data adaptive tomographic scanner

    A tomography method is proposed to image magnetic anomaly sources buried below a non-flat ground surface, by developing the expression of the total power associated with a measured magnetic field. By discretising the integral relating a static magnetic field to its source terms, the total power can be written as a sum of cross-correlation products between the magnetic field data set and the theoretical expression of the magnetic field generated by a source element of unitary strength. Then, applying Schwarz's inequality, an occurrence probability function is derived for imaging any distribution of magnetic anomaly sources in the subsurface. The tomographic procedure consists of scanning the half-space below the survey area with the unitary source and computing the occurrence probability function at the nodes of a regular grid within the half-space. The grid values are finally contoured in order to single out the zones with high probability of occurrence of buried magnetic anomaly sources. Synthetic and field examples are discussed to test the resolution power of the proposed tomography. Comment: 15 pages, 17 figures.
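    As an illustration of the scanning procedure described above, the sketch below evaluates a normalized cross-correlation between the measured field and the field of a unit-strength trial source at each grid node, which is the bounded form the occurrence probability takes after applying Schwarz's inequality. The `unit_source_field` argument is a placeholder for the paper's unit-source field expression, and the function name is assumed.

    import numpy as np

    def occurrence_probability(field, stations, grid_nodes, unit_source_field):
        """Scan the subsurface grid with a unit-strength trial source.

        field             : measured magnetic field values, shape (n_stations,)
        stations          : station coordinates, shape (n_stations, 3)
        grid_nodes        : trial source positions, shape (n_nodes, 3)
        unit_source_field : callable(stations, q) -> theoretical field at the
                            stations for a unit-strength source element at q
        Returns eta, one value in [-1, 1] per grid node; high values mark the
        zones where a buried anomaly source is likely, and are the values that
        would be contoured over the grid.
        """
        field_norm = np.sqrt(np.sum(field ** 2))
        eta = np.empty(len(grid_nodes))
        for i, q in enumerate(grid_nodes):
            a = unit_source_field(stations, q)
            eta[i] = np.dot(field, a) / (field_norm * np.sqrt(np.sum(a ** 2)))
        return eta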

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past: detecting obstacles with very thin structures, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to resolve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrate that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. Comment: Appeared at the IEEE CVPR 2017 Workshop on Embedded Vision.
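    The sketch below shows a generic two-view linear (DLT) triangulation of a single edge pixel, as a simplified stand-in for the edge-based 3D reconstruction described above; it is not the paper's visual-odometry pipeline, and the function name is assumed.

    import numpy as np

    def triangulate_edge_point(x1, x2, P1, P2):
        """Linearly triangulate one edge pixel observed in two views.

        x1, x2 : (u, v) coordinates of the same edge point in each image
        P1, P2 : 3x4 projection matrices of the two views
        Returns the 3D point in the common frame of the projection matrices.
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)   # least-squares solution is the last
        X = vt[-1]                    # right singular vector of A
        return X[:3] / X[3]           # dehomogenise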

    Developing Interaction 3D Models for E-Learning Applications

    Some issues concerning the development of interactive 3D models for e-learning applications are considered. Given that 3D data sets are normally large and interactive display demands high-performance computation, a natural solution is to place the computational burden on the client machine rather than on the server. Mozilla and Google opted for a combination of client-side technologies, JavaScript and OpenGL, to handle 3D graphics in a web browser (Mozilla 3D and O3D respectively). Based on the O3D model, core web technologies are considered and an example of the full process, involving the generation of a 3D model and its interactive visualization in a web browser, is described. The challenging issue of creating realistic 3D models of objects in the real world is discussed and a method based on line projection for fast 3D reconstruction is presented. The generated model is then visualized in a web browser. The experiments demonstrate that visualization of 3D data in a web browser can provide a quality user experience. Moreover, the development of web applications is facilitated by the O3D JavaScript extension, allowing web designers to focus on 3D content generation.
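    As a rough illustration of handing a reconstructed model to the browser, the sketch below exports a triangle mesh as flat JSON arrays that client-side JavaScript could load into a vertex buffer. The field names and layout are assumptions for illustration, not the O3D scene-graph format used in the paper.

    import json
    import numpy as np

    def export_mesh_json(vertices, faces, path):
        """Write a triangle mesh as flat JSON arrays for a web viewer.

        vertices : (N, 3) float array of x, y, z positions
        faces    : (M, 3) int array of vertex indices per triangle
        """
        payload = {
            "positions": np.asarray(vertices, dtype=float).ravel().tolist(),
            "indices": np.asarray(faces, dtype=int).ravel().tolist(),
        }
        with open(path, "w") as f:
            json.dump(payload, f)

    # Example: a single triangle
    export_mesh_json([[0, 0, 0], [1, 0, 0], [0, 1, 0]], [[0, 1, 2]], "mesh.json")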

    Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences

    Results: We present an application that enables the quantitative analysis of multichannel 5-D (x, y, z, t, channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of the automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach: an inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU, while the GPU handles 3-D visualization tasks. Conclusions: By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. There is a pressing need for visualization and analysis tools for 5-D live cell image data. We combine accurate unsupervised processes with an intuitive visualization of the results. Our validation interface allows each data set to be corrected to 100% accuracy, ensuring that downstream data analysis is accurate and verifiable. Our tool is the first to combine all of these aspects, leveraging the synergies obtained by utilizing validation information from stereo visualization to improve the low-level image processing tasks. Comment: BioVis 2014 conference.
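    As a simplified illustration of the tracking step only, the sketch below links segmented cell centroids between consecutive frames by minimum-cost assignment with a distance gate; it is not the application's actual segmentation, tracking, or lineaging algorithm, and the gating threshold is an assumption.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def link_frames(prev_centroids, next_centroids, max_dist=15.0):
        """Match segmented cell centroids between two consecutive frames.

        prev_centroids, next_centroids : (N, 3) and (M, 3) arrays of x, y, z
        max_dist                       : gating threshold in voxels (assumed)
        Returns (prev_index, next_index) pairs; cells left unmatched are
        candidates for track birth, death, or division in the lineage tree.
        """
        cost = np.linalg.norm(prev_centroids[:, None, :] - next_centroids[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]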

    Sketching space

    In this paper, we present a sketch modelling system which we call Stilton. The program resembles a desktop VRML browser, allowing a user to navigate a three-dimensional model in a perspective projection, or panoramic photographs that the program maps onto the scene as a 'floor' and 'walls'. We place an imaginary two-dimensional drawing plane in front of the user, and any geometric information that the user sketches onto this plane may be reconstructed to form solid objects through an optimization process. We show how the system can be used to reconstruct geometry from panoramic images, or to add new objects to an existing model. While panoramic imaging can greatly assist with some aspects of site familiarization and qualitative assessment of a site, without the addition of some foreground geometry it offers only limited utility in a design context. Therefore, we suggest that the system may be of use in 'just-in-time' CAD recovery of complex environments, such as shop floors or construction sites, by recovering objects through sketched overlays where other methods, such as automatic line retrieval, may be impossible. The result of using the system in this manner is the 'sketching of space': sketching out a volume around the user. Once the geometry has been recovered, the designer is free to quickly sketch design ideas into the newly constructed context or analyze the space around them. Although end-user trials have not yet been undertaken, we believe that this implementation may afford a user interface that is both accessible and robust, and that the rapid growth of pen-computing devices will further stimulate activity in this area.
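    A minimal sketch of the lifting step described above: back-projecting sketched pixels onto the imaginary drawing plane placed in front of the viewer. It illustrates only this projection, not Stilton's optimization-based reconstruction of solid objects, and the parameter names are assumptions.

    import numpy as np

    def lift_stroke(stroke_px, K, plane_normal, plane_point):
        """Back-project sketched pixels onto the 3D drawing plane.

        stroke_px    : (N, 2) pixel coordinates along the sketched stroke
        K            : 3x3 camera intrinsic matrix
        plane_normal : unit normal of the drawing plane (camera frame)
        plane_point  : any point lying on the drawing plane (camera frame)
        """
        K_inv = np.linalg.inv(K)
        lifted = []
        for u, v in stroke_px:
            ray = K_inv @ np.array([u, v, 1.0])                      # viewing ray
            t = (plane_normal @ plane_point) / (plane_normal @ ray)  # ray/plane intersection
            lifted.append(t * ray)
        return np.array(lifted)                                      # (N, 3) points on the plane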

    Engineering visualization utilizing advanced animation

    Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form, from project planning through documentation. Graphics displays let engineers see data represented dynamically, which permits the quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of the animation and video production processes is covered and future directions are proposed.

    Efficient utilization of graphics technology for space animation

    Efficient utilization of computer graphics technology has become a major investment in the work of aerospace engineers and mission designers. These new tools are having a significant impact on the development and analysis of complex tasks and procedures which must be prepared prior to actual space flight. Design and implementation of useful methods for applying these tools has evolved into a complex interaction of hardware, software, network, video and various user interfaces. Because few people can understand every aspect of this broad mix of technology, many specialists are required to build, train, maintain and adapt these tools to changing user needs. Researchers have set out to create systems where an engineering designer can easily work to achieve goals with a minimum of technological distraction. This was accomplished with high-performance flight simulation visual systems and supercomputer computational horsepower. Control throughout the creative process is judiciously applied while maintaining generality and ease of use to accommodate a wide variety of engineering needs.

    Compton Imaging of MeV Gamma-Rays with the Liquid Xenon Gamma-Ray Imaging Telescope (LXeGRIT)

    The Liquid Xenon Gamma-Ray Imaging Telescope (LXeGRIT) is the first realization of a liquid xenon time projection chamber for Compton imaging of MeV gamma-ray sources in astrophysics. By measuring the energy deposit and the three spatial coordinates of individual gamma-ray scattering points, the location of the source in the sky is inferred with Compton kinematics reconstruction. The angular resolution is determined by the detector's energy and spatial resolutions, as well as by the separation in space between the first and second scattering. The imaging response of LXeGRIT was established with gamma-rays from radioactive sources, during calibration and integration at the Columbia Astrophysics Laboratory, prior to the 2000 balloon flight mission. In this paper we describe in detail the various steps involved in imaging sources with LXeGRIT and present experimental results on angular resolution and other parameters which characterize its performance as a Compton telescope. Comment: 22 pages, 20 figures, submitted to NIM.
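    As an illustration of Compton kinematics reconstruction for an idealized two-site event, the sketch below computes the scatter angle from the energy deposits and the event-cone axis from the two interaction positions; the source direction then lies on this cone. It does not model the actual LXeGRIT analysis chain (event selection, interaction sequencing, resolution effects), and the function name is assumed.

    import numpy as np

    ELECTRON_REST_ENERGY_KEV = 511.0

    def compton_cone(e1_kev, e2_kev, pos1, pos2):
        """Cone of possible source directions for an idealized two-site event.

        e1_kev, e2_kev : energies deposited at the first and second interaction
        pos1, pos2     : 3D positions of the two interactions
        Returns (axis, theta): the source lies on a cone of half-angle theta
        about `axis`, with apex at the first interaction point.
        """
        e_total = e1_kev + e2_kev
        # Compton formula: cos(theta) = 1 - m_e c^2 (1/E_scattered - 1/E_total)
        cos_theta = 1.0 - ELECTRON_REST_ENERGY_KEV * (1.0 / e2_kev - 1.0 / e_total)
        if not -1.0 <= cos_theta <= 1.0:
            raise ValueError("energies inconsistent with Compton kinematics")
        axis = np.asarray(pos1, dtype=float) - np.asarray(pos2, dtype=float)
        axis /= np.linalg.norm(axis)
        return axis, np.arccos(cos_theta)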