9,328 research outputs found

    A Minimalist Approach to Type-Agnostic Detection of Quadrics in Point Clouds

    This paper proposes a segmentation-free, automatic and efficient procedure to detect general geometric quadric forms in point clouds, where clutter and occlusions are inevitable. Our everyday world is dominated by man-made objects which are designed using 3D primitives (such as planes, cones, spheres, cylinders, etc.). These objects are also omnipresent in industrial environments. This gives rise to the possibility of abstracting 3D scenes through primitives, thereby positioning these geometric forms as an integral part of perception and high-level 3D scene understanding. As opposed to the state of the art, where a tailored algorithm treats each primitive type separately, we propose to encapsulate all types in a single robust detection procedure. At the center of our approach lies a closed-form 3D quadric fit, operating in both primal and dual spaces and requiring as few as 4 oriented points. Around this fit, we design a novel, local null-space voting strategy to reduce the 4-point case to 3. Voting is coupled with RANSAC and makes our algorithm orders of magnitude faster than its conventional counterparts. This is the first method capable of performing generic cross-type multi-object primitive detection in difficult scenes. Results on synthetic and real datasets support the validity of our method. Comment: Accepted for publication at CVPR 201
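
    For a rough idea of what a closed-form algebraic quadric fit looks like, here is a minimal sketch: a plain least-squares fit over the 10 quadric monomials, recovered as the null space of a design matrix. It uses unoriented points (and therefore needs at least 9 of them); it is not the paper's primal/dual oriented-point formulation that gets by with 4, and the function names are ours.

import numpy as np

def fit_quadric(points):
    """points: (N, 3) array, N >= 9. Returns the 10 coefficients of
    q0*x^2 + q1*y^2 + q2*z^2 + q3*xy + q4*xz + q5*yz + q6*x + q7*y + q8*z + q9 = 0,
    determined up to scale."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    M = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, np.ones_like(x)])
    # The coefficient vector spans the (approximate) null space of M: take the
    # right singular vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]

def quadric_residual(coeffs, points):
    """Algebraic residual |q(p)| per point, usable as a RANSAC-style inlier test."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    M = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, np.ones_like(x)])
    return np.abs(M @ coeffs)

# Usage: points sampled on a unit sphere should yield near-zero residuals.
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(quadric_residual(fit_quadric(pts), pts).max())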

    CAD-model-based vision for space applications

    A pose acquisition system operating in space must be able to perform well in a variety of different applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict the features visible from a given viewpoint using the vision models, construct view classes representing views of the objects, and use the view class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.
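
    As a hedged illustration of the final step, determining the pose of an object from a single image, the sketch below recovers a pose from 3D feature points taken off a CAD model and their matched 2D detections, using a standard PnP solver (OpenCV's solvePnP). The point values and camera intrinsics are invented for the example; this is not the system described above.

import numpy as np
import cv2

# 3D feature points in the object's CAD coordinate frame (meters); made-up values.
model_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.1, 0.1, 0.0],
                      [0.0, 0.1, 0.0], [0.0, 0.0, 0.1], [0.1, 0.0, 0.1]])

# Matched 2D detections of those features in the image (pixels); made-up values.
image_pts = np.array([[320.0, 240.0], [400.0, 238.0], [402.0, 170.0],
                      [322.0, 168.0], [330.0, 300.0], [410.0, 298.0]])

# Pinhole intrinsics (fx, fy, cx, cy) from a crude calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # rotation from object frame to camera frame
    print("R =\n", R, "\nt =", tvec.ravel())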

    Influence of Slippery Pacemaker Leads on Lead-Induced Venous Occlusion

    The use of medical devices such as pacemakers and implantable cardiac defibrillators has become commonplace to treat arrhythmias. Pacing leads with electrodes are used to send electrical pulses to the heart to treat either abnormally slow heart rates or abnormal rhythms. Lead-induced vessel occlusion, which is commonly seen after placement of pacemaker or implantable cardiac defibrillator leads, may result in lead malfunction and/or superior vena cava syndrome, and makes lead extraction difficult. The association between the anatomic locations at risk for thrombosis and regions of venous stasis has been reported previously. Computational studies reveal obvious flow stasis in the proximity of the leads, due to the no-slip boundary condition imposed on the lead surface. With recent technologies capable of creating slippery surfaces that can repel complex fluids, including blood, we explore computationally how local flow structures may be altered in the regions around the leads when the no-slip boundary condition on the lead surface is relaxed using various slip lengths. The slippery surface is modeled by a Navier slip boundary condition. Analytical studies are performed on idealized geometries, which are then used to validate numerical simulations. A patient-specific model is constructed and studied numerically to investigate the influence of the slippery surface in a more physiologically realistic environment. The findings are used to evaluate the possibility of reducing the risk of lead-induced thrombosis and occlusion by implementing a slippery surface condition on the leads.
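
    For reference, the Navier slip condition mentioned above is commonly written as follows (notation ours, not necessarily the paper's): no flow through the surface, plus a tangential velocity proportional to the tangential shear rate at the wall,

\mathbf{u} \cdot \mathbf{n} = 0, \qquad \mathbf{u}_t = L_s \,\frac{\partial \mathbf{u}_t}{\partial n} \quad \text{on the lead surface,}

    where n is the outward surface normal, u_t the tangential velocity, and L_s >= 0 the slip length; L_s = 0 recovers the classical no-slip condition, while larger values of L_s allow more tangential slip at the lead surface.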

    Object representation and recognition

    One of the primary functions of the human visual system is object recognition, an ability that allows us to relate the visual stimuli falling on our retinas to our knowledge of the world. For example, object recognition allows you to use knowledge of what an apple looks like to find it in the supermarket, to use knowledge of what a shark looks like to swim in th

    Generalized Hyper-cylinders: a Mechanism for Modeling and Visualizing N-D Objects

    The display of surfaces and solids has usually been restricted to the domain of scientific visualization; however, little work has been done on the visualization of surfaces and solids of dimensionality higher than three or four. Indeed, most high-dimensional visualization focuses on the display of data points. However, the ability to effectively model and visualize higher-dimensional objects such as clusters and patterns would be quite useful in studying their shapes, relationships, and changes over time. In this paper we describe a method for the description, extraction, and visualization of N-dimensional surfaces and solids. The approach is to extend generalized cylinders, an object representation used in geometric modeling and computer vision, to arbitrary dimensionality, resulting in what we term Generalized Hyper-cylinders (GHCs). A basic GHC consists of two N-dimensional hyper-spheres connected by a hyper-cylinder whose shape at any point along the cylinder is determined by interpolating between the endpoint shapes. More complex GHCs involve alternate cross-section shapes and curved spines connecting the ends. Several algorithms for constructing or extracting GHCs from multivariate data sets are proposed. Once extracted, the GHCs can be visualized using a variety of projection techniques and methods to convey cross-section shapes.
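
    A minimal sketch of the basic GHC described above, under the stated assumptions of spherical cross-sections, a straight spine, and linear interpolation of the radius between the two endpoint hyper-spheres (class and method names are ours):

from dataclasses import dataclass
import numpy as np

@dataclass
class BasicGHC:
    c0: np.ndarray   # center of the first endpoint hyper-sphere, shape (N,)
    r0: float        # its radius
    c1: np.ndarray   # center of the second endpoint hyper-sphere, shape (N,)
    r1: float        # its radius

    def cross_section(self, t):
        """Center and radius of the cross-section at spine parameter t in [0, 1]."""
        return (1.0 - t) * self.c0 + t * self.c1, (1.0 - t) * self.r0 + t * self.r1

    def contains(self, p):
        """Membership test: project p onto the spine segment and compare its
        distance to the interpolated cross-section radius."""
        axis = self.c1 - self.c0
        t = np.clip(np.dot(p - self.c0, axis) / np.dot(axis, axis), 0.0, 1.0)
        center, radius = self.cross_section(t)
        return float(np.linalg.norm(p - center)) <= radius

# Usage: a 5-dimensional GHC whose cross-section shrinks along the spine.
ghc = BasicGHC(c0=np.zeros(5), r0=2.0, c1=np.full(5, 4.0), r1=0.5)
print(ghc.contains(np.full(5, 2.0)))   # a point near the middle of the spine -> True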

    Reliable vision-guided grasping

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery: when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
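
    A hedged sketch of the information-reduction idea (region-of-interest windows plus feature motion prediction) and the fall-back to a slower, more reliable routine. The helper names and the constant-velocity predictor are ours, not necessarily what the described system uses.

import numpy as np

def predict_next(prev_pos, curr_pos):
    """Constant-velocity prediction of a feature's next (row, col) location."""
    return 2.0 * np.asarray(curr_pos, dtype=float) - np.asarray(prev_pos, dtype=float)

def roi_window(image, center, half_size=24):
    """Crop a square region-of-interest window around the predicted location."""
    r, c = np.round(center).astype(int)
    r0, c0 = max(r - half_size, 0), max(c - half_size, 0)
    return image[r0:r + half_size, c0:c + half_size], (r0, c0)

def track_feature(image, prev_pos, curr_pos, find_fast, find_reliable):
    """Fast windowed search first; if it fails, recover with the slower,
    more reliable whole-image routine (the next level of the hierarchy)."""
    predicted = predict_next(prev_pos, curr_pos)
    window, (r0, c0) = roi_window(image, predicted)
    hit = find_fast(window)                 # placeholder detector: (row, col) or None
    if hit is not None:
        return hit[0] + r0, hit[1] + c0     # map back to full-image coordinates
    return find_reliable(image)             # slower, more reliable fallback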