14 research outputs found
3D analysis of tooth surfaces to aid accurate brace placement
Master's thesis (Master of Engineering)
Representations for Cognitive Vision: A Review of Appearance-Based, Spatio-Temporal, and Graph-Based Approaches
The emerging discipline of cognitive vision requires a proper representation of visual information, including spatial and temporal relationships, scenes, events, semantics, and context. This review article summarizes existing representational schemes in computer vision which might be useful for cognitive vision, and discusses promising future research directions. The various approaches are categorized into appearance-based, spatio-temporal, and graph-based representations for cognitive vision. While the representation of objects has been covered extensively in computer vision research, both from a reconstruction and from a recognition point of view, cognitive vision will also require new ideas on how to represent scenes. We introduce new concepts for scene representations and discuss how these might be efficiently implemented in future cognitive vision systems.
Multimodal Three-Dimensional Scene Reconstruction: The Gaussian Fields Framework
The focus of this research is on building 3D representations of real-world scenes and objects using different imaging sensors: primarily range acquisition devices (such as laser scanners and stereo systems) that allow the recovery of 3D geometry, and multi-spectral image sequences, including visual and thermal IR images, that provide additional scene characteristics. The crucial technical challenge that we addressed is the automatic point-set registration task. In this context our main contribution is the development of an optimization-based method at the core of which lies a unified criterion that solves simultaneously for the dense point correspondence and transformation recovery problems. The new criterion has a straightforward expression in terms of the datasets and the alignment parameters and was used primarily for 3D rigid registration of point-sets. However, it also proved useful for feature-based multimodal image alignment. We derived our method from simple Boolean matching principles by approximation and relaxation. One of the main advantages of the proposed approach, as compared to the widely used class of Iterative Closest Point (ICP) algorithms, is convexity in the neighborhood of the registration parameters and continuous differentiability, allowing for the use of standard gradient-based optimization techniques. Physically, the criterion is interpreted in terms of a Gaussian force field exerted by one point-set on the other. This formulation proved useful for controlling and increasing the region of convergence, and hence allows for more autonomy in correspondence tasks. Furthermore, the criterion can be computed with linear complexity using recently developed Fast Gauss Transform numerical techniques. In addition, we introduced a new local feature descriptor, derived from visual saliency principles, which significantly enhanced the performance of the registration algorithm.
The resulting technique was subjected to a thorough experimental analysis that highlighted its strengths and showed its limitations. Our current applications are in the field of 3D modeling for inspection, surveillance, and biometrics. However, since this matching framework can be applied to any type of data that can be represented as N-dimensional point-sets, the scope of the method is shown to reach many more pattern analysis applications.
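The Gaussian-field criterion is simple enough to illustrate with a brief sketch. This is not the authors' implementation: it recovers only a pure translation, evaluates the criterion by brute-force O(NM) pairwise sums rather than the Fast Gauss Transform, and the step size and σ are arbitrary illustrative choices.

```python
import numpy as np

def gaussian_fields_energy(moving, fixed, sigma=1.0):
    """Registration criterion: mean of Gaussians of all pairwise
    distances between the two point-sets. Unlike ICP's hard
    closest-point assignment, it is smooth and differentiable."""
    d2 = ((moving[:, None, :] - fixed[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / sigma ** 2).mean()

def register_translation(moving, fixed, sigma=1.0, lr=2.0, steps=500):
    """Recover a translation by gradient ascent on the criterion."""
    t = np.zeros(moving.shape[1])
    for _ in range(steps):
        diff = (moving + t)[:, None, :] - fixed[None, :, :]   # N x M x d
        w = np.exp(-(diff ** 2).sum(axis=-1) / sigma ** 2)    # Gaussian weights
        # Analytic gradient of the mean energy w.r.t. the translation
        grad = (-2.0 / sigma ** 2) * (w[..., None] * diff).mean(axis=(0, 1))
        t += lr * grad
    return t
```

Because the energy is differentiable everywhere, standard gradient steps suffice; no explicit correspondences are ever computed.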
Efficient Retrieval and Categorization for 3D Models based on Bag-of-Words Approach
Ph.DDOCTOR OF PHILOSOPH
Object Manipulation and Grip-Force Control Using Tactile Sensors (original title: Objekt-Manipulation und Steuerung der Greifkraft durch Verwendung von Taktilen Sensoren)
This dissertation describes a new type of tactile sensor and an improved version of the dynamic tactile sensing approach that can provide a regularly updated and accurate estimate of the minimum applied force for use in the control of gripper manipulation. A pre-slip sensing algorithm is proposed and implemented on a two-finger robot gripper. An algorithm that can discriminate between types of contact surface and recognize objects at the contact stage is also proposed. A technique for recognizing objects using tactile sensor arrays, and a method based on quadric surface parameters for classifying grasped objects, are described. Tactile arrays can recognize surface types on contact, making it possible for a tactile system to recognize translation, rotation, and scaling of an object independently of one another.
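The minimum-force idea can be sketched with a generic vibration-based slip-detection loop. This is only an illustration of the standard approach, not the dissertation's algorithm; the window length, threshold, and force increment are made-up values.

```python
import numpy as np

def min_grip_force(tactile_trace, f0=1.0, df=0.1, window=5, slip_thresh=0.02):
    """Generic pre-slip control loop: monitor the high-frequency content
    of a tactile signal and raise the grip force by a small increment
    whenever incipient slip (micro-vibration) is detected, so the applied
    force stays just above the minimum needed to hold the object."""
    force = f0
    forces = []
    for k in range(window, len(tactile_trace)):
        recent = tactile_trace[k - window:k]
        if np.std(recent) > slip_thresh:   # micro-vibrations signal slip
            force += df
        forces.append(force)
    return forces
```

A real controller would also decay the force when no slip occurs for a while, which is omitted here for brevity.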
Evaluating 3D local descriptors and recursive filtering schemes for LIDAR-based uncooperative relative space navigation
We propose a light detection and ranging (LIDAR)-based relative navigation scheme suited to uncooperative relative space navigation applications. Our technique combines the encoding power of three-dimensional (3D) local descriptors, matched using a correspondence grouping scheme, with the robust rigid-transformation estimation capability of the proposed adaptive recursive filtering techniques. Trials evaluate several current state-of-the-art 3D local descriptors and recursive filtering techniques on a number of real and simulated scenarios involving various space objects, including satellites and asteroids. Results demonstrate that the proposed architecture affords a 50% odometry accuracy improvement over current solutions at a low computational burden. From our trials we conclude that the 3D descriptor histogram of distances short (HoD-S), combined with adaptive αβ filtering, is the most appealing combination for the majority of the scenarios evaluated, as it pairs high-quality odometry with a low processing burden.
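The αβ filter at the heart of the best-performing combination is a fixed-gain simplification of a Kalman filter. The adaptive variant in the paper tunes the gains online; the fixed-gain core, shown here with illustrative gains and time step, looks like this:

```python
def alpha_beta_track(measurements, dt=0.1, alpha=0.5, beta=0.2):
    """Fixed-gain alpha-beta filter: constant-velocity prediction,
    then correction of position (alpha gain) and velocity (beta gain)
    from the measurement residual."""
    x, v = measurements[0], 0.0
    estimates = [x]
    for z in measurements[1:]:
        x_pred = x + v * dt          # predict with constant-velocity model
        r = z - x_pred               # innovation (measurement residual)
        x = x_pred + alpha * r       # position correction
        v = v + (beta / dt) * r      # velocity correction
        estimates.append(x)
    return estimates
```

The appeal for onboard use is evident in the loop body: two multiplications and two additions per update, with no covariance propagation.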
Human Machine Cooperative Telerobotics
The remediation and deactivation and decommissioning (D&D) of nuclear waste storage tanks using telerobotics is one of the most challenging tasks faced in environmental cleanup. A number of tanks have reached the end of their design life and some of them leak, and the unstructured, uncertain, and radioactive environment makes the work inefficient and expensive. However, teleoperated execution takes ten to one hundred times as long as direct contact, with an associated loss in quality. Thus, considerable effort has been expended to improve the quality and efficiency of telerobotics by incorporating functions such as planning, trajectory generation, vision, and 3-D modeling into teleoperation and robotic control. One example is the Robot Task Space Analyzer (RTSA), developed at the Robotics and Electromechanical Systems Laboratory (REMSL) at the University of Tennessee in support of the D&D robotic work at the Oak Ridge National Laboratory and the National Energy Technology Laboratory. This system builds 3-D models of the area of interest in task space through automatic image processing and/or human-interactive manual modeling. The RTSA generates a task plan file that describes the execution of a task, including manipulator and tooling motions. The high-level controller of the manipulator interprets the task plan file and executes the task automatically. Thus, if the environment is not highly unstructured, a tooling task that interacts with the environment will be executed in the autonomous mode. The RTSA therefore not only increases system efficiency but also improves system reliability, because the operator acts as a backstop for safe operation after the 3-D models and task plan files are generated. However, unstructured environments and tasks still require the telerobot to operate in teleoperation mode for successful task execution.
The inefficiency of the teleoperation mode led to the research described here as Human Machine Cooperative Telerobotics (HMCTR). HMCTR combines the telerobot with robotic control techniques to improve system efficiency and reliability in teleoperation mode. In this topical report, the control strategy, configuration, and experimental results of HMCTR, which modifies and limits the commands of the human operator so that they follow predefined constraints in teleoperation mode, are described. The current implementation is a laboratory-scale system that will be incorporated into an engineering-scale system at the Oak Ridge National Laboratory in the future.
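The command-limiting idea can be illustrated with a toy sketch. The axis-aligned workspace box below is a hypothetical stand-in for the report's predefined constraints, and the point-mass model stands in for the real manipulator commands.

```python
import numpy as np

def limit_command(pos, vel_cmd, lower, upper, dt):
    """Modify the operator's commanded Cartesian velocity so the next
    position stays inside an axis-aligned constraint box: integrate the
    command one step ahead, clip it to the box, and convert the clipped
    displacement back to a velocity."""
    next_pos = np.clip(pos + vel_cmd * dt, lower, upper)
    return (next_pos - pos) / dt
```

Commands pointing along the boundary pass through unchanged; only the component that would violate a constraint is removed, so the operator retains control everywhere inside the box.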
Efficient Multiple Model Recognition in Cluttered 3-D Scenes
We present a 3-D shape-based object recognition system for simultaneous recognition of multiple objects in scenes containing clutter and occlusion. Recognition is based on matching surfaces by matching points using the spin-image representation. The spin-image is a data-level shape descriptor that is used to match surfaces represented as surface meshes. We present a compression scheme for spin images that results in efficient multiple object recognition, which we verify with results showing the simultaneous recognition of multiple objects from a library of 20 models. Furthermore, we demonstrate the robust performance of recognition in the presence of clutter and occlusion through analysis of recognition trials on 100 scenes.
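The spin-image construction itself is compact enough to sketch: each surface point is projected into cylindrical coordinates (α, β) about an oriented basis point and binned into a 2-D histogram, and images are then compared, for example by normalized correlation. The bin count and support size below are arbitrary illustrative choices.

```python
import numpy as np

def spin_image(p, n, points, n_bins=16, support=1.0):
    """Spin image at oriented point (p, n): alpha is the radial distance
    from the line through p along the unit normal n, beta the signed
    height along n; (alpha, beta) pairs are binned into a 2-D histogram."""
    d = points - p
    beta = d @ n                                              # height along normal
    alpha = np.sqrt(np.maximum((d * d).sum(axis=1) - beta ** 2, 0.0))
    img, _, _ = np.histogram2d(beta, alpha, bins=n_bins,
                               range=[[-support, support], [0.0, support]])
    return img

def spin_correlation(a, b):
    """Normalized (Pearson) correlation between two spin images."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

Because the coordinates are measured relative to the local normal, the descriptor is invariant to rigid motion of the object, which is what makes point-wise matching viable in cluttered scenes.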