
    Real-time 3D reconstruction of non-rigid shapes with a single moving camera

    This paper describes a real-time sequential method to simultaneously recover the camera motion and the 3D shape of deformable objects from a calibrated monocular video. For this purpose, we consider the Navier-Cauchy equations of 3D linear elasticity, solved by finite elements, to model the time-varying shape per frame. These equations are embedded in an extended Kalman filter, resulting in a sequential Bayesian estimation approach. We represent the shape, with unknown material properties, as a combination of elastic elements whose nodal points correspond to salient points in the image. The global rigidity of the shape is encoded by a stiffness matrix, computed by assembling each of these elements. With this piecewise model, we can linearly relate the 3D displacements to the 3D acting forces that cause the object deformation, which are assumed to be normally distributed. While standard finite-element-method techniques require imposing boundary conditions to solve the resulting linear system, in this work we eliminate this requirement by modeling the compliance matrix with a generalized pseudoinverse that enforces a pre-fixed rank. Our framework also ensures surface continuity without the need for a post-processing step to stitch the piecewise reconstructions into a globally smooth shape. We present experimental results using both synthetic and real videos for scenarios ranging from isometric to elastic deformations. We also show the consistency of the estimation with respect to 3D ground-truth data, include several experiments assessing robustness against artifacts, and finally provide an experimental validation of real-time performance at frame rate for small maps.
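    As a minimal sketch in our own notation (the symbols below are ours, not taken from the paper), the assembled finite-element model reduces to a linear force-displacement relation whose compliance is a rank-limited generalized pseudoinverse, which is what removes the need for explicit boundary conditions:

    \[ \mathbf{f} = \mathbf{K}\,\mathbf{u}, \qquad \mathbf{u} \approx \mathbf{K}^{+}_{r}\,\mathbf{f}, \qquad \mathbf{f} \sim \mathcal{N}(\mathbf{0},\, \sigma^{2}\mathbf{I}) \]

    Here K is the stiffness matrix assembled from the elastic elements, u stacks the 3D nodal displacements, f collects the normally distributed acting forces, and K^{+}_{r} is the generalized pseudoinverse truncated to a pre-fixed rank r.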

    Object Manipulation and Grip Force Control Using Tactile Sensors (Objekt-Manipulation und Steuerung der Greifkraft durch Verwendung von Taktilen Sensoren)

    This dissertation describes a new type of tactile sensor and an improved version of the dynamic tactile sensing approach that can provide a regularly updated and accurate estimate of the minimum applied force for use in controlling gripper manipulation. A pre-slip sensing algorithm is proposed and implemented on a two-finger robot gripper. An algorithm that can discriminate between types of contact surface and recognize objects at the contact stage is also proposed. A technique for recognizing objects using tactile sensor arrays, and a method based on quadric surface parameters for classifying grasped objects, are described. Tactile arrays can recognize surface types on contact, making it possible for a tactile system to recognize translation, rotation, and scaling of an object independently.
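    The quadric-surface idea can be made concrete with a minimal, hypothetical sketch (array and function names are ours; this is not the dissertation's exact formulation): fit a quadric to the tactile pressure image by least squares and use the coefficients as shape features for classification.

    # Hedged sketch: quadric surface parameters of a tactile pressure image as a
    # compact shape descriptor. Illustrative only, not the dissertation's method.
    import numpy as np

    def quadric_features(pressure):
        """Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to a 2D tactile array
        by least squares and return the coefficient vector (a, b, c, d, e, f)."""
        rows, cols = pressure.shape
        yy, xx = np.mgrid[0:rows, 0:cols]
        x, y, z = xx.ravel(), yy.ravel(), pressure.ravel()
        A = np.column_stack([x**2, y**2, x*y, x, y, np.ones_like(x)])
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        return coeffs

    # A flat contact and a dome-like contact give different curvature terms (a, b),
    # which a simple classifier could separate.
    yy, xx = np.mgrid[0:8, 0:8]
    flat = np.ones((8, 8))
    dome = 1.0 - ((xx - 3.5)**2 + (yy - 3.5)**2) / 25.0
    print(quadric_features(flat)[:2])   # curvature terms near zero
    print(quadric_features(dome)[:2])   # negative curvature terms (~ -0.04)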

    Embedded Electronic Systems for Electronic Skin Applications

    The advances in sensor devices are potentially providing new solutions to many applications, including prosthetics and robotics. Endowing an upper-limb prosthesis with tactile sensors (electronic/sensitive skin) can provide tactile sensory feedback to the amputee. In this regard, the prosthetic device is meant to be equipped with a tactile sensing system allowing the user to receive tactile feedback about objects and contact surfaces. Thus, an embedded tactile sensing system is required, with wearable sensors covering wide areas of the prosthesis. However, embedding the sensing system involves a set of challenges in terms of power consumption, data processing, real-time response and design scalability (the e-skin may include a large number of tactile sensors). The tactile sensing system is constituted of: (i) a tactile sensor array, (ii) an interface electronic circuit, (iii) an embedded processing unit, and (iv) a communication interface to transmit tactile data. The objective of the thesis is to develop an efficient embedded tactile sensing system targeting e-skin applications (e.g. prosthetics) by: 1) developing a low-power and miniaturized interface electronics circuit operating in real time; 2) proposing an efficient algorithm for embedded tactile data processing, which affects the system's time latency and power consumption; 3) implementing an efficient communication channel/interface suitable for the large amount of data generated by a large number of sensors.

    Most of the interface electronics for tactile sensing systems proposed in the literature are composed of signal conditioning and commercial data acquisition (DAQ) devices. However, these devices are bulky (PC-based) and thus not suitable for portable prosthetics from the size, power consumption and scalability points of view. Regarding tactile data processing, some works have exploited machine learning methods for extracting meaningful information from tactile data. However, embedding these algorithms poses challenges because of 1) the high amount of data to be processed, which significantly affects real-time functionality, and 2) the complex processing tasks, which impose a burden in terms of power consumption. On the other hand, the literature shows a lack of studies addressing data transfer in tactile sensing systems, and dealing with a large number of sensors will pose challenges on the communication bandwidth and reliability. Therefore, this thesis exploits three approaches.

    1) Developing low-power and miniaturized Interface Electronics (IE) capable of interfacing and acquiring signals from a large number of tactile sensors in real time. We developed a portable IE system based on a low-power ARM microcontroller and a DDC232 A/D converter that handles an array of 32 tactile sensors. When touch is applied to the sensors, the IE acquires and pre-processes the sensor signals at low power consumption, achieving a battery lifetime of about 22 hours. We then assessed the functionality of the IE by carrying out electrical and electromechanical characterization experiments to monitor the response of the interface electronics with PVDF-based piezoelectric sensors. The results of these tests validate the correct functionality of the proposed system. In addition, we implemented filtering methods on the IE that reduced the effect of noise in the system. Furthermore, we evaluated the proposed IE by integrating it into a tactile sensory feedback system, showing effective delivery of tactile data to the user. The proposed system overcomes similar state-of-the-art solutions by handling a higher number of input channels while maintaining real-time functionality.

    2) Optimizing and implementing a tensorial-based machine learning algorithm for touch-modality classification on an embedded Zynq System-on-Chip (SoC). The algorithm is based on a Support Vector Machine classifier that discriminates between three input touch modalities: "brushing", "rolling" and "sliding". We introduced an efficient algorithm minimizing the hardware implementation complexity in terms of number of operations and memory storage, which directly affect time latency and power consumption. With respect to the original algorithm, the proposed approach, implemented on the Zynq SoC, reduced the number of operations per inference from 545 M-ops to 18 M-ops and the memory storage from 52.2 KB to 1.7 KB. Moreover, the proposed method speeds up the inference time by a factor of 43.7 at a cost of only 2% loss in accuracy, enabling the algorithm to run on an embedded processing unit and to extract tactile information in real time.

    3) Implementing a robust and efficient data transfer channel to transfer aggregated data at a high transmission data rate and low power consumption. In this approach, we proposed and demonstrated a tactile sensory feedback system based on an optical communication link for prosthetic applications. The optical link features low power and a wide transmission bandwidth, which makes the feedback system suitable for a large number of tactile sensors. The low-power transmission is due to the employed UWB-based optical modulation. We implemented a system prototype consisting of digital transmitter and receiver boards and acquisition circuits to interface 32 piezoelectric sensors. We then evaluated the system performance by measuring, processing and transmitting the data of the 32 piezoelectric sensors through the optical link at a 100 Mbps data rate and a communication energy consumption of 50 pJ/bit. Experimental results validated the functionality and demonstrated the real-time operation of the proposed sensory feedback system.
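    As a rough, hedged sketch of the arithmetic behind the third approach (the 32 sensors, the 100 Mbps link rate and the 50 pJ/bit figure come from the abstract; the per-sensor sampling rate and word length are illustrative assumptions):

    # Back-of-the-envelope check: sensor throughput vs. optical link capacity and
    # the transmitter power implied by the stated 50 pJ/bit.
    NUM_SENSORS = 32           # stated
    SAMPLE_RATE_HZ = 10_000    # assumed per-sensor sampling rate
    BITS_PER_SAMPLE = 20       # assumed A/D word length
    LINK_RATE_BPS = 100e6      # 100 Mbps optical link (stated)
    ENERGY_PER_BIT_J = 50e-12  # 50 pJ/bit (stated)

    raw_rate_bps = NUM_SENSORS * SAMPLE_RATE_HZ * BITS_PER_SAMPLE
    print(f"raw sensor throughput: {raw_rate_bps / 1e6:.1f} Mbps")                   # 6.4 Mbps
    print(f"link utilization:      {100 * raw_rate_bps / LINK_RATE_BPS:.1f} %")      # 6.4 %
    print(f"tx power at full rate: {LINK_RATE_BPS * ENERGY_PER_BIT_J * 1e3:.1f} mW") # 5.0 mW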

    Tactile Sensing for Robotic Applications

    This chapter provides an overview of tactile sensing in robotics. It attempts to answer three basic questions: What is meant by tactile sensing? Why is tactile sensing important? How is tactile sensing achieved? The chapter is organized to answer these questions in sequence. Tactile sensing has often been equated with force sensing, which is not wholly true; to clarify such misconceptions, tactile sensing is defined in section 2. Why tactile sensing is important for robotics, and which parameters need to be measured by tactile sensors to successfully perform various tasks, are discussed in section 3. An overview of how tactile sensing has been achieved is given in section 4, where a number of technologies and transduction methods used to improve the tactile sensing capability of robotic devices are discussed. The lack of any tactile analog to Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) optical arrays has often been cited as one of the reasons for the slow development of tactile sensing vis-à-vis other sense modalities such as vision. Our own contribution, the development of tactile sensing arrays using piezoelectric polymers and silicon micromachining, is an attempt toward a tactile analog of CMOS optical arrays. The first-phase implementation of these tactile sensing arrays is discussed in section 5. Section 6 concludes the chapter with a brief discussion of the present status of tactile sensing and the challenges that remain to be solved.
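    Since the chapter's own contribution builds on piezoelectric (PVDF) polymer arrays, here is a minimal, hedged sketch of the transduction arithmetic for a single taxel read through a charge amplifier; the coefficient and capacitance values are illustrative assumptions, not figures from the chapter.

    # Hedged sketch: normal force estimated from a PVDF taxel's charge output,
    # F ≈ Q / d33 with Q = Cf * Vout. Constants are nominal and device-dependent.
    D33_C_PER_N = 33e-12   # assumed magnitude of the PVDF piezoelectric coefficient
    CF_FARAD = 1e-9        # assumed charge-amplifier feedback capacitance (1 nF)

    def force_from_voltage(v_out: float) -> float:
        """Convert charge-amplifier output voltage (V) to an estimated force (N)."""
        charge = CF_FARAD * v_out        # Q = Cf * Vout
        return charge / D33_C_PER_N      # F = Q / d33

    print(f"{force_from_voltage(0.1):.2f} N")  # ~3.03 N for a 100 mV output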

    Length and orientation constancy learning in 2-dimensions with auditory sensory substitution: the importance of self-initiated movement

    A subset of sensory substitution (SS) devices translate images into sounds in real time using a portable computer, camera, and headphones. Perceptual constancy is the key to understanding both functional and phenomenological aspects of perception with SS. In particular, constancies enable object externalization, which is critical to the performance of daily tasks such as obstacle avoidance and locating dropped objects. In order to improve daily task performance by the blind, and to determine whether constancies can be learned with SS, we trained blind (N = 4) and sighted (N = 10) individuals on length and orientation constancy tasks for 8 days at about 1 h per day with an auditory SS device. We found that both blind and sighted performance on the constancy tasks significantly improved and reached above-chance levels. Furthermore, dynamic interaction with the stimuli was critical to constancy learning with the SS device. In particular, improved task learning significantly correlated with the number of spontaneous left-right head-tilting movements while learning length constancy. The improvement from previous head-tilting trials even transferred to a no-head-tilt condition. Therefore, not only can SS learning be improved by encouraging head movement while learning, but head movement may also play an important role in learning constancies in the sighted. In addition, the learning of constancies by the blind and sighted with SS provides evidence that SS may be able to restore vision-like functionality to the blind in daily tasks.
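    For readers unfamiliar with how such devices sonify images, the following is a minimal, generic sketch of one common image-to-sound mapping (column position to time, row to pitch, brightness to loudness); it is an illustration under our own assumptions and not necessarily the exact encoding of the device used in this study.

    # Generic image-to-sound sweep: scan columns left to right, map rows to
    # sinusoid frequencies, and scale each tone by pixel brightness.
    import numpy as np

    def image_to_sound(img, duration_s=1.0, sample_rate=22050,
                       f_low=500.0, f_high=5000.0):
        """img: 2D array in [0, 1], row 0 = top. Returns a mono waveform."""
        rows, cols = img.shape
        freqs = np.geomspace(f_high, f_low, rows)      # top rows -> higher pitch
        samples_per_col = int(duration_s * sample_rate / cols)
        t = np.arange(samples_per_col) / sample_rate
        out = []
        for c in range(cols):                          # left-to-right in time
            tones = img[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)
            out.append(tones.sum(axis=0))
        wave = np.concatenate(out)
        return wave / (np.abs(wave).max() + 1e-9)      # normalize to [-1, 1]

    # Example: a bright anti-diagonal line is heard as a rising pitch sweep.
    wave = image_to_sound(np.eye(16)[::-1])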