
    A Virtual Testbed for Fish-Tank Virtual Reality: Improving Calibration with a Virtual-in-Virtual Display

    With the development of novel calibration techniques for multimedia projectors and curved projection surfaces, volumetric 3D displays are becoming easier and more affordable to build. The basic requirements are a display shape that defines the volume (e.g., a sphere, cylinder, or cuboid) and a tracking system that provides each user's location for perspective-corrected rendering. When coupled with modern graphics cards, these displays are capable of high resolution, low latency, high frame rate, and even stereoscopic rendering; however, as many previous studies have shown, every component must be precisely calibrated for a compelling 3D effect. While human perceptual requirements have been extensively studied for head-tracked displays, most studies featured seated users in front of a flat display. It remains unclear whether results from these flat-display studies apply to newer, walk-around displays with enclosed or curved shapes. To investigate these issues, we developed a virtual testbed for volumetric head-tracked displays that can measure the calibration accuracy of the entire system in real time. We used this testbed to investigate visual distortions of prototype curved displays, improve existing calibration techniques, study the importance of stereo to performance and perception, and validate perceptual calibration with novice users. Our experiments show that stereo is important for task performance but requires more accurate calibration, and that novice users can make effective use of perceptual calibration tools. We also propose a novel real-time calibration method that can be used to fine-tune an existing calibration using perceptual feedback. The findings from this work can be used to build better head-tracked volumetric displays with an unprecedented amount of 3D realism and intuitive calibration tools for novice users.
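    The core mechanism behind such head-tracked, perspective-corrected rendering can be made concrete. Below is a minimal sketch, not the testbed's actual code, of the standard generalized off-axis projection (after Kooima's well-known formulation): each frame, the view frustum is recomputed from the tracked eye position and the physical screen corners. All names, dimensions and parameters are illustrative assumptions; inputs are NumPy 3-vectors.

```python
import numpy as np

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Frustum matrix for a planar screen with corners pa (lower-left),
    pb (lower-right), pc (upper-left), seen from eye position pe."""
    vr = pb - pa; vr = vr / np.linalg.norm(vr)           # screen right axis
    vu = pc - pa; vu = vu / np.linalg.norm(vu)           # screen up axis
    vn = np.cross(vr, vu); vn = vn / np.linalg.norm(vn)  # normal, toward eye

    va, vb, vc = pa - pe, pb - pe, pc - pe  # vectors from eye to corners
    d = -np.dot(va, vn)                     # eye-to-screen distance
    l = np.dot(vr, va) * near / d           # frustum extents at near plane
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    # Asymmetric (glFrustum-style) projection; the companion view matrix
    # must still rotate into the screen basis and translate by -pe.
    return np.array([
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])

# Example: a 0.5 m x 0.4 m upright screen, eye 0.6 m in front of centre.
pa = np.array([-0.25, -0.20, 0.0]); pb = np.array([0.25, -0.20, 0.0])
pc = np.array([-0.25, 0.20, 0.0]); pe = np.array([0.05, 0.0, 0.6])
P = off_axis_projection(pa, pb, pc, pe, near=0.1, far=10.0)
```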

    Robot Visual Servoing Using Discontinuous Control

    This work presents different proposals to deal with common problems in robot visual servoing, based on the application of discontinuous control methods. The feasibility and effectiveness of the proposed approaches are substantiated by simulation results and real experiments using a 6R industrial manipulator. The main contributions are:
    - Geometric invariance using sliding mode control (Chapter 3): the defined higher-order invariance is used by the proposed approaches to tackle problems in visual servoing. Proofs of the invariance condition are presented.
    - Fulfillment of constraints in visual servoing (Chapter 4): the proposal uses sliding mode methods to satisfy mechanical and visual constraints in visual servoing, while a secondary task is considered to properly track the target object. The main advantages of the proposed approach are low computational cost, robustness, and full utilization of the allowed space for the constraints.
    - Robust automatic tool change for industrial robots using visual servoing (Chapter 4): visual servoing and the proposed method for constraint fulfillment are applied to an automated solution for tool changing in industrial robots. The robustness of the proposed method is due to the visual servoing control law, which uses the information acquired by the vision system to close a feedback control loop. Furthermore, sliding mode control is simultaneously used at a prioritized level to satisfy the aforementioned constraints. Thus, the global control accurately places the tool in the warehouse while satisfying the robot constraints.
    - Sliding mode controller for reference tracking (Chapter 5): an approach based on sliding mode control is proposed for reference tracking in robot visual servoing using industrial robot manipulators. The novelty of the proposal is the introduction of a sliding mode controller that uses a high-order discontinuous control signal, i.e., joint accelerations or joint jerks, in order to obtain smoother behavior and ensure the stability of the robot system, which is demonstrated with a theoretical proof.
    - PWM and PFM for visual servoing in fully decoupled approaches (Chapter 6): discontinuous control based on pulse-width and pulse-frequency modulation is proposed for fully decoupled position-based visual servoing approaches, in order to achieve the same convergence time for camera translation and rotation.
    Moreover, other results obtained in visual servoing applications are also described.
    Muñoz Benavent, P. (2017). Robot Visual Servoing Using Discontinuous Control [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90430
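    To ground the terminology, here is a minimal sketch, not the thesis's controller, of the textbook image-based visual servoing law v = -lambda * L^+ e augmented with a discontinuous sign term of the kind used in sliding mode control. The interaction matrix L, the gains, and the feature vectors are illustrative assumptions.

```python
import numpy as np

def ibvs_sliding_mode(s, s_star, L, lam=0.5, K=0.05):
    """Camera velocity twist (6-vector) from current features s and goal s*."""
    e = s - s_star                          # feature error in image space
    L_pinv = np.linalg.pinv(L)              # Moore-Penrose pseudo-inverse
    v_cont = -lam * (L_pinv @ e)            # classical proportional IBVS term
    v_switch = -K * (L_pinv @ np.sign(e))   # discontinuous sliding-mode term
    return v_cont + v_switch
```

    The switching term drives the error to the sliding surface e = 0 and keeps it there despite bounded disturbances, at the cost of chattering, which is why the thesis moves the discontinuity to higher derivatives (accelerations or jerks) for smoother behavior.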

    Application for photogrammetry of organisms

    Single-camera photogrammetry is a well-established procedure for retrieving quantitative information from objects using photography. In the biological sciences, photogrammetry is often applied to aid morphometry studies, which focus on the comparative study of shapes and organisms. Two types of photogrammetry are used in morphometric studies: 2D photogrammetry, where distance and angle measurements are used to quantitatively describe attributes of an object, and 3D photogrammetry, where landmark coordinate data are used to reconstruct an object's true shape. Although excellent software tools for 3D photogrammetry are available, software specifically designed to aid the somewhat simpler 2D photogrammetry is lacking. Therefore, most studies applying 2D photogrammetry still rely on manual acquisition of measurements from pictures, which must then be scaled to an appropriate measuring system. This is often a laborious multistep process, in most cases requiring diverse software to complete different tasks. In addition to being time-consuming, it is also error-prone, since measurement recording is often done manually. The present work aimed to tackle those issues by implementing a new cross-platform software application able to integrate and streamline the workflow usually applied in 2D photogrammetry studies. Results from a preliminary study show a 45% decrease in processing time when using the software developed in the scope of this work, compared with a competing methodology. Existing limitations and future work towards improved versions of the software are discussed.
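    The scaling step the abstract describes is simple but easy to get wrong by hand. Below is a minimal sketch, with hypothetical helper names rather than the paper's software, of the core 2D-photogrammetry conversion: a scale object of known length photographed in the same plane as the specimen gives a millimetres-per-pixel factor, which then converts landmark distances.

```python
import math

def scale_factor(scale_p1, scale_p2, known_length_mm):
    """mm per pixel, from the two endpoints of a reference scale bar."""
    px = math.dist(scale_p1, scale_p2)
    return known_length_mm / px

def measure(p1, p2, mm_per_px):
    """Real-world distance between two landmark points in the image."""
    return math.dist(p1, p2) * mm_per_px

# Example: a 10 mm scale bar spanning 200 px gives 0.05 mm/px.
k = scale_factor((100, 50), (300, 50), 10.0)
body_length_mm = measure((120, 210), (480, 230), k)
```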

    BETA: Bioprinting engineering technology for academia

    Higher STEM education is a field of growing potential, but too many middle school and high school students are not testing proficiently in STEM subjects. The BETA team worked to improve biology classroom engagement through the development of technologies for high school biology experiments. The BETA project team expanded the functionality of an existing product line to allow for a better student and teacher user experience and the execution of more interesting experiments. The BETA project's first goal was to create a modular incubating box for the high school classroom. This box, called the BETA Box, was designed with a variety of sensors to allow custom temperature and lighting environments for each experiment. It was completed with a clear interface to control the settings and an automatic image capture system. The team also conducted a feasibility study on auto-calibration and dual extrusion for SE3D's existing 3D bioprinter. The findings of this study led to the incorporation of a force sensor for auto-calibration and to evidence supporting the feasibility of dual extrusion, although further work is needed. These additions to the current SE3D educational product line will increase effectiveness in the classroom and allow the target audience, high school students, to better engage in STEM education activities.
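    For a sense of what the Box's temperature regulation might involve, here is a minimal sketch assuming hypothetical read_temp_c/heater_on/heater_off hardware callbacks, not SE3D's firmware: a thermostat loop with a hysteresis band, which avoids rapid relay switching around the setpoint.

```python
import time

HYSTERESIS_C = 0.5  # deadband around the setpoint, in deg C (assumed)

def control_loop(read_temp_c, heater_on, heater_off, setpoint_c):
    """read_temp_c/heater_on/heater_off are assumed hardware callbacks."""
    while True:
        t = read_temp_c()
        if t < setpoint_c - HYSTERESIS_C:
            heater_on()           # too cold: start heating
        elif t > setpoint_c + HYSTERESIS_C:
            heater_off()          # too warm: stop heating
        time.sleep(1.0)           # 1 Hz polling is ample for an incubator
```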

    Computed Tomography of Chemiluminescence: A 3D Time Resolved Sensor for Turbulent Combustion

    Time-resolved 3D measurements of turbulent flames are required to further the understanding of combustion and to support advanced simulation techniques (LES). Computed Tomography of Chemiluminescence (CTC) allows a flame's 3D chemiluminescence profile to be obtained by inverting a series of integral measurements. CTC provides the instantaneous 3D flame structure, and can also measure excited species concentrations, equivalence ratio, heat release rate, and possibly strain rate. High resolutions require simultaneous measurements from many viewpoints, and the cost of multiple sensors has traditionally limited spatial resolutions. However, recent improvements in commodity cameras make a high-resolution CTC sensor possible, and this is investigated in this work. Using realistic LES phantoms (known fields), the CT algorithm (ART, the algebraic reconstruction technique) is shown to produce low-error reconstructions even from limited, noisy datasets. Error from self-absorption is also tested using LES phantoms, and a modification to ART that successfully corrects this error is presented. A proof-of-concept experiment using 48 non-simultaneous views is performed and successfully resolves a Matrix Burner flame to 0.01% of the domain width (D). ART is also extended to 3D (without stacking) to allow 3D camera locations and optical effects to be considered. An optical integral geometry (weighted double-cone) is presented that corrects for limited depth of field and (even with poorly estimated camera parameters) reconstructs the Matrix Burner as well as the standard geometry does. CTC is implemented using five PicSight P32M cameras and mirrors to provide 10 simultaneous views. Measurements of the Matrix Burner and a Turbulent Opposed Jet achieve exposure times as low as 62 μs, with even shorter exposures possible. With only 10 views, the spatial resolution of the reconstructions is low; however, a cosine phantom study shows that 20-40 viewing angles are necessary to achieve high resolutions (0.01-0.04D). With 40 P32M cameras costing £40,000, future CTC implementations can achieve high spatial and temporal resolutions.
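    ART, the reconstruction algorithm named above, has a compact core. Below is a minimal sketch of its basic Kaczmarz form, not the thesis's 3D, absorption-corrected variant: each measurement row is enforced in turn by projecting the current estimate onto its hyperplane. The ray-weight matrix A, measurement vector b, and relaxation factor are illustrative.

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=0.2):
    """Solve A x ~= b for the emission field x, enforcing x >= 0."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)        # ||a_i||^2 for each ray
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai, ni = A[i], row_norms[i]
            if ni == 0:
                continue                   # this ray misses the domain
            x += relax * (b[i] - ai @ x) / ni * ai
        np.clip(x, 0, None, out=x)         # emission is non-negative
    return x
```

    The under-relaxation (relax < 1) is what gives ART its tolerance to the noisy, limited-view data the abstract mentions, at the cost of more sweeps.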

    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefits in terms of reduced trauma, improved recovery and shortened hospitalisation have been well established, there is a sustained need for improved training in the existing procedures and for the development of new smart instruments to tackle the issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view in the presence of complex anatomy can easily disorient the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual-reality or mixed-reality simulators, can help to improve the spatial awareness, consistency and safety of these procedures. This thesis examines the use of endoscopic videos for both simulation and navigation purposes. More specifically, it addresses the challenging problem of how to build high-fidelity, subject-specific simulation environments for improved training and skills assessment. Issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in terms of both 3D shape reconstruction and localisation and mapping, vision-based techniques have enjoyed significant interest in recent years for surgical navigation. The thesis also tackles the problem of how to use vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view, improving spatial awareness and avoiding operator disorientation. The key advantage of this approach is that it does not require additional hardware, and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation by taking into account tissue structural and appearance changes. Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the proposed methods.
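    As a simplified, 2D illustration of the field-of-view expansion idea (the thesis itself builds full 3D maps), here is a minimal OpenCV sketch, not the thesis's pipeline: each new endoscopic frame is registered to a running mosaic with a feature-based homography and pasted in. The ORB settings and RANSAC threshold are illustrative assumptions, and 3-channel images are assumed.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def register(frame, mosaic):
    """Homography mapping frame pixels into mosaic coordinates (or None)."""
    k1, d1 = orb.detectAndCompute(frame, None)
    k2, d2 = orb.detectAndCompute(mosaic, None)
    if d1 is None or d2 is None:
        return None
    matches = matcher.match(d1, d2)
    if len(matches) < 4:
        return None                        # findHomography needs 4 points
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def expand_view(frame, mosaic):
    """Warp the new frame onto the (larger) mosaic canvas and paste it in."""
    H = register(frame, mosaic)
    if H is None:
        return mosaic                      # registration failed: keep map
    warped = cv2.warpPerspective(frame, H, mosaic.shape[1::-1])
    mask = warped.max(axis=2) > 0          # pixels actually covered
    mosaic[mask] = warped[mask]
    return mosaic
```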

    Visual guidance of unmanned aerial manipulators

    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map generation tasks. Yet it was only in recent years that research in aerial robotics was mature enough to allow active interactions with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms. The main objective of this thesis is to formalize the concept of the aerial manipulator and to present guidance methods, using visual information, that provide these vehicles with autonomous functionalities. A key competence for controlling an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Furthermore, localization methods with on-board sensors, exported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, a handicap for vehicles where size, payload, and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, lightweight, high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they offer the possibility of fulfilling not only mobility requirements but also other tasks simultaneously and hierarchically, prioritizing them according to their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, and improve platform stability or increase arm operability. The main contributions of this research work are threefold: (1) a localization technique to allow autonomous navigation, specifically designed for aerial platforms with size, payload and computational restrictions; (2) control commands to drive the vehicle using visual information (visual servoing); and (3) integration of the visual servoing commands into a hierarchical control law that exploits the redundancy of the robot to accomplish secondary tasks during flight. These tasks are specific to aerial manipulators and are also provided. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
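    The hierarchical, redundancy-exploiting control the abstract alludes to is classically realized with null-space projection. Here is a minimal sketch of the standard two-task priority scheme, not the thesis's specific control law: the secondary task velocity is projected into the null space of the primary task Jacobian so it cannot disturb the primary objective. J1, J2 and the desired task rates are illustrative placeholders.

```python
import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Joint velocities realizing task 1 exactly, task 2 as far as possible."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1  # null-space projector of J1
    dq1 = J1_pinv @ dx1                      # primary task (e.g. visual servo)
    # Secondary task, restricted to motions invisible to task 1:
    dq2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1)
    return dq1 + N1 @ dq2
```

    Lower-priority tasks (stability improvement, arm operability) are stacked the same way, each projected through the accumulated null space of all tasks above it.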

    Aggressive landing maneuvers for UAVs

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2006. Includes bibliographical references (p. 69-70). VTOL (Vertical Take Off and Landing) vehicle landing is considered a critically difficult task for land, marine, and urban operations. This thesis describes one possible control approach to enable landing of unmanned aircraft systems at all attitudes, including against walls and ceilings, as a way to considerably enhance the operational capability of these vehicles. The features of the research include a novel approach to trajectory tracking, whereby the primary system outputs to be tracked are smoothly scheduled according to the state of the vehicle relative to its landing area. The proposed approach is illustrated with several experiments using a low-cost three-degree-of-freedom helicopter. We also include the design details of a testbed for demonstrating the application of this research: a model helicopter UAV platform with indoor and outdoor aggressive flight capability. By Selcuk Bayraktar (S.M.).
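    One simple way to read "smoothly scheduled outputs" is a blend weight that hands tracking authority from a far-field reference (approach position) to a near-field one (attitude aligned to the landing surface) as the distance to the surface shrinks. The sketch below is an illustration of that idea only, not the thesis's controller; the blend distances are assumptions.

```python
def blend_weight(dist, d_near=0.3, d_far=2.0):
    """0 far from the surface, 1 at contact, smooth in between."""
    s = min(max((d_far - dist) / (d_far - d_near), 0.0), 1.0)
    return s * s * (3 - 2 * s)     # smoothstep: C1-continuous ramp

def scheduled_reference(dist, ref_far, ref_near):
    """Convex blend of the two tracked outputs (same units and length)."""
    w = blend_weight(dist)
    return [(1 - w) * f + w * n for f, n in zip(ref_far, ref_near)]
```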

    Real-Time Mapping Using Stereoscopic Vision Optimization

    This research focuses on efficient methods of generating 2D maps from stereo vision in real time. Instead of attempting to locate edges between objects, we assume that the representative surfaces of objects in a view provide enough information to generate a map while taking less time to locate during processing. Since real-time vision processing is extremely computationally intensive, numerous optimization techniques are applied to allow for a real-time application: horizontal spike smoothing for post-disparity noise, masks to focus on close-proximity objects, melding for object synthesis, and rectangular fitting for object extraction under a planar assumption. Additionally, traditional image transformation mechanisms such as rotation, translation, and scaling are integrated. Results from our research are an encouraging 10 Hz with no vision post-processing and accuracy up to 11 feet. Finally, vision mapping results are compared to simultaneously collected sonar data in three unique experimental settings.
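    The geometry behind turning disparities into a 2D map is standard stereo triangulation: a disparity d (pixels) from a calibrated rig with focal length f (pixels) and baseline B (meters) gives depth Z = f*B/d, and the pinhole model gives the lateral coordinate. The sketch below illustrates that conversion plus a plain median filter standing in for the paper's "horizontal spike smoothing"; the exact smoothing used in the work is not specified here, so the filter is an assumption.

```python
import numpy as np

def smooth_spikes(disp_row, k=5):
    """Median-filter a disparity row to suppress isolated spikes."""
    r = np.asarray(disp_row, float)
    pad = k // 2
    padded = np.pad(r, pad, mode='edge')
    return np.array([np.median(padded[i:i + k]) for i in range(len(r))])

def disparity_to_map_points(disp_row, f_px, baseline_m, cx):
    """One image row of disparities -> (x, z) ground-plane map points."""
    pts = []
    for u, d in enumerate(disp_row):
        if d <= 0:
            continue                  # no stereo match in this column
        z = f_px * baseline_m / d     # depth from stereo geometry
        x = (u - cx) * z / f_px       # lateral offset via pinhole model
        pts.append((x, z))
    return pts
```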