2,736 research outputs found

    Inertial-sensor bias estimation from brightness/depth images and based on SO(3)-invariant integro/partial-differential equations on the unit sphere

    Full text link
    Constant biases associated with the measured linear and angular velocities of a moving object can be estimated from measurements of a static scene by embedded brightness and depth sensors. We propose here a Lyapunov-based observer taking advantage of the SO(3)-invariance of the partial differential equations satisfied by the measured brightness and depth fields. The resulting asymptotic observer is governed by a non-linear integro/partial differential system where the two independent scalar variables indexing the pixels live on the unit sphere of 3D Euclidean space. The observer design and analysis are strongly simplified by coordinate-free differential calculus on the unit sphere equipped with its natural Riemannian structure. The observer convergence is investigated under C^1 regularity assumptions on the object motion and its scene. It relies on the Ascoli-Arzelà theorem and pre-compactness of the observer trajectories. It is proved that the estimated biases converge towards the true ones if and only if the scene admits no cylindrical symmetry. The observer design can be adapted to realistic sensors where brightness and depth data are only available on a subset of the unit sphere. Preliminary simulations with synthetic brightness and depth images (corrupted by noise around 10%) indicate that such Lyapunov-based observers should be robust and convergent under much weaker regularity assumptions. Comment: 30 pages, 6 figures, submitted
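    To make the Lyapunov-observer principle concrete, here is a minimal sketch on a toy scalar system (an assumed illustration only, not the paper's integro/partial-differential observer on the sphere): for dynamics x' = u + b with unknown constant bias b and measured state x, the classic adaptive observer below is stable under the Lyapunov function V = e^2/2 + (b - b_hat)^2/(2 k2), whose derivative along trajectories is -k1 e^2 <= 0.

```python
# Toy adaptive Lyapunov observer for a constant bias (illustrative only,
# NOT the paper's observer): x' = u + b, with b unknown and x measured.
import numpy as np

def estimate_bias(b_true=0.3, k1=4.0, k2=2.0, dt=1e-3, T=20.0):
    x = x_hat = b_hat = 0.0
    for i in range(int(T / dt)):
        u = np.sin(i * dt)                   # persistently exciting input
        x += (u + b_true) * dt               # true system
        e = x - x_hat                        # measurable output error
        x_hat += (u + b_hat + k1 * e) * dt   # observer copy of the dynamics
        b_hat += k2 * e * dt                 # bias adaptation law
    return b_hat

print(estimate_bias())  # ≈ 0.3, the true bias
```

    The sinusoidal input plays the role of the excitation the paper obtains from the scene; without sufficient excitation, e -> 0 alone does not force b_hat -> b.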

    Neural networks application to divergence-based passive ranging

    Get PDF
    The purpose of this report is to summarize the state of knowledge and outline the planned work in the divergence-based/neural-network approach to the problem of passive ranging derived from optical flow. Work in this and closely related areas is reviewed in order to provide the necessary background for further developments. New ideas about devising a monocular passive-ranging system are then introduced. It is shown that image-plane divergence is independent of image-plane location with respect to the focus of expansion and of camera maneuvers, because it directly measures the object's expansion which, in turn, is related to the time-to-collision. Thus, a divergence-based method has the potential of providing a reliable range estimate, complementing other monocular passive-ranging methods which encounter difficulties in image areas close to the focus of expansion. Image-plane divergence can be thought of as a spatial/temporal pattern. A neural network realization was chosen for this task because neural networks have generally performed well in various other pattern recognition applications. The main goal of this work is to teach a neural network to derive the divergence from the imagery.
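    The divergence-to-range link admits a compact worked example: for pure translation toward a frontoparallel surface, the optic-flow divergence equals 2/tau, where tau is the time to collision. The sketch below is a hedged illustration; the flow fields u, v are assumed to come from any optical-flow estimator, with unit pixel spacing.

```python
# Divergence-based time-to-collision: div(flow) = 2/tau for approach
# toward a frontoparallel surface. u, v are assumed given flow fields.
import numpy as np

def time_to_collision(u, v):
    du_dx = np.gradient(u, axis=1)   # finite-difference du/dx
    dv_dy = np.gradient(v, axis=0)   # finite-difference dv/dy
    d = float(np.mean(du_dx + dv_dy))  # spatially averaged divergence
    return 2.0 / d if d > 0 else np.inf

# Synthetic expanding flow about the focus of expansion: div = 0.02,
# so tau = 2/0.02 = 100 frames.
ys, xs = np.mgrid[-50:50, -50:50].astype(float)
print(time_to_collision(0.01 * xs, 0.01 * ys))  # ≈ 100
```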

    Visual guidance of unmanned aerial manipulators

    Get PDF
    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map generation tasks. Yet it was only in recent years that research in aerial robotics was mature enough to allow active interactions with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms. The main objective of this thesis is to formalize the concept of aerial manipulator and present guidance methods, using visual information, to provide them with autonomous functionalities. A key competence to control an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required an external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Furthermore, localization methods with on-board sensors, exported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, a handicap in vehicles where size, load, and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, light-weight and high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they offer the possibility of fulfilling not only mobility requirements but also other tasks simultaneously and hierarchically, prioritizing them depending on their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, and improve platform stability or increase arm operability. The main contributions of this research work are threefold: (1) a localization technique to allow autonomous navigation, specifically designed for aerial platforms with size, load and computational burden restrictions; (2) control commands to drive the vehicle using visual information (visual servoing); and (3) integration of the visual-servo commands into a hierarchical control law, exploiting the redundancy of the robot to accomplish secondary tasks during flight. These tasks are specific to aerial manipulators and are also provided. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
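    As a hedged illustration of the hierarchical, redundancy-exploiting control described above, the following sketch uses the standard null-space-projection (task-priority) scheme, one common way to realize such laws; it is not claimed to be the thesis' exact formulation.

```python
# Task-priority control via null-space projection: the secondary task is
# executed only in the null space of the primary (e.g., visual-servo) task.
import numpy as np

def hierarchical_velocities(J1, dx1, J2, dx2):
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1        # null-space projector of task 1
    dq = J1_pinv @ dx1                             # primary task, full priority
    dq += np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq)  # secondary, projected
    return dq

# Example: two tasks on a 4-DoF redundant system (random Jacobians).
rng = np.random.default_rng(0)
J1, J2 = rng.standard_normal((2, 4)), rng.standard_normal((1, 4))
dq = hierarchical_velocities(J1, np.array([0.1, -0.2]), J2, np.array([0.05]))
print(J1 @ dq)  # ≈ [0.1, -0.2]: the primary task is met exactly
```

    Because the secondary correction is built through the pseudoinverse of J2 N1, it lies in the null space of J1 and cannot disturb the primary task.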

    Control-Oriented Reduced Order Modeling of Dipteran Flapping Flight

    Get PDF
    Flying insects achieve flight stabilization and control in a manner that requires only small, specialized neural structures to perform the essential components of sensing and feedback, achieving unparalleled levels of robust aerobatic flight on limited computational resources. An engineering mechanism to replicate these control strategies could provide a dramatic increase in the mobility of small-scale aerial robotics, but formal investigation has not yet yielded tools that both quantitatively and intuitively explain flapping-wing flight as an "input-output" relationship. This work uses experimental and simulated measurements of insect flight to create reduced-order flight dynamics models. The framework presented here creates models that are relevant for the study of control properties. The work begins with automated measurement of insect wing motions in free flight, which are then used to calculate flight forces via an empirically derived aerodynamics model. When paired with rigid-body dynamics and experimentally measured state feedback, both the bare airframe and the closed-loop system may be analyzed using frequency-domain system identification. Flight dynamics models describing maneuvering about hover and cruise conditions are presented for representative fruit flies (Drosophila melanogaster) and blowflies (Calliphoridae). The results show that biologically measured feedback paths are appropriate for flight stabilization, and that sexual dimorphism is only a minor factor in flight dynamics. A method of ranking kinematic control inputs to maximize maneuverability is also presented, showing that the volume of reachable configurations in state space can be dramatically increased through an appropriate choice of kinematic inputs.
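    For readers unfamiliar with frequency-domain system identification, the following hedged sketch shows the core computation on a toy first-order "airframe" (an assumed stand-in, not the insect models above): the frequency response is estimated as the ratio of the input-output cross-spectrum to the input auto-spectrum.

```python
# Empirical transfer-function estimate from swept-sine input/output data.
import numpy as np
from scipy import signal

fs = 200.0                                      # sample rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)
u = signal.chirp(t, f0=0.1, f1=20.0, t1=60.0)   # frequency-sweep input
a = np.exp(-2 / fs)                             # exact ZOH pole of 2/(s+2)
x = signal.lfilter([1 - a], [1, -a], u)         # toy "airframe" response

f, Puu = signal.welch(u, fs, nperseg=2048)      # input auto-spectrum
_, Pux = signal.csd(u, x, fs, nperseg=2048)     # input-output cross-spectrum
H = Pux / Puu                                   # frequency-response estimate
k = np.argmin(np.abs(f - 1 / np.pi))            # pole frequency ≈ 0.318 Hz
print(np.abs(H[k]))                             # ≈ 0.707 (-3 dB at the pole)
```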

    A 'reciprocal' theorem for the prediction of loads on a body moving in an inhomogeneous flow at arbitrary Reynolds number

    Get PDF
    Several forms of a theorem providing general expressions for the force and torque acting on a rigid body of arbitrary shape moving in an inhomogeneous incompressible flow at arbitrary Reynolds number are derived. Inhomogeneity arises because of the presence of a wall that partially or entirely bounds the fluid domain and/or a non-uniform carrying flow. This theorem, which stems directly from the Navier–Stokes equations and parallels the well-known Lorentz reciprocal theorem extensively employed in low-Reynolds-number hydrodynamics, makes use of auxiliary solenoidal irrotational velocity fields and extends results previously derived by Quartapelle & Napolitano (AIAA J., vol. 21, 1983, pp. 911–913) and Howe (Q. J. Mech. Appl. Maths, vol. 48, 1995, pp. 401–426) in the case of an unbounded flow domain and a fluid at rest at infinity. As the orientation of the auxiliary velocity may be chosen arbitrarily, any component of the force and torque can be evaluated, irrespective of its orientation with respect to the relative velocity between the body and fluid. Three main forms of the theorem are successively derived. The first of these, given in (2.19), is suitable for a body moving in a fluid at rest in the presence of a wall. The most general form (3.6) extends it to the general situation of a body moving in an arbitrary non-uniform flow. Specific attention is then paid to the case of an underlying time-dependent linear flow. Specialized forms of the theorem are provided in this situation for simplified body shapes and flow conditions, in (3.14) and (3.15), making explicit the various couplings between the body's translation and rotation and the strain rate and vorticity of the carrying flow. The physical meaning of the various contributions to the force and torque and the way in which the present predictions reduce to those provided by available approaches, especially in the inviscid limit, are discussed. Some applications to high-Reynolds-number bubble dynamics, which provide several apparently new predictions, are also presented.
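    For reference, the Lorentz reciprocal theorem that the derivation parallels has the standard statement below, where (u, σ) and (u′, σ′) are any two Stokes flows in the same fluid domain with boundary S and outward normal n:

```latex
\int_S \mathbf{u} \cdot \bigl(\boldsymbol{\sigma}' \cdot \mathbf{n}\bigr)\,\mathrm{d}S
  = \int_S \mathbf{u}' \cdot \bigl(\boldsymbol{\sigma} \cdot \mathbf{n}\bigr)\,\mathrm{d}S .
```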

    A Continuous-Time Nonlinear Observer for Estimating Structure from Motion from Omnidirectional Optic Flow

    Get PDF
    Various insect species utilize certain types of self-motion to perceive structure in their local environment, a process known as active vision. This dissertation presents the development of a continuous-time formulated observer for estimating structure from motion that emulates the biological phenomenon of active vision. In an attempt to emulate the wide field of view of compound eyes and the neurophysiology of insects, the observer utilizes an omnidirectional optic flow field. Exponential stability of the observer is assured provided the persistency-of-excitation condition is met. Persistency of excitation is assured by altering the direction of motion sufficiently quickly. An equal convergence rate over the entire viewable area can be achieved by executing certain prototypical maneuvers. Practical implementation of the observer is accomplished both in simulation and on an actual flying quadrotor testbed vehicle. Furthermore, this dissertation presents the vehicular implementation of a complementary navigation methodology known as wide-field integration of the optic flow field. The implementation of the developed insect-inspired navigation methodologies on the physical testbed vehicles utilized in this research required the development of many subsystems that comprise a control and navigation suite, including avionics development and state sensing, model development via system identification, feedback controller design, and state estimation strategies. These requisite subsystems and their development are discussed.
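    A hedged toy version of the estimation problem helps fix ideas: with known camera velocities, the optic flow of a feature is affine in its inverse depth ρ = 1/Z (flow = A(p, v)ρ + B(p, ω), with the standard interaction-matrix terms), so a gradient observer on ρ converges whenever the translational term A is persistently exciting. The sketch below holds the image point fixed for simplicity, which is an idealization, not the dissertation's omnidirectional formulation.

```python
# Gradient observer for inverse depth rho = 1/Z from optic flow with known
# camera linear velocity v and angular velocity w (normalized coordinates).
import numpy as np

def flow_matrices(x, y, v, w):
    A = np.array([-v[0] + x * v[2], -v[1] + y * v[2]])       # multiplies rho
    B = np.array([x*y*w[0] - (1 + x*x)*w[1] + y*w[2],
                  (1 + y*y)*w[0] - x*y*w[1] - x*w[2]])       # rotational part
    return A, B

x, y, rho_true, rho_hat, k, dt = 0.2, -0.1, 0.5, 0.0, 5.0, 1e-3
for i in range(20000):
    t = i * dt
    v = np.array([np.cos(t), np.sin(t), 0.1])   # known, exciting translation
    w = np.array([0.0, 0.0, 0.2])               # known rotation
    A, B = flow_matrices(x, y, v, w)
    m = A * rho_true + B                        # "measured" optic flow
    rho_hat += k * A @ (m - A * rho_hat - B) * dt   # gradient update
print(rho_hat)  # ≈ 0.5, the true inverse depth
```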

    Global optimization methods for full-reference and no-reference motion estimation with applications to atherosclerotic plaque motion and strain imaging

    Get PDF
    Pixel-based motion estimation using optical flow models has been extensively researched during the last two decades. The driving force of this research field is the number of applications that can be developed with the motion estimates. Image segmentation, compression, activity detection, object tracking, pattern recognition, and more recently non-invasive biomedical applications like strain imaging make the estimation of accurate velocity fields necessary. The majority of the research in this area is focused on improving the theoretical and numerical framework of the optical flow models. This effort has resulted in increased method complexity with an increasing number of motion parameters. The standard approach of heuristically setting the motion parameters has become a major source of estimation error. This dissertation focuses on the development of reliable motion estimation based on global parameter optimization methods. Two strategies have been developed. In full-reference optimization, the assumption is that a video training set of realistic motion simulations (or ground truth) is available. Global optimization is used to calculate the best motion parameters, which can then be used on a separate set of testing videos. This approach helps provide bounds on what motion estimation methods can achieve. In no-reference optimization, the true displacement field is not available. By optimizing for the agreement between different motion estimation techniques, the no-reference approach closely approximates the best (optimal) motion parameters. The results obtained with the newly developed global no-reference optimization approach agree closely with those produced with the full-reference approach. Moreover, the no-reference approach calculates velocity fields of higher quality than published results for benchmark video sequences. Unreliable velocity estimates are identified using new confidence maps that are associated with the disagreement between methods. Thus, the no-reference global optimization method can provide reliable motion estimation without the need for realistic simulations or access to ground truth. The methods developed in this dissertation are applied to ultrasound videos of carotid artery plaques. The velocity estimates are used to analyze plaque motion and produce novel non-invasive elasticity maps that can help in the identification of vulnerable atherosclerotic plaques.
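    A minimal sketch of the no-reference principle follows; flow_method_a, flow_method_b and the single parameter lam are hypothetical stand-ins (the dissertation's estimators and parameter sets are richer): the parameter is chosen to minimize the mean-squared disagreement between two independent estimators, with no ground truth required.

```python
# No-reference parameter tuning by inter-method agreement (toy stand-ins).
import numpy as np
from scipy.optimize import minimize_scalar

def disagreement(lam, frames, flow_method_a, flow_method_b):
    ua, va = flow_method_a(frames, lam)   # estimator with parameter under test
    ub, vb = flow_method_b(frames)        # independent reference estimator
    return float(np.mean((ua - ub) ** 2 + (va - vb) ** 2))

def tune(frames, flow_method_a, flow_method_b):
    res = minimize_scalar(disagreement, bounds=(1e-3, 10.0), method="bounded",
                          args=(frames, flow_method_a, flow_method_b))
    return res.x   # no-reference choice of the motion parameter

# Hypothetical stand-ins: estimator A scales with lam, estimator B is fixed,
# so the agreement-optimal parameter is lam ≈ 0.7.
shape = (8, 8)
fa = lambda frames, lam: (lam * np.ones(shape), lam * np.ones(shape))
fb = lambda frames: (0.7 * np.ones(shape), 0.7 * np.ones(shape))
print(tune(None, fa, fb))  # ≈ 0.7
```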

    An analysis of the far-field response to external forcing of a suspension in Stokes flow in a parallel-wall channel

    Full text link
    The leading-order far-field scattered flow produced by a particle in a parallel-wall channel under creeping-flow conditions has the form of a parabolic velocity field driven by a 2D dipolar pressure distribution. We show that in a system of hydrodynamically interacting particles, the pressure dipoles contribute to the macroscopic suspension flow in much the same way that induced electric dipoles contribute to the electrostatic displacement field. Using this result we derive macroscopic equations governing suspension transport under the action of a lateral force, a lateral torque or a macroscopic pressure gradient in the channel. The matrix of linear transport coefficients in the constitutive relations linking the external forcing to the particle and fluid fluxes satisfies the Onsager reciprocal relation. The transport coefficients are evaluated for square and hexagonal periodic arrays of fixed and freely suspended particles, and a simple approximation in a Clausius–Mossotti form is proposed for the channel permeability coefficient. We also find explicit expressions for evaluating the periodic Green's functions for Stokes flow between two parallel walls. Comment: 23 pages, 12 figures
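    For context, the classical Clausius–Mossotti relation from electrostatics, whose algebraic structure the proposed permeability approximation is stated to follow (here ε_eff is the effective permittivity of a medium of inclusions with polarizability α at number density n, in Gaussian units):

```latex
\frac{\varepsilon_{\mathrm{eff}} - 1}{\varepsilon_{\mathrm{eff}} + 2}
  = \frac{4\pi n \alpha}{3} .
```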

    Plenoptic Signal Processing for Robust Vision in Field Robotics

    Get PDF
    This thesis proposes the use of plenoptic cameras for improving the robustness and simplicity of machine vision in field robotics applications. Dust, rain, fog, snow, murky water and insufficient light can cause even the most sophisticated vision systems to fail. Plenoptic cameras offer an appealing alternative to conventional imagery by gathering significantly more light over a wider depth of field, and by capturing a rich 4D light field structure that encodes textural and geometric information. The key contributions of this work lie in exploring the properties of plenoptic signals and developing algorithms for exploiting them. It lays the groundwork for the deployment of plenoptic cameras in field robotics by establishing a decoding, calibration and rectification scheme appropriate to compact, lenslet-based devices. Next, the frequency-domain shape of plenoptic signals is elaborated and exploited by constructing a filter which focuses over a wide depth of field rather than at a single depth. This filter is shown to reject noise, improving contrast in low light and through attenuating media, while mitigating occluders such as snow, rain and underwater particulate matter. Next, a closed-form generalization of optical flow is presented which directly estimates camera motion from first-order derivatives. An elegant adaptation of this "plenoptic flow" to lenslet-based imagery is demonstrated, as well as a simple, additive method for rendering novel views. Finally, the isolation of dynamic elements from a static background is considered, a task complicated by the non-uniform apparent motion caused by a mobile camera. Two elegant closed-form solutions are presented, dealing with monocular time series and light field image pairs. This work emphasizes non-iterative, noise-tolerant, closed-form, linear methods with predictable and constant runtimes, making them suitable for real-time embedded implementation in field robotics applications.
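    In the same non-iterative, first-order-derivative spirit as "plenoptic flow" (though in an ordinary 2D image setting rather than the 4D light field, so purely an illustrative analogy), a single global translation can be recovered in closed form from one 2x2 linear system under brightness constancy:

```python
# Closed-form global translation from first-order image derivatives:
# solve sum-of-squares brightness-constancy constraints in one linear step.
import numpy as np

def global_translation(I0, I1):
    Ix = np.gradient((I0 + I1) / 2, axis=1)   # spatial derivative in x
    Iy = np.gradient((I0 + I1) / 2, axis=0)   # spatial derivative in y
    It = I1 - I0                              # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)              # least-squares (u, v)

# Smooth test image shifted analytically by (0.3, -0.2) pixels.
ys, xs = np.mgrid[0:64, 0:64].astype(float)
img = lambda x, y: np.sin(0.2 * x) + np.cos(0.15 * y)
print(global_translation(img(xs, ys), img(xs - 0.3, ys + 0.2)))  # ≈ [0.3, -0.2]
```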
