
    MEMS Accelerometers

    Micro-electro-mechanical system (MEMS) devices are widely used for inertial, pressure, and ultrasound sensing applications. Research on integrated MEMS technology has undergone extensive development driven by the requirements of a compact footprint, low cost, and increased functionality. Accelerometers are among the most widely used sensors implemented in MEMS technology, and MEMS accelerometers have a growing presence in almost all industries, ranging from automotive to medical. A traditional MEMS accelerometer employs a proof mass suspended by springs, which displaces in response to an external acceleration. A single proof mass can be used for one- or multi-axis sensing. A variety of transduction mechanisms have been used to detect this displacement, including capacitive, piezoelectric, thermal, tunneling, and optical mechanisms. Capacitive accelerometers are widely used due to their DC measurement interface, thermal stability, reliability, and low cost. However, they are sensitive to electromagnetic interference and perform poorly in high-end applications (e.g., precise satellite attitude control). Over the past three decades, steady progress has been made on optical accelerometers for high-performance and high-sensitivity applications, but several challenges remain before opto-mechanical accelerometers are fully realized, such as chip-scale integration, scaling, and low bandwidth.
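    As a brief illustration of the proof-mass principle described above, the lumped spring-mass-damper model below relates displacement to applied acceleration; the symbols (m, b, k, ω_n) are generic textbook quantities, not parameters of any specific device mentioned here.

```latex
% Lumped-parameter model of a spring-suspended proof mass (illustrative only).
% An external acceleration a_ext forces the mass; at low frequency the static
% displacement is set by the mechanical sensitivity m/k = 1/omega_n^2.
\[
  m\ddot{x} + b\dot{x} + kx = m\,a_{\mathrm{ext}}
  \quad\Longrightarrow\quad
  x_{\mathrm{static}} = \frac{m}{k}\,a_{\mathrm{ext}} = \frac{a_{\mathrm{ext}}}{\omega_n^{2}},
  \qquad \omega_n = \sqrt{k/m}
\]
```

    A capacitive readout then senses x as a change in gap capacitance, which is why softer suspensions (lower ω_n) trade bandwidth for sensitivity, one of the scaling trade-offs noted above.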

    A Low-Cost Experimental Testbed for Multi-Agent System Coordination Control

    A multi-agent system can be defined as a coordinated network of mobile, physical agents that execute complex tasks beyond their individual capabilities. Observations of biological multi-agent systems in nature reveal that these "super-organisms" accomplish large-scale tasks by leveraging the inherent advantages of a coordinated group. With this in mind, such systems have the potential to positively impact a wide variety of engineering applications (e.g., surveillance, self-driving cars, and mobile sensor networks). Research on multi-agent systems is quickly evolving from the theoretical development of coordination control algorithms and their computer simulations to experimental validation on proof-of-concept testbeds using small-scale mobile robotic platforms. An in-house testbed allows rapid prototyping and validation of control algorithms, and can lead to new research directions spawned by experimentally observed issues. To this end, a custom experimental testbed, TIGER Square, has been designed, developed, built, and tested at Louisiana State University. In this work, the completed design and test results for a centralized testbed are presented. That is, the individual robots follow an overarching control entity and rely on a global structure, such as a central processing computer. As part of the validation process, a series of formation control experiments was executed to assess the performance of the testbed. To eliminate single-point failures, a multi-agent system must be fully decentralized or distributed, meaning that the responsibilities of processing, localization, and communication are distributed to each agent. Therefore, this work concludes with the introduction of a prototype localization module that will be integrated into the existing centralized testbed. This initial step allows for the future decentralization of TIGER Square and opens the path to a fully capable multi-agent system testbed.
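    As a rough sketch of the kind of centralized formation control law such a testbed is used to validate (the gain, time step, and square formation below are illustrative choices, not TIGER Square's actual controller):

```python
import numpy as np

def formation_step(positions, offsets, k=0.5, dt=0.1):
    """One centralized update of a consensus-style formation controller.

    positions : (N, 2) array of current agent positions
    offsets   : (N, 2) array of desired displacements from the group centroid
    The gain k and time step dt are illustrative values, not the testbed's tuning.
    """
    centroid = positions.mean(axis=0)        # central computer aggregates all poses
    targets = centroid + offsets             # desired position for each agent
    velocities = k * (targets - positions)   # proportional drive toward the formation
    return positions + dt * velocities

# Example: four agents converging onto a square formation
pos = np.random.rand(4, 2) * 5.0
square = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, -1.0], [-1.0, 1.0]])
for _ in range(200):
    pos = formation_step(pos, square)
print(pos - pos.mean(axis=0))                # approaches the square offsets
```

    Because the offsets sum to zero, the group centroid stays fixed while each agent settles onto its assigned corner, which is the behavior a formation control experiment would check.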

    Visual Perception For Robotic Spatial Understanding

    Humans understand the world through vision without much effort. We perceive the structure, objects, and people in the environment and pay little direct attention to most of it until it becomes useful. Intelligent systems, especially mobile robots, have no such biologically engineered vision mechanism to take for granted. In contrast, we must devise algorithmic methods for taking raw sensor data and converting it into something useful very quickly. Vision is such a necessary part of building a robot, or any intelligent system meant to interact with the world, that it is somewhat surprising we don't have off-the-shelf libraries for this capability. Why is this? The simple answer is that the problem is extremely difficult. There has been progress, but the current state of the art is impressive and depressing at the same time. We now have neural networks that can recognize many objects in 2D images, in some cases performing better than a human. Some algorithms can also provide bounding boxes or pixel-level masks to localize the object. We have visual odometry and mapping algorithms that can build reasonably detailed maps over long distances with the right hardware and conditions. On the other hand, we have robots with many sensors and no efficient way to compute their relative extrinsic poses for integrating the data in a single frame. The same networks that produce good object segmentations and labels on a controlled benchmark still miss obvious objects in the real world and have no mechanism for learning on the fly while the robot is exploring. Finally, while we can detect pose for very specific objects, we don't yet have a mechanism that detects pose that generalizes well over categories or that can describe new objects efficiently.

    We contribute algorithms in four of the areas mentioned above. First, we describe a practical and effective system for calibrating many sensors on a robot, with up to three different modalities. Second, we present our approach to visual odometry and mapping, which exploits the unique capabilities of RGB-D sensors to efficiently build detailed representations of an environment. Third, we describe a 3-D over-segmentation technique that uses the models and ego-motion output from the previous step to generate segmentations that remain temporally consistent under camera motion. Finally, we develop a synthesized dataset of chair objects with part labels and investigate the influence of parts on RGB-D based object pose recognition using a novel network architecture we call PartNet.
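    As a small illustration of the geometry underlying RGB-D odometry and mapping, the sketch below back-projects a depth image into a 3-D point cloud with a pinhole camera model; the intrinsic values shown are placeholders, not calibration results from this work, and the function is a generic step rather than the thesis' pipeline.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an organized 3-D point cloud.

    This is the standard pinhole-camera step used by RGB-D odometry and
    mapping systems; fx, fy, cx, cy are placeholder intrinsics, not values
    from the thesis.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)             # (H, W, 3) points in camera frame

# Usage with a synthetic 480x640 depth map and made-up intrinsics
cloud = backproject_depth(np.full((480, 640), 2.0),
                          fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)
```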

    Robust navigation for industrial service robots

    Pla de Doctorats Industrials de la Generalitat de Catalunya

    Robust, reliable and safe navigation is one of the fundamental problems of robotics. Throughout the present thesis, we tackle the problem of navigation for robotic industrial mobile bases. We identify its components and analyze their respective challenges in order to address them. The research work presented here ultimately aims at improving the overall quality of the navigation stack of a commercially available industrial mobile base. To introduce and survey the overall problem, we first break the navigation framework down into clearly identified smaller problems. We examine the Simultaneous Localization and Mapping (SLAM) problem, recalling its mathematical grounding and exploring the state of the art. We then review the problem of planning the trajectory of a mobile base toward a desired goal in the generated environment representation. Finally, we investigate and clarify the use of the subset of Lie theory that is useful in robotics.

    The first problem tackled is the recognition of place for closing loops in SLAM. Loop closure refers to the ability of a robot to recognize a previously visited location and infer geometric information between its current and past locations. Using only a 2D laser range finder, we address the problem with a technique borrowed from the field of Natural Language Processing (NLP) that has been successfully applied to image-based place recognition, namely the Bag-of-Words. We further improve the method with two proposals inspired by NLP. First, the comparison of places is strengthened by considering the natural relative order of features in each individual sensor reading. Second, topological correspondences between places in a corpus of visited places are established in order to jointly promote instances that are 'close' to one another.

    We then tackle the problem of motion model calibration for odometry estimation. Given a mobile base embedding an exteroceptive sensor able to observe ego-motion, we propose a novel formulation for estimating the intrinsic parameters of an odometry motion model. Resorting to an adaptation of the pre-integration theory initially developed for inertial motion sensors, we employ iterative nonlinear on-manifold optimization to estimate the wheel radii and wheel separation. The method is further extended to jointly estimate the intrinsic parameters of the odometry model and the extrinsic parameters of the embedded sensor. It is shown to adapt quickly to variations in the model parameters when the vehicle undergoes physical changes during operation.

    Following the generation of a map in which the robot is localized, we address the problem of estimating trajectories for motion planning. We devise a new method for estimating a sequence of robot poses forming a smooth trajectory. Regardless of the Lie group considered, the trajectory is seen as a collection of states lying on a spline with non-vanishing n-th derivatives at each point. Formulated as a multi-objective nonlinear optimization problem, it allows for the addition of cost functions such as velocity and acceleration limits, collision avoidance, and more. The proposed method is evaluated on two different motion planning tasks: the planning of trajectories for a mobile base evolving on the SE(2) manifold, and the planning of the motion of a multi-link robotic arm whose end-effector evolves on the SE(3) manifold.
    From our study of Lie theory, we developed a new, ready-to-use programming library called 'manif'. The library is open source, publicly available, and developed following good software programming practices. It is designed to be easy to integrate and manipulate, allows for flexible use, and facilitates extension beyond the already implemented Lie groups.

    Autonomous navigation is one of the fundamental problems of robotics, and its various challenges have been studied for decades. The development of robust, reliable and safe navigation methods is a key factor in building higher-level functionality in robots designed to operate in environments shared with humans. Throughout this thesis, we address the navigation problem for industrial mobile robotic bases; we identify the elements of a navigation system; and we analyze and address their challenges. The research work presented here ultimately aims to improve the overall quality of the complete navigation system of a commercially available industrial mobile base. To study the navigation problem, we first break it down into clearly identified smaller problems. We examine the subproblem of simultaneously mapping the environment and localizing the robot (SLAM) and survey its state of the art, recalling and detailing the mathematical foundation of the SLAM problem as we do so. We then review the subproblem of planning trajectories toward a desired goal in the generated environment representation. In addition, as a tool for the solutions presented later in the thesis, we investigate and clarify the use of Lie theory, focusing on the subset of the theory that is useful for state estimation in robotics.

    As the first element identified for improvement, we address the problem of place recognition for closing loops in SLAM. Loop closure refers to the ability of a robot to recognize a previously visited location and infer geometric information between the robot's current location and those it recognizes. Using only a 2D laser sensor, the task is challenging because the perception of the environment provided by the sensor is sparse and limited. We address the problem using the Bag-of-Words, a technique borrowed from the field of Natural Language Processing (NLP) that has previously been applied with success to image-based place recognition. Our method includes two new proposals, also inspired by NLP. First, the comparison between candidate places is strengthened by taking into account the natural relative order of features in each individual sensor reading; second, a corpus of visited places is established in order to jointly promote instances that are topologically 'close' to one another. We evaluate our proposals separately and jointly on several datasets, with and without noise, demonstrating improved loop closure detection for 2D laser sensors with respect to the state of the art.

    We then address the problem of motion model calibration for odometry estimation. Since our mobile base includes an exteroceptive sensor capable of observing the platform's motion, we propose a new formulation that estimates the intrinsic parameters of the platform's kinematic model while computing the vehicle's odometry. We resort to an adaptation of the pre-integration theory initially developed for inertial measurement units and apply the technique to our kinematic model. Through iterative nonlinear optimization, the method estimates the radius of each wheel independently as well as the separation between them. The method is later extended to simultaneously identify these intrinsic parameters together with the extrinsic parameters that locate the laser sensor with respect to the mobile base's reference frame. The method is validated in simulation and in a real environment and is shown to converge toward the true parameter values. It allows the intrinsic parameters of the platform's kinematic model to adapt to physical changes during operation, such as the effect that a change of load on the platform has on the wheel diameters.

    As the third navigation subproblem, we address the challenge of planning smooth motion trajectories. We develop a method that plans the trajectory as a sequence of configurations lying on a spline with n-th derivatives at every point, regardless of the Lie group considered. Being formulated as a multi-objective nonlinear optimization problem, it admits additional cost functions imposing velocity or acceleration limits, collision avoidance, and so on. The proposed method is evaluated on two different motion planning tasks: planning trajectories for a mobile base evolving on the SE(2) manifold, and planning the motion of a robotic arm whose end-effector evolves on the SE(3) manifold. Moreover, each task is evaluated in scenarios of incrementally increasing complexity, showing performance comparable to or better than the state of the art while producing more consistent results.

    From our study of Lie theory, we developed a new programming library called 'manif'. The library is open source, publicly available, and developed following good software programming practices. It is designed to be easy to integrate and manipulate, allows for flexible use, and facilitates extension beyond the initially implemented Lie groups. Moreover, the library is shown to be efficient compared with other existing solutions. Finally, we conclude the doctoral study: we review the research work, outline lines of future research, and look back on the past years to share a personal view of, and experience with, carrying out an industrial doctorate.
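    To make the odometry calibration target above concrete, the sketch below shows the generic differential-drive kinematic model in which the calibrated intrinsic parameters (wheel radii and wheel separation) appear; it is a textbook forward model, not the thesis' pre-integration formulation, and the function names and values are illustrative assumptions.

```python
import numpy as np

def integrate_odometry(x, y, theta, wl, wr, dt, r_left, r_right, b):
    """Integrate one differential-drive odometry step on SE(2).

    wl, wr             : measured left/right wheel angular velocities (rad/s)
    r_left, r_right, b : wheel radii and wheel separation -- the kind of
                         intrinsic parameters the thesis estimates online;
                         the values used here are placeholders.
    """
    v = 0.5 * (r_right * wr + r_left * wl)   # linear velocity of the base
    w = (r_right * wr - r_left * wl) / b     # angular velocity of the base
    if abs(w) < 1e-9:                        # straight-line segment
        x += v * dt * np.cos(theta)
        y += v * dt * np.sin(theta)
    else:                                    # exact integration along the arc
        x += v / w * (np.sin(theta + w * dt) - np.sin(theta))
        y -= v / w * (np.cos(theta + w * dt) - np.cos(theta))
    theta += w * dt
    return x, y, theta

# Example step with made-up wheel rates and parameters
pose = integrate_odometry(0.0, 0.0, 0.0, wl=2.0, wr=2.2, dt=0.05,
                          r_left=0.10, r_right=0.10, b=0.50)
print(pose)
```

    Small errors in the radii or separation bias every such step, which is why estimating them jointly with the sensor extrinsics, as the thesis does, directly improves odometry quality.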

    Visual and Camera Sensors

    This book includes 13 papers published in the Special Issue "Visual and Camera Sensors" of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors.

    From locomotion to cognition: Bridging the gap between reactive and cognitive behavior in a quadruped robot

    The cognitivistic paradigm, which states that cognition is a result of computation with symbols that represent the world, has been challenged by many. The opponents have primarily criticized the detachment from direct interaction with the world and pointed to some fundamental problems (for instance, the symbol grounding problem). Instead, they emphasized the constitutive role of embodied interaction with the environment. This has motivated the advancement of synthetic methodologies: the phenomenon of interest (cognition) can be studied by building and investigating whole brain-body-environment systems. Our work is centered around a compliant quadruped robot equipped with a multimodal sensory set. In a series of case studies, we investigate the structure of the sensorimotor space that the robot's application of different actions in different environments brings about. Then, we study how the agent can autonomously abstract the regularities that are induced by the different conditions and use them to improve its behavior. The agent is engaged in path integration, terrain discrimination and gait adaptation, and moving-target following tasks. The nature of the tasks forces the robot to leave the "here-and-now" time scale of simple reactive stimulus-response behaviors and to learn from its experience, thus creating a "minimally cognitive" setting. Solutions to these problems are developed by the agent in a bottom-up fashion. The complete scenarios are then used to illuminate the concepts that are believed to lie at the basis of cognition: sensorimotor contingencies, body schema, and forward internal models. Finally, we discuss how the presented solutions are relevant for applications in robotics, in particular in the area of autonomous model acquisition and adaptation and, for mobile robots, in dead reckoning and traversability detection.
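    As a small illustration of the path integration (dead reckoning) task mentioned above, the sketch below accumulates per-step displacement estimates into a position estimate; the function and its inputs are hypothetical stand-ins for proprioceptive estimates, not the robot's actual signals or learning mechanism.

```python
import numpy as np

def path_integrate(strides, headings):
    """Dead-reckoning style path integration from per-step estimates.

    strides  : iterable of estimated step lengths (m)
    headings : iterable of estimated headings at each step (rad)
    Returns the accumulated (x, y) position estimate.
    """
    x = y = 0.0
    for s, h in zip(strides, headings):
        x += s * np.cos(h)   # displacement along the current heading
        y += s * np.sin(h)
    return x, y

# Example: ten 0.1 m steps while gradually turning through 45 degrees
print(path_integrate([0.1] * 10, np.linspace(0.0, np.pi / 4, 10)))
```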