
    Gait Generation of Multilegged Robots by using Hardware Artificial Neural Networks

    Living organisms can act autonomously because biological neural networks process environmental information in continuous time. Living organisms have therefore inspired many applications of autonomous control to small-sized robots. In this chapter, a small-sized robot is controlled by a hardware artificial neural network (ANN) without software programs. Previously, the authors constructed a multilegged walking robot whose limb link mechanism was designed to reduce the number of actuators. This chapter describes the basic characteristics of hardware ANNs that generate gaits for multilegged robots. The pulses emitted by the hardware ANN produce oscillating patterns of electrical activity. The pulse-type hardware ANN model has the basic features of a class II neuron model, which behaves like a resonator. Gait generation by the hardware ANNs thus mimics the synchronization phenomena observed in biological neural networks. Consequently, the constructed hardware ANNs can generate multilegged robot gaits without requiring software programs.
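The synchronization principle behind this gait generation can be illustrated in software. The sketch below is a generic pair of coupled phase oscillators, not the authors' pulse-type hardware circuit; all names and parameters are illustrative. It shows two identical oscillators locking into antiphase, the phase relationship needed for alternating leg movement:

```python
import math

def simulate(steps=20000, dt=1e-3, k=2.0):
    """Two coupled phase oscillators converging to antiphase locking."""
    w = 2 * math.pi            # intrinsic frequency: 1 Hz for both legs
    p1, p2 = 0.3, 0.4          # nearly in-phase initial conditions
    for _ in range(steps):
        # Kuramoto-style coupling with a pi offset: the stable fixed
        # point of the phase difference p1 - p2 is pi (antiphase).
        dp1 = w + k * math.sin(p2 - p1 - math.pi)
        dp2 = w + k * math.sin(p1 - p2 - math.pi)
        p1 += dp1 * dt
        p2 += dp2 * dt
    return (p1 - p2) % (2 * math.pi)
```

Starting almost in phase, the pair drifts apart until the phase difference settles at π and stays locked there; in the hardware ANN, the analogous locking emerges from pulse coupling between resonator neurons.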

    Automated Micromanipulation of Micro Objects

    In recent years, research efforts in the development of Micro Electro Mechanical Systems (MEMS), including microactuators and micromanipulators, have attracted a great deal of attention. The development of microfabrication techniques has resulted in substantial progress in the miniaturization of devices such as electronic circuits. However, research in MEMS still lags behind in terms of reliable tools for post-fabrication processes and the precise, dexterous manipulation of individual micro-sized objects. Current micromanipulation mechanisms are prone to high cost, a large footprint, and poor dexterity, and are labour intensive. To overcome such limitations, the research in this thesis focuses on the utilization of microactuators in micromanipulation. Microactuators are compliant structures that undergo substantial deflection during micromanipulation due to the considerable surface micro forces. These forces govern micromanipulation so strongly that their effects should be considered in the design of microactuators and microsensors. In this thesis, the characterization of the surface micro forces and automated micromanipulation are investigated. An inexpensive experimental setup is proposed as a platform to replace Atomic Force Microscopy (AFM) for analyzing the force characterization of micro-scale components. The relationship between the magnitudes of the surface micro forces and parameters such as the velocity of the pushing process, relative humidity, temperature, hydrophilicity of the substrate, and surface area is empirically examined. In addition, a precision automated micromanipulation system is realized. A class of artificial neural networks (NNs) is devised to estimate the unmodelled micro forces during the controlled pushing of micro-sized objects along a desired path.
Then, a nonlinear controller is developed for the controlled pushing of the micro objects to guarantee the stability of the closed-loop system in the Lyapunov sense. To validate the performance of the proposed controller, an experimental setup is designed. The application of the proposed controller is extended to precisely push several micro objects, each with different characteristics in terms of the surface micro forces governing the manipulation process. The proposed adaptive controller is capable of learning to adjust its weights effectively when the surface micro forces change under varying conditions. Using the controller, a fully automated sequential positioning of three micro objects on a flat substrate is performed. The results are compared with those of identical sequential pushing using a conventional linear controller. The results suggest that artificial NNs are a promising tool for the design of adaptive controllers to accurately perform the automated manipulation of multiple objects at the microscopic scale for microassembly.
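The combination described above, a neural-network estimate of the unmodelled micro forces inside a Lyapunov-stable pushing controller, follows a standard adaptive-control pattern. The sketch below is a generic textbook construction, not the thesis' controller: the disturbance d, the radial-basis features, and every gain are invented for illustration.

```python
import math

def track(steps=50000, dt=1e-4, k=20.0, gamma=50.0):
    """Push a simulated object along a ramp while an adaptive term
    learns to cancel an unknown, position-dependent disturbance."""
    d = lambda x: 2.0 * math.sin(3.0 * x) + 1.0      # hypothetical micro force
    centers = [i * 0.25 for i in range(11)]          # radial-basis centers
    phi = lambda x: [math.exp(-((x - c) ** 2) / 0.05) for c in centers]
    w = [0.0] * len(centers)                         # adaptive weights
    x = 0.0
    for n in range(steps):
        xd = 0.5 * n * dt                            # desired ramp trajectory
        e = x - xd
        f = phi(x)
        d_hat = sum(wi * fi for wi, fi in zip(w, f)) # learned force estimate
        u = -k * e + 0.5 - d_hat                     # feedback + feedforward
        # Lyapunov-motivated adaptation law: w' = gamma * e * phi(x)
        w = [wi + gamma * e * fi * dt for wi, fi in zip(w, f)]
        x += (u + d(x)) * dt                         # first-order pushing model
    return abs(x - 0.5 * steps * dt)                 # final tracking error
```

The adaptation law w' = γ·e·φ(x) is chosen so that the derivative of the Lyapunov function V = e²/2 + |w − w*|²/(2γ) is negative semidefinite, which is the sense of stability the abstract refers to.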

    Power-Scavenging MEMS Robots

    This thesis includes the design, modeling, and testing of novel, power-scavenging, biologically inspired MEMS microrobots. Over one hundred 500-μm and 990-μm microrobots with two, four, and eight wings were designed, fabricated, and characterized. These microrobots constitute the smallest documented attempt at powered flight. Each microrobot wing is composed of downward-deflecting, laser-powered thermal actuators made of gold and polysilicon; the microrobots were fabricated in PolyMUMPs® (Polysilicon Multi-User MEMS Processes). Characterization results of the microrobots illustrate how wing-tip deflection can be maximized by optimizing the gold-to-polysilicon ratio as well as the dimensions of the actuator-wings. From these results, an optimum actuator-wing configuration was identified. It was also determined that the actuator-wing configuration with maximum deflection and surface area yet minimum mass had the greatest lift-to-weight ratio. Powered testing showed that the microrobots successfully scavenged power from a remote 660-nm laser. These microrobots also demonstrated rapid downward flapping, but none achieved flight. The results show that the microrobots were too heavy and lacked sufficient wing surface area. It was determined that a successfully flying microrobot could be achieved by adding a robust, lightweight material, similar to insect wings, to the optimum actuator-wing configuration. The ultimate objective of the flying microrobot project is an autonomous, fully maneuverable flying microrobot that is capable of sensing and acting upon a target. Such a microrobot would be capable of precise lethality, accurate battle-damage assessment, and successful penetration of otherwise inaccessible targets.

    Modeling, simulation and control of microrobots for the microfactory.

    Future assembly technologies will involve higher levels of automation in order to satisfy increased microscale or nanoscale precision requirements. Traditionally, assembly using a top-down robotic approach has been well studied and applied in the microelectronics and MEMS industries, but less so in nanotechnology. With the boom of nanotechnology since the 1990s, newly designed products with new materials, coatings, and nanoparticles are gradually entering everyday life, while the industry has grown into a billion-dollar volume worldwide. Traditionally, nanotechnology products are assembled using bottom-up methods, such as self-assembly, rather than top-down robotic assembly. This is due to the need to handle components in large volumes and the high cost of precision top-down manipulation. However, bottom-up manufacturing methods have certain limitations: components need predefined shapes and surface coatings, and the number of assembly components is limited to very few. For example, in the case of self-assembly of nano-cubes with an origami design, post-assembly manipulation of cubes in large quantities at reasonable cost is still challenging. In this thesis, we envision a new paradigm for nanoscale assembly, realized with the help of a wafer-scale microfactory containing large numbers of MEMS microrobots. These robots will work together to enhance the throughput of the factory, while costing less than conventional nanopositioners. To fulfill the microfactory vision, numerous challenges related to design, power, control, and nanoscale task completion by these microrobots must be overcome. In this work, we study two classes of microrobots for the microfactory: stationary microrobots and mobile microrobots.
For the stationary microrobots in our microfactory application, we have designed and modeled two different types of microrobots, the AFAM (Articulated Four Axes Microrobot) and the SolarPede. The AFAM is a millimeter-size robotic arm with four degrees of freedom working as a nanomanipulator for nanoparticles, while the SolarPede is a light-powered, centimeter-size robotic conveyor in the microfactory. For mobile microrobots, we have introduced the world’s first laser-driven micrometer-size locomotor for dry environments, called the ChevBot, to prove the concept of the motion mechanism. The ChevBot is fabricated using MEMS technology in the cleanroom, followed by a microassembly step. We showed that it can perform locomotion with pulsed laser energy on a dry surface. Based on the knowledge gained with the ChevBot, we refined its fabrication process to remove the assembly step and increase its reliability. We designed and fabricated a steerable microrobot, the SerpenBot, in order to achieve controllable behavior under the guidance of a laser beam. Through modeling and experimental study of the characteristics of this type of microrobot, we proposed and validated a new type of deep learning controller, the PID-Bayes neural network controller. The experiments showed that the SerpenBot can achieve closed-loop autonomous operation on a dry substrate.
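The PID-Bayes controller mentioned above builds on the classical PID feedback law. For reference, a minimal discrete PID step looks like this (a generic textbook form; the gains and sample time are illustrative and not values from the thesis, which combines this law with a Bayesian neural network in a way described there):

```python
class PID:
    """Discrete PID law: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.dt
        # No derivative term on the first sample (no previous error yet).
        de = 0.0 if self.prev_error is None else (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * de
```

Closing the loop around a simple integrator plant with these gains drives the output to the setpoint; the thesis' contribution is handling a microrobot whose response is far less predictable than such a toy plant.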

    Aerial Vehicles

    This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous Life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
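The key/value decomposition behind such MR streaming algorithms can be sketched for one Life generation: the map phase emits a neighbor-count contribution for every live cell, and the reduce phase groups contributions by cell and applies the B3/S23 rules. This in-memory sketch (function names are ours) mirrors the pattern but omits the streaming I/O and the strip-partitioning optimization:

```python
from collections import Counter

def mapper(live_cells):
    # Each live cell sends "+1" to its eight neighbors and marks itself alive.
    for (x, y) in live_cells:
        yield (x, y), "alive"
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    yield (x + dx, y + dy), 1

def reducer(pairs):
    # Shuffle/group by cell key, then apply Conway's B3/S23 rules.
    counts, alive = Counter(), set()
    for key, value in pairs:
        if value == "alive":
            alive.add(key)
        else:
            counts[key] += value
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in alive)}

def step(live_cells):
    return reducer(mapper(live_cells))
```

For example, the horizontal blinker {(0,0), (1,0), (2,0)} maps to a vertical one centered on (1,0). In a real streaming job the mapper and reducer would read and write tab-separated key/value lines, and strip partitioning would assign contiguous row bands to workers to cut shuffle traffic.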

    Neuromorphic auditory computing: towards a digital, event-based implementation of the hearing sense for robotics

    This work aims to advance the development of neuromorphic audio processing systems in robots through the implementation of an open-source neuromorphic cochlea, event-based models of primary auditory nuclei, and their potential use for real-time robotics applications. First, the main gaps when working with neuromorphic cochleae were identified. Among them, the accessibility and usability of such sensors can be considered a critical aspect. Analog silicon cochleae may not be as flexible as desired for some applications, but FPGA-based sensors are an alternative for fast prototyping and proof-of-concept applications. Therefore, a software tool was implemented for generating open-source, user-configurable Neuromorphic Auditory Sensor models that can be deployed on any FPGA, removing the aforementioned barriers for the neuromorphic research community. Next, the biological principles of the animal auditory system were studied with the aim of continuing the development of the Neuromorphic Auditory Sensor. More specifically, the principles of binaural hearing were studied in depth in order to implement event-based models that perform real-time sound source localization. Two different approaches were followed to extract inter-aural time differences from event-based auditory signals. On the one hand, a digital, event-based design of the Jeffress model was implemented. On the other hand, a novel digital implementation of the Time Difference Encoder model was designed and implemented on FPGA. Finally, three different robotic platforms were used to evaluate the performance of the proposed real-time neuromorphic audio processing architectures. An audio-guided central pattern generator was used to control a hexapod robot in real time using spiking neural networks on SpiNNaker. Then, a sensory integration application was implemented, combining sound source localization and obstacle avoidance for autonomous robot navigation.
Lastly, the Neuromorphic Auditory Sensor was integrated within the iCub robotic platform, the first time that an event-based cochlea has been used in a humanoid robot. Finally, the conclusions obtained are presented, and new features and improvements are proposed for future work.
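The Jeffress model localizes sound with an array of coincidence detectors fed through complementary delay lines: the detector whose internal delay cancels the interaural time difference (ITD) fires most often. A minimal software sketch of that idea follows; integer spike time stamps and the delay range are illustrative, whereas the thesis implements this as an event-based digital design on FPGA:

```python
def jeffress_itd(left_spikes, right_spikes, max_delay=10):
    """Estimate the ITD between two spike trains (sets of integer
    time stamps) with a bank of coincidence detectors."""
    best_d, best_hits = 0, -1
    for d in range(-max_delay, max_delay + 1):
        # Detector d counts coincidences between the left train
        # delayed by d ticks and the right train.
        hits = sum(1 for t in left_spikes if (t + d) in right_spikes)
        if hits > best_hits:
            best_d, best_hits = d, hits
    return best_d
```

If the right ear receives each spike three ticks after the left ear, the detector with a three-tick delay wins and the estimated ITD is 3, from which the azimuth of the source can be derived.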

    Algorithms for VLSI stereo vision circuits applied to autonomous robots

    Since the inception of robotics, visual information has been incorporated to allow robots to perform tasks that require interaction with their environment, particularly when that environment changes. Depth perception is among the most useful information for a mobile robot navigating and interacting with its surroundings. Among the different methods capable of measuring the distance to objects in the scene, stereo vision is the most advantageous for a small mobile robot with limited energy and computational power. Stereoscopy implies low power consumption because it uses passive sensors and does not require the robot to move. Furthermore, it is more robust, because it does not require a complex optic system with moving elements. On the other hand, stereo vision is computationally intensive: objects in the scene have to be detected and matched across images. Biological sensory systems are based on simple computational elements that process information in parallel and communicate among themselves. Analog VLSI chips are an ideal substrate to mimic the massive parallelism and collective computation present in biological nervous systems. For mobile robotics they have the added advantage of low power consumption and high computational power, thus freeing the CPU for other tasks. This dissertation discusses two stereoscopic methods that are based on simple, parallel calculations requiring communication only among neighboring processing units (local communication). Algorithms with these properties are easy to implement in analog VLSI and are also very convenient for digital systems. The first algorithm is phase-based: disparity, i.e., the spatial shift between the left and right images, is recovered as a phase shift in the spatial-frequency domain. Gábor functions are used to recover the frequency spectrum of the image because of their optimum joint spatial and spatial-frequency properties.
The Gábor-based algorithm is discussed and tested on a Khepera miniature mobile robot. Two further approximations are introduced to ease the analog VLSI and digital implementations. The second stereoscopic algorithm is difference-based: disparity is recovered by a simple calculation using the image differences and their spatial derivatives. The algorithm is simulated on a digital system, and an analog VLSI implementation is proposed and discussed. The thesis concludes with a description of some tools used in this research project. A stereo vision system has been developed for the Webots mobile robotics simulator to simplify the testing of different stereo algorithms. Similarly, two stereo vision turrets have been built for the Khepera robot.
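The phase-based principle can be illustrated in one dimension: a complex Gábor filter is applied at the same position in both images, and the disparity is the interocular phase difference divided by the filter's center frequency. The sketch below assumes a single known spatial frequency and ignores the multi-scale and confidence machinery a practical implementation needs; all parameter values are illustrative:

```python
import cmath, math

def phase_disparity(left, right, x, freq=0.2, sigma=5.0):
    """Estimate disparity at position x from the phase difference of
    complex Gabor responses of two 1-D signals."""
    def gabor(signal, x0):
        # Gaussian-windowed complex exponential centered on x0.
        acc = 0j
        for i, v in enumerate(signal):
            w = math.exp(-((i - x0) ** 2) / (2 * sigma ** 2))
            acc += v * w * cmath.exp(-1j * 2 * math.pi * freq * (i - x0))
        return acc
    # Interocular phase difference divided by the center frequency.
    dphi = cmath.phase(gabor(left, x) * gabor(right, x).conjugate())
    return dphi / (2 * math.pi * freq)

# Example signals: the right image is the left shifted by two pixels,
# i.e., the true disparity is 2.
left = [math.sin(2 * math.pi * 0.2 * i) for i in range(64)]
right = [math.sin(2 * math.pi * 0.2 * (i - 2)) for i in range(64)]
```

For these test signals, phase_disparity(left, right, 32) comes out close to 2.0. Because the computation at each position uses only a local weighted sum, it maps naturally onto the locally connected analog VLSI arrays the dissertation targets; note that the phase wraps, so only disparities below half the filter wavelength are unambiguous.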