
    An automated targeting mechanism with free space optical communication functionality for optomechatronic applications

    This thesis outlines the development of an agile, reliable and precise targeting mechanism with free space optical communication (FSOC) capabilities for use in optomechatronic applications. Building such a complex mechanism required insight into existing technologies, including actuator design, control methodology, programming architecture, object recognition and localization, and optical communication. Addressing each component individually produced a series of novel systems, beginning with a fast (1.3 m s⁻¹), accurate (micron-range) voice coil actuator (VCA). Its planar, compact design, with precision position feedback and smooth guidance, satisfies the size, weight and power (SWaP) constraints of many optomechatronic mechanisms. Arranging the VCAs in parallel motivated the use of a parallel orientation manipulator (POM) as the foundation of the targeting structure. Motion control was achieved with a cascade PID-PID methodology implemented in hardware, yielding average settling times of 23 ms. For fast, dependable computation, a custom printed circuit board (PCB) containing a field programmable gate array (FPGA), a microcontroller and image sensing technology was developed, and hardware-based object isolation and parameter identification algorithms were constructed. Integrating these techniques with the dynamic performance of the POM yielded mathematical relations that allow an object to be targeted in real time with update rates of 70 ms. Finally, a FSOC architecture using beam splitter technology was constructed and integrated into the targeting device, producing a system capable of automatically targeting an infrared (IR) light source while simultaneously receiving wireless optical communication at ranges beyond 30 feet and rates of 1 Mbit per second.
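
    Cascade PID-PID control, an outer loop regulating position and feeding its output as the setpoint of an inner velocity loop, can be sketched briefly. The gains, loop rate and unit-mass plant below are illustrative assumptions for a toy 1-D simulation, not values from the thesis (whose loops run in hardware):

```python
# Minimal cascade PID-PID sketch: the outer position loop's output becomes
# the setpoint of the inner velocity loop. All gains and the plant model
# are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 1e-4                                       # 10 kHz loop rate (assumed)
outer = PID(kp=50.0, ki=0.0, kd=0.2, dt=dt)     # position loop
inner = PID(kp=200.0, ki=50.0, kd=0.0, dt=dt)   # velocity loop

pos, vel = 0.0, 0.0         # toy 1-D actuator state (unit mass, assumed)
target = 1e-3               # 1 mm step command
for _ in range(int(0.05 / dt)):                 # simulate 50 ms
    vel_setpoint = outer.update(target, pos)    # outer loop output
    force = inner.update(vel_setpoint, vel)     # inner loop output
    vel += force * dt
    pos += vel * dt
print(f"position after 50 ms: {pos * 1e3:.3f} mm")
```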

    Telerobotic Sensor-based Tool Control Derived From Behavior-based Robotics Concepts

    Teleoperated task execution in hazardous environments is slow and requires highly skilled operators. Telerobotic assists intended to improve efficiency have been demonstrated in constrained laboratory environments but are not used in the field, because they are not suited to actual remote systems operated by typical operators in complex, unstructured environments. This work describes a methodology for combining selected concepts from behavior-based systems with telerobotic tool control in a way that is compatible with the existing manipulator architectures of remote systems typical of operations in hazardous environments. The approach aims to minimize task-instance modeling in favor of a priori task-type models, using sensor information to register the task-type model to the task instance. The concept was demonstrated for two tools useful in decontamination and dismantlement operations: a reciprocating saw and a powered socket tool. The experimental results demonstrated that the approach facilitates traded-control telerobotic tooling execution by enabling difficult tasks and by limiting tool damage. The role of the tools and tasks as drivers of the telerobotic implementation was clarified through the need for thorough task decomposition and through the discovery and examination of the tool process signature. The contributions of this work are: (1) the exploration and evaluation of selected features of behavior-based robotics to create a new methodology for integrating telerobotic tool control with positional teleoperation in the execution of complex tool-centric remote tasks; (2) the simplification of task decomposition and the implementation of sensor-based tool control in a way that eliminates the need to create a task-instance model for telerobotic task execution; and (3) the discovery, demonstrated use, and documentation of characteristic tool process signatures that have general value for the investigation of other tool control, tool maintenance, and tool development strategies beyond their benefit to the methodology described in this work.
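
    The "tool process signature" idea, a characteristic sensor trace a tool produces while it works, lends itself to a simple illustration. The sketch below watches a simulated motor-current trace and stops the tool when a moving average leaves an expected band; the signal model, band and window length are assumptions for illustration, not values from the dissertation:

```python
import numpy as np

# Hedged sketch: monitor a tool's motor-current trace and stop when the
# moving average departs from the expected "process signature" band.
# The simulated signal, band and window length are illustrative assumptions.
rng = np.random.default_rng(0)
normal_cut = 4.0 + 0.3 * rng.standard_normal(500)   # amps, nominal cutting
blade_bind = 9.0 + 0.5 * rng.standard_normal(100)   # amps, fault condition
current = np.concatenate([normal_cut, blade_bind])

WINDOW = 25            # samples in the moving average
LOW, HIGH = 2.5, 6.0   # expected signature band (amps)

for i in range(WINDOW, len(current)):
    avg = current[i - WINDOW:i].mean()
    if not (LOW <= avg <= HIGH):
        print(f"signature deviation at sample {i}: {avg:.2f} A -> stop tool")
        break
```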

    Microrobotics and Micromechatronics for Performing Complex and Precise Micro-Assembly Tasks

    This document presents a synthesis of my scientific contributions to the fields of microrobotics and micromechatronics, together with the transfers made to both industry and teaching. The work is directed at performing complex, precise and automated micro-assembly tasks through a microrobotic approach, and is applied in particular to MOEMS. The micrometre scale under consideration introduces many specificities, which translate into a notable deficit of knowledge about system behaviour at this scale. A first part of the work is therefore dedicated to the study and multiphysics modelling of microrobotic and micromechatronic systems. In a second part, this knowledge led to the proposal of new measurement and actuation principles, as well as to the development of complex, instrumented and integrated microsystems (micro optical bench, microgripper, compliant platforms). Finally, original control laws and assembly strategies were proposed, notably a dynamic hybrid force-position scheme combining external hybrid control with impedance control. This scheme makes it possible to master the dynamics of the contact/non-contact transitions that are critical at the micrometre scale, and also to automate complex micro-assembly processes. All of this work was validated experimentally, allowing precise quantification of the performance obtained (positioning accuracy, cycle time, robustness…). The perspectives of this work concern compact, integrated microrobotic and micromechatronic systems useful for high-dynamics micro-assembly and for the assembly of nanophotonic components.
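
    The impedance-control component of the hybrid force-position scheme mentioned above can be summarised by the standard target-impedance relation from the robotics literature; the notation below (desired inertia M, damping B, stiffness K, reference trajectory x_d, measured contact force F_ext) is conventional textbook usage, not notation taken from this document:

```latex
% Standard target-impedance relation (textbook form, assumed here): the
% controller makes the tool deviate from its reference trajectory x_d
% like a mass-spring-damper when a contact force F_ext appears.
M\,(\ddot{x} - \ddot{x}_d) + B\,(\dot{x} - \dot{x}_d) + K\,(x - x_d) = F_{\mathrm{ext}}
```

    Choosing a low stiffness K makes the gripper yield softly on contact, which is one way such a scheme can keep the contact/non-contact transition stable at the micrometre scale while the external hybrid loop enforces the force setpoint along constrained directions.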

    Affordable flexible hybrid manipulator for miniaturised product assembly

    Miniaturised assembly systems are capable of assembling parts a few millimetres in size with an accuracy of a few micrometres. Reducing the size and cost of such a system while increasing its flexibility and accuracy is a challenging issue. The introduction of hybrid manipulation, also called coarse/fine manipulation, within an assembly system is the solution investigated in this thesis. A micro-motion stage (MMS) is designed to serve as the fine positioning mechanism of the hybrid assembly system. MMSs often integrate compliant micro-motion stages (CMMSs), mechanisms that transmit an output force and displacement through the deformation of their structure, to achieve higher performance than conventional MMSs. Although widely studied, the design and modelling techniques for these mechanisms still need to be improved and simplified. Firstly, the linear modelling of CMMSs is evaluated and two polymer prototypes are fabricated and characterised. Polymer-based designs are found to have a low fabrication cost but to be unsuitable for constructing a micro-assembly system. A simplified nonlinear model is then derived and integrated within an analytical model, allowing full characterisation of the CMMS in terms of stiffness and range of motion. An aluminium CMMS is fabricated based on the optimisation results from the analytical model and is integrated within an MMS. The MMS is controlled using dual-range positioning to achieve a low-cost positioning accuracy better than 2 µm within a workspace of 4.4 × 4.4 mm². Finally, a hybrid manipulator is designed to assemble mobile-phone cameras and sensors automatically. A conventional robot manipulator picks and places the parts in coarse mode, while the aluminium-CMMS-based MMS performs fine alignment of the parts. A high-resolution vision system locates the parts on the substrate and measures the relative position of the manipulator above the MMS using a calibration grid with square patterns. The overall placement accuracy of the assembly system is ±24 µm at 3σ and can reach 2 µm, for a total cost of less than £50k, demonstrating the suitability of hybrid manipulation for desktop-size miniaturised assembly systems. The precision of the existing system could be significantly improved by making the manipulator stiffer (e.g. preloaded bearings…) and adjustable to compensate for misalignment. Further improvement could also be made to the calibration of the vision system. The system could be scaled up or down using the same architecture while adapting the controllers to the scale.
    Engineering and Physical Sciences Research Council (EPSRC)
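
    The coarse/fine (dual-range) idea is simple to sketch: a coarse stage makes a large, imprecise move, then a fine stage corrects the residual error measured by vision. The stage models, error magnitudes and class names below are illustrative assumptions, not the thesis's implementation:

```python
import random

# Hedged dual-range positioning sketch: a coarse stage with large error
# hands off to a fine stage with micron-level resolution. The error
# magnitudes and interfaces are illustrative assumptions.
class CoarseStage:
    def move_to(self, target_um):
        # conventional robot: lands within ~100 um of the target (assumed)
        return target_um + random.uniform(-100, 100)

class FineStage:
    RANGE_UM = 2200      # half of an assumed 4.4 mm fine workspace
    def correct(self, position_um, target_um):
        error = target_um - position_um
        if abs(error) > self.RANGE_UM:
            raise ValueError("residual error exceeds fine-stage range")
        # compliant micro-motion stage: ~1 um resolution (assumed)
        return position_um + error + random.uniform(-1, 1)

target = 5000.0                          # desired position, um
pos = CoarseStage().move_to(target)      # coarse pick-and-place
print(f"after coarse move: error = {target - pos:+.1f} um")
pos = FineStage().correct(pos, target)   # vision-guided fine alignment
print(f"after fine move:   error = {target - pos:+.1f} um")
```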

    Vision-based Self-Supervised Depth Perception and Motion Control for Mobile Robots

    Advances in robotics have opened many opportunities to deploy mobile robots in a variety of settings. However, many current mobile robots carry a sensor suite with multiple types of sensors; this expensive suite, and the computationally complex software needed to fully exploit it, may limit large-scale deployment. Recent developments in computer vision have made it possible to complete various robotic tasks with camera systems alone. This thesis focuses on two problems for vision-based mobile robots: depth perception and motion control. Commercially available stereo cameras relying on traditional stereo matching algorithms are widely used in robotic applications to obtain depth information. Although their raw (predicted) disparity maps may contain incorrect estimates, they can still provide useful prior information towards more accurate predictions. We propose a data-driven pipeline that incorporates the raw disparity to predict high-quality disparity maps. The pipeline first uses a confidence generation component to identify inaccuracies in the raw disparity. A deep neural network, consisting of a feature extraction module, a confidence-guided raw disparity fusion module, and a hierarchical occlusion-aware disparity refinement module, then computes the final disparity estimates and their corresponding occlusion masks. The pipeline can be trained in a self-supervised manner, removing the need for expensive ground-truth training labels. Experimental results on public datasets show that the pipeline achieves competitive accuracy at real-time processing rates, and tests with images captured by commercial stereo cameras demonstrate its effectiveness in improving their raw disparity estimates. The predicted disparity maps are then used by a proposed disparity-based direct visual servoing controller to compute the commanded velocity that moves a mobile robot towards its target pose. Many previous visual servoing methods rely on complex and error-prone feature extraction and matching steps; the proposed framework follows the direct visual servoing approach, which requires no extraction or matching, so its performance is not affected by the errors those steps can introduce. Furthermore, the predicted occlusion masks are incorporated in the controller to address the occlusion problem inherent in a stereo camera setup. The performance of the proposed control strategy is verified by extensive simulations and experiments.
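
    One common way to flag unreliable raw disparities, plausibly related to the pipeline's confidence-generation step though the thesis's exact method is not specified here, is a left-right consistency check: a pixel's left-image disparity should agree with the disparity found at its matched location in the right image. A minimal NumPy sketch with toy data:

```python
import numpy as np

# Hedged sketch of a left-right consistency check used to build a
# confidence mask for raw disparity maps. The tolerance and the toy
# test data are illustrative assumptions.
def lr_consistency_mask(disp_left, disp_right, tol=1.0):
    """True where the left disparity agrees (within tol pixels) with the
    right disparity sampled at the matched location x - d_left(x)."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    xr = np.clip((xs - disp_left).round().astype(int), 0, w - 1)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    return np.abs(disp_left - disp_right[ys, xr]) <= tol

# toy example: a flat 10-pixel disparity field with one corrupted pixel
dl = np.full((4, 8), 10.0)
dr = np.full((4, 8), 10.0)
dl[2, 6] = 3.0                       # simulated bad raw estimate
print(lr_consistency_mask(dl, dr))   # False only at the corrupted pixel
```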

    Methods for the improvement of power resource prediction and residual range estimation for offroad unmanned ground vehicles

    Unmanned Ground Vehicles (UGVs) are becoming more widespread in their deployment. Advances in technology have improved not only their reliability but also their ability to perform complex tasks. UGVs are particularly attractive for operations considered unsuitable for human operatives, including dangerous work such as explosive ordnance disarmament and situations where human access is limited, such as planetary exploration or search and rescue missions in physically small spaces. As technology advances, UGVs are gaining increased capabilities and commensurately increased complexity, allowing them to participate in an increasingly wide range of scenarios. UGVs have limited power reserves that can restrict mission duration and the range of capabilities they can deploy. As UGVs tend towards increased capability and complexity, extra burden is placed on already stretched power resources: electric drives and a growing array of processors, sensors and effectors all need sufficient power to operate. Accurate prediction of mission power requirements is therefore of utmost importance, especially in safety-critical scenarios where the UGV must complete an atomic task or risk creating an unsafe environment through failure caused by depleted power. Live energy prediction for vehicles that traverse typical road surfaces is a well-researched topic. However, this is not sufficient for modern UGVs, which must traverse a wide variety of terrains that may change considerably with prevailing environmental conditions. This thesis addresses the gap by presenting a novel approach to both off-line and on-line energy prediction that considers the effects of weather conditions on a wide variety of terrains. The prediction is based on nonlinear polynomial regression using live sensor data to improve upon the accuracy of current methods. The new approach is evaluated and compared to existing algorithms using a custom 'UGV mission power' simulation tool, which allows the user to test the accuracy of mission energy prediction algorithms over specified mission routes that include a variety of terrains and prevailing weather conditions. A series of experiments that test and record the real-world power use of a typical small electric-drive UGV are also performed, for a variety of terrains and weather conditions, and the empirical results are used to validate the simulation tool. The new algorithm showed a significant improvement over current methods, allowing UGVs deployed in real-world scenarios, where they must contend with varied terrains and changeable weather, to make accurate energy-use predictions. This enables more capabilities to be deployed with a known impact on the remaining mission power requirement, more efficient mission durations by avoiding the need to maintain excessive estimated power reserves, and increased safety through reduced risk of aborting atomic operations in safety-critical scenarios. As a supplementary contribution, this work created a power resource usage and prediction test-bed UGV and the resulting datasets, as well as a novel simulation tool for UGV mission energy prediction. The tool implements a UGV model with accurate power-use characteristics, confirmed by an empirical test series. It can be used to test a wide variety of scenarios and power prediction algorithms, and could support the development of further mission energy prediction technology or serve as a mission energy planning tool.
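
    Since the predictor is described as nonlinear polynomial regression over live sensor data, the technique can be sketched generically. The feature names (speed, slope, a weather-driven soil-moisture proxy), the synthetic data and the degree-2 model below are illustrative assumptions, not the thesis's model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hedged sketch of nonlinear polynomial regression for UGV power draw.
# Features, coefficients and data are illustrative assumptions.
rng = np.random.default_rng(1)
n = 400
speed = rng.uniform(0.2, 2.0, n)       # m/s
slope = rng.uniform(-10, 10, n)        # degrees
moisture = rng.uniform(0.0, 1.0, n)    # soil-moisture proxy (weather)

# synthetic "ground truth" power with interaction terms and noise (watts)
power = (30 * speed + 4 * speed * slope + 60 * moisture * speed
         + 5 * rng.standard_normal(n))

X = np.column_stack([speed, slope, moisture])
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, power)

# predict instantaneous draw from a live sensor sample, then integrate
sample = np.array([[1.5, 5.0, 0.8]])   # speed, slope, moisture
watts = model.predict(sample)[0]
print(f"predicted draw: {watts:.1f} W "
      f"-> {watts * 600 / 3600:.1f} Wh over 10 min")
```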

    Prédiction de performance d'algorithmes de traitement d'images sur différentes architectures hardwares

    In computer vision, choosing a computing architecture is becoming more difficult for image processing experts: the number of architectures capable of running image processing algorithms keeps growing, as does the number of computer vision applications constrained by computing capacity, power consumption and size. Selecting a hardware architecture, such as a CPU, GPU or FPGA, is therefore an important issue for computer vision applications. The main goal of this study is to predict system performance at the beginning of a computer vision project: for a manufacturer, or even a researcher, the computing architecture should be selected as early as possible to minimize the impact on development, since the earlier the choice is made, the greater the reduction in development cost. A large variety of methods and tools has been developed to predict the performance of computing systems, but they rarely target a specific domain and cannot predict performance without analyzing the code or benchmarking the architectures. This work focuses on predicting the performance of computer vision algorithms without any benchmarking; concentrating on the image processing domain makes it possible to split its algorithms into simple elements, here called primitive blocks. In this context, a new paradigm based on decomposing every image processing algorithm into primitive blocks has been developed, together with a method to model the primitive blocks according to software and hardware parameters. The decomposition into primitive blocks and their modelling were shown to be feasible: experiments on different architectures, with real data, using algorithms such as convolution and wavelets, validated the proposed paradigm. This approach is a first step towards a tool that helps choose a hardware architecture and optimize image processing algorithms.
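
    The primitive-block idea, modelling each block's runtime from software parameters (image and kernel size) and hardware parameters (throughput, overhead), can be illustrated with a toy model. The linear-in-operations form, the synthetic timings and the fitted constants below are assumptions for illustration, not the thesis's model:

```python
import numpy as np

# Hedged sketch: model a convolution primitive's runtime as
# t = overhead + ops / throughput, where ops depends on software
# parameters and throughput on the hardware. The "measurements" are
# synthetic; a real model would be fitted to timings from each target.
def conv_ops(width, height, ksize):
    return width * height * ksize * ksize   # multiply-accumulates

# synthetic (ops, seconds) measurements for one assumed architecture
sizes = [(640, 480, 3), (1280, 720, 5), (1920, 1080, 7)]
ops = np.array([conv_ops(*s) for s in sizes], dtype=float)
times = 2e-3 + ops / 5e9                    # assumed 5 Gop/s plus overhead

# fit t = a + b * ops by least squares, then predict an unseen case
A = np.column_stack([np.ones_like(ops), ops])
a, b = np.linalg.lstsq(A, times, rcond=None)[0]
pred = a + b * conv_ops(3840, 2160, 3)
print(f"overhead {a * 1e3:.2f} ms, throughput {1 / b / 1e9:.1f} Gop/s, "
      f"4K 3x3 conv ~ {pred * 1e3:.1f} ms")
```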

    Image Simulation in Remote Sensing

    Remote sensing is actively researched in the environmental, military and urban-planning fields, through technologies such as monitoring natural climate phenomena on the Earth, land cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions have been launched, and remote sensing images are acquired by various observation methods, including cluster satellites. However, the atmospheric and environmental conditions present in the observed scene degrade image quality or prevent the capture of the Earth's surface information. One way to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated using statistical or knowledge-based models, or using spectral and optics-based models, to stand in for an image that could not be obtained at the required time. The proposed methodologies offer economical utility in generating image learning materials and time series data through image simulation. The six published articles cover various topics and applications central to remote sensing image simulation. Although submission to this Special Issue is now closed, the need remains for further in-depth research and development on image simulation at high spatial and spectral resolution, sensor fusion, and colorization. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.

    Design and implementation of a vision system for microassembly workstation

    Rapid development of micro/nano technologies and the evolution of biotechnology have led to research on assembling micro components into complex microsystems and on manipulating cells, genes and similar biological components. To develop advanced inspection/handling systems and methods for the manipulation and assembly of micro products and micro components, robust micromanipulation and microassembly strategies can be implemented on a high-speed, repetitive, reliable, reconfigurable, robust and open-architecture microassembly workstation. Owing to high accuracy requirements and the specific mechanical and physical laws that govern the microscale world, micromanipulation and microassembly tasks require robust control strategies based on real-time sensory feedback. Vision, as a passive sensor combined with a stereoscopic optical microscope, can yield high-resolution views of micro objects and micro scenes. Visual data contain useful information for micromanipulation and microassembly tasks and can be processed using various image processing and computer vision algorithms. This thesis introduces the initial work on the design and implementation of a vision system for a microassembly workstation, considering both software and hardware issues. Emphasis is placed on implementing the computer vision algorithms and vision-based control techniques that form a strong basis for the vision part of the workstation. The main goal of the vision system is to enable automated micromanipulation and microassembly across a variety of applications. Experiments with teleoperated and semi-automated tasks, in which micro particles are manipulated manually or automatically with a microgripper or a probe as manipulation tools, show quite promising results.
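
    As one concrete example of the kind of image-processing step such a vision system needs, a micro object can be located in the microscope image with normalized cross-correlation template matching. The OpenCV sketch below is a generic illustration with synthetic data, not the thesis's implementation:

```python
import cv2
import numpy as np

# Hedged sketch: locate a micro particle in a microscope frame with
# normalized cross-correlation. The synthetic frame and particle size
# are illustrative assumptions.
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(frame, (200, 90), 6, 255, -1)        # synthetic "particle"

template = np.zeros((21, 21), dtype=np.uint8)   # 21x21 particle template
cv2.circle(template, (10, 10), 6, 255, -1)

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)
center = (top_left[0] + 10, top_left[1] + 10)
print(f"particle at {center}, match score {score:.2f}")
# 'center' would then feed a vision-based controller positioning the
# microgripper or probe over the particle.
```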