
    Uncertainty-Aware Hand–Eye Calibration

    We provide a generic framework for the hand–eye calibration of vision-guided industrial robots. In contrast to traditional methods, we explicitly model the uncertainty of the robot in a stochastically founded way. Although the repeatability of modern industrial robots is high, their absolute accuracy is typically much lower. This uncertainty, especially if it is not taken into account, deteriorates the result of the hand–eye calibration. Our proposed framework not only yields a highly accurate hand–eye pose but also provides reliable information about the uncertainty of the robot. It further provides corrected robot poses for a convenient and inexpensive robot calibration. Our framework is computationally efficient and generic in several regards. It supports the use of a calibration target as well as self-calibration without the need for known 3-D points. It optionally enables the simultaneous calibration of the interior camera parameters. The framework is also generic with regard to the robot type and hence supports anthropomorphic as well as selective compliance assembly robot arm (SCARA) robots, for example. Simulated and real experiments show the validity of the proposed methods. An extensive evaluation of our framework on a public dataset shows a considerably higher accuracy than 15 state-of-the-art methods.
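
    The framework described above builds on the classical hand–eye problem of solving AX = XB for the unknown camera-to-gripper transform X from paired robot and camera poses. As a hedged illustration only, the sketch below wraps OpenCV's standard calibrateHandEye baseline; it is not the paper's uncertainty-aware solver, whose contribution, per the abstract, is to additionally treat the robot-side poses as uncertain and to estimate that uncertainty.

```python
# Hedged sketch: the classical AX = XB hand-eye baseline that such frameworks build on,
# using OpenCV's standard solver. This is NOT the paper's uncertainty-aware method;
# robot poses are treated as exact here.
import cv2
import numpy as np

def hand_eye_baseline(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Solve AX = XB for the camera-to-gripper transform X from paired pose lists."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,   # robot poses from forward kinematics
        R_target2cam, t_target2cam,       # calibration-target poses seen by the camera
        method=cv2.CALIB_HAND_EYE_TSAI,   # one of several classical closed-form solvers
    )
    X = np.eye(4)
    X[:3, :3] = R_cam2gripper
    X[:3, 3] = t_cam2gripper.ravel()
    return X
```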

    Impact of Imaging and Distance Perception in VR Immersive Visual Experience

    Virtual reality (VR) headsets have evolved to offer unprecedented viewing quality. Meanwhile, they have become lightweight, wireless, and low-cost, which has opened them up to new applications and a much wider audience. VR headsets can now provide users with a greater understanding of events and accuracy of observation, making decision-making faster and more effective. However, immersive technologies have seen a slow take-up, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and some scepticism towards the "VR advantage". The need therefore arises to evaluate the contribution that a VR system can make to user performance, for example in monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor. In parallel to the evolution of VR headsets there has been that of 360° cameras, which can now instantly acquire photographs and videos in stereoscopic 3D (S3D) at very high resolutions. 360° images are innately suited to VR headsets, where the captured view can be observed and explored through the natural rotation of the head. Acquired views can even be experienced and navigated from the inside as they are captured. The combination of omnidirectional images and VR headsets has opened up a new way of creating immersive visual representations. We call it photo-based VR. This new methodology combines traditional model-based rendering with high-quality omnidirectional texture-mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels, and operator training. This PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user performance in today's graphical visual experience, and then uses this as a reference to develop and evaluate new photo-based VR solutions. With the current literature on the photo-based VR experience and the associated user performance being very limited, this study builds new knowledge from the proposed assessments. We conduct five user studies on a few representative applications, examining how visual representations are affected by system factors (camera- and display-related) and how they influence human factors (such as realism, presence, and emotions). Particular attention is paid to realistic depth perception, for which we develop targeted photo-based VR solutions intended to provide users with a correct perception of space dimensions and object size. We call it true-dimensional visualization. The presented work contributes to the largely unexplored fields of photo-based VR and true-dimensional visualization, offering immersive system designers a thorough comprehension of the benefits, potential, and types of applications in which these new methods can make a difference. This thesis manuscript and its findings have been partly presented in scientific publications: five conference papers in Springer and IEEE symposia, [1], [2], [3], [4], [5], and one journal article in an IEEE periodical [6].
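
    The photo-based VR pipeline described above ultimately comes down to looking up an omnidirectional texture in the direction the user's head is pointing. As a minimal sketch, assuming the common equirectangular projection and a y-up, z-forward axis convention (neither taken from the thesis), the mapping from a pixel to its viewing direction is:

```python
# Hedged sketch: pixel -> viewing-direction lookup for an equirectangular 360° image,
# i.e. the kind of omnidirectional texture mapping photo-based VR rendering relies on.
# The axis convention (y up, z forward) is an assumption, not taken from the thesis.
import numpy as np

def equirect_pixel_to_direction(u, v, width, height):
    """Unit viewing direction for pixel (u, v) of a width x height equirectangular image."""
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi     # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi    # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),       # x: right
                     np.sin(lat),                     # y: up
                     np.cos(lat) * np.cos(lon)])      # z: forward

print(equirect_pixel_to_direction(u=2048, v=1024, width=4096, height=2048))  # ~forward
```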

    RoboCrane: a system for providing a power and a communication link between lunar surface and lunar caves for exploring robots

    Lava caves are the result of a geological process related to the cooling of basaltic lava flows. On the Moon, this process may lead to caves several kilometers long and hundreds of meters in diameter. Access to lava tubes can be gained through skylights, vertical pits connecting a lava tube to the lunar surface. This represents an outstanding opportunity for long-term missions, for future permanent human settlements, and for accessing pristine samples of lava, secondary minerals, and volatiles. Given this, ESA launched a campaign through the Open Space Innovation Platform calling for ideas that would tackle the many challenges of exploring lava pits. Five projects, including RoboCrane, were selected. Solar light and a direct line of sight to the lunar surface (for communications) are not available inside lava tubes. This is a problem for any robot (or swarm of robots) exploring them. RoboCrane tackles both problems by deploying an element, called the charging head (CH), at the bottom of the skylight by means of a crane. The CH acts as a battery charger and a communication relay for the exploring robots. The required energy is drawn from the crane's solar panel on the surface and delivered to the bottom of the skylight through an electrical wire running in parallel to the crane's hoisting wire. Using a crane allows the system to cope with unstable terrain around the skylight rim and protects the wires from abrasion against the rocky surface and the pit rim. The charger in the CH is wireless, so the charging process can begin as soon as any of the robots gets close enough to the CH. This avoids complex and time-consuming docking operations, aggravated by the orography of the skylight floor. The crane infrastructure can also be used to deploy the exploring robots inside the pit, reducing their design constraints and mass budget, since the robots do not need to implement their own self-deployment system. Finally, RoboCrane includes all the sensors and actuators required for remote operation from a ground station. RoboCrane has been designed with a parametric tool so that it can be dynamically and rapidly adjusted to changes in input variables, such as the number of exploring robots, their electrical characteristics, and the crane reach. Agencia Estatal de Investigación | Ref. RTI2018-099682-A-I0
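
    The parametric sizing mentioned at the end of the abstract can be illustrated with a simple power-budget calculation that scales with the number of robots, their charging power, and the wire run down the skylight. The sketch below is hypothetical: the function, parameter names, and all numbers are placeholders for illustration, not RoboCrane design values.

```python
# Hedged sketch of a parametric power-budget / wire-sizing calculation of the kind the
# abstract describes (inputs: number of robots, their electrical characteristics, crane
# reach). All names and numbers are hypothetical, not RoboCrane design values.
from dataclasses import dataclass

COPPER_RESISTIVITY = 1.68e-8  # ohm*m at ~20 degC

@dataclass
class MissionParams:
    n_robots: int          # robots charging simultaneously
    charge_power_w: float  # wireless-charging power drawn per robot
    bus_voltage_v: float   # voltage on the wire running down the skylight
    wire_length_m: float   # one-way length, roughly crane reach plus pit depth
    wire_area_mm2: float   # conductor cross-section

def power_budget(p: MissionParams):
    load_w = p.n_robots * p.charge_power_w
    current_a = load_w / p.bus_voltage_v
    # Round-trip copper loss in the supply pair.
    resistance_ohm = COPPER_RESISTIVITY * (2 * p.wire_length_m) / (p.wire_area_mm2 * 1e-6)
    loss_w = current_a ** 2 * resistance_ohm
    return {"load_w": load_w, "current_a": current_a,
            "wire_loss_w": loss_w, "panel_w_required": load_w + loss_w}

print(power_budget(MissionParams(n_robots=3, charge_power_w=50.0, bus_voltage_v=48.0,
                                 wire_length_m=150.0, wire_area_mm2=2.5)))
```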

    Precision Surface Processing and Software Modelling Using Shear-Thickening Polishing Slurries

    Mid-spatial frequency (MSF) surface error is a known manufacturing defect of aspherical and freeform precision surfaces. These surface ripples decrease imaging contrast and the system signal-to-noise ratio. Existing sub-aperture polishing techniques are limited in their ability to smooth mid-spatial frequency errors. Shear-thickening slurries have been hypothesised to reduce mid-spatial frequency errors on precision optical surfaces by increasing the viscosity at the tool–part interface. Currently, controlling the generation of, and mitigating existing, mid-spatial frequency surface errors on aspherical and freeform surfaces requires extensive setup and the experience of seasoned workers. This thesis reports on experimental trials of shear-thickening polishing slurries on glass surfaces. By incorporating shear-thickening slurries into the precessed bonnet technology, the aim is to enhance the ability of the precessions technology to mitigate mid-spatial frequency errors. The findings could enable a more streamlined manufacturing chain for precision optics, in which the versatile precessions technology covers form correction, texture improvement, and MSF mitigation without relying on other polishing technologies. Such an improvement to existing bonnet polishing would be a vital stepping stone towards building a fully autonomous manufacturing cell in a market of continual economic growth. The experiments in this thesis analysed the capabilities of two shear-thickening slurry systems: (1) a polyethylene glycol with silica nanoparticle suspension, and (2) a water and cornstarch suspension. Both slurry systems demonstrated the ability to mitigate existing surface ripples. In terms of power spectral density, the polyethylene glycol slurries reduced the power of the mid-spatial frequencies by ~50% and the cornstarch suspension slurries by 60–90%. This thesis also reports experiments with a novel polishing approach in which a precessed bonnet rotates at a predetermined working distance above the workpiece surface. The rapidly rotating tool draws the shear-thickening slurry through the gap, stiffening the fluid for polishing. This technique demonstrated material removal using cornstarch suspension slurries at a working distance of 1.0–1.5 mm. The volumetric removal rate of this process is ~5% of that of contact bonnet polishing, so it aligns more with a finishing process. This polishing technique was given the term rheological bonnet finishing. The rheological properties of the cornstarch suspension slurries were measured using a rheometer and modelled through CFD simulation. Using the empirical rheological data, polishing simulations of the rheological bonnet finishing process were run in Ansys to analyse the effects of input parameters such as working distance, tool head speed, precess angle, and slurry viscosity.
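
    The ~50% and 60–90% figures quoted above come from comparing mid-spatial-frequency band power in the power spectral density of surface measurements before and after polishing. A minimal sketch of that comparison is given below, using SciPy's Welch estimator on synthetic 1-D profiles; the band limits, sampling step, and noise amplitudes are placeholders, not values from the thesis.

```python
# Hedged sketch: comparing mid-spatial-frequency (MSF) band power from the power
# spectral density of surface profiles before and after polishing. Band limits,
# sampling step and the synthetic profiles are placeholders, not thesis values.
import numpy as np
from scipy.signal import welch

def msf_band_power(profile_m, dx_m, band=(1e2, 1e4)):
    """Integrate the PSD of a 1-D profile over a spatial-frequency band (cycles/m)."""
    freqs, psd = welch(profile_m, fs=1.0 / dx_m, nperseg=min(1024, len(profile_m)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

rng = np.random.default_rng(0)
before = rng.normal(scale=5e-9, size=8192)   # synthetic pre-polish height profile [m]
after = rng.normal(scale=3e-9, size=8192)    # synthetic post-polish height profile [m]
reduction = 1.0 - msf_band_power(after, dx_m=1e-6) / msf_band_power(before, dx_m=1e-6)
print(f"MSF band power reduced by {reduction:.0%}")
```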

    Proceedings of the 10th International Congress on Architectural Technology (ICAT 2024): Architectural Technology Transformation.

    The profession of architectural technology is influential in the transformation of the built environment regionally, nationally, and internationally. The congress provides a platform for industry, educators, researchers, and the next generation of built environment students and professionals to showcase how their influence is transforming the built environment through novel ideas, businesses, leadership, innovation, digital transformation, research and development, and sustainable, forward-thinking technological and construction assembly design.

    Improved interlayer performance of short carbon fiber reinforced composites with bio-inspired structured interfaces

    The weak layer interfaces of 3D-printed short carbon fiber (SCF) reinforced polymer composites have remained an issue owing to the planar layer printing of traditional 3D printers. Recently, multi-axis 3D printing technology, which enables non-planar layer printing, has been developed. The aim of this study was to evaluate and compare the bonding performance of non-planar interfaces produced by multi-axis 3D printing with that of planar interfaces. The tested non-planar interfaces were designed as bio-inspired structured interfaces (BISIs) based on microstructural interfacial elements found in biological materials. Standard specimens with 0°/90° and 0° infill line directions were printed by a robotic-arm multi-axis 3D printer. Double cantilever beam (DCB) and end-notched flexure (ENF) tests were conducted to obtain the Mode I and Mode II interlaminar toughness of the SCF-reinforced composites. The test results showed that the critical energy release rates of the integrally formed BISI were significantly improved compared with the planar interface (PLAI) for both Mode I and Mode II delamination. In particular, the BISI with the 0° infill line direction exhibited the greatest increase in critical energy release rate. Scanning electron microscopy (SEM) and computed tomography (CT) showed that the damaged areas swept spatially through the curved interfaces of the BISI for the different infill line directions, and that a higher critical energy release rate was always accompanied by a larger damaged area. In addition, the tensile and flexural properties of the 0°-infilled PLAI and BISI specimens were also measured. This work provides an in-depth investigation of the PLAI and BISI properties of SCF-reinforced composites, demonstrating the potential benefits of integrally formed BISIs produced by multi-axis 3D printing and offering new perspectives for enhancing the layer interfaces of 3D-printed composites.
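
    For context on the reported interlaminar toughness values: DCB data are conventionally reduced to a Mode I critical energy release rate with the modified beam theory of ASTM D5528. The sketch below shows that standard reduction; it is not necessarily the exact data-reduction scheme used in this study, and the example numbers are invented.

```python
# Hedged sketch: the standard modified-beam-theory reduction of double cantilever
# beam (DCB) data to a Mode I critical energy release rate G_IC (ASTM D5528 style).
# This is the conventional formula, not necessarily the exact one used in the study.
def g1c_modified_beam_theory(load_N, displacement_m, width_m, crack_length_m, delta_m=0.0):
    """G_IC = 3*P*delta / (2*b*(a + |Delta|)), with Delta the crack-length correction."""
    return (3.0 * load_N * displacement_m) / (2.0 * width_m * (crack_length_m + abs(delta_m)))

# Invented example: 60 N at 4 mm opening, 20 mm wide specimen, 50 mm crack, 3 mm correction.
print(g1c_modified_beam_theory(60.0, 4e-3, 20e-3, 50e-3, 3e-3), "J/m^2")
```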

    Teleoperation Methods for High-Risk, High-Latency Environments

    In-Space Servicing, Assembly, and Manufacturing (ISAM) can enable larger-scale and longer-lived infrastructure projects in space, with interest ranging from commercial entities to the US government. Servicing, in particular, has the potential to vastly increase the usable lifetimes of satellites. However, the vast majority of spacecraft in low Earth orbit today were not designed to be serviced on-orbit. As such, several of the manipulations required during servicing cannot easily be automated and instead require ground-based teleoperation. Ground-based teleoperation of on-orbit robots brings its own challenges: high-latency communication, with telemetry delays of several seconds, and difficulty visualizing the remote environment due to limited camera views. We explore teleoperation methods to alleviate these difficulties, increase task success, and reduce operator load. First, we investigate a model-based teleoperation interface intended to provide the benefits of direct teleoperation even in the presence of time delay. We evaluate the model-based teleoperation method with professional robot operators, then use feedback from that study to inform the design of a visual planning tool for this task, Interactive Planning and Supervised Execution (IPSE). We describe and evaluate the IPSE system and two interfaces: a 2D interface using a traditional mouse and keyboard, and a 3D interface using an Intuitive Surgical da Vinci master console. We then describe and evaluate an alternative 3D interface using a Meta Quest head-mounted display. Finally, we describe an extension of IPSE that allows human-in-the-loop planning for a redundant robot. Overall, we find that IPSE improves task success rate and decreases operator workload compared to a conventional teleoperation interface.
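
    The "plan locally, then supervise execution" idea behind interfaces like IPSE can be sketched as follows: the operator previews a plan against a local model, uploads it as a whole, and then only monitors delayed feedback rather than closing a control loop over the multi-second link. The message format, the round-trip link model, and all names below are illustrative assumptions, not the actual IPSE implementation.

```python
# Hedged sketch of the "plan locally, then supervise execution" pattern used for
# multi-second communication delays. Message format and link model are illustrative
# assumptions, not IPSE code.
import time
from collections import deque

class RoundTripLink:
    """Models a command link: a command sent now is acknowledged after the round-trip delay."""
    def __init__(self, round_trip_delay_s):
        self.delay = round_trip_delay_s
        self._pending = deque()

    def send(self, msg):
        self._pending.append((time.monotonic() + self.delay, msg))

    def poll_acks(self):
        acked = []
        while self._pending and self._pending[0][0] <= time.monotonic():
            acked.append(self._pending.popleft()[1])
        return acked

def supervised_execution(plan, link, validate, timeout_s=10.0):
    """Preview the plan against a local model, upload it whole, then only monitor progress."""
    if not validate(plan):                   # rejected during offline preview
        raise ValueError("plan rejected during preview")
    for waypoint in plan:
        link.send(waypoint)                  # whole plan uploaded; no per-step teleoperation
    done, deadline = [], time.monotonic() + timeout_s
    while time.monotonic() < deadline and len(done) < len(plan):
        done += link.poll_acks()             # delayed acknowledgements are monitored, not servoed on
        time.sleep(0.05)
    return done

link = RoundTripLink(round_trip_delay_s=4.0)  # several-second ground-to-orbit round trip
print(supervised_execution([{"move_to": [0.1, 0.2, 0.3]}], link, validate=lambda p: len(p) > 0))
```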

    Sizing the Actuators for a Dragon Fly Prototype

    In order to improve the design of the actuators of a Dragon Fly prototype, we study the loads applied to the actuators in operation. Both external and inertial forces are taken into account, as well as internal loads, in order to evaluate the influence of the compliance of the arms on that of the end-effector. We show several inadequacies of the arms with respect to the stiffness needed to meet the initial design requirements. To reduce these inadequacies, a careful structural analysis of the stiffness of the actuators is carried out with a FEM technique, aimed at defining a design methodology for identifying the mechanical elements of the arms that need to be stiffened. As an example, the design of the actuators is presented, with the aim of proposing an indirect calibration strategy. We show that the performance of the Dragon Fly prototype can be improved by developing, and including in the control system, a suitable module to compensate the resulting errors. By applying our model in practical simulations, with the maximum load on the actuators and the associated internal stresses, we demonstrate its effectiveness against collected experimental data. A FEM analysis is carried out on each actuator to identify the critical elements to be stiffened, and a calibration strategy is used to evaluate and compensate the expected kinematic errors due to gravity and external loads. The obtained results are used to size the actuators. A sensitivity analysis of the effects of global compliance within the structure enables us to identify and stiffen the critical elements (typically the extremities of the actuators). The worst loading conditions are evaluated by considering the internal loads at the critical points of the machine structure, which in turn enables the sizing of the actuators. On this basis, the Dragon Fly prototype project has been set up, and a first optimal design of the arms has been performed by means of FEM analysis.
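
    The role of the sensitivity analysis mentioned above can be illustrated with a simple series-compliance budget: in a serial load path the compliances of the actuator and arm elements add, so the element contributing the largest share of end-effector deflection is the one to stiffen. The sketch below is generic and its stiffness values are placeholders, not Dragon Fly data.

```python
# Hedged sketch: compliances of actuator/arm elements in a serial load path add up,
# so the end-effector deflection under a tip load identifies which element dominates
# and should be stiffened. Stiffness values are placeholders, not Dragon Fly data.
import numpy as np

def deflection_budget(stiffnesses_N_per_m, load_N):
    """Per-element and total deflection for springs in series under a common load."""
    compliances = 1.0 / np.asarray(stiffnesses_N_per_m, dtype=float)
    per_element = load_N * compliances              # delta_i = F / k_i
    return per_element, per_element.sum()           # total compliance = sum of compliances

elements = {"actuator housing": 2.0e6, "arm tube": 8.0e6, "end-effector flange": 1.5e7}
per_elem, total = deflection_budget(list(elements.values()), load_N=200.0)
for (name, _), d in zip(elements.items(), per_elem):
    print(f"{name:22s} {d*1e6:7.1f} um  ({d/total:.0%} of total deflection)")
```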

    Functional Nanomaterials and Polymer Nanocomposites: Current Uses and Potential Applications

    This book covers a broad range of subjects, from the synthesis of smart nanoparticles and polymer nanocomposites and the study of their fundamental properties to the fabrication and characterization of devices and emerging technologies integrating smart nanoparticles and polymers.

    A Human Motion Measurement Method for Programming by Demonstration

    Programming by demonstration (PbD) is an intuitive approach to impart a task to a robot from one or several demonstrations by a human teacher. The acquisition of the demonstrations involves solving the correspondence problem when the teacher and the learner differ in sensing and actuation. Kinesthetic guidance is widely used to perform demonstrations: the robot is manipulated by the teacher, and the demonstrations are recorded by the robot's encoders. In this way, the correspondence problem is trivial, but the teacher's dexterity is impaired, which may affect the PbD process. Methods that are more practical for the teacher usually require the identification of mappings to solve the correspondence problem. The choice of demonstration acquisition method is therefore a compromise between the difficulty of identifying these mappings, the accuracy of the recorded elements, and the user-friendliness and convenience for the teacher. This thesis proposes an inertial human motion tracking method based on inertial measurement units (IMUs) for the PbD of pick-and-place tasks. Compared to kinesthetic guidance, IMUs are convenient and easy to use, but their accuracy can be limited. Their potential for PbD applications is investigated. To estimate the trajectory of the teacher's hand, three IMUs are placed on the teacher's arm segments (arm, forearm, and hand) to estimate their orientations. A specific method is proposed to partially compensate the well-known drift of the sensor orientation estimate around the gravity direction by exploiting the particular configuration of the demonstration. This method, called heading reset, is based on the assumption that the sensor passes through its original heading, with stationary phases, several times during the demonstration. The heading reset is implemented in an integration and vector-observation algorithm. Several experiments illustrate its advantages. A comprehensive inertial human hand motion tracking (IHMT) method for PbD is then developed. It includes an initialization procedure to estimate the orientation of each sensor with respect to the corresponding arm segment as well as the initial orientation of the sensor with respect to the frame attached to the teacher. The procedure involves a rotation and a static pose of the extended arm, making the measurement system robust to the positioning of the sensors on the segments. A procedure for estimating the position of the human teacher relative to the robot and a calibration procedure for the parameters of the method are also proposed. The error of the estimated human hand trajectory is measured experimentally and lies between 28.5 mm and 61.8 mm. The mappings required to solve the correspondence problem are identified. However, the observed accuracy of this IHMT method is not sufficient for a PbD process. To reach the necessary level of accuracy, a method is proposed to correct the hand trajectory obtained by IHMT using vision data, since a vision system is complementary to inertial sensors. For the sake of simplicity and robustness, the vision system tracks only the objects, not the teacher. The correction is based on so-called Positions Of Interest (POIs) and involves three steps: identifying the POIs in the inertial and vision data, pairing the hand POIs with the object POIs that correspond to the same action in the task, and finally correcting the hand trajectory based on the pairs of POIs.
The complete demonstration acquisition method is experimentally evaluated in a full PbD process. This experiment reveals the advantages of the proposed method over kinesthetic guidance in the context of this work.
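
    The heading reset described in the abstract can be sketched as follows: yaw obtained by integrating the gyroscope drifts with the sensor bias, but whenever a stationary phase is detected the estimate is snapped back to the initial heading, under the stated assumption that the hand pauses at its original heading several times during a demonstration. The stationarity test, threshold, and demo signal below are illustrative assumptions, not the thesis's algorithm.

```python
# Hedged sketch of the heading reset: integrate z-axis gyro to yaw, and snap the
# estimate back to the initial heading whenever the sensor is detected as stationary.
# The stationarity test and threshold are illustrative, not the thesis's algorithm.
import numpy as np

GYRO_STATIONARY_THRESH = 0.05   # rad/s, placeholder threshold

def integrate_yaw_with_heading_reset(gyro_z, dt, initial_heading=0.0):
    """Yaw by gyro integration, reset to the initial heading during stationary phases."""
    yaw, yaws = initial_heading, []
    for omega in gyro_z:
        if abs(omega) < GYRO_STATIONARY_THRESH:
            # Method assumption: stationary phases occur at the original heading,
            # so the accumulated yaw drift can be discarded there.
            yaw = initial_heading
        else:
            yaw += omega * dt               # plain integration drifts with gyro bias
        yaws.append(yaw)
    return np.array(yaws)

# Tiny synthetic check: move away and back to the original heading, then pause.
# A constant gyro bias would leave a residual yaw error; the reset removes it.
dt, bias = 0.01, 0.02
true_rate = np.concatenate([np.full(100, 0.5), np.full(100, -0.5), np.zeros(50)])
yaw = integrate_yaw_with_heading_reset(true_rate + bias, dt)
print(f"final yaw error: {yaw[-1]:.3f} rad")   # ~0.000 thanks to the reset
```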