343 research outputs found

    Practical identifiability of the manipulator link stiffness parameters

    Get PDF
    The paper addresses the problem of manipulator stiffness modeling, which is extremely important for the precise machining of contemporary aeronautic materials, where the machining force causes significant compliance errors in the robot end-effector position. The main contributions are in the area of elastostatic parameter identification. Particular attention is paid to the practical identifiability of the model parameters, which differs completely from theoretical identifiability: the latter relies on the rank of the observation matrix only, without taking into account the essential differences in parameter magnitudes or the impact of measurement noise. This problem is relatively new in robotics and differs essentially from the one arising in geometric calibration. To solve it, several physical and statistical model reduction methods are proposed. They are based on the sparseness of the stiffness matrix, which reflects the physical properties of the manipulator elements, and on a heuristic selection of the practically non-identifiable parameters that employs numerical analysis of the parameter estimates. The advantages of the developed approach are illustrated by an application example dealing with the stiffness modeling of an industrial robot used in the aerospace industry.
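    As a rough numeric illustration of the distinction drawn in this abstract (not the authors' method), the sketch below contrasts theoretical identifiability, judged from the rank of the observation matrix alone, with practical identifiability, judged by comparing each estimate against its noise-induced standard deviation. The regressor matrix, stiffness values and noise level are all assumed placeholders.

```python
import numpy as np

# Synthetic example: d = A @ c + noise, where c = 1/k are joint compliances and
# A is the stacked observation (regressor) matrix of the elastostatic model.
rng = np.random.default_rng(0)
n_meas, n_par = 200, 6
A = rng.normal(scale=500.0, size=(n_meas, n_par))   # assumed torque-level regressors
A[:, 5] *= 1e-4                                     # one parameter is barely excited
k_true = np.array([2e6, 1.5e6, 8e5, 5e5, 3e5, 1e5]) # assumed stiffnesses [N*m/rad]
sigma = 1e-5                                        # measurement noise std [m]
d = A @ (1.0 / k_true) + rng.normal(scale=sigma, size=n_meas)

# Theoretical identifiability: rank of the observation matrix only.
print("rank(A) =", np.linalg.matrix_rank(A))        # full rank, so all look identifiable

# Practical identifiability: compare each estimate with its standard deviation.
c_hat, *_ = np.linalg.lstsq(A, d, rcond=None)       # estimated compliances 1/k
cov = sigma**2 * np.linalg.inv(A.T @ A)             # covariance of the estimates
std = np.sqrt(np.diag(cov))
for i, (c, s) in enumerate(zip(c_hat, std)):
    tag = "practically identifiable" if s < 0.1 * abs(c) else "poorly identifiable"
    print(f"parameter {i}: estimate {c:.3e}, std {s:.3e} -> {tag}")
```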

    Postprocesamiento CAM-ROBOTICA orientado al prototipado y mecanizado en células robotizadas complejas

    Full text link
    The main interest of this thesis is the study and implementation of postprocessors that adapt the toolpath generated by a Computer Aided Manufacturing (CAM) system to a complex eight-joint robotic workcell devoted to the rapid prototyping of 3D CAD-defined products. The cell consists of a 6R industrial manipulator mounted on a linear track and synchronized with a rotary table. Reaching this main objective requires several preliminary tasks, each with its own methodology, objectives and partial results that complement one another:
    - The architecture of the workcell is described in depth, at both the displacement and joint-rate levels and for both the direct and inverse resolutions. The conditioning of the Jacobian matrix is used as a kinetostatic performance index to evaluate the vicinity of singular postures, which are also analysed from a geometric point of view.
    - Prior to any machining, the additional external joints require a calibration performed in situ, usually in an industrial environment. A novel Non-contact Planar Constraint Calibration method is developed to estimate the configuration parameters of the external joints by means of a laser displacement sensor.
    - A first control scheme is implemented by means of a fuzzy inference engine at the displacement level, integrated within the postprocessor of the CAM software.
    - Several Redundancy Resolution Schemes (RRS) at the joint-rate level are compared for the configuration of the postprocessor, dealing not only with the additional joints (intrinsic redundancy) but also with the redundancy due to the symmetry of the milling tool (functional redundancy).
    - The use of these schemes is optimized by adjusting two performance criterion vectors, related to singularity avoidance and to the maintenance of a preferred reference posture, as secondary tasks performed during path tracking. Two innovative fuzzy inference engines actively adjust the weight of each joint in these tasks.
    Andrés De La Esperanza, FJ. (2011). Postprocesamiento CAM-ROBOTICA orientado al prototipado y mecanizado en células robotizadas complejas [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/10627
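    The first point above mentions the conditioning of the Jacobian as a kinetostatic performance index for detecting the vicinity of singular postures. A minimal sketch of that index, using a toy 2R planar arm rather than the eight-joint workcell of the thesis, might look as follows.

```python
import numpy as np

# Toy 2R planar arm Jacobian; link lengths are arbitrary placeholders.
def planar_2r_jacobian(q1, q2, l1=1.0, l2=0.8):
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# The condition number grows without bound as the arm approaches the
# stretched-out singularity (q2 -> 0), flagging postures to avoid.
for q2 in (1.2, 0.3, 0.01):
    J = planar_2r_jacobian(0.5, q2)
    print(f"q2 = {q2:5.2f}  cond(J) = {np.linalg.cond(J):10.1f}")
```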

    Telerobotic Sensor-based Tool Control Derived From Behavior-based Robotics Concepts

    Get PDF
    Teleoperated task execution in hazardous environments is slow and requires highly skilled operators. Attempts to implement telerobotic assists to improve efficiency have been demonstrated in constrained laboratory environments, but they are not being used in the field because they are not appropriate for actual remote systems operating in complex unstructured environments with typical operators. This work describes a methodology for combining selected concepts from behavior-based systems with telerobotic tool control in a way that is compatible with the existing manipulator architectures used by remote systems typical of operations in hazardous environments. The purpose of the approach is to minimize task instance modeling in favor of a priori task type models, while using sensor information to register the task type model to the task instance. The concept was demonstrated for two tools useful in decontamination and dismantlement type operations: a reciprocating saw and a powered socket tool. The experimental results demonstrated that the approach facilitates traded-control telerobotic tooling execution by enabling difficult tasks and by limiting tool damage. The role of the tools and tasks as drivers of the telerobotic implementation was better understood through the need for thorough task decomposition and through the discovery and examination of the tool process signature. The contributions of this work include: (1) the exploration and evaluation of selected features of behavior-based robotics to create a new methodology for integrating telerobotic tool control with positional teleoperation in the execution of complex tool-centric remote tasks; (2) the simplification of task decomposition and the implementation of sensor-based tool control in a way that eliminates the need to create a task instance model for telerobotic task execution; and (3) the discovery, demonstrated use, and documentation of characteristic tool process signatures that have general value in the investigation of other tool control, tool maintenance, and tool development strategies beyond the benefit obtained for the methodology described in this work.

    A screw theory based approach to determining the identifiable parameters for calibration of parallel manipulators

    Get PDF
    Establishing complete, continuous and minimal error models is fundamentally important for the calibration of robotic manipulators. Motivated by the practical need for models suited to coarse-plus-fine calibration strategies, this paper presents a screw theory based approach for determining the identifiable geometric errors of parallel manipulators at the model level. The paper first addresses two specific issues: (1) developing a simple approach that enables all encoder offsets to be retained in the minimal error model of serial kinematic chains; and (2) exploiting a fully justifiable criterion that allows the detection of the unidentifiable structural errors of parallel manipulators. Merging these two threads leads to a new, more rigorous formula for calculating precisely the number of identifiable geometric errors of parallel manipulators, including both encoder offsets and identifiable structural errors. The analysis shows that the identifiability of structural errors in parallel manipulators depends heavily on the joint geometry and actuator arrangement of the limb involved. The procedure is used to determine the unidentifiable structural errors of two lower-mobility parallel mechanisms, illustrating the effectiveness of the proposed approach.
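    A hedged numeric counterpart of this model-level analysis is sketched below: stack an identification Jacobian (the sensitivity of the measured pose errors to the candidate geometric errors) over many configurations and inspect its singular values; directions in the null space correspond to unidentifiable or linearly dependent error parameters. The matrix here is synthetic and does not reproduce the paper's screw-theoretic formula.

```python
import numpy as np

# Synthetic identification Jacobian: 6 pose-error rows per configuration,
# 12 candidate geometric error parameters, one deliberately dependent column.
rng = np.random.default_rng(1)
n_pose, n_err = 40, 12
J_id = rng.normal(size=(6 * n_pose, n_err))
J_id[:, 11] = J_id[:, 3] - 2.0 * J_id[:, 7]     # error 11 is a combination of 3 and 7

U, s, Vt = np.linalg.svd(J_id, full_matrices=False)
tol = s[0] * 1e-9
print("identifiable error combinations:", int(np.sum(s > tol)))   # 11 of 12
print("null-space direction (mixes errors 3, 7 and 11):")
print(np.round(Vt[-1], 3))
```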

    Towards a Vision-Based Mobile Manipulator for Autonomous Chess Gameplay

    Get PDF
    With the rise of robotic arms in both industrial and research applications, there is a growing need for autonomous robotic arm applications. This thesis provides an example of this need and showcases the possibilities and limitations of vision-based solutions, specifically for automating chess. The focus is on developing a modular system that is able to autonomously recognize the chessboard and detect and manipulate chess pieces; the modular design allows further exploration into autonomous mobile manipulators. The key components include chessboard recognition using fiducial markers and image processing techniques such as segmentation, absolute difference matching, and perspective warping to analyze the scene and extract meaningful information. Mounting a camera above the chessboard enables the detection algorithm to accurately capture and analyze the most important information about the environment and determine the current state of the game; from this information, human moves are detected. A custom protocol is then used to communicate between the detection algorithm and the chess engine, encapsulating the game state changes within the system. The chess engine analyzes the game and provides legal moves for the robot manipulator to execute. Manipulation relies on careful motion planning and execution, ensuring the safety of the robot and its environment. Extensive evaluation shows that the system achieves high accuracy and success rates for piece manipulation and move detection.
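    As an illustrative sketch of two steps named in this abstract (perspective warping and absolute difference matching), the snippet below warps the board to a top-down view and flags the squares that changed between two frames, i.e. a candidate human move. It is an assumed pipeline built on standard OpenCV calls, not the thesis code; the board size and thresholds are placeholders.

```python
import cv2
import numpy as np

BOARD_PX = 400                                   # warped board is 400x400 px, 50 px per square

def warp_board(frame, corners_px):
    """corners_px: 4x2 corner pixels of the board (e.g. from fiducial markers)."""
    dst = np.float32([[0, 0], [BOARD_PX, 0], [BOARD_PX, BOARD_PX], [0, BOARD_PX]])
    H = cv2.getPerspectiveTransform(np.float32(corners_px), dst)
    return cv2.warpPerspective(frame, H, (BOARD_PX, BOARD_PX))

def changed_squares(before, after, thresh=25, min_frac=0.08):
    """Return board cells whose appearance changed between two warped frames."""
    diff = cv2.absdiff(cv2.cvtColor(before, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(after, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    sq = BOARD_PX // 8
    moves = []
    for r in range(8):
        for c in range(8):
            cell = mask[r * sq:(r + 1) * sq, c * sq:(c + 1) * sq]
            if cv2.countNonZero(cell) > min_frac * sq * sq:
                moves.append((r, c))             # changed square in board coordinates
    return moves
```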

    Autonomous vehicle guidance in unknown environments

    Get PDF
    Benefiting from significant advances in performance granted by technological evolution, autonomous vehicles are rapidly increasing the number of fields in which they can be applied effectively. From operations in hostile, dangerous environments (military use in removing unexploded ordnance, surveying nuclear power and chemical industrial plants after accidents) to repetitive 24-hour tasks (border surveillance), from force multipliers helping in production to less exotic commercial applications in household activities (cleaning robots as consumer electronics), the combination of autonomy and motion nowadays offers impressive options. An autonomous vehicle can be completed with a number of sensors, actuators and devices that make it able to perform a large variety of tasks. To attain these results, however, the vehicle must be capable of navigating its path in different, sometimes unknown environments. This is the goal of this dissertation: to analyze and, mainly, to propose a suitable solution for the guidance of autonomous vehicles. The frame in which this research takes its steps is the activity carried out at the Guidance and Navigation Lab of Sapienza – Università di Roma, hosted at the School of Aerospace Engineering. The proposed solution therefore has an intrinsic, although not limiting, bias towards possible space applications, which becomes obvious in some of the following content. A second bias dictated by the Lab's activities is the choice of a sample platform: it would be difficult to perform a meaningful study at a very general level, independent of the characteristics of the targeted kind of vehicle, and the rough list of applications above shows that these characteristics are extremely varied. Even before the beginning of this thesis activity, the Lab hosted a simple, home-designed and manufactured model of a small yet sufficiently capable autonomous vehicle called RAGNO (Rover for Autonomous Guidance Navigation and Observation); it was an obvious choice to select this rover as the reference platform for identifying guidance solutions and to use it, while contributing to its improvement, for the test activities that are mandatory in this kind of thesis work to validate the suggested approaches. The thesis comprises four main chapters, plus introduction, final remarks, future perspectives and the list of references.

    The first chapter ("Autonomous Guidance Exploiting Stereoscopic Vision") investigates in detail the technique deemed most interesting for small vehicles. The current availability of low-cost, high-performance cameras suggests stereoscopic vision as a quite effective technique, also capable of making available to a remote crew a view of the scenario similar to the one humans would have. Several advanced image analysis techniques have been investigated for extracting features from the left- and right-eye images, with the SURF and BRISK algorithms selected as the most promising ones. In short, SURF is a blob detector with an associated descriptor of 64 elements, where the generic feature is extracted by applying sequential box filters to the surrounding area; features are localized at the points of the image where the determinant of the Hessian matrix H(x,y) is maximum, and the descriptor vector is then determined by calculating the Haar wavelet response over a sampling pattern centered on the feature. BRISK is instead a corner detector with an associated binary descriptor of 512 bits: the generic feature is identified as the brightest point in a circular sampling area of N pixels, while the descriptor vector is calculated by computing the brightness gradient of each of the N(N-1)/2 pairs of sampling points. Once left and right features have been extracted, their descriptors are compared in order to determine the corresponding pairs. The matching criterion consists in seeking the two descriptors whose relative distance (Euclidean norm for SURF, Hamming distance for BRISK) is minimum. The matching process is computationally expensive; to reduce the required time, the thesis successfully exploited epipolar geometry, based on the geometric constraint existing between the left and right projections of a scene point P, which indeed limits the space to be searched. Overall, the selected techniques require between 200 and 300 ms on a 2.4 GHz CPU for feature extraction and matching in a single (left+right) capture, making them a feasible solution for slow-motion vehicles. Once the matching phase has been finalized, a disparity map can be prepared highlighting the positions of the identified objects, and by means of triangulation (the baseline between the two cameras is known, and the size of the targeted object is measured in pixels in both images) the position and distance of the obstacles can be obtained.

    The second chapter ("A Vehicle Prototype and its Guidance System") is devoted to the implementation of stereoscopic vision onboard a small test vehicle, the previously cited RAGNO rover. A description of the vehicle is included first: the chassis, the propulsion system with four electric motors powering the wheels, the good road performance attainable, and the commanding options, either fully autonomous, partly autonomous with remote monitoring, or fully remotely controlled via TCP/IP over mobile networks, with a focus on the different sensors that, depending on the scenario, can complement the stereoscopic vision system. The intelligence side of the guidance subsystem, exploiting the navigation information provided by the cameras, is then detailed. Two guidance techniques have been studied and implemented to identify the optimal trajectory in a field with scattered obstacles: artificial potential guidance, based on the Lyapunov approach, and the A-star algorithm, which seeks the minimum of a cost function built on graphs joining the cells of a mesh superimposed on the scenario. The performance of the two techniques is assessed for two specific test cases, and the possibility of unstable behavior of the artificial potential guidance, bouncing among local minima, is highlighted. Overall, A-star guidance is the suggested solution in terms of time, cost and reliability. Notice that, notwithstanding the noise affecting the sensor information, an estimation process based on Kalman filtering has also been included to improve the smoothness of the targeted trajectory.

    The third chapter ("Examples of Possible Missions and Applications") reports two experimental campaigns in which RAGNO is used for the detection of dangerous gases. In the first one, the rover accommodates a specific sensor and autonomously moves in open fields, avoiding possible obstacles, to take measurements at given time intervals. The same configuration of RAGNO is also used in the second campaign; this time, however, the path of the rover is autonomously computed on the basis of waypoints communicated by a drone flying above the measurement area and identifying possible targets of interest.

    The fourth chapter ("Guidance of Fleets of Autonomous Vehicles") builds on this successful idea of a fleet of vehicles and numerically investigates, with algorithms purposely written in Matlab, the performance of a simple swarm of two rovers exploring an unknown scenario, representing, as an example, a case of planetary surface exploration. The awareness of the surrounding environment is dictated by the characteristics of the sensors accommodated onboard, which have been assumed on the basis of the experience gained in the previous chapters. Moreover, the communication issues that would likely affect real-world cases are included in the scheme through the possibility of modelling the communication link, and by running the simulation in a multi-task configuration where the two rovers are assigned to two different computer processes, each with a different TCP/IP address and a behavior that actually depends on the flow of information received from the other explorer. Even if only at the simulation level, this final step is deemed to collect the different aspects investigated during the PhD period: feasible sensor characteristics (obviously focusing on stereoscopic vision), the guidance technique, coordination among autonomous agents and possible interesting application cases.
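    A rough sketch of the stereo pipeline summarised in the first chapter (BRISK features, Hamming-distance matching, disparity and triangulation) is given below. It uses standard OpenCV calls rather than the thesis implementation, a simple row check as a stand-in for the epipolar filtering, and placeholder values for the focal length and baseline.

```python
import cv2
import numpy as np

FOCAL_PX = 700.0        # focal length in pixels (assumed)
BASELINE_M = 0.12       # baseline between the two cameras [m] (assumed)

def stereo_depths(img_left, img_right):
    """Return depth estimates [m] for matched features in a rectified stereo pair."""
    brisk = cv2.BRISK_create()
    kpl, desl = brisk.detectAndCompute(img_left, None)
    kpr, desr = brisk.detectAndCompute(img_right, None)

    # Hamming distance suits the binary BRISK descriptors; cross-checking keeps
    # only mutually best matches, a cheap substitute for epipolar pruning.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desl, desr)

    depths = []
    for m in matches:
        (xl, yl), (xr, yr) = kpl[m.queryIdx].pt, kpr[m.trainIdx].pt
        if abs(yl - yr) > 2.0:                   # rectified images: rows should agree
            continue
        disparity = xl - xr
        if disparity > 1.0:
            depths.append(FOCAL_PX * BASELINE_M / disparity)   # Z = f * B / d
    return depths
```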

    The Use of Agricultural Robots in Orchard Management

    Full text link
    Book chapter summarizing recent research on agricultural robotics in orchard management, including robotic pruning, thinning, spraying, harvesting and fruit transportation, as well as future trends. Comment: 22 pages

    Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern

    Full text link
    Line scanning cameras, which capture only a single line of pixels, have been increasingly used on ground-based mobile or robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera's 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera's pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data, and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m / 1.05° and 0.18 m / 2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human readable way. Comment: Published in MDPI Sensors, 30 October 201
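    The pose estimation step described above amounts to maximising a reprojection likelihood over the six offset parameters. A simplified sketch of that idea (not the authors' code) is given below: it estimates a body-to-camera transform by minimising the squared reprojection error of known 3D calibration points under a plain pinhole model with placeholder intrinsics, whereas the paper additionally folds in the per-line vehicle poses from the navigation system and estimates the uncertainty with MCMC.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation as R

FX = FY = 800.0           # placeholder focal lengths [px]
CX, CY = 640.0, 480.0     # placeholder principal point [px]

def project(points_body, pose6):
    """pose6 = [rx, ry, rz, tx, ty, tz]: rotation vector + translation, body -> camera."""
    rot, t = R.from_rotvec(pose6[:3]), pose6[3:]
    pc = rot.apply(points_body) + t
    return np.column_stack((FX * pc[:, 0] / pc[:, 2] + CX,
                            FY * pc[:, 1] / pc[:, 2] + CY))

def reprojection_cost(pose6, points_body, pixels_observed):
    # Negative log-likelihood under isotropic Gaussian pixel noise, up to a constant.
    return np.sum((project(points_body, pose6) - pixels_observed) ** 2)

def estimate_extrinsics(points_body, pixels_observed, pose0=np.zeros(6)):
    res = minimize(reprojection_cost, pose0, args=(points_body, pixels_observed),
                   method="Nelder-Mead")
    return res.x          # estimated 6D offset between the camera and the body (navigation) frame
```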