5 research outputs found

    Measuring Digital System Latency from Sensing to Actuation at Continuous 1 Millisecond Resolution

    This thesis describes a new method for measuring the end-to-end latency between sensing and actuation in a digital computing system. Compared to previous work, which generally measures the latency at 16-33 ms intervals or at discrete events separated by hundreds of ms, our new method measures the latency continuously at 1 millisecond resolution. This allows for the observation of variations in latency over sub-1 s periods, instead of relying upon averages of measurements. We have applied our method to two systems, the first using a camera for sensing and an LCD monitor for actuation, and the second using an orientation sensor for sensing and a motor for actuation. Our results show two interesting findings. First, a cyclical variation in latency can be seen based upon the relative rates of the sensor and actuator clocks and buffer times; for the components we tested the variation was in the range of 15-50 Hz with a magnitude of 10-20 ms. Second, orientation sensor error can look like a variation in latency; for the sensor we tested the variation was in the range of 0.5-1.0 Hz with a magnitude of 20-100 ms. Both of these findings have implications for robotics and virtual reality systems. In particular, it is possible that the variation in apparent latency caused by orientation sensor error may have some relation to 'simulator sickness'.
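The clock-beat mechanism behind the cyclical latency variation can be illustrated with a toy simulation: a sensor sample becomes visible only at the next actuator refresh, so mismatched clock rates produce a slowly drifting latency. All rates and delays below are hypothetical placeholders, not the thesis's measured values.

```python
import numpy as np

# Hypothetical clock rates and processing delay (not the thesis's hardware).
SENSE_HZ, ACTUATE_HZ, PROC_MS = 60.0, 59.5, 5.0

def end_to_end_latency_ms(t_ms):
    """Latency from a sensor sample at time t_ms to the first actuator
    refresh that can present it, given a fixed processing delay."""
    ready = t_ms + PROC_MS
    refresh_period = 1000.0 / ACTUATE_HZ
    # First actuator refresh boundary at or after the data is ready.
    next_refresh = np.ceil(ready / refresh_period) * refresh_period
    return next_refresh - t_ms

# One sample per sensor frame over one second.
sample_times = np.arange(0, 1000, 1000.0 / SENSE_HZ)
lat = np.array([end_to_end_latency_ms(t) for t in sample_times])
print(lat.min(), lat.max())  # latency drifts cyclically at the beat frequency
```

Because the two clocks differ by 0.5 Hz here, the sample-to-refresh phase drifts through the refresh period, reproducing the kind of periodic latency variation the thesis reports.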

    Seam tracking and gap bridging during robotic laser beam welding via grayscale imaging and wobbling

    The use of laser beam welding with robotic manipulators is expanding towards wider industrial applications as system availability increases and capital costs fall. Conventionally, laser welding requires high positioning and coupling accuracy. Due to variability in part geometry and positioning, as well as thermal deformation that may occur during the process, joint position and fit-up are not always acceptable or predictable a priori if simple fixtures are used. This makes the passage from the virtual CAD/CAM environment to the real production site non-trivial, limiting applications that require short part preparation, such as small-batch production. Solutions that render laser welding feasible for production series with non-stringent tolerances are required to serve a wider range of industrial applications. Such solutions should be able to track the seam as well as tolerate variable gaps formed between the parts to be joined. In this work, an online correction for the robot trajectory, based on a greyscale coaxial vision system with external illumination and an adaptive wobbling strategy, is proposed as a means to increase the overall flexibility of a manufacturing plant. The underlying vision algorithm and control architectures are presented; the robustness of the system to poor illumination and variable reflection conditions is also discussed. The developed solution employs two control loops: the first changes the robot pose to follow varying trajectories; the second varies the amplitude of circular wobbling as a function of the gap formed in butt-joint welds. Demonstrator cases of increasing complexity on butt-joint welds in AISI 301 stainless steel were used to test the efficacy of the solution.
The system was successfully tested on 2 mm thick, planar stainless-steel sheets at a maximum welding speed of 25 mm/s and yielded maximum positioning and yaw-orientation errors of 0.325 mm and 4.5°, respectively. Continuous welds could be achieved with gaps of up to 1 mm and variable seam position with the developed control method. Acceptable weld quality could be maintained up to a 0.6 mm gap in the employed autogenous welding configuration.
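The two control loops described above can be sketched as a single control step: loop 1 corrects the robot pose toward the seam, loop 2 scales the wobble amplitude with the measured gap. The gains, limits, and measurement interface below are hypothetical illustrations, not the paper's tuned values.

```python
from dataclasses import dataclass

@dataclass
class SeamMeasurement:
    lateral_error_mm: float  # seam offset from the laser spot (from vision)
    gap_mm: float            # measured gap between the parts

# Hypothetical gains and limits (the paper's tuned values are not given here).
KP_TRACK = 0.8           # proportional gain on lateral pose correction
WOBBLE_MIN_MM = 0.2      # base circular wobble amplitude
WOBBLE_GAIN = 1.0        # extra amplitude per mm of measured gap
WOBBLE_MAX_MM = 1.2      # saturation to keep the wobble physically sensible

def control_step(m: SeamMeasurement):
    """One cycle of the two loops: a pose correction proportional to the
    seam tracking error, and a wobble amplitude that grows with the gap."""
    pose_correction = KP_TRACK * m.lateral_error_mm
    wobble = min(WOBBLE_MAX_MM, WOBBLE_MIN_MM + WOBBLE_GAIN * m.gap_mm)
    return pose_correction, wobble

corr, amp = control_step(SeamMeasurement(lateral_error_mm=0.1, gap_mm=0.6))
```

Saturating the wobble amplitude mirrors the abstract's observation that gap bridging only works up to a limit (1 mm for continuity, 0.6 mm for acceptable quality).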

    The Effect of Varying Latency in a Head-Mounted Display on Task Performance and Motion Sickness

    The purpose of this study was to determine how latency in a head-mounted display affects human performance. Virtual environments are used frequently for training; however, simulator sickness is a common problem and may affect transfer of training. The aspects of virtual environments that cause simulator sickness are not fully understood, but varying latency has been shown to increase simulator sickness symptoms. The impact of varying latency on task performance, and the interaction between performance and simulator sickness symptoms, had not been examined. Twenty-nine subjects (15 male) participated in a repeated-measures study in which they were exposed to two different latency conditions in a Head-Mounted Display (HMD): constant (70 ms) and varying (70 ms – 270 ms). Experimental sessions were separated by 14 days to minimize the effects of adaptation. While wearing the HMD, subjects used a laser pointer to repeatedly shoot at 8 laser targets, arrayed in a 180-degree arc around the lab, over the course of 200 trials per session, presented in 5 blocks of 40 trials. Sickness levels, accuracy, and time-to-hit data were recorded for analysis. Subjects scored fewer hits, F(1,54) = 35.20, p < .01, η²p = .40, and took longer to hit targets, F(4,51) = 13.50, p < .01, η²p = .51, in the varying latency condition than in the constant latency condition. These findings indicate that individuals exposed to varying latency performed worse than individuals exposed to constant latency. However, it is unclear whether the performance effects are due mostly to the latency itself or to another underlying causal influence such as simulator sickness.
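As a sanity check, the reported partial eta squared values follow directly from the F statistics and degrees of freedom via η²p = F·df_effect / (F·df_effect + df_error); the sketch below simply re-derives them from the abstract's numbers.

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Partial eta squared recovered from an F statistic:
    eta^2_p = F * df_effect / (F * df_effect + df_error)."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# Values reported in the abstract
print(round(partial_eta_squared(35.20, 1, 54), 2))  # close to the reported .40
print(round(partial_eta_squared(13.50, 4, 51), 2))  # close to the reported .51
```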

    High Resolution Vision-Based Servomechanism Using a Dynamic Target with Application to CNC Machines

    This dissertation introduces a novel three-dimensional vision-based servomechanism with application to real-time position control for manufacturing equipment, such as Computer Numerical Control (CNC) machine tools. The proposed system directly observes the multi-dimensional position of a point on the moving tool relative to a fixed ground, thus bypassing the inaccurate kinematic model normally used to convert axis sensor readings into an estimate of the tool position. A charge-coupled device (CCD) camera is used as the position transducer, which directly measures the current position error of the tool referenced to an absolute coordinate system. Due to the direct-sensing nature of the transducer, no geometric error compensation is required. Two new signal processing algorithms, based on a recursive Newton-Raphson optimization routine, are developed to process the input data collected through digital imaging. The algorithms allow simultaneous high-precision position and orientation estimation from single readings. The desired displacement command of the tool in a planar environment is emulated, at one end of the kinematic chain, by an active element or active target pattern on a liquid-crystal display (LCD). At the other end of the kinematic chain, the digital camera observes the active target and provides visual feedback used for position control of the tool. Implementation is carried out on an XYθZ stage, which is positioned with high resolution. The introduction of the camera into the control loop yields a visual servo architecture, whose dynamic problems and stability are analyzed in depth for the case study of the single-camera, single-image-processing-thread configuration. Finally, two new command generation protocols are explained for full implementation of the proposed structure in real-time control applications.
Command issuing resolutions do not depend upon the size of the smallest element of the grid/display being imaged, but can instead be determined in accordance with the sensor's resolution.
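The kind of iterative pose estimation the abstract describes can be illustrated with a toy Gauss-Newton fit of a planar pose (dx, dy, θ) from matched point pairs. The dissertation's recursive Newton-Raphson estimators operate on image data, so this is only a structural sketch under assumed inputs.

```python
import numpy as np

def newton_fit_pose(model, observed, iters=10):
    """Gauss-Newton estimate of (dx, dy, theta) aligning 2-D model points
    to observed points: linearize the prediction around the current pose
    and solve the normal equations each iteration."""
    x = np.zeros(3)  # dx, dy, theta
    for _ in range(iters):
        dx, dy, th = x
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        pred = model @ R.T + np.array([dx, dy])
        r = (observed - pred).ravel()          # stacked residuals (x0,y0,x1,...)
        # Jacobian of each predicted point w.r.t. (dx, dy, theta)
        dR = np.array([[-s, -c], [c, -s]])     # dR/dtheta
        J = np.zeros((2 * len(model), 3))
        J[0::2, 0] = 1.0                       # d pred_x / d dx
        J[1::2, 1] = 1.0                       # d pred_y / d dy
        J[:, 2] = (model @ dR.T).ravel()       # d pred / d theta
        x += np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Example: recover a known pose from noise-free correspondences.
model = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
th = 0.3
Rt = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
observed = model @ Rt.T + np.array([0.5, -0.2])
est = newton_fit_pose(model, observed)
```

With clean data the iteration converges to the true pose; with image-derived measurements the same structure applies, only the residual and Jacobian change.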

    A Timing Model for Vision-Based Control of Industrial Robot Manipulators

    No full text
    Visual sensing for robotics has been around for decades, but our understanding of a timing model remains crude. By timing model, we refer to the delays (processing lag and motion lag) between “reality” (when a part is sensed), through data processing (the processing of image data to determine part position and orientation), through control (the computation and initiation of robot motion), through “arrival” (when the robot reaches “reality”). In this work we introduce a timing model where sensing and control operate asynchronously. We apply this model to a robotic workcell consisting of a Stäubli RX-130 industrial robot manipulator, a network of six cameras for sensing, and an off-the-shelf Adept MV-19 controller. We present experiments to show how the model can be applied.
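Under the asynchronous-operation assumption, best- and worst-case reality-to-arrival delays can be bounded by summing the stage delays: in the worst case a part appears just after a frame is grabbed, and the processed result arrives just after a control cycle starts. The numbers below are hypothetical placeholders, not the paper's measurements.

```python
# Hypothetical timing parameters (not the paper's measured values).
SENSE_PERIOD_MS = 33.3   # camera frame period
PROC_LAG_MS = 45.0       # image processing lag
CTRL_PERIOD_MS = 16.0    # controller cycle time
MOTION_LAG_MS = 120.0    # robot motion lag until "arrival"

def worst_case_latency_ms():
    """Upper bound on reality-to-arrival delay: the event just misses a
    frame grab, and the result just misses a control cycle."""
    return SENSE_PERIOD_MS + PROC_LAG_MS + CTRL_PERIOD_MS + MOTION_LAG_MS

def best_case_latency_ms():
    """Lower bound: the event is sensed immediately and the processed
    result is consumed immediately, leaving only the fixed lags."""
    return PROC_LAG_MS + MOTION_LAG_MS

print(best_case_latency_ms(), worst_case_latency_ms())
```

The gap between the two bounds is exactly the sensing and control periods, which is why asynchronous workcells show latency jitter even when each stage's lag is constant.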