20 research outputs found

    Providing Real-Time Captured Information Input to a Machine-Learned Model to Improve Query Services

    This publication describes techniques and methods that a computing device uses to provide improved query services (e.g., autofill suggestions, speech biasing for automatic speech recognition) to applications on the computing device. To this end, an information collector on the computing device collects application activity information, information displayed on a display, and event information. This collected information can be provided as input to a machine-learned model implemented on the computing device. Responsive to the input received, the machine-learned model can classify the collected information to determine relevant attributes (e.g., keywords, searched locations, names) and make suggestions for utilization by query services provided by the computing device. Through these techniques and methods, user privacy is maintained, less power is consumed by the computing device, and the resources of the computing device (e.g., memory) are conserved.
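
    A minimal sketch of the pipeline described above, assuming a trivial keyword scorer in place of the actual machine-learned model: an information collector gathers recent on-device signals, the model classifies them into relevant attributes, and the top-scoring attributes are handed to query services. All names (CollectedInfo, OnDeviceClassifier, suggestions) are illustrative assumptions; the publication does not specify an API or model architecture.

        # Illustrative on-device pipeline; the "model" here is a stand-in keyword scorer.
        from dataclasses import dataclass

        @dataclass
        class CollectedInfo:
            app_activity: list      # e.g. recently used application screens
            displayed_text: list    # text currently shown on the display
            events: list            # e.g. "searched: coffee near me"

        class OnDeviceClassifier:
            """Stand-in for the machine-learned model described in the publication."""
            def classify(self, info):
                scores = {}
                # Weight event information highest, then displayed text, then activity.
                for source, weight in ((info.events, 3.0), (info.displayed_text, 2.0), (info.app_activity, 1.0)):
                    for text in source:
                        for token in text.lower().split():
                            scores[token] = scores.get(token, 0.0) + weight
                return scores

        def suggestions(info, k=5):
            # Everything runs on-device, consistent with the privacy claim above.
            scores = OnDeviceClassifier().classify(info)
            return sorted(scores, key=scores.get, reverse=True)[:k]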

    Industrial robot trajectory optimization - a review

    This paper presents a literature review on trajectory optimization for serial industrial robots, carried out as part of broader research on trajectory generation mechanisms for serial industrial robots. After a short presentation of the importance of industrial robots in the current context and of future challenges, the main optimal trajectory planning criteria addressed in specialized scientific papers, especially in recent years, are presented.
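
    One planning criterion that recurs in this literature is smoothness, often realized as a quintic (minimum-jerk-style) point-to-point joint profile with zero boundary velocity and acceleration. The sketch below is a generic textbook construction for illustration only, not a method taken from the reviewed papers.

        # Quintic point-to-point joint profile: zero velocity and acceleration at both ends.
        import numpy as np

        def quintic_profile(q0, qf, T, n=100):
            """Sample a smooth joint trajectory from q0 to qf over T seconds."""
            t = np.linspace(0.0, T, n)
            s = t / T
            blend = 10 * s**3 - 15 * s**4 + 6 * s**5   # standard minimum-jerk blend
            return q0 + (qf - q0) * blend

        positions = quintic_profile(0.0, np.pi / 2, T=2.0)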

    Robots in machining

    Robotic machining centers offer diverse advantages: a large operational reach with large reorientation capability and low cost, to name a few. However, many challenges have slowed the adoption of robots for machining tasks, and have sometimes prevented it altogether. This paper surveys the current usage and status of robots in machining, as well as the modelling and identification needed to enable optimization, process planning and process control. Recent research addressing deburring, milling, incremental forming, polishing and thin-wall machining is presented. We discuss various processes in which robots must handle significant process forces while fulfilling their machining task.
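
    To make the modelling aspect concrete, the sketch below shows one common approach in robotic machining: predicting tool-tip deflection under process forces with a Cartesian stiffness model and pre-compensating the commanded pose. The stiffness and force values are made-up placeholders for illustration, not identified robot data or this paper's own model.

        # delta = K^-1 * f : predicted tool-tip displacement under a cutting force.
        import numpy as np

        K = np.diag([2.0e6, 1.5e6, 1.0e6])           # N/m, assumed Cartesian stiffness at the tool
        f_process = np.array([150.0, -80.0, 300.0])  # N, assumed cutting force at the tool tip

        deflection = np.linalg.solve(K, f_process)   # metres
        commanded_offset = -deflection               # offset the nominal path to compensate
        print(deflection * 1e3, "mm deflection;", commanded_offset * 1e3, "mm compensation")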

    Hardware accelerated ambient occlusion techniques on GPUs

    Figure 1: These images illustrate our ambient occlusion approximation running in real time on a modern GPU. Our method can be used for a number of applications, including (a) rendering high-detail models (this model has around 65K triangles); (b) dynamic/deforming models that undergo non-rigid deformations; (c) enhancing natural objects such as trees, and games in which models such as cars are used; (d) realistic rendering of molecular data.

    We introduce a visually pleasing ambient occlusion approximation running on real-time graphics hardware. Our method is a multi-pass algorithm that separates the ambient occlusion problem into high-frequency, detailed ambient occlusion and low-frequency, distant ambient occlusion domains, both capable of running independently and in parallel. The high-frequency, detailed approach uses an image-space method to approximate the ambient occlusion due to nearby occluders caused by high surface detail. The low-frequency approach uses the intrinsic properties of a modern GPU to greatly reduce the search area for large and distant occluders with the help of a low-detail approximated version of the occluder geometry. Our method utilizes highly parallel stream processors (GPUs) to compute visually pleasing ambient occlusion in real time. We show that our ambient occlusion approximation works in a wide variety of applications, such as molecular data visualization, dynamic deformable animated models, and highly detailed geometry. Our algorithm demonstrates scalability and is well suited to current and upcoming graphics hardware.
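
    A rough CPU analogue of the high-frequency, image-space pass described above: for each pixel, nearby depth samples that lie in front of the centre pixel contribute occlusion, attenuated with distance. Real implementations run this per fragment on the GPU; the kernel size, falloff and random depth map below are illustrative assumptions, not the paper's exact formulation.

        # NumPy sketch of a screen-space ambient occlusion pass over a depth buffer.
        import numpy as np

        def screen_space_ao(depth, radius=4, strength=0.15):
            occlusion = np.zeros_like(depth)
            offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                                for dx in range(-radius, radius + 1) if (dy, dx) != (0, 0)]
            for dy, dx in offsets:
                shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
                closer = np.clip(depth - shifted, 0.0, None)      # neighbour in front of centre
                occlusion += closer / (1.0 + dy * dy + dx * dx)   # distance falloff
            return 1.0 - np.clip(strength * occlusion, 0.0, 1.0)  # 1 = unoccluded, 0 = fully occluded

        ao_map = screen_space_ao(np.random.rand(64, 64).astype(np.float32))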

    GA‐based camera calibration for vision‐assisted robotic assembly system

    Vision sensors are employed in robotic assembly systems to sense the dynamic environment and to position the manipulator precisely based on sensor feedback; this process is termed visual servoing. Precise calibration of the camera and the camera/robot system is required to estimate the desired velocity of the robot and to position the mating parts accurately. In position-based visual servoing, a roughly calibrated camera leads to errors in robot/camera pose identification that affect the positional accuracy and the time to reach the target position. A camera calibration procedure based on a genetic algorithm (GA) is proposed in this study to estimate the intrinsic and extrinsic parameters of the camera model, improving positional accuracy and convergence speed. The proposed algorithm is implemented as a two-stage procedure: determination of the camera parameters for a distortion-less model, followed by reduction of the re-projection error through a GA that uses the linearly determined distortion-less camera parameters as an initial solution. The proposed camera calibration algorithm has been tested on dataset images from the literature and compared for measurement accuracy. The results show that the proposed algorithm can calibrate distorted images with minimal re-projection error using a single image.
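
    A minimal sketch of the two-stage idea, assuming synthetic data: start from linearly estimated, distortion-free intrinsics, then let a small genetic algorithm refine the intrinsics and radial distortion terms by minimizing re-projection error. The data, parameter ranges and GA settings below are assumptions for illustration, not the paper's setup.

        # Stage 1 is assumed done elsewhere (linear, distortion-free estimate);
        # Stage 2 refines the parameter vector [fx, fy, cx, cy, k1, k2] with a GA.
        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic planar calibration points and a "true" camera, used only to generate demo data.
        world = np.column_stack([rng.uniform(-1, 1, 60), rng.uniform(-1, 1, 60), np.zeros(60)])
        true_params = (800.0, 800.0, 320.0, 240.0, -0.20, 0.05)

        def project(points, fx, fy, cx, cy, k1, k2, z_cam=4.0):
            x = points[:, 0] / (points[:, 2] + z_cam)
            y = points[:, 1] / (points[:, 2] + z_cam)
            r2 = x * x + y * y
            d = 1.0 + k1 * r2 + k2 * r2 * r2                 # radial distortion factor
            return np.column_stack([fx * x * d + cx, fy * y * d + cy])

        observed = project(world, *true_params)

        def reprojection_error(params):
            return np.mean(np.linalg.norm(project(world, *params) - observed, axis=1))

        # Linearly estimated distortion-free parameters serve as the initial solution.
        linear_init = np.array([790.0, 812.0, 318.0, 243.0, 0.0, 0.0])

        pop = linear_init + rng.normal(0, [10, 10, 5, 5, 0.1, 0.05], size=(40, 6))
        pop[0] = linear_init                                 # keep the initial solution in the population
        for _ in range(200):
            fitness = np.array([reprojection_error(p) for p in pop])
            parents = pop[np.argsort(fitness)[:10]]          # selection: keep the 10 fittest
            children = []
            while len(children) < len(pop) - len(parents):
                a, b = parents[rng.integers(10, size=2)]
                child = np.where(rng.random(6) < 0.5, a, b)                    # uniform crossover
                child = child + rng.normal(0, [1, 1, 0.5, 0.5, 0.01, 0.005])   # mutation
                children.append(child)
            pop = np.vstack([parents, children])

        best = min(pop, key=reprojection_error)
        print("refined parameters:", best, "error:", reprojection_error(best))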