
    A graphical environment and applications for discrete event and hybrid systems in robotics and automation

    technical report
    In this paper we present an overview of the development of a graphical environment for simulating, analyzing, synthesizing, monitoring, and controlling complex discrete event and hybrid systems within the robotics, automation, and intelligent systems domain. We start by presenting an overview of discrete event and hybrid systems, and then discuss the proposed framework. We also present two applications of such complex systems within the robotics and automation domain: the first formulates an observer for manipulating agents, and the second designs sensing strategies for the inspection of machine parts.
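    As a rough illustration of the observer construction mentioned above, here is a minimal sketch of a discrete event system observer: given a finite-state model whose events are only partially observable, the observer tracks the set of states the system could currently be in. The state and event names are hypothetical and not taken from the report.

```python
# Minimal discrete event observer sketch: maintain the set of possible
# plant states, closing each estimate under unobservable transitions.
# All state/event names are hypothetical placeholders.

# Transition relation: (state, event) -> next state
transitions = {
    ("idle", "grasp"): "holding",     # observable event
    ("holding", "slip"): "idle",      # unobservable internal event
    ("holding", "place"): "idle",     # observable event
}
unobservable = {"slip"}

def unobservable_reach(states):
    """Close a state estimate under unobservable transitions."""
    reach = set(states)
    changed = True
    while changed:
        changed = False
        for (s, e), t in transitions.items():
            if s in reach and e in unobservable and t not in reach:
                reach.add(t)
                changed = True
    return reach

def observer_step(estimate, observed_event):
    """Update the state estimate after one observed event."""
    moved = {t for (s, e), t in transitions.items()
             if s in estimate and e == observed_event}
    return unobservable_reach(moved)

# Example: start in 'idle', then observe a 'grasp' event.
estimate = unobservable_reach({"idle"})
estimate = observer_step(estimate, "grasp")
print(estimate)  # {'holding', 'idle'} -- a 'slip' may have occurred unobserved
```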

    Computer Vision Algorithms For An Automated Harvester

    Image classification and segmentation are the two main components of the 3D vision system of a harvesting robot. For classification, the vision system aids in the real-time identification of contaminated areas of the farm based on the damage identified using the robot's camera. To solve the identification problem, a fast and non-destructive method, the Support Vector Machine (SVM), is applied to improve the recognition accuracy and efficiency of the robot. First, a median filter is applied to remove the inherent noise in the colored image. SIFT features of the image are then extracted and computed to form a vector, which is quantized into visual words. Finally, a histogram of the frequency of each element in the visual vocabulary is created and fed into an SVM classifier, which categorizes the mushrooms as either class one or class two. Our preliminary results for image classification were promising, and the experiments carried out on the data set show fast computation time and a high rate of accuracy, reaching over 90% with this method, which can be employed in real-life scenarios.

    Regarding image segmentation, the vision system aids in the real-time identification of mushrooms, but a stiff challenge is encountered in robot vision because irregularly spaced mushrooms of uneven sizes often occlude each other due to the nature of mushroom growth in the growing environment. We address the issue of mushroom segmentation with a multi-step process: the images are first segmented in HSV color space to locate the area of interest, and then both the image gradient information from the area of interest and Hough transform methods are used to locate the center position and perimeter of each individual mushroom in the XY plane. Afterwards, the depth map provided by the Microsoft Kinect is used to estimate the Z-depth of each individual mushroom, which is then used to measure the distance between the robot end effector and the center coordinate of each mushroom. We tested this algorithm under various environmental conditions, and our segmentation results indicate that this method provides sufficient computational speed and accuracy.
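    A minimal sketch of the classification pipeline described above (median filtering, SIFT descriptors, visual-word quantization, SVM), assuming OpenCV with SIFT support and scikit-learn are available. The vocabulary size, kernel choice, and file handling are placeholders rather than the paper's settings.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

VOCAB_SIZE = 100  # assumed visual vocabulary size, not from the paper

sift = cv2.SIFT_create()

def sift_descriptors(path):
    """Median-filter the image, then extract SIFT descriptors."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 5)           # remove inherent sensor noise
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bovw_histogram(desc, kmeans):
    """Quantize descriptors into visual words and build a normalized histogram."""
    hist = np.zeros(VOCAB_SIZE, np.float32)
    if len(desc):
        for w in kmeans.predict(desc):
            hist[w] += 1
        hist /= hist.sum()
    return hist

def train(paths, labels):
    """Build the visual vocabulary by k-means, then train a two-class SVM."""
    all_desc = [sift_descriptors(p) for p in paths]
    kmeans = KMeans(n_clusters=VOCAB_SIZE).fit(np.vstack(all_desc))
    X = np.array([bovw_histogram(d, kmeans) for d in all_desc])
    clf = SVC(kernel="rbf").fit(X, labels)
    return kmeans, clf

def classify(path, kmeans, clf):
    """Return the predicted class (e.g. class one vs class two) for one image."""
    return clf.predict([bovw_histogram(sift_descriptors(path), kmeans)])[0]
```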

    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque, and tactile) which identify the most widespread robotic control strategies: visual servoing control, force control, and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques, and applications developed by Spanish researchers to implement these mono-sensor and multi-sensor controllers that combine several sensors.
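    To make the visual servoing strategy named above concrete, the following is a minimal image-based visual servoing sketch that computes a 6-DOF camera velocity command from point-feature errors via the interaction matrix pseudo-inverse. The gain, feature coordinates, and depths are placeholder values, not taken from any of the surveyed systems.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized image point."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classic IBVS law v = -lambda * L^+ * (s - s*), returning a camera twist."""
    error = (features - desired).reshape(-1)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ error

# Hypothetical example: four tracked points, current vs desired positions.
s = np.array([[0.1, 0.1], [-0.1, 0.1], [-0.1, -0.1], [0.1, -0.1]])
s_star = s * 0.8          # desired feature configuration (placeholder)
Z = [1.0, 1.0, 1.0, 1.0]  # estimated point depths in metres
print(ibvs_velocity(s, s_star, Z))  # [vx, vy, vz, wx, wy, wz]
```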

    Visual control of multi-rotor UAVs

    Recent miniaturization of computer hardware and MEMS sensors, together with high energy density batteries, has enabled highly capable mobile robots to become available at low cost. This has driven the rapid expansion of interest in multi-rotor unmanned aerial vehicles. Another area which has expanded simultaneously is small powerful computers, in the form of smartphones, which nearly always have a camera attached and many of which now contain an OpenCL-compatible graphics processing unit. By combining the results of these two developments, a low-cost multi-rotor UAV can be produced with a low-power onboard computer capable of real-time computer vision. The system should also use general purpose computer vision software to facilitate a variety of experiments. To demonstrate this I have built a quadrotor UAV based on control hardware from the Pixhawk project, and paired it with an ARM-based single board computer similar to those in high-end smartphones. The quadrotor weighs 980 g and has a flight time of 10 minutes. The onboard computer is capable of running a pose estimation algorithm above the 10 Hz requirement for stable visual control of a quadrotor.

    A feature tracking algorithm was developed for efficient pose estimation, which relaxed the requirement for outlier rejection during matching. Compared with a RANSAC-only algorithm the pose estimates were less variable, with a Z-axis standard deviation of 0.2 cm compared with 2.4 cm for RANSAC. Processing time per frame was also faster with tracking, with 95% confidence that tracking would process the frame within 50 ms, while for RANSAC the 95% confidence time was 73 ms. The onboard computer ran the algorithm with a total system load of less than 25%. All computer vision software uses the OpenCV library for common computer vision algorithms, fulfilling the requirement for running general purpose software.

    The tracking algorithm was used to demonstrate the capability of the system by performing visual servoing of the quadrotor (after manual takeoff). Response to external perturbations was poor, however, requiring manual intervention to avoid crashing. This was due to poor visual controller tuning and to variations in image acquisition and attitude estimate timing caused by using free-running image acquisition. The system and the tracking algorithm serve as proof of concept that visual control of a quadrotor is possible using small low-power computers and general purpose computer vision software.
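    The following is a minimal sketch of frame-to-frame feature tracking feeding a relative pose estimate, in the spirit of the tracking-based pose estimation described above. It uses generic OpenCV calls (Shi-Tomasi corner detection, pyramidal Lucas-Kanade tracking, essential matrix pose recovery); the camera matrix and parameter values are assumptions, and this is not the thesis implementation, which reduces reliance on RANSAC further than this sketch does.

```python
import cv2
import numpy as np

# Hypothetical pinhole camera matrix; real values come from calibration.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def detect_features(gray):
    """Seed the tracker with Shi-Tomasi corner features."""
    return cv2.goodFeaturesToTrack(gray, maxCorners=300,
                                   qualityLevel=0.01, minDistance=7)

def track_and_estimate_pose(prev_gray, curr_gray, prev_pts):
    """Track features between frames with pyramidal LK, then recover relative pose."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    good = status.reshape(-1) == 1          # keep only successfully tracked points
    p0, p1 = prev_pts[good], curr_pts[good]

    # Tracking already constrains correspondences, so outlier rejection can be
    # lighter than full RANSAC matching; RANSAC is still used here inside
    # findEssentialMat to reject the remaining mistracked points.
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t, p1
```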

    Perceptual Segmentation of Visual Streams by Tracking of Objects and Parts
