    A novel approach to recognition of the detected moving objects in non-stationary background using heuristics and colour measurements : a thesis presented in partial fulfilment of the requirement for the degree of Master of Engineering at Massey University, Albany, New Zealand

    Computer vision is a growing area of research that involves two fundamental steps: object detection and object recognition. These two steps have been implemented in real-world scenarios such as video surveillance systems, traffic cameras for counting cars, and more specific tasks such as detecting faces and recognizing facial expressions. Humans have a vision system that provides sophisticated ways to detect and recognize objects: colour perception, depth of view and past experience help us determine the class of an object from its size, shape and the context of the environment. Detecting moving objects against a non-stationary background and recognizing the class of the detected objects are tasks that have been approached in many different ways. However, the accuracy and efficiency of current methods for object detection are still quite low, owing to high computation times and memory-intensive approaches. Similarly, object recognition has been approached in many ways but lacks a perceptive methodology for recognizing objects. This thesis presents an improved algorithm for the detection of moving objects against a non-stationary background. It also proposes a new method for object recognition. Detection of moving objects begins by extracting SURF features to identify unique keypoints in the first frame. These keypoints are then searched for individually in a subsequent frame using cross-correlation, a process known as optical flow. Outliers are rejected by using the keypoints to compute the global pixel shift caused by camera motion, which helps isolate the points that belong to the moving objects. These points are grouped into clusters using the proposed improved clustering algorithm, whose search radius around a feature point adapts to the average Euclidean distance between all the feature points. The detected object is then processed through colour measurement and heuristics. Heuristics provide context about the surroundings, so that the class of the object can be recognized from its size, shape and the environment it is in; this gives object recognition a perceptive approach. Results from the proposed method show successful detection of moving objects in various scenes with dynamic backgrounds, achieving an object detection efficiency of over 95% for both indoor and outdoor scenes. The average processing time was around 16.5 seconds, which includes the time taken to detect objects as well as recognize them. The heuristic- and colour-based object recognition methodology achieved an efficiency of over 97%.
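
    The adaptive clustering step lends itself to a short illustration. Below is a minimal sketch, not the thesis's actual code, of a clustering function whose search radius adapts to the average Euclidean distance between the feature points; the function name and the region-growing strategy are assumptions.

        import numpy as np

        def cluster_flow_points(points):
            """Hypothetical re-creation of the adaptive clustering step:
            group 2D feature points, with the search radius set by the
            average Euclidean distance between all points."""
            pts = np.asarray(points, dtype=float)
            n = len(pts)                       # assumes at least two points
            dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
            radius = dists.sum() / (n * (n - 1))   # mean over distinct pairs
            labels = -np.ones(n, dtype=int)
            current = 0
            for i in range(n):                 # simple region growing
                if labels[i] != -1:
                    continue
                labels[i] = current
                stack = [i]
                while stack:
                    j = stack.pop()
                    near = np.where((dists[j] < radius) & (labels == -1))[0]
                    labels[near] = current
                    stack.extend(near.tolist())
                current += 1
            return labels                      # cluster index per point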

    Visual perception for the 3D recognition of geometric pieces in robotic manipulation

    During grasping and intelligent robotic manipulation tasks, the camera position relative to the scene changes dramatically, because the camera is mounted on the robot's end effector and moves as the robot adapts its path to grasp objects correctly. For this reason, in this type of environment, a visual recognition system must be implemented that recognizes objects and obtains their positions in the scene "automatically and autonomously". Furthermore, in industrial environments, all objects manipulated by robots are made of the same material and cannot be differentiated by features such as texture or color. In this work, first, a study and analysis of 3D recognition descriptors has been carried out for application in these environments. Second, a visual recognition system built on a specific distributed client-server architecture has been proposed for the recognition of industrial objects that lack these appearance features. Our system has been implemented to overcome recognition problems when objects can only be recognized by geometric shape and the simplicity of the shapes could create ambiguity. Finally, some real tests are performed and illustrated to verify the satisfactory performance of the proposed system. The research leading to these results received funding from the Spanish Government and European FEDER funds (DPI2012-32390) and the Valencia Regional Government (PROMETEO/2013/085).
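
    The abstract does not say which descriptor performed best, and the work itself evaluates PCL descriptors; purely as an illustration of geometry-only description, the sketch below computes FPFH features with the Open3D Python library instead, since FPFH encodes local surface shape without relying on texture or colour. The file path and parameter values are assumptions.

        import open3d as o3d

        # Load and downsample a point cloud of a piece (path is illustrative).
        pcd = o3d.io.read_point_cloud("piece.pcd")
        pcd = pcd.voxel_down_sample(voxel_size=0.005)
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
        # FPFH describes local surface geometry only, so it remains usable
        # when all pieces share the same material and colour.
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=0.025, max_nn=100))
        print(fpfh.data.shape)   # (33, num_points): one histogram per point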

    Hand gesture recognition with jointly calibrated Leap Motion and depth sensor

    Novel 3D acquisition devices like depth cameras and the Leap Motion have recently reached the market. Depth cameras make it possible to obtain a complete 3D description of the framed scene, while the Leap Motion sensor is a device explicitly targeted at hand gesture recognition and provides only a limited set of relevant points. This paper shows how to jointly exploit the two types of sensors for accurate gesture recognition. An ad-hoc solution for the joint calibration of the two devices is presented first. Then a set of novel feature descriptors is introduced, both for the Leap Motion and for depth data. Various schemes based on the distances of the hand samples from the centroid, on the curvature of the hand contour and on the convex hull of the hand shape are employed, and the use of Leap Motion data to aid feature extraction is also considered. The proposed feature sets are fed to two different classifiers, one based on multi-class SVMs and one exploiting Random Forests. Different feature selection algorithms have also been tested in order to reduce the complexity of the approach. Experimental results show that very high accuracy can be obtained with the proposed method. The current implementation is also able to run in real time.
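
    As a sketch of the first descriptor family (distances of the hand samples from the centroid) and of the two classifiers, the snippet below uses scikit-learn; the feature function, bin count and synthetic training data are assumptions rather than the paper's implementation.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC

        def centroid_distance_features(hand_points, bins=20):
            """Normalized histogram of sample distances from the hand
            centroid (hypothetical simplification of one descriptor)."""
            pts = np.asarray(hand_points, dtype=float)
            d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
            hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))
            return hist / hist.sum()

        # One feature vector per gesture sample; random stand-in data.
        X = np.vstack([centroid_distance_features(np.random.rand(500, 3))
                       for _ in range(40)])
        y = np.repeat(np.arange(4), 10)          # four gesture classes
        svm = SVC(kernel="rbf").fit(X, y)        # multi-class via one-vs-one
        forest = RandomForestClassifier(n_estimators=100).fit(X, y)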

    Human activity recognition from object interaction in domestic scenarios

    This work describes the recognition of human activity based on the interaction between people and objects in domestic settings, specifically in a kitchen. Recognizing activity requires establishing both a procedure and the necessary equipment. The procedure, in simplified terms, is based on capturing images of the area where the activity takes place using a colour (RGB) camera, and processing these images to recognize the objects present and their locations. The interaction with the objects is classified into five possible action types (unchanged, add, remove, move and indeterminate), which are used to estimate the probability of the human activity being performed at that moment. As for the technological tools employed, the system works with Ubuntu as the general operating system, ROS (Robot Operating System) as the framework, OpenCV (Open Source Computer Vision) for the vision algorithms, and the Python programming language. The development starts with segmentation using the "difference image" method, which obtains the area that the objects occupy in the image. Object recognition is carried out by distinguishing objects according to their colour histograms. Positioning is obtained through each object's centroid, applying the corresponding homography to go from the image coordinate system to real-world coordinates. By comparing the historical and the new information about the objects, we determine the actions that have been performed. As a final stage, we filter the relevant objects on the basis of the actions carried out and compare them with the objects defined for each activity; the result is the probability that each activity is being executed.
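
    The pipeline described above maps naturally onto standard OpenCV calls. The sketch below (assuming OpenCV 4; the function names are hypothetical, not the project's code) shows the difference-image segmentation and the colour-histogram step.

        import cv2
        import numpy as np

        def detect_changed_regions(background, frame, thresh=30):
            """'Difference image' segmentation: contours of the regions
            where the current frame differs from the stored background."""
            diff = cv2.absdiff(background, frame)
            gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
            _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            return contours

        def colour_histogram(frame, contour):
            """Hue histogram of one segmented region, used to tell the
            known kitchen objects apart."""
            mask = np.zeros(frame.shape[:2], dtype=np.uint8)
            cv2.drawContours(mask, [contour], -1, 255, thickness=-1)
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0], mask, [32], [0, 180])
            return cv2.normalize(hist, hist).flatten()

        # Histograms would be matched with cv2.compareHist; each region's
        # centroid (cv2.moments) is then mapped to world coordinates with
        # the calibrated homography via cv2.perspectiveTransform.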

    Creating 3D object descriptors using a genetic algorithm

    In the technological world we live in, the need for computer vision has become almost as important as human vision. We are surrounded by all kinds of machines that need their own virtual eyes. The most advanced cars have software that can analyze traffic signs in order to warn the driver about events on the road. When we send a space rover to another planet, it is important that it can analyze the ground in order to avoid obstacles that would lead to its destruction. There is still much work to be done in the field of computer vision with a view to improving the performance and speed of recognition tasks. Many descriptors are available for 3D point cloud recognition, and some of them are explained in this thesis. The aim of this work is to design descriptors that can correctly match 3D point clouds. The idea is to use artificial intelligence, in the form of a genetic algorithm (GA), to obtain optimized parameters for the descriptors. For this purpose the PCL [RC11] is used, which deals with the manipulation of 3D point data. The created descriptors are explained and experiments are performed to illustrate their performance. The main conclusion is that there is still much work to be done in shape recognition. The descriptor developed in this thesis that uses only color information performs better than the descriptors that use only shape data. Although we have achieved descriptors with good performance in this thesis, there could be a way to improve them even more. Since the color-only descriptor is better than the shape-only descriptors, we can expect that there is a better way to represent the shape of an object. Humans recognize objects better by shape than by color, which makes us wonder whether there is a way to improve the techniques used for shape description.
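
    The abstract does not detail the GA itself, so the following is a minimal generic sketch of how a genetic algorithm could search descriptor parameters; the operators (truncation selection, one-point crossover, per-gene mutation) and the fitness callback, assumed to score a parameter vector on a benchmark of point-cloud matches, are illustrative choices.

        import random

        def evolve(fitness, bounds, pop_size=30, generations=50,
                   mutation_rate=0.1):
            """Search real-valued descriptor parameters (e.g. support
            radius, bin counts) for the highest matching score.
            Assumes at least two parameters in `bounds`."""
            def rand_ind():
                return [random.uniform(lo, hi) for lo, hi in bounds]
            pop = [rand_ind() for _ in range(pop_size)]
            for _ in range(generations):
                ranked = sorted(pop, key=fitness, reverse=True)
                parents = ranked[:pop_size // 2]        # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, len(bounds))  # one-point crossover
                    child = a[:cut] + b[cut:]
                    for i, (lo, hi) in enumerate(bounds):   # per-gene mutation
                        if random.random() < mutation_rate:
                            child[i] = random.uniform(lo, hi)
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)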

    Vision-Aided Navigation for GPS-Denied Environments Using Landmark Feature Identification

    In recent years, unmanned autonomous vehicles have been used in diverse applications because of their multifaceted capabilities. In most cases, the navigation systems for these vehicles depend on Global Positioning System (GPS) technology. Many applications of interest, however, entail operations in environments in which GPS is intermittent or completely denied. These applications include operations in complex urban or indoor environments as well as missions in adversarial environments where GPS might be denied through jamming. This thesis investigates the development of vision-aided navigation algorithms that utilize processed images from a monocular camera as an alternative to GPS. The vision-aided navigation approach explored here entails defining a set of inertial landmarks whose locations within the environment are known, and employing image processing algorithms to detect these landmarks in image frames collected from an onboard monocular camera. These vision-based landmark measurements effectively serve as surrogate GPS measurements that can be incorporated into a navigation filter. Several image processing algorithms were considered for landmark detection, and this thesis focuses in particular on two approaches: the continuous adaptive mean shift (CAMSHIFT) algorithm and the adaptable compressive (ADCOM) tracking algorithm. These algorithms are discussed in detail and applied to the detection and tracking of landmarks in monocular camera images. Navigation filters are then designed that fuse accelerometer and rate gyro data from an inertial measurement unit (IMU) with vision-based measurements of the centroids of one or more landmarks in the scene. These filters are tested in simulated navigation scenarios subject to varying levels of sensor and measurement noise and a varying number of landmarks. Finally, conclusions and recommendations are provided regarding the implementation of this vision-aided navigation approach for autonomous vehicle navigation systems.
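
    As an illustration of the CAMSHIFT landmark-tracking step, the sketch below uses OpenCV's built-in CamShift; the video file, initial window and histogram settings are assumptions, and the ADCOM tracker and the navigation filter are not shown.

        import cv2

        cap = cv2.VideoCapture("flight.mp4")     # illustrative input video
        ok, frame = cap.read()
        x, y, w, h = 300, 200, 60, 60            # assumed initial landmark box
        hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv_roi], [0], None, [16], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        track_window = (x, y, w, h)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            # CamShift adapts the window as the landmark's apparent size
            # changes; the box centre is the surrogate-GPS measurement
            # that would be passed to the navigation filter.
            box, track_window = cv2.CamShift(prob, track_window, term)
            cx, cy = box[0]                      # landmark centroid (pixels)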