
    Spatiotemporal Saliency Detection: State of Art

    Saliency detection has become a prominent research topic in recent years, and many techniques have been proposed for it. This paper surveys saliency detection techniques published between 2000 and 2015, covering nearly every major method. Each method is explained briefly, together with its advantages and disadvantages, and the techniques are compared in a table listing author, paper title, year, technique, algorithm and open challenges. Acceptance rates and accuracy levels are also compared.
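    One representative method from the period such surveys cover is the spectral residual approach (Hou & Zhang, CVPR 2007). A minimal sketch, written from the published description rather than any implementation discussed in this survey: the log-amplitude spectrum minus its local average highlights "unexpected" frequency content, which maps back to salient image regions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def spectral_residual_saliency(img):
        """Spectral residual saliency: subtract a locally averaged
        log-amplitude spectrum from the original, keep the phase, and
        invert; energy in the result marks salient locations."""
        F = np.fft.fft2(img)
        log_amp = np.log(np.abs(F) + 1e-9)
        phase = np.angle(F)
        # The "expected" spectrum is a 3x3 local average of log-amplitude.
        residual = log_amp - uniform_filter(log_amp, size=3)
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return sal / sal.max()
    ```

    On a flat image with a single bright pixel, the saliency map peaks at that pixel, which is the intuition the method formalises. (The original paper additionally smooths the final map with a Gaussian, omitted here for brevity.)
    
    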

    Physical Interaction of Autonomous Robots in Complex Environments

    Recent breakthroughs in computer vision and robotics are firmly changing people's perception of robots. The idea of robots that substitute for humans is now turning into robots that collaborate with them. Service robotics regards robots as personal assistants and safely places them in domestic environments to ease humans' daily life. Industrial robotics is now reconsidering its basic idea of the robot as a worker. Currently, the primary method of guaranteeing personnel safety in industrial environments is the installation of physical barriers around the robots' working area. The development of new technologies and algorithms in the sensing and robotics fields has led to a new generation of lightweight, collaborative robots. Industrial robotics has therefore leveraged the intrinsic properties of such robots to create a robot co-worker that can safely coexist, collaborate and interact with both people and objects inside its workspace. This Ph.D. dissertation focuses on a pipeline for fast object pose estimation and distance computation of moving objects, in both structured and unstructured environments, using RGB-D images. The pipeline outputs command actions that let the robot complete its main task while simultaneously fulfilling a safe human-robot coexistence behaviour. The proposed pipeline is divided into an object segmentation module, a 6-DoF object pose estimation module and a real-time collision avoidance module for safe human-robot coexistence. First, the segmentation module finds candidate object clusters in RGB-D images of cluttered scenes using a graph-based image segmentation technique that produces a cluster of pixels for each object found in the image. The candidate object clusters are then fed to the 6-DoF object pose estimation module, which estimates both the translation and the orientation in 3D space of each candidate cluster. The object pose is then used by the robotic arm to compute a suitable grasping policy. The last module generates a force vector field of the environment surrounding the robot, the objects and the humans; this field drives the robot toward its goal while any potential collision with objects and/or humans is safely avoided. This work was carried out at Politecnico di Torino, in collaboration with Telecom Italia S.p.A.
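    A force vector field of the kind the last module describes is commonly built as an artificial potential field: attraction toward the goal plus repulsion from nearby obstacles. The following is a minimal sketch of that classic construction, not the dissertation's actual field; the gains and influence radius are illustrative values.

    ```python
    import numpy as np

    # Hypothetical parameters (the abstract does not give the real ones).
    K_ATT = 1.0   # attractive gain toward the goal
    K_REP = 0.5   # repulsive gain away from obstacles
    RHO_0 = 1.5   # obstacle influence radius: farther obstacles exert no force

    def force_field(robot, goal, obstacles):
        """Artificial-potential-field force at the robot's position:
        linear attraction to the goal plus repulsion from every obstacle
        closer than RHO_0 (Khatib-style formulation)."""
        robot = np.asarray(robot, float)
        goal = np.asarray(goal, float)
        f = K_ATT * (goal - robot)                 # attractive component
        for obs in obstacles:
            diff = robot - np.asarray(obs, float)  # points away from obstacle
            rho = np.linalg.norm(diff)
            if 0.0 < rho < RHO_0:
                # Repulsion grows steeply as the obstacle gets closer.
                f += K_REP * (1.0 / rho - 1.0 / RHO_0) / rho**2 * (diff / rho)
        return f
    ```

    At each control step the robot is steered along the resulting force direction, so it advances toward the goal while being deflected around anything that enters the influence radius.
    
    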

    Infrared based monocular relative navigation for active debris removal

    In space, vision-based relative navigation systems suffer from the harsh illumination conditions of the target (e.g. eclipse conditions, solar glare). In current Rendezvous and Docking (RvD) missions, most of these issues are addressed by advanced mission planning (e.g. strict manoeuvre timings). However, such planning is not always feasible for Active Debris Removal (ADR) missions, which involve more unknowns. Thermal infrared technology, by contrast, can operate under any lighting conditions and therefore has the potential to be exploited in the ADR scenario. In this context, this study investigates the benefits and challenges of infrared-based relative navigation. The infrared environment of ADR differs greatly from that of terrestrial applications. This study proposes a computationally cost-effective methodology for modelling this environment, creating a simulation in which the navigation solution can be tested. Through an intelligent classification of possible target surface coatings, the study is generalised to simulate the thermal environment of space debris in different orbit profiles. By modelling various scenarios, the study also discusses the possible challenges of infrared technology. These theoretical findings were replicated in laboratory conditions reproducing the thermal-vacuum environment of ADR. Using this novel space-debris set-up, the study investigates the behaviour of infrared cues extracted by different techniques and identifies the issue of short-lifespan features in ADR scenarios. Based on these findings, the study proposes two relative navigation methods distinguished by the degree of target cooperativeness: one for partially cooperative targets and one for uncooperative targets. Both algorithms provide the navigation solution with respect to an online reconstruction of the target.
    The method for partially cooperative targets provides a solution for smooth trajectories by exploiting the subsequent image tracks of features extracted from the first frame. The second algorithm, for uncooperative targets, exploits the target motion (e.g. tumbling) by formulating the problem in terms of a static target and a moving map (i.e. the target structure) within a filtering framework. The optical flow information is related to the target motion derivatives and the target structure. A novel technique that uses the quality of the infrared cues to improve algorithm performance is introduced, and the problem of short measurement durations due to the target's tumbling motion is addressed by an innovative smart initialisation procedure. Both navigation solutions were tested in a number of scenarios using computer simulations and a dedicated laboratory set-up with a real infrared camera. The results show that these methods perform well as infrared-based navigation solutions using monocular cameras when knowledge of the target's infrared appearance is limited.
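    The link between optical flow, relative motion and structure that the second algorithm exploits follows the standard pinhole motion-field relation. This sketch shows only the textbook translational case, not the thesis's tumbling-target formulation: the image velocity of a static point depends on the relative translation rate and, inversely, on the point's depth, which is how flow carries structure information.

    ```python
    import numpy as np

    def translational_flow(x, y, depth, v, f=1.0):
        """Image-plane velocity of a static point at image position (x, y)
        and depth Z under pure camera translation v = (vx, vy, vz),
        for a pinhole camera with focal length f:
            u = (x*vz - f*vx) / Z,   w = (y*vz - f*vy) / Z."""
        vx, vy, vz = v
        return np.array([(x * vz - f * vx) / depth,
                         (y * vz - f * vy) / depth])
    ```

    Note the 1/Z factor: halving the depth doubles the flow, so measured flow constrains the product of motion and inverse structure, which is why such methods estimate both jointly within a filter.
    
    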

    A novel target detection method for SAR images based on shadow proposal and saliency analysis

    Conventional synthetic aperture radar (SAR) target detection methods generally rely on high-intensity pixels in the pre-screening stage while ignoring shadow information. Furthermore, they cannot accurately extract the target area and perform poorly in cluttered environments. To address these problems, this paper presents a novel SAR target detection method that combines shadow proposal and saliency analysis. The detection process is divided into shadow proposal, saliency detection and One-Class Support Vector Machine (OC-SVM) screening stages. In the shadow proposal stage, targets are first localized using the detected shadow regions to generate proposal chips that may contain potential targets. Saliency detection is then conducted to extract salient regions of the proposal chips using local spatial autocorrelation and significance tests. Finally, the OC-SVM identifies the real targets among the salient regions. Experimental results show that the proposed saliency detection method achieves higher detection accuracy than several state-of-the-art methods on SAR images, and the proposed SAR target detection method is demonstrated to be robust under different imaging environments.
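    The final screening stage can be illustrated with scikit-learn's `OneClassSVM`: trained on target examples only, it learns the support of that distribution and rejects candidates falling outside it. The two-dimensional stand-in features below are purely illustrative; the paper's actual SAR features are not specified in the abstract.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    # Stand-in feature vectors for salient regions of known target chips.
    target_feats = rng.normal(loc=0.0, scale=0.3, size=(200, 2))

    # One-class training: no clutter examples are needed, the model only
    # learns what "target-like" features look like.
    ocsvm = OneClassSVM(kernel="rbf", gamma=2.0, nu=0.05).fit(target_feats)

    candidates = np.array([
        [0.1, -0.2],   # resembles the training targets
        [3.0,  3.0],   # far outside the learned support (clutter)
    ])
    labels = ocsvm.predict(candidates)   # +1 = accepted as target, -1 = rejected
    ```

    The `nu` parameter bounds the fraction of training targets allowed to fall outside the learned boundary, which is the main knob trading missed detections against false alarms in such a screening stage.
    
    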

    Recovering light directions and camera poses from a single sphere

    LNCS v. 5302 is the conference proceedings of ECCV 2008. This paper introduces a novel method for recovering both the light directions and the camera poses from a single sphere. Traditional methods for estimating light directions using spheres either assume that both the radius and the center of the sphere are known precisely, or depend on multiple calibrated views to recover these parameters. This paper shows that the light directions can be uniquely determined from the specular highlights observed in a single view of a sphere, without knowing or recovering the sphere's exact radius and center. Moreover, if the sphere is observed by multiple cameras, its images uniquely define the translation vector of each camera from a common world origin centered at the sphere center, and the relative rotations between the cameras can be recovered using two or more light directions estimated from each view. Closed-form solutions for recovering the light directions and camera poses are presented, and experimental results on both synthetic and real data show the practicality of the proposed method. © 2008 Springer Berlin Heidelberg. The 10th European Conference on Computer Vision (ECCV 2008), Marseille, France, 12-18 October 2008. In Lecture Notes in Computer Science, 2008, v. 5302, pt. 1, p. 631-64
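    Once the surface normal at a specular highlight is known, the light direction follows from the mirror-reflection law. This sketch covers only that final, standard step; the paper's contribution of recovering the normal without the sphere's exact radius and center is not reproduced here.

    ```python
    import numpy as np

    def light_direction(normal, view):
        """Mirror-reflection law at a specular highlight: the light
        direction is the view direction reflected about the surface
        normal, L = 2 (N . V) N - V, with all vectors unit-length and
        pointing away from the surface."""
        n = np.asarray(normal, float)
        n /= np.linalg.norm(n)
        v = np.asarray(view, float)
        v /= np.linalg.norm(v)
        L = 2.0 * np.dot(n, v) * n - v
        return L / np.linalg.norm(L)
    ```

    For instance, a highlight whose normal bisects a 90-degree angle with the viewing ray yields a light direction perpendicular to that ray, while a head-on highlight (normal parallel to the view) places the light directly behind the camera.
    
    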

    Visual Tracking and Motion Estimation for an On-orbit Servicing of a Satellite

    This thesis addresses visual tracking of non-cooperative and partially cooperative satellites, to enable close-range rendezvous between a servicer and a target satellite. Visual tracking and estimation of the relative motion between a servicer and a target satellite are critical abilities for rendezvous and proximity operations such as repairing and deorbiting. For this purpose, Lidar has been widely employed in cooperative rendezvous and docking missions. Despite its robustness to harsh space illumination, Lidar is heavy, has rotating parts and consumes considerable power, which conflicts with the stringent requirements of satellite design. Inexpensive on-board cameras, on the other hand, can provide an effective solution over a wide range of distances. However, space lighting conditions are particularly challenging for image-based tracking algorithms, because of direct sunlight exposure and because the satellite's glossy surface creates strong reflections and image saturation, complicating the tracking procedures. To address these difficulties, the relevant literature in computer vision and in satellite rendezvous and docking is examined. Two classes of problems are identified, and solutions implemented on a standard computer are provided. First, in the absence of a geometric model of the satellite, the thesis presents a robust feature-based method, relying on a point-wise motion model, with prediction capability in case of insufficient features. Second, a robust model-based hierarchical position localization method is employed to handle the change of image features over a range of distances and to localize an attitude-controlled (partially cooperative) satellite. Moreover, the thesis presents a pose tracking method addressing ambiguities in edge matching, and a pose detection algorithm based on appearance model learning.
    The methods are validated using real camera images and ground truth data generated with a laboratory test bed that reproduces space conditions. The experimental results indicate that camera-based methods provide robust and accurate tracking for the approach of malfunctioning satellites, despite the difficulties associated with specularities and direct sunlight. Exceptional lighting conditions associated with the sun angle are also discussed, with the aim of achieving a fully reliable localization system for a given mission.

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance, and computer applications and video games provide a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. This paper describes the implementation of this system and presents the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that it is possible to estimate user emotion with a software-only method.
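    The general shape of such a fuzzy emotion estimator can be sketched as follows. These rules and variables are illustrative inventions, not FLAME's actual rule base (which the abstract does not enumerate): game events fuzzified by membership functions, combined by min-AND rules, then defuzzified to an emotion intensity.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function: 0 outside (a, c), peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def frustration(damage_taken, progress):
        """Toy Mamdani-style inference, inputs normalised to [0, 1]:
        frustration rises with damage taken and falls with goal progress."""
        high_damage = tri(damage_taken, 0.4, 1.0, 1.6)   # shoulder at 1.0
        low_progress = tri(progress, -0.6, 0.0, 0.6)     # shoulder at 0.0
        # Rule 1: high damage AND low progress -> high frustration (min = AND).
        r1 = min(high_damage, low_progress)
        # Rule 2: low damage -> low frustration.
        r2 = tri(damage_taken, -0.6, 0.0, 0.6)
        # Weighted-average defuzzification over output levels 1.0 and 0.0.
        total = r1 + r2
        return r1 / total if total > 0 else 0.5
    ```

    Because the inputs are ordinary game-state quantities, the whole pipeline runs in software with no physiological sensors, which is the paper's central point.
    
    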