4 research outputs found

    A Novel Illumination-Invariant Loss for Monocular 3D Pose Estimation

    The problem of identifying the 3D pose of a known object from a given 2D image has important applications in Computer Vision. Our proposed method of registering a 3D model of a known object on a given 2D photo of the object has numerous advantages over existing methods. It does not require prior training, knowledge of the camera parameters, explicit point correspondences or matching features between the image and model. Unlike techniques that estimate a partial 3D pose (as in an overhead view of traffic or machine parts on a conveyor belt), our method estimates the complete 3D pose of the object. It works on a single static image from a given view under varying and unknown lighting conditions. For this purpose we derive a novel illumination-invariant distance measure between the 2D photo and projected 3D model, which is then minimised to find the best pose parameters. Results for vehicle pose detection in real photographs are presented.
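
In outline, the abstract describes a search over pose parameters that minimises a distance between the projected 3D model and the photo. The sketch below illustrates that structure only: it substitutes a plain reprojection distance for the paper's illumination-invariant measure, and all point sets, pose parameterisations and values are illustrative assumptions.

```python
# Illustrative sketch: recover a pose by minimising a distance between the
# projected model and observed image features.  NOTE: the paper's
# illumination-invariant measure is replaced here by a simple squared
# reprojection distance so the example stays self-contained.
import numpy as np
from scipy.optimize import minimize

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def project(points, pose):
    """Rigidly transform 3D model points and apply a pinhole projection."""
    theta, tx, ty, tz = pose
    cam = points @ rot_z(theta).T + np.array([tx, ty, tz])
    return cam[:, :2] / cam[:, 2:3]          # perspective divide

# Hypothetical 3D model points and ground-truth pose (theta, tx, ty, tz).
model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]])
true_pose = np.array([0.3, 0.1, -0.2, 5.0])
observed = project(model, true_pose)          # synthetic "photo" features

def distance(pose):
    # Stand-in for the illumination-invariant distance of the paper.
    return np.sum((project(model, pose) - observed) ** 2)

# Minimise the distance over the pose parameters from a rough initial guess.
result = minimize(distance, x0=[0.0, 0.0, 0.0, 4.0], method="Nelder-Mead")
```

With a derivative-free method such as Nelder-Mead, no analytic gradient of the distance measure is needed, which matches the spirit of optimising a custom image-based cost.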

    Machine-Vision-Based Pose Estimation System Using Sensor Fusion for Autonomous Satellite Grappling

    When capturing a non-cooperative satellite during an on-orbit satellite servicing mission, the position and orientation (pose) of the satellite with respect to the servicing vessel is required in order to guide the robotic arm of the vessel towards the satellite. The main objective of this research is the development of a machine-vision-based pose estimation system for capturing a non-cooperative satellite. The proposed system finds the satellite pose using three types of natural geometric features: circles, lines and points, and it merges data from two monocular cameras and three different algorithms (one for each type of geometric feature) to increase the robustness of the pose estimation. It is assumed that the satellite has an interface ring (which is used to attach the satellite to the launch vehicle) and that the cameras are mounted on the robot end effector, which contains the capture tool used to grapple the satellite. The three algorithms are based on a feature extraction and detection scheme that provides the geometric features detected in the camera images which belong to the satellite, whose geometry is assumed to be known. Since the projection of a circle on the image plane is an ellipse, an ellipse detection system is used to find the 3D coordinates of the center of the interface ring and its normal vector from the corresponding detected ellipse on the image plane. Sensor and data fusion are performed in two steps. In the first step, a pose solver finds the pose using the conjugate gradient method to optimize a cost function and reduce the re-projection error of the detected features, which reduces the pose estimation error. In the second step, an extended Kalman filter merges data from the pose solver and the ellipse detection system, and gives the final estimated pose.
The inputs of the pose estimation system are the camera images, and the outputs are the position and orientation of the satellite with respect to the end effector on which the cameras are mounted. Virtual and real simulations using a full-scale realistic satellite mock-up and a 7-DOF robotic manipulator were performed to evaluate the system performance. Two different lighting conditions and three scenarios, each with a different set of features, were used. Tracking of the satellite was performed successfully. The total translation error is between 25 mm and 50 mm and the total rotation error is between 2 deg and 3 deg when the target is 0.7 m from the end effector.
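
As a rough illustration of the second fusion step, the sketch below applies Kalman-style updates that merge a pose estimate from the pose solver with one from the ellipse detector. It is heavily simplified: the state is 3-DOF position only, the filter is linear rather than the extended form the abstract describes, and all covariances and measurement values are made-up numbers.

```python
# Simplified linear Kalman update fusing two position measurements,
# standing in for the paper's EKF fusion of the pose solver and the
# ellipse detection system.  All numbers below are illustrative.
import numpy as np

def kalman_update(x, P, z, R):
    """Fuse measurement z (covariance R) into state x (covariance P);
    the measurement model is identity (we observe the state directly)."""
    K = P @ np.linalg.inv(P + R)              # Kalman gain, H = I
    return x + K @ (z - x), (np.eye(len(x)) - K) @ P

x = np.array([0.70, 0.00, 0.00])              # prior position estimate [m]
P = np.diag([0.05, 0.05, 0.05]) ** 2          # prior covariance

z_solver  = np.array([0.72, 0.01, -0.01])     # pose-solver measurement
z_ellipse = np.array([0.69, -0.02, 0.00])     # ring-centre measurement
R = np.diag([0.03, 0.03, 0.03]) ** 2          # measurement covariance

x, P = kalman_update(x, P, z_solver, R)       # step 1: fuse pose solver
x, P = kalman_update(x, P, z_ellipse, R)      # step 2: fuse ellipse system
```

Each update pulls the estimate towards the new measurement and shrinks the covariance, which is why fusing the two independent subsystems yields a more confident final pose than either alone.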

    Image based automatic vehicle damage detection

    Automatically detecting vehicle damage using photographs taken at the accident scene is very useful, as it can greatly reduce the cost of processing insurance claims, as well as provide greater convenience for vehicle users. An ideal scenario would be one where the vehicle user can upload a few photographs of the damaged car taken from a mobile phone and have the damage assessment and insurance claim processing done automatically. However, such a solution remains a challenging task due to a number of factors. For a start, the scene of the accident is typically an unknown and uncontrolled outdoor environment with a plethora of factors beyond our control, including scene illumination and the presence of surrounding objects which are not known a priori. In addition, since vehicles have very reflective metallic bodies, the photographs taken in such an uncontrolled environment can be expected to contain a considerable amount of inter-object reflection. Therefore, the application of standard computer vision techniques in this context is a very challenging task. Moreover, solving this task opens up a fascinating repertoire of computer vision problems which need to be addressed in the context of a very challenging scenario. This thesis describes research undertaken to address the problem of automatic vehicle damage detection using photographs. A pipeline addressing a vertical slice of the broad problem is considered, while focusing on mild vehicle damage detection. We propose to use 3D CAD models of undamaged vehicles to obtain ground truth information in order to infer what the mildly damaged vehicle in the photograph should have looked like, had it not been damaged. To this end, we develop 3D pose estimation algorithms to register an undamaged 3D CAD model over a photograph of the known damaged vehicle. We present a 3D pose estimation method using image gradient information of the photograph and the 3D model projection.
We show how the 3D model projection at the recovered 3D pose can be used to identify components of the vehicle in the photograph which may have mild damage. In addition, we present a more robust 3D pose estimation method that minimizes a novel illumination-invariant distance measure, based on a Mahalanobis distance between attributes of the 3D model projection and the pixels in the photograph. In principle, image edges which are not present in the 3D CAD model projection can be considered to be vehicle damage. However, since the vehicle body is very reflective, there is a large amount of inter-object reflection in the photograph which may be misclassified as damage. In order to detect image edges caused by inter-object reflection, we propose to apply multi-view geometry techniques to two photographs of the vehicle taken from different viewpoints. To this end, we also develop a robust method for obtaining reliable point correspondences across photographs which are dominated by large, reflective and mostly homogeneous regions. The performance of the proposed methods is experimentally evaluated on real photographs using 3D CAD models of varying accuracy. We expect that the research presented in this thesis will provide the groundwork for designing an automatic photograph-based vehicle damage detection system. Moreover, we hope that our method will provide the foundation for interesting future research.
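
The Mahalanobis-distance idea mentioned in the abstract can be sketched in a few lines. The sketch below is only a generic illustration of a Mahalanobis distance between attribute vectors: the actual attributes the thesis compares (model projection vs. photo pixels) are not specified here, so the synthetic attribute vectors and the function name are assumptions.

```python
# Illustrative sketch: a Mahalanobis-based distance between attribute
# vectors of a model projection and the attribute distribution of photo
# pixels.  The attribute design is a placeholder, not the thesis's.
import numpy as np

def mahalanobis_distance(model_attrs, photo_attrs):
    """Mean Mahalanobis distance of model attribute vectors to the
    photo's attribute distribution (mean and covariance)."""
    mu = photo_attrs.mean(axis=0)
    cov = np.cov(photo_attrs.T) + 1e-6 * np.eye(photo_attrs.shape[1])
    inv = np.linalg.inv(cov)                  # whitening by the covariance
    d = model_attrs - mu
    # Quadratic form d^T inv d for each row, then sqrt and average.
    return np.mean(np.sqrt(np.einsum("ij,jk,ik->i", d, inv, d)))

# Toy usage with synthetic attribute vectors (illustrative only).
rng = np.random.default_rng(0)
photo = rng.normal(size=(200, 3))             # stand-in photo pixel attributes
on_model = photo.mean(axis=0)[None, :]        # attributes agreeing with photo
d_near = mahalanobis_distance(on_model, photo)
d_far = mahalanobis_distance(on_model + 10.0, photo)
```

Because the covariance normalisation rescales each attribute by its observed spread, uniform global changes captured by the attribute statistics contribute less to the distance, which is the intuition behind using it for illumination robustness.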