72 research outputs found

    An algorithm of search of key point pairs strong correspondences in images and depth maps

    The development of effective computer vision methods remains a central research topic, as such methods increase the speed and efficiency of solving problems in many industries: cartography, robotics, virtual and augmented reality systems, and computer-aided design. Modern research on methods and algorithms for stereo vision and image recognition, including those that run in real time, is particularly promising. One important stereo-vision task is matching depth maps to obtain a three-dimensional model of a scene, but several issues in matching depth maps of large-scale environmental scenes captured by unmanned aerial vehicles remain unresolved, namely: low depth resolution due to the large distance from the camera to the scene, and noise caused by camera defects. These problems complicate the detection of key points in images for their subsequent matching. This paper proposes an approach to detecting key points on adjacent depth maps by searching for key points that lie in nearby regions of the parameter space. The approach finds a set of key points in two consecutive video frames and then selects pairs of points such that both points of a pair correspond to the same scene point in the input image. Pairs of key points localized by a feature detector may be false positives; the proposed algorithm eliminates such pairs by determining the dominant direction of key-point motion in local regions of the image, and it also makes it possible to determine the centre of displacement of the camera viewpoint, providing a better estimate of the pose of the imaging equipment. The method is implemented as a software application and tested on video footage obtained by an unmanned aerial vehicle.
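
    The false-positive rejection step described above can be illustrated with a toy sketch. The paper does not specify its detector or local-region scheme, so the code below simply bins the displacement directions of matched key-point pairs, takes the most populated bin as the dominant motion direction, and discards pairs that disagree with it; this is an assumption-laden simplification of the described algorithm.

```python
import numpy as np

def filter_matches_by_dominant_motion(pts_prev, pts_next, n_bins=16, tol_bins=1):
    """Reject key-point pairs whose displacement direction disagrees with the
    dominant motion direction (a rough outlier filter; the paper's exact
    local-region scheme is not reproduced here)."""
    d = pts_next - pts_prev                            # displacement of each pair
    angles = np.arctan2(d[:, 1], d[:, 0])              # direction in [-pi, pi)
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    dominant = np.bincount(bins, minlength=n_bins).argmax()
    # circular distance (in bins) to the dominant direction
    diff = np.minimum((bins - dominant) % n_bins, (dominant - bins) % n_bins)
    keep = diff <= tol_bins
    return pts_prev[keep], pts_next[keep]
```

    A pair whose motion vector points far from the dominant direction (e.g. straight up while the scene drifts right under the UAV) is dropped before pose estimation.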

    Modeling and Calibration of Coupled Fish-Eye CCD Camera and Laser Range Scanner for Outdoor Environment Reconstruction

    Precise and realistic models of outdoor environments such as cities and roads are useful for various applications. To build such models, both the geometry and the photography of the environment must be captured. This paper presents a coupled system, based on a fish-eye-lens CCD camera and a laser range scanner, aimed at capturing colour and geometry in this context. A relevant model and an accurate calibration method for this system are presented. The calibration method uses a simplified fish-eye model; it requires only one image to estimate the fish-eye parameters and avoids the large calibration patterns required by other methods. The validity and precision of the method are assessed, and an example of coloured 3D points produced by the system is presented.
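
    The paper's exact "simplified fish-eye model" is not given here, but a common choice for such simplifications is the equidistant model, in which the image radius is proportional to the angle from the optical axis, r = f·θ. The sketch below projects 3D camera-frame points under that assumed model, without distortion terms.

```python
import numpy as np

def fisheye_project(points_cam, f, cx, cy):
    """Project 3D points (camera frame) under an equidistant fish-eye model,
    r = f * theta. One common simplified model; the paper's exact model and
    distortion terms are assumptions here."""
    X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    theta = np.arctan2(np.hypot(X, Y), Z)   # angle from the optical axis
    phi = np.arctan2(Y, X)                  # azimuth in the image plane
    r = f * theta                           # equidistant radial mapping
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)
```

    Unlike the pinhole model, r stays finite as θ approaches 90°, which is what lets a fish-eye lens image a near-hemispherical field of view.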

    Automatic facial expression tracking for 4D range scans

    This paper presents a fully automatic approach to spatio-temporal facial expression tracking for 4D range scans without any manual intervention (such as specifying landmarks). The approach consists of three steps: rigid registration, facial model reconstruction, and facial expression tracking. A Scaling Iterative Closest Points (SICP) algorithm is introduced to compute the optimal rigid registration between a template facial model and a range scan while accounting for the scale problem. A deformable model, physically based on thin shells, is proposed to faithfully reconstruct the facial surface and texture from the range data. The reconstructed facial model is then used, via the deformable model, to track the facial expressions present in a sequence of range scans.
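
    An SICP-style registration alternates correspondence search with a closed-form similarity update (scale, rotation, translation). The inner update can be computed with Umeyama's SVD-based method; the sketch below shows only that closed-form step, with the iterative correspondence search omitted, so it is an illustration of the idea rather than the paper's algorithm.

```python
import numpy as np

def similarity_fit(src, dst):
    """Closed-form s, R, t minimizing ||s * R @ src_i + t - dst_i||^2 over
    corresponding points (Umeyama). Inner update of an SICP-style loop;
    the per-iteration nearest-neighbour search is omitted."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d
    U, sigma, Vt = np.linalg.svd(D.T @ S / len(src))   # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))                 # guard against reflections
    C = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ C @ Vt
    var_s = (S ** 2).sum() / len(src)                  # variance of the source set
    s = (sigma * np.diag(C)).sum() / var_s             # optimal scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```

    Estimating the scale jointly with the rotation is what distinguishes SICP from plain ICP: a template head and a scanned head of different sizes can still be registered without pre-normalization.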

    Automatic 3D facial model and texture reconstruction from range scans

    This paper presents a fully automatic approach to fitting a generic facial model to detailed range scans of human faces to reconstruct 3D facial models and textures with no manual intervention (such as specifying landmarks). A Scaling Iterative Closest Points (SICP) algorithm is introduced to compute the optimal rigid registrations between the generic model and range scans of different sizes. A new template-fitting method, formulated as an optimization that minimizes the physically based elastic energy derived from thin shells, then faithfully reconstructs the surfaces and textures from the range scans and yields dense point correspondences across the reconstructed facial models. Finally, a facial expression transfer method is demonstrated that clones facial expressions from the generic model onto the reconstructed facial models using the deformation transfer technique.

    Tracking of secondary and temporary objects in structural concrete work

    Previous research has shown that “Scan-vs-BIM” object recognition systems, which fuse 3D point clouds from Terrestrial Laser Scanning (TLS) or digital photogrammetry with 4D project BIM, provide valuable information for tracking structural works. However, until now the potential of these systems has been demonstrated only for tracking the progress of permanent structures; no work has yet been reported on tracking secondary or temporary structures. For structural concrete work, temporary structures include formwork, scaffolding, and shoring, while secondary components include rebar. Together, they constitute most of the earned value in concrete work. Tracking such elements would thus add veracity and detail to earned value calculations, and subsequently improve project control and performance. This paper presents three different techniques for recognizing secondary and temporary concrete-construction objects in TLS point clouds. Two of the techniques are tested using real-life data collected from a reinforced concrete building construction site. The preliminary experimental results show that it is feasible to recognize secondary and temporary objects in TLS point clouds with good accuracy, but it is envisaged that superior results could be achieved by using additional cues such as colour and 3D edge information.
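
    The core Scan-vs-BIM comparison can be caricatured as a coverage test: sample points from the as-planned (BIM) object surface and check how many of them have an as-built scan point nearby. The threshold, coverage ratio, and brute-force nearest-neighbour search below are all illustrative assumptions, not the paper's three techniques; real systems use spatial indices and surface-based distances.

```python
import numpy as np

def object_recognized(scan_pts, model_pts, eps=0.02, min_ratio=0.5):
    """Toy Scan-vs-BIM check: the as-planned object (sampled as model_pts)
    counts as recognized if at least min_ratio of its sample points have an
    as-built scan point within eps (same units as the coordinates)."""
    # brute-force squared distances: model points x scan points
    d2 = ((model_pts[:, None, :] - scan_pts[None, :, :]) ** 2).sum(-1)
    covered = d2.min(axis=1) <= eps ** 2       # model points explained by the scan
    return covered.mean() >= min_ratio
```

    For temporary objects such as formwork panels, the same test applied per object against the 4D schedule indicates which components are currently installed.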

    Self-Calibration of Mobile Manipulator Kinematic and Sensor Extrinsic Parameters Through Contact-Based Interaction

    We present a novel approach for mobile manipulator self-calibration using contact information. Our method, based on point cloud registration, is applied to estimate the extrinsic transform between a fixed vision sensor mounted on a mobile base and an end effector. Beyond sensor calibration, we demonstrate that the method can be extended to include manipulator kinematic model parameters, which involves a non-rigid registration process. Our procedure uses on-board sensing exclusively and does not rely on any external measurement devices, fiducial markers, or calibration rigs. Further, it is fully automatic in the general case. We experimentally validate the proposed method on a custom mobile manipulator platform and demonstrate centimetre-level post-calibration accuracy in positioning the end effector using visual guidance only. We also discuss the stability properties of the registration algorithm in order to determine the conditions under which calibration is possible.
    Comment: In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane, Australia, May 21-25, 2018
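
    At the heart of such an extrinsic calibration sits a rigid point cloud registration: the same contact points are expressed once in the sensor frame and once in the manipulator's base frame (via forward kinematics), and the transform aligning them is the sought extrinsic. The sketch below recovers that rigid transform with the Kabsch algorithm; the paper's non-rigid extension to kinematic parameters is not reproduced.

```python
import numpy as np

def rigid_fit(pts_cam, pts_base):
    """Rigid registration (Kabsch): recover R, t with pts_base ~= R @ p + t
    for corresponding points p in pts_cam. A sketch of the registration core
    of contact-based extrinsic calibration."""
    mu_c, mu_b = pts_cam.mean(0), pts_base.mean(0)
    H = (pts_cam - mu_c).T @ (pts_base - mu_b)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_b - R @ mu_c
    return R, t
```

    With at least three non-collinear contact points the solution is unique, which matches the intuition that degenerate contact geometries are where such calibration becomes unstable.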