18 research outputs found

    A Structured-Light Approach for the Reconstruction of Complex Objects

    Recently, one of the central issues in the fields of Photogrammetry, Computer Vision, Computer Graphics and Image Processing has been the development of tools for the automatic reconstruction of complex 3D objects. Among various approaches, one of the most promising is Structured Light 3D scanning (SL), which combines automation and high accuracy with low cost, given the steady decrease in the price of cameras and projectors. SL relies on the projection of different light patterns, by means of a video projector, onto 3D object surfaces, which are recorded by one or more digital cameras. Automatic pattern identification on the images allows reconstructing the shape of the recorded 3D objects via triangulation of the optical rays corresponding to projector and camera pixels. Models draped with realistic phototexture may thus also be generated, reproducing both the geometry and the appearance of the 3D world. In this context, the subject of our research is a synthesis of state-of-the-art algorithms as well as the development of novel ones, in order to implement a 3D scanning system consisting, at this stage, of one consumer digital camera (DSLR) and a video projector. In the following, the main principles of structured light scanning and the algorithms implemented in our system are presented, and results are given to demonstrate the potential of such a system. Since this work is part of an ongoing research project, future tasks are also discussed.
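
    For readers unfamiliar with the triangulation step mentioned above, the sketch below illustrates the basic idea in a hedged, simplified form: a camera pixel and the projector pixel identified with it are back-projected into two rays, and the surface point is taken as the midpoint of their common perpendicular. All function names, calibration matrices and pixel coordinates are illustrative assumptions, not part of the system described in the abstract.

        # Minimal sketch (Python/NumPy) of ray-ray triangulation for one camera
        # pixel and its corresponding projector pixel; poses and intrinsics are
        # assumed known from calibration, and the values below are made up.
        import numpy as np

        def pixel_to_ray(K, R, C, uv):
            """Back-project pixel uv into a world ray (origin C, unit direction d)."""
            d = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
            return C, d / np.linalg.norm(d)

        def triangulate(o1, d1, o2, d2):
            """Midpoint of the shortest segment between two (generally skew) rays."""
            A = np.stack([d1, -d2], axis=1)          # solve [d1 -d2][s t]^T ~ o2 - o1
            s, t = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
            return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

        # Illustrative setup: camera at the origin, projector 0.5 m to its right.
        K_cam = np.array([[1500, 0, 960], [0, 1500, 540], [0, 0, 1]], float)
        K_prj = np.array([[1700, 0, 640], [0, 1700, 400], [0, 0, 1]], float)
        o1, d1 = pixel_to_ray(K_cam, np.eye(3), np.zeros(3), (1010.0, 560.0))
        o2, d2 = pixel_to_ray(K_prj, np.eye(3), np.array([0.5, 0.0, 0.0]), (420.0, 380.0))
        print(triangulate(o1, d1, o2, d2))           # approximate surface point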

    MindSpaces: Art-driven Adaptive Outdoors and Indoors Design

    MindSpaces provides solutions for creating functionally and emotionally appealing architectural designs in urban spaces. Social media services, physiological sensing devices and video cameras provide data from sensing environments. State-of-the-art technology, including VR, 3D design tools, emotion extraction, visual behaviour analysis and textual analysis, will be incorporated into the MindSpaces platform for analysing data and adapting the design of spaces.

    Geometric information from single images in photogrammetry and computer vision

    The motivation of this thesis is to investigate the geometry of single perspective images in three main aspects. First, the aim is to further understand this geometry by formulating alternative approaches for certain significant problems, regarding particularly the geometry of the image vanishing points and their relation with the question of camera calibration, on the one hand, and with issues of object reconstruction, on the other. A second important aspect was to carry out this research in the typically Euclidean conceptual framework of Photogrammetry, but at the same time to clearly interpret the relation of the algorithms developed here with corresponding approaches from the field of geometric (computational) Computer Vision, where single images are mainly studied. In this sense, the thesis at the same time presents Euclidean interpretations, useful to Photogrammetry, of methods of Computer Vision which, as a rule, rely on projective geometry. Finally, of course, an important goal of this thesis was to implement the theoretical investigation in the form of specific automatic algorithms, in order to thereby demonstrate its direct practical usefulness. Thus, the thesis starts with certain introductory comments regarding similarities and differences between Photogrammetry and Computer Vision, as well as the theoretical and practical significance of projective geometry, while certain fundamental properties of projective geometry, necessary for what follows, are also presented. Next, the geometry of an image with three vanishing points of orthogonal space directions and its relation to intrinsic camera geometry as well as to homography are studied, while different algorithms for camera calibration via vanishing points - one of which allows simultaneous estimation of radial-symmetric lens distortion - are evaluated against self-calibrating multi-image bundle adjustment. After this, the geometric properties and the equations which relate camera interior orientation with the vanishing points of two orthogonal directions are formulated. This relation is expressed by introducing a geometric locus for the projection centre (the “calibration sphere”), which is compared to corresponding projective representations from Computer Vision. On this basis a camera calibration algorithm is formulated and evaluated experimentally, which allows combining different single images with two vanishing points from the same camera. Then the algorithm is generalized for automatic camera calibration using one image with three vanishing points of orthogonal directions, or more independent images with at least one pair of vanishing points. The approach is fully automated thanks to the automatic localization of vanishing points on one or more images via a novel technique and a unified least-squares adjustment of the observations on all participating images through the line-fitting and “calibration sphere” equations. Results from such automatic calibrations using real data were regarded as satisfactory compared to those from multi-image self-calibration, as regards both calibration and 3D object reconstruction. The thesis next addresses the question of affine reconstruction of planar objects. The geometry and the equations are explained which allow affine reconstruction from uncalibrated images, both frontal and oblique with different orientations, through one single vanishing point. The algorithm (whose sole metric information is a known length in the direction of the road axis) is applied and evaluated in vehicle speed measurement from video sequences. In this algorithm, moving vehicles are detected and tracked automatically on affinely rectified images, while a shadow detection technique is also incorporated. In this way, it is demonstrated that purely projective approaches also have a direct practical relevance for Photogrammetry. The thesis closes with the conclusions from this research regarding the theoretical and practical significance of single images in different photogrammetric tasks. Finally, certain interesting questions for further investigation are indicated, particularly regarding accuracy issues of the approaches.
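
    As a small illustration of the “calibration sphere” mentioned above, the hedged sketch below checks numerically the underlying geometric fact as we read it: the projection centre views two vanishing points of orthogonal space directions under a right angle, so it lies on the sphere having the segment between them as diameter. The camera constant, principal point and directions are arbitrary test values, not data from the thesis.

        # Numerical check (Python/NumPy) of the calibration-sphere property for a
        # synthetic camera; all values are illustrative.
        import numpy as np

        c, x0, y0 = 1520.0, 12.0, -7.0        # camera constant and principal point
        O = np.array([x0, y0, c])             # projection centre above image plane z = 0

        def vanishing_point(d):
            """Intersection of the ray through O with direction d and the plane z = 0."""
            t = -c / d[2]
            return O + t * d

        # Two orthogonal space directions expressed in the camera frame.
        Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
        v1, v2 = vanishing_point(Q[:, 0]), vanishing_point(Q[:, 1])

        centre, radius = 0.5 * (v1 + v2), 0.5 * np.linalg.norm(v1 - v2)
        print(np.linalg.norm(O - centre) - radius)   # ~0: O lies on the sphere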

    An automatic approach for camera calibration from vanishing points

    Camera calibration is a fundamental task in photogrammetry and computer vision. This paper presents an approach for the automatic estimation of interior orientation from images with three vanishing points of orthogonal directions. Extraction of image line segments and their clustering into groups corresponding to the three dominant vanishing points are performed without any human interaction. The camera parameters (camera constant, location of the principal point, two coefficients of radial lens distortion) and the vanishing points are estimated in a one-step adjustment of all participating image points. The approach may function in a single-image mode, but it is also capable of handling input from independent images (i.e. images not necessarily of the same object) with three and/or two vanishing points in a common solution. The reported experimental tests indicate that, within certain limits, results from single images compare satisfactorily with those from multi-image bundle adjustment.
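
    To make the geometry concrete, the sketch below gives the classical closed-form relations that such a solution is commonly initialised with (a generic textbook result, not the paper's adjustment itself): for three finite vanishing points of mutually orthogonal directions, the principal point is the orthocentre of the vanishing-point triangle, and the camera constant follows from c^2 = -(v1 - p)·(v2 - p). The vanishing-point coordinates are invented for illustration.

        # Closed-form initial values for interior orientation from three orthogonal
        # vanishing points (Python/NumPy); coordinates below are illustrative only.
        import numpy as np

        def interior_from_vps(v1, v2, v3):
            v1, v2, v3 = (np.asarray(v, float) for v in (v1, v2, v3))
            # Orthocentre p from the altitude conditions (v1-v3).(v2-p) = 0, (v2-v3).(v1-p) = 0.
            A = np.array([v1 - v3, v2 - v3])
            b = np.array([(v1 - v3) @ v2, (v2 - v3) @ v1])
            p = np.linalg.solve(A, b)
            c = np.sqrt(-(v1 - p) @ (v2 - p))        # camera constant (pixels)
            return p, c

        p, c = interior_from_vps((2520.0, 310.0), (-1830.0, 420.0), (600.0, 5980.0))
        print(p, c)    # principal point and camera constant for the test triangle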

    3D ASPECTS OF 2D EPIPOLAR GEOMETRY

    Relative orientation in a stereo pair (establishing 3D epipolar geometry) is generally described as a rigid body transformation, with one arbitrary translation component, between the two formed bundles of rays. In the uncalibrated case, however, only the 2D projective pencils of epipolar lines can be established from simple image point homologies. These may be related to each other in infinite variations of perspective positions in space, each defining different camera geometries and a different relative orientation of the image bundles. It is of interest in photogrammetry to also approach the 3D image configurations embedded in 2D epipolar geometry in a Euclidean (rather than a projective-algebraic) framework. This contribution attempts such an approach, initially in 2D, to propose a parameterization of epipolar geometry; when fixing some of the parameters, the remaining ones correspond to a ‘circular locus’ for the second epipole. Every point on this circle is related to a specific direction on the plane representing the intersection line of the image planes. Each of these points defines, in turn, a circle as the locus of the epipole in space (to accommodate all possible angles of intersection of the image planes). It is further seen that knowledge of the lines joining the epipoles with the respective principal points suffices for establishing the relative position of the image planes and the direction of the base line in model space; knowledge of the actual position of the principal points allows full relative orientation and camera calibration of central perspective cameras. Issues of critical configurations are also addressed. Possible future tasks include the study of different a priori knowledge as well as the case of the image triplet.
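
    As a point of reference for the projective-algebraic view contrasted above, the hedged sketch below shows how the 2D pencils of epipolar lines are usually encoded: the fundamental matrix is estimated from point homologies (a plain, unnormalised 8-point solution here) and the two epipoles are recovered as its right and left null vectors. The cameras and points are synthetic; this is not the parameterization proposed in the paper.

        # Epipoles from point homologies via the fundamental matrix (Python/NumPy);
        # synthetic data, plain 8-point estimation, illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)
        X = np.c_[rng.uniform(-1, 1, (20, 2)), rng.uniform(4, 6, 20), np.ones(20)]  # object points
        P1 = np.c_[np.eye(3), np.zeros(3)]                                          # camera 1
        R = np.array([[0.98, 0.0, 0.2], [0.0, 1.0, 0.0], [-0.2, 0.0, 0.98]])
        P2 = np.c_[R, np.array([-0.5, 0.1, 0.2])]                                   # camera 2
        x1 = (P1 @ X.T).T
        x1 /= x1[:, 2:]
        x2 = (P2 @ X.T).T
        x2 /= x2[:, 2:]

        # Each homology x1 <-> x2 gives one row of A with A.vec(F) = 0.
        A = np.array([np.outer(p2, p1).ravel() for p1, p2 in zip(x1, x2)])
        F = np.linalg.svd(A)[2][-1].reshape(3, 3)
        U, s, Vt = np.linalg.svd(F)
        F = U @ np.diag([s[0], s[1], 0.0]) @ Vt           # enforce rank 2

        e1 = np.linalg.svd(F)[2][-1]
        e2 = np.linalg.svd(F.T)[2][-1]
        print(e1 / e1[2], e2 / e2[2])   # epipoles: right and left null vectors of F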

    AUTOMATIC ESTIMATION OF VEHICLE SPEED FROM UNCALIBRATED VIDEO SEQUENCES

    Video sequences of road and traffic scenes are currently used for various purposes, such as studies of the traffic character of freeways. The task here is to automatically estimate vehicle speed from video sequences acquired with a downward-tilted camera from a bridge. Assuming that the studied road segment is planar and straight, the vanishing point in the road direction is extracted automatically by exploiting lane demarcations. Thus, the projective distortion of the road surface can be removed, allowing affine rectification. Consequently, given one known ground distance along the road axis, 1D measurement of vehicle position in the correctly scaled road direction is possible. Vehicles are automatically detected and tracked along frames. First, the background image (the empty road) is created from several frames by an iterative per-channel exclusion of outlying colour values based on thresholding. Next, the subtraction of the background image from the current frame is binarized, and morphological filters are employed for vehicle clustering. At the lowest part of each vehicle cluster a window is defined for normalised cross-correlation among frames to allow vehicle tracking. The reference data for vehicle speed came from a rigorous 2D projective transformation based on control points (which had been previously evaluated against GPS measurements). Compared to these, our automatic approach gave a very satisfactory estimated accuracy in vehicle speed of about ±3 km/h.
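
    The sketch below is a heavily simplified, hedged illustration of the speed estimation logic described above: the frames are assumed to be already affinely rectified so that image rows map linearly to distance along the road axis, the background is taken as a per-pixel median rather than the iterative exclusion scheme of the paper, and a single vehicle is tracked through its lowest foreground row. The scale, frame rate and synthetic frames are all invented for illustration.

        # Simplified speed estimation on synthetic, already-rectified frames
        # (Python/NumPy); not the system described in the abstract.
        import numpy as np

        def lowest_foreground_row(frame, background, thresh=30):
            """Row index of the lowest pixel differing clearly from the background."""
            diff = np.abs(frame.astype(int) - background.astype(int)).max(axis=2)
            rows = np.nonzero((diff > thresh).any(axis=1))[0]
            return rows.max() if rows.size else None

        def estimate_speed(frames, metres_per_row, fps):
            """Mean speed (km/h) from per-frame vehicle positions along the road axis."""
            background = np.median(np.stack(frames), axis=0).astype(np.uint8)
            rows = [lowest_foreground_row(f, background) for f in frames]
            rows = [r for r in rows if r is not None]
            metres = (rows[-1] - rows[0]) * metres_per_row
            return abs(metres) / ((len(rows) - 1) / fps) * 3.6

        # Synthetic example: a bright 20-row "vehicle" moving down 10 rows per frame.
        frames = []
        for k in range(10):
            img = np.full((240, 320, 3), 90, np.uint8)
            img[60 + 10 * k: 80 + 10 * k, 100:140] = 200
            frames.append(img)
        print(estimate_speed(frames, metres_per_row=0.1, fps=25))   # ~90 km/h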

    A UNIFIED APPROACH FOR AUTOMATIC CAMERA CALIBRATION FROM VANISHING POINTS

    A novel approach is presented for automatic camera calibration from single images with three finite vanishing points in mutually orthogonal directions (or from more independent images having two and/or three such vanishing points). Assuming a ‘natural camera’, estimation of the three basic elements of interior orientation (camera constant, principal point location), along with the two coefficients of radial-symmetric lens distortion, is possible without any user interaction. First, image edges are extracted with sub-pixel accuracy, linked to segments and subjected to least-squares line-fitting. Next, these line segments are clustered into dominant space directions. In the vanishing point detection technique proposed here, the contribution of each image segment is calculated via a voting scheme, which involves the slope uncertainty of the fitted lines to allow a unified treatment of long and short segments. After checking potential vanishing points against certain geometric criteria, the triplet having the highest score indicates the three dominant vanishing points. Coming to camera calibration, a main issue here is the simultaneous adjustment of image point observations for vanishing point estimation, radial distortion compensation and recovery of interior orientation in one single step. Thus, line-fitting through the vanishing points, along with estimation of lens distortion, is combined with constraints relating the vanishing points to the camera parameters. Here, the principal point may be considered as the zero point of distortion and participate in both sets of equations as a common unknown. If a redundancy in vanishing points exists – e.g. when more independent images from the same camera with three, or even two, vanishing points are at hand and are to be combined for camera calibration – such a unified adjustment is undoubtedly advantageous. After th
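
    In the spirit of the voting scheme outlined above (though not the paper's exact algorithm), the hedged sketch below illustrates the idea: candidate vanishing points are generated as intersections of pairs of segment lines, and each segment votes for a candidate according to the angle between its own direction and the direction towards the candidate, normalised by a slope uncertainty that shrinks with segment length, so that long and short segments are treated in a unified way. The thresholds, weights and test segments are ad hoc assumptions.

        # Toy vanishing-point voting (Python/NumPy); segments and weights are illustrative.
        import numpy as np

        def homogeneous_line(p, q):
            """Homogeneous line through two image points."""
            return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

        def vote(segment, candidate, sigma_px=1.0):
            p, q = np.asarray(segment[0], float), np.asarray(segment[1], float)
            mid, length = 0.5 * (p + q), np.linalg.norm(q - p)
            u = (q - p) / length                                  # segment direction
            v = candidate[:2] / candidate[2] - mid                # direction towards candidate
            v /= np.linalg.norm(v)
            residual = np.arccos(np.clip(abs(u @ v), 0.0, 1.0))   # angular misalignment
            uncertainty = np.arctan2(sigma_px, 0.5 * length)      # long segment -> small uncertainty
            return np.exp(-(residual / uncertainty) ** 2)

        def best_vanishing_point(segments):
            lines = [homogeneous_line(*s) for s in segments]
            best, best_score = None, -1.0
            for i in range(len(lines)):
                for j in range(i + 1, len(lines)):
                    cand = np.cross(lines[i], lines[j])
                    if abs(cand[2]) < 1e-9:                       # near-parallel pair, skip
                        continue
                    score = sum(vote(s, cand) for s in segments)
                    if score > best_score:
                        best, best_score = cand[:2] / cand[2], score
            return best

        segments = [((0, 0), (100, 10)), ((0, 50), (100, 55)), ((0, 90), (100, 91))]
        print(best_vanishing_point(segments))   # ~(1000, 100), their common convergence point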

    AN OPEN-SOURCE EDUCATIONAL SOFTWARE FOR BASIC PHOTOGRAMMETRIC TASKS

    The current implementation of an educational package for basic photogrammetric operations is outlined. The context of this open-source software (MPT), developed in Matlab, is primarily the introductory course of Photogrammetry. Thus, the scope here is not to show students ‘how to do it’ but rather to clarify ‘what is actually being done’ in every step. In this sense, the stress lies mainly on basic photogrammetric adjustments. Students can work with one or two images at a time and perform monoscopic measurements of image points, lines or polylines. Exterior orientation is handled in a variety of ways: space resection with or without camera calibration (with or without estimation of radial lens distortion); the linear and non-linear DLT approach (again with or without lens distortion); relative orientation and absolute orientation. Detailed results are presented, including the standard error of the adjustment, residuals, the covariance matrix of the estimated parameters and correlations. Individual observations may optionally be excluded to study their effect on the adjustment. For fully oriented stereo pairs, 3D reconstruction is then possible. The 3D plot may be viewed in the program’s 3D viewer and exported in DXF format. Besides, 2D projective transformation is also possible, allowing rectification of vector data or resampling of digital images. Other features (e.g. image enhancement tools or self-calibrating bundle adjustment) are already implemented and will soon be incorporated.
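
    Since the package's orientation options include the linear DLT, a brief hedged sketch of that generic textbook step may be helpful (this is not the MPT code itself): the 3x4 projection matrix is recovered by SVD from at least six image/object point correspondences and checked against a synthetic camera.

        # Linear DLT from image/object correspondences (Python/NumPy); synthetic check.
        import numpy as np

        def dlt(object_points, image_points):
            """Solve A.vec(P) = 0 for the 3x4 projection matrix P."""
            rows = []
            for (X, Y, Z), (x, y) in zip(object_points, image_points):
                Xh = [X, Y, Z, 1.0]
                rows.append([*Xh, 0, 0, 0, 0, *[-x * c for c in Xh]])
                rows.append([0, 0, 0, 0, *Xh, *[-y * c for c in Xh]])
            P = np.linalg.svd(np.array(rows))[2][-1].reshape(3, 4)
            return P / P[2, 3]                        # fix the free overall scale

        # Project 8 synthetic object points with a known camera, then recover it.
        rng = np.random.default_rng(2)
        K = np.array([[1200, 0, 640], [0, 1200, 480], [0, 0, 1]], float)
        P_true = K @ np.c_[np.eye(3), [0.3, -0.2, 5.0]]
        Xobj = rng.uniform(-1, 1, (8, 3))
        xh = (P_true @ np.c_[Xobj, np.ones(8)].T).T
        ximg = xh[:, :2] / xh[:, 2:]
        print(np.allclose(dlt(Xobj, ximg), P_true / P_true[2, 3]))   # True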