19 research outputs found

    MindSpaces: Art-driven Adaptive Outdoors and Indoors Design

    Get PDF
    MindSpaces provides solutions for creating functionally and emotionally appealing architectural designs in urban spaces. Data from the sensed environments are provided by social media services, physiological sensing devices and video cameras. State-of-the-art technologies, including VR, 3D design tools, emotion extraction, visual behaviour analysis and textual analysis, will be incorporated into the MindSpaces platform for analysing these data and adapting the design of spaces.

    Epipolar geometry in projective and Euclidean space

    No full text
    The subject of this PhD thesis is the geometry of the uncalibrated stereopair. In particular, the question is addressed of which information may be extracted from two overlapping images regarding their position in space, the cameras from which they originate and the recorded objects. The starting point was the treatment of this issue in projective space, both (today) by researchers in the field of computer vision and (more than a century ago) by pioneering photogrammetrists; a basic aim here, however, was to simultaneously handle the question in Euclidean space as well. In the Introduction, the subject of the thesis is posed, its wider research framework is defined through a review of related literature in the fields of photogrammetry and particularly computer vision, and the original aspects of this work are outlined. Next, certain introductory concepts of projective geometry, indispensable for the description and further study of image geometry, are given. Following this, the epipolar geometry of the stereopair is presented in the projective framework of computer vision. For calibrated images, the linear expression of the coplanarity condition and of relative orientation, represented by the essential matrix, is described and its properties are given. Also, the concept of 2D epipolar geometry for uncalibrated camera pairs is introduced. This is expressed by the fundamental matrix, which describes, directly on the two image planes, the one-way correspondence between image points and epipolar lines. Different geometric interpretations of the fundamental matrix are presented, along with algorithms for its computation from point correspondences, for projective reconstruction of the imaged space, and for upgrading this reconstruction to affine or Euclidean through geometric constraints referring to object space or to the camera parameters. Further, constraints between the fundamental matrix and interior orientation are given, which allow camera autocalibration from ≥3 images or partial camera calibration in the case of the stereopair.
The following chapters present the novel contributions of this thesis. Given the two epipoles and the projective correspondence between homologous epipolar lines - information inscribed in the fundamental matrix - the 2D epipolar geometry is studied in Euclidean 3D space. Specifically, a mathematical expression is formulated to encompass the infinitely many combinations of relative and interior orientation of the two images which are compatible with the fundamental matrix and, consequently, the corresponding projective reconstructions of the imaged space. This description relies on the definition of four geometric parameters of the stereopair (direction of the intersection line of the image planes; angle formed by these planes; location of the projection centres on the base line) which are independent of the fundamental matrix. Furthermore, geometric loci are established which describe changes of these parameters as well as different constraints on the camera geometry. In this context, an alternative geometric parameterization of the 7 degrees of freedom of the fundamental matrix is formulated, based on the proof that, with suitable 2D translation and rotation, two planar projective bundles of rays - which generally intersect on a conic section - may be brought to a given perspective position, i.e. to intersect on a given straight line.
On the other hand, with a different but still Euclidean approach, equations are formulated which express in a new form the constraints posed by the fundamental matrix upon the parameters of interior orientation. These equations rely on the equality of dihedral angles formed by the faces of tetrahedra which are defined, independently for the two images, by specific epipolar planes and the planes formed by the optical axes and the base of the stereopair. Based on these equations, four new closed-form algorithms for partial camera calibration are formulated. In the last part, the algorithms developed in this thesis are evaluated with both simulated and real image data (including images from datasets available on the Internet). The computation of the fundamental matrix from automatically extracted image point correspondences is examined; examples of image configurations in 3D Euclidean space compatible with specific 2D epipolar geometries are given; and the algorithms for partial camera calibration are compared with those from the computer vision literature. The thesis concludes with the findings of this study, a justification of the originality of the research work performed and certain thoughts regarding possible further research issues in the context of the present investigation.
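    For reference, the standard two-view relations summarised in this abstract can be written compactly; the notation below (K, K′ for the calibration matrices, R and t for relative orientation, e and e′ for the epipoles) follows the common computer-vision convention and is not necessarily the symbolism of the thesis itself:

```latex
% Standard two-view relations (common computer-vision notation, assumed here):
\[
  \mathbf{x}'^{\top}\mathbf{F}\,\mathbf{x} = 0 ,
  \qquad
  \mathbf{l}' = \mathbf{F}\,\mathbf{x} ,
  \qquad
  \mathbf{F}\,\mathbf{e} = \mathbf{0} ,
  \quad
  \mathbf{F}^{\top}\mathbf{e}' = \mathbf{0}
\]
\[
  \mathbf{E} = \mathbf{K}'^{\top}\,\mathbf{F}\,\mathbf{K}
             = [\,\mathbf{t}\,]_{\times}\,\mathbf{R} ,
  \qquad
  \det \mathbf{F} = 0
  \quad (\text{rank 2, hence 7 degrees of freedom up to scale})
\]
```

    The coplanarity (epipolar) constraint, the point-to-line mapping and the two epipoles are all encoded in F, which is the information the Euclidean analysis of the later chapters starts from.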

    AUTOMATIC CALIBRATION OF DIGITAL CAMERAS USING PLANAR CHESS-BOARD PATTERNS

    No full text
    A variety of methods for camera calibration, relying on different camera models, algorithms and a priori object information, have been reported and reviewed in the literature. The use of simple 2D patterns of the chess-board type represents an interesting approach, for which several ‘calibration toolboxes’ are available on the Internet, requiring varying degrees of human interaction. This paper presents an automatic multi-image approach exclusively for camera calibration purposes, on the assumption that the imaged pattern consists of adjacent light and dark squares of equal size. Calibration results, also based on image sets from Internet sources, are viewed as satisfactory and comparable to those from other approaches. Questions regarding the role of image configuration need further investigation.
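    As a rough illustration of the multi-image chess-board workflow discussed here (a generic OpenCV-based sketch, not the paper's own algorithm), calibration might look as follows; the pattern size of 9×6 inner corners and the file pattern calib_*.jpg are assumptions:

```python
# Minimal multi-image chess-board calibration sketch (OpenCV), assuming a
# pattern of 9x6 inner corners and images named calib_*.jpg; this illustrates
# the general approach only, not the paper's own implementation.
import glob
import cv2
import numpy as np

pattern = (9, 6)                        # inner corners per row / column (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # unit squares

obj_points, img_points, image_size = [], [], None
for fname in glob.glob("calib_*.jpg"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Adjustment over all views: camera matrix K and lens distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("K =\n", K, "\ndistortion =", dist.ravel())
```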

    3D ASPECTS OF 2D EPIPOLAR GEOMETRY

    No full text
    Relative orientation in a stereo pair (establishing 3D epipolar geometry) is generally described as a rigid body transformation, with one arbitrary translation component, between two formed bundles of rays. In the uncalibrated case, however, only the 2D projective pencils of epipolar lines can be established from simple image point homologies. These may be related to each other in infinite variations of perspective positions in space, each defining different camera geometries and relative orientations of the image bundles. It is of interest in photogrammetry to also approach the 3D image configurations embedded in 2D epipolar geometry in a Euclidean (rather than a projective-algebraic) framework. This contribution attempts such an approach, initially in 2D, to propose a parameterization of epipolar geometry; when fixing some of the parameters, the remaining ones correspond to a ‘circular locus’ for the second epipole. Every point on this circle is related to a specific direction on the plane representing the intersection line of the image planes. Each of these points defines, in turn, a circle as the locus of the epipole in space (to accommodate all possible angles of intersection of the image planes). It is further seen that knowledge of the lines joining the epipoles with the respective principal points suffices for establishing the relative position of the image planes and the direction of the base line in model space; knowledge of the actual position of the principal points allows full relative orientation and camera calibration of central perspective cameras. Issues of critical configurations are also addressed. Possible future tasks include the study of different a priori knowledge as well as of the case of the image triplet.
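    Since the discussion starts from the epipoles and the pencil correspondence encoded in the fundamental matrix, a short sketch of how the two epipoles are recovered in practice (as the right and left null spaces of F) may be useful; the matrix used below is a random rank-2 placeholder, not data from the paper:

```python
# Recovering the two epipoles encoded in a fundamental matrix F as its right
# and left null spaces (F e = 0, F^T e' = 0).  F is a random placeholder
# forced to rank 2, not data from the paper.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
U, S, Vt = np.linalg.svd(A)
F = U @ np.diag([S[0], S[1], 0.0]) @ Vt      # a valid (rank-2) fundamental matrix

def epipole(M):
    """Unit null vector of a rank-2 matrix, from the last row of V^T in its SVD."""
    _, _, vt = np.linalg.svd(M)
    e = vt[-1]
    return e / e[2]                           # inhomogeneous image coordinates

e1 = epipole(F)       # epipole in the first image:  F e1 = 0
e2 = epipole(F.T)     # epipole in the second image: F^T e2 = 0
print("e  =", e1[:2], "\ne' =", e2[:2])
```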

    A Structured-Light Approach for the Reconstruction of Complex Objects

    Get PDF
    Recently, one of the central issues in the fields of Photogrammetry, Computer Vision, Computer Graphics and Image Processing has been the development of tools for the automatic reconstruction of complex 3D objects. Among various approaches, one of the most promising is Structured Light 3D scanning (SL), which combines automation and high accuracy with low cost, given the steady decrease in the price of cameras and projectors. SL relies on the projection of different light patterns, by means of a video projector, onto 3D object surfaces, which are recorded by one or more digital cameras. Automatic pattern identification on the images allows reconstructing the shape of the recorded 3D objects via triangulation of the optical rays corresponding to projector and camera pixels. Models draped with realistic phototexture may thus also be generated, reproducing both the geometry and the appearance of the 3D world. In this context, the subject of our research is a synthesis of state-of-the-art algorithms as well as the development of novel ones, in order to implement a 3D scanning system consisting, at this stage, of one consumer digital camera (DSLR) and a video projector. In the following, the main principles of structured light scanning and the algorithms implemented in our system are presented, and results are given to demonstrate the potential of such a system. Since this work is part of an ongoing research project, future tasks are also discussed.
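    The triangulation step at the core of such a system, intersecting the optical ray of a camera pixel with that of the corresponding projector pixel, can be sketched as a standard linear (DLT) triangulation; the projection matrices and pixel coordinates below are placeholders assumed to come from a prior calibration of camera and projector:

```python
# Linear (DLT) triangulation of a camera/projector pixel correspondence, as
# used conceptually in structured-light scanning; P_cam and P_proj are assumed
# 3x4 projection matrices from a prior calibration (placeholders here).
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Return the 3D point minimising the algebraic error of x1 ~ P1 X, x2 ~ P2 X."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Placeholder example: two ideal normalised cameras one unit apart.
P_cam = np.hstack([np.eye(3), np.zeros((3, 1))])
P_proj = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])
x_cam = np.array([0.0, 0.0])     # pixel in the (normalised) camera image
x_proj = np.array([0.25, 0.0])   # decoded corresponding projector "pixel"
print(triangulate(P_cam, P_proj, x_cam, x_proj))    # ~ [0, 0, 4]
```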

    Fully automatic camera calibration using regular planar patterns

    No full text
    Estimation of camera geometry represents an essential task in photogrammetry and computer vision. Various algorithms for recovering camera parameters have been reported and reviewed in the literature, relying on different camera models, algorithms and a priori object information. Simple 2D chess-board patterns, serving as test-fields for camera calibration, allow developing interesting automated approaches based on feature extraction tools. Several such ‘calibration toolboxes’ are available on the Internet, requiring varying degrees of human interaction. The present contribution extends our implemented fully automatic algorithm for the exclusive purpose of camera calibration. The approach relies on image sets depicting chess-board patterns, on the sole assumption that these consist of alternating light and dark squares. Among the points extracted via a sub-pixel Harris operator, the valid chess-board corner points are automatically identified and sorted into chess-board rows and columns by exploiting differences in brightness on either side of a valid line segment. All sorted nodes on each image are related to object nodes in systems possibly differing in rotation and translation (which is irrelevant for camera calibration). Using initial values for all unknown parameters estimated from the vanishing points of the two main chess-board directions, an iterative bundle adjustment recovers all camera geometry parameters (including image aspect ratio and skewness as well as lens distortions). Only points belonging to intersecting image lines are initially accepted as valid nodes; yet, after a first bundle solution, back-projection allows identifying all detected nodes and introducing them into the adjustment. Results for datasets from different cameras available on the Web, and comparisons with other accessible algorithms, indicate that this fully automatic approach performs very well, at least with images typically acquired for calibration purposes (substantial image portions occupied by the chess-board pattern, no excessive irrelevant image detail).
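    One of the steps mentioned above, deriving initial values from the vanishing points of the two main chess-board directions, can be illustrated with the standard orthogonality constraint; the sketch below assumes zero skew, unit aspect ratio and the principal point at the image centre (assumptions of this illustration, not necessarily of the paper):

```python
# Initial focal length from two orthogonal vanishing points, assuming the
# principal point p at the image centre (illustrative sketch only).
# For orthogonal directions the constraint is (v1 - p) . (v2 - p) = -f^2.
import numpy as np

def initial_focal(v1, v2, principal_point):
    d = np.dot(np.asarray(v1) - principal_point, np.asarray(v2) - principal_point)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return np.sqrt(-d)

# Placeholder values for a 4000 x 3000 px image:
p = np.array([2000.0, 1500.0])
v_rows = np.array([9500.0, 1700.0])      # vanishing point of the chess-board rows
v_cols = np.array([1800.0, -6200.0])     # vanishing point of the chess-board columns
print("f ~", initial_focal(v_rows, v_cols, p), "px")
```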

    A UNIFIED APPROACH FOR AUTOMATIC CAMERA CALIBRATION FROM VANISHING POINTS

    No full text
    A novel approach is presented for automatic camera calibration from single images with three finite vanishing points in mutually orthogonal directions (or from more independent images having two and/or three such vanishing points). Assuming a ‘natural camera’, estimation of the three basic elements of interior orientation (camera constant, principal point location), along with the two coefficients of radial-symmetric lens distortion, is possible without any user interaction. First, image edges are extracted with sub-pixel accuracy, linked to segments and subjected to least-squares line-fitting. Next, these line segments are clustered into dominant space directions. In the vanishing point detection technique proposed here, the contribution of each image segment is calculated via a voting scheme, which involves the slope uncertainty of the fitted lines to allow a unified treatment of long and short segments. After checking potential vanishing points against certain geometric criteria, the triplet having the highest score indicates the three dominant vanishing points. Coming to camera calibration, a main issue here is the simultaneous adjustment of image point observations for vanishing point estimation, radial distortion compensation and recovery of interior orientation in one single step. Thus, line-fitting from vanishing points along with the estimation of lens distortion is combined with constraints relating vanishing points to camera parameters. Here, the principal point may be considered as the zero point of distortion and participate in both sets of equations as a common unknown. If a redundancy in vanishing points exists – e.g. when more independent images from the same camera with three, or even two, vanishing points are at hand and are to be combined for camera calibration – such a unified adjustment is undoubtedly advantageous.
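    For the single-image case with three finite vanishing points of mutually orthogonal directions, the classical closed-form relations (principal point at the orthocentre of the vanishing-point triangle, camera constant from the orthogonality constraint) give a flavour of the underlying geometry; this is the textbook result sketched with placeholder values, not necessarily the exact formulation of the paper:

```python
# Classical 'natural camera' calibration from three finite vanishing points of
# mutually orthogonal directions: the principal point is the orthocentre of the
# vanishing-point triangle and the camera constant follows from
# (v_i - p) . (v_j - p) = -f^2.  The vanishing points below are placeholders.
import numpy as np

def calibrate_from_three_vps(v1, v2, v3):
    v1, v2, v3 = map(np.asarray, (v1, v2, v3))
    # Orthocentre: intersection of the altitude from v1 (perpendicular to v3-v2)
    # with the altitude from v2 (perpendicular to v1-v3).
    A = np.vstack([v3 - v2, v1 - v3])
    b = np.array([np.dot(v3 - v2, v1), np.dot(v1 - v3, v2)])
    p = np.linalg.solve(A, b)                 # principal point
    f = np.sqrt(-np.dot(v1 - p, v2 - p))      # camera constant (pixels)
    return p, f

p, f = calibrate_from_three_vps((3200.0, 1400.0), (-900.0, 1650.0), (1500.0, 9800.0))
print("principal point:", p, " camera constant:", f)
```

    With redundant vanishing points (several images, or more than one orthogonal pair per image), such closed-form values would only serve as starting points for the unified adjustment described in the abstract.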

    Generation of orthoimages and perspective views with automatic visibility checking and texture blending. Photogrammetric Engineering and Remote Sensing

    No full text
    Conventional orthorectification software cannot handle surface occlusions and image visibility. The approach presented here synthesizes related work in photogrammetry and computer graphics/vision to automatically produce orthographic and perspective views based on fully 3D surface data (supplied by laser scanning). Surface occlusions in the direction of projection are detected to create the depth map of the new image. This information allows identifying, by visibility checking through back-projection of surface triangles, all source images which are entitled to contribute color to each pixel of the novel image. Weighted texture blending allows regulating the local radiometric contribution of each source image involved, while outlying color values are automatically discarded with a basic statistical test. Experimental results from a close-range project indicate that this fusion of laser scanning with multiview photogrammetry could indeed combine geometric accuracy with high visual quality and speed. A discussion of intended improvements of the algorithm is also included.
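    The per-pixel blending step described above, a weighted average of the candidate source-image colors after discarding outliers with a simple statistical test, might be sketched roughly as follows; the weights, the median-based test and the rejection threshold are assumptions of this illustration rather than the paper's actual procedure:

```python
# Per-pixel weighted texture blending with a simple outlier test against the
# median candidate color; weights and the rejection threshold are assumptions
# of this sketch, not values from the paper.
import numpy as np

def blend_pixel(colors, weights, max_dev=40.0):
    """colors: (N, 3) candidate RGB values from the source images that see the
    pixel; weights: (N,) blending weights (e.g. from viewing angle or image scale)."""
    colors = np.asarray(colors, dtype=float)
    weights = np.asarray(weights, dtype=float)
    median = np.median(colors, axis=0)
    # Discard candidates deviating strongly from the median color in any channel.
    keep = np.all(np.abs(colors - median) <= max_dev, axis=1)
    if not keep.any():                        # degenerate case: keep everything
        keep = np.ones(len(colors), dtype=bool)
    w = weights[keep] / weights[keep].sum()
    return (w[:, None] * colors[keep]).sum(axis=0)

# Three source images contribute; the third color (e.g. a specular highlight
# or an occlusion artefact) deviates strongly and is discarded before blending.
print(blend_pixel([[120, 96, 80], [124, 100, 84], [230, 228, 225]],
                  [0.5, 0.4, 0.1]))
```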