
    Geometric Accuracy Testing, Evaluation and Applicability of Space Imagery to the Small Scale Topographic Mapping of the Sudan

    The geometric accuracy, interpretability and applicability of using space imagery for the production of small-scale topographic maps of the Sudan have been assessed. Two test areas were selected. The first, in central Sudan, includes the area between the Blue Nile and the White Nile and extends to Atbara in the Nile Province. The second was selected in the Red Sea Hills area, which has modern 1:100,000 scale topographic map coverage and has been covered by six types of imagery: Landsat MSS, TM and RBV; MOMS; Metric Camera (MC); and Large Format Camera (LFC). Geometric accuracy testing was carried out using a test field of well-defined control points whose terrain coordinates were obtained from the existing maps. The same points were measured on each of the images in a Zeiss Jena stereocomparator (Stecometer C II) and transformed into the terrain coordinate system using polynomial transformations in the case of the scanner and RBV images, and space resection/intersection, relative/absolute orientation and bundle adjustment in the case of the MC and LFC photographs. The two sets of coordinates were then compared. The planimetric accuracies (root mean square errors) obtained for the scanner and RBV images were: Landsat MSS +/-80 m; TM +/-45 m; RBV +/-40 m; and MOMS +/-28 m. The accuracies of the three-dimensional coordinates obtained from the photographs were: MC: X = +/-16 m, Y = +/-16 m, Z = +/-30 m; LFC: X = +/-14 m, Y = +/-14 m, Z = +/-20 m. The planimetric accuracy figures are compatible with the specifications for topographic maps at 1:250,000 scale in the case of MSS; 1:125,000 scale in the case of TM and RBV; and 1:100,000 scale in the case of MOMS. The planimetric accuracies (vector = +/-20 m) achieved with the two space cameras are compatible with topographic mapping at 1:60,000 to 1:70,000 scale.
However, the spot height accuracies of +/-20 to +/-30 m - equivalent to a contour interval of 50 to 60 m - fall short of the heighting accuracies required for 1:60,000 to 1:100,000 scale mapping. The interpretation tests carried out on the MSS, TM and RBV images showed that, while the main terrain features (hills, ridges, wadis, etc.) can be mapped reasonably well, there was an almost complete failure to pick up the cultural features - towns, villages, roads, railways, etc. - present in the test areas. The high-resolution MOMS images and the space photographs were much more satisfactory in this respect, though cultural features were still difficult to pick up because the buildings and roads are built of local materials and exhibit little contrast on the images.
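As a rough numerical illustration of the geometric accuracy test described above, the sketch below fits a first-order polynomial (affine) transform from image to terrain coordinates over a set of control points and reports the per-axis root mean square error of the residuals. This is a simplified stand-in for the study's pipeline, which used higher-order polynomials and stereocomparator measurements; all function names and values are hypothetical.

```python
import numpy as np

def fit_affine(img_xy, terrain_en):
    """Least-squares first-order polynomial (affine) transform from
    image coordinates (x, y) to terrain coordinates (E, N)."""
    n = img_xy.shape[0]
    A = np.column_stack([img_xy, np.ones(n)])       # design matrix [x, y, 1]
    coeff, *_ = np.linalg.lstsq(A, terrain_en, rcond=None)
    return coeff                                    # shape (3, 2)

def planimetric_rmse(img_xy, terrain_en, coeff):
    """Root mean square error of the fitted transform at the check points."""
    pred = np.column_stack([img_xy, np.ones(len(img_xy))]) @ coeff
    resid = pred - terrain_en
    return np.sqrt(np.mean(resid**2, axis=0))       # RMSE in E and N
```

In the study this kind of RMSE, computed over the control-point test field, is what yields the quoted +/-80 m (MSS) down to +/-28 m (MOMS) figures.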

    3D object reconstruction using computer vision : reconstruction and characterization applications for external human anatomical structures

    Doctoral thesis. Informatics Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Document and Text Rectification Using Objective Function Optimization Based on Text and Feature Points

    Doctoral dissertation, Department of Electrical and Computer Engineering, Graduate School, Seoul National University, August 2014. Advisor: Nam Ik Cho. There are many techniques and applications that detect and recognize text information in images, e.g., document retrieval using camera-captured document images, book readers for the visually impaired, and augmented reality based on text recognition. In these applications, the planar surfaces that contain the text are often distorted in the captured image due to perspective view (e.g., road signs), curvature (e.g., unfolded books), and wrinkles (e.g., old documents). Recovering the original document texture by removing these distortions from camera-captured document images is called document rectification. In this dissertation, new text surface rectification algorithms are proposed for improving text recognition accuracy and visual quality. The proposed methods are categorized into three types depending on the type of input. Their contributions can be summarized as follows. The first rectification algorithm employs the dense text-lines in documents to rectify the images. Unlike conventional approaches, the proposed method does not use the text-lines directly. Instead, it uses a discrete representation of text-lines and text-blocks as sets of connected components. The geometric distortions caused by page curl and perspective view are modeled as a generalized cylindrical surface and camera rotation, respectively. With this distortion model and the discrete representation of the features, a cost function whose minimization yields the parameters of the distortion model is developed. The cost function encodes properties of the page such as text-block alignment, line spacing, and the straightness of text-lines. By describing the text features as sets of discrete points, the cost function can be easily defined and is well solved by the Levenberg-Marquardt algorithm.
Experiments show that the proposed method works well for various layouts and curved surfaces, and compares favorably with conventional methods on the standard dataset. The second algorithm is a unified framework to rectify and stitch multiple document images using visual feature points instead of text-lines, similar to general image stitching algorithms. However, general image stitching algorithms usually assume a fixed camera center, which cannot be taken for granted when capturing documents. To deal with the camera motion between images, a new parametric family of motion models is proposed in this dissertation. Moreover, to remove the ambiguity in the reference plane, a new cost function is developed that imposes constraints on the reference plane. This enables the estimation of a physically correct reference plane without prior knowledge. The estimated reference plane can also be used to rectify the stitching result. Furthermore, since it employs general features, the proposed method can be applied not only to camera-captured document images but also to other planar objects such as building facades or mural paintings. The third rectification method is based on a scene text detection algorithm that is independent of a language model. Conventional methods assume that a character consists of a single connected component (CC), as in the English alphabet. However, this assumption breaks down for Asian scripts such as Korean, Chinese, and Japanese, where a single character may consist of several CCs, making it difficult to group CCs into text lines without a language model. To alleviate this problem, the proposed method clusters candidate regions based on a similarity measure that considers inter-character relations. The adjacency measure is trained on a dataset labeled with bounding boxes of text regions. Non-text regions that remain after clustering are filtered out in a text/non-text classification step.
Final text regions are then merged or divided into individual text lines considering their orientation and location, and the detected text is rectified using the text-line orientation and vertical strokes. In extensive experiments, the proposed method outperforms state-of-the-art algorithms on English as well as Asian characters.

Table of contents:
    1 Introduction
      1.1 Document rectification via text-line based optimization
      1.2 A unified approach of rectification and stitching for document images
      1.3 Rectification via scene text detection
      1.4 Contents
    2 Related work
      2.1 Document rectification
        2.1.1 Document dewarping without text-lines
        2.1.2 Document dewarping with text-lines
        2.1.3 Text-block identification and text-line extraction
      2.2 Document stitching
      2.3 Scene text detection
    3 Document rectification based on text-lines
      3.1 Proposed approach
        3.1.1 Image acquisition model
        3.1.2 Proposed approach to document dewarping
      3.2 Proposed cost function and its optimization
        3.2.1 Design of Estr(·)
        3.2.2 Minimization of Estr(·)
        3.2.3 Alignment type classification
        3.2.4 Design of Ealign(·)
        3.2.5 Design of Espacing(·)
      3.3 Extension to unfolded book surfaces
      3.4 Experimental result
        3.4.1 Experiments on synthetic data
        3.4.2 Experiments on real images
        3.4.3 Comparison with existing methods
        3.4.4 Limitations
    4 Document rectification based on feature detection
      4.1 Proposed approach
      4.2 Proposed cost function and its optimization
        4.2.1 Notations
        4.2.2 Homography between the i-th image and E
        4.2.3 Proposed cost function
        4.2.4 Optimization
        4.2.5 Relation to the model in [17]
      4.3 Post-processing
        4.3.1 Classification of two cases
        4.3.2 Skew removal
      4.4 Experimental results
        4.4.1 Quantitative evaluation on metric reconstruction performance
        4.4.2 Experiments on real images
    5 Scene text detection and rectification
      5.1 Introduction
        5.1.1 Contribution
        5.1.2 Proposed approach
      5.2 Candidate region detection
        5.2.1 CC extraction
        5.2.2 Computation of similarity between CCs
        5.2.3 CC clustering
      5.3 Rectification of candidate region
      5.4 Text/non-text classification
      5.5 Experimental result
        5.5.1 Experimental results on ICDAR 2011 dataset
        5.5.2 Experimental results on the Asian character dataset
    6 Conclusion
    Bibliography
    Abstract (Korean)
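The text-line-based cost minimization described in the abstract can be illustrated with a toy version: each text-line is modelled as y = a_i + b_i·x + c·x², with a single "page curl" term c shared by all lines, and the straightness cost is minimized with a Levenberg-Marquardt solver. This is only a sketch under strong simplifications; the dissertation's actual distortion model (a generalized cylindrical surface plus camera rotation) is far richer, and all names here are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def straightness_residuals(params, lines):
    """Residuals of a toy text-line straightness cost: line i is modelled
    as y = a_i + b_i*x + c*x**2, where the curl term c is shared by all
    lines (a hypothetical reduction of the dissertation's cost function)."""
    c = params[0]
    res = []
    for i, (x, y) in enumerate(lines):
        a, b = params[1 + 2 * i], params[2 + 2 * i]
        res.append(y - (a + b * x + c * x**2))
    return np.concatenate(res)

def fit_page_curl(lines):
    """Estimate the shared curl term with Levenberg-Marquardt."""
    p0 = np.zeros(1 + 2 * len(lines))
    sol = least_squares(straightness_residuals, p0, args=(lines,), method="lm")
    return sol.x[0]
```

When the curl estimate is accurate, warping the image by the inverse of the fitted model straightens the text-lines, which is the essence of the rectification step.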

    Image Processing Systems and Algorithms for estimating Deformations of Aircraft Structures in Flight

    If you have ever been on an aircraft and looked out of the window, you may have noticed the remarkable deformations of its wings. This observation actually conveys a lot of information about the aerodynamic forces applied to the aircraft. Long before the first flight of an aircraft, manufacturers are able to predict its mechanical behavior in various scenarios, depending for instance on the aircraft weight, speed or angle of attack, based on accurate theoretical models. As part of the aircraft certification procedure, these models have to be validated and refined through in-flight estimation of wing deformations. However, as the quality and accuracy of the wing models increase, the methods used to obtain the actual measurements must also evolve. In this work, a new system is developed and evaluated to estimate the 3D shape of a wing in flight. To answer the new needs of dense mapping, precision and frequency, while introducing no disturbance to the wing's aerodynamic behavior, this study focuses on methods of non-contact 3D reconstruction. After a detailed study of state-of-the-art systems in this field, a photogrammetry approach using multiple cameras installed at the aircraft windows was retained, and a full algorithmic and hardware system was developed. Like most standard photogrammetry methods, the proposed approach is based on Bundle Adjustment (BA), a classical method that simultaneously estimates the camera positions and the surrounding 3D scene. BA is an iterative optimization algorithm that minimizes a non-convex, non-linear cost function. Convergence to a global minimum therefore cannot be guaranteed, and the choice of initial conditions is crucial in practical applications.
Consequently, applying photogrammetry to 3D wing reconstruction in flight is a very challenging problem, due to strong installation constraints and a highly varying environment with vibrations, luminosity changes, and potential reflections and shadows. To face these challenges, this work presents a new constrained BA, which uses prior knowledge derived from the mechanical limits beyond which the wing would break, and improves reconstruction results, as demonstrated through realistic tests. In a second step, an in-depth study of error sources and reconstruction uncertainty is provided in order to guarantee the quality of the 3D estimation and to allow a better interpretation of reconstruction errors. To this end, all potential sources of uncertainty are evaluated and propagated through the proposed framework using three approaches: analytical calculation, Monte Carlo simulation, and experimental validation on synthetic images. The different implementations and results allowed us to draw conclusions on the advantages and disadvantages of each method. They also show that the developed system meets the expectations of Airbus. Finally, the designed system is validated in real tests with an A350-1000 of the Airbus flight test center. These experiments, conducted in real conditions, show the pertinence of the proposed solution with respect to the observed sources of uncertainty and provide promising results.
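The comparison of analytical and Monte Carlo uncertainty propagation mentioned above can be sketched on a deliberately tiny stand-in problem: propagating disparity noise through a pinhole stereo depth model rather than through the full bundle adjustment chain. All parameter values below are hypothetical.

```python
import numpy as np

def depth_from_disparity(d, f=1000.0, b=0.5):
    """Pinhole stereo depth z = f*b/d; a toy stand-in for the full
    bundle-adjustment reconstruction chain (focal length f in pixels
    and baseline b in metres are illustrative values)."""
    return f * b / d

def mc_depth_sigma(d, sigma_d, n=200_000, seed=0):
    """Monte Carlo propagation: sample noisy disparities, measure spread."""
    rng = np.random.default_rng(seed)
    return depth_from_disparity(d + sigma_d * rng.standard_normal(n)).std()

def analytic_depth_sigma(d, sigma_d, f=1000.0, b=0.5):
    """First-order analytical propagation: |dz/dd| * sigma_d = f*b/d**2 * sigma_d."""
    return f * b / d**2 * sigma_d
```

For small noise the two estimates agree closely; where they diverge, the analytical linearization is breaking down, which is precisely the kind of cross-check the three-approach validation in the thesis provides.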

    Correction of Errors in Time of Flight Cameras

    This thesis addresses the correction of errors in time-of-flight (ToF) depth cameras. Among recent technologies, continuous wave modulation (CWM) ToF cameras are a promising alternative for building compact, fast depth sensors. However, a wide variety of errors significantly affect the depth measurements, compromising potential applications, and correcting them is a challenging task. Two main error sources are currently considered: i) systematic and ii) non-systematic. While the former admits calibration, the latter depends on the geometry and relative motion of the scene. This thesis proposes methods that address i) the systematic depth distortion and two of the most relevant sources of non-systematic error: ii.a) multipath interference (MpI) and ii.b) motion artifacts. Systematic depth distortion in ToF cameras arises mainly from the use of imperfect sinusoidal modulation signals. As a result, the depth measurements are distorted, which can be mitigated by a calibration stage. This thesis proposes a calibration method based on showing the camera a plane at different positions and orientations. The method requires no calibration patterns and can therefore use planes that appear naturally in the scene. It finds a function that yields the depth correction for each pixel, improving on existing methods in accuracy, efficiency and applicability. Multipath interference arises from the superposition of signals reflected along different paths with the direct reflection, producing distortions that are most noticeable on convex surfaces.
MpI causes significant depth estimation errors in CWM ToF cameras. This thesis proposes a method that removes MpI from a single depth map. The approach requires no information about the scene beyond the ToF measurements themselves. It is based on a radiometric model of the measurements, which is used to estimate the undistorted depth map very accurately. One of the leading technologies for ToF depth imaging is based on the Photonic Mixer Device (PMD), which obtains depth by sequentially sampling the correlation between the modulation signal and the signal returning from the scene at different phase shifts. Under motion, PMD pixels capture different depths at each sampling stage, producing motion artifacts. The correction method proposed in this thesis stands out for its speed and simplicity, and can easily be included in the camera hardware. The depth of each pixel is recovered by enforcing consistency between the correlation samples at the PMD pixel and those of its local neighborhood. The method obtains accurate corrections, greatly reducing motion artifacts; moreover, as a by-product, optical flow at moving contours can be obtained from a single capture. Although ToF cameras are a very promising alternative for depth acquisition, they still face challenging problems regarding the correction of systematic and non-systematic errors, and this thesis proposes effective methods to deal with them.
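The PMD depth computation from sequential correlation samples can be sketched as follows, assuming the common convention of samples taken at phase offsets of 0°, 90°, 180° and 270°; sign conventions vary between sensors, and the 20 MHz modulation frequency is an assumption, not a value from the thesis.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def pmd_depth(c0, c1, c2, c3, f_mod=20e6):
    """Depth from four sequential PMD correlation samples taken at phase
    offsets 0, 90, 180 and 270 degrees (standard 4-phase algorithm).
    The constant intensity offset cancels in the differences."""
    phase = np.arctan2(c1 - c3, c0 - c2) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)
```

Because the four samples are acquired sequentially, a moving scene makes c0..c3 mutually inconsistent within one pixel, which is exactly the motion-artifact mechanism the thesis corrects by checking consistency against neighbouring pixels.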

    Geometric uncertainty models for correspondence problems in digital image processing

    Many recent advances in technology rely heavily on the correct interpretation of an enormous amount of visual information. All available sources of visual data (e.g. cameras in surveillance networks, smartphones, game consoles) must be adequately processed to retrieve the most interesting user information. Computer vision and image processing techniques are therefore gaining significant interest at the moment, and will continue to do so in the near future. Most commonly applied image processing algorithms require a reliable solution to correspondence problems. The solution involves, first, the localization of corresponding points (image points depicting the same 3D point in the observed scene) in the different images from distinct sources, and second, the computation of consistent geometric transformations relating correspondences on scene objects. This PhD thesis presents a theoretical framework for solving correspondence problems with geometric features (such as points and straight lines) representing rigid objects in image sequences of complex scenes with static and dynamic cameras. The research focuses on localization uncertainty due to errors in feature detection and measurement, and on its effect on each step in the solution of a correspondence problem. Whereas most other recent methods apply statistics-based models of spatial localization uncertainty, this work considers a novel geometric approach: localization uncertainty is modeled as a convex polygonal region in the image space. This model can be efficiently propagated throughout the correspondence-finding procedure. It allows an easy extension toward transformation uncertainty models, and confidence measures can be inferred to verify the reliability of the outcome of the correspondence framework. Our procedure aims at finding reliable consistent transformations in sets of few and ill-localized features, possibly containing a large fraction of false candidate correspondences.
The evaluation of the proposed procedure on practical correspondence problems shows that correct consistent correspondence sets are returned in over 95% of the experiments for small sets of 10-40 features contaminated with up to 400% false positives and 40% false negatives. The presented techniques prove to be beneficial in typical image processing applications, such as image registration and rigid object tracking.
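The convex polygonal uncertainty model lends itself to a simple sketch: a feature's localization uncertainty is an n-gon around the detected point, and because affine maps preserve convexity, propagating the region through an estimated transformation amounts to transforming its vertices. This is an illustrative reduction (the thesis propagates uncertainty through the full correspondence pipeline), and all names and values are hypothetical.

```python
import numpy as np

def polygon_uncertainty(point, radius=1.5, n_vertices=8):
    """Convex polygonal uncertainty region (here a regular n-gon with
    counter-clockwise vertices) around a detected feature point; the
    radius in pixels is an illustrative value."""
    ang = 2 * np.pi * np.arange(n_vertices) / n_vertices
    return point + radius * np.column_stack([np.cos(ang), np.sin(ang)])

def propagate_affine(poly, A, t):
    """Affine maps preserve convexity, so propagating the region is just
    transforming its vertices (A needs a positive determinant to keep
    the counter-clockwise orientation)."""
    return poly @ A.T + t

def contains(poly, q):
    """Point-in-convex-polygon test for counter-clockwise vertices."""
    edges = np.roll(poly, -1, axis=0) - poly
    rel = np.asarray(q) - poly
    cross = edges[:, 0] * rel[:, 1] - edges[:, 1] * rel[:, 0]
    return bool(np.all(cross >= 0))
```

A containment test like `contains` is one way such a model yields confidence measures: a candidate correspondence whose transformed uncertainty region excludes the matched point can be flagged as unreliable.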

    Assessment of the CORONA series of satellite imagery for landscape archaeology: a case study from the Orontes valley, Syria

    In 1995, a large database of satellite imagery with worldwide coverage, taken from 1960 until 1972, was declassified. The main advantages of this imagery, known as CORONA, that made it attractive for archaeology were its moderate cost and its historical value. The main disadvantages were its unknown quality, format and geometry, and the limited base of known applications. This thesis has sought to explore the properties and potential of CORONA imagery and thus enhance its value for applications in landscape archaeology. In order to ground these investigations in a real dataset, the properties and characteristics of CORONA imagery were explored through the case study of a landscape archaeology project working in the Orontes Valley, Syria. Present-day high-resolution IKONOS imagery was integrated into the study and assessed alongside the CORONA imagery. The combination of these two image datasets was shown to provide a powerful set of tools for investigating past archaeological landscapes in the Middle East. The imagery was assessed qualitatively, through photointerpretation of its ability to detect archaeological remains; quantitatively, through the extraction of height information after the creation of stereomodels; spectrally, through fieldwork and spectroradiometric analysis; and for its Multiple View Angle (MVA) capability, through visual and statistical analysis. Landscape archaeology requires a variety of data to be gathered from a large area in an effective and inexpensive way. This study demonstrates an effective methodology for the deployment of CORONA and IKONOS imagery and raises a number of technical points of which the archaeological research community needs to be aware. It also identifies certain limitations of the data and suggests solutions for more effective exploitation of the strengths of CORONA imagery.
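The height extraction from stereomodels mentioned above ultimately rests on the standard photogrammetric parallax relation h = H·Δp / (p + Δp), where H is the flying height above the reference plane, p the absolute stereo parallax at that plane, and Δp the differential parallax of the feature. A minimal sketch follows; the numbers in the usage note are illustrative, not CORONA mission parameters.

```python
def height_from_parallax(dp, H, p_ref):
    """Object height from differential parallax dp, flying height H above
    the reference plane, and absolute stereo parallax p_ref at that plane:
    h = H * dp / (p_ref + dp). dp and p_ref must share one unit; h is in
    the unit of H."""
    return H * dp / (p_ref + dp)
```

For example, a feature with 2 mm of differential parallax against a 98 mm base parallax, photographed from 1000 m above the reference plane, would stand 20 m high.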

    Uniscale and multiscale gait recognition in realistic scenario

    The performance of a gait recognition method is affected by numerous challenging factors that degrade its reliability as a behavioural biometric for subject identification in realistic scenarios. For effective visual surveillance, this thesis therefore presents five gait recognition methods that address various challenging factors to reliably identify a subject in realistic scenarios with low computational complexity. It presents a gait recognition method that analyses the spatio-temporal motion of a subject with statistical and physical parameters using Procrustes shape analysis and elliptic Fourier descriptors (EFD). It introduces a part-based EFD analysis to achieve invariance to carrying conditions, and the use of physical parameters enables it to achieve invariance to across-day gait variation. Although the spatio-temporal deformation of a subject's shape in gait sequences provides better discriminative power than its kinematics, including dynamic motion characteristics improves the identification rate. Therefore, the thesis presents a gait recognition method that combines the spatio-temporal shape and dynamic motion characteristics of a subject to achieve robustness against the largest number of challenging factors compared with related state-of-the-art methods. A region-based gait recognition method that analyses a subject's shape in image and feature spaces is presented to achieve invariance to clothing variation and carrying conditions. To take into account the arbitrary moving directions of a subject in realistic scenarios, a gait recognition method must be robust against variation in view; hence, the thesis presents a robust view-invariant multiscale gait recognition method. Finally, the thesis proposes a gait recognition method based on low-spatial- and low-temporal-resolution video sequences captured by CCTV. The computational complexity of each method is analysed.
Experimental analyses on public datasets demonstrate the efficacy of the proposed methods.
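The elliptic Fourier descriptors used in the first method can be sketched with the classical Kuhl-Giardina formulation, which expands a closed boundary into per-harmonic coefficients (a_n, b_n, c_n, d_n); for a circle of radius r the first harmonic reduces to a1 = d1 = r. This is a generic sketch of EFD, not the thesis's part-based variant, and the function name is hypothetical.

```python
import numpy as np

def elliptic_fourier_descriptors(contour, order=10):
    """Kuhl-Giardina elliptic Fourier coefficients of a closed 2D contour
    (an N x 2 array of boundary points). Returns an (order, 4) array of
    rows (a_n, b_n, c_n, d_n), using chord-length parameterization."""
    d = np.diff(np.vstack([contour, contour[:1]]), axis=0)   # close the loop
    dt = np.hypot(d[:, 0], d[:, 1])                          # segment lengths
    t = np.concatenate([[0.0], np.cumsum(dt)])
    T = t[-1]                                                # total perimeter
    phi = 2 * np.pi * t / T
    coeffs = np.empty((order, 4))
    for n in range(1, order + 1):
        k = T / (2 * n**2 * np.pi**2)
        dcos = np.cos(n * phi[1:]) - np.cos(n * phi[:-1])
        dsin = np.sin(n * phi[1:]) - np.sin(n * phi[:-1])
        coeffs[n - 1] = k * np.array([
            np.sum(d[:, 0] / dt * dcos),   # a_n
            np.sum(d[:, 0] / dt * dsin),   # b_n
            np.sum(d[:, 1] / dt * dcos),   # c_n
            np.sum(d[:, 1] / dt * dsin),   # d_n
        ])
    return coeffs
```

Truncating the expansion at a low order keeps the gross silhouette shape while discarding fine boundary noise, which is what makes EFD coefficients usable as compact gait shape features.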