
    Potentialities and limitations of research on VHRS data: Alexander the Great’s military camp at Gaugamela on the Navkur Plain in Kurdish Iraq as a test case

    This paper presents a selected aspect of research conducted within the Gaugamela Project, which seeks to finally identify the location of one of the most important ancient battles: the Battle of Gaugamela (331 BCE). The aim of this study was to discover material remains of the Macedonian military camp on the Navkur Plain in Kurdish Iraq. For this purpose, three very high resolution satellite (VHRS) datasets from Pleiades and WorldView-2 were acquired and subjected to multi-variant image processing (development of different color composites, integration of multispectral and panchromatic images, use of principal component analysis transformation, use of vegetation indices). Documentation of photointerpretation was carried out through the vectorization of features/areas. Due to the character of the sought-after artifacts (remnants of a large enclosure), features were categorized into two types: linear features and areal features. As a result, 19 linear features and 2 areal features were found in the study area of the Mahad hills. However, only a few features fulfilled the expected geometric criteria (layout and size) and were subjected to field ground-truthing, which ended in negative results. It is concluded that no traces have been found that could be interpreted as remnants of an earthen enclosure capable of accommodating around 47,000 soldiers. Further research perspectives are also suggested.
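The abstract names vegetation indices among the image-processing variants applied to the multispectral data. A minimal sketch of the most common such index (NDVI) is given below, assuming float reflectance arrays for the near-infrared and red bands; the function name and band layout are illustrative, not taken from the paper:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    # Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).
    # eps guards against division by zero over water or no-data pixels.
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

In practice the index image would be stretched and inspected alongside the color composites; buried earthworks can show up as subtle linear vegetation-vigor anomalies.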

    Connected Attribute Filtering Based on Contour Smoothness


    Image enhancement techniques applied to solar feature detection

    This dissertation presents the development of automatic image enhancement techniques for solar feature detection. The new method allows for detection and tracking of the evolution of filaments in solar images. Series of H-alpha full-disk images are taken at regular time intervals to observe the changes of the solar disk features. In each picture, the solar chromosphere filaments are identified for further evolution examination. The initial preprocessing step involves local thresholding to convert grayscale images into black-and-white pictures with chromosphere granularity enhanced. An alternative preprocessing method, based on image normalization and global thresholding, is presented. The next step employs morphological closing operations with multi-directional linear structuring elements to extract elongated shapes in the image. After logical union of the directional filtering results, the remaining noise is removed from the final outcome using morphological dilation and erosion with a circular structuring element. Experimental results show that the developed techniques can achieve excellent results in detecting large filaments and good detection rates for small filaments. The final chapter discusses proposed directions of future research and applications to other areas of solar image processing, in particular to detection of solar flares, plages and sunspots.
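The directional-closing step described above can be sketched as follows. This is a minimal reconstruction, not the dissertation's code: the kernel length, the four angles, and the use of scipy's binary morphology are all assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def line_kernel(length, angle_deg):
    # Binary line-shaped structuring element at the given angle,
    # drawn through the center of a length x length window.
    k = np.zeros((length, length), dtype=bool)
    c = length // 2
    t = np.deg2rad(angle_deg)
    for i in range(length):
        x = int(round(c + (i - c) * np.cos(t)))
        y = int(round(c + (i - c) * np.sin(t)))
        k[y, x] = True
    return k

def directional_closing_union(binary_img, length=9, angles=(0, 45, 90, 135)):
    # Close the thresholded image with each directional element
    # (bridging gaps along that direction), then take the logical union.
    results = [ndimage.binary_closing(binary_img, structure=line_kernel(length, a))
               for a in angles]
    return np.logical_or.reduce(results)
```

A closing with a line element bridges breaks in elongated structures aligned with that line, which is why filaments, broken into fragments by thresholding, survive the union while isolated noise does not.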

    Interpretation of multispectral satellite data as a tool for detecting archaeological artifacts (Navkur Plain and Karamleis Plain, Iraq)

    Contemporary studies of geographical space, including archaeological research, incorporate multiple spatial digital data. Such data provide an opportunity to extend research to large areas, to objectify studies on the basis of the quantitative data obtained, and to gain access to hard-to-reach study areas. Examples of such data are satellite images at various spatial resolutions and in a wide spectrum of electromagnetic radiation (visible, infrared, and microwave). The authors made an attempt to use satellite images to analyze the areas of probable location of the Battle of Gaugamela (the Navkur Plain and the Karamleis Plain in Iraq). Photointerpretation was performed, enhanced by multivariate processing of the multispectral images. The aim of the work was to indicate the most likely locations of the camp and the battle based on visual interpretation of an array of satellite data. The adopted methodology of precise allocation of interpretative values to remote sensing materials for every detected artifact provided an opportunity to accumulate an extensive amount of information. It also provided the basis for a synthetic analysis regarding the methods of image processing on the one hand and the dates of recording on the other. It turned out that the season in which the images are recorded is very important—although the best data for analysis turned out to be the autumn data (38% of all recognized artifacts), the use of data from three seasons increased the total number of indicated artifacts by as much as about 50% (the so-called unique detections). In addition, advanced image processing (such as principal component analysis and decorrelation stretch) turned out to be important, as it increased the number of areal artifacts by 31% compared to interpretation of the natural (true) color composite and false color composite (with near-infrared) alone.
The conducted analyses have confirmed the usefulness of high-resolution satellite data for archaeological applications, and the detected and described anomalies visible in satellite images are excellent material for selecting sites for detailed field research.
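Principal component analysis over the spectral bands, one of the advanced processing steps the abstract credits with the 31% gain, can be sketched in a few lines of numpy. This is a generic band-space PCA, not the authors' pipeline; the array layout is an assumption:

```python
import numpy as np

def band_pca(bands):
    # bands: (n_bands, H, W) stack of co-registered spectral bands.
    # Returns the principal-component images, ranked by explained variance.
    n, h, w = bands.shape
    X = bands.reshape(n, -1).astype(float)     # one row per band
    X -= X.mean(axis=1, keepdims=True)         # center each band
    cov = X @ X.T / (X.shape[1] - 1)           # (n, n) band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending order
    order = np.argsort(eigvals)[::-1]          # descending by variance
    pcs = eigvecs[:, order].T @ X
    return pcs.reshape(n, h, w), eigvals[order]
```

The low-variance components are where subtle, decorrelated anomalies (such as soil marks from buried features) often become visible, which is the usual motivation for applying PCA before photointerpretation.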

    Exploring new representations and applications for motion analysis

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 153-164). The focus of motion analysis has been on estimating a flow vector for every pixel by matching intensities. In my thesis, I will explore motion representations beyond the pixel level and new applications to which these representations lead. I first focus on analyzing motion from video sequences. Traditional motion analysis suffers from inappropriate modeling of the grouping relationship of pixels and from a lack of ground-truth data. Using layers as the interface for humans to interact with videos, we build a human-assisted motion annotation system to obtain ground-truth motion, missing in the literature, for natural video sequences. Furthermore, we show that with the layer representation, we can detect and magnify small motions to make them visible to human eyes. Then we move to a contour representation to analyze the motion of textureless objects under occlusion. We demonstrate that simultaneous boundary grouping and motion analysis can handle challenging data where traditional pixel-wise motion analysis fails. In the second part of my thesis, I will show the benefits of matching local image structures instead of intensity values. We propose SIFT flow, which establishes dense, semantically meaningful correspondence between two images across scenes by matching pixel-wise SIFT features. Using SIFT flow, we develop a new framework for image parsing by transferring metadata information, such as annotation, motion and depth, from the images in a large database to an unknown query image. We demonstrate this framework with new applications such as predicting motion from a single image and motion synthesis via object transfer.
Based on SIFT flow, we introduce a nonparametric scene parsing system using label transfer, with very promising experimental results suggesting that our system outperforms state-of-the-art techniques based on training classifiers. By Ce Liu. Ph.D.
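SIFT flow itself optimizes a dense flow field with spatial smoothness terms; the sketch below shows only the simplest form of the label-transfer idea it enables, a per-pixel (or per-descriptor) nearest-neighbor match that copies labels from a database image to a query. All names and shapes here are illustrative, not from the thesis:

```python
import numpy as np

def transfer_labels(query_desc, db_desc, db_labels):
    # query_desc: (Nq, D) descriptors of query pixels
    # db_desc:    (Nd, D) descriptors of database pixels
    # db_labels:  (Nd,)   semantic label of each database pixel
    # For each query descriptor, copy the label of its nearest database match.
    d2 = ((query_desc[:, None, :] - db_desc[None, :, :]) ** 2).sum(axis=-1)
    nn = d2.argmin(axis=1)
    return db_labels[nn]
```

The real system adds the spatial regularization that makes correspondences semantically coherent; a raw nearest-neighbor transfer like this one is noisy but captures the nonparametric, training-free character of the approach.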

    Multimodal Remote Sensing Image Registration with Accuracy Estimation at Local and Global Scales

    This paper focuses on the potential accuracy of remote sensing image registration. We investigate how this accuracy can be estimated without ground truth available and used to improve the registration quality of mono- and multi-modal pairs of images. At the local scale of image fragments, the Cramer-Rao lower bound (CRLB) on registration error is estimated for each local correspondence between a coarsely registered pair of images. This CRLB is defined by local image texture and noise properties. In contrast to the standard approach, where registration accuracy is only evaluated at the output of the registration process, this valuable information is used by us as additional input knowledge. It greatly helps detect and discard outliers and refine the estimation of the geometrical transformation model parameters. Based on these ideas, a new area-based registration method called RAE (Registration with Accuracy Estimation) is proposed. In addition to its ability to automatically register very complex multimodal image pairs with high accuracy, the RAE method provides registration accuracy at the global scale as the covariance matrix of the estimation error of the geometrical transformation model parameters, or as point-wise registration standard deviation. This accuracy does not depend on any ground truth availability and characterizes each pair of registered images individually. Thus, the RAE method can identify image areas for which a predefined registration accuracy is guaranteed. The RAE method proves successful, reaching subpixel accuracy while registering eight complex mono/multimodal and multitemporal image pairs including optical to optical, optical to radar, optical to Digital Elevation Model (DEM) images and DEM to radar cases. Other methods employed in the comparisons fail to provide accurate results in a stable manner on the same test cases.
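The CRLB computation itself is not reproduced here. The sketch below shows only how per-correspondence accuracy estimates can enter the transformation fitting as inverse-variance weights, with the parameter covariance emerging as a by-product — a simplification of the paper's idea, with illustrative names:

```python
import numpy as np

def weighted_affine(src, dst, var):
    # src, dst: (N, 2) corresponding points; var: (N,) estimated error
    # variance per correspondence (e.g. from a CRLB-style local bound).
    # Solves for the 2x3 affine A minimizing sum_i w_i * ||A [x_i; y_i; 1] - dst_i||^2
    # with w_i = 1 / var_i (weighted normal equations).
    w = 1.0 / np.asarray(var, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) homogeneous coords
    Xw = X * w[:, None]
    A = np.linalg.solve(X.T @ Xw, Xw.T @ dst).T    # (2, 3) affine parameters
    cov = np.linalg.inv(X.T @ Xw)                  # parameter covariance, up to noise scale
    return A, cov
```

Down-weighting correspondences with large predicted variance is what lets poorly textured or noisy fragments contribute less than reliable ones, instead of being treated equally.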

    Comparison and evaluation of global publicly available bathymetry grids in the Arctic

    In this study we evaluate the differences between six publicly available bathymetry grids in different regions of the Arctic. Independent, high-resolution, high-accuracy multibeam-sonar-derived grids are used as the ground truth against which the analyzed grids are compared. The specific bathymetry grids assessed — IBCAO, GEBCO 1 minute, GEBCO_08, ETOPO1, SRTM30_Plus, and Smith and Sandwell — are separated into two major types: Type A, grids based solely on sounding data sources, and Type B, grids based on sounding data combined with gravity data. The differences were evaluated in terms of source data accuracy, depth accuracy, internal consistency, presence of artifacts, interpolation accuracy, registration issues and resolution of the coastline. These parameters were chosen as quality metrics important for the choice of grid for any given purpose. We find that Type A bathymetry grids (in particular GEBCO_08) perform better than Type B grids in terms of internal consistency, and have higher accuracy in the different morphological provinces, especially the continental shelf, mainly due to better source data coverage. Type B grids, on the other hand, have pronounced artifacts and low accuracy on the shelf due to the scarcity of source data in the region and, in general, the poor performance of gravity prediction in shallow areas and high latitudes. Finally, we propose qualitative metrics that are important when choosing a bathymetry grid and support these metrics with a quality model to guide the choice of the most appropriate grid.
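The paper's quality metrics go well beyond simple differencing (internal consistency, artifacts, coastline resolution). The sketch below computes only the most basic depth-difference statistics of a grid against a multibeam ground-truth grid, assuming the two are already co-registered on the same raster with NaN marking no-data cells:

```python
import numpy as np

def grid_error_stats(grid, truth):
    # grid, truth: co-registered 2-D depth arrays; NaN = no data.
    # Returns bias (mean difference), spread, and RMSE over valid cells.
    d = grid - truth
    d = d[np.isfinite(d)]          # drop cells missing in either grid
    return {
        "bias": float(d.mean()),
        "std": float(d.std()),
        "rmse": float(np.sqrt((d ** 2).mean())),
    }
```

Reporting bias and spread separately matters for bathymetry: a systematic datum offset (bias) and gravity-prediction noise on the shelf (spread) have different causes and different remedies.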

    Partial shape matching using CCP map and weighted graph transformation matching

    Matching and detecting similarity or dissimilarity between images is a fundamental problem in image processing.
Different matching algorithms are used in the literature to solve this fundamental problem. Despite their novelty, these algorithms are mostly inefficient and cannot perform properly in noisy situations. In this thesis, we solve most of the problems of previous methods by using a reliable algorithm for segmenting the image contour map, called the CCP Map, and a new matching method. In our algorithm, we use a local shape descriptor that is very fast, invariant to affine transforms, and robust for dealing with non-rigid objects and occlusion. After finding the best match for the contours, we need to verify whether they are correctly matched. To this end, we use the Weighted Graph Transformation Matching (WGTM) approach, which is capable of removing outliers based on their adjacency and geometrical relationships. WGTM works properly for both rigid and non-rigid objects and is robust to high-order distortions. For evaluating our method, the ETHZ dataset, including five diverse classes of objects (bottles, swans, mugs, giraffes, apple-logos), is used. Finally, our method is compared to several well-known methods proposed by other researchers in the literature. While our method shows a comparable result to other benchmarks in terms of recall and the precision of boundary localization, it significantly improves the average precision for all of the categories in the ETHZ dataset.
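WGTM removes outliers using a weighted graph over matched keypoints and their geometric relations; that algorithm is not reproduced here. The sketch below is a far simpler geometric-consistency filter, kept only to illustrate the general idea of rejecting matches that disagree with the dominant geometry. It assumes translation-dominant motion, which WGTM does not:

```python
import numpy as np

def filter_matches(src_pts, dst_pts, tol=3.0):
    # src_pts, dst_pts: (N, 2) tentative matched point coordinates.
    # Keep matches whose displacement vector stays within `tol` pixels
    # of the median displacement (crude stand-in for graph-based filtering).
    disp = dst_pts - src_pts
    med = np.median(disp, axis=0)                  # robust dominant motion
    dev = np.linalg.norm(disp - med, axis=1)
    return dev <= tol                              # boolean inlier mask
```

A median-based filter like this breaks down under rotation, scaling, or non-rigid deformation, which is precisely the regime where the graph-transformation formulation earns its complexity.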