
    Advanced Pre-Processing and Change-Detection Techniques for the Analysis of Multitemporal VHR Remote Sensing Images

    Remote sensing images regularly acquired by satellites over the same geographical areas (multitemporal images) provide very important information on land-cover dynamics. In recent years, the ever-increasing availability of multitemporal very high geometrical resolution (VHR) remote sensing images (which have sub-metric resolution) has resulted in new, potentially relevant applications related to environmental monitoring and to land-cover control and management. Most of these applications are associated with the analysis of dynamic phenomena (both anthropic and non-anthropic) that occur at different scales and result in changes on the Earth's surface. In this context, in order to adequately exploit the huge amount of data acquired by remote sensing satellites, it is mandatory to develop unsupervised and automatic techniques for the efficient and effective analysis of such multitemporal data. Several techniques have been developed in the literature for the automatic analysis of multitemporal medium/high resolution data. However, these techniques are not effective when dealing with VHR images, mainly because they can neither exploit the high geometrical detail content of VHR data nor model the multiscale nature of the scene (and therefore of possible changes). In this framework it is important to develop unsupervised change-detection (CD) methods able to automatically manage the large amount of information in VHR data, without the need for any prior information on the area under investigation. Even if these methods usually identify only the presence/absence of changes, without giving information about the kind of change that occurred, they are considered the most interesting from an operational perspective, as in most applications no multitemporal ground-truth information is available. Considering the above-mentioned limitations, in this thesis we study the main problems related to multitemporal VHR images, with particular attention to registration noise (i.e. the noise related to a non-perfect alignment of the multitemporal images under investigation). Then, on the basis of the results of this analysis, we develop robust unsupervised and automatic change-detection methods. In particular, the following specific issues are addressed in this work: 1. Analysis of the effects of registration noise in multitemporal VHR images and definition of a method for estimating the distribution of such noise, useful for defining: a. change-detection techniques robust to registration noise (RN); the proposed techniques significantly reduce the false-alarm rate due to RN that standard CD techniques raise when dealing with VHR images; b. effective registration methods; the proposed strategies are based on a multiscale analysis of the scene which allows one to extract accurate control points for the registration of VHR images. 2. Detection and discrimination of multiple changes in multitemporal images; these techniques overcome the limitations of existing unsupervised techniques, as they are able to identify and separate different kinds of change without any prior information on the study areas. 3. Pre-processing techniques for optimizing change detection on VHR images; in particular, in this context we evaluate the impact on the results of the CD process of: a. image transformation techniques; and b. different strategies of image pansharpening applied to the original multitemporal images. For each of the above-mentioned topics, an analysis of the state of the art is carried out, the limitations of existing methods are pointed out, and the proposed solutions to the addressed problems are described in detail. Finally, experimental results on both simulated and real data are reported in order to show and confirm the validity of all the proposed methods.
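
    As a purely illustrative aside (not the thesis' own method), the following Python sketch shows the kind of unsupervised change detection the abstract refers to: change vector analysis on two co-registered multitemporal images, with a simple automatic threshold on the change magnitude standing in for the more refined thresholding strategies discussed in the literature. All names and the mean-plus-k-sigma threshold are assumptions for the example only.

    # Minimal change-vector-analysis (CVA) sketch: assumes two co-registered,
    # radiometrically comparable images of shape (bands, rows, cols).
    import numpy as np

    def cva_change_map(img_t1: np.ndarray, img_t2: np.ndarray, k: float = 2.0) -> np.ndarray:
        """Boolean change mask from the per-pixel spectral change-vector magnitude."""
        diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)  # spectral change vectors
        magnitude = np.sqrt((diff ** 2).sum(axis=0))                  # per-pixel magnitude
        threshold = magnitude.mean() + k * magnitude.std()            # crude automatic threshold
        return magnitude > threshold                                  # True where change is declared

    # Synthetic example: two 4-band images, with one changed block in the second date.
    rng = np.random.default_rng(0)
    t1 = rng.normal(100.0, 5.0, size=(4, 256, 256))
    t2 = t1 + rng.normal(0.0, 5.0, size=(4, 256, 256))
    t2[:, 100:140, 100:140] += 60.0                                   # simulated change
    print("changed pixels:", int(cva_change_map(t1, t2).sum()))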

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computers’ analysis of images as effective as the human visual system. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life: they improve particular activities and provide handy tools, which are sometimes used only for entertainment but quite often significantly increase our safety. Indeed, the range of practical implementations of image processing algorithms is particularly wide. Moreover, the rapid growth in computational power and efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need to develop novel approaches.

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book


    Pre-processing, classification and semantic querying of large-scale Earth observation spaceborne/airborne/terrestrial image databases: Process and product innovations.

    By the definition given on Wikipedia, “big data is the term adopted for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The big data challenges typically include capture, curation, storage, search, sharing, transfer, analysis and visualization”. Proposed by the intergovernmental Group on Earth Observations (GEO), the visionary goal of the Global Earth Observation System of Systems (GEOSS) implementation plan for the years 2005-2015 is the systematic transformation of multi-source Earth Observation (EO) “big data” into timely, comprehensive and operational EO value-adding products and services, submitted to the GEO Quality Assurance Framework for Earth Observation (QA4EO) calibration/validation (Cal/Val) requirements. To date, the GEOSS mission cannot be considered fulfilled by the remote sensing (RS) community. This is tantamount to saying that past and existing EO image understanding systems (EO-IUSs) have been outpaced by the rate of collection of EO sensory big data, whose quality and quantity are ever-increasing. This fact is supported by several observations. For example, no European Space Agency (ESA) EO Level 2 product has ever been systematically generated at the ground segment. By definition, an ESA EO Level 2 product comprises a single-date multi-spectral (MS) image radiometrically calibrated into surface reflectance (SURF) values corrected for geometric, atmospheric, adjacency and topographic effects, stacked with its data-derived scene classification map (SCM), whose thematic legend is general-purpose, user- and application-independent and includes quality layers such as cloud and cloud-shadow. Since no GEOSS exists to date, present EO content-based image retrieval (CBIR) systems lack EO image understanding capabilities. Hence, no semantic CBIR (SCBIR) system exists to date either, where semantic querying is a synonym of semantics-enabled knowledge/information discovery in multi-source big image databases. In set theory, if set A is a strict superset of (or strictly includes) set B, then A ⊃ B. This doctoral project started from the working hypothesis that SCBIR ⊃ computer vision (CV), where vision is a synonym of scene-from-image reconstruction and understanding, ⊃ EO image understanding (EO-IU) in operating mode, synonym of GEOSS ⊃ ESA EO Level 2 product ⊃ human vision. Meaning that a necessary but not sufficient pre-condition for SCBIR is CV in operating mode, this working hypothesis has two corollaries. First, human visual perception, encompassing well-known visual illusions such as the Mach bands illusion, acts as a lower bound of CV within the multi-disciplinary domain of cognitive science, i.e., CV is conditioned to include a computational model of human vision. Second, a necessary but not sufficient pre-condition for the yet-unfulfilled GEOSS development is the systematic generation at the ground segment of the ESA EO Level 2 product. Starting from this working hypothesis, the overarching goal of this doctoral project was to contribute to research and technical development (R&D) toward filling the analytic and pragmatic information gap from EO big sensory data to EO value-adding information products and services. This R&D objective was conceived to be twofold. First, to develop an original EO-IUS in operating mode, synonym of GEOSS, capable of systematic ESA EO Level 2 product generation from multi-source EO imagery.
EO imaging sources vary in terms of: (i) platform, either spaceborne, airborne or terrestrial, and (ii) imaging sensor, either (a) optical, encompassing radiometrically calibrated or uncalibrated images, panchromatic or color images, either true- or false-color red-green-blue (RGB), multi-spectral (MS), super-spectral (SS) or hyper-spectral (HS) images, featuring spatial resolutions from low (> 1 km) to very high (< 1 m), or (b) synthetic aperture radar (SAR), specifically bi-temporal RGB SAR imagery. The second R&D objective was to design and develop a prototypical implementation of an integrated closed-loop EO-IU for semantic querying (EO-IU4SQ) system as a GEOSS proof of concept in support of SCBIR. The proposed closed-loop EO-IU4SQ system prototype consists of two subsystems for incremental learning. A primary (dominant, necessary but not sufficient) hybrid (combined deductive/top-down/physical model-based and inductive/bottom-up/statistical model-based) feedback EO-IU subsystem in operating mode requires no human-machine interaction to automatically transform, in linear time, a single-date MS image into an ESA EO Level 2 product as initial condition. A secondary (dependent) hybrid feedback EO Semantic Querying (EO-SQ) subsystem is provided with a graphical user interface (GUI) to streamline human-machine interaction in support of spatiotemporal EO big data analytics and SCBIR operations. EO information products generated as output by the closed-loop EO-IU4SQ system monotonically increase in value-added with closed-loop iterations.
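
    To make the product definition above more concrete, here is a minimal, hypothetical Python sketch of the layers the abstract says an ESA EO Level 2 product bundles (surface reflectance, scene classification map, and cloud/cloud-shadow quality layers). The class and field names are illustrative assumptions, not ESA's actual product format or the thesis' implementation.

    # Hypothetical container for the EO Level 2 product layers described in the text.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class EOLevel2Product:
        surface_reflectance: np.ndarray   # (bands, rows, cols), SURF values in [0, 1]
        scene_classification: np.ndarray  # (rows, cols), integer labels of the SCM
        cloud_mask: np.ndarray            # (rows, cols), True where cloud is detected
        cloud_shadow_mask: np.ndarray     # (rows, cols), True where cloud shadow is detected

        def clear_sky_fraction(self) -> float:
            """Fraction of pixels unaffected by cloud or cloud shadow."""
            obstructed = self.cloud_mask | self.cloud_shadow_mask
            return float((~obstructed).mean())

    # Tiny usage example with synthetic arrays.
    bands, rows, cols = 4, 64, 64
    product = EOLevel2Product(
        surface_reflectance=np.zeros((bands, rows, cols)),
        scene_classification=np.zeros((rows, cols), dtype=np.int32),
        cloud_mask=np.zeros((rows, cols), dtype=bool),
        cloud_shadow_mask=np.zeros((rows, cols), dtype=bool),
    )
    print(product.clear_sky_fraction())   # 1.0 for a fully clear synthetic scene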

    Improving contrast for the detection of archaeological vegetation marks using optical remote sensing techniques.

    Airborne archaeological prospection in arable crops relies on detecting features using contrasts in the growth of the overlying crop as a proxy. This is possible because the composition of the soil in the features differs from the unmodified subsoil, and this exerts an influence on the state of the crop. This influence is expressed as changes in crop canopy density and structure and, in periods of resource constraint, as variations in vegetation stress and vigour. These contrasts are dynamic, and vary temporally with local weather and spatially with variations in drift geology and land use. This means that the archaeological features have no unique spectral signature usable for classification. Rather, contrast is expressed as relative, local variation in the crop. The extent to which the features are detectable using a particular technique is dependent on the strength of the contrast and the ability of the sensor to resolve it. Current practice relies heavily on photography in the visible spectrum, but other sensors and processing techniques have the potential to improve our ability to resolve subtle contrasts. This is important, as it affords the opportunity to extend detection temporally and to soil types not normally considered conducive to detection. This work uses multi-temporal spectro-radiometry and ground-based survey to study contrast at two sites in southern England. From these measurements, leaf area index, vegetation indices, the red-edge position, chlorophyll fluorescence and continuum removal of foliar absorption features were derived and compared to evaluate contrast. The knowledge gained from the ground-based surveys was used to inform the analysis of the airborne surveys. This included the application of vegetation indices to RGB cameras, the use of multi-temporal and full-waveform LiDAR to detect biomass variations, and the use of various techniques with hyperspectral imaging spectroscopy. These methods provide a demonstrable improvement in contrast, particularly methods sensitive to chlorophyll fluorescence, which afford the opportunity to record transient and short-term contrasts that are not resolved by other sensors.
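
    As a small illustration of the kind of quantities involved (not the thesis' actual workflow), the Python sketch below derives a vegetation index from red and near-infrared bands and expresses "contrast" as local deviation from a neighbourhood mean, reflecting the point that crop marks appear as relative, local variation rather than a unique spectral signature. The window size is an arbitrary assumption.

    # Illustrative NDVI and local-contrast computation on co-registered band rasters.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
        """Normalised difference vegetation index, guarded against division by zero."""
        red = red.astype(np.float64)
        nir = nir.astype(np.float64)
        return (nir - red) / np.maximum(nir + red, 1e-9)

    def local_contrast(index: np.ndarray, window: int = 15) -> np.ndarray:
        """Deviation of each pixel's index value from its neighbourhood mean."""
        return index - uniform_filter(index, size=window)

    # Synthetic example: buried features would show up as locally elevated contrast.
    rng = np.random.default_rng(1)
    red_band = rng.uniform(0.05, 0.15, (200, 200))
    nir_band = rng.uniform(0.30, 0.50, (200, 200))
    contrast = local_contrast(ndvi(red_band, nir_band))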

    LIDAR based semi-automatic pattern recognition within an archaeological landscape

    LIDAR data offer a novel approach to locating and monitoring cultural heritage in the landscape, particularly in areas that are difficult to access, such as forests, rough terrain or very remote regions. In the conventional approach, manually locating and mapping the archaeological information of a cultural landscape is a very time-consuming task of cultural heritage management. To improve and complement the possibilities for detecting and managing cultural heritage, computer-aided methods can offer new solutions, which can even enable the identification of details that are not recognisable to the human eye during visual inspection. From an archaeological perspective, this dissertation is motivated by the assessment of LIDAR terrain models containing archaeological features as a digital “LIDAR landscape”, using automated and semi-automated methods to identify further archaeological patterns of ground monuments. To this end, it draws on algorithmic approaches from image pattern recognition and computer vision that are as simple and freely available (open source) as possible, for the segmentation and classification of LIDAR landscapes aimed at the large-scale detection of archaeological monuments. The dissertation provides a comprehensive overview of the archaeological use and potential of LIDAR data and, on the basis of qualitative and quantitative approaches, defines the state of development of semi-automated detection of archaeological structures within archaeological prospection and remote sensing. It further explains best-practice examples and the associated current state of research, and it illustrates the quality of the detection of ground monuments achieved through semi-automated segmentation and classification of visualised LIDAR data. Finally, it identifies the field for further applications, in which the author's own algorithmic template-matching procedures enable large-scale investigations of cultural heritage. In summary, it compares analogue and computer-aided pattern recognition of ground monuments and concludes by discussing the further potential of LIDAR-based pattern recognition in archaeological cultural landscapes.
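
    As an illustration of the template-matching idea mentioned above (a generic sketch, not the dissertation's own procedure), the Python snippet below correlates a LIDAR-derived raster, such as a hillshade or local relief model, with a small template of an expected monument shape and reports strong matches. The 0.6 score threshold and the synthetic mound are assumptions for the example.

    # Generic template matching on a LIDAR-derived raster using normalised cross-correlation.
    import numpy as np
    from skimage.feature import match_template, peak_local_max

    def detect_candidates(raster: np.ndarray, template: np.ndarray, score_min: float = 0.6):
        """Return (row, col) positions where the template correlates strongly with the raster."""
        scores = match_template(raster, template, pad_input=True)   # same shape as raster
        peaks = peak_local_max(scores,
                               min_distance=max(1, template.shape[0] // 2),
                               threshold_abs=score_min)
        return [tuple(p) for p in peaks]

    # Synthetic example: one mound-like bump planted in an otherwise flat, noisy terrain model.
    rng = np.random.default_rng(2)
    dtm = rng.normal(0.0, 0.05, (300, 300))
    y, x = np.mgrid[-10:11, -10:11]
    mound = np.exp(-(x ** 2 + y ** 2) / 40.0)      # idealised round mound template
    dtm[140:161, 90:111] += mound                  # plant the mound in the raster
    print(detect_candidates(dtm, mound))           # expected: a hit near (150, 100)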

    Mechanisms and Consequences of Microtubule-Based Symmetry Breaking in Plant Roots

    Directional growth in plants is primarily determined by the axis of cell expansion, which is specified by the net orientation of cortical microtubules. Microtubules guide the deposition of cellulose and other cell wall materials. In rapidly elongating cells, transversely oriented microtubules create material anisotropy in the cell wall that prevents radial cell expansion, channeling cell expansion in the longitudinal direction. Mutations perturbing microtubule organization frequently lead to aberrant cell growth in land plants, with some mutations leading to helical growth patterns (called ‘twisted mutants’), often in roots. This phenotype manifests as right-handed or left-handed twisting of cell files along the long axis of plant organs, which correlates with rightward or leftward organ growth, respectively. Helical growth is common in the plant kingdom and serves a variety of purposes, but the molecular mechanisms that produce helical growth and define its handedness are not well understood. Furthermore, how molecular-level processes propagate across spatial scales to control organ-level growth is undefined. Here, I used the model plant Arabidopsis thaliana as an experimentally tractable system, focusing on the root as a model organ, to investigate the molecular mechanisms that control symmetry maintenance, symmetry breaking and helical growth in plants. Arabidopsis roots are ideally suited for this work because of their simple, concentric ring-like cellular anatomy and well-defined process of development. I selected two Arabidopsis twisted mutants with opposite chirality to study whether the emergence of right-handed and left-handed helical growth involves conserved or distinct mechanisms. Cortical microtubules are skewed in the right-handed spr1 mutant, which lacks a microtubule plus-end-associated protein that regulates polymerization dynamics. In contrast, cortical microtubules tend to be laterally displaced in the left-handed cmu1 mutant, which lacks a protein that contributes to the attachment of cortical microtubules to the plasma membrane. Using a cell-type-specific complementation approach, I showed that expression of SPR1 or CMU1 in the epidermis alone is sufficient, in the respective mutant, to maintain wild-type-like straight cell files and root growth. In addition, epidermal expression of SPR1 restores both the morphology and the skew of the cortical cell file to a wild-type-like state. By genetically disrupting cell-cell adhesion in the spr1 mutant, I found that a physical connection between epidermal and cortical cells is required for the epidermis to cause organ-level skewed growth. Together, these data demonstrate that the epidermis plays a central role in maintaining straight root growth, suggesting that twisted plant growth in nature could arise by altering microtubule behavior in the epidermis alone and does not require null alleles in all cells. To examine whether cortical microtubule defects in the spr1-3 mutant affect cell growth, I conducted a morphometric analysis. I found that while skewed cortical microtubule orientation correlates with asymmetric epidermal cell morphology and growth in the spr1-3 mutant root meristem, cell file twisting is not manifested until the differentiation zone of the root, where cell growth slows down and root hairs emerge.
Furthermore, I demonstrated that cell file twisting is not sufficient to generate skewed growth at the organ level, which also requires that the root be grown on an agar medium, a mechanically heterogeneous environment. Increasing the stiffness of the agar medium caused the spr1-3 and cmu1 mutant roots to grow straight, indicating that mechanical stimuli influence twisted root growth. Despite their important role in root anchorage, root hairs on the epidermis are not required for skewed root growth, nor for reorienting root skewing in response to changes in the mechanical environment. Overall, this work provides new insights into how symmetry breaking affects the root mechanoresponse. Spatial heterogeneity in the composition and organization of the plant cell wall affects its mechanics to control cell shape and directional growth. In the last chapter of this work, I describe a new methodology for imaging plant primary cell walls at the nanoscale using atomic force microscopy coupled with infrared spectroscopy (AFM-IR). I contributed to generating a novel sample preparation technique and employed AFM-IR and spectral deconvolution to generate high-resolution spatial maps of the mechanochemical signatures of the Arabidopsis epidermal cell wall. Cross-correlation analysis of the spatial distribution of chemical and mechanical properties suggested that the carbohydrate composition of cell wall junctions correlates with increased local stiffness. In developing this methodology, this chapter provides an essential foundation for applying AFM-IR to understand the complex mechanochemistry of intact plant cell walls at nanometer resolution.
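
    As a minor illustrative aside (not the thesis' exact analysis pipeline), one simple way to quantify the kind of spatial co-localisation described above is a pixel-wise Pearson correlation between two co-registered AFM-IR property maps, for example a carbohydrate-band absorbance map and a stiffness map. The function below is a generic Python sketch; the map names are assumptions.

    # Generic pixel-wise correlation between two co-registered property maps.
    import numpy as np

    def pixelwise_pearson(chemical_map: np.ndarray, stiffness_map: np.ndarray) -> float:
        """Pearson correlation between two maps of equal shape, ignoring NaN pixels."""
        a = chemical_map.ravel()
        b = stiffness_map.ravel()
        valid = ~(np.isnan(a) | np.isnan(b))
        return float(np.corrcoef(a[valid], b[valid])[0, 1])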