
    A WebGIS platform for the monitoring of Farm Animal Genetic Resources (GENMON)

    Background: In 2007, the Food and Agriculture Organization of the United Nations (FAO) initiated the Global Plan of Action for Farm Animal Genetic Resources (FAnGR). The main goal of this plan is to reduce the further loss of genetic diversity in farm animals, so as to protect and promote the diversity of farm animal resources. An important step towards this goal is to monitor and prioritize endangered breeds in the context of conservation programs. Methodology/Web portal implementation: The GENMON WebGIS platform monitors FAnGR and evaluates the degree of endangerment of livestock breeds. The system takes into account pedigree and introgression information, the geographical concentration of animals, the cryo-conservation plan and the sustainability of breeding activities, based on socio-economic data as well as present and future land use conditions. A multi-criteria decision tool aggregates the multi-thematic indices mentioned above using the MACBETH method, which is based on a weighted average with satisfaction thresholds. GENMON is a monitoring tool intended to support decisions made by a government agency. It relies on open source software and is available at http://lasigsrv2.epfl.ch/genmon-ch. Results/Significance: GENMON allows users to upload pedigree information (animal ID, parents, birthdate, sex, location and introgression) for a specific livestock breed and to define species- and/or region-specific weighting parameters and thresholds. The program then performs a pedigree analysis and derives several indices that are used to calculate an integrated conservation prioritization score for the breeds under investigation. The score can be visualized on a geographic map and allows a fast, intuitive and regional identification of breeds in danger. Appropriate conservation actions and breeding programs can thus be undertaken to promote the recovery of genetic diversity in livestock breeds in need. The use of the platform is illustrated by means of an example based on three local livestock breeds from different species in Switzerland.
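
    The aggregation step described above can be pictured as a weighted average of per-theme satisfaction scores, each raw index being rescaled against satisfaction thresholds. The following sketch illustrates that idea only; the index names, thresholds and weights are invented for the example and are not GENMON's actual MACBETH configuration.

```python
import numpy as np

def satisfaction(value, lower, upper):
    """Map a raw index onto a 0-1 satisfaction scale between two thresholds.

    Values at or below `lower` score 0, values at or above `upper` score 1,
    and values in between are interpolated linearly (a simplified stand-in
    for the MACBETH value functions used by the platform).
    """
    return float(np.clip((value - lower) / (upper - lower), 0.0, 1.0))

def aggregate(indices, thresholds, weights):
    """Weighted average of per-theme satisfaction scores (weights sum to 1)."""
    scores = {k: satisfaction(v, *thresholds[k]) for k, v in indices.items()}
    return sum(weights[k] * scores[k] for k in scores), scores

# Hypothetical per-breed indices; all names and numbers are illustrative only.
indices = {"pedigree": 0.62, "geography": 0.40, "cryo": 0.10, "socio_economic": 0.75}
thresholds = {"pedigree": (0.2, 0.9), "geography": (0.1, 0.8),
              "cryo": (0.0, 1.0), "socio_economic": (0.3, 0.9)}
weights = {"pedigree": 0.4, "geography": 0.2, "cryo": 0.2, "socio_economic": 0.2}

score, per_theme = aggregate(indices, thresholds, weights)
print(f"prioritization score: {score:.2f}", per_theme)
```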

    Transfer component analysis for domain adaptation in image classification

    This contribution studies a feature extraction technique aimed at reducing differences between domains in image classification. The purpose is to find a common feature space between labeled samples drawn from a source image and test samples belonging to a related target image. The presented approach, Transfer Component Analysis, finds a transformation matrix performing a joint mapping of the two domains by minimizing a probability distribution distance measure, the Maximum Mean Discrepancy criterion. When predicting on a target image, such a projection makes it possible to apply a supervised classifier trained exclusively on labeled source pixels mapped into this common latent subspace. Promising results are observed on an urban scene captured in a hyperspectral image. The experiments reveal improvements with respect to a standard classification model built on the original source image and to other feature extraction techniques.
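
    The distribution distance minimized by the transformation is the Maximum Mean Discrepancy (MMD). A minimal sketch of an RBF-kernel MMD estimate between two pixel sets is given below; the arrays are random placeholders rather than real source and target imagery, and the simple biased estimator is used.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between the rows of X and Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

# Toy "source" and "target" pixel spectra (random stand-ins, not real data).
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 10))
target = rng.normal(0.5, 1.2, size=(200, 10))
print(f"MMD^2 between the two domains: {mmd2(source, target):.3f}")
```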

    Spatio-temporal avalanche forecasting with support vector machines

    This paper explores the use of the Support Vector Machine (SVM) as a data exploration tool and a predictive engine for spatio-temporal forecasting of snow avalanches. Based on historical observations of avalanche activity, meteorological conditions and snowpack observations in the field, an SVM is used to build a data-driven spatio-temporal forecast for the local mountain region. It incorporates the outputs of simple physics-based and statistical approaches used to interpolate meteorological and snowpack-related data over a digital elevation model of the region. The interpretation of the produced forecast is discussed, and the quality of the model is validated using observations and avalanche bulletins of recent years. Insight into the model's behaviour is presented to highlight the interpretability of the model, its ability to produce reliable forecasts for individual avalanche paths, and its sensitivity to the input data. Estimates of prediction uncertainty are obtained with ensemble forecasting. The case study was carried out using data from the avalanche forecasting service in the Lochaber region of Scotland, where avalanches are forecast on a daily basis during the winter months.
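
    A stripped-down sketch of the data-driven step is shown below: an SVM classifier trained on per-day feature vectors of meteorological and snowpack variables and used to output an avalanche probability. The features and labels are synthetic placeholders, not the Lochaber data, and the feature names in the comments are assumptions for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for inputs such as new-snow depth, wind speed and air
# temperature; real inputs would be interpolated meteorological and snowpack
# fields over a digital elevation model.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X[:400], y[:400])

# Probability-style output for held-out "days"; an ensemble of such models
# (e.g. trained on resampled data) gives a simple uncertainty estimate.
proba = model.predict_proba(X[400:])[:, 1]
print("predicted avalanche probability, first 5 cases:", np.round(proba[:5], 2))
```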

    Enhanced change detection using nonlinear feature extraction

    This paper presents an application of kernel principal component analysis aimed at aligning optical images before the application of change detection techniques. The approach relies on the extraction of nonlinear features from a selected subset of pixels representing unchanged areas in the images. Both images are then projected into the aligned space defined by the eigenvectors associated with the largest variance (eigenvalues). In the transformed space, unchanged pixels of both datasets are mapped next to each other, thus reducing within-class variance. The difference image that results from differencing the (kernel) principal components is therefore likely to provide a more suitable representation for the detection of changes. A bi-temporal subset of Landsat TM images validates the proposed approach, which is used to provide a suitable representation before applying change vector analysis and the support vector domain description.
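
    A minimal sketch of the alignment idea, assuming scikit-learn is available: fit kernel PCA on a subset of unchanged pixels, project both acquisition dates into that space, and difference the projections. The arrays are random placeholders for real image pixels, and the kernel parameters are arbitrary.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
unchanged = rng.normal(size=(300, 6))        # spectra of pixels known to be unchanged
image_t1 = rng.normal(size=(1000, 6))        # all pixels, date 1 (placeholder data)
image_t2 = image_t1 + rng.normal(scale=0.1, size=(1000, 6))  # date 2, mostly unchanged

# Fit the nonlinear feature space on unchanged pixels only, keep the leading
# components (largest eigenvalues), then map both dates into that space.
kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1).fit(unchanged)
p1, p2 = kpca.transform(image_t1), kpca.transform(image_t2)

# Difference image in the transformed space; large magnitudes flag changes.
change_magnitude = np.linalg.norm(p2 - p1, axis=1)
print("mean change magnitude:", round(float(change_magnitude.mean()), 3))
```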

    Semi-supervised multiview embedding for hyperspectral data classification

    In this paper, a method for semi-supervised multiview feature extraction based on multiset regularized kernel canonical correlation analysis (kCCA) is proposed for the classification of hyperspectral images. The covariance matrix of this type of data is naturally composed of distinct blocks of spectral channels, which in turn compose the hypercube. To reduce the dimensionality of the data and extract discriminant features that take advantage of this particular structure, a multiview feature extraction method is applied prior to classification. The proposed scheme incorporates both the labels (as a distinct view of the data) and unlabeled pixels into the computation of the cross-correlation and regularization terms. First, we propose a technique to automatically obtain the segmentation of the spectral profile, based on the correlation between channels. Then, multiset kernel canonical correlation analysis is applied to find a latent space that represents mutually correlated projected views and labels. Experiments on three real hyperspectral images with two linear classifiers, and comparisons to state-of-the-art feature extraction methods, show the benefits of this approach, which provides classification accuracies equal or superior to those obtained by training classifiers on the original input space, but with only a fraction of the original data dimensionality. (C) 2014 Elsevier B.V. All rights reserved.
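
    The first step, splitting the spectral profile into blocks of correlated channels, can be sketched as follows: compute the channel-to-channel correlation matrix and start a new block whenever the correlation between neighbouring channels drops below a threshold. This is an illustrative simplification under an assumed threshold, not necessarily the paper's exact procedure.

```python
import numpy as np

def segment_channels(X, threshold=0.9):
    """Split spectral channels into contiguous blocks of correlated bands.

    X has shape (n_pixels, n_channels); a new block starts whenever the
    correlation between adjacent channels falls below `threshold`.
    """
    corr = np.corrcoef(X, rowvar=False)
    blocks, current = [], [0]
    for b in range(1, X.shape[1]):
        if corr[b - 1, b] >= threshold:
            current.append(b)
        else:
            blocks.append(current)
            current = [b]
    blocks.append(current)
    return blocks

# Toy hyperspectral cube with two groups of strongly correlated channels.
rng = np.random.default_rng(3)
base1, base2 = rng.normal(size=(500, 1)), rng.normal(size=(500, 1))
X = np.hstack([base1 + 0.05 * rng.normal(size=(500, 4)),
               base2 + 0.05 * rng.normal(size=(500, 4))])
print(segment_channels(X, threshold=0.8))   # expected: [[0, 1, 2, 3], [4, 5, 6, 7]]
```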

    Learning the relevant image features with multiple kernels

    This paper proposes to learn the relevant features of remote sensing images for automatic spatio-spectral classification through the automatic optimization of multiple kernels. The method consists of building dedicated kernels for different sets of bands, contextual or textural features. The optimal linear combination of kernels is found through gradient descent on the support vector machine (SVM) objective function. Since a naïve implementation is computationally demanding, we propose an efficient model selection procedure based on kernel alignment. The result is a weight (learned from the data) for each kernel, where both relevant and meaningless image features emerge after training. Excellent results are observed in both multi- and hyperspectral image classification, improving on the standard SVM and other spatio-spectral formulations.
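
    The model selection shortcut mentioned here, kernel alignment, measures how well a candidate kernel matrix matches an ideal kernel built from the labels. A compact sketch of the alignment score follows, with toy data standing in for real image features and two arbitrary RBF bandwidths as candidates.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def alignment(K, y):
    """Alignment between a kernel matrix K and the ideal kernel y y^T."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 5))
y = np.where(X[:, 0] + 0.1 * rng.normal(size=100) > 0, 1, -1)  # labels in {-1, +1}

# Score two candidate bandwidths; a higher alignment suggests a more useful kernel.
for gamma in (0.01, 1.0):
    print(f"gamma={gamma}: alignment={alignment(rbf_kernel(X, gamma=gamma), y):.3f}")
```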

    Learning relevant image features with multiple-kernel classification

    The increase in the spatial and spectral resolution of satellite sensors, along with shorter revisit times, has provided high-quality data for remote sensing image classification. However, the high-dimensional feature space induced by using many heterogeneous information sources precludes the use of simple classifiers: thus, proper feature selection is required for discarding irrelevant features and adapting the model to the specific problem. This paper proposes to classify the images and simultaneously learn the relevant features in such high-dimensional scenarios. The proposed method is based on the automatic optimization of a linear combination of kernels dedicated to different meaningful sets of features. Such sets can be groups of bands, contextual or textural features, or bands acquired by different sensors. The combination of kernels is optimized through gradient descent on the support vector machine objective function. Even though the combination is linear, the ranked relevance takes into account the intrinsic nonlinearity of the data through the kernels. Since a naive selection of the free parameters of the multiple-kernel method is computationally demanding, we propose an efficient model selection procedure based on kernel alignment. The result is a weight (learned from the data) for each kernel, where both relevant and meaningless image features automatically emerge after training the model. Experiments carried out on multi- and hyperspectral, contextual, and multisource remote sensing data classification confirm the capability of the method in ranking the relevant features and show the computational efficiency of the proposed strategy.
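
    In practice, the combined kernel is a weighted sum of per-feature-group kernel matrices, which a standard SVM can consume as a precomputed kernel. The minimal illustration below fixes the weights by hand, whereas the method described above learns them from the data; the feature groups and parameters are invented for the example.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(11)
# Two hypothetical feature groups, e.g. spectral bands and textural features.
X_spectral, X_texture = rng.normal(size=(150, 8)), rng.normal(size=(150, 3))
y = (X_spectral[:, 0] + X_texture[:, 0] > 0).astype(int)

# One dedicated kernel per feature group, then a linear combination of them.
K_spec = rbf_kernel(X_spectral, gamma=0.1)
K_text = rbf_kernel(X_texture, gamma=0.5)
weights = np.array([0.7, 0.3])              # fixed here; learned from data in the paper
K = weights[0] * K_spec + weights[1] * K_text

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", round(clf.score(K, y), 3))
```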

    Understanding angular effects in VHR imagery and their significance for urban land-cover model portability: A study of two multi-angle in-track image sequences

    This paper investigates the angular effects causing spectral distortions in multi-angle remote sensing imagery. We study two WorldView-2 multispectral in-track sequences acquired over the cities of Atlanta, USA, and Rio de Janeiro, Brazil, consisting of 13 and 20 co-located images, respectively. The sequences possess off-nadir acquisition angles of up to 47.5° and exhibit markedly different sun-satellite configurations with respect to each other. Both scenes comprise classic urban structures such as buildings of different sizes, road networks, and parks. First, we quantify the degree of distortion affecting the sequences by means of a nonlinear measure of distance between probability distributions, the Maximum Mean Discrepancy. Second, we assess the ability of a classification model trained on an image acquired at a certain view angle to predict the land cover of all the other images in the sequence. The portability across the sequence is investigated for supervised classifiers of different natures by analyzing the evolution of the classification accuracy with respect to the off-nadir look angle. For both datasets, the effectiveness of physically- and statistically-based normalization methods in obtaining angle-invariant data spaces is compared and synergies are discussed. The empirical results indicate that, after suitable normalization (histogram matching, atmospheric compensation), the loss in classification accuracy when using a model trained on the near-nadir image to classify the most off-nadir acquisitions can be reduced to as little as 0.06 (Atlanta) or 0.03 (Rio de Janeiro) Kappa points with an SVM classifier.
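
    One of the statistical normalization steps mentioned above, histogram matching, can be sketched with scikit-image (assuming version 0.19 or later is available). The arrays below are random placeholders for co-registered multispectral acquisitions, not WorldView-2 data.

```python
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(5)
# Placeholder 8-band images: a near-nadir "reference" and a brighter off-nadir view.
near_nadir = rng.normal(loc=100.0, scale=10.0, size=(64, 64, 8))
off_nadir = rng.normal(loc=120.0, scale=15.0, size=(64, 64, 8))

# Match each band of the off-nadir image to the near-nadir reference so that a
# classifier trained on the reference transfers better across view angles.
normalized = match_histograms(off_nadir, near_nadir, channel_axis=-1)
print("band means before/after matching:",
      off_nadir.mean(axis=(0, 1)).round(1)[:3],
      normalized.mean(axis=(0, 1)).round(1)[:3])
```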