512 research outputs found

    KERNEL FEATURE CROSS-CORRELATION FOR UNSUPERVISED QUANTIFICATION OF DAMAGE FROM WINDTHROW IN FORESTS

    In this study, estimation of tree damage from a windthrow event using feature detection on high-resolution RGB imagery is assessed. An accurate quantitative assessment of the damage in terms of volume is important. It can be done by ground sampling, which is notably expensive and time-consuming, or by manual interpretation and analysis of aerial images; the latter requires an expert operator investing time to manually detect damaged trees and to apply relation functions between measures and volume, which are also error-prone. In the proposed method, RGB images with 0.2 m ground sample distance are analysed using an adaptive template matching method. Ten images corresponding to ten separate study areas are tested. A 13 × 13 pixel kernel with a simplified linear-feature representation of a cylinder is applied at different rotation angles (from 0° to 170° in 10° steps). The highest value of the normalized cross-correlation (NCC) over all angles is recorded for each pixel of each image. Several features are tested: percentiles (75, 80, 85, 90, 95, 99, max), as well as the sum and the number of pixels with NCC above 0.55. Three regression methods are tested: multiple regression (MR), support vector machines (SVM) with linear kernel, and random forests. The first two methods gave the best results. The ground truth was acquired by ground sampling, and total volumes of damaged trees were estimated for each of the ten areas. Damaged volumes in the ten areas range from ∼1.8 × 10² m³ to ∼1.2 × 10⁴ m³. Regression results show that the SVM regression method over the sum feature gives an R-squared of 0.92, a mean absolute error (MAE) of 255 m³ and a relative absolute error (RAE) of 34%, using leave-one-out cross-validation over the 10 observations. These initial results are encouraging and support further investigation of more finely tuned kernel template metrics to define an unsupervised image analysis process to automatically assess forest damage from windthrow.
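    The rotating-kernel NCC step described above can be sketched in a few lines. This is a minimal numpy-only illustration, not the study's implementation: the line kernel is generated analytically at each angle rather than from the paper's cylinder representation, and the line thickness and function names are assumptions made for the example.

```python
import numpy as np

def line_kernel(size, angle_deg):
    """Binary size x size kernel with a one-pixel-wide line through the
    centre at the given angle (a stand-in for the cylinder template)."""
    c = size // 2
    ys, xs = np.mgrid[0:size, 0:size]
    theta = np.deg2rad(angle_deg)
    # perpendicular distance of each pixel from the line through the centre
    d = np.abs((xs - c) * np.sin(theta) - (ys - c) * np.cos(theta))
    return (d < 0.7).astype(float)

def ncc_map(image, kernel):
    """Normalized cross-correlation of the kernel over a zero-padded image
    (brute force: fine for an illustration, too slow for large rasters)."""
    k = kernel.shape[0]
    pad = k // 2
    out = np.zeros(image.shape, dtype=float)
    kz = kernel - kernel.mean()
    kn = np.sqrt((kz ** 2).sum())
    img = np.pad(image.astype(float), pad)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = img[i:i + k, j:j + k]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * kn
            out[i, j] = (wz * kz).sum() / denom if denom > 0 else 0.0
    return out

def max_ncc(image, size=13):
    """Per-pixel maximum NCC over rotation angles 0 to 170 in 10-degree steps."""
    maps = [ncc_map(image, line_kernel(size, a)) for a in range(0, 180, 10)]
    return np.max(maps, axis=0)
```

    Features such as percentiles of the per-pixel maxima, or the count of pixels above the 0.55 NCC threshold, can then be fed to a regression model per study area.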

    The Attractiveness of Countries for FDI. A Fuzzy Approach

    This paper presents a new method for measuring the attractiveness of countries for FDI. A ranking is built using a fuzzy expert system whereby the function producing the final evaluation is not necessarily linear and the weights of the variables, usually defined numerically, are replaced by linguistic rules. More precisely, weights derive from expert opinions and from econometric tests on the determinants of countries' FDI. As a second step, the viewpoints of investors from two different investing economies, the UK and Italy, are taken into account. Country-specific factors, such as the geographic, cultural and institutional distances between the investing and the partner economies, are included in the analysis. This shows how the base ranking changes with the investor's perspective. Keywords: foreign direct investments; fuzzy expert systems; attractiveness.
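    The rule-based weighting described above can be illustrated with a minimal Mamdani-style fuzzy inference step. The two indicators (market size and trade openness), the membership functions and the rules below are hypothetical placeholders, not the determinants or rules used in the paper; the point is only to show how linguistic rules stand in for numeric weights.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    left = (x - a) / (b - a) if b != a else 1.0
    right = (c - x) / (c - b) if c != b else 1.0
    return max(0.0, min(left, right, 1.0))

def attractiveness(market_size, openness):
    """Score in [0, 1] from two indicators in [0, 1], using linguistic rules
    (min/max inference) and weighted-singleton defuzzification."""
    low  = lambda x: tri(x, 0.0, 0.0, 0.5)
    med  = lambda x: tri(x, 0.0, 0.5, 1.0)
    high = lambda x: tri(x, 0.5, 1.0, 1.0)
    # each rule: (firing strength, output singleton)
    rules = [
        (min(high(market_size), high(openness)), 1.0),  # "if both high -> attractive"
        (min(med(market_size),  med(openness)),  0.5),  # "if both medium -> moderate"
        (max(low(market_size),  low(openness)),  0.0),  # "if either low -> unattractive"
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

    Country-specific distance factors could enter as further antecedents, which is how the ranking shifts with the investor's perspective.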

    OPEN SOURCE WEB TOOL FOR TRACKING IN A LOW-COST MOBILE MAPPING SYSTEM

    During the last decade several Mobile Mapping Systems (MMSs), i.e. systems able to efficiently acquire three-dimensional data using moving sensors (Guarnieri et al., 2008, Schwarz and El-Sheimy, 2004), have been developed. Research and commercial products have been implemented on terrestrial, aerial and marine platforms, and even on human-carried equipment, e.g. backpacks (Lo et al., 2015, Nex and Remondino, 2014, Ellum and El-Sheimy, 2002, Leica Pegasus backpack, 2016, Masiero et al., 2017, Fissore et al., 2018).

    Such systems are composed of an integrated array of time-synchronised navigation and imaging sensors mounted on a mobile platform (Puente et al., 2013, Tao and Li, 2007). Usually an MMS integrates different types of sensors, such as GNSS, IMU, video cameras and/or laser scanners, that allow accurate and quick mapping (Li, 1997, Petrie, 2010, Tao, 2000). The typical requirement of high-accuracy 3D georeferenced reconstruction often makes such systems quite expensive. Indeed, at the time of writing most terrestrial MMSs on the market cost more than 50,000, which might be prohibitive for certain applications (Ellum and El-Sheimy, 2002, Piras et al., 2008). To achieve the best performance, sensors have to be properly calibrated (Dong et al., 2007, Ellum and El-Sheimy, 2002).

    Sensors in MMSs are usually integrated and managed through dedicated software, developed ad hoc for the devices mounted on the mobile platform and hence tailored to the specific sensors used. Although commercial solutions are complete, they are very specific and closely tied to the type of survey, and their price restricts the number of users and the sectors that might be interested.

    This paper describes a (relatively low-cost) terrestrial Mobile Mapping System developed at the University of Padua (TESAF, Department of Land Environment Agriculture and Forestry) by the research team at CIRGEO, in order to test an alternative to more expensive MMSs. The first objective of this paper is to report on the development of a prototype MMS for the collection of geospatial data, based on the assembly of low-cost sensors managed through a web interface built with open source libraries. The main goal is to provide a system accessible to any type of user and flexible to any upgrade or to the introduction of new sensor models or versions. After a presentation of the hardware components used in our system, a more detailed description of the software developed for the management of the MMS is provided, which is the innovative part of the project. In line with the worldwide demand for big data available through the web from anywhere in the world (Pirotti et al., 2011), the proposed solution allows data to be retrieved from a web interface (Figure 4). This is part of a project for the development of a new web infrastructure at the University of Padua (which will be available to external users as well), in order to ease collaboration between researchers from different areas.

    Finally, strengths, weaknesses and future developments of the low-cost MMS are discussed.

    Open source R for applying machine learning to RPAS remote sensing images

    The increase in the number of remote sensing platforms, ranging from satellites to close-range Remotely Piloted Aircraft Systems (RPAS), is leading to a growing demand for new image processing and classification tools. This article presents a comparison of the Random Forest (RF) and Support Vector Machine (SVM) machine-learning algorithms for extracting land-use classes from an RPAS-derived orthomosaic using open source R packages. The camera used in this work captures the reflectance of the Red, Blue, Green and Near Infrared channels of a target; the full dataset is therefore a 4-channel raster image. The classification performance of the two methods is tested at varying sizes of training sets. The SVM and RF are evaluated using the Kappa index, classification accuracy and classification error as accuracy metrics. The training sets are randomly obtained as subsets of 2 to 20% of the total number of raster cells, with stratified sampling according to the land-use classes. Ten runs are done for each training set to calculate the variance in results. The control dataset consists of an independent classification obtained by photointerpretation. The validation is carried out (i) using K-fold cross-validation, (ii) using the pixels from the validation test set, and (iii) using the pixels from the full test set. Validation with K-fold and with the validation dataset shows that SVM gives better results, but RF proves more performant when the training size is larger. Classification error and classification accuracy follow the trend of the Kappa index.
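    A comparison of this kind can be sketched with scikit-learn (the article itself uses R packages; Python, the synthetic 4-band pixel data and the class spectra below are substitutions made for a self-contained example):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

def make_pixels(n_per_class):
    """Synthetic stand-in for 4-channel (R, B, G, NIR) orthomosaic pixels:
    three invented land-use classes as Gaussian clusters in band space."""
    centers = np.array([[0.2, 0.2, 0.3, 0.6],   # e.g. vegetation (high NIR)
                        [0.6, 0.6, 0.6, 0.3],   # e.g. bare soil
                        [0.1, 0.4, 0.2, 0.1]])  # e.g. water
    X = np.vstack([c + 0.05 * rng.standard_normal((n_per_class, 4)) for c in centers])
    y = np.repeat(np.arange(3), n_per_class)
    return X, y

X_test, y_test = make_pixels(200)

# per-class training subsets of increasing size, mimicking the 2%..20% sweep
results = {}
for n in (10, 50, 200):
    X_tr, y_tr = make_pixels(n)
    for name, clf in (("RF", RandomForestClassifier(n_estimators=50, random_state=0)),
                      ("SVM", SVC(kernel="rbf"))):
        kappa = cohen_kappa_score(y_test, clf.fit(X_tr, y_tr).predict(X_test))
        results[(name, n)] = kappa
```

    Plotting the Kappa index against training-set size for each classifier, over repeated runs, reproduces the kind of comparison described above.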

    Airborne and Terrestrial Laser Scanning Data for the Assessment of Standing and Lying Deadwood: Current Situation and New Perspectives

    LiDAR technology is finding uses in the forest sector, not only for surveys in productive forests but also as a tool to gain a deeper understanding of the importance of the three-dimensional component of forest environments. Developments in platforms and sensors over the last decades have highlighted the capacity of this technology to capture relevant details, even at finer scales. This is driving its usage towards more ecological topics and applications for forest management. In recent years, nature protection policies have been focusing on deadwood as a key element for the health of forest ecosystems, and wide-scale assessments are necessary for the planning process at the landscape scale. Initial studies showed promising results in the identification of larger deadwood components (e.g., snags, logs, stumps), employing data not specifically collected for the purpose. Nevertheless, much effort is still needed to bring the available methodologies to an operational level. Newly available platforms (e.g., Mobile Laser Scanners) and sensors (e.g., Multispectral Laser Scanners) might provide new opportunities for this field of study in the near future.

    Implementation and assessment of two density-based outlier detection methods over large spatial point clouds

    Several technologies provide datasets consisting of a large number of spatial points, commonly referred to as point clouds. These point datasets provide spatial information regarding the phenomenon to be investigated, adding value through knowledge of forms and spatial relationships. Accurate methods for automatic outlier detection are a key step. In this note we use a completely open-source workflow to assess two outlier detection methods: the statistical outlier removal (SOR) filter and the local outlier factor (LOF) filter. The latter was implemented ex novo for this work using the Point Cloud Library (PCL) environment; source code is available in a GitHub repository for inclusion in PCL builds. Two very different spatial point datasets are used for accuracy assessment. One is obtained from dense image matching of a photogrammetric survey (SfM) and the other from floating car data (FCD) coming from a smart-city mobility framework providing a position every second for two public transportation bus tracks. Outliers were simulated in the SfM dataset, and manually detected and selected in the FCD dataset. Simulation in SfM was carried out in order to create a controlled set with two classes of outliers: clustered points (up to 30 points per cluster) and isolated points, in both cases at random distances from the other points. The optimal number of nearest neighbours (KNN) and optimal thresholds of SOR and LOF values were defined using the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. Absolute differences from the median values of LOF and SOR (defined as LOF2 and SOR2) were also tested as metrics for detecting outliers, and optimal thresholds were defined through the AUC of ROC curves. Results show a strong dependency on the point distribution in the dataset and on local density fluctuations.
    In the SfM dataset the LOF2 and SOR2 methods performed best, with an optimal KNN value of 60; the LOF2 approach gave a slightly better result when considering clustered outliers (true positive rate: LOF2 = 59.7%, SOR2 = 53%). For FCD, SOR with low KNN values performed better for one of the two bus tracks, and LOF with high KNN values for the other; these differences are due to very different local point densities. We conclude that the choice of outlier detection algorithm depends very much on the characteristics of the dataset's point distribution; there is no one-size-fits-all solution. The conclusions provide some indication of which characteristics of the datasets can help in choosing the optimal method and KNN values.
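    The simpler of the two filters can be written in a few lines of numpy. This is a generic sketch of the SOR idea, not the PCL implementation benchmarked in the note; `k` and `alpha` stand for the tunable KNN count and threshold multiplier.

```python
import numpy as np

def sor_scores(points, k=8):
    """Statistical Outlier Removal score: mean distance of each point to its
    k nearest neighbours (brute force; use a KD-tree for large clouds)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    return d[:, 1:k + 1].mean(axis=1)  # column 0 is the distance to self

def sor_filter(points, k=8, alpha=1.0):
    """Flag points whose mean k-NN distance exceeds the global mean
    plus alpha standard deviations."""
    s = sor_scores(points, k)
    return s > s.mean() + alpha * s.std()
```

    LOF additionally normalizes each point's density by that of its neighbours, which is one reason the two filters respond so differently to local density fluctuations.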

    Semi-automated detection of surface degradation on bridges based on a level set method

    Due to the effect of climatic factors, natural phenomena and human usage, buildings and infrastructures are subject to progressive degradation. The deterioration of these structures has to be monitored in order to avoid hazards for human beings and for the natural environment in their neighborhood. Hence, on the one hand, monitoring such infrastructures is of primary importance. On the other hand, this monitoring effort is nowadays mostly carried out by expert and skilled personnel, who follow the overall data acquisition, analysis and result reporting process, making the whole monitoring procedure quite expensive for public (and private) agencies. This paper proposes a partially user-assisted procedure in order to reduce the monitoring cost and to make the obtained results less subjective. The developed method relies on images acquired with standard cameras, even by inexperienced personnel. Deterioration of the infrastructure surface is detected by image segmentation based on a level set method. The results of the semi-automated analysis procedure are remapped onto a 3D model of the infrastructure obtained by means of a terrestrial laser scanning acquisition. The proposed method has been successfully tested on a portion of a road bridge in Perarolo di Cadore (BL), Italy.
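    The segmentation idea can be illustrated with a heavily simplified two-phase iteration in the spirit of the Chan-Vese level set model. This is not the paper's method: a true level set evolves a signed distance function with a curvature (length) term, whereas here a simple majority vote stands in for the smoothing.

```python
import numpy as np

def chan_vese_like(image, n_iter=20):
    """Simplified two-phase segmentation: alternately estimate the mean
    intensity inside/outside the contour, reassign each pixel to the
    closer mean, then smooth with a 3x3 majority vote (a crude stand-in
    for the curvature term of a true level set)."""
    mask = image > image.mean()  # initial contour
    for _ in range(n_iter):
        c1 = image[mask].mean() if mask.any() else 0.0
        c2 = image[~mask].mean() if (~mask).any() else 0.0
        mask = (image - c1) ** 2 < (image - c2) ** 2
        # count foreground pixels in each 3x3 neighbourhood (wrap at edges)
        votes = sum(np.roll(np.roll(mask.astype(int), dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        mask = votes >= 5
    return mask
```

    On a photograph of a degraded surface, the foreground phase would correspond to candidate deterioration regions, to be remapped onto the 3D model.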

    Benchmark of machine learning methods for classification of a Sentinel-2 image

    Thanks mainly to ESA and USGS, a large amount of free Earth imagery is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue, since the land cover of a specific class may present large spatial and spectral variability, and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi-layer perceptron, multi-layer perceptron ensemble, ctree, boosting and logistic regression. The validation is carried out using a control dataset which consists of an independent classification, in 11 land-cover classes, of an area of about 60 km², obtained by manual visual interpretation of high-resolution images (20 cm ground sampling distance) by experts. In this study five of the eleven classes are used, since the others have too few samples (pixels) for the testing and validation subsets. The classes used are the following: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold), and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and the control values over three sets of data: the training dataset (train), the whole control dataset (full), and k-fold cross-validation (kfold) with ten folds.
    Results from validation of predictions over the whole dataset (full) show the random forests method with the highest values, with the Kappa index ranging from 0.55 to 0.42 with the most and fewest training pixels, respectively. The two neural networks (multi-layer perceptron and its ensemble) and the support vector machines, with the default radial basis function kernel, follow closely with comparable performance.
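    A benchmark harness of this shape is straightforward with scikit-learn. This is a hedged sketch only: the study uses its own Sentinel-2 data and nine methods, whereas the synthetic data, the subset of four classifiers and all parameters below are placeholders for a self-contained example.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# synthetic stand-in for Sentinel-2 pixels: 10 bands, 5 land-cover classes
X, y = make_classification(n_samples=1000, n_features=10, n_informative=8,
                           n_classes=5, random_state=0)

models = {
    "lda": LinearDiscriminantAnalysis(),
    "knn": KNeighborsClassifier(),
    "rf": RandomForestClassifier(n_estimators=100, random_state=0),
    "logreg": LogisticRegression(max_iter=1000),
}

scores = {}
for name, model in models.items():
    t0 = time.perf_counter()
    # 10-fold cross-validated accuracy, as in the kfold validation approach
    scores[name] = (cross_val_score(model, X, y, cv=10).mean(),
                    time.perf_counter() - t0)
```

    Recording both the mean fold accuracy and the wall-clock time per method gives the accuracy-versus-speed comparison the benchmark reports.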