Class reconstruction driven adversarial domain adaptation for hyperspectral image classification
We address the problem of cross-domain classification of hyperspectral image (HSI) pairs under the notion of unsupervised domain adaptation (UDA). The UDA problem aims at classifying the test samples of a target domain by exploiting the labeled training samples from a related but different source domain. In this respect, adversarially trained domain classifiers are popular, as they seek to learn a feature space shared by both domains. However, such a formalism does not by itself ensure (i) the discriminativeness and (ii) the non-redundancy of the learned space. In general, the feature space learned by a domain classifier does not convey any meaningful insight regarding the data. In contrast, we are interested in constraining the space to be simultaneously discriminative and reconstructive at the class scale. In particular, the reconstructive constraint enables the learning of category-specific, meaningful feature abstractions, and UDA in such a latent space is expected to better associate the domains. In addition, we impose an orthogonality constraint to ensure the non-redundancy of the learned space. Experimental results obtained on benchmark HSI datasets (Botswana and Pavia) confirm the efficacy of the proposed approach.
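To make the kind of objective described above concrete, here is a minimal sketch in PyTorch, assuming a simple pixel-wise encoder, a class-conditional decoder, a label classifier and a domain classifier; all module names, dimensions and loss weights are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_bands, n_feat, n_classes = 103, 64, 9           # illustrative, Pavia-like dimensions
enc = nn.Sequential(nn.Linear(n_bands, 128), nn.ReLU(), nn.Linear(128, n_feat))
clf = nn.Linear(n_feat, n_classes)                 # label predictor (discriminative term)
dom = nn.Linear(n_feat, 2)                         # domain classifier (adversary)
dec = nn.Sequential(nn.Linear(n_feat + n_classes, 128), nn.ReLU(), nn.Linear(128, n_bands))

def orthogonality_penalty(z):
    """Penalize redundancy between feature dimensions: ||Z^T Z - I||_F^2."""
    z = F.normalize(z, dim=0)
    gram = z.t() @ z
    return ((gram - torch.eye(gram.size(0))) ** 2).sum()

def encoder_loss(xs, ys, xt, lam_rec=1.0, lam_orth=0.1, lam_adv=0.1):
    """xs, ys: labeled source pixels; xt: unlabeled target pixels."""
    zs, zt = enc(xs), enc(xt)
    loss_cls = F.cross_entropy(clf(zs), ys)                      # discriminative (source labels)
    y1h = F.one_hot(ys, n_classes).float()
    loss_rec = F.mse_loss(dec(torch.cat([zs, y1h], 1)), xs)      # class-conditional reconstruction
    loss_orth = orthogonality_penalty(torch.cat([zs, zt], 0))    # non-redundancy constraint
    d_logits = dom(torch.cat([zs, zt], 0))
    d_labels = torch.cat([torch.zeros(len(xs)), torch.ones(len(xt))]).long()
    loss_adv = -F.cross_entropy(d_logits, d_labels)              # encoder tries to confuse the adversary
    return loss_cls + lam_rec * loss_rec + lam_orth * loss_orth + lam_adv * loss_adv

# Example usage with random data:
# xs, ys, xt = torch.randn(32, n_bands), torch.randint(0, n_classes, (32,)), torch.randn(32, n_bands)
# encoder_loss(xs, ys, xt).backward()
# In practice the domain classifier itself is trained with the un-negated cross-entropy in an
# alternating step (or via a gradient-reversal layer); that bookkeeping is omitted here.
```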
Multitemporal Very High Resolution from Space: Outcome of the 2016 IEEE GRSS Data Fusion Contest
In this paper, the scientific outcomes of the 2016 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society are discussed. The 2016 Contest was an open-topic competition based on a multitemporal and multimodal dataset, which included a temporal pair of very high resolution panchromatic and multispectral Deimos-2 images and a video captured by the Iris camera on board the International Space Station. The problems addressed and the techniques proposed by the participants in the Contest spanned a rather broad range of topics, mixing ideas and methodologies from remote sensing, video processing, and computer vision. In particular, the winning team developed a deep learning method to jointly address spatial scene labeling and temporal activity modeling using the available image and video data. The second-place team proposed a random field model to simultaneously perform coregistration of multitemporal data, semantic segmentation, and change detection. The key methodological ideas of both approaches and the main results of the corresponding experimental validation are discussed in this paper.
Open Data for Global Multimodal Land Use Classification: Outcome of the 2017 IEEE GRSS Data Fusion Contest
In this paper, we present the scientific outcomes of the 2017 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society. The 2017 Contest addressed the problem of local climate zone classification based on a multitemporal and multimodal dataset, including image data (Landsat 8 and Sentinel-2) and vector data (from OpenStreetMap). The competition, which used separate geographical locations for training and testing of the proposed solutions, aimed at models that were accurate (assessed with accuracy metrics on an undisclosed reference for the test cities), general (assessed by spreading the test cities across the globe), and computationally feasible (assessed by limiting the duration of the test phase). The techniques proposed by the participants in the Contest spanned a rather broad range of topics, mixing ideas and methodologies from computer vision and machine learning while remaining deeply rooted in the specificities of remote sensing. In particular, rigorous atmospheric correction, the use of multidate images, and the use of ensemble methods fusing results obtained from different data sources and time instants made the difference.
CHANGE DETECTION BETWEEN DIGITAL SURFACE MODELS FROM AIRBORNE LASER SCANNING AND DENSE IMAGE MATCHING USING CONVOLUTIONAL NEURAL NETWORKS
Airborne photogrammetry and airborne laser scanning are two technologies commonly used for topographic data acquisition at the city level. Change detection between airborne laser scanning data and photogrammetric data is challenging because the two point clouds show different characteristics. After comparing the two types of point clouds, this paper proposes a feed-forward Convolutional Neural Network (CNN) to detect building changes between them. The motivation, from an application point of view, is that point clouds from these two modalities may be the data available for different epochs. Our method consists of three steps: first, the point clouds and orthoimages are converted to raster images; second, square patches are cropped from the raster images and fed into the CNN for change detection; finally, the resulting change map is post-processed with a simple connected component analysis. Experimental results show that the patch-based recall reaches 0.8146 and the precision reaches 0.7632. Object-based evaluation shows that 74 out of 86 building changes are correctly detected.
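As a rough illustration of this kind of pipeline (not the authors' code; the patch size, channel layout and component-size threshold below are assumptions), a patch-based change detector with connected-component post-processing might look as follows:

```python
import numpy as np
import torch
import torch.nn as nn
from scipy import ndimage

PATCH = 64  # assumed square patch size in pixels

class ChangeCNN(nn.Module):
    """Small patch classifier: changed vs. unchanged building footprint."""
    def __init__(self, in_ch=3):                      # e.g. ALS DSM, DIM DSM, orthoimage intensity
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * (PATCH // 4) ** 2, 2))
    def forward(self, x):
        return self.head(self.features(x))

def predict_change_map(rasters, model, stride=PATCH):
    """rasters: (C, H, W) float array of co-registered layers; returns a boolean patch grid."""
    c, h, w = rasters.shape
    grid = np.zeros((h // stride, w // stride), dtype=bool)
    with torch.no_grad():
        for i in range(grid.shape[0]):
            for j in range(grid.shape[1]):
                patch = rasters[:, i*stride:i*stride+PATCH, j*stride:j*stride+PATCH]
                logits = model(torch.tensor(patch, dtype=torch.float32).unsqueeze(0))
                grid[i, j] = logits.argmax(1).item() == 1   # class 1 = "building changed"
    return grid

def remove_small_components(change_map, min_size=2):
    """Connected-component post-processing: drop isolated single-patch detections."""
    labels, n = ndimage.label(change_map)
    sizes = ndimage.sum(change_map, labels, range(1, n + 1))
    keep_ids = 1 + np.flatnonzero(sizes >= min_size)
    return np.isin(labels, keep_ids)
```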
Mapping drivers of tropical forest loss with satellite image time series and machine learning
The rates of tropical deforestation remain high, resulting in carbon emissions, biodiversity loss, and impacts on local communities. Designing effective policies to tackle this requires knowing what the drivers behind deforestation are. Since drivers vary in space and time, producing accurate, spatially explicit maps with regular temporal updates is essential. Drivers can be recognized from satellite imagery, but the scale of tropical deforestation makes it unfeasible to do so manually. Machine learning opens up possibilities for automating and scaling up this process. In this study, we developed and trained a deep learning model to classify the drivers of any forest loss, including deforestation, from satellite image time series. Our model architecture allows us to inspect how the input time series is used to make a prediction, showing that the model learns different patterns for recognizing each driver and highlighting the need for temporal data. We used our model to classify over 588,000 sites to produce a map detailing the drivers behind tropical forest loss. The results confirm that the majority of forest loss is driven by agriculture, but they also show significant regional differences. Such data are a crucial source of information for targeting specific drivers locally and can be updated in the future using free satellite data.
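The abstract does not disclose the exact architecture; as a hedged sketch of the general idea, a recurrent encoder with per-time-step attention weights makes it possible to see which dates of the series contributed to the predicted driver. All names, class counts and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TemporalAttentionClassifier(nn.Module):
    def __init__(self, n_features, n_drivers, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)           # one relevance score per time step
        self.head = nn.Linear(hidden, n_drivers)   # e.g. agriculture, logging, fire, ...

    def forward(self, x):
        # x: (batch, time, features) -- per-date features extracted from the image chips
        h, _ = self.encoder(x)                                   # (batch, time, hidden)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)       # (batch, time) attention weights
        context = (w.unsqueeze(-1) * h).sum(dim=1)               # attention-weighted summary
        return self.head(context), w                             # driver logits + per-date weights

# Example: 24 dates, 10 per-date features, 6 hypothetical driver classes.
model = TemporalAttentionClassifier(n_features=10, n_drivers=6)
logits, weights = model(torch.randn(4, 24, 10))
```

Returning the attention weights alongside the logits is one simple way to trace a prediction back to the parts of the time series that mattered, in the spirit of the interpretability the abstract describes.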
Towards Estimation of 3D Poses and Shapes of Animals from Oblique Drone Imagery
Wildlife research in both terrestrial and aquatic ecosystems now deploys drone technology for tasks such as monitoring, census counts and habitat analysis. Unlike camera traps, drones offer real-time flexibility for adaptable flight paths and camera views, making them ideal for capturing multi-view data on wildlife such as zebras or lions. With recent advances in 3D shape and pose estimation of animals, there is increasing interest in bringing 3D analysis from the ground to the sky by means of drones. This paper reports activities of the EU-funded WildDrone project and performs, for the first time, 3D analyses of animals from oblique drone imagery. Using parametric model fitting, we estimate the 3D shape and pose of animals from the frames of a monocular RGB video. With the goal of appending metric information to parametric animal models using photogrammetric evidence, we propose a pipeline in which we perform a point cloud reconstruction of the scene to scale and localize the animal within the 3D scene. Challenges, planned next steps and future directions are also reported.
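One common way to attach a metric scale to such a reconstruction, offered here only as a generic, hedged sketch rather than the paper's exact pipeline, is to compare camera baselines estimated by the reconstruction (arbitrary model units) with the corresponding baselines from the drone's GNSS log (metres):

```python
import numpy as np

def metric_scale(sfm_cam_centers, gnss_cam_centers):
    """Both inputs: (N, 3) arrays of camera positions in the same frame ordering."""
    sfm = np.asarray(sfm_cam_centers, dtype=float)
    gnss = np.asarray(gnss_cam_centers, dtype=float)
    d_sfm = np.linalg.norm(sfm[1:] - sfm[:-1], axis=1)     # consecutive baselines, model units
    d_gnss = np.linalg.norm(gnss[1:] - gnss[:-1], axis=1)  # consecutive baselines, metres
    valid = d_sfm > 1e-9
    return float(np.median(d_gnss[valid] / d_sfm[valid]))  # robust (median) scale factor

# Multiplying the reconstructed scene and the fitted animal model by this factor
# yields metric quantities (e.g. shoulder height) directly from the reconstruction.
```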
Perspectives in machine learning for wildlife conservation
Data acquisition in animal ecology is rapidly accelerating due to inexpensive and accessible sensors such as smartphones, drones, satellites, audio recorders and bio-logging devices. These new technologies and the data they generate hold great potential for large-scale environmental monitoring and understanding, but are limited by current data processing approaches, which are inefficient in how they ingest, digest, and distill data into relevant information. We argue that machine learning, and especially deep learning approaches, can meet this analytic challenge to enhance our understanding, monitoring capacity, and conservation of wildlife species. Incorporating machine learning into ecological workflows could improve inputs for population and behavior models and eventually lead to integrated hybrid modeling tools, with ecological models acting as constraints for machine learning models and the latter providing data-supported insights. In essence, by combining new machine learning approaches with ecological domain knowledge, animal ecologists can capitalize on the abundance of data generated by modern sensor technologies in order to reliably estimate population abundances, study animal behavior and mitigate human-wildlife conflicts. To succeed, this approach will require close collaboration and cross-disciplinary education between the computer science and animal ecology communities in order to ensure the quality of machine learning approaches and to train a new generation of data scientists in ecology and conservation.
Deep Learning Techniques for Geospatial Data Analysis
Consumer electronic devices such as mobile handsets, goods tagged with RFID labels, and location and position sensors continuously generate a vast amount of location-enriched data called geospatial data. Conventionally, such geospatial data was used for military applications. In recent times, many useful civilian applications have been designed and deployed around such geospatial data, for example recommendation systems that suggest restaurants or places of attraction to a tourist visiting a particular locality. At the same time, civic bodies are harnessing geospatial data generated through remote sensing devices to provide better services to citizens, such as traffic monitoring, pothole identification, and weather reporting. Typically, such applications rely on non-hierarchical machine learning techniques such as Naive Bayes classifiers, support vector machines, and decision trees. Recent advances in the field of deep learning have shown that neural-network-based techniques outperform conventional techniques and provide effective solutions for many geospatial data analysis tasks such as object recognition, image classification, and scene understanding. The chapter presents a survey of the current state of applications of deep learning techniques for analyzing geospatial data.
The chapter is organized as follows: (i) a brief overview of deep learning algorithms; (ii) geospatial analysis: a data science perspective; (iii) deep learning techniques for remote sensing data analytics tasks; (iv) deep learning techniques for GPS data analytics; (v) deep learning techniques for RFID data analytics.
Comment: This is a pre-print of the following chapter: Arvind W. Kiwelekar, Geetanjali S. Mahamunkar, Laxman D. Netak, Valmik B. Nikam, Deep Learning Techniques for Geospatial Data Analysis, published in Machine Learning Paradigms, edited by George A. Tsihrintzis and Lakhmi C. Jain, 2020, Springer, Cham; reproduced with permission of the publisher, Springer, Cham.
Adapting clonally propagated crops to climatic changes: a global approach for taro (Colocasia esculenta (L.) Schott)
Clonally propagated crop species are less adaptable to environmental changes than those that propagate sexually. DNA studies have shown that in all countries where taro (Colocasia esculenta (L.) Schott) has been introduced clonally, its genetic base is narrow. As genetic variation is the most important source of adaptive potential, it appears worthwhile to attempt to increase genetic and phenotypic diversity in order to strengthen smallholders' capacity to adapt to climatic changes. A global experiment involving 14 countries from America, Africa, Asia and the Pacific was conducted to test this approach. Every country received a set of 50 indexed genotypes in vitro, assembling significant genetic diversity. After on-station agronomic evaluation trials, the best genotypes were distributed to farmers for participatory on-farm evaluation. Results indicated that hybrids tolerant to taro leaf blight (TLB, Phytophthora colocasiae Raciborski), developed by the Hawaii, Papua New Guinea and Samoa breeding programmes, outperformed local cultivars in most locations. However, several elite cultivars from SE Asia, also tolerant to TLB, outperformed the improved hybrids in four countries, and in one country none of the introductions performed better than the local cultivars. Introduced genotypes were successfully crossed (controlled crossing) with local cultivars and new hybrids were produced. For the first time in the history of aroid research, seeds were exchanged internationally, injecting tremendous allelic diversity into different countries. If climatic changes cause the problems envisaged, then breeding crops with wide genetic diversity appears to be an appropriate approach to overcoming the disasters that would otherwise ensue.
This research was financially supported by the Europe-Aid project "Adapting clonally propagated crops to climatic and commercial changes" (Grant No. DCI-FOOD/2010/230-267 SPC). Thanks are due to the technicians working on research stations in the 14 countries and to the farmers and their families for their enthusiastic contribution.