Multi-Modal Earth Observation and Deep Learning for Urban Scene Understanding
This research explores semantic segmentation of remote sensing data through deep learning, with a focus on multi-modal data integration, the impact of label noise, and the need for diverse datasets in Earth Observation (EO). It introduces a novel model named TransFusion, designed to fuse 2D images and 3D point clouds directly, avoiding the complexities common to traditional fusion methods used in the context of semantic segmentation. This approach led to improvements in segmentation accuracy, demonstrated by higher mean Intersection over Union (mIoU) scores on the Vaihingen and Potsdam datasets, indicating the model's capability to better interpret spatial and structural information from multi-modal data.

The study also investigates the effects of label noise (incorrect annotations in the training data), a prevalent issue in remote sensing. Through experiments on high-resolution aerial images with intentionally inaccurate labels, it was found that label noise influences model performance differently across object classes, with object size significantly affecting the model's ability to handle labeling errors. The research shows that models are somewhat resilient to random noise, although accuracy decreases even with a small proportion of incorrect labels.

Addressing the geographic bias of urban semantic segmentation datasets, which are concentrated on Europe and North America, the research introduces the UAVPal dataset from Bhopal, India. This effort, along with the development of a new dense predictor head for semantic segmentation, aims to better represent diverse urban landscapes globally. The new segmentation head, which efficiently leverages multi-scale features and notably reduces computational demands, showed improved mIoU scores across various classes and datasets.

Overall, the study contributes to semantic segmentation for EO by improving data fusion methods, offering insights into the effects of label noise, and encouraging the inclusion of diverse geographic data for broader representation. These efforts are steps toward more accurate and efficient remote sensing applications.
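To make the label-noise experiments and the mIoU metric concrete, the sketch below shows one plausible way to inject random label corruption into a segmentation mask and to score it against the clean mask. The noise rate, class count, and mask shape are illustrative assumptions, not the study's actual protocol.

```python
import numpy as np

def inject_label_noise(mask: np.ndarray, noise_rate: float, num_classes: int,
                       rng: np.random.Generator) -> np.ndarray:
    """Randomly replace a fraction of pixel labels with a different class."""
    noisy = mask.copy()
    flip = rng.random(mask.shape) < noise_rate          # pixels to corrupt
    # Shift by a random non-zero offset so the corrupted label
    # always differs from the original one.
    offsets = rng.integers(1, num_classes, size=mask.shape)
    noisy[flip] = (mask[flip] + offsets[flip]) % num_classes
    return noisy

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean Intersection over Union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Example: corrupt 5% of the labels in a toy 256x256 mask with 6 classes.
rng = np.random.default_rng(0)
gt = rng.integers(0, 6, size=(256, 256))
noisy_gt = inject_label_noise(gt, noise_rate=0.05, num_classes=6, rng=rng)
print(f"mIoU of noisy vs. clean labels: {mean_iou(noisy_gt, gt, 6):.3f}")
```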
UAVPal: A New Dataset for Semantic Segmentation in Complex Urban Landscape with Efficient Multiscale Segmentation
Semantic segmentation has recently emerged as a prominent area of interest in Earth observation. Several semantic segmentation datasets already exist, facilitating comparisons among different methods in complex urban scenes. However, most open high-resolution urban datasets are geographically skewed toward Europe and North America, while coverage of Southeast Asia is very limited. The considerable variation in city designs worldwide presents an obstacle to the applicability of computer vision models, especially when the training dataset lacks significant diversity. On the other hand, naively applying computationally expensive models leads to inefficiencies and sometimes poor performance. To tackle the lack of data diversity, we introduce the new UAVPal dataset of complex urban scenes from the city of Bhopal, India. We complement this by introducing a novel dense predictor head and demonstrate that a well-designed head can efficiently take advantage of multiscale features to enhance the benefits of a strong feature extractor backbone. We design our segmentation head to learn the importance of features at various scales for each individual class and refine the final dense prediction accordingly. We tested the proposed head with a state-of-the-art backbone on multiple UAV datasets and a high-resolution satellite image dataset for LULC classification. We observed improved intersection over union (IoU) in various classes and up to 2% better mean IoU. Apart from the performance improvements, we also observed a nearly 50% reduction in computing operations when using the proposed head compared to a traditional segmentation head.
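As a rough illustration of the idea of learning per-class importance of multi-scale features, the PyTorch sketch below builds a tiny dense-prediction head that classifies each scale separately and fuses the results with learned, class-specific scale weights. The module name, channel sizes, and fusion scheme are assumptions made for illustration and do not describe the actual UAVPal head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClasswiseMultiScaleHead(nn.Module):
    """Illustrative head: per-scale class logits fused with learned,
    class-specific scale weights (softmax over scales for each class)."""

    def __init__(self, in_channels: list, num_classes: int):
        super().__init__()
        # One lightweight 1x1 classifier per feature scale.
        self.classifiers = nn.ModuleList(
            nn.Conv2d(c, num_classes, kernel_size=1) for c in in_channels
        )
        # Learnable importance of each scale for each class: (scales, classes).
        self.scale_logits = nn.Parameter(torch.zeros(len(in_channels), num_classes))

    def forward(self, features: list) -> torch.Tensor:
        target_size = features[0].shape[-2:]            # finest resolution
        per_scale = []
        for clf, feat in zip(self.classifiers, features):
            logits = clf(feat)                          # (B, C, h, w)
            per_scale.append(F.interpolate(logits, size=target_size,
                                           mode="bilinear", align_corners=False))
        stacked = torch.stack(per_scale, dim=0)         # (S, B, C, H, W)
        weights = torch.softmax(self.scale_logits, dim=0)  # (S, C)
        # Broadcast the class-specific scale weights and fuse.
        return (stacked * weights[:, None, :, None, None]).sum(dim=0)

# Toy usage with three backbone scales.
feats = [torch.randn(2, 64, 64, 64),
         torch.randn(2, 128, 32, 32),
         torch.randn(2, 256, 16, 16)]
head = ClasswiseMultiScaleHead([64, 128, 256], num_classes=8)
print(head(feats).shape)  # torch.Size([2, 8, 64, 64])
```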
Polarimetric calibration of spaceborne and airborne multifrequency SAR data for scattering-based characterization of manmade and natural features
Polarimetric Synthetic Aperture Radar (PolSAR) systems use electromagnetic radiation of different polarizations at microwave frequencies to collect scattering information from targets on the Earth. Nevertheless, as with any other electronic device, PolSAR systems are not ideal and are subject to distortions. The most important of these are the polarimetric distortions caused by channel imbalance, phase bias, and crosstalk between the different polarization channels. For spaceborne PolSAR systems, the Earth's ionosphere contributes an additional polarimetric distortion known as Faraday rotation. In this study, polarimetric calibration of Quad-pol and Compact-pol datasets acquired by different airborne and spaceborne PolSAR systems was performed to estimate and minimize these polarimetric distortions. The impact of these distortions on the scattering mechanisms of ground targets, and its dependence on the radar wavelength, was also analyzed. The study used the UAVSAR L-band Quad-pol dataset, the RADARSAT-2 Quad-pol dataset, ALOS-2 PALSAR-2, ISRO's L&S-Band Airborne SAR (LS-ASAR) Quad-pol and Compact-pol datasets, and the RISAT-1 Compact-pol dataset. Calibration of the airborne PolSAR data was carried out to understand the level of polarimetric distortion in the LS-ASAR product, a precursor to the spaceborne dual-frequency L&S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission. Crosstalk was found to be the dominant polarimetric distortion, affecting the PolSAR datasets more severely than the other distortions, and it is stronger for longer-wavelength PolSAR systems. The Quegan, Improved Quegan, and Ainsworth algorithms for crosstalk estimation and minimization were applied to the different Quad-pol datasets; the Improved Quegan algorithm was found suitable for removing crosstalk from datasets with high crosstalk, while the Ainsworth algorithm was suitable for datasets with low crosstalk. The Freeman method of polarimetric calibration was implemented for the Compact-pol datasets and considerably minimized the polarimetric distortions. Analysis based on the coherency matrix, scattering matrix, model-based decompositions, polarimetric signatures, and roll-invariant parameters showed that, after polarimetric calibration, all datasets exhibited the scattering responses expected from the ground targets.
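For context on the distortion model being calibrated, the numpy sketch below applies a one-way Faraday rotation to an ideal scattering matrix and shows how it introduces apparent HV/VH asymmetry. The rotation angle and the trihedral-like target are illustrative assumptions, and real calibration must additionally handle crosstalk and channel imbalance as described above.

```python
import numpy as np

def faraday_rotation(omega_deg: float) -> np.ndarray:
    """One-way Faraday rotation matrix for angle omega (degrees)."""
    w = np.deg2rad(omega_deg)
    return np.array([[np.cos(w),  np.sin(w)],
                     [-np.sin(w), np.cos(w)]])

# Ideal trihedral corner reflector: S_hh = S_vv = 1, S_hv = S_vh = 0.
S_true = np.eye(2)

# Forward model (ignoring crosstalk and channel imbalance for clarity):
# the wave is rotated once on transmit and once on receive.
omega = 5.0                     # illustrative ionospheric rotation angle (deg)
R = faraday_rotation(omega)
M_observed = R @ S_true @ R

print("Observed scattering matrix:\n", M_observed.round(4))
# The off-diagonal terms become non-zero and antisymmetric
# (M_hv = sin(2*omega) = -M_vh), the classic signature used to estimate
# and remove Faraday rotation from spaceborne PolSAR data.
print("M_hv =", M_observed[0, 1].round(4), " M_vh =", M_observed[1, 0].round(4))
```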
Isolation, proliferation, characterization and in vivo osteogenic potential of bone-marrow derived mesenchymal stem cells (rBMSC) in rabbit model
Information on the isolation and characterization of rabbit MSCs and their evaluation in critical-size defects (CSD) is scarcely available. Here, we attempted to isolate, proliferate, differentiate, characterize and evaluate the in vivo osteogenic potential of bone marrow derived mesenchymal stem cells (BMSCs) collected from New Zealand White rabbits. The cells were isolated and proliferated in antibiotic-supplemented DMEM (Dulbecco's Modified Eagle's Medium). Osteogenic differentiation of rabbit bone marrow derived mesenchymal stem cells (rBMSCs) was induced by osteogenic supplements and evaluated by alizarin red staining and an alkaline phosphatase activity assay, and the cells were characterized by specific CD surface antigen markers through FACS (fluorescence-activated cell sorting) and RT-PCR. Day 0 cells were round/oval and floating, and on days 3-5, cell attachment with spindle/polygonal/star morphology was seen. On subsequent passages, they assumed a uniform spindle-shaped morphology. After culturing in the respective differentiation media, rBMSCs showed increased alkaline phosphatase activity, intense alizarin red staining, blue staining with Alcian blue, and deep red colour on oil red O staining, supporting their osteogenic, chondrogenic and adipogenic differentiation ability. The in vivo osteogenic potential of rBMSCs was evaluated in a 30 mm critical-size defect of the rabbit radius. The cellular morphology of plastic-adherent cells appeared as single cells in P0 and as elongated/spindle-shaped clusters in P1, P2 and P3. The rBMSCs were positive for CD44, CD73 and CD105, negative for CD34 and CD45, and could differentiate into osteogenic cells in osteogenic induction media. The in vivo experiments in the rabbit CSD model confirmed that rBMSCs promote faster healing of critical-size defects. Hence, we suggest that rBMSCs are suitable for promoting bone formation in fracture healing and non-union.
Artificial Intelligence for the Advancement of Lunar and Planetary Science and Exploration
Over the past decades of NASA’s inner solar system exploration, data obtained from the Moon alone accounts for ~76%. Most lunar orbital spacecraft of the past and present carried imaging cameras and spectrometers (including multispectral and hyperspectral payloads), as well as a large variety of other passive and active instruments. For example, NASA’s Lunar Reconnaissance Orbiter (LRO) has been operating for more than 10 years, providing ~1206 TB of lunar data, which amounts to ~99.5% of the total data contributed by NASA-built instruments. Given recent advances in instrument and communication capabilities, the amount of data returned from spacecraft is expected to keep rising quickly. This white paper focuses on potential components of AI and ML that could help accelerate the future exploration of the Moon and other planetary bodies. It highlights selected AI/ML-based approaches for lunar and planetary surface science and exploration, the need for open-source availability of training, validation, and testing datasets for AI/ML-based approaches, and the need for opportunities to further bridge the gap between industry and academia in advancing AI/ML-based research in lunar and planetary science and exploration.