
    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the combination of synthetic aperture radar (SAR) and deep learning technology, and aims to further promote the development of intelligent SAR image interpretation. SAR is an important active microwave imaging sensor whose all-day, all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in remote sensing applications, e.g., geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results on applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    Deep Learning for Time-Series Analysis of Optical Satellite Imagery

    In this cumulative thesis, I cover four papers on time-series analysis of optical satellite imagery. The contribution is split into two parts. The first introduces DENETHOR and DynamicEarthNet, two landmark datasets with high-quality ground truth data for agricultural monitoring and change detection. The second introduces SiROC and SemiSiROC, two methodological contributions to label-efficient change detection.

    Machine learning based anomaly detection for industry 4.0 systems.

    This thesis studies anomaly detection in industrial systems using technologies from the Fourth Industrial Revolution (4IR), such as the Internet of Things, Artificial Intelligence, 3D Printing, and Augmented Reality. The goal is to provide tools that can be used in real-world scenarios to detect system anomalies, with the intention of improving production and maintenance processes. The thesis investigates the applicability and implementation of 4IR technology architectures, AI-driven machine learning systems, and advanced visualization tools to support decision-making based on the detection of anomalies. The work covers a range of topics, including the conception of a 4IR system based on a generic architecture, the design of a data acquisition system for analysis and modelling, the creation of ensemble supervised and semi-supervised models for anomaly detection, the detection of anomalies through frequency analysis, and the visualization of associated data using Visual Analytics. The results show that the proposed methodology for integrating anomaly detection systems into new or existing industries is valid, and that combining 4IR architectures, ensemble machine learning models, and Visual Analytics tools significantly enhances the anomaly detection process for industrial systems. Furthermore, the thesis presents a guiding framework for data engineers and end-users.
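
    As a small illustration of the frequency-analysis approach mentioned above, the sketch below flags a signal window whose spectrum deviates strongly from a healthy baseline spectrum. This is a minimal sketch assuming NumPy only; the synthetic signals, the 50/120 Hz components, and the scoring rule are illustrative, not taken from the thesis.

```python
import numpy as np

def spectrum(x: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of a Hann-windowed signal."""
    return np.abs(np.fft.rfft(x * np.hanning(len(x))))

rng = np.random.default_rng(0)
t = np.arange(1024) / 1000.0                            # 1 kHz sampling, 1024 samples
healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
faulty = healthy + 0.5 * np.sin(2 * np.pi * 120 * t)    # extra vibration component

baseline = spectrum(healthy)
score = np.linalg.norm(spectrum(faulty) - baseline) / np.linalg.norm(baseline)
print(f"anomaly score = {score:.2f}")                   # flag when above a tuned threshold
```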

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound (US) acquisition is widespread in the biomedical field due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allows the physician to monitor the evolution of the patient's disease and supports diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are relevant to improving both the visual evaluation of the physician and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of US 2D images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. Then, we introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method, where a deep learning model improves the results of the interpolation to match the target image (i.e., the high-resolution image). We improve the accuracy of the prediction of the reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce our kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to US 2D and 3D images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
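
    The low-rank SVD idea above can be illustrated in a few lines. In this minimal NumPy sketch, a fixed relative cut-off on the singular values stands in for the thresholds that the thesis learns with a deep learning model; the image and noise level are synthetic stand-ins.

```python
import numpy as np

def svd_denoise(img: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Reconstruct the image from singular values above a relative threshold."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    s[s < keep_ratio * s[0]] = 0.0            # zero out small singular values
    return (U * s) @ Vt                       # low-rank reconstruction

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 128)), np.cos(np.linspace(0, 3, 128)))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)   # additive-noise stand-in
denoised = svd_denoise(noisy)
print(f"residual MSE = {np.mean((denoised - clean) ** 2):.5f}")
```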

    XVI Agricultural Science Congress 2023: Transformation of Agri-Food Systems for Achieving Sustainable Development Goals

    The XVI Agricultural Science Congress, jointly organized by the National Academy of Agricultural Sciences (NAAS) and the Indian Council of Agricultural Research (ICAR) during 10-13 October 2023 at Hotel Le Meridien, Kochi, is a mega event echoing the theme “Transformation of Agri-Food Systems for Achieving Sustainable Development Goals”. ICAR-Central Marine Fisheries Research Institute takes great pride in hosting the XVI ASC, which will be the perfect point of convergence of academicians, researchers, students, farmers, fishers, traders, entrepreneurs, and other stakeholders involved in agri-production systems that ensure food and nutritional security for a burgeoning population. With impending challenges like growing urbanization, increasing unemployment, a growing population, increasing food demands, degradation of natural resources through human interference, climate change impacts, and natural calamities, the challenges ahead for India to achieve the Sustainable Development Goals (SDGs) set out by the United Nations are many. The XVI ASC will provide an interface for the dissemination of useful information across all sectors of stakeholders invested in developing India’s agri-food systems, not only to meet the SDGs, but also to ensure a stable structure on par with agri-food systems around the world. It is an honour to present this Book of Abstracts, a compilation of a total of 668 abstracts that convey the results of R&D programs being carried out in India. The abstracts have been categorized under 10 major themes: 1. Ensuring Food & Nutritional Security: Production, Consumption and Value Addition; 2. Climate Action for Sustainable Agri-Food Systems; 3. Frontier Science and Emerging Genetic Technologies: Genome, Breeding, Gene Editing; 4. Livestock-based Transformation of Food Systems; 5. Horticulture-based Transformation of Food Systems; 6. Aquaculture & Fisheries-based Transformation of Food Systems; 7. Nature-based Solutions for Sustainable Agri-Food Systems; 8. Next Generation Technologies: Digital Agriculture, Precision Farming and AI-based Systems; 9. Policies and Institutions for Transforming Agri-Food Systems; 10. International Partnership for Research, Education and Development. This Book of Abstracts sets the stage for the mega event itself, which will see a flow of knowledge emanating from a zeal to transform and push India’s agri-food systems to perform par excellence and achieve not only the SDGs of the UN but also to rise as a world leader in the sector. I thank and congratulate all the participants who have submitted abstracts for this mega event, and I also applaud the team that has strived hard to publish this Book of Abstracts ahead of the event. I wish all the delegates and participants a very vibrant and memorable time at the XVI ASC.

    A Deep Wavelet AutoEncoder Scheme for Image Compression

    Since its appearance, the Discrete Wavelet Transform (DWT) has been used with great success in a wide range of applications, especially image compression and signal denoising. Combined with several and various approaches, this powerful mathematical tool has shown its strength in compressing images with a high compression ratio and good visual quality. This paper attempts to demonstrate that it is needless to follow the classical three-stage compression process of the baseline method: pixel transformation, quantization, and binary coding. Indeed, in this work we propose a new image compression scheme based on an unsupervised convolutional neural network AutoEncoder (CAE) that reconstructs the approximation sub-band issued from image decomposition by the DWT. To evaluate the model’s performance, we use the Kodak dataset, a set of 24 images that have never been compressed with a lossy technique, and apply the approach to each of them. We compare our results with those obtained using a standard compression method, in terms of four performance parameters: Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), and Compression Ratio (CR). The proposed scheme offers significant improvement in distortion metrics over the traditional image compression method when evaluated for perceptual quality; moreover, it produces images of better visual quality, with clearer details and textures, which demonstrates its effectiveness and robustness.
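
    To make the scheme concrete, here is a minimal sketch, assuming PyWavelets and PyTorch, of feeding the DWT approximation sub-band to a small convolutional autoencoder. The layer sizes, the Haar wavelet, and the random stand-in image are illustrative choices, and the training loop is omitted.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

class SubbandCAE(nn.Module):
    """Tiny convolutional autoencoder for the DWT approximation sub-band."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

img = np.random.rand(256, 256)                        # stand-in for a Kodak image
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')             # one-level 2D DWT decomposition
x = torch.from_numpy((cA / cA.max()).astype(np.float32))[None, None]

model = SubbandCAE()                                  # untrained; training loop omitted
with torch.no_grad():
    recon = model(x)                                  # reconstructed approximation band
mse = torch.mean((recon - x) ** 2).item()
psnr = 10 * np.log10(1.0 / mse)                       # PSNR for data normalised to [0, 1]
print(f"MSE={mse:.5f}  PSNR={psnr:.2f} dB")
```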

    Systematic Approaches for Telemedicine and Data Coordination for COVID-19 in Baja California, Mexico

    Conference proceedings info: ICICT 2023: The 6th International Conference on Information and Computer Technologies, Raleigh, HI, United States, March 24-26, 2023, pages 529-542. https://doi.org/10.1007/978-981-99-3236- We provide a model for the systematic implementation of telemedicine within a large evaluation center for COVID-19 in the area of Baja California, Mexico. Our model is based on human-centric design factors and cross-disciplinary collaborations for scalable, data-driven enablement of smartphone, cellular, and video teleconsultation technologies to link hospitals, clinics, and emergency medical services for point-of-care assessments of COVID testing, and for subsequent treatment and quarantine decisions. A multidisciplinary team was rapidly created in cooperation with different institutions, including the Autonomous University of Baja California, the Ministry of Health, the Command, Communication and Computer Control Center of the Ministry of the State of Baja California (C4), Colleges of Medicine, and the College of Psychologists. Our objective is to provide information to the public, to evaluate COVID-19 in real time, and to track regional, municipal, and state-wide data in real time to inform supply chains and resource allocation in anticipation of a surge in COVID-19 cases.

    A new convolutional neural network based on combination of circlets and wavelets for macular OCT classification

    Artificial intelligence (AI) algorithms, encompassing machine learning and deep learning, can assist ophthalmologists in the early detection of various ocular abnormalities through the analysis of retinal optical coherence tomography (OCT) images. Despite considerable progress in these algorithms, several limitations persist in medical imaging fields, where a lack of data is a common issue. Accordingly, specific image processing techniques, such as time–frequency transforms, can be employed in conjunction with AI algorithms to enhance diagnostic accuracy. This research investigates the influence of non-data-adaptive time–frequency transforms, specifically X-lets, on the classification of OCT B-scans. For this purpose, each B-scan was transformed using every considered X-let individually, and all the sub-bands were utilized as the input for a designed 2D Convolutional Neural Network (CNN) to extract optimal features, which were subsequently fed to the classifiers. Evaluating per-class accuracy shows that the 2D Discrete Wavelet Transform (2D-DWT) yields superior outcomes for normal cases, whereas the circlet transform outperforms the other X-lets for abnormal cases characterized by circles in their retinal structure (due to the accumulation of fluid). As a result, we propose a novel transform named CircWave, formed by concatenating all sub-bands from the 2D-DWT and the circlet transform, with the objective of enhancing the per-class accuracy of both normal and abnormal cases simultaneously. Our findings show that classification results based on the CircWave transform outperform those derived from original images or any individual transform. Furthermore, Grad-CAM class activation visualization for B-scans reconstructed from CircWave sub-bands highlights a greater emphasis on circular formations in abnormal cases and on straight lines in normal cases, in contrast to the focus on irrelevant regions in original B-scans. To assess the generalizability of our method, we applied it to another dataset obtained from a different imaging system and achieved promising accuracies of 94.5% and 90% for the first and second datasets, respectively, which are comparable with results from previous studies. The proposed CNN based on CircWave sub-bands (i.e., CircWaveNet) not only produces superior outcomes but also offers more interpretable results, with a heightened focus on features crucial for ophthalmologists.
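
    A minimal sketch of the sub-band stacking step, assuming PyWavelets: the four one-level 2D-DWT sub-bands of a B-scan become CNN input channels. There is no standard Python implementation of the circlet transform, so `circlet_subbands` below is a hypothetical placeholder for the authors' transform; CircWave would concatenate its output along the channel axis.

```python
import numpy as np
import pywt

def dwt_subbands(bscan: np.ndarray) -> np.ndarray:
    """Stack the four one-level 2D-DWT sub-bands as CNN input channels."""
    cA, (cH, cV, cD) = pywt.dwt2(bscan, 'haar')
    return np.stack([cA, cH, cV, cD], axis=0)          # shape (4, H/2, W/2)

def circlet_subbands(bscan: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder: the circlet transform is not in standard libraries.
    raise NotImplementedError("circlet transform implementation goes here")

bscan = np.random.rand(256, 256)                       # stand-in for an OCT B-scan
x = dwt_subbands(bscan)
# CircWave input: np.concatenate([dwt_subbands(bscan), circlet_subbands(bscan)], axis=0)
print(x.shape)                                         # (4, 128, 128)
```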

    Applications of Physically Accurate Deep Learning for Processing Digital Rock Images

    Digital rock analysis aims to improve our understanding of the fluid flow properties of reservoir rocks, which are important for enhanced oil recovery, hydrogen storage, carbon dioxide storage, and groundwater management. X-ray microcomputed tomography (micro-CT) is the primary approach to capturing the structure of porous rock samples for digital rock analysis. Initially, the obtained micro-CT images are processed using image-based techniques, such as registration, denoising, and segmentation, depending on various requirements. Numerical simulations are then conducted on the digital models for petrophysical prediction. The accuracy of the numerical simulation depends strongly on the quality of the micro-CT images; image processing is therefore a critical step in digital rock analysis. Recent advances in deep learning have surpassed conventional methods for image processing. Herein, the utility of convolutional neural networks (CNN) and generative adversarial networks (GAN) is assessed with regard to various applications in digital rock image processing, such as segmentation, super-resolution, and denoising. To obtain training data, different sandstone and carbonate samples were scanned using various micro-CT facilities. After that, validation images previously unseen by the trained neural networks are utilised to evaluate the performance and robustness of the proposed deep learning techniques. Various threshold scenarios are applied to segment the reconstructed digital rock images for sensitivity analyses. Then, quantitative petrophysical analyses, such as porosity, absolute/relative permeability, and pore size distribution, are implemented to estimate the physical accuracy of the digital rock data against the corresponding ground truth data. The results show that both CNN and GAN deep learning methods can provide physically accurate digital rock images with less user bias than traditional approaches. These results unlock new pathways for various applications related to the characterisation of porous reservoir rocks.
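
    As a small illustration of the petrophysical checks mentioned above, the sketch below computes porosity from a segmented micro-CT volume. It is a minimal NumPy sketch; the global threshold and the random grey-scale volume are stand-ins for a real segmentation pipeline.

```python
import numpy as np

def porosity(segmented: np.ndarray) -> float:
    """Porosity = pore voxels / total voxels of a binary segmented volume."""
    return float(np.count_nonzero(segmented) / segmented.size)

rng = np.random.default_rng(0)
grey = rng.random((64, 64, 64))           # stand-in for a grey-scale micro-CT volume
binary = (grey > 0.8).astype(np.uint8)    # simple global threshold: 1 = pore voxel
print(f"porosity = {porosity(binary):.3f}")
```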

    Advanced Techniques for Ground Penetrating Radar Imaging

    Ground penetrating radar (GPR) has become one of the key technologies in subsurface sensing and, in general, in non-destructive testing (NDT), since it is able to detect both metallic and nonmetallic targets. GPR for NDT has been successfully introduced in a wide range of sectors, such as mining and geology, glaciology, civil engineering and civil works, archaeology, and security and defense. In recent decades, improvements in georeferencing and positioning systems have enabled the introduction of synthetic aperture radar (SAR) techniques in GPR systems, yielding GPR–SAR systems capable of providing high-resolution microwave images. In parallel, the radiofrequency front-end of GPR systems has been optimized in terms of compactness (e.g., smaller Tx/Rx antennas) and cost. These advances, combined with improvements in autonomous platforms, such as unmanned terrestrial and aerial vehicles, have fostered new fields of application for GPR, where fast and reliable detection capabilities are demanded. In addition, processing techniques have been improved, taking advantage of the research conducted in related fields like inverse scattering and imaging. As a result, novel and robust algorithms have been developed for clutter reduction, automatic target recognition, and efficient processing of large sets of measurements to enable real-time imaging, among others. This Special Issue provides an overview of the state of the art in GPR imaging, focusing on the latest advances from both hardware and software perspectives.
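
    As a small, concrete example of clutter reduction, the sketch below applies classic mean-trace (background) subtraction to a GPR B-scan, which suppresses the horizontal banding caused by antenna coupling and flat interfaces. This is a minimal NumPy sketch of one standard technique, not necessarily one of the algorithms covered in this Special Issue; the array shape is illustrative.

```python
import numpy as np

def remove_background(bscan: np.ndarray) -> np.ndarray:
    """Subtract the mean A-scan from every trace of a (samples, traces) B-scan."""
    return bscan - bscan.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
bscan = rng.standard_normal((512, 200))   # stand-in: 512 time samples x 200 traces
clean = remove_background(bscan)          # horizontal banding removed
```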