
    Hardware-Accelerated SAR Simulation with NVIDIA-RTX Technology

    Synthetic Aperture Radar (SAR) is a critical sensing technology whose image quality is notably independent of the sensor-to-target distance and which has numerous cross-cutting applications, e.g., target recognition, mapping, surveillance, oceanography, geology, forestry (biomass, deforestation), disaster monitoring (volcanic eruptions, oil spills, flooding), and infrastructure tracking (urban growth, structure mapping). SAR uses a high-power antenna to illuminate target locations with electromagnetic radiation, e.g., 10 GHz radio waves; the backscatter from the illuminated surface is sensed by the antenna and then used to generate images of structures. Real SAR data is difficult and costly to produce and, for research, lacks a reliable source of ground truth. This article proposes an open-source SAR simulator that computes phase histories for arbitrary 3D scenes using the ray-tracing hardware made commercially available through NVIDIA's RTX graphics card series. The OptiX GPU ray-tracing library for NVIDIA GPUs is used to calculate SAR phase histories at unprecedented computational speeds. The simulation results are validated against existing SAR simulation code for spotlight SAR illumination of point targets. The computational performance of this approach provides orders-of-magnitude speed increases over CPU simulation, with an additional order of magnitude of acceleration when simulations are run on RTX GPUs, which include hardware specifically designed to accelerate OptiX ray tracing. The article describes the OptiX simulator's structure, processing framework, and the calculations that afford execution on massively parallel GPU devices. The shortcoming of the OptiX library's restriction to single-precision floating-point representation is discussed, and modifications of sensitive calculations are proposed to reduce truncation error, thereby increasing simulation accuracy under this constraint.
    Comment: 17 pages, 7 figures, Algorithms for Synthetic Aperture Radar Imagery XXVII, SPIE Defense + Commercial Sensing 202
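
    The single-precision workaround mentioned above lends itself to a short illustration. The following numpy sketch (not the article's OptiX code; all names and the exact geometry handling are assumptions) forms one frequency sample of a spotlight phase history while referencing each ray's range to the scene center, so the argument of the complex exponential stays small enough to survive float32 truncation:

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def phase_history_sample(antenna_pos, hit_points, reflectivity, freq, scene_center):
        """Accumulate one frequency sample of a spotlight-SAR phase history.

        Hypothetical helper illustrating the precision trick described in the
        abstract: ranges are referenced to the scene center before the phase is
        formed, so the phase argument stays small even in single precision.
        """
        # Two-way range from the antenna to every ray hit point.
        r_hit = np.linalg.norm(hit_points - antenna_pos, axis=1)
        r_ref = np.linalg.norm(scene_center - antenna_pos)
        # The differential range is on the order of the scene size, not the
        # standoff distance, which preserves precision in float32.
        dr = (r_hit - r_ref).astype(np.float32)
        phase = (-4.0 * np.pi * freq / C) * dr
        return np.sum(reflectivity * np.exp(1j * phase))
    ```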

    A review of synthetic-aperture radar image formation algorithms and implementations: a computational perspective

    Designing synthetic-aperture radar (SAR) image formation systems can be challenging due to the numerous options of algorithms and devices that can be used. There are many SAR image formation algorithms, such as backprojection, matched filter, polar format, range-Doppler, and chirp scaling. Each algorithm presents its own advantages and disadvantages in terms of efficiency and image quality; thus, we aim to introduce some of the most common SAR image formation algorithms and compare them on these two aspects. Depending on the requirements of each individual system and implementation, there are many device options to choose from, for instance FPGAs, GPUs, CPUs, many-core CPUs, and microcontrollers. We present a review of the state of the art of SAR imaging system implementations. We also compare such implementations in terms of power consumption, execution time, and image quality for the different algorithms used.
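
    Of the algorithms listed in the review, time-domain backprojection is the most direct to state. A minimal, deliberately unoptimized numpy sketch (parameter names are illustrative, not taken from the review) shows the per-pulse accumulation at the heart of the algorithm:

    ```python
    import numpy as np

    def backprojection(rc_pulses, range_axis, platform_pos, pixels, fc):
        """Naive time-domain backprojection of range-compressed pulses.

        rc_pulses    : (num_pulses, num_bins) complex range-compressed data
        range_axis   : (num_bins,) range of each compressed sample, metres
        platform_pos : (num_pulses, 3) antenna phase-centre positions, metres
        pixels       : (num_pixels, 3) image pixel positions, metres
        fc           : carrier frequency, Hz
        """
        c = 299_792_458.0
        image = np.zeros(len(pixels), dtype=complex)
        for p in range(len(rc_pulses)):
            # Distance from this pulse's antenna position to every pixel.
            r = np.linalg.norm(pixels - platform_pos[p], axis=1)
            # Sample the compressed pulse at each pixel's range (nearest bin).
            idx = np.clip(np.searchsorted(range_axis, r), 0, len(range_axis) - 1)
            # Restore the carrier phase removed at demodulation and accumulate.
            image += rc_pulses[p, idx] * np.exp(1j * 4.0 * np.pi * fc * r / c)
        return image
    ```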

    SAR Image Formation via Subapertures and 2D Backprojection

    Radar imaging requires the use of a wide bandwidth and a long coherent processing interval, resulting in range and Doppler migration throughout the observation period. This migration must be compensated in order to properly image a scene of interest at full resolution, and many algorithms with various strengths and weaknesses are available. Here, a subaperture-based imaging algorithm is proposed, which first forms range-Doppler (RD) images from slow-time sub-intervals and then coherently integrates the resulting coarse-resolution RD maps to produce a full-resolution SAR image. A two-dimensional backprojection-style approach is used to perform distortion-free integration of these RD maps. This technique retains many of the benefits of traditional backprojection; however, the architecture of the algorithm is chosen such that several steps are shared with typical target detection algorithms. These steps are chosen so that no compromises need to be made to data quality, allowing for high-quality imaging while also preserving the data for detection algorithms. Additionally, the algorithm benefits from computational savings that make it an excellent imaging algorithm for implementation in a simultaneous SAR-GMTI architecture.
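
    The two stages of the proposed algorithm can be sketched compactly. The snippet below is a schematic numpy rendering of the idea (subaperture sizes, interpolation, and phase conventions are simplified assumptions, not the paper's implementation): slow-time blocks are turned into coarse range-Doppler maps, and each image pixel is then accumulated by looking up its (range, Doppler) coordinate in every map.

    ```python
    import numpy as np

    C = 299_792_458.0

    def rd_maps(rc_pulses, pulses_per_sub):
        """FFT range-compressed pulses over slow time within each subaperture,
        giving coarse range-Doppler maps of shape (subaperture, Doppler, range)."""
        n_sub = len(rc_pulses) // pulses_per_sub
        blocks = rc_pulses[: n_sub * pulses_per_sub].reshape(n_sub, pulses_per_sub, -1)
        return np.fft.fftshift(np.fft.fft(blocks, axis=1), axes=1)

    def integrate_pixel(maps, pixel, sub_centers, sub_velocities,
                        range_axis, doppler_axis, fc):
        """Backprojection-style coherent sum of one pixel over the RD maps:
        sample each map at the pixel's (range, Doppler) coordinate, restore the
        carrier phase, and accumulate."""
        value = 0.0 + 0.0j
        for m, pos, vel in zip(maps, sub_centers, sub_velocities):
            los = pixel - pos
            r = np.linalg.norm(los)
            fd = 2.0 * fc / C * np.dot(vel, los / r)    # Doppler of a static pixel
            i_d = np.argmin(np.abs(doppler_axis - fd))  # nearest-bin lookup
            i_r = np.argmin(np.abs(range_axis - r))
            value += m[i_d, i_r] * np.exp(1j * 4.0 * np.pi * fc * r / C)
        return value
    ```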

    Scalable computing for earth observation - Application on Sea Ice analysis

    In recent years, deep learning (DL) networks have shown considerable improvements and have become a preferred methodology in many different applications. These networks have outperformed other classical techniques, particularly in large-data settings. In satellite-based earth observation, for example, DL algorithms have demonstrated the ability to accurately learn complicated nonlinear relationships in input data and have thus contributed to advancement of the field. However, the training process of these networks carries heavy computational overheads. The reason is two-fold: the sizable complexity of these networks and the high number of training samples needed to learn all the parameters comprising these architectures. Although the quantity of training data generally enhances the accuracy of the trained models, the computational cost may restrict the amount of analysis that can be done. This issue is particularly critical in satellite remote sensing, where a myriad of satellites generate an enormous amount of data daily, and acquiring in-situ ground truth for building a large training dataset is a fundamental prerequisite. This dissertation considers various aspects of deep learning based sea ice monitoring from SAR data. In this application, labeling data is very costly and time-consuming, and in some cases it is not even achievable due to the challenges of establishing the required domain knowledge, specifically when it comes to monitoring Arctic sea ice with Synthetic Aperture Radar (SAR), which is the application domain of this thesis. Because the Arctic is remote, has long dark seasons, and has a very dynamic weather system, the collection of reliable in-situ data is very demanding. In addition to the challenges of interpreting SAR data of sea ice, this makes SAR-based sea ice analysis with DL networks a complicated process. We propose novel DL methods to cope with the scarcity of training data and to address the computational cost of the training process. We analyze DL network capabilities based on self-designed architectures and learning strategies, such as transfer learning, for sea ice classification. We also address the scarcity of training data by proposing a novel deep semi-supervised learning method based on SAR data that incorporates unlabeled data into the training process. Finally, a new distributed DL method that can be used in a semi-supervised manner is proposed to address the computational complexity of deep neural network training.
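
    As one concrete way to make "incorporating unlabeled data" tangible, the sketch below shows a generic pseudo-labeling training step in PyTorch. It is not the semi-supervised method proposed in the thesis, only a common baseline scheme; the model, optimizer, and confidence threshold are assumptions:

    ```python
    import torch
    import torch.nn.functional as F

    def semi_supervised_step(model, optimizer, labeled, unlabeled, threshold=0.9):
        """One pseudo-labeling step: train on scarce labels plus the model's own
        confident predictions on unlabeled SAR patches."""
        x_l, y_l = labeled          # labeled patches and class indices
        x_u = unlabeled             # unlabeled patches

        # Supervised loss on the small labeled batch.
        loss = F.cross_entropy(model(x_l), y_l)

        # Pseudo-labels: keep only confident predictions on unlabeled data.
        with torch.no_grad():
            probs = F.softmax(model(x_u), dim=1)
            conf, pseudo = probs.max(dim=1)
            mask = conf > threshold
        if mask.any():
            loss = loss + F.cross_entropy(model(x_u[mask]), pseudo[mask])

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```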

    Incremental closeness centrality in distributed memory

    Networks are commonly used to model traffic patterns, social interactions, or web pages. The vertices in a network do not all possess the same characteristics: some vertices are naturally more connected, and some are more important. Closeness centrality (CC) is a global metric that quantifies how important a given vertex is in the network. When the network is dynamic and keeps changing, the relative importance of the vertices also changes. The best-known algorithm for computing CC scores makes it impractical to recompute them from scratch after each modification. In this paper, we propose Streamer, a distributed-memory framework for incrementally maintaining the closeness centrality scores of a network upon changes. It leverages pipelined, replicated parallelism and SpMM-based BFSs, and it takes NUMA effects into account. It makes maintaining the closeness centrality values of real-life networks with millions of interactions significantly faster and obtains almost linear speedups on a 64-node cluster with 8 threads per node.
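
    Closeness centrality itself, and the SpMM-flavored BFS the framework builds on, can be illustrated in a few lines. The function below is an illustrative reimplementation, not Streamer's code: it advances a whole batch of BFS sources one level per sparse-matrix-times-dense-matrix product and derives the CC score of each source.

    ```python
    import numpy as np

    def closeness_batch(adj, sources):
        """Closeness centrality for a batch of sources via multi-source BFS.

        adj     : (n, n) scipy.sparse CSR 0/1 adjacency matrix (undirected)
        sources : list of vertex ids
        """
        n = adj.shape[0]
        b = len(sources)
        frontier = np.zeros((n, b), dtype=bool)
        frontier[sources, np.arange(b)] = True
        visited = frontier.copy()
        dist_sum = np.zeros(b)
        reached = np.ones(b)               # each source reaches itself
        level = 0
        while frontier.any():
            level += 1
            # One BFS level for every source at once (the SpMM step), then
            # keep only vertices not seen before.
            frontier = (adj @ frontier.astype(np.float32) > 0) & ~visited
            visited |= frontier
            new = frontier.sum(axis=0)
            dist_sum += level * new
            reached += new
        # CC(v) = (#reached - 1) / sum of shortest-path distances from v.
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(dist_sum > 0, (reached - 1) / dist_sum, 0.0)
    ```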

    NOVEL ALGORITHMS AND TOOLS FOR LIGAND-BASED DRUG DESIGN

    Computer-aided drug design (CADD) has become an indispensable component of modern drug discovery projects. The prediction of the physicochemical and pharmacological properties of candidate compounds effectively increases the probability that drug candidates will pass later phases of clinical trials. Ligand-based virtual screening exhibits advantages over structure-based drug design in terms of its wide applicability and high computational efficiency. The established chemical repositories and reported bioassays form a gigantic knowledge base from which to derive quantitative structure-activity relationships (QSAR) and structure-property relationships (QSPR). In addition, the rapid advance of machine learning techniques suggests new solutions for mining huge compound databases. In this thesis, a novel ligand classification algorithm, Ligand Classifier of Adaptively Boosting Ensemble Decision Stumps (LiCABEDS), is reported for the prediction of diverse categorical pharmacological properties. LiCABEDS was successfully applied to model 5-HT1A ligand functionality, ligand selectivity of cannabinoid receptor subtypes, and blood-brain barrier (BBB) passage. LiCABEDS was implemented and integrated with a graphical user interface, data import/export, automated model training/prediction, and project management. In addition, a non-linear ligand classifier was proposed, using a novel Topomer kernel function in a support vector machine. With the emphasis on green high-performance computing, graphics processing units are alternative platforms for computationally expensive tasks. A novel GPU algorithm was designed and implemented to accelerate the calculation of chemical similarities with dense-format molecular fingerprints. Finally, a compound acquisition algorithm was reported to construct structurally diverse screening libraries in order to enhance hit rates in high-throughput screening.
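
    The GPU-accelerated similarity calculation mentioned above boils down to Tanimoto coefficients, Tc = |A∧B| / (|A| + |B| - |A∧B|), over bit-packed fingerprints. The numpy sketch below is a CPU illustration of the dense-fingerprint layout, not the dissertation's CUDA kernel; it scores every query against every library compound.

    ```python
    import numpy as np

    # Number of set bits for every possible byte value.
    _POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint16)

    def tanimoto_matrix(query_fp, library_fp):
        """Tanimoto similarity between dense, bit-packed fingerprints.

        query_fp, library_fp : uint8 arrays of shape (num_mols, num_bytes),
        e.g. a 1024-bit fingerprint packed into 128 bytes per molecule.
        """
        a = _POPCOUNT[query_fp].sum(axis=1)[:, None]     # bits set per query
        b = _POPCOUNT[library_fp].sum(axis=1)[None, :]   # bits set per library entry
        # Bits common to each (query, library) pair: AND the packed bytes.
        common = _POPCOUNT[query_fp[:, None, :] & library_fp[None, :, :]].sum(axis=2)
        return common / (a + b - common)
    ```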

    BRUISE DETECTION IN APPLES USING 3D INFRARED IMAGING AND MACHINE LEARNING TECHNOLOGIES

    Bruise detection plays an important role in fruit grading. A bruise detection system capable of finding and removing damaged products on the production line will distinctly improve the quality of fruit for sale, and consequently the fruit economy. This dissertation presents a novel automatic detection system, based on surface information obtained with a 3D near-infrared imaging technique, for identifying bruised apples. The proposed 3D bruise detection system is expected to provide better performance than existing 2D systems. We first propose a mesh denoising filter to reduce noise while preserving the geometric features of the meshes. Compared with several existing mesh denoising filters, the proposed filter achieves better performance in reducing noise as well as preserving bruised regions in 3D meshes of bruised apples. Next, we investigate two different machine learning techniques for the identification of bruised apples. The first extracts hand-crafted features from the 3D meshes and trains a predictive classifier on them. It is shown that the predictive model trained on the proposed hand-crafted features outperforms the same models trained on several other local shape descriptors. The second applies deep learning to learn the feature representation automatically from the mesh data and then uses the deep learning model, or a new predictive model, for classification. The optimized deep learning model achieves very high classification accuracy, outperforming the detection system based on the proposed hand-crafted features. Finally, we investigate GPU techniques for accelerating the proposed apple bruise detection system. Specifically, the dissertation proposes a GPU framework, implemented in CUDA, for accelerating the algorithm that extracts vertex-based local binary patterns. Experimental results show that the proposed GPU program speeds up the extraction of local binary patterns by a factor of 5 compared to a single-core CPU program.
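
    The descriptor whose extraction is accelerated can be pictured with a small sketch. The function below is a simplified CPU version of a vertex-based local binary pattern (the dissertation's exact neighbor ordering, sampling, and aggregation may differ): each vertex's scalar value is compared with a fixed number of its one-ring neighbors, and the comparison results are packed into a binary code.

    ```python
    import numpy as np

    def vertex_lbp(values, neighbors, bits=8):
        """Vertex-based local binary pattern codes on a triangle mesh.

        values    : (n,) per-vertex scalar, e.g. curvature or surface height
        neighbors : list of n index lists (one-ring neighborhood per vertex)
        bits      : number of neighbors compared per vertex
        """
        codes = np.zeros(len(values), dtype=np.uint32)
        for v, ring in enumerate(neighbors):
            for b, u in enumerate(ring[:bits]):
                # Set bit b when the neighbor's value is at least the center value.
                if values[u] >= values[v]:
                    codes[v] |= 1 << b
        return codes
    ```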

    Efficient Algorithms for Large-Scale Image Analysis

    This work develops highly efficient algorithms for analyzing large images. Applications include object-based change detection and screening. The algorithms are 10-100 times as fast as existing software, sometimes even outperforming FPGA/GPU hardware, because they are designed to suit the computer architecture. This thesis describes the implementation details and the underlying algorithm-engineering methodology, so that both may also be applied to other applications.

    High-Resolution Processing of Airborne SAR Data on GPUs (Hochauflösende Prozessierung flugzeuggestützter SAR-Daten auf GPU)

    The airborne radar system F-SAR, built and operated by the Microwave and Radar Systems Institute, is a major contribution to the VABENE project. It makes it possible to take images of the earth independently of weather and daylight. In contrast to optical images, the incoming signals must first be transformed by computationally expensive algorithms before humans can interpret them. Within the VABENE project there is a need to acquire high-resolution data as fast as possible. Neither computing radar images on board nor processing them offline on the ground meets the requirements: the onboard processor produces only low-resolution images, and full resolution cannot be provided in real time. CPU-based hardware architectures therefore cannot solve this problem; new technologies are needed, and this is where general-purpose graphics processing units (GPGPU) play an important role. GPUs are normally specialized for computer graphics because they are designed for massively parallel computation. With the CUDA programming architecture they become a computing machine capable of solving computationally demanding tasks in a short time, and the development of recent years shows that radar data processing can now be ported to the GPU to reduce processing time. The topic of this bachelor thesis is to implement the radar data processing steps for GPU usage and to find a way to integrate the software into the existing software systems of the F-SAR processor. Tests show that integrating CUDA code into the existing software architecture is best done by implementing dynamic libraries (shared objects). With these, the CUDA code is portable and can be used by any C or IDL program. It is also important to remain flexible, as algorithmic steps in the radar data processing chain may change. Benchmark tests performed for every module show that the computing time of individual modules is 10 to 70 times faster than single-core CPU processing, which clearly demonstrates the potential of the GPU architecture. Further optimization toward full real-time processing remains possible: because the algorithms for radar data processing can be massively parallelized, the GPU's computing capacity can be used efficiently, so porting the whole processing chain to the GPU would enable the radar system to compute a full-resolution radar image in real time despite the complex algorithms involved. However, because of current F-SAR hardware restrictions, this performance is presently limited to offline environments.
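
    A representative step of such a radar processing chain, range compression, can be sketched briefly. The function below is a plain numpy stand-in (the thesis's actual modules and their CUDA implementations are not detailed in the abstract) for frequency-domain matched filtering of the raw echoes, the kind of per-pulse step that is ported to the GPU and wrapped in a shared object:

    ```python
    import numpy as np

    def range_compress(raw_pulses, chirp_replica):
        """Frequency-domain matched filtering (range compression).

        raw_pulses    : (num_pulses, num_samples) complex raw echoes
        chirp_replica : (num_samples,) complex reference chirp
        """
        n = raw_pulses.shape[1]
        # Correlate every echo with the transmitted chirp via FFTs.
        ref = np.conj(np.fft.fft(chirp_replica, n))
        return np.fft.ifft(np.fft.fft(raw_pulses, n, axis=1) * ref, axis=1)
    ```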

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications that combine synthetic aperture radar with deep learning technology. It aims to further promote the development of SAR image intelligent interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day and all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations with multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address the above significant challenges and present their innovative and cutting-edge research results from applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports.