PARALLEL VISIBILITY AND FRESNEL-ZONES CALCULATION USING GRAPHICS PROCESSING UNITS
The work describes an innovative method for calculating visibility [61, 62]
and Fresnel zones on digital maps using NVIDIA CUDA graphics processing units. Three
parallel algorithms were developed:
• modified R2 parallel algorithm for calculating visibility (R2-P),
• algorithm for calculating Fresnel zone clearance (FZC),
• algorithm for calculating Fresnel zone transverse intersection between the transmitter
and the receiver (FZTI).
The modified parallel algorithm R2-P was developed from the established sequential
R2 algorithm for computing visibility. Besides multithreading, other useful features
of the graphics processing unit are exploited to shorten the calculation time:
coalesced access to global memory speeds up the flow of data and hence the calculation itself.
Exchange of information between threads during computation plays a key role in the
speedup. The segmentation of the digital map enables the calculation of visibility for
huge data sets.
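For intuition, the line-of-sight test at the heart of the sequential R2 algorithm can be sketched along a single ray as follows. This is a minimal illustrative sketch, not the authors' implementation; the parallel R2-P version distributes such rays and cells across CUDA threads, and the function name and toy terrain are invented here.

```python
import numpy as np

def r2_visibility_along_ray(heights, observer_height):
    """Sequential sketch of the core R2 test along one ray of the DEM.

    heights[0] is the terrain height at the viewpoint; heights[i] is the
    terrain height at the i-th cell along the ray (unit cell spacing).
    Returns a boolean array: True where the cell is visible.
    """
    eye = heights[0] + observer_height
    visible = np.zeros(len(heights), dtype=bool)
    visible[0] = True
    max_angle = -np.inf  # steepest elevation angle seen so far
    for i in range(1, len(heights)):
        angle = (heights[i] - eye) / i  # tangent of the elevation angle
        if angle > max_angle:           # cell rises above every closer cell
            visible[i] = True
            max_angle = angle
    return visible

# A small ridge at index 2 hides the cells behind it:
terrain = np.array([100.0, 101.0, 110.0, 105.0, 104.0, 120.0])
print(r2_visibility_along_ray(terrain, observer_height=2.0))
```

Each cell needs only the running maximum angle from the cells closer to the viewpoint, which is why exchanging that value between threads is the key to the GPU speedup.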
The modified parallel R2 algorithm was compared with already implemented viewshed
algorithms in terms of accuracy and calculation time. The new R2-P algorithm turned
out to be as accurate as the established sequential R2 algorithm while speeding up
the calculation substantially: the calculation time drops from the order of a few
minutes to the order of a few seconds, which in practice makes interactive work possible.
In addition to the viewshed, Fresnel zone clearance is very useful for planning radio
coverage. The FZC algorithm takes as input the location and height of the radio
transmitter, the observed receiver height above the terrain, and the wavelength of the
radio waves. For each point of the terrain, the algorithm determines which Fresnel
zone is obscured. The result is a digital map with the plotted areas of Fresnel zone
obscuration, which tells considerably more about the radio signal than a viewshed
calculation alone. Especially in areas where the first Fresnel zone is completely
obscured, this yields information that is very useful in practice. The algorithm can
also take land use into account by raising the terrain height as a function of land
use (e.g., by about 15 m for forested areas).
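For intuition, the n-th Fresnel zone radius at a point d1 from the transmitter and d2 from the receiver is r_n = sqrt(n·λ·d1·d2/(d1+d2)), and from an obstacle's clearance below the line of sight one can read off the lowest zone it intrudes into. A minimal sketch of that per-point test (illustrative function names and numbers, not the GPU implementation):

```python
import math

def fresnel_radius(n, wavelength, d1, d2):
    # Radius of the n-th Fresnel zone at a point d1 metres from the
    # transmitter and d2 metres from the receiver.
    return math.sqrt(n * wavelength * d1 * d2 / (d1 + d2))

def first_obstructed_zone(tip_below_los, wavelength, d1, d2):
    # tip_below_los: how far the obstacle tip lies BELOW the line of
    # sight (<= 0 means the direct path itself is blocked -> zone 1).
    if tip_below_los <= 0:
        return 1
    # smallest n with fresnel_radius(n, ...) >= tip_below_los
    n = tip_below_los**2 * (d1 + d2) / (wavelength * d1 * d2)
    return math.ceil(n)

# 800 MHz link (lambda ~ 0.375 m), obstacle midway on a 10 km path:
lam = 3e8 / 800e6
print(round(fresnel_radius(1, lam, 5000, 5000), 1))   # ~30.6 m first-zone radius
print(first_obstructed_zone(20.0, lam, 5000, 5000))   # tip 20 m below LOS -> zone 1
```

An obstacle 20 m below the line of sight still intrudes into the first zone here, which is exactly the situation where the FZC map is more informative than a plain viewshed.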
With modifications such as the introduction of the Friis transmission equation and
consideration of the antenna radiation pattern, the algorithm becomes a simple radio
propagation model and is thus suitable for calculating radio coverage. The calculated
radio signal is compared with values measured in the field at frequencies of 90 MHz
(FM), 800 MHz (LTE) and 1800 MHz (LTE). For a variety of input parameters of the
simple propagation model, the standard deviation of the differences between the
measured and calculated values is computed and presented in graphs. In this way, the
optimal values of the input parameters are obtained for each frequency band separately.
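The Friis step is the standard free-space link budget; in dB form, Pr = Pt + Gt + Gr − 20·log10(4πd/λ). A minimal sketch (the parameter values are illustrative, not the ones calibrated against the field measurements in the work):

```python
import math

def friis_received_power_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, d_m):
    """Friis free-space received power in dB form:
    Pr = Pt + Gt + Gr - 20*log10(4*pi*d / lambda)."""
    lam = 3e8 / freq_hz
    fspl = 20 * math.log10(4 * math.pi * d_m / lam)  # free-space path loss, dB
    return pt_dbm + gt_dbi + gr_dbi - fspl

# 1800 MHz link, 43 dBm transmitter, unity-gain antennas, 2 km:
print(round(friis_received_power_dbm(43, 0, 0, 1800e6, 2000), 1))  # -> -60.6 dBm
```

Adding the antenna radiation pattern means replacing the constant gains with direction-dependent values looked up per terrain point.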
The algorithm for calculating the Fresnel zone transverse intersection between the
transmitter and the receiver produces an image of Fresnel zones that represents the
mathematical intersection of all scaled Fresnel-zone cross-sections along the
transmission path. The result is a visual image that shows the characteristics of the
radio link in terms of the masking of individual Fresnel zones. In practice, the
algorithm is most useful in the design of radio links, where one can check how much
and which part of the Fresnel zones is missing due to terrain obstacles.
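The idea behind the transverse intersection can be sketched in one dimension: at every sample along the path, express the terrain clearance under the line of sight in units of the local first-zone radius, then keep the worst case. The full FZTI algorithm does this over the whole 2-D cross-section image; the names and numbers below are illustrative only.

```python
import numpy as np

def worst_normalized_clearance(terrain, los_heights, wavelength, path_len):
    # terrain / los_heights: heights sampled uniformly along the path (m).
    # Returns the minimum over the path of (LOS height - terrain) / r1,
    # where r1 is the local first-Fresnel-zone radius: >= 1 means the first
    # zone is clear everywhere; < 0 means the direct ray is blocked.
    d = np.linspace(0.0, path_len, len(terrain))
    r1 = np.sqrt(wavelength * np.clip(d * (path_len - d), 1e-9, None) / path_len)
    return ((los_heights - terrain) / r1)[1:-1].min()  # skip the endpoints

# 800 MHz (lambda ~ 0.375 m), 10 km path, a hill at mid-path:
terrain = np.array([100.0, 100.0, 125.0, 100.0, 100.0])
los = np.full(5, 131.0)
print(round(worst_normalized_clearance(terrain, los, 0.375, 10000.0), 2))  # -> 0.2
```

Here the mid-path hill leaves only about 20% of the first zone clear, the kind of partial obstruction the FZTI image makes visible at a glance.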
All three algorithms were implemented as GRASS GIS modules and can be used on any PC
equipped with an NVIDIA CUDA graphics processing unit and the appropriate freely
available software.
Synthetic Aperture Radar (SAR) Meets Deep Learning
This reprint focuses on applications that combine synthetic aperture radar and deep learning technology. It aims to further promote the development of intelligent SAR image interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to tackle these significant challenges and present their innovative and cutting-edge research results on applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews and technical reports.
Novel Hybrid-Learning Algorithms for Improved Millimeter-Wave Imaging Systems
Increasing attention is being paid to millimeter-wave (mmWave), 30 GHz to 300
GHz, and terahertz (THz), 300 GHz to 10 THz, sensing applications including
security sensing, industrial packaging, medical imaging, and non-destructive
testing. Traditional methods for perception and imaging are challenged by novel
data-driven algorithms that offer improved resolution, localization, and
detection rates. Over the past decade, deep learning technology has garnered
substantial popularity, particularly in perception and computer vision
applications. Whereas conventional signal processing techniques are more easily
generalized to various applications, hybrid approaches where signal processing
and learning-based algorithms are interleaved pose a promising compromise
between performance and generalizability. Furthermore, such hybrid algorithms
improve model training by leveraging the known characteristics of radio
frequency (RF) waveforms, thus yielding more efficiently trained deep learning
algorithms and offering higher performance than conventional methods. This
dissertation introduces novel hybrid-learning algorithms for improved mmWave
imaging systems applicable to a host of problems in perception and sensing.
Various problem spaces are explored, including static and dynamic gesture
classification; precise hand localization for human-computer interaction;
high-resolution near-field mmWave imaging using forward synthetic aperture
radar (SAR); SAR under irregular scanning geometries; mmWave image
super-resolution using deep neural network (DNN) and Vision Transformer (ViT)
architectures; and data-level multiband radar fusion using a novel
hybrid-learning architecture. Furthermore, we introduce several novel
approaches for deep learning model training and dataset synthesis.
Comment: PhD dissertation submitted to the UTD ECE Department.
Visual and Camera Sensors
This book includes 13 papers published in the Special Issue ("Visual and Camera Sensors") of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors.
Electromagnetic ray-tracing for the investigation of multipath and vibration signatures in radar imagery
Synthetic Aperture Radar (SAR) imagery has been used extensively within UK Defence and Intelligence for many years. Despite this, the exploitation of SAR imagery is still challenging for the inexperienced imagery analyst, as the non-literal image provided for exploitation requires careful consideration of the imaging geometry, the target being imaged and the physics of radar interactions with objects. It is therefore not surprising to note that in 2017 the most useful tool available to a radar imagery analyst is a contextual optical image of the same area. This body of work presents a way to address this by adopting recent advances in radar signal processing and computational geometry to develop a SAR simulator called SARCASTIC (SAR Ray-Caster for the Intelligence Community) that can rapidly render a scene with the precise collection geometry of an image being exploited. The work provides a detailed derivation of the simulator from first principles. It is then validated against a range of real-world SAR collection systems. The work shows that such a simulator can provide an analyst with the necessary tools to extract intelligence from a collection that is unavailable to a conventional imaging system. The thesis then describes a new technique that allows a vibrating target to be detected within a SAR collection. The simulator is used to predict a unique scattering signature, described as a one-sided paired echo. Finally, an experiment is described that was performed by Cranfield University to specifications determined by SARCASTIC, which shows that the unique radar signature can actually occur within a SAR collection.
Local Binary Pattern based algorithms for the discrimination and detection of crops and weeds with similar morphologies
In cultivated agricultural fields, weeds are unwanted species that compete with the crop plants for nutrients, water, sunlight and soil, thus constraining their growth. Applying new real-time weed detection and spraying technologies to agriculture would enhance current farming practices, leading to higher crop yields and lower production costs. Various weed detection methods have been developed for Site-Specific Weed Management (SSWM) aimed at maximising the crop yield through efficient control of weeds. Blanket application of herbicide chemicals is currently the most popular weed eradication practice in weed management. However, the excessive use of herbicides has a detrimental impact on human health, the economy and the environment. Before weeds become resistant to herbicides, and while they still respond to weed control strategies, it is necessary to control them in the fallow, pre-sowing, early post-emergent and pasture phases. Moreover, the development of herbicide resistance in weeds is a driving force for inventing precision and automated weed treatments. Various weed detection techniques have been developed to identify weed species in crop fields, aimed at improving crop quality, reducing herbicide and water usage and minimising environmental impacts.
In this thesis, Local Binary Pattern (LBP)-based algorithms are developed and tested experimentally, which are based on extracting dominant plant features from camera images to precisely detect weeds among crops in real time. Building on the efficient computation and robustness of the basic LBP method, an improved LBP-based method is developed that uses three different LBP operators for plant feature extraction in conjunction with a Support Vector Machine (SVM) for multiclass plant classification. A 24,000-image dataset, collected using a testing facility under simulated field conditions (Testbed system), is used for algorithm training, validation and testing. The dataset, which is published online under the name “bccr-segset”, consists of four subclasses: background, Canola (Brassica napus), Corn (Zea mays), and Wild radish (Raphanus raphanistrum). In addition, the dataset comprises plant images collected at four crop growth stages for each subclass. The computer-controlled Testbed is designed to rapidly label plant images and generate the “bccr-segset” dataset. Experimental results show that the classification accuracy of the improved LBP-based algorithm is 91.85% for the four classes.
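As background, the basic 3×3 LBP operator on which these methods build encodes each pixel by comparing it with its eight neighbours. The sketch below is a minimal illustration of that operator, not the thesis implementation (which combines three LBP operators with an SVM classifier):

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel gets an 8-bit
    code, one bit per neighbour that is >= the centre pixel."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                       # centre pixels
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

patch = np.array([[5, 4, 3],
                  [6, 4, 2],
                  [7, 8, 9]])
print(lbp_8neighbour(patch))  # -> [[243]]
```

Histograms of such codes over an image region form the texture feature vector that is then fed to the classifier.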
Due to the similarity of the morphologies of the canola (crop) and wild radish (weed) leaves, the conventional LBP-based method has limited ability to discriminate broadleaf crops from weeds. To overcome this limitation and cope with complex field conditions (illumination variation, poses, viewpoints, and occlusions), a novel LBP-based method (denoted k-FLBPCM) is developed to enhance the classification accuracy of crops and weeds with similar morphologies. Our contributions include (i) the use of opening and closing morphological operators in the pre-processing of plant images, (ii) the development of the k-FLBPCM method by combining two methods, namely the filtered local binary pattern (LBP) method and the contour-based masking method with a coefficient k, and (iii) the optimal use of an SVM with the radial basis function (RBF) kernel to precisely identify broadleaf plants based on their distinctive features. The high performance of this k-FLBPCM method is demonstrated by experimentally attaining up to 98.63% classification accuracy at four different growth stages for all classes of the “bccr-segset” dataset.
To evaluate the performance of the k-FLBPCM algorithm in real time, a comparative analysis between our novel method (k-FLBPCM) and deep convolutional neural networks (DCNNs) is conducted on morphologically similar crops and weeds. Various DCNN models, namely VGG-16, VGG-19, ResNet50 and InceptionV3, are optimised by fine-tuning their hyper-parameters, and tested. Based on the experimental results on the “bccr-segset” dataset collected in the laboratory and the “fieldtrip_can_weeds” dataset collected in the field under practical environments, the classification accuracies of the DCNN models and the k-FLBPCM method are almost similar. Another experiment is conducted by training the algorithms with plant images obtained at mature stages and testing them at early stages. In this case, the new k-FLBPCM method outperforms the state-of-the-art CNN models in identifying the small leaf shapes of canola and radish (crop and weed) at early growth stages, with an order of magnitude lower error rates than the DCNN models. Furthermore, the execution time of the k-FLBPCM method during the training and test phases is faster than its DCNN counterparts, with an identification time difference of approximately 0.224 ms per image for the laboratory dataset and 0.346 ms per image for the field dataset. These results demonstrate the ability of the k-FLBPCM method to rapidly detect weeds among crops of similar appearance in real time with less data, and to generalise to plants of different sizes better than the CNN-based methods.
Machine Learning in Sensors and Imaging
Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, management, etc. As data are required to build machine learning networks, sensors are one of the most important technologies. In addition, machine learning networks can contribute to improvements in sensor performance and the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, the camera calibration of intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest-growing stem volume estimation, road management, image denoising, and touchscreens.