PARALLEL VISIBILITY AND FRESNEL-ZONES CALCULATION USING GRAPHICS PROCESSING UNITS
The work describes an innovative method for calculating visibility [61, 62]
and Fresnel zones on digital maps using NVIDIA CUDA graphics processing units (GPUs). Three
parallel algorithms were formulated:
• modified R2 parallel algorithm for calculating visibility (R2-P),
• algorithm for calculating Fresnel zone clearance (FZC),
• algorithm for calculating Fresnel zone transverse intersection between the transmitter
and the receiver (FZTI).
The modified parallel algorithm R2-P was developed from the established sequential
algorithm R2 for computing visibility. Aside from multithreading, other useful features of the graphics
processing unit were used to speed up calculation time. Coalesced access to the global
memory helps speed up the flow of information and thus also speeds up the calculation.
Exchange of information between threads during computation plays a key role in the
speedup. The segmentation of the digital map enables the calculation of visibility for
huge data sets.
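The visibility test at the heart of R2-type algorithms reduces, along each ray cast from the observer, to tracking the maximum elevation angle seen so far: a sample is visible only if its angle is at least the running maximum. A minimal single-ray sketch in Python, illustrative only — the thesis implements this in CUDA, and the function name and default observer height here are my own:

```python
import math

def visible_along_ray(heights, observer_h=1.8):
    """Mark which samples along a ray from the observer are visible.

    heights[0] is the terrain elevation at the observer; visibility is
    decided by tracking the maximum elevation angle seen so far, as in
    R2-type sweep algorithms (a sketch, not the thesis implementation).
    """
    eye = heights[0] + observer_h
    visible = [True]                          # observer cell is trivially visible
    max_angle = -math.inf
    for d, h in enumerate(heights[1:], start=1):
        angle = math.atan2(h - eye, d)        # elevation angle to this sample
        visible.append(angle >= max_angle)
        max_angle = max(max_angle, angle)
    return visible
```

The same test runs independently per ray, which is what makes the algorithm a natural fit for one-thread-per-ray GPU parallelisation.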
The modified parallel R2 algorithm was compared with already implemented viewshed
algorithms in terms of accuracy and duration of the calculation.
It turned out that the new algorithm R2-P had the same accuracy as the established
sequential algorithm R2 while significantly speeding up the calculation.
Calculation time is reduced from the order of a few minutes to
the order of a couple of seconds. This, in practice, means that there is a possibility of
interactive work.
In addition to the viewshed, Fresnel zone clearance is very useful for planning radio
coverage. The FZC algorithm takes as input the location of the radio transmitter, the height of
the transmitter, the receiver observation height above the terrain, and the wavelength of
the radio waves. For each point of the terrain, the algorithm calculates which Fresnel
zone is obstructed. The result is a digital map with the plotted areas of Fresnel zone clearance.
This map provides better information about the radio signal than just a calculation of the
viewshed. In particular, in areas where the first Fresnel zone is completely obstructed, the
map yields information that is very useful in practice compared with a plain visibility
calculation. The algorithm can also take land use into account, raising the terrain height
as a function of land use (e.g., by about 15 m for forest areas).
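The radius of the n-th Fresnel zone at a point d1 from the transmitter and d2 from the receiver is the standard r_n = sqrt(n * wavelength * d1 * d2 / (d1 + d2)). A simplified flat-terrain sketch of the per-point test such an algorithm performs — the function name and the straight-line clearance model are my own simplifications, not the FZC module itself:

```python
import math

def lowest_obstructed_zone(tx_h, rx_h, d1, d2, obstacle_h, wavelength):
    """Return the index of the lowest Fresnel zone blocked by an obstacle
    at distance d1 from the transmitter and d2 from the receiver
    (all heights in metres above a common flat datum; a toy sketch).
    """
    # Height of the direct TX-RX sight line above the datum at the obstacle.
    los_h = tx_h + (rx_h - tx_h) * d1 / (d1 + d2)
    clearance = los_h - obstacle_h            # gap between sight line and obstacle
    if clearance <= 0:
        return 1                              # even the zone-1 centre line is cut
    n = 1
    while True:
        # Standard n-th Fresnel zone radius at this point along the path.
        r_n = math.sqrt(n * wavelength * d1 * d2 / (d1 + d2))
        if clearance < r_n:                   # obstacle pokes into zone n
            return n
        n += 1
```

Because r_n grows without bound in n, the loop always terminates; a full FZC-style map is this test repeated for every terrain cell.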
With modifications, such as the introduction of the Friis transmission equation and
consideration of the radiation pattern, the algorithm becomes a simple radio propagation
model and thus is suitable for the calculation of radio coverage. Calculation of the radio
propagation is compared with values measured in the field for frequencies of 90 MHz
(FM), 800 MHz (LTE) and 1800 MHz (LTE). For a variety of input parameters, the
standard deviation of the differences between the field measurements and the calculated propagation
is presented in graphs. In this way, the optimal values of the input parameters for each
frequency band can be obtained.
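The free-space term such a simple propagation model rests on is the Friis transmission equation; in logarithmic form, received power is transmit power plus antenna gains minus the free-space path loss 20*log10(4*pi*d/lambda). A sketch of that term alone — the function is mine, and the thesis model additionally accounts for Fresnel-zone obstruction and antenna radiation patterns:

```python
import math

def friis_received_power_dbm(pt_dbm, gt_dbi, gr_dbi, distance_m, freq_hz):
    """Free-space received power via the Friis transmission equation,
    expressed in dBm (covers only the free-space term of a simple
    propagation model; an illustrative sketch, not the thesis code).
    """
    c = 299_792_458.0                         # speed of light, m/s
    wavelength = c / freq_hz
    # Free-space path loss in dB.
    fspl_db = 20 * math.log10(4 * math.pi * distance_m / wavelength)
    return pt_dbm + gt_dbi + gr_dbi - fspl_db
```

For example, at 800 MHz over 1 km with a 43 dBm transmitter and isotropic antennas, the free-space prediction is roughly -47.5 dBm.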
The algorithm for calculating the Fresnel zone transverse intersection between the transmitter
and the receiver produces an image of Fresnel zones representing the mathematical
intersection of all scaled Fresnel-zone cross-sections along the transmission path. The
result is a visual image that shows the characteristics of the radio link in terms of masking
individual Fresnel zones. In practice, the algorithm would be most useful in the design of radio
links, where one can check how much and which parts of the Fresnel zones are missing due
to terrain obstacles.
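The intersection idea can be caricatured in a few lines: model each cross-section as a unit disc whose lower part is masked by terrain, and AND all the masks together. A toy sketch — the names, grid resolution, and bottom-up obstruction model are mine, not the FZTI implementation:

```python
def fresnel_intersection(obstruction_fractions, size=65):
    """Intersect scaled Fresnel-zone cross-sections along a path.

    Each path sample i contributes a unit disc whose lower part, up to
    obstruction_fractions[i] of the diameter, is masked by terrain; the
    returned boolean grid is the logical AND of all masks.
    """
    grid = []
    for i in range(size):
        y = -1.0 + 2.0 * i / (size - 1)       # -1 (bottom) .. 1 (top)
        # A point at height y survives only if no sample's terrain reaches it.
        clear = all(y > 2.0 * f - 1.0 for f in obstruction_fractions)
        row = []
        for j in range(size):
            x = -1.0 + 2.0 * j / (size - 1)
            row.append(clear and x * x + y * y <= 1.0)
        grid.append(row)
    return grid
```

A fully obstructed sample blanks the whole image, while partial obstructions leave the upper lens-shaped region that corresponds to the unobscured part of the link.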
All three algorithms were implemented as GRASS GIS modules and can be used on any
PC with an NVIDIA CUDA-capable GPU and the appropriate freely available software
installed.
Novel parallel approaches to efficiently solve spatial problems on heterogeneous CPU-GPU systems
In recent years, approaches that seek to extract valuable information from large datasets have become particularly relevant in today's society. In this category, we can highlight those problems that comprise data analysis distributed across two-dimensional scenarios, called spatial problems. These usually involve processing (i) a series of features distributed across a given plane or (ii) a matrix of values where each cell corresponds to a point on the plane.
This illustrates the open-ended and complex nature of spatial problems, which also leaves room for imagination in the search for new solutions.
One of the main complications we encounter when dealing with spatial problems is that they are very computationally intensive, typically taking a long time to produce the desired result. This drawback is also an opportunity to use heterogeneous systems to address spatial problems more efficiently. Heterogeneous systems give the developer greater freedom to speed up suitable algorithms by increasing the parallel programming options available, making it possible for different parts of a program to run on the dedicated hardware that suits them best.
Several of the spatial problems that have not been optimised for heterogeneous systems cover very diverse areas that seem vastly different at first sight. However, they are closely related due to common data processing requirements, making them suitable for using dedicated hardware. In particular, this thesis provides new parallel approaches to tackle the following three crucial spatial problems: latent fingerprint identification, total viewshed computation, and path planning based on maximising visibility in large regions.
Latent fingerprint identification is one of the essential identification procedures in criminal investigations. Addressing this task is difficult as (i) it requires analysing large databases in a short time, and (ii) it is commonly addressed by combining different methods with complex data dependencies, making it challenging to exploit parallelism on heterogeneous CPU-GPU systems. Moreover, most efforts in this context focus on improving the accuracy of the approaches and neglect reducing the processing time—the most accurate algorithm was designed to process the fingerprints using a single thread. We developed a new methodology to address the latent fingerprint identification problem called “Asynchronous processing for Latent Fingerprint Identification” (ALFI) that speeds up processing while maintaining high accuracy. ALFI exploits all the resources of CPU-GPU systems using asynchronous processing and fine-coarse parallelism to analyse massive fingerprint databases. We assessed the performance of ALFI on Linux and Windows operating systems using the well-known NIST/FVC databases. Experimental results revealed that ALFI is on average 22x faster than the state-of-the-art identification algorithm, reaching a speed-up of 44.7x for the best-studied case.
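The asynchronous pattern described — keeping the CPU busy preparing the next batch while the accelerator matches the current one — can be sketched with a thread pool standing in for the GPU stream. This is a schematic of the general technique, not ALFI's actual code; `prepare` and `match` are caller-supplied stage functions I introduce for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def pipeline(batches, prepare, match):
    """Overlap CPU-side preparation with matching.

    While one batch is being matched (here on a worker thread standing in
    for the GPU stream), the main thread prepares the next batch, so the
    two stages run concurrently instead of strictly in sequence.
    """
    results = []
    with ThreadPoolExecutor(max_workers=1) as gpu:
        pending = None
        for b in batches:
            prepared = prepare(b)             # CPU works while "GPU" is busy
            if pending is not None:
                results.append(pending.result())
            pending = gpu.submit(match, prepared)
        if pending is not None:
            results.append(pending.result())  # drain the last in-flight batch
    return results
```

The speedup of such a pipeline is bounded by the slower of the two stages, which is why balancing fine- and coarse-grained work across CPU and GPU matters.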
In terrain analysis, Digital Elevation Models (DEMs) are relevant datasets used as input to those algorithms that typically sweep the terrain to analyse its main topological features such as visibility, elevation, and slope. The most challenging computation related to this topic is the total viewshed problem. It involves computing the viewshed—the visible area of the terrain—for each of the points in the DEM. The algorithms intended to solve this problem require many memory accesses to 2D arrays, which, despite being regular, lead to poor data locality in memory. We proposed a methodology called “skewed Digital Elevation Model” (sDEM) that substantially improves the locality of memory accesses and exploits the inherent parallelism of rotational sweep-based algorithms. Particularly, sDEM applies a data relocation technique before accessing the memory and computing the viewshed, thus significantly reducing the execution time. Different implementations are provided for single-core, multi-core, single-GPU, and multi-GPU platforms. We carried out two experiments to compare sDEM with (i) the most used geographic information systems (GIS) software and (ii) the state-of-the-art algorithm for solving the total viewshed problem. In the first experiment, sDEM results on average 8.8x faster than current GIS software, despite considering only a few points because of the limitations of the GIS software. In the second experiment, sDEM is 827.3x faster than the state-of-the-art algorithm considering the best case.
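The relocation idea — reorder the grid so the cells a sweep line visits become contiguous in memory — can be illustrated for a single 45-degree sweep by shearing each row, so that cells on the same anti-diagonal land in the same column. A toy sketch; sDEM itself handles arbitrary sweep angles and dense packing:

```python
def skew_dem(dem, shear=1):
    """Shear a square grid: row i is shifted right by i*shear cells.

    After the shear, column c of the result holds exactly the cells with
    i + k == c (an anti-diagonal of the original grid), so a 45-degree
    sweep line reads one contiguous column instead of scattered cells
    (toy illustration of the data-relocation idea, not sDEM itself).
    """
    n = len(dem)
    width = n + (n - 1) * shear
    skewed = []
    for i, row in enumerate(dem):
        pad_left = i * shear
        skewed.append([None] * pad_left + list(row)
                      + [None] * (width - pad_left - n))
    return skewed
```

Paying the one-off relocation cost up front converts the sweep's scattered 2D accesses into sequential ones, which is where the reported speedups come from.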
The use of Unmanned Aerial Vehicles (UAVs) with multiple onboard sensors has grown enormously in tasks involving terrain coverage, such as environmental and civil monitoring, disaster management, and forest fire fighting. Many of these tasks require a quick and early response, which makes maximising the land covered from the flight path an essential goal, especially when the area to be monitored is irregular, large, and includes many blind spots. In this regard, state-of-the-art total viewshed algorithms can help analyse large areas and find new paths providing all-round visibility. We designed a new heuristic called “Visibility-based Path Planning” (VPP) to solve the path planning problem in large areas based on a thorough visibility analysis. VPP generates flyable paths that provide high visual coverage to monitor forest regions using the onboard camera of a single UAV. For this purpose, the hidden areas of the target territory are identified and considered when generating the path. Simulation results showed that VPP covers up to 98.7% of the Montes de Malaga Natural Park and 94.5% of the Sierra de las Nieves National Park, both located in the province of Malaga (Spain). In addition, a real flight test confirmed the high visibility achieved using VPP. Our methodology and analysis can be easily applied to enhance monitoring in other large outdoor areas.
Towards Optimal Line of Sight Coverage
Maintaining the line of sight to a moving object or person over long distances is critical in many applications, e.g., mobile communications, security, and surveillance. Determining the best places to position (or build) technologies is difficult because even small changes in location can greatly affect the so-called viewshed, which is the collection of land areas within line of sight of a given observer. The need for multiple sensors or towers further complicates this problem, as they often need to work cooperatively to achieve the best possible coverage. This study proposes a novel approach that consists of three separate inventions: 1) an algorithm for calculating viewsheds from many sensors in parallel; 2) the introduction of a meaningful measure of coverage quality to compare competing configurations; and 3) optimization of that well-defined objective function to find the most suitable sensor parameters for practical applications. Preliminary results suggest unprecedented performance on a wide range of real terrains.
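One standard way to optimise such a coverage objective — not necessarily the study's own method — is greedy selection over precomputed per-site viewsheds, which carries the classic (1 - 1/e) max-coverage guarantee. A sketch with invented names:

```python
def greedy_sensor_placement(viewsheds, k):
    """Greedily pick up to k sensor sites maximising viewshed union.

    viewsheds maps a candidate sensor site to the set of terrain cells it
    sees; each round picks the site that adds the most newly covered
    cells (a textbook greedy max-coverage sketch, not the study's code).
    """
    covered, chosen = set(), []
    candidates = dict(viewsheds)
    for _ in range(min(k, len(candidates))):
        best = max(candidates, key=lambda s: len(candidates[s] - covered))
        chosen.append(best)
        covered |= candidates.pop(best)       # remove the chosen site
    return chosen, covered
```

Computing all candidate viewsheds is the expensive step, which is exactly what the parallel viewshed algorithm in invention 1) would feed into an optimiser like this.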
Hyperspectral unmixing: a theoretical aspect and applications to CRISM data processing
Hyperspectral imaging has been deployed in earth and planetary remote sensing, and has contributed to the development of new methods for monitoring the earth environment and to new discoveries in planetary science. It has given scientists and engineers a new way to observe the surfaces of the earth and planetary bodies by measuring the spectroscopic spectrum at a pixel scale.
Hyperspectral images require complex processing before practical use. One of the important goals of hyperspectral imaging is to obtain images of the reflectance spectrum. A raw image obtained by hyperspectral remote sensing usually undergoes conversion to a physical quantity representing the intensity of light energy, called radiance. To obtain the reflectance spectrum of the surface, the contribution of the atmosphere must be removed and the result divided by a "white reference" spectrum. Furthermore, the obtained reflectance spectra of image pixels are likely to be mixtures of multiple species due to the limited spatial resolution achievable from orbit.
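The radiance-to-reflectance chain just described — subtract the atmospheric contribution, then divide by the white-reference spectrum — is, per band, a one-liner. A schematic of the pipeline with a hypothetical function name; real CRISM processing is considerably more involved:

```python
def to_reflectance(radiance, atmosphere, white_ref):
    """Per-band reflectance from at-sensor radiance (schematic only).

    radiance, atmosphere, and white_ref are spectra sampled on the same
    bands: remove the atmospheric term, then normalise by the white
    reference to get a dimensionless reflectance.
    """
    return [(r - a) / w for r, a, w in zip(radiance, atmosphere, white_ref)]
```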
Hyperspectral unmixing is an attempt to unmix those pixels: to identify the constituent components and estimate their fractional abundances. Hyperspectral unmixing has been widely explored in the literature, but there are still many aspects yet to be studied. The majority of research focuses on developing methods to retrieve the correct constituent components and accurate fractional abundances; their theoretical aspects are rarely investigated. Chapter 2 pursues a theoretical aspect of sparse unmixing, one of the hyperspectral unmixing problems, and derives theoretical conditions that guarantee the correct identification of the constituent components.
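Under the linear mixing model that sparse unmixing inverts, a pixel spectrum is a convex combination of endmember spectra. A two-endmember toy shows both directions — mixing, and recovering an abundance by least squares along the segment between the endmembers. The function names are mine, and real unmixing handles many endmembers, noise, and sparsity constraints:

```python
def mix(endmembers, abundances):
    """Linear mixing model: pixel spectrum = sum_i a_i * e_i,
    with abundances summing to one."""
    bands = len(endmembers[0])
    return [sum(a * e[b] for a, e in zip(abundances, endmembers))
            for b in range(bands)]

def unmix_two(pixel, e1, e2):
    """Closed-form abundance of e1 in a two-endmember mixture:
    least-squares projection of the pixel onto the e2->e1 segment."""
    num = sum((p - y) * (x - y) for p, x, y in zip(pixel, e1, e2))
    den = sum((x - y) ** 2 for x, y in zip(e1, e2))
    a = num / den
    return max(0.0, min(1.0, a))              # clip to the physical range
```

With more than two endmembers this projection becomes a constrained least-squares problem, and the sparse variant additionally asks which few endmembers out of a large spectral library are actually present.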
Hyperspectral unmixing can also be used in other stages of hyperspectral data processing. Chapter 3 explores the application of hyperspectral unmixing to the processing of hyperspectral images acquired by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) onboard the Mars Reconnaissance Orbiter (MRO). In particular, new atmospheric correction and de-noising methods for the CRISM data, which use hyperspectral unmixing to model surface spectra, are introduced. The new methods remove most of the problematic systematic artifacts present in CRISM images and significantly improve signal quality.
Chapter 4 investigates how hyperspectral images acquired from orbit can be combined with ground exploration. With the recent launch of many Martian rover missions, it is important to effectively integrate knowledge obtained by hyperspectral remote sensing from orbit into ground exploration. Specifically, this dissertation solves the problem of matching hyperspectral image pixels obtained by CRISM with ground mega-pixel images acquired by the Mast Camera (Mastcam) installed on the Curiosity rover on Mars. A new systematic methodology to map the CRISM and Mastcam images onto high-resolution surface topography is developed.
Three-dimensional scene recovery for measuring sighting distances of rail track assets from monocular forward facing videos
Rail track asset sighting distance must be checked regularly to ensure the continued and safe operation of rolling stock. Methods currently used to check asset line-of-sight involve manual labour or laser systems. Video cameras and computer vision techniques provide one possible route to cheaper, automated systems. Three categories of computer vision method are identified for possible application: two-dimensional object recognition, two-dimensional object tracking, and three-dimensional scene recovery. However, the experimentation presented shows that recognition and tracking methods produce less accurate asset line-of-sight results as the asset-camera distance increases. Regarding three-dimensional scene recovery, evidence is presented suggesting a relationship between image features and recovered scene information. A novel framework which learns these relationships is proposed. Relationships learnt from recovered image features probabilistically limit the search space of future features, improving efficiency. This framework is applied to several scene recovery methods and is shown (on average) to decrease computation by two-thirds for a possibly small decrease in the accuracy of recovered scenes. Asset line-of-sight results computed from recovered three-dimensional terrain data are shown to be more accurate than those of two-dimensional methods and are unaffected by increasing asset-camera distance. Finally, the analysis of terrain in terms of its effect on asset line-of-sight is considered. Terrain elements, segmented using semantic information, are ranked with a metric combining a minimum line-of-sight blocking distance and the growth required to achieve this minimum distance. Since this ranking measure is relative, it is shown how an approximation of the terrain data can be applied, decreasing computation time. Further efficiency increases are found by decomposing the problem into a set of two-dimensional problems and applying binary search techniques.
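The "growth required to block" part of that ranking metric lends itself to the binary search the abstract mentions: blocking is monotone in added height, so the minimum growth can be bisected. A simplified single-point sketch with invented names, using a straight-line sight model:

```python
def min_blocking_growth(heights, idx, eye_h, target_h, tol=1e-6):
    """Minimum extra terrain height at heights[idx] that blocks the line
    of sight from an observer (absolute height eye_h, above sample 0) to
    an asset (absolute height target_h, above the last sample).

    Blocking is monotone in the added height, so bisection applies
    (illustrative sketch, not the thesis implementation).
    """
    n = len(heights) - 1
    los = eye_h + (target_h - eye_h) * idx / n   # sight-line height at idx

    def blocked(extra):
        return heights[idx] + extra >= los

    if blocked(0.0):
        return 0.0                               # terrain already blocks
    lo, hi = 0.0, los - heights[idx]             # hi certainly blocks
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if blocked(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

In this flat sight-line model the answer has a closed form, but the bisection structure carries over unchanged to real terrain profiles where no closed form exists.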
The combination of the research elements presented in this thesis provides efficient methods for automatically analysing asset line-of-sight and the impact of the surrounding terrain from captured monocular video.