FLIAT, an object-relational GIS tool for flood impact assessment in Flanders, Belgium
Floods can damage transportation and energy infrastructure, disrupt the delivery of services, and take a toll on public health, sometimes even causing significant loss of life. Although scientists widely stress the compelling need for resilience against extreme events under a changing climate, tools for dealing with expected hazards lag behind. Not only the socio-economic, ecological, and cultural impact of floods must be considered, but also the potential disruption of society, with a view to priority adaptation guidelines, measures, and policy recommendations. The main shortcoming of current impact assessment tools is their raster approach, which cannot effectively handle the multiple metadata attributes of vital infrastructure, crucial buildings, and vulnerable land use (among other challenges). We have developed a powerful cross-platform flood impact assessment tool (FLIAT) that uses a vector approach linked to a relational database, is implemented in open-source programming languages, and can perform parallel computation. As a result, FLIAT can manage multiple detailed datasets with no loss of geometrical information. This paper describes the development of FLIAT and the performance of this tool.
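As a rough illustration of the vector approach described above, the sketch below intersects a flood extent polygon with building footprints using the shapely library; the data, names, and workflow are invented for illustration and are not FLIAT's actual schema or API:

```python
# A minimal sketch of a vector overlay (illustrative only, not FLIAT's API):
# intersecting exact polygon geometries avoids the information loss of a
# raster approach. Requires the shapely package.
from shapely.geometry import Polygon

flood_extent = Polygon([(0, 0), (10, 0), (10, 6), (0, 6)])    # toy flood zone

buildings = {                                                 # toy footprints
    "school": Polygon([(2, 2), (4, 2), (4, 4), (2, 4)]),
    "hospital": Polygon([(8, 5), (12, 5), (12, 9), (8, 9)]),  # partly flooded
}

for name, footprint in buildings.items():
    flooded = footprint.intersection(flood_extent)            # exact geometry
    if not flooded.is_empty:
        print(f"{name}: {flooded.area / footprint.area:.0%} of footprint flooded")
```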
An anti-aliasing method for parallel rendering
We describe a parallel rendering method based on the adaptive supersampling technique to produce anti-aliased images with minimal memory consumption. Unlike traditional supersampling methods, ours does not supersample every pixel, but only edge pixels. We consider various strategies to reduce memory consumption so that the method remains applicable where only a limited or fixed amount of pre-allocated memory is available; this is a very important issue, especially in parallel rendering. We have implemented our algorithm on a parallel machine based on the message-passing model. Towards the end of the paper, we present experimental results on the memory usage and the performance of the method.
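The abstract's core idea, shading every pixel once and spending extra samples only on pixels flagged as edges, can be sketched as follows. This is a toy serial version with an invented scene and thresholds, not the paper's parallel method:

```python
# A toy serial sketch of adaptive supersampling (invented scene and
# thresholds, not the paper's parallel algorithm): shade once per pixel, flag
# pixels that differ sharply from a neighbour, and supersample only those.
import numpy as np

def shade(x, y):
    """Toy scene: a diagonal half-plane, returning a grey level in [0, 1]."""
    return 1.0 if y > x else 0.0

W = H = 64
img = np.array([[shade((i + 0.5) / W, (j + 0.5) / H) for i in range(W)]
                for j in range(H)])

# Edge detection: compare each pixel with its right and bottom neighbours.
edge = np.zeros((H, W), dtype=bool)
edge[:, :-1] |= np.abs(img[:, :-1] - img[:, 1:]) > 0.1
edge[:-1, :] |= np.abs(img[:-1, :] - img[1:, :]) > 0.1

# Supersample only the flagged edge pixels with a 4x4 sub-pixel grid.
for j, i in zip(*np.nonzero(edge)):
    sub = [shade((i + (u + 0.5) / 4) / W, (j + (v + 0.5) / 4) / H)
           for u in range(4) for v in range(4)]
    img[j, i] = sum(sub) / len(sub)

print(f"supersampled {edge.sum()} of {W * H} pixels")
```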
Efficient Algorithms for Coastal Geographic Problems
The increasing performance of computers has made it possible to algorithmically solve problems for which manual, and possibly inaccurate, methods were previously used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult.
Two geographic problems are studied in the articles included in this thesis. In the first problem, the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure in different areas. In the second problem, the input consists of a set of sites where water quality observations have been made, together with the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost.
Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted for serial operation or for a small number of CPU cores, whereas one, together with its further developments, is also suitable for parallel machines such as GPUs.
In the water quality problem, the given set of monitoring sites has a very large number of possible subsets. Moreover, the task involves time-consuming operations such as linear regression, which further limits how many subsets can be examined. The solution therefore relies on heuristics that do not necessarily produce an optimal result.
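For concreteness, a brute-force reading of the fetch length problem can be written in a few lines: cast a ray from a study point in a given direction and take the nearest shoreline-segment intersection. This naive version, with toy coordinates, is exactly the kind of baseline the thesis's algorithms are designed to outperform:

```python
# A brute-force baseline for the fetch length problem (toy data, not one of
# the thesis's algorithms): intersect a ray from the study point with every
# shoreline segment and keep the nearest hit.
import math

def ray_segment_distance(px, py, dx, dy, ax, ay, bx, by):
    """Distance along ray (p, d) to segment a-b, or None if there is no hit."""
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                            # ray parallel to segment
        return None
    t = ((ax - px) * ey - (ay - py) * ex) / denom     # distance along the ray
    s = ((ax - px) * dy - (ay - py) * dx) / denom     # position on the segment
    return t if t > 0 and 0 <= s <= 1 else None

def fetch(px, py, angle_deg, shoreline):
    """Fetch length from (px, py) in one direction, inf for open water."""
    dx, dy = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    hits = (ray_segment_distance(px, py, dx, dy, *a, *b)
            for a, b in zip(shoreline, shoreline[1:]))
    return min((t for t in hits if t is not None), default=math.inf)

shore = [(5.0, -10.0), (5.0, 10.0)]      # a straight north-south shoreline
print(fetch(0.0, 0.0, 0.0, shore))       # 5.0: the shore lies 5 units due east
```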
Parallel Contour Path Planning for Complicated Cavity Part Fabrication using Voronoi-based Distance Map
To generate parallel contour paths for the direct fabrication of complicated cavity components, a novel path planning method based on a Voronoi-based distance map is presented in this paper. First, a grid representation of the polygonal slice is produced by hierarchical rasterization using graphics hardware acceleration and divided into Voronoi cells of contours by an exact EDT (Euclidean distance transformation). Then, each VCI (Voronoi cell of an inner contour) is further subdivided into a CLRI (closed loop region of an inner contour) and an OLRI (open loop region of an inner contour). Closed paths for each CLRI, and for the block merging the VCO (Voronoi cell of the outer contour) with all OLRIs, are generated by local and global isoline extraction, respectively. The final path, ordered in the circumferential and radial directions, is obtained by sorting and connecting all individual paths. In comparison with conventional methods such as pair-wise intersection and the Voronoi diagram, the proposed algorithm is numerically robust and avoids null paths and self-intersections, thanks to the use of the distance map and the discrete Voronoi diagram. It is especially suited for FGM (Functionally Graded Material) design and fabrication.
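The central data structure, a distance map of the slice interior obtained by an exact EDT, can be sketched with an off-the-shelf transform. The grid, spacing, and isoline-band extraction below are illustrative stand-ins (using scipy) for the paper's GPU rasterization and Voronoi cell machinery:

```python
# An illustrative stand-in (using scipy, not the paper's GPU pipeline):
# rasterize the slice, compute an exact Euclidean distance transform, and
# read contour-parallel offsets off the distance field as isoline bands.
import numpy as np
from scipy.ndimage import distance_transform_edt

grid = np.zeros((100, 100), dtype=bool)   # toy slice on a 100x100 grid
grid[20:80, 20:80] = True                 # interior of a square cavity

# Exact EDT: distance (in cells) from each interior cell to the boundary.
dist = distance_transform_edt(grid)

# Offset contours at a fixed path spacing: one band of cells per isoline.
spacing = 5.0
for k in range(1, int(dist.max() / spacing) + 1):
    band = (dist >= k * spacing) & (dist < k * spacing + 1.0)
    print(f"offset {k}: {band.sum()} cells in the isoline band")
```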
Methods for Rapidly Processing Angular Masks of Next-Generation Galaxy Surveys
As galaxy surveys become larger and more complex, keeping track of the completeness, magnitude limit, and other survey parameters as a function of direction on the sky becomes an increasingly challenging computational task. For example, typical angular masks of the Sloan Digital Sky Survey contain about N=300,000 distinct spherical polygons. Managing masks with such large numbers of polygons becomes intractably slow, particularly for tasks that run in time O(N^2) with a naive algorithm, such as finding which polygons overlap each other. Here we present a "divide-and-conquer" solution to this challenge: we first split the angular mask into predefined regions called "pixels," such that each polygon is in only one pixel, and then perform further computations, such as checking for overlap, on the polygons within each pixel separately. This reduces O(N^2) tasks to O(N), and also reduces the important task of determining in which polygon(s) a point on the sky lies from O(N) to O(1), resulting in significant computational speedup. Additionally, we present a method to efficiently convert any angular mask to and from the popular HEALPix format. This method can be generically applied to convert to and from any desired spherical pixelization. We have implemented these techniques in a new version of the mangle software package, which is freely available at http://space.mit.edu/home/tegmark/mangle/, along with complete documentation and example applications. These new methods should prove quite useful to the astronomical community, and since mangle is a generic tool for managing angular masks on a sphere, it has the potential to benefit terrestrial mapmaking applications as well.
Comment: New version 2.1 of the mangle software is now available at http://space.mit.edu/home/tegmark/mangle/ - it includes galaxy survey masks and galaxy lists for the latest SDSS data release and the 2dFGRS final data release, as well as extensive documentation and examples. 14 pages, 9 figures, matches version accepted by MNRAS.
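The divide-and-conquer lookup can be illustrated in miniature. The sketch below buckets polygons by a containing pixel so that point location tests only the polygons in one bucket; it uses a flat grid and matplotlib's point-in-polygon test rather than mangle's spherical polygons, and all data are invented:

```python
# A flat-grid miniature of the pixelization scheme (invented data; mangle
# itself works with spherical polygons): each polygon is filed under the
# pixel that contains it, so point location tests only one small bucket.
from collections import defaultdict
from matplotlib.path import Path   # point-in-polygon test

PIX = 10.0   # pixel size, in the same units as the coordinates

def pixel(x, y):
    return (int(x // PIX), int(y // PIX))

# Assume each polygon fits in one pixel, as in the mask-splitting scheme.
polygons = {
    "A": [(1, 1), (4, 1), (4, 4), (1, 4)],
    "B": [(6, 6), (9, 6), (9, 9), (6, 9)],
    "C": [(12, 2), (15, 2), (15, 5), (12, 5)],
}

buckets = defaultdict(list)
for name, verts in polygons.items():
    buckets[pixel(*verts[0])].append((name, Path(verts)))

def locate(x, y):
    """Polygons containing (x, y): O(1) expected instead of O(N)."""
    return [name for name, path in buckets.get(pixel(x, y), [])
            if path.contains_point((x, y))]

print(locate(2.0, 2.0))    # ['A']
print(locate(13.0, 3.0))   # ['C']
```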
Sparse Volumetric Deformation
Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure, and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently.
The problem with this approach is that it currently only supports static scenes. This is because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location and, similarly, gaps occur where the deformation stretches elements across more than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The difficulty with this technique is that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; the control structure must therefore also capture features hierarchically, according to the varying volumetric resolution.
This thesis investigates the area of deforming and rendering massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in the fields of real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton that is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
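The hierarchical storage idea underlying this work, coarse averages in internal nodes with refinement only inside a region of interest, might be sketched as follows; the node layout and ROI test are assumptions for illustration, not the thesis's data structure:

```python
# A compact sketch (assumed layout, not the thesis's implementation) of a
# sparse octree in which internal nodes hold a coarse average, so traversal
# can return coarse values outside a region of interest (ROI) and descend
# only where detail is needed.
class Node:
    def __init__(self, value, children=None):
        self.value = value        # average of the subtree at this resolution
        self.children = children  # None for a leaf, else a list of 8 octants

def boxes_overlap(a, b):
    """Axis-aligned overlap test for boxes ((x0, y0, z0), (x1, y1, z1))."""
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

def octant_box(box, k):
    """Bounding box of octant k (one bit per axis) of a parent box."""
    lo, hi = box
    mid = tuple((lo[i] + hi[i]) / 2 for i in range(3))
    half = lambda i: (lo[i], mid[i]) if not (k >> i) & 1 else (mid[i], hi[i])
    return (tuple(half(i)[0] for i in range(3)),
            tuple(half(i)[1] for i in range(3)))

def sample(node, box, roi):
    """Yield (box, value) pairs, refining only inside the region of interest."""
    if node.children is None or not boxes_overlap(box, roi):
        yield box, node.value          # leaf, or coarse value outside the ROI
        return
    for k, child in enumerate(node.children):
        yield from sample(child, octant_box(box, k), roi)

# Two-level demo: only the first octant lies inside the ROI, so the other
# seven subtrees are reported at coarse resolution without being visited.
fine = Node(1.0, [Node(v / 8) for v in range(8)])
root = Node(0.5, [fine] + [Node(0.0, [Node(0.0)] * 8) for _ in range(7)])
roi = ((0.0, 0.0, 0.0), (0.25, 0.25, 0.25))
print(sum(1 for _ in sample(root, ((0.0,) * 3, (1.0,) * 3), roi)))  # 15, not 64
```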
Fast data parallel polygon rendering
This paper describes a data parallel method for polygon rendering on a massively parallel machine. This method, based on a simple shading model, is targeted at applications which require very fast rendering of extremely large sets of polygons. Such sets are found in many scientific visualization applications. The renderer can handle arbitrarily complex polygons which need not be meshed. Issues involving load balancing are addressed and a data parallel load balancing algorithm is presented. The rendering and load balancing algorithms are implemented on both the CM-200 and the CM-5. Experimental results are presented. This rendering toolkit enables a scientist to display 3D shaded polygons directly from a parallel machine, avoiding the transmission of huge amounts of data to a post-processing rendering system.
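The paper's data parallel load balancing algorithm is not detailed in the abstract; as a generic stand-in, the sketch below greedily assigns polygons to the least-loaded processor, using vertex count as an invented proxy for rendering cost:

```python
# A generic greedy load balancing sketch (not the paper's algorithm, which
# the abstract does not spell out): place polygons in descending cost order
# on the currently least-loaded processor ("longest processing time" rule).
import heapq

def balance(polygons, n_procs):
    """Assign each polygon (a list of vertices) to one of n_procs workers."""
    heap = [(0, p) for p in range(n_procs)]   # (current load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_procs)}
    for poly in sorted(polygons, key=len, reverse=True):
        load, p = heapq.heappop(heap)
        assignment[p].append(poly)
        # Vertex count stands in for the real shading / rasterization cost.
        heapq.heappush(heap, (load + len(poly), p))
    return assignment

tris = [[(0.0, 0.0)] * n for n in (3, 3, 4, 5, 8, 3)]   # toy polygons
for p, polys in balance(tris, 2).items():
    print(p, sum(map(len, polys)))                       # loads 14 and 12
```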
APRIL: Approximating Polygons as Raster Interval Lists
The spatial intersection join is an important spatial query operation, due to its popularity and high complexity. The spatial join pipeline takes as input two collections of spatial objects (e.g., polygons). In the filter step, pairs of object MBRs that intersect are identified and passed to the refinement step for verification of the join predicate on the exact object geometries. The bottleneck of spatial join evaluation is in the refinement step. We introduce APRIL, a powerful intermediate step in the pipeline, which is based on raster interval approximations of object geometries. Our technique applies a sequence of interval joins on 'intervalized' object approximations to determine whether the objects intersect or not. Compared to previous work, APRIL approximations are simpler, occupy much less space, and achieve similar pruning effectiveness at a much higher speed. Besides intersection joins between polygons, APRIL can be directly applied, with high effectiveness, to polygonal range queries, within joins, and polygon-linestring joins. By applying a lightweight compression technique, APRIL approximations may occupy even less space than object MBRs. Furthermore, APRIL can be customized to apply to partitioned data and to polygons of varying sizes, rasterized at different granularities. Our last contribution is a novel algorithm that computes the APRIL approximation of a polygon without having to rasterize it in full, which is orders of magnitude faster than the computation of other raster approximations. Experiments on real data demonstrate the effectiveness and efficiency of APRIL; compared to the state-of-the-art intermediate filter, APRIL occupies 2x-8x less space, is 3.5x-8.5x more time-efficient, and reduces the end-to-end join cost by up to a factor of 3.
Comment: 12 pages.
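The interval-list representation can be illustrated with a minimal merge-style test: each polygon is approximated by sorted [start, end) runs of covered raster cells, and two polygons can only intersect if some runs overlap. APRIL's ALL/FULL interval distinction, compression, and rasterization are deliberately omitted here:

```python
# A minimal interval-list intersection test (the representation only; APRIL's
# ALL/FULL intervals, compression, and rasterization are omitted). Each
# polygon is a sorted list of [start, end) runs of covered raster cell IDs.
def intervals_intersect(a, b):
    """True if any run in a overlaps any run in b (both lists sorted)."""
    i = j = 0
    while i < len(a) and j < len(b):
        a0, a1 = a[i]
        b0, b1 = b[j]
        if a0 < b1 and b0 < a1:      # the two runs share at least one cell
            return True
        if a1 <= b0:                 # advance whichever list ends first
            i += 1
        else:
            j += 1
    return False

poly_x = [(2, 5), (9, 12), (20, 26)]         # cell runs covered by polygon X
poly_y = [(5, 9), (24, 30)]                  # cell runs covered by polygon Y
print(intervals_intersect(poly_x, poly_y))   # True: (20, 26) meets (24, 30)
```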