18,951 research outputs found
From Big Data to Big Displays: High-Performance Visualization at Blue Brain
Blue Brain has pushed high-performance visualization (HPV) to complement its HPC strategy since its inception in 2007. In 2011, this strategy was accelerated to develop innovative visualization solutions through increased funding and strategic partnerships with other research institutions. We present the key elements of this HPV ecosystem, which integrates C++ visualization applications with novel collaborative display systems. We motivate how our strategy of transforming visualization engines into services enables a variety of use cases: not only integration with high-fidelity displays, but also building service-oriented architectures, linking into web applications, and providing remote services to Python applications.
Comment: ISC 2017 Visualization at Scale workshop
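As an illustration of the "visualization as a service" idea described above, the following Python sketch shows how a remote rendering service might be consumed from a script. The host, route, query parameters, and response format are assumptions for illustration; they are not Blue Brain's actual service interface.

```python
# Hypothetical client for a remote rendering service exposed over HTTP.
# Endpoint and parameters are illustrative assumptions, not a real API.
import urllib.parse
import urllib.request


def fetch_frame(host: str, camera: dict, out_path: str = "frame.jpg") -> str:
    """Request a single rendered frame from a (hypothetical) rendering service."""
    url = f"http://{host}/render?{urllib.parse.urlencode(camera)}"
    with urllib.request.urlopen(url) as response:   # blocking HTTP GET
        with open(out_path, "wb") as f:
            f.write(response.read())                # save the image payload
    return out_path


if __name__ == "__main__":
    # Ask the service for a frame rendered from a given camera position.
    fetch_frame("vizcluster.example.org:8080",
                {"x": 0.0, "y": 0.0, "z": 100.0, "width": 1920, "height": 1080})
```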
The role of graphics super-workstations in a supercomputing environment
A new class of very powerful workstations has recently become available which integrates near-supercomputer computational performance with very powerful and high-quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding of results, communication of results), and real-time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).
Volume visualization of time-varying data using parallel, multiresolution and adaptive-resolution techniques
This paper presents a parallel rendering approach that allows high-quality visualization of large time-varying volume datasets. Multiresolution and adaptive-resolution techniques are also incorporated to improve the efficiency of the rendering. Three basic steps are needed to implement this kind of application. First, we divide the task through decomposition of the data; this decomposition can be temporal, spatial, or a mix of both. After the data has been divided, each portion is rendered by a separate processor to create sub-images or frames. Finally, these sub-images or frames are assembled into a final image or animation. After developing this application, several experiments were performed to show that this approach indeed saves time when a reasonable number of processors are used. We also conclude that the optimal number of processors depends on the size of the dataset used.
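The divide / render / composite pipeline described above can be sketched in a few lines of Python. The snippet below uses a spatial decomposition along one axis and a maximum-intensity projection as a stand-in renderer; it is a minimal illustration of the three steps, not the paper's implementation.

```python
# Sketch of the three steps: decompose the volume, render each piece on a
# separate worker, composite the sub-images. A maximum-intensity projection
# (MIP) along z stands in for the real renderer.
import numpy as np
from multiprocessing import Pool


def render_brick(brick: np.ndarray) -> np.ndarray:
    """'Render' one sub-volume: here, a maximum-intensity projection along z."""
    return brick.max(axis=0)


def parallel_render(volume: np.ndarray, n_procs: int = 4) -> np.ndarray:
    # 1) Spatial decomposition into slabs along the projection axis.
    bricks = np.array_split(volume, n_procs, axis=0)
    # 2) Each worker renders its own sub-image in parallel.
    with Pool(n_procs) as pool:
        sub_images = pool.map(render_brick, bricks)
    # 3) Composite the sub-images into the final image (max is correct for MIP).
    return np.maximum.reduce(sub_images)


if __name__ == "__main__":
    vol = np.random.rand(128, 256, 256).astype(np.float32)  # synthetic volume
    print(parallel_render(vol).shape)                        # (256, 256)
```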
BSML: A Binding Schema Markup Language for Data Interchange in Problem Solving Environments (PSEs)
We present a binding schema markup language (BSML) for describing data
interchange between scientific codes. Such a facility is an important
constituent of scientific problem solving environments (PSEs). BSML is designed
to integrate with a PSE or application composition system that views model
specification and execution as a problem of managing semistructured data. The
data interchange problem is addressed by three techniques for processing
semistructured data: validation, binding, and conversion. We present BSML and
describe its application to a PSE for wireless communications system design.
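The three processing techniques named above can be illustrated with a small, self-contained Python sketch. The schema, field names, and types below are hypothetical and do not reflect actual BSML syntax; the sketch only shows what validation, binding, and conversion mean for a semistructured record.

```python
# Illustrative validation / binding / conversion of a semistructured record.
# The schema and field names are hypothetical; this is not BSML syntax.
from dataclasses import dataclass

SCHEMA = {"frequency_hz": float, "power_dbm": float}   # hypothetical schema


def validate(record: dict) -> None:
    """Check that the record carries the expected fields and types."""
    for field, ftype in SCHEMA.items():
        if field not in record or not isinstance(record[field], ftype):
            raise ValueError(f"invalid or missing field: {field}")


@dataclass
class Transmitter:          # binding target used by the scientific code
    frequency_hz: float
    power_dbm: float


def bind(record: dict) -> Transmitter:
    """Bind the validated record to a typed object."""
    return Transmitter(**record)


def convert(tx: Transmitter) -> str:
    """Convert the bound object into another interchange format (here CSV)."""
    return f"{tx.frequency_hz},{tx.power_dbm}"


record = {"frequency_hz": 2.4e9, "power_dbm": 20.0}
validate(record)
print(convert(bind(record)))    # "2400000000.0,20.0"
```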
A random walk model of wave propagation
This paper shows that a reasonably accurate description of propagation loss in small urban cells can be obtained with a simple stochastic model based on the theory of random walks, which accounts for only two parameters: the amount of clutter and the amount of absorption in the environment. Despite the simplifications of the model, the derived analytical solution correctly describes the smooth transition of power attenuation from an inverse-square law with the distance to the transmitter to an exponential attenuation as this distance is increased, as observed in practice. Our analysis suggests using a simple exponential path loss formula as an alternative to the empirical formulas that are often used for prediction. Results are validated by comparison with experimental data collected in a small urban cell.
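The qualitative behaviour described above can be sketched as follows. The combined inverse-square/exponential form is only illustrative and is not the paper's derived analytical solution; alpha stands in for the clutter and absorption parameters.

```python
# Illustrative path gain that falls off as 1/r^2 near the transmitter and
# exponentially farther away; alpha stands in for the clutter/absorption
# parameters. This is not the paper's analytical solution.
import numpy as np


def path_gain(r: np.ndarray, alpha: float = 0.02) -> np.ndarray:
    """Path gain ~ exp(-alpha * r) / r**2, with r in metres."""
    return np.exp(-alpha * r) / r**2


distances = np.linspace(1.0, 500.0, 5)
loss_db = -10.0 * np.log10(path_gain(distances))    # path loss in dB
for d, loss in zip(distances, loss_db):
    print(f"{d:6.1f} m : {loss:6.1f} dB")
```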
AROMA: Automatic Generation of Radio Maps for Localization Systems
WLAN localization has become an active research field recently. Due to the
wide WLAN deployment, WLAN localization provides ubiquitous coverage and adds
to the value of the wireless network by providing the location of its users
without using any additional hardware. However, WLAN localization systems
usually require constructing a radio map, which is a major barrier to the
deployment of WLAN localization systems. The radio map stores information about the
signal strength from different signal strength streams at selected locations in
the site of interest. Typical construction of a radio map involves measurements
and calibrations, making it a tedious and time-consuming operation. In this
paper, we present the AROMA system that automatically constructs accurate
active and passive radio maps for both device-based and device-free WLAN
localization systems. AROMA has three main goals: high accuracy, low
computational requirements, and minimum user overhead. To achieve high
accuracy, AROMA uses 3D ray tracing enhanced with the uniform theory of
diffraction (UTD) to model the electric field behavior and the human shadowing
effect. AROMA also automates a number of routine tasks, such as importing
building models and automatic sampling of the area of interest, to reduce the
user's overhead. Finally, AROMA uses a number of optimization techniques to
reduce the computational requirements. We present our system architecture and
describe the details of its different components that allow AROMA to achieve
its goals. We evaluate AROMA in two different testbeds. Our experiments show
that the predicted signal strength differs from the measurements by a maximum
average absolute error of 3.18 dBm, achieving a maximum localization error of 2.44 m for both the device-based and device-free cases.
Comment: 14 pages, 17 figures
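A radio map, as described above, can be pictured as predicted signal strength sampled on a grid over the site of interest. The following Python sketch builds such a map with a simple log-distance model as a stand-in for AROMA's 3D ray tracing with UTD; the access point positions and radio parameters are hypothetical.

```python
# Sketch of radio-map construction: sample a grid over the area of interest and
# store the predicted signal strength from each access point at every location.
# The log-distance model is only a stand-in for AROMA's 3D ray tracing with UTD;
# AP positions and radio parameters are hypothetical.
import numpy as np

APS = {"ap1": (0.0, 0.0), "ap2": (30.0, 10.0)}    # hypothetical AP positions (m)
TX_POWER_DBM, PATH_LOSS_EXP = 20.0, 3.0           # illustrative radio parameters


def predicted_rss(ap_xy, point_xy):
    """Predicted received signal strength (dBm) from a log-distance model."""
    d = max(float(np.hypot(ap_xy[0] - point_xy[0], ap_xy[1] - point_xy[1])), 1.0)
    return TX_POWER_DBM - 10.0 * PATH_LOSS_EXP * np.log10(d)


def build_radio_map(width_m=40.0, height_m=20.0, step_m=5.0):
    """Return {grid location: {AP name: predicted RSS}} for the sampled area."""
    radio_map = {}
    for x in np.arange(0.0, width_m + step_m, step_m):
        for y in np.arange(0.0, height_m + step_m, step_m):
            radio_map[(float(x), float(y))] = {
                ap: predicted_rss(xy, (x, y)) for ap, xy in APS.items()
            }
    return radio_map


print(len(build_radio_map()))    # number of sampled grid locations
```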
Setting intelligent city tiling strategies for urban shading simulations
Accurately assessing the solar potential of all building surfaces in cities, including shading and multiple reflections between buildings, is essential for urban energy modelling. However, since the number of surface interactions and radiation exchanges increases exponentially with the scale of the district, innovative computational strategies are needed, some of which will be introduced in the present work. They should strike the best compromise between result accuracy and computational efficiency, i.e. computational time and memory requirements.
In this study, different approaches that may be used for the computation of urban solar irradiance over large areas are presented. Two concrete urban case studies of different densities have been used to compare and evaluate three different methods: the Perez Sky model, the Simplified Radiosity Algorithm, and a new scene tiling method implemented in our urban simulation platform SimStadt, used for feasible estimations on a large scale. To quantify the influence of shading, the new concept of the Urban Shading Ratio has been introduced and used in this evaluation process. In high-density urban areas, this index may reach 60% for facades and 25% for roofs. Tiles of 500 m width with 200 m overlap are a minimum requirement in this case to compute solar irradiance with acceptable accuracy. In medium-density areas, tiles of 300 m width with 100 m overlap perfectly meet the accuracy requirements. In addition, the solar potential for various solar energy thresholds, as well as the monthly variation of the Urban Shading Ratio, has been quantified for both case studies, distinguishing between roofs and facades of different orientations.
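The tiling scheme evaluated above (tile width minus overlap gives the stride between tiles) can be sketched as follows; the district dimensions are illustrative, and the per-tile irradiance simulation itself is outside the scope of the sketch.

```python
# Sketch of the tiling scheme: cover a district with square tiles of a given
# width and overlap (500 m width with 200 m overlap gives a 300 m stride).
# Only the tile extents are generated; the per-tile irradiance simulation is
# not part of this sketch.
def make_tiles(district_w_m, district_h_m, tile_w_m=500.0, overlap_m=200.0):
    """Return (x_min, y_min, x_max, y_max) extents covering the district."""
    stride = tile_w_m - overlap_m
    tiles = []
    y = 0.0
    while y < district_h_m:
        x = 0.0
        while x < district_w_m:
            tiles.append((x, y,
                          min(x + tile_w_m, district_w_m),
                          min(y + tile_w_m, district_h_m)))
            x += stride
        y += stride
    return tiles


# A 1.2 km x 0.9 km district with the high-density settings mentioned above.
for tile in make_tiles(1200.0, 900.0):
    print(tile)
```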