Distributed Shared Memory for Roaming Large Volumes
We present a cluster-based volume rendering system for roaming very large volumes. The system makes it possible to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes, and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing, and rendering for optimal volume roaming.
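The miss-handling order described above (local cache, then peer nodes, then local disk) can be illustrated with a minimal Python sketch. The class name `DistributedBrickCache` and the `fetch_from_peer` / `load_from_disk` callbacks are hypothetical names introduced here for illustration; they are not part of the paper's system.

```python
class DistributedBrickCache:
    """Sketch of a hierarchical brick cache: on a miss, check peer
    nodes' page residency before falling back to the local disk."""

    def __init__(self, fetch_from_peer, load_from_disk, capacity=4):
        self.local = {}                        # brick_id -> data (simulated texture RAM)
        self.capacity = capacity
        self.fetch_from_peer = fetch_from_peer # returns brick data or None (fast network path)
        self.load_from_disk = load_from_disk   # always succeeds (slow disk path)

    def get(self, brick_id):
        # 1. Local cache hit: fastest path.
        if brick_id in self.local:
            return self.local[brick_id], "local"
        # 2. Check page residency on the other cluster nodes first.
        data = self.fetch_from_peer(brick_id)
        if data is not None:
            self._insert(brick_id, data)
            return data, "peer"
        # 3. Only then fall back to the slow local-disk read.
        data = self.load_from_disk(brick_id)
        self._insert(brick_id, data)
        return data, "disk"

    def _insert(self, brick_id, data):
        # Naive eviction: drop an arbitrary resident brick when full.
        if len(self.local) >= self.capacity:
            self.local.pop(next(iter(self.local)))
        self.local[brick_id] = data
```

For example, a brick resident on a peer is served over the network and then cached locally, so the next access is a local hit; a brick resident nowhere falls through to disk. The real system additionally overlaps these fetches with slicing and rendering via asynchronous I/O, which this synchronous sketch omits.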
An Architecture Approach for 3D Render Distribution using Mobile Devices in Real Time
Nowadays, video games such as Massively Multiplayer Online Games (MMOGs) have become cultural mediators. Mobile games account for a large share of downloads and potential profits in the application market. Although the processing power of mobile devices and transmission bandwidth keep increasing, poor network connectivity can bottleneck Gaming as a Service (GaaS). To enhance performance in this digital ecosystem, processing tasks are distributed between thin-client devices and robust servers. This research is based on a divide-and-conquer method: the volumetric surfaces in a game's sequence of scenes are subdivided using a KD-tree, reducing each surface to small sets of points. Reconstruction efficiency improves because data searches are performed in small, local regions. Processes are modeled through a finite set of states built using Hidden Markov Models, with domains configured by heuristics. Six tests that control the states of each heuristic, including the number of intervals, are carried out to validate the proposed model. The validation concludes that the proposed model optimizes the response in frames per second over a sequence of interactions.
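The KD-tree subdivision the abstract relies on can be sketched in a few lines of Python: points are recursively split along alternating axes until each leaf holds a small set, so a query only searches a small local region. This is a generic KD-tree illustration under assumed parameters (a `leaf_size` of 4, 3D points), not the paper's actual implementation.

```python
def build_kdtree(points, depth=0, leaf_size=4):
    """Recursively split a 3D point set along alternating axes."""
    if len(points) <= leaf_size:
        return {"leaf": True, "points": points}
    axis = depth % 3                           # cycle x, y, z
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                        # median split keeps the tree balanced
    return {
        "leaf": False,
        "axis": axis,
        "split": pts[mid][axis],
        "left": build_kdtree(pts[:mid], depth + 1, leaf_size),
        "right": build_kdtree(pts[mid:], depth + 1, leaf_size),
    }

def leaf_for(tree, p):
    """Descend to the small local region containing point p."""
    while not tree["leaf"]:
        tree = tree["left"] if p[tree["axis"]] < tree["split"] else tree["right"]
    return tree["points"]
```

Reconstruction then only needs to consult the handful of points in the leaf region around a query point, rather than scanning the whole surface, which is the efficiency gain the abstract claims.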