Large scale parallel state space search utilizing graphics processing units and solid state disks
The evolution of science is a double-track process composed of theoretical insights on
the one hand and practical inventions on the other. While in most cases new theoretical
insights motivate hardware developers to build systems that follow the theory,
in some cases existing hardware solutions force theoretical research to predict the
results to be expected.
Progress in computer science relies on two aspects: processing information and storing
it. Improving one side without touching the other merely creates new problems
without providing a real alternative solution. Reducing the time needed to solve a
challenge may help with long-running problems, but it fails for problems that require
large amounts of storage. Conversely, increasing the available storage space allows
harder problems to be solved, provided that enough time is available.
This work studies two recent hardware developments and utilizes them in the
domain of graph search: the trend to move information storage from magnetic
disks to electronic media, and the tendency to parallelize computation
in order to speed up information processing.
Storing information on rotating magnetic disks has been the standard for years
and has reached a point where the storage capacity can be regarded as practically
infinite, since new drives can be added instantly and at low cost. However,
while the available storage capacity increases every year, the transfer speed does
not. At the beginning of this work, solid state media appeared on the market, slowly
displacing hard disks in speed-demanding applications. Today, as this work is being
completed, solid state drives are replacing magnetic disks in mobile computing, and
computing centers use them as caching media to speed up information retrieval.
The reason is their large advantage in random access, where the speed does not drop
as significantly as with magnetic drives.
While storing and retrieving huge amounts of information is one side of the coin,
the other is processing speed. The trend of increasing the clock frequency
of single processors stagnated in 2006, and manufacturers started to combine
multiple cores in one processor. Whereas a CPU is a general-purpose processor, the
manufacturers of graphics processing units (GPUs) face the challenge of performing
the same computation for a large number of image points. Here parallelization offers
huge advantages, so modern graphics cards have evolved into highly parallel computing
devices with several hundred cores. The challenge is to utilize these processors
in domains other than graphics processing.
Search is one of the most widely used tasks in computer science. Graph search is
the crucial aspect not only in disciplines where the search is obvious, but also in
areas such as software testing. Strategies that allow larger graphs to be examined,
be it by reducing the number of considered nodes or by increasing the search speed,
have to be developed to meet the growing challenges. This work enhances search in
multiple scientific domains such as explicit-state Model Checking, Action Planning,
Game Solving, and Probabilistic Model Checking, proposing strategies to find solutions
to the respective search problems.
Providing a universal search strategy that can be used in all environments to
utilize solid state media and graphics processing units is not possible due to the
heterogeneity of the domains. Thus, this work presents a toolkit of strategies tied
together in a universal three-stage scheme. In the first stage, the edges leaving a node
are determined; in the second stage, the algorithm follows these edges to generate
successor nodes. Duplicate detection in the third stage compares all newly generated
nodes against existing ones and avoids multiple expansions.
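As an illustration only (not the thesis's implementation), the three stages could be organized as in the following Python sketch, where outgoing_edges, apply_edge, and is_goal are hypothetical placeholders for a domain's successor generator and goal test, and states are assumed to be hashable:

    from collections import deque

    def three_stage_search(initial_state, outgoing_edges, apply_edge, is_goal):
        """Breadth-first exploration split into the three stages described above."""
        closed = {initial_state}          # all nodes stored so far
        frontier = deque([initial_state])
        while frontier:
            node = frontier.popleft()
            if is_goal(node):
                return node
            # Stage 1: determine the edges leaving the node.
            edges = outgoing_edges(node)
            # Stage 2: follow the edges to generate the successor nodes.
            successors = [apply_edge(node, edge) for edge in edges]
            # Stage 3: duplicate detection against previously generated nodes.
            for succ in successors:
                if succ not in closed:
                    closed.add(succ)
                    frontier.append(succ)
        return None

Keeping the stages separate is what makes it possible to choose a different strategy for each of them, as described below.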
For each stage, at least two strategies are proposed, and decision hints are given to
simplify the selection of the appropriate one. After describing the strategies, the
toolkit is evaluated in four domains, explaining the choice of strategy, evaluating
its outcome, and offering pointers for future work on the topic.
I Am Error
I Am Error is a platform study of the Nintendo Family Computer (or Famicom), a videogame console first released in Japan in July 1983 and later exported to the rest of the world as the Nintendo Entertainment System (or NES). The book investigates the underlying computational architecture of the console and its effects on the creative works (e.g. videogames) produced for the platform. I Am Error advances the concept of platform as a shifting configuration of hardware and software that extends even beyond its ‘native’ material construction. The book provides a deep technical understanding of how the platform was programmed and engineered, from code to silicon, including the design decisions that shaped both the expressive capabilities of the machine and the perception of videogames in general. The book also considers the platform beyond the console proper, including cartridges, controllers, peripherals, packaging, marketing, licensing, and play environments. Likewise, it analyzes the NES’s extension and afterlife in emulation and hacking, birthing new genres of creative expression such as ROM hacks and tool-assisted speed runs. I Am Error considers videogames and their platforms to be important objects of cultural expression, alongside cinema, dance, painting, theater and other media. It joins the discussion taking place in similar burgeoning disciplines—code studies, game studies, computational theory—that engage digital media with critical rigor and descriptive depth. But platform studies is not simply a technical discussion—it also keeps a keen eye on the cultural, social, and economic forces that influence videogames. No platform exists in a vacuum: circuits, code, and console alike are shaped by the currents of history, politics, economics, and culture—just as those currents are shaped in kind
Challenges and applications of assembly level software model checking
This thesis addresses the application of a formal method called Model Checking to the
domain of software verification. Here, exploration algorithms are used to search for
errors in a program. In contrast to the majority of other approaches, we claim that the
search should be applied to the actual source code of the program, rather than to some
formal model.
There are several challenges that need to be overcome to build such a model checker.
First, the tool must be capable of handling the full semantics of the underlying programming
language. This implies a considerable amount of additional work unless the interpretation
of the program is done by some existing infrastructure. The second challenge
lies in the increased memory requirements needed to memorize entire program configurations.
This additionally aggravates the problem of large state spaces that every model
checker faces anyway. As a remedy to the first problem, the thesis proposes to use an existing
virtual machine to interpret the program. This takes the burden off the developer,
who can fully concentrate on the model checking algorithms. To address the problem of
large program states, we call attention to the fact that most transitions in a program only
change small fractions of the entire program state. Based on this observation, we devise
an incremental storing of states which considerably lowers the memory requirements of
program exploration. To further alleviate the per-state memory requirement, we apply
state reconstruction, where states are no longer memorized explicitly but through their
generating path. Another problem that results from the large state description of a program
lies in the computational effort of hashing, which is exceptionally high for the used
approach. Based on the same observation as used for the incremental storing of states,
we devise an incremental hash function which only needs to process the changed parts
of the program’s state. Due to the dynamic nature of computer programs, this is not a
trivial task and constitutes a considerable part of the overall thesis.
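As a sketch of the incremental idea (a Zobrist-style scheme used here for illustration, not the thesis's actual hash function), each (slot, value) pair of the program state is mapped to a fixed pseudo-random number, and a transition that writes a slot updates the hash by XORing out the old contribution and XORing in the new one, so the cost is proportional to the size of the change rather than the size of the state:

    import random

    class IncrementalHash:
        """Zobrist-style hash that is updated only for the changed state components."""

        def __init__(self, seed=42):
            self._rng = random.Random(seed)
            self._table = {}   # (slot, value) -> fixed pseudo-random 64-bit number
            self.value = 0     # current hash of the complete state

        def _num(self, slot, val):
            key = (slot, val)
            if key not in self._table:
                self._table[key] = self._rng.getrandbits(64)
            return self._table[key]

        def update(self, slot, old_val, new_val):
            # XOR out the old contribution and XOR in the new one: O(1) per
            # changed slot, independent of the total size of the program state.
            self.value ^= self._num(slot, old_val)
            self.value ^= self._num(slot, new_val)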
Moreover, the thesis addresses a more general problem of model checking: state
explosion, i.e., the fact that the number of reachable states grows exponentially with
the number of state components. To minimize the number of states to be memorized, the
thesis concentrates on the use of heuristic search. It turns out that only a fraction of all
reachable states needs to be visited to find a specific error in the program. Heuristics
can greatly help to direct the search towards the error state. As another effective way
to reduce the number of memorized states, the thesis proposes a technique that skips
intermediate states that do not affect shared resources of the program. By merging several
consecutive state transitions to a single transition, the technique may considerably
truncate the search tree.
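The following Python sketch illustrates heuristic, best-first error search in this spirit; successors, violates, and heuristic are hypothetical stand-ins for a model checker's transition relation, error predicate, and distance estimate, not StEAM's actual interfaces:

    import heapq
    from itertools import count

    def directed_error_search(initial_state, successors, violates, heuristic):
        """Best-first search: always expand the state that looks closest to an error."""
        tie = count()  # tie-breaker so states themselves never need to be compared
        open_list = [(heuristic(initial_state), next(tie), initial_state, [initial_state])]
        closed = {initial_state}
        while open_list:
            _, _, state, path = heapq.heappop(open_list)
            if violates(state):
                return path            # counterexample trail leading to the error
            for succ in successors(state):
                if succ not in closed:
                    closed.add(succ)
                    heapq.heappush(open_list,
                                   (heuristic(succ), next(tie), succ, path + [succ]))
        return None                    # no reachable error found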
The proposed approach is realized in StEAM, a model checker for concurrent C++ programs,
which was developed in the course of the thesis. Building on an existing virtual
machine, the tool provides a set of blind and directed search algorithms for the detection
of errors in the actual C++ implementation of a program. StEAM implements all of the
aforesaid techniques, whose effectiveness is experimentally evaluated at the end of the
thesis.
Moreover, we exploit the relation between model checking and planning. The claim is
that the two fields of research have great similarities and that technical advances in one
field can easily carry over to the other. The claim is supported by a case study where
StEAM is used as a planner for concurrent multi-agent systems.
The thesis also contains a user manual for StEAM and technical details that facilitate
an understanding of the engineering process of the tool.
New Game Physics - Added Value for Transdisciplinary Teams
This study focused on game physics, an area of computer game design where physics is applied in interactive computer software. The purpose of the research was a fresh analysis of game physics in order to prove that its current usage is limited and requires advancement. The investigations presented in this dissertation establish constructive principles to advance game physics design. The main premise was that transdisciplinary approaches provide significant value. The resulting designs reflected combined goals of game developers, artists and physicists and provide novel ways to incorporate physics into games. The applicability and user impact of such new game physics across several target audiences was thoroughly examined.
In order to explore the transdisciplinary nature of the premise, valid evidence was gathered using a broad range of theoretical and practical methodologies. The research established a clear definition of game physics within the context of historical, technological, practical, scientific, and artistic considerations. Game analysis, literature reviews and seminal surveys of game players, game developers and scientists were conducted. A heuristic categorization of game types was defined to create an extensive database of computer games and carry out a statistical analysis of game physics usage. Results were then combined to define core principles for the design of unconventional new game physics elements. Software implementations of several elements were developed to examine the practical feasibility of the proposed principles. This research prototype was exposed to practitioners (artists, game developers and scientists) in field studies, documented on video and subsequently analyzed to evaluate the effectiveness of the elements on the audiences.
The findings from this research demonstrated that standard game physics is a common but limited design element in computer games. It was discovered that the entertainment driven design goals of game developers interfere with the needs of educators and scientists. Game reviews exemplified the exaggerated and incorrect physics present in many commercial computer games. This “pseudo physics” was shown to have potentially undesired effects on game players. Art reviews also indicated that game physics technology remains largely inaccessible to artists. The principal conclusion drawn from this study was that the proposed new game physics advances game design and creates value by expanding the choices available to game developers and designers, enabling artists to create more scientifically robust artworks, and encouraging scientists to consider games as a viable tool for education and research. The practical portion generated tangible evidence that the isolated “silos” of engineering, art and science can be bridged when game physics is designed in a transdisciplinary way.
This dissertation recommends that scientific and artistic perspectives should always be considered when game physics is used in computer-based media, because significant value can be achieved for a broad range of practitioners in distinctly different fields. The study has thereby established state-of-the-art research into game physics, which not only offers other researchers constructive principles for future investigations, but also provides much-needed new material to address the observed discrepancies in game theory and digital media design.
Scalable exploration of 3D massive models
This thesis introduces scalable techniques that advance the state-of-the-art in massive model creation and exploration. Concerning model creation, we present methods for improving reality-based scene acquisition and processing, introducing an efficient
implementation of scalable out-of-core point clouds and a data-fusion approach for
creating detailed colored models from cluttered scene acquisitions. The core of this
thesis concerns enabling technology for the exploration of general large datasets.
Two novel solutions are introduced. The first is an adaptive out-of-core technique
exploiting the GPU rasterization pipeline and hardware occlusion queries in order
to create coherent batches of work for localized shader-based ray tracing kernels,
opening the door to out-of-core ray tracing with shadowing and global illumination.
The second is an aggressive compression method that exploits redundancy in large
models to compress data so that it fits, in fully renderable format, in GPU memory.
The method is targeted to voxelized representations of 3D scenes, which are widely
used to accelerate visibility queries on the GPU. Compression is achieved by merging
subtrees that are identical through a similarity transform and by exploiting the skewed
distribution of references to shared nodes to store child pointers using a variable bitrate
encoding. The capability and performance of all methods are evaluated on many
very massive real-world scenes from several domains, including cultural heritage,
engineering, and gaming.
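A greatly simplified sketch of the subtree-merging idea (ignoring the similarity transforms and the variable bit-rate pointer encoding described above, and using a hypothetical OctreeNode type): structurally identical subtrees are detected bottom-up and collapsed into a single shared instance, turning the voxel octree into a directed acyclic graph.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class OctreeNode:
        is_solid: bool = False                    # occupancy flag carried by the node
        children: List[Optional["OctreeNode"]] = field(default_factory=lambda: [None] * 8)

    def merge_identical_subtrees(node, pool=None):
        """Bottom-up hash-consing: structurally identical subtrees are collapsed
        into one shared instance, turning the octree into a directed acyclic graph."""
        if pool is None:
            pool = {}
        if node is None:
            return None
        # Canonicalize the children first so identity comparison of subtrees is valid.
        children = [merge_identical_subtrees(child, pool) for child in node.children]
        key = (node.is_solid, tuple(id(child) for child in children))
        if key in pool:
            return pool[key]           # reuse the already stored identical subtree
        node.children = children
        pool[key] = node
        return node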
Computer Science & Technology Series : XVIII Argentine Congress of Computer Science. Selected papers
CACIC’12 was the eighteenth Congress in the CACIC series. It was organized by the School of Computer Science and Engineering at the Universidad Nacional del Sur.
The Congress included 13 Workshops with 178 accepted papers, 5 Conferences, 2 invited tutorials, different meetings related to Computer Science Education (Professors, PhD students, Curricula), and an International School with 5 courses.
CACIC 2012 was organized following the traditional Congress format, with 13 Workshops covering a diversity of dimensions of Computer Science Research.
Each topic was supervised by a committee of 3-5 chairs of different Universities.
The call for papers attracted a total of 302 submissions. An average of 2.5 review reports were collected for each paper, for a grand total of 752 review reports that involved about 410 different reviewers.
A total of 178 full papers, involving 496 authors and 83 Universities, were accepted, and 27 of them were selected for this book. Red de Universidades con Carreras en Informática (RedUNCI).
Scalable exploration of highly detailed and annotated 3D models
With the widespread availability of mobile graphics terminals and WebGL-enabled browsers, 3D
graphics over the Internet is thriving. Thanks to recent advances in 3D acquisition and modeling
systems, high-quality 3D models are becoming increasingly common, and are now potentially
available for ubiquitous exploration.
In current 3D repositories, such as Blend Swap, 3D Café or Archive3D, 3D models available for
download are mostly presented through a few user-selected static images. Online exploration is
limited to simple orbiting and/or low-fidelity explorations of simplified models, since photorealistic
rendering quality of complex synthetic environments is still hardly achievable within the
real-time constraints of interactive applications, especially on low-powered mobile devices or
script-based Internet browsers.
Moreover, navigating inside 3D environments, especially on the now pervasive touch devices,
is a non-trivial task, and usability is consistently improved by employing assisted navigation
controls. In addition, 3D annotations are often used in order to integrate and enhance the visual
information by providing spatially coherent contextual information, typically at the expense of
introducing visual cluttering.
In this thesis, we focus on efficient representations for interactive exploration and understanding
of highly detailed 3D meshes on common 3D platforms. For this purpose, we present several
approaches exploiting constraints on the data representation for improving the streaming and
rendering performance, and camera movement constraints in order to provide scalable navigation
methods for interactive exploration of complex 3D environments.
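As a small, generic illustration of camera-movement constraints (not the specific navigation methods developed in this thesis), an orbit controller can simply clamp its parameters so the viewer always stays within a sensible region around the model:

    import math

    class ConstrainedOrbitCamera:
        """Orbit camera whose parameters are clamped so the viewer cannot leave
        a sensible viewing region around the model centered at the origin."""

        def __init__(self, min_dist=1.0, max_dist=50.0, max_elevation=math.radians(80)):
            self.azimuth = 0.0
            self.elevation = 0.0
            self.distance = 10.0
            self.min_dist, self.max_dist = min_dist, max_dist
            self.max_elevation = max_elevation

        def rotate(self, d_azimuth, d_elevation):
            self.azimuth = (self.azimuth + d_azimuth) % (2.0 * math.pi)
            # Clamp the elevation so the camera never flips over the poles.
            self.elevation = max(-self.max_elevation,
                                 min(self.max_elevation, self.elevation + d_elevation))

        def zoom(self, factor):
            # Clamp the distance so the camera neither enters the model nor drifts away.
            self.distance = max(self.min_dist, min(self.max_dist, self.distance * factor))

        def position(self):
            """Cartesian camera position derived from the constrained parameters."""
            x = self.distance * math.cos(self.elevation) * math.sin(self.azimuth)
            y = self.distance * math.sin(self.elevation)
            z = self.distance * math.cos(self.elevation) * math.cos(self.azimuth)
            return (x, y, z)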
Furthermore, we study visualization and interaction techniques to improve the exploration
and understanding of complex 3D models by exploiting guided motion control techniques to aid
the user in discovering contextual information while avoiding cluttering the visualization.
We demonstrate the effectiveness and scalability of our approaches both in large screen museum
installations and in mobile devices, by performing interactive exploration of models ranging
from 9M triangles to 940M triangles.