BSML: A Binding Schema Markup Language for Data Interchange in Problem Solving Environments (PSEs)
We describe a binding schema markup language (BSML) for describing data
interchange between scientific codes. Such a facility is an important
constituent of scientific problem solving environments (PSEs). BSML is designed
to integrate with a PSE or application composition system that views model
specification and execution as a problem of managing semistructured data. The
data interchange problem is addressed by three techniques for processing
semistructured data: validation, binding, and conversion. We present BSML and
describe its application to a PSE for wireless communications system design
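The three processing stages the abstract names can be illustrated with a toy sketch. This is not BSML's actual (XML-based) syntax; the schema, the `Antenna` record, and all field names are hypothetical, chosen only to show validation, binding, and conversion as distinct steps:

```python
from dataclasses import dataclass

# Hypothetical schema for an antenna record in a wireless-design PSE;
# BSML's real schemas are XML documents and far richer than this sketch.
SCHEMA = {"frequency_hz": float, "gain_dbi": float}

def validate(record: dict) -> None:
    """Validation: check that the record matches the declared schema."""
    for field, ftype in SCHEMA.items():
        if field not in record:
            raise ValueError(f"missing field: {field}")
        if not isinstance(record[field], ftype):
            raise TypeError(f"{field} should be {ftype.__name__}")

@dataclass
class Antenna:
    frequency_hz: float
    gain_dbi: float

def bind(record: dict) -> Antenna:
    """Binding: map validated semistructured data onto a native structure."""
    validate(record)
    return Antenna(**record)

def convert(a: Antenna) -> dict:
    """Conversion: re-express the data for another code, here in GHz."""
    return {"frequency_ghz": a.frequency_hz / 1e9, "gain_dbi": a.gain_dbi}

ant = bind({"frequency_hz": 2.4e9, "gain_dbi": 5.0})
```

Keeping the three stages separate is the point: each interchange step between two scientific codes can fail, succeed, or be customised independently.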
Serious Games in Cultural Heritage
Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to that of entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented
The Iray Light Transport Simulation and Rendering System
While ray tracing has become increasingly common and path tracing is well
understood by now, a major challenge lies in crafting an easy-to-use and
efficient system implementing these technologies. Following a purely
physically-based paradigm while still allowing for artistic workflows, the Iray
light transport simulation and rendering system allows for rendering complex
scenes by the push of a button and thus makes accurate light transport
simulation widely available. In this document we discuss the challenges and
implementation choices that follow from our primary design decisions,
demonstrating that such a rendering system can be made a practical, scalable,
and efficient real-world application that has been adopted by various companies
across many fields and is in use by many industry professionals today
Refraction-corrected ray-based inversion for three-dimensional ultrasound tomography of the breast
Ultrasound Tomography has seen a revival of interest in the past decade,
especially for breast imaging, due to improvements in both ultrasound and
computing hardware. In particular, three-dimensional ultrasound tomography, a
fully tomographic method in which the medium to be imaged is surrounded by
ultrasound transducers, has become feasible. In this paper, a comprehensive
derivation and study of a robust framework for large-scale bent-ray ultrasound
tomography in 3D for a hemispherical detector array is presented. Two
ray-tracing approaches are derived and compared. More significantly, the
problem of linking the rays between emitters and receivers, which is
challenging in 3D due to the high number of degrees of freedom for the
trajectory of rays, is analysed both as a minimisation and as a root-finding
problem. The ray-linking problem is parameterised for a convex detection
surface and three robust, accurate, and efficient ray-linking algorithms are
formulated and demonstrated. To stabilise these methods, novel
adaptive-smoothing approaches are proposed that control the conditioning of the
update matrices to ensure accurate linking. The nonlinear ultrasound
tomography problem of estimating the sound speed was recast as a series of
linearised subproblems, each solved using the above algorithms within a
steepest-descent scheme.
The whole imaging algorithm was demonstrated to be robust and accurate on
realistic data simulated using a full-wave acoustic model and an anatomical
breast phantom, and incorporating the errors due to time-of-flight picking that
would be present with measured data. This method can be used to provide
low-artefact, quantitatively accurate 3D sound speed maps. In addition to
being useful in their own right, such 3D sound speed maps can be used to
initialise full-wave inversion methods, or as an input to photoacoustic
tomography reconstructions
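The root-finding view of ray linking can be sketched in a deliberately simplified 2D setting. The paper works in 3D with a hemispherical array and several robust linking algorithms; here, as an assumed toy model, the sound speed varies linearly with depth, rays are traced by Euler-stepping Snell's law, and a single take-off angle is found by bisection so that the ray lands on the receiver:

```python
import math

def trace_ray(theta0, x_rec, c0=1500.0, g=0.05, ds=0.05):
    """Euler-step a ray through c(z) = c0 + g*z until horizontal range x_rec.
    theta0 is the take-off angle from the z-axis; returns the final depth z."""
    x, z, th = 0.0, 0.0, theta0
    while x < x_rec:
        c = c0 + g * z
        x += ds * math.sin(th)
        z += ds * math.cos(th)
        th += ds * g * math.sin(th) / c  # Snell: d(theta)/ds = c'(z) sin(theta)/c
    return z

def link_ray(x_rec, z_rec, lo=0.2, hi=1.3, tol=1e-6):
    """Ray linking as root finding: bisect on f(theta) = depth(theta) - z_rec."""
    f = lambda th: trace_ray(th, x_rec) - z_rec
    assert f(lo) * f(hi) < 0, "bracket must straddle the receiver"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        (lo, hi) = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

In 3D two angles must be found per emitter-receiver pair, which is why the paper treats linking as a multidimensional minimisation or root-finding problem rather than a scalar bisection like this one.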
Data visualization: foundations, techniques, and applications
The idea that there is no precedent for the amount of data being generated
today, and that exploring and analyzing these vast volumes of data has become
an increasingly difficult task that could benefit from data visualization, is
presented. It is pointed out that the goals of data visualization are
data-driven and depend largely on the type of application, but the final
objective is to convey to people information that is hidden in large volumes
of data. Finally, the visualization pipeline is presented to review aspects of
visualization methodology and visualization tool design, concluding that the
true potential of visualization emerges from the interaction of the user with
the visualization model. The paper concludes by establishing that the current
processes of digital transformation will increase the need for visual
analytics tools
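The visualization pipeline mentioned above is commonly decomposed into filtering, mapping, and rendering stages. The sketch below assumes that textbook decomposition (it is not tied to any specific tool from the paper) and shows each stage as a separate function over toy data:

```python
# Minimal sketch of the classic visualization pipeline:
# raw data -> filtering -> mapping -> rendering.

def filter_stage(raw):
    """Filtering: select and clean the data of interest."""
    return [x for x in raw if x is not None]

def map_stage(values, lo, hi):
    """Mapping: convert data values to visual attributes (here, 0-255 grey)."""
    return [round(255 * (v - lo) / (hi - lo)) for v in values]

def render_stage(levels):
    """Rendering: produce displayable output (here, an ASCII bar per value)."""
    return ["#" * (g // 32) for g in levels]

raw = [0.0, None, 0.5, 1.0]
image = render_stage(map_stage(filter_stage(raw), 0.0, 1.0))
```

User interaction, which the paper identifies as the source of visualization's true potential, amounts to feeding parameter changes (filters, value ranges, mappings) back into these stages and re-rendering.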
Interactive ray shading of FRep objects
In this paper we present a method for interactive rendering of general procedurally defined, functionally represented (FRep) objects, accelerated by graphics hardware, namely Graphics Processing Units (GPUs). We obtain interactive rates by using GPU acceleration for all computations in the rendering algorithm, such as ray-surface intersection, function evaluation and normal computation. We compute primary rays as well as secondary rays for shadows, reflection and refraction, to obtain high-quality output visualization and to allow further extension to ray tracing of FRep objects. The algorithm is well suited to modern GPUs and provides acceptable interactive rates with good-quality results. A wide range of objects can be rendered, including traditional skeletal implicit surfaces, constructive solids, and purely procedural objects such as 3D fractals
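The core per-ray computations named in the abstract, ray-surface intersection against a defining function and normal estimation, can be sketched on the CPU. This is an assumed illustration, not the paper's GPU implementation: it marches a ray through a sphere's FRep (F(p) >= 0 inside), bisects the sign change, and takes the normal from finite differences of F:

```python
import math

def point(o, d, t):
    """Point on the ray o + t*d."""
    return tuple(oi + t * di for oi, di in zip(o, d))

def frep_sphere(p, r=1.0):
    """FRep convention: F(p) >= 0 inside the object, < 0 outside."""
    x, y, z = p
    return r * r - (x * x + y * y + z * z)

def intersect(origin, direction, f, t_max=10.0, dt=0.01):
    """March along the ray; on a sign change of F, refine by bisection.
    On the GPU this runs independently per ray/fragment."""
    prev_t, prev_v = 0.0, f(point(origin, direction, 0.0))
    t = dt
    while t <= t_max:
        v = f(point(origin, direction, t))
        if prev_v < 0.0 <= v or v < 0.0 <= prev_v:  # surface crossed
            for _ in range(30):                     # bisection refinement
                mid = 0.5 * (prev_t + t)
                if (f(point(origin, direction, mid)) < 0.0) == (prev_v < 0.0):
                    prev_t = mid
                else:
                    t = mid
            return 0.5 * (prev_t + t)
        prev_t, prev_v, t = t, v, t + dt
    return None  # ray missed the object

def normal(p, f, h=1e-4):
    """Outward normal is -grad F (F increases toward the inside)."""
    grad = []
    for i in range(3):
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        grad.append((f(tuple(q1)) - f(tuple(q2))) / (2 * h))
    n = [-g for g in grad]
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```

Because F is only evaluated pointwise, the same loop handles skeletal implicits, constructive solids, and procedural objects without any mesh or voxel intermediate.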
nelli: a lightweight frontend for MLIR
Multi-Level Intermediate Representation (MLIR) is a novel compiler
infrastructure that aims to provide modular and extensible components to
facilitate building domain-specific compilers. However, since MLIR models
programs at an intermediate level of abstraction, and most extant frontends are
at a very high level of abstraction, the semantics and mechanics of the
fundamental transformations available in MLIR are difficult to investigate and
employ in and of themselves. To address these challenges, we have developed
\texttt{nelli}, a lightweight, Python-embedded, domain-specific language for
generating MLIR code. \texttt{nelli} leverages existing MLIR infrastructure to
develop Pythonic syntax and semantics for various MLIR features. We describe
\texttt{nelli}'s design goals, discuss key details of our implementation, and
demonstrate how \texttt{nelli} enables easily defining and lowering compute
kernels to diverse hardware platforms
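The embedding pattern behind such a frontend can be illustrated with a toy builder. To be clear, this is NOT \texttt{nelli}'s actual API (which drives MLIR's own infrastructure); the `FuncBuilder` class is hypothetical and only emits a textual IR form, though the op names (`arith.constant`, `arith.addi`, `func.func`, `func.return`) are real MLIR operations:

```python
# Toy sketch of a Python-embedded DSL that builds MLIR-like IR text.

class FuncBuilder:
    def __init__(self, name):
        self.name, self.lines, self.n = name, [], 0

    def fresh(self):
        """Allocate a fresh SSA value name %0, %1, ..."""
        self.n += 1
        return f"%{self.n - 1}"

    def constant(self, value):
        v = self.fresh()
        self.lines.append(f"  {v} = arith.constant {value} : i32")
        return v

    def add(self, a, b):
        v = self.fresh()
        self.lines.append(f"  {v} = arith.addi {a}, {b} : i32")
        return v

    def ret(self, v):
        self.lines.append(f"  func.return {v} : i32")

    def emit(self):
        return (f"func.func @{self.name}() -> i32 {{\n"
                + "\n".join(self.lines) + "\n}")

f = FuncBuilder("answer")
f.ret(f.add(f.constant(40), f.constant(2)))
ir = f.emit()
```

Ordinary Python control flow and variables thus stand in for IR construction, which is the essence of hosting a compiler frontend inside Python rather than writing IR by hand.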
Procedural function-based modelling of volumetric microstructures
We propose a new approach to modelling heterogeneous objects containing internal volumetric structures with size of details orders of magnitude smaller than the overall size of the object. The proposed function-based procedural representation provides compact, precise, and arbitrarily parameterised models of coherent microstructures, which can undergo blending, deformations, and other geometric operations, and can be directly rendered and fabricated without generating any auxiliary representations (such as polygonal meshes and voxel arrays). In particular, modelling of regular lattices and cellular microstructures as well as irregular porous media is discussed and illustrated. We also present a method to estimate parameters of the given model by fitting it to microstructure data obtained with magnetic resonance imaging and other measurements of natural and artificial objects. Examples of rendering and digital fabrication of microstructure models are presented
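The function-based idea can be made concrete with a standard example. As an assumed illustration (not the paper's own representation), a gyroid sheet lattice is defined implicitly with cell size and wall thickness as parameters, and trimmed to a finite part with the min-based set intersection used in FRep constructive modelling:

```python
import math

def gyroid(p, cell=1.0, thickness=0.3):
    """Gyroid sheet lattice, FRep convention F(p) >= 0 inside the material:
    material wherever |g(p)| <= thickness."""
    k = 2.0 * math.pi / cell
    x, y, z = (k * c for c in p)
    g = (math.sin(x) * math.cos(y)
         + math.sin(y) * math.cos(z)
         + math.sin(z) * math.cos(x))
    return thickness - abs(g)

def intersect_solid(f1, f2):
    """Set-theoretic intersection via min of the defining functions."""
    return lambda p: min(f1(p), f2(p))

# Trim the infinite lattice to a unit ball to obtain a finite porous part.
ball = lambda p: 1.0 - sum(c * c for c in p)
part = intersect_solid(gyroid, ball)
```

Because the model is just a function evaluable at any point, details far smaller than the part can be represented exactly, and rendering or fabrication can sample it directly with no intermediate mesh or voxel array.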