Experiences with Matlab and VRML in Functional Neuroimaging Visualizations
Neuroinformatics is the task of organizing, analyzing, and presenting the knowledge of neuroscience. In the part of neuroinformatics associated with functional neuroimaging ("brain mapping"), visualization plays an important role. Functional neuroimaging uses brain scanners to examine the inner workings of the human brain, usually by mapping the relationship between cognitive components and specialized brain regions. fMRI and PET brain scanners can produce sequential volumetric scalar data, and a functional neuroimaging study sometimes incorporates a complicated design that makes the "time axis" multi-dimensional. This data is seldom viewed directly; instead it is put through rigorous processing, the result being one or more scalar volumes. Center-of-mass points of "hot spot" areas are tabulated and included in scientific publications as so-called "Talairach coordinates" [1]. A database of these coordinates exists: the BrainMap database [2]. The typical vis…
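As a minimal sketch of the hot-spot centre-of-mass computation the abstract mentions (the function name and toy volume are hypothetical, not from the paper, and the result is in voxel space, not Talairach space):

```python
import numpy as np
from scipy import ndimage

def hotspot_centers(volume, threshold):
    """Return one (x, y, z) voxel-space centre of mass per supra-threshold cluster."""
    mask = volume > threshold                    # keep supra-threshold voxels
    labels, n_clusters = ndimage.label(mask)     # connected-component clusters
    return ndimage.center_of_mass(volume, labels, range(1, n_clusters + 1))

# Toy statistical volume with one synthetic activation blob.
vol = np.zeros((16, 16, 16))
vol[4:7, 8:11, 5:8] = 3.0
print(hotspot_centers(vol, threshold=2.0))       # -> [(5.0, 9.0, 6.0)]
```

Mapping such voxel coordinates to reportable Talairach coordinates would additionally require the scan's affine transform, which is outside this sketch.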
Parallel LISP
Projects in the past few years have looked into the problem of automatically parallelizing the Lisp programming language. Since it appears feasible to adapt Lisp to run on a general parallel computer, an implementation will be developed. This implementation will be as general as possible in order to expose the tradeoffs between implementing Lisp on a general parallel computer and having an efficient interpreter. It can be used to study the execution characteristics of Lisp in a parallel environment, and to derive information about the architectural features that affect the performance of Lisp on parallel machines. The implementation will use a multitasking system and interprocess communication to simulate an MIMD machine, and will include the formation, queuing, distribution, and execution of dataflow frames. Realistic Lisp application programs will be run on the implementation to examine the feasibility and efficiency of parallel Lisp. Measurements derivable from the simulator include the number of processor cycles, processor utilization, memory requirements, and speedup. These tests will provide two main results. First, they will indicate possibilities for further gains by illustrating the bottlenecks in such a scheme. Second, they will help determine whether it is feasible to run Lisp on a parallel machine or whether the overhead is too high to be profitable. Most likely, some parallelism will be profitable, and the simulation will provide information on the extent to which it can be utilized.
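A hedged sketch of the kind of measurement the abstract describes (the names and cost model are mine, not the paper's): dataflow frames carry a cost in abstract processor cycles, a greedy scheduler distributes them over simulated processors, and speedup is reported relative to serial execution.

```python
import heapq

def simulate(frame_costs, processors):
    """Greedy list scheduling of dataflow frames; returns speedup vs. serial."""
    free_at = [0] * processors        # time at which each processor is next free
    heapq.heapify(free_at)
    for cost in frame_costs:
        t = heapq.heappop(free_at)    # earliest-free processor takes the frame
        heapq.heappush(free_at, t + cost)
    parallel_time = max(free_at)
    return sum(frame_costs) / parallel_time

print(simulate([5, 3, 8, 2, 7, 4], processors=2))   # ~1.93 speedup on 2 CPUs
```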
Bridging the gap: building better tools for game development
This thesis questions how we design game-making tools and how developers might build tools that are easier to use. It highlights the inadequacies of current game-making programs and introduces Goal-Oriented Design as a possible solution. It also examines the processes of digital product development, reflecting on the need for design and development methods to work cohesively for meaningful results. Interaction Design is, in essence, the abstracting of the key relations that matter in the contextual environment. Tying Interaction Design principles and Game Design issues together with Software Development practices has led to the production of the User-Centred game engine, PlayBoard.
Cadabra: reference guide and tutorial
Cadabra is a computer algebra system for the manipulation of tensorial mathematical expressions such as those that occur in “field theory problems”. It is aimed at, but not necessarily restricted to, high-energy physicists. It is constructed as a simple tree-manipulating core, a large collection of standalone algorithmic modules that act on the expression tree, and a set of modules responsible for the output of nodes in the tree. All of these parts are written in C++. The input and output formats closely follow TeX, which in many cases makes cadabra much simpler to use than other similar programs. It intentionally does not contain its own programming language; instead, new functionality is added by writing new modules in C++.
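To illustrate the architecture described, here is a toy sketch in Python of a tree-manipulating core plus one standalone "algorithm module" acting on the expression tree. This is purely illustrative of the design; it is not cadabra's actual C++ API.

```python
class Node:
    """Illustrative expression-tree node (not cadabra's internal type)."""
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)
    def __repr__(self):
        return self.name if not self.children else \
            f"{self.name}({', '.join(map(repr, self.children))})"

def flatten_sum(node):
    """Algorithm module: merge nested sums, sum(a, sum(b, c)) -> sum(a, b, c)."""
    node.children = [flatten_sum(c) for c in node.children]
    if node.name == "sum":
        merged = []
        for c in node.children:
            merged.extend(c.children if c.name == "sum" else [c])
        node.children = merged
    return node

expr = Node("sum", [Node("a"), Node("sum", [Node("b"), Node("c")])])
print(flatten_sum(expr))   # sum(a, b, c)
```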
A semantic Grid for molecular science
Proceedings of the 2003 UK e-Science All Hands Meeting, 31st August - 3rd September, Nottingham, UK. The properties of molecules have very well defined semantics, which allows the creation of a semantic Grid. Markup languages (CML, the Chemical Markup Language) and dictionary-based ontologies have been designed to support a wide range of applications, including chemical supply, publication, and the safety of compounds. Many properties can be computed by Quantum Mechanical (QM) programs, and we have developed a "black-box" system based on XML wrappers for all components. This is installed on a Condor system on which we have computed properties for 250,000 compounds. The results will be made available in an OpenData/OpenSource peer-to-peer (P2P) system, the WorldWide Molecular Matrix (WWMM).
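A minimal sketch of consuming a CML fragment (the element names follow common CML usage, but treat the exact document as an assumption of mine rather than an example from the paper):

```python
import xml.etree.ElementTree as ET

# Tiny Chemical Markup Language fragment: one molecule, three atoms.
CML = """
<molecule id="m1">
  <atomArray>
    <atom id="a1" elementType="C"/>
    <atom id="a2" elementType="O"/>
    <atom id="a3" elementType="H"/>
  </atomArray>
</molecule>
"""

# Count atoms per element, the kind of well-defined semantics a
# dictionary-based ontology can attach meaning to.
counts = {}
for atom in ET.fromstring(CML).iter("atom"):
    element = atom.get("elementType")
    counts[element] = counts.get(element, 0) + 1
print(counts)   # {'C': 1, 'O': 1, 'H': 1}
```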
Navigation and interaction in a real-scale digital mock-up using natural language and user gesture
This paper presents a new real-scale 3D system and summarizes first results concerning multi-modal navigation and interaction interfaces. This work is part of the CALLISTO-SARI collaborative project, which aims at constructing an immersive room and developing a set of software tools and navigation/interaction interfaces. Two sets of interfaces are introduced here: 1) interaction devices, and 2) natural language (speech processing) and user gesture. The evaluation of this system using subjective observation (Simulator Sickness Questionnaire, SSQ) and objective measurements (Center of Gravity, COG) shows that natural-language and gesture-based interfaces induced less cybersickness than device-based interfaces. Gesture-based interfaces are therefore more efficient than device-based ones. FUI CALLISTO-SARI.
Logic programming in the context of multiparadigm programming: the Oz experience
Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, which results in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature. Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming".
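A hypothetical illustration (mine, not from the article) of the two problem classes the abstract distinguishes: an algorithmic problem has a known efficient algorithm, while a search problem is solved by exploring alternatives with backtracking, as a constraint solver would.

```python
def sort_algorithmic(xs):            # algorithmic: known efficient algorithm
    return sorted(xs)                # O(n log n), no search involved

def queens_search(n, cols=()):       # search: explore alternatives, backtrack
    """Yield placements of n non-attacking queens, one column per row."""
    row = len(cols)
    if row == n:
        yield cols
        return
    for col in range(n):
        # No shared column and no shared diagonal with earlier rows.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            yield from queens_search(n, cols + (col,))

print(sort_algorithmic([3, 1, 2]))   # [1, 2, 3]
print(next(queens_search(4)))        # e.g. (1, 3, 0, 2)
```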
Emulating Digital Logic using Transputer Networks (Very High Parallelism = Simplicity = Performance)
Modern VLSI technology has changed the economic rules by which the balance between processing power, memory and communications is decided in computing systems. This will have a profound impact on the design rules for the controlling software. In particular, the criteria for judging efficiency of the algorithms will be somewhat different. This paper explores some of these implications through the development of highly parallel and highly distributable algorithms based on occam and transputer networks. The major results reported are a new simplicity for software designs, a corresponding ability to reason (formally and informally) about their properties, the reusability of their components and some real performance figures which demonstrate their practicality. Some guidelines to assist in these designs are also given. As a vehicle for discussion, an interactive simulator is developed for checking the functional and timing characteristics of digital logic circuits of arbitrary complexity.
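A minimal sketch (my construction, not the paper's occam code) of the one-process-per-gate style such a simulator suggests: each gate is a concurrent process reading input channels and writing an output channel, the way occam processes communicate over channels on a transputer network.

```python
import threading, queue

def nand_gate(in_a, in_b, out):
    """One gate = one process; channels are modelled with blocking queues."""
    while True:
        a, b = in_a.get(), in_b.get()    # wait for both inputs, occam-style
        out.put(not (a and b))

a, b, y = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=nand_gate, args=(a, b, y), daemon=True).start()

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    a.put(bits[0]); b.put(bits[1])
    print(bits, "->", int(y.get()))      # NAND truth table: 1 1 1 0
```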