Doctor of Philosophy dissertation

Stochastic methods, dense free-form mapping, atlas construction, and total variation are examples of advanced image processing techniques which are robust but computationally demanding. These algorithms often require a large amount of computational power as well as massive memory bandwidth. These requirements used to be fulfilled only by supercomputers. The development of heterogeneous parallel subsystems and computation-specialized devices such as Graphics Processing Units (GPUs) has brought the requisite power to commodity hardware, opening up opportunities for scientists to experiment and evaluate the influence of these techniques on their research and practical applications. However, harnessing the processing power of modern hardware is challenging. The differences between multicore parallel processing systems and conventional models are significant, often requiring algorithms and data structures to be substantially redesigned for efficiency. It also demands in-depth knowledge of modern hardware architectures to optimize these implementations, sometimes on a per-architecture basis. The goal of this dissertation is to introduce a solution to this problem: a 3D image processing framework that uses high-performance APIs at the core level to utilize the parallel processing power of GPUs. The design of the framework facilitates an efficient application development process that does not require scientists to have extensive knowledge of GPU systems, and encourages them to harness this power to solve their computationally challenging problems.
To present the development of this framework, four main problems are described, and their solutions are discussed and evaluated: (1) the essential components of a general 3D image processing library, data structures and algorithms, and how to implement these building blocks on the GPU architecture for optimal performance; (2) an implementation of unbiased atlas construction algorithms, an illustration of how to solve a highly complex and computationally expensive algorithm using this framework; (3) an extension of the framework to account for geometry descriptors, to solve registration challenges with large-scale shape changes and high intensity-contrast differences; and (4) an out-of-core streaming model, which enables developers to implement multi-image processing techniques on commodity hardware.
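The idea behind such an out-of-core streaming model can be illustrated with a minimal sketch (not the dissertation's actual implementation): the 3D volume stays on disk, and fixed-depth slabs are loaded, processed (e.g. dispatched to a GPU kernel at this point), and written out one at a time, so memory use is bounded by the slab size rather than the volume size. The function and parameter names here are hypothetical.

```python
import numpy as np

def stream_process(volume_path, shape, chunk_depth, kernel):
    """Process a large 3D volume slab-by-slab without loading it all in RAM.

    volume_path: raw float32 file on disk; shape: (depth, height, width);
    chunk_depth: number of z-slices per slab; kernel: per-slab compute step.
    """
    # Memory-map the file so slabs are paged in only when accessed.
    vol = np.memmap(volume_path, dtype=np.float32, mode="r", shape=shape)
    out = np.empty(shape, dtype=np.float32)
    for z in range(0, shape[0], chunk_depth):
        slab = np.asarray(vol[z:z + chunk_depth])  # pull one slab into memory
        out[z:z + chunk_depth] = kernel(slab)      # a real system would offload this to the GPU
    return out
```

In a real framework the `kernel` step would be an asynchronous GPU dispatch overlapped with the next slab's transfer; the sketch only shows the bounded-memory traversal.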
A software classification scheme
Reusing code is one approach to software reusability. Code is the end product of the software lifecycle. It is delivered in a low-level representation that is difficult to reuse unless an almost perfect match exists between available features and required specifications. There is a need to organize large inventories of software such that reusable code is easy to locate and exchange. The relative success in the reuse of code fragments reported by some software factories is due in part to their capacity to encapsulate domain-specific functions and create specialized libraries of components classified by these locally standardized functions.

A general software classification scheme that organizes reusability-related attributes and common functions from different domains is proposed as a partial solution to the software reusability problem. For the problem of selecting from similar, potentially reusable components, a partial solution based on the evaluation of common characteristics is also proposed. A library system is presented that integrates the proposed classification scheme with an evaluation mechanism based on inherent component attributes, programming-language characteristics, and reuser experience.

The fundamental contribution of this dissertation is a formal treatment of a faceted scheme for software classification, leading to a better understanding of reusability at the code level. This approach has been prototyped in a library system for the semi-automatic classification of software components. Analyses were performed to evaluate the classification scheme. The results show the potential of the scheme in organizing collections of code fragments, in improving retrieval, and in simplifying the classification process. Tests of the evaluation mechanism showed positive correlation with evaluations conducted by potential reusers.
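The faceted idea can be sketched in a few lines: each component is described by one term per facet, and retrieval matches a query's terms facet-by-facet rather than by free-text search. The facet names and API below are illustrative assumptions, not the dissertation's actual scheme.

```python
from dataclasses import dataclass

# Illustrative facet set; a real scheme would standardize these per domain.
FACETS = ("function", "object", "medium")

@dataclass
class Component:
    name: str
    facets: dict  # facet -> term, e.g. {"function": "sort", "object": "array"}

class FacetedLibrary:
    """Minimal faceted classification library: store components described by
    facet terms and retrieve those matching every term in a query."""

    def __init__(self):
        self.components = []

    def add(self, component):
        unknown = set(component.facets) - set(FACETS)
        if unknown:
            raise ValueError(f"unknown facets: {unknown}")
        self.components.append(component)

    def query(self, **terms):
        # A component matches only if it agrees with all queried facet terms.
        return [c for c in self.components
                if all(c.facets.get(f) == t for f, t in terms.items())]
```

A query such as `lib.query(function="sort")` narrows the inventory along one facet; adding terms narrows it further, which is what makes faceted retrieval more precise than flat keyword lists.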
Construction of a support tool for the design of the activity structures based computer system architectures
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University.

This thesis is a rapprochement of diverse design concepts, brought to bear upon the computer system
engineering problem of identification and control of highly constrained multiprocessing (HCM)
computer machines. It contributes to the area of meta/general systems methodology, and brings
a new insight into the design formalisms, and results afforded by bringing together various design
concepts that can be used for the construction of highly constrained computer system architectures.
A unique point of view is taken by assuming the process of identification and control of HCM
computer systems to be the process generated by the Activity Structures Methodology (ASM).
The research in ASM has emerged from the Neuroscience research, aiming at providing the
techniques for combining the diverse knowledge sources that capture the 'deep knowledge' of this
application field in an effective formal and computer representable form. To apply the ASM design
guidelines in the realm of the distributed computer system design, we provide new design definitions
for the identification and control of such machines in terms of realisations. These realisation definitions
characterise the various classes of the identification and control problem. The classes covered
consist of:
1. the identification of the designer activities,
2. the identification and control of the machine's distributed structures of behaviour,
3. the identification and control of the conversational environment activities (i.e. the randomised/
adaptive activities and interactions of both the user and the machine environments),
4. the identification and control of the substrata needed for the realisation of the machine, and
5. the identification of the admissible design data, both user-oriented and machine-oriented,
that can force the conversational environment to act in a self-regulating
manner.
All extant results are considered in this context, allowing the development of both necessary
conditions for machine identification in terms of their distributed behaviours as well as the substrata
structures of the unknown machine and sufficient conditions in terms of experiments on the unknown
machine to achieve the self-regulation behaviour.
We provide a detailed description of the design and implementation of the support software tool
which can be used for aiding the process of constructing effective, HCM computer systems, based
on various classes of identification and control. The design data of a highly constrained system, the
NUKE, are used to verify the tool logic as well as the various identification and control procedures.
Possible extensions as well as future work implied by the results are considered.

Government of Ira
Homotopy theory and topoi
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1998. Includes bibliographical references (p. 53-55). By Tibor Beke.
Topological Foundations of Cognitive Science
A collection of papers presented at the First International Summer Institute in Cognitive Science, University at Buffalo, July 1994, including the following papers:
** Topological Foundations of Cognitive Science, Barry Smith
** The Bounds of Axiomatisation, Graham White
** Rethinking Boundaries, Wojciech Zelaniec
** Sheaf Mereology and Space Cognition, Jean Petitot
** A Mereotopological Definition of 'Point', Carola Eschenbach
** Discreteness, Finiteness, and the Structure of Topological Spaces, Christopher Habel
** Mass Reference and the Geometry of Solids, Almerindo E. Ojeda
** Defining a 'Doughnut' Made Difficult, N.M. Gotts
** A Theory of Spatial Regions with Indeterminate Boundaries, A.G. Cohn and N.M. Gotts
** Mereotopological Construction of Time from Events, Fabio Pianesi and Achille C. Varzi
** Computational Mereology: A Study of Part-of Relations for Multi-media Indexing, Wlodek Zadrozny and Michelle Ki
Points of Contact
In the first few centuries of Islam, Middle Eastern Christians, Muslims, and Jews alike all faced the challenges of preserving their holy texts in the midst of a changing religious landscape. This situation led Syriac, Arabic, and Hebrew scholars to develop new fields of linguistic science in order to better analyse the languages of the Bible and the Qurʾān.
Part of this work dealt with the issue of vocalisation in Semitic scripts, which lacked the letters required to precisely record all the vowels in their languages. Semitic scribes thus developed systems of written vocalisation points to better record vowel sounds, first in Syriac, then soon after in Arabic and Hebrew. These new points opened a new field of linguistic analysis, enabling medieval grammarians to more easily examine vowel phonology and explore the relationships between phonetics and orthography.
Many aspects of this new field of vocalisation crossed the boundaries between religious communities, first with the spread of ‘relative’ vocalisation systems prior to the eighth century, and later with the terminology created to name the discrete vowels of ‘absolute’ vocalisation systems.
This book investigates the theories behind Semitic vocalisation and vowel phonology in the early medieval Middle East, tracing their evolution to identify points of intellectual contact between Syriac, Arabic, and Hebrew linguists before the twelfth century
Improving the Performance of User-level Runtime Systems for Concurrent Applications
Concurrency is an essential part of many modern large-scale software systems. Applications must handle millions of simultaneous requests from millions of connected devices. Handling such a large number of concurrent requests requires runtime systems that efficiently manage concurrency and communication among tasks in an application across multiple cores. Existing low-level programming techniques provide scalable solutions with low overhead, but require non-linear control flow. Alternative approaches to concurrent programming, such as Erlang and Go, support linear control flow by mapping multiple user-level execution entities across multiple kernel threads (M:N threading). However, these systems provide comprehensive execution environments that make it difficult to assess the performance impact of user-level runtimes in isolation.

This thesis presents a nimble M:N user-level threading runtime that closes this conceptual gap and provides a software infrastructure to precisely study the performance impact of user-level threading. Multiple design alternatives are presented and evaluated for the scheduling, I/O multiplexing, and synchronization components of the runtime. The performance of the runtime is evaluated in comparison to event-driven software, system-level threading, and other user-level threading runtimes. An experimental evaluation is conducted using benchmark programs, as well as the popular Memcached application. The user-level runtime supports high levels of concurrency without sacrificing application performance. In addition, the user-level scheduling problem is studied in the context of an existing actor runtime that maps multiple actors to multiple kernel-level threads. In particular, two locality-aware work-stealing schedulers are proposed and evaluated. It is shown that locality-aware scheduling can significantly improve the performance of a class of applications with a high level of concurrency. In general, the performance and resource utilization of large-scale concurrent applications depend on the level of concurrency that can be expressed by the programming model. This fundamental effect is studied by refining and customizing existing concurrency models.
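The core of M:N threading, many cooperative user-level tasks multiplexed over a few kernel threads through a shared run queue, can be sketched as a toy model. This is not the thesis's runtime: here `yield` stands in for the points where a real runtime would deschedule a task for blocking I/O or synchronization, and the run queue is a single shared queue rather than a work-stealing scheduler.

```python
import queue
import threading

def run_mn(tasks, n_workers=4):
    """Toy M:N scheduler: M tasks (generators) run over N kernel threads.

    Each next() advances a task to its next yield point; a task that yields
    is requeued for any worker, and StopIteration marks it finished.
    """
    if not tasks:
        return
    runq = queue.Queue()
    for t in tasks:
        runq.put(t)
    done = threading.Event()
    pending = [len(tasks)]
    lock = threading.Lock()

    def worker():
        while not done.is_set():
            try:
                task = runq.get(timeout=0.1)
            except queue.Empty:
                continue  # queue momentarily empty; re-check termination
            try:
                next(task)      # run the task up to its next yield point
                runq.put(task)  # it yielded: requeue for any worker
            except StopIteration:
                with lock:
                    pending[0] -= 1
                    if pending[0] == 0:
                        done.set()  # all tasks finished; release workers

    workers = [threading.Thread(target=worker) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

Even this toy version shows why scheduler design matters: every yield funnels through one contended queue, which is exactly the kind of bottleneck that per-worker queues and work stealing are meant to relieve.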
Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics
This paper proposes a method to enhance video object detection for indoor environments in robotics. Concretely, it exploits knowledge about the camera motion between frames to propagate previously detected objects to successive frames. The proposal is rooted in the concepts of planar homography, to propose regions of interest in which to find objects, and recursive Bayesian filtering, to integrate observations over time. The proposal is evaluated on six virtual indoor environments, accounting for the detection of nine object classes over a total of ~7k frames. Results show that our proposal improves recall and F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction of the object categorization entropy (58.8%) when compared to a two-stage video object detection method used as a baseline, at the cost of small time overheads (120 ms) and a precision loss (0.92).
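The two ingredients named above can be sketched independently of the paper's pipeline: a planar homography maps a detection's image coordinates into the next frame (giving the region of interest to revisit), and a recursive Bayesian step fuses the propagated class belief with the new frame's detector scores. Names and array shapes here are illustrative assumptions, not the paper's actual interfaces.

```python
import numpy as np

def propagate(H, pt):
    """Map an image point through a 3x3 planar homography H.

    Points are lifted to homogeneous coordinates, transformed, and
    dehomogenized; the result predicts where the detection reappears.
    """
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

def bayes_update(prior, likelihood):
    """One recursive Bayesian filtering step over object-class beliefs:
    posterior is proportional to prior * likelihood, renormalized to sum to 1."""
    post = prior * likelihood
    return post / post.sum()
```

Repeating `bayes_update` frame after frame is what drives down the categorization entropy: classes that the detector scores consistently accumulate belief, while one-off misdetections are damped by the prior.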