Automatic goal distribution strategies for the execution of committed choice logic languages on distributed memory parallel computers
There has been much research interest in efficient implementations of the Committed
Choice Non-Deterministic (CCND) logic languages on parallel computers. To take
full advantage of the speed gains of parallel computers, methods need to be found
to automatically distribute goals over the machine processors, ideally with as little
involvement from the user as possible.

In this thesis we explore some automatic goal distribution strategies for the execution of the CCND languages on commercially available distributed memory parallel computers.

There are two facets to the goal distribution strategies we have chosen to explore:

DEMAND DRIVEN: An idle processor requests work from other processors. We describe
two strategies in this class: one in which an idle processor asks only neighbouring
processors for spare work, the nearest-neighbour strategy; and one where an idle
processor may ask any other processor in the machine for spare work, the all-processors strategy.

WEIGHTS: Using a program analysis technique devised by Tick, weights are attached to
goals; the weights can be used to order the goals so that they can be executed
and distributed in weighted order, possibly increasing performance.

We describe a framework in which to implement and analyse goal distribution strategies, and then go on to describe experiments with demand-driven strategies, both with and without weights. The experiments were made using two of our own implementations of Flat Guarded Horn Clauses, an interpreter and a WAM-like system, executing on a MEIKO T800 Transputer Array configured in a 2-D mesh topology.

Analysis of the results shows that the all-processors strategies are promising (AP-NW), that
adding weights had little positive effect on performance, and that nearest-neighbours
strategies can reduce performance due to bad load balancing.

We also describe some preliminary experiments with a variant of the AP-NW strategy:
goals which suspend on one variable are sent to the processor that controls that variable,
the processes-to-data strategy. And we briefly look at some preliminary results of
executing programs on large numbers of processors (> 30).
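The demand-driven idea with weighted goal ordering can be sketched in a few lines. This is an illustrative toy, not the thesis's actual implementation; the class names, goals and two-processor setup are invented for the example:

```python
import heapq

class Processor:
    def __init__(self, pid):
        self.pid = pid
        self.goals = []  # max-heap of (-weight, goal), heaviest first

    def add_goal(self, goal, weight):
        heapq.heappush(self.goals, (-weight, goal))

    def spare_work(self):
        # Hand out the heaviest spare goal, if any (weighted order).
        if self.goals:
            return heapq.heappop(self.goals)[1]
        return None

def all_processors_request(idle, others):
    # AP strategy: an idle processor may poll every other processor;
    # the nearest-neighbour variant would restrict `others` to the
    # idle processor's mesh neighbours.
    for p in others:
        goal = p.spare_work()
        if goal is not None:
            return goal
    return None

p0, p1 = Processor(0), Processor(1)
p1.add_goal("solve(X)", weight=5)
p1.add_goal("check(Y)", weight=1)
first = all_processors_request(p0, [p1])
print(first)  # the heavier goal is distributed first
```

The only difference between the AP and NN strategies in this sketch is which processors appear in `others`, which is why bad load balancing can arise when neighbours happen to have no spare work.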
The migration process of mobile agents: implementation, classification, and optimization
Mobile agents represent a fascinating new design paradigm for building and programming distributed systems. A mobile agent is a software entity that is started by its owner with a task on one node of a distributed system and then migrates at run time to other nodes of the network. This work focuses on the migration process of mobile agents, which has so far received little attention in the literature even though it decisively influences an agent's execution speed. A detailed analysis of the network load of mobile agents, compared with the traditional client-server approach in several typical application scenarios, shows the potential of mobile agents to reduce processing times. However, the analysis also reveals the drawbacks of the very simple migration techniques used in today's agent systems. A new migration model named Kalong is presented, which removes the lack of adaptability of today's agent systems and provides the programmer of a mobile agent with a very flexible migration technique.
Continued study of NAVSTAR/GPS for general aviation
A conceptual approach for examining the full potential of the Global Positioning System (GPS) for the general aviation community is presented. Aspects of an experimental program to demonstrate these concepts are discussed. The report concludes with the observation that the true potential of GPS can only be exploited by utilization in concert with a data link. The capability afforded by the combination of position location and reporting stimulates the concept of GPS providing the auxiliary functions of collision avoidance, and approach and landing guidance. A series of general recommendations for future NASA and civil community efforts to continue supporting GPS for general aviation is included.
Modelling and simulation of flexible instruments for minimally invasive surgical training in virtual reality
Improvements in quality and safety standards in surgical training, reduction in training hours and constant technological advances have challenged the traditional apprenticeship model to create a competent surgeon in a patient-safe way. As a result, pressure on training outside the operating room has increased. Interactive, computer based Virtual Reality (VR) simulators offer a safe, cost-effective, controllable and configurable training environment free from ethical and patient safety issues.
Two prototype, yet fully-functional VR simulator systems for minimally invasive procedures relying on flexible instruments were developed and validated. NOViSE is the first force-feedback enabled VR simulator for Natural Orifice Transluminal Endoscopic Surgery (NOTES) training supporting a flexible endoscope. VCSim3 is a VR simulator for cardiovascular interventions using catheters and guidewires. The underlying mathematical model of flexible instruments in both simulator prototypes is based on an established theoretical framework – the Cosserat Theory of Elastic Rods. The efficient implementation of the Cosserat Rod model allows for an accurate, real-time simulation of instruments at haptic-interactive rates on an off-the-shelf computer. The behaviour of the virtual tools and their computational performance were evaluated using quantitative and qualitative measures. The instruments exhibited near sub-millimetre accuracy compared to their real counterparts. The proposed GPU implementation further accelerated their simulation performance by approximately an order of magnitude.
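The discretisation idea behind such rod models can be illustrated in miniature. This toy treats a planar rod as a chain of segments with a quadratic bending energy over the turning angles between them; it is not the simulators' actual Cosserat solver, which also captures stretch, shear and twist in 3-D:

```python
# Toy planar rod: unit-length segments, quadratic bending energy over
# the turning angles between consecutive segments. Illustrative only.
def bending_energy(angles, stiffness=1.0):
    # angles[i] is the turning angle between segment i and segment i+1
    return 0.5 * stiffness * sum(a * a for a in angles)

straight = [0.0, 0.0, 0.0]
bent = [0.1, 0.2, 0.1]
e = bending_energy(bent)
print(bending_energy(straight), e)  # a straight rod stores no energy
```

A simulator minimises energies of this general shape (plus the other deformation modes) subject to boundary conditions from the instrument's handle and the anatomy, which is what makes real-time haptic rates a genuine implementation challenge.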
The realism of the simulators was assessed by face, content and, in the case of NOViSE, construct validity studies. The results indicate good overall face and content validity of both simulators and of virtual instruments. NOViSE also demonstrated early signs of construct validity. VR simulation of flexible instruments in NOViSE and VCSim3 can contribute to surgical training and improve the educational experience without putting patients at risk, raising ethical issues or requiring expensive animal or cadaver facilities. Moreover, in the context of an innovative and experimental technique such as NOTES, NOViSE could potentially facilitate its development and contribute to its popularization by keeping practitioners up to date with this new minimally invasive technique.
Advanced data management system analysis techniques study
The state of the art of system analysis is reviewed, emphasizing data management. Analytic, hardware, and software techniques are described.
Learning natural coding conventions
Coding conventions are ubiquitous in software engineering practice. Maintaining a uniform
coding style allows software development teams to communicate through code by
making the code clear and, thus, readable and maintainable—two important properties
of good code since developers spend the majority of their time maintaining software
systems. This dissertation introduces a set of probabilistic machine learning models
of source code that learn coding conventions directly from source code written in a
mostly conventional style. This alleviates the coding convention enforcement problem,
where conventions need to first be formulated clearly into unambiguous rules and then
be coded in order to be enforced, a tedious and costly process.
First, we introduce the problem of inferring a variable’s name given its usage context
and address this problem by creating Naturalize — a machine learning framework
that learns to suggest conventional variable names. Two machine learning models, a
simple n-gram language model and a specialized neural log-bilinear context model, are
trained to understand the role and function of each variable and suggest new stylistically
consistent variable names. The neural log-bilinear model can even suggest previously
unseen names by composing them from subtokens (i.e. sub-components of code identifiers).
The suggestions of the models achieve 90% accuracy when suggesting variable
names at the top 20% most confident locations, rendering the suggestion system usable
in practice.
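The n-gram half of this idea can be sketched with a toy bigram model: substitute each candidate name into the usage context and keep the one the model scores as most probable. The corpus, context and add-one smoothing here are purely illustrative, not Naturalize's actual configuration:

```python
from collections import Counter
import math

class BigramModel:
    def __init__(self, corpus_tokens):
        self.unigrams = Counter(corpus_tokens)
        self.bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
        self.vocab = len(self.unigrams)

    def log_prob(self, tokens):
        # Add-one smoothed bigram log-probability of a token sequence.
        lp = 0.0
        for prev, cur in zip(tokens, tokens[1:]):
            num = self.bigrams[(prev, cur)] + 1
            den = self.unigrams[prev] + self.vocab
            lp += math.log(num / den)
        return lp

# A one-line "corpus" written in the project's conventional style.
corpus = "for i in range ( n ) : total += items [ i ]".split()
model = BigramModel(corpus)

# Usage context with the variable to be named marked as {X}.
context = "for {X} in range ( n ) : total += items [ {X} ]"

def score(name):
    return model.log_prob(context.replace("{X}", name).split())

best = max(["i", "foo"], key=score)
print(best)  # the stylistically conventional name wins
```

Ranking all candidate names by such a score, and only surfacing the most confident suggestions, is what makes the accuracy-versus-coverage trade-off above possible.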
We then turn our attention to the significantly harder method naming problem.
Learning to name methods by looking only at the code tokens within their body requires
a good understanding of the semantics of the code contained in a single method.
To achieve this, we introduce a novel neural convolutional attention network that learns
to generate the name of a method by sequentially predicting its subtokens. This is
achieved by focusing on different parts of the code and potentially directly using body
(sub)tokens even when they have never been seen before. This model achieves an F1
score of 51% on the top five suggestions when naming methods of real-world open-source
projects.
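The F1 figure can be made concrete: both the suggested and the true method name are split into subtokens, and the suggestion is scored as a bag of subtokens. A minimal version, with hypothetical example names:

```python
# Score a suggested method name against the true one, treating each
# name as a set of its subtokens (e.g. "getFileName" -> get, file, name).
def subtoken_f1(predicted, actual):
    pred, act = set(predicted), set(actual)
    tp = len(pred & act)  # subtokens the model got right
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(act)
    return 2 * precision * recall / (precision + recall)

# e.g. the model predicts "getFileName" for a method named "getName"
f = subtoken_f1(["get", "file", "name"], ["get", "name"])
print(f)
```

Scoring at the subtoken level rewards partially correct names, which matters because a generated name can be useful even when it is not an exact match.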
Learning about naming code conventions uses the syntactic structure of the code
to infer names that implicitly relate to code semantics. However, syntactic similarities
and differences obscure code semantics. Therefore, to capture features of semantic
operations with machine learning, we need methods that learn semantic continuous
logical representations. To achieve this ambitious goal, we focus our investigation on
logic and algebraic symbolic expressions and design a neural equivalence network architecture
that learns semantic vector representations of expressions in a syntax-driven
way while retaining only their semantics. We show that equivalence networks learn significantly
better semantic vector representations compared to other, existing, neural
network architectures.
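The syntax-driven composition at the heart of such an architecture can be sketched untrained and in miniature: each operator owns a weight matrix that folds its children's vectors into one normalised parent vector. Dimensions, operators and leaves below are illustrative, and without training the vectors do not yet capture equivalence:

```python
import numpy as np

# Untrained, miniature sketch of bottom-up composition over an
# expression tree; a real equivalence network learns these weights so
# that semantically equivalent expressions map to nearby vectors.
rng = np.random.default_rng(0)
DIM = 4
leaf_vecs = {v: rng.standard_normal(DIM) for v in ("a", "b")}
op_weights = {op: rng.standard_normal((DIM, 2 * DIM)) for op in ("and", "or")}

def encode(expr):
    # expr is a variable name or a tuple (op, left, right)
    if isinstance(expr, str):
        return leaf_vecs[expr]
    op, left, right = expr
    children = np.concatenate([encode(left), encode(right)])
    vec = np.tanh(op_weights[op] @ children)
    return vec / np.linalg.norm(vec)  # keep representations on the unit sphere

v1 = encode(("and", "a", "b"))
v2 = encode(("and", "b", "a"))
print(np.dot(v1, v2))  # cosine similarity; near 1 only after training
```

Training then pushes the vectors of equivalent expressions (such as the two commuted conjunctions above) together while keeping non-equivalent ones apart.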
Finally, we present an unsupervised machine learning model for mining syntactic
and semantic code idioms. Code idioms are conventional “mental chunks” of code that
serve a single semantic purpose and are commonly used by practitioners. To achieve
this, we employ Bayesian nonparametric inference on tree substitution grammars. We
present a wide range of evidence that the resulting syntactic idioms are meaningful,
demonstrating that they do indeed recur across software projects and that they occur
more frequently in illustrative code examples collected from a Q&A site. These syntactic
idioms can be used as a form of automatic documentation of coding practices
of a programming language or an API. We also mine semantic loop idioms, i.e. highly
abstracted but semantic-preserving idioms of loop operations. We show that semantic
idioms provide data-driven guidance during the creation of software engineering tools
by mining common semantic patterns, such as candidate refactoring locations. This
gives tool, API and language designers data-based evidence about general,
domain-specific and project-specific coding patterns; instead of relying solely on
their intuition, they can use semantic idioms to achieve greater coverage of their
tool, new API or language feature. We demonstrate this by creating a tool that suggests loop refactorings into
functional constructs in LINQ. Semantic loop idioms also provide data-driven evidence
for introducing new APIs or programming language features.
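The mining step itself relies on Bayesian nonparametric inference over tree substitution grammars; as a much cruder, hypothetical stand-in, one can count identifier-abstracted code fragments that recur across projects (the keyword list, normalisation and sample "projects" below are invented for the example):

```python
from collections import Counter

KEYWORDS = {"for", "in", "if", "is", "None", "continue", "print", ":"}

def normalise(line):
    # Abstract user identifiers to "ID" so that structurally identical
    # fragments from different projects compare equal.
    toks = line.replace(":", " :").split()
    return " ".join(t if t in KEYWORDS or not t.isidentifier() else "ID"
                    for t in toks)

def fragments(lines, size=2):
    # Sliding windows of consecutive, normalised lines.
    norm = [normalise(ln) for ln in lines]
    return [tuple(norm[i:i + size]) for i in range(len(norm) - size + 1)]

projects = [
    ["for x in items:", "    if x is None:", "        continue"],
    ["for y in rows:", "    if y is None:", "        continue"],
    ["print(total)"],
]

counts = Counter(f for proj in projects for f in fragments(proj))
idioms = sorted(f for f, c in counts.items() if c > 1)
print(idioms)  # the skip-None loop pattern recurs across projects
```

Operating on trees rather than flat line windows, and using a principled nonparametric prior rather than a raw count threshold, is what lets the real approach find idioms of varying size without overfitting to any one project.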
Fourth Annual Workshop on Space Operations Applications and Research (SOAR 90)
The proceedings of the SOAR workshop are presented. The technical areas included are as follows: Automation and Robotics; Environmental Interactions; Human Factors; Intelligent Systems; and Life Sciences. NASA and Air Force programmatic overviews and panel sessions were also held in each technical area.