Efficient abstractions for visualization and interaction
Abstractions, such as functions and methods, are an essential tool for any programmer. Abstractions encapsulate the details of a computation: the programmer only needs to know what the abstraction achieves, not how it achieves it. However, using abstractions can come at a cost: the resulting program may be inefficient. This can lead to programmers avoiding some abstractions and instead writing the entire functionality from the ground up. In this thesis, we present several results that make this situation less likely when programming interactive visualizations. We present results that make abstractions more efficient in the areas of graphics, layout, and events.
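To make the cost the abstract describes concrete, here is a minimal, hypothetical Python sketch (not taken from the thesis): a per-mark drawing abstraction quietly recomputes a shared scale on every call, while a hand-fused loop computes it once. The names `draw_bar`, `scale`, and `render_bars` are illustrative only.

```python
# Illustrative sketch (not from the thesis): a convenient per-mark abstraction
# hides repeated work; a fused version hoists the shared computation.

data = [3, 7, 2, 9, 4]

def scale(values, pixels=100):
    """Map data values to pixel heights; O(n) because of max()."""
    top = max(values)
    return lambda v: v / top * pixels

def draw_bar(values, i, pixels=100):
    """Convenient abstraction: draws one bar, but rebuilds the scale each call."""
    s = scale(values, pixels)          # repeated work hidden by the abstraction
    return (i, s(values[i]))

naive = [draw_bar(data, i) for i in range(len(data))]   # O(n^2) overall

def render_bars(values, pixels=100):
    """Hand-fused version: the shared scale is computed once."""
    s = scale(values, pixels)
    return [(i, s(v)) for i, v in enumerate(values)]

assert naive == render_bars(data)      # same picture, different cost
```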
Semantic chunking
Long sentences pose a challenge for natural language processing (NLP) applications. They are associated with a complex information structure leading to increased requirements for processing resources. Although the issue is present in many areas of research, there is little uniformity in the solutions used by research communities dedicated to individual NLP applications. Different aspects of the problem are addressed by different tasks, such as sentence simplification or shallow chunking.
The main contribution of this thesis is the introduction of the task of semantic chunking as a general approach to reducing the cost of processing long sentences. The goal of semantic chunking is to find semantically contained fragments of a sentence representation that can be processed independently and recombined without loss of information. We anchor its principles in established concepts of semantic theory, in particular event and situation semantics. Most of the experiments in this thesis focus on semantic chunking defined on complex semantic representations in Dependency Minimal Recursion Semantics (DMRS), but we also demonstrate that the task can be performed on sentence strings. We present three chunking models: a) a rule-based proof-of-concept DMRS chunking system; b) a semi-supervised sequence labelling neural model for surface semantic chunking; c) a system capable of finding semantic chunk boundaries based on the inherent structure of DMRS graphs, generalisable in the form of descriptive templates. We show how semantic chunking can be applied within a divide-and-conquer processing paradigm, using as an example the task of realization from DMRS. The application of semantic chunking yields noticeable efficiency gains without decreasing the quality of results.
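As a rough illustration of the divide-and-conquer shape described above (not the thesis's DMRS-based chunkers), the Python sketch below uses a crude surface heuristic to split a sentence, processes each fragment independently, and recombines the results; `chunk`, `process`, and `recombine` are hypothetical names, not the thesis API.

```python
import re

def chunk(sentence: str) -> list[str]:
    """Split at clause cues as a crude stand-in for semantic chunking."""
    parts = re.split(r",\s*(?=(?:and|but|which|who|because)\b)", sentence)
    return [p.strip() for p in parts if p.strip()]

def process(fragment: str) -> str:
    """Placeholder for an expensive per-fragment task (e.g. parsing or realization)."""
    return fragment.upper()

def recombine(fragments: list[str]) -> str:
    """Reassemble the independently processed fragments."""
    return ", ".join(fragments)

sentence = ("The model was trained on treebank data, "
            "which made annotation expensive, but it generalised well")
print(recombine([process(f) for f in chunk(sentence)]))
```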
Machine checkable design patterns using dependent types and domain specific goal-oriented modelling languages
Goal-Oriented Modelling Languages such as the Goal Requirements Language (GRL) have been used to reason about Design Patterns. However, the GRL is a general purpose modelling language that does not support concepts bespoke to the pattern domain. This thesis investigates how advanced programming language techniques, namely Dependent Types and Domain Specific Languages, can be used to enhance the design and construction of Domain Specific Modelling Languages (DSMLs), and applies the results to Design Pattern Engineering.
This thesis presents Sif, a DSML for reasoning about design patterns as goal-oriented requirements problems. Sif presents modellers with a modelling language tailored to the pattern domain but leverages the GRL for realisation of the modelling constructs. Dependent types have influenced the design and implementation of Sif to provide correctness guarantees, and have led to the development of NovoGRL, a novel extension of the GRL.
A technique for DSML implementation called Types as (Meta) Modellers was developed, in which the interpretation between a DSML and its host language is implemented directly within the type-system of the DSML. This provides correctness guarantees for DSML model instances during model construction: models can be constructed if and only if the DSML's type-system can build a valid representation of the model in the host language.
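The thesis realises this with dependent types, which Python cannot express; as a loose analogy only, the sketch below mimics the "construction implies a valid host representation" idea by having every DSML constructor also build a GRL-like host element and fail fast when no valid representation exists. All class names and the weight range are hypothetical, not Sif or NovoGRL.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GRLGoal:            # stand-in for a host-language (GRL-like) element
    name: str

@dataclass(frozen=True)
class GRLContribution:    # stand-in for a host-language link
    src: GRLGoal
    dst: GRLGoal
    weight: int           # assumed GRL-style range [-100, 100]

class PatternRequirement:
    """Illustrative DSML construct: a requirement the pattern must satisfy."""
    def __init__(self, name: str):
        self.realisation = GRLGoal(name)

class PatternForce:
    """Illustrative DSML construct: a pattern 'force' acting on a requirement."""
    def __init__(self, name: str, target: PatternRequirement, weight: int):
        if not (-100 <= weight <= 100):
            # no valid host representation exists -> the model cannot be constructed
            raise ValueError("contribution weight must lie in [-100, 100]")
        self.realisation = GRLContribution(GRLGoal(name), target.realisation, weight)

req = PatternRequirement("Decouple observers from subject")
force = PatternForce("Indirection overhead", req, -25)   # constructed, hence valid
```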
This thesis also investigated design pattern evaluation, developing PREMES, an evaluation framework that uses tailorable testing techniques to provide demonstrable reporting on pattern quality. Linking PREMES with Sif are Freyja, an active pattern document schema in which Sif models are embedded within pattern documents, and Frigg, a tool for interacting with pattern documents.
The proof-of-concept tools in this thesis demonstrate: machine-enhanced interactions with design patterns; reproducible automation in the PREMES framework; and machine checking of pattern documents as Sif models. With the tooling and techniques presented, design pattern engineering can become a more rigorous, demonstrable, and machine-checkable process.
Algebraic Stream Processing
We identify and analyse the typically higher-order approaches to stream processing in the literature. From this analysis we motivate an alternative approach to the specification of stream processing systems (SPSs) as stream transformers (STs), based on an essentially first-order equational representation. This technique is called Cartesian form specification. More specifically, while STs are properly second-order objects, we show that using Cartesian forms, the second-order models needed to formalise STs are so weak that we may use and develop well-understood first-order methods from computability theory and mathematical logic to reason about their properties. Indeed, we show that by specifying STs equationally in Cartesian form as primitive recursive functions we have the basis of a new, general purpose and mathematically sound theory of stream processing that emphasises the formal specification and formal verification of STs.

The main topics that we address in the development of this theory are as follows. We present a theoretically well-founded general purpose stream processing language ASTRAL (Algebraic Stream TRAnsformer Language) that supports the use of modular specification techniques for full second-order STs. We show how ASTRAL specifications can be given a Cartesian form semantics using the language PREQ, which is an equational characterisation of the primitive recursive functions. In more detail, we show that by compiling ASTRAL specifications into an equivalent Cartesian form in PREQ we can use first-order equational logic with induction as a logical calculus to reason about STs. In particular, using this calculus we identify a syntactic class of correctness statements for which the verification of ASTRAL programmes is decidable relative to this calculus. We define an effective algorithm based on term rewriting techniques to implement this calculus and hence to automatically verify a very broad class of STs, including conventional hardware devices. Finally, we analyse the properties of this abstract algorithm as a proof assistant and discuss various techniques that have been adopted to develop software tools based on this algorithm.
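As a hedged sketch of the Cartesian-form idea only (not an ASTRAL or PREQ example): a stream transformer is properly second-order, mapping streams to streams, yet it can be pinned down by first-order equations over a time index in the style of primitive recursion. The running-sum transformer below is illustrative.

```python
from typing import Callable

Stream = Callable[[int], int]   # a stream as a function from time t to a value

def running_sum(a: Stream) -> Stream:
    """ST defined equationally:  out(0) = a(0),  out(t+1) = out(t) + a(t+1)."""
    def out(t: int) -> int:
        return a(0) if t == 0 else out(t - 1) + a(t)
    return out

nat: Stream = lambda t: t          # input stream 0, 1, 2, 3, ...
sums = running_sum(nat)
assert [sums(t) for t in range(5)] == [0, 1, 3, 6, 10]
```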
Reconfigurable Computing Systems for Robotics using a Component-Oriented Approach
Robotic platforms are becoming more complex due to the wide range of modern applications, and now include multiple heterogeneous sensors and actuators. In order to comply with real-time and power-consumption constraints, these systems need to process a large amount of heterogeneous data from multiple sensors and take action (via actuators). This is a problem because the resources of these systems are limited in memory storage, bandwidth, and computational power.
Field Programmable Gate Arrays (FPGAs) are programmable logic devices that offer high-speed parallel processing. FPGAs are particularly well-suited for applications that require real-time processing, high bandwidth, and low latency. One of the fundamental advantages of FPGAs is their flexibility in designing hardware tailored to specific needs, making them adaptable to a wide range of applications. They can be programmed to pre-process data close to sensors, which reduces the amount of data that needs to be transferred to other computing resources, improving overall system efficiency. Additionally, the reprogrammability of FPGAs enables them to be repurposed for different applications, providing a cost-effective solution for systems that need to adapt quickly to changing demands. FPGAs' performance per watt is close to that of Application-Specific Integrated Circuits (ASICs), with the added advantage of being reprogrammable.
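The following is an illustrative software model (not FPGA code, and not from the dissertation) of the near-sensor pre-processing mentioned above: only samples that change significantly are forwarded, so the downstream link and processor see far less data. The threshold, sample values, and function name are hypothetical.

```python
def near_sensor_filter(samples, threshold=5):
    """Forward a sample only when it differs from the last forwarded one by >= threshold."""
    last = None
    for t, s in enumerate(samples):
        if last is None or abs(s - last) >= threshold:
            last = s
            yield (t, s)            # timestamped event sent off-chip

raw = [100, 101, 102, 150, 151, 90, 91, 92, 93, 200]
events = list(near_sensor_filter(raw))
print(f"forwarded {len(events)} of {len(raw)} samples:", events)
```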
Despite all the advantages of FPGAs (e.g., energy efficiency, computing capabilities), the robotics community has not yet fully adopted them as part of its systems, for several reasons. First, designing FPGA-based solutions requires hardware knowledge and longer development times, as their programmability is more challenging than that of Central Processing Units (CPUs) or Graphics Processing Units (GPUs). Second, porting a robotics application (or parts of it) from software to an accelerator requires adequate interfaces between software and FPGAs. Third, the robotics workflow is already complex on its own, combining several fields such as mechanics, electronics, and software.
There have been partial contributions in the state of the art for FPGAs as part of robotics systems. However, a study of FPGAs as a whole for robotics systems is missing in the literature, and providing one is the primary goal of this dissertation. Three main objectives have been established to accomplish this: (1) define all components required for an FPGA-based system for robotics applications as a whole; (2) establish how all the defined components are related; and (3) with the help of Model-Driven Engineering (MDE) techniques, generate these components, deploy them, and integrate them into existing solutions.
The component-oriented approach proposed in this dissertation provides a solution for designing and implementing FPGA-based designs for robotics applications.
The modular architecture, the tool 'FPGA Interfaces for Robotics Middlewares' (FIRM), and the toolchain 'FPGA Architectures for Robotics' (FAR) provide a set of tools and a comprehensive design process that enable the development of complex FPGA-based designs in a more straightforward and efficient manner. The component-oriented approach contributes significantly to the state of the art in FPGA-based designs for robotics applications and helps to promote their wider adoption and use by specialists with little FPGA knowledge.
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by correlating individual, temporally distributed events within a multiple data stream environment is explored, and a range of techniques is surveyed, covering model-based approaches, 'programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base and the limited ability to generalise from known misuses to new, unseen misuses.

Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated.
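A hedged sketch of the hybrid arrangement the report favours is given below: a rule-based component screens events against known misuse signatures (prone to false negatives on novel misuse), while a simple profile of normal behaviour flags deviations (prone to false positives). All rules, event fields, and thresholds are illustrative, not taken from the report.

```python
from collections import Counter

# Hypothetical signatures of known misuse (the 'programmed' component).
KNOWN_MISUSE_RULES = [
    lambda e: e["type"] == "call" and e["duration_min"] > 300,   # marathon call
    lambda e: e["type"] == "login" and e["failures"] >= 5,       # brute-force login
]

class NormalProfile:
    """Learn how often each event type normally occurs, then flag rare ones."""
    def __init__(self, baseline_events, min_share=0.05):
        counts = Counter(e["type"] for e in baseline_events)
        total = sum(counts.values())
        self.share = {k: v / total for k, v in counts.items()}
        self.min_share = min_share

    def is_anomalous(self, event):
        return self.share.get(event["type"], 0.0) < self.min_share

def classify(event, profile):
    """Hybrid decision: known-signature match first, then anomaly check."""
    if any(rule(event) for rule in KNOWN_MISUSE_RULES):
        return "known misuse"
    if profile.is_anomalous(event):
        return "possible misuse (anomaly)"
    return "normal"

baseline = [{"type": "call"}] * 90 + [{"type": "login"}] * 10
profile = NormalProfile(baseline)
print(classify({"type": "call", "duration_min": 400}, profile))   # known misuse
print(classify({"type": "sms_flood", "count": 1000}, profile))    # possible misuse (anomaly)
print(classify({"type": "call", "duration_min": 3}, profile))     # normal
```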
- …