
    Composing graphical user interfaces in a purely functional language

    This thesis is about building interactive graphical user interfaces in a compositional manner. Graphical user interface applications hold out the promise of providing users with an interactive, graphical medium through which they can carry out tasks more effectively and conveniently; the application aids the user in solving some task. Conceptually, the user is in charge of the graphical medium, controlling the order and the rate at which individual actions are performed. This user-centred nature of graphical user interfaces has considerable ramifications for how software is structured. Since the application now services the user rather than the other way around, it has to be capable of responding to the user's actions whenever, and in whatever order, they occur. This transfer of overall control towards the user places a heavy burden on programming systems, one that many systems support poorly, because the application must be structured so that it can respond to whatever action the user performs at any time. The main contribution of this thesis is a compositional approach to constructing graphical user interface applications in a purely functional programming language. The thesis is concerned with the software techniques used to program graphical user interface applications, not directly with their design. A starting point for the work presented here was to examine whether an approach based on functional programming could improve how graphical user interfaces are built. Functional programming languages, and Haskell in particular, offer a number of distinctive features, such as higher-order functions, polymorphic type systems, lazy evaluation, and systematic overloading, that together pack quite a punch, at least according to proponents of these languages. A secondary contribution of this thesis is a compositional user interface framework called Haggis, which makes good use of current functional programming techniques. The thesis evaluates the properties of this framework by comparing it to existing systems.
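
    To make the compositional idea concrete, here is a minimal, self-contained sketch in Haskell. The Widget type and the beside/above combinators are hypothetical illustrations, not Haggis's actual API; the point is that a user interface is an ordinary value, assembled by composing smaller interfaces.

```haskell
-- A toy model of compositional GUI construction. A Widget is a pure
-- description; combinators build larger interfaces from smaller ones.
data Widget = Label String
            | Button String
            | Beside Widget Widget
            | Above  Widget Widget

beside, above :: Widget -> Widget -> Widget
beside = Beside
above  = Above

-- Render the description as text; a real toolkit would instead
-- interpret it as widgets on screen.
render :: Widget -> String
render (Label s)    = "[" ++ s ++ "]"
render (Button s)   = "(" ++ s ++ ")"
render (Beside a b) = render a ++ " " ++ render b
render (Above a b)  = render a ++ "\n" ++ render b

main :: IO ()
main = putStrLn . render $
  Label "Counter: 0" `above` (Button "+" `beside` Button "-")
```

    Because the interface is just a value, the usual functional toolkit applies: a row of buttons can be laid out with a fold such as foldr1 beside, and recurring layouts can be captured once as ordinary functions.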

    Declarative Support for Prototyping Interactive Systems

    The development of complex, multi-user, interactive systems is a difficult process that requires both a rapid iterative approach and the ability to reason carefully about system designs. This thesis argues that a combination of declarative prototyping and formal specification provides a suitable way of satisfying these requirements. The focus of this thesis is on the development of software tools for prototyping interactive systems. In particular, it uses a declarative approach based on the functional programming paradigm. This thesis makes two contributions. The most significant is the presentation of FranTk, a new graphical user interface language embedded in the functional language Haskell. It is suitable for prototyping complex, concurrent, multi-user systems; it allows systems to be built in a high-level, structured manner and, in particular, provides good support for specifying real-time properties of such systems. The second contribution is a mechanism that automatically derives a formal specification from a high-level FranTk prototype. This specification can then be checked, with tool support, to verify some safety properties of a system. To avoid the state-space explosion problem that would be faced when verifying an entire system, we focus on partial verification, concentrating on key areas of a design: in particular, we derive a specification only from parts of a prototype. To demonstrate the scalability of both the prototyping and verification approaches, this thesis uses a series of case studies, including a multi-user design rationale editor and a prototype data-link Air Traffic Control system.
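
    FranTk builds on Fran's notions of behaviours (time-varying values) and events (streams of time-stamped occurrences). The toy definitions below are illustrative assumptions only, not FranTk's actual types; they show the declarative flavour, where an interface property is specified as a value that follows an event stream.

```haskell
-- Illustrative Fran-style types: a Behavior is a function of time and
-- an Event is a list of time-stamped occurrences.
type Time = Double
newtype Behavior a = Behavior { at :: Time -> a }
newtype Event a    = Event [(Time, a)]

-- 'stepper' holds the most recent event value (a classic FRP combinator).
stepper :: a -> Event a -> Behavior a
stepper a0 (Event os) = Behavior $ \t ->
  case [ v | (t', v) <- os, t' <= t ] of
    [] -> a0
    vs -> last vs

-- Example: a label's text declaratively follows button clicks.
clicks :: Event String
clicks = Event [(1.0, "clicked once"), (2.5, "clicked twice")]

labelText :: Behavior String
labelText = stepper "not clicked" clicks

main :: IO ()
main = mapM_ (putStrLn . at labelText) [0.5, 1.5, 3.0]
```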

    Integrating interactive tools using Concurrent Haskell and synchronous events

    In this paper we describe how existing interactive tools can be integrated using Concurrent Haskell and synchronous events. The base technology is a higher-order approach to concurrency, as in CML, extended with a framework for handling external events from the environment. These events are represented as first-class synchronous events to achieve a uniform, composable approach to event handling; adaptors are interposed between the external event sources and the internal set of listening agents to achieve this degree of abstraction. The result is a substantially improved integration framework compared to existing technology (for example, the combination of Tcl/Tk with expect). On this basis it is possible, for example, to wrap a GUI around the Hugs interpreter with very little work.
    Track: Conferencia Latinoamericana de Programación Funcional. Red de Universidades con Carreras en Informática (RedUNCI).
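
    A minimal sketch of the adaptor idea in plain Concurrent Haskell follows. It simplifies CML-style synchronous events to channel reads: synchronising on an Event blocks until a value arrives, and dupChan gives each listening agent its own independent view of the external stream. The names Event and listenOn are illustrative assumptions, not the paper's actual library.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.Chan (Chan, newChan, dupChan, readChan, writeChan)
import Control.Monad (forM_, forever)

-- A first-class "event": an action that, when synchronised on, yields a value.
newtype Event a = Event { sync :: IO a }

-- Adaptor: expose an external source as an Event that any number of
-- listening agents can wait on independently.
listenOn :: Chan a -> IO (Event a)
listenOn source = do
  local <- dupChan source          -- per-listener view of the stream
  return (Event (readChan local))

main :: IO ()
main = do
  external <- newChan
  -- Simulate an external tool emitting output.
  _ <- forkIO $ forM_ [1 .. 3 :: Int] $ \n -> do
         threadDelay 100000
         writeChan external ("tool output " ++ show n)
  -- Two independent listening agents, each behind its own adaptor.
  forM_ ["agent A", "agent B"] $ \name -> do
    ev <- listenOn external
    _ <- forkIO $ forever $ do
           msg <- sync ev
           putStrLn (name ++ " got: " ++ msg)
    return ()
  threadDelay 500000               -- let the demo run, then exit
```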

    Hidden expectations: Scaffolding subject specialists' genre knowledge of the assignments they set

    Subject specialists’ knowledge of academic and disciplinary literacy is often tacit. We tackle the issue of how to elicit subject specialists’ tacit knowledge in order to develop their pedagogical practices and enable them to communicate this knowledge to students. Drawing on theories of genre and metacognition, a professional development activity was designed and delivered. Our aims were to (1) build participants’ genre knowledge and (2) scaffold metacognitive awareness of how genre knowledge can enhance their pedagogical practices. The findings reveal that participants built a genre-based understanding of academic literacy and that the tasks provided them with an accessible framework to articulate and reflect upon their knowledge of disciplinary literacy. Participants gained metacognitive awareness of misalignments between what they teach and what they expect from students, of their assumptions about students’ prior learning, and of genre-based strategies for adapting their practice to students’ needs. Our approach provides a theoretically grounded professional development tool for the higher education (HE) sector.

    Learning natural coding conventions

    Coding conventions are ubiquitous in software engineering practice. Maintaining a uniform coding style allows software development teams to communicate through code by making the code clear and, thus, readable and maintainable—two important properties of good code, since developers spend the majority of their time maintaining software systems. This dissertation introduces a set of probabilistic machine learning models of source code that learn coding conventions directly from source code written in a mostly conventional style. This alleviates the coding convention enforcement problem, where conventions must first be formulated clearly into unambiguous rules and then be coded in order to be enforced, a tedious and costly process. First, we introduce the problem of inferring a variable’s name given its usage context, and address it by creating Naturalize, a machine learning framework that learns to suggest conventional variable names. Two machine learning models, a simple n-gram language model and a specialized neural log-bilinear context model, are trained to understand the role and function of each variable and to suggest new, stylistically consistent variable names. The neural log-bilinear model can even suggest previously unseen names by composing them from subtokens (i.e. sub-components of code identifiers). The models achieve 90% accuracy when suggesting variable names at the top 20% most confident locations, rendering the suggestion system usable in practice. We then turn our attention to the significantly harder method naming problem. Learning to name methods by looking only at the code tokens within their bodies requires a good understanding of the semantics of the code contained in a single method. To achieve this, we introduce a novel neural convolutional attention network that learns to generate the name of a method by sequentially predicting its subtokens, focusing on different parts of the code and potentially reusing body (sub)tokens directly, even when they have never been seen before. This model achieves an F1 score of 51% on the top five suggestions when naming methods of real-world open-source projects. Learning about naming conventions uses the syntactic structure of the code to infer names that implicitly relate to code semantics. However, syntactic similarities and differences obscure code semantics. Therefore, to capture features of semantic operations with machine learning, we need methods that learn continuous representations of semantics. To achieve this ambitious goal, we focus our investigation on logic and algebraic symbolic expressions and design a neural equivalence network architecture that learns semantic vector representations of expressions in a syntax-driven way, while retaining only their semantics. We show that equivalence networks learn significantly better semantic vector representations than other existing neural network architectures. Finally, we present an unsupervised machine learning model for mining syntactic and semantic code idioms. Code idioms are conventional “mental chunks” of code that serve a single semantic purpose and are commonly used by practitioners. To mine them, we employ Bayesian nonparametric inference on tree substitution grammars.
    We present a wide range of evidence that the resulting syntactic idioms are meaningful, demonstrating that they do indeed recur across software projects and that they occur more frequently in illustrative code examples collected from a Q&A site. These syntactic idioms can be used as a form of automatic documentation of the coding practices of a programming language or an API. We also mine semantic loop idioms, i.e. highly abstracted but semantics-preserving idioms of loop operations. We show that semantic idioms provide data-driven guidance during the creation of software engineering tools by surfacing common semantic patterns, such as candidate refactoring locations. This gives data-based evidence about general, domain-specific, and project-specific coding patterns to tool, API, and language designers, who, instead of relying solely on their intuition, can use semantic idioms to achieve greater coverage of their tool, new API, or language feature. We demonstrate this by creating a tool that suggests refactoring loops into functional constructs in LINQ. Semantic loop idioms also provide data-driven evidence for introducing new APIs or programming language features.
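
    To make the n-gram naming idea concrete, here is a deliberately tiny sketch. It ranks candidate names for a variable slot by how often each candidate occurs in the same one-token left and right contexts in a training corpus. The corpus, the candidates, and the raw-count scoring are illustrative assumptions, far simpler than the dissertation's actual models.

```haskell
import qualified Data.Map.Strict as M
import Data.List (sortOn)
import Data.Ord (Down (..))

type Bigram = (String, String)

-- Count bigrams in a token stream: a minimal stand-in for an n-gram
-- language model over source-code tokens.
bigramCounts :: [String] -> M.Map Bigram Int
bigramCounts ts = M.fromListWith (+) [ (bg, 1) | bg <- zip ts (tail ts) ]

-- Score a candidate name by how often it appears between the given
-- left and right context tokens in the training data.
score :: M.Map Bigram Int -> String -> String -> String -> Int
score m left cand right =
  M.findWithDefault 0 (left, cand) m + M.findWithDefault 0 (cand, right) m

-- Rank candidate names for a slot, most conventional first.
suggest :: [String] -> (String, String) -> [String] -> [(String, Int)]
suggest corpus (l, r) cands =
  sortOn (Down . snd) [ (c, score m l c r) | c <- cands ]
  where m = bigramCounts corpus

main :: IO ()
main = do
  let corpus = words "for ( int i = 0 ; i < n ; i ++ )"
  -- Which name best fits the slot in "int _ =", given the corpus?
  print (suggest corpus ("int", "=") ["i", "j", "counter"])
```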

    Can Large Language Models emulate an inductive Thematic Analysis of semi-structured interviews? An exploration and provocation on the limits of the approach and the model

    Large Language Models (LLMs) have emerged as powerful generative Artificial Intelligence solutions that can be applied to several fields and areas of work. This paper presents the results of, and reflections on, an experiment that used the model GPT-3.5-Turbo to emulate some aspects of an inductive Thematic Analysis; previous research on this subject has largely focused on deductive analysis. Thematic Analysis is a qualitative analysis method commonly used in the social sciences, based on the interpretations made by the human analyst(s) and on the identification of explicit and latent meanings in qualitative data. Attempting an analysis based on human interpretation with an LLM is clearly a provocation, but also a way to learn something about how these systems can or cannot be used in qualitative research. The paper presents the motivations for attempting this emulation, reflects on how the six steps of a Thematic Analysis proposed by Braun and Clarke can at least partially be reproduced with the LLM, and reflects on the outputs the model produces. The experiment used two existing datasets of open-access semi-structured interviews, previously analysed with Thematic Analysis by other researchers, and compared the previously produced analyses (and the related themes) with the results produced by the LLM. The results show that the model can infer, at least partially, some of the main themes. The objective of the paper is not to replace human analysts in qualitative analysis but to learn whether some elements of LLM data manipulation can, to an extent, support qualitative research.

    Thinking outside the TBox: multiparty service matchmaking as information retrieval

    Service-oriented computing is crucial to a large and growing number of computational undertakings. Central to its approach are the open, network-accessible services provided by many different organisations, which in turn enable the easy creation of composite workflows. This leads to an environment containing many thousands of services, in which a programmer or automated composition system must discover and select services appropriate for the task at hand. This discovery and selection process is known as matchmaking. Prior work in the field has conceived the problem as one of sufficiently describing individual services using formal, symbolic knowledge representation languages. We review the prior work and present arguments for why it is optimistic to assume that this approach will be adequate by itself. With these issues in mind, we examine how, by reformulating the task and giving the matchmaker a record of prior service performance, we can alleviate some of the problems. Using two formalisms—the incidence calculus and the lightweight coordination calculus—along with algorithms inspired by information retrieval techniques, we evolve a series of simple matchmaking agents that learn from experience how to select those services which performed well in the past, while making minimal demands on the service users. We extend this mechanism to the overlooked case of matchmaking in workflows using multiple services, selecting groups of services known to inter-operate well. We examine the performance of such matchmakers in possible future service environments, and discuss the issues in applying such techniques in large-scale deployments.
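
    The core of such an experience-based matchmaker can be sketched very simply: record each service's past outcomes and rank candidates by a smoothed success rate, so that unseen services are not ruled out entirely. The service names and the Laplace-smoothed scoring below are illustrative assumptions, not the thesis's incidence-calculus machinery.

```haskell
import qualified Data.Map.Strict as M
import Data.List (sortOn)
import Data.Ord (Down (..))

-- Performance record per service: (successes, invocations).
type History = M.Map String (Int, Int)

-- Record the outcome of one service invocation.
observe :: String -> Bool -> History -> History
observe svc ok = M.insertWith add svc (if ok then (1, 1) else (0, 1))
  where add (s1, n1) (s2, n2) = (s1 + s2, n1 + n2)

-- Laplace-smoothed success rate, so unseen services still get a chance.
rate :: History -> String -> Double
rate h svc = (fromIntegral s + 1) / (fromIntegral n + 2)
  where (s, n) = M.findWithDefault (0, 0) svc h

-- Rank candidate services for a task, best past performer first.
rank :: History -> [String] -> [(String, Double)]
rank h svcs = sortOn (Down . snd) [ (s, rate h s) | s <- svcs ]

main :: IO ()
main = do
  let h = foldr (\(s, ok) -> observe s ok) M.empty
            [ ("translateA", True), ("translateA", True)
            , ("translateB", False), ("translateB", True) ]
  print (rank h ["translateA", "translateB", "translateC"])
```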
