
    A study of novice programmer performance and programming pedagogy.

    Identifying and mitigating the difficulties experienced by novice programmers is an active area of research that draws on a number of disciplines. The aim of this research was to perform a holistic study into the causes of poor performance in novice programmers and to develop teaching approaches to mitigate them. A grounded action methodology was adopted to enable the primary concepts of programming cognitive psychology and their relationships to be established in a systematic and formal manner. To further investigate novice programmer behaviour, two sub-studies were conducted into programming performance and ability. The first sub-study was a novel application of the FP-Tree algorithm to determine whether novice programmers demonstrated predictable patterns of behaviour. This was the first study to data mine programming behavioural characteristics rather than the learner's background information, such as age and gender. Using the algorithm, patterns of behaviour were generated and associated with the students' ability. No patterns of behaviour were identified and it was not possible to predict student results using this method. This suggests that novice programmers demonstrate no set patterns of programming behaviour that can be used to determine their ability, although problem solving was found to be an important characteristic. Therefore, there was no evidence that performance could be improved by adopting pedagogies to promote simple changes in programming behaviour beyond the provision of specific problem-solving instruction. A second sub-study, using Raven's Matrices, determined that cognitive psychology, specifically working memory, played an important role in novice programmer ability. The implication is that programming pedagogies must take into consideration the cognitive psychology of programming and the cognitive load imposed on learners. Abstracted Construct Instruction was developed based on these findings and forms a new pedagogy for teaching programming that promotes the recall of abstract patterns while reducing the cognitive demands associated with developing code. Cognitive load is determined by the student's ability to ignore irrelevant surface features of the written problem and to cross-reference between the problem domain and their mental program model. The former is dealt with by producing tersely written exercises that eliminate distractors, while for the latter the teaching of problem solving is delayed until the student's program model has formed. While this does delay the development of problem-solving skills, the problem-solving abilities of students taught using this pedagogy were found to be comparable with those of students taught using a more traditional approach. Furthermore, monitoring students' understanding of these patterns enabled micromanagement of the learning process, and hence explanations were provided for novice behaviour such as difficulties using arrays, inert knowledge and "code thrashing". For teaching more complex problem solving, scaffolding of practice was investigated through a program framework that could be developed in stages by the students. However, personalising the level of scaffolding was complicated and proved difficult to achieve in practice. In both cases, these new teaching approaches evolved as part of a grounded theory study, and a clear progression of teaching practice was demonstrated, with appropriate evaluation at each stage, in accordance with action research.
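    As an aside for readers unfamiliar with the technique, the following is a minimal sketch of the kind of frequent-pattern mining the FP-Tree (FP-Growth) algorithm performs, written in Python with the mlxtend library; the behavioural event names and the support threshold are illustrative assumptions, not the thesis's actual data or parameters.

        # Sketch: FP-Growth frequent-pattern mining over behavioural events.
        # Requires: pip install mlxtend pandas
        import pandas as pd
        from mlxtend.preprocessing import TransactionEncoder
        from mlxtend.frequent_patterns import fpgrowth

        # Hypothetical per-student "transactions" of observed programming behaviours.
        sessions = [
            ["edits_often", "compiles_early", "tests_incrementally"],
            ["edits_often", "compiles_early"],
            ["copies_examples", "compiles_late"],
            ["edits_often", "tests_incrementally"],
        ]

        # One-hot encode the transactions, then mine itemsets above a support threshold.
        encoder = TransactionEncoder()
        onehot = encoder.fit(sessions).transform(sessions)
        frame = pd.DataFrame(onehot, columns=encoder.columns_)
        patterns = fpgrowth(frame, min_support=0.5, use_colnames=True)
        print(patterns)  # each row: support value plus the frequent behaviour itemset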

    Applying science of learning in education: Infusing psychological science into the curriculum

    The field of specialization known as the science of learning is not, in fact, one field. Science of learning is a term that serves as an umbrella for many lines of research, theory, and application. A term with an even wider reach is Learning Sciences (Sawyer, 2006). The present book represents a sliver, albeit a substantial one, of the scholarship on the science of learning and its application in educational settings (Science of Instruction, Mayer, 2011). Although much, but not all, of what is presented in this book is focused on learning in college and university settings, teachers of all academic levels may find the recommendations made by chapter authors of service. The overarching theme of this book is the interplay between the science of learning, the science of instruction, and the science of assessment (Mayer, 2011). The science of learning is a systematic and empirical approach to understanding how people learn. More formally, Mayer (2011) defined the science of learning as the "scientific study of how people learn" (p. 3). The science of instruction (Mayer, 2011), informed in part by the science of learning, is also on display throughout the book. Mayer defined the science of instruction as the "scientific study of how to help people learn" (p. 3). Finally, the assessment of student learning (e.g., learning, remembering, transferring knowledge) during and after instruction helps us determine the effectiveness of our instructional methods. Mayer defined the science of assessment as the "scientific study of how to determine what people know" (p. 3). Most of the research and applications presented in this book are completed within a science of learning framework. Researchers first conducted research to understand how people learn in certain controlled contexts (i.e., in the laboratory), and then they, or others, began to consider how these understandings could be applied in educational settings. Work on the cognitive load theory of learning, which is discussed in depth in several chapters of this book (e.g., Chew; Lee and Kalyuga; Mayer; Renkl), provides an excellent example of how the science of learning has led to valuable work on the science of instruction. Most of the work described in this book is based on theory and research in cognitive psychology. We might have selected other topics (and, thus, other authors) that have their research base in behavior analysis, computational modeling and computer science, neuroscience, etc. We made the selections we did because the work of our authors ties together nicely and seemed to us to have direct applicability in academic settings.

    A review and assessment of novice learning tools for problem solving and program development

    There is a great demand for the development of novice learning tools to supplement classroom instruction in the areas of problem solving and program development. Research in the areas of pedagogy, the psychology of programming, human-computer interaction, and cognition has provided valuable input to the development of new methodologies, paradigms, programming languages, and novice learning tools to answer this demand. Based on the cognitive needs of novices, it is possible to postulate a set of characteristics that should comprise the components of an effective novice learning tool. This thesis will identify these characteristics and provide recommendations for the development of new learning tools. This will be accomplished with a review of the challenges that novices face, an in-depth discussion of modern learning tools and the challenges that they address, and the identification and discussion of the vital characteristics that constitute an effective learning tool, based on these tools and the author's own ideas.

    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222; Date of degree conferral: 2000-03-29; Degree category: Doctorate by coursework (課程博士); Degree: Doctor of Engineering (博士(工学)); Diploma number: 博工第4717号; Graduate school and department: Graduate School of Engineering, Department of Information Engineering

    Formal Object Interaction Language: Modeling and Verification of Sequential and Concurrent Object-Oriented Software

    As software systems become larger and more complex, developers require the ability to model abstract concepts while ensuring consistency across the entire project. The internet has changed the nature of software by increasing the desire for software deployment across multiple distributed platforms. Finally, increased dependence on technology requires assurance that designed software will perform its intended function. This thesis introduces the Formal Object Interaction Language (FOIL). FOIL is a new object-oriented modeling language specifically designed to address the cumulative shortcomings of existing modeling techniques. FOIL graphically displays software structure, sequential and concurrent behavior, process, and interaction in a simple unified notation, and has an algebraic representation based on a derivative of the π-calculus. The thesis documents the technique by which FOIL software models can be mathematically verified to anticipate deadlocks, ensure consistency, and determine object state reachability. Scalability is offered through the concept of behavioral inheritance, and FOIL's inherent support for modeling concurrent behavior and all known workflow patterns is demonstrated. The concepts of process achievability, process complete achievability, and process determinism are introduced, with an algorithm for simulating the execution of a FOIL object model using a FOIL process model. Finally, a technique for using a FOIL process model as a constraint on FOIL object system execution is offered as a method to ensure that object-oriented systems modeled in FOIL will complete their process-based activities. FOIL's capabilities are compared and contrasted with an extensive array of current software modeling techniques. FOIL is ideally suited for data-aware, behavior-based systems such as interactive or process management software.
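    As a rough illustration of what checking "object state reachability" involves, here is a minimal Python sketch of the underlying model-checking idea (a breadth-first search over a state graph); the states, actions, and deadlock criterion are invented for the example and this is not FOIL's actual verification procedure.

        from collections import deque

        # Hypothetical transitions of one object's model: state -> {action: next_state}.
        transitions = {
            "Idle":  {"receive": "Busy"},
            "Busy":  {"reply": "Idle", "fail": "Error"},
            "Error": {},  # no outgoing actions: treated here as a deadlocked state
        }

        def reachable(start):
            # Breadth-first search over the state graph; returns every reachable state.
            seen, pending = {start}, deque([start])
            while pending:
                for nxt in transitions[pending.popleft()].values():
                    if nxt not in seen:
                        seen.add(nxt)
                        pending.append(nxt)
            return seen

        states = reachable("Idle")
        print(states)                                     # {'Idle', 'Busy', 'Error'}
        print({s for s in states if not transitions[s]})  # reachable deadlocks: {'Error'}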

    Purposive variation in recordkeeping in the academic molecular biology laboratory

    This thesis presents an investigation into the role played by laboratory records in the disciplinary discourse of academic molecular biology laboratories. The motivation behind this study stems from two areas of concern. Firstly, the laboratory record has received comparatively little attention as a linguistic genre, in spite of its central role in the daily work of laboratory scientists. Secondly, laboratory records have become a focus for technologically driven change through the advent of computing systems that aim to support a transition away from the traditional paper-based approach towards electronic recordkeeping. Electronic recordkeeping raises the potential for increased sharing of laboratory records across laboratory communities. However, the uptake of electronic laboratory notebooks has been, and remains, markedly low in academic laboratories. The investigation employs a multi-perspective research framework combining ethnography, genre analysis, and reading protocol analysis in order to evaluate both the organizational practices and the linguistic practices at work in laboratory recordkeeping, and to examine these practices from the viewpoints of both producers and consumers of laboratory records. Particular emphasis is placed on assessing variation in the practices used by different scientists when keeping laboratory records, and on assessing the types of articulation work used to achieve mutual intelligibility across laboratory members. The findings of this investigation indicate that the dominant viewpoint held by laboratory staff other than principal investigators conceptualized laboratory records as a personal resource rather than a community archive. Readers other than the original author relied almost exclusively on the recontextualization of selected information from laboratory records into 'public genres' such as laboratory talks, research articles, and progress reports as the preferred means of accessing the information held in the records. The consistent use of summarized forms for recording experimental data rendered most laboratory records both unreliable and of limited usability in the records-management sense, in that they did not form full and accurate descriptions that could support future organizational activities. These findings offer a counterpoint to other studies, notably a number of studies undertaken as part of technology developments for electronic recordkeeping, that report sharing of laboratory records or assume a 'cyberbolic' view of laboratory records as a shared resource.

    Visual Occam: High level visualization and design of process networks

    With networks, multiprocessors, and multi-threaded systems becoming more common in our world, it is increasingly evident that concurrent programming is not something to be ignored or marginalized, even though many takes on concurrency (mainly by means of monitors or shared resources) have proven difficult to deal with at large scales. Thankfully, a good deal of work has already been done to combat this, through CSP, occam, and other such derivatives, to produce a scalable process-oriented paradigm. Still, it is cumbersome to attempt to deal with the intricacies of such communicating networks down to every minutia; if, instead, it were possible to manage communicating elements at a higher level, it would be far more practical to design large-scale networks of processes! As such, Visual Occam has been designed to automate some of the inner workings of occam to allow any user (novice or otherwise) the ability to create complex networks of communicating processes through easy-to-understand user interactions and interfaces. Taking a number of cues from digital circuit design software and modern integrated development environments, it is possible to select components (both predefined and arbitrarily complex user-created systems) from a library of objects, hook them together in a network, and produce compilable code without having to worry about how or why the chosen components perform their function. Since any of these components may themselves be networks of processes, it becomes trivial to construct large systems that would otherwise be unwieldy to put together by hand. The end result? A high-level, easy-to-understand visual abstraction of those concurrent networks previously so frustrating to develop.
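    As a loose analogue of such a process network, the Python sketch below wires three "processes" together with channels (a sketch only: occam channels are synchronous rendezvous, which bounded queues merely approximate, and the component names are invented).

        import threading
        import queue

        def producer(out_ch):
            for i in range(5):        # emit a stream of numbers...
                out_ch.put(i)
            out_ch.put(None)          # ...then a sentinel to signal completion

        def doubler(in_ch, out_ch):
            # A reusable "component": reads from one channel, writes to another.
            while (item := in_ch.get()) is not None:
                out_ch.put(item * 2)
            out_ch.put(None)

        def consumer(in_ch):
            while (item := in_ch.get()) is not None:
                print(item)

        # maxsize=1 loosely approximates occam's unbuffered, blocking channels.
        a, b = queue.Queue(maxsize=1), queue.Queue(maxsize=1)
        procs = [threading.Thread(target=f, args=args)
                 for f, args in [(producer, (a,)), (doubler, (a, b)), (consumer, (b,))]]
        for p in procs:
            p.start()
        for p in procs:
            p.join()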

    Recognising the design decisions in Prolog programs as a prelude to critiquing

    This thesis presents an approach by which an automated teaching system can analyse the design of novices' Prolog programs for tutorial critiquing. Existing methodologies for tutorial analysis of programs focus on the kind of small programming examples that are used only in the early stages of teaching. If an automated teaching system is to be widely useful, it must cover a substantial amount of the teaching syllabus, and a critiquing system must be able to analyse and critique programs written during the later stages of the syllabus. The work is motivated by a study of students' Prolog programs which were written as assessed exercises towards the end of their course. These programs all work (in some sense), yet they reveal a wide range of design flaws (bodges) for which some form of tutoring would be useful. They present problems for any automated analysis in terms of the size of the programs, the number of individual decisions that must be made to create each program, and the range of correct and incorrect decisions that may be made in each case. This study identifies two areas in the analysis of students' programs in which further work is needed. Existing work has focussed only on the design and implementation decisions that relate closely to the programming language. That is not sufficient for these slightly more advanced programs, for which decisions in the problem domain must also be recognised. Existing work has focussed on the different ways to implement code, but in these programs the students also make decisions about which data structures are to be used. These decisions must also be part of an analysis. The thesis provides an approach which represents both decisions in the domain of the problem being solved and decisions about how to implement them in Prolog. Decisions in the problem domain are represented by tasks (for code) and by domain objects (for data structures). Decisions that are specific to the Prolog implementation are represented by prototypes which encapsulate standard programming techniques (for code) and by a polymorphic data type language (for data structures). Issues in devising these representations are discussed. An analysis-by-synthesis approach is used for code recognition. This is augmented by a procedure called "clausal split" which isolates novel or poorly designed parts of an implementation. Following an incomplete analysis of the program by synthesis, the results of this analysis provide the basis for making inferences about the parts of the program that have not been understood. For analysing data structures, a type inference mechanism is combined with inference about the parts of domain objects. Inferred data type information is also used to limit search, both for synthesis and analysis. An architecture using this approach has been implemented. The success of the architecture is assessed on students' programs. From this assessment it is clear that much further work remains to be done, but the results are hopeful.
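    To give a flavour of prototype-based code recognition, the Python sketch below matches a clause head, represented as a nested term, against a technique prototype containing wildcards; the term encoding and the prototype are invented for illustration and are not the thesis's actual representation or matcher.

        WILD = "_"  # wildcard in a prototype: matches any subterm

        def matches(prototype, term):
            # Structural match of a student's term against a technique prototype.
            if prototype == WILD:
                return True
            if isinstance(prototype, tuple) and isinstance(term, tuple):
                return (len(prototype) == len(term)
                        and all(matches(p, t) for p, t in zip(prototype, term)))
            return prototype == term

        # A hypothetical prototype for an accumulator-pair clause head, and a
        # student's clause head encoded as nested tuples.
        proto = ("reverse", WILD, "Acc", WILD)
        student = ("reverse", ("cons", "X", "Xs"), "Acc", "Rs")
        print(matches(proto, student))  # True: the head fits the prototype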

    Services in pervasive computing environments : from design to delivery

    The work presented in this thesis is based on the assumption that modern computer technologies are already potentially pervasive: CPUs are embedded in every sort of device; the RAM and storage of a modern PDA are comparable to those of a Unix workstation of ten years ago; Wi-Fi, GPRS and UMTS are leveraging the development of the wireless Internet. Nevertheless, computing is not pervasive, because we do not have a clear conceptual model of the pervasive computer, and we lack the tools, methodologies, and middleware to write services and seamlessly deliver them at once over a multitude of heterogeneous devices and different delivery contexts. Our thesis addresses these issues starting from an analysis of the forces at play in a pervasive computing environment: user mobility, user profile, user position, and device profile. The conceptual model, or metaphor, we use to drive our work is to consider the environment as surrounded by a multitude of services and objects, with devices as the communicating gates between the real world and the virtual dimension of pervasive computing around us. Our thesis is thus built upon three main "pillars". The first pillar is a domain-object-driven methodology which allows the developer to abstract from low-level details of the final delivery platform, and provides the user with the ability to access services in a multi-channel way. The rationale is that domain objects are self-contained pieces of software able to represent data and to compute functions and procedures. Our approach fills the gap between users and domain objects by building an appropriate user interface which is adapted both to the domain object and to the end user's device. As an example, we present how to design, implement and deliver an electronic mail application over various platforms. The second pillar of this thesis analyzes in more detail the forces that make direct object manipulation inadequate in a pervasive context. These forces are the user profile, the device profile, the context of use, and the combinatorial explosion of domain objects. From the analysis of the electronic mail application presented as an example, we notice that, depending on the end user's device, or on particular circumstances during access to the service (for instance, if the user accesses the service via the interactive TV while having breakfast), some functionalities are not essential and do not fit an adequate task sequence. So we decided to make task models explicit in the design of a service, and to integrate the capability to automatically generate user interfaces for domain objects with the formal definition of task models adapted to the final delivery context. Finally, the third pillar of our thesis concerns the lifecycle of services in a pervasive computing environment. Our solutions are based upon an existing framework, the Jini connection technology, and enrich this framework with new services and architectures for the deployment and discovery of services, for user session management, and for the management of offline agents.
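    The first pillar's idea of deriving a device-adapted interface from one declaratively described domain object might be sketched in Python as follows; the field metadata, device profiles, and emitted markup are hypothetical stand-ins, not the thesis's actual framework.

        from dataclasses import dataclass

        @dataclass
        class Field:
            name: str
            kind: str  # hypothetical attribute metadata, e.g. "text" or "date"

        # One domain object (an e-mail message) described by metadata,
        # independent of any concrete widget set.
        mail_message = [Field("to", "text"), Field("subject", "text"), Field("body", "text")]

        def render(fields, device):
            # Emit a different concrete UI for the same domain object per device profile.
            if device == "desktop":
                return "\n".join(f"<input label='{f.name}' rows='{10 if f.name == 'body' else 1}'/>"
                                 for f in fields)
            if device == "phone":
                # Small screen: one field per page, navigated sequentially.
                return "\n".join(f"[page {i + 1}] {f.name}" for i, f in enumerate(fields))
            raise ValueError(f"unknown device profile: {device}")

        print(render(mail_message, "phone"))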

    Evolutionary design assistants for architecture

    In its parallel pursuit of increased competitiveness for design offices and more pleasurable and easier workflows for designers, artificial design intelligence is a technical, intellectual, and political challenge. While human-machine cooperation has become commonplace through Computer Aided Design (CAD) tools, improved collaboration and better support appear possible only through an endeavor into a kind of artificial design intelligence that is more sensitive to the human perception of affairs. Considered as part of the broader Computational Design studies, the research program of this quest can be called Artificial / Autonomous / Automated Design (AD). The currently available level of Artificial Intelligence (AI) for design is limited, and a viable aim for current AD would be to develop design assistants that are capable of producing drafts for various design tasks. Thus, the overall aim of this thesis is the development of approaches, techniques, and tools towards artificial design assistants that offer a capability for generating drafts for sub-tasks within design processes. The main technology explored for this aim is Evolutionary Computation (EC), and the target design domain is architecture. The two connected research questions of the study concern, first, the investigation of ways to develop an architectural design assistant, and secondly, the utilization of EC for the development of such assistants. While developing approaches, techniques, and computational tools for such an assistant, the study also carries out a broad theoretical investigation into the main problems, challenges, and requirements towards such assistants at a rather general level. The research is therefore shaped as a parallel investigation of three main threads interwoven along several levels, moving from the more general to specific applications. The three research threads comprise, first, theoretical discussions and speculations with regard to both the existing literature and the proposals and applications of the thesis; secondly, proposals for descriptive and prescriptive models, mappings, summary illustrations, task structures, decomposition schemes, and integrative frameworks; and finally, experimental applications of these proposals. This tripartite progression allows an evaluation of each proposal both conceptually and practically, thereby enabling a progressive improvement of the understanding of the research question while producing concrete outputs along the way. Besides theoretical and interpretative examinations, the thesis investigates its subject through a set of practical and speculative proposals, which function as both research instruments and outputs of the study. The first main output of the study is the "design_proxy" approach (d_p), an integrated approach for draft-making design assistants. It is an outcome of both theoretical examinations and experimental applications, and proposes an integration of (1) flexible and relaxed task definitions and representations (instead of strict formalisms), (2) intuitive interfaces that make use of usual design media, (3) evaluation of solution proposals through their similarity to given examples, and (4) a dynamic evolutionary approach for solution generation. The design_proxy approach may be useful for AD researchers who aim at developing practical design assistants, as has been examined and demonstrated with the two applications, i.e., design_proxy.graphics and design_proxy.layout.
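    Point (3) of the design_proxy integration, evaluating candidate solutions by their similarity to given examples, can be reduced to a minimal form such as the Python sketch below; the feature vectors and the distance metric are illustrative assumptions, not the actual measures used by the d_p applications.

        import math

        # Designer-supplied example solutions, as hypothetical feature vectors.
        examples = [(0.2, 0.8, 0.5), (0.4, 0.6, 0.7)]

        def similarity_fitness(candidate):
            # Fitness = closeness to the nearest given example
            # (negated Euclidean distance, so larger is better).
            return -min(math.dist(candidate, ex) for ex in examples)

        print(similarity_fitness((0.3, 0.7, 0.6)))  # near both examples: high fitness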
    The second main output, the "Interleaved Evolutionary Algorithm" (IEA, or Interleaved EA), is a novel evolutionary algorithm proposed and used as the underlying generative mechanism of design_proxy-based design assistants. The Interleaved EA is a dynamic, adaptive, and multi-objective EA in which one of the objectives leads the evolution until its fitness progression stagnates, in the sense that the settings and fitness values of this objective are used for most evolutionary decisions. In this way, the Interleaved EA enables the use of different settings and operators for each of the objectives within an overall task, which would be the same for all objectives in a regular multi-objective EA. This property gives the algorithm a modular structure, which offers an improvable method for the utilization of domain-specific knowledge for each sub-task, i.e., objective. The Interleaved EA can be used by Evolutionary Computation (EC) researchers and by practitioners who employ EC for their tasks. As a third main output, the "Architectural Stem Cells Framework" is a conceptual framework for architectural design assistants. It proposes a dynamic and multi-layered method for combining a set of design assistants for larger tasks in architectural design. The first component of the framework is a layer-based, parallel task decomposition approach, which aims at obtaining a dynamic parallelization of sub-tasks within a more complicated problem. The second component of the framework is a conception of the development mechanisms for building drafts, i.e., Architectural Stem Cells (ASC). An ASC can be conceived as a semantically marked geometric structure which contains the information that specifies the possibilities and constraints for how an abstract building may develop from an undetailed stage to a fully developed building draft. ASCs are required for re-integrating the separated task layers of an architectural problem through solution-based development. The ASC Framework brings together many of the ideas of this thesis into a practical research agenda, and it is presented to AD researchers in architecture. Finally, the "design_proxy.layout" (d_p.layout) is an architectural layout design assistant based on the design_proxy approach and the IEA. The system uses a relaxed problem definition (producing draft layouts) and a flexible layout representation that permits the overlapping of design units and boundaries. User interaction with the system is carried out through intuitive 2D graphics, and functional evaluations are performed by measuring the similarity of a proposal to existing layouts. Functioning in an integrated manner, these properties make the system a practicable and enjoyable design assistant, as demonstrated through two workshop cases. The d_p.layout is a versatile and robust layout design assistant that can be used by architects in their design processes.
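    The leading-objective mechanism of the Interleaved EA described above might look like the following minimal Python sketch, under stated assumptions (round-robin handover when the leading objective stagnates, elitist reproduction, Gaussian mutation); it is a toy illustration, not the thesis's actual algorithm.

        import random

        def interleaved_ea(objectives, genome_len=20, pop_size=30,
                           stall_limit=10, generations=300):
            # One objective at a time "leads": it ranks the population and its
            # best fitness is tracked; on stagnation, leadership rotates.
            pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
            lead, best, stall = 0, None, 0
            for _ in range(generations):
                fit = objectives[lead]
                pop.sort(key=fit, reverse=True)
                top = fit(pop[0])
                if best is not None and top <= best:
                    stall += 1
                else:
                    best, stall = top, 0
                if stall >= stall_limit:                  # leading objective stagnated:
                    lead = (lead + 1) % len(objectives)   # hand over to the next one
                    best, stall = None, 0
                # Elitist reproduction with per-gene Gaussian mutation.
                parents = pop[:pop_size // 2]
                pop = parents + [
                    [g + random.gauss(0, 0.1) if random.random() < 0.2 else g
                     for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))
                ]
            return pop[0]

        # Two toy objectives over the same genome (larger is better for both):
        obj_a = lambda g: -abs(sum(g) - 10.0)              # genome sum near 10
        obj_b = lambda g: -max(abs(x - 0.5) for x in g)    # every gene near 0.5
        print(interleaved_ea([obj_a, obj_b]))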