
    Retrieval-based learning in java programming and online application

    The production of a computer program requires learners to be skilled in basic programming concepts, to master mathematical formulas and appropriate syntax, and to have in-depth knowledge of programming languages. Typically, to form a program, students must be able to identify the problem, generate an algorithm, and convert the algorithm into program code that follows the syntax. A Java programming course therefore requires students to master cognitive thinking through which they can recover knowledge, using 'retrieval-based learning' as a basic recollection strategy for learning programming languages. The retrieval-based learning method comprises five waves: mnemonic, semantic, episodic context account, concept map, and quiz. Through these five waves, students should be able to apply retrieval methods such as producing their own practice questions, quizzes, and scan cards, rewriting and redrawing what they have learned, and building concept maps. Instructional materials should include formative (topical) assessment, emphasis on text and content requirements, open-ended (subjective) questions, answers or feedback, repeated exercises, and estimation of student achievement. The main contribution is a descriptive Java programming lesson that covers the choice of difficult topics, learning activities, teaching and learning modules, and online learning. The initial purpose of this study is to determine the most difficult topics in Java programming, the retrieval-based learning strategies used by students, and the online learning modules used by students in the Diploma of Computer Science (Programming) at vocational colleges. Data were collected through an online questionnaire, and the findings were analysed with SPSS software, reporting the percentage value for each element studied. The sample comprised 110 Diploma of Computer Science (Programming) students from four vocational colleges in Malaysia. The findings of this preliminary study are presented in detail in this paper.
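    The abstract describes students building their own retrieval-practice materials (practice questions, quizzes, scan cards). As a hedged illustration only, a minimal Java sketch of such a self-made quiz might look like the following; the class name, questions, and scoring are our own assumptions and are not taken from the study.

```java
import java.util.List;
import java.util.Scanner;

// Minimal sketch of a self-made retrieval-practice quiz of the kind the
// abstract describes. The questions and class names are illustrative
// assumptions, not material from the study.
public class RetrievalQuiz {
    record Card(String prompt, String answer) {}

    public static void main(String[] args) {
        List<Card> cards = List.of(
            new Card("Which keyword declares a constant in Java?", "final"),
            new Card("Which loop checks its condition before the first iteration?", "while")
        );
        Scanner in = new Scanner(System.in);
        int correct = 0;
        for (Card card : cards) {
            System.out.println(card.prompt());
            String attempt = in.nextLine().trim();            // free recall, not recognition
            if (attempt.equalsIgnoreCase(card.answer())) {
                correct++;
                System.out.println("Correct.");
            } else {
                System.out.println("Feedback: " + card.answer()); // immediate feedback, repeat later
            }
        }
        System.out.printf("Score: %d/%d%n", correct, cards.size());
    }
}
```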

    A General Framework for Accelerating Swarm Intelligence Algorithms on FPGAs, GPUs and Multi-core CPUs

    Swarm intelligence algorithms (SIAs) have demonstrated excellent performance when solving optimization problems, including many real-world problems. However, because of their expensive computational cost on some complex problems, SIAs need to be accelerated effectively for better performance. This paper presents a high-performance general framework to accelerate SIAs (FASI). Unlike previous work, which accelerates SIAs only by enhancing parallelization, FASI considers both the memory architectures of the hardware platforms and the dataflow of SIAs, and it reschedules the framework of SIAs as a converged dataflow to improve memory access efficiency. FASI achieves higher acceleration by matching the algorithm framework to the hardware architecture. We also design deeply optimized structures for the parallelization and convergence of FASI based on the characteristics of specific hardware platforms. We take the quantum-behaved particle swarm optimization algorithm (QPSO) as a case study to evaluate FASI. The results show that FASI improves the throughput of SIAs and provides better performance through optimized hardware implementations. In our experiments, FASI achieves a maximum throughput of 290.7 Mbit/s, which is higher than several existing systems, and FASI on FPGAs achieves a better speedup than on GPUs and multi-core CPUs. In terms of optimization time, FASI on a Xilinx Kintex Ultrascale xcku040 is up to 123 times, and not less than 1.45 times, faster than on an Intel Core i7-6700 CPU / NVIDIA GTX1080 GPU. Finally, we compare the differences of deploying FASI on the hardware platforms and provide some guidelines for improving acceleration performance according to the hardware architecture.
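    FASI itself targets FPGAs, GPUs and multi-core CPUs; purely as a point of reference for the algorithm it accelerates, the plain single-threaded Java sketch below shows one iteration of the standard quantum-behaved PSO update. The parameter and class names, and the caller-supplied fitness function, are illustrative assumptions and do not reflect FASI's implementation or API.

```java
import java.util.Random;
import java.util.function.ToDoubleFunction;

// Plain CPU sketch of one QPSO iteration (the algorithm FASI accelerates),
// using the standard quantum-behaved PSO update. Names and the fitness
// interface are illustrative assumptions, not part of FASI.
public class QpsoSketch {
    static final Random RNG = new Random(42);

    static void step(double[][] x, double[][] pbest, double[] gbest,
                     double alpha, ToDoubleFunction<double[]> fitness) {
        int n = x.length, dim = x[0].length;

        // mbest: mean of all personal bests, one value per dimension
        double[] mbest = new double[dim];
        for (double[] pb : pbest)
            for (int d = 0; d < dim; d++) mbest[d] += pb[d] / n;

        for (int i = 0; i < n; i++) {
            for (int d = 0; d < dim; d++) {
                double phi = RNG.nextDouble();
                double p = phi * pbest[i][d] + (1 - phi) * gbest[d];   // local attractor
                double u = 1.0 - RNG.nextDouble();                     // u in (0, 1]
                double delta = alpha * Math.abs(mbest[d] - x[i][d]) * Math.log(1.0 / u);
                x[i][d] = RNG.nextBoolean() ? p + delta : p - delta;
            }
            // greedy personal-best update; the global best is maintained by the caller
            if (fitness.applyAsDouble(x[i]) < fitness.applyAsDouble(pbest[i]))
                pbest[i] = x[i].clone();
        }
    }
}
```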

    Investigating an Approach to Integrating Computational Thinking into an Undergraduate Calculus Course

    Computational thinking can be conceptualized as patterns of thinking which align with certain fundamental computer science processes. While this algorithmic way of thinking has always been integral to computer science, it has recently gained momentum as a valuable approach to problem solving in a wide variety of contexts. Education researchers highlight the potential of computational thinking to transform, enrich, and revitalize teaching and learning experiences, by providing a systematic framework for analysis and enabling powerful computational tools to be incorporated to further enhance problem-solving activities. Research suggests that in order to maximize the affordances of computational thinking, it should be integrated into all subjects, from primary to tertiary, in meaningful and subject-specific ways. However, due to persistent theoretical and practical barriers, comprehensive integration of computational thinking into school and university curricula has not yet been achieved. One particularly strong obstacle identified in the literature is the lack of practical resources detailing how to effectively incorporate computational thinking into subjects beyond computer science. Using a case study research design with over 1000 participants, my project investigated an approach to integrating computational thinking into a first-year calculus course at McMaster University. Students engaged in computational thinking by working on computer coding activities developed to complement the mathematical content taught in the course. Following each set of activities, students responded to prompts designed to determine: (1) how students' conceptual understanding of calculus concepts changes in response to working on problem-solving and mathematical modelling activities which incorporate computational thinking, and (2) how students' learning experiences are transformed when they explore calculus concepts, ideas and techniques using computational tools and models. A qualitative content analysis of these responses revealed that exploring calculus concepts with code modified students' perceptions of mathematics, enhanced their mathematical learning experiences, and offered unique coding affordances. Further analyzing the data using a literacy framework helped situate the results of this study within the broader context of a computational literacy. This research augments the ongoing project, Computational Thinking in Mathematics Education, by providing insights and rich feedback on an approach to designing and integrating coding activities into a tertiary mathematics curriculum.
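    The thesis does not reproduce its coding activities here; as a hedged illustration of the kind of activity it describes (exploring a calculus concept through code), the following Java sketch approximates a definite integral with a left Riemann sum and lets the estimate visibly converge. It is our own example, not an activity taken from the McMaster course.

```java
import java.util.function.DoubleUnaryOperator;

// Illustrative coding activity of the kind the abstract describes:
// approximating the definite integral of f on [a, b] with a left Riemann sum
// and watching the estimate converge as the number of rectangles grows.
// This is our own example, not one taken from the course in the study.
public class RiemannSum {
    static double leftRiemann(DoubleUnaryOperator f, double a, double b, int n) {
        double width = (b - a) / n, sum = 0.0;
        for (int i = 0; i < n; i++) {
            double x = a + i * width;        // left endpoint of the i-th rectangle
            sum += f.applyAsDouble(x) * width;
        }
        return sum;
    }

    public static void main(String[] args) {
        DoubleUnaryOperator f = x -> x * x;  // exact integral of x^2 on [0, 1] is 1/3
        for (int n = 10; n <= 10_000; n *= 10)
            System.out.printf("n = %5d  estimate = %.6f%n", n, leftRiemann(f, 0, 1, n));
    }
}
```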

    The development of intelligent hypermedia courseware, for design and technology in the English National Curriculum at Key Stage 3, by the sequential combination of cognition clusters, supported by system intelligence, derived from a dynamic user model

    The purpose of this research was to develop an alternative to traditional textbooks for the teaching of electronics, within Design and Technology at Key Stage 3, in the English National Curriculum. The proposed alternative of intelligent hypermedia courseware was investigated in terms of its potential to support pupil procedural autonomy in task-directed, goal-oriented design projects. Three principal design criteria were applied to the development of this courseware: the situation in which it is to be used; the task that it is to support; and the pedagogy that it will reflect and support. The discussion and satisfaction of these design criteria led towards a new paradigm for the development of intelligent hypermedia courseware, i.e. the sequential combination of cognition clusters, supported by system intelligence, derived from a dynamic user model. A courseware prototype was instantiated using this development paradigm and subsequently evaluated in three schools. An illuminative evaluation method was developed to investigate the consequences of using this courseware prototype. This evaluation method was based on longitudinal case studies where cycles of observation, further inquiry and explanation are undertaken. As a consequence of following this longitudinal method, where participants chose to adopt the courseware after the first trial, the relatability of outcomes increased as subsequent cycles were completed. Qualitative data were obtained from semi-structured interviews with participating teachers. These data were triangulated against quantitative data obtained from the completed dynamic user models generated by pupils using the courseware prototype. These data were used to generate hypotheses, in the form of critical processes, by the identification of significant features, concomitant features and recurring concomitants from the courseware trials. Four relatable critical processes are described that operate when this courseware prototype is used. These critical processes relate to: the number of computers available; the physical environment where the work takes place; the pedagogical features of a task type match, a design brief frame match and a preferred teaching approach match; and the levels of heuristic interaction with the courseware prototype.
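    The abstract refers to a dynamic user model that records pupils' interactions with the courseware and feeds the system intelligence. As a purely illustrative sketch, under the assumption that such a model tallies visits and help requests per topic, it might resemble the following Java class; the fields, methods, and adaptation rule are our own assumptions and are not reconstructed from the thesis.

```java
import java.util.HashMap;
import java.util.Map;

// Purely illustrative sketch of a dynamic user model of the general kind the
// thesis describes: a per-pupil record of courseware interactions that the
// system can inspect to adapt its support. Field and method names are our
// assumptions; the thesis's actual model is not reproduced here.
public class DynamicUserModel {
    private final Map<String, Integer> pageVisits = new HashMap<>();
    private final Map<String, Integer> helpRequests = new HashMap<>();

    public void recordVisit(String topic)       { pageVisits.merge(topic, 1, Integer::sum); }
    public void recordHelpRequest(String topic) { helpRequests.merge(topic, 1, Integer::sum); }

    // A topic with many help requests relative to visits is a candidate for
    // extra scaffolding the next time the pupil opens the courseware.
    public boolean needsSupport(String topic) {
        int visits = pageVisits.getOrDefault(topic, 0);
        int help   = helpRequests.getOrDefault(topic, 0);
        return visits > 0 && help * 2 >= visits;
    }
}
```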

    Introductory programming: a systematic literature review

    As computing becomes a mainstream discipline embedded in the school curriculum and acts as an enabler for an increasing range of academic disciplines in higher education, the literature on introductory programming is growing. Although there have been several reviews that focus on specific aspects of introductory programming, there has been no broad overview of the literature exploring recent trends across the breadth of introductory programming. This paper is the report of an ITiCSE working group that conducted a systematic review in order to gain an overview of the introductory programming literature. Partitioning the literature into papers addressing the student, teaching, the curriculum, and assessment, we explore trends, highlight advances in knowledge over the past 15 years, and indicate possible directions for future research.

    Efficiently and Transparently Maintaining High SIMD Occupancy in the Presence of Wavefront Irregularity

    Demand is increasing for high throughput processing of irregular streaming applications; examples of such applications from scientific and engineering domains include biological sequence alignment, network packet filtering, automated face detection, and big graph algorithms. With wide SIMD, lightweight threads, and low-cost thread-context switching, wide-SIMD architectures such as GPUs allow considerable flexibility in the way application work is assigned to threads. However, irregular applications are challenging to map efficiently onto wide SIMD because data-dependent filtering or replication of items creates an unpredictable data wavefront of items ready for further processing. Straightforward implementations of irregular applications on a wide-SIMD architecture are prone to load imbalance and reduced occupancy, while more sophisticated implementations require advanced use of parallel GPU operations to redistribute work efficiently among threads. This dissertation will present strategies for addressing the performance challenges of wavefront-irregular applications on wide-SIMD architectures. These strategies are embodied in a developer framework called Mercator that (1) allows developers to map irregular applications onto GPUs according to the streaming paradigm while abstracting from low-level data movement and (2) includes generalized techniques for transparently overcoming the obstacles to high throughput presented by wavefront-irregular applications on a GPU. Mercator forms the centerpiece of this dissertation, and we present its motivation, performance model, implementation, and extensions in this work.
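    Mercator is a GPU framework, so the Java sketch below does not show its API; it only illustrates the dataflow pattern the dissertation targets, in which a node maps each input item to zero or more outputs (filtering or replication), so the wavefront of ready items is data-dependent. On a wide-SIMD device the sparse results must be compacted into a dense queue to keep lanes occupied; here that compaction is implicit in the sequential append.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Illustration of a wavefront-irregular streaming stage: each input item
// yields zero outputs (filtered) or more than one (replicated), so the size
// of the next wavefront cannot be predicted in advance. This sketch only
// shows the pattern, not Mercator's GPU implementation.
public class IrregularStage {
    static <I, O> List<O> runStage(List<I> wavefront, Function<I, List<O>> node) {
        List<O> next = new ArrayList<>();
        for (I item : wavefront)
            next.addAll(node.apply(item));   // 0 outputs = filtered, >1 = replicated
        return next;                         // dense queue feeding the next stage
    }

    public static void main(String[] args) {
        // Toy node: drop odd numbers, replicate multiples of 10.
        List<Integer> out = runStage(List.of(3, 10, 4, 7, 20),
            x -> x % 2 != 0 ? List.<Integer>of()
               : x % 10 == 0 ? List.of(x, x) : List.of(x));
        System.out.println(out);             // prints [10, 10, 4, 20, 20]
    }
}
```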

    Doctor of Philosophy

    Visualization has emerged as an effective means to quickly obtain insight from raw data. While simple computer programs can generate simple visualizations, and while there has been constant progress in sophisticated algorithms and techniques for generating insightful pictorial descriptions of complex data, the process of building visualizations remains a major bottleneck in data exploration. In this thesis, we present the main design and implementation aspects of VisTrails, a system designed around the idea of transparently capturing the exploration process that leads to a particular visualization. In particular, VisTrails explores the idea of provenance management in visualization systems: keeping extensive metadata about how the visualizations were created and how they relate to one another. This thesis presents the provenance data model in VisTrails, which can be easily adopted by existing visualization systems and libraries. This lightweight model entirely captures the exploration process of the user, and it can be seen as an electronic analogue of the scientific notebook. The provenance metadata collected during the creation of pipelines can be reused to suggest similar content in related visualizations and guide semi-automated changes. This thesis presents the idea of building visualizations by analogy in a system that allows users to change many visualizations at once, without requiring them to interact with the visualization specifications. It then proposes techniques to help users construct pipelines by consensus, automatically suggesting completions based on a database of previously created pipelines. By presenting these predictions in a carefully designed interface, users can create visualizations and other data products more efficiently because they can augment their normal work patterns with the suggested completions. VisTrails leverages the workflow specifications to identify and avoid redundant operations. This optimization is especially useful while exploring multiple visualizations. When variations of the same pipeline need to be executed, substantial speedups can be obtained by caching the results of overlapping subsequences of the pipelines. We present the design decisions behind the execution engine, and how it easily supports the execution of arbitrary third-party modules. These specifications also facilitate the reproduction of previous results. We will present a description of an infrastructure that makes the workflows a complete description of the computational processes, including information necessary to identify and install necessary system libraries. In an environment where effective visualization and data analysis tasks combine many different software packages, this infrastructure can mean the difference between being able to replicate published results and getting lost in a sea of software dependencies and missing libraries. The thesis concludes with a discussion of the system architecture, design decisions and learned lessons in VisTrails. This discussion is meant to clarify the issues present in creating a system based around a provenance tracking engine, and should help implementors decide how to best incorporate these notions into their own systems.
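    The caching idea described above (reusing the results of overlapping pipeline subsequences when variations of a pipeline are executed) can be illustrated with a small sketch. The module representation, the string cache key, and the method names below are our assumptions for illustration and are not the VisTrails API; a real engine would also key the cache on inputs and module parameters.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Sketch of subsequence caching: when two pipelines share a prefix of
// modules, the result of that prefix is computed once and reused. The module
// type (UnaryOperator<Object>) and the String prefix key are illustrative
// assumptions, not the VisTrails implementation.
public class PipelineCache {
    private final Map<String, Object> cache = new HashMap<>();

    public Object execute(List<String> moduleIds,
                          Map<String, UnaryOperator<Object>> modules,
                          Object input) {
        Object value = input;
        StringBuilder key = new StringBuilder();
        for (String id : moduleIds) {
            key.append(id).append('/');               // key identifies the pipeline prefix
            String prefix = key.toString();
            if (cache.containsKey(prefix)) {
                value = cache.get(prefix);            // reuse an overlapping subsequence
            } else {
                value = modules.get(id).apply(value); // execute the module
                cache.put(prefix, value);             // remember the result for later runs
            }
        }
        return value;
    }
}
```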

    Senior Computer Science Students' Task and Revised Task Interpretation While Engaged in Programming Endeavor

    Developing a computer program is not an easy task. Studies report that a large number of computer science students decide to change their major due to the extreme challenge of learning programming. Fortunately, studies also report that learning various self-regulation strategies may help students continue studying computer science. This study assesses students' self-regulation, specifically their task understanding and how it is revised during programming endeavors. Task understanding was selected because it affects the entire programming endeavor. In this qualitative case study, two female and two male senior computer science students were voluntarily recruited as research participants. They were asked to think aloud while answering five programming problems. Before solving each problem they explained their understanding of the task, and afterwards they answered questions about their problem-solving process. The participants' problem-solving processes were video- and audio-recorded, transcribed, and analyzed. This study found that the participants were capable of tailoring their problem-solving approach to the task type, including when interpreting the tasks. Given enough time, the participants could understand a problem correctly. When a task was complicated, the participants gradually updated their understanding during the problem-solving endeavor. Some situations may have prevented the participants from understanding a task correctly, including overconfidence, being overwhelmed, utilizing an inappropriate presentation technique, or drawing on knowledge from irrelevant experience. Finally, the participants tended to be inexperienced in managing unfavorable outcomes.

    Dagstuhl News January - December 2007

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic

    Investigation and development of a tangible technology framework for highly complex and abstract concepts

    The ubiquitous integration of computer-supported learning tools within the educational domain has led educators to continuously seek effective technological platforms for teaching and learning. Overcoming the inherent limitations of traditional educational approaches, interactive and tangible computing platforms have consequently garnered increased interest in the pursuit of embedding active learning pedagogies within curricula. However, whilst Tangible User Interface (TUI) systems have been successfully developed to edutain children in various research contexts, TUI architectures have seen limited deployment towards more advanced educational pursuits. Thus, in contrast to current domain research, this study investigates the effectiveness and suitability of adopting TUI systems for enhancing the learning experience of abstract and complex computational science and technology-based concepts within higher education institutions (HEIs). Based on the proposal of a contextually apt TUI architecture, the research describes the design and development of eight distinct TUI frameworks embodying innovative interactive paradigms through tabletop peripherals, graphical design factors, and active tangible manipulatives. These computationally coupled design elements are evaluated through summative and formative experimental methodologies for their ability to aid in the effective teaching and learning of diverse threshold concepts experienced in computational science. In addition, through the design and adoption of a technology acceptance model for educational technology (TAM4Edu), the suitability of TUI frameworks in HEI education is empirically evaluated across a myriad of determinants for modelling students' behavioural intention. In light of the statistically significant results obtained in both academic knowledge gain (μ = 25.8%) and student satisfaction (μ = 12.7%), the study outlines the affordances provided through TUI design for various constituents of active learning theories and modalities. Thus, based on empirical and pedagogical analyses, a set of design guidelines is defined within this research to direct the effective development of TUI design elements for teaching and learning abstract threshold concepts in HEI adaptations.