388 research outputs found
Effects of Error Messages on a Student's Ability to Understand and Fix Programming Errors
abstract: Assemblers and compilers provide feedback to a programmer in the form of error messages. These error messages become input to the programmer's debugging model. To fix an error, the programmer must first locate the error in the program, then understand what is causing it, and finally resolve it. Error messages play an important role in all three stages of fixing errors. This thesis studies the effects of error messages in the context of teaching programming. Given an error message, this work investigates how it affects a student's way of 1) understanding the error, and 2) fixing the error. As part of the study, three error message types were developed, Default, Link and Example, to better understand the effects of error messages. The Default type provides an assembler-centric single-line error message, the Link type provides a program-centric detailed error description with a hyperlink for more information, and the Example type provides a program-centric detailed error description with a relevant example. All these error message types were developed for assembly language programming. A think-aloud programming exercise was conducted as part of the study to capture the student programmer's knowledge model. Different codes were developed to analyze the data collected as part of the think-aloud exercise. After transcribing, coding, and analyzing the data, it was found that the Link type of error message helped to fix the error in less time and with fewer steps. Among the three types, the Link type of error message also resulted in a significantly higher ratio of correct to incorrect steps taken by the programmer to fix the error.
Dissertation/Thesis: Masters Thesis, Software Engineering 201
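As a rough illustration of the three message styles, the contrast can be sketched as follows. This is a hypothetical sketch: the diagnostic wording, line number, instruction, and URL are invented and do not reproduce the study's actual messages.

```python
# Hypothetical sketch of the Default, Link, and Example message styles.
# All text below (diagnostic wording, URL, example fix) is invented.

def default_msg(line, code):
    # Default: assembler-centric, single line.
    return f"line {line}: error: invalid operand in '{code}'"

def link_msg(line, code):
    # Link: program-centric description plus a hyperlink for more detail.
    return (f"line {line}: the instruction '{code}' was given an operand of "
            f"the wrong type.\nSee: https://example.edu/asm-errors#operands")

def example_msg(line, code):
    # Example: program-centric description plus a relevant corrected example.
    return (f"line {line}: the instruction '{code}' was given an operand of "
            f"the wrong type.\nExample fix: MOV R1, #5   ; immediate operand")

for fmt in (default_msg, link_msg, example_msg):
    print(fmt(12, "MOV R1, 5"), end="\n\n")
```

The Link and Example types share the same program-centric description and differ only in whether remediation is delegated to a hyperlink or shown inline.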
Doctor of Philosophy dissertation
Portable electronic devices will be limited to the available energy of existing battery chemistries for the foreseeable future. However, system-on-chips (SoCs) used in these devices are expected to offer more functionality and increased battery life. A difficult problem in SoC design is providing energy-efficient communication between its components while maintaining the required performance. This dissertation introduces a novel energy-efficient network-on-chip (NoC) communication architecture. A NoC is used within complex SoCs due to its superior performance, energy usage, modularity, and scalability over traditional bus and point-to-point methods of connecting SoC components. This is the first academic research that combines asynchronous NoC circuits, a focus on energy-efficient design, and a software framework to customize a NoC for a particular SoC. Its key contribution is demonstrating that a simple, asynchronous NoC concept is a good match for low-power devices, and is a fruitful area for additional investigation. The proposed NoC is energy-efficient in several ways: simple switch and arbitration logic, low port radix, latch-based router buffering, a topology with the minimum number of 3-port routers, and the asynchronous advantages of zero dynamic power consumption while idle and the lack of a clock tree. The tool framework developed for this work uses novel methods to optimize the topology and router floorplan based on simulated annealing and force-directed movement. It studies link pipelining techniques that yield improved throughput in an energy-efficient manner. A simulator is automatically generated for each customized NoC, and its traffic generators use a self-similar message distribution, as opposed to Poisson, to better match application behavior. Compared to a conventional synchronous NoC, this design is superior, achieving comparable message latency with half the energy.
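The abstract mentions a simulated-annealing floorplan optimizer only at a high level. A minimal sketch of the general technique, shortening total link length by swapping router positions, might look like the following; the link set, grid, and cooling schedule are invented assumptions, not the dissertation's actual algorithm.

```python
# Minimal simulated-annealing sketch for router placement: swap router
# positions to reduce total Manhattan link length. All inputs are assumed.
import math
import random

links = [(0, 1), (0, 2), (0, 3)]           # router 0 as a hub (assumed)
positions = {r: (r, 0) for r in range(4)}  # initial 1-D row placement

def cost(pos):
    # Total Manhattan length of all links.
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in links)

random.seed(0)
cur = cost(positions)
temp = 5.0
while temp > 0.01:
    a, b = random.sample(sorted(positions), 2)
    positions[a], positions[b] = positions[b], positions[a]  # propose swap
    new = cost(positions)
    # Always accept improvements; accept worse placements with a
    # probability that shrinks as the temperature cools.
    if new > cur and random.random() >= math.exp((cur - new) / temp):
        positions[a], positions[b] = positions[b], positions[a]  # undo
    else:
        cur = new
    temp *= 0.95

print(cost(positions))
```

Moving the hub router toward the middle of the row shortens the total link length; the random-restart acceptance of worse moves is what lets annealing escape poor initial placements.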
Transactions Chasing Scalability and Instruction Locality on Multicores
For several decades, online transaction processing (OLTP) has been one of the main server applications driving innovation in the data management ecosystem, and in turn in the database and computer architecture communities. Recent hardware trends oblige software to overcome two major challenges to systems scalability on modern multicore processors: (1) exploiting the abundant thread-level parallelism across cores and (2) taking advantage of the implicit parallelism within a core. The traditional design of OLTP systems, however, faces inherent scalability problems due to its tightly coupled components. In addition, OLTP cannot exploit the full capability of the micro-architectural resources of modern processors because of conventional scheduling decisions that ignore cache locality for transactions. As a result, today's commonly used server hardware remains largely underutilized, leading to a huge waste of hardware resources and energy. … In this thesis, we first identify the unbounded critical sections of traditional OLTP systems as the main enemy of thread-level parallelism. We design an alternative shared-everything system based on physiological partitioning (PLP) to eliminate the unbounded critical sections, while providing an infrastructure for low-cost dynamic repartitioning and without introducing high-cost distributed transactions. Then, we demonstrate that L1 instruction cache stalls are the dominant factor leading to underutilization in commodity servers. However, we also observe that, independently of their high-level functionality, transactions running in parallel on a multicore system share a significant amount of common instructions. By adaptively spreading the execution of a transaction over multiple cores through thread migration, or by multiplexing transactions on one core, we enable both an ample L1 instruction cache capacity for a transaction and reuse of common instructions across concurrent transactions. …
As hardware demands more from software to exploit the complexity and parallelism it offers in the multicore era, this work changes the way we traditionally schedule transactions. Instead of viewing a transaction as a single big task, we split it into smaller parts that can exploit data and instruction locality through careful dynamic scheduling decisions. The methods this thesis presents are not specific only to OLTP systems; they can also benefit other types of applications whose concurrent requests execute a series of actions from a predefined set and face similar scalability problems on emerging hardware.
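The scheduling contrast described above can be sketched as a toy assignment. The phase names and the core assignment below are invented for illustration only; this is not the thesis's PLP implementation.

```python
# Toy sketch: rather than one core running a whole transaction, each phase
# is pinned to one core, so a core keeps re-executing the same
# (L1-cache-resident) instructions for every transaction.
# Phase names and the assignment scheme are invented.
phases = ["lookup", "lock", "update", "commit"]
transactions = ["T1", "T2", "T3"]

# Conventional: each transaction runs all of its phases on its own core.
conventional = {t: [(t, p) for p in phases] for t in transactions}

# Phase-per-core: core i executes only phase i, for every transaction.
phase_per_core = {core: [(t, phase) for t in transactions]
                  for core, phase in enumerate(phases)}

for core, work in sorted(phase_per_core.items()):
    print(core, work)
```

In the phase-per-core layout each core's instruction footprint shrinks to a single phase, which is the intuition behind reusing common instructions across concurrent transactions.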
Categorization and Summaries of Papers from the ACM SIGCSE Conferences
This thesis focuses on the study of papers presented at the ACM SIGCSE conference in the years 2016, 2017 and 2018.
Initially, a categorization is defined, based on the main areas of IT education that are included in the aforementioned conferences. The categories in which the articles were classified are:
• Student evaluation
• Security and Privacy
• Interactive learning environments
• Gender Diversity / Multiculturalism
• Software engineering education
• CS1
• Computer Science Education
• Integration of Information
• E-learning
• Visualization
• Model curricula
• K-12
• Collaborative learning
• Computational Thinking
• Computing Literacy
Afterwards, summaries of the papers from the year 2017 that fall into the following selected categories are presented:
• Student evaluation
• CS1
• Computer Science Education
• K-12
• Collaborative learning
• Computational Thinking
Personalised e-Learning
This thesis proposes to add value to traditional e-learning systems by personalising the content being presented. The personalisation process was achieved through the amalgamation of crowdsourcing techniques, explicit learners' interests, and learner profiling technologies. A prototype called iPLE (intelligent personal learning environment) was developed and tested within an empirical study in which participants experienced and compared the proposed iPLE with a static e-learning environment and standard face-to-face delivery. A number of data collection instruments were integrated within the empirical study to gather participants' feedback. The results were fully documented and analysed using a combination of quantitative and qualitative data analysis tools that generated essential assessment information. An indicative improvement was reported following the data analysis and evaluation of results, leading to the conclusion that even though there is plenty of room for further development and research, the combination of the proposed techniques does help to render e-learning more effective.
Applying pause analysis to explore cognitive processes in the copying of sentences by second language users
Pause analysis is a method that investigates processes of writing by measuring the amount of time between pen strokes. It provides the field of second language studies with a means to explore the cognitive processes underpinning the nature of writing. This study examined the potential of using free handwritten copying of sentences as a means of investigating components of the cognitive processes of adults who have English as their Second Language (ESL).
A series of one pilot and three experiments investigated possible measures of language skill and the factors that influence the quality of the measures. The pilot study, with five participants of varying English competence, identified copying without pre-reading to be an effective task and the "median" measure at the beginning of words to be an effective measure. Experiment 1 (n=20 Malaysian speakers) found jumbled sentences at the letter and word levels to effectively differentiate test-taker competence in relation to grammatical knowledge. Experiment 2 (n=20 Spanish speakers) investigated the jumbling effects further, but found that participants varied their strategy depending on the order of the sentence types. As a result, Experiment 3 (n=24 Malaysian speakers) used specific task instructions to control participant strategy use, so that they either attended to the meaning of the sentences or merely copied as quickly as possible. Overall, these experiments show that it is feasible to apply pause analysis to investigate both grammar and vocabulary components of language processing.
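A "median pause at word beginnings" measure of the kind described can be sketched as follows. The stroke timestamps and word-boundary indices below are invented data, not the study's instrumentation.

```python
# Illustrative sketch: given pen-stroke timestamps, take the median
# interval between each word-initial stroke and the stroke before it.
# All data below is invented for illustration.
from statistics import median

stroke_times = [0.0, 0.2, 0.4, 1.3, 1.5, 1.6, 2.9, 3.1]  # seconds
word_initial = [0, 3, 6]   # indices of strokes that begin a word

pauses = [stroke_times[i] - stroke_times[i - 1]
          for i in word_initial if i > 0]   # skip the very first stroke
print(round(median(pauses), 2))
```

Longer medians at word boundaries, relative to within-word intervals, are the kind of signal pause analysis uses to separate planning from motor execution.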
Further, a theoretical information-processing model of copying (MoC) was developed. The model assists in the analysis and description of (1) the flow of copying processes; (2) the factors that might lead to longer or shorter pauses amongst participants of varying competence levels; and (3) sentence stimuli design.
- …