699 research outputs found

    Fuzz Testing Projects in Massive Courses

    Scaffolded projects with automated feedback are core instructional components of many massive courses. In subjects that include programming, feedback is typically provided by test cases constructed manually by the instructor. This paper explores the effectiveness of fuzz testing, a randomized technique for verifying the behavior of programs. In particular, we apply fuzz testing to identify when a student's solution differs in behavior from a reference implementation by randomly exploring the space of legal inputs to a program. Fuzz testing serves as a useful complement to manually constructed tests: instructors can concentrate on designing targeted tests that focus attention on specific issues while relying on fuzz testing for comprehensive error checking. In the first project of a 1,400-student introductory computer science course, fuzz testing caught errors that a suite of targeted test cases missed for more than 48% of students. As a result, students dedicated substantially more effort to mastering the nuances of the assignment.
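As an illustration of the differential approach the abstract describes (not the paper's actual grading harness), the sketch below compares a hypothetical buggy submission against a reference implementation over randomly generated legal inputs. The names `student_sort` and `reference_sort` and the input distribution are assumptions invented for this example.

```python
import random

def reference_sort(xs):
    # Instructor's reference implementation (hypothetical example task).
    return sorted(xs)

def student_sort(xs):
    # A buggy student submission: silently drops duplicate elements.
    return sorted(set(xs))

def fuzz_compare(student, reference, trials=1000, seed=0):
    """Randomly explore legal inputs; return the first input on which the
    student's behavior differs from the reference, or None if none is found."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-10, 10) for _ in range(rng.randint(0, 8))]
        if student(list(xs)) != reference(list(xs)):
            return xs  # counterexample found
    return None

counterexample = fuzz_compare(student_sort, reference_sort)
```

A targeted test suite might never include a list with duplicates, while random exploration of the input space finds one quickly; that is the complementary role the paper assigns to fuzz testing.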

    Report on the co-design process of Duct Tape University project

    This report offers a narrative analysis of a co-design process undertaken to create a web-based repository for learning resources. The process involved numerous stakeholders and draws on diverse elements of theory and practice. Stakeholders include MMU, who are assessing this work; JISC, who provided funding as part of the Summer of Student Innovation project; and Community Arts North West, who are the target user group. Communities of practice can provide excellent opportunities for rich user feedback when creating tools or services together; Wenger (2000) identifies them as a key component of the process of building social learning systems. The sharing of learning resources seems a perfect activity for creating greater communication within a community of practice of trainers. However, research in the context of the UK OER projects funded by JISC shows that there are tensions and barriers which can impede the effective exchange of learning resources, certainly in the form of OER (Littlejohn and Pegler, 2015). This multimedia report makes observations on the potential for a co-design process to help the adoption of open sharing of learning resources. It also documents a process to put these practices into action through the co-creation of a web-based repository for a specific community of practice. As part of the process, a large amount of data has been gathered in the form of photographs of the process, audio recordings of sessions, and video interviews surrounding evaluation, design and dissemination. While there is sufficient source material for a significant study beyond the remit of this report, the goal here is to briefly introduce the areas of interest surrounding this project based on a review of relevant literature, to outline the activities undertaken, and to make broad observations on lessons learned and questions raised by the process.
This chapter is divided into four main sections: this introduction, a section on theory and the framing of the project, an outline of project activities, and finally a reflective summary. This report takes the form of an ebook in ePub format; there is more information on the technical process of its creation in the section titled the making of this report. A chief reason for choosing the ePub format is the possibility of including the audio, video and photos gathered on the project's blog in a self-contained format. The contents of this report are also online here: http://blog.ducttapeuni.org. The focus of this report is to communicate breadth rather than depth in any one area. The media contained in this document can be seen as a resource to dip into, the purpose being to convey a rich, impressionistic sense of the project. This chapter provides a more traditional frame for the project, interspersing citations to academic work using the Harvard referencing system with hyperlinks to other chapters of this ePub.

    The Courier, Volume 8, Issue 20, March 6, 1975

    Stories: Cite Moral Issue in Foreign Affairs; College Feels Squeeze, Relocates 5 Agencies; Can ‘Organized Religion’ Reorganize Your Life?; Boiler Room Computerized, Ultra-Modern, Says Engineer; Campus Center Plans Student ‘Convenience Area’; Inflation, or Why Gasoline Is Exploding; Tuition Here 3rd Highest in State; Religion on Upswing by 19% Last Four Years; Chaps Win 5th Straight Hockey Championship!
    People: Alex Seith; Phyllis Eisman; Ken Trout; Robert Seaton; Isabel Bedell; Berna Zema

    2018 Student Center for Science Engagement Research Symposium Program

    Welcome, everyone, to the 10th Annual Research Symposium of the Student Center for Science Engagement (SCSE), co-sponsored with the NIH MARC NU-STAR Program! All of us in the SCSE are excited about the research and collaborations that were part of the summer program, both at NEIU and at other institutions. The SCSE Summer Research Program has continued to flourish, with 44 students and 26 faculty involved in 19 different research groups. These projects represented all of the STEM disciplines, with many interdisciplinary collaborations. These partnerships extended outside of the NEIU campus, with students working with scientists at the Field Museum, Lafayette College, Northern Illinois University, the University of Chicago, Michigan State University, the USDA National Soil Erosion Research Laboratory, the University of California at Berkeley, the University of Iowa, Ithaca College, Centro de Investigación Científica de Yucatán, and the Pennsylvania State University. Whether projects were done at NEIU or elsewhere, they are only possible with the support and efforts of faculty mentors and students working together to form strong and authentic research communities. Vital support also came from the College of Arts and Sciences, Academic Affairs, the SCSE Executive Board, and the contributions from grant programs secured by the NEIU community, including the NSF Louis Stokes Alliance for Minority Participation, the U.S. Department of Education Hispanic Serving Institutions Title III program, the U.S. Department of Agriculture, the NIH MARC U-STAR Program, and the NIH Chicago-CHEC program. It is also important to recognize the work of the SCSE staff in everything that went into supporting students and faculty, as well as this Symposium. Since I am relatively new in the position of Director, I also need to recognize the extensive efforts of Dr. Joel Olfelt, who held the position of Director prior to my start in August of this year.
Finally, I want to emphasize not just the excellent work that was done over the summer, but also the building of a culture and community at NEIU that values and emphasizes these research experiences for our students, faculty, and staff. This is the result of all those involved, especially the talents, abilities, dedication, enthusiasm, and determination of our students.

    Automated testing for GPU kernels

    Graphics Processing Units (GPUs) are massively parallel processors offering performance acceleration and energy efficiency unmatched by current processors (CPUs). These advantages, along with recent advances in the programmability of GPUs, have made them widely used in various general-purpose computing domains. However, this has also made testing GPU kernels critical to ensure that their behaviour meets the requirements of the design and specification. Despite the advances in programmability, GPU kernels are hard to code and analyse due to the high complexity of memory sharing patterns, striding patterns for memory accesses, implicit synchronisation, and the combinatorial explosion of thread interleavings. The few existing techniques for testing GPU kernels use symbolic execution for test generation, which incurs a high overhead, has limited scalability, and does not handle all data types. In this thesis, we present novel approaches to measure test effectiveness and generate tests automatically for GPU kernels. To achieve this, we address significant challenges related to the GPU execution and memory model and the lack of customised thread scheduling and global synchronisation. We make the following contributions. First, we present a framework, CLTestCheck, for assessing the quality of test suites developed for GPU kernels. The framework can measure code coverage using three different coverage metrics inspired by faults found in real kernel code. The framework also measures the fault finding capability of a test suite by seeding different types of faults in the kernel and reporting the result as a mutation score, the ratio of the number of detected faults to the total number of seeded faults.
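The mutation score defined above can be illustrated with a minimal sketch. The scalar kernel stand-in, the hand-seeded mutants, and the test inputs below are all hypothetical; CLTestCheck seeds faults in real OpenCL kernel source rather than Python.

```python
def kernel(a, b):
    # Stand-in for a GPU kernel under test (hypothetical scalar version).
    return a * b + a

# Hand-seeded faulty variants, mimicking fault-seeding operators
# such as arithmetic operator replacement.
mutants = [
    lambda a, b: a * b - a,   # '+' replaced with '-'
    lambda a, b: a + b + a,   # '*' replaced with '+'
    lambda a, b: a * b + b,   # operand replacement
]

test_inputs = [(0, 0), (1, 1), (2, 3)]

def mutation_score(original, seeded, inputs):
    """Ratio of seeded faults that the test inputs detect (kill):
    a mutant is killed if any input makes it disagree with the original."""
    expected = [original(a, b) for a, b in inputs]
    killed = sum(
        any(m(a, b) != e for (a, b), e in zip(inputs, expected))
        for m in seeded
    )
    return killed / len(seeded)

score = mutation_score(kernel, mutants, test_inputs)  # 1.0: all three killed
```

A weaker suite containing only `(0, 0)` kills none of these mutants (every variant also returns 0 there), which is exactly the kind of gap a low mutation score is meant to expose.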
Second, with the goal of being fast, effective and scalable, we propose a test generation technique, CLFuzz, for GPU kernels that combines mutation-based fuzzing for fast test generation with selective SMT solving to cover branches that fuzzing cannot reach. Fuzz testing for GPU kernels has not been explored previously. Our approach randomly mutates input kernel argument values with the goal of increasing branch coverage, and it supports GPU-specific data types such as images. When fuzz testing is unable to increase branch coverage with random mutations, we gather path constraints for uncovered branch conditions, build additional constraints to represent the context of GPU execution, such as the number of threads and the work-group size, and invoke the Z3 constraint solver to generate tests for them. Finally, to help uncover inter work-group data races and replay these bugs with fixed work-group schedules, we present a schedule amplifier, CLSchedule, which simulates multiple work-group schedules with which to execute each of the generated tests. By reimplementing the OpenCL API, CLSchedule executes the kernel with a fixed work-group schedule rather than the default arbitrary schedule. It also executes the kernel directly, without requiring the developer to manually provide boilerplate host code. The outcome of our research can be summarised as follows: 1. CLTestCheck is applied to 82 publicly available GPU kernels from industry-standard benchmark suites along with their test suites. The experiments reveal that CLTestCheck is capable of automatically measuring the effectiveness of test suites in terms of code coverage and fault finding capability, and of revealing data races in real OpenCL kernels. 2. CLFuzz can automatically generate tests and achieve close to 100% coverage and mutation score for the majority of a data set of 217 GPU kernels collected from open-source projects and industry-standard benchmarks. 3. CLSchedule is capable of exploring the effect of work-group schedules on the 217 GPU kernels and uncovers data races in 21 of them. The techniques developed in this thesis demonstrate that we can measure the effectiveness of tests developed for GPU kernels with our coverage criteria and fault seeding methods, highlighting code portions that may need developers' further attention. Our automated test generation and work-group scheduling approaches are also fast, effective and scalable, with a small overhead (an average of 0.8 seconds) and scalability to large kernels with complex data structures.
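The fuzzing half of this pipeline, randomly perturbing kernel argument values while tracking which branches each input covers, can be sketched as follows. The `kernel_branches` function is a hypothetical Python stand-in for an instrumented kernel; CLFuzz operates on real OpenCL kernels, and for guards that random mutation is unlikely to satisfy (like the magic-constant branch below) it gathers the path constraint and hands it to Z3 instead.

```python
import random

def kernel_branches(x, y):
    """Stand-in for an instrumented kernel: returns the branch ids covered."""
    covered = set()
    if x > 100:
        covered.add("b1")
    else:
        covered.add("b3")
    if y == 123456789:   # magic-constant guard: near-unreachable by fuzzing
        covered.add("b2")
    return covered

def fuzz(trials=2000, seed=1):
    """Mutation-based fuzzing: perturb one argument per trial and
    accumulate the branch coverage achieved across all trials."""
    rng = random.Random(seed)
    x, y = 0, 0
    covered = set()
    for _ in range(trials):
        if rng.random() < 0.5:
            x += rng.randint(-50, 50)
        else:
            y += rng.randint(-50, 50)
        covered |= kernel_branches(x, y)
    return covered

covered = fuzz()
# Any branch still uncovered after fuzzing (here the guard "b2") is where
# the path constraint would be passed to an SMT solver such as Z3.
```

The split mirrors the thesis's design: cheap random mutation handles the easy bulk of the branch space, and the expensive constraint solver is invoked only for the residue.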

    Sonic utopia and social dystopia in the music of Hendrix, Reznor and Deadmau5

    Twentieth-century popular music is fundamentally associated with electronics in its creation and recording, consumption, modes of dissemination, and playback. Traditional musical analysis, placing primacy on notated music, generally focuses on harmony, melody, and form, with issues of timbre and postproduction effects remaining largely unstudied. Interdisciplinary methodological practices address these limitations and can help broaden the analytical scope of popular idioms. Grounded in Jacques Attali's critical theories about the political economy of music, this dissertation investigates how the subversive noise of electronic sound challenges a controlling order and predicts broad cultural realignment. This study demonstrates how electronic noise, as an extra-musical element, creates modern soundscapes that require a new mapping of musical form and social intent. I further argue that the use of electronics in popular music signifies a technologically obsessed postwar American culture moving rapidly towards an online digital revolution. I examine how electronic music technology introduces new sounds concurrent with generational shifts, projects imagined utopian and dystopian futures, and engages the tension between automated modern life and emotionally validating musical communities in real and virtual spaces. Chapter One synthesizes this interdisciplinary American studies project with the growing scholarship of sound studies in order to construct theoretical models for popular music analysis drawn from the fields of musicology, history, and science and technology studies. Chapter Two traces the emergence of the electronic synthesizer as a new sound that facilitated the transition of a technological postwar American culture into the politicized counterculture of the 1960s.
The following three chapters provide case studies of individual popular artists' use of electronic music technology to express societal and political discontent: 1) Jimi Hendrix's application of distortion and stereo effects to narrate an Afrofuturist consciousness in the 1960s; 2) Trent Reznor's aggressive industrial rejection of Conservatism in the 1980s; and 3) Deadmau5's mediation of online life through computer-based production and performance in the 2000s. Lastly, this study extends existing discussions within sound studies to consider the cultural implications of music technology, noise politics, electronic timbre, multitrack audio, digital analytical techniques, and online communities built through social media.

    Exploring Automated Code Evaluation Systems and Resources for Code Analysis: A Comprehensive Survey

    The automated code evaluation system (AES) is mainly designed to reliably assess user-submitted code. Owing to their extensive range of applications and the accumulation of valuable resources, AESs are becoming increasingly popular. However, research on the application of AESs and the exploration of their real-world resources for diverse coding tasks is still lacking. In this study, we conducted a comprehensive survey of AESs and their resources. This survey explores the application areas of AESs, the available resources, and resource utilization for coding tasks. AESs are categorized into programming contests, programming learning and education, recruitment, online compilers, and additional modules, depending on their application. We explore the available datasets and other resources of these systems for research, analysis, and coding tasks. Moreover, we provide an overview of machine learning-driven coding tasks, such as bug detection, code review, comprehension, refactoring, search, representation, and repair, all performed using real-life datasets. In addition, we briefly discuss the Aizu Online Judge (AOJ) platform as a real example of an AES from the perspectives of system design (hardware and software), operation (competition and education), and research. We chose the AOJ because of its scalability (programming education, competitions, and practice), its open internal features (hardware and software), the attention it has received from the research community, its open-source data (e.g., solution codes and submission documents), and its transparency. We also analyze the overall performance of this system and the challenges perceived over the years.

    Spartan Daily, November 7, 2013

    Volume 141, Issue 31

    BS News
