
    Convo: What does conversational programming need? An exploration of machine learning interface design

    Full text link
    Vast improvements in natural language understanding and speech recognition have paved the way for conversational interaction with computers. While conversational agents have often been used for short goal-oriented dialog, we know little about agents for developing computer programs. To explore the utility of natural language for programming, we conducted a study (n=45) comparing different input methods to a conversational programming system we developed. Participants completed novice and advanced tasks using voice-based, text-based, and voice-or-text-based systems. We found that users appreciated aspects of each system (e.g., voice-input efficiency, text-input precision) and that novice users were more optimistic about programming using voice input than advanced users. Our results show that future conversational programming tools should be tailored to users' programming experience and allow users to choose their preferred input mode. To reduce cognitive load, future interfaces can incorporate visualizations and possess custom natural language understanding and speech recognition models for programming.
    Comment: 9 pages, 7 figures, submitted to VL/HCC 2020; associated user study video: https://youtu.be/TC5P3OO5ex

    Judicial Protection of Popular Sovereignty: Redressing Voting Technology

    Get PDF
    My analysis seeks to underscore the gravity of technologically threatened constitutional voting rights and values, implicating both individual rights to vote and the structural promise of popular sovereignty. Resolution of the dispute over the meaning of Fourteenth Amendment principles properly derived from Bush v. Gore will be pivotal to assuring meaningful voting rights in the information society. If the Court should hold the Fourteenth Amendment to embrace a deferential standard of review or arduous intent requirements, allowing state political branches to persist in choosing voting technologies based on scientifically unfounded premises that do not achieve classic components of voting rights, the American Republic’s future is seriously endangered. The argument proceeds in two parts. Part I traces illustrative empirical findings of the two comprehensive, definitive voting systems studies, offers evidence derived from actual election calamities that substantiates the experts’ findings, and translates these findings into concepts meaningful for voting rights and election law. Part II considers the judiciary’s failures thus far to understand the legal import of the scientific studies of voting systems when adjudicating the structural legal sufficiency of deployed voting systems, and identifies questions on which scholarship is critically needed. Throughout, owing to space constraints, the argument is illustrative rather than comprehensive.

    Toward Productivity Improvements in Programming Languages Through Behavioral Analytics

    Full text link
    Computer science knowledge and skills have become foundational for success in virtually every professional field. As such, productivity in programming and computer science education is of paramount economic and strategic importance for innovation, employment and economic growth. Much of the research around productivity and computer science education has centered on improving notoriously difficult compiler error messages, with a noted surge in new studies in the last decade. In developing an original research plan for this area, this dissertation begins with an examination of the Case for New Instrumentation, drawing inspiration from automated data mining innovations and corporate marketing techniques in behavioral analytics as a model for understanding and prediction of human behavior. This paper then develops and explores techniques for automated measurement of programmer behavior based on token-level lexical analysis of computer code. The techniques are applied in two empirical studies on parallel programming tasks with 88 and 91 student participants from the University of Nevada, Las Vegas, as well as 108,110 programs from a database code repository. In the first study, through a re-analysis of previously captured data, the token accuracy mapping technique provided direct insight into the root cause of observed performance differences between thread-based and process-oriented parallel programming paradigms. In the second study, comparing two approaches to GPU programming at different levels of abstraction, we found that students who completed programming tasks in the CUDA paradigm (considered a lower-level abstraction) performed at least equal to or better than students using the Thrust library (a higher level of abstraction) across four different abstraction tests.
The code repository of programs with compiler errors was gathered from an online programming interface on curriculum pages available in the Quorum language (quorumlanguage.com) for Code.org’s Hour of Code, Quorum’s Common Core-mapped curriculum, activities from Girls Who Code, and curriculum for Skynet Junior Scholars for a National Science Foundation funded grant entitled Innovators Developing Accessible Tools for Astronomy (IDATA). A key contribution of this research project is the development of a novel approach to compiler error categorization and hint generation based on token patterns, called the Token Signature Technique. Token signature analysis occurs as a post-processing step after a compilation pass with an ANTLR LL(*) parser triggers and categorizes an error. In this project, we use this technique to (i) further categorize and measure the root causes of the most common compiler errors in the Quorum database and (ii) serve as an analysis tool for the development of a rules engine for enhancing compiler errors and providing live hint suggestions to programmers. The observed error patterns, both in the overall error code categories in the Quorum database and in the specific token signatures within each error code category, show error concentration patterns similar to other compiler error studies of the Java and Python programming languages, suggesting a potentially high impact of automated error messages and hints based on this technique. The automated nature of token signature analysis also lends itself to future development with sophisticated data mining technologies in the areas of machine learning, search, artificial intelligence, databases and statistics.
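The core idea of token-signature analysis — reducing an erroneous source line to its sequence of token classes and matching that pattern against hint rules — can be illustrated with a minimal sketch. This is not the dissertation's implementation (which post-processes errors from an ANTLR LL(*) parse of Quorum code); the token classes, regexes, and hint rules below are simplified assumptions for illustration only.

```python
import re
from typing import Optional

# Hypothetical token classes; the actual technique derives tokens from the
# Quorum compiler's lexer, not from ad hoc regular expressions.
TOKEN_SPEC = [
    ("KEYWORD", r"\b(if|end|action|output|repeat)\b"),
    ("NUMBER",  r"\d+"),
    ("ID",      r"[A-Za-z_]\w*"),
    ("OP",      r"[+\-*/=<>]"),
    ("LPAREN",  r"\("),
    ("RPAREN",  r"\)"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def token_signature(line: str) -> str:
    """Reduce a source line to the sequence of token classes it contains."""
    return " ".join(m.lastgroup for m in MASTER.finditer(line))

# Illustrative rules engine: signatures of common error lines map to hints.
HINT_RULES = {
    "ID OP NUMBER": "Did you declare the variable before assigning to it?",
    "KEYWORD ID": "This statement may be missing a matching 'end'.",
}

def hint_for(line: str) -> Optional[str]:
    """Return a live hint if the line's token signature matches a known pattern."""
    return HINT_RULES.get(token_signature(line))
```

In this sketch, the erroneous line `x = 5` yields the signature `ID OP NUMBER`, which a rules engine can count (to measure error concentration across a corpus) or map to a hint, mirroring the two uses described above.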

    Directions for the future of technology in pronunciation research and teaching

    Get PDF
    This paper reports on the role of technology in state-of-the-art pronunciation research and instruction, and makes concrete suggestions for future developments. The point of departure for this contribution is that the goal of second language (L2) pronunciation research and teaching should be enhanced comprehensibility and intelligibility, as opposed to native-likeness. Three main areas are covered here. We begin with a presentation of advanced uses of pronunciation technology in research, with a special focus on the expertise required to carry out even small-scale investigations. Next, we discuss the nature of data in pronunciation research, pointing to ways in which future work can build on advances in corpus research and crowdsourcing. Finally, we consider how these insights pave the way for researchers and developers working to create research-informed, computer-assisted pronunciation teaching resources. We conclude with predictions for future developments.