
    Remote Opportunities: A Rethinking and Retooling

    Introducing technology as a sustainable means of creating, connecting, and collaborating reveals the need to carefully consider subtle aspects of deployment strategies and support in remote regions. In order to comprehensively address both cultural and technical issues for educational infrastructure, we consider two elements to be key: (1) a staged deployment approach, involving both educators and community members, coupled with (2) uniquely designed collaborative Integrated Development Environments (IDEs) to aid constructivism. This paper presents our current experience with these elements in the context of a pilot project for aboriginal communities on the west coast of British Columbia. These local communities have been working alongside our group on a staged deployment of programs throughout southern Vancouver Island. In our next phase we will be extending this to more remote regions in the north island and coastal regions. By building on a philosophy of Community-Driven Initiatives for Technology (C-DIT), we hope to secure community involvement in the development and testing of necessary tool support. These tools specifically target IDEs for the development of programming skills, and support our long-term goal to help secondary and post-secondary level students appreciate both the process and the art of programming.

    Locking Discipline Inference and Checking

    Concurrency is a requirement for much modern software, but the implementation of multithreaded algorithms comes at the risk of errors such as data races. Programmers can prevent data races by documenting and obeying a locking discipline, which indicates which locks must be held in order to access which data. This paper introduces a formal semantics for locking specifications that gives a guarantee of race freedom. The paper also provides two implementations of the formal semantics for the Java language: one based on abstract interpretation and one based on type theory. To the best of our knowledge, these are the first tools that can soundly infer and check a locking discipline for Java. Our experiments compare the implementations with one another and with annotations written by programmers.
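As a rough illustration of what such a discipline looks like in Java (the class and field names here are ours, not from the paper's benchmarks): the comment on the field documents which lock guards it, and every access holds that lock. A checker in the paper's spirit would verify that no access to the field occurs without the lock held; an inference tool would recover the annotation from the code.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of a locking discipline: the comment on `balance` documents
// which lock must be held to access it, and both methods obey it.
class Account {
    private final ReentrantLock lock = new ReentrantLock();
    private long balance; // discipline: guarded by `lock`

    void deposit(long amount) {
        lock.lock();
        try {
            balance += amount; // safe: `lock` is held here
        } finally {
            lock.unlock();
        }
    }

    long balance() {
        lock.lock();
        try {
            return balance; // read also requires `lock`
        } finally {
            lock.unlock();
        }
    }
}
```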

    First Class Copy & Paste

    The Subtext project seeks to make programming fundamentally easier by altering the nature of programming languages and tools. This paper defines an operational semantics for an essential subset of the Subtext language. It also presents a fresh approach to the problems of mutable state, I/O, and concurrency. Inclusions reify copy & paste edits into persistent relationships that propagate changes from their source into their destination. Inclusions formulate a programming language in which there is no distinction between a program's representation and its execution. Like spreadsheets, programs are live executions within a persistent runtime, and programming is direct manipulation of these executions via a graphical user interface. There is no need to encode programs into source text. Mutation of state is effected by the computation of hypothetical recursive variants of the state, which can then be lifted into new versions of the state. Transactional concurrency is based upon queued single-threaded execution. Speculative execution of queued hypotheticals provides concurrency as a semantically transparent implementation optimization.
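As a toy model of the inclusion idea (far simpler than Subtext itself, and with names of our own invention), a pasted node can keep a live link back to its source so that later edits to the source propagate into every destination:

```java
import java.util.ArrayList;
import java.util.List;

// Toy "first-class copy & paste": a copy remembers where it came from,
// and edits to the source propagate into all of its inclusions.
class Node {
    private String text;
    private final List<Node> inclusions = new ArrayList<>();

    Node(String text) { this.text = text; }

    // paste that stays linked: returns a copy registered with the source
    Node includeInto() {
        Node copy = new Node(text);
        inclusions.add(copy);
        return copy;
    }

    void edit(String newText) {
        text = newText;
        for (Node dest : inclusions) dest.edit(newText); // propagate
    }

    String text() { return text; }
}
```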

    Sound Atomicity Inference for Data-Centric Synchronization

    Data-Centric Concurrency Control (DCCC) shifts the reasoning about concurrency restrictions from control structures to data declarations. It is a high-level declarative approach that abstracts away from the actual concurrency control mechanism(s) in use. Despite its advantages, the practical use of DCCC is hindered by the fact that it may require many annotations and/or multiple implementations of the same method to cope with differently qualified parameters. Moreover, the existing DCCC solutions do not address the use of interfaces, precluding their use in most object-oriented programs. To overcome these limitations, in this paper we present AtomiS, a new DCCC model based on a rigorously defined type-sound programming language. Programming with AtomiS requires only (atomic)-qualifying types of parameters and return values in interface definitions, and of fields in class definitions. From this atomicity specification, a static analysis infers the atomicity constraints that are local to each method, considering valid only the method variants that are consistent with the specification, and performs code generation for all valid variants of each method. The generated code is then the target for automatic injection of concurrency control primitives, by means of the desired automatic technique and associated atomicity and deadlock-freedom guarantees, which can be plugged into the model's pipeline. We present the foundations for the AtomiS analysis and synthesis, with formal guarantees that the generated program is well-typed and that it corresponds behaviourally to the original one. The proofs are mechanised in Coq. We also provide a Java implementation that showcases the applicability of AtomiS in real-life programs.
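To make the programming model concrete, here is a hand-written Java sketch of the kind of code such qualification and synthesis could correspond to. The `@Atomic` annotation and the "generated" variant are our hypothetical stand-ins, not AtomiS's actual syntax or output: the programmer only qualifies types in the interface, and the pipeline injects the concurrency control (here, plain monitor locks; the deadlock-freedom guarantee would come from whichever injection technique is plugged in).

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

// Hypothetical stand-in for an AtomiS-style atomicity qualifier.
@Target(ElementType.TYPE_USE)
@interface Atomic {}

interface Counter {
    void add(@Atomic Counter other); // atomicity specified only on types
    int value();
}

// One "generated" valid variant: concurrency control injected so that
// the atomic-qualified parameter is accessed atomically.
class SyncCounter implements Counter {
    private int n;

    synchronized void inc() { n++; }

    public void add(Counter other) {
        synchronized (other) {            // injected: lock the @Atomic param
            synchronized (this) { n += other.value(); }
        }
    }

    public synchronized int value() { return n; }
}
```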

    Cautiously Optimistic Program Analyses for Secure and Reliable Software

    Modern computer systems still have various security and reliability vulnerabilities. Well-known dynamic analyses can mitigate them using runtime monitors that serve as lifeguards, but the additional work of enforcing these security and safety properties incurs exorbitant performance costs, and such tools are rarely used in practice. Our work addresses this problem with a novel technique: Cautiously Optimistic Program Analysis (COPA). COPA is optimistic: it infers likely program invariants from dynamic observations, and assumes them in its static reasoning to precisely identify and elide wasteful runtime monitors. The resulting system is fast, but also ensures soundness by recovering to a conservatively optimized analysis when a likely invariant rarely fails at runtime. COPA is also cautious: by carefully restricting optimizations to only safe elisions, the recovery is greatly simplified. It avoids unbounded rollbacks upon recovery, thereby enabling analysis of live production software. We demonstrate the effectiveness of Cautiously Optimistic Program Analyses in three areas. Information-Flow Tracking (IFT) can help prevent security breaches and information leaks, but it is rarely used in practice due to its high performance overhead (>500% for web/email servers); COPA dramatically reduces this cost by eliding wasteful IFT monitors to make it practical (9% overhead, 4x speedup). Automatic Garbage Collection (GC) in managed languages (e.g. Java) simplifies programming tasks while ensuring memory safety; however, there is no correct GC for weakly-typed languages (e.g. C/C++), and manual memory management is prone to errors that have been exploited in high-profile attacks. We develop the first sound GC for C/C++, and use COPA to optimize its performance (16% overhead). Sequential Consistency (SC) provides intuitive semantics to concurrent programs that simplify reasoning about their correctness; however, ensuring SC behavior on commodity hardware remains expensive. We use COPA to ensure SC for Java at the language level efficiently, significantly reducing its cost (from 24% down to 5% on x86). COPA provides a way to realize strong software security, reliability, and semantic guarantees at practical costs.
    Ph.D. dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/170027/1/subarno_1.pd
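The optimistic-but-recoverable pattern can be caricatured in a few lines of Java (class and method names are ours; real COPA elides information-flow monitors across whole programs, not this toy per-call check): while a profiled likely invariant holds, the expensive monitor is skipped; the first violation triggers a one-time, bounded recovery to the conservative analysis.

```java
// Toy sketch of COPA's cautiously optimistic pattern: fast path under a
// likely invariant, sound fallback when the invariant is seen to fail.
class CopaSketch {
    // profiled likely invariant, e.g. "this site never sees tainted data"
    private boolean invariantStillHolds = true;

    int monitoredOps = 0; // how often the expensive monitor actually ran

    int process(int value, boolean tainted) {
        if (invariantStillHolds && !tainted) {
            // fast path: the runtime monitor is elided here
            return value * 2;
        }
        // cautious recovery: no rollback needed, just switch this site
        // to the conservatively monitored analysis from now on
        invariantStillHolds = false;
        monitoredOps++;
        return monitoredProcess(value, tainted);
    }

    private int monitoredProcess(int value, boolean tainted) {
        if (tainted) throw new SecurityException("tainted value reached sink");
        return value * 2;
    }
}
```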

    Languages of games and play: A systematic mapping study

    Digital games are a powerful means for creating enticing, beautiful, educational, and often highly addictive interactive experiences that impact the lives of billions of players worldwide. We explore what informs the design and construction of good games in order to learn how to speed up game development. In particular, we study to what extent languages, notations, patterns, and tools can offer experts the theoretical foundations, systematic techniques, and practical solutions they need to raise their productivity and improve the quality of games and play. Despite the growing number of publications on this topic, there is currently no overview describing the state of the art that relates research areas, goals, and applications. As a result, efforts and successes are often one-off, lessons learned go overlooked, language reuse remains minimal, and opportunities for collaboration and synergy are lost. We present a systematic map that identifies relevant publications and gives an overview of research areas and publication venues. In addition, we categorize research perspectives along common objectives, techniques, and approaches, illustrated by summaries of selected languages. Finally, we distill challenges and opportunities for future research and development.

    Integration analysis of solutions based on software as a service to implement Educational Technological Ecosystems

    One of the main characteristics of today's Knowledge Society is the value of knowledge as an active resource in any kind of organization, from educational institutions to large corporations. Knowledge management emerges as a competitive advantage, so organizations devote part of their resources to developing their capacity to share, create, and apply new knowledge continuously over time. Technology, regarded as the engine and central element of the Information Society, becomes a support for learning, for transforming tacit knowledge into explicit knowledge, and individual knowledge into group knowledge. The Internet, information and communication technologies, and in particular information systems shift from being elements that drive the development of society to being tools whose development is driven by the needs of knowledge management and learning processes. Technological ecosystems, understood as the evolution of traditional information systems, position themselves as knowledge management systems that encompass both the technological component and the human factor. When knowledge management is primarily aimed at supporting learning processes, the technological ecosystem can be called a learning ecosystem. The ecosystem metaphor, which comes from biology, is used in different contexts to convey the evolving nature of processes, activities, and relationships. The concept of the natural ecosystem is applied to the technological domain in order to transfer a set of characteristics or properties of natural ecosystems to technological or software ecosystems, so as to provide solutions oriented toward solving knowledge management problems.
    These solutions must in turn adapt to the constant changes experienced by any organization or context in which a technological solution is deployed. Despite the advantages that technological ecosystems offer, developing this kind of solution is more complex than developing traditional information systems. Beyond the inherent problems of software engineering, such as component interoperability or ecosystem evolution, there is the difficulty of managing complex knowledge and the diversity of the people involved. The various challenges and problems of technological ecosystems, and in particular of those focused on managing knowledge and learning, call for improving the processes by which this kind of technological solution is defined and developed. This doctoral thesis focuses on providing an architectural framework that improves the definition, development, and sustainability of technological ecosystems for learning. The framework consists mainly of two results of this research: an architectural pattern that solves the problems detected in real learning ecosystems, and a learning ecosystem metamodel, based on that pattern, that enables Model-Driven Engineering to support the definition and development of learning ecosystems. The research followed three cycles of the Action Research methodological framework. The first cycle focused on analyzing several real case studies in order to obtain a domain model of the problem.
    Technological ecosystems for knowledge management and learning deployed in heterogeneous contexts were analyzed: in particular, the Universidad de Salamanca, the GRIAL research group, and the European project TRAILER (focused on managing informal knowledge in institutions and companies). As a result of this cycle, a set of characteristics that a technological ecosystem must have was identified, and an architectural pattern was defined that lays the foundations of the ecosystem, solving some of the detected problems and ensuring the flexibility and adaptability of the ecosystem's components so that it can evolve. The second cycle focused on improving and validating the architectural pattern. The problems detected in the previous cycle were modeled with Business Process Model and Notation: problems related to similar knowledge management processes were grouped, a high-level diagram was produced for each group of problems, and then, for each diagram, the problems to solve were identified once more and a new diagram was defined applying the pattern. This validated the architectural pattern and laid the groundwork for its formalization. Finally, the third cycle addressed the Model-Driven Development of technological ecosystems for knowledge management and learning. Specifically, a learning ecosystem metamodel based on the architectural pattern from the previous cycle was defined, and validated through a series of model-to-model transformations automated by transformation rules.
    To carry out that process, a platform-specific metamodel was defined that provides a set of recommendations, both technological and human, for implementing learning ecosystems based on open source software. The learning ecosystem metamodel and the platform-specific metamodel for defining ecosystems based on open source software provide the guidance needed to define learning ecosystems that solve the main problems detected in this kind of software solution. The three real case studies developed to validate the results obtained across the Action Research cycles, especially the architectural pattern for modeling learning ecosystems, the learning ecosystem metamodel, and the platform-specific metamodel for defining ecosystems based on open source software, support the general conclusion that it is possible to improve the definition and development of technological ecosystems focused on managing knowledge and learning processes. More specifically, Model-Driven Engineering, grounded in a solid architectural proposal, makes it possible to define learning ecosystems that evolve and adapt to the changing needs of the environment and its users, and to solve a set of common problems identified in this kind of technological solution.

    Dynamically fighting bugs : prevention, detection and elimination

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 147-160). This dissertation presents three test-generation techniques that are used to improve software quality. Each of our techniques targets bugs that are found by different stakeholders: developers, testers, and maintainers. We implemented and evaluated our techniques on real code. We present the design of each tool and experimentally compare the tools with available alternatives. Developers need to prevent regression errors when they create new functionality. This dissertation presents a technique that helps developers prevent regression errors in object-oriented programs by automatically generating unit-level regression tests. Our technique generates regression tests by using models created dynamically from example executions. In our evaluation, our technique created effective regression tests, and achieved good coverage even for programs with constrained APIs. Testers need to detect bugs in programs. This dissertation presents a technique that helps testers detect and localize bugs in web applications. Our technique automatically creates tests that expose failures by combining dynamic test generation with explicit-state model checking. In our evaluation, our technique discovered hundreds of faults in real applications. Maintainers have to reproduce failing executions in order to eliminate bugs found in deployed programs. This dissertation presents a technique that helps maintainers eliminate bugs by generating tests that reproduce failing executions. Our technique automatically generates tests that reproduce the failed executions by monitoring methods and storing optimized states of method arguments.
    In our evaluation, our technique reproduced failures with low overhead in real programs. Analyses need to avoid unnecessary computations in order to scale. This dissertation presents a technique that helps our other techniques scale by inferring the mutability classification of arguments. Our technique classifies mutability by combining static analyses with a novel dynamic mutability analysis. In our evaluation, our technique efficiently and correctly classified most of the arguments for programs with more than a hundred thousand lines of code. By Shay Artzi. Ph.D.
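The idea of generating regression tests from example executions can be sketched in a few lines of Java (names and structure are ours, not the dissertation's tool): record each observed input/output pair of a method during an example run, then replay the pairs as oracles against a new version of the code.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.IntUnaryOperator;

// Toy sketch of regression-test generation from example executions:
// wrap a method to capture observed input/output pairs, then replay
// them as assertions against a candidate new version.
class RegressionRecorder {
    final Map<Integer, Integer> observed = new LinkedHashMap<>();

    // instrument a method so that example executions become test cases
    IntUnaryOperator record(IntUnaryOperator method) {
        return x -> {
            int y = method.applyAsInt(x);
            observed.put(x, y); // captured input -> expected output
            return y;
        };
    }

    // replay all captured cases against a new version of the method
    boolean replayAgainst(IntUnaryOperator newVersion) {
        return observed.entrySet().stream()
                .allMatch(e -> newVersion.applyAsInt(e.getKey()) == e.getValue());
    }
}
```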