56 research outputs found

    Human-Centric Tools for Navigating Code

    Get PDF
All software failures are fundamentally the fault of humans: the software's design was flawed. The high cost of such failures ultimately results in developers having to design, implement, and test fixes, all of which take considerable time and effort and may result in further failures. As developers work on software maintenance tasks, they must navigate enormous codebases that may comprise millions of lines of code organized across thousands of modules. Navigating code, however, carries with it a plethora of problems for developers. In the hope of addressing these navigation barriers, modern code editors and development environments provide a variety of features to aid navigation; however, they are not without their limitations. Code navigation takes many forms, and in this work I focus on three key types of code navigation in modern software development: navigating the working set, navigating among versions of code, and navigating the code structure. To address the challenges of navigating code, I designed three novel software development tools, one to enhance each type of navigation. First, I designed and implemented Patchworks, a code editor interface to support developers in navigating the working set. Patchworks aims to make these navigations more efficient by providing a fixed grid of open code fragments that developers can quickly navigate. Second, I designed and implemented Yestercode, a code editor extension to support navigating among versions of code. Yestercode does so by providing a comparison view of the current code and a previous version of the same code. Third, I designed and implemented Wandercode, a code editor extension to enable developers to efficiently navigate the structure of their code. Wandercode aims to do so by providing a visualization of the code's call graph overlaid on the code editor. My approach to designing these tools for more efficient code navigation was a human-centric one; that is, it was based on the needs of actual developers performing real software development tasks. Through user study evaluations, I found that these tools significantly improved developer productivity by reducing developers' time spent navigating and their mental effort during software maintenance tasks.

    Towards lightweight and high-performance hardware transactional memory

    Get PDF
Conventional lock-based synchronization serializes accesses to critical sections guarded by the same lock. Using multiple locks brings the possibility of a deadlock or a livelock in the program, making parallel programming a difficult task. Transactional Memory (TM) is a promising paradigm for parallel programming, offering an alternative to lock-based synchronization. TM eliminates the risk of deadlocks and livelocks while providing the desirable semantics of atomicity, consistency, and isolation for critical sections. TM speculatively executes a series of memory accesses as a single, atomic transaction. The speculative changes of a transaction are kept private until the transaction commits. If a transaction would break atomicity or cause a deadlock or livelock, the TM system aborts the transaction and rolls back the speculative changes. To be effective, a TM implementation should provide high performance and scalability. While implementations of TM in pure software (STM) do not provide the desired performance, hardware TM (HTM) implementations introduce much smaller overhead and have relatively good scalability, due to their better control of hardware resources. However, many HTM systems support only transactions that fit within limited hardware resources (for example, private caches) and fall back to software mechanisms if hardware limits are reached. These HTM systems, called best-effort HTMs, are not desirable, since they force a programmer to think in terms of hardware limits, to use both HTM and STM, and to manage concurrent transactions in HTM and STM. In contrast with best-effort HTMs, unbounded HTM systems support overflowed transactions, which do not fit into private caches. Unbounded HTM systems often require complex protocols or expensive hardware mechanisms for conflict detection between overflowed transactions. In addition, an execution with overflowed transactions is often much slower than an execution that has only regular transactions. This is typically due to the restrictive or approximate conflict management mechanisms used for overflowed transactions. In this thesis, we study hardware implementations of transactional memory and make three main contributions. First, we improve the general performance of HTM systems by proposing a scalable protocol for conflict management. The protocol provides precise conflict detection, in contrast with the often-employed inexact Bloom-filter-based conflict detection, which frequently reports false conflicts between transactions. Second, we propose a best-effort HTM, termed EazyHTM, that utilizes the new scalable conflict detection protocol. EazyHTM allows parallel commits for all non-conflicting transactions and generally simplifies transaction commits. Finally, we propose an unbounded HTM, named EcoTM, that extends and improves the initial protocol for conflict management. EcoTM features precise conflict detection and efficiently supports large as well as small and short transactions. The key idea of EcoTM is to leverage the observation that very few locations actually conflict, even in applications with high contention. In EcoTM, each core locally detects whether a cache line is non-conflicting, and the conflict detection mechanism is invoked only for the few potentially conflicting cache lines.
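To make the best-effort model concrete, the sketch below shows the classic lock-elision pattern: a critical section is first attempted as a hardware transaction using Intel's RTM intrinsics and, if the hardware aborts it (for example, because it overflows the private cache or conflicts with another transaction), execution falls back to an ordinary software lock. This is a minimal illustrative sketch of the general best-effort idea described above; the function and variable names are assumptions for illustration and are not taken from EazyHTM or EcoTM.

    // Best-effort HTM critical section with a software lock fallback (lock elision).
    // Assumes an x86 CPU with RTM support; compile with -mrtm. Names are illustrative.
    #include <immintrin.h>
    #include <atomic>

    static std::atomic<bool> fallback_lock{false};       // simple spinlock used as the fallback

    static void lock_fallback() {
        while (fallback_lock.exchange(true, std::memory_order_acquire)) { /* spin */ }
    }

    static void unlock_fallback() {
        fallback_lock.store(false, std::memory_order_release);
    }

    template <typename F>
    void run_critical_section(F &&body) {
        for (int attempt = 0; attempt < 3; ++attempt) {   // a few hardware attempts
            unsigned status = _xbegin();                  // begin a hardware transaction
            if (status == _XBEGIN_STARTED) {
                // Read the lock inside the transaction so that a concurrent fallback
                // acquisition conflicts with this transaction and aborts it.
                if (fallback_lock.load(std::memory_order_relaxed)) {
                    _xabort(0xff);                        // lock is held: abort explicitly
                }
                body();                                   // speculative memory accesses
                _xend();                                  // commit the transaction atomically
                return;
            }
            // Aborted (conflict, capacity overflow, etc.): retry, then give up.
        }
        lock_fallback();                                  // software fallback path
        body();
        unlock_fallback();
    }

A call such as run_critical_section([&]{ shared_counter++; }); then executes the body either as a single atomic hardware transaction or, after repeated aborts, under the fallback lock, mirroring the HTM-plus-software-fallback structure that the abstract describes.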

    Town of Henniker, New Hampshire. 2010 annual report.

    Get PDF
This is an annual report containing vital statistics for a town/city in the state of New Hampshire.

    Developing an intervention toolbox for common health problems in the workplace

    Get PDF
The project brief was to develop the content for an intervention toolbox for common health problems in the workplace - musculoskeletal, mental health and stress complaints. The intention was to develop a prototype toolbox that can be taken forward to (1) minimise the occurrence of work-relevant common health problems (CHPs) and (2) reduce avoidable sickness absence, healthcare use and long-term disability for CHP complaints that inevitably occur in the workplace.

    Perceptions of job loss : a descriptive study of managerial and professional men and women.

    Get PDF

    Town of Dunbarton, New Hampshire for the fiscal year ending December 31, 2011.

    Get PDF
This is an annual report containing vital statistics for a town/city in the state of New Hampshire.

    Flood management in a changing climate

    Get PDF
In 2010-2011 Australia experienced its most expensive floods in history, with costs to insurers and state and federal governments exceeding A$10 billion. Climate and population changes are likely to increase future flood threats, and economists estimate that by 2050, even without factoring in climate change, Australia's natural disaster damage bill could reach $33 billion per year. Flood management is thus a key area for improving adaptive capacity. While the causes of flooding are well known, effective solutions have proved elusive, and some flood management options may be maladaptive in the longer term. There were contradictions in the flood management literature: some sources categorized structural measures such as dykes and levees as adaptation measures, while others warned about their negative impacts. Meanwhile, innovative approaches used overseas appeared little known or used in Australia. Although structural measures were often criticized in the adaptation literature, there was a lack of guidance about how to reduce reliance on them. Similarly, resilience researchers with a social-ecological systems perspective argued the need to identify policy and institutional interventions that would make it possible to move from undesirable to more desirable resilience domains. The challenge was therefore to determine how best to adapt to increasing flood risk, and how to facilitate the adoption of adaptive approaches. A key question was whether adaptive approaches used elsewhere were transferable to Australia. Given the dominance of resilience theory in modern disaster management, a related research aim was to determine whether or not disaster resilience policy was likely to achieve adaptive outcomes. Literature review was the primary research method, supplemented with semi-structured interviews. Sources included recent flood reviews, academic literature, and policy and legal documents. These were used to develop comparative case studies from China, the Netherlands, the United States and Australia. This was extended to cover global organizations for the resilience component of the work. Data analysis drew on literature relating to adaptation, resilience, comparative public policy, institutional theory and emergency management. Resilience interpretations were identified in a systematic way using a modified emergency management framework, complemented with narratives. Results revealed that resilience interpretations varied according to country, with Australia tending to be the least adaptive and the Netherlands the most. This reflects changes in attitudes towards structural mitigation. While support for structural mitigation remains strong in Australia, recent flood events in other countries have exposed its weaknesses. This has resulted in a shift to reduce levee dependency, accompanied by support for alternatives such as ecosystem-based measures and development relocation. Such measures encounter significant barriers in Australia, making policy transfer problematic. Nevertheless, case studies revealed opportunities to improve program implementation, and investigation of path dependency associated with structural mitigation identified opportunities to alter feedbacks. Regarding the application of resilience theory to disaster management, it was found that while resilience is a useful concept for researchers, there are problems when it is operationalised. A better focus for practitioners would be to negotiate long-term adaptation pathways.