
    Untangling Fine-Grained Code Changes

    After working for some time, developers commit their code changes to a version control system. When doing so, they often bundle unrelated changes (e.g., a bug fix and a refactoring) in a single commit, thus creating a so-called tangled commit. Sharing tangled commits is problematic because it makes review, reversion, and integration of these commits harder, and historical analyses of the project less reliable. Researchers have worked on untangling existing commits, i.e., finding which part of a commit relates to which task. In this paper, we contribute to this line of work in two ways: (1) a publicly available dataset of untangled code changes, created with the help of two developers who accurately split their code changes into self-contained tasks over a period of four months; (2) a novel approach, EpiceaUntangler, that helps developers share untangled commits (a.k.a. atomic commits) by using fine-grained code change information. EpiceaUntangler is built and tested on the publicly available dataset, and further evaluated by deploying it to 7 developers, who used it for 2 weeks. In automatically creating clusters of untangled fine-grained code changes, we recorded a median success rate of 91% and an average one of 75%
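    The core clustering idea can be illustrated with a toy sketch (EpiceaUntangler itself is implemented differently and uses richer change information; the event format and single-link rule below are invented for illustration): fine-grained change events that touch the same program entity are merged into one candidate cluster via union-find.

```python
# Illustrative sketch only: group fine-grained change events into
# candidate "untangled" clusters by linking events that touch the
# same program entity. The event format is hypothetical, not the
# actual EpiceaUntangler data model.

def cluster_changes(events):
    """events: list of (event_id, entity) pairs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Link every event to its entity, so events sharing an entity merge.
    for event_id, entity in events:
        union(("event", event_id), ("entity", entity))

    clusters = {}
    for event_id, _ in events:
        clusters.setdefault(find(("event", event_id)), []).append(event_id)
    return sorted(sorted(c) for c in clusters.values())

events = [
    (1, "Parser>>parse:"),      # part of a bug fix
    (2, "Parser>>parse:"),
    (3, "Scanner>>nextToken"),  # unrelated refactoring
]
print(cluster_changes(events))  # → [[1, 2], [3]]
```

A real untangler would combine many weaker signals (timing, entity proximity, co-change history) rather than a single hard link, but the output shape, clusters of fine-grained events, is the same.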

    Evaluating advanced search interfaces using established information-seeking models

    When users have poorly defined or complex goals, search interfaces offering only keyword-searching facilities provide inadequate support to help them reach their information-seeking objectives. The emergence of interfaces with more advanced capabilities, such as faceted browsing and result clustering, can go some way toward addressing such problems. The evaluation of these interfaces, however, is challenging, since they generally offer diverse and versatile search environments that introduce overwhelming numbers of independent variables into user studies; choosing the interface object as the only independent variable in a study would reveal very little about why one design outperforms another. Nonetheless, if we could effectively compare these interfaces, we would have a way to determine which was best for a given scenario and begin to learn why. In this article we present a formative framework for the evaluation of advanced search interfaces through the quantification of the strengths and weaknesses of the interfaces in supporting user tactics and varying user conditions. To achieve this, the framework combines established models of users, user needs, and user behaviours. The framework is applied to evaluate three search interfaces and demonstrates the potential value of this approach to interactive IR evaluation

    A Validated Framework for Measuring Interface Support for Interactive Information Seeking

    In this paper we present the validation of an evaluation framework that models the support provided by search systems for different types of user and their expected types of seeking behavior. Factors determining the types of users include previous knowledge and goals. After an overview is presented, the framework is validated in two ways. First, the novel integration of the two existing information-seeking models used in the framework is validated by the correlation of multiple expert and novice analyses. Second, the framework is validated against the results produced by two separate user studies. Further, the refinements made by the first validation technique are shown to increase the accuracy of the framework through the second technique. The successful validation process has shown that the framework can identify both strong and weak areas of search interface design in only a few hours. The results produced can be used either to revise and strengthen designs or to inform the structure of a user study

    A Nine Month Report on Progress Towards a Framework for Evaluating Advanced Search Interfaces considering Information Retrieval and Human Computer Interaction

    This is a nine month progress report detailing my research into supporting users in their search for information, where the questions, results or even thei

    High-contrast imaging of tight resolved binaries with two vector vortex coronagraphs in cascade with the Palomar SDC instrument

    More than half of the stars in the solar neighborhood reside in binary/multiple stellar systems, and recent studies suggest that gas giant planets may be more abundant around binaries than around single stars. Yet these multiple systems are usually overlooked or discarded in most direct imaging surveys, as they prove difficult to image at high contrast using coronagraphs. This is particularly the case for compact binaries (less than 1" angular separation) with similar stellar magnitudes, for which no existing coronagraph can provide a high-contrast regime. Here we present preliminary results of an ongoing Palomar pilot survey searching for low-mass companions around ~15 challenging young binary systems, with angular separations as close as 0".3 and near-equal K-band magnitudes. We use the Stellar Double Coronagraph (SDC) instrument on the 200-inch Telescope in a modified optical configuration, making it possible to align any targeted binary system behind two vector vortex coronagraphs in cascade. This approach is uniquely possible at Palomar, thanks to the absence of sky rotation, the availability of an extreme AO system, and the number of intermediate focal planes provided by the SDC instrument. Finally, we describe our current data-reduction strategy and attempt to quantify the exact contrast-gain parameter space of our approach, based on our latest observing runs.
    Comment: 12 pages, 8 figures, to appear in Proceedings of the SPIE, paper 10702-14

    'Irrational' searchers and IR-rational researchers

    In this article we look at the prescriptions advocated by Web search textbooks in the light of a selection of empirical data on real Web information-search processes. We use the strategy of disjointed incrementalism, a theoretical foundation from decision making that focuses on how people face complex problems, and claim that such problem solving can be compared to the tasks searchers perform when interacting with the Web. The findings suggest that textbooks on Web searching should take into account that searchers tend to take only a limited number of sources into consideration, that searchers adjust their goals and objectives during searching, and that searchers reconsider the usefulness of sources at different stages of their work tasks as well as their search tasks

    Interaction-aware development environments: recording, mining, and leveraging IDE interactions to analyze and support the development flow

    Nowadays, software development is largely carried out using Integrated Development Environments, or IDEs. An IDE is a collection of tools and facilities to support the most diverse software engineering activities, such as writing code, debugging, and program understanding. The fact that they are integrated enables developers to find all the tools needed for development in the same place. Each activity is composed of many basic events, such as clicking on a menu item in the IDE, opening a new user interface to browse the source code of a method, or adding a new statement in the body of a method. While working, developers generate thousands of these interactions, which we call fine-grained IDE interaction data. We believe this data is a valuable source of information that can be leveraged to enable better analyses and to offer novel support to developers. However, this data is largely neglected by modern IDEs. In this dissertation we propose the concept of "Interaction-Aware Development Environments": IDEs that collect, mine, and leverage the interactions of developers to support and simplify their workflow. We formulate our thesis as follows: Interaction-Aware Development Environments enable novel and in-depth analyses of the behavior of software developers and set the ground to provide developers with effective and actionable support for their activities inside the IDE. For example, by monitoring how developers navigate source code, the IDE could suggest the program entities that are potentially relevant for a particular task. Our research focuses on three main directions: 1. Modeling and Persisting Interaction Data. The first step to make IDEs aware of interaction data is to overcome its ephemeral nature. To do so we have to model this new source of data and persist it, making it available for further use. 2. Interpreting Interaction Data. One of the biggest challenges of our research is making sense of the millions of interactions generated by developers. We propose several models to interpret this data, for example by reconstructing high-level development activities from interaction histories or by measuring the navigation efficiency of developers. 3. Supporting Developers with Interaction Data. Novel IDEs can use the potential of interaction data to support software development. For example, they can identify the UI components that are potentially unnecessary in the future and suggest that developers close them, reducing the visual clutter of the IDE
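    As an illustration of the second direction, interpreting interaction data, the following minimal sketch reconstructs coarse development sessions from a timestamped event stream by splitting on idle gaps. The threshold and event format are assumptions for the example, not the dissertation's actual model.

```python
# Illustrative sketch: reconstruct coarse "development sessions" from
# a stream of timestamped IDE interaction events by splitting on idle
# gaps. The 5-minute threshold is an assumed parameter.

IDLE_GAP_SECONDS = 300  # assumed inactivity threshold

def split_sessions(events, gap=IDLE_GAP_SECONDS):
    """events: list of (timestamp_seconds, action), sorted by time."""
    sessions = []
    current = []
    last_ts = None
    for ts, action in events:
        if last_ts is not None and ts - last_ts > gap:
            sessions.append(current)  # idle gap: close the session
            current = []
        current.append((ts, action))
        last_ts = ts
    if current:
        sessions.append(current)
    return sessions

events = [
    (0, "open:Parser"),
    (40, "edit:Parser>>parse:"),
    (90, "run-tests"),
    (1000, "open:Scanner"),  # long pause: likely a new activity
    (1030, "edit:Scanner"),
]
print([len(s) for s in split_sessions(events)])  # → [3, 2]
```

Richer interpretations (activity labeling, navigation-efficiency metrics) would layer on top of such a segmentation, but the gap-splitting step conveys how high-level structure can be recovered from raw event streams.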

    Waterfall: Primitives Generation on the Fly

    Modern languages are typically supported by managed runtimes (Virtual Machines). Since VMs have to deal with many concerns, such as memory management, the abstract execution model, and scheduling, they tend to be very complex. Additionally, VMs have to meet strong performance requirements. This demand for performance is one of the main reasons why many VMs are built statically. Thus, design decisions are frozen at compile time, preventing changes at runtime. One clear example is the impossibility of dynamically adapting or changing primitives of the VM once it has been compiled. In this work we present a toolchain that allows altering and configuring components such as primitives and plug-ins at runtime. The main contribution is Waterfall, a dynamic and reflective translator from Slang, a restricted subset of Smalltalk, to native code. Waterfall generates primitives on demand and executes them on the fly. We validate our approach by implementing dynamic primitive modification and runtime customization of VM plug-ins
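    The general idea of generating and swapping primitives at runtime can be sketched in Python, using its built-in compile/exec machinery as a stand-in. Waterfall itself translates Slang to native code, so this is only an analogy, not the actual toolchain.

```python
# Sketch of generating executable "primitives" on demand at runtime.
# Python's compile/exec stands in for a native-code translator here;
# the names and cache are invented for illustration.

_primitive_cache = {}

def get_primitive(name, source):
    """Compile a primitive the first time it is requested."""
    if name not in _primitive_cache:
        namespace = {}
        code = compile(source, f"<primitive:{name}>", "exec")
        exec(code, namespace)
        _primitive_cache[name] = namespace[name]
    return _primitive_cache[name]

# The primitive's definition can be replaced and recompiled at
# runtime, which a statically built VM cannot do.
add = get_primitive("add", "def add(a, b):\n    return a + b\n")
print(add(2, 3))  # → 5
```

A statically built VM freezes such definitions at compile time; the point of an on-the-fly translator is precisely that the `source` above could change while the system keeps running.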

    PRODUCER SEGMENTATION AND THE ROLE OF LONG-TERM RELATIONSHIP IN MALAYSIA’S MILK SUPPLY CHAINS

    Research on buyer-seller relationships in the agricultural sector has received little attention. A growing body of evidence suggests that strong buyer-seller relationships facilitate more efficient supply chains. The long-term relationship literature tends to treat suppliers as a homogeneous group when attempting to identify motivations, strategies, and incentives to enhance the quality of buyer-seller relationships. This article explores the role of long-term relationships between buyers and sellers in Malaysia's dairy industry, taking into consideration the heterogeneous nature of the producers. Interviews with 133 producers provide the data for this study. Cluster analysis suggests two well-defined groups differing in terms of demographic characteristics and relationship perceptions toward their buyers. Based on the results, the study proposes some policy implications and marketing strategies for both milk buyers and the government.
    Keywords: buyer-seller relationship, price satisfaction dimensions, cluster analysis, dairy industry, Malaysia, marketing
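    The segmentation technique here is standard cluster analysis. As a hedged illustration, the following minimal k-means sketch groups producers on two invented features (herd size and a satisfaction score); the data and variables are made up, and the study's actual survey instrument and clustering procedure differ.

```python
# Toy sketch of producer segmentation via a minimal k-means on two
# invented features: (herd size, relationship-satisfaction score).
# Purely illustrative; not the study's data or method.

import math

def kmeans(points, centers, iters=10):
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        # Assign each point to its nearest center.
        for p in points:
            idx = min(range(len(centers)),
                      key=lambda i: math.dist(p, centers[i]))
            groups[idx].append(p)
        # Move each center to the mean of its group.
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else c0
                   for g, c0 in zip(groups, centers)]
    return centers, groups

producers = [(5, 2.1), (6, 2.4), (7, 2.0),     # small herds, low scores
             (40, 4.5), (45, 4.8), (50, 4.2)]  # large herds, high scores
centers, groups = kmeans(producers, centers=[(5, 2.1), (50, 4.2)])
print([len(g) for g in groups])  # → [3, 3]
```

With well-separated data like this, the two recovered groups correspond to the kind of "two well-defined clusters" finding the abstract reports.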

    Quality-Aware Tooling

    Programming is a fascinating activity that can yield results capable of changing people's lives by automating daily tasks or even completely reimagining how we perform certain activities. Such great power comes with a handful of challenges, software maintainability being one of them. Maintainability cannot be validated by executing the program; it has to be assessed by analyzing the codebase. This tedious task can itself be automated by means of software: programs called static analyzers process source code and try to detect suspicious patterns. While these programs have proven to be useful, there is also evidence that they are not used in practice. In this dissertation we discuss the concept of quality-aware tooling, an approach that seeks to promote static analysis by seamlessly integrating it into development tools. We describe our experience of applying quality-aware tooling to the core distribution of a development environment. Our main focus is to provide live quality feedback in the code editor, but we also integrate static analysis into other tools based on our code quality model. We analyzed the attitude of the developers towards the integrated static analysis and assessed the impact of the integration on the development ecosystem. As a result, 90% of software developers found the live feedback useful, the quality rules received an overhaul to better match contemporary development practices, and some developers even experimented with custom analysis implementations. We discovered that live feedback helped developers avoid dangerous mistakes, saved time, and taught valuable concepts. Most importantly, we changed the developers' attitude towards static analysis: from viewing it as just another tool to seeing it as an integral part of their toolset
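    A minimal sketch of the kind of rule such a static analyzer might run as live feedback, shown here with Python's ast module. This is a generic example of detecting a suspicious pattern, not one of the dissertation's actual quality rules.

```python
# Generic static-analysis rule sketch: flag bare `except:` clauses,
# a classic suspicious pattern, by walking the syntax tree.

import ast

def find_bare_excepts(source):
    """Return the line numbers of bare `except:` handlers."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

code = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))  # → [4]
```

An editor integration would run such rules on every change and surface the reported line numbers inline, which is the "live quality feedback" the abstract describes.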