2,461 research outputs found

    DEEP: a provenance-aware executable document system

    The concept of executable documents is attracting growing interest from both academics and publishers, since it is a promising technology for the dissemination of scientific results. Provenance is a kind of metadata that provides a rich description of the derivation history of data products, starting from their original sources. It has been used in many different e-Science domains and has shown great potential in enabling the reproducibility of scientific results. However, while both executable documents and provenance are aimed at enhancing the dissemination of scientific results, little has been done to explore the integration of the two techniques. In this paper, we introduce the design and development of DEEP, an executable document environment that generates scientific results dynamically and interactively, and also records the provenance for these results in the document. In this system, provenance is exposed to users via an interface that provides them with an alternative way of navigating the executable document. In addition, we make use of the provenance to offer a document rollback facility to users and to help manage the system's dynamic resources.
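The provenance described above is essentially a derivation graph: each data product records the operation and inputs that produced it. A minimal sketch of that idea (illustrative names only, not DEEP's actual API) might look like this:

```python
# Illustrative sketch (not DEEP's actual API): each data product keeps a
# record of the operation and the upstream products it was derived from,
# so the derivation history can be walked back to the original sources —
# the basis for provenance navigation and rollback.
from dataclasses import dataclass, field

@dataclass
class ProvRecord:
    name: str
    operation: str                               # how this product was derived
    sources: list = field(default_factory=list)  # upstream ProvRecords

    def lineage(self):
        """Yield the derivation history, original sources first."""
        for src in self.sources:
            yield from src.lineage()
        yield self.name

raw = ProvRecord("raw_data", "imported")
cleaned = ProvRecord("cleaned", "filter_outliers", [raw])
figure = ProvRecord("figure_1", "plot", [cleaned])

print(list(figure.lineage()))  # ['raw_data', 'cleaned', 'figure_1']
```

Rolling a document back then amounts to discarding products downstream of a chosen record and re-executing from its recorded sources.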

    Interaction-aware development environments: recording, mining, and leveraging IDE interactions to analyze and support the development flow

    Nowadays, software development is largely carried out using Integrated Development Environments, or IDEs. An IDE is a collection of tools and facilities to support the most diverse software engineering activities, such as writing code, debugging, and program understanding. The fact that they are integrated enables developers to find all the tools needed for development in the same place. Each activity is composed of many basic events, such as clicking on a menu item in the IDE, opening a new user interface to browse the source code of a method, or adding a new statement in the body of a method. While working, developers generate thousands of these interactions, which we call fine-grained IDE interaction data. We believe this data is a valuable source of information that can be leveraged to enable better analyses and to offer novel support to developers. However, this data is largely neglected by modern IDEs. In this dissertation we propose the concept of "Interaction-Aware Development Environments": IDEs that collect, mine, and leverage the interactions of developers to support and simplify their workflow. We formulate our thesis as follows: Interaction-Aware Development Environments enable novel and in-depth analyses of the behavior of software developers and set the ground to provide developers with effective and actionable support for their activities inside the IDE. For example, by monitoring how developers navigate source code, the IDE could suggest the program entities that are potentially relevant for a particular task. Our research focuses on three main directions: 1. Modeling and Persisting Interaction Data. The first step to make IDEs aware of interaction data is to overcome its ephemeral nature. To do so we have to model this new source of data and to persist it, making it available for further use. 2. Interpreting Interaction Data. 
One of the biggest challenges of our research is making sense of the millions of interactions generated by developers. We propose several models to interpret this data, for example, by reconstructing high-level development activities from interaction histories or by measuring the navigation efficiency of developers. 3. Supporting Developers with Interaction Data. Novel IDEs can use the potential of interaction data to support software development. For example, they can identify the UI components that are potentially unnecessary in the future and suggest that developers close them, reducing visual clutter in the IDE.
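The first two directions — persisting fine-grained events and reconstructing high-level activities from them — can be sketched in a few lines. The event fields and the idle-gap heuristic below are illustrative assumptions, not the dissertation's actual model:

```python
# Illustrative sketch (hypothetical field names and threshold): IDE
# interactions persisted as timestamped records, plus a simple
# interpreter that reconstructs high-level activity sessions by
# splitting the history wherever the idle gap exceeds a threshold.
from dataclasses import dataclass

@dataclass
class IDEEvent:
    timestamp: float  # seconds since session start
    kind: str         # e.g. "click", "open_editor", "edit"
    target: str       # the UI element or program entity involved

def split_sessions(events, max_gap=300.0):
    """Group events into activity sessions separated by idle gaps > max_gap."""
    sessions, current = [], []
    for ev in sorted(events, key=lambda e: e.timestamp):
        if current and ev.timestamp - current[-1].timestamp > max_gap:
            sessions.append(current)
            current = []
        current.append(ev)
    if current:
        sessions.append(current)
    return sessions

history = [
    IDEEvent(0.0, "open_editor", "Parser.parse"),
    IDEEvent(12.5, "edit", "Parser.parse"),
    IDEEvent(900.0, "click", "debugger"),  # long idle gap before this event
]
print(len(split_sessions(history)))  # 2
```

Once events are grouped into sessions, metrics such as navigation efficiency or per-entity attention become simple aggregations over each group.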

    Navigation

    Series: Concepts of the Digital Image. The DFG priority programme 'Das digitale Bild' ('The Digital Image') examines, from a multi-perspective standpoint, the central role that the image plays in the complex process of the digitisation of knowledge. Within a Germany-wide network, a new theory and practice of computer-based image worlds is to be developed.

    An evaluation of the ‘open source internet research tool’: a user-centred and participatory design approach with UK law enforcement

    As part of their routine investigations, law enforcement conducts open source research; that is, investigating and researching using publicly available information online. Historically, the notion of collecting open sources of information is as ingrained as the concept of intelligence itself. However, utilising open source research in UK law enforcement is a relatively new concept, not generally, or practically, considered until after the civil unrest seen in the UK's major cities in the summer of 2011. While open source research rests on the understanding of being 'publicly available', there are legal, ethical and procedural issues that law enforcement must consider. This raises the following main research question: What constraints do law enforcement face when conducting open source research? From a legal perspective, law enforcement officials must ensure their actions are necessary and proportionate, more so where an individual's privacy is concerned under human rights legislation and data protection laws such as the General Data Protection Regulation. Privacy issues appear, though, when considering the boom and usage of social media, where lines can be easily blurred as to what is public and private. Guidance from the Association of Chief Police Officers (ACPO) and, now, the National Police Chiefs' Council (NPCC) tends to be non-committal in tone, but nods towards obtaining legal authorisation under the Regulation of Investigatory Powers Act (RIPA) 2000 when conducting what may be 'directed surveillance'. RIPA, however, pre-dates the modern era of social media by several years, so its applicability as the de facto piece of legislation for conducting higher levels of open source research is called into question. Twenty-two semi-structured interviews with law enforcement officials were conducted, and these revealed a grey area surrounding legal authorities when conducting open source research. 
From a technical and procedural aspect of conducting open source research, officers used a variety of software tools that varied in both price and quality, with no standard toolset. This was evidenced by 20 questionnaire responses from 12 police forces within the UK. In an attempt to bring about standardisation, the College of Policing's Researching, Identifying and Tracing the Electronic Suspect (RITES) course recommended several capturing and productivity tools. Trainers on the RITES course, however, soon discovered the cognitive overload this placed on the cohort, who would often spend more time learning to use the tools than learning about open source research techniques. The problem highlighted above prompted the creation of the Open Source Internet Research Tool (OSIRT), an all-in-one browser for conducting open source research. OSIRT's creation followed the user-centred design (UCD) method, with two phases of development using the software engineering methodologies 'throwaway prototyping', for the prototype version, and 'incremental and iterative development' for the release version. OSIRT has since been integrated into the RITES course, which trains over 100 officers a year and provides a feedback outlet for OSIRT. System Usability Scale questionnaires administered on RITES courses have shown OSIRT to be usable, with positive feedback. Beyond the RITES course, surveys, interviews and observations also show OSIRT makes an impact on everyday policing and has reduced the burden officers faced when conducting open source research. OSIRT's impact now reaches beyond the UK and sees usage across the globe. OSIRT contributes to law enforcement output in countries such as the USA, Canada, Australia and Israel, demonstrating that OSIRT's usefulness and necessity are not limited to UK law enforcement. This thesis makes several contributions, both academically and from a practical perspective, to law enforcement. 
The main contributions are:
    • Discussion and analysis of the constraints UK law enforcement face when conducting open source research from a legal, ethical and procedural perspective.
    • Discussion, analysis and reflective discourse surrounding the development of a software tool for law enforcement and the challenges faced in what is a unique development.
    • An approach to collaborating with those in 'closed' environments, such as law enforcement, to create bespoke software. Additionally, this approach offers a method of measuring the value and usefulness of OSIRT with UK law enforcement.
    • The creation and integration of OSIRT into law enforcement and law enforcement training packages.
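The System Usability Scale questionnaires mentioned above follow a fixed, well-known scoring rule: ten 1–5 Likert items, where odd-numbered (positively worded) items contribute (rating − 1), even-numbered (negatively worded) items contribute (5 − rating), and the total is scaled by 2.5 to a 0–100 score. A minimal sketch:

```python
# A minimal sketch of standard System Usability Scale (SUS) scoring,
# the instrument used on the RITES course to evaluate OSIRT.
def sus_score(responses):
    """responses: ten Likert ratings (1-5), in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded; even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw total to 0-100

# Fully positive answers (5 on odd items, 1 on even items) score 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

A common interpretive benchmark is that scores above roughly 68 indicate above-average usability.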

    The Multimodal Tutor: Adaptive Feedback from Multimodal Experiences

    This doctoral thesis describes the journey of ideation, prototyping and empirical testing of the Multimodal Tutor, a system designed to provide digital feedback that supports psychomotor skill acquisition using machine learning and multimodal data capture. The feedback is given in real time, with machine-driven assessment of the learner's task execution. The predictions are produced by supervised machine learning models trained on human-annotated samples. The main contributions of this thesis are: a literature survey on multimodal data for learning, a conceptual model (the Multimodal Learning Analytics Model), a technological framework (the Multimodal Pipeline), a data annotation tool (the Visual Inspection Tool) and a case study in Cardiopulmonary Resuscitation training (CPR Tutor). The CPR Tutor generates real-time, adaptive feedback using kinematic and myographic data and neural networks.
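The supervised pipeline described above — human-annotated samples used to train a model that then classifies a learner's execution in real time — can be illustrated with a deliberately simple stand-in. The features and labels below are hypothetical, and a nearest-centroid rule stands in for the thesis's neural networks:

```python
# Illustrative sketch (hypothetical features, not the CPR Tutor's actual
# pipeline): a model trained on human-annotated kinematic samples that
# classifies a chest-compression window so feedback can be generated in
# real time. A nearest-centroid rule stands in for a neural network.
def train_centroids(samples):
    """samples: list of (feature_vector, label) pairs annotated by humans."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

# Hypothetical features: (compression depth in cm, rate per minute).
annotated = [((5.2, 110), "correct"), ((5.6, 105), "correct"),
             ((3.1, 80), "too_shallow"), ((2.8, 75), "too_shallow")]
model = train_centroids(annotated)
print(classify(model, (5.0, 108)))  # correct
```

In the real system, each predicted label would be mapped to an adaptive feedback message delivered to the learner as they practise.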

    Towards an automated photogrammetry-based approach for monitoring and controlling construction site activities

    The construction industry has a poor productivity record, which is predominantly ascribed to inadequate monitoring of how a project is progressing at any given time. Most available approaches do not offer key stakeholders a shared understanding of project performance in real time and, as a result, fail to identify slippage against the original schedule. This study reports on the development of a novel automated system for monitoring, updating and controlling construction site activities in real time. The proposed system seeks to harness advances in close-range photogrammetry, BIM and computer vision to deliver an original approach capable of continuously monitoring construction activities, with the progress status determinable, at any given time, throughout the construction stage. The research adopted a sequential mixed-methods strategy, pursuant to the standard design science processes, in three stages. The first stage involved interviews within a focus group setting with seven carefully selected construction professionals. Their answers were analysed and provided the informed basis for developing the automated system for detecting and notifying delays in construction projects. The second stage involved the development of a proof of concept in a pilot project case study with nine potential users of the proposed automated system. Face-to-face interviews were conducted to evaluate and verify the effectiveness of the developed prototype, which was continuously refined and improved according to the users' comments and feedback. Within this stage, the prototype to be tested and evaluated by representative construction professionals was developed. Subsequently, a sub-stage of the system's development sought to test and validate the final version of the system in the context of a real-life construction project in Dubai, whereby an online survey was administered to 40 users, a representative sample of potential system users. 
The third stage addressed the conclusions, limitations and recommendations for further research on the proposed system. The findings of the study revealed that, once the system is installed and programmed, it requires no expertise or manual intervention. This is mainly because all of the system's processes are fully automated: data collection, interpretation, analysis and notification proceed without human intervention. Consequently, human error and subjectivity are eliminated, and the system accordingly achieved a significantly high level of accuracy, automation and reliability. The system achieved an accuracy of 99.97% for horizontal construction elements and exceeded 99.70% for vertical elements. The findings also highlighted that the developed system is inexpensive, easy to operate, and that its accuracy surpasses that of current systems that seek to automate the monitoring and updating of progress status for construction projects. The distinctive features of the proposed system assisted the site team in completing the project 61 days ahead of its contractual completion date, with a 9% time saving and a 3% cost saving. The proposed system has the potential to identify any deviation from the as-planned construction schedule and to prompt action in response to its automatic notifications, which inform decision-makers via email and SMS.
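At its core, the deviation-detection step described above compares the BIM as-planned schedule against the as-built elements detected from site photogrammetry at a given date, and flags anything overdue. The element identifiers and data shapes below are illustrative assumptions, not the study's actual implementation:

```python
# Hedged sketch (illustrative names only): the comparison step of an
# automated progress monitor. Elements detected as built on site are
# matched against the BIM as-planned schedule; any element past its
# planned completion date but not yet detected triggers a delay
# notification to decision-makers.
from datetime import date

def find_delays(planned, detected, today):
    """planned: {element_id: planned_completion_date};
    detected: set of element ids observed as built on site."""
    return sorted(eid for eid, due in planned.items()
                  if due <= today and eid not in detected)

planned = {"wall_A": date(2024, 3, 1),
           "slab_2": date(2024, 3, 10),
           "col_7": date(2024, 4, 1)}
detected = {"wall_A"}  # e.g. elements recognised in today's photo survey
print(find_delays(planned, detected, date(2024, 3, 15)))  # ['slab_2']
```

In the full system, the `detected` set would be produced by the photogrammetry and computer-vision pipeline, and each flagged element would generate the email and SMS notifications the abstract describes.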