
    Process Mining Concepts for Discovering User Behavioral Patterns in Instrumented Software

    Process Mining is a technique for discovering “in-use” processes from traces emitted to event logs. Researchers have recently explored applying this technique to documenting processes discovered in software applications. However, the requirements for emitting events to support Process Mining against software applications have not been well documented. Furthermore, the linking of end-user intentional behavior to software quality, as demonstrated in the discovered processes, has not been well articulated. After evaluating the literature, this thesis suggested focusing on user goals and actual, in-use processes as input to an Agile software development life cycle in order to improve software quality. It also provided suggestions for instrumenting software applications to support Process Mining techniques.
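    As an illustration only (not drawn from the thesis itself), the sketch below shows the minimal kind of instrumentation such work assumes: each user action is emitted as an event carrying a case identifier, an activity name, and a timestamp, which is the minimum that most process-discovery tools expect. The `EventLogger` class, the CSV layout, and the checkout activities are hypothetical.

```python
import csv
from datetime import datetime, timezone

class EventLogger:
    """Minimal event emitter producing a process-mining-friendly log.

    Each record carries a case id (one user session or business case),
    an activity name, and a timestamp. Names and format are illustrative.
    """

    def __init__(self, path):
        self._file = open(path, "a", newline="")
        self._writer = csv.writer(self._file)

    def emit(self, case_id, activity):
        # One row per user action; process discovery tools group rows by case id.
        self._writer.writerow(
            [case_id, activity, datetime.now(timezone.utc).isoformat()]
        )
        self._file.flush()

    def close(self):
        self._file.close()

# Example: logging a user's path through a (hypothetical) checkout flow.
log = EventLogger("events.csv")
log.emit("session-42", "open_cart")
log.emit("session-42", "enter_payment")
log.emit("session-42", "confirm_order")
log.close()
```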

    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers, representing current work in the community organized across four process axes of traceability practice. The sessions covered topics from Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community, and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and for it to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of Software and Systems traceability researchers as we move forward into the next decade of research.

    Automatic performance optimisation of component-based enterprise systems via redundancy

    Component technologies, such as J2EE and .NET, have been extensively adopted for building complex enterprise applications. These technologies help address complex functionality and flexibility problems and reduce development and maintenance costs. Nonetheless, current component technologies provide little support for predicting and controlling the emerging performance of software systems that are assembled from distinct components. Static component testing and tuning procedures provide insufficient performance guarantees for components deployed and run in diverse assemblies, under unpredictable workloads and on different platforms. Often, there is no single component implementation or deployment configuration that can yield optimal performance in all possible conditions under which a component may run. Manually optimising and adapting complex applications to changes in their running environment is a costly and error-prone management task. The thesis presents a solution for automatically optimising the performance of component-based enterprise systems. The proposed approach is based on the alternate usage of multiple component variants with equivalent functional characteristics, each one optimised for a different execution environment. A management framework automatically administers the available redundant variants and adapts the system to external changes. The framework uses runtime monitoring data to detect performance anomalies and significant variations in the application's execution environment. It automatically adapts the application so as to use the optimal component configuration under the current running conditions. An automatic clustering mechanism analyses monitoring data and infers information on the components' performance characteristics. System administrators use decision policies to state high-level performance goals and configure system management processes. A framework prototype has been implemented and tested for automatically managing a J2EE application. The obtained results demonstrate the framework's capability to successfully manage a software system without human intervention. The management overhead induced during normal system execution and through management operations indicates the framework's feasibility.
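    As a rough illustration of the adaptation idea described above (not the thesis framework itself), the sketch below selects among functionally equivalent component variants using previously observed performance profiles and a crude workload classifier. All names, thresholds, and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    # Mean response time (ms) observed for this variant under each workload
    # class, e.g. {"low": 12.0, "high": 85.0}; values are illustrative only.
    profile: dict

def classify_workload(requests_per_sec, threshold=500):
    """Crude workload classifier; a real framework might use clustering."""
    return "high" if requests_per_sec >= threshold else "low"

def select_variant(variants, requests_per_sec):
    """Pick the functionally equivalent variant with the lowest expected
    response time for the current workload class."""
    workload = classify_workload(requests_per_sec)
    return min(variants, key=lambda v: v.profile.get(workload, float("inf")))

variants = [
    Variant("cache-heavy", {"low": 20.0, "high": 40.0}),
    Variant("lock-free",   {"low": 12.0, "high": 90.0}),
]

# Monitoring reports 800 req/s: the loop would swap in the "cache-heavy" variant.
print(select_variant(variants, 800).name)
```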

    Modulating application behaviour for closely coupled intrusion detection

    This thesis presents a security measure that is closely coupled to applications. This distinguishes it from conventional security measures, which tend to operate at the infrastructure level (network, operating system or virtual machine). Such lower-level mechanisms exhibit a number of limitations; among others, they are poorly suited to monitoring applications that operate on encrypted data or to enforcing security policies involving abstractions introduced by applications. In order to address these problems, the thesis proposes externalising the security-related analysis functions performed by applications. These otherwise remain hidden in applications and so are likely to be underdeveloped, inflexible or insular. It is argued that these deficiencies have resulted in an over-reliance on infrastructure security components.
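    The sketch below is a loose, hypothetical illustration of the externalisation idea: the application reports security-relevant events, expressed in its own abstractions, to a detector outside the application code, which applies a simple rate-based policy. The class, threshold, and scenario are assumptions for illustration, not the thesis design.

```python
# Hypothetical sketch: an application reports security-relevant events,
# expressed in its own abstractions (user, action, resource), to an
# external detector instead of burying the checks inside application code.

SUSPICIOUS_RATE = 20  # downloads per reporting window; illustrative threshold

class ExternalDetector:
    def __init__(self):
        self._downloads = {}  # user -> download count in the current window

    def report(self, user, action, resource):
        if action == "download":
            count = self._downloads.get(user, 0) + 1
            self._downloads[user] = count
            if count > SUSPICIOUS_RATE:
                return "alert"  # e.g. possible bulk export of documents
        return "ok"

detector = ExternalDetector()
for i in range(25):
    verdict = detector.report("alice", "download", f"doc-{i}")
print(verdict)  # "alert" once the per-user rate exceeds the threshold
```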

    Continuous Experimentation for Automotive Software on the Example of a Heavy Commercial Vehicle in Daily Operation

    As the automotive industry focuses its attention more and more on the software functionality of vehicles, techniques to deliver new software value at a fast pace are needed. Continuous Experimentation, a practice coming from the world of web-based systems, is one such technique. It enables researchers and developers to use real-world data to verify their hypotheses and to steer software evolution based on performance and user preferences, reducing the reliance on simulations and guesswork. Several challenges prevent the verbatim adoption of this practice on automotive cyber-physical systems, e.g., safety concerns and limited computational resources; nonetheless, the automotive field is starting to take an interest in this technique. This work aims at demonstrating and evaluating a prototypical Continuous Experimentation infrastructure, implemented on a distributed computational system housed in a commercial truck tractor that is used in daily operations by a logistics company on public roads. The system comprises computing units and sensors, and software deployment and data retrieval are only possible remotely via a mobile data connection due to the commercial interests of the logistics company. This study shows that the proposed experimentation process enabled the development team to base software development choices on the real-world data collected during the experimental procedure. Additionally, a set of previously identified design criteria for enabling Continuous Experimentation on automotive systems was discussed and their validity confirmed in the light of the presented work. Comment: Paper accepted to the 14th European Conference on Software Architecture (ECSA 2020). 16 pages, 5 figures.
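    The following sketch illustrates, in a heavily simplified and hypothetical form, one round of such an experiment: two software variants each produce measurements of a metric of interest, the collected data are compared, and the outcome informs the next development decision. The function names, metric, and numbers are invented for illustration and are not taken from the paper.

```python
import random
import statistics

# Hypothetical sketch of a Continuous Experimentation round: two software
# variants run on the vehicle's computing units, each logs a metric of
# interest, and the collected data decide which variant is kept.

def collect_metric(variant, samples=50):
    """Stand-in for data retrieved over the mobile link; in a real setup
    this would read measurements produced by the deployed variant."""
    baseline = 0.80 if variant == "A" else 0.86
    return [baseline + random.gauss(0, 0.02) for _ in range(samples)]

results = {v: statistics.mean(collect_metric(v)) for v in ("A", "B")}
winner = max(results, key=results.get)
print(f"mean metric per variant: {results}, keep variant {winner}")
```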

    Integrating a smart city testbed into a large-scale heterogeneous federation of future internet experimentation facilities: the SmartSantander approach

    For some years already, there has been a plethora of research initiatives throughout the world that have deployed diverse experimentation facilities for Future Internet technologies research and development. While access to these testbeds has sometimes been restricted to the specific research community supporting them, opening them to different communities can not only help those infrastructures achieve a wider impact, but also better identify new possibilities based on novel considerations brought by those external users. On top of the individual testbeds, supporting experiments that employ several of them in a combined and seamless fashion has been one of the main objectives of different transcontinental research initiatives, such as FIRE in Europe or GENI in the United States. In particular, the Fed4FIRE project and its continuation, Fed4FIRE+, have emerged as "best-in-town" projects to federate heterogeneous experimentation platforms. This paper presents the most relevant aspects of the integration of a large-scale testbed in the IoT domain within the Fed4FIRE+ federation. It revolves around the adaptation carried out on the SmartSantander smart city testbed. Additionally, the paper offers an overview of the different federation models that Fed4FIRE+ proposes to testbed owners in order to provide a complete view of the involved technologies. The paper also presents a survey of how several specific research platforms from different experimentation domains have fulfilled the federation task following Fed4FIRE+ concepts. This work was partially funded by the European project Federation for FIRE Plus (Fed4FIRE+) from the European Union’s Horizon 2020 Programme under Grant Agreement No. 732638 and by the Spanish Government (MINECO) by means of the projects ADVICE: Dynamic provisioning of connectivity in high density 5G wireless scenarios (TEC2015-71329-C2-1-R) and Future Internet Enabled Resilient Cities (FIERCE).

    Augmenting Network Flows with User Interface Context to Inform Access Control Decisions

    Whitelisting IP addresses and hostnames allows organizations to employ a default-deny approach to network traffic. Organizations employing a default-deny approach can stop many malicious threats, including zero-day attacks, because it only allows explicitly stated legitimate activities. However, creating a comprehensive whitelist for a default-deny approach is difficult because of user-supplied destinations that can only be known at the time of usage. Whitelists therefore interfere with user experience by denying network traffic to user-supplied legitimate destinations. In this thesis, we focus on creating dynamic whitelists that are capable of allowing user-supplied network activity. We designed and built a system called Harbinger, which leverages user interface activity to provide contextual information about the conditions in which network activity took place. We built Harbinger for Microsoft Windows operating systems and have tested its usability and effectiveness on four popular Microsoft applications. We find that Harbinger can reduce false positive rates in IP and DNS whitelists from 44%-54% to 0%-0.4%. Furthermore, while traditional whitelists failed to detect propagation attacks, Harbinger detected the same attacks 96% of the time. We find that our system introduced six milliseconds of delay or less for 96% of network activity.
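    The sketch below illustrates the general idea of a context-aware whitelist decision in the spirit of Harbinger, though it is not the actual system: a connection to a destination outside the static whitelist is permitted only if recent user-interface activity referenced that destination. The names, data structures, and time window are assumptions.

```python
import time

# Hypothetical sketch of a dynamic, UI-context-aware whitelist decision:
# traffic to destinations outside the static whitelist is allowed only if
# recent user-interface activity (e.g. a typed URL) supplied the destination.

STATIC_WHITELIST = {"updates.example.com"}
UI_CONTEXT_WINDOW = 10.0  # seconds; illustrative

ui_events = []  # (timestamp, destination typed or clicked by the user)

def record_ui_destination(destination):
    ui_events.append((time.time(), destination))

def allow_connection(destination):
    if destination in STATIC_WHITELIST:
        return True
    now = time.time()
    return any(
        dest == destination and now - ts <= UI_CONTEXT_WINDOW
        for ts, dest in ui_events
    )

record_ui_destination("intranet.corp.example")    # user typed this URL
print(allow_connection("intranet.corp.example"))  # True: user-supplied context
print(allow_connection("evil.example.net"))       # False: no UI context, deny
```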