Process Mining Concepts for Discovering User Behavioral Patterns in Instrumented Software
Process Mining is a technique for discovering "in-use" processes from traces emitted to event logs. Researchers have recently explored applying this technique to documenting processes discovered in software applications. However, the requirements for emitting events to support Process Mining against software applications have not been well documented. Furthermore, the link between end-users' intentional behavior and software quality, as demonstrated in the discovered processes, has not been well articulated. After evaluating the literature, this thesis suggests focusing on user goals and actual, in-use processes as an input to an Agile software development life cycle in order to improve software quality. It also provides suggestions for instrumenting software applications to support Process Mining techniques.
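The instrumentation requirement described above boils down to each emitted event carrying at least a case identifier, an activity name, and a timestamp. A minimal sketch of how such a log supports discovery (the event fields and activity names here are illustrative, not the thesis's actual schema) is the directly-follows relation, a common starting point for process-discovery algorithms:

```python
from collections import defaultdict

# Hypothetical event log: (case_id, activity, timestamp) tuples, the
# minimum fields an instrumented application must emit for discovery.
events = [
    ("user-1", "open_report", 1), ("user-1", "filter", 2), ("user-1", "export", 3),
    ("user-2", "open_report", 1), ("user-2", "export", 2),
]

def directly_follows(log):
    """Count pairs (a, b) where activity b directly follows a in a case."""
    by_case = defaultdict(list)
    for case, activity, ts in sorted(log, key=lambda e: (e[0], e[2])):
        by_case[case].append(activity)
    pairs = defaultdict(int)
    for trace in by_case.values():
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] += 1
    return dict(pairs)
```

From these counts a discovery algorithm can reconstruct the in-use process model; real deployments would use a standard log format such as XES and a dedicated library rather than this toy relation.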
Grand Challenges of Traceability: The Next Ten Years
In 2007, the software and systems traceability community met at the first
Natural Bridge symposium on the Grand Challenges of Traceability to establish
and address research goals for achieving effective, trustworthy, and ubiquitous
traceability. Ten years later, in 2017, the community came together to evaluate
a decade of progress towards achieving these goals. These proceedings document
some of that progress. They include a series of short position papers,
representing current work in the community organized across four process axes
of traceability practice. The sessions covered topics from Trace Strategizing,
Trace Link Creation and Evolution, Trace Link Usage, real-world applications of
Traceability, and Traceability Datasets and benchmarks. Two breakout groups
focused on the importance of creating and sharing traceability datasets within
the research community, and discussed challenges related to the adoption of
tracing techniques in industrial practice. Members of the research community
are engaged in many active, ongoing, and impactful research projects. Our hope
is that ten years from now we will be able to look back at a productive decade
of research and claim that we have achieved the overarching Grand Challenge of
Traceability, which seeks for traceability to be always present, built into the
engineering process, and for it to have "effectively disappeared without a
trace". We hope that others will see the potential that traceability has for
empowering software and systems engineers to develop higher-quality products at
increasing levels of complexity and scale, and that they will join the active
community of Software and Systems traceability researchers as we move forward
into the next decade of research.
Automatic performance optimisation of component-based enterprise systems via redundancy
Component technologies, such as J2EE and .NET have been extensively adopted for building complex enterprise applications. These technologies help address complex functionality and flexibility problems and reduce development and maintenance costs. Nonetheless, current component technologies provide little support for predicting and controlling the emerging performance of software systems that are assembled from distinct components.
Static component testing and tuning procedures provide insufficient performance guarantees for components deployed and run in diverse assemblies, under unpredictable workloads and on different platforms. Often, there is no single component implementation or deployment configuration that can yield optimal performance in all possible conditions under which a component may run. Manually optimising and adapting complex applications to changes in their running environment is a costly and error-prone management task.
The thesis presents a solution for automatically optimising the performance of component-based enterprise systems. The proposed approach is based on the alternate usage of multiple component variants with equivalent functional characteristics, each one optimized for a different execution environment. A management framework automatically administers the available redundant variants and adapts the system to external changes. The framework uses runtime monitoring data to detect performance anomalies and significant variations in the application's execution environment. It automatically adapts the application so as to use the optimal component configuration under the current running conditions. An automatic clustering mechanism analyses monitoring data and infers information on the components' performance characteristics. System administrators use decision policies to state high-level performance goals and configure system management processes.
A framework prototype has been implemented and tested for automatically managing a J2EE application. The results demonstrate the framework's capability to manage a software system without human intervention. The management overhead induced during normal system execution and through management operations indicates the framework's feasibility.
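The core adaptation idea in the abstract above can be sketched as a selection function over functionally equivalent variants, driven by runtime monitoring data. The variant names, metrics, and thresholds below are purely illustrative assumptions, not the thesis's actual decision policies:

```python
# Illustrative sketch: pick the redundant component variant best suited
# to the current execution environment, falling back to a baseline.
# Suitability predicates stand in for the thesis's decision policies.
VARIANT_POLICIES = {
    "cache_heavy": lambda m: m["read_ratio"] > 0.8,      # read-dominated load
    "lock_free":   lambda m: m["concurrent_users"] > 100, # high concurrency
}
DEFAULT_VARIANT = "baseline"

def select_variant(metrics):
    """Return the first variant whose policy matches the monitored metrics."""
    for name, suits in VARIANT_POLICIES.items():
        if suits(metrics):
            return name
    return DEFAULT_VARIANT
```

In the actual framework this decision would be fed by the clustering mechanism's inferred performance characteristics and would trigger a redeployment of the chosen variant, rather than a simple lookup.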
Modulating application behaviour for closely coupled intrusion detection
This thesis presents a security measure that is closely coupled to applications. This distinguishes it from conventional security measures, which tend to operate at the infrastructure level (network, operating system, or virtual machine). Such lower-level mechanisms exhibit a number of limitations; among others, they are poorly suited to monitoring applications that operate on encrypted data or to enforcing security policies involving abstractions introduced by applications. In order to address these problems, the thesis proposes externalising the security-related analysis functions performed by applications. These otherwise remain hidden in applications and so are likely to be underdeveloped, inflexible, or insular. It is argued that these deficiencies have resulted in an over-reliance on infrastructure security components.
Continuous Experimentation for Automotive Software on the Example of a Heavy Commercial Vehicle in Daily Operation
As the automotive industry focuses its attention more and more towards the
software functionality of vehicles, techniques to deliver new software value at
a fast pace are needed. Continuous Experimentation, a practice coming from the
web-based systems world, is one of such techniques. It enables researchers and
developers to use real-world data to verify their hypothesis and steer the
software evolution based on performances and user preferences, reducing the
reliance on simulations and guesswork. Several challenges prevent the verbatim
adoption of this practice on automotive cyber-physical systems, e.g., safety
concerns and limitations from computational resources; nonetheless, the
automotive field is starting to take interest in this technique. This work aims
at demonstrating and evaluating a prototypical Continuous Experimentation
infrastructure, implemented on a distributed computational system housed in a
commercial truck tractor that is used in daily operations by a logistic company
on public roads. The system comprises computing units and sensors, and software
deployment and data retrieval are only possible remotely via a mobile data
connection due to the commercial interests of the logistics company. This study
shows that the proposed experimentation process resulted in the development
team being able to base software development choices on the real-world data
collected during the experimental procedure. Additionally, a set of previously
identified design criteria to enable Continuous Experimentation on automotive
systems was discussed and their validity confirmed in the light of the
presented work.
Comment: Paper accepted to the 14th European Conference on Software Architecture (ECSA 2020). 16 pages, 5 figures.
An i2b2-based, generalizable, open source, self-scaling chronic disease registry
Objective: Registries are a well-established mechanism for obtaining high quality, disease-specific data, but are often highly project-specific in their design, implementation, and policies for data use. In contrast to the conventional model of centralized data contribution, warehousing, and control, we design a self-scaling registry technology for collaborative data sharing, based upon the widely adopted Informatics for Integrating Biology & the Bedside (i2b2) data warehousing framework and the Shared Health Research Information Network (SHRINE) peer-to-peer networking software. Materials and methods: Focusing our design around creation of a scalable solution for collaboration within multi-site disease registries, we leverage the i2b2 and SHRINE open source software to create a modular, ontology-based, federated infrastructure that provides research investigators full ownership and access to their contributed data while supporting permissioned yet robust data sharing. We accomplish these objectives via web services supporting peer-group overlays, group-aware data aggregation, and administrative functions. Results: The 56-site Childhood Arthritis & Rheumatology Research Alliance (CARRA) Registry and the 3-site Harvard Inflammatory Bowel Diseases Longitudinal Data Repository now utilize i2b2 self-scaling registry technology (i2b2-SSR). This platform, extensible to federation of multiple projects within and between research networks, encompasses >6000 subjects at sites throughout the USA. Discussion: We utilize the i2b2-SSR platform to minimize technical barriers to collaboration while enabling fine-grained control over data sharing. Conclusions: The implementation of i2b2-SSR for the multi-site, multi-stakeholder CARRA Registry has established a digital infrastructure for community-driven research data sharing in pediatric rheumatology in the USA. We envision i2b2-SSR as a scalable, reusable solution facilitating interdisciplinary research across diseases.
Integrating a smart city testbed into a large-scale heterogeneous federation of future internet experimentation facilities: the SmartSantander approach
For some years already, there has been a plethora of research initiatives throughout the world that have deployed diverse experimentation facilities for Future Internet technologies research and development. While access to these testbeds has sometimes been restricted to the specific research community supporting them, opening them to different communities can not only help those infrastructures achieve a wider impact, but also better identify new possibilities based on novel considerations brought by those external users. On top of the individual testbeds, supporting experiments that employ several of them in a combined and seamless fashion has been one of the main objectives of different transcontinental research initiatives, such as FIRE in Europe or GENI in the United States. In particular, the Fed4FIRE project and its continuation, Fed4FIRE+, have emerged as "best-in-town" projects to federate heterogeneous experimentation platforms. This paper presents the most relevant aspects of the integration of a large-scale testbed in the IoT domain within the Fed4FIRE+ federation. It revolves around the adaptation carried out on the SmartSantander smart city testbed. Additionally, the paper offers an overview of the different federation models that Fed4FIRE+ proposes to testbed owners in order to provide a complete view of the involved technologies. The paper also presents a survey of how several specific research platforms from different experimentation domains have fulfilled the federation task following Fed4FIRE+ concepts. This work was partially funded by the European project Federation for FIRE Plus (Fed4FIRE+) from the European Union's Horizon 2020 Programme with the Grant Agreement No. 732638 and by the Spanish Government (MINECO) by means of the projects ADVICE: Dynamic provisioning of connectivity in high density 5G wireless scenarios (TEC2015-71329-C2-1-R) and Future Internet Enabled Resilient Cities (FIERCE).
Augmenting Network Flows with User Interface Context to Inform Access Control Decisions
Whitelisting IP addresses and hostnames allows organizations to employ a default-deny approach to network traffic. Organizations employing a default-deny approach can stop many malicious threats, including zero-day attacks, because only explicitly stated legitimate activities are allowed. However, creating a comprehensive whitelist for a default-deny approach is difficult due to user-supplied destinations that can only be known at the time of usage. Whitelists, therefore, interfere with user experience by denying network traffic to user-supplied legitimate destinations. In this thesis, we focus on creating dynamic whitelists that are capable of allowing user-supplied network activity. We designed and built a system called Harbinger, which leverages user interface activity to provide contextual information about the setting in which network activity took place. We built Harbinger for Microsoft Windows operating systems and have tested its usability and effectiveness on four popular Microsoft applications. We find that Harbinger can reduce false-positive detection rates from 44%-54% to 0%-0.4% in IP and DNS whitelists. Furthermore, while traditional whitelists failed to detect propagation attacks, Harbinger detected the same attacks 96% of the time. We find that our system introduced six milliseconds of delay or less for 96% of network activity.
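The dynamic-whitelist idea in this abstract can be illustrated with a small sketch (this is not Harbinger's actual API; the class and method names are hypothetical): a default-deny check that consults a static whitelist plus destinations the user interface context shows were supplied by the user.

```python
# Hypothetical sketch of UI-context-augmented whitelisting: a
# default-deny network filter that also permits destinations the user
# explicitly typed into an application's interface.
class DynamicWhitelist:
    def __init__(self, static_hosts):
        self.static = set(static_hosts)   # administrator-curated entries
        self.dynamic = set()              # learned from UI context

    def observe_ui_input(self, host):
        """Record a destination supplied by the user in the UI."""
        self.dynamic.add(host)

    def allow(self, host):
        """Default-deny: permit only statically or dynamically whitelisted hosts."""
        return host in self.static or host in self.dynamic
```

A real implementation would additionally attribute each flow to the originating process and UI event and expire dynamic entries, which is where the measured false-positive reduction would come from.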