A mapping study on documentation in Continuous Software Development
Context: With the increase in Agile, Lean, and DevOps software development methodologies over recent years (collectively referred to as Continuous Software Development (CSD)), we have observed that documentation is often poor. Objective: This work aims at collecting studies on documentation challenges, documentation practices, and tools that can support documentation in CSD. Method: A systematic mapping study was conducted to identify and analyze research on documentation in CSD, covering publications between 2001 and 2019. Results: A total of 63 studies were selected. We found 40 studies related to documentation practices and challenges, and 23 studies related to tools used in CSD. The challenges include: informal documentation is hard to understand, documentation is considered waste, productivity is measured by working software only, documentation is out of sync with the software, and there is a short-term focus. The practices include: non-written and informal communication, the use of development artifacts for documentation, and the use of architecture frameworks. We also made an inventory of numerous tools that can be used for documentation purposes in CSD. Overall, we recommend the use of executable documentation, modern tools and technologies to retrieve information and transform it into documentation, and the practice of minimal documentation upfront combined with detailed design for knowledge transfer afterwards. Conclusion: It is of paramount importance to increase the quantity and quality of documentation in CSD. While this remains challenging, practitioners will benefit from applying the identified practices and tools in order to mitigate the stated challenges.
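One concrete form of the executable documentation recommended above is a specification that doubles as a test, so it cannot silently drift out of sync with the software. The sketch below uses Python doctests as a hypothetical illustration; the function and values are invented and not taken from the mapped studies.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount.

    The examples below are executable documentation: they describe the
    behaviour and are verified whenever the test suite runs.

    >>> apply_discount(100.0, 10)
    90.0
    >>> apply_discount(80.0, 0)
    80.0
    """
    return price * (1 - percent / 100)

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # fails if the documented examples no longer hold
```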
Interorganizational Information Systems: Systematic Literature Mapping Protocol
Organizations increasingly need to establish partnerships with other organizations to face changes in their environment and remain competitive. This interorganizational relationship allows organizations to share resources and collaborate to handle business opportunities better. This technical report presents the protocol of the systematic mapping study performed to understand what an interorganizational information system (IOIS) is and how these systems support interorganizational relationships.
A Methodology For Intelligent Honeypot Deployment And Active Engagement Of Attackers
Thesis (Ph.D.), University of Alaska Fairbanks, 2012. The internet has brought about tremendous changes in the way we see the world, allowing us to communicate at the speed of light, and dramatically changing the face of business forever. Organizations are able to share their business strategies and sensitive or proprietary information across the globe in order to create a sense of cohesiveness. This ability to share information across the vastness of the internet also allows attackers to exploit these different avenues to steal intellectual property or gather information vital to the national security of an entire nation. As technology advances to include more devices accessing an organization's network and as more business is handled via the internet, attackers' opportunities increase daily. Honeypots were created in response to this cyber warfare. Honeypots provide a technique to gather information about attackers performing reconnaissance on a network or device without the voluminous logs obtained by the majority of intrusion detection systems. This research effort provides a methodology to dynamically generate context-appropriate honeynets. Administrators are able to modify the system to conform to the target environment and gather the information passively or through increasing degrees of active scanning. The information obtained during the process of scanning the environment aids the administrator in creating a network topology and understanding the flux of devices in the network. This research continues the effort to defend an organization's networks against the onslaught of attackers.
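The dynamic, context-appropriate generation described above can be pictured as a two-step loop: observe which services nearby hosts actually expose, then emit honeypot configurations that mimic that environment. The sketch below is a minimal, hypothetical illustration using only the Python standard library; the port list, addresses, and configuration fields are assumptions rather than details from the thesis.

```python
import socket

COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3306: "mysql"}

def probe_host(host: str, ports=COMMON_PORTS, timeout: float = 0.5) -> list[int]:
    """Actively probe a host and return the ports that accept TCP connections."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

def honeynet_config(hosts: list[str]) -> list[dict]:
    """Derive a context-appropriate decoy per observed host profile."""
    config = []
    for host in hosts:
        services = [COMMON_PORTS[p] for p in probe_host(host)]
        if services:  # mimic only services that are actually present nearby
            config.append({"decoy_for": host, "emulated_services": services})
    return config

if __name__ == "__main__":
    print(honeynet_config(["192.0.2.10", "192.0.2.11"]))  # documentation-range IPs
```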
30 Years of Software Refactoring Research: A Systematic Literature Review
Peer Reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/155872/4/30YRefactoring.pd
Safeguarding Privacy Through Deep Learning Techniques
Over the last few years, there has been a growing need to meet minimum security and privacy requirements. Both public and private companies have had to comply with increasingly stringent standards, such as the ISO 27000 family, and with the various laws governing the management of personal data. The huge amount of data to be managed has demanded a great deal of effort from employees who, in the absence of automatic techniques, have had to work tirelessly to achieve the certification objectives. Unfortunately, because of the sensitive information contained in the documentation related to these problems, it is difficult, if not impossible, to obtain material for research and study purposes on which to experiment with new ideas and techniques for automating these processes, for example by exploiting current developments in the scientific community around ontologies and artificial intelligence for data management. To bypass this problem, we chose to examine data from the medical domain, which, largely for reasons related to the health of individuals, has gradually become more freely accessible over time. This choice does not affect the generality of the proposed methods, which can be reapplied to any field in which privacy-sensitive information must be managed.
30 Years of Software Refactoring Research: A Systematic Literature Review
Due to the growing complexity of software systems, there has been a dramatic increase in industry demand for tools and techniques for software refactoring over the last ten years. Refactoring is traditionally defined as a set of program transformations intended to improve the system design while preserving the behavior. Refactoring studies have expanded beyond code-level restructuring to be applied at different levels (architecture, model, requirements, etc.), adopted in many domains beyond the object-oriented paradigm (cloud computing, mobile, web, etc.), used in industrial settings, and directed at objectives beyond improving the design, including other non-functional requirements (e.g., performance and security). Thus, the challenges to be addressed by refactoring work nowadays go beyond code transformation to include, but are not limited to, scheduling the opportune time to carry out refactoring, recommending specific refactoring activities, detecting refactoring opportunities, and testing the correctness of applied refactorings. As a consequence, refactoring research efforts are fragmented over several research communities, various domains, and objectives. To structure the field and existing research results, this paper provides a systematic literature review and analyzes the results of 3183 research papers on refactoring covering the last three decades, offering the most scalable and comprehensive literature review of existing refactoring research studies. Based on this survey, we created a taxonomy to classify the existing research, identified research trends, and highlighted gaps in the literature and avenues for further research.
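To make the traditional, code-level definition above concrete, the fragment below sketches a hypothetical Extract Method refactoring in Python; the functions and figures are illustrative only and are not drawn from the surveyed papers.

```python
# Before: duplicated total-with-tax logic inline in two places.
def print_invoice(items):
    total = sum(price * qty for price, qty in items) * 1.20
    print(f"Invoice total: {total:.2f}")

def print_quote(items):
    total = sum(price * qty for price, qty in items) * 1.20
    print(f"Quote total: {total:.2f}")

# After: the Extract Method refactoring moves the shared computation into one
# named function. Observable behaviour is preserved while the design improves.
def total_with_tax(items, tax_rate=0.20):
    return sum(price * qty for price, qty in items) * (1 + tax_rate)

def print_invoice_refactored(items):
    print(f"Invoice total: {total_with_tax(items):.2f}")

def print_quote_refactored(items):
    print(f"Quote total: {total_with_tax(items):.2f}")

print_invoice([(10.0, 2), (5.0, 1)])
print_invoice_refactored([(10.0, 2), (5.0, 1)])  # same output: behaviour preserved
```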
Distributed Load Testing by Modeling and Simulating User Behavior
Modern human-machine systems such as microservices rely upon agile engineering practices which require changes to be tested and released more frequently than classically engineered systems. A critical step in the testing of such systems is the generation of realistic workloads, or load testing. Generated workload emulates the expected behaviors of users and machines within a system under test in order to find potentially unknown failure states. Typical testing tools rely on static testing artifacts to generate realistic workload conditions. Such artifacts can be cumbersome and costly to maintain, and even model-based alternatives can hinder adaptation to changes in a system or its usage. Lack of adaptation can prevent the integration of load testing into system quality assurance, leading to an incomplete evaluation of system quality.
The goal of this research is to improve the state of software engineering by addressing open challenges in load testing of human-machine systems with a novel process that a) models and classifies user behavior from streaming and aggregated log data, b) adapts to changes in system and user behavior, and c) generates distributed workload by realistically simulating user behavior. This research contributes a Learning, Online, Distributed Engine for Simulation and Testing based on the Operational Norms of Entities within a system (LODESTONE): a novel process for distributed load testing by modeling and simulating user behavior. We specify LODESTONE within the context of a human-machine system to illustrate distributed adaptation and execution in load testing processes. LODESTONE uses log data to generate and update user behavior models, cluster them into similar behavior profiles, and instantiate distributed workload on software systems. We analyze user behavioral data having differing characteristics to replicate human-machine interactions in a modern microservice environment. We discuss tools, algorithms, software design, and implementation in two different computational environments: client-server and cloud-based microservices. We illustrate the advantages of LODESTONE through a qualitative comparison of key feature parameters and experimentation based on shared data and models. LODESTONE continuously adapts to changes in the system to be tested, which allows for the integration of load testing into the quality assurance process for cloud-based microservices.
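As a rough illustration of the modeling-and-clustering step described above, the sketch below reduces log-derived sessions to simple behaviour features and clusters them into profiles whose centroids could pace a simulated workload. It is a minimal sketch with invented feature names, using scikit-learn's KMeans; it does not reproduce the LODESTONE implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-session features aggregated from access logs:
# [requests per minute, mean think time (s), fraction of write requests]
sessions = np.array([
    [12.0, 3.1, 0.10],
    [11.5, 2.9, 0.12],
    [45.0, 0.8, 0.55],
    [47.2, 0.7, 0.60],
    [ 3.0, 9.5, 0.02],
])

# Cluster sessions into behaviour profiles (k chosen here for illustration).
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit(sessions)

# Each profile centroid can parameterize a simulated user: its request rate
# and think time become the pacing of the generated workload.
for label, centre in enumerate(profiles.cluster_centers_):
    rate, think, writes = centre
    print(f"profile {label}: {rate:.1f} req/min, "
          f"{think:.1f}s think time, {writes:.0%} writes")
```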
Intelligent monitoring of business processes using case-based reasoning
The work in this thesis presents an approach towards the effective monitoring of business processes using Case-Based Reasoning (CBR). The rationale behind this research was that business processes constitute a fundamental concept of the modern world and there is a constantly emerging need for their efficient control. Business processes can be represented efficiently, yet they are not necessarily monitored and diagnosed effectively via an appropriate platform.
Motivated by this observation, this research investigated to what extent workflows can be monitored, diagnosed, and explained efficiently. Workflows and their effective representation in terms of CBR were investigated, as well as how similarity measures among them could be established appropriately. The monitoring results and their subsequent explanation to users were examined, as was the question of what software architecture is appropriate for monitoring workflow executions.
Throughout the progress of this research, several sets of experiments were conducted using existing enterprise systems that are coordinated via a predefined workflow business process. Past data produced over several years were used for the conducted experiments. Based on these, the necessary knowledge repositories were built and subsequently used to evaluate the suggested approach towards the effective monitoring and diagnosis of business processes.
The produced results show to what extent a business process can be monitored and diagnosed effectively. The results also provide hints on possible changes that would maximize the accuracy of the actual monitoring, diagnosis, and explanation. Moreover, the presented approach can be generalised and extended to enterprise systems that share, as common characteristics, a possible workflow representation and the presence of uncertainty.
Further work motivated by this thesis could investigate how the acquired knowledge can be transferred across workflow systems and benefit large-scale multidimensional enterprises. Additionally, temporal uncertainty could be investigated further, in an attempt to address it while reasoning. Finally, the provenance of cases and their solutions could be explored further, identifying correlations with the process of reasoning.
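The case retrieval that such a CBR approach depends on can be pictured as a weighted similarity over features of a workflow execution, with the diagnosis of the most similar past case being reused. The sketch below is a generic illustration with invented feature names and weights, not the case representation used in the thesis.

```python
from math import exp

# A case: observed workflow-execution features plus the diagnosis it led to.
case_base = [
    ({"duration_min": 42, "retries": 0, "pending_tasks": 1}, "normal"),
    ({"duration_min": 95, "retries": 3, "pending_tasks": 6}, "bottleneck at approval step"),
    ({"duration_min": 60, "retries": 5, "pending_tasks": 2}, "unreliable external service"),
]

WEIGHTS = {"duration_min": 0.5, "retries": 0.3, "pending_tasks": 0.2}

def similarity(query: dict, case: dict) -> float:
    """Weighted local similarities; each feature distance is squashed to (0, 1]."""
    return sum(w * exp(-abs(query[f] - case[f]) / (abs(case[f]) + 1))
               for f, w in WEIGHTS.items())

def diagnose(query: dict) -> str:
    """Reuse the diagnosis of the most similar past workflow execution."""
    best_case, diagnosis = max(case_base, key=lambda c: similarity(query, c[0]))
    return diagnosis

print(diagnose({"duration_min": 90, "retries": 2, "pending_tasks": 5}))
```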
Service level agreement specification for IoT application workflow activity deployment, configuration and monitoring
PhD Thesis. Currently, we see the use of the Internet of Things (IoT) within various domains such as healthcare, smart homes, smart cars, smart-x applications, and smart cities. The number of applications based on IoT and cloud computing is projected to increase rapidly over the next few years. IoT-based services must meet guaranteed levels of quality of service (QoS) to match users' expectations. Ensuring QoS by specifying QoS constraints through service level agreements (SLAs) is crucial. Also, because of the potentially highly complex nature of multi-layered IoT applications, lifecycle management (deployment, dynamic reconfiguration, and monitoring) needs to be automated. To achieve this, it is essential to be able to specify SLAs in a machine-readable format.
However, currently available SLA specification languages are unable to accommodate the unique characteristics of the IoT domain (the interdependency of its multiple layers). Therefore, in this research we propose a grammar for the syntactical structure of an SLA specification for IoT. The grammar is based on a proposed conceptual model that covers the main concepts needed to express the requirements of the most common hardware and software components of an IoT application on an end-to-end basis. We follow the Goal Question Metric (GQM) approach to evaluate the generality and expressiveness of the proposed grammar, reviewing its concepts and their predefined lists of vocabularies against two use cases with a number of participants whose research interests are mainly related to IoT. The results of the analysis show that the proposed grammar achieved 91.70% of its generality goal and 93.43% of its expressiveness goal.
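As a rough picture of what a machine-readable, end-to-end SLA for a multi-layered IoT application could look like, the fragment below sketches per-layer QoS constraints in Python; the field names and values are hypothetical and do not reproduce the proposed grammar's vocabulary.

```python
# A hypothetical end-to-end SLA: per-layer QoS constraints plus a workflow-level
# objective, so that violations can be checked automatically at each layer.
sla = {
    "application": "remote-health-monitoring",
    "workflow_activity": "detect-anomalous-heart-rate",
    "end_to_end": {"max_response_time_ms": 500, "min_availability": 0.999},
    "layers": {
        "device": {"sampling_rate_hz": 50, "max_battery_drain_pct_per_day": 5},
        "edge":   {"max_processing_latency_ms": 100, "min_throughput_msgs_s": 200},
        "cloud":  {"max_storage_latency_ms": 150, "retention_days": 90},
    },
}

def violated(constraints: dict, observed: dict) -> list[str]:
    """Return the names of 'max_*' constraints exceeded by observed metrics."""
    return [name for name, limit in constraints.items()
            if name.startswith("max_") and observed.get(name, 0) > limit]

print(violated(sla["layers"]["edge"], {"max_processing_latency_ms": 140}))
```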
To enhance the process of specifying SLA terms, we then developed a toolkit for creating SLA specifications for IoT applications. The toolkit is used to simplify the process of capturing the requirements of IoT applications. We demonstrate the effectiveness of the toolkit using a remote health monitoring service (RHMS) use case, and we evaluate the tool's user experience through a questionnaire-based approach. We discuss the applicability of our tool by including it as a core component of two different applications: 1) a context-aware recommender system for IoT configuration across layers; and 2) a tool for automatically translating an SLA from JSON to a smart contract and deploying it on different peer nodes that represent the contractual parties. The smart contract is able to monitor the created SLA using blockchain technology. These two applications are utilized within our proposed SLA management framework for IoT.
Furthermore, we propose a greedy heuristic algorithm to decentralize the workflow activities of an IoT application across Edge and Cloud resources in order to improve response time, cost, energy consumption, and network usage. We evaluated the efficiency of our proposed approach using the iFogSim simulator. The performance analysis shows that the proposed algorithm minimized cost, execution time, networking, and Cloud energy consumption compared to Cloud-only and edge-ward placement approaches.
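The greedy placement idea can be sketched as follows: walk the workflow activities one at a time and assign each to the tier (Edge or Cloud) with the lowest weighted estimate of latency, cost, and energy, subject to remaining capacity. This is a simplified illustration with invented weights and capacities, not the algorithm evaluated in iFogSim.

```python
# Hypothetical workflow activities: (name, required MIPS, per-tier estimates
# of (latency, cost, energy)). Weights combine the competing objectives.
activities = [
    ("filter-readings", 200, {"edge": (10, 1, 2), "cloud": (80, 3, 5)}),
    ("detect-anomaly",  900, {"edge": (35, 4, 6), "cloud": (60, 2, 4)}),
    ("store-history",   400, {"edge": (25, 2, 3), "cloud": (50, 1, 2)}),
]
capacity = {"edge": 1000, "cloud": 10_000}   # remaining MIPS per tier
weights = (0.5, 0.3, 0.2)                    # latency, cost, energy

def score(estimates):
    """Weighted sum of (latency, cost, energy); lower is better."""
    return sum(w * v for w, v in zip(weights, estimates))

placement = {}
for name, mips, options in activities:       # greedy: decide one activity at a time
    feasible = {tier: est for tier, est in options.items() if capacity[tier] >= mips}
    tier = min(feasible, key=lambda t: score(feasible[t]))
    capacity[tier] -= mips
    placement[name] = tier

print(placement)  # e.g. {'filter-readings': 'edge', 'detect-anomaly': 'cloud', ...}
```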