13 research outputs found

    A Systematic Mapping Study of Empirical Studies on Software Cloud Testing Methods

    Context: Software has become more complicated, dynamic, and asynchronous than ever, making testing more challenging. With the increasing interest in the development of cloud computing and the increasing demand for cloud-based services, it has become essential to systematically review the research in the area of software testing in the context of cloud environments. Objective: The purpose of this systematic mapping study is to provide an overview of the empirical research in the area of software cloud-based testing, in order to build a classification scheme. We investigate functional and non-functional testing methods, the application of these methods, and the purpose of testing using these methods. Method: We searched for electronically available papers in order to find relevant literature and to extract and analyze data about the methods used. Result: We identified 69 primary studies reported in 75 research papers published in academic journals, conferences, and edited books. Conclusion: We found that only a minority of the studies combine rigorous statistical analysis with quantitative results. The majority of the considered studies present early results, using a single experiment to evaluate their proposed solution.

    A Reengineering Approach to Reconciling Requirements and Implementation for Context-Aware Web Services Systems

    In modern software development, the gap between software requirements and implementation is not always reconciled. For Web services-based context-aware systems in particular, reconciling this gap is even harder. The aim of this research is to explore how software reengineering can facilitate the reconciliation between requirements and implementation for such systems. The underlying research in this thesis comprises three components. Firstly, a requirements recovery framework underpins the requirements elicitation approach of the proposed reengineering framework. This approach consists of three stages: 1) hypothesis generation, where a list of hypothesis source code information is generated; 2) segmentation, where the hypothesis list is grouped into segments; and 3) concept binding, where the segments are turned into a list of concept bindings linking regions of source code. Secondly, the derived viewpoints-based context-aware service requirements model is proposed to fully discover constraints, and a requirements evolution model is developed to maintain and specify the requirements evolution process in support of context-aware service evolution. Finally, inspired by context-oriented programming (COP) concepts and approaches, ContXFS is implemented as a COP-inspired conceptual library in F#, which enables developers to handle dynamic context adaptation. This library, together with the context-aware requirements analyses, eases the development of such systems to a great extent, which in turn achieves reconciliation between requirements and implementation.
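
    The abstract does not show the ContXFS API, which is written in F#; as a rough illustration only, the sketch below captures the general COP idea it alludes to (behaviour variations selected by an active context) in Python, with all names hypothetical.

        # Hypothetical sketch of the context-oriented programming (COP) idea behind a
        # library like ContXFS, written in Python rather than F# and not using the
        # ContXFS API: behaviour variations are registered per context, and the
        # active context decides which variation a call dispatches to.

        class ContextualFunction:
            def __init__(self, default):
                self.default = default
                self.variations = {}    # context name -> callable

            def variation(self, context):
                """Register a behaviour variation for a named context."""
                def register(func):
                    self.variations[context] = func
                    return func
                return register

            def __call__(self, active_context, *args, **kwargs):
                impl = self.variations.get(active_context, self.default)
                return impl(*args, **kwargs)

        @ContextualFunction
        def render_map(location):
            return f"full map around {location}"           # default behaviour

        @render_map.variation("low_bandwidth")
        def _(location):
            return f"text-only directions for {location}"  # context-specific variation

        # The active context (e.g., detected network conditions) selects the behaviour.
        print(render_map("indoor", "campus"))          # falls back to the default
        print(render_map("low_bandwidth", "campus"))   # adapted behaviour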

    Systematic analysis of software development in cloud computing perceptions

    Cloud computing is characterized as a shared computing and communication infrastructure. It supports the efficient and effective development processes carried out in various organizations. Cloud computing offers both opportunities and solutions for outsourcing and managing software development operations across distinct geographies. It is adopted by organizations and application developers for developing quality software. The cloud has a significant impact on managing the complexity involved in developing and designing quality software. Software development organizations prefer cloud computing for outsourcing tasks because of its availability and scalability. Cloud computing is an ideal choice for developing modern software, as it provides a completely new way of building real-time, cost-effective, efficient, quality software. Tenants (providers, developers, and consumers) are provided with platforms, software services, and infrastructure on a pay-per-use basis. Cloud-based software services are becoming increasingly popular, as observed by their widespread use. The cloud computing approach has drawn the interest of researchers and businesses because of its ability to provide a flexible and resourceful platform for development and deployment. To reach a cohesive understanding of the analyzed problems and of solutions for improving software quality, the existing literature on cloud-based software development should be analyzed and synthesized systematically. Keyword strings were formulated for identifying relevant research articles from journals, book chapters, and conference papers. Research articles published between 2011 and 2021 in various scientific databases were extracted and analyzed to retrieve the relevant studies. A total of 97 research publications are examined in this systematic literature review (SLR) and are evaluated to be appropriate studies for explaining and discussing the proposed topic. The major emphasis of the presented SLR is to identify the participating entities of cloud-based software development, the challenges associated with adopting the cloud for software development processes, and its significance to software industries and developers. This SLR will assist organizations, designers, and developers in developing and deploying user-friendly, efficient, effective, and real-time software applications. Qatar University Internal Grant - No. IRCC‐2021‐010

    Cloud enterprise resource planning development model based on software factory approach

    Literature reviews revealed that Cloud Enterprise Resource Planning (Cloud ERP) is growing significantly, yet from software developers' perspective it suffers from high management complexity, high workload, inconsistent software quality, and knowledge retention problems. Previous research lacks a solution that holistically addresses all of these problem components. The software factory approach was adapted, along with relevant theories, to develop a model referred to as the Cloud ERP Factory Model (CEF Model), which is intended to pave the way to solving the above-mentioned problems. There are three specific objectives: (i) to develop the model by identifying its components and their elements and compiling them into the CEF Model, (ii) to verify the technical feasibility of deploying the model, and (iii) to validate the model's usability in real Cloud ERP production case studies. The research employed a Design Science methodology with a mixed-method evaluation approach. The developed CEF Model consists of five components: Product Lines, Platform, Workflow, Product Control, and Knowledge Management. These can be used to set up a CEF environment that simulates a process-oriented software production environment with capacity and resource planning features. The model was validated through expert reviews, and the finalized model was verified to be technically feasible by a successful deployment into a selected commercial Cloud ERP production facility. Three commercial Cloud ERP deployment case studies were conducted using the prototype environment. Using the survey instruments developed, the results yielded a mean Likert score of 6.3 out of 7, reaffirming that the model is usable and that the research has met its objective of addressing the problem components. The model, along with its deployment verification process, is the main research contribution. Both items can also be used by software industry practitioners and academics as references in developing a robust Cloud ERP production facility.

    A predictive fault-tolerance framework for IoT systems

    As Internet of Things (IoT) systems scale, attributes such as availability, reliability, safety, maintainability, security, and performance become increasingly more important. A key challenge to realise IoT is how to provide a dependable infrastructure for the billions of expected IoT devices. A dependable IoT system is one that can defensibly be trusted to deliver its intended service within a given time period. To define a fault tolerance (FT) support solution that is applicable to all IoT systems, it is important that error definition is a generic, language-agnostic process, so that FT can be applied as a software pattern. It must also be interoperable, so that FT support can be easily 'plugged into' any existing IoT system, which is facilitated by an adherence to standards and protocols. Lastly, it is important that FT support is, itself, fault tolerant, so that it can be depended on to provide correct support for IoT systems. The work in this thesis considers how real-time and historical data analysis techniques can be combined to monitor an IoT environment and analyse its short- and long-term data to make the system as resilient to failure as possible. Specifically, complex event processing (CEP) is proposed for real-time error detection based on the analysis of stream data in an IoT system, where errors are defined as nondeterministic finite automata (NFA). For long-term error analysis, machine learning (ML) is proposed to predict when an error is likely to occur and mitigate imminent system faults based on previous experience of erroneous system behaviour in the IoT system. The contribution is threefold: (1) a language-agnostic approach to error definition using NFAs, designed to provide 'FT as a service' for easy deployment and integration into existing IoT systems; (2) an implementation of NFAs on a bespoke CEP system, BoboCEP, that provides distributed, resilient event processing at the network edge via active replication; and (3) an ML approach to intelligent FT that can learn from system errors over time to ensure correct long-term FT support. The proposed solution was evaluated using two vertical-farming testbeds and a dataset from a real-world vertical farm. Results showed that the proposed solution could successfully detect and recover from erroneous system behaviours, and predict errors before they occurred. A performance analysis of BoboCEP was conducted with favourable results.
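
    As a rough illustration of the idea of defining an error as a nondeterministic finite automaton over stream data, the following minimal Python sketch (not the BoboCEP implementation; all names and the event schema are hypothetical) simulates an NFA over a sequence of sensor events and reports when the error pattern completes.

        # Hypothetical sketch: an IoT error pattern expressed as an NFA and
        # evaluated over a stream of sensor events. Strict contiguity semantics:
        # a partial match is dropped if the next event does not advance it.

        class NFA:
            def __init__(self, transitions, start, accepting):
                # transitions: dict mapping state -> list of (predicate, next_state)
                self.transitions = transitions
                self.start = start
                self.accepting = accepting

            def detect(self, events):
                """Yield the index of each event at which the error pattern completes."""
                states = {self.start}
                for i, event in enumerate(events):
                    next_states = {self.start}  # always allow a new match to begin
                    for state in states:
                        for predicate, nxt in self.transitions.get(state, []):
                            if predicate(event):
                                next_states.add(nxt)
                    if next_states & self.accepting:
                        yield i
                    states = next_states

        # Example error: "temperature above 30 followed by humidity below 20".
        overheat_then_dry = NFA(
            transitions={
                "start": [(lambda e: e["type"] == "temp" and e["value"] > 30, "hot")],
                "hot":   [(lambda e: e["type"] == "humidity" and e["value"] < 20, "error")],
            },
            start="start",
            accepting={"error"},
        )

        stream = [
            {"type": "temp", "value": 25},
            {"type": "temp", "value": 34},
            {"type": "humidity", "value": 15},
        ]
        print(list(overheat_then_dry.detect(stream)))  # -> [2]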

    Fault Management For Service-Oriented Systems

    Service Oriented Architectures (SOAs) enable the automatic creation of business applications from independently developed and deployed Web services. As Web services are inherently unreliable, how to deliver reliable Web service composition over unreliable Web services is a significant and challenging problem. The process requires monitoring the system's behavior, determining when and why faults occur, and then applying fault prevention/recovery mechanisms to minimize their impact and/or recover from these faults. However, it is hard to apply a non-distributed management approach to SOA, since a manager needs to communicate with the different components through authentications. In SOA, a business process can terminate successfully if all services finish their work correctly, with alternative plans provided in case of faults. However, the business process itself may encounter different faults, because a fault may occur anywhere at any time due to the nature of SOA. In this work, we propose a new fault management technique (FLEX) and identify several improvements over existing techniques. First, existing techniques rely mainly on static information, while FLEX is based on dynamic information. Second, existing frameworks use a limited number of attributes, while we use all possible attributes by identifying them as either required or optional. Third, FLEX reduces the comparison cost (time and space) by filtering out, at each level, the services needed for evaluation. In general, FLEX is divided into two phases: Phase I computes service reliability and utility, while Phase II performs runtime planning and evaluation. In Phase I, we assess the fault likelihood of a service using a combination of techniques (e.g., Hidden Markov Models, reputation, and clustering). In Phase II, we build a recovery plan to execute in case of fault(s) and calculate the overall system reliability based on the fault occurrence likelihoods assessed for all the services that are part of the current composition. FLEX is novel because it relies on the key activities of the autonomic control loop (i.e., collect, analyze, decide, plan, and execute) to support dynamic management based on changes in user requirements and QoS levels. Our technique dynamically evaluates the performance of Web services based on their previous history and user requirements, assesses the likelihood of fault occurrence, and uses the result to create (multiple) recovery plans. Moreover, we define a method to assess the overall system reliability by evaluating the performance of individual recovery plans when invoked together. The experimental results show that our technique improves service selection quality by selecting the services with the highest score and improves overall system performance in comparison with existing works. In the future, we plan to investigate techniques for monitoring service-oriented systems and assess online negotiation possibilities for combining different services to create high-performance systems.
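
    The abstract does not give FLEX's exact reliability formula; as a rough, hedged illustration of deriving composition reliability from per-service fault likelihoods and ranking recovery plans, the sketch below uses the textbook series-composition model under an independence assumption, with all names and numbers hypothetical.

        # Hypothetical sketch: estimating the overall reliability of a service
        # composition from per-service fault likelihoods, assuming faults occur
        # independently and every service in the composition must succeed.

        from math import prod

        def composition_reliability(fault_likelihoods):
            """Reliability of a composition whose services all must succeed."""
            return prod(1.0 - p for p in fault_likelihoods)

        def rank_recovery_plans(plans):
            """Order candidate recovery plans by their estimated reliability.

            plans: dict mapping plan name -> list of per-service fault likelihoods.
            """
            scored = {name: composition_reliability(ps) for name, ps in plans.items()}
            return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

        plans = {
            "primary":  [0.02, 0.05, 0.01],   # fault likelihoods assessed in Phase I
            "fallback": [0.03, 0.03, 0.03],
        }
        for name, reliability in rank_recovery_plans(plans):
            print(f"{name}: {reliability:.4f}")
        # primary:  (0.98)(0.95)(0.99) ~= 0.9217
        # fallback: (0.97)^3           ~= 0.9127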

    Towards understanding the challenges faced by machine learning software developers and enabling automated solutions

    Modern software systems are increasingly including machine learning (ML) as an integral component. However, we do not yet understand the difficulties faced by software developers when learning about ML libraries and using them within their systems. To fill that gap, this thesis reports on a detailed (manual) examination of 3,243 highly-rated Q&A posts related to ten ML libraries, namely Tensorflow, Keras, scikit-learn, Weka, Caffe, Theano, MLlib, Torch, Mahout, and H2O, on Stack Overflow, a popular online technical Q&A forum. Our findings reveal the urgent need for software engineering (SE) research in this area. The second part of the thesis focuses on understanding the characteristics of Deep Neural Network (DNN) bugs. We study 2,716 high-quality posts from Stack Overflow and 500 bug-fix commits from Github about five popular deep learning libraries, Caffe, Keras, Tensorflow, Theano, and Torch, to understand the types of bugs, their root causes and impacts, the bug-prone stages of the deep learning pipeline, and whether there are common antipatterns in this buggy software. While exploring the bug characteristics, our findings imply that repairing software that uses DNNs is one such unmistakable SE need where automated tools could be beneficial; however, we do not fully understand the challenges of repair or the patterns that are used when manually repairing DNNs. The third part of this thesis therefore presents a comprehensive study of bug-fix patterns to address these questions. We have studied 415 repairs from Stack Overflow and 555 repairs from Github for five popular deep learning libraries, Caffe, Keras, Tensorflow, Theano, and Torch, to understand the challenges in repairs and bug repair patterns. Our key findings reveal that DNN bug-fix patterns are distinctive compared to traditional bug-fix patterns, and that the most common bug-fix patterns are fixing data dimensions and neural network connectivity. Finally, we propose an automatic technique to detect ML Application Programming Interface (API) misuses. We started with an empirical study to understand ML API misuses. Our study shows that ML API misuse is prevalent and distinct compared to non-ML API misuses. Inspired by these findings, we contributed Amimla (Api Misuse In Machine Learning Apis), an approach and a tool for ML API misuse detection. Amimla relies on several technical innovations. First, we proposed an abstract representation of ML pipelines to use in misuse detection. Second, we proposed an abstract representation of neural networks for deep learning related APIs. Third, we developed a representation strategy for constraints on ML APIs. Finally, we developed a misuse detection strategy for both single and multi-APIs. Our experimental evaluation shows that Amimla achieves a high average accuracy of ∼80% on two benchmarks of misuses from Stack Overflow and Github.
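
    As a rough illustration of constraint-based ML API misuse detection in the spirit described above (this is not Amimla, and the checker, source snippet, and rule are hypothetical), the Python sketch below flags a common ordering misuse, calling predict on an estimator that was never fitted, by walking the abstract syntax tree of a code snippet.

        # Hypothetical sketch: a tiny API-misuse check over source code. It flags
        # calls to .predict() on a name that was never the receiver of .fit()
        # earlier in the same module, a common ordering constraint in
        # scikit-learn-style ML code.

        import ast

        SOURCE = """
        from sklearn.linear_model import LogisticRegression
        clf = LogisticRegression()
        labels = clf.predict(X_test)   # misuse: predict before fit
        """

        class PredictBeforeFit(ast.NodeVisitor):
            def __init__(self):
                self.fitted = set()     # receiver names seen with .fit(...)
                self.misuses = []       # (line, receiver) pairs

            def visit_Call(self, node):
                func = node.func
                if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
                    receiver, method = func.value.id, func.attr
                    if method == "fit":
                        self.fitted.add(receiver)
                    elif method == "predict" and receiver not in self.fitted:
                        self.misuses.append((node.lineno, receiver))
                self.generic_visit(node)

        checker = PredictBeforeFit()
        checker.visit(ast.parse(SOURCE))
        for lineno, receiver in checker.misuses:
            print(f"line {lineno}: '{receiver}.predict' called before '{receiver}.fit'")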