
    Applying Architectural Analysis for Current Software Systems: A Case Study of KFC and Pizza Hut Online Food Ordering Systems in Malaysia

    The main aim of this study is to demonstrate the ability to analyze, critique, and suggest improvements to selected important properties of a software application using architectural analysis dimensions. The researchers selected the KFC and Pizza Hut online food ordering systems in Malaysia as the case study. These two systems are critically analyzed along seven architectural dimensions: the goals of the analysis, the scope of the analysis, the primary architectural concern being analyzed, the level of formality of the architectural models, the type of analysis, the level of automation, and the system stakeholders interested in the analysis. The findings suggest that the Pizza Hut system offers some characteristics that are better than those of the KFC system. Details of the findings and discussion are presented for each of the seven aspects of analysis applied to the two popular online food ordering systems.

    SIM-PFED: A Simulation-Based Decision Making Model of Patient Flow for Improving Patient Throughput Time in Emergency Department

    Healthcare sectors face multiple threats, and the hospital emergency department (ED) is one of the most crucial hospital areas. ED plays a key role in promoting hospitals' goals of enhancing service efficiency. ED is a complex system due to the stochastic behavior of patient arrivals, the unpredictability of the care required by patients, and the department's complex nature. Simulations are effective tools for analyzing and optimizing complex ED operations. Although existing ED simulation models have substantially improved ED performance in terms of ensuring patient satisfaction and effective treatment services, many deficiencies continue to exist in addressing the key challenge in ED, namely, long patient throughput time. The patient throughput time issue is affected by causative factors, such as waiting time, length of stay, and decision-making. This research aims to develop a new simulation model of patient flow for ED (SIM-PFED) to address the reported key challenge of the patient throughput time. SIM-PFED introduces a new process for patient flow in ED on the basis of the newly proposed operational patient flow by combining discrete event simulation and agent-based simulation and applying a multi-attribute decision-making method, namely, the technique for order preference by similarity to the ideal solution. Experiments were performed on three actual hospital ED datasets to assess the effectiveness of SIM-PFED. Experimental results revealed the superiority of SIM-PFED over other alternative models in reducing patient throughput time in ED by consuming less patient waiting time and having a shorter length of stay. The findings also demonstrated the effectiveness of SIM-PFED in helping ED decision-makers select the best scenarios to be implemented in ED for ensuring minimal throughput time while being cost effective.
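    The multi-attribute decision-making method named in the abstract, TOPSIS (technique for order preference by similarity to the ideal solution), can be sketched in a few lines. This is a generic illustration, not SIM-PFED's actual implementation; the scenario scores, criteria, and weights below are invented for the example.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : (n_alternatives, n_criteria) performance scores
    weights : criterion weights (normalised internally)
    benefit : True where larger is better, False where smaller is better
    Returns relative-closeness scores in [0, 1]; higher = better.
    """
    m = np.asarray(matrix, dtype=float)
    # 1. Vector-normalise each criterion column, then apply the weights.
    v = (m / np.linalg.norm(m, axis=0)) * np.asarray(weights, dtype=float)
    # 2. Ideal (best) and anti-ideal (worst) reference points per criterion.
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # 3. Euclidean distances to both reference points.
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    # 4. Relative closeness to the ideal solution.
    return d_neg / (d_pos + d_neg)

# Hypothetical ED scenarios scored on throughput time (minutes) and
# resource cost; both are cost-type criteria (lower is better).
scores = topsis([[40, 3], [55, 2], [35, 5]],
                weights=[0.7, 0.3],
                benefit=[False, False])
best = int(np.argmax(scores))  # index of the recommended scenario
```

    In a decision-support setting like the one described, the scenario with the highest closeness score would be the one recommended for implementation.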

    Investigation of Requirements Interdependencies in Existing Techniques of Requirements Prioritization

    Requirements prioritization (RP) plays a key role in producing a successful system by selecting the most important requirements to be released. Requirements interdependencies (RI) are one of the crucial aspects that need to be addressed in RP, since most requirements in practice are not independent and have dependencies on each other. Ignoring RI in the RP process may therefore produce inaccurate prioritization results, which directly impacts the system's success. Despite this, little is known about the impact of RI, and further research is clearly needed to account for RI in RP techniques. Hence, this study investigates and analyzes whether, and through which execution steps, existing RP techniques handle RI, in order to improve the accuracy of these techniques and to help researchers and practitioners select an appropriate technique that can handle RI during prioritization. The findings indicate that, out of 65 techniques, only 4 handle RI. The results also reveal that these four techniques still suffer from manual processing and rely heavily on expert participation. A new technique is recommended to overcome the identified limitations.

    Hazard analysis techniques, methods and approaches: A review

    Hazard analysis (HA) is an indispensable task during the specification and development of safety-critical systems. It involves identifying potential forms of harm, their effects, causal factors, and the level of risk associated with them. Systems are always vulnerable to mishaps, hazards, or risks that can result in system failures, causing injury, loss, and damage. Although previous studies have contributed significantly to hazard analysis, little effort has been made to provide an overview of the common HA techniques that highlights their roles, advantages, and disadvantages. This paper therefore surveys the existing HA techniques along with their respective functions, and presents an overall picture of the advantages and disadvantages of the listed techniques. Such a study can serve as a guide to help researchers and practitioners understand hazard analysis. The investigation follows a process-oriented approach consisting of three steps: formulating the research questions, gathering related studies, and analyzing the extracted studies. The study identified a total of 22 HA techniques. As further work, we propose to carry out a systematic literature review to identify the extent to which these hazard analysis techniques have been implemented and evaluated in case studies.

    SRPTackle: A semi-automated requirements prioritisation technique for scalable requirements of software system projects

    Context: Requirements prioritisation (RP) is often used to select the most important system requirements as perceived by system stakeholders. RP plays a vital role in ensuring the development of a quality system under defined constraints. However, a closer look at existing RP techniques reveals that they suffer from some key challenges, such as scalability, lack of quantification, insufficient prioritisation of participating stakeholders, overreliance on the participation of professional expertise, lack of automation, and excessive time consumption. These key challenges motivate the present research. Objective: This study proposes a new semi-automated, scalable prioritisation technique called 'SRPTackle' to address these key challenges. Method: SRPTackle provides a semi-automated process based on a combination of a requirement priority value formulation function built with a multi-criteria decision-making method (the weighted sum model), clustering algorithms (K-means and K-means++), and a binary search tree, to minimise the need for expert involvement and increase efficiency. The effectiveness of SRPTackle is assessed in seven experiments using a benchmark dataset from a large real-world software project. Results: The experiments show that SRPTackle achieves minimum and maximum accuracy of 93.0% and 94.65%, respectively, better than alternative techniques. The findings also demonstrate SRPTackle's capability to prioritise large-scale requirements in less time and its effectiveness in addressing the key challenges in comparison with other techniques. Conclusion: With its time effectiveness, ability to scale to numerous requirements, automation, and clear implementation guidelines, SRPTackle enables project managers to perform RP for large-scale requirements properly, without an extensive amount of effort (e.g. tedious manual processes, involvement of experts, and time workload).
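    The weighted sum model at the heart of a requirement-priority-value calculation can be sketched briefly. This is a generic illustration of the weighted sum model, not SRPTackle's actual formulation; the ratings, stakeholder weights, and scale below are invented for the example.

```python
import numpy as np

def requirement_priority_values(scores, stakeholder_weights):
    """Weighted sum model: each requirement's priority value is the
    stakeholder-weighted sum of the ratings it received.

    scores             : (n_requirements, n_stakeholders) ratings, e.g. 1..5
    stakeholder_weights : relative importance of each stakeholder
    """
    w = np.asarray(stakeholder_weights, dtype=float)
    w = w / w.sum()  # normalise the weights to sum to 1
    return np.asarray(scores, dtype=float) @ w

# Hypothetical ratings of three requirements by three stakeholders.
rpv = requirement_priority_values(
    scores=[[5, 4, 5],   # R1
            [2, 3, 1],   # R2
            [4, 4, 3]],  # R3
    stakeholder_weights=[0.5, 0.3, 0.2])
ranking = np.argsort(-rpv)  # descending: most important requirement first
```

    In the technique described, values like these would then be grouped (e.g. by clustering) into high-, medium-, and low-priority sets rather than ranked one by one.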

    Safety property attributes in critical systems for requirement specification: A review

    The integration of critical system components, requirement specification, and safety properties plays a crucial role in advancing the development and verification processes of critical systems. This integration enables effective analysis, management of safety requirements, and identification of potential risks. Although several studies have explored safety properties in safety analysis (SA), they often lack a comprehensive presentation of all possible safety properties with proper categorization. This paper aims to address this gap by analyzing a comprehensive list of possible safety properties in requirement specification. The list is derived through an extensive analysis of studies published between 2019 and 2023, with a focus on past researchers' contributions. Additionally, our future work includes a systematic literature review encompassing a broader range of studies to further enhance the analysis. By providing a structured approach for addressing safety aspects, this paper contributes valuable insights into the significance of safety properties in ensuring the safety and reliability of critical systems. It lays the foundation for improved SA practices and strengthens the overall development process of critical systems.

    Evaluating the layout quality of UML class diagrams using machine learning

    UML is the de facto standard notation for graphically representing software. UML diagrams are used in the analysis, construction, and maintenance of software systems. Mostly, UML diagrams capture an abstract view of a (piece of a) software system. A key purpose of UML diagrams is to share knowledge about the system among developers. The quality of the layout of UML diagrams plays a crucial role in their comprehension. In this paper, we present an automated method for evaluating the layout quality of UML class diagrams. Such an automated evaluator has several uses: (1) From an industrial perspective, the tool could be used for automated quality assurance of class diagrams (e.g., as part of a quality monitor integrated into a DevOps toolchain); for example, automated feedback can be generated once a UML diagram is checked into the project repository. (2) In an educational setting, the evaluator can grade the layout aspect of student assignments in courses on software modeling, analysis, and design. (3) In the field of algorithm design for graph layouts, our evaluator can assess the layouts generated by such algorithms; in this way, the evaluator paves the way for using machine learning to learn good layout algorithms. Approach: We use machine learning techniques to build (linear) regression models based on features extracted from the class diagram images using image processing. As ground truth, we use a dataset of 600+ UML class diagrams for which experts manually labeled the quality of the layout. Contributions: This paper makes the following contributions: (1) We show the feasibility of automatically evaluating the layout quality of UML class diagrams. (2) We analyze which features of UML class diagrams are most strongly related to the quality of their layout. (3) We evaluate the performance of our layout evaluator. (4) We offer a dataset of labeled UML class diagrams, supplying for every diagram: (a) a manually established ground truth of the quality of the layout, (b) an automatically established value for the layout quality of the diagram (produced by our classifier), and (c) the values of key features of the layout of the diagram (obtained by image processing). This dataset can be used to replicate our study and for others to build on and improve this work. Editor's note: Open Science material was validated by the Journal of Systems and Software Open Science Board.
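    Fitting a linear regression model that maps diagram features to a quality score, as the approach describes, reduces to ordinary least squares. The sketch below uses synthetic data; the actual features (e.g. edge crossings, node overlaps) and expert labels of the paper's dataset are not reproduced here.

```python
import numpy as np

# Hypothetical feature matrix: rows are diagrams, columns are layout
# features; y holds (synthetic) expert quality scores.
rng = np.random.default_rng(0)
X = rng.random((20, 3))              # 20 diagrams, 3 features
true_w = np.array([-2.0, -1.0, 0.5])  # made-up weights for the synthetic data
y = X @ true_w + 3.0                 # noiseless synthetic scores

# Fit y ~ X @ w + b by ordinary least squares on an augmented matrix.
A = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w_hat, b_hat = coef[:-1], coef[-1]  # recovered weights and intercept
```

    With a linear model like this, the learned weights directly indicate which features are most strongly related to layout quality, which is exactly the kind of analysis the paper reports.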

    A flexible enhanced fuzzy min-max neural network for pattern classification

    In this paper, the existing enhanced fuzzy min–max (EFMM) neural network is improved with a flexible learning procedure for undertaking pattern classification tasks. Four new contributions are introduced. Firstly, a new training strategy is proposed for avoiding the generation of unnecessary overlapped regions between hyperboxes of different classes; the learning phase is simplified by eliminating the contraction procedure. Secondly, a new flexible expansion procedure is introduced, which eliminates the use of a user-defined parameter (the expansion coefficient) to determine the hyperbox sizes. Thirdly, a new overlap test rule is applied during the test phase to identify containment cases and activate the contraction procedure (if necessary). Fourthly, a new contraction procedure is formulated to overcome the containment cases and avoid the data distortion problem. Both the third and fourth contributions are important for preventing the catastrophic forgetting issue and supporting the stability-plasticity principle of online learning. The performance of the proposed model is evaluated on benchmark data sets. The results demonstrate its efficiency in handling pattern classification tasks, outperforming other related models in online learning environments.
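    The hyperboxes in a fuzzy min-max network classify a point by its membership in each box. The sketch below shows a Simpson-style membership function for a single hyperbox, assuming inputs normalised to [0, 1]; it illustrates the general family of models, not this paper's modified EFMM procedures.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Fuzzy min-max membership of point x in the hyperbox with min
    corner v and max corner w (all coordinates in [0, 1]).

    Returns 1.0 inside the box, decaying toward 0 outside it;
    gamma controls how fast membership falls off with distance.
    """
    x, v, w = (np.asarray(a, dtype=float) for a in (x, v, w))
    # Penalty per dimension for exceeding the max corner...
    above = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, x - w)))
    # ...and for falling below the min corner.
    below = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, v - x)))
    # Average the two penalties over all dimensions.
    return float(np.mean((above + below) / 2.0))

inside = hyperbox_membership([0.4, 0.5], v=[0.3, 0.3], w=[0.6, 0.6])
outside = hyperbox_membership([0.9, 0.5], v=[0.3, 0.3], w=[0.6, 0.6])
```

    A classifier built on such boxes assigns a point to the class of its highest-membership hyperbox; the expansion, overlap-test, and contraction procedures discussed in the abstract govern how the corners `v` and `w` are adjusted during online learning.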

    An adaptive opposition-based learning selection: the case for Jaya algorithm

    Over the years, the opposition-based learning (OBL) technique has been proven to effectively enhance the convergence of meta-heuristic algorithms. Because OBL offers alternative candidate solutions in one or more opposite directions, it ensures good exploration and exploitation of the search space. In the last decade, many OBL techniques have been established in the literature, including Standard-OBL, General-OBL, Quasi-Reflection-OBL, Centre-OBL, and Optimal-OBL. Although proven useful, most existing adoptions of OBL into meta-heuristic algorithms are based on a single technique. If the search space contains many peaks, with potentially many local optima, relying on a single OBL technique may not be sufficiently effective; in fact, if the peaks are close together, a single OBL technique may not prevent entrapment in local optima. To address this issue, assembling a sequence of OBL techniques into a meta-heuristic algorithm can enhance the overall search performance. Based on a simple penalty-and-reward mechanism, the best-performing OBL is rewarded by continuing its execution in the next cycle, whilst a poorly performing one ceases its current turn. This paper presents a new adaptive approach that integrates more than one OBL technique into the Jaya Algorithm, termed OBL-JA. Unlike other adoptions of OBL, which use one type of OBL, OBL-JA uses several OBLs and selects among them based on their individual performance. Experimental results using combinatorial testing problems as a case study demonstrate that OBL-JA is very competitive with existing works in terms of test suite size. The results also show that OBL-JA performs better than the standard Jaya Algorithm in most of the tested cases, owing to its ability to adapt its behaviour based on the current performance feedback of the search process.
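    Two of the OBL variants named above can be sketched for a bounded one-dimensional decision variable, together with a naive "pick the better opposite" step. This is a minimal illustration of the general idea, not OBL-JA's actual penalty-and-reward scheme; the objective function and bounds are invented for the example.

```python
import random

def standard_obl(x, a, b):
    """Standard opposition: mirror x through the centre of [a, b]."""
    return a + b - x

def quasi_reflected_obl(x, a, b, rng):
    """Quasi-reflection: a random point between x and the interval centre."""
    c = (a + b) / 2.0
    lo, hi = sorted((x, c))
    return rng.uniform(lo, hi)

def pick_obl(f, x, a, b, rng):
    """Generate both opposites of x and keep whichever scores better on f
    (minimisation); an adaptive scheme would reward that variant with
    another turn in the next cycle."""
    candidates = {
        "standard": standard_obl(x, a, b),
        "quasi_reflected": quasi_reflected_obl(x, a, b, rng),
    }
    name, x_opp = min(candidates.items(), key=lambda kv: f(kv[1]))
    return name, x_opp

rng = random.Random(0)
f = lambda x: (x - 2.0) ** 2  # toy minimisation objective
name, x_opp = pick_obl(f, x=9.0, a=0.0, b=10.0, rng=rng)
```

    Here the standard opposite of 9.0 on [0, 10] is 1.0, which lies close to the optimum at 2.0; the current candidate alone would never have probed that region, which is the intuition behind OBL.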

    Latin hypercube sampling Jaya algorithm based strategy for T-way test suite generation

    T-way testing is a sampling strategy that generates a subset of test cases from the pool of possible tests. Many t-way testing strategies appear in the literature to date, ranging from general computational ones to meta-heuristic based ones. Owing to their performance, meta-heuristic based t-way strategies have gained significant attention recently (e.g. Particle Swarm Optimization, Genetic Algorithm, Ant Colony Algorithm, Harmony Search, Jaya Algorithm, and Cuckoo Search). The Jaya Algorithm (JA) is a new meta-heuristic algorithm that has been used to solve different problems. However, loss of search diversity is a common issue in meta-heuristic algorithms. To enhance JA's diversity, an enhanced Jaya Algorithm strategy for test suite generation, called the Latin Hypercube Sampling Jaya Algorithm (LHS-JA), is proposed. Latin hypercube sampling (LHS) is a sampling approach that can be used efficiently to improve search diversity. To evaluate the efficiency of LHS-JA, it is compared against existing meta-heuristic based t-way strategies. Experimental results are promising, showing that LHS-JA can compete with existing t-way strategies.
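    Latin hypercube sampling itself is simple to sketch: each of the n sample points occupies a distinct one of n equal-width strata in every dimension, which spreads the initial population more evenly than uniform random sampling. This is a generic LHS sketch, not the strategy's actual population-initialisation code.

```python
import random

def latin_hypercube(n, dims, rng):
    """Latin hypercube sample: n points in [0, 1)^dims such that, in each
    dimension, every one of the n equal-width strata holds exactly one point."""
    # Independently shuffle the stratum indices 0..n-1 for each dimension.
    strata = [list(range(n)) for _ in range(dims)]
    for s in strata:
        rng.shuffle(s)
    # Place point i uniformly at random inside its assigned stratum
    # in every dimension.
    return [[(strata[d][i] + rng.random()) / n for d in range(dims)]
            for i in range(n)]

pts = latin_hypercube(n=5, dims=2, rng=random.Random(42))
```

    Seeding candidate test cases from such a sample gives a meta-heuristic like JA a well-spread starting population, which is the diversity benefit the abstract refers to.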