34 research outputs found

    Stochastically Simulating the Effects of Requirements Creep on Software Development Risk Management

    One of the major chronic problems in software development is that application requirements are almost never stable and fixed. Creeping user requirements have been troublesome since the software industry began, and several empirical studies have reported that volatile requirements are a challenging factor in most information systems development projects. Software process simulation modeling has increasingly been applied to a variety of issues in software development, one of which is the management of software development risks. This study presents an approach for simulating and analyzing the effect of Requirements Creep on certain software development risk management activities. The proposed algorithm is based on stochastic simulation and has been implemented using C
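
    The abstract does not detail the algorithm, but a minimal sketch of the general idea, stochastically sampling a monthly requirements growth rate and aggregating the resulting effort overrun over many trials, might look as follows in Java. The class name, creep rates, and effort figures are illustrative assumptions, not values from the study.

    import java.util.Random;

    // Toy Monte Carlo simulation of requirements creep: each month a random
    // amount of new requirements arrives, inflating the planned effort.
    // All parameters are illustrative assumptions, not values from the paper.
    public class CreepSimulation {
        public static void main(String[] args) {
            final int runs = 10_000;           // Monte Carlo trials
            final int months = 12;             // planned project duration
            final int baseRequirements = 200;  // requirements at project start (assumed)
            final double effortPerReq = 2.5;   // person-days per requirement (assumed)
            final double meanCreepRate = 0.02; // ~2% new requirements per month (assumed)

            Random rng = new Random(42);
            double totalOverrun = 0.0;
            int riskyRuns = 0; // runs where scope grows by more than 20%

            for (int r = 0; r < runs; r++) {
                double requirements = baseRequirements;
                for (int m = 0; m < months; m++) {
                    // creep modelled as a normally distributed monthly growth rate
                    double rate = Math.max(0.0, meanCreepRate + 0.01 * rng.nextGaussian());
                    requirements += requirements * rate;
                }
                double overrun = (requirements - baseRequirements) * effortPerReq;
                totalOverrun += overrun;
                if (requirements > 1.2 * baseRequirements) riskyRuns++;
            }

            System.out.printf("Mean extra effort: %.1f person-days%n", totalOverrun / runs);
            System.out.printf("P(scope growth > 20%%): %.2f%n", (double) riskyRuns / runs);
        }
    }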

    Handling of Congestion in Cluster Computing Environment Using Mobile Agent Approach

    Computer networks have experienced explosive growth over the past few years, and with that growth have come severe congestion problems. Congestion must be prevented in order to maintain good network performance. In this paper, we propose a cluster-based framework to control congestion over the network using mobile agents. The cluster implementation involves the design of a server that manages the configuration and resetting of the cluster. Our framework handles the generation of application mobile code, its distribution to the appropriate clients, the efficient handling of the results generated and communicated by a number of client nodes, and the recording of the application's execution time. Each client node receives and executes the mobile code that defines the distributed job submitted by the server and replies with the results. We have also analyzed the performance of the developed system, emphasizing the tradeoff between communication and computation overhead. The effectiveness of the proposed framework is analyzed using JDK 1.5
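
    A minimal sketch, under assumptions the abstract does not state, of the pattern described: a job packaged as serializable "mobile code", shipped as bytes (here only through an in-memory buffer rather than a real socket between server and client node), executed on the receiving side, with communication and computation time recorded separately. Class and method names are illustrative.

    import java.io.*;
    import java.util.concurrent.Callable;

    // Illustrative sketch: a distributed job expressed as serializable "mobile code".
    // Serialization/deserialization stands in for network transfer to a client node,
    // so the communication-vs-computation tradeoff can be measured.
    public class MobileCodeDemo {

        // The "mobile code": a self-contained job the client can execute.
        static class SumJob implements Callable<Long>, Serializable {
            private static final long serialVersionUID = 1L;
            private final long n;
            SumJob(long n) { this.n = n; }
            public Long call() {
                long sum = 0;
                for (long i = 1; i <= n; i++) sum += i;
                return sum;
            }
        }

        public static void main(String[] args) throws Exception {
            SumJob job = new SumJob(50_000_000L);

            // "Communication": serialize the job as the server would before sending it...
            long t0 = System.nanoTime();
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
                out.writeObject(job);
            }
            // ...and deserialize it as the receiving client node would.
            Callable<Long> received;
            try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray()))) {
                received = (SumJob) in.readObject();
            }
            long commNanos = System.nanoTime() - t0;

            // "Computation": the client executes the mobile code and would reply with the result.
            long t1 = System.nanoTime();
            long result = received.call();
            long compNanos = System.nanoTime() - t1;

            System.out.println("Result: " + result);
            System.out.printf("Communication (serialize+deserialize): %.2f ms%n", commNanos / 1e6);
            System.out.printf("Computation: %.2f ms%n", compNanos / 1e6);
        }
    }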

    Simulator for Resource Optimization of Job Scheduling in a Grid Framework

    Traditionally, computer software has been written for serial computation. Such software is run on a single computer with a single Central Processing Unit (CPU): a problem is broken into a discrete series of instructions that are executed in exact order, one after another, and only one instruction can be executed at any moment in time on a single CPU. Parallel computing, on the other hand, is the simultaneous use of multiple computing resources to solve a computational problem. The program is run using multiple CPUs: a problem is broken into discrete parts that can be solved concurrently and executed simultaneously on different CPUs. The purpose of this proposed work is to develop a simulator in Java for the implementation of job scheduling and to show that parallel execution is more efficient than serial execution in terms of time, speed, and resources
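
    A minimal sketch, assuming nothing about the simulator beyond its stated goal, of how the same set of jobs can be timed under serial and parallel execution in Java using an ExecutorService. Job sizes and counts are illustrative.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    // Toy comparison of serial vs. parallel job execution; job sizes and counts
    // are illustrative, not taken from the paper's simulator.
    public class SchedulingDemo {

        // A CPU-bound "job": sum of square roots up to n.
        static double job(long n) {
            double s = 0;
            for (long i = 1; i <= n; i++) s += Math.sqrt(i);
            return s;
        }

        public static void main(String[] args) throws Exception {
            final int jobs = 16;
            final long workPerJob = 20_000_000L;

            // Serial execution: one job after another on a single thread.
            long t0 = System.nanoTime();
            for (int i = 0; i < jobs; i++) job(workPerJob);
            long serialMs = (System.nanoTime() - t0) / 1_000_000;

            // Parallel execution: jobs scheduled across all available CPUs.
            int cpus = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cpus);
            Callable<Double> task = () -> job(workPerJob);
            List<Future<Double>> futures = new ArrayList<>();
            long t1 = System.nanoTime();
            for (int i = 0; i < jobs; i++) futures.add(pool.submit(task));
            for (Future<Double> f : futures) f.get(); // wait for all jobs
            long parallelMs = (System.nanoTime() - t1) / 1_000_000;
            pool.shutdown();

            System.out.println("CPUs: " + cpus);
            System.out.println("Serial:   " + serialMs + " ms");
            System.out.println("Parallel: " + parallelMs + " ms");
        }
    }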

    Simulation of Reliability of Software Component

    Component-Based Software Engineering (CBSE) is increasingly being accepted worldwide for software development in most industries. Software reliability is defined as the probability that a software system operates without failure for a specified time under specified operating conditions. Software component reliability and failure intensity are two important parameters for estimating the reliability of a system after the integration of its components, and estimating software reliability can save loss of time, life, and cost. In this paper, software reliability has been estimated by analyzing failure data. Imperfect-debugging Software Reliability Growth Models (SRGMs) have been used for simulating software reliability by estimating the number of remaining faults and the model parameters of the fault content rate function. We aim to simulate software reliability by combining imperfect debugging with the Goel-Okumoto model. The reliability estimate indicates when the otherwise unending testing of a component can be stopped, i.e., the release time of the software component
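
    As a minimal illustration of the kind of calculation involved, the Goel-Okumoto mean value function m(t) = a(1 - e^(-b t)) gives the expected number of faults detected by test time t, so a - m(t) estimates the remaining faults and R(x | t) = exp(-[m(t + x) - m(t)]) the conditional reliability over the next x time units. The sketch below uses arbitrary placeholder parameters, not the estimates obtained in the paper, and omits the imperfect-debugging extension.

    // Goel-Okumoto SRGM: m(t) = a * (1 - exp(-b * t)).
    // Parameter values are arbitrary placeholders for illustration only.
    public class GoelOkumotoDemo {

        // Expected cumulative number of faults detected by time t.
        static double meanValue(double a, double b, double t) {
            return a * (1.0 - Math.exp(-b * t));
        }

        // Conditional reliability: probability of no failure in (t, t + x].
        static double reliability(double a, double b, double t, double x) {
            return Math.exp(-(meanValue(a, b, t + x) - meanValue(a, b, t)));
        }

        public static void main(String[] args) {
            double a = 120.0; // total expected faults (placeholder)
            double b = 0.05;  // fault detection rate per unit test time (placeholder)

            for (double t = 20; t <= 100; t += 20) {
                double remaining = a - meanValue(a, b, t);
                double r = reliability(a, b, t, 10.0); // reliability over the next 10 time units
                System.out.printf("t=%5.1f  remaining faults=%6.2f  R(10|t)=%.4f%n",
                                  t, remaining, r);
            }
        }
    }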

    Load Balanced Clusters for Efficient Mobile Computing

    Mobile computing is distributed computing that involves components whose position changes during computation. It bestows a new paradigm of mobile ad hoc networks (MANETs) for organizing and implementing computation on the fly. A MANET is characterized by the flexibility to be deployed and functional in “on-demand” situations, combined with the capability to support a wide spectrum of applications and the resilience to dynamically repair routes around broken links. The underlying issue is routing in such a dynamic topology. Numerous studies have shown how difficult it is for a routing protocol to scale to large MANETs. For this reason, such networks rely on a combination of storing some information about the position of each Mobile Unit (MU) at selected sites and forming some form of clustering. However, a centralized Clusterhead (CH) can become a bottleneck and possibly lead to lower throughput for the MANET. We propose a mechanism in which communication outside the cluster is distributed through separate CHs, and we prove that the overall average throughput increases when distinct CHs are used for each neighboring cluster, although the gain in throughput diminishes beyond a certain traffic rate due to the overhead induced by “many” CHs
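
    A back-of-the-envelope toy model (not the paper's analysis) of the throughput argument: with one CH whose forwarding capacity is shared by k neighboring clusters, inter-cluster throughput is capped by that single CH, while k dedicated CHs forward in parallel at the cost of extra coordination overhead. All capacities, demands, and overhead figures below are assumptions.

    // Toy model: inter-cluster throughput with one shared CH versus one CH per
    // neighboring cluster. Capacities, demands, and overheads are assumed values.
    public class ClusterheadThroughputDemo {
        public static void main(String[] args) {
            final int neighbors = 4;               // neighboring clusters
            final double chCapacity = 100.0;       // packets/s one CH can forward (assumed)
            final double overheadPerExtraCh = 8.0; // packets/s lost to coordination per extra CH (assumed)

            System.out.println("demand/neighbor  singleCH  multiCH  gain");
            for (double demand = 10; demand <= 60; demand += 10) {
                double offered = neighbors * demand;

                // Single CH: all inter-cluster traffic is squeezed through one forwarder.
                double single = Math.min(offered, chCapacity);

                // One CH per neighboring cluster: parallel forwarding minus coordination overhead.
                double multi = neighbors * Math.min(demand, chCapacity)
                        - (neighbors - 1) * overheadPerExtraCh;

                System.out.printf("%15.0f  %8.1f  %7.1f  %5.1f%n",
                        demand, single, multi, multi - single);
            }
        }
    }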

    Nutrition, atherosclerosis, arterial imaging, cardiovascular risk stratification, and manifestations in COVID-19 framework: a narrative review.

    Background: Atherosclerosis is the primary cause of cardiovascular disease (CVD). Several risk factors lead to atherosclerosis, and altered nutrition is one of them. Nutrition has quite often been ignored in the process of CVD risk assessment. Altered nutrition, along with carotid ultrasound imaging-driven atherosclerotic plaque features, can help in understanding and avoiding the problems associated with the late diagnosis of CVD. Artificial intelligence (AI) is another promising technology adopted for CVD risk assessment and management. Therefore, we hypothesize that the risk of atherosclerotic CVD can be accurately monitored using carotid ultrasound imaging, predicted using AI-based algorithms, and reduced with the help of proper nutrition. Layout: The review presents a pathophysiological link between nutrition and atherosclerosis by gaining a deep insight into the processes involved at each stage of plaque development. After targeting the causes and obtaining results from low-cost, user-friendly, ultrasound-based arterial imaging, it is important to (i) stratify the risks and (ii) monitor them by measuring plaque burden and computing a risk score as part of the preventive framework. AI-based strategies are used to provide efficient CVD risk assessments. Finally, the review presents the role of AI in CVD risk assessment during COVID-19. Conclusions: By studying the mechanism of low-density lipoprotein formation and the role of saturated and trans fats and other dietary components that lead to plaque formation, we demonstrate CVD risk assessment driven by nutrition and atherosclerotic disease formation during normal and COVID-19 times. Further, if nutrition is included as one of the associated risk factors, atherosclerotic disease progression and its management can benefit from AI-based CVD risk assessment