322 research outputs found

    Performance analysis : a case study on network management system using machine learning

    Get PDF
    Businesses operate legacy distributed software systems whose complexity puts them beyond the reach of traditional data analysis methods. Moreover, these systems evolve and become harder to understand even with knowledge of the system architecture. Machine learning and big-data analytic techniques are widely used across technical domains to draw insight from such large business data because of their performance and accuracy. This study investigated the applicability of machine learning techniques to performance utilization modelling on Nokia’s network management system. The objective was to develop resource utilization models from system performance data and to address future business needs in capacity analysis of software performance by minimizing manual tasks. The performance data was extracted from the network management system software and contains system-level and component-level resource usage measurements as a function of input load. In general, the simulated load on a network management system is uniform with little variance; to overcome this, different load profiles were simulated on the system during the research to assess its performance. The data was then processed and evaluated using a set of machine learning techniques (linear regression, MARS, k-NN, random forest, SVR and feed-forward neural networks) to construct resource utilization models. The goodness of the developed models was further evaluated on simulated test data and on customer data. No single algorithm performed best on all resource entities, but neural networks, as a multivariable-output model, performed well on most response variables. Differences observed when comparing performance across the customer and test datasets were also studied. Overall, the results show the feasibility of modeling system resources for use in capacity analysis. Further analysis of the remaining system nodes is left for future iterations, and suggestions are made in the report
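    The model-comparison step described in this abstract can be sketched roughly as below. The Nokia performance data is not public, so the load features and utilization target here are synthetic stand-ins, and MARS is omitted because it has no scikit-learn implementation; the point is only the pattern of fitting several regressors and comparing their goodness of fit.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in for the performance data: input-load features -> utilization
X = rng.uniform(0, 1, size=(500, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One candidate per model family named in the abstract (MARS excluded)
models = {
    "linear": LinearRegression(),
    "knn": KNeighborsRegressor(n_neighbors=5),
    "rf": RandomForestRegressor(n_estimators=100, random_state=0),
    "svr": SVR(),
    "mlp": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}

# Goodness of fit on held-out data; the study additionally checked customer data
scores = {name: r2_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
for name, r2 in scores.items():
    print(f"{name}: R^2 = {r2:.3f}")
```

    As in the study, no one family need dominate: which model wins depends on the response variable and the shape of the load-utilization relationship.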

    Evolutionary approaches to signal decomposition in an application service management system

    Get PDF
    The increased demand for autonomous control in enterprise information systems has generated interest in efficient global search methods for multivariate datasets, in order to search for original elements in time-series patterns and build causal models of system interactions, utilization dependencies, and performance characteristics. In this context, activity signal deconvolution is a necessary step to achieve effective adaptive control in Application Service Management. The paper investigates the potential of population-based metaheuristic algorithms, particularly variants of particle swarm, genetic algorithms and differential evolution methods, for activity signal deconvolution when the application performance model is unknown a priori. In our approach, the Application Service Management system is treated as a black or grey box, and activity signal deconvolution is formulated as a search problem, decomposing time series that outline relations between action signals and the utilization and execution time of resources. Experiments are conducted using a queue-based computing system model as a test bed under different load conditions and search configurations. Special attention was paid to high-dimensional scenarios, testing effectiveness for large-scale multivariate data analyses that must obtain a near-optimal signal decomposition in a short time. The experimental results reveal the benefits, qualities and drawbacks of the various metaheuristic strategies selected for a given signal deconvolution problem, and confirm the potential of evolutionary-type search to explore the search space effectively even in high-dimensional cases. The approach and the algorithms investigated can be useful in support of human administrators, or in enhancing the effectiveness of feature extraction schemes that feed the decision blocks of autonomous controllers
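    The "deconvolution as a search problem" formulation can be caricatured as follows, under an assumption the paper does not spell out in this abstract: each action contributes a known kernel shape to the observed utilization signal, and only the per-action amplitudes are unknown. Differential evolution, one of the metaheuristic families compared, then searches for the amplitudes that best reconstruct the observation.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)

# Hypothetical per-action kernels: each action's contribution shape over time
kernels = np.stack([np.sin(2 * np.pi * (k + 1) * t) for k in range(4)])
true_w = np.array([0.8, 0.1, 0.5, 0.3])        # unknown activity amplitudes
observed = true_w @ kernels + 0.01 * rng.normal(size=t.size)  # noisy utilization

def residual(w):
    # Search objective: squared error between reconstruction and observation
    return np.sum((w @ kernels - observed) ** 2)

result = differential_evolution(residual, bounds=[(0.0, 1.0)] * 4, seed=1)
print("recovered amplitudes:", np.round(result.x, 2))
```

    In the black/grey-box setting of the paper the kernels themselves are not known exactly, which is what makes the high-dimensional search variants studied there worthwhile.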

    Effective Resource and Workload Management in Data Centers

    Get PDF
    The increasing demand for storage, computation, and business continuity has driven the growth of data centers. Managing data centers efficiently is a difficult task because of the wide variety of data center applications, their ever-changing intensities, and the fact that application performance targets may differ widely. Server virtualization has been a game-changing technology for IT, providing the ability to support multiple virtual machines (VMs) simultaneously. This dissertation focuses on how virtualization technologies can be used to develop new tools for maintaining high resource utilization, achieving high application performance, and reducing the cost of data center management.

    For multi-tiered applications, bursty workload traffic can significantly deteriorate performance. This dissertation proposes an admission control algorithm, AWAIT, for handling overload conditions in multi-tier web services. AWAIT places requests of accepted sessions on hold and refuses to admit new sessions when the system is in a sudden workload surge. To meet the service-level objective, AWAIT serves the requests in the blocking queue with high priority. The size of the queue is determined dynamically according to workload burstiness.

    Many admission control policies are triggered by instantaneous measurements of system resource usage, e.g., CPU utilization. This dissertation first demonstrates that directly measuring virtual machine resource utilization with standard tools cannot always produce accurate estimates. A directed factor graph (DFG) model is defined to capture the dependencies among multiple types of resources across the physical and virtual layers.

    Virtualized data centers enable sharing of resources among hosted applications to achieve high resource utilization. However, it is difficult to satisfy application SLOs on a shared infrastructure because application workload patterns change over time. AppRM, an automated management system, not only allocates the right amount of resources for applications to meet their performance targets but also adjusts to dynamic workloads using an adaptive model.

    Server consolidation is one of the key applications of server virtualization. This dissertation proposes a VM consolidation mechanism, first by extending the fair load balancing scheme to multi-dimensional vector scheduling, and then by using a queueing network model to capture service contention for a particular virtual machine placement
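    The AWAIT policy sketched in this abstract (queue held requests of accepted sessions, reject brand-new sessions during a surge) can be illustrated with a toy class. The class name, threshold, and return values here are invented for illustration, not the dissertation's actual interface, and the real queue limit adapts to workload burstiness rather than being fixed.

```python
from collections import deque

class AwaitLikeController:
    """Toy sketch of an AWAIT-style admission policy (illustrative only)."""

    def __init__(self, queue_limit=4):
        # In the dissertation this limit adapts to burstiness; fixed here
        self.queue_limit = queue_limit
        self.sessions = set()   # sessions already admitted
        self.queue = deque()    # blocking queue of held requests

    def submit(self, session_id):
        if len(self.queue) >= self.queue_limit and session_id not in self.sessions:
            # Surge: refuse new sessions outright
            return "rejected"
        # Requests of accepted sessions are held rather than dropped
        self.sessions.add(session_id)
        self.queue.append(session_id)
        return "queued"

    def serve(self):
        # Held requests are served with high priority as capacity frees up
        return self.queue.popleft() if self.queue else None
```

    The key asymmetry, new sessions rejected while in-flight sessions only wait, is what lets the service-level objective survive a burst.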

    Supply chain viability: conceptualization, measurement, and nomological validation

    Full text link
    Supply chain viability (SCV) is an emerging concept of growing importance in operations management. This paper aims to conceptualize, develop, and validate a measurement scale for SCV. SCV is first defined and operationalized as a construct, followed by content validation and item measure development. Data have been collected through three independent samplings comprising a total of 558 respondents. Both exploratory and confirmatory factor analyses are used in a step-wise manner for scale development. Reliability and validity are evaluated. A nomological model is theorized and tested to evaluate nomological validity. For the first time, our study frames SCV as a novel and distinct construct. The findings show that SCV is a hierarchical and multidimensional construct, reflected in organizational structures, organizational resources, dynamic design capabilities, and operational aspects. The findings reveal that a central characteristic of SCV is the dynamic reconfiguration of SC structures in an adaptive manner to ensure survival over the long term. This research conceptualizes SCV and provides specific, validated dimensions and item measures for it. Practitioner-directed guidance and suggestions are offered for improving SCV during the COVID-19 pandemic and future severe disruptions
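    The exploratory-factor-analysis step of the scale-development procedure can be sketched as below. The survey responses, factor structure, and loadings are entirely synthetic (the paper's 558-respondent data is not reproduced), and the confirmatory factor analysis and nomological testing that follow in the study are not shown.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n = 558  # same sample size as the study; the data itself is invented

# Two hypothetical latent dimensions (e.g. "organizational resources" and
# "dynamic design capabilities"), each reflected by three Likert-style items
latent = rng.normal(size=(n, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                     [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
items = latent @ loadings.T + 0.3 * rng.normal(size=(n, 6))

# Exploratory step: recover the factor structure from the item responses
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
est = np.abs(fa.components_)  # shape (2 factors, 6 items)
print(np.round(est, 2))
```

    With a clean reflective structure like this, the rotated loadings cleanly separate the two item clusters, which is the pattern one then fixes and tests in the confirmatory stage.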

    Influence of ICT Capacity on Effective Utilization of ICT to Improve Organizational Performance of Learning Institutions: A Literature Review

    Get PDF
    Governments and ICT integration advocates tend to treat infrastructural investment as a panacea for the ICT needs of learning institutions, without proper plans for how it will be utilized and without a clear understanding of the existing capacity deficits that will affect successful implementation. Most studies focus narrowly on the availability of technology and on what students learn through it, leaving a gap in understanding of the capacity requirements that ensure effective utilization of the technology to improve the quality of educational processes in learning institutions. ICT capacity has been a particular focus for scholars seeking to understand how teacher characteristics and capabilities influence effective utilization of ICT, so that its full potential for improving the efficiency and effectiveness of management, teaching and learning processes can be realized. This review summarizes the relevant research on the influence of ICT capacity on effective utilization of ICT to improve the organizational performance of learning institutions. Specifically, it summarizes research on teachers’ characteristics and ICT capacity and their effect on organizational performance in learning institutions. The review also discusses gaps in the literature, directions for future studies to bridge those gaps, and the implications of the research for scholars and policy makers in educational technology