
    APPLICATION OF REMOTE AND MONITORING BASED JAVA REMOTE METHOD INVOCATION

    This research aims to develop remote control and monitoring application software based on Java Remote Method Invocation (RMI). The application can be used for remote control and monitoring across a connected computer network, regardless of the operating system in use. The study follows object-oriented software development combined with the waterfall process model, which proceeds through four stages. The first stage, requirements analysis, covers observation of existing remote systems and a literature study. The second stage, system design, includes use case diagrams that illustrate the actors' activities in the application and sequence diagrams that describe the order in which the application executes. The third stage, coding, implements the design from the sequence diagrams and performs unit testing, often called white-box testing. The fourth stage, integrated testing, includes black-box testing, alpha testing to determine the performance of the application, and beta testing with users from the Focus Group Discussion Digital Networks and Multimedia Puskom UNY. The test results for the Remote and Monitoring application based on Java Remote Method Invocation (RMI) indicate that: 1) the application was successfully designed, built, and implemented; 2) the application performs well, with all tested systems running and working in accordance with the desired specifications; and 3) in terms of usability, the application is rated highly feasible, with a score of 82.14%. Keywords: applications, remote monitoring, remote system, remote control, Java, Remote Method Invocation
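    The paper's application is built on Java RMI, where a client invokes methods on an object hosted in a remote JVM. As a rough, hypothetical analogue of that remote-invocation pattern (not the paper's implementation), the sketch below uses Python's standard-library XML-RPC; the host name and load value are invented:

    ```python
    # Hypothetical sketch: the paper uses Java RMI; this is only a rough
    # analogue of a remote "monitoring" call, built with Python's stdlib
    # XML-RPC to illustrate the remote-invocation pattern.
    import threading
    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.client import ServerProxy

    def cpu_status():
        # Stand-in for a real monitoring probe on the remote machine.
        return {"host": "lab-pc-01", "cpu_load": 0.42}

    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    port = server.server_address[1]
    server.register_function(cpu_status, "cpu_status")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Client side: invoke the method as if it were local.
    client = ServerProxy(f"http://127.0.0.1:{port}")
    status = client.cpu_status()
    print(status["host"], status["cpu_load"])
    server.shutdown()
    ```

    In Java RMI the same roles are played by a `Remote` interface, a `UnicastRemoteObject` export, and an RMI registry lookup.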

    DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems

    Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neural network through a set of training data. We have seen wide adoption of DL in many safety-critical scenarios. However, a plethora of studies have shown that state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by the accuracy of test data. Given the limited availability of high-quality test data, good accuracy on test data can hardly provide confidence in the testing adequacy and generality of DL systems. Unlike traditional software systems that have clear and controllable logic and functionality, the lack of interpretability in a DL system makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed. The in-depth evaluation of our proposed testing criteria is demonstrated on two well-known datasets, five DL systems, and with four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.
    Comment: The 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE 2018)
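    One of the simplest criteria that DeepGauge generalizes is basic neuron coverage: the fraction of neurons whose activation exceeds a threshold on at least one test input. A minimal sketch with invented activation values (the real criteria operate on recorded activations from a trained network):

    ```python
    # Minimal sketch of basic neuron coverage: the fraction of neurons whose
    # activation exceeds a threshold on at least one test input.
    # The activation values below are invented for illustration.
    def neuron_coverage(activations, threshold=0.5):
        """activations: one list per test input, one value per neuron."""
        n_neurons = len(activations[0])
        covered = [False] * n_neurons
        for per_input in activations:
            for i, a in enumerate(per_input):
                if a > threshold:
                    covered[i] = True
        return sum(covered) / n_neurons

    acts = [
        [0.9, 0.1, 0.0, 0.7],  # input 1
        [0.2, 0.6, 0.0, 0.1],  # input 2
    ]
    print(neuron_coverage(acts))  # 3 of 4 neurons fire above 0.5 -> 0.75
    ```

    DeepGauge's multi-granularity criteria refine this idea, e.g. by partitioning each neuron's activation range into sections and measuring how many sections the test set exercises.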

    Quality Research by Using Performance Evaluation Metrics for Software Systems and Components

    Software performance testing and evaluation have four basic needs: (1) a well-defined performance testing strategy, requirements, and focus; (2) correct and effective performance evaluation models; (3) well-defined performance metrics; and (4) cost-effective performance testing and evaluation tools and techniques. This chapter first introduces a performance test process and discusses performance testing objectives and focus areas. It then summarizes the basic challenges and issues in performance testing and evaluation of component-based programs and components. Next, it presents different types of performance metrics for software components and systems, including processing speed, utilization, throughput, reliability, availability, and scalability metrics. Most of the performance metrics covered here can be considered applications of existing metrics to software components; new performance metrics are needed to support the performance evaluation of component-based programs. Keywords: metrics, software performance, testing, evaluation, reliability, scalability
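    The metric families named above reduce to simple ratio formulas. A sketch with invented measurements (the function names and inputs are illustrative, not from the chapter):

    ```python
    # Illustrative computations of common performance metric types:
    # throughput, utilization, and availability. All numbers are made up.
    def throughput(completed_requests, seconds):
        return completed_requests / seconds      # requests per second

    def utilization(busy_time, total_time):
        return busy_time / total_time            # fraction of time busy

    def availability(mtbf, mttr):
        # Mean time between failures vs. mean time to repair.
        return mtbf / (mtbf + mttr)

    print(throughput(1200, 60))                  # 20.0 req/s
    print(utilization(45.0, 60.0))               # 0.75
    print(round(availability(990.0, 10.0), 3))   # 0.99
    ```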

    Toward a Unified Performance and Power Consumption NAND Flash Memory Model of Embedded and Solid State Secondary Storage Systems

    This paper presents a set of models dedicated to describing the structure, functions, performance, and power consumption behavior of a flash storage subsystem. These models cover a large range of today's NAND flash memory applications. They are designed to be implemented in simulation tools that estimate and compare the performance and power consumption of I/O requests on flash-memory-based storage systems. Such tools can also help in designing and validating new flash storage systems and management mechanisms. This work is part of a global project that aims to build a framework for simulating complex flash storage hierarchies for performance and power consumption analysis. The tool will be highly configurable and modular, with various levels of usage complexity according to the intended aim: from a software user's point of view for simulating storage systems, to a developer's point of view for designing, testing, and validating new flash storage management systems.
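    In the spirit of the simulation tools described, a flash model can be sketched as a per-operation cost table for latency and energy. The constants below are invented placeholders, not the paper's calibrated values:

    ```python
    # Toy performance/power model of NAND flash I/O: each operation type has
    # an assumed latency and energy cost. The constants are invented.
    COSTS = {  # op name: (latency_us, energy_uJ)
        "read":  (25.0, 5.0),     # per page
        "write": (200.0, 40.0),   # per page
        "erase": (1500.0, 100.0), # per block
    }

    def simulate(ops):
        """ops: list of operation names; returns (total latency us, energy uJ)."""
        latency = sum(COSTS[op][0] for op in ops)
        energy = sum(COSTS[op][1] for op in ops)
        return latency, energy

    lat, en = simulate(["read", "read", "write", "erase"])
    print(lat, en)  # 1750.0 150.0
    ```

    A fuller model would add queuing, channel/die parallelism, and FTL management effects, which is what the proposed framework targets.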

    Tools and Techniques Used for Prioritizing Test Cases in Regression Testing

    Testing is a very expensive task in terms of cost, effort, and time, and it is a necessary step of software development, because without testing software cannot be considered complete. Regression testing is a type of software testing that is widely used in the development and maintenance phases; it also occupies a large portion of the software maintenance budget. Many software testing tools and techniques are used to test software programs. This paper describes testing approaches that reduce effort and time in regression testing. Software systems change regularly during development and maintenance. After software is modified, regression testing is applied to ensure that it behaves as intended and that the modifications do not negatively impact its original functionality. This paper focuses on improving the performance of regression testing by computing coverage data for evolving software using dataflow analysis and execution tracing.
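    A standard way coverage data feeds test-case prioritization is the "additional greedy" strategy: repeatedly pick the test that covers the most not-yet-covered statements. A minimal sketch with invented coverage data (one of several prioritization techniques surveyed in this area, not necessarily the paper's own):

    ```python
    # Sketch of coverage-based test prioritization ("additional greedy"):
    # repeatedly pick the test covering the most not-yet-covered statements.
    # The coverage sets below are invented.
    def prioritize(coverage):
        remaining = dict(coverage)
        covered, order = set(), []
        while remaining:
            best = max(remaining, key=lambda t: len(remaining[t] - covered))
            if not remaining[best] - covered:
                order.extend(sorted(remaining))  # nothing new left; append rest
                break
            order.append(best)
            covered |= remaining.pop(best)
        return order

    coverage = {
        "t1": {1, 2, 3},
        "t2": {3, 4},
        "t3": {4, 5, 6, 7},
    }
    print(prioritize(coverage))  # ['t3', 't1', 't2']
    ```

    Running high-coverage tests first tends to expose regressions earlier when the full suite is too expensive to finish.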

    CONFPROFITT: A CONFIGURATION-AWARE PERFORMANCE PROFILING, TESTING, AND TUNING FRAMEWORK

    Modern computer software systems are complicated. Developers can change the behavior of a software system through software configurations, and the large number of configuration options and their interactions make software tuning, testing, and debugging very challenging. Performance is one of the key non-functional qualities: performance bugs can cause significant performance degradation and lead to poor user experience. However, performance bugs are difficult to expose, primarily because detecting them requires specific inputs as well as specific configurations. While researchers have developed techniques to analyze, quantify, detect, and fix performance bugs, many of these techniques are not effective in highly-configurable systems. To improve the non-functional qualities of configurable software systems, testing engineers need to be able to understand the performance influence of configuration options, adjust the performance of a system under different configurations, and detect configuration-related performance bugs. This research provides an automated framework that allows engineers to effectively analyze performance-influencing configuration options, detect performance bugs in highly-configurable software systems, and adjust configuration options to achieve higher long-term performance gains. To understand real-world performance bugs in highly-configurable software systems, we first perform a performance bug characteristics study on three large-scale open-source projects. Many researchers have studied the characteristics of performance bugs from bug reports, but few have reported what the experience of replicating confirmed performance bugs is like from the perspective of non-domain experts such as researchers. This study reports the challenges of, and potential workarounds for, replicating confirmed performance bugs.
We also share a performance benchmark that provides real-world performance bugs for evaluating future performance testing techniques. Inspired by our performance bug study, we propose a performance profiling approach that helps developers understand how configuration options and their interactions influence the performance of a system. The approach uses a combination of dynamic analysis and machine learning techniques, together with configuration sampling techniques, to profile the program execution and analyze the configuration options relevant to performance. Next, the framework leverages natural language processing and information retrieval techniques to automatically generate test inputs and configurations that expose performance bugs. Finally, the framework combines reinforcement learning and dynamic state reduction techniques to guide the subject application toward achieving higher long-term performance gains.
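    The core idea of ranking performance-influencing options can be sketched very simply: sample configurations, measure each run, and score each boolean option by the mean performance difference it induces. Everything here is hypothetical, including the option names and the stand-in benchmark; the actual framework uses dynamic analysis and machine learning rather than this exhaustive toy model:

    ```python
    # Hypothetical sketch of profiling performance-influential configuration
    # options: benchmark the system under sampled configurations, then rank
    # each boolean option by the mean runtime difference it induces.
    import itertools
    import statistics

    OPTIONS = ["cache", "compress", "log_verbose"]

    def run_benchmark(config):
        # Invented cost model standing in for a real measurement:
        # compression is expensive, caching helps, logging adds a little.
        t = 100.0
        if config["compress"]:
            t += 40.0
        if config["cache"]:
            t -= 25.0
        if config["log_verbose"]:
            t += 5.0
        return t

    def option_influence():
        configs = [dict(zip(OPTIONS, bits))
                   for bits in itertools.product([False, True], repeat=len(OPTIONS))]
        times = [(c, run_benchmark(c)) for c in configs]
        influence = {}
        for opt in OPTIONS:
            on = [t for c, t in times if c[opt]]
            off = [t for c, t in times if not c[opt]]
            influence[opt] = statistics.mean(on) - statistics.mean(off)
        return influence

    inf = option_influence()
    print(sorted(inf, key=lambda o: -abs(inf[o])))  # ['compress', 'cache', 'log_verbose']
    ```

    Exhaustive enumeration is only feasible for a handful of options; this is why the framework relies on configuration sampling and learned models instead.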