TESTING A SHIP SCHEDULING INFORMATION SYSTEM USING BLACKBOX TESTING WITH THE BOUNDARY VALUE ANALYSIS METHOD
Information systems are essential in the current technological era and are needed by every agency, whether private or government. Within the Software Development Life Cycle (SDLC), the testing stage is an important part of software development: it is carried out to ensure the system or application runs according to the expected functionality. Blackbox Testing with the Boundary Value Analysis technique tests the minimum and maximum limits of each input to determine whether the values produced are valid. This research was carried out on a ship scheduling information system, which has many features in the form of input fields. The test flow begins with identifying the problem, selecting test data and determining test participants, then performing Boundary Value Analysis testing, which involves designing test cases, and finally calculating the test results and documenting the evaluation. In this test, all system functions ran according to their specifications, and the tests yielded an effectiveness value of 100%.
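As an illustration of how Boundary Value Analysis cases are typically derived, the sketch below generates boundary test values for a hypothetical input field; the field's valid range and the `validate_length` checker are assumptions for illustration, not details from the paper.

```python
# Minimal Boundary Value Analysis sketch (hypothetical field and range, not
# taken from the paper): for an input with a valid length of 5..20 characters,
# BVA exercises the values just below, at, and just above each boundary.

def boundary_values(minimum: int, maximum: int) -> list[int]:
    """Standard BVA points: min-1, min, min+1, max-1, max, max+1."""
    return [minimum - 1, minimum, minimum + 1, maximum - 1, maximum, maximum + 1]

def validate_length(text: str, minimum: int = 5, maximum: int = 20) -> bool:
    """Assumed validation rule: the field accepts 5..20 characters."""
    return minimum <= len(text) <= maximum

if __name__ == "__main__":
    for n in boundary_values(5, 20):
        candidate = "x" * max(n, 0)      # build an input of length n
        expected = 5 <= n <= 20          # only values inside the range are valid
        actual = validate_length(candidate)
        status = "PASS" if actual == expected else "FAIL"
        print(f"length={n:2d} expected={expected!s:5} actual={actual!s:5} {status}")
```

Each boundary pair (4/5 and 20/21) probes one edge of the valid range, which is exactly where off-by-one validation faults tend to hide.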
Prototyping a process-centered environment
This paper describes an experimental system developed and used as a vehicle for prototyping the Arcadia-1 software development environment. Prototyping is viewed as a knowledge acquisition process and is used to reduce risks in software development by gaining rapid feedback about the suitability of a production system before the system is completed. Prototyping a software development environment is particularly important due to the lack of experience with them. There is an acute need to acquire knowledge about user interaction requirements for software environments. These needs are especially important for the Arcadia project, as it is one of the first attempts to construct a process-centered environment. Our prototyping effort addresses questions about effective interaction with a process-centered environment by simulating how Arcadia-1 would interact with users in a representative range of usage scenarios. We built a prototyping system, called PRODUCER, and used it to generate a variety of prototypes simulating user interactions with Arcadia-1 process programs. Experience with PRODUCER indicates that our approach is effective at risk reduction. The prototypes greatly improved communication with our customer. They confirmed some of our design decisions but also redirected our research efforts as a result of unexpected insight. We also found that prototyping usage scenarios provides conceptual guides and design information for process programmers. Most of the benefits of our prototyping effort derive from developing and interacting with usage scenarios, so our approach is generalizable to other prototyping systems. This paper reports on our prototyping approach and our experience in prototyping a process-centered environment.
Integrating testing techniques through process programming
Integration of multiple testing techniques is required to demonstrate high quality of software. Technique integration has three basic goals: incremental testing capabilities, extensive error detection, and cost-effective application. We are experimenting with the use of process programming as a mechanism for integrating testing techniques. Having set out to integrate DATA FLOW testing and RELAY, we proposed synergistic use of these techniques to achieve all three goals. We developed a testing process program much as we would develop a software product, from requirements through design to implementation and evaluation. We found process programming to be effective for explicitly integrating the techniques and achieving the desired synergism. Used in this way, process programming also mitigates many of the other problems that plague testing in the software development process.
ATAMM analysis tool
Diagnostics software for analyzing Algorithm to Architecture Mapping Model (ATAMM) based concurrent processing systems is presented. ATAMM is capable of modeling the execution of large grain algorithms on distributed data flow architectures. The tool graphically displays algorithm activities and processor activities for evaluation of the behavior and performance of an ATAMM based system. The tool's measurement capabilities indicate computing speed, throughput, concurrency, resource utilization, and overhead. Evaluations are performed on a simulated system using the software tool. The tool is used to estimate theoretical lower bound performance. Analysis results are shown to be comparable to the predictions.
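As a rough illustration of the kind of measurements such a tool reports, the sketch below derives throughput and processor utilization from recorded task intervals; the data layout and function names are our assumptions, not the tool's actual interface.

```python
# Hypothetical sketch of throughput/utilization measurement over recorded
# task intervals; the interval format is an assumption, not ATAMM's format.

def throughput(completion_times: list[float], window: float) -> float:
    """Completed task instances per unit time over an observation window."""
    return len(completion_times) / window

def utilization(busy_intervals: list[tuple[float, float]], window: float) -> float:
    """Fraction of the window during which a processor was busy."""
    busy = sum(end - start for start, end in busy_intervals)
    return busy / window

if __name__ == "__main__":
    window = 100.0                       # observe 100 time units
    completions = [12.0, 30.5, 49.0, 71.2, 93.8]
    busy = [(0.0, 12.0), (15.0, 30.5), (40.0, 49.0), (60.0, 71.2), (80.0, 93.8)]
    print(f"throughput  = {throughput(completions, window):.3f} tasks/unit")
    print(f"utilization = {utilization(busy, window):.3f}")
```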
Recommended from our members
An analysis of test data selection criteria using the RELAY model of fault detection
RELAY is a model of faults and failures that defines failure conditions, which describe test data for which execution will guarantee that a fault originates erroneous behavior that also transfers through computations and information flow until a failure is revealed. This model of fault detection provides a framework within which the capabilities of other testing criteria can be evaluated. In this paper, we analyze three test data selection criteria that attempt to detect faults in six fault classes. This analysis shows that none of these criteria is capable of guaranteeing detection for these fault classes, and it points out two major weaknesses of the criteria. The first weakness is that the criteria do not consider the potential unsatisfiability of their rules; each criterion includes rules that are sufficient to cause potential failures for some fault classes, yet when such rules are unsatisfiable, many faults may remain undetected. The second weakness is the failure to integrate their proposed rules; although a criterion may cause a subexpression to take on an erroneous value, no effort is made to guarantee that the intermediate values cause observable, erroneous behavior. This paper shows how the RELAY model overcomes these weaknesses.
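To make the second weakness concrete, the toy example below (our illustration, not from the paper) shows a fault that originates an erroneous intermediate value which never transfers to observable output, so a criterion that only forces the subexpression to go wrong would miss it.

```python
# Toy illustration (not from the paper) of origination without transfer:
# the faulty subexpression produces a wrong intermediate value, but a later
# computation masks it, so no failure is observed at the output.

def classify(x: int) -> str:
    # Intended subexpression: x * 2; faulty version: x + 2.
    faulty = x + 2          # fault ORIGINATES an error whenever x != 2
    return "big" if faulty > 100 else "small"

def classify_correct(x: int) -> str:
    correct = x * 2
    return "big" if correct > 100 else "small"

if __name__ == "__main__":
    # x = 10: intermediates differ (12 vs. 20), so the error originates, but
    # both fall on the same side of the threshold; the error does not
    # TRANSFER to the output and no failure is revealed.
    print(classify(10), classify_correct(10))   # small small  (fault masked)
    # x = 51: intermediates 53 vs. 102 straddle the threshold, so the
    # erroneous value transfers and the failure becomes observable.
    print(classify(51), classify_correct(51))   # small big    (failure revealed)
```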
Testing based on the RELAY model of error detection
RELAY, a model for error detection, defines revealing conditions that guarantee that a fault originates an error during execution and that the error transfers through computations and data flow until it is revealed. This model of error detection provides a fault-based criterion for test data selection. The model is applied by choosing a fault classification, instantiating the conditions for the classes of faults, and applying them to the program being tested. Such an application guarantees the detection of errors caused by any fault of the chosen classes. As a formal model of error detection, RELAY provides the basis for an automated testing tool. This paper presents the concepts behind RELAY, describes why it is better than other fault-based testing criteria, and discusses how RELAY could be used as the foundation for a testing system.
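As an illustration of instantiating a revealing condition for a fault class (our sketch, not the paper's notation): for an arithmetic-operator fault that replaces `a + b` with `a - b`, the origination condition is that the two expressions differ, i.e. (a + b) != (a - b), which simplifies to b != 0; test data must therefore choose b != 0 and then ensure the erroneous value propagates to output.

```python
# Sketch (our notation, not the paper's): the origination condition for an
# operator fault "a + b" vs. "a - b" is (a + b) != (a - b), i.e. b != 0.

def origination_condition(a: int, b: int) -> bool:
    """True iff this input makes the correct and faulty expressions differ."""
    return (a + b) != (a - b)   # algebraically equivalent to b != 0

if __name__ == "__main__":
    print(origination_condition(5, 0))   # False: b == 0 cannot reveal the fault
    print(origination_condition(5, 3))   # True:  b != 0 originates an error
```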
Quality measures for ETL processes: from goals to implementation
Extraction transformation loading (ETL) processes play an increasingly important role in the support of modern business operations. These business processes are centred around artifacts with high variability and diverse lifecycles, which correspond to key business entities. The apparent complexity of these activities has been examined through the prism of business process management, mainly focusing on functional requirements and performance optimization. However, the quality dimension has not yet been thoroughly investigated, and a more human-centric approach is needed to bring these processes closer to business users' requirements. In this paper, we take a first step in this direction by defining a sound model for ETL process quality characteristics and quantitative measures for each characteristic, based on the existing literature. Our model shows dependencies among quality characteristics and can provide the basis for subsequent analysis using goal modeling techniques. We showcase the use of goal modeling for ETL process design through a use case, where we employ a goal model that includes quantitative components (i.e., indicators) for the evaluation and analysis of alternative design decisions.
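As a rough sketch of pairing a quality characteristic with a quantitative indicator in the spirit the paper describes (the names and the completeness formula are our assumptions, not the paper's model), the example below measures data completeness for a load step against a goal-model target.

```python
# Hypothetical sketch: one quality characteristic ("completeness") paired
# with a quantitative indicator; names and formula are our assumptions,
# not the paper's model.

from dataclasses import dataclass

@dataclass
class QualityIndicator:
    characteristic: str     # e.g., "completeness", "freshness", "reliability"
    target: float           # goal-model target value for the indicator

    def evaluate(self, measured: float) -> bool:
        """True iff the measured value satisfies the goal's target."""
        return measured >= self.target

def completeness(rows_loaded: int, rows_extracted: int) -> float:
    """Fraction of extracted rows that survived the ETL pipeline."""
    return rows_loaded / rows_extracted if rows_extracted else 1.0

if __name__ == "__main__":
    indicator = QualityIndicator("completeness", target=0.99)
    measured = completeness(rows_loaded=9_940, rows_extracted=10_000)
    print(f"completeness={measured:.3f} satisfied={indicator.evaluate(measured)}")
```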
Artificial table testing dynamically adaptive systems
Dynamically Adaptive Systems (DAS) are systems that modify their behavior and structure in response to changes in their surrounding environment. Critical mission systems increasingly incorporate adaptation and response to the environment; examples include disaster relief and space exploration systems. These systems can be decomposed into two parts: the adaptation policy that specifies how the system must react to environmental changes, and the set of possible variants to which the system can be reconfigured. A major challenge in testing these systems is the combinatorial explosion of variants and environment conditions to which the system must react. In this paper we focus on testing the adaptation policy and propose a strategy for selecting environmental variations that can reveal faults in the policy. Artificial Shaking Table Testing (ASTT) is a strategy inspired by shaking table testing (STT), a technique widely used in civil engineering to evaluate a building's structural resistance to seismic events. ASTT makes use of artificial earthquakes that simulate violent changes in the environmental conditions and stress the system's adaptation capability. We model the generation of artificial earthquakes as a search problem in which the goal is to optimize different types of environmental variations.
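As a minimal sketch of framing artificial-earthquake generation as a search problem (the environment encoding, fitness function, and mutation operator are our assumptions, not the paper's algorithm), a simple hill climber could evolve an environment-change sequence toward maximally violent variation:

```python
# Minimal sketch (our assumptions, not the paper's algorithm): evolve a
# sequence of environment values so that consecutive changes are as violent
# as possible -- a stand-in for one "type of environmental variation".

import random

def fitness(sequence: list[float]) -> float:
    """Total magnitude of change between consecutive environment states."""
    return sum(abs(b - a) for a, b in zip(sequence, sequence[1:]))

def mutate(sequence: list[float]) -> list[float]:
    """Perturb one randomly chosen environment value, clamped to [0, 1]."""
    child = list(sequence)
    i = random.randrange(len(child))
    child[i] = min(1.0, max(0.0, child[i] + random.uniform(-0.5, 0.5)))
    return child

def hill_climb(length: int = 10, iterations: int = 5_000) -> list[float]:
    """Keep a mutation only when it increases the fitness."""
    current = [random.random() for _ in range(length)]
    for _ in range(iterations):
        candidate = mutate(current)
        if fitness(candidate) > fitness(current):
            current = candidate
    return current

if __name__ == "__main__":
    random.seed(42)
    quake = hill_climb()
    print([round(v, 2) for v in quake], f"fitness={fitness(quake):.2f}")
```

The resulting sequence alternates between extreme environment states, which is the stress pattern an artificial earthquake is meant to impose on the adaptation policy.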