
    Optimal test case selection for multi-component software system

    The omnipresence of software has forced the industry to produce efficient software in a short time. These requirements can be met through code reusability and software testing. Code reusability is achieved by developing software as components/modules rather than as a single block. Software teams are growing larger to meet the massive scale of requirements, and large teams work more easily when software is developed in a modular fashion. Software that crashes often is of little use; testing makes software more reliable. Modularity and reliability are therefore both essential. Testing is usually carried out using test cases that target a class of software faults or a specific module, and the use of different test cases has an idiosyncratic effect on the reliability of the software system. The proposed research develops a model to determine the optimal test case selection policy for a modular software system exercised with specific test cases in a stipulated testing time. The model captures the failure behavior of each component with a conditional NHPP (non-homogeneous Poisson process) and the interactions of the components with a CTMC (continuous-time Markov chain); the initial number of bugs and the bug detection rate follow known distributions. Dynamic programming is used to determine the optimal test case policy, and the complete model is simulated in MATLAB. The underlying Markov decision process is computationally intensive, but the implementation of the algorithm is carefully optimized to eliminate repeated calculations, saving roughly 25-40% in processing time across different variations of the problem.
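
    As a rough illustration of the kind of optimization this abstract describes, the sketch below allocates discrete units of testing time across components so as to maximize the expected number of bugs detected, assuming each component's failure behavior follows a Goel-Okumoto-style NHPP mean value function m(t) = a(1 - e^(-bt)). The CTMC component interactions and the distributions over the initial bug count and detection rate are simplified to point estimates, so this is only a hedged sketch of the dynamic-programming idea, not the paper's model.

        import math

        def expected_detected(a, b, t):
            """Expected bugs detected in a component after t time units (NHPP mean value)."""
            return a * (1.0 - math.exp(-b * t))

        def optimal_allocation(components, budget):
            """Dynamic program over (component, remaining time) maximizing expected detections."""
            n = len(components)
            best = [[0.0] * (budget + 1) for _ in range(n + 1)]   # best[i][r]: components i.. with r units left
            choice = [[0] * (budget + 1) for _ in range(n)]
            for i in range(n - 1, -1, -1):
                a, b = components[i]
                for r in range(budget + 1):
                    for t in range(r + 1):
                        val = expected_detected(a, b, t) + best[i + 1][r - t]
                        if val > best[i][r]:
                            best[i][r], choice[i][r] = val, t
            alloc, r = [], budget                                 # recover the optimal allocation
            for i in range(n):
                alloc.append(choice[i][r])
                r -= choice[i][r]
            return alloc, best[0][budget]

        # Illustrative components given as (initial bug estimate, detection rate).
        print(optimal_allocation([(30, 0.10), (12, 0.35), (20, 0.05)], budget=10))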

    Pilot interaction with automated airborne decision making systems

    Two project areas were pursued: the intelligent cockpit and human problem solving. The first area involves an investigation of the use of advanced software engineering methods to aid aircraft crews in procedure selection and execution. The second area is focused on human problem solving in dynamic environments, particularly in terms of the identification of rule-based models and alternative approaches to training and aiding. Progress in each area is discussed.

    Plan stability: replanning versus plan repair

    The ultimate objective in planning is to construct plans for execution. However, when a plan is executed in a real environment it can encounter differences between the expected and actual context of execution. These differences can manifest as divergences between the expected and observed states of the world, or as a change in the goals to be achieved by the plan. In both cases, the old plan must be replaced with a new one. In replacing the plan an important consideration is plan stability. We compare two alternative strategies for achieving the stable repair of a plan: one is simply to replan from scratch and the other is to adapt the existing plan to the new context. We present arguments to support the claim that plan stability is a valuable property. We then propose an implementation, based on LPG, of a plan repair strategy that adapts a plan to its new context. We demonstrate empirically that our plan repair strategy achieves more stability than replanning and can produce repaired plans more efficiently than replanning.
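
    One simple way to make the notion of plan stability concrete is to count how many actions differ between the original plan and its replacement: the fewer actions removed or added, the more stable the repair. The sketch below treats plans as multisets of ground actions; this is an illustrative assumption, and the paper's exact metric may differ.

        from collections import Counter

        def plan_difference(original, repaired):
            """Actions removed from the original plan plus actions newly added to the repaired one."""
            o, r = Counter(original), Counter(repaired)
            return sum((o - r).values()) + sum((r - o).values())

        original = ["load(p1,truck)", "drive(truck,a,b)", "unload(p1,truck)"]
        repaired = ["load(p1,truck)", "drive(truck,a,c)", "drive(truck,c,b)", "unload(p1,truck)"]
        print(plan_difference(original, repaired))  # 3: one action replaced by two new ones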

    Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems). However, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar and can be contrasted with those systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete event techniques.

    NASA/NBS (National Aeronautics and Space Administration/National Bureau of Standards) standard reference model for telerobot control system architecture (NASREM)

    The document describes the NASA Standard Reference Model (NASREM) Architecture for the Space Station Telerobot Control System. It defines the functional requirements and high-level specifications of the control system for the NASA Space Station, serving as a reference document for the functional specification, and a guideline for the development of the control system architecture, of the IOC Flight Telerobot Servicer. The NASREM telerobot control system architecture defines a set of standard modules and interfaces which facilitates software design, development, validation, and test, and makes possible the integration of telerobotics software from a wide variety of sources. Standard interfaces also provide the software hooks necessary to incrementally upgrade future Flight Telerobot Systems as new capabilities develop in computer science, robotics, and autonomous system control.
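
    Purely as a hypothetical illustration of what a standard module with standard interfaces in a hierarchical telerobot control architecture might look like, the sketch below defines a uniform control-level interface so that implementations from different sources could be swapped or upgraded. The class and level names are illustrative assumptions, not the NASREM specification itself.

        from abc import ABC, abstractmethod

        class ControlLevel(ABC):
            """One level of a control hierarchy exposing a uniform interface."""

            def __init__(self, name, subordinate=None):
                self.name = name
                self.subordinate = subordinate          # next level down, if any

            @abstractmethod
            def decompose(self, command):
                """Break a command from the level above into subcommands."""

            def execute(self, command):
                for sub in self.decompose(command):
                    if self.subordinate is not None:
                        self.subordinate.execute(sub)
                    else:
                        print(f"[{self.name}] actuate: {sub}")

        class TaskLevel(ControlLevel):
            def decompose(self, command):
                return [f"move-to({command})", f"grasp({command})"]

        class ServoLevel(ControlLevel):
            def decompose(self, command):
                return [command]                        # lowest level drives the hardware directly

        TaskLevel("task", subordinate=ServoLevel("servo")).execute("ORU-handle")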

    Do We Really Sample Right In Model-Based Diagnosis?

    Statistical samples, in order to be representative, have to be drawn from a population in a random and unbiased way. Nevertheless, it is common practice in the field of model-based diagnosis to make estimations from (biased) best-first samples. One example is the computation of a few most probable possible fault explanations for a defective system and the use of these to assess which aspect of the system, if measured, would bring the highest information gain. In this work, we scrutinize whether these statistically not well-founded conventions, which both diagnosis researchers and practitioners have adhered to for decades, are indeed reasonable. To this end, we empirically analyze various sampling methods that generate fault explanations. We study the representativeness of the produced samples in terms of their estimations about fault explanations and how well they guide diagnostic decisions, and we investigate the impact of sample size, the optimal trade-off between sampling efficiency and effectiveness, and how approximate sampling techniques compare to exact ones.
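
    The measurement-selection step the abstract refers to can be sketched as a one-step lookahead over a sample of fault explanations: each sampled diagnosis carries a probability and predicts the value at every candidate probe point, and the probe whose predicted outcome is most uncertain under the sample promises the highest expected information gain. The functions and data below are illustrative assumptions, not the paper's experimental setup.

        import math

        def outcome_entropy(diagnoses, probe):
            """Entropy (in bits) of the predicted outcome of `probe` under the sampled diagnoses."""
            total = sum(p for p, _ in diagnoses)
            p_true = sum(p for p, predicted in diagnoses if predicted[probe]) / total
            if p_true in (0.0, 1.0):
                return 0.0
            return -(p_true * math.log2(p_true) + (1 - p_true) * math.log2(1 - p_true))

        def best_probe(diagnoses, probes):
            """Pick the probe point whose outcome is most uncertain, i.e. most informative."""
            return max(probes, key=lambda probe: outcome_entropy(diagnoses, probe))

        # Three sampled fault explanations: (probability, predicted value at each probe point).
        sample = [(0.5, {"x": True, "y": True}),
                  (0.3, {"x": False, "y": True}),
                  (0.2, {"x": False, "y": True})]
        print(best_probe(sample, ["x", "y"]))  # "x": the sampled diagnoses disagree about it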

    On-line Point-of-Click Web Usability Mining with PopEval_MB, WebEval_MB and the C-Assure Methodology

    In this paper we describe a new tool for planning, creating and conducting wide-ranging usability data acquisition throughout the system life cycle from inception to replacement. This aids sub-culturally targetable, on-line mass-consultation applicable to usability studies and change management. Usability and web data intelligence mining is made possible by the system capturing data from users, asynchronously on a distributed network, with minimised annoyance and judgement distortion effects. Previous research has shown that human judgements, particularly retrospective as distinct from real-time evaluations of painful experiences, are fundamentally flawed when superseded by other experiences. In evaluation studies, as in any knowledge elicitation exercise (whether for requirements specification, expert systems prototyping or IS impact analysis), it is vital that unarticulated or poorly articulated data is captured as completely as possible whilst minimising distortion bias effects and annoyance of the user. A monolithic data elicitation method often proves inadequate for requirements acquisition or usability data, whereas a dynamic planning framework can provide the execution monitoring and contextually-aware control of the enquiry process, as prescribed in Llemex_rb (Badii 1986/87/88). Such meta-level reasoning needs meta-methodological knowledge of the situated applicability of methods so as to choose suitable techniques to capture user data. Such data can range from simple static IDs to highly dynamic data on underlying patterns of multi-modal user behaviour, with various life-cycle models, sub-languages, semiotics and dispositional attitudes (Badii 1986, 1999a,b,c). PopEval_MB, as an unobtrusive, on-line, mass-personalisation tool, replaces the traditional paper-based survey methods, which suffer from problems of usability data distortion and acquisition management. It serves an enquiry methodology which is contextually sensitive to the capture problems of the particular data type(s) being targeted at any time, thus guiding the selection of suitable elicitation techniques along the way (Badii 1986/96). The enquiry methodology itself is a sub-system within a meta-methodology of frameworks and tools for IS/IT impact analysis and IS cultural compatibility management. This meta-methodology is referred to as Cultural Accommodation Analysis with Sensitised Systems for User/Usability Relationships and Reachabilities Evaluation (C-Assure), under a research programme directed by P3i at UCN (Badii et al 1996, 1999a,b,c). This paper describes the motivation for C-Assure in researching applicable meta-models that minimise the risks in IS development and adoption. It gives an overview of the tools that C-Assure seeks to incorporate into an integrated IS Planning, Development and Diffusion Support Environment (IPDSE), of which the tools for usability evaluation, mass-personalisation and web intelligence, i.e. PopEval_MB and Web_Eval_AB, are the focus of this work. The paper explains the theoretical foundations and the hypotheses to be tested in terms of the human Judgement and Decision Making and the Pleasure and Pain Recall, or (J/DM)-PPR, theoretic effects which also motivated the design of PopEval_MB. Our results support the recent findings from cognitive psychology studies in applying the research on Pleasure and Pain Recall (PPR) to Human Computer Interaction (HCI).
In this context we have validated the influence of factors modifying J/DM, specifically the effects of 'duration neglect' and 'peak-and-end evaluations'. Thus the empirical studies performed here have provided the first supportive evidence for the J/DM and PPR results from earlier research in psychology as applied to the field of software engineering, in particular to Web Mediated Systems (WMS) for on-line shopping as an exemplar. We maintain that more expressive causal models of usability are needed for the increasingly volatile user environments of emergent interactive systems such as WMS. We propose a new definition and a process model for dynamic usability, distinguishing instantaneous and steady state usability. The results indicate that PopEval_MB and WebEval_AB deliver their intended functionality with minimal user annoyance and distortion bias. We show how PopEval_MB can be used to by-pass, interpret and exploit natural J/DM-PPR biases; to enable the elicitation of least-distorted usability data intelligence; and to reveal the precise root causes of, and the routes to, perceived user (dis)satisfaction. This study also confirms the validity of our new dynamic usability process model, which exploits the natural J/DM-PPR saliency-recency effects and is thus more relevant to the emergent click-happy WMS user environments. The results can be exploited in interpretivist-iterative approaches to IS deployment, diffusion and change management, enterprise health analysis, marketing, the design of WebAds and culturally inter-operable systems generally.
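
    The 'peak-and-end' and 'duration neglect' effects referred to above can be illustrated with the textbook approximation from the psychology literature: a retrospective judgement of an episode is driven largely by its most intense moment and its final moment, while its duration and running average carry little weight. The snippet below contrasts that remembered score with a real-time average; it is a generic illustration, not PopEval_MB's own model.

        def peak_end_score(momentary_ratings):
            """Approximate remembered (retrospective) rating: mean of the peak and the final moment."""
            return (max(momentary_ratings) + momentary_ratings[-1]) / 2.0

        def mean_score(momentary_ratings):
            """What an unbiased real-time average of the same episode would report."""
            return sum(momentary_ratings) / len(momentary_ratings)

        # Momentary annoyance ratings (0-10) sampled during a hypothetical web-shopping session.
        session = [2, 3, 8, 4, 1]
        print(mean_score(session), peak_end_score(session))  # 3.6 vs 4.5: memory overweights the peak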

    There is No Free Lunch: Tradeoffs in the Utility of Learned Knowledge

    With the recent introduction of learning in integrated systems, there is a need to measure the utility of learned knowledge for these more complex systems. A difficulty arises when there are multiple, possibly conflicting, utility metrics to be measured. In this paper, we present schemes which trade off conflicting utility metrics in order to achieve some global performance objectives. In particular, we present a case study of a multi-strategy machine learning system, mutual theory refinement, which refines world models for an integrated reactive system, the Entropy Reduction Engine. We provide experimental results on the utility of learned knowledge in terms of two conflicting metrics: improved accuracy and degraded efficiency. We then demonstrate two ways to trade off these metrics. In each, some learned knowledge is either approximated or dynamically 'forgotten' so as to improve efficiency while degrading accuracy only slightly.
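
    A minimal sketch of the kind of trade-off this abstract describes: each piece of learned knowledge brings a gain in one utility metric (accuracy) and a cost in another (efficiency), and a global objective decides whether to keep it, approximate it, or 'forget' it. The weighting scheme and thresholds below are illustrative assumptions, not the paper's actual policy.

        def global_utility(accuracy_gain, efficiency_loss, w_acc=0.7, w_eff=0.3):
            """Weighted combination of the two conflicting metrics (both assumed in [0, 1])."""
            return w_acc * accuracy_gain - w_eff * efficiency_loss

        def triage(rule):
            """Keep, approximate, or 'forget' a learned rule according to its net utility."""
            u = global_utility(rule["accuracy_gain"], rule["efficiency_loss"])
            if u > 0.2:
                return "keep"
            if u > 0.0:
                return "approximate"   # a cheaper, slightly less accurate form
            return "forget"            # drop it to recover efficiency

        print(triage({"accuracy_gain": 0.40, "efficiency_loss": 0.10}))  # keep
        print(triage({"accuracy_gain": 0.15, "efficiency_loss": 0.20}))  # approximate
        print(triage({"accuracy_gain": 0.05, "efficiency_loss": 0.30}))  # forget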