
    Role of Testers in Selecting an Enterprise Architecture Solution: An Exploratory Study

    Software testing groups are playing an increasingly prominent role in both the software development lifecycle (SDLC) and in the long-term planning of technology architectures that support large-scale organizational information systems. The advent of integrated enterprise architectures (EA) provides new opportunities for testing groups to play a proactive role in building consistent and testable guidelines for improving enterprise-wide software quality. Given that testing groups historically have not been invited to participate in EA decisions, there is little academic literature or industry best practice on approaches that testers might use to guide their participation. This article draws lessons from the experience of a Fortune 100 corporation whose testing group used theoretical notions of “testability” to guide its involvement in an EA acquisition process. It describes how the group operationalized testability criteria, incorporating controllability, observability, and simplicity, into various stages of the process, and illustrates the benefits and challenges of taking such an approach.
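    The three criteria named above lend themselves to a simple weighted evaluation. A minimal sketch of how testability ratings might be rolled into a comparative score for candidate EA solutions; the weights, ratings, and vendor names are hypothetical illustrations, not figures from the study:

```python
# Hypothetical weighted scoring of candidate EA solutions on the three
# testability criteria from the abstract. Weights and ratings are invented.

CRITERIA_WEIGHTS = {
    "controllability": 0.4,  # can testers drive the system into a desired state?
    "observability": 0.4,    # can testers inspect intermediate outputs and state?
    "simplicity": 0.2,       # how easy is the architecture to reason about?
}

def testability_score(ratings: dict[str, float]) -> float:
    """Weighted sum of 0-10 ratings, one per testability criterion."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

candidates = {
    "Vendor A": {"controllability": 7, "observability": 8, "simplicity": 5},
    "Vendor B": {"controllability": 9, "observability": 6, "simplicity": 7},
}

for name, ratings in sorted(candidates.items(),
                            key=lambda kv: testability_score(kv[1]),
                            reverse=True):
    print(f"{name}: {testability_score(ratings):.1f}")
```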

    Provenance-based trust for grid computing: Position Paper

    Current evolutions of Internet technology such as Web Services, ebXML, peer-to-peer and Grid computing all point to the development of large-scale open networks of diverse computing systems interacting with one another to perform tasks. Grid systems (and Web Services) are exemplary in this respect and are perhaps some of the first large-scale open computing systems to see widespread use, making them an important testing ground for the trust-management problems that are likely to arise. From this perspective, today's grid architectures suffer from limitations such as the lack of a mechanism to trace results and the lack of infrastructure to build up trust networks. These are important concerns in open grids, in which "community resources" are owned and managed by multiple stakeholders and are dynamically organised in virtual organisations. Provenance enables users to trace how a particular result was arrived at by identifying the individual services, and the aggregation of services, that produced it. Against this background, we present a research agenda to design, conceive and implement an industrial-strength open provenance architecture for grid systems. We motivate its use with three complex grid applications: aerospace engineering, organ transplant management and bioinformatics. Industrial-strength provenance support includes a scalable and secure architecture, an open proposal for standardising the protocols and data structures, a set of tools for configuring and using the provenance architecture, an open-source reference implementation, and a deployment and validation in an industrial context. Such facilities will enrich grid capabilities with new functionality required for solving complex problems, such as provenance data that provides complete audit trails of process execution and supports third-party analysis and auditing. As a result, we anticipate a larger uptake of grid technology, since it will offer users unprecedented possibilities and a competitive edge.
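    At its core, provenance of this kind is a directed graph from each result back through the services and inputs that produced it. A minimal sketch of such a record store and a trace query; the field names and service names are illustrative and do not follow any particular provenance standard:

```python
# Minimal provenance store: each record links one output to the service
# that produced it and the input artifacts it consumed. Names are invented.

from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    output_id: str
    service: str                                      # producing service
    inputs: list[str] = field(default_factory=list)   # consumed artifact ids

store: dict[str, ProvenanceRecord] = {}

def record(output_id: str, service: str, inputs: list[str]) -> None:
    store[output_id] = ProvenanceRecord(output_id, service, list(inputs))

def trace(output_id: str, depth: int = 0) -> None:
    """Walk the provenance graph back from a result to its origins."""
    rec = store.get(output_id)
    if rec is None:
        print("  " * depth + f"{output_id} (primary input)")
        return
    print("  " * depth + f"{output_id} <- {rec.service}")
    for inp in rec.inputs:
        trace(inp, depth + 1)

# Hypothetical aerospace-style pipeline: geometry -> mesh -> CFD result.
record("mesh-1", "mesh-generator", ["geometry-0"])
record("result-1", "cfd-solver", ["mesh-1", "params-0"])
trace("result-1")
```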

    Neural Architectures for Control

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms: the Macintosh, the IBM PC, and the Sun workstation. All algorithm development was done in the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real time on an MS-DOS 386 PC. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog-valued obstacle fields. The method constructs a coarse-resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine-scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back-propagation through time, and adaptive critic designs.
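    What makes CMAC suitable for real-time training is its structure: the input is quantized by several offset tilings, the output is the sum of the weights of the active cells, and a training step touches only those few weights. A minimal one-dimensional sketch under assumed parameters (tiling count, cell count, learning rate, and the sine target are all illustrative):

```python
# Minimal 1-D CMAC function approximator: several offset tilings quantize
# the input; prediction sums the active weights; training spreads the
# error equally across the active cells (delta rule).

import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, n_cells=32, lo=0.0, hi=1.0, lr=0.1):
        self.n_tilings, self.lr = n_tilings, lr
        self.lo, self.width = lo, (hi - lo) / (n_cells - 1)
        self.w = np.zeros((n_tilings, n_cells + 1))  # +1 cell as margin

    def _active(self, x):
        # Each tiling is shifted by a fraction of one cell width.
        for t in range(self.n_tilings):
            offset = t * self.width / self.n_tilings
            yield t, int((x - self.lo + offset) / self.width)

    def predict(self, x):
        return sum(self.w[t, c] for t, c in self._active(x))

    def train(self, x, target):
        err = target - self.predict(x)
        for t, c in self._active(x):
            self.w[t, c] += self.lr * err / self.n_tilings

cmac = CMAC()
rng = np.random.default_rng(0)
for _ in range(2000):
    x = rng.random()
    cmac.train(x, np.sin(2 * np.pi * x))
print(cmac.predict(0.25))  # roughly sin(pi/2) = 1.0 after training
```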

    A Model-Based Approach for Testing Large Scale Systems

    This document summarizes the author's experience over six years of testing large-scale systems. We outline that experience in four points. First, we present a methodology for testing large-scale systems. The methodology takes into account three dimensions of these systems: functionality, scalability, and volatility. It proposes executing tests under different workloads, from a small-scale static system up to a large-scale dynamic system. Experiments show that varying these three dimensions improves code coverage, and thus the confidence in the tests. Second, we introduce a distributed test architecture that uses both a broadcast protocol, to send messages from the test controller to the testers, and a convergecast protocol, to send messages from the testers back to the test controller. Experiments show that this architecture is more scalable than traditional centralized architectures when testing systems with more than 1,000 nodes. Third, we present an approach for using models as dynamic oracles for testing global properties of large-scale systems. This approach focuses on properties that are global, liveness-related, observable, and controllable. We propose to efficiently keep a global model of the system up to date during its execution; this model is instantiated and evolved at runtime by monitoring the corresponding distributed system, and serves as the oracle for the distributed tests. We illustrate this approach by testing the reliability of two routing algorithms under churn; the results reveal common flaws in both algorithms. Finally, we present a model-driven approach to software artifact deployment. We treat software artifacts as a product line, using feature models to represent their configurations and model-based techniques to handle automatic artifact deployment and reconfiguration. Experiments show that this approach reduces network traffic when deploying software in cloud environments.
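    The dynamic-oracle idea in the third point can be made concrete: monitoring events from the running system keep a global model current, and the oracle is a global property evaluated over that model. A minimal sketch, with a simulated event stream and an illustrative connectivity property standing in for the routing properties tested in the thesis:

```python
# Minimal sketch of a global model kept in sync with a (here simulated)
# distributed system and used as a test oracle. The property checked,
# "every live node has at least one live neighbor", is illustrative.

class GlobalModel:
    def __init__(self):
        self.neighbors: dict[str, set[str]] = {}

    # Monitoring events from the system update the model at runtime.
    def on_join(self, node: str, peers: list[str]) -> None:
        self.neighbors[node] = set(peers)
        for p in peers:
            self.neighbors.setdefault(p, set()).add(node)

    def on_leave(self, node: str) -> None:
        self.neighbors.pop(node, None)
        for peers in self.neighbors.values():
            peers.discard(node)

    # The oracle: a global property evaluated over the model.
    def check_connected(self) -> bool:
        return all(peers & self.neighbors.keys()
                   for peers in self.neighbors.values())

model = GlobalModel()
model.on_join("n1", [])
model.on_join("n2", ["n1"])
model.on_join("n3", ["n1", "n2"])
model.on_leave("n1")            # churn event
print(model.check_connected())  # oracle verdict after churn: True
```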

    Multitask Learning Deep Neural Networks to Combine Revealed and Stated Preference Data

    How to combine revealed preference (RP) and stated preference (SP) data to analyze travel behavior is an enduring question. This study presents a framework of multitask learning deep neural networks (MTLDNNs) for this question, and demonstrates that MTLDNNs are more generic than the traditional nested logit (NL) method owing to their capacity for automatic feature learning and soft constraints. About 1,500 MTLDNN models are designed and applied to survey data collected in Singapore, covering the RP of four current travel modes and an SP that adds autonomous vehicles (AV) as a new travel mode. We found that MTLDNNs consistently outperform six benchmark models, and in particular the classical NL models, by about 5% prediction accuracy on both the RP and SP datasets. This performance improvement is mainly attributable to the soft constraints specific to MTLDNNs, including their architectural design and regularization methods, rather than to the generic capacity for automatic feature learning endowed by a standard feedforward DNN architecture. Besides prediction, MTLDNNs are also interpretable. The empirical results show that AV is mainly a substitute for driving, and that AV alternative-specific variables matter more than socio-economic variables in determining AV adoption. Overall, this study introduces a new MTLDNN framework for combining RP and SP and demonstrates its theoretical flexibility and empirical power for prediction and interpretation. Future studies can design new MTLDNN architectures that reflect the specifics of RP and SP, and extend this work to other behavioral analyses.
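    The core architectural idea is a shared trunk with task-specific output heads, plus a regularizer that softly ties the heads together. A minimal PyTorch sketch of that pattern; the layer sizes, penalty weight, and the particular form of the soft constraint (an L2 penalty on the difference between the heads' shared rows) are illustrative assumptions, not the paper's exact specification:

```python
# Multitask net for RP/SP: shared trunk, two heads (RP: 4 modes;
# SP: the same 4 modes plus AV), and a soft constraint between heads.

import torch
import torch.nn as nn

class MTLDNN(nn.Module):
    def __init__(self, n_features=10, n_modes_rp=4, n_modes_sp=5, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.head_rp = nn.Linear(hidden, n_modes_rp)
        self.head_sp = nn.Linear(hidden, n_modes_sp)

    def forward(self, x, task):
        h = self.trunk(x)
        return self.head_rp(h) if task == "rp" else self.head_sp(h)

def soft_constraint(model, lam=1e-2):
    # Assumes the first RP-many SP classes correspond to the RP modes;
    # penalize divergence between the heads' shared rows.
    shared = model.head_rp.out_features
    diff = model.head_rp.weight - model.head_sp.weight[:shared]
    return lam * diff.pow(2).sum()

model = MTLDNN()
ce = nn.CrossEntropyLoss()
x_rp, y_rp = torch.randn(32, 10), torch.randint(0, 4, (32,))  # toy batches
x_sp, y_sp = torch.randn(32, 10), torch.randint(0, 5, (32,))
loss = (ce(model(x_rp, "rp"), y_rp)
        + ce(model(x_sp, "sp"), y_sp)
        + soft_constraint(model))
loss.backward()
```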