
    Thematic Series on Verification and Composition for the Internet of Services and Things

    The Internet of Services and Things is characterized as a distributed computing environment that will be populated by a large number of software services and things. Within this context, software systems will increasingly be built by reusing and composing together software services and things distributed over the Internet. This calls for new integration paradigms and patterns, formal composition theories, integration architectures, as well as flexible and dynamic composition and verification mechanisms. In particular, service- and thing-based systems pose new challenges for software composition and verification techniques, due to changing requirements, emerging behaviors, uncertainty, and dynamicity.

    Planning and Resource Management in an Intelligent Automated Power Management System

    Power system management is the process of guiding a power system towards the objective of continuous supply of electrical power to a set of loads. Spacecraft power system management requires planning and scheduling, since electrical power is a scarce resource in space. The automation of power system management for future spacecraft has been recognized as an important R&D goal. Several automation technologies have emerged, including the use of expert systems to automate human problem-solving capabilities, such as rule-based expert systems for fault diagnosis and load scheduling. It is questionable whether current-generation expert system technology is applicable to power system management in space. The objective of ADEPTS (ADvanced Electrical Power management Techniques for Space systems) is to study new techniques for power management automation. These techniques involve integrating current expert system technology with parallel and distributed computing, as well as with a distributed, object-oriented approach to software design. The focus of the current study is the integration of new procedures for automatically planning and scheduling loads with procedures for performing fault diagnosis and control. The objective is the concurrent execution of both sets of tasks on separate transputer processors, thus adding parallelism to the overall management process.
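
    To make the planning-and-scheduling side concrete, the sketch below shows one simple way a load scheduler might admit loads against a fixed power budget. It is an illustration only; the load names, priorities, and budget are invented for the example and are not taken from the ADEPTS study.

        # A hypothetical illustration of priority-based load scheduling under a
        # fixed power budget; the loads, priorities, and budget are invented for
        # this sketch and are not taken from the ADEPTS study.
        from dataclasses import dataclass

        @dataclass
        class Load:
            name: str
            watts: float
            priority: int  # lower number = more critical

        def schedule(loads, budget_watts):
            """Greedily admit the most critical loads that fit the power budget."""
            admitted, remaining = [], budget_watts
            for load in sorted(loads, key=lambda l: l.priority):
                if load.watts <= remaining:
                    admitted.append(load.name)
                    remaining -= load.watts
            return admitted, remaining

        loads = [Load("life_support", 400, 0), Load("comms", 150, 1),
                 Load("science_payload", 300, 2), Load("heater_bank", 250, 3)]
        print(schedule(loads, budget_watts=800))
        # life_support, comms, and heater_bank fit; science_payload is deferred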

    A Case Study in Testing Distributed Systems

    This paper describes a case study in the testing of distributed systems. The software under test is a middleware system developed in Java. The full test lifecycle is examined, including unit testing, integration testing, and system testing. Where possible, traditional tools and techniques are used to carry out the testing. One aspect where this is not possible is the testing of low-level concurrency, which is often overlooked when testing commercial distributed systems, since the middleware or application server is already developed by a third party and is assumed to operate correctly. This paper examines testing the middleware system itself, and therefore a method for testing the concurrency properties of the system is used. The testing revealed a number of faults and design weaknesses, and showed that, with some adaptation, traditional tools and techniques go a long way in the testing of distributed applications.
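
    The abstract does not spell out the concurrency-testing method used, so as a hedged illustration of the general idea only, the sketch below stress-tests a shared component from many threads and asserts that no updates are lost; the Counter class and thread counts are invented for the example.

        # Illustrative concurrency stress test, not the paper's actual harness:
        # several threads update a shared, lock-protected counter and the test
        # asserts that no updates were lost.
        import threading
        import unittest

        class Counter:
            def __init__(self):
                self._value = 0
                self._lock = threading.Lock()

            def increment(self):
                with self._lock:
                    self._value += 1

            @property
            def value(self):
                return self._value

        class ConcurrencyTest(unittest.TestCase):
            def test_no_lost_updates(self):
                counter, n_threads, n_ops = Counter(), 8, 10_000
                def worker():
                    for _ in range(n_ops):
                        counter.increment()
                threads = [threading.Thread(target=worker) for _ in range(n_threads)]
                for t in threads:
                    t.start()
                for t in threads:
                    t.join()
                self.assertEqual(counter.value, n_threads * n_ops)

        if __name__ == "__main__":
            unittest.main()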

    EU DataGRID testbed management and support at CERN

    In this paper we report on the first two years of running the CERN testbed site for the EU DataGRID project. The site consists of about 120 dual-processor PCs distributed over several testbeds used for different purposes: software development, system integration, and application tests. Activities at the site included test productions of Monte Carlo data for LHC experiments, tutorials and demonstrations of GRID technologies, and support for individual users' analyses. This paper focuses on node installation and configuration techniques, service management, and user support in a gridified environment, and includes considerations on scalability and security issues as well as comparisons with "traditional" production systems, as seen from the administrator's point of view.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003; 7 pages, LaTeX. PSN THCT00
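
    As a loose illustration of what profile-driven node configuration can look like, and not of the CERN testbed's actual tooling, the sketch below turns an invented node profile into a list of configuration steps.

        # Hypothetical sketch of profile-driven node configuration; the profile
        # format, package names, and service names are invented and are not the
        # CERN testbed's actual tooling.
        import json

        NODE_PROFILE = json.loads("""
        {"role": "worker",
         "packages": ["grid-middleware", "batch-client"],
         "services": ["monitoring-agent"]}
        """)

        def plan_configuration(profile):
            """Return the steps needed to bring a node in line with its profile."""
            steps = [f"install {pkg}" for pkg in profile["packages"]]
            steps += [f"enable {svc}" for svc in profile["services"]]
            return steps

        for step in plan_configuration(NODE_PROFILE):
            print(step)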

    Advanced telemetry systems for payloads. Technology needs, objectives and issues

    The current trends in advanced payload telemetry are new developments in advanced modulation/coding, the application of intelligent techniques, data distribution and processing, and advanced signal processing methodologies. Concerted effort will be required to design ultra-reliable, man-rated software to cope with these applications. The intelligence embedded and distributed throughout the various segments of the telemetry system will need to be overridden by an operator in life-threatening situations, making this a real-time integration issue. Suitable MIL standards on physical interfaces and protocols will be adopted to suit the payload telemetry system. New technologies and techniques will be developed for fast retrieval of mass data. These technology issues are currently being addressed to provide more efficient, reliable, and reconfigurable systems. There is a need, however, to change the operations culture: NASA's current role as the leader in developing all new, innovative hardware should be altered to save both time and money. We should use the available hardware and software developed by industry and adopt existing standards rather than inventing our own.

    DAME: A Distributed Web based Framework for Knowledge Discovery in Databases

    Massive data sets explored in many e-science communities, as in the astrophysics case, are gathered by a very large number of techniques and stored in very diversified and often incompatible data repositories. Moreover, we need to integrate services across distributed, heterogeneous, dynamic virtual organizations formed from the different resources within a single enterprise and/or from external resource-sharing and service-provider relationships. The DAME project aims at creating a distributed e-infrastructure that guarantees integrated and asynchronous access to data collected by very different experiments and scientific communities, in order to correlate them and improve their scientific usability. The project consists of a data mining framework with powerful software instruments capable of working on massive data sets, organized following Virtual Observatory standards, in a distributed computing environment. The integration process can be technically challenging because of the need to achieve a specific quality of service when running on top of different native platforms. In these terms, the result of the DAME project effort is a service-oriented architecture that uses appropriate standards and incorporates Cloud/Grid paradigms and Web services, with the integration of interdisciplinary distributed systems within and across organizational domains as its main target.
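
    As a hedged illustration of this service-oriented flavor, and not of DAME's actual interfaces, the sketch below wraps a toy mining routine behind a small JSON-over-HTTP endpoint; the payload format and the routine are invented for the example.

        # A minimal sketch, not DAME's actual API, of wrapping a toy mining
        # routine behind a JSON-over-HTTP endpoint; the payload format and the
        # routine itself are invented for this example.
        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        def run_clustering(values, k=2):
            """Toy placeholder for a real mining algorithm: chunk the sorted input."""
            values = sorted(values)
            size = max(1, len(values) // k)
            return [values[i:i + size] for i in range(0, len(values), size)]

        class MiningHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
                clusters = run_clustering(body["values"], body.get("k", 2))
                payload = json.dumps({"clusters": clusters}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(payload)

        # HTTPServer(("localhost", 8080), MiningHandler).serve_forever()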

    A Distributed Test System Architecture for Open-source IoT Software

    In this paper, we discuss challenges that are specific to the testing of open IoT software systems. The analysis reveals gaps compared to wireless sensor networks as well as embedded software. We propose a testing framework which (a) supports continuous integration techniques, (b) allows project contributors to volunteer hardware and software resources to the test system, and (c) can function as a permanent distributed plugtest for network interoperability testing. The focus of this paper lies in open-source IoT development, but many aspects are also applicable to closed-source projects.
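
    Point (b) can be pictured with a small, hedged sketch: a CI step fans a test suite out to boards volunteered by contributors and collects the results. The board names and the run_job() helper are invented for this illustration and are not part of the proposed framework.

        # Hypothetical sketch of a CI step that fans a test suite out to
        # volunteered boards; the board names and run_job() are invented and are
        # not part of the proposed framework.
        import concurrent.futures

        VOLUNTEERED_BOARDS = ["contributor-a/samr21-xpro", "contributor-b/iotlab-m3"]

        def run_job(board, test_suite):
            """Placeholder for flashing a build and collecting results from one board."""
            print(f"dispatching {test_suite} to {board}")
            return {"board": board, "suite": test_suite, "passed": True}

        def run_distributed_tests(test_suite):
            with concurrent.futures.ThreadPoolExecutor() as pool:
                futures = [pool.submit(run_job, board, test_suite)
                           for board in VOLUNTEERED_BOARDS]
                return [f.result() for f in futures]

        if __name__ == "__main__":
            results = run_distributed_tests("network-interop")
            print("all passed:", all(r["passed"] for r in results))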

    Performance analysis integration in the Uintah software development cycle

    The increasing complexity of high-performance computing environments and programming methodologies presents challenges for empirical performance evaluation. Evolving parallel and distributed systems require performance technology that can be flexibly configured to observe different events and associated performance data of interest. It must also be possible to integrate performance evaluation techniques with the programming paradigms and software engineering methods. This is particularly important for tracking performance on parallel software projects involving many code teams over many stages of development. This paper describes the integration of the TAU and XPARE tools in the Uintah Computational Framework (UCF). Discussed are the use of performance mapping techniques to associate low-level performance data with higher levels of abstraction in UCF, and the use of performance regression testing to provide a historical portfolio of the evolution of application performance. A scalability study shows the benefits of integrating performance technology in building large-scale parallel applications.
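
    The regression-testing idea can be illustrated with a small, hedged sketch: time a stand-in workload and fail if it runs more than a chosen tolerance slower than a recorded baseline. The workload, baseline value, and 10% tolerance are assumptions for the example, not XPARE's actual policy.

        # Illustrative performance regression check; the workload, baseline value,
        # and tolerance are invented for this sketch and are not XPARE's policy.
        import time

        def run_benchmark():
            """Stand-in workload; a real harness would time an application kernel."""
            start = time.perf_counter()
            sum(i * i for i in range(1_000_000))
            return time.perf_counter() - start

        def check_regression(elapsed, baseline, tolerance=0.10):
            """Raise if the new timing is more than `tolerance` slower than the baseline."""
            if elapsed > baseline * (1 + tolerance):
                raise RuntimeError(f"regression: {elapsed:.3f}s vs baseline {baseline:.3f}s")

        if __name__ == "__main__":
            check_regression(run_benchmark(), baseline=0.25)  # assumed historical timing
            print("within tolerance of baseline")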

    Advanced techniques in reliability model representation and solution

    The current tendency in flight control system design is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused on increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that takes as input a graphical, object-oriented block diagram of the system and uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. ASSURE uses a failure modes-effects simulation. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
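
    For readers unfamiliar with Markov reliability models, the small sketch below (an illustration, not the paper's model) integrates the forward equations of a duplex system with an assumed per-component failure rate to estimate the probability of system failure over a ten-hour mission.

        # Illustrative Markov reliability model, not the paper's: a duplex system
        # with an assumed per-component failure rate, no repair, and three states
        # (both up, one up, system failed). Euler integration of the forward
        # equations estimates the probability of system failure over a mission.
        lam = 1e-3           # per-hour component failure rate (assumed)
        p = [1.0, 0.0, 0.0]  # initial state probabilities
        dt, t_end = 0.01, 10.0
        for _ in range(int(t_end / dt)):
            d0 = -2 * lam * p[0]
            d1 = 2 * lam * p[0] - lam * p[1]
            d2 = lam * p[1]
            p = [p[0] + dt * d0, p[1] + dt * d1, p[2] + dt * d2]
        print(f"P(system failure by {t_end} h) ~ {p[2]:.2e}")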