
    Model based code generation for distributed embedded systems

    Embedded systems are becoming increasingly complex and more distributed. Cost and quality requirements necessitate reuse of the functional software components across multiple deployment architectures. An important step is the allocation of software components to hardware, during which the differences between the hardware and application software architectures must be reconciled. In this paper we discuss an architecture-driven approach that uses model-based techniques to resolve these differences and integrate hardware and software components. The system architecture serves as the basis from which distributed real-time components are generated, and the generation of various embedded system architectures from the same functional architecture is discussed. The approach leverages the following technologies: IME (Integrated Modeling Environment), the SAE AADL (Architecture Analysis and Design Language), and Ocarina. The approach is illustrated using the electronic throttle control system as a case study.
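    As a rough illustration of the allocation step described above (a hedged sketch, not taken from the paper; component names, processor names, and utilization figures are hypothetical), a candidate mapping of software components to processors can be checked against simple utilization budgets:

```python
# Hypothetical sketch: check a candidate allocation of software components
# to processors against per-processor utilization budgets.
# All names and numbers are illustrative; they do not come from the paper.

def allocation_feasible(components, processors, mapping):
    """Return True if every processor's accumulated utilization stays within its budget."""
    load = {p: 0.0 for p in processors}
    for comp, util in components.items():
        load[mapping[comp]] += util
    return all(load[p] <= budget for p, budget in processors.items())

# Throttle-control-style components with assumed utilizations
components = {"pedal_sensor": 0.15, "controller": 0.40, "actuator_driver": 0.20}
processors = {"ecu_a": 0.70, "ecu_b": 0.70}   # utilization budgets per processor
mapping = {"pedal_sensor": "ecu_a", "controller": "ecu_b", "actuator_driver": "ecu_a"}

print(allocation_feasible(components, processors, mapping))  # True
```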

    Foggy clouds and cloudy fogs: a real need for coordinated management of fog-to-cloud computing systems

    The recent advances in cloud services technology are fueling a plethora of information technology innovation, including networking, storage, and computing. Today, various flavors of IoT, cloud computing, and so-called fog computing have evolved, the latter referring to the capability of edge devices and users' clients to compute, store, and exchange data among each other and with the cloud. Although the rapid pace of this evolution was not easily foreseeable, today each piece of it facilitates and enables the deployment of what we commonly refer to as smart scenarios, including smart cities, smart transportation, and smart homes. As most current cloud, fog, and network services run simultaneously in each scenario, we observe that we are at the dawn of what may be the next big step in the cloud computing and networking evolution, whereby services may be executed at the network edge, both in parallel and in a coordinated fashion, supported by the relentless evolution of technology. As edge devices become richer in functionality and smarter, embedding capacities such as storage and processing as well as new functionalities such as decision making, data collection, forwarding, and sharing, a real need is emerging for coordinated management of fog-to-cloud (F2C) computing systems. This article introduces a layered F2C architecture, its benefits and strengths, and the open research challenges it raises, making the case for the real need for coordinated management. Our architecture, the illustrative use case presented, and a comparative performance analysis, albeit conceptual, all clearly show the way forward toward a new IoT scenario with a set of existing and unforeseen services provided on highly distributed and dynamic compute, storage, and networking resources, bringing together heterogeneous and commodity edge devices, emerging fogs, and conventional clouds.
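    As a concrete, purely illustrative example of the kind of decision a coordinated F2C management layer would make, the sketch below places a task at the fog or cloud tier based on its latency bound and the fog node's spare capacity; the policy, round-trip times, and thresholds are assumptions, not drawn from the article:

```python
# Hedged sketch of an F2C placement policy: prefer the fog tier when the task's
# latency bound cannot be met from the cloud and the fog node has spare capacity.
# Round-trip times and capacities are made-up illustrative values.

def place_task(latency_bound_ms, cpu_demand, fog_free_cpu,
               fog_rtt_ms=5.0, cloud_rtt_ms=60.0):
    if latency_bound_ms >= cloud_rtt_ms:
        return "cloud"    # loose bound: centralize in the cloud
    if latency_bound_ms >= fog_rtt_ms and cpu_demand <= fog_free_cpu:
        return "fog"      # tight bound, and the task fits at the edge
    return "reject"       # too tight for the cloud, too heavy for the fog

print(place_task(latency_bound_ms=20, cpu_demand=1.0, fog_free_cpu=2.0))   # fog
print(place_task(latency_bound_ms=200, cpu_demand=8.0, fog_free_cpu=2.0))  # cloud
```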

    Study on advanced information processing system

    Issues related to the reliability of a redundant system with large main memory are addressed. In particular, the Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as the basis for our presentation. When the system is free of latent faults, the probability of a system crash due to nearly coincident channel faults is shown to be insignificant even when the outputs of the computing channels are voted on only infrequently. In particular, using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing the number of channels for applications with long mission times. Even without a voter, most memory errors can be immediately corrected by CEMs implemented with conventional coding techniques. In addition to enhancing system reliability, CEMs, with low hardware overhead, can be used to reduce not only the need for memory realignment but also the time required to realign channel memories in the rare case such a need arises. Using CEMs, we have developed two schemes, called Scheme 1 and Scheme 2, to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.
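    The masking behavior referred to above can be illustrated with a bitwise majority vote across three channels; this is a generic sketch of voting, not the FTP's or the CEMs' actual implementation:

```python
# Illustrative only: per-bit majority voting over three redundant channels.
# A single faulty channel's memory word is masked by the other two.

def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three channel outputs."""
    return (a & b) | (a & c) | (b & c)

good = 0b1011_0010
faulty = good ^ 0b0000_0100                    # one channel with a single flipped bit
print(bin(majority_vote(good, faulty, good)))  # 0b10110010, the fault is masked
```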

    A Quantum Monte Carlo algorithm for non-local corrections to the Dynamical Mean-Field Approximation

    We present the algorithmic details of the dynamical cluster approximation (DCA), with a quantum Monte Carlo (QMC) method used to solve the effective cluster problem. The DCA is a fully causal approach which systematically restores non-local correlations to the dynamical mean-field approximation (DMFA) while preserving the lattice symmetries. The DCA becomes exact for an infinite cluster size, while reducing to the DMFA for a cluster size of unity. We present a generalization of the Hirsch-Fye QMC algorithm for the solution of the embedded cluster problem. We use the two-dimensional Hubbard model to illustrate the performance of the DCA technique. At half-filling, we show that the DCA drives the spurious finite-temperature antiferromagnetic transition found in the DMFA slowly towards zero temperature as the cluster size increases, in conformity with the Mermin-Wagner theorem. Moreover, we find that there is a finite-temperature metal-to-insulator transition which persists into the weak-coupling regime. This suggests that the magnetism of the model is Heisenberg-like for all non-zero interactions. Away from half-filling, we find that the sign problem that arises in QMC simulations is significantly less severe in the context of the DCA. Hence, we were able to obtain good statistics for small clusters. For these clusters, the DCA results show evidence of non-Fermi-liquid behavior and superconductivity near half-filling.
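    The cluster-size behavior quoted above (exact as the cluster grows, DMFA at a cluster of one site) follows from the coarse-graining step at the heart of the DCA; the relation below is the standard form of that step, with notation assumed rather than quoted from the paper:

```latex
% Standard DCA coarse-graining (notation assumed, not quoted from the paper):
% the Brillouin zone is split into N_c cells centered on the cluster momenta K,
% and the cluster self-energy \Sigma_c(K, i\omega_n) is used for all k in a cell.
\bar{G}(\mathbf{K}, i\omega_n)
  = \frac{N_c}{N} \sum_{\tilde{\mathbf{k}}}
    \frac{1}{i\omega_n + \mu - \epsilon_{\mathbf{K}+\tilde{\mathbf{k}}}
             - \Sigma_c(\mathbf{K}, i\omega_n)}
```

    For a single-site cluster the sum runs over the whole Brillouin zone with a momentum-independent self-energy, recovering the DMFA; as the cluster size grows the self-energy acquires momentum dependence and the approximation approaches the exact lattice result.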

    Redundancy management for efficient fault recovery in NASA's distributed computing system

    The management of redundancy in computer systems was studied, and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management through efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources needed to embed computational task graphs in the system architecture and to reconfigure these tasks after a failure has occurred. Computational structures represented by a path and by a complete binary tree were considered, and mesh and hypercube architectures were targeted for their embeddings. The innovative concept of the Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
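    As a pointer to what such an embedding looks like (a generic textbook construction, not the algorithms developed in the study), a path of 2^d tasks can be mapped onto a d-dimensional hypercube with dilation 1 using the binary-reflected Gray code:

```python
# Illustrative only: embed a path of 2**d task nodes into a d-dimensional
# hypercube by mapping node i to the i-th binary-reflected Gray code, so
# consecutive tasks land on adjacent processors (addresses differ in one bit).

def gray(i: int) -> int:
    return i ^ (i >> 1)

def embed_path_in_hypercube(d: int):
    return [gray(i) for i in range(2 ** d)]

print([format(node, "03b") for node in embed_path_in_hypercube(3)])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```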