
    Introduction to the ISO specification language LOTOS

    LOTOS is a specification language that has been specifically developed for the formal description of the OSI (Open Systems Interconnection) architecture, although it is applicable to distributed, concurrent systems in general. In LOTOS, a system is seen as a set of processes which interact and exchange data with each other and with their environment. LOTOS is expected to become an ISO international standard by 1988.

    Concurrent Image Processing Executive (CIPE). Volume 1: Design overview

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high-performance science analysis workstation, are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.

    X-rays from accretion shocks in T Tauri stars: The case of BP Tau

    We present an XMM-Newton observation of the classical T Tauri star BP Tau. In the XMM-Newton RGS spectrum, the O VII triplet is clearly detected with a very weak forbidden line, indicating high plasma densities and/or a high-UV-flux environment. At the same time, concurrent UV data point to a small hot-spot filling factor, suggesting an accretion funnel shock as the site of the X-ray and UV emission. Together with the X-ray data on TW Hya, these new observations suggest such funnels to be a general feature in classical T Tauri stars. Comment: 4 pages, 4 figures, accepted by A&

    Statistical analysis and use of the VAS radiance data

    Special radiosonde soundings at 75 km spacings and 3-hour intervals provided an opportunity to learn more about mesoscale data and storm-environment interactions. Relatively small areas of intense convection produce major changes in surrounding fields of thermodynamic, kinematic, and energy variables. The Red River Valley tornado outbreak was studied. Satellite imagery and surface data were used to specify cloud information needed in the radiative heating/cooling calculations. A feasibility study for computing boundary layer winds from satellite-derived thermal data was completed. Winds obtained from TIROS-N retrievals compared very favorably with corresponding values from concurrent rawinsonde thermal data, and both sets of thermally derived winds showed good agreement with observed values.

    Supporting Attention Allocation in Multitask Environments: Effects of Likelihood Alarm Systems on Trust, Behavior, and Performance

    This publication is freely accessible with the permission of the rights owner, due to an Alliance licence and a national licence funded by the DFG (German Research Foundation). Objective: The aim of the current study was to investigate potential benefits of likelihood alarm systems (LASs) over binary alarm systems (BASs) in a multitask environment. Background: Several problems are associated with the use of BASs, because most of them generate high numbers of false alarms. Operators lose trust in the systems and either ignore alarms or cross-check all of them when other information is available. The first behavior harms safety, whereas the latter reduces productivity. LASs represent an alternative, which is supposed to improve operators’ attention allocation. Method: We investigated LASs and BASs in a dual-task paradigm with and without the possibility to cross-check alerts against raw data information. Participants’ trust in the system, their behavior, and their performance in the alert task and the concurrent task were assessed. Results: Reported trust, compliance with alarms, and performance in the alert and the concurrent task were higher for the LAS than for the BAS. The cross-check option led to an increase in alert-task performance for both systems and a decrease in concurrent-task performance for the BAS, which did not occur in the LAS condition. Conclusion: LASs improve participants’ attention allocation between two different tasks and therefore lead to an increase in both alert-task and concurrent-task performance. The performance maximum is achieved when an LAS is combined with a cross-check option for validating alerts with additional information. Application: The use of LASs instead of BASs in safety-related multitask environments has the potential to increase safety and productivity alike.
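    The core distinction the abstract describes can be sketched in a few lines: a BAS collapses all evidence into one undifferentiated alarm, while an LAS grades its alerts by the estimated likelihood of the event. A minimal illustration follows; the thresholds, labels, and probability values are illustrative assumptions, not the parameters used in the study.

    ```python
    # Hedged sketch: mapping an estimated event probability to alerts.
    # Thresholds (0.3, 0.7) and labels are hypothetical, chosen only to
    # contrast the two alarm-system designs described in the abstract.

    def bas_alert(p_event: float, threshold: float = 0.3) -> str:
        """Binary alarm system: one threshold, one undifferentiated alarm."""
        return "ALARM" if p_event >= threshold else "no alarm"

    def las_alert(p_event: float) -> str:
        """Likelihood alarm system: graded alerts convey how likely the
        event is, so the operator can prioritise which alerts to cross-check."""
        if p_event >= 0.7:
            return "ALARM (high likelihood)"
        if p_event >= 0.3:
            return "WARNING (low likelihood)"
        return "no alarm"

    for p in (0.1, 0.4, 0.8):
        print(f"p={p}: BAS -> {bas_alert(p)}, LAS -> {las_alert(p)}")
    ```

    At p = 0.4 the BAS already raises a full alarm, whereas the LAS issues only a warning, which is one way graded alerts can reduce the cost of false alarms.
    
    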

    Performance Evaluation of Blocking and Non-Blocking Concurrent Queues on GPUs

    The efficiency of concurrent data structures is crucial to the performance of multi-threaded programs in shared-memory systems. The arbitrary execution of concurrent threads, however, can result in incorrect behavior of these data structures. Graphics Processing Units (GPUs) have emerged as a powerful platform for high-performance computing. While regular data-parallel computations are straightforward to implement on traditional CPU architectures, concurrent data structures are challenging to implement in a SIMD environment with thousands of active threads on GPU architectures. In this thesis, we implement a concurrent queue data structure and evaluate its performance on GPUs to understand how it behaves in a massively parallel GPU environment. We implement both blocking and non-blocking approaches and compare their performance and behavior using both a micro-benchmark and a real-world application. We provide a complete evaluation and analysis of our implementations on an AMD Radeon R7 GPU. Our experiments show that the non-blocking approach outperforms the blocking approach by up to 15.1 times when sufficient thread-level parallelism is present.
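    The blocking/non-blocking contrast above can be sketched on the CPU side. In the blocking design every operation serialises on one mutex, so a stalled lock holder stalls everyone; in the non-blocking design each operation completes via individually atomic steps. This is a hedged illustration only: real GPU queues like the ones evaluated here use hardware atomics (compare-and-swap), not Python locks or CPython's atomic `deque` operations, which merely stand in for them.

    ```python
    # Hedged CPU-side sketch of the two queue designs. All class names are
    # hypothetical; this is not the thesis's GPU implementation.

    import threading
    from collections import deque

    class BlockingQueue:
        """Blocking approach: all operations serialise on one mutex, so a
        thread holding the lock can stall every other thread."""
        def __init__(self):
            self._items = []
            self._lock = threading.Lock()

        def enqueue(self, x):
            with self._lock:
                self._items.append(x)

        def dequeue(self):
            with self._lock:
                return self._items.pop(0) if self._items else None

    class NonBlockingStyleQueue:
        """Non-blocking style: no lock; CPython's atomic deque.append /
        popleft stand in for the CAS loops a real lock-free queue uses."""
        def __init__(self):
            self._items = deque()

        def enqueue(self, x):
            self._items.append(x)

        def dequeue(self):
            try:
                return self._items.popleft()
            except IndexError:
                return None

    q = NonBlockingStyleQueue()
    for i in range(3):
        q.enqueue(i)
    print([q.dequeue() for _ in range(4)])  # FIFO order; None once empty
    ```

    The interface is identical; the performance gap the thesis measures comes from how the two designs behave when thousands of SIMD threads contend on the shared structure.
    
    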

    Brief Announcement: Update Consistency in Partitionable Systems

    Data replication is essential to ensure the reliability, availability, and fault tolerance of massive distributed applications over large-scale systems such as the Internet. However, these systems are prone to partitioning, which by Brewer's CAP theorem [1] makes it impossible to use a strong consistency criterion like atomicity. Eventual consistency [2] guarantees that all replicas eventually converge to a common state when the participants stop updating. However, it fails to fully specify shared objects and requires additional non-intuitive and error-prone distributed specification techniques, which must take into account all possible concurrent histories of updates to specify this common state [3]. This approach, which can lead to specifications as complicated as the implementations themselves, is limited by a more serious issue. The concurrent specification of objects uses the notion of concurrent events. In message-passing systems, two events are concurrent if they are enforced by different processes and each process enforced its event before it received the notification message from the other process. In other words, the notion of concurrency depends on the implementation of the object, not on its specification. Consequently, the final user may not know whether two events are concurrent without explicitly tracking the messages exchanged by the processes. A specification should be independent of the system on which it is implemented. We believe that an object should be totally specified by two facets: its abstract data type, which characterizes its sequential executions, and a consistency criterion, which defines how it is supposed to behave in a distributed environment. Not only does a sequential specification help sidestep the problem of intention, it also allows the use of the well-studied and well-understood notions of languages and automata. This makes it possible to apply all the tools developed for sequential systems, from their simple definition using structures and classes to the most advanced techniques like model checking and formal verification. Eventual consistency (EC) imposes no constraint on the convergent state, which may depend very little on the sequential specification. For example, an implementation that ignores all the updates is eventually consistent, as all replicas converge to the initial state. We propose a new consistency criterion, update consistency (UC), in which the convergent state must be obtained by a total ordering of the updates that contains the sequential order of each process. Comment: in DISC14 - 28th International Symposium on Distributed Computing, Oct 2014, Austin, United States
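    The update-consistency idea in the abstract can be sketched concretely: every replica eventually replays the same total order of updates, one that extends each process's own sequential order, so the convergent state is always explainable by the sequential specification. In the sketch below, the ordering key (logical timestamp, replica id) and all names are illustrative assumptions, not the paper's protocol.

    ```python
    # Hedged sketch of update consistency (UC): replicas converge by
    # replaying all updates in one agreed total order. The key
    # (timestamp, replica_id) is a hypothetical Lamport-style tiebreak.

    def converge(updates):
        """updates: list of (timestamp, replica_id, op), where op maps
        state -> state. Replay in a total order that extends each
        replica's local (timestamp) order; delivery order is irrelevant."""
        state = 0  # an integer register as the sequential data type
        for _, _, op in sorted(updates, key=lambda u: (u[0], u[1])):
            state = op(state)
        return state

    # Two replicas issue updates; A's two updates are concurrent with B's.
    ops = [
        (1, "A", lambda s: s + 5),   # replica A: add 5
        (1, "B", lambda s: s * 2),   # replica B, concurrent: double
        (2, "A", lambda s: s - 1),   # replica A, later: subtract 1
    ]
    # Whatever order messages arrive in, replaying in the agreed total
    # order yields the same convergent state on every replica:
    print(converge(ops))                   # (0 + 5) * 2 - 1 = 9
    print(converge(list(reversed(ops))))   # same convergent state: 9
    ```

    Note how this rules out the degenerate eventually-consistent implementation mentioned above: an implementation that ignores all updates cannot produce a state obtained by totally ordering them.
    
    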

    From Event-B models to code: sensing, actuating, and the environment

    No full text
    The Event-B method is a formal approach for modelling systems in safety- and business-critical domains. We focus, in this paper, on multi-tasking, embedded control systems. Initially, system specification takes place at a high level of abstraction; detail is added in refinement steps as the development proceeds toward implementation. In previous work, we presented an approach for generating code, for concurrent programs, from Event-B. Translators generate program code for tasks that access data in a safe way, using shared objects. We did not distinguish between tasks of the environment and those of the controller. The work described in this paper offers improved modelling and code-generation support, where we separate the environment from the controller. The events in the system can participate in actuating or sensing roles. In the resulting code, sensing and actuation can be simulated using a form of subroutine call; or additional information can be provided to allow a task to read/write directly from/to a specified memory location.
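    The two code-generation styles at the end of the abstract, sensing/actuation as subroutine calls versus direct reads/writes of a designated memory location, can be contrasted in a small sketch. Everything here is hypothetical illustration: Event-B translators target languages like Ada or C, and none of these names come from the paper.

    ```python
    # Hedged sketch of the two controller/environment coupling styles.
    # A shared object simulates the environment; the "memory location"
    # is stood in for by plain attributes.

    class Environment:
        """Simulated environment with one sensed and one actuated variable."""
        def __init__(self):
            self.temperature = 20.0   # sensed variable
            self.heater_on = False    # actuated variable

        # Style 1: sensing and actuating exposed as subroutine calls.
        def sense_temperature(self):
            return self.temperature

        def actuate_heater(self, on: bool):
            self.heater_on = on

    def controller_step_subroutine(env: Environment, setpoint: float):
        """Controller task using the subroutine-call style: sensing and
        actuating events become explicit calls into the environment."""
        reading = env.sense_temperature()        # sensing event
        env.actuate_heater(reading < setpoint)   # actuating event

    def controller_step_direct(env: Environment, setpoint: float):
        """Controller task reading/writing the shared locations directly,
        standing in for a mapped memory address in the generated code."""
        env.heater_on = env.temperature < setpoint

    env = Environment()
    controller_step_subroutine(env, setpoint=22.0)
    print(env.heater_on)  # True: 20.0 < 22.0, so the heater is switched on
    ```

    Both styles implement the same sensing/actuating events; the difference is whether the environment boundary is crossed through a call interface or through shared memory, which is exactly the separation of environment from controller the paper introduces.
    
    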