
    Modern middleware for the data acquisition of the Cherenkov Telescope Array

    The data acquisition system (DAQ) of the future Cherenkov Telescope Array (CTA) must be efficient, modular and robust to be able to cope with the very large data rate of up to 550 Gbps coming from many telescopes with different characteristics. The use of modern middleware, namely ZeroMQ and Protocol Buffers, can help to achieve these goals while keeping the development effort to a reasonable level. Protocol Buffers are used as an on-line data format, while ZeroMQ is employed to communicate between processes. The DAQ will be controlled and monitored by the ALMA Common Software (ACS). Protocol Buffers from Google are a way to define high-level data structures through an interface description language (IDL) and a meta-compiler. ZeroMQ is a middleware that augments the capabilities of TCP/IP sockets. It does not implement very high-level features like those found in CORBA, for example, but it makes the use of sockets easier, more robust and almost as effective as raw TCP. The use of these two middlewares enabled us to rapidly develop a robust prototype of the DAQ, including data persistence to compressed FITS files. (Comment: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands. All CTA contributions at arXiv:1508.0589)
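    As a rough illustration of this pairing, the sketch below shows the pattern in Python with the pyzmq binding: a structured event is serialized and pushed over a ZeroMQ PUSH/PULL pair. The endpoint, field names and fixed-header layout are illustrative assumptions, not the actual CTA event format, and struct.pack stands in for the serializer that the Protocol Buffers meta-compiler would generate from a .proto description.

```python
# Sketch of the DAQ pattern described above: a structured event is serialized
# and handed to ZeroMQ for inter-process transport. In the real system the
# structure would be declared in a Protocol Buffers IDL file, e.g. (hypothetical):
#
#   message CameraEvent { uint32 telescope_id = 1; uint64 timestamp = 2; bytes waveforms = 3; }
#
# Here struct.pack stands in for the generated protobuf serializer so that the
# example stays self-contained.
import struct
import zmq

def serialize_event(telescope_id: int, timestamp: int, payload: bytes) -> bytes:
    # fixed header (telescope id, timestamp, payload length) followed by the raw payload
    return struct.pack("<IQI", telescope_id, timestamp, len(payload)) + payload

def run_sender(endpoint: str = "tcp://127.0.0.1:5555") -> None:
    ctx = zmq.Context.instance()
    push = ctx.socket(zmq.PUSH)      # PUSH/PULL load-balances events over any number of receivers
    push.connect(endpoint)
    push.send(serialize_event(1, 1234567890, b"\x00" * 16))

def run_receiver(endpoint: str = "tcp://127.0.0.1:5555") -> None:
    ctx = zmq.Context.instance()
    pull = ctx.socket(zmq.PULL)
    pull.bind(endpoint)
    raw = pull.recv()                # ZeroMQ delivers whole messages, no manual TCP framing needed
    telescope_id, timestamp, n = struct.unpack_from("<IQI", raw)
    print(f"telescope={telescope_id} timestamp={timestamp} payload_bytes={n}")
```

    Compared with raw TCP sockets, the ZeroMQ pair above transparently handles message framing and reconnection, which is the "easier, more robust" property the abstract refers to.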

    CSP channels for CAN-bus connected embedded control systems

    A closed-loop control system typically contains a multitude of sensors and actuators that operate simultaneously, so such systems are parallel and distributed in essence. When mapping this parallelism to software, however, many obstacles concerning multithreaded communication and synchronization arise. To overcome this problem, the CT kernel/library, based on the CSP algebra, has been developed. This project (TES.5410) develops a communication extension to the CT library to make it applicable in distributed systems. Since the library is tailored to control systems, the properties and requirements of control systems are taken into special consideration. The applicability of existing middleware solutions is examined, and applicable fieldbus protocols are compared to determine the most suitable ones; the CAN fieldbus is chosen as the first fieldbus to be used. A brief overview of CSP and existing CSP-based libraries is given, and a middleware architecture is proposed along with a few novel ideas
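    To make the channel abstraction concrete, the following is a minimal sketch of a CSP-style rendezvous channel in Python, with threads standing in for CSP processes; the real CT library is a C/C++ kernel, and the CAN-bus transport developed in this project is not shown.

```python
# Minimal CSP-style channel: write() blocks until a reader has taken the value,
# giving the synchronous rendezvous semantics that CSP communication is built on.
# Threads stand in for CSP processes; this is a sketch, not the CT library API.
import threading

class Channel:
    def __init__(self):
        self._lock = threading.Lock()
        self._ready = threading.Condition(self._lock)   # signalled when a value is available
        self._taken = threading.Condition(self._lock)   # signalled when the reader consumed it
        self._value = None
        self._full = False

    def write(self, value):
        with self._lock:
            while self._full:                 # wait until any previous value has been consumed
                self._taken.wait()
            self._value, self._full = value, True
            self._ready.notify()
            while self._full:                 # rendezvous: block until the reader takes the value
                self._taken.wait()

    def read(self):
        with self._lock:
            while not self._full:
                self._ready.wait()
            value, self._full = self._value, False
            self._taken.notify()
            return value

# Toy producer/consumer pair, e.g. a sensor process feeding a controller process.
ch = Channel()
sensor = threading.Thread(target=lambda: [ch.write(sample) for sample in (1.0, 2.0, 3.0)])
controller = threading.Thread(target=lambda: [print("got", ch.read()) for _ in range(3)])
sensor.start(); controller.start(); sensor.join(); controller.join()
```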

    The Design and Implementation of MCFlow: a Real-time Multi-core Aware Middleware for Dependent Task Graphs

    Modern computer architectures have evolved from uni-processor platforms to multi-processor and multi-core platforms, but traditional real-time distributed middleware such as RT-CORBA has not kept pace with that evolution. To address this gap, this paper describes the design and implementation of MCFlow, a new real-time distributed middleware for dependent task graphs running on multi-core platforms. MCFlow provides the following contributions to the state of the art in real-time middleware: (1) it provides an efficient C++ based component model through which computations can be configured flexibly for execution within a single core, across cores of a common host, or spanning multiple hosts; (2) it allows optimizations for inter-component communication to avoid data copying without sacrificing the parallel executability of data-dependent tasks; (3) it strictly separates the timing and functional concerns of an application so that they can evolve and be configured independently; and (4) it provides a novel event dispatching architecture that uses lock-free algorithms to avoid mutex locking and reduce memory contention, CPU context switching, and priority inversion. We also present an empirical evaluation that demonstrates the efficacy of our approach
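    As a toy illustration of the scheduling problem MCFlow targets, the sketch below dispatches a small dependent task graph across worker threads in Python. It is not MCFlow's component model or API (MCFlow itself is C++ and uses lock-free dispatching), and the task names are made up.

```python
# Dispatch a dependent task graph across cores: a task becomes eligible only
# once all of its prerequisites have completed. Illustrative sketch only.
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

# task name -> (prerequisite tasks, work to perform)
graph = {
    "read_sensor": (set(),            lambda: print("sample")),
    "filter":      ({"read_sensor"},  lambda: print("filter")),
    "log":         ({"filter"},       lambda: print("log")),
    "actuate":     ({"filter"},       lambda: print("actuate")),
}

def run_graph(graph, workers=4):
    done, running = set(), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(graph):
            # submit every task whose prerequisites are satisfied; "log" and
            # "actuate" both depend only on "filter", so they run in parallel
            for name, (deps, work) in graph.items():
                if name not in done and name not in running and deps <= done:
                    running[name] = pool.submit(work)
            finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
            for name in [n for n, fut in running.items() if fut in finished]:
                del running[name]
                done.add(name)

run_graph(graph)
```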

    A Model-based transformation process to validate and implement high-integrity systems

    Despite numerous advances, building high-integrity embedded systems remains a complex task. They come with strong requirements to ensure safety, schedulability or security properties, and one needs to combine multiple analyses to validate each of them. Model-Based Engineering is an accepted solution to address such complexity: analytical models are derived from an abstraction of the system to be built. Yet ensuring that all abstractions are semantically consistent remains an issue, e.g. when performing model checking to assess safety, then schedulability analysis using timed automata, and then code generation. Complexity stems from the gap between the high-level view of the model and the low-level mechanisms used. In this paper, we present our approach, based on AADL and its behavioral annex, to refine an architecture description iteratively. Both application and runtime components are transformed into basic AADL constructs that have a strict counterpart in classical programming languages or patterns for verification. We detail the benefits of this process for enhancing analysis and code generation. This work has been integrated into the AADL tool support OSATE2
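    To give a flavour of the "strict counterpart in classical programming languages or patterns" mentioned above, the sketch below shows the kind of periodic-dispatch pattern an AADL periodic thread is typically refined into. It is a hypothetical Python rendering for illustration only; code generation for high-integrity systems would target languages and runtimes such as Ada or C.

```python
# Hypothetical counterpart of an AADL periodic thread: a dispatcher releases the
# thread's compute entrypoint at fixed points in time, independent of how long
# each job takes. Python is used only to show the shape of the pattern.
import time

def periodic_thread(entrypoint, period_s: float, iterations: int) -> None:
    next_release = time.monotonic()
    for _ in range(iterations):
        entrypoint()                               # the AADL compute entrypoint subprogram
        next_release += period_s                   # fixed release times avoid drift
        delay = next_release - time.monotonic()
        if delay > 0:
            time.sleep(delay)                      # idle until the next dispatch
        # a negative delay here corresponds to an overrun that schedulability
        # analysis of the model is meant to rule out

periodic_thread(lambda: print("sample inputs, compute, command outputs"), period_s=0.01, iterations=5)
```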

    P4CEP: Towards In-Network Complex Event Processing

    In-network computing using programmable networking hardware is a strong trend in networking that promises to reduce latency and the consumption of server resources by offloading work to network elements (programmable switches and smart NICs). In particular, the data plane programming language P4, together with powerful P4 networking hardware, has spawned projects offloading services into the network, e.g., consensus or caching services. In this paper, we present a novel case for in-network computing, namely Complex Event Processing (CEP). CEP processes streams of basic events, e.g., stemming from networked sensors, into meaningful complex events. Traditionally, CEP has been performed on servers or overlay networks. However, we argue in this paper that CEP is a good candidate for in-network computing along the communication path: it avoids detouring streams to distant servers, minimizing communication latency, while also exploiting the processing capabilities of novel networking hardware. We show that it is feasible to express CEP operations in P4 and also present a tool to compile CEP operations, formulated in our P4CEP rule specification language, to P4 code. Moreover, we identify challenges and problems that we have encountered, pointing to future research directions for implementing full-fledged in-network CEP systems. (Comment: 6 pages. Author's version)
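    As an illustration of the kind of operator that would be pushed into the data plane, the sketch below implements a simple windowed sequence pattern over a stream of basic events in Python. The event names and window size are made up; the actual system expresses such rules in the P4CEP rule specification language and compiles them to P4 match-action code, which is far more constrained than general-purpose code.

```python
# Toy CEP operator: derive a complex event when a "temp_high" basic event is
# followed by a "pressure_drop" basic event within a window of 5 events.
# Illustrative only; P4CEP compiles such rules to P4 tables and registers.
from collections import deque

WINDOW = 5

def detect(stream):
    recent = deque(maxlen=WINDOW)            # bounded state, akin to register state on a switch
    for event in stream:
        recent.append(event)
        if event == "pressure_drop" and "temp_high" in recent:
            yield ("ALERT", tuple(recent))   # the derived complex event

basic_events = ["ok", "temp_high", "ok", "ok", "pressure_drop", "ok"]
for complex_event in detect(basic_events):
    print(complex_event)
```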

    Distributed Computing Grid Experiences in CMS

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004, CMS undertook a large-scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were to run CMS event reconstruction at CERN for a sustained period at a 25 Hz input rate, to distribute the data to several regional centers, and to enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access to the data from anywhere in the world, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user-friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure and the current development of the CMS analysis system

    Automatic Deployment Space Exploration Using Refinement Transformations

    To manage the complex engineering information for real-time systems, the system under development may be modelled in a high-level architecture description language. This high-level information provides a basis for deployment space exploration as it can be used to generate a low-level implementation. During this deployment mapping many platform-dependent choices have to be made whose consequences cannot be easily predicted. In this paper we present an approach to the automatic exploration of the deployment space based on platform-based design. All possible solutions of a deployment step are generated using a refinement transformation. Non-conforming deployment alternatives are pruned as early as possible using simulation or analytical methods. We validate the feasibility of our approach by deploying part of an automotive power window optimized for its real-time behaviour using an AUTOSAR-like representation. First results are promising and show that the optimal solution can indeed be found efficiently with our approach
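    The generate-and-prune idea can be rendered in miniature as below: enumerate every mapping of software components to processing units and discard alternatives that fail a cheap feasibility check. The component names, ECU names and utilization bound are illustrative assumptions, and the utilization test stands in for the simulation and analytical checks used in the actual tool chain.

```python
# Miniature deployment space exploration: generate all alternatives with a
# refinement step (every runnable-to-ECU assignment) and prune non-conforming
# ones as early as possible. All names and numbers are made up for illustration.
from itertools import product

runnables = {"window_ctrl": 0.30, "pinch_detect": 0.25, "bus_gateway": 0.20}  # CPU utilization
ecus = ["ECU_door", "ECU_body"]
MAX_UTIL = 0.6                                             # per-ECU utilization bound

def feasible(mapping):
    util = {ecu: 0.0 for ecu in ecus}
    for runnable, ecu in mapping.items():
        util[ecu] += runnables[runnable]
    return all(u <= MAX_UTIL for u in util.values())

def explore():
    names = list(runnables)
    for assignment in product(ecus, repeat=len(names)):    # all deployment alternatives
        mapping = dict(zip(names, assignment))
        if feasible(mapping):                               # prune infeasible alternatives early
            yield mapping

for alternative in explore():
    print(alternative)
```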

    Timed Automata Models for Principled Composition of Middleware

    Middleware for Distributed Real-time and Embedded (DRE) systems has grown more and more complex in recent years due to the varying functional and temporal requirements of complex real-time applications. To enable DRE middleware to be configured and customized to meet the demands of different applications, a body of ongoing research has focused on applying model-driven development techniques to developing QoS-enabled middleware. While current approaches for modeling middleware focus on easing the task of assembling, deploying and configuring middleware and middleware-based applications, a more formal basis for correct middleware composition and configuration in the context of individual applications is needed. While the modeling community has used application-level formal models that are more abstract to uncover certain flaws in system design, a more fundamental and lower-level set of models is needed to be able to uncover more subtle safety and timing errors introduced by interference between application computations, particularly in the face of alternative concurrency strategies in the middleware layer. In this research, we have examined how detailed formal models of lower-level middleware building blocks provide an appropriate level of abstraction both for modeling and for synthesis of a variety of kinds of middleware from these building blocks. When combined with model checking techniques, these formal models can help developers compose correct combinations of middleware mechanisms, and configure those mechanisms for each particular application
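    The sketch below gives a very small untimed analogue of that idea in Python: two "middleware" activities acquire two locks in opposite orders, and an exhaustive search over their interleavings finds a deadlock that a test run may never hit. The actual work models the building blocks as timed automata and uses a full model checker; this only conveys what exploring interference between concurrent computations looks like.

```python
# Tiny explicit-state exploration: enumerate all interleavings of two lock-using
# activities and report a state where neither can proceed (a deadlock).
from collections import deque

T1 = ["acq A", "acq B", "rel B", "rel A", "done"]   # activity 1: lock order A then B
T2 = ["acq B", "acq A", "rel A", "rel B", "done"]   # activity 2: opposite lock order

def step(tid, pc, prog, owner):
    """Return (new_pc, new_owner) if the activity's next action is enabled, else None."""
    action = prog[pc]
    if action == "done":
        return None
    op, lock = action.split()
    if op == "acq":
        if owner[lock] is not None:
            return None                              # lock held elsewhere: blocked
        return pc + 1, dict(owner, **{lock: tid})
    return pc + 1, dict(owner, **{lock: None})       # release is always enabled

def find_deadlock():
    start = (0, 0, {"A": None, "B": None})
    seen, queue = set(), deque([start])
    while queue:
        pc1, pc2, owner = queue.popleft()
        key = (pc1, pc2, tuple(sorted(owner.items(), key=lambda kv: kv[0])))
        if key in seen:
            continue
        seen.add(key)
        successors = []
        r1 = step(1, pc1, T1, owner)
        if r1 is not None:
            successors.append((r1[0], pc2, r1[1]))
        r2 = step(2, pc2, T2, owner)
        if r2 is not None:
            successors.append((pc1, r2[0], r2[1]))
        if not successors and not (T1[pc1] == "done" and T2[pc2] == "done"):
            return (pc1, pc2, owner)                 # neither activity can move: deadlock
        queue.extend(successors)
    return None

print("deadlock state found:", find_deadlock())
```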
