
    Data reduction in the ITMS system through a data acquisition model with self-adaptive sampling rate

    Long pulse or steady state operation of fusion experiments requires data acquisition and processing systems that reduce the volume of data involved. The availability of self-adaptive sampling rate systems and the use of real-time lossless data compression techniques can help solve these problems. The former is important for continuous adaptation of the sampling frequency to experimental requirements. The latter allows continuous digitization to be maintained under limited memory conditions, achieved by permanent transmission of compressed data to other systems; the compacted transfer ensures minimum bandwidth usage. This paper presents an implementation based on the intelligent test and measurement system (ITMS), a data acquisition system architecture with multiprocessing capabilities that allows the sampling frequency to be adapted throughout the experiment. The sampling rate can be controlled according to the experiment's specific requirements by using an external DC voltage signal or by defining user events through software. The system takes advantage of the high processing capabilities of the ITMS platform to implement a data reduction mechanism based on lossless data compression algorithms, which are themselves based on periodic deltas.
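    The compression stage described above relies on periodic deltas: for a slowly varying digitized signal, successive differences are much smaller than the raw samples and therefore compress well. Below is a minimal, illustrative sketch of delta encoding and exact reconstruction; the function names and sample format are assumptions made for the example and do not reproduce the ITMS implementation.

    ```python
    # Minimal sketch of delta-based lossless compression for digitized samples.
    # Function names and the int64 sample format are illustrative assumptions,
    # not the actual ITMS implementation.
    import numpy as np

    def delta_encode(samples: np.ndarray) -> np.ndarray:
        """Store the first sample followed by successive differences (deltas)."""
        deltas = np.empty_like(samples)
        deltas[0] = samples[0]
        deltas[1:] = np.diff(samples)
        return deltas

    def delta_decode(deltas: np.ndarray) -> np.ndarray:
        """Recover the original samples exactly (lossless) by cumulative summation."""
        return np.cumsum(deltas)

    # Slowly varying signals yield small deltas, which a generic byte-level or
    # entropy coder can then pack tightly before transmission to other systems.
    signal = (1000 * np.sin(np.linspace(0, 3, 10_000))).astype(np.int64)
    encoded = delta_encode(signal)
    assert np.array_equal(delta_decode(encoded), signal)
    ```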

    Self-adaptive sampling rate data acquisition in JET’s correlation reflectometer

    Data acquisition systems with self-adaptive sampling rate capabilities have been proposed as a solution to reduce the sheer amount of data collected in every discharge of present fusion devices. This paper discusses the design of such a system for use in the KG8B correlation reflectometer at JET. The system, which is based on the ITMS platform, continuously adapts the sample rate during the acquisition depending on the signal bandwidth. Data are acquired continuously at the expected maximum sample rate and transferred to a memory buffer in the host processor; the rest of the process is carried out in software. Data are read from the memory buffer in blocks, and an intelligent decimation algorithm is applied to each block. The decimation algorithm determines the signal bandwidth of each block in order to choose the optimum sample rate for that block, and from there the decimation factor to be used. Memory buffers are used to adapt the throughput of the three main software modules (data acquisition, processing, and storage) following a typical producer-consumer architecture. The system reduces the amount of data collected while preserving the information content. Design issues are discussed and results of a performance evaluation are presented.
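    The per-block step can be pictured as: estimate the occupied bandwidth of the block from its spectrum, then decimate to the lowest sample rate that still satisfies the Nyquist criterion. The sketch below is a simplified illustration under assumed details (a 99%-energy bandwidth criterion, a cap on the decimation factor, and invented function names); it is not the algorithm deployed on the KG8B system.

    ```python
    # Illustrative per-block decimation: estimate the block's occupied bandwidth
    # and decimate to the lowest rate that still respects Nyquist. The 99%-energy
    # criterion and the cap on the decimation factor are assumptions.
    import numpy as np
    from scipy.signal import decimate

    def occupied_bandwidth(block: np.ndarray, fs: float, energy_frac: float = 0.99) -> float:
        """Frequency below which energy_frac of the block's spectral energy lies."""
        spectrum = np.abs(np.fft.rfft(block)) ** 2
        total = spectrum.sum()
        if total == 0.0:
            return 0.0
        freqs = np.fft.rfftfreq(block.size, d=1.0 / fs)
        return freqs[np.searchsorted(np.cumsum(spectrum) / total, energy_frac)]

    def adapt_block(block: np.ndarray, fs: float, max_factor: int = 8):
        """Pick a decimation factor from the block bandwidth and apply it."""
        bw = occupied_bandwidth(block, fs)
        wanted = fs / (2.0 * bw) if bw > 0 else max_factor   # keep fs/factor >= 2*bw
        factor = int(min(max_factor, max(1, wanted)))
        return (decimate(block, factor), factor) if factor > 1 else (block, 1)
    ```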

    Intelligent agent for formal modelling of temporal multi-agent systems

    Software systems are becoming more complex and dynamic over time, and to provide better fault tolerance and resource management they need the ability to self-adapt. The multi-agent systems paradigm is an active area of research for modeling real-time systems. In this research, we propose a new agent, named SA-ARTIS-agent, which is designed to work under hard real-time temporal constraints with the ability of self-adaptation. This agent can be used for the formal modeling of any self-adaptive real-time multi-agent system. Our agent integrates the MAPE-K feedback loop with the ARTIS agent to provide self-adaptation. For an unambiguous description, we formally specify our SA-ARTIS-agent using the Timed Communicating Object-Z (TCOZ) language. The objective of this research is to provide an intelligent agent with self-adaptive abilities for the execution of tasks with temporal constraints. Previous works in this domain have used the Z language, which is not expressive enough to model the distributed communication process of agents. The novelty of our work is that we specify the non-terminating behavior of agents using the active class concept of TCOZ and express the distributed communication among agents. For communication between active entities, the channel communication mechanism of TCOZ is used. We demonstrate the effectiveness of the proposed agent using a real-time case study of a traffic monitoring system.
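    The MAPE-K integration mentioned above follows the standard Monitor-Analyse-Plan-Execute cycle acting on a shared Knowledge base. As a rough orientation only, the Python skeleton below sketches that generic loop with a simple deadline check; the class and method names are invented for the example, and the actual SA-ARTIS-agent is specified formally in TCOZ rather than in code.

    ```python
    # Generic MAPE-K loop skeleton (Monitor-Analyse-Plan-Execute over shared
    # Knowledge). Names and the deadline-based adaptation are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Knowledge:
        """Shared state consulted and updated by every phase of the loop."""
        latencies_ms: dict = field(default_factory=dict)
        symptoms: list = field(default_factory=list)
        plan: list = field(default_factory=list)

    class MapeKAgent:
        def __init__(self, deadline_ms: float):
            self.k = Knowledge()
            self.deadline_ms = deadline_ms          # hard real-time constraint

        def monitor(self, readings: dict) -> None:
            self.k.latencies_ms.update(readings)

        def analyse(self) -> None:
            # Flag tasks whose observed latency exceeds the deadline.
            self.k.symptoms = [t for t, ms in self.k.latencies_ms.items()
                               if ms > self.deadline_ms]

        def plan(self) -> None:
            self.k.plan = [("reschedule", t) for t in self.k.symptoms]

        def execute(self) -> None:
            for action, task in self.k.plan:
                print(f"{action}: {task}")
            self.k.plan.clear()

        def step(self, readings: dict) -> None:
            self.monitor(readings)
            self.analyse()
            self.plan()
            self.execute()

    # One loop iteration over per-task latencies from a traffic-monitoring scenario.
    agent = MapeKAgent(deadline_ms=50.0)
    agent.step({"camera_feed": 42.0, "signal_control": 61.5})
    ```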

    Adaptive Real Time Imaging Synthesis Telescopes

    The digital revolution is transforming astronomy from a data-starved to a data-submerged science. Instruments such as the Atacama Large Millimeter Array (ALMA), the Large Synoptic Survey Telescope (LSST), and the Square Kilometer Array (SKA) will measure their accumulated data in petabytes. The capacity to produce enormous volumes of data must be matched with the computing power to process that data and produce meaningful results. In addition to handling huge data rates, we need adaptive calibration and beamforming to handle atmospheric fluctuations and radio frequency interference, and to provide a user environment which makes the full power of large telescope arrays accessible to both expert and non-expert users. Delayed calibration and analysis limit the science that can be done. To make the best use of both telescope and human resources we must reduce the burden of data reduction. Our instrumentation comprises a flexible correlator, beamformer, and imager, with digital signal processing closely coupled to a computing cluster. This instrumentation will be highly accessible to scientists, engineers, and students for research and development of real-time processing algorithms, and will tap into the pool of talented and innovative students and visiting scientists from engineering, computing, and astronomy backgrounds. Adaptive real-time imaging will transform radio astronomy by providing real-time feedback to observers. Calibration of the data is performed in close to real time using a model of the sky brightness distribution. The derived calibration parameters are fed back into the imagers and beamformers. The regions imaged are used to update and improve the a priori model, which becomes the final calibrated image by the time the observations are complete.
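    The closing sentences describe a closed calibration loop: solve for calibration parameters against the current sky model, correct the incoming data, image it, and fold the result back into the model. The schematic below illustrates only that loop structure; the toy per-visibility gain solution, the running-average model update, and the predict()/image() placeholders are assumptions made for the sketch, not part of any ALMA, LSST, or SKA pipeline.

    ```python
    # Schematic of a real-time calibration feedback loop: calibrate against the
    # current sky model, correct the data, image it, and refine the model.
    # The toy gain solution and the predict()/image() placeholders are assumptions.
    import numpy as np

    def solve_gains(vis: np.ndarray, model_vis: np.ndarray) -> np.ndarray:
        """Toy per-visibility gain estimate against the model prediction."""
        return vis / np.where(model_vis == 0, 1, model_vis)

    def apply_calibration(vis: np.ndarray, gains: np.ndarray) -> np.ndarray:
        return vis / np.where(gains == 0, 1, gains)

    def update_sky_model(model_image: np.ndarray, new_image: np.ndarray,
                         weight: float = 0.1) -> np.ndarray:
        """Blend the latest image into the a priori model (running average)."""
        return (1.0 - weight) * model_image + weight * new_image

    def observe(vis_blocks, model_image, predict, image):
        """predict() and image() stand in for the beamformer/imager stages."""
        for vis in vis_blocks:                       # data arrive block by block
            gains = solve_gains(vis, predict(model_image))
            corrected = apply_calibration(vis, gains)
            model_image = update_sky_model(model_image, image(corrected))
        return model_image                           # final calibrated image
    ```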

    Prototype of Fault Adaptive Embedded Software for Large-Scale Real-Time Systems

    This paper describes a comprehensive prototype of large-scale fault adaptive embedded software developed for the proposed Fermilab BTeV high energy physics experiment. Lightweight self-optimizing agents embedded within Level 1 of the prototype are responsible for proactive and reactive monitoring and mitigation based on specified layers of competence. The agents are self-protecting, detecting cascading failures using a distributed approach. Adaptive, reconfigurable, and mobile objects for reliability are designed to be self-configuring and to adapt automatically to dynamically changing environments. These objects provide a self-healing layer with the ability to discover, diagnose, and react to discontinuities in real-time processing. A generic modeling environment was developed to facilitate design and implementation of hardware resource specifications, application data flow, and failure mitigation strategies. Level 1 of the planned BTeV trigger system alone will consist of 2500 DSPs, so the number of components and intractable fault scenarios involved make it impossible to design an "expert system" that applies traditional centralized mitigative strategies based on rules capturing every possible system state. Instead, a distributed reactive approach is implemented using the tools and methodologies developed by the Real-Time Embedded Systems group.
    Comment: 2nd Workshop on Engineering of Autonomic Systems (EASe), in the 12th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems (ECBS), Washington, DC, April, 200
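    The monitoring-and-mitigation scheme above layers competence: a node-local agent reacts cheaply to its own faults first, while a wider layer correlates reports from many nodes to catch cascading failures. The sketch below illustrates that layering only; the symptom names, thresholds, and two-level structure are assumptions for the example and are not taken from the BTeV Level 1 prototype.

    ```python
    # Illustrative two-layer fault-mitigation scheme: node-local reaction first,
    # region-level detection of the same symptom cascading across nodes.
    # Symptom names and thresholds are assumptions, not the BTeV prototype.
    from collections import Counter

    class RegionAgent:
        """Higher competence layer: spot a symptom repeating across many nodes."""
        def __init__(self, cascade_threshold: int = 3):
            self.reports = Counter()
            self.cascade_threshold = cascade_threshold

        def report(self, node_id: str, symptom: str) -> None:
            self.reports[symptom] += 1
            if self.reports[symptom] >= self.cascade_threshold:
                print(f"cascading '{symptom}': re-route data flow around region")

    class NodeAgent:
        """Lowest competence layer: cheap, purely local monitoring and reaction."""
        def __init__(self, node_id: str, region: RegionAgent, queue_limit: int = 1000):
            self.node_id, self.region, self.queue_limit = node_id, region, queue_limit

        def check(self, metrics: dict) -> None:
            symptom = None
            if metrics.get("queue_depth", 0) > self.queue_limit:
                symptom = "overload"
            elif not metrics.get("heartbeat", True):
                symptom = "dead_dsp"
            if symptom:
                print(f"{self.node_id}: local mitigation for {symptom}")
                self.region.report(self.node_id, symptom)   # escalate upward

    region = RegionAgent(cascade_threshold=2)
    for i in range(3):
        NodeAgent(f"dsp{i}", region).check({"queue_depth": 2000})
    ```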

    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chips (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would heavily, even prohibitively, impact the design, manufacturing, and testing costs, as well as system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. To reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free; this fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (assessed against the IEC 61508 functional safety standard) and tight power and performance constraints.
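    The core idea (a small fault-free portion of the chip managing the remaining fault-prone resources) can be pictured with a trivial resource manager that assigns work only to resources currently believed healthy and remaps work when a resource is found faulty. This is a toy illustration under assumed names and a naive placement policy, not the DeSyRe framework or its reconfiguration mechanism.

    ```python
    # Toy manager illustrating the "small fault-free part manages fault-prone
    # resources" idea: assign work only to healthy resources, remap on failure.
    # Names and the placement policy are assumptions, not the DeSyRe framework.

    class ResourceManager:
        def __init__(self, resources):
            self.healthy = set(resources)      # assumed usable until found faulty
            self.assignments = {}              # task -> resource

        def assign(self, task: str) -> str:
            if not self.healthy:
                raise RuntimeError("no fault-free resources left")
            resource = min(self.healthy)       # naive placement policy
            self.assignments[task] = resource
            return resource

        def mark_faulty(self, resource: str) -> None:
            self.healthy.discard(resource)
            # Transparently remap any task that was running on the failed resource.
            for task, res in list(self.assignments.items()):
                if res == resource:
                    self.assign(task)

    manager = ResourceManager(["core0", "core1", "core2"])
    manager.assign("filter_kernel")
    manager.mark_faulty("core0")
    print(manager.assignments)                 # filter_kernel now on another core
    ```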

    A Dual Digital Signal Processor VME Board For Instrumentation And Control Applications

    A dual digital signal processor VME board was developed for the Continuous Electron Beam Accelerator Facility (CEBAF) Beam Current Monitor (BCM) system at Jefferson Lab. It is a versatile, general-purpose digital signal processing board using an open architecture, which allows for adaptation to various applications. The base design uses two independent Texas Instruments (TI) TMS320C6711 devices, which are 900 MFLOPS floating-point digital signal processors (DSPs). Applications that require a fixed-point DSP can be implemented by replacing the baseline DSP with the pin-for-pin compatible TMS320C6211. The design can be manufactured with a reduced chip set without redesigning the printed circuit board; for example, it can be implemented as a single-channel DSP with no analog I/O.
    Comment: 3 PDF pages