
    Technical Proposal for FASER: ForwArd Search ExpeRiment at the LHC

    FASER is a proposed small and inexpensive experiment designed to search for light, weakly-interacting particles during Run 3 of the LHC in 2021-23. Such particles may be produced in large numbers along the beam collision axis, travel for hundreds of meters without interacting, and then decay to standard model particles. To search for such events, FASER will be located 480 m downstream of the ATLAS IP in the unused service tunnel TI12 and be sensitive to particles that decay in a cylindrical volume with radius R=10 cm and length L=1.5 m. FASER will complement the LHC's existing physics program, extending its discovery potential to a host of new, light particles, with potentially far-reaching implications for particle physics and cosmology. This document describes the technical details of the FASER detector components: the magnets, the tracker, the scintillator system, and the calorimeter, as well as the trigger and readout system. The preparatory work that is needed to install and operate the detector, including civil engineering, transport, and integration with various services, is also presented. The information presented includes preliminary cost estimates for the detector components and the infrastructure work, as well as a timeline for the design, construction, and installation of the experiment.
    Comment: 82 pages, 62 figures; submitted to the CERN LHCC on 7 November 201
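As a back-of-the-envelope companion to the geometry above, the probability that a long-lived particle produced at the IP decays inside a volume of length L a distance D downstream follows the standard exponential decay law. This formula and the sample decay length below are illustrative and are not taken from the proposal itself:

```python
import math

def decay_probability(distance_m, length_m, decay_length_m):
    """Probability that a particle with mean lab-frame decay length
    d = gamma * beta * c * tau survives to the front of the decay volume
    (exp(-D/d)) and then decays within its length (1 - exp(-L/d))."""
    return math.exp(-distance_m / decay_length_m) * \
        (1.0 - math.exp(-length_m / decay_length_m))

# Illustrative numbers only: the FASER geometry from the abstract
# (D = 480 m, L = 1.5 m) with a hypothetical mean decay length of 500 m.
p = decay_probability(480.0, 1.5, 500.0)
```

For decay lengths much longer than the detector, the second factor is approximately L/d, so the acceptance scales linearly with the length of the decay volume.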

    Robust fault tolerant control of induction motor system

    Research into fault tolerant control (FTC, a set of techniques developed to increase plant availability and reduce the risk of safety hazards) for induction motors is motivated by practical concerns, including the need for enhanced reliability, improved maintenance operations and reduced cost. Its aim is to prevent simple faults from developing into serious failures. Although the subject of induction motor control is well known, the main topics in the literature are scalar and vector control and structural stability. However, induction machines experience various fault scenarios, and to meet the above requirements FTC strategies based on existing or more advanced control methods become desirable. Some earlier studies on FTC have addressed particular problems such as 3-phase current/voltage sensor FTC, torque FTC, etc. However, the development of these methods lacks a more general understanding of the overall problem of FTC for an induction motor based on a true classification of possible fault types. In order to develop a more general approach to FTC for induction motors, i.e. not just designing specific control approaches for individual fault scenarios, this thesis carries out systematic research on induction motor systems, considering the various faults that can typically be present and their "additive" or "multiplicative" effects on the system dynamics, according to whether the faults are sensor or actuator (additive) types or component or motor (multiplicative) types. To achieve the required objectives, an active approach to FTC is used, making use of fault estimation (FE, an approach that determines the magnitude of a fault signal online) and fault compensation.
This FTC/FE approach considers an integration of the electrical and mechanical dynamics, initially using adaptive and/or sliding mode observers and Linear Parameter Varying methods (LPV, in which nonlinear systems are locally decomposed into several linear systems scheduled by varying parameters), and then using back-stepping control combined with observer/estimation methods to handle certain forms of nonlinearity. In conclusion, the thesis proposes an integrated treatment of induction motor FTC/FE considering different types of faults and different types of uncertainties, and validates the approaches through simulations and experiments.
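The fault estimation idea can be illustrated with a toy augmented-state Luenberger observer on a hypothetical scalar plant. All numbers here are invented for illustration; the thesis itself concerns richer adaptive, sliding-mode, LPV and back-stepping designs on full induction motor dynamics:

```python
import numpy as np

# Additive actuator fault f on a scalar plant x[k+1] = a*x[k] + b*(u[k] + f),
# with measurement y = x.  Treating f as an extra state with assumed-constant
# dynamics gives an augmented system; a Luenberger observer on it estimates
# both the state and the fault magnitude online (the FE step), which an
# active FTC scheme can then compensate (e.g. u_comp = u - f_hat).
a, b = 0.9, 0.5
A_aug = np.array([[a, b],
                  [0.0, 1.0]])      # augmented state z = [x, f]
B_aug = np.array([b, 0.0])
C_aug = np.array([1.0, 0.0])
L = np.array([1.4, 1.12])          # gain placing observer poles at 0.2, 0.3

x, z_hat, u = 0.0, np.zeros(2), 1.0
for k in range(200):
    f = 0.8 if k >= 50 else 0.0    # abrupt actuator fault appears at k = 50
    y = x                          # measurement
    # predictor-form observer: z_hat[k+1] = A z_hat + B u + L (y - C z_hat)
    z_hat = A_aug @ z_hat + B_aug * u + L * (y - C_aug @ z_hat)
    x = a * x + b * (u + f)        # true plant propagation

f_hat = z_hat[1]                   # converges to the true fault magnitude 0.8
```

The estimation error obeys e[k+1] = (A_aug - L C_aug) e[k]; with the poles placed well inside the unit circle, the fault estimate reconverges within a few tens of steps after the fault appears.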

    A 2D DWT architecture suitable for the Embedded Zerotree Wavelet Algorithm

    Digital imaging has had an enormous impact on industrial applications such as the Internet and video-phone systems, and demand continues to grow; Internet application users, in particular, are growing at a near-exponential rate. The sharp increase in applications using digital images has placed much emphasis on the fields of image coding, storage, processing and communications, and new techniques are continuously developed with the main aim of increasing efficiency. Image coding is in particular a field of great commercial interest. A digital image requires a large amount of data to be created, and this causes many problems when storing, transmitting or processing the image; reducing the amount of data used to represent an image is the main objective of image coding. The JPEG image coding standard has enjoyed widespread acceptance, and the industry continues to explore its various implementation issues. However, recent research indicates that multiresolution-based image coding is a far superior alternative. A recent development in the field is the use of the Embedded Zerotree Wavelet (EZW) technique to achieve image compression. One of the aims of this thesis is to explain how this technique is superior to other current coding standards. It will be seen that an essential part of this method of image coding is the use of multiresolution analysis, a subband system whereby the subbands are logarithmically spaced in frequency and represent an octave-band decomposition. The block structure that implements this function is termed the two-dimensional Discrete Wavelet Transform (2D-DWT).
The 2D DWT can be achieved by several architectures, and these are analysed in order to choose the architecture best suited to the EZW coder. Finally, this architecture is implemented and verified using the Synopsys Behavioural Compiler, and recommendations are made based on experimental findings.
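The octave-band decomposition mentioned above can be sketched in a few lines. This is a generic one-level separable 2D Haar DWT for illustration; the thesis is concerned with hardware architectures for such transforms, not this software form:

```python
import numpy as np

def haar_rows(x):
    # low-pass / high-pass filter and downsample by 2 along rows (axis 1)
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2.0)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2.0)
    return lo, hi

def haar_cols(x):
    # the same filter pair applied along columns (axis 0)
    lo = (x[0::2, :] + x[1::2, :]) / np.sqrt(2.0)
    hi = (x[0::2, :] - x[1::2, :]) / np.sqrt(2.0)
    return lo, hi

def dwt2_haar(img):
    """One level of the separable 2D Haar DWT: returns the LL, LH, HL, HH
    subbands, each half the input size in both dimensions.  The transform
    is orthogonal, so signal energy is preserved across the subbands."""
    lo, hi = haar_rows(img)
    ll, lh = haar_cols(lo)
    hl, hh = haar_cols(hi)
    return ll, lh, hl, hh
```

Applying `dwt2_haar` recursively to the LL subband yields the logarithmically spaced octave-band pyramid across whose scales EZW exploits zerotree relationships between coefficients.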

    MPEG Reconfigurable Video Coding

    WOS - ISBN: 978-1-4419-6344-4
    The current monolithic and lengthy scheme behind the standardization and design of new video coding standards is becoming inappropriate to satisfy the dynamism and changing needs of the video coding community. Such a scheme and specification formalism do not enable designers to exploit the clear commonalities between the different codecs, at either the specification or the implementation level. This problem is one of the main reasons for the typically long interval between the time a new idea is validated and the time it is implemented in consumer products as part of a worldwide standard. Analysis of this problem originated a new standard initiative within the ISO/IEC MPEG committee, called Reconfigurable Video Coding (RVC). The main idea is to develop a video coding standard that overcomes many shortcomings of the current standardization and specification process by updating and progressively incrementing a modular library of components. As the name implies, flexibility and reconfigurability are attractive new features of the RVC standard. The RVC framework is based on a new actor/dataflow-oriented language called CAL for the specification of the standard library and the instantiation of the RVC decoder model. CAL dataflow models expose the intrinsic concurrency of the algorithms by employing the notions of actor programming and dataflow. This chapter gives an overview of the concepts and technologies building the standard RVC framework, and of the non-standard tools supporting the RVC model, from the instantiation and simulation of the CAL model to software and/or hardware code synthesis.
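The actor/dataflow notion underlying CAL can be illustrated with a minimal token-passing sketch. This is written in Python purely for illustration; CAL itself is a dedicated dataflow language with a much richer firing-rule and action syntax:

```python
from collections import deque

# A dataflow actor fires only when its firing rule is satisfied (enough
# tokens on its input FIFO), consuming inputs and producing outputs.
# Actors share no state, which is what exposes the intrinsic concurrency
# of a decoder network built from such components.
class Actor:
    def __init__(self, fn, n_in):
        self.fn, self.n_in = fn, n_in
        self.inbox = deque()    # input FIFO
        self.outbox = deque()   # output FIFO

    def fire(self):
        if len(self.inbox) < self.n_in:   # firing rule not satisfied
            return False
        args = [self.inbox.popleft() for _ in range(self.n_in)]
        self.outbox.append(self.fn(*args))
        return True

# A two-actor pipeline (hypothetical 'scale' and 'offset' components),
# connected by moving tokens from one FIFO to the next.
scale = Actor(lambda t: 2 * t, 1)
offset = Actor(lambda t: t + 1, 1)
for token in [1, 2, 3]:
    scale.inbox.append(token)
while scale.fire():
    offset.inbox.append(scale.outbox.popleft())
while offset.fire():
    pass
result = list(offset.outbox)   # each token scaled then offset: [3, 5, 7]
```

Because each actor encapsulates its own state and communicates only through FIFOs, a scheduler (or hardware synthesis tool) is free to run independent actors concurrently, which is the property the RVC framework exploits for both software and hardware code generation.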

    The DS-Pnet modeling formalism for cyber-physical system development

    This work presents the DS-Pnet modeling formalism (Dataflow, Signals and Petri nets), designed for the development of cyber-physical systems, combining the characteristics of Petri nets and dataflows to support the modeling of mixed systems containing both reactive parts and data processing operations. Inheriting the features of the parent IOPT Petri net class, including an external interface composed of input and output signals and events, the addition of dataflow operations brings enhanced modeling capabilities to specify mathematical data transformations and graphically express the dependencies between signals. Data-centric systems that do not require reactive controllers are designed using pure dataflow models. Component-based model composition enables reusing existing components, creating libraries of previously tested components, and hierarchically decomposing complex systems into smaller sub-systems. A precise execution semantics was defined, considering the relationship between dataflow and Petri net nodes, providing an abstraction to define the interface between reactive controllers and input and output signals, including analog sensors and actuators. The new formalism is supported by the IOPT-Flow Web-based tool framework, offering tools to design and edit models, simulate model execution in the Web browser, and perform model checking, plus software/hardware automatic code generation tools to implement controllers running on embedded devices (C, VHDL and JavaScript). A new communication protocol was created to permit the automatic implementation of distributed cyber-physical systems composed of networks of remote components communicating over the Internet. The editor tool connects directly to remote embedded devices running DS-Pnet models and may import remote components into new models, helping to simplify the creation of distributed cyber-physical applications, where the communication between distributed components is specified just by drawing arcs.
Several application examples were designed to validate the proposed formalism and the associated framework, ranging from hardware solutions and industrial applications to distributed software applications.

    Engineering Education and Research Using MATLAB

    MATLAB is a software package used primarily in the field of engineering for signal processing, numerical data analysis, modeling, programming, simulation, and computer graphic visualization. In the last few years, it has become widely accepted as an efficient tool, and, therefore, its use has significantly increased in scientific communities and academic institutions. This book consists of 20 chapters presenting research works using MATLAB tools. Chapters include techniques for programming and developing Graphical User Interfaces (GUIs), dynamic systems, electric machines, signal and image processing, power electronics, mixed signal circuits, genetic programming, digital watermarking, control systems, time-series regression modeling, and artificial neural networks

    Advanced photonic and electronic systems - WILGA 2017

    The WILGA annual symposium on advanced photonic and electronic systems has been organized by young scientists for young scientists for two decades. It traditionally gathers more than 350 young researchers and their tutors. Ph.D. students and graduates present their recent achievements during well-attended oral sessions. Wilga is a very good digest of Ph.D. work carried out at technical universities in electronics and photonics, as well as information sciences, throughout Poland and some neighboring countries. Publishing patronage over Wilga is kept by the Elektronika technical journal (SEP), IJET (PAN) and Proceedings of SPIE; the latter, a worldwide editorial series, publishes annually more than 200 papers from Wilga. Wilga 2017 was the XL edition of this meeting. The following topical tracks were distinguished: photonics, electronics, information technologies and system research. This article is a digest of some chosen works presented during the Wilga 2017 symposium. WILGA 2017 works were published in Proc. SPIE vol. 10445.

    Study and design of the readout unit module for the LHCb experiment
