
    Plant Process Emulator

    The purpose of this project is to provide VCU engineering students with a training system that simulates the use of industrial automation systems. Students need a wide variety of training systems to adequately build their knowledge of the fundamentals of PLC systems. Several companies sell training setups that teach students about Proportional-Integral-Derivative (PID) control and mechanical systems, but those systems cost too much (~$20,000+) for a small university or trade school to fund. The training system that was built gives the student real-world control and monitoring of physical plant attributes such as fluid level and temperature. A Programmable Logic Controller (PLC) is used to instantiate the PIDs for both level control and temperature control. A level transmitter and a thermocouple supply the process variables, while solenoid valves and a heater act as the manipulated variables that adjust the level and temperature, respectively. All components of the system work together to simulate a physical plant process. The demonstrations run on this trainer show how the hardware and software cooperate to give the operator control of the system. The goal is to expose students to different uses of PLCs and PIDs.
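    The level loop described above is a standard PID feedback loop: the level transmitter supplies the process variable and the controller output drives a solenoid valve. Below is a minimal Python sketch of such a discrete PID loop; the gains, sample time and 0-100% valve range are illustrative assumptions, not values from the trainer.

    # Minimal discrete PID loop of the kind the trainer's PLC instantiates.
    # All gains, limits and I/O values here are hypothetical.
    class PID:
        def __init__(self, kp, ki, kd, out_min=0.0, out_max=100.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.out_min, self.out_max = out_min, out_max
            self.integral = 0.0
            self.prev_error = None

        def update(self, setpoint, measurement, dt):
            error = setpoint - measurement
            self.integral += error * dt
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            out = self.kp * error + self.ki * self.integral + self.kd * derivative
            return max(self.out_min, min(self.out_max, out))  # clamp to valve range

    # Example: compute a solenoid-valve command (0-100 % open) for level control.
    level_pid = PID(kp=2.0, ki=0.5, kd=0.1)
    valve_cmd = level_pid.update(setpoint=50.0, measurement=42.3, dt=0.1)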

    Dynamic modelling, validation and analysis of coal-fired subcritical power plant

    Coal-fired power plants are the main source of global electricity. As environmental regulations tighten, there is a need to improve the design, operation and control of existing and newly built coal-fired power plants. Modelling and simulation is identified as an economical, safe and reliable approach to reach this objective. In this study, a detailed dynamic model of a 500 MWe coal-fired subcritical power plant was developed in gPROMS from first principles. Model validations were performed against actual plant measurements, and the relative error was less than 5%. The model predicts plant performance reasonably well from the 70% load level to full load. Our analysis showed that implementing load changes through ramping introduces fewer process disturbances than a step change. The model can be useful for operator training and process troubleshooting, among other applications.
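    As a rough illustration of the ramping-versus-step comparison, the NumPy sketch below builds the two load-change trajectories. The ramp rate and time horizon are assumed values, not parameters of the gPROMS model.

    import numpy as np

    # Step versus ramp from 70 % to 100 % load; all numbers are assumptions.
    t = np.linspace(0.0, 600.0, 601)                # seconds
    step = np.where(t < 60.0, 0.70, 1.00)           # instantaneous jump at t = 60 s
    ramp_rate = 0.03 / 60.0                         # assumed 3 % load per minute
    ramp = np.clip(0.70 + ramp_rate * np.maximum(t - 60.0, 0.0), 0.70, 1.00)
    # The ramp bounds the rate of change of the load demand, so the boiler and
    # turbine see a gentler disturbance than under the step input.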

    NIO: Lightweight neural operator-based architecture for video frame interpolation

    We present NIO (Neural Interpolation Operator), a lightweight, efficient neural operator-based architecture for video frame interpolation. Current deep learning based methods rely on local convolutions for feature learning and require large amounts of training on comprehensive datasets. Furthermore, transformer-based architectures are large and need dedicated GPUs for training. In contrast, NIO, our neural operator-based approach, learns features in the frames by translating the image matrix into Fourier space using the Fast Fourier Transform (FFT). The model performs global convolution, making it discretization invariant. We show that NIO produces visually smooth and accurate results and converges in fewer epochs than state-of-the-art approaches. To evaluate the visual quality of our interpolated frames, we compute the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) between the generated frame and the ground-truth frame. We report the quantitative performance of our model on the Vimeo-90K, DAVIS, UCF101 and DISFA+ datasets.
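    The FFT-based global convolution NIO relies on can be sketched as a spectral convolution layer in the style of Fourier neural operators: transform to Fourier space, multiply a truncated set of modes by learned complex weights, and transform back. The PyTorch layer below is a generic sketch of that idea; the channel counts and number of retained modes are illustrative, and it does not reproduce the paper's exact architecture.

    import torch

    class SpectralConv2d(torch.nn.Module):
        """Global convolution as pointwise multiplication in Fourier space."""
        def __init__(self, in_ch, out_ch, modes):
            super().__init__()
            self.modes = modes  # number of low-frequency modes kept per axis
            scale = 1.0 / (in_ch * out_ch)
            self.weight = torch.nn.Parameter(
                scale * torch.randn(in_ch, out_ch, modes, modes, dtype=torch.cfloat))

        def forward(self, x):                      # x: (batch, in_ch, H, W)
            x_ft = torch.fft.rfft2(x)              # to Fourier space
            out_ft = torch.zeros(x.shape[0], self.weight.shape[1], x.shape[2],
                                 x.shape[3] // 2 + 1, dtype=torch.cfloat,
                                 device=x.device)
            m = self.modes
            out_ft[:, :, :m, :m] = torch.einsum(   # learned mode-wise mixing
                "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.weight)
            return torch.fft.irfft2(out_ft, s=x.shape[-2:])  # back to pixels

    y = SpectralConv2d(in_ch=3, out_ch=8, modes=12)(torch.randn(1, 3, 64, 64))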

    Self-Organized Operational Neural Networks with Generative Neurons

    Operational Neural Networks (ONNs) have recently been proposed to address well-known limitations and drawbacks of conventional Convolutional Neural Networks (CNNs), such as network homogeneity with the sole linear neuron model. ONNs are heterogeneous networks with a generalized neuron model that can encapsulate any set of non-linear operators to boost diversity and to learn highly complex and multi-modal functions or spaces with minimal network complexity and training data. However, the Greedy Iterative Search (GIS) method, which is used to find optimal operators in ONNs, requires many training sessions to find a single operator set per layer. This is not only computationally demanding, but network heterogeneity is also limited, since the same set of operators is then used for all neurons in each layer. Moreover, the performance of ONNs depends directly on the operator set library used, which introduces a risk of performance degradation, especially when the optimal operator set required for a particular task is missing from the library. To address these issues and achieve an ultimate level of heterogeneity that boosts network diversity along with computational efficiency, in this study we propose Self-organized ONNs (Self-ONNs) with generative neurons that can adapt (optimize) the nodal operator of each connection during training. Self-ONNs can therefore reach the utmost heterogeneity level required by the learning problem at hand. Moreover, this ability removes the need for a fixed operator set library and for a prior operator search within it to find the best possible set of operators. We further formulate the training method to back-propagate the error through the operational layers of Self-ONNs.
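    One common realization of a generative neuron approximates the nodal operator of each connection with a truncated power (Maclaurin-type) series whose coefficients are learned by back-propagation. The PyTorch sketch below assumes that form; the series order Q and the layer sizes are illustrative, not taken from the paper.

    import torch

    class SelfONNConv2d(torch.nn.Module):
        """Operational conv layer with power-series nodal operators."""
        def __init__(self, in_ch, out_ch, kernel_size, q=3):
            super().__init__()
            self.q = q  # order of the truncated series
            # One weight bank per power term, fused into a single convolution:
            self.conv = torch.nn.Conv2d(in_ch * q, out_ch, kernel_size,
                                        padding=kernel_size // 2)

        def forward(self, x):
            # Stack [x, x^2, ..., x^Q] so back-propagation can tune each
            # term's weights, letting every connection learn its own operator.
            powers = torch.cat([x ** (i + 1) for i in range(self.q)], dim=1)
            return self.conv(powers)

    layer = SelfONNConv2d(in_ch=3, out_ch=16, kernel_size=3, q=3)
    y = layer(torch.randn(1, 3, 32, 32))   # -> (1, 16, 32, 32)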

    Neural Operator Learning for Ultrasound Tomography Inversion

    Neural operator learning as a means of mapping between complex function spaces has garnered significant attention in computational science and engineering (CS&E). In this paper, we apply neural operator learning to the time-of-flight ultrasound computed tomography (USCT) problem. We learn the mapping between time-of-flight (TOF) data and the heterogeneous sound speed field, using a full-wave solver to generate the training data. This novel application of operator learning circumvents the need to solve the computationally intensive iterative inverse problem. The operator learns the non-linear mapping offline and predicts the heterogeneous sound field with a single forward pass through the model. This is the first time operator learning has been applied to ultrasound tomography, and it is a first step toward real-time prediction of soft tissue distribution for tumor identification in breast imaging.
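    The offline workflow described above amounts to ordinary supervised training on (TOF, sound-speed) pairs produced by the full-wave solver, after which inference is a single forward pass. In the PyTorch sketch below the data are random placeholders and a small fully connected network merely stands in for the actual neural operator.

    import torch

    # Placeholder pairs; in practice these come from the full-wave solver.
    tof = torch.randn(128, 1, 64, 64)      # time-of-flight data
    speed = torch.randn(128, 1, 64, 64)    # matching sound-speed fields
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(tof, speed), batch_size=16)

    model = torch.nn.Sequential(           # stand-in for the neural operator
        torch.nn.Flatten(),
        torch.nn.Linear(64 * 64, 256), torch.nn.GELU(),
        torch.nn.Linear(256, 64 * 64))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for x, y in loader:                    # offline training phase
        loss = torch.nn.functional.mse_loss(model(x).view_as(y), y)
        opt.zero_grad(); loss.backward(); opt.step()

    # Inference replaces the iterative inverse solve with one forward pass:
    speed_map = model(torch.randn(1, 1, 64, 64))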

    On the importance of cyber-security training for multi-vector energy distribution system operators

    Multi-vector Energy Distribution Systems (EDS) are increasingly connected to provide new services to consumers and Distribution Network Operators (DNO). This exponential growth in connectivity, while beneficial, tremendously increases the attack surface of critical infrastructures, demonstrating a clear need for energy-operator cyber-security training. This paper highlights the cyber-security challenges faced by EDS operators, as well as the impact a successful cyber-attack could have on the grid. Finally, training needs are contextualised through cyber-attack examples.

    Asynchronous Optimization Methods for Efficient Training of Deep Neural Networks with Guarantees

    Asynchronous distributed algorithms are a popular way to reduce synchronization costs in large-scale optimization, in particular for neural network training. However, for nonsmooth and nonconvex objectives, few convergence guarantees exist beyond cases where closed-form proximal operator solutions are available. As most popular contemporary deep neural networks lead to nonsmooth and nonconvex objectives, there is now a pressing need for such guarantees. In this paper, we analyze for the first time the convergence of stochastic asynchronous optimization for this general class of objectives. In particular, we focus on stochastic subgradient methods allowing for block variable partitioning, where a shared-memory model is asynchronously updated by concurrent processes. To this end, we first introduce a probabilistic model that captures key features of real asynchronous scheduling between concurrent processes; under this model, we establish convergence with probability one to an invariant set for stochastic subgradient methods with momentum. From a practical perspective, one issue with this family of methods is that it is not efficiently supported by machine learning frameworks, which mostly focus on distributed data-parallel strategies. To address this, we propose a new implementation strategy for shared-memory training of deep neural networks, whereby concurrent parameter servers train a partitioned but shared model in single- and multi-GPU settings. Based on this implementation, we achieve an average 1.2x speed-up over state-of-the-art training methods on popular image classification tasks without compromising accuracy.
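    A toy version of the block-partitioned shared-memory scheme can be sketched with threads that each own one block of coordinates and update the shared parameter vector without locks (Hogwild-style). The sketch below uses a simple l1-regression objective so the subgradient is explicit; the objective, step sizes and block count are assumptions, and it does not show the paper's parameter-server implementation.

    import threading
    import numpy as np

    A = np.random.default_rng(0).normal(size=(256, 32))
    b = np.random.default_rng(1).normal(size=256)
    w = np.zeros(32)                             # shared parameters, no locks
    blocks = np.array_split(np.arange(32), 4)    # one coordinate block per worker

    def worker(block, seed, steps=5000, lr=1e-3, momentum=0.9):
        rng = np.random.default_rng(seed)
        v = np.zeros(block.size)                 # per-worker momentum buffer
        for _ in range(steps):
            i = rng.integers(len(b))             # stochastic sample
            # Subgradient of |a_i . w - b_i| restricted to this worker's block:
            g = np.sign(A[i] @ w - b[i]) * A[i, block]
            v = momentum * v + g
            w[block] -= lr * v                   # asynchronous in-place update

    threads = [threading.Thread(target=worker, args=(blk, k))
               for k, blk in enumerate(blocks)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("mean |residual|:", np.abs(A @ w - b).mean())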