    Noise control and utility: From regulatory network to spatial patterning

    Stochasticity (or noise) at the cellular and molecular levels has been observed extensively and appears to be a universal feature of living systems. However, how living systems perform reliable biological functions in the presence of noise remains a major mystery. Regulatory network configurations, such as their topology and timescales, have been shown to be critical in attenuating noise, and noise has also been found to facilitate cell fate decisions. Here we review major recent findings on noise attenuation through regulatory control and on the benefits of noise-induced cellular plasticity during developmental patterning, and we summarize key principles underlying noise control.
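
    As a minimal worked example of the baseline such findings are measured against (our illustration, not part of the abstract), consider the standard birth-death model of gene expression:

```latex
% Baseline noise in a birth--death model of gene expression:
% production at constant rate k, degradation at rate \gamma per molecule.
% At steady state the copy number n is Poisson distributed, so
\[
  \langle n \rangle = \frac{k}{\gamma},
  \qquad
  \mathrm{CV}^2 = \frac{\sigma_n^2}{\langle n \rangle^2}
                = \frac{1}{\langle n \rangle},
\]
% i.e. relative noise shrinks as expression rises. Negative feedback
% (making k a decreasing function of n) pushes the Fano factor
% \sigma_n^2 / \langle n \rangle below the Poisson value of 1.
```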

    Study of a unified hardware and software fault-tolerant architecture

    A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions to several basic problems associated with N-Version software are proposed and implemented on the testbed, including a confidence voter that resolves coincident errors in N-Version software. A reliability model of N-Version software, based on the recent understanding of software failure mechanisms, is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.
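
    The abstract does not spell out the confidence voter's algorithm; the sketch below shows one plausible form of confidence-weighted majority voting over N-Version outputs. The function name, the confidence inputs, and the tie-break rule are our assumptions, not the paper's design.

```python
from collections import Counter

def confidence_vote(outputs, confidences):
    """Vote among N software versions' outputs (illustrative sketch).

    outputs     -- list of N results, one per version
    confidences -- list of N self-reported confidence scores in [0, 1]

    A plain majority wins outright; when versions disagree with no
    majority (coincident errors split the vote), the candidate with
    the highest summed confidence is chosen instead.
    """
    tally = Counter(outputs)
    winner, count = tally.most_common(1)[0]
    if count > len(outputs) // 2:
        return winner  # clear majority
    # No majority: weight each candidate by the confidence of its voters.
    weight = {cand: sum(c for o, c in zip(outputs, confidences) if o == cand)
              for cand in tally}
    return max(weight, key=weight.get)

# Example: 4 versions split 2-2; summed confidence breaks the tie.
print(confidence_vote([1.0, 1.0, 1.2, 1.2], [0.9, 0.8, 0.4, 0.5]))  # -> 1.0
```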

    Hydrological modelling with weather radar data in urban drainage systems

    The management of large-scale strategic urban combined drainage systems is becoming increasingly dependent upon weather radar systems, which provide quantitative precipitation information to improve the overall efficiency of a system's operational performance. There has therefore been a growing requirement for more detailed knowledge of radar rainfall data accuracy and for a mathematical rainfall-runoff model that can be used to analyse and control a system in real time. Within this context, this thesis examines several important factors that determine the accuracy of radar rainfall estimates, including signal attenuation, temporal and spatial data resolution, and rainfall quantisation schemes. To facilitate real-time flow simulation and forecasting, a Conceptually Parametrised Transfer Function (CPTF) model has been developed based on Dynamic Linear Reservoir theory. The model is structurally simple and operationally reliable; it can be easily identified and robustly updated following a pulse-response-to-CPTF procedure in which Genetic Algorithms play a key role. Using the model, the accuracy of areal rainfall estimates obtained by the Hameldon Hill radar has been assessed, first by comparing the radar rainfall estimates with "ground truth" and then by comparing the simulated hydrographs with actual flow observations. Finally, a case study was conducted using radar rainfall data to highlight the potential benefit of real-time control for the strategic urban drainage system on the Fylde Coast. The major achievements documented in this thesis are: 1) a rule for determining an appropriate input data resolution for hydrological models; 2) a general probability density function for describing sampled radar rainfall intensities; 3) an efficient quantising law (β-Law) and an associated adaptive rainfall quantisation scheme; 4) three general conceptual pulse-response functions developed from Dynamic Linear Reservoir theory; 5) the CPTF model; and 6) a case study on the potential benefit of real-time control in the Fylde urban drainage system.
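
    To illustrate the model family the CPTF belongs to, here is a minimal sketch of a single Dynamic Linear Reservoir written as a first-order discrete transfer function from rainfall to runoff; the thesis' actual conceptual parametrisation and Genetic Algorithm identification are not reproduced, and all values below are placeholders.

```python
import numpy as np

def linear_reservoir(rain, k, dt=1.0):
    """Single linear reservoir: storage S, outflow q = S / k.

    Exact discretisation of dS/dt = r - S/k over a timestep dt
    (rainfall held constant within the step) gives
        q_t = a * q_{t-1} + (1 - a) * r_t,   a = exp(-dt / k),
    i.e. a first-order transfer function from rainfall to runoff.
    """
    a = np.exp(-dt / k)
    q = np.zeros_like(rain, dtype=float)
    for t in range(1, len(rain)):
        q[t] = a * q[t - 1] + (1 - a) * rain[t]
    return q

# Example: attenuated, delayed response to a three-step rainfall pulse.
rain = np.array([0, 5, 5, 5, 0, 0, 0, 0, 0, 0], dtype=float)
print(np.round(linear_reservoir(rain, k=2.0), 2))
```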

    Deep Learning Based Power System Stability Assessment for Reduced WECC System

    Power system stability is the ability of a power system, for a given initial operating condition, to reach a new operating condition with most of the system variables bounded within normal ranges after being subjected to a short or long disturbance. Traditional power system stability assessment relies mainly on time-domain simulation, which is very time-consuming and only appropriate for offline assessment. Nowadays, with the increasing penetration of inverter-based renewables, large-scale distributed energy storage integration, and the operational uncertainty brought by weather and the electricity market, system dynamics and operating conditions vary more dramatically, and traditional stability assessment based on scheduling may not cover all real-time dispatch scenarios; online assessment and self-awareness are therefore becoming increasingly important and urgent for power system dynamic security. With the development of fast computational resources and the growing availability of online datasets, machine learning techniques have recently been developed and applied in many areas and could potentially be applied to power systems. In this dissertation, a deep learning-based power system stability assessment is proposed. Its accurate and fast assessment of power system dynamic security is useful in many settings, including day-ahead scheduling, real-time operation, and long-term planning. The simplified Western Electricity Coordinating Council (WECC) 240-bus system with renewable penetration up to 49.2% is used as the study system. Dataset generation, model training, and error analysis are demonstrated, and the results show that the proposed deep learning-based method can predict power system stability accurately and quickly. Compared with the traditional time-domain simulation method, its near-millisecond prediction makes online assessment and self-awareness possible in future power system applications.
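
    The dissertation's network architecture is not given in the abstract; the following is only a minimal sketch of the general approach, a small classifier that maps a pre-fault operating-condition feature vector to a stable/unstable label so that inference replaces a full time-domain simulation. The layer sizes and the feature count are placeholders.

```python
import torch
import torch.nn as nn

class StabilityNet(nn.Module):
    """Illustrative MLP: operating-condition features -> stability label."""

    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits: [unstable, stable]
        )

    def forward(self, x):
        return self.net(x)

model = StabilityNet(n_features=480)   # e.g. 2 features x 240 buses
x = torch.randn(1, 480)                # one (random) operating condition
with torch.no_grad():
    label = model(x).argmax(dim=1)     # near-millisecond inference
print("stable" if label.item() == 1 else "unstable")
```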

    Forecasting and Prediction of Solar Energy Generation using Machine Learning Techniques

    The growing demand for renewable energy sources, especially wind and solar power, has increased the need for precise forecasts of energy production. Machine learning (ML) techniques offer a powerful way to address this problem, and this thesis uses ML to estimate solar energy production, with the goal of improving decision-making processes through the analysis of large datasets and the generation of accurate forecasts. Solar meteorological data is analyzed methodically using regression, time series analysis, and deep learning algorithms. The study demonstrates how well ML-based forecasting anticipates future solar energy output; quantitative evaluations show excellent prediction accuracy and validate the techniques used. For example, a key observation was that the Multiple Linear Regression method demonstrates reasonable predictive ability, with moderate Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) values yet slightly lower R-squared values than the other methods. The study also provides a reflective analysis of the significance of the results, the reliability of the methodology, and the generalizability of the findings, as well as a summary of its limitations and recommendations for further study. The conclusion draws implications for broader applications across energy sectors and emphasizes the critical role that ML-based forecasting plays in predicting solar energy generation. By utilizing renewable energy sources such as solar power, this approach aims to lessen dependency on non-renewable resources and pave the way for a more sustainable future.
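
    A sketch of the Multiple Linear Regression baseline and the MAE/RMSE/R-squared evaluation mentioned above; the synthetic data and feature names are our placeholders, not the thesis' dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Placeholder data: predict power output from meteorological inputs
# (e.g. irradiance, temperature, humidity).
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 3))
y = 5.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(0, 0.2, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# The three metrics the study uses to compare forecasting methods.
print(f"MAE  = {mean_absolute_error(y_te, pred):.3f}")
print(f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.3f}")
print(f"R^2  = {r2_score(y_te, pred):.3f}")
```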

    Developing Executable Digital Models with Model-Based Systems Engineering – An Unmanned Aerial Vehicle Surveillance Scenario Example

    The increasing complexity of modern systems causes inconsistencies in the iterative exchange loops of the system design process and, in turn, demands higher-quality system organization and optimization techniques. A recent transition from document-centric systems engineering to Model-Based Systems Engineering (MBSE) is being documented in the literature across various industries to address these issues. This study investigates how MBSE can be used as a starting point in developing digital twins (DT). Specifically, the adoption of MBSE for realizing DT has been investigated through literature reviews that identify the most prevalent methodologies and tools used to enhance and validate existing and future systems. An MBSE-enabled template for virtual model development was executed to create executable models, which can serve as a research testbed for DT and for system and system-of-systems optimization. The study explores the feasibility of this template by creating and simulating a surveillance system that monitors and reports on the health status and performance of an armored fighting vehicle via an Unmanned Aerial Vehicle (UAV). The objective of the template is to demonstrate how executable SysML diagrams establish a collaborative working environment between multiple platforms to better convey system behavior, modifications, and analytics to various system stakeholders.

    Combat Identification Modeling Using Neural Network Techniques

    The purposes of this research were (1) to validate Kim's (2007) simulation method by applying analytic methods and (2) to compare two different Robust Parameter Design (RPD) methods on three measures of performance (label accuracy for enemy, friendly, and clutter). Considering the features of Combat Identification (CID), the input variables were defined as two controllable factors (the threshold combination of the detector and the classifier) and three uncontrollable factors (map size, number of enemies, and number of friendlies). The first set of experiments examined Kim's method using analytical methods. To create the response variables, Kim's method uses Monte Carlo simulation; the output results showed no difference between the simulation and the analytic method. The second set of experiments compared the measures of performance between the standard RPD used by Kim and a new method using Artificial Neural Networks (ANNs). To find optimal combinations of detection and classification thresholds, Kim's model uses regression with a combined array design, whereas the ANN method uses an ANN with a crossed array design. For label accuracy on enemies, Kim's solution showed a higher expected value; however, it also showed a higher variance, and the model's residuals were higher for Kim's model.
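
    A minimal sketch of the crossed array idea underlying the ANN method, with a placeholder response surface standing in for the actual CID simulation: each candidate (detector, classifier) threshold pair is evaluated at every noise setting and scored on both mean accuracy and noise-induced variance. All functions and values here are our assumptions.

```python
import itertools
import numpy as np

def label_accuracy(det_thr, cls_thr, map_size, n_enemy, n_friendly):
    """Placeholder response surface standing in for the CID simulation."""
    crowding = (n_enemy + n_friendly) / map_size
    return 1.0 / (1.0 + abs(det_thr - 0.5) + abs(cls_thr - 0.6) + crowding)

# Crossed array: every controllable setting (detector/classifier thresholds)
# is run against every uncontrollable setting (map size, enemy/friendly counts).
controls = list(itertools.product([0.3, 0.5, 0.7], [0.4, 0.6, 0.8]))
noises = list(itertools.product([100, 400], [5, 20], [5, 20]))

def robustness_score(control):
    ys = np.array([label_accuracy(*control, *noise) for noise in noises])
    return ys.mean() - ys.var()  # reward accuracy, penalize noise sensitivity

best = max(controls, key=robustness_score)
print("robust (detector, classifier) thresholds:", best)
```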

    Efficient techniques for soft tissue modeling and simulation

    Performing realistic deformation simulations in real time is a challenging problem in computer graphics. Among the numerous proposed methods, including Finite Element Modeling and ChainMail, we have implemented a mass spring system because of its acceptable accuracy and speed. Mass spring systems have, however, some drawbacks, such as the difficulty of determining simulation coefficients and their iterative nature. Given the correct parameters, mass spring systems can accurately simulate tissue deformations, but choosing parameters that capture nonlinear deformation behavior is extremely difficult. Since most applications require a large number of elements (i.e., points and springs) in the modeling process, it is extremely difficult to reach real-time performance with an iterative method. We have developed a new parameter identification method based on neural networks. The structure of the mass spring system is modified and neural networks are integrated into this structure. The input space consists of changes in spring lengths and velocities, while the "teacher" signal is chosen as the total spring force, expressed in terms of positional changes and applied external forces. The neural networks are trained to learn the nonlinear tissue characteristics represented by spring stiffness and damping in the mass spring algorithm. The learning algorithm is further enhanced by an adaptive learning rate developed particularly for mass spring systems. To avoid the iterative approach in deformation simulations, we have developed a new deformation algorithm. This algorithm defines the relationships between points and springs and specifies a set of rules on spring movements and deformations. These rules result in a deformation surface, called the search space. The deformation algorithm then finds the deformed points and springs in the search space with the help of the defined rules. The algorithm also sets rules on each element (i.e., triangle or tetrahedron) so that elements do not pass through each other. The new algorithm is considerably faster than the original mass spring algorithm and opens up various deformation applications. We have used mass spring systems and the developed method in the simulation of craniofacial surgery. For this purpose, a patient-specific head model was generated from MRI data by applying medical image processing tools such as filtering and segmentation, and a polygonal representation of the model was obtained using a surface generation algorithm. Prism volume elements were generated between the skin and bone surfaces so that different tissue layers are included in the head model. Both methods produce plausible results, as verified by surgeons.
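
    For readers unfamiliar with the underlying simulation loop, here is a minimal sketch of one mass spring integration step (semi-implicit Euler). In the thesis' method, the stiffness and damping values would come from the trained neural networks; the constants below are our placeholders to keep the sketch self-contained.

```python
import numpy as np

def mass_spring_step(pos, vel, springs, masses, f_ext, dt=1e-3):
    """One semi-implicit Euler step of a mass spring system.

    pos, vel -- (N, 3) point positions and velocities
    springs  -- list of (i, j, rest_len, stiffness, damping)
    masses   -- (N,) point masses
    f_ext    -- (N, 3) external forces (gravity, tool contact, ...)
    """
    forces = f_ext.copy()
    for i, j, rest_len, k, c in springs:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        direction = d / length
        # Hooke spring force plus damping along the spring axis.
        f = (k * (length - rest_len)
             + c * np.dot(vel[j] - vel[i], direction)) * direction
        forces[i] += f
        forces[j] -= f
    vel = vel + dt * forces / masses[:, None]
    return pos + dt * vel, vel

# Two points joined by one spring, stretched past its rest length of 1.0.
pos = np.array([[0.0, 0, 0], [1.5, 0, 0]])
vel = np.zeros_like(pos)
springs = [(0, 1, 1.0, 50.0, 0.5)]
pos, vel = mass_spring_step(pos, vel, springs, np.array([1.0, 1.0]),
                            np.zeros_like(pos))
print(pos)  # the points move back toward each other
```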