
    Phased mission reliability analysis of unmanned ship systems

    With the development of unmanned ships, their use in production is becoming increasingly common. However, unmanned ships have long work cycles and operate in complex environments, so calculating the phased mission reliability of an unmanned ship remains very difficult. We analyze unmanned ship phased mission reliability based on the binary decision diagram. Moreover, redundancy is used as the unmanned ship reliability optimization scheme. Considering resource limitations and the capacity of the unmanned ship, a redundancy allocation scheme is established and solved with a marginal optimization algorithm. Finally, a case study is presented to analyze the effectiveness and practicality of the proposed method.
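The marginal optimization mentioned above can be sketched as a greedy allocation: repeatedly add one redundant unit wherever the reliability gain per unit cost is largest, until the budget is exhausted. This is a minimal illustration, assuming a series system of parallel-redundant subsystems; the unit reliabilities, costs, and budget are invented, and the paper's actual algorithm may differ:

```python
from math import prod

def system_reliability(r, n):
    # Series system of subsystems, each with n[i] identical parallel units
    # of unit reliability r[i]: R = prod_i (1 - (1 - r_i)^n_i)
    return prod(1 - (1 - ri) ** ni for ri, ni in zip(r, n))

def marginal_allocation(r, cost, budget):
    """Greedily add redundancy where reliability gain per unit cost is largest."""
    n = [1] * len(r)            # start with one unit per subsystem
    spent = 0.0
    while True:
        base = system_reliability(r, n)
        best, best_gain = None, 0.0
        for i, ci in enumerate(cost):
            if spent + ci > budget:
                continue        # adding here would exceed the budget
            n[i] += 1
            gain = (system_reliability(r, n) - base) / ci
            n[i] -= 1
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:        # no affordable improvement remains
            return n, base
        n[best] += 1
        spent += cost[best]
```

For example, `marginal_allocation([0.9, 0.8], [2.0, 3.0], budget=8.0)` allocates extra units to the weaker, costlier subsystem first because its marginal gain per unit cost is initially higher.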

    A GUIDED SIMULATION METHODOLOGY FOR DYNAMIC PROBABILISTIC RISK ASSESSMENT OF COMPLEX SYSTEMS

    Probabilistic risk assessment (PRA) is a systematic process of examining how engineered systems work to ensure safety. With the growth in the size of dynamic systems and the complexity of the interactions between hardware, software, and humans, it is extremely difficult to enumerate the risky scenarios with traditional PRA methods. Over the past 15 years, a host of dynamic PRA (DPRA) methods have been proposed as supplemental tools to traditional PRA for dealing with complex dynamic systems. This dissertation proposes a new dynamic probabilistic risk assessment framework employing a new exploration strategy: engineering knowledge of the system is used explicitly to guide the simulation toward higher efficiency and accuracy. This knowledge is captured in a "Planner", which generates plans as a high-level map to guide the simulation, while a scheduler guides the simulation by controlling the timing and occurrence of random events. During the simulation, possible random events are proposed to the scheduler at branch points, and the scheduler decides which events are simulated, favoring events with higher value. The value of a proposed event depends on the information gain from exploring that scenario, measured by information entropy, and on an importance factor based on engineering judgment. Simulation results are recorded and grouped for later study; the planner may "learn" from these results and update the plan to guide further simulation. SIMPRA is the software package that implements the new methodology. It provides a friendly user interface and a rich DPRA library to aid construction of the simulation model. Engineering knowledge can be input into the Planner, which generates a plan automatically; the scheduler then guides the simulation according to that plan. The simulation generates many accident event sequences and estimates of the end-state probabilities.
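The abstract does not give the scheduler's exact value function; a minimal sketch, assuming the value simply multiplies the entropy-based information gain by the engineering importance factor (the event names and numbers below are invented for illustration):

```python
from math import log2

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli branch with occurrence probability p."""
    if p in (0.0, 1.0):
        return 0.0                       # outcome is certain: nothing to learn
    return -p * log2(p) - (1 - p) * log2(1 - p)

def event_value(p_occur, importance):
    # Assumed combination: information gain weighted by engineering importance
    return entropy(p_occur) * importance

def schedule(branch_events):
    """Pick the proposed event with the highest value at a branch point."""
    return max(branch_events, key=lambda e: event_value(e["p"], e["importance"]))
```

With this weighting, a 50/50 branch (maximum entropy) can outrank a rare but moderately important event, steering the simulation toward scenarios that are both informative and consequential.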

    Transit Detection in the MEarth Survey of Nearby M Dwarfs: Bridging the Clean-First, Search-Later Divide

    In the effort to characterize the masses, radii, and atmospheres of potentially habitable exoplanets, there is an urgent need to find examples of such planets transiting nearby M dwarfs. The MEarth Project is an ongoing effort to do so, as a ground-based photometric survey designed to detect exoplanets as small as 2 Earth radii transiting mid-to-late M dwarfs within 33 pc of the Sun. Unfortunately, identifying transits of such planets in photometric monitoring is complicated both by the intrinsic stellar variability that is common among these stars and by the nocturnal cadence, atmospheric variations, and instrumental systematics that often plague Earth-bound observatories. Here we summarize the properties of MEarth data gathered so far, and we present a new framework to detect shallow exoplanet transits in wiggly and irregularly-spaced light curves. In contrast to previous methods that clean trends from light curves before searching for transits, this framework assesses the significance of individual transits while simultaneously modeling variability, systematics, and the photometric quality of individual nights. Our Method for Including Starspots and Systematics in the Marginalized Probability of a Lone Eclipse (MISS MarPLE) uses a computationally efficient semi-Bayesian approach to explore the vast probability space spanned by the many parameters of this model, naturally incorporating the uncertainties in these parameters into its evaluation of candidate events. We show how to combine individual transits processed by MISS MarPLE into periodic transiting planet candidates and compare our results to the popular Box-fitting Least Squares (BLS) method with simulations. By applying MISS MarPLE to observations from the MEarth Project, we demonstrate the utility of this framework for robustly assessing the false alarm probability of transit signals in real data. [slightly abridged] Accepted to the Astronomical Journal; 21 pages, 12 figures.
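The box-shaped transit search that BLS performs can be illustrated by a brute-force scan over trial periods, epochs, and durations, scoring the depth of an in-transit box against the out-of-transit scatter. This is a toy sketch for intuition only, not the MISS MarPLE method or a production BLS implementation:

```python
import numpy as np

def box_search(t, flux, durations, periods):
    """Brute-force box search: for each trial (period, epoch, duration),
    score the depth of the in-transit box against out-of-transit noise."""
    best = (0.0, None)                       # (score, best-fit parameters)
    for P in periods:
        phase = t % P                        # fold the light curve
        for dur in durations:
            for t0 in np.arange(0.0, P, dur):
                in_tr = (phase >= t0) & (phase < t0 + dur)
                if in_tr.sum() < 3 or (~in_tr).sum() < 3:
                    continue                 # too few points to compare
                depth = flux[~in_tr].mean() - flux[in_tr].mean()
                noise = flux[~in_tr].std() / np.sqrt(in_tr.sum())
                score = depth / noise        # crude signal-to-noise of the box
                if score > best[0]:
                    best = (score, {"period": P, "t0": t0,
                                    "duration": dur, "depth": depth})
    return best
```

On a synthetic light curve with a 1% transit injected every 2.0 days, the scan recovers the correct trial period because only the true fold concentrates the dimmed points into a single box.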

    Enhancing the performance of automated guided vehicles through reliability, operation and maintenance assessment

    Automated guided vehicles (AGVs), unmanned mobile robots that travel along fixed routes or are directed by laser navigation systems, are increasingly used in modern society to improve efficiency and lower the cost of production. A fleet of AGVs operating together forms a fully automatic transport system, known as an AGV system. To date, their added value in efficiency improvement and cost reduction has been thoroughly explored through in-depth research on route optimisation, system layout configuration, and traffic control. However, their safe application has not received sufficient attention, although the failure of AGVs may significantly impact the operation and efficiency of the entire system. This issue is more pronounced today, particularly given that AGV systems are becoming much larger and their operating environments more complex than ever before. This motivates the research into AGV reliability, availability and maintenance issues in this thesis, which aims to answer the following four fundamental questions: (1) How could AGVs fail? (2) How is the reliability of individual AGVs in the system assessed? (3) How does a failed AGV affect the operation of the other AGVs and the performance of the whole system? (4) How can an optimal maintenance strategy for AGV systems be achieved? In order to answer these questions, a method for identifying the critical subsystems and actions of AGVs is studied first in this thesis. Then, based on the research results, mathematical models are developed in Python to simulate AGV systems and assess their performance in different scenarios. In the research of this thesis, Failure Mode, Effects and Criticality Analysis (FMECA) was adopted first to analyse the failure modes and effects of individual AGV subsystems. The interactions of these subsystems were studied by performing Fault Tree Analysis (FTA).
Then, a mathematical model was developed to simulate the operation of a single AGV with the aid of Petri Nets (PNs). Since most existing AGV systems in modern industries and warehouses consist of multiple AGVs that operate synchronously to perform specific tasks, it is necessary to investigate the interactions between different AGVs in the same system. To facilitate the research of multi-AGV systems, a model of a three-AGV system with unidirectional paths was considered. In the model, an advanced form of PN, namely the Coloured Petri Net (CPN), was used in a novel way to describe the movements of the AGVs. Thanks to the application of CPNs, not only the movements of the AGVs but also the various operation and maintenance activities of the AGV systems (for example, item delivery, corrective maintenance, periodic maintenance, etc.) can be readily simulated. This technique provides an effective tool for investigating larger-scale AGV systems. To investigate the reliability, efficiency and maintenance of dynamic AGV systems consisting of multiple single-load and multi-load AGVs travelling along different bidirectional routes in different missions, an AGV system consisting of 9 stations was simulated using the CPN methods. Moreover, the automatic recycling of failed AGVs is studied as well, in order to further reduce human participation in the operation of AGV systems. Finally, the simulation results were used to optimise the design, operation and maintenance of multi-AGV systems with consideration of their throughputs and corresponding costs. The research reported in this thesis contributes to the design, reliability, operation, and maintenance of large-scale AGV systems in the modern and rapidly changing world.
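The firing rule at the heart of a Petri net simulation is simple to sketch. The toy model below, assuming a single AGV with an idle/loaded delivery cycle, uses plain (uncoloured) tokens; a CPN additionally attaches data ("colours") to tokens, which is omitted here:

```python
class PetriNet:
    """Minimal place/transition net: a transition fires when every input
    place holds at least one token, moving tokens to its output places."""

    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:                  # consume one token per input place
            self.marking[p] -= 1
        for p in outputs:                 # produce one token per output place
            self.marking[p] = self.marking.get(p, 0) + 1
```

A one-AGV cycle is then two transitions: `load` moves the token from `idle` to `loaded`, and `deliver` moves it back, so `deliver` is only enabled after `load` has fired.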

    Deployable antenna phase A study

    Applications for large deployable antennas were re-examined, flight demonstration objectives were defined, the flight article (antenna) was preliminarily designed, and the flight program and ground development program, including the support equipment, were defined for a proposed space transportation system flight experiment to demonstrate a large (50 to 200 meter) deployable antenna system. Tasks described include: (1) performance requirements analysis; (2) system design and definition; (3) orbital operations analysis; and (4) programmatic analysis.

    Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners (Second Edition)

    Probabilistic Risk Assessment (PRA) is a comprehensive, structured, and logical analysis method aimed at identifying and assessing risks in complex technological systems for the purpose of cost-effectively improving their safety and performance. NASA's objective is to better understand and effectively manage risk, and thus more effectively ensure mission and programmatic success, and to achieve and maintain high safety standards at NASA. NASA intends to use risk assessment in its programs and projects to support optimal management decision making for the improvement of safety and program performance. In addition to using quantitative/probabilistic risk assessment to improve safety and enhance the safety decision process, NASA has incorporated quantitative risk assessment into its system safety assessment process, which until now has relied primarily on a qualitative representation of risk. Also, NASA has recently adopted the Risk-Informed Decision Making (RIDM) process [1-1] as a valuable addition to supplement existing deterministic and experience-based engineering methods and tools. Over the years, NASA has been a leader in most of the technologies it has employed in its programs. One would think that PRA should be no exception. In fact, it would be natural for NASA to be a leader in PRA because, as a technology pioneer, NASA uses risk assessment and management implicitly or explicitly on a daily basis. NASA has probabilistic safety requirements (thresholds and goals) for crew transportation system missions to the International Space Station (ISS) [1-2]. NASA intends to have probabilistic requirements for any new human spaceflight transportation system acquisition. Methods to perform risk and reliability assessment in the early 1960s originated in U.S. aerospace and missile programs. Fault tree analysis (FTA) is an example. It would have been a reasonable extrapolation to expect that NASA would also become the world leader in the application of PRA. 
That was, however, not to happen. Early in the Apollo program, estimates of the probability of a successful roundtrip human mission to the moon yielded disappointingly low (and suspect) values, and NASA was discouraged from performing further quantitative risk analyses until some two decades later, when the methods had become more refined, rigorous, and repeatable. Instead, NASA decided to rely primarily on the Hazard Analysis (HA) and Failure Modes and Effects Analysis (FMEA) methods for system safety assessment.

    Reliability Evaluation and Prediction Method with Small Samples

    How to accurately evaluate and predict the degradation state of components from small samples is a critical and practical problem. To address the problems of unknown component degradation state, difficulty in obtaining relevant environmental data, and small sample sizes in the field of reliability prediction, this paper proposes a reliability evaluation and prediction method based on the Cox model and a 1D CNN-BiLSTM model. Taking the historical fault data of six components of a typical load-haul-dump (LHD) machine as an example, a reliability evaluation method based on the Cox model for small samples is applied and comprehensively compared against reliability evaluation models such as the logistic regression (LR) model, the support vector machine (SVM) model, and the back-propagation neural network (BPNN) model. On this basis, a reliability prediction method based on a one-dimensional convolutional neural network combined with a bidirectional long short-term memory network (1D CNN-BiLSTM) is applied with the objective of minimizing the prediction error. The applicability and effectiveness of the proposed model are verified by comparison with typical time-series prediction models such as the autoregressive integrated moving average (ARIMA) model and multiple linear regression (MLR). The experimental results show that the proposed model is valuable for the development of reliability plans and for the implementation of reliability maintenance activities.
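The Cox model's appeal for small samples is that it separates a shared baseline hazard from covariate effects. A minimal sketch of evaluating reliability under a fitted model, assuming for illustration a constant (exponential) baseline hazard `h0` and given coefficients `beta` (the paper's fitted values are not reproduced here):

```python
import numpy as np

def cox_reliability(t, x, beta, h0=1e-3):
    """Reliability under a Cox proportional-hazards model with a constant
    baseline hazard h0: R(t | x) = exp(-h0 * t * exp(beta . x))."""
    risk = np.exp(np.dot(beta, x))   # relative risk from the covariates
    return np.exp(-h0 * t * risk)    # survival of the scaled exponential hazard
```

A component whose covariates raise the linear predictor `beta . x` has a proportionally higher hazard at every time, and therefore a uniformly lower reliability curve.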

    Automatic Generation of Generalized Event Sequence Diagrams for Guiding Simulation Based Dynamic Probabilistic Risk Assessment of Complex Systems

    Dynamic probabilistic risk assessment (DPRA) is a systematic and comprehensive methodology that has been used and refined over the past two decades to evaluate the risks associated with complex systems such as nuclear power plants, space missions, chemical plants, and military systems. A critical step in DPRA is generating the risk scenarios used to enumerate and assess the probability of different outcomes. The classical approach to generating risk scenarios is not, however, sufficient to deal with the complexity of the above-mentioned systems. The primary contribution of this dissertation is a new method for capturing different types of engineering knowledge and using them to automatically generate risk scenarios, presented in the form of generalized event sequence diagrams, for dynamic systems. This new method, as well as several important applications, is described in detail. The most important application is within a new framework for DPRA in which the risk simulation environment is guided to explore more interesting scenarios, such as low-probability/high-consequence scenarios. Another application considered is the use of the method to enhance the process of risk-based design.
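An event sequence diagram can be viewed as a directed graph whose root-to-end-state paths are the risk scenarios. A toy sketch of scenario enumeration over such a graph (the diagram structure and node names below are invented for illustration, not taken from the dissertation):

```python
def enumerate_scenarios(esd, node, path=()):
    """Walk an event sequence diagram, given as a dict mapping each pivotal
    node to a list of (outcome label, next node), and yield every path from
    the initiating event to an end state (a node with no branches)."""
    path = path + (node,)
    branches = esd.get(node)
    if not branches:                      # no outgoing branches: end state
        yield path
        return
    for outcome, nxt in branches:
        yield from enumerate_scenarios(esd, nxt, path + (outcome,))
```

For a diagram with an initiating event, one mitigation pivot, and two end states, the walk yields exactly the three scenarios a manual ESD reading would produce.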

    Proceedings of the Second International Mobile Satellite Conference (IMSC 1990)

    Presented here are the proceedings of the Second International Mobile Satellite Conference (IMSC), held June 17-20, 1990 in Ottawa, Canada. Topics covered include future mobile satellite communications concepts, aeronautical applications, modulation and coding, propagation and experimental systems, mobile terminal equipment, network architecture and control, regulatory and policy considerations, vehicle antennas, and speech compression.

    Decision Support Elements and Enabling Techniques to Achieve a Cyber Defence Situational Awareness Capability

    This doctoral thesis performs a detailed analysis of the decision elements necessary to improve cyber defence situational awareness, with special emphasis on the perception and comprehension of the analyst in a cybersecurity operations centre (SOC). Two different architectures based on network flow forensics of data streams (NF3) are proposed. The first architecture uses Ensemble Machine Learning techniques, while the second is a Machine Learning variant of greater algorithmic complexity (lambda-NF3) that offers a more robust defence framework against adversarial attacks. Both proposals seek to effectively automate the detection of malware and its subsequent incident management, showing satisfactory results in approximating what has been called a next-generation cognitive computing SOC (NGC2SOC). The supervision and monitoring of events for the protection of an organisation's computer networks must be accompanied by visualisation techniques. Here, the thesis addresses the generation of three-dimensional representations based on mission-oriented metrics and procedures that use an expert system based on fuzzy logic. Indeed, the state of the art shows serious deficiencies when it comes to implementing cyber defence solutions that reflect the relevance of an organisation's mission, resources, and tasks for a better-informed decision.
The research finally provides two key contributions to improve decision-making in cyber defence: a solid and complete verification and validation framework to evaluate solution parameters, and a synthetic dataset that unambiguously references the phases of a cyber-attack against the Cyber Kill Chain and MITRE ATT&CK standards.
Llopis Sánchez, S. (2023). Decision Support Elements and Enabling Techniques to Achieve a Cyber Defence Situational Awareness Capability [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/19424
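The abstract does not specify the fuzzy expert system's rule base; as a purely hypothetical illustration of the general technique, a tiny Sugeno-style rule base mapping asset criticality and attack severity (both on a 0-1 scale, chosen for this example) to a crisp mission-impact score:

```python
def trimf(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mission_impact(asset_criticality, attack_severity):
    """Toy fuzzy rule base: impact is high when a critical asset faces a
    severe attack; defuzzified as a weighted (Sugeno-style) average."""
    crit_high = trimf(asset_criticality, 0.4, 1.0, 1.6)
    sev_high = trimf(attack_severity, 0.4, 1.0, 1.6)
    crit_low, sev_low = 1.0 - crit_high, 1.0 - sev_high
    rules = [
        (min(crit_high, sev_high), 1.0),   # IF both high THEN impact high
        (min(crit_low, sev_low), 0.0),     # IF both low  THEN impact low
        (min(crit_high, sev_low), 0.5),    # mixed cases -> medium impact
        (min(crit_low, sev_high), 0.5),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * v for w, v in rules) / total if total else 0.0
```

The point of the fuzzy formulation is graceful interpolation: inputs between "low" and "high" activate several rules at once, so the impact score varies smoothly instead of jumping at hard thresholds.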