
    Functional sets with typed symbols: Framework and mixed Polynotopes for hybrid nonlinear reachability and filtering

    Verification and synthesis of Cyber-Physical Systems (CPS) remain challenging and still raise numerous open issues. In this paper, an original framework with mixed sets defined as function images of symbol type domains is first proposed. Syntax and semantics are explicitly distinguished. Then, both continuous (interval) and discrete (signed, boolean) symbol types are used to model dependencies through linear and polynomial functions, leading to mixed zonotopic and polynotopic sets. Polynotopes extend sparse polynomial zonotopes with typed symbols; they can both propagate a mixed encoding of intervals and describe the behavior of logic gates. A functional completeness result is given, as well as an inclusion method for elementary nonlinear and switching functions. A Polynotopic Kalman Filter (PKF) is then proposed as a hybrid nonlinear extension of Zonotopic Kalman Filters (ZKF). Bridges with a stochastic uncertainty paradigm are outlined. Finally, several discrete, continuous and hybrid numerical examples, including comparisons, illustrate the effectiveness of the theoretical results. Comment: 21 pages, 8 figures
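    As a rough intuition for the set representations involved, the following is a minimal sketch (not the paper's framework, which adds typed symbols and polynomial dependencies): a zonotope stored as a center and a generator matrix, its exact image under a linear map, and its interval hull.

```python
import numpy as np

# Minimal sketch, not the paper's implementation: a zonotope is the image of
# the unit hypercube [-1, 1]^p under an affine map x = c + G @ xi. Polynotopes
# generalize this by allowing polynomial maps over typed symbols.

class Zonotope:
    def __init__(self, center, generators):
        self.c = np.asarray(center, dtype=float)      # center, shape (n,)
        self.G = np.asarray(generators, dtype=float)  # generator matrix, shape (n, p)

    def affine(self, A, b=0.0):
        """Exact image under x -> A x + b (linear maps preserve zonotopes)."""
        A = np.asarray(A, dtype=float)
        return Zonotope(A @ self.c + b, A @ self.G)

    def interval_hull(self):
        """Axis-aligned bounds: center +/- sum of |generator| columns."""
        r = np.abs(self.G).sum(axis=1)
        return self.c - r, self.c + r

# Usage: propagate a boxed uncertainty through one hypothetical linear step.
Z = Zonotope([0.0, 0.0], np.eye(2) * 0.1)        # box of half-width 0.1
A = np.array([[1.0, 0.1], [0.0, 0.95]])          # hypothetical dynamics matrix
print(Z.affine(A).interval_hull())
```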

    Genesis and consequences of fracturing in the Cretaceous carbonate reservoirs of North Oman

    North Oman is underlain by the Cretaceous Natih and Shuaiba carbonates, which are important hydrocarbon reservoirs. Fracturing, especially fracture clusters, contributes significantly to reservoir performance. Fractures in the Natih are strongly affected by mechanical layering, whereas the Shuaiba is less obviously layered, except in the NW, where the Upper Shuaiba is present. The fracture network of the Lower Shuaiba in the central and SE regions of north Oman is dominated by fault-related fractures and associated corridors. Late Cretaceous deformation created NW-WNW strike-slip faults and associated fractures, and reactivated salt diapirs. Tertiary deformation (NE shortening) created abundant NE-oriented background fractures and, more importantly, NE fracture corridors that act as conduits to flow. Salt diapirs reactivated during Tertiary events produced locally more intense fracturing. Field-scale analyses of the fracture networks at Ghaba North and Lekhwair A North (both Shuaiba), based on borehole image (BHI) logs, reveal a change in dominant fracture orientation between the SE and NW parts of north Oman. The NE fracture corridors play a major role in connecting the NW-WNW fractures seen at Lekhwair A North, and the current NE-oriented maximum horizontal stress may also play a role.

    Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners (Second Edition)

    Probabilistic Risk Assessment (PRA) is a comprehensive, structured, and logical analysis method aimed at identifying and assessing risks in complex technological systems for the purpose of cost-effectively improving their safety and performance. NASA's objective is to better understand and effectively manage risk, and thus to more effectively ensure mission and programmatic success and to achieve and maintain high safety standards. NASA intends to use risk assessment in its programs and projects to support optimal management decision making for the improvement of safety and program performance. In addition to using quantitative/probabilistic risk assessment to improve safety and enhance the safety decision process, NASA has incorporated quantitative risk assessment into its system safety assessment process, which until now has relied primarily on a qualitative representation of risk. NASA has also recently adopted the Risk-Informed Decision Making (RIDM) process [1-1] to supplement existing deterministic and experience-based engineering methods and tools. Over the years, NASA has been a leader in most of the technologies it has employed in its programs, and one would expect PRA to be no exception: as a technology pioneer, NASA uses risk assessment and management, implicitly or explicitly, on a daily basis. NASA has probabilistic safety requirements (thresholds and goals) for crew transportation system missions to the International Space Station (ISS) [1-2], and intends to have probabilistic requirements for any new human spaceflight transportation system acquisition. Methods for risk and reliability assessment, such as fault tree analysis (FTA), originated in U.S. aerospace and missile programs in the early 1960s, so it would have been reasonable to expect that NASA would also become the world leader in the application of PRA. That was, however, not to happen. Early in the Apollo program, estimates of the probability of a successful round-trip human mission to the Moon yielded disappointingly low (and suspect) values, and NASA became discouraged from performing further quantitative risk analyses until some two decades later, when the methods were more refined, rigorous, and repeatable. Instead, NASA decided to rely primarily on Hazard Analysis (HA) and Failure Modes and Effects Analysis (FMEA) for system safety assessment.
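    To make the kind of quantification PRA builds on concrete, here is a minimal, hedged illustration of fault tree gate arithmetic under an independence assumption; the event names and probabilities are hypothetical and not taken from the NASA guide.

```python
# Hedged sketch of fault tree analysis (FTA) quantification: combining
# independent basic-event probabilities through AND/OR gates to estimate a
# top-event probability. Events and numbers below are hypothetical.

def and_gate(*probs):
    """All inputs must fail (independence assumed): product of probabilities."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def or_gate(*probs):
    """At least one input fails (independence assumed): 1 - product of (1 - p)."""
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical example: loss of a function occurs if a redundant pump pair
# both fail, or if a single common power supply fails.
pump_pair_fails = and_gate(1e-3, 1e-3)
top_event = or_gate(pump_pair_fails, 1e-5)
print(f"Estimated top-event probability: {top_event:.2e}")
```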

    A Method to Define Requirements for System of Systems

    The purpose of this research was to develop and apply a systems-based method for defining System of Systems (SoS) requirements using an inductive research design. Just as traditional Systems Engineering (TSE) includes a requirements definition phase, so too does System of Systems Engineering (SoSE), only with a wider, more overarching, systemic perspective. TSE addresses the design and development of a single system with generally a very specific functional purpose enabled by any number of subcomponents. SoSE, however, addresses the design and development of a large, complex system that meets a wide range of functional purposes enabled by any number of constituent systems, each of which may have its own individually managed and funded TSE effort in execution. To date, the body of prescriptive guidance on how to define SoS requirements is extremely limited, and nothing exists today that offers a methodological approach capable of being leveraged against real-world SoS problems. As a result, SoSE practitioners are left attempting to apply TSE techniques, methods, and tools to the more complex requirements problems of the SoS domain. This research addressed this gap in the systems body of knowledge by developing a method, grounded in systems principles and theory, that offers practitioners a systemic, flexible way to define unifying and measurable SoS requirements. This gives element system managers and engineers an SoS focus for their efforts while still maximizing their autonomy to achieve system-level requirements. A rigorous mixed-method research methodology, employing inductive methods with a case application, was used to develop and validate the SoS Requirements Definition Method. Two research questions provided the research focus:
    • How does the current body of knowledge inform the definition of a system-theoretic construct to define SoS requirements?
    • What results from the demonstration of the candidate construct for SoS requirements definition?
    Using Discoverers' Induction (Whewell, 1858), coupled with coding techniques from the grounded theory method (Glaser & Strauss, 1967), a systems-based method for defining SoS requirements was constructed and applied to a real-world SoS requirements definition case. The structured systemic method advances the SoSE field and shows significant promise for further development to support SoSE practitioners in the area of SoS requirements engineering.

    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design comprised 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data system performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.

    Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design

    The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. Some general issues that cut across particular software design domains include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interfaces.

    Fault-tolerant Stochastic Distributed Systems

    The present doctoral thesis discusses the design of fault-tolerant distributed systems, placing emphasis on the case where the actions of the nodes or their interactions are stochastic. The main objective is to detect and identify faults so as to improve the resilience of distributed systems to crash-type faults, as well as to detect the presence of malicious nodes seeking to exploit the network. The proposed analysis considers both malicious agents and computational solutions for detecting faults. Crash-type faults, where the affected component ceases to perform its task, are tackled in this thesis by introducing stochastic decisions into deterministic distributed algorithms. Prime importance is placed on providing guarantees and rates of convergence for the steady-state solution. The scenarios of a social network (state-dependent example) and consensus (time-dependent example) are addressed, and convergence is proved. The proposed algorithms are capable of dealing with packet drops, delays, medium access competition, and, in particular, nodes failing and/or losing network connectivity. The concept of Set-Valued Observers (SVOs) is used as a tool to detect faults in a worst-case scenario, i.e., when a malicious agent can select the most unfavorable sequence of communications and inject a signal of arbitrary magnitude. For other types of faults, the concept of Stochastic Set-Valued Observers (SSVOs) is introduced, which produce a confidence set to which the state is known to belong with at least a pre-specified probability. It is shown how, for a consensus algorithm, the structure of the problem can be exploited to reduce the computational complexity of the solution; the main result allows discarding interactions in the model that do not contribute to the produced estimates. The main drawback of using classical SVOs for fault detection is their computational burden. By resorting to a left-coprime factorization for Linear Parameter-Varying (LPV) systems, it is shown how to reduce the computational complexity. By appropriately selecting the factorization, it is possible to consider detectable systems (i.e., unobservable systems where the unobservable component is stable). Such a result plays a key role in the domain of Cyber-Physical Systems (CPSs). These techniques are complemented with event- and self-triggered sampling strategies that enable fewer sensor updates. Moreover, the same triggering mechanisms can be used to decide when to run the SVO routine or to resort to over-approximations that temporarily compromise accuracy to gain performance while maintaining the convergence characteristics of the set-valued estimates. This yields a less stringent requirement on network resources, which is vital to guarantee the applicability of SVO-based fault detection in the domain of Networked Control Systems (NCSs).
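    As a hedged illustration of the flavor of algorithm discussed (stochastic interactions over lossy links), the sketch below runs average consensus on a hypothetical four-node ring where each packet is dropped independently with some probability; it is not the thesis's algorithm, and it does not include the SVO fault-detection machinery.

```python
import numpy as np

# Minimal sketch under stated assumptions (not the thesis's algorithms):
# average consensus over an undirected graph where each link independently
# drops its packet with probability p_drop at every step. With a connected
# graph and a small enough step size, the node states still contract toward
# the initial average despite the random losses.

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # hypothetical 4-node ring topology
x = rng.normal(size=4)                      # initial node states
target = x.mean()                           # value consensus should approach
eps, p_drop = 0.25, 0.3                     # step size and packet-drop probability

for _ in range(200):
    update = np.zeros_like(x)
    for i, j in edges:
        if rng.random() < p_drop:           # lost packet: skip this exchange
            continue
        diff = x[j] - x[i]
        update[i] += eps * diff             # symmetric update preserves the average
        update[j] -= eps * diff
    x += update

print("final spread:", x.max() - x.min(), "initial average:", target)
```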