We propose a formal world model, grounded in structural causal models, which we call Structural Causal World Models (SCWMs): interpretable, structured, and machine-verifiable representations of environmental, contextual, and system-internal conditions that define the circumstances under which a system can operate safely. Unlike existing domain-specific approaches, our methodology is domain-agnostic and applicable across diverse safety-critical contexts. By unifying symbolic constraints, probabilistic uncertainty, and causal dependencies, it enables traceable hazard analysis, systematic requirement propagation, and context-aware refinement of safety constraints. We illustrate the methodology through autonomous driving examples, focusing on hazard analysis and safety requirement derivation. More broadly, this work contributes to reducing uncertainty in the safety assurance of AI-based autonomous systems: it provides a means of closing the semantic gap in the definition of system safety requirements associated with complex environments and functions, and it establishes a basis for causal hazard and risk analysis, verification of probabilistic guarantees, and run-time monitoring to counteract residual AI model insufficiencies.
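To make the ingredients named above concrete, the following is a minimal sketch of a structural causal model combining causal dependencies (structural equations), probabilistic uncertainty (exogenous noise), and a symbolic safety constraint. All variable names and equations are hypothetical illustrations in an autonomous-driving flavor, not the paper's actual SCWM formalism.

```python
import random

# Illustrative structural causal model (SCM) sketch. Variables, equations,
# and thresholds are hypothetical, chosen only to show the three ingredients:
# causal dependencies, probabilistic uncertainty, and a symbolic constraint.

def sample_scm(seed=None):
    rng = random.Random(seed)
    # Exogenous noise terms (probabilistic uncertainty)
    u_weather = rng.random()
    u_sensor = rng.gauss(0.0, 5.0)
    # Structural equations (causal dependencies)
    rain = u_weather < 0.3                              # P(rain) = 0.3
    visibility_m = (40.0 if rain else 120.0) + u_sensor  # rain reduces visibility
    safe_speed_kmh = 30.0 if visibility_m < 50.0 else 60.0
    # Symbolic constraint: the system may only operate above a visibility floor
    in_odd = visibility_m >= 20.0
    return {"rain": rain, "visibility_m": visibility_m,
            "safe_speed_kmh": safe_speed_kmh, "in_odd": in_odd}

def estimate(prop, n=10_000, seed=0):
    """Monte-Carlo estimate of P(prop) under the SCM, supporting checks of
    probabilistic guarantees such as P(in_odd) >= some target."""
    rng = random.Random(seed)
    hits = sum(prop(sample_scm(rng.random())) for _ in range(n))
    return hits / n
```

A run-time monitor in this style would evaluate the symbolic constraint (`in_odd`) on observed values, while the Monte-Carlo estimator gives a crude handle on probabilistic guarantees, e.g. `estimate(lambda s: s["in_odd"])`.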