1,444 research outputs found

    Characterisation of Blast Loading from Ideal and Non-Ideal Explosives

    Explosive detonation in its simplest form can be characterised as an instantaneous release of energy from a solid explosive material at an infinitely small point in space. This is the result of the chemical decomposition of the explosive, which reforms as high-pressure, high-temperature gases that expand radially. This supersonic expansion of detonation products compresses the surrounding medium, producing a shock wave discontinuity which propagates away from the explosive epicentre at high speed and has the potential to cause significant damage to anything it interacts with. Shock wave quantification work conducted from the 1940s through to the 1980s was aimed at understanding the effects of large-scale explosive detonations, an immediate threat following the advent of the nuclear bomb. Highly skilled experimental and theoretical scientists were assigned the task of capturing the effects of large-scale detonations through innovative solutions and the development of pressure gauges. The in-depth fundamental understanding of physics, combustion and fluid dynamics these researchers applied resulted in the well-favoured semi-empirical blast predictions for simple free-field spherical/hemispherical blasts.
    A broad body of literature has been published on free-air characterisation of spherical/hemispherical explosives, and the detonation process and subsequent shock wave formation mechanics are well understood. However, there is yet to be a definitive and robust understanding of how deterministic a shock wave's spatial and temporal parameters are, even for simple scenarios; some studies go as far as suggesting that semi-empirical tools are not as effective as previously assumed. Numerical simulations often provide reasonable insight into the blast loading conditions imparted on structures and into scenarios of higher complexity. However, when the validation data used is itself erroneous, such schemes can no longer be characterised as high fidelity. The lack of quantified variability and confidence in published data is a significant issue for engineers designing infrastructure that must be robust enough to withstand extreme loading, yet not so conservative that it incurs cost and material-waste penalties. This issue is investigated thoroughly within this thesis, highlighting the sensitivity of blast parameters across the scaled distance ranges and determining their predictability with both numerical simulation and semi-empirical tools.
    The vast majority of free-field characterisation has been conducted using military-grade explosives, which exhibit ideal detonation behaviour, meaning the detonation reaction is effectively instantaneous. Ideal explosives, by the theoretical definition, can be characterised by a simple instantaneous energy release, and in far-field regimes any explosive with ideal-like composition and behaviour should be scalable with mass. This assumption is not valid for homemade explosives (HMEs), such as ANFO (ammonium nitrate + fuel oil), whose compositions are usually homogeneous, resulting in a finite reaction zone length. These reaction zones can be long enough to cause detonation failures and to produce varying energy releases depending on the mass of the charge, resulting in HMEs having different TNT equivalence values depending on their scale.
    Early ANFO characterisation work was driven by the desire to replace TNT and to assess its capability of producing similar yields for a fraction of the manufacturing cost. Consequently, the hemispherical ANFO detonations which have led to its overall classification were conducted using charges of over 100 kg, for which non-ideal reaction zone effects become negligible in comparison to the overall charge size. Yields in this regime were consistently measured at around 80% of a comparable TNT detonation, and this has therefore been incorrectly assumed in the published literature to be a rule for ANFO across all mass ranges.
    There is a distinct lack of characterisation of non-ideal explosives throughout the mass scales, with significant implications for designing structures to withstand the threat of HMEs. Given that energy is released at a much slower rate when these compositions detonate, the assumption that large-scale trials, when scaled down, accurately capture the behaviour of small charge masses is not verified. Most HMEs will be hand-held or, at the very least, backpack-sized devices, meaning this threat cannot currently be predicted with confidence from validated data collected under well-controlled conditions. Small-scale ANFO trials presented in this thesis demonstrate that this is the case, and theoretical mechanisms are proposed which offer a method for predicting the behaviour of non-ideal detonation across all mass scales. The findings of this PhD thesis offer a conclusion on whether shock waves in free-field scenarios are deterministic for both ideal and non-ideal explosives, with a particular emphasis on the far-field range. The results presented are developments in the accurate quantification of the shock wave loading conditions a structure is subjected to through explosive detonation, and should be used by engineers to establish robust, probabilistic yet accurate designs.
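
    A minimal Python sketch of the scaling ideas referred to in this abstract: the Hopkinson-Cranz scaled distance Z = R / W^(1/3), under which free-field blast parameters of ideal explosives should collapse with charge mass, and a TNT-equivalence conversion. The mass-dependent ANFO equivalence function below is a purely hypothetical illustration of the non-ideal behaviour described above, not data or a model from the thesis.

        # Hedged sketch: Hopkinson-Cranz scaling and TNT equivalence.
        # The anfo_equivalence() model is an illustrative assumption only.

        def scaled_distance(standoff_m, charge_mass_kg):
            """Hopkinson-Cranz scaled distance Z = R / W^(1/3), in m/kg^(1/3)."""
            return standoff_m / charge_mass_kg ** (1.0 / 3.0)

        def tnt_equivalent_mass(charge_mass_kg, equivalence_factor):
            """Convert a charge mass to an equivalent TNT mass for use with
            semi-empirical free-field blast predictions."""
            return equivalence_factor * charge_mass_kg

        def anfo_equivalence(charge_mass_kg, asymptote=0.8, half_mass_kg=10.0):
            """Hypothetical mass-dependent equivalence factor that rises towards
            `asymptote` (the ~0.8 reported for >100 kg trials) as mass grows."""
            return asymptote * charge_mass_kg / (charge_mass_kg + half_mass_kg)

        if __name__ == "__main__":
            for mass in (0.5, 5.0, 100.0, 1000.0):
                w_tnt = tnt_equivalent_mass(mass, anfo_equivalence(mass))
                z = scaled_distance(standoff_m=10.0, charge_mass_kg=w_tnt)
                print(f"ANFO {mass:7.1f} kg -> TNT eq. {w_tnt:8.2f} kg, Z = {z:6.2f} m/kg^(1/3)")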

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Efficient Model Checking: The Power of Randomness


    A Kernel Perspective on Behavioural Metrics for Markov Decision Processes

    Behavioural metrics have been shown to be an effective mechanism for constructing representations in reinforcement learning. We present a novel perspective on behavioural metrics for Markov decision processes via the use of positive definite kernels. We leverage this new perspective to define a new metric that is provably equivalent to the recently introduced MICo distance (Castro et al., 2021). The kernel perspective further enables us to provide new theoretical results, which have so far eluded prior work. These include bounding value function differences by means of our metric, and the demonstration that our metric can be provably embedded into a finite-dimensional Euclidean space with low distortion error. These are two crucial properties when using behavioural metrics for reinforcement learning representations. We complement our theory with strong empirical results that demonstrate the effectiveness of these methods in practice. Comment: Published in TMLR
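
    A minimal Python sketch of the basic construction this abstract builds on: any positive definite kernel k induces a distance through its feature-space norm, d(x, y) = sqrt(k(x, x) + k(y, y) - 2 k(x, y)). The Gaussian kernel and toy state vectors below are illustrative assumptions; the paper's kernel is tied to the MDP's dynamics and is not reproduced here.

        # Hedged sketch: distance induced by a positive definite kernel.
        import numpy as np

        def gaussian_kernel(x, y, bandwidth=1.0):
            """k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)), positive definite."""
            diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
            return float(np.exp(-(diff @ diff) / (2.0 * bandwidth ** 2)))

        def kernel_distance(x, y, kernel=gaussian_kernel):
            """Distance between x and y in the kernel's feature (RKHS) space."""
            squared = kernel(x, x) + kernel(y, y) - 2.0 * kernel(x, y)
            return float(np.sqrt(max(squared, 0.0)))  # clip rounding error

        if __name__ == "__main__":
            s1, s2 = [0.0, 1.0], [0.5, 0.5]  # toy state representations
            print(kernel_distance(s1, s2))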

    Compositional Probabilistic Model Checking with String Diagrams of MDPs

    We present a compositional model checking algorithm for Markov decision processes, in which MDPs are composed in the categorical graphical language of string diagrams. The algorithm computes optimal expected rewards. Our theoretical development of the algorithm is supported by category theory, with what we call decomposition equalities for expected rewards acting as a key enabler. Experimental evaluation demonstrates its performance advantages. Comment: 32 pages, extended version of a paper in CAV 2023
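
    For context, a minimal Python sketch of the monolithic computation that a compositional approach decomposes: optimal expected reward in a finite MDP, here via discounted value iteration. The toy MDP, the discount factor, and all names are illustrative assumptions; the paper's algorithm instead composes solutions of open MDPs described by string diagrams.

        # Hedged sketch: optimal expected (discounted) reward by value iteration.
        def value_iteration(transitions, rewards, gamma=0.95, tol=1e-8):
            """transitions[s][a] = list of (next_state, prob); rewards[s][a] = float.
            Returns a dict mapping each state to its optimal value."""
            values = {s: 0.0 for s in transitions}
            while True:
                delta = 0.0
                for s, actions in transitions.items():
                    best = max(
                        rewards[s][a] + gamma * sum(p * values[t] for t, p in succ)
                        for a, succ in actions.items()
                    )
                    delta = max(delta, abs(best - values[s]))
                    values[s] = best
                if delta < tol:
                    return values

        if __name__ == "__main__":
            # Two-state toy MDP: 'stay' is safe, 'risk' may jump to an absorbing state.
            transitions = {
                "s0": {"stay": [("s0", 1.0)], "risk": [("s1", 0.5), ("s0", 0.5)]},
                "s1": {"stay": [("s1", 1.0)]},
            }
            rewards = {"s0": {"stay": 1.0, "risk": 2.0}, "s1": {"stay": 0.0}}
            print(value_iteration(transitions, rewards))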

    Exemplars as a least-committed alternative to dual-representations in learning and memory

    Despite some notable counterexamples, the theoretical and empirical exchange between the fields of learning and memory is limited. In an attempt to promote further theoretical exchange, I explore how learning and memory may be conceptualized as distinct algorithms that operate on the same representations of past experiences. I review representational and process assumptions in learning and memory, using the examples of evaluative conditioning and false recognition, and identify important similarities in the theoretical debates. Based on this review, I identify global matching memory models and their exemplar representation as a promising candidate for a common representational substrate that satisfies the principle of least commitment. I then present two cases in which exemplar-based global matching models, which take characteristics of the stimulus material and context into account, suggest parsimonious explanations for empirical dissociations in evaluative conditioning and false recognition in long-term memory. These explanations suggest reinterpretations of findings that are commonly taken as evidence for dual-representation models. Finally, I report that the same approach also provides a natural unitary account of false recognition in short-term memory, a finding which challenges the assumption that short-term memory is insulated from long-term memory. Taken together, this work illustrates the broad explanatory scope and the integrative yet parsimonious potential of exemplar-based global matching models.
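
    A minimal Python sketch of the kind of exemplar-based global matching computation referred to above: every experience is stored as a feature vector (including context features), and a probe's familiarity or evaluation signal is its summed similarity to all stored exemplars. The exponential similarity rule and the feature coding are generic illustrative assumptions, not the specific model fitted in the paper.

        # Hedged sketch: exemplar storage plus a summed-similarity (global match) rule.
        import math

        def similarity(probe, exemplar, sensitivity=1.0):
            """Similarity decays exponentially with city-block feature distance."""
            distance = sum(abs(p - e) for p, e in zip(probe, exemplar))
            return math.exp(-sensitivity * distance)

        def global_match(probe, memory, sensitivity=1.0):
            """Summed similarity of the probe to every stored exemplar."""
            return sum(similarity(probe, exemplar, sensitivity) for exemplar in memory)

        if __name__ == "__main__":
            # Feature vectors: [stimulus dim 1, stimulus dim 2, context]
            memory = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 1.0]]
            old_probe = [1.0, 0.0, 0.0]   # studied stimulus in the study context
            lure_probe = [0.8, 0.2, 1.0]  # similar stimulus, different context
            print(global_match(old_probe, memory), global_match(lure_probe, memory))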