342,601 research outputs found

    A foundation for multi-level modelling

    Get PDF
    Multi-level modelling allows types and instances to be mixed in the same model; however, there are several proposals for how metamodels can support this. This paper proposes a meta-circular basis for meta-modelling and shows how it supports two leading approaches to multi-level modelling.
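
    The core idea can be hedged into a minimal Python sketch (hypothetical names, not the paper's actual meta-model): a single kind of element, often called a clabject, acts simultaneously as an instance of the element above it and as a type for the elements below it, with a meta-circular root that is an instance of itself.

```python
# Minimal sketch of the clabject idea behind multi-level modelling:
# one element is an instance of the level above and a type for the
# level below. Names are illustrative, not the paper's meta-model.

class Clabject:
    def __init__(self, name, of=None):
        self.name = name
        self.of = of          # the clabject this one is an instance of
        self.instances = []
        if of is not None:
            of.instances.append(self)

# Meta-circularity: the root element is an instance of itself.
Element = Clabject("Element")
Element.of = Element

ProductType = Clabject("ProductType", of=Element)   # a type level
Book = Clabject("Book", of=ProductType)             # instance AND type
moby_dick = Clabject("mobyDick", of=Book)           # plain instance

assert moby_dick.of is Book and Book.of is ProductType
```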

    Risk management in intelligent agents

    Full text link
    University of Technology, Sydney. Faculty of Engineering and Information Technology.
    This thesis presents the development of a generalised risk analysis, modelling and management framework for intelligent agents, based on state-of-the-art techniques from knowledge representation and uncertainty management in the field of Artificial Intelligence (AI). Assessment and management of risk are well-established practices in human society. However, formal recognition and treatment of risk are not usually considered in the design and implementation of most existing intelligent agents and information systems. This thesis aims to fill this gap and improve the overall performance of an intelligent agent. By providing a formal framework that can be easily implemented in practice, my work enables an agent to assess and manage relevant domain risks in a consistent, systematic and intelligent manner. In this thesis, I canvass a wide range of theories and techniques in AI research that deal with uncertainty representation and management. I formulate a generalised concept of risk for intelligent agents and develop formal qualitative and quantitative representations of risk based on the Possible Worlds paradigm. By adapting a selection of mature knowledge modelling and reasoning techniques, I develop a qualitative and a quantitative approach to modelling domains for risk assessment and management. Both approaches are developed under the same theoretical assumptions and use the same domain analysis procedure; both share a similar iterative process to maintain and improve the domain knowledge base continuously over time. Most importantly, the knowledge modelling and reasoning techniques used in both approaches share the same underlying paradigm of Possible Worlds. The close connection between the two risk modelling and reasoning approaches leads us to combine them into a hybrid, multi-level, iterative risk modelling and management framework for intelligent agents, or HiRMA, which generalises to risk modelling and management in many disparate problem domains and environments. Finally, I provide a top-level guide on how HiRMA can be implemented in a practical domain, and a software architecture for such an implementation. My work lays a solid foundation for building better decision support tools (with respect to risk management) that can be integrated into existing or future intelligent agents.
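
    As a rough illustration of the quantitative side of such a framework (illustrative numbers and structure only, not HiRMA itself), a possible-worlds risk measure can be computed as the probability-weighted loss over the worlds in which an undesirable event holds:

```python
# Hedged sketch of a possible-worlds style quantitative risk measure:
# each "world" is an outcome with a probability and a loss; risk is the
# probability-weighted loss over worlds where an undesirable event holds.

worlds = [
    # (probability, loss, undesirable?)
    (0.70,   0.0, False),   # nominal operation
    (0.20,  10.0, True),    # minor failure
    (0.10, 100.0, True),    # major failure
]

# The worlds partition the outcomes, so probabilities must sum to 1.
assert abs(sum(p for p, _, _ in worlds) - 1.0) < 1e-9

risk = sum(p * loss for p, loss, bad in worlds if bad)
print(f"expected loss over undesirable worlds: {risk:.1f}")  # 12.0
```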

    Formal Modelling, Testing and Verification of HSA Memory Models using Event-B

    Full text link
    The HSA Foundation has produced the HSA Platform System Architecture Specification, which goes a long way towards addressing the need for a clear and consistent method of specifying weakly consistent memory. HSA is specified in natural language, which leaves it open to multiple, ambiguous interpretations and could lead to bugs in hardware and software implementations. In this paper we present a formal model of HSA that can be used in the development and verification of concurrent software applications as well as of the HSA-compliant platform itself. We use the Event-B language to build a provably correct hierarchy of models, from the most abstract down to a detailed refinement of HSA close to the implementation level. Our memory models are general in that they represent an arbitrary number of masters, programs and instruction interleavings. We reason about such general models using refinements. Using the Rodin tool, we are able to model and verify an entire hierarchy of models, using proofs to establish that each refinement is correct. We define an automated validation method that allows us to test baseline compliance of the model against a suite of published HSA litmus tests. Once model validation is complete, we apply a coverage-driven method to extract a richer set of tests from the Event-B model and a user-specified coverage model. These tests are used for extensive regression testing of hardware and software systems. Our method of refinement-based formal modelling, baseline compliance testing of the model and coverage-driven test extraction using the single language of Event-B is a new way to address a key challenge facing the design and verification of multi-core systems.
    Comment: 9 pages, 10 figures
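
    To give a flavour of what litmus-test validation checks (a toy sequential-consistency analogue in plain Python, not the Event-B machinery or the HSA model itself), the sketch below enumerates all interleavings of the classic store-buffering test and confirms that the outcome forbidden under sequential consistency is unreachable:

```python
# Toy analogue of litmus-test validation: enumerate every sequentially
# consistent interleaving of the store-buffering (SB) test and collect
# the reachable outcomes. Illustrative only.
from itertools import chain

T0 = [("st", "x", 1), ("ld", "y", "r0")]   # thread 0: x = 1; r0 = y
T1 = [("st", "y", 1), ("ld", "x", "r1")]   # thread 1: y = 1; r1 = x

def interleavings(a, b):
    """All merges of a and b that preserve each thread's program order."""
    if not a: yield tuple(b); return
    if not b: yield tuple(a); return
    for rest in interleavings(a[1:], b): yield (a[0],) + rest
    for rest in interleavings(a, b[1:]): yield (b[0],) + rest

outcomes = set()
for trace in interleavings(T0, T1):
    mem, regs = {"x": 0, "y": 0}, {}
    for op, loc, arg in trace:
        if op == "st": mem[loc] = arg
        else:          regs[arg] = mem[loc]
    outcomes.add((regs["r0"], regs["r1"]))

# Sequential consistency forbids r0 == r1 == 0 for this test.
assert (0, 0) not in outcomes
print(sorted(outcomes))   # [(0, 1), (1, 0), (1, 1)]
```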

    Development of a Methodology for the Economic Assessment of Managerial Decisions as a Factor of Increased Economic Security

    Full text link
    The article notes that the emergence of the interdependence of security and development, the so-called security-development nexus, has become a determinant in the development of strategic documents at all hierarchical levels. This lends relevance to the search for methodological solutions that, at the strategic level, take into account any potential threats to economic security and, at the tactical level, provide for pragmatic actions that do not conflict with the strategic development vector of business entities. The authors identify the instability factors that pose a real threat to economic security. They substantiate the expediency of forming a new model of national economic development focused on new industrialization. The article factors in the most important trends in the development of the global economy that determine the strategic vector of enhancing economic security in Russia. It is established that, under new industrialization, the intellectual core of the high-tech sector of the economy is formed by convergent (NBICS) technologies. The authors offer a methodological approach to the economic assessment of managerial decisions under uncertainty. They also identify methodological principles that must be taken into account in developing a modern methodology for the economic assessment of business decisions. These principles include forming a preferred reality, or the so-called "vision of the future"; the priority of network solutions as the basis for the formation of new markets; mass customization and individualization of demands; principal changes in the profile of competences that ensure competitiveness on the labor market; and use of the ideology of inclusive development and impact investment that creates common values. The proposed methodology is based on an optimal combination of traditional methods for the economic assessment of managerial decisions with the method of real options and reflexive assessments, with entropy as a measure of uncertainty. The proposed methodological approach has been tested on the Ural mining and metallurgical complex.
    The article has been prepared with the support of grant № 16–06–00403 from the Russian Foundation for Basic Research, "Modelling the Motivational Potentials of the Multi-subject Industrial Policy in the Context of New Industrialization".
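
    As a hedged illustration of two ingredients named above (all parameter values are invented, not taken from the article), a one-period binomial real-option valuation can be paired with Shannon entropy as a scalar measure of scenario uncertainty:

```python
# Sketch: value of the option to defer an investment (one-period binomial
# real-options model) plus Shannon entropy of the two scenarios as an
# uncertainty measure. Numbers are illustrative.
from math import log2

V, I, r = 100.0, 95.0, 0.05      # project value, investment cost, risk-free rate
u, d = 1.3, 0.8                  # up / down factors for V after one period

q = (1 + r - d) / (u - d)        # risk-neutral probability of the up move
payoff_up   = max(u * V - I, 0)  # invest at the end of the period only if worthwhile
payoff_down = max(d * V - I, 0)
option = (q * payoff_up + (1 - q) * payoff_down) / (1 + r)

probs = [q, 1 - q]
entropy = -sum(p * log2(p) for p in probs if p > 0)   # bits; max 1 for two scenarios

print(f"value of the option to defer: {option:.2f}")  # 16.67
print(f"scenario entropy: {entropy:.3f} bits")        # 1.000
```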

    Learning Temporal Strategic Relationships using Generative Adversarial Imitation Learning

    Full text link
    This paper presents a novel framework for automatically learning complex strategies in human decision making. The task we are interested in is to better facilitate long-term planning for complex, multi-step events. We observe temporal relationships at the subtask level of expert demonstrations and determine the different strategies employed to successfully complete a task. To capture the relationship between the subtasks and the overall goal, we utilise two external memory modules: one for capturing dependencies within a single expert demonstration, such as the sequential relationship among different subtasks, and a global memory module for modelling task-level characteristics, such as best practice employed by different humans based on their domain expertise. Furthermore, we demonstrate how the hidden state representation of the memory can be used as a reward signal to smooth the state transitions, eradicating subtle changes. We evaluate the effectiveness of the proposed model on an autonomous highway driving application, where we demonstrate its capability to learn different expert policies and outperform state-of-the-art methods. The scope in industrial applications extends to any robotics and automation application which requires learning from complex demonstrations containing series of subtasks.
    Comment: International Foundation for Autonomous Agents and Multiagent Systems, 201
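
    For orientation, the surrogate reward at the heart of generative adversarial imitation learning can be sketched as follows (one common formulation; the toy linear discriminator below is a stand-in for the paper's memory-augmented architecture and is not its method):

```python
# Sketch of a GAIL-style surrogate reward: a discriminator D(s, a) is
# trained to separate expert from policy state-action pairs, and the
# policy is rewarded where it fools D. Toy, untrained parameters here.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.0        # toy linear discriminator parameters

def discriminator(sa):
    """Probability that a state-action vector came from the expert."""
    return 1.0 / (1.0 + np.exp(-(sa @ w + b)))

def surrogate_reward(sa):
    """High where the policy is indistinguishable from the expert."""
    d = np.clip(discriminator(sa), 1e-6, 1 - 1e-6)  # avoid log(0)
    return -np.log(1.0 - d)

policy_batch = rng.normal(size=(5, 4))   # 5 hypothetical (state, action) vectors
print(surrogate_reward(policy_batch))
```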

    Advances in computational modelling for personalised medicine after myocardial infarction

    Get PDF
    Myocardial infarction (MI) is a leading cause of premature morbidity and mortality worldwide. Determining which patients will experience heart failure and sudden cardiac death after an acute MI is notoriously difficult for clinicians. The extent of heart damage after an acute MI is informed by cardiac imaging, typically using echocardiography or, sometimes, cardiac magnetic resonance (CMR). These scans provide complex data sets that are only partially exploited by clinicians in daily practice, implying potential for improved risk assessment. Computational modelling of left ventricular (LV) function can bridge the gap towards personalised medicine using cardiac imaging in post-MI patients. Several novel biomechanical parameters have theoretical prognostic value and may be useful in reflecting the biomechanical effects of novel preventive therapy for adverse remodelling post-MI. These parameters include myocardial contractility (regional and global), stiffness and stress. Further, the parameters can be delineated spatially to correspond with the infarct pathology and the remote zone. While these parameters hold promise, there are challenges for translating MI modelling into clinical practice, including model uncertainty, validation and verification, as well as time-efficient processing. More research is needed (1) to simplify imaging with CMR in post-MI patients while preserving diagnostic accuracy and patient tolerance, and (2) to assess and validate novel biomechanical parameters against established prognostic biomarkers, such as LV ejection fraction and infarct size. Accessible software packages with minimal user interaction are also needed. Translating benefits to patients will be achieved through a multidisciplinary approach including clinicians, mathematicians, statisticians and industry partners.
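
    As a concrete example of the simplest of the biomechanical parameters mentioned (a textbook thin-walled Laplace approximation with illustrative numbers, not the personalised FE models the review discusses), LV wall stress can be estimated from cavity pressure, radius and wall thickness:

```python
# Laplace estimate of mid-wall circumferential stress for a spherical
# LV approximation: sigma = P * r / (2 * h). Illustrative values only.

def lv_wall_stress(pressure_kpa, radius_mm, thickness_mm):
    """Wall stress in kPa for pressure in kPa and lengths in mm."""
    return pressure_kpa * radius_mm / (2.0 * thickness_mm)

# End-systolic example: ~16 kPa (~120 mmHg), r = 25 mm, h = 10 mm.
print(f"{lv_wall_stress(16.0, 25.0, 10.0):.1f} kPa")   # 20.0 kPa
```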

    Undergraduate geotechnical engineering education of the 21st century

    Get PDF
    Forum papers are thought-provoking opinion pieces or essays founded in fact, sometimes containing speculation, on a civil engineering topic of general interest and relevance to the readership of the journal.
    Peer reviewed. Postprint (author's final draft).

    Derivation of the out-of-plane behaviour of masonry through homogenization strategies: Micro-scale level

    Get PDF
    Two simple and reliable homogenized models are presented for the characterization of masonry behaviour via a representative volume element (RVE) defined at the structural level. An FE micro-modelling approach within a plate formulation (Kirchhoff-Love and Mindlin-Reissner theories), using Cauchy continuum hypotheses and first-order homogenization theory, is adopted. Brick units are considered elastic and are modelled through quadrilateral finite elements (FEs) with linear interpolation. Mortar joints are assumed to be inelastic and are reduced to zero-thickness interface FEs. A multi-surface plasticity model governs the strength envelope of the mortar joints; it can reproduce fracture, frictional slip and crushing along the interface elements, hence making possible the prediction of stepped, toothed or de-bonding failure patterns of masonry.
    Validation tests on the homogenized procedures are undertaken to confirm the correct identification of the elastic stiffness properties, the ability to reproduce the orthotropic behaviour of masonry, and the effect of potential pre-compressive states. Furthermore, the approaches are extended to characterize a case study of an English-bond masonry wall. Both the validation and application steps provide excellent results when compared with available experimental and numerical data from the literature. Conclusions on the influence of three-dimensional shear stresses and the effect of potential discontinuities along the thickness direction are also outlined.
    The two homogenized approaches are, for the running- and English-bond masonry cases, integrated within an FE code. By providing reliable and computationally inexpensive solutions, they are particularly suitable for combination within multi-scale approaches.
    This work was supported by FCT (Portuguese Foundation for Science and Technology), within ISISE, scholarship SFRH/BD/95086/2013. This work was also partly financed by FEDER funds through the Competitivity Factors Operational Programme - COMPETE and by national funds through FCT - Foundation for Science and Technology within the scope of the project POCI-01-0145-FEDER-007633.
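
    The first-order homogenization rule such models build on can be sketched in a few lines (illustrative per-element stresses; a real implementation would first solve the micro FE problem on the RVE): the macroscopic stress is the volume average of the micro-stresses.

```python
# First-order homogenization, volume-averaging step only: the macroscopic
# stress is the volume-weighted average of element stresses over the RVE.
# Element volumes and stresses are illustrative placeholders.
import numpy as np

# (volume, [sxx, syy, sxy]) per RVE element.
elements = [
    (2.0, np.array([1.0, 0.2, 0.05])),   # brick unit
    (1.0, np.array([0.4, 0.1, 0.20])),   # brick unit
    (0.5, np.array([0.1, 0.0, 0.30])),   # mortar joint
]

total_volume = sum(v for v, _ in elements)
macro_stress = sum(v * s for v, s in elements) / total_volume
print(macro_stress)   # volume-averaged [sxx, syy, sxy]
```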