
    Dynamic finite-strain modelling of the human left ventricle in health and disease using an immersed boundary-finite element method

    Detailed models of the biomechanics of the heart are important both for developing improved interventions for patients with heart disease and also for patient risk stratification and treatment planning. For instance, stress distributions in the heart affect cardiac remodelling, but such distributions are not presently accessible in patients. Biomechanical models of the heart offer detailed three-dimensional deformation, stress and strain fields that can supplement conventional clinical data. In this work, we introduce dynamic computational models of the human left ventricle (LV) that are derived from clinical imaging data obtained from a healthy subject and from a patient with a myocardial infarction (MI). Both models incorporate a detailed invariant-based orthotropic description of the passive elasticity of the ventricular myocardium along with a detailed biophysical model of active tension generation in the ventricular muscle. These constitutive models are employed within a dynamic simulation framework that accounts for the inertia of the ventricular muscle and the blood, and that is based on an immersed boundary (IB) method with a finite element description of the structural mechanics. The geometry of the models is based on data obtained non-invasively by cardiac magnetic resonance (CMR). CMR imaging data are also used to estimate the parameters of the passive and active constitutive models, which are determined so that the simulated end-diastolic and end-systolic volumes agree with the corresponding volumes determined from the CMR imaging studies. Using these models, we simulate LV dynamics from end-diastole to end-systole. The results of our simulations are shown to be in good agreement with subject-specific CMR-derived strain measurements and also with earlier clinical studies on human LV strain distributions.
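The core of an immersed boundary method is the transfer of forces from the Lagrangian structure to the Eulerian fluid grid through a regularized delta function. As a minimal, hypothetical illustration (the grid size, point position and force below are invented, not taken from the paper's LV model), the one-dimensional force-spreading step can be sketched as:

```python
# Minimal 1D sketch of the force-spreading step of an immersed boundary (IB)
# method, using Peskin's 4-point discrete delta kernel.
import math

def delta4(r):
    """Peskin's 4-point regularized delta function (support |r| < 2)."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + math.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - math.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def spread_force(X, F, n, h):
    """Spread a Lagrangian point force F at position X onto n grid cells of width h."""
    f = [0.0] * n
    for i in range(n):
        x_i = (i + 0.5) * h            # cell-centre coordinate
        f[i] += F * delta4((x_i - X) / h) / h
    return f

h = 1.0 / 16
f_grid = spread_force(X=0.5, F=1.0, n=16, h=h)
print(sum(f_grid) * h)
```

Because Peskin's 4-point kernel sums to one over the grid, the spread force field conserves the total applied force; the velocity-interpolation step uses the same kernel.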

    ISSN: Transitioning to linked data

    ISSN numbers reliably identify all types of continuing resources worldwide: in 2007, the scope of the standard, originally limited to serials, was extended to also include ongoing integrating resources. Bibliographic records describing resources identified by an ISSN are produced by ISSN national centres, which are also in charge of their updates. ISSN records are regularly sent to the ISSN Register, a bibliographic database which currently contains more than 1.9 million records. The Register is maintained by the ISSN International Centre, which is also in charge of providing access to its bibliographic information through innovative tools and services. The ISSN International Centre sees linked data principles and tools as a prominent way to distribute information from its own Register, and more generally bibliographic information about continuing resources. It also seeks to harness the tremendous opportunities of reusing data from other organizations, whether or not they belong to the library world, in order to enhance its knowledge of its own data and to propose better services. The ISSN International Centre has therefore launched several projects in this domain. On the one hand, it has participated in the development of PRESSoo, an extension of the FRBRoo ontology for continuing resources. On the other hand, it has launched ROAD, the Directory of Open Access scholarly Resources, which disseminates bibliographic information on open access publications in the web of data. These two experiments have helped the ISSN International Centre begin setting up its linked data policy, or policies: various data models will be designed to fit the needs of different users, and different services and tools will be provided to free users and to customers of the ISSN Portal.
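As a rough illustration of what distributing a Register record as linked data can mean, the sketch below serializes an invented record as RDF Turtle. The schema.org vocabulary and the URI pattern are assumptions made for the example, not a statement of the Centre's actual data models:

```python
# Illustrative sketch: expose a (fictional) ISSN record as RDF Turtle.
record = {
    "issn": "1234-5678",
    "title": "Journal of Example Studies",
}

def to_turtle(rec):
    """Serialize a minimal record as Turtle, keyed by an ISSN-based URI."""
    subject = f"<https://portal.issn.org/resource/ISSN/{rec['issn']}>"
    lines = [
        "@prefix schema: <http://schema.org/> .",
        f"{subject} a schema:Periodical ;",
        f'    schema:issn "{rec["issn"]}" ;',
        f'    schema:name "{rec["title"]}" .',
    ]
    return "\n".join(lines)

print(to_turtle(record))
```

Different Turtle views like this one could be generated from the same internal record to serve the different audiences the abstract mentions.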

    Object-Oriented Programming and Parallelism

    Initially, object-orientation and parallelism originated and developed as separate and relatively independent areas. During the last decade, however, more and more researchers have been attracted by the benefits of a potential marriage of the two powerful paradigms. Numerous research projects and an increasing number of practical applications have aimed at different forms of amalgamation of parallelism with object-orientation. It has been realized that parallelism is an inherently needed enhancement of the traditional object-oriented programming (OOP) paradigm, and that object orientation can add significant flexibility to the parallel programming paradigm.
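One classic form of this amalgamation is the active object pattern, in which each object serializes its method executions on a private worker thread, so callers get asynchrony without explicit locking. A minimal sketch (the class and method names are illustrative):

```python
# Active object sketch: increments are queued and run on the object's own
# thread, so the shared counter needs no lock.
import queue
import threading

class ActiveCounter:
    """Counter whose increments execute on a private worker thread."""
    def __init__(self):
        self._value = 0
        self._tasks = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            task = self._tasks.get()
            if task is None:          # sentinel: shut down
                break
            task()

    def increment(self, n=1):
        # Enqueue the mutation; it runs serialized on the worker thread.
        self._tasks.put(lambda: setattr(self, "_value", self._value + n))

    def stop(self):
        self._tasks.put(None)
        self._worker.join()

    @property
    def value(self):
        return self._value

counter = ActiveCounter()
for _ in range(100):
    counter.increment()
counter.stop()
print(counter.value)
```

Because all mutations run on one thread, the object stays race-free even with many concurrent callers, which is exactly the flexibility the paragraph above alludes to.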

    Agent-based modeling: a systematic assessment of use cases and requirements for enhancing pharmaceutical research and development productivity.

    A crisis continues to brew within the pharmaceutical research and development (R&D) enterprise: productivity continues declining as costs rise, despite ongoing, often dramatic scientific and technical advances. To reverse this trend, we offer various suggestions for both the expansion and broader adoption of modeling and simulation (M&S) methods. We suggest strategies and scenarios intended to enable new M&S use cases that directly engage R&D knowledge generation and build actionable mechanistic insight, thereby opening the door to enhanced productivity. What M&S requirements must be satisfied to access and open the door, and begin reversing the productivity decline? Can current methods and tools fulfill the requirements, or are new methods necessary? We draw on the relevant, recent literature to provide and explore answers. In so doing, we identify essential, key roles for agent-based and other methods. We assemble a list of requirements necessary for M&S to meet the diverse needs distilled from a collection of research, review, and opinion articles. We argue that to realize its full potential, M&S should be actualized within a larger information technology framework--a dynamic knowledge repository--wherein models of various types execute, evolve, and increase in accuracy over time. We offer some details of the issues that must be addressed for such a repository to accrue the capabilities needed to reverse the productivity decline.
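As a toy illustration of the mechanistic, agent-based style of M&S advocated above (the damage/repair rule and all parameter values are invented, not drawn from the article), consider agents whose local stochastic rules produce a predictable population-level observable:

```python
# Toy agent-based model: cells are independently damaged and repaired with
# fixed per-step probabilities; the damaged fraction is the emergent readout.
import random

class Cell:
    def __init__(self, rng):
        self.rng = rng
        self.damaged = False

    def step(self, insult_prob, repair_prob):
        if not self.damaged and self.rng.random() < insult_prob:
            self.damaged = True
        elif self.damaged and self.rng.random() < repair_prob:
            self.damaged = False

def simulate(n_agents=1000, n_steps=200, insult_prob=0.02, repair_prob=0.05, seed=0):
    rng = random.Random(seed)
    cells = [Cell(rng) for _ in range(n_agents)]
    for _ in range(n_steps):
        for c in cells:
            c.step(insult_prob, repair_prob)
    return sum(c.damaged for c in cells) / n_agents

print(simulate())
```

At steady state the damaged fraction approaches insult_prob / (insult_prob + repair_prob); the point of the exercise is that an aggregate, checkable observable emerges from purely local rules, which is what makes such models candidates for a dynamic knowledge repository.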

    JTruss: A CAD-Oriented Educational Open-Source Software for Static Analysis of Truss-Type Structures

    A CAD-oriented software package (JTruss) for the static analysis of planar and spatial truss-type structures is presented. Developed for educational purposes, JTruss is part of an open-source project and is characterised by complete accessibility (i.e. platform independence) and high software compatibility. CAD methodologies are employed to implement commands for handling graphic models. A student-friendly graphical interface, tailored mainly to introductory structural mechanics courses in engineering and architecture programs, is conceived. Accordingly, the standard sequence involved in such software, namely pre-processing, processing and post-processing, is implemented with the aim of improving the interpretation of structural behaviour. (C) 2008 Wiley Periodicals, Inc. Comput Appl Eng Educ 16: 280-289, 2008; published online in Wiley InterScience (www.interscience.wiley.com) DOI 10.1002/cae.2015
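The processing step of such a tool is the direct stiffness method. As a sketch (not JTruss's actual code; the geometry and axial stiffness EA are illustrative), a planar truss with a single free node reduces to a 2x2 linear system:

```python
# Direct stiffness sketch for a planar truss with one free node.
import math

def bar_stiffness(xa, ya, xb, yb, EA):
    """2x2 stiffness contribution of a bar from support (xa,ya) to free node (xb,yb)."""
    L = math.hypot(xb - xa, yb - ya)
    c, s = (xb - xa) / L, (yb - ya) / L   # direction cosines
    k = EA / L
    return [[k * c * c, k * c * s],
            [k * c * s, k * s * s]]

def solve_free_node(bars, EA, fx, fy):
    """Assemble the free-node stiffness K and solve K u = f by Cramer's rule."""
    K = [[0.0, 0.0], [0.0, 0.0]]
    for (xa, ya, xb, yb) in bars:
        kb = bar_stiffness(xa, ya, xb, yb, EA)
        for i in range(2):
            for j in range(2):
                K[i][j] += kb[i][j]
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    ux = (fx * K[1][1] - K[0][1] * fy) / det
    uy = (K[0][0] * fy - K[1][0] * fx) / det
    return ux, uy

# Symmetric two-bar truss: supports at (0,0) and (2,0), free apex at (1,1),
# unit downward load. By symmetry the horizontal displacement is zero.
bars = [(0.0, 0.0, 1.0, 1.0), (2.0, 0.0, 1.0, 1.0)]
ux, uy = solve_free_node(bars, EA=1000.0, fx=0.0, fy=-1.0)
print(ux, uy)
```

Real tools assemble the same per-bar matrices into a global system over all free degrees of freedom; restricting to one node just keeps the linear algebra visible for teaching.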

    Crowdsourcing Linked Data on listening experiences through reuse and enhancement of library data

    Research has approached the practice of musical reception in a multitude of ways, such as the analysis of professional critique, sales figures and psychological processes activated by the act of listening. Studies in the Humanities, on the other hand, have been hindered by the lack of structured evidence of actual experiences of listening as reported by the listeners themselves, a concern that has been voiced since the early Web era. It was however assumed that such evidence existed, albeit in pure textual form, but could not be leveraged until it was digitised and aggregated. The Listening Experience Database (LED) responds to this research need by providing a centralised hub for evidence of listening in the literature. Not only does LED support search and reuse across nearly 10,000 records, but it also provides machine-readable structured data of the knowledge around the contexts of listening. To take advantage of the mass of formal knowledge that already exists on the Web concerning these contexts, the entire framework adopts Linked Data principles and technologies. This also allows LED to directly reuse open data from the British Library for the source documentation that is already published. Reused data are re-published as open data with enhancements obtained by expanding over the model of the original data, such as the partitioning of published books and collections into individual stand-alone documents. The database was populated through crowdsourcing and seamlessly incorporates data reuse from the very early data entry phases. As the sources of the evidence often contain vague, fragmentary or uncertain information, facilities were put in place to generate structured data out of such fuzziness.
    Alongside elaborating on these functionalities, this article provides insights into the most recent features of the latest instalment of the dataset and portal, such as the interlinking with the MusicBrainz database, the relaxation of geographical input constraints through text mining, and the plotting of key locations in an interactive geographical browser.
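One way to generate structured data out of such fuzziness is to map vague date strings onto explicit year intervals. The patterns and interval widths below are assumptions made for illustration, not the actual LED pipeline:

```python
# Sketch: map fuzzy historical date strings to (earliest, latest) year intervals.
import re

def parse_fuzzy_date(text):
    """Return an (earliest, latest) year interval for a vague date string."""
    text = text.strip().lower()
    m = re.fullmatch(r"(?:c\.|circa|about)\s*(\d{4})", text)
    if m:                                   # "circa 1850": allow +/- 5 years
        y = int(m.group(1))
        return (y - 5, y + 5)
    m = re.fullmatch(r"(\d{2})th century", text)
    if m:                                   # "19th century" -> 1801-1900
        c = int(m.group(1))
        return ((c - 1) * 100 + 1, c * 100)
    m = re.fullmatch(r"(\d{4})s", text)
    if m:                                   # "1840s" -> 1840-1849
        y = int(m.group(1))
        return (y, y + 9)
    m = re.fullmatch(r"(\d{4})", text)
    if m:                                   # exact year
        y = int(m.group(1))
        return (y, y)
    return None                             # genuinely unknown: leave open

print(parse_fuzzy_date("circa 1850"))
print(parse_fuzzy_date("19th century"))
```

Representing the uncertainty as an interval rather than a forced point value keeps the structured record honest about what the source actually supports.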

    KEMNAD: A Knowledge Engineering Methodology for Negotiating Agent Development

    Automated negotiation is widely applied in various domains. However, the development of such systems is a complex knowledge and software engineering task, so a methodology would be helpful. Unfortunately, none of the existing methodologies offers sufficient, detailed support for such system development. To remove this limitation, this paper develops a new methodology made up of: (1) a generic framework (architectural pattern) for the main task, and (2) a library of modular and reusable design patterns (templates) for subtasks. Thus, it is much easier to build a negotiating agent by assembling these standardised components rather than reinventing the wheel each time. Moreover, since these patterns are identified from a wide variety of existing negotiating agents (especially high-impact ones), they can also improve the quality of the final systems developed. In addition, our methodology reveals what types of domain knowledge need to be input into the negotiating agents. This in turn provides a basis for developing techniques to acquire the domain knowledge from human users. This is important because negotiating agents act faithfully on behalf of their human users and thus the relevant domain knowledge must be acquired from them. Finally, our methodology is validated with one high-impact system.
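For a flavour of the kind of reusable subtask template such a library might contain (the tactic and all parameter values are illustrative, not taken from KEMNAD), here is a time-dependent concession tactic for an alternating-offers price negotiation:

```python
# Sketch of a reusable negotiation subtask: a time-dependent concession tactic.
def concession_offer(reserve, target, t, deadline, beta=1.0):
    """Offer at round t: starts at target, concedes toward reserve by deadline."""
    alpha = (t / deadline) ** beta          # 0 at start, 1 at the deadline
    return target + alpha * (reserve - target)

def negotiate(buyer, seller, deadline=10):
    """Buyer bids rise, seller asks fall; deal when they cross."""
    for t in range(deadline + 1):
        bid = concession_offer(*buyer, t, deadline)
        ask = concession_offer(*seller, t, deadline)
        if bid >= ask:                      # agreement: split the difference
            return round((bid + ask) / 2, 2)
    return None                             # no deal before the deadline

# Buyer: reserve price 100, opening target 60. Seller: reserve 70, opening 120.
deal = negotiate(buyer=(100, 60), seller=(70, 120))
print(deal)
```

In this formulation the beta exponent shapes the concession curve: beta > 1 concedes late (tough, Boulware-like behaviour) while beta < 1 concedes early. Packaging each tactic behind one signature is what makes the assemble-from-templates approach workable.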

    Towards an interoperable metamodel suite: size assessment as one input

    In recent years, many metamodels have been introduced in the software engineering literature and standards. These metamodels vary in their focus across, for example, process, product, organizational and measurement aspects of software development, and have typically been developed independently of each other, with shared concepts being only accidental. There is thus an increasing concern in the standards communities that possible conflicts of structure and semantics between these various metamodels will hinder their widespread adoption. The complexity of these metamodels has also increased significantly and is another barrier to their appreciation. This complexity is compounded when more than one metamodel is used in the lifecycle of a software project. Therefore there is a need for interoperable metamodels. As a first step towards engendering interoperability and/or possible mergers between metamodels, we examine the size and complexity of various metamodels. To do this, we have used the Rossi and Brinkkemper metrics-based approach to evaluate the size and complexity of several standard metamodels including UML 2.3, BPMN 2.0, ODM, SMM and OSM. The size and complexity of these metamodels is also compared with the previous versions of UML, BPMN and Activity diagrams. The comparatively large sizes of BPMN 2.0 and UML 2.3 suggest that future integration with these metamodels might be more difficult than with the other metamodels under study (especially ODM, SMM and OSM).
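The Rossi and Brinkkemper approach reduces a metamodel to counts of its constituent types. A toy sketch of that style of measurement (the toy metamodel content is invented, and the original metric suite is richer than these three counts):

```python
# Sketch of Rossi-and-Brinkkemper-style size metrics over a toy metamodel:
# counts of object types, relationship types and property types.
toy_metamodel = {
    "object_types": {
        "Activity": ["name", "duration"],
        "Task": ["name"],
        "WorkProduct": ["name", "status"],
    },
    "relationship_types": [
        ("Activity", "contains", "Task"),
        ("Task", "produces", "WorkProduct"),
    ],
}

def size_metrics(mm):
    """Count object, relationship and property types, plus their total."""
    n_objects = len(mm["object_types"])
    n_relationships = len(mm["relationship_types"])
    n_properties = sum(len(props) for props in mm["object_types"].values())
    return {
        "object_types": n_objects,
        "relationship_types": n_relationships,
        "property_types": n_properties,
        "total": n_objects + n_relationships + n_properties,
    }

print(size_metrics(toy_metamodel))
```

Applied uniformly to each standard's metaclass inventory, counts like these are what make the cross-metamodel size comparisons in the paper possible.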
