6,472 research outputs found
Clinical guidelines as plans: An ontological theory
Clinical guidelines are special types of plans realized by collective agents. We provide an ontological theory of such plans that is designed to support the construction of a framework in which guideline-based information systems can be employed in the management of workflow in health care organizations.
The framework we propose allows us to represent in formal terms how clinical guidelines are realized through the actions of individuals organized into teams. We provide various levels of implementation representing different levels of conformity on the part of health care organizations.
Implementations built in conformity with our framework are marked by two dimensions of flexibility that are designed to make them more likely to be accepted by health care professionals than standard guideline-based management systems. They do justice to the facts 1) that responsibilities within a health care organization are widely shared, and 2) that health care professionals may on different occasions be non-compliant with guidelines for a variety of well-justified reasons.
The advantage of the framework lies in its built-in flexibility, its sensitivity to clinical context, and its ability to use inference tools based on a robust ontology. One disadvantage lies in its complicated implementation.
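As a rough illustration of the idea, a guideline can be modelled as a plan whose steps carry shared responsibilities and whose executions record justified deviations. The following Python sketch is an assumption-laden toy, not the authors' ontological formalism; every class and field name is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One action prescribed by a guideline plan."""
    name: str
    responsible_roles: list[str]      # responsibility may be shared across roles

@dataclass
class Deviation:
    """A justified departure from a prescribed step."""
    step: str
    reason: str                       # e.g. contraindication, patient refusal

@dataclass
class GuidelineExecution:
    """A guideline realized by a team, tolerating justified non-compliance."""
    guideline: str
    steps: list[Step]
    deviations: list[Deviation] = field(default_factory=list)

    def conformity(self) -> float:
        """Fraction of prescribed steps executed without deviation."""
        deviated = {d.step for d in self.deviations}
        return 1 - len(deviated) / len(self.steps)
```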
A Domain-Specific Language and Editor for Parallel Particle Methods
Domain-specific languages (DSLs) are of increasing importance in scientific
high-performance computing to reduce development costs, raise the level of
abstraction and, thus, ease scientific programming. However, designing and
implementing DSLs is not an easy task, as it requires knowledge of the
application domain and experience in language engineering and compilers.
Consequently, many DSLs follow a weak approach using macros or text generators,
which lack many of the features that make a DSL comfortable for programmers.
Some of these features---e.g., syntax highlighting, type inference, error
reporting, and code completion---are easily provided by language workbenches,
which combine language engineering techniques and tools in a common ecosystem.
In this paper, we present the Parallel Particle-Mesh Environment (PPME), a DSL
and development environment for numerical simulations based on particle methods
and hybrid particle-mesh methods. PPME uses the Meta Programming System (MPS),
a projectional language workbench. PPME is the successor of the Parallel
Particle-Mesh Language (PPML), a Fortran-based DSL that used conventional
implementation strategies. We analyze and compare both languages and
demonstrate how the programmer's experience can be improved using static
analyses and projectional editing. Furthermore, we present an explicit domain
model for particle abstractions and the first formal type system for particle
methods.
Comment: Submitted to ACM Transactions on Mathematical Software on Dec. 25, 201
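PPME itself is built in JetBrains MPS rather than as a library; the Python sketch below merely illustrates the kind of particle domain model and type discipline the abstract describes. Every name and check here is an assumption for illustration, not PPME's actual design.

```python
import numpy as np

class ParticleSet:
    """Illustrative domain model: particles carrying typed, named properties."""
    def __init__(self, n: int, dim: int = 3):
        self.n = n
        self.position = np.zeros((n, dim))
        self._props: dict[str, np.ndarray] = {}
        self._types: dict[str, np.dtype] = {}

    def add_property(self, name: str, dtype=np.float64, width: int = 1):
        """Declare a property; its declared dtype acts as a static type."""
        self._types[name] = np.dtype(dtype)
        self._props[name] = np.zeros((self.n, width), dtype=dtype)

    def set(self, name: str, values):
        """Assignments are checked against the declared type, mimicking
        the early error reporting a formal type system provides."""
        values = np.asarray(values)
        if values.dtype != self._types[name]:
            raise TypeError(f"{name!r} expects {self._types[name]}, got {values.dtype}")
        self._props[name][:] = values.reshape(self._props[name].shape)

# Usage: a velocity property declared as float64; assigning int data fails early.
p = ParticleSet(n=100)
p.add_property("velocity", dtype=np.float64, width=3)
p.set("velocity", np.random.rand(100, 3))                # OK
# p.set("velocity", np.arange(300).reshape(100, 3))      # TypeError: int != float64
```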
A Unified Research Data Infrastructure for Catalysis Research – Challenges and Concepts
Modern research methods produce large amounts of scientifically valuable data. Tools to process and analyze such data have advanced rapidly. Yet, access to large amounts of high-quality data remains limited in many fields, including catalysis research. Implementing the concept of FAIR data (Findable, Accessible, Interoperable, Reusable) in the catalysis community would improve this situation dramatically. The German NFDI initiative (National Research Data Infrastructure) aims to create a unique research data infrastructure covering all scientific disciplines. One of the consortia, NFDI4Cat, proposes a concept that serves all aspects and fields of catalysis research. We present a perspective on the challenging path ahead. Starting out from the current state, research needs are identified. A vision for integrating all research data along the catalysis value chain, from molecule to chemical process, is developed. Respective core development topics are discussed, including ontologies, metadata, required infrastructure, IP, and the embedding into the research community. This Concept paper aims to inspire not only researchers in the catalysis field, but to spark similar efforts also in other disciplines and on an international level.
DFG, 441926934, NFDI4Cat – NFDI für Wissenschaften mit Bezug zur Katalyse
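What FAIR-compliant metadata might look like for a single catalysis dataset can be hinted at with a minimal record; the field names and values below are illustrative assumptions, not the NFDI4Cat metadata standard.

```python
# A minimal, illustrative metadata record for one catalysis dataset.
# All field names and values are placeholders, not the NFDI4Cat schema.
record = {
    "identifier": "doi:10.xxxx/example",                    # Findable: persistent ID
    "access_url": "https://repo.example.org/datasets/123",  # Accessible: resolvable link
    "ontology_terms": {                                     # Interoperable: shared vocabulary
        "catalyst_role": "CHEBI:35223",                     # ChEBI term for 'catalyst'
        "reaction": "hydrogenation",
    },
    "license": "CC-BY-4.0",                                 # Reusable: clear usage terms
    "provenance": {"instrument": "GC-MS", "operator": "anonymized"},
}
```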
Towards a unified methodology for supporting the integration of data sources for use in web applications
Organisations are making increasing use of web applications and web-based systems as an integral part of providing services. Examples include personalised dynamic user content on a website, social media plug-ins, and web-based mapping tools. For these types of applications to be fully functional and of maximum use to the user, they require the integration of data from multiple sources. The focus of this thesis is on improving that integration process for web applications that draw on multiple sources of data.
Integration of data from multiple sources is problematic for many reasons. Current integration methods tend to be domain- and application-specific. They are often complex, have compatibility issues with different technologies, lack maturity, are difficult to re-use, and do not accommodate new and emerging models and integration technologies. Technologies to achieve integration, such as brokers and translators, do exist, but their domain specificity prevents them from serving as a generic solution for the integration outcomes required in successful web application development. Because of these difficulties, and the wide variety of integration approaches, developers need assistance in selecting the integration approach most appropriate to their needs.
This thesis proposes GIWeb, a unified top-down data integration methodology instantiated with a framework that will aid developers in their integration process. It will act as a conceptual structure to support the chosen technical approach. The framework will assist in the integration of data sources to support web application builders. The thesis presents the rationale for the framework based on an examination of the range of applications, associated data sources, and the range of potential solutions. The framework is evaluated using four case studies.
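GIWeb itself is a methodology rather than an implementation; the short Python sketch below only illustrates the underlying problem it addresses: two sources exposing different schemas that must be mapped onto one unified view. All endpoints and field mappings are hypothetical.

```python
import json
from urllib.request import urlopen

# Hypothetical endpoints; each returns JSON records under a different schema.
SOURCES = {
    "https://api.example.com/users":  {"full_name": "name", "mail": "email"},
    "https://cms.example.org/people": {"displayName": "name", "contact": "email"},
}

def fetch(url: str) -> list[dict]:
    """Retrieve one source's records."""
    with urlopen(url) as resp:
        return json.load(resp)

def unify() -> list[dict]:
    """Map each source's fields onto one common schema for the web app."""
    unified = []
    for url, mapping in SOURCES.items():
        for record in fetch(url):
            unified.append({common: record.get(src_field)
                            for src_field, common in mapping.items()})
    return unified
```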
Conservation process model (CPM): A twofold scientific research scope in information modelling for cultural heritage
The aim of the present research is to develop an instrument able to adequately support the conservation process by means of a twofold approach, based on both a BIM environment and ontology formalisation. Although BIM has been successfully experimented with within the AEC (Architecture, Engineering, Construction) field, it has shown many drawbacks for architectural heritage. Applications developed so far to cope with the uniqueness and, more generally, the complexity of ancient buildings have adapted BIM poorly to conservation design, with unsatisfactory results (Dore, Murphy 2013; Carrara 2014). The intention is to combine the achievements reached within AEC through the BIM environment (design control and management) with an appropriate, semantically enriched and flexible knowledge representation. The presented model has at its core a knowledge base developed through information ontologies and oriented around the formalisation and computability of all the knowledge necessary for the full comprehension of the object of architectural heritage and its conservation. Such a knowledge representation is worked out upon conceptual categories defined above all within the scope of architectural criticism and conservation. The present paper aims at further extending the scope of conceptual modelling within cultural heritage conservation already formalised by the model, with a special focus on decay analysis and the surface conservation project.
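The kind of knowledge the CPM formalises, building elements linked to decay observations and conservation treatments, can be hinted at with a toy model; every class and field below is an illustrative assumption, not the paper's ontology.

```python
from dataclasses import dataclass, field

@dataclass
class DecayObservation:
    """A documented form of surface decay; an illustrative stand-in
    for an individual in a conservation ontology."""
    pattern: str              # e.g. "efflorescence", "black crust"
    cause: str                # hypothesised cause, e.g. "rising damp"
    severity: int             # 1 (light) to 5 (severe)

@dataclass
class HeritageElement:
    """A building element linked to its decay records and planned treatments."""
    name: str
    material: str
    decay: list[DecayObservation] = field(default_factory=list)
    treatments: list[str] = field(default_factory=list)

# Usage: record decay on a facade element and attach a treatment decision.
wall = HeritageElement("south facade, bay 3", "sandstone")
wall.decay.append(DecayObservation("black crust", "air pollution", 4))
wall.treatments.append("laser cleaning trial")
```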
An Introduction to Programming for Bioscientists: A Python-based Primer
Computing has revolutionized the biological sciences over the past several
decades, such that virtually all contemporary research in the biosciences
utilizes computer programs. The computational advances have come on many
fronts, spurred by fundamental developments in hardware, software, and
algorithms. These advances have influenced, and even engendered, a phenomenal
array of bioscience fields, including molecular evolution and bioinformatics;
genome-, proteome-, transcriptome- and metabolome-wide experimental studies;
structural genomics; and atomistic simulations of cellular-scale molecular
assemblies as large as ribosomes and intact viruses. In short, much of
post-genomic biology is increasingly becoming a form of computational biology.
The ability to design and write computer programs is among the most
indispensable skills that a modern researcher can cultivate. Python has become
a popular programming language in the biosciences, largely because (i) its
straightforward semantics and clean syntax make it a readily accessible first
language; (ii) it is expressive and well-suited to object-oriented programming,
as well as other modern paradigms; and (iii) the many available libraries and
third-party toolkits extend the functionality of the core language into
virtually every biological domain (sequence and structure analyses,
phylogenomics, workflow management systems, etc.). This primer offers a basic
introduction to coding, via Python, and it includes concrete examples and
exercises to illustrate the language's usage and capabilities; the main text
culminates with a final project in structural bioinformatics. A suite of
Supplemental Chapters is also provided. Starting with basic concepts, such as
that of a 'variable', the Chapters methodically advance the reader to the point
of writing a graphical user interface to compute the Hamming distance between
two DNA sequences.
Comment: 65 pages total, including 45 pages text, 3 figures, 4 tables, numerous exercises, and 19 pages of Supporting Information; currently in press at PLOS Computational Biology
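The primer's closing exercise computes the Hamming distance between two DNA sequences; its core logic (without the graphical user interface described in the Supplemental Chapters) can be sketched in a few lines:

```python
def hamming_distance(seq1: str, seq2: str) -> int:
    """Number of positions at which two equal-length DNA sequences differ."""
    if len(seq1) != len(seq2):
        raise ValueError("Hamming distance requires sequences of equal length")
    return sum(a != b for a, b in zip(seq1, seq2))

# Example: two 8-base sequences differing at two positions.
print(hamming_distance("GATTACAA", "GACTACAT"))  # -> 2
```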