Intelligent Systems
This book is dedicated to intelligent systems of broad-spectrum application, such as personal and social biosafety or the use of intelligent sensory micro- and nanosystems such as the "e-nose", "e-tongue" and "e-eye". In addition, effective information acquisition, knowledge management and improved knowledge transfer in any medium, as well as the modelling of information content using meta- and hyper-heuristics and semantic reasoning, all benefit from the systems covered in this book. Intelligent systems can also be applied in education, for instance in generating intelligent distributed eLearning architectures, as well as in a large number of technical fields such as industrial design, manufacturing and utilisation, e.g. in precision agriculture, cartography, electric power distribution systems, intelligent building management systems and drilling operations. Furthermore, decision making using fuzzy logic models, computational recognition of comprehension uncertainty, the joint synthesis of goals and means of intelligent behaviour in biosystems, and diagnostic and human support in the healthcare environment have also been made easier.
Biomedical applications of belief networks
Biomedicine is an area in which computers have long been expected to play a significant
role. Although many of the early claims have proved unrealistic, computers are gradually
becoming accepted in the biomedical, clinical and research environment. Within these
application areas, expert systems appear to have met with the most resistance, especially
when applied to image interpretation.

In order to improve the acceptance of computerised decision support systems it is
necessary to provide the information needed to make rational judgements concerning
the inferences the system has made. This entails an explanation of what inferences
were made, how the inferences were made and how the results of the inference are to
be interpreted. Furthermore, there must be a consistent approach to the combining of
information from low level computational processes through to high level expert analyses.
Until recently ad hoc formalisms were seen as the only tractable approach to reasoning
under uncertainty. A review of some of these formalisms suggests that they are less
than ideal for the purposes of decision making. Belief networks provide a tractable way
of utilising probability theory as an inference formalism by combining the theoretical
consistency of probability for inference and decision making, with the ability to use the
knowledge of domain experts.
The potential of belief networks in biomedical applications has already been recognised
and there has been substantial research into the use of belief networks for medical
diagnosis and methods for handling large, interconnected networks. In this thesis the use
of belief networks is extended to include detailed image model matching to show how,
in principle, feature measurement can be undertaken in a fully probabilistic way. The
belief networks employed are usually cyclic and have strong influences between adjacent
nodes, so new techniques for probabilistic updating based on a model of the matching
process have been developed.

An object-orientated inference shell called FLAPNet has been implemented and used
to apply the belief network formalism to two application domains. The first application is
model-based matching in fetal ultrasound images. The imaging modality and biological
variation in the subject make model matching a highly uncertain process. A dynamic,
deformable model, similar to active contour models, is used. A belief network combines
constraints derived from local evidence in the image, with global constraints derived from
trained models, to control the iterative refinement of an initial model cue.

In the second application a belief network is used for the incremental aggregation of
evidence occurring during the classification of objects on a cervical smear slide as part of
an automated pre-screening system. A belief network provides both an explicit domain
model and a mechanism for the incremental aggregation of evidence, two attributes
important in pre-screening systems.

Overall it is argued that belief networks combine the necessary quantitative features
required of a decision support system with desirable qualitative features that will lead
to improved acceptability of expert systems in the biomedical domain.
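The core operation the abstract describes, combining prior domain knowledge with observed image evidence, can be illustrated with a minimal two-node belief network. This is a generic sketch, not the thesis's FLAPNet shell: the hypothesis name, evidence name and probabilities below are all illustrative assumptions.

```python
# Minimal two-node belief network: Hypothesis -> Evidence.
# A Bayesian update combines a prior (domain knowledge) with the
# likelihood of the observed evidence, yielding a posterior belief.

def posterior(prior, likelihood_true, likelihood_false, evidence_observed=True):
    """P(H | E) for a binary hypothesis H and binary evidence E.

    prior            -- P(H)
    likelihood_true  -- P(E | H)
    likelihood_false -- P(E | not H)
    """
    if evidence_observed:
        num = likelihood_true * prior
        denom = num + likelihood_false * (1.0 - prior)
    else:
        num = (1.0 - likelihood_true) * prior
        denom = num + (1.0 - likelihood_false) * (1.0 - prior)
    return num / denom

# Illustrative numbers: P(structure present) = 0.2,
# P(edge detected | present) = 0.9, P(edge detected | absent) = 0.3.
p = posterior(prior=0.2, likelihood_true=0.9, likelihood_false=0.3)
print(round(p, 3))  # 0.18 / (0.18 + 0.24) ≈ 0.429
```

In a real network of the kind the thesis discusses, many such local updates are chained and propagated between nodes; this single update is only the basic building block.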
Explainable temporal data mining techniques to support the prediction task in Medicine
In recent decades, the increasing amount of data available in all fields has raised the need to discover new knowledge and explain the hidden information found. On one hand, the rapid increase of interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, results to users. In the biomedical informatics and computer science communities, there is considerable discussion about the "un-explainable" nature of artificial intelligence, where algorithms and systems often leave users, and even developers, in the dark with respect to how results were obtained. Especially in the biomedical context, the need to explain an artificial intelligence system's result is legitimised by the importance of patient safety. On the other hand, current database systems enable us to store huge quantities of data. Their analysis through data mining techniques provides the possibility to extract relevant knowledge and useful hidden information. Relationships and patterns within these data could provide new medical knowledge. The analysis of such healthcare/medical data collections could greatly help to observe the health conditions of the population and extract useful information that can be exploited in the assessment of healthcare/medical processes. In particular, the prediction of medical events is essential for preventing disease, understanding disease mechanisms, and increasing patient quality of care. In this context, an important aspect is to verify whether the database content supports the capability of predicting future events. In this thesis, we start by addressing the problem of explainability, discussing some of the most significant challenges that need to be addressed with scientific and engineering rigor in a variety of biomedical domains.
We analyze the "temporal component" of explainability, detailing different perspectives such as: the use of temporal data, the temporal task, temporal reasoning, and the dynamics of explainability with respect to the user perspective and to knowledge. Starting from this panorama, we focus our attention on two different temporal data mining techniques. The first, based on trend abstractions, starts from the concept of Trend-Event Pattern and, moving through the concept of prediction, proposes a new kind of predictive temporal pattern, namely Predictive Trend-Event Patterns (PTE-Ps). The framework aims to combine complex temporal features to extract a compact and non-redundant predictive set of patterns composed of such temporal features. With the second, based on functional dependencies, we propose a methodology for deriving a new kind of approximate temporal functional dependency, called Approximate Predictive Functional Dependencies (APFDs), based on a three-window framework. We then discuss the concept of approximation, the data complexity of deriving an APFD, the introduction of two new error measures, and finally the quality of APFDs in terms of coverage and reliability. Exploiting these methodologies, we analyze intensive care unit data from the MIMIC dataset.
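The trend abstractions underlying the first technique can be sketched in a few lines: a raw time series is mapped to labelled intervals (increasing, decreasing, stationary) from which patterns are then mined. The function below is a generic illustration under assumed conventions; the thesis's actual Trend-Event Pattern definitions are richer.

```python
def trend_abstraction(values, eps=0.0):
    """Abstract a numeric time series into trend intervals.

    Each consecutive difference is labelled 'I' (increasing),
    'D' (decreasing) or 'S' (stationary, |diff| <= eps), and
    runs of equal labels are merged into (label, start, end)
    index intervals over the original series.
    """
    labels = []
    for a, b in zip(values, values[1:]):
        d = b - a
        labels.append('I' if d > eps else 'D' if d < -eps else 'S')
    intervals = []
    for i, lab in enumerate(labels):
        if intervals and intervals[-1][0] == lab:
            # Extend the current run to cover one more point.
            intervals[-1] = (lab, intervals[-1][1], i + 1)
        else:
            intervals.append((lab, i, i + 1))
    return intervals

# Example: rise, plateau, fall.
print(trend_abstraction([1, 2, 3, 3, 2]))
# [('I', 0, 2), ('S', 2, 3), ('D', 3, 4)]
```

Predictive patterns are then mined over sequences of such labelled intervals rather than over the raw measurements, which is what makes the resulting rules compact and human-readable.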
High-Performance Modelling and Simulation for Big Data Applications
This open access book was prepared as a Final Publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to afford better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. It is then arguably required to have a seamless interaction of High Performance Computing with Modelling and Simulation in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.