
    Acta Polytechnica Hungarica 2015


    Advanced Signal Processing and Control in Anaesthesia

    This thesis comprises three major stages: classification of depth of anaesthesia (DOA); modelling a typical patient's behaviour during a surgical procedure; and control of DOA with simultaneous administration of propofol and remifentanil. Clinical data gathered in the operating theatre were used in this project. Multiresolution wavelet analysis was used to extract meaningful features from the auditory evoked potentials (AEP). These features were classified into different DOA levels using a fuzzy relational classifier (FRC), which combines fuzzy clustering and fuzzy relational composition. The FRC performed well and was able to distinguish between the DOA levels. A hybrid patient model was developed for the induction and maintenance phases of anaesthesia. An adaptive network-based fuzzy inference system was used to adapt Takagi-Sugeno-Kang (TSK) fuzzy models relating systolic arterial pressure (SAP), heart rate (HR), and the wavelet-extracted AEP features to the effect concentrations of propofol and remifentanil. The effect of surgical stimuli on SAP and HR, and the analgesic properties of remifentanil, were described by Mamdani fuzzy models constructed in cooperation with anaesthetists. The model proved adequate, reflecting the effect of drugs and surgical stimuli. A multivariable fuzzy controller was developed for the simultaneous administration of propofol and remifentanil. The controller is based on linguistic rules that interact with three decision tables, one of which represents a fuzzy PI controller. The infusion rates of the two drugs are determined according to the DOA level and the surgical stimulus. Remifentanil is titrated according to the required analgesia level and its synergistic interaction with propofol. The controller was able to achieve and maintain the target DOA level adequately under different conditions.
Overall, it was possible to model the interaction between propofol and remifentanil, and to use this model successfully to develop a closed-loop anaesthesia system.
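A single step of a fuzzy rule-based decision table like the fuzzy PI component described above can be sketched as follows. The membership functions, rule consequents, and units are hypothetical illustrations chosen for the example, not the thesis' actual controller.

```python
# Minimal sketch of one fuzzy rule-based control step: map the DOA error
# to an infusion-rate increment. All shapes and values are illustrative.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infusion_increment(doa_error):
    """DOA error (target minus measured, normalized to [-1, 1]) -> rate increment
    via three rules and weighted-average defuzzification."""
    # Rule antecedents: error is Negative / Zero / Positive
    w = [tri(doa_error, -2.0, -1.0, 0.0),   # too deep  -> reduce drug
         tri(doa_error, -1.0,  0.0, 1.0),   # on target -> hold
         tri(doa_error,  0.0,  1.0, 2.0)]   # too light -> increase drug
    # Rule consequents: illustrative rate increments (arbitrary units)
    out = [-1.0, 0.0, 1.0]
    s = sum(w)
    return sum(wi * oi for wi, oi in zip(w, out)) / s if s else 0.0
```

In a full controller, such a step would be combined with the synergy model for the two drugs; here it only shows the mechanics of fuzzification, rule firing, and defuzzification.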

    INVESTIGATIONS ON COGNITIVE COMPUTATION AND COMPUTATIONAL COGNITION

    This Thesis describes our work at the boundary between Computer Science and Cognitive (Neuro)Science. In particular, (1) we worked on methodological improvements to clustering-based meta-analysis of neuroimaging data, a technique for quantitatively and collectively assessing activation peaks from several functional imaging studies, in order to extract the most robust results in the cognitive domain of interest. Hierarchical clustering is often used in this context, yet it is prone to non-uniqueness of the solution: a different permutation of the same input data might yield a different clustering result. In this Thesis, we propose a new version of hierarchical clustering that solves this problem. We also show the results of a meta-analysis, carried out with this algorithm, aimed at identifying specific cerebral circuits involved in single-word reading. Moreover, (2) we describe preliminary work on a new connectionist model of single-word reading, named the two-component model because it postulates a cascaded information flow from a more cognitive component, which computes a distributed internal representation of the input word, to an articulatory component, which translates this code into the corresponding sequence of phonemes. Output production starts when the internal code, which evolves in time, reaches a sufficient degree of clarity; this mechanism has been advanced as a possible explanation for behavioral effects consistently reported in the reading literature, with a specific focus on so-called serial effects. The model is discussed here in its strengths and weaknesses. Finally, (3) we consider how features typical of human cognition can inform the design of improved artificial agents; here, we focused on modelling concepts inspired by emotion theory.
A model of emotional interaction between artificial agents, based on probabilistic finite state automata, is presented: in this model, agents have personalities and attitudes that can change over the course of an interaction (e.g. by reinforcement learning) to achieve autonomous adaptation to the interaction partner. Markov chain properties are then applied to derive reliable predictions of the outcome of an interaction. Taken together, these works show how the interplay between Cognitive Science and Computer Science can be fruitful, both for advancing our knowledge of the human brain and for designing increasingly intelligent artificial systems.
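The Markov-chain prediction step can be sketched as follows: given the transition matrix of a probabilistic automaton, the long-run behaviour of an interaction is approximated by the chain's stationary distribution. The two-state "emotional" matrix below is invented for the illustration and is not a model from the thesis.

```python
# Sketch: long-run outcome of an agent interaction modelled as a Markov chain.
import numpy as np

def stationary(P, iters=500):
    """Approximate the stationary distribution of a stochastic matrix P by
    power iteration: propagate a uniform start distribution through P."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

# Hypothetical two-state automaton: state 0 = cooperative, state 1 = hostile;
# rows index the current state, columns the next state.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = stationary(P)   # long-run fraction of time spent in each state
```

For this matrix the chain settles at roughly 5/6 of the time in the cooperative state, regardless of the starting state, which is the kind of outcome prediction the abstract refers to.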

    Pertanika Journal of Science & Technology


    Geographic Information Systems and Science

    Geographic information science (GISc) has established itself as a collaborative information-processing discipline of growing popularity. Yet this interdisciplinary and/or transdisciplinary field is still somewhat misunderstood. This book covers some of the GISc domains of interest to students, researchers, and everyday users. Chapters focus on important aspects of GISc, keeping in mind the processing capability of GIS along with the mathematics and formulae involved in reaching each solution. The book has one introductory and eight main chapters divided into five sections. The first section is more general and focuses on what GISc is and its relation to GIS and Geography; the second is about location analytics and modeling; the third covers remote sensing data analysis; the fourth big data and augmented reality; and the fifth volunteered geographic information.

    Beurteilung der Resttragfähigkeit von Bauwerken mit Hilfe der Fuzzy-Logik und Entscheidungstheorie (Assessment of the residual load-bearing capacity of structures using fuzzy logic and decision theory)

    Whereas the design of new structures is almost completely regulated by codes, there are no objective procedures for evaluating existing facilities. Experts are often not familiar with the new tasks in system identification and try to retrieve at least some information from available documents. They therefore make compromises which, for many stakeholders, are not satisfactory. Consequently, this publication presents a more objective and more realistic method for condition assessment. The necessary foundations for this task are fracture mechanics combined with computational analysis, methods and techniques for geometry recording and material investigation, ductility and energy dissipation, and risk analysis with explicit treatment of uncertainty. Current evaluation tools investigate how to conceptualize a structure analytically, directly from the given loads and the measured response. Since defects are not necessarily visible or directly detectable, several damage indices are combined and integrated into a model of the real system. Fuzzy sets are ideally suited to represent parametric (data) uncertainty as well as system or model uncertainty. Trapezoidal membership functions can represent the condition state of structural components very well as a function of damage extent or performance. The residual load-bearing capacity can then be determined by performing analyses in three successive steps. The "screening assessment" eliminates the large majority of structures from detailed consideration and advises on immediate precautions to save lives and high economic values. Here, the defects have to be explicitly defined and located. If this is impossible, an "approximate evaluation" should follow, describing system geometry, material properties, and failure modes in detail. Here, a fault tree helps investigate defects systematically, avoiding random search or neglect of important features or damage indices.
Because of its conceptual clarity and its simplicity of application, this systematic representation of the structural system is an important prerequisite in condition assessment, though special circumstances might require "further investigations" to consider the actual material parameters and unaccounted reserves due to spatial or other secondary contributions. Here, uncertainties with respect to geometry, material, loading, or modeling should in no case be neglected, but explicitly quantified. Postulating a limited set of expected failure modes is not always sufficient, since detectable signature changes are seldom directly attributable, and every defect might, together with other unforeseen situations, become decisive. A determination of all possible scenarios would therefore be required to consider every imaginable influence. Risk results from a combination of various ill-defined failure modes; due to the interaction of many variables there is no simple and reliable way to predict which failure mode is dominant. Risk evaluation therefore comprises the estimation of the prognostic factor with respect to undesirable events, the component importance, and the expected damage extent.
While the design of structures is generally regulated by codes, there are as yet no objective guidelines for the condition assessment of existing structures. Many experts are not yet familiar with this new problem (system identification from the loading and the resulting structural response) and therefore content themselves with compromise solutions. For many owners this is unsatisfactory, which is why a more objective and more realistic condition assessment is presented here. Important foundations for this are the theory of damage analysis, methods and techniques for geometry and material investigation, ductility and energy absorption, risk analysis, and the description of uncertainties.
Since not all damage is obvious, several condition indicators are currently combined; the recorded data are purposefully processed and, before a final assessment, integrated into a validated model. If deterministic verification methods are combined with probabilistic ones, only random errors can be minimized without difficulty; systematic errors due to inaccurate modelling or vague knowledge remain. It is therefore unavoidable that decision-makers judge subjectively on the basis of uncertain, often even contradictory, information. This work shows how a three-stage assessment procedure can classify structural members into quality classes. Their risk of failure follows from their mean damage extent, their structural importance I (which in turn depends on their significance and on the consequences of their damage), and their prognostic factor L. The failure risk of the overall structure is determined from the topology. If the mean damage extent cannot be determined unambiguously, or if the material, geometry, or load data are vague, a mathematical procedure based on fuzzy logic is proposed within the framework of "further investigations". Even for complex cause-effect relationships, it filters out the dominant damage cause and prevents parameters afflicted with uncertainty from being mistaken for reliable absolute values. To compute the mean damage index and, from it, the risk, the individual damage indices (one per failure mode) are assigned weighting factors according to their importance and are additionally divided by a factor gamma according to the type, importance, and reliability of the information obtained. This constitutes a new procedure for analysing complex failure mechanisms that permits traceable conclusions.
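The trapezoidal membership functions mentioned above can be sketched in a few lines. The condition classes and their boundaries below are hypothetical illustrations over a normalized damage extent, not the calibration used in the work.

```python
# Sketch: trapezoidal membership functions for structural condition classes.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical condition classes over a normalized damage extent in [0, 1]
condition = {
    "good":     lambda x: trapezoid(x, -0.1, 0.0, 0.2, 0.4),
    "damaged":  lambda x: trapezoid(x,  0.2, 0.4, 0.6, 0.8),
    "critical": lambda x: trapezoid(x,  0.6, 0.8, 1.0, 1.1),
}

# A damage extent of 0.3 belongs partly to "good" and partly to "damaged",
# which is exactly the graded classification the abstract describes.
memberships = {name: mu(0.3) for name, mu in condition.items()}
```

The overlap between neighbouring classes is what lets an uncertain damage extent carry partial membership in two quality classes instead of being forced into one.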

    Proceedings of the 5th International Workshop "What can FCA do for Artificial Intelligence?", FCA4AI 2016 (co-located with ECAI 2016, The Hague, Netherlands, August 30th 2016)

    These are the proceedings of the fifth edition of the FCA4AI workshop (http://www.fca4ai.hse.ru/). Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at data analysis and classification that can be used for many purposes, especially for Artificial Intelligence (AI) needs. The objective of the FCA4AI workshop is to investigate two main issues: how FCA can support various AI activities (knowledge discovery, knowledge representation and reasoning, learning, data mining, NLP, information retrieval), and how FCA can be extended to help AI researchers solve new and complex problems in their domains. Accordingly, the topics of interest are: (i) extensions of FCA for AI: pattern structures, projections, abstractions; (ii) knowledge discovery based on FCA: classification, data mining, pattern mining, functional dependencies, biclustering, stability, visualization; (iii) knowledge processing based on concept lattices: modeling, representation, reasoning; (iv) application domains: natural language processing, information retrieval, recommendation, mining of the web of data and of social networks, etc.
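The core FCA machinery behind these topics can be illustrated with the two derivation operators on a toy formal context. The context below (objects with attributes) is invented for the example.

```python
# Sketch of FCA derivation operators on a toy formal context.
context = {
    "duck":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
    "carp":  {"swims"},
}
ALL_ATTRS = set().union(*context.values())

def common_attributes(objs):
    """Derivation A -> A': attributes shared by every object in A."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(ALL_ATTRS)

def objects_with(attrs):
    """Derivation B -> B': objects possessing every attribute in B."""
    return {o for o, s in context.items() if attrs <= s}

# A pair (A, B) is a formal concept when A' = B and B' = A.
A = objects_with({"flies"})    # extent: the flying animals
B = common_attributes(A)       # intent: everything they all share
```

Here (A, B) is closed under both operators, i.e. a formal concept; enumerating all such pairs and ordering them by extent inclusion yields the concept lattice that the workshop topics build on.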

    Fuzzy Logic

    Fuzzy logic is becoming an essential method of solving problems in many domains and has a tremendous impact on the design of autonomous intelligent systems. The purpose of this book is to introduce hybrid algorithms, techniques, and implementations of fuzzy logic. The book consists of thirteen chapters highlighting models and principles of fuzzy logic and issues concerning its techniques and implementations. The intended readers are engineers, researchers, and graduate students interested in fuzzy logic systems.

    Innovative techniques to devise 3D-printed anatomical brain phantoms for morpho-functional medical imaging

    Introduction. This Ph.D. thesis addresses the development of innovative techniques to create 3D-printed anatomical brain phantoms, which can be used for quantitative technical assessments of morpho-functional imaging devices, providing a simulation accuracy not obtainable with currently available phantoms. 3D printing (3DP) technology is paving the way for advanced anatomical modelling in biomedical applications. Despite the potential 3DP has already shown in this field, it is still little used for the realization of anthropomorphic phantoms of human organs with complex internal structures. Making an anthropomorphic phantom is very different from making a simple anatomical model, and 3DP is still far from being plug-and-print. Hence the need to develop ad hoc techniques providing innovative solutions for the realization of anatomical phantoms with unique characteristics and greater ease of use. Aim. The thesis explores the entire workflow (brain MRI image segmentation, 3D modelling, and materialization) developed to prototype a new complex anthropomorphic brain phantom that can simulate three brain compartments simultaneously: grey matter (GM), white matter (WM), and the striatum (caudate nucleus and putamen, known to show high uptake in nuclear medicine studies). The three separate chambers of the phantom will be filled with tissue-appropriate solutions characterized by different concentrations of radioisotope for PET/SPECT, para-/ferro-magnetic metals for MRI, and iodine for CT imaging. Methods. First, to design a 3D model of the brain phantom, it is necessary to segment the MRI images and to extract an error-free STL (Standard Tessellation Language) description. Then it is possible to materialize the prototype and test its functionality. - Image segmentation. Segmentation is one of the most critical steps in modelling. To this end, after demonstrating the proof of concept, a multi-parametric segmentation approach based on brain relaxometry was proposed.
It includes a pre-processing step to estimate relaxation-parameter maps (R1 = longitudinal relaxation rate, R2 = transverse relaxation rate, PD = proton density) from the signal intensities provided by the MRI sequences of routine clinical protocols (3D-GrE T1-weighted, FLAIR, and fast-T2-weighted sequences with ≤ 3 mm slice thickness). In the past, maps of R1, R2, and PD were obtained from Conventional Spin Echo (CSE) sequences, which are no longer suitable for clinical practice due to long acquisition times. By rehabilitating multi-parametric segmentation based on relaxometry, the estimation of pseudo-relaxation maps allowed the development of an innovative method for the simultaneous automatic segmentation of most of the brain structures (GM, WM, cerebrospinal fluid, thalamus, caudate nucleus, putamen, pallidus, nigra, red nucleus, and dentate). This method allows the segmentation of higher-resolution brain images for future brain phantom enhancements. - STL extraction. After segmentation, the 3D model of the phantom is described in STL format, which approximates shapes as a manifold mesh (i.e., a collection of triangles that is continuous, without holes, and encloses a positive, non-zero volume). For this purpose, we developed an automatic procedure to extract a single voxelized surface, tracing the anatomical interface between the phantom's compartments directly on the segmented images. Two tubes were designed for each compartment (one for filling and the other to facilitate the escape of air). The procedure automatically checks the continuity of the surface, ensuring that the 3D model can be exported in STL format, without errors, using common image-to-STL conversion software. Threaded junctions were added to the phantom (for hermetic closure) using mesh-processing software. The phantom's 3D model proved correct and ready for 3DP. - Prototyping. Finally, the most suitable 3DP technology was identified for the materialization.
We investigated a material-extrusion technology, Fused Deposition Modeling (FDM), and a material-jetting technology, PolyJet. FDM proved the best candidate for our purposes. It allowed the phantom's hollow compartments to be materialized in a single print, without having to print them in several parts for later reassembly, and its soluble internal support structures were completely removable after materialization, unlike PolyJet supports. A critical aspect, which required considerable effort to optimize the printing parameters, was the submillimetre thickness of the phantom walls, necessary to avoid distorting the imaging simulation, even though 3D-printer manufacturers recommend maintaining a uniform wall thickness of at least 1 mm. Optimization of the printing path made it possible to obtain strong, but not completely waterproof, walls approximately 0.5 mm thick. A sophisticated technique, based on a polyvinyl-acetate solution, was developed to waterproof the internal and external phantom walls (a necessary requirement for filling). A filling system was also designed to minimize residual air bubbles, which could result in unwanted hypo-intense (dark) areas in phantom-based imaging simulations. Discussion and conclusions. The phantom prototype was scanned with CT and PET/CT to evaluate the realism of the brain simulation. None of the state-of-the-art brain phantoms allows such anatomical rendering of three brain compartments: some represent only GM and WM, others only the striatum. Moreover, they typically have poor anatomical yield, showing reduced depth of the sulci and a not very faithful reproduction of the cerebral convolutions. The ability to simulate the three brain compartments simultaneously with greater accuracy, as well as the possibility of carrying out multimodality studies (PET/CT, PET/MRI), which represent the frontier of diagnostic imaging, gives this device cutting-edge characteristics.
The effort to further customize 3DP technology for these applications is expected to increase significantly in the coming years.
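The mesh-validity requirement mentioned above (a closed, consistently oriented STL mesh must enclose a positive, non-zero volume) admits a simple numerical check: summing signed tetrahedron volumes over the faces, via the divergence theorem. The tetrahedron below is a hypothetical stand-in for an exported phantom mesh, not data from the thesis.

```python
# Sketch: signed volume of a closed triangle mesh as an STL sanity check.

def signed_volume(triangles):
    """Sum over faces of the signed tetrahedron volume v0 . (v1 x v2) / 6
    spanned by each triangle and the origin; positive for a closed,
    outward-oriented mesh."""
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) in triangles:
        total += (x0 * (y1 * z2 - y2 * z1)
                  - y0 * (x1 * z2 - x2 * z1)
                  + z0 * (x1 * y2 - x2 * y1)) / 6.0
    return total

# Unit right tetrahedron with outward-oriented faces: expected volume 1/6.
tet = [
    ((0, 0, 0), (0, 1, 0), (1, 0, 0)),   # faces containing the origin
    ((0, 0, 0), (1, 0, 0), (0, 0, 1)),   # contribute zero to the sum
    ((0, 0, 0), (0, 0, 1), (0, 1, 0)),
    ((1, 0, 0), (0, 1, 0), (0, 0, 1)),   # the slanted face carries the volume
]
```

A zero or negative result flags a mesh that is open, degenerate, or inconsistently wound, which is exactly the kind of error the automatic STL-extraction procedure described above must rule out before printing.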