
    The Care2Report System: Automated Medical Reporting as an Integrated Solution to Reduce Administrative Burden in Healthcare

    Documenting patient medical information in the electronic medical record is a time-consuming task that comes at the expense of direct patient care. We propose an integrated solution to automate the process of medical reporting. This vision is enabled through the integration of speech and action recognition technology with semantic interpretation based on knowledge graphs. This paper presents our dialogue summarization pipeline, which transforms speech into a medical report via transcription and formal representation. We discuss the functional and technical architecture of our Care2Report system, along with an initial system evaluation using data from real consultation sessions.
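    The abstract outlines a pipeline from consultation speech to a structured report via a formal, triple-based representation. The sketch below illustrates that shape only in miniature; the function names, the keyword-based interpretation, and the report sections are invented here and are not the Care2Report implementation, which relies on real speech recognition and knowledge graphs.

```python
# Hypothetical sketch of a consultation-to-report flow: utterances are mapped
# to subject-predicate-object triples, and the triples are rendered as a report.
# Rules and section names are placeholders, not the authors' ontology.
from dataclasses import dataclass

@dataclass
class Triple:
    subject: str
    predicate: str
    obj: str

def interpret(transcript: list[str]) -> list[Triple]:
    """Toy semantic interpretation: keyword rules stand in for a knowledge graph."""
    triples = []
    for utterance in transcript:
        text = utterance.lower()
        if "headache" in text:
            triples.append(Triple("patient", "reportsSymptom", "headache"))
        if "paracetamol" in text:
            triples.append(Triple("physician", "prescribes", "paracetamol"))
    return triples

def render_report(triples: list[Triple]) -> str:
    """Turn triples into a minimal two-section report."""
    symptoms = [t.obj for t in triples if t.predicate == "reportsSymptom"]
    plan = [t.obj for t in triples if t.predicate == "prescribes"]
    return "Symptoms: {}\nPlan: {}".format(", ".join(symptoms) or "n/a",
                                           ", ".join(plan) or "n/a")

transcript = ["I have had a headache for two days", "Let's start with paracetamol"]
print(render_report(interpret(transcript)))
```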

    Facilitating and Enhancing the Performance of Model Selection for Energy Time Series Forecasting in Cluster Computing Environments

    Applying Machine Learning (ML) manually to a given problem setting is a tedious and time-consuming process that brings many challenges with it, especially in the context of Big Data. In such a context, gaining insightful information, finding patterns, and extracting knowledge from large datasets are complex tasks. Additionally, the configuration of the underlying Big Data infrastructure adds further complexity to setting up and running ML tasks. With the growing interest in ML over the last few years, people without extensive ML expertise in particular demand frameworks that assist them in applying the right ML algorithm to their problem setting. This is especially true in the field of smart energy system applications, where more and more ML algorithms are used, e.g., for time series forecasting.

    Generally, two groups of non-expert users performing energy time series forecasting can be distinguished. The first includes users who are familiar with statistics and ML but are unable to write the programming code required to train and evaluate ML models using the well-known trial-and-error approach; such an approach is time-consuming and wastes resources on constructing multiple models. The second group is even less experienced in programming and not knowledgeable in statistics and ML, but wants to apply given ML solutions to its problem settings. The goal of this thesis is to scientifically explore, in the context of concrete use cases in the energy domain, how such non-expert users can best be supported in creating and performing ML tasks in practice in cluster computing environments.

    To support the first group of non-expert users, an easy-to-use, modular, extendable, microservice-based ML solution for instrumenting and evaluating ML algorithms on top of a Big Data technology stack is conceptualized and evaluated. The proposed solution facilitates the trial-and-error approach by hiding low-level complexities from the users and establishes favourable conditions for performing ML tasks efficiently in cluster computing environments.

    To support the second group of non-expert users, the first solution is extended to realize meta learning approaches for automated model selection. We evaluate how meta learning technology can be applied efficiently to the problem space of data analytics for smart energy systems, to assist energy system experts who are not data analytics experts in applying the right ML algorithms to their data analytics problems. Enhancing the predictive performance of meta learning requires an efficient characterization of energy time series datasets. To this end, Descriptive Statistics Time based Meta Features (DSTMF), a new set of meta features, is designed to accurately capture the deep characteristics of energy time series datasets. We find that DSTMF outperforms the other state-of-the-art meta feature sets introduced in the literature for characterizing energy time series datasets, both in terms of the accuracy of the meta learning models and in the time needed to extract the features.

    The predictive performance of the meta learning classification model is further enhanced by training the meta learner on new, efficient meta examples. To this end, we propose two new approaches for generating new energy time series datasets to be used as training meta examples by the meta learner, depending on the type of time series dataset (i.e., energy generation or energy consumption time series). We find that extending the original training sets with meta examples generated by our approaches outperforms extending them with newly simulated energy time series datasets.
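    As a concrete illustration of the model-selection idea, the sketch below summarizes each time series with a handful of descriptive statistics and trains a classifier that maps these meta features to the best-performing forecasting algorithm. The feature list, the labels, and the random data are invented stand-ins; the actual DSTMF definition and its evaluation are given in the thesis.

```python
# Hedged sketch of meta learning for model selection: describe each series with
# simple descriptive statistics (a stand-in for the DSTMF features) and train a
# classifier that recommends a forecasting algorithm for unseen series.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def meta_features(series: np.ndarray) -> np.ndarray:
    """Toy descriptive-statistics features; the real DSTMF set is richer."""
    diffs = np.diff(series)
    return np.array([
        series.mean(), series.std(), series.min(), series.max(),
        diffs.mean(), diffs.std(),                     # trend/volatility hints
        np.corrcoef(series[:-24], series[24:])[0, 1],  # lag-24 autocorrelation
    ])

rng = np.random.default_rng(0)
# Hypothetical meta dataset: one row per historical time series, labelled with
# the forecasting model that performed best on it (labels are placeholders).
X = np.stack([meta_features(rng.normal(size=200).cumsum()) for _ in range(50)])
y = rng.choice(["arima", "random_forest", "neural_net"], size=50)

meta_learner = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_series = rng.normal(size=200).cumsum()
print("Recommended model:", meta_learner.predict([meta_features(new_series)])[0])
```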

    Generic analysis support for understanding, evaluating and comparing enterprise architecture models

    Enterprise Architecture Management (EAM) is one means to deal with the increasing complexity of today’s IT landscapes. Architectural models are used within EAM to describe the business processes, the applications in use, the required infrastructure, and the dependencies between them. Creating these models is expensive, since the whole organization, and therewith a large amount of data, has to be considered. It is therefore important to make use of these models and reuse them for planning purposes and decision making. The models are a solid foundation for various kinds of analyses that support their understanding, evaluation and comparison. Analyses can approximate the effects of retiring an application or of a server failure. It is also possible to quantify the models using metrics such as the IT coverage of business processes or the workload of a server. The generation of views sets the focus on a specific aspect of the model, for example by limiting it to the processes and applications of a specific organizational unit. Architectural models can also be used for planning purposes: the development of a target architecture is supported by identifying weak points and evaluating planning scenarios.

    Current approaches to EAM analysis are typically isolated and address only a limited subset of the different analysis goals. An integrated approach that covers the different information demands of the stakeholders is missing. Additionally, existing analysis approaches are highly dependent on the utilized meta model. This is a serious problem, since the EAM domain is characterized by a large variety of frameworks and meta models.

    In this thesis, we propose a generic framework that supports the different analysis activities during EAM. We develop the techniques required for the specification and execution of analyses, independently of the utilized meta model. An analysis language is implemented for the definition and customization of analyses according to the current needs of the stakeholder; thereby, we focus on reuse and a generic definition. We utilize a generic representation format in order to abstract from the great variety of meta models used in the EAM domain. The execution of the analyses relies on Semantic Web technologies and data-flow based model analysis.

    The framework is applied to the identification of weak points as well as to the evaluation of planning scenarios regarding consistency of changes and goal fulfillment. Two methods are developed for these tasks, and the respective analysis support, for example a change impact analysis, specific metrics, and the scoping of the architectural model according to different aspects, is identified and implemented. Finally, the coverage of the framework with respect to existing EA analysis approaches is determined in a scenario-based evaluation. The applicability and relevance of the language and of the proposed methods are demonstrated in three large case studies.
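    One of the analyses named above, change impact analysis, can be pictured on a meta-model-agnostic graph of architecture elements. The sketch below uses an invented three-layer example (processes, applications, servers) and plain Python; the thesis itself realizes such analyses with an analysis language, Semantic Web technologies, and data-flow based model analysis.

```python
# Minimal change impact analysis over a dependency graph: nodes are architecture
# elements, edges are "depends on" relations. Element names are illustrative.
from collections import deque

# depends_on[x] = elements that x directly depends on (process -> app -> server)
depends_on = {
    "Order process":   ["CRM app"],
    "Billing process": ["ERP app"],
    "CRM app":         ["Server A"],
    "ERP app":         ["Server A", "Server B"],
}

def impacted_by(failed_element: str) -> set[str]:
    """Return every element that transitively depends on the failed one."""
    reverse = {}
    for src, targets in depends_on.items():
        for t in targets:
            reverse.setdefault(t, []).append(src)
    impacted, queue = set(), deque([failed_element])
    while queue:
        for dependant in reverse.get(queue.popleft(), []):
            if dependant not in impacted:
                impacted.add(dependant)
                queue.append(dependant)
    return impacted

# A failure of Server A impacts both apps and, transitively, both processes.
print(impacted_by("Server A"))
```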

    Technical implementation of an architecture oriented towards integrating heterogeneous external knowledge into a rules engine

    In a globalized business context, where the completeness of information is the sum of several parts, solving problems becomes a task that involves time, analysis and experience. An organization sees its scope of action limited because it needs information from third parties to evaluate a collection of data in a complete and integral way. To overcome these problems, we propose implementing a rules engine capable of interacting with services through rules, using JSON as the data exchange messaging format. The proposed model improves the knowledge capability by sharing information between heterogeneous systems, using community standards to solve complex problems.
    Workshop: WISS - Innovación en Sistemas de Software. Red de Universidades con Carreras en Informática.
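    A rough illustration of the proposed interaction pattern, under the assumptions of this sketch only: a small rule engine enriches its facts with JSON returned by an external service before firing rules. The service, field names, and rules are illustrative and do not come from the paper.

```python
# Hedged sketch: rules that consult external knowledge delivered as JSON.
# The external call is stubbed; endpoint, fields and rule names are invented.
import json

def external_credit_service(customer_id: str) -> dict:
    """Stand-in for an HTTP call to a partner service returning JSON."""
    return json.loads('{"customer_id": "%s", "credit_score": 710}' % customer_id)

RULES = [
    # (name, condition over the combined facts, action)
    ("approve_order",
     lambda f: f["order_total"] < 5000 and f["credit_score"] >= 650,
     lambda f: f.update(decision="approved")),
    ("manual_review",
     lambda f: f["credit_score"] < 650,
     lambda f: f.update(decision="manual review")),
]

def evaluate(order: dict) -> dict:
    facts = dict(order)
    facts.update(external_credit_service(order["customer_id"]))  # enrich with external JSON
    for name, condition, action in RULES:
        if condition(facts):
            action(facts)
            facts["fired_rule"] = name
            break
    return facts

print(evaluate({"customer_id": "C-42", "order_total": 1200}))
```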

    Clinical protocol management tool

    Decision support systems are currently important tools to guide clinicians’ decisions and to help with patients’ treatments. These systems have been studied over the last decades, leading to some well-defined best practices for building new solutions. This project had the objective of building a clinical decision support system with a core engine based on predefined rules that can be customized by end-users; these rules form the base premises that define the procedure to apply, that is, the structure of the clinical protocol. The main motivation for this work was the treatment of diabetic inpatients and outpatients in hospital services other than endocrinology. To keep the solution generic, the system does not depend on any specific patient data nor on specific protocols. The application follows the client-server model, based on a microservice architecture, and provides a modern web user interface. The project was carried out in close collaboration with the Hospital Center of Baixo do Vouga, resulting in a solution that can assist health professionals in the treatment of patients, reducing the risk of errors and providing better monitoring of health care services. Master's dissertation in Engenharia de Computadores e Telemática.
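    The core idea, protocols defined as editable rules and evaluated by a generic engine that is agnostic to specific patient fields, can be pictured as below. Thresholds, field names, and actions are illustrative placeholders and are not the hospital's actual protocol or the project's data model.

```python
# Minimal sketch of a data-driven clinical protocol evaluated by a generic
# rule engine. All values are illustrative examples, not medical guidance.
PROTOCOL = {
    "name": "glycaemia monitoring (example)",
    "rules": [  # evaluated top-down, first match wins
        {"field": "glucose_mg_dl", "op": "lt",  "value": 70,  "action": "alert: hypoglycaemia"},
        {"field": "glucose_mg_dl", "op": "gt",  "value": 180, "action": "alert: hyperglycaemia"},
        {"field": "glucose_mg_dl", "op": "gte", "value": 70,  "action": "continue routine monitoring"},
    ],
}

OPS = {"lt": lambda a, b: a < b, "gt": lambda a, b: a > b, "gte": lambda a, b: a >= b}

def apply_protocol(protocol: dict, measurement: dict) -> str:
    """Generic evaluation: the engine only knows rules, not specific fields."""
    for rule in protocol["rules"]:
        if OPS[rule["op"]](measurement[rule["field"]], rule["value"]):
            return rule["action"]
    return "no rule matched"

print(apply_protocol(PROTOCOL, {"glucose_mg_dl": 205}))  # alert: hyperglycaemia
```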

    Health State Estimation

    Life's most valuable asset is health. Continuously understanding the state of our health and modeling how it evolves is essential if we wish to improve it. Given that people today live with more data about their lives than at any other time in history, the challenge rests in interweaving this data with the growing body of knowledge to continually compute and model the health state of an individual. This dissertation presents an approach to build a personal model and dynamically estimate the health state of an individual by fusing multi-modal data and domain knowledge. The system is stitched together from four essential abstraction elements: 1. the events in our life, 2. the layers of our biological systems (from the molecular level to the organism), 3. the functional utilities that arise from biological underpinnings, and 4. how we interact with these utilities in the reality of daily life. Connecting these four elements via graph network blocks forms the backbone by which we instantiate a digital twin of an individual. Edges and nodes in this graph structure are then regularly updated with learning techniques as data is continuously digested. Experiments demonstrate the use of dense and heterogeneous real-world data from a variety of personal and environmental sensors to monitor individual cardiovascular health state. State estimation and individual modeling are the fundamental basis for departing from disease-oriented approaches to a total health continuum paradigm. Precision in predicting health requires understanding state trajectory. By encasing this estimation within a navigational approach, a systematic guidance framework can plan actions to transition a current state towards a desired one. This work concludes by presenting this framework of combining the health state and personal graph model to perpetually plan and assist us in living life towards our goals. Ph.D. dissertation, University of California, Irvine.
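    The four abstraction elements and the continuously updated graph can be caricatured with a tiny example. Node names, layer labels, and the update rule below are invented for illustration; the dissertation builds the personal model with graph network blocks and learning techniques, not a fixed smoothing rule.

```python
# Toy personal graph: nodes are tagged with one of the four abstraction layers
# (event, biology, utility, interaction) and edge weights are nudged as new
# data arrives. Everything here is a placeholder example.
import networkx as nx

G = nx.DiGraph()
G.add_node("evening run",      layer="event")
G.add_node("cardiovascular",   layer="biology")
G.add_node("aerobic capacity", layer="utility")
G.add_node("daily step count", layer="interaction")

G.add_edge("daily step count", "evening run", weight=0.5)
G.add_edge("evening run", "cardiovascular", weight=0.5)
G.add_edge("cardiovascular", "aerobic capacity", weight=0.5)

def ingest(observation: float, edge: tuple[str, str], rate: float = 0.1) -> None:
    """Toy online update: move an edge weight toward the latest observation."""
    w = G[edge[0]][edge[1]]["weight"]
    G[edge[0]][edge[1]]["weight"] = (1 - rate) * w + rate * observation

ingest(0.9, ("evening run", "cardiovascular"))  # new sensor evidence arrives
print(G["evening run"]["cardiovascular"]["weight"])
```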

    Processing of crew delivery messages in an airline's information network

    Project supervisor: Cand. Sc. (Eng.), Assoc. Prof. Сураєв Вадим Федорович. In the modern world and in Ukraine, aviation ranks first in the transportation of passengers and cargo and is undoubtedly the most convenient and fastest mode of transport. The main task of aviation is to ensure the safety of passengers. Flight safety is affected by several factors: the reliability of aircraft and ground equipment, the quality of flight training, climate, and the quality of the control system. Analyzing plane crashes, scientists conclude that the main factor is errors made by pilots or controllers.