
    Automatic production and integration of knowledge to the support of the decision and planning activities in medical-clinical diagnosis, treatment and prognosis.

    The concept of medical procedure refers to the set of activities carried out by health care professionals to solve or mitigate the health problems that affect a patient. Decision making within a medical procedure has long been one of the most interesting research areas in medical informatics, and it is the research context of this thesis. The motivation for this research rests on three main aspects: there are no knowledge models for all the medical-clinical activities that can be induced from medical data, there are no inductive learning solutions for all the medical-clinical activities, and there is no integral model that formalizes the concept of medical procedure. Therefore, our main objective is to develop a computable, knowledge-based model that integrates all the decision and planning activities for medical-clinical diagnosis, treatment and prognosis. To achieve this objective, we first explain the research problem. Second, we describe the background of the work from both the medical and the informatics contexts. Third, we explain the development of the research proposal, based on four main contributions: a novel data-driven knowledge representation model for the planning activity in medical-clinical diagnosis and treatment; a novel inductive learning methodology for the planning activity in medical-clinical diagnosis and treatment; a novel inductive learning methodology for the decision activity in medical-clinical prognosis; and finally, a novel computable model, based on data and knowledge, that integrates the decision and planning activities of medical-clinical diagnosis, treatment and prognosis.

    Big data and predictive analytics in healthcare in Bangladesh: regulatory challenges

    Big data analytics and artificial intelligence are revolutionizing the global healthcare industry. As the world accumulates unfathomable volumes of data and health technology grows ever more critical to the advancement of medicine, policymakers and regulators face tough challenges around data security and data privacy. This paper reviews the existing regulatory frameworks for artificial intelligence-based medical devices and health data privacy in Bangladesh. The study is legal research employing a comparative approach: data is collected from primary and secondary legal materials and filtered for policies relating to medical data privacy and medical device regulation in Bangladesh. These policies are then compared with benchmark policies of the European Union and the USA to test the adequacy of Bangladesh's present regulatory framework and to identify the gaps in current regulation. The study highlights the gaps in policy and regulation in Bangladesh that are hampering the widespread adoption of big data analytics and artificial intelligence in the industry. Despite the vast benefits that big data would bring to Bangladesh's healthcare industry, the country lacks the data governance and legal framework necessary to gain consumer trust and move forward. Policymakers and regulators must work collaboratively with clinicians, patients and industry to adopt a new regulatory framework that harnesses the potential of big data while ensuring adequate privacy and security of personal data. The article offers valuable insights to regulators, academics, researchers and legal practitioners regarding the present regulatory loopholes in Bangladesh that stand in the way of exploiting the promise of big data in the medical field. The study concludes by recommending that future research on privacy as it relates to artificial intelligence-based medical devices consult patients' perspectives through quantitative research methods.

    Can process mining automatically describe care pathways of patients with long-term conditions in UK primary care? A study protocol

    Introduction: In the UK, primary care is seen as the optimal context for delivering care to an ageing population with a growing number of long-term conditions. However, if it is to meet these demands effectively and efficiently, a more precise understanding of existing care processes is required to ensure their configuration is based on robust evidence. This need to understand and optimise organisational performance is not unique to healthcare; in industries such as telecommunications or finance, a methodology known as ‘process mining’ has become an established and successful method to identify how an organisation can best deploy resources to meet the needs of its clients and customers. Here, and for the first time in the UK, we will apply it to primary care settings to gain a greater understanding of how patients with two of the most common chronic conditions are managed. Methods and analysis: The study will be conducted in three phases. First, we will apply process mining algorithms to the data held on the clinical management systems of four practices of varying characteristics in the West Midlands to determine how each interacts with patients with hypertension or type 2 diabetes. Second, we will use traditional process mapping exercises at each practice to manually produce maps of care processes for the selected condition. Third, with the aid of staff and patients at each practice, we will compare and contrast the process models produced by process mining with the process maps produced via manual techniques, reviewing differences and similarities between them and the relative importance of each. The first pilot study will focus on hypertension and the second on patients diagnosed with type 2 diabetes.
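The core input to the process mining phase described above is the directly-follows relation: how often one recorded activity is immediately followed by another across patient traces. A minimal sketch of that computation in plain Python (the activity names and toy log below are hypothetical, not taken from the study):

```python
from collections import Counter

def directly_follows(event_log):
    """Count directly-follows pairs across all patient traces.

    event_log: list of traces, each a list of activity names in time order.
    Returns a Counter mapping (activity_a, activity_b) -> frequency, the
    basic relation that most process discovery algorithms build on.
    """
    pairs = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical hypertension care traces extracted from a practice system.
log = [
    ["Registration", "BP check", "Medication review", "Follow-up"],
    ["Registration", "BP check", "Follow-up"],
    ["Registration", "Medication review", "BP check", "Follow-up"],
]
print(directly_follows(log).most_common(2))
```

Discovery algorithms such as the ones the protocol would apply turn these pair frequencies into a process model; dedicated libraries (e.g. pm4py) automate this end to end.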

    Finding Relevant Sequences With The Least Temporal Contradiction Measure: Application to Hydrological Data

    In this paper, we present a knowledge discovery process applied to hydrological data. To this end, we apply an algorithm to extract sequential patterns from data collected at stations located along several rivers. The data are pre-processed to obtain different spatial proximities, and the number of extracted patterns is estimated to highlight the influence of the defined spatial relationships. We provide an objective assessment measure, called the least temporal contradiction, to help the expert discover new knowledge. Such elements can be used to assess spatialized indicators that assist the interpretation of ecological and river monitoring pressure data.
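The abstract does not define the least temporal contradiction measure itself, but sequential pattern extraction rests on a simpler building block: testing whether an ordered pattern occurs in a station's sequence, and computing its support over the database. A minimal sketch (the discretised event names are hypothetical):

```python
def contains(sequence, pattern):
    """True if `pattern` occurs as an ordered (not necessarily contiguous)
    subsequence of `sequence` -- the containment test behind sequential
    pattern mining. Uses the iterator idiom: each `in` consumes the
    iterator up to and including the first match."""
    it = iter(sequence)
    return all(item in it for item in pattern)

def support(database, pattern):
    """Fraction of sequences in `database` that contain `pattern`."""
    return sum(contains(s, pattern) for s in database) / len(database)

# Hypothetical discretised measurements from three monitoring stations.
db = [
    ["low_flow", "high_nitrate", "algal_bloom"],
    ["low_flow", "algal_bloom"],
    ["high_nitrate", "low_flow", "algal_bloom"],
]
print(support(db, ["low_flow", "algal_bloom"]))  # contained in all 3 sequences
```

Measures such as the least temporal contradiction then rank the extracted patterns for the expert; the exact formula is given in the paper, not reproduced here.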

    Ensuring text and data mining: remaining issues with the EU copyright exceptions and possible ways out

    This article updates and expands the work presented in A. Strowel and R. Ducato, “Artificial intelligence and text and data mining: a copyright carol” in E. Rosati (ed.), Handbook of EU Copyright Law, Routledge, forthcoming 2021. A sincere thanks to Roberto Caso and Ula Furgal for the constructive discussion on an early draft of this article. The authors have jointly conceived the paper and share the views expressed therein. Nonetheless, while Section 4 is attributable to Alain Strowel, Section 3 is specifically attributable to Rossana Ducato. Both authors equally contributed to the drafting of the remaining sections.

    Negative Correlation Learning for Customer Churn Prediction: A Comparison Study

    Recently, telecommunication companies have been paying more attention to the problem of identifying customer churn behavior. In business, it is well known among service providers that attracting new customers is much more expensive than retaining existing ones. Therefore, adopting accurate models able to predict customer churn can effectively help in customer retention campaigns and in maximizing profit. In this paper we utilize an ensemble of multilayer perceptrons (MLPs) trained with negative correlation learning (NCL) to predict customer churn in a telecommunication company. Experimental results confirm that the NCL-based MLP ensemble achieves better generalization performance (a higher churn identification rate) than an ensemble of MLPs trained without NCL (a flat ensemble) and other common data mining techniques used for churn analysis.
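The idea behind NCL is that each ensemble member minimises its own squared error plus a penalty that rewards disagreement with the ensemble mean, so members make diverse, negatively correlated errors. A minimal sketch of the per-member NCL loss for a single sample (toy numbers, not the paper's network or data):

```python
def ncl_loss(outputs, target, lam=0.5):
    """Per-member negative correlation learning loss for one sample.

    outputs: predictions f_i of the ensemble members; lam: penalty strength.
    The penalty p_i = (f_i - f_bar) * sum_{j != i}(f_j - f_bar) equals
    -(f_i - f_bar)^2, because the deviations from the mean sum to zero;
    it pushes each member away from the ensemble mean, encouraging
    negatively correlated errors. lam = 0 recovers independent training.
    """
    f_bar = sum(outputs) / len(outputs)
    losses = []
    for i, f_i in enumerate(outputs):
        p_i = (f_i - f_bar) * sum(
            f_j - f_bar for j, f_j in enumerate(outputs) if j != i
        )
        losses.append((f_i - target) ** 2 + lam * p_i)
    return losses

print(ncl_loss([0.2, 0.8, 0.5], target=1.0))
```

In practice each MLP is trained by gradient descent on its own loss term, with the penalty coupling the members through the shared ensemble mean.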