
    White learning methodology: a case study of cancer-related disease factors analysis in real-time PACS environment

    A Bayesian network is a probabilistic model whose prediction accuracy is not among the highest in the machine learning family. Deep learning (DL), on the other hand, has higher predictive power than many other models, but how reliable its results are, how they are deduced, and how interpretable its predictions are to users remain obscure: DL functions like a black box. As a result, many medical practitioners are reluctant to use deep learning as the only tool for critical machine learning applications, such as aiding cancer diagnosis. In this paper, a framework of White Learning (WL) is proposed that takes advantage of both black-box and white-box learning. Typically, black-box learning delivers a high standard of accuracy, while white-box learning provides an explainable directed acyclic graph. In our design there are three stages of White Learning, based on the degree of fusion between the white-box and black-box learners: loosely coupled WL, semi-coupled WL, and tightly coupled WL. A case of loosely coupled WL is tested on a breast cancer dataset; this approach uses deep learning and an incremental version of a Naïve Bayes network. White Learning is broadly defined as a systematic fusion of machine learning models that yields both an explainable Bayesian network, which can uncover the hidden relations between features and the class, and a deep learning model, which gives higher prediction accuracy than other algorithms. We designed a series of experiments for this loosely coupled WL model. The simulation results show that, compared with standard black-box deep learning, WL can enhance accuracy and the kappa statistic by up to 50%. The performance of WL also appears more stable under extreme conditions such as noise and high-dimensional data, and the relations found by the Bayesian network in WL are more concise and stronger in affinity.
The experimental results deliver positive signals that WL can output both high classification accuracy and an explainable relation graph between features and the class. [Abstract copyright: Copyright © 2020. Published by Elsevier B.V.]
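    The loosely coupled arrangement described above can be sketched in a few lines: a black-box learner supplies the prediction while a white-box Naïve Bayes model is trained in parallel and inspected for feature-class structure. This is a minimal illustration, not the paper's actual pipeline; the synthetic data, the MLP stand-in for the deep model, and the use of per-class feature means as the "explanation" are all assumptions.

    ```python
    # Hypothetical sketch of loosely coupled White Learning: a black-box
    # model predicts, a white-box model explains. Data and model choices
    # are illustrative, not taken from the paper.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Synthetic binary-classification data standing in for the breast cancer set.
    X = rng.normal(size=(400, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    black_box = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=0).fit(X_tr, y_tr)
    white_box = GaussianNB().fit(X_tr, y_tr)

    # Loosely coupled: the prediction comes from the black box...
    acc = black_box.score(X_te, y_te)
    # ...while the white box is inspected for feature-class relations
    # (here, crudely, the per-class feature means it learned).
    explanation = white_box.theta_

    print(f"black-box accuracy: {acc:.2f}")
    print(f"per-class feature means:\n{explanation.round(2)}")
    ```

    In a tighter coupling, the two models would share training signals rather than merely run side by side; here the fusion is purely at the reporting level, which is what "loosely coupled" suggests.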

    Building planning action models using activity recognition

    Activity recognition is receiving special attention because it can be used in many areas. This field of artificial intelligence has been widely investigated lately for tasks such as following the behavior of people with some kind of cognitive impairment, for instance elderly people with dementia. Recognizing the activities these people carry out makes it possible to offer assistance, if they need it, while they are performing the activities. Currently, there are many systems capable of recognizing the activities that a user performs in a specific environment. Most of these systems have two problems. First, they recognize states of the activities instead of the entire activity: instead of recognizing an activity that starts at time t_i and ends at time t_j, these systems split the timeline into fixed-length temporal windows that are classified as belonging to one activity or another. These windows sometimes overlap two activities, which makes classifying them more difficult; it also prevents the system from detecting the states that precede and follow each activity, and those states are needed to build behavioral models. Second, most of these systems recognize complete high-level activities, such as cooking or making tea, but they cannot recognize the low-level activities that compose them, for example picking up the fork or switching on the oven. For this reason, most systems in the literature cannot be used to assist people during the activities: they recognize the activity itself but cannot provide the low-level activities the user has to execute to complete the high-level activity. For these reasons, this thesis has three objectives. The first objective is the development of a new activity recognition algorithm capable of overcoming the problems caused by fixed-length temporal windows and, also, capable of extracting the states that the system can traverse.
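    The windowing problem described above is easy to see on a toy event stream: a window that straddles the boundary between two activities cannot be given a single clean label. This is an illustrative sketch only, not the thesis's algorithm; the events, labels, and window width are invented.

    ```python
    # Illustrative sketch of the fixed-length-window problem: equal-width
    # windows over a labelled event stream, where a window straddling two
    # activities receives an ambiguous majority-vote label.
    from collections import Counter

    # (timestamp, low-level action, activity it belongs to) -- toy data
    stream = [
        (0, "pick_up_kettle",   "make_tea"),
        (2, "fill_kettle",      "make_tea"),
        (4, "switch_on_kettle", "make_tea"),
        (6, "pick_up_fork",     "cook"),
        (8, "switch_on_oven",   "cook"),
    ]

    def fixed_windows(events, width):
        """Split events into windows of `width` time units; label each window
        by majority vote and flag windows that span more than one activity."""
        if not events:
            return []
        end = events[-1][0]
        windows, t = [], 0
        while t <= end:
            inside = [a for (ts, _, a) in events if t <= ts < t + width]
            label = Counter(inside).most_common(1)[0][0] if inside else None
            windows.append((t, t + width, label, len(set(inside)) > 1))
            t += width
        return windows

    for start, stop, label, straddles in fixed_windows(stream, 4):
        flag = "  <-- straddles two activities" if straddles else ""
        print(f"[{start},{stop}) -> {label}{flag}")
    ```

    With a window width of 4, the middle window covers the end of `make_tea` and the start of `cook`, so any single label for it is partly wrong; segmenting whole activities from t_i to t_j avoids exactly this ambiguity.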
    The second objective is to automatically generate an automated planning domain that represents the user's behavior, using the activity recognition system. To do that, we use the activity recognition system developed in the previous step to recognize the activities and the states of the environment before and after them. Once the system can perform this task, the planning domain is generated from that information. The planning domain can then be used by an automated planner to generate sequences of actions that reach the user's goal, and those sequences can be used to assist users by telling them the next action or actions needed to accomplish their goals. The third objective is to use the automatically generated planning domains to guide users in accomplishing the task they pursue. In addition, we want to check whether the generated plans can be used to recognize the activities on their own, or to help a sensor system improve its results; the generated plans will also be used to predict the next activities the user may perform. In this way, we test whether the planning domains and the plans generated by the planner can provide information to recognize the activity the user performs or, at least, information that helps the activity recognition system improve its results.
    PhD programme in Computer Science and Technology. Committee: President: José Manuel Molina López; Member: Thomas Leo McCluskey; Secretary: Xavier Alamán Roldán.
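    The second objective, learning a planning action model from recognized activities, can be sketched as follows: given observed (state-before, action, state-after) triples, intersect the pre-states to estimate each action's preconditions and diff pre/post states to estimate its effects. This is a hedged, simplified sketch of that idea, not the thesis's actual method; the fact names and triples are invented.

    ```python
    # Sketch: derive a simple STRIPS-style action model from observed
    # (pre-state, action, post-state) triples. States are sets of facts.
    # The observations below are invented for illustration.
    observations = [
        ({"kettle_empty", "kettle_off"}, "fill_kettle",
         {"kettle_full", "kettle_off"}),
        ({"kettle_full", "kettle_off"}, "switch_on_kettle",
         {"kettle_full", "kettle_on"}),
        ({"kettle_empty", "kettle_off"}, "fill_kettle",
         {"kettle_full", "kettle_off"}),
    ]

    def learn_action_model(obs):
        """Intersect pre-states per action to estimate preconditions and
        compare pre/post states to estimate add/delete effects."""
        model = {}
        for pre, action, post in obs:
            m = model.setdefault(action, {"pre": set(pre),
                                          "add": set(), "del": set()})
            m["pre"] &= pre         # facts true in every observed pre-state
            m["add"] |= post - pre  # facts the action made true
            m["del"] |= pre - post  # facts the action made false
        return model

    model = learn_action_model(observations)
    print(model["fill_kettle"])
    ```

    A model like this can be serialized into a planning domain (e.g. PDDL actions), after which an off-the-shelf planner can chain the learned actions into a plan toward the user's goal, which is the assistance loop the abstract describes.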

    Artificial Intelligence for Cognitive Health Assessment: State-of-the-Art, Open Challenges and Future Directions

    The subjectivity and inaccuracy of in-clinic Cognitive Health Assessment (CHA) have led many researchers to explore ways to automate the process, both to make it more objective and to meet the needs of the healthcare industry. Artificial Intelligence (AI) and Machine Learning (ML) have emerged as the most promising approaches to automating the CHA process. In this paper, we explore the background of CHA and delve into the extensive research recently undertaken in this domain to provide a comprehensive survey of the state-of-the-art. In particular, a careful selection of significant works published in the literature is reviewed to elaborate a range of enabling technologies and AI/ML techniques used for CHA, including conventional supervised and unsupervised machine learning, deep learning, reinforcement learning, natural language processing, and image processing techniques. Furthermore, we provide an overview of various means of data acquisition and of the benchmark datasets. Finally, we discuss open issues and challenges in using AI and ML for CHA, along with some possible solutions. In summary, this paper presents CHA tools, lists various data acquisition methods for CHA, reviews technological advancements, describes the use of AI for CHA, and identifies open issues and challenges in the CHA domain. We hope this first-of-its-kind survey will contribute significantly to identifying research gaps in the complex and rapidly evolving interdisciplinary field of mental health.