3 research outputs found

    Cervical cancer risk prediction with robust ensemble and explainable black boxes method

    Clinical decision support systems (CDSS) that rely on intelligent algorithms, such as machine learning or deep learning, suffer from the fact that the methods used are often hard to interpret and it is difficult to understand how decisions are made; the opacity of some methods, sometimes deliberate because of concerns such as data privacy or the techniques used to protect intellectual property, makes these systems very complicated. Beyond these problems, the results obtained are also poorly interpretable; in the clinical context it is therefore required that the methods used be as accurate as possible, with transparent techniques and explainable results. This work addresses the prediction of cervical cancer development, a disease that mainly affects the female population. In order to introduce advanced machine learning techniques into a clinical decision support system that is transparent and explainable, a robust, accurate ensemble method is presented, evaluated in terms of error and sensitivity for the classification of possible development of the aforementioned pathology, and advanced explainability and interpretability techniques (Explainable Machine Learning) such as LIME and Shapley values are applied to the CDSS context. The results obtained are not only interesting but also understandable and can be applied in the treatment of this type of problem.
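    The abstract names the general recipe (an ensemble classifier for cancer-risk prediction, explained locally with LIME or Shapley values) but not its implementation. The following is a minimal sketch of that recipe, not the authors' code: the synthetic data, feature layout, random-forest choice and SHAP explainer are all assumptions made for illustration.

    ```python
    # Hedged sketch: ensemble risk classifier + Shapley-value explanations.
    # Data, features and model choice are placeholders, not the paper's setup.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))                 # placeholder risk-factor features
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # placeholder binary risk label

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A random forest stands in for the paper's robust ensemble method.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # TreeExplainer produces per-feature Shapley values for each prediction,
    # i.e. the kind of local, per-patient explanation the abstract refers to.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    print("Test accuracy:", model.score(X_test, y_test))
    ```

    A LIME tabular explainer could be substituted for SHAP in the same place; both attach a per-feature contribution to each individual prediction, which is what makes the classifier's output inspectable by a clinician.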

    Trust and Transparency in Artificial Intelligence. Ethics & Society Opinion. European Commission

    The Ethics and Society Subproject has developed this Opinion in order to clarify the lessons the Human Brain Project (HBP) can draw from the current discussion of artificial intelligence, in particular the social and ethical aspects of AI, and to outline areas where it could usefully contribute. The EU and numerous other bodies are promoting and implementing a wide range of policies aimed at ensuring that AI is beneficial, that it serves society. The HBP, as a leading project bringing together neuroscience and ICT, is in an excellent position to contribute to and to benefit from these discussions. This Opinion therefore highlights some key aspects of the discussion, shows its relevance to the HBP and develops a list of six recommendations.

    A perspetiva do utilizador sobre a ética de sistemas com inteligência artificial [The user's perspective on the ethics of systems with artificial intelligence]

    The idea of using robots to perform methodical tasks, so that we can spend our time efficiently and on initiatives where the human hand is truly important, seems like a fairly recent concept that still needs to be perfected. However, Artificial Intelligence already occupies a large part of our routines, without our realising that we are users of tools with these characteristics. What we thought could only exist in science fiction films already circulates in our world and is of great importance to society. This research aims to analyse the user's perspective on the ethics of systems with artificial intelligence, even if they are not aware that they are users of these systems. Inherent to this is the analysis of users' knowledge of this topic and their opinion on the impact that their experience of use would have if a regulation were applied to these technologies. Furthermore, it is intended to analyse whether the feelings of trust and security that users have during their experience of use are relevant to that experience, and how the user's perception of the existence of a regulation guided by certain ethical principles could influence the use of these systems, as well as the development of artificial intelligence systems.