
    How can humans leverage machine learning? From Medical Data Wrangling to Learning to Defer to Multiple Experts

    International Mention in the doctoral degree.

    The irruption of the smartphone into everyone's life, and the ease with which we digitise or record any situation, has led to an explosion in the amount of data. Smartphones, equipped with advanced cameras and sensors, have empowered individuals to capture moments and contribute to the growing pool of data. This data-rich landscape holds great promise for research, decision-making, and personalized applications. By carefully analyzing and interpreting this wealth of information, valuable insights, patterns, and trends can be uncovered. However, big data is worthless in a vacuum: its potential value is unlocked only when it is leveraged to drive decision-making. In recent times we have witnessed the outburst of artificial intelligence: the development of computer systems and algorithms capable of perceiving, reasoning, learning, and problem-solving, emulating certain aspects of human cognitive abilities. Nevertheless, our focus tends to be limited, merely skimming the surface of the problem, while the reality is that applying machine learning models to data is usually fraught with difficulties. More specifically, two crucial pitfalls are frequently neglected in the field of machine learning: the quality of the data and the erroneous assumption that machine learning models operate autonomously. These two issues form the foundation of the motivation driving this thesis, which strives to offer solutions to two major associated challenges: 1) dealing with irregular observations and 2) learning when and whom we should trust.

    The first challenge originates from our observation that the majority of machine learning research concentrates on handling regular observations, neglecting a crucial technological obstacle encountered in practical big-data scenarios: the aggregation and curation of heterogeneous streams of information. Before applying machine learning algorithms, it is crucial to establish robust techniques for handling big data, as this specific aspect presents a notable bottleneck in the creation of robust algorithms. Data wrangling, which encompasses the extraction, integration, and cleaning processes necessary for data analysis, plays a crucial role in this regard. Therefore, the first objective of this thesis is to tackle the frequently disregarded challenge of addressing irregularities within the context of medical data. We focus on three specific aspects. Firstly, we tackle the issue of missing data by developing a framework that facilitates the imputation of missing data points using relevant information derived from alternative data sources or past observations. Secondly, we move beyond the assumption of homogeneous observations, where only one statistical data type (such as Gaussian) is considered, and instead work with heterogeneous observations; that is, different data sources can be represented by various statistical likelihoods, such as Gaussian, Bernoulli, categorical, etc. Lastly, considering the temporal enrichment of today's collected data and our focus on medical data, we develop a novel algorithm capable of capturing and propagating correlations among different data streams over time. All three problems are addressed in our first contribution, which involves the development of a novel method based on Deep Generative Models (DGMs) using Variational Autoencoders (VAEs). The proposed model, the Sequential Heterogeneous Incomplete VAE (Shi-VAE), enables the aggregation of multiple heterogeneous data streams in a modular manner, taking into consideration the presence of potential missing data. To demonstrate the feasibility of our approach, we present proof-of-concept results obtained from a real database generated through continuous passive monitoring of psychiatric patients.

    Our second challenge relates to the misbelief that machine learning algorithms can perform independently. The notion that AI systems can solely account for automated decision-making, especially in critical domains such as healthcare, is far from reality. Our focus now shifts towards a specific scenario where the algorithm can either make predictions independently or defer the responsibility to a human expert. The purpose of including the human is not just to obtain better performance, but also to obtain more reliable and trustworthy predictions. In reality, however, important decisions are not made by one person but are usually made by an ensemble of human experts. With this in mind, two important questions arise: 1) when should the human or the machine bear responsibility, and 2) among the experts, whom should we trust? To answer the first question, we employ a recent framework known as Learning to Defer (L2D). In L2D we are not only interested in abstaining from prediction, but also in understanding the human's confidence in making that prediction, thus deferring only when the human is more likely to be correct. The second question, namely to whom we should defer among a pool of experts, has not yet been answered in the L2D literature, and this is what our contributions aim to address. First, we extend the two consistent surrogate losses proposed so far in the L2D literature to the multiple-expert setting. Second, we study the framework's ability to estimate the probability that a given expert predicts correctly, and assess whether the two surrogate losses are confidence calibrated. Finally, we propose a conformal inference technique that chooses a subset of experts to query when the system defers. Ensembling experts based on confidence levels is vital to optimize human-machine collaboration.

    In conclusion, this doctoral thesis has investigated two cases where humans can leverage the power of machine learning: first, as a tool to assist in data wrangling and data understanding problems, and second, as a collaborative tool where decision-making can be automated by the machine or delegated to human experts, fostering more transparent and trustworthy solutions.

    Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Committee: President: Joaquín Míguez Arenas; Secretary: Juan José Murillo Fuentes; Member: Mélanie Natividad Fernández Pradie
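    One way to make the multiple-expert extension concrete is the small, hypothetical sketch below: a softmax-parameterized L2D surrogate in which the model outputs K class logits plus one deferral logit per expert, and each expert adds a cross-entropy term only on the examples it labels correctly. This is an illustration of the idea described above, not the thesis's exact loss; the function name multi_expert_l2d_loss and the logit layout are assumptions made for the example.

# Hypothetical sketch of a softmax-based learning-to-defer surrogate with J experts.
# The model emits K class logits followed by J deferral logits (one per expert).
import torch
import torch.nn.functional as F

def multi_expert_l2d_loss(logits, labels, expert_preds):
    """logits: (batch, K + J); labels: (batch,); expert_preds: (batch, J) expert class guesses."""
    batch, total = logits.shape
    J = expert_preds.shape[1]
    K = total - J
    log_probs = F.log_softmax(logits, dim=1)

    # Reward predicting the true class directly.
    loss = -log_probs[torch.arange(batch), labels]

    # Add one term per expert, active only when that expert is correct,
    # so deferring to a reliable expert is also rewarded.
    for j in range(J):
        correct_j = (expert_preds[:, j] == labels).float()
        loss = loss - correct_j * log_probs[:, K + j]
    return loss.mean()

if __name__ == "__main__":
    K, J, batch = 5, 3, 8
    logits = torch.randn(batch, K + J, requires_grad=True)
    labels = torch.randint(0, K, (batch,))
    expert_preds = torch.randint(0, K, (batch, J))   # simulated expert opinions
    print(multi_expert_l2d_loss(logits, labels, expert_preds))

    At test time, one plausible decision rule is to defer whenever an expert slot receives the highest score; a conformal-style variant would instead query the subset of experts whose scores exceed a threshold calibrated on held-out data.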

    Medical data wrangling with sequential variational autoencoders

    Medical data sets are usually corrupted by noise and missing data. These missing patterns are commonly assumed to be completely random, but in medical scenarios, the reality is that they occur in bursts, due to sensors that are off for some time or data collected in a misaligned, uneven fashion, among other causes. This paper proposes to model medical data records with heterogeneous data types and bursty missing data using sequential variational autoencoders (VAEs). In particular, we propose a new methodology, the Shi-VAE, which extends the capabilities of VAEs to sequential streams of data with missing observations. We compare our model against state-of-the-art solutions on an intensive care unit (ICU) database and a dataset of passive human monitoring. Furthermore, we find that standard error metrics such as RMSE are not conclusive enough to assess temporal models, and we include in our analysis the cross-correlation between the ground truth and the imputed signal. We show that Shi-VAE achieves the best performance in terms of both metrics, with lower computational complexity than the GP-VAE model, the state-of-the-art method for medical records. This work was supported in part by the Spanish Government MCI under Grants TEC2017-92552-EXP and RTI2018-099655-B-100, in part by Comunidad de Madrid under Grants IND2017/TIC-7618, IND2018/TIC-9649, IND2020/TIC-17372, and Y2018/TCS-4705, in part by the BBVA Foundation under the Deep-DARWiN Project, and in part by the European Union (FEDER) and the European Research Council (ERC) through the European Union's Horizon 2020 research and innovation program under Grant 714161
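    As a minimal illustration of the evaluation point above (not the paper's code), the snippet below reports RMSE on the originally missing entries together with the peak normalized cross-correlation between the ground-truth and imputed signals; the function names and the toy data are assumptions made for the example.

# Illustrative comparison of an imputed signal against ground truth using both
# RMSE and cross-correlation, since RMSE alone can miss whether the temporal
# shape of the signal is preserved.
import numpy as np

def rmse(truth, imputed, mask):
    """RMSE computed only on the entries that were originally missing (mask == True)."""
    diff = (truth - imputed)[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

def peak_normalized_cross_correlation(truth, imputed):
    """Peak of the normalized cross-correlation; values near 1 mean the imputation
    follows the temporal shape of the ground-truth signal."""
    t = (truth - truth.mean()) / (truth.std() + 1e-8)
    x = (imputed - imputed.mean()) / (imputed.std() + 1e-8)
    corr = np.correlate(t, x, mode="full") / len(t)
    return float(corr.max())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    time = np.linspace(0, 10, 200)
    truth = np.sin(time)
    imputed = np.sin(time) + 0.1 * rng.normal(size=time.size)  # a toy imputation
    mask = rng.random(time.size) < 0.3                         # 30% of entries were missing
    print("RMSE on missing entries:", rmse(truth, imputed, mask))
    print("Peak normalized cross-correlation:", peak_normalized_cross_correlation(truth, imputed))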

    Formation and Photoinduced Electron Transfer in Porphyrin- and Phthalocyanine-Bearing N-Doped Graphene Hybrids Synthesized by Click Chemistry

    Graphene doped with heteroatoms such as nitrogen, boron, and phosphorus by replacing some of the skeletal carbon atoms is emerging as an important class of two-dimensional materials, as it offers the much-needed bandgap for optoelectronic applications and provides better access for chemical functionalization at the heteroatom sites. Covalent grafting of photosensitizers onto such doped graphenes makes them extremely useful for light-induced applications. Herein, we report the covalent functionalization of N-doped graphene (NG) with two well-known electron donor photosensitizers, namely, zinc porphyrin (ZnP) and zinc phthalocyanine (ZnPc), using the simple click chemistry approach. Covalent attachment of ZnP and ZnPc at the N-sites of NG in NG−ZnP and NG−ZnPc hybrids was confirmed by using a range of spectroscopic, thermogravimetric, and imaging techniques. Ground- and excited-state interactions in NG−ZnP and NG−ZnPc were monitored by using spectral and electrochemical techniques. Efficient quenching of photosensitizer fluorescence in these hybrids was observed, and the relatively easier oxidations of ZnP and ZnPc supported excited-state charge-separation events. Photoinduced charge separation in NG−ZnP and NG−ZnPc hybrids was confirmed by using the ultrafast pump-probe technique. The measured rate constants were of the order of 10¹⁰ s⁻¹, thus indicating ultrafast electron transfer phenomena

    Self-Assembly-Directed Organization of a Fullerene–Bisporphyrin into Supramolecular Giant Donut Structures for Excited-State Charge Stabilization

    Functional materials composed of spontaneously self-assembled electron donor and acceptor entities capable of generating long-lived charge-separated states upon photoillumination are in great demand, as they are key to building the next generation of light energy harvesting devices. However, creating such well-defined architectures is challenging due to the intricate molecular design, multistep synthesis, and issues associated with demonstrating long-lived electron transfer. In this study, we have accomplished these tasks and report the synthesis of a new fullerene–bis-Zn-porphyrin e-bisadduct by tether-directed functionalization of C60 via a multistep synthetic protocol. Supramolecular oligomers were subsequently formed involving the two porphyrin-bearing arms embracing a fullerene cage of the vicinal molecule, as confirmed by MALDI-TOF spectrometry and variable-temperature NMR. In addition, AFM monitoring, supported by theoretical calculations, shows that the initially formed worm-like oligomers evolve to generate donut-like aggregates. The final supramolecular donuts revealed an inner cavity size estimated as 23 nm, close to that observed in photosynthetic antenna systems. Based on systematic spectral, computational, and electrochemical studies, an energy level diagram was established to visualize the thermodynamic feasibility of electron transfer in these donor–acceptor constructs. Subsequently, transient pump–probe spectral studies covering the wide femtosecond-to-millisecond time scale were performed to confirm the formation of long-lived charge-separated states. The lifetime of the final charge-separated state was about 40 μs, thus highlighting the significance of the current approach of building giant self-organized donor–acceptor assemblies for light energy harvesting applications

    A photoresponsive graphene oxide-C60 conjugate

    An all-carbon donor–acceptor hybrid combining graphene oxide (GO) and C60 has been prepared. Laser flash photolysis measurements revealed the occurrence of photoinduced electron transfer from the GO electron donor to the C60 electron acceptor in the conjugate. This research was financially supported by the Spanish Ministry of Economy and Competitiveness of Spain (CTQ2010-17498, MAT2010-20843-C02-01 and PLE-2009-0038) and a Severo Ochoa operating grant from the Spanish Ministry of Economy and Competitiveness. We also acknowledge financial support from the Spanish Ministry of Economy and Competitiveness, Comunidad de Madrid (CAM 09-S2009_MAT-1467), Generalitat Valenciana (PROMETEO program), and VLC/Campus Microcluster "Nanomateriales Funcionales y Nanodispositivos". Barrejón, M.; Vizuete, M.; Gómez Escalonilla, M.; Fierro, J.; Berlanga, I.; Zamora, F.; Abellán, G.... (2014). A photoresponsive graphene oxide-C60 conjugate. Chemical Communications, 50(65), 9053-9055. doi:10.1039/C3CC49589B

    Nanoarchitectures based on graphene and carbon nanotubes: design, synthesis and properties

    The work carried out in this doctoral thesis has focused on the design, synthesis, and study of the properties of molecular materials for application in molecular electronics. To this end, the chemical reactivity of different carbon allotropes, mainly carbon nanotubes and graphene, towards electroactive organic molecules has been studied by applying different synthetic methodologies (Sonogashira couplings, "click" chemistry, cycloaddition reactions, etc.). The resulting nanoconjugates have been characterized by different spectroscopic techniques, such as ultraviolet-visible, Raman, and infrared spectroscopy. Thermogravimetric analyses have also been carried out to study the stability and the degree of functionalization of the nanomaterials obtained. Other techniques, such as X-ray photoelectron spectroscopy (XPS), have allowed us to study the surface composition of the samples, thereby confirming that the desired materials were obtained; atomic force microscopy (AFM) and transmission electron microscopy (TEM) studies have helped us obtain information on the morphology of the prepared materials, which in many cases was useful to confirm the presence of the electroactive molecules anchored to the surfaces of the carbon nanotubes or graphene. Finally, electrochemical and transient absorption spectroscopy studies have been used to analyze the electronic properties of the final nanohybrids, confirming, in most cases, the existence of electron transfer processes between the different units of the system (donor and acceptor)

    Chemically Cross-Linked Carbon Nanotube Films Engineered to Control Neuronal Signaling

    In recent years, the use of free-standing carbon nanotube (CNT) films for neural tissue engineering has attracted tremendous attention. CNT films show large surface area and high electrical conductivity that, combined with flexibility and biocompatibility, may promote neuron growth and differentiation while stimulating neural activity. Moreover, adhesion, survival, and growth of neurons can be modulated through chemical modification of CNTs. Axonal and synaptic signaling can also be positively tuned by these materials. Here we describe the ability of free-standing CNT films to influence neuronal activity. We demonstrate that the degree of crosslinking between the CNTs has a strong impact on the electrical conductivity of the substrate, which, in turn, regulates neural circuit outputs