Tracking the risk of a deployed model and detecting harmful distribution shifts
When deployed in the real world, machine learning models inevitably encounter
changes in the data distribution, and certain -- but not all -- distribution
shifts could result in significant performance degradation. In practice, it may
make sense to ignore benign shifts, under which the performance of a deployed
model does not degrade substantially, making interventions by a human expert
(or model retraining) unnecessary. While several works have developed tests for
distribution shifts, these typically either use non-sequential methods, or
detect arbitrary shifts (benign or harmful), or both. We argue that a sensible
method for firing off a warning has to both (a) detect harmful shifts while
ignoring benign ones, and (b) allow continuous monitoring of model performance
without increasing the false alarm rate. In this work, we design simple
sequential tools for testing if the difference between source (training) and
target (test) distributions leads to a significant increase in a risk function
of interest, like accuracy or calibration. Recent advances in constructing
time-uniform confidence sequences allow efficient aggregation of statistical
evidence accumulated during the tracking process. The designed framework is
applicable in settings where (some) true labels are revealed after the
prediction is performed, or when batches of labels become available in a
delayed fashion. We demonstrate the efficacy of the proposed framework through
an extensive empirical study on a collection of simulated and real datasets.
Comment: Accepted as a conference paper at ICLR 202
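A minimal sketch of this kind of sequential monitor is given below. It is not the authors' implementation: it assumes i.i.d. losses bounded in [0, 1] and replaces the tighter time-uniform confidence sequences mentioned in the abstract with a conservative Hoeffding bound combined with a union bound over time; the function names, the tolerance parameter, and the simulated error stream are illustrative.

import numpy as np

def cs_radius(t, alpha=0.05):
    # Time-uniform confidence radius for the running mean of losses in [0, 1]:
    # Hoeffding's inequality with error budget alpha / (t * (t + 1)) at time t,
    # a conservative stand-in for the confidence sequences cited in the abstract.
    return np.sqrt(np.log(2.0 * t * (t + 1) / alpha) / (2.0 * t))

def monitor(losses, source_risk, tolerance=0.05, alpha=0.05):
    # Fire a warning only when the target risk is confidently above
    # source_risk + tolerance, so benign shifts are ignored.
    running_sum = 0.0
    for t, loss in enumerate(losses, start=1):
        running_sum += loss
        lower_bound = running_sum / t - cs_radius(t, alpha)
        if lower_bound > source_risk + tolerance:
            return t  # first time the harmful-shift warning fires
    return None  # no warning: any shift observed so far looks benign

# Hypothetical usage: 0/1 errors from a deployed classifier under a harmful shift.
rng = np.random.default_rng(0)
stream = rng.binomial(1, 0.25, size=5000)  # target error rate 0.25 vs. source 0.10
print(monitor(stream, source_risk=0.10, tolerance=0.05))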
The Emergence of Gravitational Wave Science: 100 Years of Development of Mathematical Theory, Detectors, Numerical Algorithms, and Data Analysis Tools
On September 14, 2015, the newly upgraded Laser Interferometer
Gravitational-wave Observatory (LIGO) recorded a loud gravitational-wave (GW)
signal, emitted a billion light-years away by a coalescing binary of two
stellar-mass black holes. The detection was announced in February 2016, in time
for the hundredth anniversary of Einstein's prediction of GWs within the theory
of general relativity (GR). The signal represents the first direct detection of
GWs, the first observation of a black-hole binary, and the first test of GR in
its strong-field, high-velocity, nonlinear regime. In the remainder of its
first observing run, LIGO observed two more signals from black-hole binaries,
one moderately loud, another at the boundary of statistical significance. The
detections mark the end of a decades-long quest, and the beginning of GW
astronomy: finally, we are able to probe the unseen, electromagnetically dark
Universe by listening to it. In this article, we present a short historical
overview of GW science: this young discipline combines GR, arguably the
crowning achievement of classical physics, with record-setting, ultra-low-noise
laser interferometry, and with some of the most powerful developments in the
theory of differential geometry, partial differential equations,
high-performance computation, numerical analysis, signal processing,
statistical inference, and data science. Our emphasis is on the synergy between
these disciplines, and how mathematics, broadly understood, has historically
played, and continues to play, a crucial role in the development of GW science.
We focus on black holes, which are very pure mathematical solutions of
Einstein's gravitational-field equations that are nevertheless realized in
Nature, and that provided the first observed signals.
Comment: 41 pages, 5 figures. To appear in Bulletin of the American Mathematical Society
The Conformal Bootstrap: Theory, Numerical Techniques, and Applications
Conformal field theories have been long known to describe the fascinating
universal physics of scale invariant critical points. They describe continuous
phase transitions in fluids, magnets, and numerous other materials, while at
the same time sit at the heart of our modern understanding of quantum field
theory. For decades it has been a dream to study these intricate strongly
coupled theories nonperturbatively using symmetries and other consistency
conditions. This idea, called the conformal bootstrap, saw some successes in
two dimensions but it is only in the last ten years that it has been fully
realized in three, four, and other dimensions of interest. This renaissance has
been possible both due to significant analytical progress in understanding how
to set up the bootstrap equations and the development of numerical techniques
for finding or constraining their solutions. These developments have led to a
number of groundbreaking results, including world record determinations of
critical exponents and correlation function coefficients in the Ising and O(N)
models in three dimensions. This article will review these exciting
developments for newcomers to the bootstrap, giving an introduction to
conformal field theories and the theory of conformal blocks, describing
numerical techniques for the bootstrap based on convex optimization, and
summarizing in detail their applications to fixed points in three and four
dimensions with no or minimal supersymmetry.
Comment: 81 pages, double column, 58 figures; v3: updated references, minor typos corrected
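At its core, the numerical bootstrap based on convex optimization searches for a linear functional that is positive on the identity contribution to the crossing equation and nonnegative on the crossing vectors of every operator allowed by a hypothesized spectrum; if such a functional exists, that spectrum is excluded. The toy linear program below illustrates only the shape of this search: the "crossing vectors" are random placeholders rather than genuine conformal-block data, and real bootstrap codes typically use semidefinite rather than linear programming to scan over continuous operator dimensions.

import numpy as np
from scipy.optimize import linprog

# Toy stand-ins for crossing vectors: each row would collect derivatives of the
# crossing-symmetrized conformal blocks evaluated at the crossing point.
rng = np.random.default_rng(1)
n_derivs, n_operators = 6, 40
F_identity = rng.normal(size=n_derivs)                  # identity contribution
F_operators = rng.normal(size=(n_operators, n_derivs))  # hypothesized spectrum

def spectrum_excluded(F_id, F_ops):
    # Look for a functional alpha with alpha . F_id = 1 (normalization) and
    # alpha . F_op >= 0 for every operator; feasibility excludes the spectrum.
    n = F_id.size
    result = linprog(
        c=np.zeros(n),                              # pure feasibility problem
        A_ub=-F_ops, b_ub=np.zeros(len(F_ops)),     # alpha . F_op >= 0
        A_eq=F_id.reshape(1, -1), b_eq=[1.0],       # alpha . F_id = 1
        bounds=[(None, None)] * n,
    )
    return result.success

print(spectrum_excluded(F_identity, F_operators))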
How can humans leverage machine learning? From Medical Data Wrangling to Learning to Defer to Multiple Experts
International Mention in the doctoral degree.
The irruption of the smartphone into everyone’s life, and the ease with which we digitise or record
any data, has led to an explosion in the quantity of data. Smartphones, equipped with advanced
cameras and sensors, have empowered individuals to capture moments and contribute to the
growing pool of data. This data-rich landscape holds great promise for research, decision-making,
and personalized applications. By carefully analyzing and interpreting this wealth of information,
valuable insights, patterns, and trends can be uncovered.
However, big data is worthless in a vacuum. Its potential value is unlocked only when leveraged
to drive decision-making. In recent times we have witnessed the rise of artificial
intelligence: the development of computer systems and algorithms capable of perceiving, reasoning,
learning, and problem-solving, emulating certain aspects of human cognitive abilities. Nevertheless,
our focus tends to be limited, merely skimming the surface of the problem, while the reality
is that applying machine learning models to data is usually fraught with difficulties. More
specifically, there are two crucial pitfalls frequently neglected in the field of machine learning:
the quality of the data and the erroneous assumption that machine learning models operate
autonomously. These two issues have established the foundation for the motivation driving this
thesis, which strives to offer solutions to two major associated challenges: 1) dealing with irregular
observations and 2) learning when and whom we should trust.
The first challenge originates from our observation that the majority of machine learning
research primarily concentrates on handling regular observations, neglecting a crucial technological
obstacle encountered in practical big-data scenarios: the aggregation and curation of heterogeneous
streams of information. Before applying machine learning algorithms, it is crucial to establish
robust techniques for handling big data, as this specific aspect presents a notable bottleneck in
the creation of robust algorithms. Data wrangling, which encompasses the extraction, integration,
and cleaning processes necessary for data analysis, plays a crucial role in this regard. Therefore,
the first objective of this thesis is to tackle the frequently disregarded challenge of addressing
irregularities within the context of medical data. We will focus on three specific aspects. Firstly,
we will tackle the issue of missing data by developing a framework that facilitates the imputation
of missing data points using relevant information derived from alternative data sources or past
observations. Secondly, we will move beyond the assumption of homogeneous observations,
where only one statistical data type (such as Gaussian) is considered, and instead, work with
heterogeneous observations. This means that different data sources can be represented by various
statistical likelihoods, such as Gaussian, Bernoulli, categorical, etc. Lastly, considering the
temporal enrichment of today's collected data and our focus on medical data, we will develop a novel algorithm capable of capturing and propagating correlations among different data streams
over time. All these three problems are addressed in our first contribution which involves the
development of a novel method based on Deep Generative Models (DGM) using Variational
Autoencoders (VAE). The proposed model, the Sequential Heterogeneous Incomplete VAE (Shi-
VAE), enables the aggregation of multiple heterogeneous data streams in a modular manner,
taking into consideration the presence of potential missing data. To demonstrate the feasibility
of our approach, we present proof-of-concept results obtained from a real database generated
through continuous passive monitoring of psychiatric patients.
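To make the ingredients just described concrete, here is a minimal sketch (deliberately much simpler than the Shi-VAE itself, and not the released code): a recurrent VAE over two streams with heterogeneous likelihoods (one Gaussian, one Bernoulli) whose reconstruction term is evaluated only where a missing-data mask marks entries as observed. The architecture, dimensions, and variable names are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyHeteroSeqVAE(nn.Module):
    # Illustrative sequential VAE for heterogeneous, partially observed streams:
    # a real-valued (Gaussian) stream and a binary (Bernoulli) stream.
    def __init__(self, x_dim=2, z_dim=4, h_dim=16):
        super().__init__()
        self.rnn = nn.GRU(x_dim, h_dim, batch_first=True)  # temporal backbone
        self.enc = nn.Linear(h_dim, 2 * z_dim)             # posterior parameters
        self.dec_gauss = nn.Linear(z_dim, 2)                # mean and log-variance
        self.dec_bern = nn.Linear(z_dim, 1)                 # Bernoulli logit

    def forward(self, x, mask):
        # Zero-fill missing entries before encoding (a simple design choice).
        h, _ = self.rnn(x * mask)
        mu, logvar = self.enc(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize

        # Heterogeneous likelihoods, scored only where the mask says "observed".
        g_mu, g_logvar = self.dec_gauss(z).chunk(2, dim=-1)
        nll_gauss = 0.5 * (g_logvar + (x[..., :1] - g_mu) ** 2 / g_logvar.exp())
        nll_bern = F.binary_cross_entropy_with_logits(
            self.dec_bern(z), x[..., 1:], reduction="none")
        nll = (nll_gauss * mask[..., :1] + nll_bern * mask[..., 1:]).sum()

        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()
        return nll + kl  # negative ELBO over observed entries only

# Hypothetical usage: 8 sequences, 24 time steps, 2 streams, ~30% missing entries.
x = torch.cat([torch.randn(8, 24, 1), torch.randint(0, 2, (8, 24, 1)).float()], -1)
mask = (torch.rand(8, 24, 2) > 0.3).float()
print(ToyHeteroSeqVAE()(x, mask).item())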
Our second challenge relates to the misbelief that machine learning algorithms can perform
independently. However, the notion that AI systems alone can account for automated decision-making,
especially in critical domains such as healthcare, is far from reality. Our focus now shifts
towards a specific scenario where the algorithm has the ability to make predictions independently
or alternatively defer the responsibility to a human expert. The purpose of including the human
is not just to obtain better performance, but also to obtain more reliable and trustworthy predictions.
In reality, however, important decisions are not made by one person but are usually
made collectively by an ensemble of human experts. With this in mind, two important questions arise:
1) When should the human or the machine bear responsibility, and 2) among the experts, whom
should we trust? To answer the first question, we will employ a recent theory known as Learning
to defer (L2D). In L2D we are not only interested in abstaining from prediction but also in
understanding the human's confidence in making such a prediction, thus deferring only when the
human is more likely to be correct. The second question about who to defer among a pool of
experts has not yet been answered in the L2D literature, and this is the gap our contributions
aim to fill. First, we extend the two consistent surrogate losses proposed so far in the L2D
literature to the multiple-expert setting. Second, we study the framework's ability to estimate
the probability that a given expert predicts correctly, and we assess whether the two surrogate losses
are confidence calibrated. Finally, we propose a conformal inference technique that chooses a
subset of experts to query when the system defers. Ensembling experts based on confidence
levels is vital to optimize human-machine collaboration.
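As a concrete illustration of the kind of surrogate loss discussed above, the sketch below implements one plausible softmax-style learning-to-defer objective extended to several experts: the network outputs K class scores plus one defer score per expert, and the defer output for expert j is rewarded exactly when that expert is correct. This is a hedged reconstruction in the spirit of the L2D losses mentioned, not the thesis code; the shapes, names, and random data are illustrative.

import numpy as np

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def l2d_multi_expert_loss(logits, y, expert_preds):
    # logits:       (batch, K + J) scores -- K class outputs plus one defer
    #               output per expert.
    # y:            (batch,) true labels in {0, ..., K-1}.
    # expert_preds: (batch, J) each expert's prediction for every example.
    batch, J = expert_preds.shape
    K = logits.shape[1] - J
    logp = log_softmax(logits)
    rows = np.arange(batch)
    loss = -logp[rows, y]                                 # classification term
    correct = (expert_preds == y[:, None]).astype(float)  # which experts are right
    loss -= (correct * logp[:, K:]).sum(axis=1)           # defer terms
    return loss.mean()

# Hypothetical usage: 4 classes, 3 experts, random scores and expert annotations.
rng = np.random.default_rng(0)
logits = rng.normal(size=(32, 4 + 3))
y = rng.integers(0, 4, size=32)
experts = rng.integers(0, 4, size=(32, 3))
print(l2d_multi_expert_loss(logits, y, experts))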
In conclusion, this doctoral thesis has investigated two cases where humans can leverage the
power of machine learning: first, as a tool to assist in data wrangling and data understanding
problems and second, as a collaborative tool where decision-making can be automated by the
machine or delegated to human experts, fostering more transparent and trustworthy solutions.
Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Thesis committee: Chair, Joaquín Míguez Arenas; Secretary, Juan José Murillo Fuentes; Member, Mélanie Natividad Fernández Pradie
Understanding Conditional Modes of Action in Chemical-Induced Toxicity Using Rule Models
It is estimated that 115 million animals are used in experimental testing each year. Hence,
shifting efforts toward alternative methods for toxicity assessment is essential. However, slow regulatory acceptance of new approaches is governed by knowledge gaps in toxicity modes of action. In this thesis, I describe these challenges and the use of in vitro screening as an alternative to animal testing. I also discuss common data-based methods for deriving hypotheses about toxicity modes of action, and the associated limitations in capturing multiple biological perturbations.
I applied novel data-based workflows, using rule models, to prioritize in vitro assays predictive of toxicity as well as to detect significant polypharmacology profiles. I explain how constraints were applied to rule-based models to inform meaningful mechanistic interpretation for two toxicity endpoints: rat hepatotoxicity and acute toxicity. I compared the assays selected by the rules for predicting hepatotoxicity with the endpoints used in in vitro models from commercial sources. An overlap was observed, including cytochrome activity, mitochondrial toxicity, and immunological responses. However, nuclear receptor activity, identified in the rules, is not currently covered in commercial setups. I also demonstrate that endocrine disruption endpoints extrapolate better to in vivo toxicity when a set of specific conditions is met, such as physicochemical properties associated with good bioavailability.
Next, I examined synergistic interactions between conditions in rules describing acute toxicity. I gained novel insights into how specific stressors potentiate the perturbation by known key events, such as acetylcholinesterase inhibition and neuro-signalling disruption. I show that examining polypharmacology profiles is particularly important at low bioactive potencies.
Further, the overall predictive performance of the rules describing acute toxicity was tested against a benchmark Random Forest model in a conformal prediction framework. Irrespective of the data type used in training, the models were prone to bias with respect to compound promiscuity, whereby highly promiscuous compounds were more likely to be predicted as toxic.
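For readers unfamiliar with the framework mentioned above, the sketch below shows a generic split (inductive) conformal prediction wrapper around a Random Forest classifier. It is not the thesis pipeline: the synthetic data, significance level, and variable names are illustrative stand-ins for the real toxicity datasets.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a binary toxicity dataset (toxic vs. non-toxic).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Underlying model: the benchmark Random Forest.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Split conformal prediction: nonconformity score = 1 - P(true class).
cal_scores = 1.0 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]
alpha = 0.1
q = np.quantile(cal_scores,
                np.ceil((len(y_cal) + 1) * (1 - alpha)) / len(y_cal),
                method="higher")                 # finite-sample-corrected quantile

# Prediction sets: include every class whose nonconformity is below the threshold.
test_probs = clf.predict_proba(X_test)
pred_sets = (1.0 - test_probs) <= q
coverage = pred_sets[np.arange(len(y_test)), y_test].mean()
print("empirical coverage:", round(float(coverage), 3))
print("average prediction-set size:", round(float(pred_sets.sum(axis=1).mean()), 2))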
Overall, the studies conducted in this thesis provide novel insights into molecular mechanisms of toxicity, namely hepatotoxicity and acute toxicity, and with regard to chemical properties and polypharmacology. This knowledge can be used to improve the utility and design of alternative methods for toxicity assessment and, hence, accelerate their regulatory acceptance.
Islamic Development Bank
Cambridge Trust Fund
Distributionally Robust Statistical Verification with Imprecise Neural Networks
A particularly challenging problem in AI safety is providing guarantees on
the behavior of high-dimensional autonomous systems. Verification approaches
centered around reachability analysis fail to scale, and purely statistical
approaches are constrained by the distributional assumptions about the sampling
process. Instead, we pose a distributionally robust version of the statistical
verification problem for black-box systems, where our performance guarantees
hold over a large family of distributions. This paper proposes a novel approach
based on a combination of active learning, uncertainty quantification, and
neural network verification. A central piece of our approach is an ensemble
technique called Imprecise Neural Networks, which provides the uncertainty to
guide active learning. The active learning uses an exhaustive neural-network
verification tool Sherlock to collect samples. An evaluation on multiple
physical simulators in the OpenAI Gym MuJoCo environments with
reinforcement-learned controllers demonstrates that our approach can provide
useful and scalable guarantees for high-dimensional systems.
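A minimal sketch of the ensemble-plus-active-learning idea described in this abstract is given below. It is not the paper's implementation: the physics simulator is replaced by a cheap black-box function, the imprecise neural network is replaced by a bootstrap ensemble of polynomial regressors whose per-point min/max envelope provides the interval-valued uncertainty, and no verification tool such as Sherlock is involved; all names are illustrative.

import numpy as np

def simulator(x):
    # Placeholder black-box property score for a state x (stand-in for rolling
    # out a learned controller in a physics simulator).
    return np.sin(3 * x) + 0.1 * np.random.randn(*np.shape(x))

def fit_ensemble(X, y, n_members=10, degree=5, rng=None):
    # Bootstrap ensemble of polynomial regressors; its per-point min/max
    # envelope plays the role of an imprecise (interval-valued) predictor.
    rng = rng if rng is not None else np.random.default_rng(0)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))       # bootstrap resample
        members.append(np.polyfit(X[idx], y[idx], degree))
    return members

def interval(members, X):
    preds = np.stack([np.polyval(c, X) for c in members])
    return preds.min(axis=0), preds.max(axis=0)          # lower/upper envelope

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=20)
y = simulator(X)
grid = np.linspace(-1, 1, 400)
for step in range(30):                                   # active-learning loop
    lo, hi = interval(fit_ensemble(X, y, rng=rng), grid)
    x_new = grid[np.argmax(hi - lo)]                     # query the most uncertain point
    X, y = np.append(X, x_new), np.append(y, simulator(x_new))
lo, hi = interval(fit_ensemble(X, y, rng=rng), grid)
print("max interval width after active learning:", round(float((hi - lo).max()), 3))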
Lectures on D-branes, gauge theories and Calabi-Yau singularities
These lectures, given at the Chinese Academy of Sciences for the BeiJing/HangZhou International Summer School in Mathematical Physics, are intended to introduce, to the beginning student in string theory and mathematical physics, aspects of the rich and beautiful subject of D-brane gauge theories constructed from local Calabi-Yau spaces. Topics such as orbifolds, toric singularities, del Pezzo surfaces, as well as chaotic duality, will be covered.