23 research outputs found
Interpretable Machine Learning Model for Clinical Decision Making
Despite machine learning models being increasingly used in medical decision-making and meeting classification accuracy standards, they remain untrusted black boxes because decision-makers lack insight into their complex logic. It is therefore necessary to develop interpretable machine learning models that engender trust in the knowledge they generate and encourage clinical decision-makers to adopt them in the field.
The goal of this dissertation was to systematically investigate the applicability of interpretable model-agnostic methods to explain predictions of black-box machine learning models for medical decision-making. As proof of concept, this study addressed the problem of predicting the risk of emergency readmission within 30 days of discharge for heart failure patients. Using a benchmark data set, supervised classification models of differing complexity were trained to perform the prediction task. More specifically, Logistic Regression (LR), Random Forest (RF), Decision Tree (DT), and Gradient Boosting Machine (GBM) models were constructed using the Healthcare Cost and Utilization Project (HCUP) Nationwide Readmissions Database (NRD). The precision, recall, and area under the ROC curve of each model were used to measure predictive accuracy. Local Interpretable Model-Agnostic Explanations (LIME) was used to generate explanations from the underlying trained models. LIME explanations were empirically evaluated using explanation stability and local fit (R2).
The results demonstrated that the local explanations generated by LIME produced better estimates for the Decision Tree (DT) classifier.
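The LIME procedure the study relies on can be sketched in a few lines: perturb the instance, query the black box, and fit a proximity-weighted linear surrogate whose weighted R2 is the "local fit" metric mentioned above. The data, kernel, and models below are illustrative stand-ins, not the HCUP/NRD setup from the dissertation.

```python
# Minimal sketch of the LIME idea: explain one prediction of a black-box
# classifier by fitting a weighted linear surrogate around the instance.
# Synthetic data and parameter choices here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # toy "readmission" label

black_box = GradientBoostingClassifier().fit(X, y)

x0 = X[0]                                         # instance to explain
Z = x0 + rng.normal(scale=0.5, size=(200, 4))     # local perturbations
p = black_box.predict_proba(Z)[:, 1]              # black-box outputs
w = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2)  # proximity kernel weights

surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
local_r2 = surrogate.score(Z, p, sample_weight=w)  # the "local fit" (R2)
print(surrogate.coef_, local_r2)
```

The surrogate's coefficients serve as the feature attributions for this one prediction, and the weighted R2 indicates how faithfully the linear model mimics the black box in the neighborhood.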
Uncertainty, risk, and financial disclosures : applications of natural language processing in behavioral economics
In the last decade, natural language processing (NLP) methods have received increasing attention for applications in behavioral economics. Such methods enable the automatic content analysis of large corpora of financial disclosures, e.g., annual reports or earnings calls. In this setting, a conceptually interesting but underexplored variable is linguistic uncertainty: due to the unpredictability of the financial market, it is often necessary for corporate management to use hedge expressions such as “likely” or “possible” in their financial communication. On the other hand, management can also use uncertain language to influence investors strategically, for example, through deliberate obfuscation. In this dissertation, we present NLP methods for the automated detection of linguistic uncertainty. Furthermore, we introduce the first experimental study to establish a causal link between linguistic uncertainty and investor behavior. Finally, we propose regression models to explain and predict financial risk. In addition to the independent variable of linguistic uncertainty, we explore a psychometric and an assumption-free model based on Deep Learning.
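As a hedged illustration of the detection task (not the dissertation's actual method or lexicon), a minimal lexicon-based scorer counts the share of tokens that are hedge expressions:

```python
# Illustrative lexicon-based detector for linguistic uncertainty in text;
# the hedge list is a tiny assumed sample, not the dissertation's lexicon.
import re

HEDGES = {"likely", "possible", "may", "might", "could", "uncertain",
          "approximately", "perhaps"}

def uncertainty_score(text: str) -> float:
    """Share of tokens that are hedge expressions (0.0 if no tokens)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in HEDGES for t in tokens) / len(tokens)

print(uncertainty_score("Revenues may decline and losses are possible."))
```

More sophisticated detectors would account for context (e.g., "may" as a month versus a modal), which is one reason learned models are preferred over raw lexicon counts.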
DATA-DRIVEN ANALYTICAL MODELS FOR IDENTIFICATION AND PREDICTION OF OPPORTUNITIES AND THREATS
During the lifecycle of mega engineering projects such as energy facilities, infrastructure projects, or data centers, the executives in charge should take into account the potential opportunities and threats that could affect the execution of such projects. These opportunities and threats can arise from different domains, including, for example, the geopolitical, economic, or financial, and can affect different entities, such as countries, cities, or companies. The goal of this research is to provide a new approach to identifying and predicting opportunities and threats using large and diverse data sets and ensemble Long Short-Term Memory (LSTM) neural network models to inform domain-specific foresights. In addition to predicting opportunities and threats, this research proposes new techniques to support decision-makers' deduction and reasoning. The proposed models and results provide structured output to inform the executive decision-making process concerning large engineering projects (LEPs). The proposed techniques provide not only reliable time-series predictions but also uncertainty quantification to help make more informed decisions. The proposed ensemble framework consists of the following components: first, processed domain knowledge is used to extract a set of entity-domain features; second, structured learning based on Dynamic Time Warping (DTW), to learn similarity between sequences, and Hierarchical Clustering Analysis (HCA) is used to determine which features are relevant for a given prediction problem; and finally, an automated decision based on the input and the structured learning from DTW-HCA is used to build a training data set, which is fed into a deep LSTM neural network for time-series predictions. A set of deeper ensemble programs is proposed, such as Monte Carlo Simulations and Time Label Assignment, to offer, respectively, a controlled setting for assessing the impact of external shocks and a temporal alert system. The developed model can be used to inform decision-makers about the set of opportunities and threats that their entities and assets face as a result of being engaged in an LEP, accounting for epistemic uncertainty.
Applied Metaheuristic Computing
For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that impairs the capability of traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodology and innovative applications that drive the advances of AMC.
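As one minimal illustration of the metaheuristic principle (escaping the local optima that trap greedy descent), here is a simulated annealing sketch on an assumed multimodal objective; the objective, schedule, and step size are arbitrary choices for demonstration:

```python
# Simulated annealing: occasionally accept worse moves so the search can
# climb out of local minima. Objective and parameters are illustrative.
import math
import random

def objective(x):
    return x * x + 10 * math.sin(3 * x)   # many local minima

random.seed(1)
x, best = 4.0, 4.0
T = 5.0                                    # initial temperature
for _ in range(5000):
    cand = x + random.uniform(-0.5, 0.5)   # random neighbor move
    delta = objective(cand) - objective(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand                           # accept (possibly uphill) move
    if objective(x) < objective(best):
        best = x                           # track the incumbent solution
    T = max(T * 0.999, 1e-3)               # geometric cooling schedule

print(best, objective(best))
```

Greedy descent started at x = 4.0 would stall in the nearest basin; the temperature-controlled uphill acceptances let the search reach substantially better valleys.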
Learning discrete word embeddings to achieve better interpretability and processing efficiency
The ubiquitous use of word embeddings in Natural Language Processing is proof of their usefulness and adaptivity to a multitude of tasks. However, their continuous nature is prohibitive in terms of computation, storage, and interpretation. In this work, we propose a method of learning discrete word embeddings directly. The model is an adaptation of a novel database searching method using state-of-the-art natural language processing techniques such as Transformers and LSTMs. On top of obtaining embeddings requiring a fraction of the resources to store and process, our experiments strongly suggest that our representations learn basic units of meaning in latent space akin to lexical morphemes. We call these units sememes, i.e., semantic morphemes. We demonstrate that our model has great generalization potential and outputs representations showing strong semantic and conceptual relations between related words.
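A hedged sketch of the discretization idea, using plain vector quantization to a small codebook; the codebook, vectors, and nearest-neighbor rule are illustrative, not the thesis's Transformer/LSTM-based model:

```python
# Vector-quantization sketch of discrete embeddings: each continuous word
# vector is replaced by the index of its nearest codebook entry, so a word
# is stored as a small integer instead of floats. Toy data throughout.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))      # 16 candidate discrete codes
word_vecs = rng.normal(size=(100, 8))    # toy continuous embeddings

def quantize(vecs, codes):
    """Return, for each vector, the index of the nearest code (L2)."""
    d = np.linalg.norm(vecs[:, None, :] - codes[None, :, :], axis=-1)
    return d.argmin(axis=1)

ids = quantize(word_vecs, codebook)      # integer codes per word
reconstructed = codebook[ids]            # lookup recovers an approximation
print(ids[:5], reconstructed.shape)
```

Storing one small integer per word (or a few, with multiple codebooks) is what makes the discrete representation cheap to store and fast to process, at the cost of some reconstruction error.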
Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics
Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in the traditional integration methods. Here, methods (1) and (2) relied on the Long Short-Term Memory (LSTM) architecture, with method (3) relying on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with an attention mechanism to address discontinuous solutions. Both LSTM and attention architectures, together with modern optimizers and classic optimizers generalized to include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered to sufficient depth for more advanced works such as shallow networks with infinite width. The review does not address only experts: readers are assumed to be familiar with computational mechanics, but not with DL, whose concepts and applications are built up from the basics, aiming to bring first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
Comment: 275 pages, 158 figures. Appeared online on 2023.03.01 in CMES-Computer Modeling in Engineering & Sciences.
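The PINN idea mentioned above, minimizing the PDE residual at collocation points together with boundary terms, can be illustrated without a neural network by using a polynomial ansatz for the ODE u' = u with u(0) = 1 (exact solution e^x); the degree, collocation grid, and boundary weight below are illustrative choices:

```python
# Physics-residual minimization in miniature: fit u(x) = sum_k c_k x^k by
# driving the residual u'(x) - u(x) to zero at collocation points, with the
# boundary condition u(0) = 1 added as a heavily weighted extra equation.
# Since the residual is linear in c_k, least squares replaces the usual
# gradient-descent training loop of a network-based PINN.
import numpy as np

xs = np.linspace(0.0, 1.0, 25)   # collocation points
K = 6                            # polynomial degree of the ansatz

cols = []
for k in range(K + 1):
    deriv = k * xs ** (k - 1) if k > 0 else np.zeros_like(xs)
    cols.append(deriv - xs ** k)          # d/dx(x^k) - x^k at each point
A = np.stack(cols, axis=1)
b = np.zeros_like(xs)

bc = np.array([[0.0 ** k for k in range(K + 1)]])   # row enforcing u(0) = 1
c, *_ = np.linalg.lstsq(np.vstack([A, 100.0 * bc]),
                        np.append(b, 100.0), rcond=None)

u1 = float(np.polyval(c[::-1], 1.0))      # u(1) should approximate e
print(u1)
```

A real PINN replaces the polynomial with a network and the least-squares solve with stochastic gradient descent on the same residual-plus-boundary loss, which is what allows nonlinear PDEs to be handled.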
The People Inside
Our collection begins with an example of computer vision that cuts through time and bureaucratic opacity to help us meet real people from the past. Buried in thousands of files in the National Archives of Australia is evidence of the exclusionary “White Australia” policies of the nineteenth and twentieth centuries, which were intended to limit and discourage immigration by non-Europeans. Tim Sherratt and Kate Bagnall decided to see what would happen if they used a form of face-detection software made ubiquitous by modern surveillance systems and applied it to a security system of a century ago. What we get is a new way to see the government documents: not as a source of statistics but, Sherratt and Bagnall argue, as powerful evidence of the people affected by racism.