1,723 research outputs found
Smart Gas Sensors: Materials, Technologies, Practical Applications, and Use of Machine Learning – A Review
The electronic nose, popularly known as the E-nose, which combines gas sensor arrays (GSAs) with machine learning, has gained a strong foothold in gas sensing technology. The E-nose, designed to mimic the human olfactory system, is used for the detection and identification of various volatile compounds. The GSAs develop a unique signal fingerprint for each volatile compound, enabling pattern recognition with machine learning algorithms. Its inexpensive, portable, and non-invasive characteristics have rendered the E-nose system indispensable within the gas-sensing arena. As a result, E-noses have been widely employed in applications across the food industry, health management, disease diagnosis, water and air quality control, and toxic gas leakage detection. This paper reviews the various sensor fabrication technologies of GSAs and highlights the main operational framework of the E-nose system. The paper details vital signal pre-processing techniques of feature extraction and feature selection, as well as machine learning algorithms such as SVM, kNN, ANN, and Random Forests, for determining the type of gas and estimating its concentration in a competitive environment. The paper further explores the potential applications of E-noses for diagnosing diseases, monitoring air quality, assessing the quality of food samples, and estimating concentrations of volatile organic compounds (VOCs) in air and in food samples. The review concludes with challenges faced by E-noses, ways to tackle them, and recommendations for future work on the further development and design enhancement of E-noses.
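As a concrete illustration of the pattern-recognition stage surveyed here, the following minimal sketch trains one of the reviewed classifiers (an SVM) on placeholder features standing in for GSA signal fingerprints; the data, dimensions, and kernel choice are assumptions for illustration, not details from the review.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for extracted GSA features:
# 300 measurements x 8 sensors, 3 gas classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # SVM, one of the reviewed algorithms
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```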
Improving the predictive accuracy of the cross-selling of consumer loans using deep learning networks
Traditionally, most cross-selling models in retail banking use demographic information and interactions with marketing as input to statistical models or machine learning algorithms to predict whether a customer is willing to purchase a given financial product. We overcome this limitation by building several models that also use several years of account transaction data. The objective of this study is to analyze customers' credit card transactions in order to better predict cross-selling opportunities. We use deep learning to analyze almost 800,000 credit card transactions. The results show that such unique data contains valuable information on customers' consumption behavior and can significantly increase the predictive accuracy of a cross-selling model. In summary, we develop an autoencoder to extract features from the transaction data and use them as input to a classifier. We demonstrate that these features have predictive power that enhances the performance of the cross-selling model even further.
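A minimal sketch of the described two-stage design, an autoencoder whose latent features feed a classifier, is given below; the layer sizes, feature dimensions, and propensity head are illustrative assumptions, not the study's actual architecture.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=64, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_features))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

ae = AutoEncoder()
clf = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # purchase-propensity head

x = torch.randn(128, 64)                       # a batch of transaction features (assumed shape)
recon, z = ae(x)
recon_loss = nn.functional.mse_loss(recon, x)  # stage 1: train the autoencoder on reconstruction
score = torch.sigmoid(clf(z.detach()))         # stage 2: classify on the learned features
```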
Data- and Expert-Driven Variable Selection for Predictive Models in Healthcare: Towards Increased Interpretability in Underdetermined Machine Learning Problems
Modern data acquisition techniques in healthcare generate large collections of data from multiple sources, such as novel diagnosis and treatment methodologies. Some concrete examples are electronic healthcare record systems, genomics, and medical images. This leads to situations with often unstructured, high-dimensional heterogeneous patient cohort data where classical statistical methods may not be sufficient for optimal utilization of the data and informed decision-making. Instead, investigating such data structures with modern machine learning techniques promises to improve the understanding of patient health issues and may provide a better platform for informed decision-making by clinicians. Key requirements for this purpose include (a) sufficiently accurate predictions and (b) model interpretability. Achieving both aspects in parallel is difficult, particularly for datasets with few patients, which are common in the healthcare domain. In such cases, machine learning models encounter mathematically underdetermined systems and may overfit easily on the training data. An important approach to overcome this issue is feature selection, i.e., determining a subset of informative features from the original set of features with respect to the target variable. While potentially raising the predictive performance, feature selection fosters model interpretability by identifying a low number of relevant model parameters to better understand the underlying biological processes that lead to health issues.
Interpretability requires that feature selection is stable, i.e., small changes in the dataset do not lead to changes in the selected feature set. A concept to address instability is ensemble feature selection, i.e., the process of repeating the feature selection multiple times on subsets of samples of the original dataset and aggregating the results in a meta-model. This thesis presents two approaches for ensemble feature selection tailored towards high-dimensional data in healthcare: the Repeated Elastic Net Technique for feature selection (RENT) and the User-Guided Bayesian Framework for feature selection (UBayFS). While RENT is purely data-driven and builds upon elastic net regularized models, UBayFS is a general framework for ensembles with the capability to include expert knowledge in the feature selection process via prior weights and side constraints. A case study modeling the overall survival of cancer patients compares these novel feature selectors and demonstrates their potential in clinical practice.
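The ensemble idea behind a RENT-style selector can be sketched as follows: repeat a regularized fit on random subsamples and keep the features selected in most repetitions. The subsample size, elastic net settings, and the 80% stability threshold below are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))   # few samples, many features: an underdetermined setting
y = rng.integers(0, 2, size=60)

counts = np.zeros(X.shape[1])
n_repeats = 50
for _ in range(n_repeats):
    Xs, ys = resample(X, y, n_samples=40, random_state=int(rng.integers(1_000_000)))
    model = LogisticRegression(penalty="elasticnet", solver="saga",
                               l1_ratio=0.5, C=0.1, max_iter=2000).fit(Xs, ys)
    counts += (model.coef_[0] != 0)  # count how often each feature survives regularization

stable_features = np.where(counts / n_repeats >= 0.8)[0]
print(len(stable_features), "features selected in >=80% of repetitions")
```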
Beyond the selection of single features, UBayFS also allows for selecting whole feature groups (feature blocks) that were acquired from multiple data sources, as those mentioned above. Importance quantification of such feature blocks plays a key role in tracing information about the target variable back to the acquisition modalities. Such information on feature block importance may lead to positive effects on the use of human, technical, and financial resources if systematically integrated into the planning of patient treatment by excluding the acquisition of non-informative features. Since a generalization of feature importance measures to block importance is not trivial, this thesis also investigates and compares approaches for feature block importance rankings.
This thesis demonstrates that high-dimensional datasets from multiple data sources in the medical domain can be successfully tackled by the presented approaches for feature selection. Experimental evaluations demonstrate favorable properties in terms of predictive performance, stability, and interpretability of results, which carries high potential for better data-driven decision support in clinical practice.
Automated Distinct Bone Segmentation from Computed Tomography Images using Deep Learning
Large-scale CT scans are frequently performed for forensic and diagnostic purposes, to plan and direct surgical procedures, and to track the development of bone-related diseases. This often involves radiologists who have to annotate bones manually or in a semi-automatic way, which is a time-consuming task. Their annotation workload can be reduced by automated segmentation and detection of individual bones. This automation of distinct bone segmentation not only has the potential to accelerate current workflows but also opens up new possibilities for processing and presenting medical data for planning, navigation, and education.
In this thesis, we explored the use of deep learning for automating the segmentation of all individual bones within an upper-body CT scan. To do so, we had to find a network architecture that provides a good trade-off between the problem's high computational demands and the results' accuracy. After finding a baseline method and enlarging the dataset, we set out to eliminate the most prevalent types of error. To that end, we introduced a novel method called binary-prediction-enhanced multi-class (BEM) inference, which separates the task into two: distinguishing bone from non-bone is conducted separately from identifying the individual bones. Both predictions are then merged, which leads to superior results. Another type of error is tackled by our developed architecture, the Sneaky-Net, which receives additional inputs with larger fields of view but at a smaller resolution. We can thus sneak more extensive areas of the input into the network while keeping the growth of additional pixels in check.

Overall, we present a deep-learning-based method that reliably segments most of the over one hundred distinct bones present in upper-body CT scans in an end-to-end trained manner, quickly enough to be used in interactive software. Our algorithm has been included in our group's virtual reality medical image visualisation software SpectoVR, with the plan to be used as one of the puzzle pieces in surgical planning and navigation, as well as in the education of future doctors.
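The described BEM merging step can be illustrated with a toy example in which the binary bone/non-bone prediction gates the multi-class labels; the exact merge rule used in the thesis may differ.

```python
import numpy as np

# Toy per-voxel predictions on a 3x3 slice (0 = background, other values = bone labels).
multi_class = np.array([[0, 3, 3],
                        [0, 0, 7],
                        [5, 0, 0]])
binary_bone = np.array([[0, 1, 1],
                        [1, 0, 1],
                        [1, 0, 0]])      # 1 = bone, 0 = non-bone

# Keep the multi-class label only where the binary prediction agrees there is bone.
merged = np.where(binary_bone == 1, multi_class, 0)
print(merged)
```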
NEMISA Digital Skills Conference (Colloquium) 2023
The colloquium and its events centred on the role that data plays today as a desirable commodity that must become an important part of massifying digital skilling efforts. Governments amass ever more critical data that, if leveraged, could change the way public services are delivered, and even change the social and economic fortunes of any country. Therefore, smart governments and organisations increasingly require data skills to gain insights and foresight, to secure themselves, and for improved decision making and efficiency. However, data skills are scarce, and even more challenging is the inconsistency of the associated training programs, with most curated for the Science, Technology, Engineering, and Mathematics (STEM) disciplines. Nonetheless, the interdisciplinary yet agnostic nature of data means that there is opportunity to expand data skills into the non-STEM disciplines as well.
Immediate word recall in cognitive assessment can predict dementia using machine learning techniques
Uncertainty Quantification for Molecular Property Predictions with Graph Neural Architecture Search
Graph Neural Networks (GNNs) have emerged as a prominent class of data-driven methods for molecular property prediction. However, a key limitation of typical GNN models is their inability to quantify uncertainties in the predictions. This capability is crucial for ensuring the trustworthy use and deployment of models in downstream tasks. To that end, we introduce AutoGNNUQ, an automated uncertainty quantification (UQ) approach for molecular property prediction. AutoGNNUQ leverages architecture search to generate an ensemble of high-performing GNNs, enabling the estimation of predictive uncertainties. Our approach employs variance decomposition to separate data (aleatoric) and model (epistemic) uncertainties, providing valuable insights for reducing them. In our computational experiments, we demonstrate that AutoGNNUQ outperforms existing UQ methods in terms of both prediction accuracy and UQ performance on multiple benchmark datasets. Additionally, we utilize t-SNE visualization to explore correlations between molecular features and uncertainty, offering insight for dataset improvement. AutoGNNUQ has broad applicability in domains such as drug discovery and materials science, where accurate uncertainty quantification is crucial for decision-making.
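The variance decomposition mentioned above follows the usual law-of-total-variance treatment for deep ensembles: averaging the members' predictive variances gives the aleatoric part, while the spread of the members' means gives the epistemic part. A sketch under assumed array shapes:

```python
import numpy as np

# Assumed shapes: 5 ensemble members, each predicting a mean and a variance
# for 100 molecules (random values stand in for real model outputs).
means = np.random.rand(5, 100)
variances = np.random.rand(5, 100)

aleatoric = variances.mean(axis=0)  # data noise: average per-member predictive variance
epistemic = means.var(axis=0)       # model uncertainty: disagreement between member means
total = aleatoric + epistemic       # law of total variance
```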
When Deep Learning Meets Polyhedral Theory: A Survey
In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks in tasks such as computer vision and natural language processing. Meanwhile, the structure of neural networks converged back to simpler representations based on piecewise constant and piecewise linear functions such as the Rectified Linear Unit (ReLU), which became the most commonly used type of activation function in neural networks. That made certain types of network structure, such as the typical fully-connected feedforward neural network, amenable to analysis through polyhedral theory and to the application of methodologies such as Linear Programming (LP) and Mixed-Integer Linear Programming (MILP) for a variety of purposes. In this paper, we survey the main topics emerging from this fast-paced area of work, which bring a fresh perspective to understanding neural networks in more detail as well as to applying linear optimization techniques to train, verify, and reduce the size of such networks.
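As one example of the connection this survey covers, a ReLU unit $y = \max(0, w^\top x + b)$ admits a standard big-M MILP encoding, assuming a constant $M$ bounding $|w^\top x + b|$ over the input domain:

```latex
% Big-M encoding of y = max(0, w^T x + b); the binary z indicates whether the
% unit is active. M is an assumed valid bound on |w^T x + b|.
\begin{align*}
  y &\ge w^\top x + b, & y &\ge 0, \\
  y &\le w^\top x + b + M(1 - z), & y &\le M z, & z \in \{0, 1\}
\end{align*}
```

With $z = 1$ the constraints force $y = w^\top x + b$ (the unit is active); with $z = 0$ they force $y = 0$, which is feasible only when $w^\top x + b \le 0$.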
Exploiting Emotions via Composite Pretrained Embedding and Ensemble Language Model
Decisions in the modern era are based on more than just the available data; they also incorporate feedback from online sources. Processing such reviews is known as sentiment analysis (SA) or emotion analysis. Understanding the user's perspective and routines is crucial nowadays for multiple reasons: it is used by both businesses and governments to make strategic decisions. Various architectural and vector embedding strategies have been developed for SA processing, and accurate representation of text is crucial for automatic SA. Due to the large number of languages spoken and written, polysemy and syntactic or semantic issues are common. To get around these problems, we developed effective composite embedding (ECE), a method that combines the advantages of vector embedding techniques that are either context-independent (like GloVe and fastText) or context-aware (like XLNet) to effectively represent the features needed for processing. To improve performance on emotion and sentiment tasks, we propose a stacked ensemble of deep language models. ECE with the ensembled model is evaluated on a balanced dataset to show that it is a reliable embedding technique and a generalised model for SA. To evaluate ECE, cutting-edge ML and deep-net language models are deployed and compared. The model is evaluated on benchmark datasets such as MR and Kindle, along with a real-time tweet dataset of user complaints. LIME is used to verify the model's predictions and to provide statistical results per sentence. The model with ECE embedding provides state-of-the-art results on the real-time dataset as well.
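A hedged sketch of how such a composite embedding could be assembled, concatenating a mean-pooled static (context-independent) representation with a mean-pooled XLNet (context-aware) one, is shown below. The model identifiers are standard gensim and Hugging Face names; the pooling scheme and dimensions are assumptions, not the paper's exact ECE recipe.

```python
import numpy as np
import gensim.downloader as api
import torch
from transformers import AutoTokenizer, AutoModel

glove = api.load("glove-wiki-gigaword-100")            # context-independent vectors (100-d)
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
xlnet = AutoModel.from_pretrained("xlnet-base-cased")  # context-aware encoder (768-d)

def composite_embedding(sentence: str) -> np.ndarray:
    tokens = sentence.lower().split()
    # Mean-pooled static embedding; out-of-vocabulary words are skipped.
    vecs = [glove[t] for t in tokens if t in glove]
    static = np.mean(vecs or [np.zeros(100)], axis=0)
    # Mean-pooled contextual embedding over XLNet's last hidden states.
    enc = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = xlnet(**enc).last_hidden_state    # shape (1, seq_len, 768)
    contextual = hidden.mean(dim=1).squeeze(0).numpy()
    return np.concatenate([static, contextual])    # 100 + 768 = 868 dimensions

features = composite_embedding("the battery drains far too quickly")
```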
On the Utility of Representation Learning Algorithms for Myoelectric Interfacing
Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.