Explainable Early Prediction of Gestational Diabetes Biomarkers by Combining Medical Background and Wearable Devices: A Pilot Study with a Cohort Group in South Africa
This study aims to explore the potential of Internet of Things (IoT) devices and explainable Artificial Intelligence (AI) techniques in predicting biomarker values associated with gestational diabetes mellitus (GDM) when measured 13 to 16 weeks prior to diagnosis. We developed a system that forecasts biomarkers such as LDL, HDL, triglycerides, cholesterol, HbA1c, and results from the Oral Glucose Tolerance Test (OGTT), including fasting glucose and 1-hour and 2-hour post-load glucose values. These biomarker values are predicted from sensor measurements collected around week 12 of pregnancy, including continuous glucose levels, short physical movement recordings, and medical background information.
To the best of our knowledge, this is the first study to forecast GDM-associated biomarker values 13 to 16 weeks prior to the GDM screening test, using continuous glucose monitoring devices, a wristband for activity detection, and medical background data. We applied machine learning models, specifically Decision Tree and Random Forest Regressors, along with Coupled-Matrix Tensor Factorisation (CMTF) and Elastic Net techniques, examining all possible combinations of these methods across different data modalities. The results demonstrated good performance for most biomarkers. On average, the models achieved Mean Squared Error (MSE) between 0.29 and 0.42 and Mean Absolute Error (MAE) between 0.23 and 0.45 for biomarkers such as HDL, LDL, cholesterol, and HbA1c. For the OGTT glucose values, the average MSE ranged from 0.95 to 2.44, and the average MAE ranged from 0.72 to 0.91. Additionally, CMTF with the Alternating Least Squares technique yielded slightly better results (0.16 MSE and 0.07 MAE on average) than the well-known Elastic Net feature selection technique. While our study was conducted with a limited cohort in South Africa, our findings offer promising indications regarding the potential for predicting biomarker values in pregnant women through the integration of wearable devices and medical background data in the analysis. Nevertheless, further validation on a larger, more diverse cohort is imperative to substantiate these encouraging results.
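As a concrete illustration of the model families named above, the sketch below fits a Random Forest Regressor and an Elastic Net to predict a single biomarker and reports MSE and MAE. The data, feature layout, and hyperparameters are placeholders, not the study's pipeline, and the CMTF stage is not reproduced.

```python
# Minimal sketch (not the authors' pipeline): predicting one biomarker, e.g. HbA1c,
# from hypothetical week-12 features with the model families named in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
# Hypothetical feature matrix: CGM summary statistics, activity counts, medical background.
X = rng.normal(size=(60, 12))
y = 5.3 + 0.4 * X[:, 0] - 0.2 * X[:, 3] + rng.normal(scale=0.3, size=60)  # synthetic HbA1c

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("RandomForest", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("ElasticNet", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name,
          "MSE=%.3f" % mean_squared_error(y_te, pred),
          "MAE=%.3f" % mean_absolute_error(y_te, pred))
```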
LIPIcs, Volume 251, ITCS 2023, Complete Volume
MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Multi-GPU Platforms
The increasing size of input graphs for graph neural networks (GNNs)
highlights the demand for using multi-GPU platforms. However, existing
multi-GPU GNN systems optimize the computation and communication individually
based on the conventional practice of scaling dense DNNs. For irregularly
sparse and fine-grained GNN workloads, such solutions miss the opportunity to
jointly schedule/optimize the computation and communication operations for
high-performance delivery. To this end, we propose MGG, a novel system design
to accelerate full-graph GNNs on multi-GPU platforms. The core of MGG is its
novel dynamic software pipeline to facilitate fine-grained
computation-communication overlapping within a GPU kernel. Specifically, MGG
introduces GNN-tailored pipeline construction and GPU-aware pipeline mapping to
facilitate workload balancing and operation overlapping. MGG also incorporates
an intelligent runtime design with analytical modeling and optimization
heuristics to dynamically improve the execution performance. Extensive
evaluation reveals that MGG outperforms state-of-the-art full-graph GNN systems
across various settings: on average 4.41X, 4.81X, and 10.83X faster than DGL,
MGG-UVM, and ROC, respectively.
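MGG's key mechanism, fine-grained overlap of communication and computation inside a single GPU kernel, cannot be reproduced in a few lines. The sketch below only illustrates the general overlap idea at a much coarser granularity, hiding an asynchronous feature transfer behind local aggregation using CUDA streams in PyTorch; the tensor shapes and the host-side stand-in for a remote partition are assumptions for the example, not MGG's design.

```python
# Illustrative only: coarse-grained compute/communication overlap with CUDA streams.
# MGG performs this overlap inside a single kernel; this sketch only shows the
# general idea of hiding feature movement behind local aggregation.
import torch

assert torch.cuda.is_available()
dev = torch.device("cuda:0")

local_feats = torch.randn(50_000, 128, device=dev)
remote_feats_host = torch.randn(50_000, 128, pin_memory=True)  # stands in for a remote partition
weight = torch.randn(128, 128, device=dev)

copy_stream = torch.cuda.Stream()
with torch.cuda.stream(copy_stream):
    # "Communication": asynchronous host-to-device transfer of remote features.
    remote_feats = remote_feats_host.to(dev, non_blocking=True)

# "Computation": local aggregation proceeds on the default stream in the meantime.
local_out = local_feats @ weight

torch.cuda.current_stream().wait_stream(copy_stream)  # join before using remote data
out = local_out + remote_feats @ weight
torch.cuda.synchronize()
```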
Simultaneous Multiparametric and Multidimensional Cardiovascular Magnetic Resonance Imaging
No abstract available
On the Utility of Representation Learning Algorithms for Myoelectric Interfacing
Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer—a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry.
This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce the training burden.
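For readers unfamiliar with the multi-label setup used in Paper I, the sketch below shows the generic pattern: a small 1D CNN over EMG channels with one sigmoid output per movement, trained with binary cross-entropy. Channel count, window length, and architecture are illustrative assumptions, not the dissertation's models.

```python
# Minimal sketch of multi-label EMG decoding with a CNN (hypothetical shapes,
# not the architecture from Paper I).
import torch
import torch.nn as nn

N_CHANNELS, WINDOW, N_MOVEMENTS = 64, 200, 12  # HD-sEMG channels, samples per window, movement labels

model = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 128, kernel_size=7, padding=3), nn.ReLU(),
    nn.Conv1d(128, 128, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(128, N_MOVEMENTS),          # one logit per movement; several may be active at once
)
criterion = nn.BCEWithLogitsLoss()        # multi-label: independent sigmoid per movement

emg = torch.randn(8, N_CHANNELS, WINDOW)            # a batch of EMG windows
labels = torch.randint(0, 2, (8, N_MOVEMENTS)).float()
loss = criterion(model(emg), labels)
loss.backward()
```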
A computational framework for pharmaco-mechanical interactions in arterial walls using parallel monolithic domain decomposition methods
A computational framework is presented to numerically simulate the effects of
antihypertensive drugs, in particular calcium channel blockers, on the
mechanical response of arterial walls. A stretch-dependent smooth muscle model
by Uhlmann and Balzani is modified to describe the interaction of
pharmacological drugs and the inhibition of smooth muscle activation. The
coupled deformation-diffusion problem is then solved using the finite element
software FEDDLib and overlapping Schwarz preconditioners from the Trilinos
package FROSch. These preconditioners include highly scalable parallel GDSW
(generalized Dryja-Smith-Widlund) and RDSW (reduced GDSW) preconditioners.
Simulation results show the expected increase in the lumen diameter of an
idealized artery due to the drug-induced reduction of smooth muscle
contraction, as well as a decrease in the rate of arterial contraction in the
presence of calcium channel blockers. Strong and weak parallel scalability of
the resulting computational implementation is also analyzed.
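The solver side of the framework rests on overlapping Schwarz preconditioners from FROSch. Purely as a pedagogical sketch, and assuming a 1D Poisson model problem, the snippet below builds a one-level additive Schwarz preconditioner from overlapping index blocks; the GDSW and RDSW variants used in the paper additionally construct a coarse space and run in parallel through Trilinos, none of which is reproduced here.

```python
# Toy illustration of an overlapping additive Schwarz preconditioner on a 1D
# Poisson problem. FROSch's GDSW/RDSW variants additionally build a coarse space
# for parallel scalability; that is not shown here.
import numpy as np

n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian (Dirichlet)

def overlapping_subdomains(n, nsub=4, overlap=2):
    size = n // nsub
    return [np.arange(max(0, i * size - overlap), min(n, (i + 1) * size + overlap))
            for i in range(nsub)]

domains = overlapping_subdomains(n)

def additive_schwarz(r):
    """One-level additive Schwarz: sum of exact solves on overlapping blocks."""
    z = np.zeros_like(r)
    for idx in domains:
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return z

# Preconditioned Richardson iteration as a quick sanity check.
b = np.ones(n)
x = np.zeros(n)
for _ in range(50):
    x = x + 0.5 * additive_schwarz(b - A @ x)
print("residual norm:", np.linalg.norm(b - A @ x))
```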
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
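The scanning mechanism follows a standard leaky-wave relation: biasing the LC changes its permittivity, which shifts the guided phase constant β and therefore the main-beam direction via sin θ ≈ β/k0. The sketch below estimates this with an idealized TE10-like dispersion for a dielectric-filled guide; the frequency, broad-wall width, and permittivity range are assumed illustrative values, not the paper's design.

```python
# Back-of-the-envelope estimate of leaky-wave scan angle vs. LC bias state.
# Assumes an idealized TE10-like dispersion beta = sqrt(eps_r*k0^2 - (pi/a)^2)
# and the usual leaky-wave relation sin(theta) ~ beta/k0. Numbers are illustrative,
# not taken from the paper.
import numpy as np

c0 = 3e8
f = 28e9                       # fixed operating frequency (assumed)
k0 = 2 * np.pi * f / c0
a = 4e-3                       # broad-wall width (assumed)

for eps_r in np.linspace(2.40, 2.75, 5):   # LC permittivity swept by the DC bias (assumed range)
    beta = np.sqrt(eps_r * k0**2 - (np.pi / a)**2)
    theta = np.degrees(np.arcsin(beta / k0))
    print(f"eps_r = {eps_r:.2f} -> beam at {theta:.1f} deg from broadside")
```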
Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos
The success of Neural Radiance Fields (NeRFs) for modeling and free-view
rendering of static objects has inspired numerous attempts at dynamic scenes.
Current techniques that utilize neural rendering for facilitating free-view
videos (FVVs) are restricted to either offline rendering or are capable of
processing only brief sequences with minimal motion. In this paper, we present
a novel technique, Residual Radiance Field or ReRF, as a highly compact neural
representation to achieve real-time FVV rendering on long-duration dynamic
scenes. ReRF explicitly models the residual information between adjacent
timestamps in the spatial-temporal feature space, with a global
coordinate-based tiny MLP as the feature decoder. Specifically, ReRF employs a
compact motion grid along with a residual feature grid to exploit inter-frame
feature similarities. We show such a strategy can handle large motions without
sacrificing quality. We further present a sequential training scheme to
maintain the smoothness and the sparsity of the motion/residual grids. Based on
ReRF, we design a special FVV codec that achieves a compression rate of three
orders of magnitude and provides a companion ReRF player to support online
streaming of long-duration FVVs of dynamic scenes. Extensive experiments
demonstrate the effectiveness of ReRF for compactly representing dynamic
radiance fields, enabling an unprecedented free-viewpoint viewing experience in
speed and quality.
Comment: Accepted by CVPR 2023. Project page: https://aoliao12138.github.io/ReRF
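The core representation can be paraphrased as follows: each frame's feature grid is reconstructed by warping the previous frame's grid with a compact motion grid and adding a residual grid, after which a single global tiny MLP decodes the features. The sketch below is a conceptual rendering of that per-frame update with placeholder grid resolutions and a naive warp; it is not the paper's implementation or codec.

```python
# Conceptual sketch of a ReRF-style per-frame update:
#   feature_t ~= warp(feature_{t-1}, motion_grid_t) + residual_grid_t,
# decoded by one tiny MLP shared across frames. Shapes, the warp, and the MLP
# are placeholders, not the paper's implementation.
import torch
import torch.nn.functional as F

D, C = 64, 8                                        # feature-grid resolution and channels (assumed)
feat_prev = torch.randn(1, C, D, D, D)              # dense feature grid of frame t-1
motion = torch.zeros(1, D // 4, D // 4, D // 4, 3)  # compact, low-resolution motion grid for frame t
residual = 0.1 * torch.randn(1, C, D, D, D)         # residual feature grid for frame t (kept sparse in ReRF)

# Upsample the compact motion grid to full resolution and warp the previous features.
motion_full = F.interpolate(motion.permute(0, 4, 1, 2, 3), size=(D, D, D),
                            mode="trilinear", align_corners=True).permute(0, 2, 3, 4, 1)
base = torch.stack(torch.meshgrid(*[torch.linspace(-1, 1, D)] * 3, indexing="ij"), dim=-1)[None]
base = base.flip(-1)                                # grid_sample expects (x, y, z) ordering
feat_t = F.grid_sample(feat_prev, base + motion_full, align_corners=True) + residual

# A single tiny MLP decodes per-voxel features to density and color.
tiny_mlp = torch.nn.Sequential(torch.nn.Linear(C, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))
sigma_rgb = tiny_mlp(feat_t.permute(0, 2, 3, 4, 1).reshape(-1, C))   # [density, r, g, b] per voxel
```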
Transition Physics and Boundary-Layer Stability: Computational Modeling in Compressible Flow
Laminar-to-turbulent transition of boundary layers remains a critical subject of study in aerodynamics. The differences in surface friction and heating between laminar and turbulent flows can be nearly an order of magnitude. Accurate prediction of the transition region between these two regimes is essential for design applications.
The objective of this work is to advance simplified approaches for representing the laminar boundary layer and the perturbation dynamics that usher flows to turbulence. A versatile boundary-layer solver, DEKAF, which includes thermochemical effects, has been created, and the in-house nonlinear parabolized stability equation code, EPIC, has been extended with an approach to reduce the divergent growth associated with the inclusion of the mean-flow distortion. These simplified approaches are then applied to studies aimed at improving aircraft energy efficiency.
Under the auspices of a NASA University Leadership Initiative, the transformative technology
of a swept, slotted, natural-laminar-flow wing is leveraged to maintain laminar flow over large
extents of the wing surface, thereby increasing energy efficiency. From an aircraft performance
perspective, sweep is beneficial as it reduces the experienced wave drag. From a boundary-layer transition perspective, though, sweep introduces several physical complications, spawned by the crossflow instability mechanism. As sweep is increased, the crossflow mechanism becomes increasingly unstable, and can lead to an early transition to turbulence. The overarching goal of the present analysis then is to address the question, how much sweep can be applied to this wing while maintaining the benefits of the slotted, natural-laminar-flow design? Linear and nonlinear stability analyses will be presented to assess various pathways to turbulence.
In addition, companion computations are presented for the risk-reduction experiment run in the Klebanoff-Saric Wind Tunnel at Texas A&M University. Linear analyses assess a wide range of configurations to inform experimentalists where the relevant unstable content resides. A comparison between simulations and experimental measurements is presented and shows good agreement.
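The linear stability analyses mentioned above are commonly paired with the e^N transition-prediction method, in which the spatial growth rate of an instability (for example, the crossflow mode highlighted here) is integrated along the surface to form an N-factor and compared against an empirical threshold. The sketch below shows that integration on synthetic growth-rate data; the growth-rate curve and the critical N are assumptions for illustration, not results from this work.

```python
# Illustrative N-factor integration (the e^N method commonly used with linear
# stability analysis). The growth-rate curve and N_crit are synthetic, not from this work.
import numpy as np

x = np.linspace(0.0, 1.0, 400)                           # chordwise arc length (normalized)
neg_alpha_i = np.clip(280.0 * x * (0.6 - x), 0.0, None)  # synthetic spatial growth rate, -alpha_i

# N(x) = integral of -alpha_i dx' from the point where the mode first becomes unstable.
N = np.concatenate(([0.0], np.cumsum(0.5 * (neg_alpha_i[1:] + neg_alpha_i[:-1]) * np.diff(x))))

N_crit = 9.0                                              # commonly quoted correlation threshold (assumed)
onset = x[np.argmax(N >= N_crit)] if N.max() >= N_crit else None
print(f"max N = {N.max():.2f}, predicted transition onset at x = {onset}")
```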
Tensor-variate machine learning on graphs
Traditional machine learning algorithms are facing significant challenges as the world enters the era of big data, with a dramatic expansion in volume and range of applications and an increase in the variety of data sources. The large- and multi-dimensional nature of data often increases the computational costs associated with their processing and raises the risks of model over-fitting - a phenomenon known as the curse of dimensionality. To this end, tensors have become a subject of great interest in the data analytics community, owing to their remarkable ability to super-compress high-dimensional data into a low-rank format, while retaining the original data structure and interpretability. This leads to a significant reduction in computational costs, from an exponential complexity to a linear one in the data dimensions.
An additional challenge when processing modern big data is that they often reside on irregular domains and exhibit relational structures, which violates the regular grid assumptions of traditional machine learning models. To this end, there has been an increasing amount of research in generalizing traditional learning algorithms to graph data. This allows for the processing of graph signals while accounting for the underlying relational structure, such as user interactions in social networks, vehicle flows in traffic networks, transactions in supply chains, chemical bonds in proteins, and trading data in financial networks, to name a few.
Although promising results have been achieved in these fields, there is a void in the literature when it comes to the conjoint treatment of tensors and graphs for data analytics. Solutions in this area are increasingly urgent, as modern big data is both large-dimensional and irregular in structure. To this end, the goal of this thesis is to explore machine learning methods that can fully exploit the advantages of both tensors and graphs. In particular, the following approaches are introduced: (i) Graph-regularized tensor regression framework for modelling high-dimensional data while accounting for the underlying graph structure; (ii) Tensor-algebraic approach for computing efficient convolution on graphs; (iii) Graph tensor network framework for designing neural learning systems which is both general enough to describe most existing neural network architectures and flexible enough to model large-dimensional data on any and many irregular domains. The considered frameworks were employed in several real-world applications, including air quality forecasting, protein classification, and financial modelling. Experimental results validate the advantages of the proposed methods, which achieved better or comparable performance against state-of-the-art models. Additionally, these methods benefit from increased interpretability and reduced computational costs, which are crucial for tackling the challenges posed by the era of big data.
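To make the graph-regularization idea in item (i) concrete, the sketch below shows it in the simplest vector-valued setting: a least-squares fit with a graph-Laplacian smoothness penalty on the coefficients, w* = argmin ||y - Xw||^2 + lambda * w^T L w. The chain graph and data are invented for illustration, and the thesis's tensor-variate extension is not reproduced.

```python
# Minimal sketch of graph regularization in the simplest (vector) setting:
#   w* = argmin_w ||y - X w||^2 + lam * w^T L w,
# where L is the Laplacian of a graph over the features. The thesis extends this
# idea to tensor-valued regression; that generalization is not shown here.
import numpy as np

rng = np.random.default_rng(0)
p = 6
# Chain graph over the p features: neighbouring coefficients are encouraged to be similar.
A = np.diag(np.ones(p - 1), 1) + np.diag(np.ones(p - 1), -1)
L = np.diag(A.sum(axis=1)) - A                    # graph Laplacian

X = rng.normal(size=(100, p))
w_true = np.array([1.0, 1.1, 0.9, -0.5, -0.6, -0.4])   # smooth over the chain
y = X @ w_true + 0.1 * rng.normal(size=100)

lam = 5.0
w_hat = np.linalg.solve(X.T @ X + lam * L, X.T @ y)    # closed-form solution
print(np.round(w_hat, 2))
```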