DropIn: Making Reservoir Computing Neural Networks Robust to Missing Inputs by Dropout
The paper presents a novel, principled approach to train recurrent neural networks from the Reservoir Computing family that are robust to missing parts of the input features at prediction time. Building on the ensembling properties of Dropout regularization, we propose a methodology, named DropIn, which efficiently trains a neural model as a committee machine of subnetworks, each capable of predicting with a subset of the original input features. We discuss the application of the DropIn methodology in the context of Reservoir Computing models, targeting applications characterized by input sources that are unreliable or prone to disconnection, such as pervasive wireless sensor networks and ambient intelligence. We provide an experimental assessment using real-world data from such application domains, showing how the DropIn methodology maintains predictive performance comparable to that of a model without missing features, even when 20%-50% of the inputs are not available.
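A minimal sketch of the input-masking idea behind DropIn follows; the drop probability, the number of masked copies, and the Reservoir Computing training loop hinted at in the comments are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_in_copies(X, drop_prob=0.3, n_copies=5):
    """Return copies of the input matrix X (samples x features) with each
    feature zeroed out independently with probability drop_prob, i.e.
    Dropout applied to the inputs rather than to the hidden units."""
    copies = []
    for _ in range(n_copies):
        mask = rng.random(X.shape[1]) >= drop_prob   # per-feature mask
        copies.append(X * mask)                      # zero dropped features
    return copies

# Hypothetical usage with a Reservoir Computing model: drive the fixed
# reservoir with each masked copy and fit the linear readout on the union
# of the resulting state/target pairs, so that the readout behaves as a
# committee over feature subsets at prediction time.
masked_inputs = drop_in_copies(np.random.rand(100, 8))
```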
On the prediction of clinical outcomes using Heart Rate Variability estimated from wearable devices
This thesis explores the use of Heart Rate Variability as a tool for predicting health outcomes, focusing on data derived from photoplethysmography (PPG) sensors in wrist-worn wearable devices such as smartwatches. These devices offer a unique opportunity for cost-effective, continuous, and unobtrusive monitoring of heart health. However, PPG data is susceptible to motion artefacts, challenging the reliability of Heart Rate Variability metrics derived from it.
A critical finding of this research is the unreliability of specific frequency-domain Heart Rate Variability features, such as the Sympathovagal Balance Index (SVI), due to low signal-to-noise ratio in certain frequency bands. Conversely, the thesis demonstrates that most HRV features, including Root Mean Square of Successive Differences between normal heartbeats (RMSSD) and Standard Deviation of Normal heartbeats (SDNN), can be reliably extracted under conditions of motion, such as during physical activity or recovery from exercise. This is achieved by employing accelerometry data from wearable devices to filter out unreliable PPG data.
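As a hedged illustration of this accelerometry-based filtering (the thesis's actual criteria are not reproduced here), a simple per-window motion-index filter might look as follows; the sampling rate, window length, and threshold are assumed values.

```python
import numpy as np

def accept_ppg_windows(acc, ppg, fs=32, win_s=30, acc_thresh=0.05):
    """Return (start, end) sample indices of PPG windows whose
    accelerometer activity is low enough to be trusted. acc: (n, 3)
    accelerometer samples; ppg: (n,) PPG samples; fs, win_s, and
    acc_thresh are assumed values, not the thesis's."""
    win = fs * win_s
    mag = np.linalg.norm(acc, axis=1)   # acceleration magnitude
    accepted = []
    for start in range(0, len(ppg) - win + 1, win):
        # std of the magnitude as a simple per-window motion index
        if mag[start:start + win].std() < acc_thresh:
            accepted.append((start, start + win))
    return accepted
```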
The thesis also addresses the issue of missing data in Heart Rate Variability analysis, a consequence of motion artefacts and the energy-saving strategies of wearable devices. By exploring different interpolation methods and their effects on Heart Rate Variability features, this research identifies the best approaches for handling missing data. In particular, it recommends operating on the timestamp time series rather than the duration time series, contradicting traditional Heart Rate Variability preprocessing practices. Quadratic interpolation in the time domain was identified as the most effective method, introducing minimal error across numerous Heart Rate Variability features, unlike interpolation in the duration domain.
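A minimal sketch of the recommended approach, assuming SciPy's `interp1d` and illustrative beat timestamps: interpolate the timestamp series quadratically over beat indices, then recover inter-beat durations by differencing.

```python
import numpy as np
from scipy.interpolate import interp1d

def interpolate_timestamps(idx_obs, t_obs, idx_all):
    """Operate on the timestamp series t(i) (beat index -> arrival time)
    rather than on the duration series IBI(i). idx_obs: indices of the
    observed beats; t_obs: their timestamps (s); idx_all: the full index
    range including missing beats."""
    f = interp1d(idx_obs, t_obs, kind="quadratic")  # quadratic, time domain
    t_full = f(idx_all)
    ibis = np.diff(t_full) * 1000.0  # inter-beat intervals in ms
    return t_full, ibis

# Example with assumed values: beats 3 and 4 are missing.
idx_obs = np.array([0, 1, 2, 5, 6, 7])
t_obs = np.array([0.00, 0.81, 1.63, 4.05, 4.86, 5.66])
t_full, ibis = interpolate_timestamps(idx_obs, t_obs, np.arange(8))
```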
The research presented in this thesis evaluates Heart Rate Variability features derived from ultra-short measurement windows, demonstrating the feasibility of accurately estimating RMSSD and SDNN using 30-second and 1-minute time windows, respectively. This study, unique in assessing the effect of missing values on ultra-short Heart Rate Variability data, reveals that missing values significantly impact SDNN estimations while moderately affecting RMSSD. The analysis highlights that ultra-short inter-beat interval time series limit the assessment of very low frequency (VLF) components, increasing bias in SDNN estimates. This finding is particularly significant in light of the prevalent use of SDNN in commercial wearables, underscoring its importance for continuous heart health monitoring. The study notes that the shorter the measurement window and the greater the amount of missing values, the larger the bias observed in SDNN.
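For reference, the two features in question have standard definitions, sketched here over an NN-interval series in milliseconds.

```python
import numpy as np

def rmssd(nn_ms):
    """Root Mean Square of Successive Differences (ms)."""
    d = np.diff(nn_ms)
    return np.sqrt(np.mean(d ** 2))

def sdnn(nn_ms):
    """Standard Deviation of NN intervals (ms)."""
    return np.std(nn_ms, ddof=1)

# Ultra-short windows as in the text: RMSSD over ~30 s of beats and
# SDNN over ~1 min, both computed on the NN (normal-to-normal) series.
```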
A novel aspect of the thesis is the creation of an innovative mathematical model designed to estimate the impact of circadian rhythms on resting heart rate. This model stands out for its computational efficiency, making it particularly suitable for data obtained from wearable devices. It surpasses the single-component cosinor model in accuracy, demonstrated by a lower root mean square error (RMSE) in predicting future heart rate values. Additionally, it retains the advantage of providing easily interpretable parameters, such as MESOR, Acrophase, and Amplitude, which are essential for assessing changes in heart activity.
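The novel model itself is not reproduced here; the sketch below shows the single-component cosinor baseline it is compared against, with MESOR, Amplitude, and Acrophase as the fitted parameters and synthetic data standing in for wearable samples.

```python
import numpy as np
from scipy.optimize import curve_fit

def cosinor(t_h, mesor, amplitude, acrophase):
    """Single-component cosinor: HR(t) = M + A*cos(2*pi*t/24 + phi),
    with t in hours and a fixed 24 h period."""
    return mesor + amplitude * np.cos(2 * np.pi * t_h / 24 + acrophase)

# Synthetic resting-HR samples standing in for wearable data.
t_h = np.linspace(0, 48, 200)
hr = cosinor(t_h, 60, 5, 1.0) + np.random.normal(0, 1.5, t_h.size)

(mesor, amplitude, acrophase), _ = curve_fit(cosinor, t_h, hr,
                                             p0=(60.0, 5.0, 0.0))
```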
The thesis demonstrates that Heart Rate data can accurately estimate SDNN24 (the Standard Deviation of NN intervals over 24 hours), with a difference of about 0.22±11.47 (RMSE = 53.81). This finding indicates that, despite being fragmentary, 24-hour HR data from wrist-worn fitness devices is adequate for estimating SDNN24 and assessing health status, as evidenced by an F1 score of 0.97. The robustness of SDNN24 estimation against noisy data suggests that wrist-worn wearables are capable of reliably monitoring cardiovascular health on a continuous basis, thus facilitating early interventions in response to changes in Sinoatrial Node activity.
The final part of the thesis introduces an innovative approach to health outcome prediction, employing Heart Rate Variability data gathered during exercise alongside Electronic Health Record data. Using Large Language Models to process EHR data and Convolutional AutoEncoders for Heart Rate Variability analysis, this approach reveals the untapped potential of exercise Heart Rate Variability data in health monitoring and prediction. Deep Learning models incorporating Heart Rate Variability data demonstrated enhanced predictive accuracy for cardiovascular disease (CVD), coronary heart disease (CHD), and angina, evidenced by higher Area Under the Curve (AUC) scores compared to models using only Electronic Health Records and demographic/behavioural data. The highest AUC scores achieved were 0.71 for CVD, 0.74 for CHD, and 0.73 for angina.
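As a rough sketch of the Convolutional AutoEncoder side of this pipeline (layer sizes and sequence length are assumptions, not the thesis architecture), a 1D convolutional autoencoder over fixed-length HRV sequences might look as follows in PyTorch.

```python
import torch
import torch.nn as nn

class HRVAutoEncoder(nn.Module):
    """Illustrative 1D convolutional autoencoder for fixed-length HRV
    sequences; all sizes here are assumptions for the sketch."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):            # x: (batch, 1, seq_len)
        z = self.encoder(x)          # compressed HRV representation
        return self.decoder(z), z    # reconstruction + embedding

# recon, embedding = HRVAutoEncoder()(torch.randn(4, 1, 256))
```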
In conclusion, this thesis contributes to the field of biomedical engineering by enhancing the understanding and application of HRV analysis in health outcome prediction using wearable device data. It offers insights for future work in continuous, unobtrusive health monitoring and underscores the need for further research in this rapidly evolving domain.
Topographic mapping for quality inspection and intelligent filtering of smart-bracelet data
Wrist-worn wearable devices equipped with heart activity sensors can provide valuable data that can be used for preventative health. However, heart activity analysis from these devices suffers from noise introduced by motion artifacts. Methods traditionally used to remove outliers based on motion data can lead to discarding clean data when some movement was present, and to accepting noisy data when the subject was still but the sensor was misplaced. This work shows that self-organizing maps (SOMs) can be used to effectively accept or reject sections of heart data collected from unreliable devices, such as wrist-worn devices. In particular, the proposed SOM-based filter accepts a larger amount of measurements (fewer false negatives) with a higher overall quality with respect to methods based solely on statistical analysis of motion data. We provide an empirical analysis on real-world wearable data, comprising heart and motion data of users. We show how topographic mapping can help identify and interpret patterns in the sensor data and relate them to an assessment of user state. More importantly, our experimental results show that the proposed approach is able to retain almost twice the amount of data while keeping samples with an error that is an order of magnitude lower with respect to a filter based on accelerometric data.
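A hedged sketch of how a SOM could be turned into such a filter, using the open-source MiniSom library; the feature vectors, map size, and the quantization-error acceptance rule are illustrative assumptions rather than the paper's exact method.

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

# Feature vectors describing windows of heart + motion data; sizes and
# contents here are placeholders for illustration only.
windows = np.random.rand(1000, 8)

som = MiniSom(10, 10, input_len=8, sigma=1.0, learning_rate=0.5,
              random_seed=42)
som.train_random(windows, 5000)

# One way to turn the map into a filter: windows that land far from
# their best-matching unit (high quantization error) get rejected.
def window_error(x):
    i, j = som.winner(x)
    return np.linalg.norm(x - som.get_weights()[i, j])

errors = np.array([window_error(x) for x in windows])
accepted = windows[errors < np.quantile(errors, 0.9)]  # keep best 90%
```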
FashionSearch++: Improving Consumer-to-Shop Clothes Retrieval with Hard Negatives
Consumer-to-shop clothes retrieval has recently emerged in the computer vision and multimedia communities with the development of architectures that can find similar in-shop clothing images given a query photo. Due to its nature, the main challenge lies in the domain gap between user-acquired and in-shop images. In this paper, we follow the most recent successful research in this area employing convolutional neural networks as feature extractors and propose to enhance the training supervision through a modified triplet loss that takes into account hard negative examples. We test the proposed approach on the Street2Shop dataset, achieving results comparable to state-of-the-art solutions and demonstrating good generalization properties when dealing with different settings and clothing categories.
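A minimal sketch of a triplet loss with in-batch hard negative mining, assuming paired consumer/shop embeddings; the batch-hard selection rule and margin value are illustrative, not necessarily the paper's formulation.

```python
import torch
import torch.nn.functional as F

def triplet_loss_hard_neg(anchor_emb, shop_emb, margin=0.3):
    """Batch-hard triplet loss: row i of anchor_emb (consumer photo) is
    matched with row i of shop_emb; the hardest non-matching in-batch
    shop embedding serves as the negative. Margin is an assumed value."""
    dists = torch.cdist(anchor_emb, shop_emb)      # (B, B) pairwise L2
    pos = dists.diag()                             # anchor-positive dist
    mask = torch.eye(len(dists), device=dists.device) * 1e9
    neg = (dists + mask).min(dim=1).values         # closest non-match
    return F.relu(pos - neg + margin).mean()

# loss = triplet_loss_hard_neg(torch.randn(32, 128), torch.randn(32, 128))
```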
Unifying hardware and software benchmarking: a resource-agnostic model
Lilja (2005) states: “In the field of computer science and engineering there is surprisingly little agreement on how to measure something as fundamental as the performance of a computer system.” The field lacks the most fundamental element for sharing measures and results: an appropriate metric to express performance.
Since the introduction of laptops and mobile devices, there has been a strong research focus on the energy efficiency of hardware. Many papers, both from academia and from industrial research labs, focus on methods and ideas to lower power consumption in order to lengthen the battery life of portable device components. Much less effort has been spent on defining the responsibility of software in the overall energy consumption of a computational system. Some attempts have been made to describe the energy behaviour of software, but none of them abstracts from the physical machine where the measurements were taken. In our opinion this is a strong drawback, because the results cannot be generalized.
In this work we attempt to bridge the gap between characterization and prediction, of both hardware and software, of performance and energy, in a single unified model. We propose a model designed to be as simple as possible and generic enough to abstract from the specific resource being described or predicted (applying to time, memory, and energy alike), but also concrete and practical, allowing useful and precise performance and energy predictions. The model applies to the broadest possible set of resources; we focus mainly on time and memory (hence bridging hardware benchmarking and the classical time complexity of algorithms) and on energy consumption. To ensure wide applicability in real-world scenarios, the model is completely black-box: it does not require any information about the source code of the program and relies only on external metrics, such as completion time, energy consumption, or performance counters.
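As a hedged reading of the black-box idea (the thesis model may take a different functional form), resource usage could be expressed as a function of externally measured metrics only; the sketch below uses a simple linear fit on synthetic data.

```python
import numpy as np

# X: external metrics (e.g. performance counters) measured for a set of
# programs on reference machines; y: the resource to predict (time,
# memory, or energy). Data here is synthetic for illustration.
rng = np.random.default_rng(1)
X = rng.random((50, 6))
y = X @ rng.random(6)

# Fit per-resource weights by least squares; the same functional form
# is reused whether y holds seconds, bytes, or joules.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predicted = X @ w
```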
Extending the benchmarking model, we define the notion of experimental computational complexity as the characterization of how resource usage changes as the input size grows.
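A minimal sketch of how such an experimental complexity could be estimated in practice, assuming a power-law growth model fitted in log-log space; the measured times below are illustrative numbers.

```python
import numpy as np

def experimental_complexity(sizes, usage):
    """Estimate the exponent k in usage ~ c * n**k by linear regression
    in log-log space."""
    k, log_c = np.polyfit(np.log(sizes), np.log(usage), 1)
    return k, np.exp(log_c)

# Illustrative completion times for growing input sizes:
sizes = np.array([1e3, 1e4, 1e5, 1e6])
times = np.array([0.002, 0.025, 0.31, 3.8])
k, c = experimental_complexity(sizes, times)  # k is roughly 1.1 here
```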
Finally, we define a high-level energy model capable of characterizing the power consumption of computers and clusters, in terms of the usage of resources as defined by our benchmarking model.
We tested our model in four experiments:
Expressiveness: we show the close relationship between energy and classical theoretical complexity, also showing that our experimental computational complexity is expressive enough to capture interesting behaviour of programs simply by analysing their resource usage.
Performance prediction: we use the large database of performance measures available on the CPU SPEC website to train our model and predict the performance of the CPU SPEC suite on randomly selected computers.
Energy profiling: we tested our model by characterizing and predicting the power usage of a cluster running OpenFOAM while changing the number of active nodes and cores.
Scheduling: applying our performance prediction model to features of programs extracted at runtime, we predict the device on which it is most convenient to execute each program in a heterogeneous system.
Study on the energy consumption properties of algorithms
We propose a methodology for measuring the energy consumption of programs with a unit of measure that makes the results independent of the hardware used. We also analyse the results by studying the energy properties of well-known algorithms and of multicore architectures.
Roman diplomatic acts, 338-270 BC. Chronology and historical context
The main subject is the analysis of Roman diplomatic acts stipulated during the Roman conquest of Italy. These acts are mainly foedera, paces, societates and amicitiae, with their Greek equivalents.
Starting from the systematic analysis of diplomatic acts, I have argued that Roman diplomatic action led to the conquest of Italy as much as military action did. Moreover, Roman diplomatic action in the Italian political landscape differed from that of other powers; consequently, Roman diplomatic acts were, as far as we can see, highly elaborate. Finally, many sources thought to be incoherent make sense when read in the light of a diplomatic analysis.
My conclusions concern Roman geopolitical and diplomatic strategy between the fourth and third centuries BC. The Romans used diplomacy as a tool of conquest. They were sophisticated in drafting diplomatic acts, carefully choosing clauses and words, and they used them to promote the Roman presence among other Italic peoples, widening their diplomatic horizon.
Through diplomacy, the Romans made contact with many political entities among the Italic and Italiote peoples; they concluded peace agreements and moved from war to war, provoking some wars that served their own interests; they made alliances that also enlarged the Roman army; they colonized territories; and they carefully kept an eye on the powers that were not yet under their dominion.
Dual-Branch Collaborative Transformer for Virtual Try-On
Image-based virtual try-on has recently gained a lot of attention in both the scientific and fashion industry communities due to its challenging setting and practical real-world applications. While pure convolutional approaches have been explored to solve the task, Transformer-based architectures have not received significant attention yet. Following the intuition that self- and cross-attention operators can deal with long-range dependencies and hence improve the generation, in this paper we extend a Transformer-based virtual try-on model by adding a dual-branch collaborative module that can exploit cross-modal information at generation time. We perform experiments on the VITON dataset, which is the standard benchmark for the task, and on a recently collected virtual try-on dataset with multi-category clothing, Dress Code. Experimental results demonstrate the effectiveness of our solution over previous methods and show that Transformer-based architectures can be a viable alternative for virtual try-on.
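As a hedged sketch of the dual-branch collaborative idea (dimensions, token layout, and branch roles are assumptions, not the paper's architecture), each branch can attend to the other's tokens via cross-attention.

```python
import torch
import torch.nn as nn

class DualBranchBlock(nn.Module):
    """Each branch attends to the other branch's tokens via
    cross-attention; embedding size and number of heads are assumed."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_b = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, person_tokens, garment_tokens):
        # person branch queries garment features, and vice versa
        p, _ = self.cross_a(person_tokens, garment_tokens, garment_tokens)
        g, _ = self.cross_b(garment_tokens, person_tokens, person_tokens)
        return person_tokens + p, garment_tokens + g

# block = DualBranchBlock()
# p, g = block(torch.randn(2, 196, 256), torch.randn(2, 196, 256))
```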