
    Heart rate detection from the supratrochlear vessels using a virtual reality headset integrated PPG sensor

    An increasing amount of virtual reality (VR) research is carried out to support the vast number of applications across the mental health, exercise and entertainment fields. Often, this research involves recording physiological measures such as heart rate with an electrocardiogram (ECG). One challenge is to enable remote, reliable and unobtrusive collection of VR and heart rate data, which would allow wider application of VR research and practice in the field in the future. To address this challenge, this work assessed the viability of replacing standard ECG devices with a photoplethysmography (PPG) sensor integrated directly into a VR headset over the branches of the supratrochlear vessels. The objective of this study was to investigate the reliability of the PPG sensor for heart-rate detection. A total of 21 participants were recruited. They were asked to wear an ECG belt as ground truth and a VR headset with the embedded PPG sensor. Signals from both sensors were captured in free-standing and sitting positions. Results showed that a VR headset with an integrated PPG sensor is a viable alternative to an ECG for heart rate measurement under optimal conditions with limited movement. Future research will build on this finding by testing the sensor in more interactive VR settings.
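    The abstract does not describe an analysis pipeline, but the core step, estimating heart rate from a raw PPG trace, can be sketched with standard signal-processing tools. This is a minimal sketch; the sampling rate, filter band and peak-spacing threshold below are illustrative assumptions, not values from the study.

```python
# Minimal sketch: heart-rate estimation from a PPG signal via band-pass
# filtering and peak detection. Sampling rate and filter band are assumptions,
# not parameters reported in the study.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate(ppg: np.ndarray, fs: float = 100.0) -> float:
    """Return the mean heart rate in beats per minute from a raw PPG trace."""
    # Band-pass 0.5-4 Hz (roughly 30-240 bpm) to isolate the pulsatile component.
    b, a = butter(3, [0.5 / (fs / 2), 4.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ppg)
    # Systolic peaks: enforce a minimum spacing of 0.33 s (max ~180 bpm).
    peaks, _ = find_peaks(filtered, distance=int(0.33 * fs))
    if len(peaks) < 2:
        return float("nan")
    intervals = np.diff(peaks) / fs   # inter-beat intervals in seconds
    return 60.0 / np.mean(intervals)  # beats per minute
```

    In a validation like the one described, the same interval statistics would be computed from the ECG R-peaks and the two series compared, for example with Bland-Altman limits of agreement.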

    Systematic whole-genome sequencing reveals an unexpected diversity among actinomycetoma pathogens and provides insights into their antibacterial susceptibilities

    Mycetoma is a neglected tropical chronic granulomatous inflammatory disease of the skin and subcutaneous tissues. More than 70 species with a broad taxonomic diversity have been implicated as agents of mycetoma. Understanding the full range of causative organisms and their antibiotic sensitivity profiles is essential for the appropriate treatment of infections. The present study focuses on the analysis of full genome sequences and antibiotic inhibitory concentration profiles of actinomycetoma strains from patients seen at the Mycetoma Research Centre in Sudan, with a view to developing rapid diagnostic tests. Seventeen pathogenic isolates obtained by surgical biopsies were sequenced using MinION and Illumina methods, and their antibiotic inhibitory concentration profiles were determined. The results highlight an unexpected diversity of actinomycetoma-causing pathogens, including three Streptomyces isolates assigned to species not previously associated with human actinomycetoma and one new Streptomyces species. Thus, current approaches for the clinical and histopathological classification of mycetoma may need to be updated. The standard treatment for actinomycetoma is a combination of sulfamethoxazole/trimethoprim and amoxicillin/clavulanic acid. Most tested isolates had a high IC (inhibitory concentration) for sulfamethoxazole/trimethoprim or for amoxicillin alone. However, the addition of the β-lactamase inhibitor clavulanic acid to amoxicillin increased susceptibility, particularly for Streptomyces somaliensis and Streptomyces sudanensis. Actinomadura madurae isolates appear to have a particularly high IC under laboratory conditions, suggesting that alternative agents, such as amikacin, could be considered for more effective treatment. The results obtained will inform future methods for the diagnosis and treatment of actinomycetoma.
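    The abstract reports the inhibitory-concentration profiles only qualitatively. A small helper like the one below could turn measured IC values into susceptible/resistant calls against chosen breakpoints; the data layout and the breakpoint table are assumptions for illustration, not values or methods from the study.

```python
# Sketch: classify isolates as susceptible ('S') or resistant ('R') by comparing
# measured inhibitory concentrations (IC) with drug-specific breakpoints.
# Both the input layout and the breakpoints are illustrative assumptions.
from typing import Dict

ICProfile = Dict[str, float]  # drug name -> measured IC (e.g. mg/L)

def susceptibility_calls(
    isolates: Dict[str, ICProfile],
    breakpoints: Dict[str, float],
) -> Dict[str, Dict[str, str]]:
    """Return per-isolate, per-drug calls: 'S' if IC <= breakpoint, else 'R'."""
    calls: Dict[str, Dict[str, str]] = {}
    for isolate, profile in isolates.items():
        calls[isolate] = {
            drug: ("S" if ic <= breakpoints[drug] else "R")
            for drug, ic in profile.items()
            if drug in breakpoints
        }
    return calls
```

    Applied to measured profiles, such a table would make the reported contrast between amoxicillin alone and amoxicillin/clavulanic acid explicit for each isolate.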

    Discriminant analysis based on robust regularized covariance estimation

    Its simple form makes linear discriminant analysis (LDA) a prevalent tool for classification, yet its dependency on an estimate of the precision matrix is a major drawback. In many applications more features than observations are available, and some of these observations may be contaminated, making this simple tool unusable in practice. Regularization techniques, or sparse methods, are well known to give good estimates of the precision matrix when the sample covariance matrix is rank-deficient or ill-conditioned; however, contamination also breaks these methods, so they no longer deliver reliable estimates. By borrowing ideas from the FAST-MCD algorithm for robust multivariate location and scatter estimation, a robust regularized estimate of the precision matrix can be obtained and used for LDA. In consideration of the classification context, a measure similar to the deviance used in other classification methods is defined and used to obtain the optimal value of the required regularization parameter. An extensive simulation study shows the superior performance of the new classification algorithm for high-dimensional, low-sample-size data in the presence of contaminated observations, as well as its high efficiency for uncontaminated data.
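    The thesis's exact estimator is not reproduced here, but the underlying idea, combining a robust scatter estimate with regularization before plugging it into the LDA discriminant, can be sketched with scikit-learn's FAST-MCD implementation and linear shrinkage. The shrinkage level and the pooling scheme below are illustrative assumptions, not the estimator proposed in the thesis.

```python
# Sketch: LDA with a robust, regularized precision matrix.
# MinCovDet is scikit-learn's FAST-MCD; shrunk_covariance adds the
# regularization. Shrinkage level and pooling are illustrative choices.
import numpy as np
from scipy.linalg import solve
from sklearn.covariance import MinCovDet, shrunk_covariance

def fit_robust_regularized_lda(X, y, shrinkage=0.2, random_state=0):
    classes = np.unique(y)
    means, covs, priors = [], [], []
    for c in classes:
        Xc = X[y == c]
        mcd = MinCovDet(random_state=random_state).fit(Xc)  # robust location/scatter
        means.append(mcd.location_)
        covs.append((len(Xc), mcd.covariance_))
        priors.append(len(Xc) / len(X))
    # Pool the per-class robust scatters, then shrink towards a scaled identity.
    pooled = sum(n * S for n, S in covs) / len(X)
    pooled = shrunk_covariance(pooled, shrinkage=shrinkage)
    return classes, np.array(means), np.array(priors), pooled

def predict(model, Xnew):
    classes, means, priors, pooled = model
    # Linear scores: x' S^{-1} mu_k - 0.5 mu_k' S^{-1} mu_k + log(prior_k)
    coefs = solve(pooled, means.T, assume_a="pos")  # columns are S^{-1} mu_k
    scores = Xnew @ coefs - 0.5 * np.sum(means.T * coefs, axis=0) + np.log(priors)
    return classes[np.argmax(scores, axis=1)]
```

    Note that plain FAST-MCD itself requires more observations than variables, which is precisely the limitation the thesis addresses; the sketch only illustrates the robust-plus-shrinkage idea in the low-dimensional regime.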

    Robust estimation and variable selection in high-dimensional linear regression models

    Linear regression models are commonly used statistical models for predicting a response from a set of predictors. Technological advances allow for the simultaneous collection of many predictors, but often only a small number of these is relevant for prediction. Identifying this set of predictors in high-dimensional linear regression models, with an emphasis on accurate prediction, is thus a common goal of quantitative data analyses. While a large number of predictors promises to capture as much information as possible, it carries a risk of containing contaminated values. If not handled properly, contamination can distort statistical analyses and lead to spurious scientific discoveries, jeopardizing the generalizability of findings. In this dissertation I propose robust regularized estimators for sparse linear regression with reliable prediction and variable selection performance in the presence of contamination in the response and one or more predictors. I present theoretical and extensive empirical results underscoring that the penalized elastic net S-estimator is robust towards aberrant contamination and leads to better predictions for heavy-tailed error distributions than competing estimators. Especially in these more challenging scenarios, competing robust methods that rely on an auxiliary estimate of the residual scale are more affected by contamination due to the high finite-sample bias introduced by regularization. For improved variable selection I propose the adaptive penalized elastic net S-estimator. I show that this estimator identifies the truly irrelevant predictors with high probability as the sample size increases and estimates the parameters of the truly relevant predictors as accurately as if these relevant predictors were known in advance. For practical applications, robustness of variable selection is essential. This is highlighted by a case study on identifying proteins to predict stenosis of heart vessels, a sign of complication after cardiac transplantation. High robustness comes at the price of more taxing computations. I present optimized algorithms and heuristics for feasible computation of the estimates in a wide range of applications. With the software made publicly available, the proposed estimators are viable alternatives to non-robust methods, supporting the discovery of generalizable scientific results.
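    No code accompanies the abstract here. As a point of reference, the following is a minimal sketch of a robust penalized regression via iterative reweighting, assuming scikit-learn's ElasticNet with per-observation weights. It is a stand-in illustration of robustified penalized least squares, not the penalized elastic net S-estimator proposed in the dissertation.

```python
# Sketch: a robust elastic-net fit via iteratively reweighted penalized least
# squares with Huber weights and a MAD residual scale. This is a generic
# M-type robustification for illustration, NOT the penalized elastic net
# S-estimator developed in the dissertation.
import numpy as np
from sklearn.linear_model import ElasticNet

def huber_weights(residuals, scale, c=1.345):
    """Weight 1 for small standardized residuals, c/|r| for large ones."""
    r = np.abs(residuals) / (scale + 1e-12)
    return c / np.maximum(r, c)

def robust_elastic_net(X, y, alpha=0.1, l1_ratio=0.5, n_iter=25):
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio)
    weights = np.ones(len(y))
    for _ in range(n_iter):
        model.fit(X, y, sample_weight=weights)  # weighted penalized LS step
        resid = y - model.predict(X)
        # Auxiliary residual scale via the MAD -- the very dependence the
        # dissertation's S-estimator is designed to avoid.
        scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))
        weights = huber_weights(resid, scale)
    return model
```

    This simple scheme still relies on an auxiliary residual-scale estimate recomputed at every step, which is exactly the dependence the dissertation identifies as a source of finite-sample bias in competing robust methods; the estimators it proposes, available in the publicly released software, avoid that dependence.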