Noise-in, Bias-out: Balanced and Real-time MoCap Solving
Real-time optical Motion Capture (MoCap) systems have not benefited from the
advances in modern data-driven modeling. In this work we apply machine learning
to solve noisy unstructured marker estimates in real-time and deliver robust
marker-based MoCap even when using sparse, affordable sensors. To achieve this,
we focus on a number of challenges related to model training, namely the
sourcing of training data and their long-tailed distribution. Leveraging
representation learning, we design a technique for imbalanced regression that
requires no additional data or labels and improves the performance of our model
in rare and challenging poses. By relying on a unified representation, we show
that training such a model is not bound to high-end MoCap training data
acquisition, and exploit the advances in marker-less MoCap to acquire the
necessary data. Finally, we take a step towards richer and affordable MoCap by
adapting a body model-based inverse kinematics solution to account for
measurement and inference uncertainty, further improving performance and
robustness. Project page: https://moverseai.github.io/noise-tail
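As an illustration of the imbalanced-regression idea, the following minimal PyTorch sketch (an assumption for exposition, not the authors' published method) reweights the regression loss by the inverse of each sample's estimated density in a learned pose-embedding space, so rare poses contribute more without extra data or labels; the names rarity_weights, weighted_mocap_loss, and the bandwidth parameter are hypothetical.

import torch
import torch.nn as nn

def rarity_weights(embeddings: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    # Gaussian kernel density estimate per sample, computed within the batch.
    d2 = torch.cdist(embeddings, embeddings).pow(2)
    density = torch.exp(-d2 / (2 * bandwidth ** 2)).mean(dim=1)
    # Rare samples (low density) receive high weight; normalize to mean 1.
    w = 1.0 / density.clamp_min(1e-6)
    return w / w.mean()

def weighted_mocap_loss(pred: torch.Tensor, target: torch.Tensor,
                        embeddings: torch.Tensor) -> torch.Tensor:
    # Per-sample L1 marker error, reweighted toward rare poses.
    per_sample = nn.functional.l1_loss(pred, target, reduction="none").mean(dim=-1)
    return (rarity_weights(embeddings.detach()) * per_sample).mean()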
How Innocent is the Restenosis of the Infarct-related Coronary Artery After Successful Initial Recanalization?
The present case report describes a patient who sustained an acute inferior wall myocardial infarction but initially remained clinically stable. He underwent a successful coronary angioplasty and stenting procedure of a totally occluded right coronary artery, and subsequently developed a dramatic clinical course of cardiogenic shock and cardiac arrest due to acute stent thrombosis, which was successfully managed with repeat coronary angioplasty. We attribute the discrepant clinical manifestations of acute coronary occlusion to coronary collaterals that were initially present but disappeared after the recanalization procedure, leaving the heart unprotected and accounting for the dramatic clinical picture that followed the stent thrombosis.
Drone Control in AR: An Intuitive System for Single-Handed Gesture Control, Drone Tracking, and Contextualized Camera Feed Visualization in Augmented Reality
Traditional drone handheld remote controllers, although well established and widely used, are not a particularly intuitive control method. At the same time, drone pilots normally watch the drone video feed on a smartphone or another small screen attached to the remote. This forces them to constantly shift their visual focus between the drone and the screen, which can be a tiring and stressful experience for eyes and mind alike, as the eyes constantly change focus and the mind struggles to merge two different points of view. This paper presents a solution based on Microsoft’s HoloLens 2 headset that leverages augmented reality and gesture recognition to make drone piloting easier, more comfortable, and more intuitive. It describes a system for single-handed gesture control that can achieve all maneuvers possible with a traditional remote, including complex motions; a method for tracking a real drone in AR to improve flying beyond line of sight or at distances where the physical drone is hard to see; and the option to display the drone’s live video feed in AR, either in first-person-view mode or in context with the environment.
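To make the single-handed control scheme concrete, the sketch below shows one plausible hand-to-command mapping; this is an assumption for illustration, not the paper's documented scheme (the actual system runs on HoloLens 2), and the gain and dead_zone parameters are hypothetical. The hand's offset from a neutral origin drives the four axes of a conventional remote, with a dead zone suppressing small tremors.

from dataclasses import dataclass

@dataclass
class HandPose:
    x: float    # meters right(+)/left(-) of the neutral origin
    y: float    # meters up(+)/down(-)
    z: float    # meters forward(+)/back(-)
    yaw: float  # radians of wrist rotation about the vertical axis

def hand_to_velocity(pose: HandPose, gain: float = 2.0,
                     dead_zone: float = 0.03) -> dict:
    # Map hand offsets to normalized [-1, 1] velocity commands.
    def axis(v: float) -> float:
        if abs(v) < dead_zone:  # ignore small movements near the origin
            return 0.0
        return max(-1.0, min(1.0, gain * v))
    return {
        "roll": axis(pose.x),      # lateral translation
        "throttle": axis(pose.y),  # vertical translation
        "pitch": axis(pose.z),     # forward/backward translation
        "yaw": axis(pose.yaw),     # rotation
    }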
Volume-of-Interest Aware Deep Neural Networks for Rapid Chest CT-Based COVID-19 Patient Risk Assessment
Since December 2019, the world has been devastated by the Coronavirus Disease 2019 (COVID-19) pandemic. Emergency departments have faced situations of urgency in which clinical experts, without long experience or mature means in the fight against COVID-19, must rapidly decide on the most appropriate patient treatment. In this context, we introduce an artificially intelligent tool for effective and efficient Computed Tomography (CT)-based risk assessment to improve treatment and patient care. Specifically, we introduce a data-driven approach built on top of volume-of-interest aware deep neural networks for automatic COVID-19 patient risk assessment (discharged, hospitalized, intensive care unit) based on lung infection quantification through segmentation and, subsequently, CT classification. We tackle the high and varying dimensionality of the CT input by detecting and analyzing only a sub-volume of the CT, the Volume-of-Interest (VoI). Unlike recent strategies that consider infected CT slices without requiring any spatial coherency between them, or that use the whole lung volume after abrupt and lossy down-sampling, we assess only the “most infected volume”, composed of slices kept at their original spatial resolution. To achieve the above, we create, present, and publish a new labeled and annotated CT dataset with 626 CT samples from COVID-19 patients. Comparison against such strategies proves the effectiveness of our VoI-based approach. We achieve remarkable performance on patient risk assessment evaluated on balanced data, reaching 88.88% accuracy, 89.77% sensitivity, 94.73% specificity, and 88.88% F1-score.
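As a minimal sketch of the VoI idea, assuming per-slice infection scores from the segmentation step (an assumption; the released pipeline may differ), a sliding window can select the contiguous run of slices with the highest total infection, which is then classified at its original resolution.

import numpy as np

def select_voi(infection_scores: np.ndarray, voi_slices: int) -> tuple[int, int]:
    # Return (start, end) of the contiguous window of voi_slices CT slices
    # whose summed infection score is maximal.
    n = len(infection_scores)
    if n <= voi_slices:
        return 0, n
    # Sliding-window sums via a cumulative sum: O(n) instead of O(n * k).
    csum = np.concatenate([[0.0], np.cumsum(infection_scores)])
    window_sums = csum[voi_slices:] - csum[:-voi_slices]
    start = int(np.argmax(window_sums))
    return start, start + voi_slices

# Usage: scores[i] could be the fraction of infected voxels in slice i.
scores = np.array([0.0, 0.1, 0.4, 0.7, 0.6, 0.2, 0.0])
print(select_voi(scores, 3))  # -> (2, 5): slices 2-4, kept at full resolution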