7 research outputs found

    Hand-Washing Video Dataset Annotated According to the World Health Organization’s Hand-Washing Guidelines

    Get PDF
    Funding Information: This research was funded by the Ministry of Education and Science, Republic of Latvia, project “Integration of reliable technologies for protection against COVID-19 in healthcare and high-risk areas”, project No. VPP-COVID-2020/1-0004. Publisher Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland.

    Washing hands is one of the most important ways to prevent infectious diseases, including COVID-19. The World Health Organization (WHO) has published hand-washing guidelines. This paper presents a large real-world dataset of videos recording medical staff washing their hands as part of their normal job duties at Pauls Stradins Clinical University Hospital. There are 3185 hand-washing episodes in total, each annotated by up to seven different persons. The annotations classify the washing movements according to the WHO guidelines by marking each frame in each video with a movement code. The intention of this “in-the-wild” dataset is two-fold: to serve as a basis for training machine-learning classifiers for automated hand-washing movement recognition and quality control, and to allow investigation of the real-world quality of washing performed by working medical staff. We demonstrate how the data can be used to train a machine-learning classifier that achieves a classification accuracy of 0.7511 on a test dataset.
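
    The frame-level annotations make the dataset directly usable for supervised learning. As a rough illustration (not the paper's actual pipeline), the sketch below fine-tunes an ImageNet-pretrained CNN on pre-extracted frames; the frames/train/<code>/ directory layout, class count, and hyperparameters are all assumptions.

        # Illustrative sketch, not the paper's pipeline: fine-tune a CNN to
        # classify hand-washing movement codes from single video frames.
        # The folder layout, class count and hyperparameters are assumptions
        # about how one might prepare this dataset.
        import torch
        from torch import nn
        from torch.utils.data import DataLoader
        from torchvision import datasets, models, transforms

        NUM_MOVEMENTS = 7  # assumed number of WHO movement classes

        preprocess = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])
        # Assumes frames were pre-extracted into frames/train/<code>/*.jpg
        train_set = datasets.ImageFolder("frames/train", transform=preprocess)
        loader = DataLoader(train_set, batch_size=32, shuffle=True)

        model = models.resnet18(weights="IMAGENET1K_V1")
        model.fc = nn.Linear(model.fc.in_features, NUM_MOVEMENTS)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        criterion = nn.CrossEntropyLoss()

        model.train()
        for images, labels in loader:  # one epoch shown
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

    Per-frame predictions could then be aggregated over an episode to estimate washing quality.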

    Development and experimental validation of high performance embedded intelligence and fail-operational urban surround perception solutions of the PRYSTINE project

    Get PDF
    Automated Driving Systems (ADSs) promise a substantial reduction of human-caused road accidents while simultaneously lowering emissions, mitigating congestion, decreasing energy consumption and increasing overall productivity. However, achieving higher SAE levels of driving automation and complying with ISO 26262 Automotive Safety Integrity Levels (ASIL) C and D is a multi-disciplinary challenge that requires insights into safety-critical architectures, multi-modal perception and real-time control. This paper presents the joint effort carried out in the European H2020 ECSEL project PRYSTINE. In this paper, we (1) investigate Simplex, 1oo2D and hybrid fail-operational computing architectures, (2) devise a multi-modal perception system with fail-safety mechanisms, (3) present a passenger vehicle-based demonstrator for low-speed autonomy and (4) suggest a trust-based fusion approach validated on a heavy-duty truck.
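
    Of the architectures listed, Simplex is the simplest to illustrate: a runtime monitor gates a high-performance but unverified channel with a small, verifiable fallback. The sketch below is a generic illustration of that pattern, not PRYSTINE code; all names and thresholds are hypothetical.

        # Generic illustration of the Simplex pattern, not PRYSTINE code:
        # a runtime monitor gates an unverified high-performance controller
        # with a small, verifiable fallback. All names are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Command:
            steering: float  # rad, positive = left
            throttle: float  # [-1, 1], negative = braking

        def complex_controller(state: dict) -> Command:
            # Stand-in for a high-performance (e.g. learned) planner.
            return Command(steering=state["target_heading"], throttle=0.4)

        def safety_controller(state: dict) -> Command:
            # Small verified fallback: hold the lane and brake.
            return Command(steering=0.0, throttle=-1.0)

        def monitor_ok(cmd: Command) -> bool:
            # Runtime envelope check on the complex channel's output.
            return abs(cmd.steering) < 0.5 and cmd.throttle < 0.8

        def simplex_step(state: dict) -> Command:
            cmd = complex_controller(state)
            return cmd if monitor_ok(cmd) else safety_controller(state)

        print(simplex_step({"target_heading": 0.1}))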

    Improving Semantic Segmentation of Urban Scenes for Self-Driving Cars with Synthetic Images

    No full text
    Semantic segmentation of the incoming visual stream from cameras is an essential part of the perception system of self-driving cars. State-of-the-art results in semantic segmentation have been achieved with deep neural networks (DNNs), yet training them requires large datasets, which are difficult and costly to acquire and time-consuming to label. A viable alternative to training DNNs solely on real-world datasets is to augment them with synthetic images, which can be easily modified and generated in large numbers. In the present study, we aim to improve the accuracy of semantic segmentation of urban scenes by augmenting the Cityscapes real-world dataset with synthetic images generated with the open-source driving simulator CARLA (Car Learning to Act). Augmentation with synthetic images of low photorealism from the MICC-SRI (Media Integration and Communication Center–Semantic Road Inpainting) dataset does not improve segmentation accuracy. However, both MobileNetV2 and Xception DNNs used in the present study achieve better accuracy after training on the custom-made CCM (Cityscapes-CARLA Mixed) dataset, which combines real-world Cityscapes images with high-resolution synthetic images generated with CARLA, than after training on the real-world Cityscapes images alone. The accuracy of semantic segmentation does not, however, improve proportionally to the amount of synthetic data used for augmentation, which indicates that augmentation with a larger amount of synthetic data is not always better.
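
    The augmentation strategy amounts to mixing two image/mask sources into one training set. A minimal sketch of that idea in PyTorch follows; the SegPair class, directory layout, and parallel file naming are assumptions, not the paper's code.

        # Sketch of the dataset-mixing idea: combine real (Cityscapes) and
        # synthetic (CARLA) image/mask pairs into one training set. SegPair,
        # the folder layout and matching file names are assumptions.
        import os
        import numpy as np
        import torch
        from PIL import Image
        from torch.utils.data import ConcatDataset, DataLoader, Dataset
        from torchvision import transforms

        class SegPair(Dataset):
            """(image, mask) pairs from parallel folders with equal file names."""
            def __init__(self, img_dir, mask_dir):
                self.img_dir, self.mask_dir = img_dir, mask_dir
                self.names = sorted(os.listdir(img_dir))
                self.to_tensor = transforms.ToTensor()

            def __len__(self):
                return len(self.names)

            def __getitem__(self, i):
                name = self.names[i]
                img = self.to_tensor(Image.open(os.path.join(self.img_dir, name)))
                mask = torch.as_tensor(
                    np.array(Image.open(os.path.join(self.mask_dir, name))),
                    dtype=torch.long)  # per-pixel class IDs
                return img, mask

        # CCM-style mixed training set: real images plus CARLA renders.
        mixed = ConcatDataset([
            SegPair("cityscapes/img", "cityscapes/mask"),
            SegPair("carla/img", "carla/mask"),
        ])
        loader = DataLoader(mixed, batch_size=8, shuffle=True)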

    Noninvasive optical diagnostics of enhanced green fluorescent protein expression in skeletal muscle for comparison of electroporation and sonoporation efficiencies

    No full text
    E-ISSN: 1560-2281. Impact Factor: 2.859.

    We highlight the options available for noninvasive optical diagnostics of reporter gene expression in mouse tibialis cranialis muscle. An in vivo multispectral imaging technique combined with fluorescence spectroscopy point measurements has been used for the transcutaneous detection of enhanced green fluorescent protein (EGFP) expression, providing information on the location and duration of EGFP expression and allowing quantification of EGFP expression levels. For transfection of the EGFP-coding plasmid (pEGFP-Nuc Vector, 10 μg/50 ml), we used electroporation or ultrasound-enhanced microbubble cavitation [sonoporation (SP)]. The transcutaneous EGFP fluorescence in live mice was monitored over a period of one year using the following parameters: area of EGFP-positive fibers, integral intensity, and mean intensity of EGFP fluorescence. The most efficient transfection of the EGFP-coding plasmid was achieved when one high-voltage and four low-voltage electric pulses were applied. This protocol resulted in the highest short-term and long-term EGFP expression. Other electric pulse protocols, as well as SP, resulted in lower fluorescence intensities of EGFP in the transfected area. We conclude that a noninvasive multispectral imaging technique combined with fluorescence spectroscopy point measurements is a suitable method to estimate the dynamics and efficiency of reporter gene transfection in vivo.
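
    The three reported parameters are straightforward to compute once a fluorescence image and an EGFP-positive threshold are chosen. The sketch below shows one plausible formulation; the threshold and the stand-in image are hypothetical.

        # Sketch of the three quantification parameters named above,
        # computed from a 2-D fluorescence image with NumPy. The threshold
        # separating EGFP-positive pixels from background is hypothetical.
        import numpy as np

        def egfp_metrics(image: np.ndarray, threshold: float):
            positive = image > threshold             # EGFP-positive mask
            area = int(positive.sum())               # area of EGFP-positive fibers (px)
            integral = float(image[positive].sum())  # integral intensity
            mean = integral / area if area else 0.0  # mean intensity
            return area, integral, mean

        rng = np.random.default_rng(0)
        frame = rng.random((256, 256))               # stand-in image
        print(egfp_metrics(frame, threshold=0.9))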
