4,767 research outputs found

    Wearing face mask detection using deep learning through COVID-19 pandemic

    Full text link
    During the COVID-19 pandemic, wearing a face mask has been shown to be an effective way to prevent the spread of COVID-19. In many monitoring tasks, humans have been replaced by computers thanks to the strong performance of deep learning models, and monitoring face mask use is another task such models can handle with acceptable accuracy. The main challenge of this task is the limited amount of data available because of the quarantine. In this paper, we investigate the capability of three state-of-the-art object detection neural networks for real-time face mask detection: the Single Shot Detector (SSD) and two versions of You Only Look Once (YOLO), namely YOLOv4-tiny and YOLOv4-tiny-3l, from which the best model was selected. Based on the performance of the different models, the model best suited to real-world and mobile-device applications, in comparison with other recent studies, was YOLOv4-tiny, with 85.31% mean Average Precision (mAP) at 50.66 Frames Per Second (FPS). These acceptable values were achieved using two datasets containing only 1531 images across three separate classes. Comment: Accepted to Scientia Iranica Journal
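    A minimal sketch of how a Darknet-format YOLOv4-tiny detector of this kind could be run in real time with OpenCV's DNN module is shown below. The config/weight file names and the three-class list are assumptions for illustration, not the authors' released artifacts.

```python
# Sketch: real-time mask detection with a Darknet-format YOLOv4-tiny model via
# OpenCV's DNN module. File names and the class list are illustrative assumptions.
import cv2

CLASSES = ["with_mask", "without_mask", "incorrect_mask"]  # hypothetical 3-class setup

net = cv2.dnn.readNetFromDarknet("yolov4-tiny-mask.cfg", "yolov4-tiny-mask.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

cap = cv2.VideoCapture(0)  # webcam stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    for cid, score, box in zip(class_ids, scores, boxes):
        x, y, w, h = box
        label = f"{CLASSES[int(cid)]}: {float(score):.2f}"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("mask detection", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```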

    A Survey on Computer Vision based Human Analysis in the COVID-19 Era

    Full text link
    The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here are computer vision techniques that focus on the analysis of people and faces in visual data, which have been affected the most by the partial occlusions introduced by the mandates for facial masks. Such computer vision based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions, and many others, and have seen considerable attention over recent years. The goal of this survey is to provide an introduction to the problems induced by COVID-19 into such research and to present a comprehensive review of the work done in the computer vision based human analysis field. Particular attention is paid to the impact of facial masks on the performance of various methods and to recent solutions for mitigating this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19 related applications is provided. Finally, to help advance the field further, a discussion of the main open challenges and future research directions is given. Comment: Submitted to Image and Vision Computing, 44 pages, 7 figures

    CGAMES'2009

    Get PDF

    A novel DeepMaskNet model for face mask detection and masked facial recognition

    Get PDF
    Coronavirus disease (COVID-19) has significantly affected the daily life activities of people globally. To prevent the spread of COVID-19, the World Health Organization has recommended that people wear face masks in public places. Manual inspection of people for wearing face masks in public places is a challenging task. Moreover, the use of face masks makes traditional face recognition techniques ineffective, as they are typically designed for unveiled faces. This introduces an urgent need to develop a robust system capable of detecting people not wearing face masks and of recognizing different persons while they wear face masks. In this paper, we propose a novel DeepMasknet framework capable of both face mask detection and masked facial recognition. Moreover, there is presently no unified and diverse dataset that can be used to evaluate both face mask detection and masked facial recognition. For this purpose, we also developed a large-scale and diverse unified mask detection and masked facial recognition (MDMFR) dataset to measure the performance of both face mask detection and masked facial recognition methods. Experimental results on multiple datasets, including a cross-dataset setting, show the superiority of our DeepMasknet framework over contemporary models.
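    A two-stage pipeline of this kind (mask detection followed by masked identity matching) could be wired together roughly as sketched below. The backbones, checkpoint choices, and embedding-matching logic are stand-ins assumed for illustration, not the published DeepMasknet implementation.

```python
# Illustrative two-stage pipeline: (1) mask / no-mask classification, then
# (2) masked face identification by embedding similarity. Generic torchvision
# backbones are used as stand-ins for the paper's DeepMasknet models.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Stage 1: binary mask detector (assumed fine-tuned head with 2 outputs)
mask_net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
mask_net.fc = torch.nn.Linear(mask_net.fc.in_features, 2)
mask_net.eval()

# Stage 2: embedding network for identity matching (classifier head removed)
embed_net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
embed_net.fc = torch.nn.Identity()
embed_net.eval()

@torch.no_grad()
def analyse(face_path: str, gallery: dict) -> tuple:
    """Return (is_masked, best_matching_identity) for one cropped face image.

    `gallery` maps identity names to pre-computed, L2-normalised embeddings
    of shape (1, 2048) for enrolled persons.
    """
    x = preprocess(Image.open(face_path).convert("RGB")).unsqueeze(0)
    is_masked = mask_net(x).argmax(dim=1).item() == 1
    emb = F.normalize(embed_net(x), dim=1)
    best = max(gallery, key=lambda name: float(emb @ gallery[name].T))
    return is_masked, best
```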

    Change blindness: eradication of gestalt strategies

    Get PDF
    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
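    The quoted p-value can be checked directly from the reported statistic, assuming a standard F-test with 1 and 4 degrees of freedom:

```python
# Check the reported p-value for F(1, 4) = 2.565 (standard F-test assumption).
from scipy.stats import f

p = f.sf(2.565, 1, 4)  # survival function: P(F > 2.565) with dfn=1, dfd=4
print(round(p, 3))     # ≈ 0.185, matching the value reported in the abstract
```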

    Smart Pain Assessment tool for critically ill patients unable to communicate: Early stage development of a medical device

    Get PDF
    Critically ill patients often experience pain during their treatment, but because of their lowered ability to communicate, pain assessment can be challenging. The aim of the study was to develop the concept of a Smart Pain Assessment tool based on Internet of Things technology for critically ill patients who are unable to communicate their pain. The study describes two phases of the early-stage development of the Smart Pain Assessment tool within a medical device development framework. The initiation Phase I consists of a scoping review conducted to explore the potential of Internet of Things technology in basic nursing care. In the formulation Phase II, the prototype of the Smart Pain Assessment tool was tested and the concept was evaluated for feasibility. The prototype was tested with healthy participants (n=31) during experimental pain, measuring pain-related physiological variables and the activity of five facial muscles. The variables were combined using machine learning to create a model for pain prediction. The feasibility of the concept was evaluated in focus group interviews with critical care nurses (n=20) as potential users of the device. The literature review suggests that the development of Internet of Things based innovations in basic nursing care is diverse but still in its early stages. The prototype was able to identify experimental pain and classify its intensity as mild or moderate/severe with 83% accuracy. In addition, three of the five facial muscles tested were found to provide the most pain-related information. According to the critical care nurses, the Smart Pain Assessment tool could be used to support pain assessment, but it needs to be integrated into an existing patient monitoring and information system, and the reliability of the data provided by the device needs to be assessable by nurses. Based on the results of this study, detecting and automatically classifying the intensity of experimental pain using an Internet of Things based device is possible. The prototype should be further developed and tested in clinical trials, involving the users at each stage of development to ensure clinical relevance and a user-centred design.

    Hybrid deep feature generation for appropriate face mask use detection

    Get PDF
    Mask usage is one of the most important precautions to limit the spread of COVID-19. Therefore, hygiene rules enforce the correct use of face coverings. Automated mask usage classification might be used to improve compliance monitoring. This study deals with the problem of inappropriate mask use. To address that problem, 2075 face mask usage images were collected. The individual images were labeled as mask, no mask, or improper mask. Based on these labels, the following three cases were created: Case 1: mask versus no mask versus improper mask; Case 2: mask versus no mask + improper mask; and Case 3: mask versus no mask. This data was used to train and test a hybrid deep feature-based masked face classification model. The presented method comprises three primary stages: (i) pre-trained ResNet101 and DenseNet201 were used as feature generators, each extracting 1000 features from an image; (ii) the most discriminative features were selected using an improved ReliefF selector; and (iii) the chosen features were used to train and test a support vector machine classifier. The resulting model attained 95.95%, 97.49%, and 100.0% classification accuracy on Case 1, Case 2, and Case 3, respectively. These high accuracy values indicate that the proposed model is fit for a practical trial to detect appropriate face mask use in real time.
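    The described pipeline (two pretrained CNNs each producing a 1000-dimensional feature vector, feature selection on the concatenated 2000 features, then an SVM) could be sketched roughly as below. sklearn's ANOVA-based SelectKBest stands in for the paper's improved ReliefF selector, and dataset loading is omitted.

```python
# Sketch of the described pipeline: ResNet101 + DenseNet201 features (1000 each),
# feature selection, then an SVM. SelectKBest(f_classif) stands in for the paper's
# improved ReliefF selector; image paths and labels are assumed to be provided.
import torch
from torchvision import models, transforms
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

resnet = models.resnet101(weights=models.ResNet101_Weights.DEFAULT).eval()
densenet = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT).eval()

@torch.no_grad()
def hybrid_features(image_path: str) -> torch.Tensor:
    """Concatenate the two networks' 1000-dim output vectors into 2000 features."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    return torch.cat([resnet(x), densenet(x)], dim=1).squeeze(0)

# X: stacked 2000-dim feature vectors, y: labels (mask / no mask / improper mask)
# X = torch.stack([hybrid_features(p) for p in image_paths]).numpy()
clf = make_pipeline(SelectKBest(f_classif, k=500), SVC(kernel="rbf"))
# clf.fit(X, y); clf.score(X_test, y_test)
```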
    • 
