
    Translation and psychometrics of instrument of professional attitude for student nurses (IPASN) scale

    Background: Achieving professional identity is one of the research priorities, and given the importance of professional attitude among student nurses, it is necessary to identify a scale that can measure their achievement of professional attitude. Objectives: The present study was conducted with the aim of translating and psychometrically evaluating the instrument of professional attitude for student nurses (IPASN) scale. Methods: In this cross-sectional study, the translation and psychometric evaluation of the "instrument of professional attitude for student nurses" scale were performed based on the model of Wild et al. The research population comprised 300 third- to eighth-semester nursing students of Ilam University of Medical Sciences. After translation and back-translation, the editorial comments of the scale designer were applied. Then, content validity, face validity, confirmatory factor analysis, internal consistency, and test-retest reliability of the Persian version were calculated. Data were analyzed using SPSS version 20 and EQS 6.1. Results: The confirmatory factor analysis of the 28-item scale with its 8 sub-scales was confirmed after deleting statement 7 and moving items 10, 15, and 18. Internal consistency was α = 0.89 for the total scale, and 0.89, 0.45, 0.67, 0.69, 0.69, 0.73, 0.70, and 0.93 for the sub-scales, respectively. Pearson's correlation coefficient for test-retest reliability was r = 0.79 (P < 0.005). Conclusions: This study shows that the modified 27-statement Persian version of the instrument of professional attitude for student nurses scale is valid and reliable, and can be used to assess nursing students' attitudes towards their professional life. © 2020, Author(s)
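The two reliability figures reported above, Cronbach's α for internal consistency and Pearson's r for test-retest reliability, can be sketched in a few lines. The response matrix below is invented for illustration and is not data from the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert-style responses (5 students x 4 items).
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
alpha = cronbach_alpha(scores)

# Test-retest reliability: Pearson correlation between total scores
# from two (hypothetical) administrations of the scale.
retest = scores + np.array([[0, 1, 0, 0],
                            [1, 0, 0, 1],
                            [0, 0, -1, 0],
                            [0, 1, 0, 0],
                            [0, 0, 0, 1]])
r = np.corrcoef(scores.sum(axis=1), retest.sum(axis=1))[0, 1]
print(round(alpha, 2), round(r, 2))
```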

    Video Transformers: A Survey

    Transformer models have shown great success in handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length. These limitations are further exacerbated by the high dimensionality introduced by the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey, we analyze the main contributions and trends of works leveraging Transformers to model video. Specifically, we first delve into how videos are handled at the input level. Then, we study the architectural changes made to deal with video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even with less computational complexity.
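The quadratic scaling mentioned above can be made concrete with a token count: self-attention builds an N × N matrix over all tokens, and the temporal dimension multiplies N. The tubelet and frame sizes below are illustrative assumptions, not taken from any particular model:

```python
# Sketch of why full spatio-temporal attention scales quadratically:
# the token count grows with T*H*W, and attention over N tokens costs O(N^2).

def num_tokens(frames, height, width, tubelet=(2, 16, 16)):
    """Tokens produced by non-overlapping tubelet embedding."""
    t, h, w = tubelet
    return (frames // t) * (height // h) * (width // w)

def attention_cost(n_tokens):
    """Entries in the self-attention matrix: O(N^2)."""
    return n_tokens ** 2

image_tokens = num_tokens(2, 224, 224)    # a single 2-frame tubelet slice
video_tokens = num_tokens(32, 224, 224)   # a 32-frame clip: 16x more tokens
print(image_tokens, video_tokens)
# 16x more tokens -> 256x larger attention matrix
print(attention_cost(video_tokens) // attention_cost(image_tokens))
```

This blow-up is what motivates the efficiency-oriented architectural changes (factorized or local attention, redundancy reduction) that the survey catalogs.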

    Privacy-Constrained Biometric System for Non-cooperative Users

    With the consolidation of the new data protection regulation paradigm for individuals within the European Union (EU), major biometric technologies are now confronted with many concerns related to user privacy in biometric deployments. When an individual's biometrics are disclosed, sensitive personal data such as financial or health information is at high risk of being misused or compromised. This issue escalates considerably in scenarios involving non-cooperative users, such as elderly people residing in care homes, who may be unable to interact conveniently and securely with the biometric system. The primary goal of this study is to design a novel database to investigate the problem of automatic people recognition under privacy constraints. To do so, the collected dataset contains the subjects' hand and foot traits and excludes face biometrics in order to protect their privacy. We carried out extensive simulations using different baseline methods, including deep learning. Simulation results show that, with the spatial features extracted from the subject sequence in individual hand and foot videos, state-of-the-art deep models provide promising recognition performance.
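The sequence-level matching idea can be sketched as follows: average per-frame spatial features into one embedding per subject, then match a probe sequence to the closest enrolled subject by cosine similarity. The feature vectors here are synthetic stand-ins for CNN features of hand/foot frames; the dimensions and subject model are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def sequence_embedding(frame_features: np.ndarray) -> np.ndarray:
    """Average per-frame spatial features and L2-normalise the result."""
    emb = frame_features.mean(axis=0)
    return emb / np.linalg.norm(emb)

# Each synthetic subject gets a distinct mean feature direction.
subject_means = 10.0 * np.eye(3, 64)    # 3 subjects, 64-dim features

# Enrolment: one 10-frame feature sequence per known subject.
gallery = {sid: sequence_embedding(rng.normal(subject_means[sid], 1.0, (10, 64)))
           for sid in range(3)}

def identify(probe_frames: np.ndarray) -> int:
    """Return the gallery subject with the highest cosine similarity."""
    probe = sequence_embedding(probe_frames)
    return max(gallery, key=lambda sid: float(gallery[sid] @ probe))

# A probe sequence drawn from subject 2's distribution matches subject 2.
probe = rng.normal(subject_means[2], 1.0, (10, 64))
print(identify(probe))
```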

    Effect of acute caffeine administration on hyperalgesia and allodynia in a rat neuropathic pain model

    Introduction: Damage to the central or peripheral nervous system causes neuropathic pain. Caffeine is a plant alkaloid and a non-selective antagonist of the A1, A2a, and A2b adenosine receptors, and has been reported to increase the pain threshold. In this study, the effect of acute caffeine administration on behavioral responses to neuropathic pain was investigated. Materials and Methods: The present study was conducted on 56 adult male Wistar rats weighing 220-250 g. Neuropathic pain was induced by chronic constriction injury (CCI). Animals were randomly divided into 7 groups (n = 8): Control, Sham, CCI, CCI + Saline, and CCI + Caffeine (10, 50, and 100 mg/kg). Thermal hyperalgesia and mechanical and thermal allodynia were assessed on days 4, 7, 14, 21, and 28 after CCI. Results: Neuropathic rats demonstrated increased pain responses. Notably, caffeine at a dose of 10 mg/kg significantly increased thermal allodynia, but at doses of 50 and 100 mg/kg it significantly decreased thermal hyperalgesia and mechanical allodynia. Conclusion: Our findings indicate that the effects of caffeine on pain responses are dose-dependent. Inhibition of adenosine A1 receptors by caffeine probably increases pain responses, while inhibition of the A2a and A2b adenosine receptors is associated with a protective effect of caffeine against pain responses. © 2020, Semnan University of Medical Sciences. All rights reserved.

    Deep Burst Denoising

    Noise is an inherent issue of low-light image capture, one that is exacerbated on mobile devices due to their narrow apertures and small sensors. One strategy for mitigating noise in a low-light situation is to increase the shutter time of the camera, thus allowing each photosite to integrate more light and decrease noise variance. However, there are two downsides to long exposures: (a) bright regions can exceed the sensor range, and (b) camera and scene motion will result in blurred images. Another way of gathering more light is to capture multiple short (thus noisy) frames in a "burst" and intelligently integrate the content, thus avoiding the above downsides. In this paper, we use the burst-capture strategy and implement the intelligent integration via a recurrent fully convolutional deep neural net (CNN). We build our novel, multiframe architecture to be a simple addition to any single-frame denoising model, and design it to handle an arbitrary number of noisy input frames. We show that it achieves state-of-the-art denoising results on our burst dataset, improving on the best published multi-frame techniques, such as VBM4D and FlexISP. Finally, we explore other applications of image enhancement by integrating content from multiple frames, and demonstrate that our DNN architecture generalizes well to image super-resolution.
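The benefit of integrating a burst can be illustrated with its simplest baseline, plain frame averaging, which reduces noise variance roughly in proportion to the burst length. The learned recurrent integration in the paper is far stronger (it handles motion and preserves detail), but the averaging bound is the underlying intuition; the image and noise values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

scene = rng.uniform(0.0, 1.0, size=(64, 64))   # synthetic ground-truth image
sigma = 0.2                                    # per-frame additive noise std

# A "burst" of 8 noisy captures of the same static scene.
burst = scene + rng.normal(0.0, sigma, size=(8, 64, 64))

# Mean squared error of one noisy frame vs. the averaged burst.
single_mse = np.mean((burst[0] - scene) ** 2)
merged_mse = np.mean((burst.mean(axis=0) - scene) ** 2)

ratio = single_mse / merged_mse
print(ratio)   # close to the burst length, 8
```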