29 research outputs found

    The Implications of Naturalism as an Educational Philosophy in Jordan from the Perspectives of Childhood Education Teachers

    The purpose of this study was to identify the educational implications of naturalism as an educational philosophy from the perspectives of Jordanian childhood education teachers. Each philosophy represents a distinct conviction about the nature of the teaching/learning process. This study could serve as grounded theory for Jordanian childhood teachers, helping them comprehend the need for a clear educational philosophy within the Jordanian educational system. In addition, this research would draw Jordanian childhood teachers' interest toward becoming better acquainted with the educational principles of this philosophical theory. The researchers employed a questionnaire consisting of twenty-one items corresponding to the educational principles of naturalism. A descriptive quantitative approach was used to gather data, owing to its suitability for this study. The findings revealed that Jordanian childhood education teachers' perspectives toward the implications of naturalism as an educational philosophy were positive across all domains: curriculum, aims, and activities. Based on the findings, the researchers provided some relevant recommendations. Keywords: Naturalism, Educational Philosophy, Childhood Education Teachers, Jordan.

    The application of polynomial discriminant function classifiers to isolated Arabic speech recognition

    In this paper, we apply polynomial discriminant function classifiers to isolated-word, speaker-independent Arabic digit recognition. The performance of the polynomial classifier is evaluated for different implementations. We also provide a performance comparison between the polynomial classifier and Dynamic Time Warping (DTW). The polynomial classifier is found to outperform DTW in several respects, including recognition rate and computational and memory requirements.
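The DTW baseline mentioned above can be sketched in a few lines. This is a generic textbook formulation with an absolute-difference local cost over scalar sequences; the authors' actual feature vectors and distance measure are not specified here and the inputs below are illustrative assumptions.

```python
# Minimal sketch of Dynamic Time Warping (DTW), the baseline classifier the
# abstract compares against. Local cost and inputs are illustrative choices.

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

# Identical sequences align perfectly; a time-stretched copy stays close,
# which is why DTW suits utterances spoken at different speeds.
print(dtw_distance([1, 2, 3], [1, 2, 3]))        # 0.0
print(dtw_distance([1, 2, 3], [1, 1, 2, 2, 3]))  # 0.0
```

The quadratic table fill is the computational cost the polynomial classifier is reported to avoid at recognition time.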

    Defensive Approximation: Securing CNNs using Approximate Computing

    In the past few years, an increasing number of machine-learning and deep-learning architectures, such as Convolutional Neural Networks (CNNs), have been applied to solving a wide range of real-life problems. However, these architectures are vulnerable to adversarial attacks. In this paper, we propose for the first time to use hardware-supported approximate computing to improve the robustness of machine-learning classifiers. We show that our approximate computing implementation achieves robustness across a wide range of attack scenarios. Specifically, for black-box and grey-box attack scenarios, we show that successful adversarial attacks against the exact classifier have poor transferability to the approximate implementation. Surprisingly, the robustness advantages also apply to white-box attacks, where the attacker has access to the internal implementation of the approximate classifier. We explain some of the possible reasons for this robustness through analysis of the internal operation of the approximate implementation. Furthermore, our approximate computing model maintains the same classification accuracy, does not require retraining, and reduces the resource utilization and energy consumption of the CNN. We conducted extensive experiments on a set of strong adversarial attacks; we empirically show that the proposed implementation increases the robustness of LeNet-5 and AlexNet CNNs by up to 99% and 87%, respectively, for strong grey-box adversarial attacks, along with up to a 67% saving in energy consumption due to the simpler nature of the approximate logic. We also show that a white-box attack requires a remarkably higher noise budget to fool the approximate classifier, causing an average 4 dB degradation in the PSNR of the input image relative to images that succeed in fooling the exact classifier. Comment: ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2021).
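The core mechanism — replacing exact arithmetic with cheaper, error-injecting hardware operations — can be illustrated with a toy stand-in. The bit-truncating multiplier below is a sketch under my own assumptions, not the paper's actual approximate multiplier design:

```python
# Illustrative sketch of approximate computing: swap exact multiplies for a
# cheaper variant that injects small errors. The truncation scheme here is
# an assumed stand-in, not the paper's hardware multiplier.

def approx_mul(x, y, drop_bits=2):
    """Approximate multiply: zero the low-order bits of each operand."""
    mask = ~((1 << drop_bits) - 1)
    return (x & mask) * (y & mask)

def dot(ws, xs, mul):
    """One neuron's weighted sum, parameterized by the multiplier used."""
    return sum(mul(w, x) for w, x in zip(ws, xs))

ws, xs = [120, 37, 200], [150, 90, 64]      # toy integer weights/activations
exact = dot(ws, xs, lambda a, b: a * b)
approx = dot(ws, xs, approx_mul)
rel_err = abs(exact - approx) / exact
print(exact, approx, rel_err)               # outputs stay close
```

The intuition from the abstract is that this small, data-dependent noise barely moves clean classifications but perturbs the finely tuned gradients that adversarial examples rely on.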

    The Level of Social Support among Students of Gymnastics Courses at the Faculty of Physical Education at Yarmouk University

    This study aimed to identify the level of social support among students of the gymnastics courses at the Faculty of Physical Education at Yarmouk University. The study adopted the descriptive survey method, and a questionnaire was administered to a random sample of (80) male and female students enrolled in the gymnastics courses at the Teaching and Training 1+2 level. The results showed that the level of social support among students of the Faculty of Physical Education was high, with the arithmetic means for all domains of the social support scale receiving high ratings: support from peers ranked first, support from the gymnastics course instructors second, and support from the family third and last. The study also revealed no statistically significant differences in students' level of social support according to the variables of gender and course level. The study recommended raising awareness among students' parents of the importance of providing both moral and material social support to their children.

    Defending with Errors: Approximate Computing for Robustness of Deep Neural Networks

    Machine-learning architectures such as Convolutional Neural Networks (CNNs) are vulnerable to adversarial attacks: inputs crafted carefully to force the system output to a wrong label. Since machine learning is being deployed in safety-critical and security-sensitive domains, such attacks may have catastrophic security and safety consequences. In this paper, we propose for the first time to use hardware-supported approximate computing to improve the robustness of machine-learning classifiers. We show that successful adversarial attacks against the exact classifier have poor transferability to the approximate implementation. Surprisingly, the robustness advantages also apply to white-box attacks, where the attacker has unrestricted access to the approximate classifier implementation: in this case, we show that substantially higher levels of adversarial noise are needed to produce adversarial examples. Furthermore, our approximate computing model maintains the same classification accuracy, does not require retraining, and reduces the resource utilization and energy consumption of the CNN. We conducted extensive experiments on a set of strong adversarial attacks; we empirically show that the proposed implementation considerably increases the robustness of LeNet-5, AlexNet, and VGG-11 CNNs, with up to a 50% by-product saving in energy consumption due to the simpler nature of the approximate logic. Comment: arXiv admin note: substantial text overlap with arXiv:2006.0770
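The attacks this line of work defends against perturb an input along the sign of the loss gradient (FGSM-style). A minimal sketch on a toy logistic "classifier" — the model, weights, and epsilon below are illustrative assumptions, not the paper's setup:

```python
# Minimal FGSM-style sketch: push the input in the direction of the sign of
# the loss gradient until the classifier flips. Toy model, assumed values.
import math

w, b = [2.0, -1.0], 0.5          # toy model parameters
x, y = [1.0, 1.0], 1             # input correctly classified as class 1

def prob(v):
    """Probability of class 1 under a logistic model."""
    z = sum(wi * vi for wi, vi in zip(w, v)) + b
    return 1 / (1 + math.exp(-z))

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
p = prob(x)
grad = [(p - y) * wi for wi in w]

eps = 0.9
x_adv = [xi + eps * math.copysign(1, g) for xi, g in zip(x, grad)]
print(prob(x), prob(x_adv))      # confidence collapses after perturbation
```

Against an approximate classifier, the abstract's claim is that a gradient computed on the exact model points in a slightly wrong direction, so such perturbations transfer poorly and a white-box attacker needs a larger eps.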

    Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for Making a CNN Classifier Robust Against Adversarial Attacks

    In this paper, we propose the Code-Bridged Classifier (CBC), a framework for making a Convolutional Neural Network (CNN) robust against adversarial attacks without increasing, or even while decreasing, the overall model's computational complexity. More specifically, we propose a stacked encoder-convolutional model in which the input image is first encoded by the encoder module of a denoising auto-encoder, and the resulting latent representation (without being decoded) is then fed to a reduced-complexity CNN for image classification. We illustrate that this network is not only more robust to adversarial examples but also has significantly lower computational complexity than prior-art defenses. Comment: 6 pages, accepted and to appear in ISQED 202
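The pipeline described — encoder of a denoising auto-encoder, decoder discarded, classifier run directly on the latent code — can be sketched at the shape level. Layer sizes and the random stand-in "weights" below are illustrative assumptions, not the paper's trained architecture:

```python
# Shape-level sketch of the CBC structure: the denoising auto-encoder's
# encoder compresses the input, and a reduced-complexity classifier consumes
# the latent code directly (the decoder is never run). All sizes assumed.
import random

random.seed(0)

def linear(dim_in, dim_out):
    """A random fully-connected layer, standing in for trained weights."""
    w = [[random.gauss(0, 0.1) for _ in range(dim_in)] for _ in range(dim_out)]
    return lambda x: [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

encoder = linear(784, 64)        # auto-encoder's encoder: 784 -> 64 latent
classifier = linear(64, 10)      # reduced-complexity head: 64 -> 10 logits

image = [random.random() for _ in range(784)]
latent = encoder(image)          # denoised, compressed representation
logits = classifier(latent)      # classify without ever decoding
print(len(latent), len(logits))  # 64 10
```

Because the classifier sees only the 64-dimensional code instead of the 784-pixel image, its first layer shrinks accordingly — the source of the "low or negative overhead" in the title.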