6 research outputs found

    An Arabic Sign Language Corpus for Instructional Language in School

    No full text
    Machine translation (MT) technology has made significant progress over the last decade and now offers the potential for Arabic Sign Language (ArSL) signers to access text published in Arabic. The dominant model of MT is now corpus-based; in this model, translation accuracy correlates directly with the size and coverage of the corpus. A corpus is a collection of translation examples constructed from existing documents such as books and newspapers; however, no writing system for sign language (SL) comparable to that used for spoken language has yet been developed. Hence, no SL documents exist, which complicates the construction of an SL corpus. In countries such as Ireland and Germany, a number of corpora have already been built from scratch and used for MT, but no ArSL corpus for MT exists, so a new ArSL corpus for instructional language had to be created. The goal of building this corpus is to develop an automatic translation system from Arabic text to ArSL. This paper presents the ArSL corpus for instructional language constructed for use in schools, and the methodology used to create it. The corpus was collected at the College of Computer and Information Sciences at Imam Muhammad bin Saud University in Riyadh, Saudi Arabia, with the involvement of a group of interpreters and native signers with backgrounds in education. The corpus was constructed by collecting instructional sentences used daily in schools for the deaf; the syntax and morphology of each sentence were then manually analysed. Each sentence was individually translated, recorded on video, and stored in MPEG format. The corpus contains video data from three native signers. The videos were then annotated using the ELAN annotation tool; the annotated video data contain isolated signs accompanied by detailed information, such as manual and non-manual features. The final step in constructing the corpus was to create a bilingual dictionary from the annotated videos. The corpus thus comprises two main parts: the annotated video data, consisting of isolated signs with detailed manual and non-manual feature information together with the Arabic translation script, including syntax and morphology details; and the bilingual dictionary, delivered with the annotated videos.
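
    As an illustration of that final step, the sketch below shows how a bilingual gloss dictionary might be assembled from the XML (.eaf) files that ELAN produces. The tier name, file names, and dictionary layout are hypothetical, not details taken from the corpus itself.

```python
# Minimal sketch: pull gloss annotations out of an ELAN .eaf file
# (ELAN stores annotations as XML). The tier name "Gloss" and the
# file names are assumptions for illustration only.
import xml.etree.ElementTree as ET
from collections import defaultdict

def load_time_slots(root):
    """Map TIME_SLOT_ID -> time in milliseconds."""
    return {ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE", 0))
            for ts in root.iter("TIME_SLOT")}

def extract_glosses(eaf_path, tier_id="Gloss"):
    """Yield (gloss, start_ms, end_ms) for one annotation tier."""
    root = ET.parse(eaf_path).getroot()
    slots = load_time_slots(root)
    for tier in root.iter("TIER"):
        if tier.get("TIER_ID") != tier_id:
            continue
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            value = ann.findtext("ANNOTATION_VALUE", default="").strip()
            if value:
                yield (value,
                       slots[ann.get("TIME_SLOT_REF1")],
                       slots[ann.get("TIME_SLOT_REF2")])

# Bilingual dictionary: sign gloss -> list of video clip spans.
dictionary = defaultdict(list)
for gloss, start, end in extract_glosses("signer1_sentence01.eaf"):
    dictionary[gloss].append(("signer1_sentence01.mpg", start, end))
```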

    Arabic Text to Arabic Sign Language Translation System for the Deaf and Hearing-Impaired Community

    No full text
    This paper describes a machine translation system that offers many deaf and hearing-impaired people the chance to access published information in Arabic by translating text into their first language, Arabic Sign Language (ArSL). The system was created under the close guidance of a team that included three deaf native signers and one ArSL interpreter. We discuss problems inherent in the design and development of such translation systems and review previous ArSL machine translation systems, which all too often demonstrate a lack of collaboration between engineers and the deaf community. We describe and explain in detail both the adapted translation approach chosen for the proposed system and the ArSL corpus that we collected for this purpose. The corpus contains 203 signed sentences (with 710 distinct signs), with content restricted to the domain of instructional language as typically used in deaf education. Evaluation shows that the system produces translated sign sentence outputs with an average word error rate of 46.7% and an average position error rate of 29.4% using leave-one-out cross-validation. The most frequent source of errors is missing signs in the corpus; this could be addressed in the future by collecting more corpus material.
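
    The two reported error rates are standard MT metrics. As a rough illustration (not the authors' evaluation code), the sketch below computes word error rate as edit distance over reference length, and a simplified position-independent error rate that ignores sign order; the gloss sequences are made up.

```python
# Illustrative WER and simplified position-independent error rate
# over sign-gloss sequences; not the paper's evaluation code.
from collections import Counter

def wer(ref, hyp):
    """Word error rate: Levenshtein distance / reference length."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

def per(ref, hyp):
    """Simplified position-independent error rate (bag of signs)."""
    matches = sum((Counter(ref) & Counter(hyp)).values())
    return 1 - matches / len(ref)

ref = ["TEACHER", "BOOK", "OPEN"]   # hypothetical gloss sequences
hyp = ["BOOK", "TEACHER", "OPEN"]
print(wer(ref, hyp))  # ~0.667: order errors count against WER...
print(per(ref, hyp))  # 0.0:    ...but not against PER
```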

    Arabic text to Arabic sign language example-based translation system

    No full text
    This dissertation presents the first corpus-based system for translation from Arabic text into Arabic Sign Language (ArSL) for the deaf and hearing-impaired, for whom it can facilitate access to conventional media and allow communication with hearing people. In addition to the familiar technical problems of text-to-text machine translation, building a system for sign language translation requires overcoming some additional challenges. First, the lack of a standard writing system requires building a parallel text-to-sign-language corpus from scratch, as well as computational tools to prepare this parallel corpus. Further, the corpus must support output in visual form, which is clearly far more difficult than producing textual output. The time and effort involved in building such a parallel corpus of text and visual signs from scratch mean that we will inevitably be working with quite small corpora. We have constructed two parallel Arabic text-to-ArSL corpora for our system. The first was built from school-level language instruction material and contains 203 signed sentences and 710 signs. The second was constructed from a children's story and contains 813 signed sentences and 2,478 signs. Working with corpora of limited size means that coverage is a huge issue, so a new technique was derived to exploit Arabic morphological information to increase coverage and, hence, translation accuracy. Further, we employ two different example-based translation methods and combine them to produce more accurate translation output. We have chosen to use concatenated sign video clips as output rather than a signing avatar, both for simplicity and because this allows us to distinguish more easily between translation errors and sign synthesis errors. Using leave-one-out cross-validation on our first corpus, the system produced translated sign sentence outputs with an average word error rate of 36.2% and an average position-independent error rate of 26.9%; the corresponding figures for our second corpus were 44.0% and 28.1%. The most frequent source of errors is missing signs in the corpus; this could be addressed in the future by collecting more corpus material. Finally, it is not possible to compare the performance of our system with any competing Arabic text-to-ArSL machine translation system, since no other such systems exist at present.
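
    To make the example-based approach concrete, here is a minimal sketch of its retrieval step under stated assumptions: the input sentence is matched against stored Arabic examples by word overlap, and the best example's aligned sign glosses are returned. The example pairs, tokenisation, and similarity measure are illustrative; the actual system combines two example-based methods and uses morphological analysis to widen coverage.

```python
# Minimal example-based retrieval sketch (illustrative only, not the
# thesis system): return the sign glosses of the stored Arabic example
# most similar to the input sentence.
def dice_similarity(a, b):
    """Word-overlap (Dice) similarity between two token lists."""
    a_set, b_set = set(a), set(b)
    return 2 * len(a_set & b_set) / (len(a_set) + len(b_set))

def translate(source_tokens, examples):
    """examples: list of (arabic_tokens, sign_glosses) pairs."""
    best = max(examples,
               key=lambda ex: dice_similarity(source_tokens, ex[0]))
    return best[1]

# Hypothetical parallel entries; a real entry would also carry the
# morphological analysis that lets stems match inflected forms.
examples = [
    (["افتح", "الكتاب"], ["OPEN", "BOOK"]),
    (["اغلق", "الباب"], ["CLOSE", "DOOR"]),
]
print(translate(["افتح", "الكتاب", "يا", "ولد"], examples))
# -> ['OPEN', 'BOOK']
```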

    A New Evaluation Approach for Sign Language Machine Translation

    No full text
    This paper proposes a new evaluation approach for sign language machine translation (SLMT) that aims to show a better correlation between its automatically generated scores and human judgements of translation accuracy. To demonstrate this correlation, an Arabic Sign Language (ArSL) corpus was used in the evaluation experiments, and the proposed approach was compared with the results obtained by various existing methods.
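
    Metric-to-human correlation is typically quantified with standard statistics; the sketch below shows the usual computation with made-up placeholder scores, not results from the paper.

```python
# Illustrative only: correlating automatic metric scores with human
# judgements, one pair per sentence. All numbers are placeholders.
from scipy.stats import pearsonr, spearmanr

metric_scores = [0.82, 0.40, 0.65, 0.91, 0.55]  # automatic scores
human_scores  = [4.5,  2.0,  3.5,  5.0,  3.0]   # human adequacy ratings

print("Pearson r:  ", pearsonr(metric_scores, human_scores)[0])
print("Spearman rho:", spearmanr(metric_scores, human_scores)[0])
```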

    Experimental and Theoretical Study for the Popular Shilling Attacks Detection Methods in Collaborative Recommender System

    No full text
    The stability and reliability of filtering and recommender systems are crucial for continuous operation. The presence of fake profiles, known as "shilling attacks," can undermine the reliability of these systems, so it is important to detect and classify these attacks. Numerous techniques for detecting shilling attacks have been proposed, including supervised, semi-supervised, unsupervised, deep learning, and hybrid deep learning methods; these techniques are evaluated against the well-known shilling attack models used to target collaborative recommender systems. While previous research has focused on evaluating shilling attack strategies from a global perspective, considering factors such as attack size and the attacker's knowledge, there is a lack of comparative studies on the various existing and commonly used attack detection methods. This paper aims to fill this gap by providing a comprehensive survey of shilling attack models, detection attributes, and detection algorithms. Furthermore, we explore the traits of injected profiles that are exploited by detection algorithms, which have not been thoroughly investigated in prior works. We also conduct experimental studies on popular attack detection methods. Our experimental results reveal that hybrid deep learning algorithms exhibit the highest performance in shilling detection, followed by supervised and then semi-supervised learning algorithms, while unsupervised techniques perform poorly. Deep learning-based shilling attack detection proves accurate in identifying a variety of mixed attacks. This study thus provides valuable insights into shilling attack models, detection attributes, and detection algorithms, highlighting the superior performance of hybrid deep learning algorithms and the limitations of unsupervised techniques.
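
    To make the attack models concrete, the sketch below injects fake profiles following the classic "average" attack model: each profile gives the target item the maximum rating and rates randomly chosen filler items around their observed means. Function and parameter names are illustrative, not from the paper.

```python
# Illustrative sketch of the classic "average" shilling attack model.
import numpy as np

def inject_average_attack(ratings, target_item, n_profiles=50,
                          filler_size=30, r_max=5.0, rng=None):
    """ratings: (users x items) matrix with np.nan for missing entries."""
    rng = np.random.default_rng(0) if rng is None else rng
    item_means = np.nanmean(ratings, axis=0)   # per-item mean rating
    n_items = ratings.shape[1]
    fakes = np.full((n_profiles, n_items), np.nan)
    for p in range(n_profiles):
        fillers = rng.choice(
            [i for i in range(n_items) if i != target_item],
            size=filler_size, replace=False)
        # Filler ratings drawn around each item's observed mean.
        fakes[p, fillers] = np.clip(
            rng.normal(item_means[fillers], 1.0), 1.0, r_max)
        fakes[p, target_item] = r_max          # push the target item
    return np.vstack([ratings, fakes])
```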

    Automated Diagnosis for Colon Cancer Diseases Using Stacking Transformer Models and Explainable Artificial Intelligence

    No full text
    Colon cancer was the third most common cancer type worldwide in 2020, when almost two million cases were diagnosed. As a result, providing new, highly accurate techniques for detecting colon cancer can lead to early and successful treatment of this disease. This paper proposes a heterogeneous stacking deep learning model to predict colon cancer: stacking integrates pretrained convolutional neural network (CNN) models with a meta-learner to enhance prediction performance. The proposed model is compared with VGG16, InceptionV3, ResNet50, and DenseNet121 using different evaluation metrics, and the models are evaluated on the LC25000 and the binary and multiclass WCE colon cancer image datasets. The results show that the stacking models recorded the highest performance on both datasets: on LC25000, the stacked model achieved the highest accuracy, recall, precision, and F1 score (100%), and on the WCE colon image dataset it achieved the highest accuracy, recall, precision, and F1 score (98%). Stacking-SVM outperformed the existing single models (VGG16, InceptionV3, ResNet50, and DenseNet121) because it combines the outputs of multiple single models and trains and evaluates a meta-learner on those outputs, producing better predictive results than any single model. The black-box deep learning models are interpreted using explainable AI (XAI).
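
    As a rough illustration of the stacking idea (not the paper's exact pipeline), the sketch below concatenates the class-probability outputs of several fine-tuned CNNs into a feature vector and trains an SVM meta-learner on it; data loading and CNN fine-tuning are assumed to have happened elsewhere.

```python
# Minimal stacking sketch: base CNN probabilities -> SVM meta-learner.
# Illustrative only; model preparation and data loading are omitted.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def stack_predictions(base_models, images):
    """Concatenate each base model's softmax output per image."""
    return np.hstack([m.predict(images, verbose=0) for m in base_models])

def train_stacked_svm(base_models, X, y):
    """base_models: already fine-tuned Keras classifiers (e.g. VGG16,
    InceptionV3, ResNet50, DenseNet121 with classification heads).
    X: preprocessed image batch, y: integer class labels."""
    meta_X = stack_predictions(base_models, X)
    X_tr, X_te, y_tr, y_te = train_test_split(
        meta_X, y, test_size=0.2, random_state=42)
    meta = SVC(kernel="rbf")        # the meta-learner
    meta.fit(X_tr, y_tr)
    print("meta-learner accuracy:", meta.score(X_te, y_te))
    return meta
```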