
    Stromal cell-derived factor 1 polymorphism in patients infected with HIV and implications for AIDS progression in Tunisia

    Sameh Amara1, Jorge Domenech2, Faouzi Jenhani3
    1Cellular Immunology and Cytometry, National Blood Transfusion Center, Tunis, Tunisia; 2Hematopoiesis Laboratory, Faculty of Medicine, University of Tours, Tours, France; 3Faculty of Pharmacy, Unit Research in Immunology, Tunis, Tunisia

    Background: An interesting finding in the epidemiology of human immunodeficiency virus (HIV) infection is that certain mutations in genes coding for chemokines, and their receptors and ligands, may confer resistance or susceptibility to HIV-1 infection and acquired immunodeficiency syndrome (AIDS) progression. The mutation most frequently studied is stromal cell-derived factor (SDF)1-3'A, a single nucleotide polymorphism in the 3' untranslated region at position 801 of the SDF1 gene, which appears to be associated with susceptibility or resistance to several diseases, including AIDS. We examined the frequency of this polymorphism in the Tunisian population and evaluated its contribution to a protective genetic background against HIV infection and progression.

    Methods and materials: One hundred forty blood samples from HIV-infected patients from the Cellular Immunology Research Laboratory at the National Blood Transfusion Center were compared with those of 164 random blood donors from the same center. Genotyping was initially performed by polymerase chain reaction (PCR) analysis. The SDF1 PCR products were further subjected to restriction fragment length polymorphism analysis for genotype determination. Screening for the SDF1 polymorphism in the HIV-infected population yielded 56 heterozygous (40%), 52 mutation-homozygous (37.1%), and 32 wild-type homozygous (22.8%) subjects. In contrast, in our healthy population, we found 70/164 heterozygous (42.6%), nine mutation-homozygous (5.4%), and 85 wild-type homozygous (51.8%) subjects.
    The allele frequencies were f(SDF1-3'A) = 57.1% and f(SDF1) = 42.8% in the HIV-infected population, versus f(SDF1-3'A) = 26.8% and f(SDF1) = 73.1% in the healthy population. The allelic and genotypic frequencies of SDF1-3'A in our population show significantly higher distribution profiles compared with those observed in other Caucasian, European, and African American populations. Our results were examined by chi-square test and appear to confirm an association between the polymorphism and AIDS progression. The odds ratio for the SDF1-3'A allele was above 1, whereas that for the wild-type allele was below 1.

    Conclusion: This result seems to confirm that the SDF1-3'A allele is associated with accelerated progression from HIV infection to AIDS in the Tunisian population.

    Keywords: human immunodeficiency virus, SDF1 polymorphism, Tunisia
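The association test reported above can be illustrated from the genotype counts in the abstract. A minimal pure-Python sketch (the counts are taken from the abstract; the authors' exact statistical procedure is not specified here, so a standard Pearson chi-square on the 2x2 allele table is assumed):

```python
# Allele counts reconstructed from the reported genotypes:
# each heterozygote contributes one 3'A allele, each mutation
# homozygote contributes two.
hiv_3a, hiv_wt = 56 + 2 * 52, 56 + 2 * 32      # 160 and 120 alleles
ctrl_3a, ctrl_wt = 70 + 2 * 9, 70 + 2 * 85     # 88 and 240 alleles

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

chi2 = chi_square_2x2(hiv_3a, hiv_wt, ctrl_3a, ctrl_wt)
odds_ratio = (hiv_3a * ctrl_wt) / (hiv_wt * ctrl_3a)

print(f"allele freq (HIV):     {hiv_3a / (hiv_3a + hiv_wt):.1%}")
print(f"allele freq (healthy): {ctrl_3a / (ctrl_3a + ctrl_wt):.1%}")
print(f"chi-square = {chi2:.2f}, OR = {odds_ratio:.2f}")
```

The recovered allele frequencies (57.1% and 26.8%) match the abstract, and the odds ratio for the 3'A allele comes out well above 1, consistent with the reported association.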

    Deep Full-Body HPE for Activity Recognition from RGB Frames Only

    Human Pose Estimation (HPE) is defined as the problem of localizing human joints (also known as keypoints: elbows, wrists, etc.) in images or videos. It can also be defined as the search for a specific pose in the space of all articulated poses. HPE has recently received significant attention from the scientific community. The main reason behind this trend is that pose estimation is considered a key step for many computer vision tasks. Although many approaches have reported promising results, this domain remains largely unsolved due to several challenges such as occlusions, small and barely visible joints, and variations in clothing and lighting. In the last few years, the power of deep neural networks has been demonstrated in a wide variety of computer vision problems, and especially in the HPE task. In this context, we present in this paper a Deep Full-Body HPE (DFB-HPE) approach from RGB images only. Based on ConvNets, fifteen human joint positions are predicted and can be further exploited for a large range of applications such as gesture recognition, sports performance analysis, or human-robot interaction. To evaluate the proposed deep pose estimation model, we apply it to recognize the daily activities of a person in an unconstrained environment. To this end, the extracted features, represented by deep estimated poses, are fed to an SVM classifier. To validate the proposed architecture, our approach is tested on two publicly available benchmarks for pose estimation and activity recognition, namely the J-HMDB and CAD-60 datasets. The obtained results demonstrate the efficiency of the proposed method based on ConvNets and SVM and show how deep pose estimation can improve recognition accuracy. By means of comparison with state-of-the-art methods, we achieve the best HPE performance, as well as the best activity recognition precision on the CAD-60 dataset.
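The pipeline described above turns per-frame joint estimates into feature vectors for a classifier. A minimal sketch of one plausible featurization step (the joint count matches the paper's fifteen predicted joints, but the centroid-based normalization scheme is an illustrative assumption, not the authors' exact pipeline):

```python
# Turn 15 estimated (x, y) joint positions into a fixed-length,
# translation- and scale-invariant descriptor that a classifier
# such as an SVM could consume.
import math

def pose_to_features(joints):
    """joints: list of 15 (x, y) pixel coordinates -> 30-dim vector."""
    assert len(joints) == 15
    # Translate so the pose centroid sits at the origin.
    cx = sum(x for x, _ in joints) / len(joints)
    cy = sum(y for _, y in joints) / len(joints)
    centered = [(x - cx, y - cy) for x, y in joints]
    # Scale by the largest joint-to-centroid distance.
    scale = max(math.hypot(x, y) for x, y in centered) or 1.0
    feats = []
    for x, y in centered:
        feats.extend((x / scale, y / scale))
    return feats

# One frame's estimated pose becomes one training sample:
sample = pose_to_features([(i * 10.0, i * 5.0) for i in range(15)])
print(len(sample))  # 30
```

Each video frame then contributes one such vector, and the per-frame (or temporally pooled) vectors are what the SVM is trained on.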

    Semi-automatic news video annotation framework for Arabic text

    In this paper, we present a semi-automatic news video annotation tool. The tool and its algorithms are dedicated to artificial Arabic text embedded in news video, in the form of both static and scrolling text. Taking the specificities of Arabic script into account, the tool operates at two levels: a global level, which concerns the entire video, and a local level, which concerns any specific frame extracted from the video. The global annotation is performed manually through a user interface; the result of this step is a global XML file. The local annotation at the frame level is done automatically, based on the information contained in the global metafile and a proposed text tracking algorithm. The main application of our tool is the ground-truthing of textual information in video content. It is being used for this purpose in the Arabic Text in Video (AcTiV) database project in our lab. One of the functions that AcTiV provides is a benchmark to compare existing and future Arabic video OCR systems.
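To make the global-metafile idea concrete, here is a sketch of what such an XML file might contain for one video. The tag and attribute names are illustrative assumptions, not the actual AcTiV schema:

```python
# Build a hypothetical "global" annotation file: one entry per
# textbox, valid for the whole video, from which frame-level
# (local) annotations could later be derived automatically.
import xml.etree.ElementTree as ET

root = ET.Element("video", name="news_clip_01", fps="25")
textbox = ET.SubElement(root, "textbox", id="1", type="scrolling")
ET.SubElement(textbox, "content").text = "نشرة الأخبار"
ET.SubElement(textbox, "appearance", start_frame="120", end_frame="430")

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

A frame-level annotator would read this file, and for every frame between `start_frame` and `end_frame` compute the textbox position (e.g. by tracking the scrolling text) to emit the local annotations.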

    Data, protocol and algorithms for performance evaluation of text detection in Arabic news video

    Benchmark datasets and their corresponding evaluation protocols are commonly used by the computer vision community, in a variety of application domains, to assess the performance of existing systems. Even though text detection and recognition in video has seen much progress in recent years, relatively little work has been done to propose standardized annotations and evaluation protocols, especially for Arabic Video-OCR systems. In this paper, we present a framework for evaluating text detection in videos. In addition, a dataset, ground-truth annotations, and evaluation protocols are provided for Arabic text detection. Moreover, two published text detection algorithms are tested on a part of the AcTiV database and evaluated using a set of the proposed evaluation protocols.

    Multi-dimensional long short-term memory networks for artificial Arabic text recognition in news video

    This study presents a novel approach for Arabic video text recognition based on recurrent neural networks. Embedded texts in videos represent a rich source of information for indexing and automatically annotating multimedia documents. However, video text recognition is a non-trivial task due to many challenges, such as the variability of text patterns and the complexity of backgrounds. In the case of Arabic, the presence of diacritic marks, the cursive nature of the script, and the non-uniform intra/inter-word distances introduce many additional challenges. The proposed system is a segmentation-free method that relies on a multi-dimensional long short-term memory network coupled with a connectionist temporal classification layer. It is shown that using an efficient pre-processing step and a compact representation of Arabic character models brings robust performance and yields a lower error rate than other recently published methods. The authors' system is trained and evaluated using the public AcTiV-R dataset under different evaluation protocols. The obtained results outperform current state-of-the-art approaches on the public ALIF dataset in terms of recognition rates at both the character and line levels.

    A dataset for Arabic text detection, tracking and recognition in news videos - AcTiV

    Recently, promising results have been reported on video text detection and recognition. Most of the proposed methods are tested on private datasets with non-uniform evaluation metrics. We report here on the development of a publicly accessible annotated video dataset designed to assess the performance of different artificial Arabic text detection, tracking and recognition systems. The dataset includes 80 videos (more than 850,000 frames) collected from 4 different Arabic news channels. An attempt was made to ensure maximum diversity of the textual content in terms of size, position and background. The data is accompanied by detailed annotations for each textbox. We also present a region-based text detection approach, in addition to a set of evaluation protocols against which the performance of different systems can be measured.

    Open Datasets and Tools for Arabic Text Detection and Recognition in News Video Frames

    Recognizing text in video is more complex than in other environments, such as scanned documents. Video texts appear in various colors, unknown fonts and sizes, and are often affected by compression artifacts and low quality. In contrast to Latin text, there are no publicly available datasets covering all aspects of the Arabic Video OCR domain. This paper describes a new well-defined and annotated Arabic-Text-in-Video dataset called AcTiV 2.0. The dataset is dedicated to building and evaluating Arabic video text detection and recognition systems. AcTiV 2.0 contains 189 video clips serving as raw material for creating 4063 key frames for the detection task and 10,415 cropped text images for the recognition task. AcTiV 2.0 is also distributed with its annotation and evaluation tools, which are made open-source for standardization and validation purposes. This paper also reports on the evaluation of several systems tested under the proposed detection and recognition protocols.

    Text detection in Arabic news video based on SWT operator and convolutional auto-encoders

    Text detection in videos is a challenging problem due to the variety of text specificities, the presence of complex backgrounds, and anti-aliasing/compression artifacts. In this paper, we present an approach for detecting horizontally aligned artificial text in Arabic news video. The novelty of this method lies in the combination of two techniques: an adapted version of the Stroke Width Transform (SWT) algorithm and a convolutional auto-encoder (CAE). First, the SWT extracts candidate text components, which are then filtered and grouped using geometric constraints and stroke-width information. Second, the CAE is used as an unsupervised feature learning method to classify the obtained textline candidates as text or non-text. We assess the proposed approach on the public Arabic-Text-in-Video database (AcTiV-DB) using different evaluation protocols, including data from several TV channels. Experiments indicate that the use of learned features significantly improves the text detection results.

    ICPR2016 contest on Arabic text detection and recognition in video frames - AcTiVComp

    This paper describes AcTiVComp, the competition on detection and recognition of Arabic text in video, held in conjunction with the 23rd International Conference on Pattern Recognition (ICPR). The main objective of this competition is to evaluate the performance of participants' algorithms in automatically locating and/or recognizing overlay text lines in Arabic video frames using the freely available AcTiV dataset. In this first edition of AcTiVComp, four groups with five systems participated in the competition. In the detection challenge, the systems are compared based on the standard assessment metrics (i.e. recall, precision and F-score). The recognition results are evaluated based on the recognition rates at the character, word and line levels. The systems were tested in a blind manner on the closed test set of the AcTiV dataset, which is unknown to all participants. In addition to the test results, we also provide a short description of the participating groups and their systems.
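The ranking metrics named above (recall, precision, F-score) reduce to simple counts of matched ground-truth boxes and detector outputs. A minimal sketch (the matching criterion that decides when a detection "counts" is assumed and not the competition's exact protocol):

```python
# Standard detection metrics from match counts.
def detection_scores(num_matched, num_gt, num_detected):
    """num_matched: detections matched to a ground-truth box,
    num_gt: ground-truth boxes, num_detected: reported boxes."""
    recall = num_matched / num_gt if num_gt else 0.0
    precision = num_matched / num_detected if num_detected else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score

# e.g. 80 correctly matched boxes out of 100 ground-truth boxes,
# with 90 boxes reported by the detector:
p, r, f = detection_scores(80, 100, 90)
print(f"P={p:.3f} R={r:.3f} F={f:.3f}")
```

The F-score is the harmonic mean of precision and recall, so a system cannot compensate for very low recall with high precision (or vice versa).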