    A General Scheme for Dithering Multidimensional Signals, and a Visual Instance of Encoding Images with Limited Palettes

    The core contribution of this paper is a general, clean scheme based on soft vector clustering for the dithering of multidimensional signals. The scheme works in spaces of arbitrary dimensionality, with an arbitrary number and distribution of quantization centroids, and with computable, controllable quantization noise. Dithering the digitization of one-dimensional and multidimensional signals disperses the quantization noise over the frequency domain, rendering it less perceptible to signal processing systems, including human cognition; it therefore has a very beneficial impact on vital domains such as communications, control, and machine learning. Our extensive surveys concluded that the published literature lacks such a scheme. It is desirable and insightful to visualize the behavior of our multidimensional dithering scheme, especially the dispersion of quantization noise over the frequency domain. In general, such visualization is hard for the reader to perceive unless the target multidimensional signal is itself directly perceivable by humans. We therefore apply our scheme to encoding true-color images, which are 3D signals, with palettes of limited color sets, showing how it minimizes visual distortions in the encoded images, especially the contouring effect.
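
    As a hedged illustration of the idea (not the authors' soft-vector-clustering scheme), the sketch below dithers an RGB image by adding zero-mean noise before nearest-centroid quantization against a small palette; the palette, noise amplitude, and test image are all illustrative assumptions.

    ```python
    import numpy as np

    def dither_quantize(image, palette, noise_amplitude=24.0, seed=0):
        """Quantize an RGB image to a limited palette with random dithering.

        Adding zero-mean noise before choosing the nearest palette color
        decorrelates the quantization error from the signal, spreading it
        over the frequency domain and suppressing visible contouring.
        """
        rng = np.random.default_rng(seed)
        pixels = image.reshape(-1, 3).astype(np.float64)

        # Zero-mean uniform noise in RGB units (amplitude is an assumption).
        noisy = pixels + rng.uniform(-noise_amplitude, noise_amplitude, pixels.shape)

        # Map each noisy pixel to its nearest palette centroid (hard
        # quantization; the paper's scheme uses soft vector clustering).
        dists = np.linalg.norm(noisy[:, None, :] - palette[None, :, :], axis=2)
        quantized = palette[np.argmin(dists, axis=1)]
        return quantized.reshape(image.shape).astype(np.uint8)

    # Example: a smooth gradient, where undithered quantization would contour.
    gradient = np.linspace(0, 255, 256).astype(np.uint8)
    image = np.stack([np.tile(gradient, (64, 1))] * 3, axis=-1)   # (64, 256, 3)
    palette = np.array([[v, v, v] for v in (0, 85, 170, 255)], dtype=np.float64)
    out = dither_quantize(image, palette)
    ```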

    Efficient Detection of Attacks in SIP Based VoIP Networks Using Linear l1-SVM Classifier

    The Session Initiation Protocol (SIP) is one of the most common protocols used for the signaling function in Voice over IP (VoIP) networks. SIP is popular because of its flexibility, simplicity, and easy implementation, which also makes it the target of many attacks. In this paper, we propose a new system to detect Denial of Service (DoS) attacks (i.e., malformed messages and INVITE flooding) and the Spam over Internet Telephony (SPIT) attack in SIP-based VoIP networks using a linear Support Vector Machine with l1 regularization (i.e., an l1-SVM classifier). In our approach, we project the SIP messages into a very high dimensional space using string-based n-gram features, and a linear classifier is then trained on top of these features. Our experimental results show that the proposed system detects malformed message, INVITE flooding, and SPIT attacks with high accuracy. In addition, the proposed system significantly outperformed other systems in detection speed.
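
    A minimal sketch of the described pipeline using scikit-learn, assuming character n-grams and a sparse linear SVM; the SIP messages, labels, and hyperparameters below are placeholders, not the paper's actual configuration.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Character n-grams project each SIP message into a very high
    # dimensional sparse space; l1 regularization keeps the learned
    # weight vector sparse, which makes prediction fast.
    model = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(2, 4), binary=True),
        LinearSVC(penalty="l1", dual=False, C=1.0),
    )

    # Placeholder training data: raw SIP messages and attack labels.
    messages = [
        "INVITE sip:bob@example.com SIP/2.0\r\nVia: SIP/2.0/UDP host\r\n",
        "REGISTER sip:example.com SIP/2.0\r\nVia: SIP/2.0/UDP host\r\n",
    ]
    labels = ["invite_flooding", "normal"]
    model.fit(messages, labels)

    print(model.predict(["OPTIONS sip:alice@example.com SIP/2.0\r\n"]))
    ```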

    Analysis of a Chaotic Spiking Neural Model: The NDS Neuron

    Further analysis and experimentation are carried out in this paper on a chaotic dynamic model, viz. the Nonlinear Dynamic State neuron (NDS), to better understand the underlying dynamics of the model and to enhance it. Chaos provides many interesting properties that can be exploited to achieve computational tasks, such as sensitivity to initial conditions, space filling, control, and synchronization. Biologists have suggested that chaos may play an important role in information processing in the human brain, and equipping artificial neural networks (ANNs) with chaos would enrich the dynamic behaviours of such networks. The NDS model has some limitations, which can be overcome in different ways; in this paper, different approaches are followed to push the boundaries of the NDS model in order to enhance it. One approach is to scale the parameters of the model's chaotic equations and study the resulting dynamics. Another is to study the method used to discretize the original Rössler system on which the NDS model is based. These approaches have revealed some facts about the NDS attractor and suggest why such a model can be stabilized to a large number o
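
    For context, a minimal forward-Euler discretization of the Rössler system (the continuous system the NDS neuron is derived from) is sketched below; the parameter values are the classic chaotic set, not the NDS model's own, and the choice of Euler discretization is an assumption.

    ```python
    import numpy as np

    def rossler_discrete(steps=10000, dt=0.01, a=0.2, b=0.2, c=5.7):
        """Forward-Euler discretization of the Rossler system:
        dx/dt = -y - z,  dy/dt = x + a*y,  dz/dt = b + z*(x - c).

        The NDS neuron is derived from a discretized Rossler attractor;
        this sketch uses the classic chaotic parameters, not the NDS ones.
        """
        x = np.empty((steps, 3))
        x[0] = (0.1, 0.0, 0.0)
        for t in range(steps - 1):
            xt, yt, zt = x[t]
            x[t + 1] = (
                xt + dt * (-yt - zt),
                yt + dt * (xt + a * yt),
                zt + dt * (b + zt * (xt - c)),
            )
        return x

    trajectory = rossler_discrete()   # (10000, 3) chaotic trajectory
    ```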

    Influence of Channel Selection and Subject’s Age on the Performance of the Single Channel EEG-Based Automatic Sleep Staging Algorithms

    The electroencephalogram (EEG) signal is a key parameter used to identify the different sleep stages present in an overnight sleep recording. Sleep staging is crucial in the diagnosis of several sleep disorders; however, manual annotation of the EEG signal is costly and time-consuming. Automatic sleep staging algorithms offer a practical and cost-effective alternative, but due to the limited availability of EEG sleep datasets, the reliability of existing algorithms is questionable. Furthermore, most reported experimental results have been obtained on adult EEG signals; the effectiveness of these algorithms on pediatric EEGs is unknown. In this paper, we conduct an intensive study of two state-of-the-art single-channel EEG-based sleep staging algorithms, namely DeepSleepNet and AttnSleep, using a recently released large-scale sleep dataset collected from 3984 patients, most of whom are children. The paper studies how the performance of these algorithms varies when applied to different EEG channels and across different age groups. All results were further analyzed within individual sleep stages to understand how each stage is affected by the choice of EEG channel and the participants' age. The study concluded that channel selection is crucial to the accuracy of single-channel EEG-based automatic sleep staging methods. For instance, channels O1-M2 and O2-M1 performed consistently worse than other channels for both algorithms and across all age groups. The study also revealed the challenges of automatic sleep staging for newborns and infants (1–52 weeks).
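
    A hedged sketch of the kind of per-stage breakdown the study performs: computing per-stage F1 for one (channel, age group) slice of the results. The stage names, labels, and predictions below are placeholders, not outputs of DeepSleepNet or AttnSleep.

    ```python
    import numpy as np
    from sklearn.metrics import f1_score

    def per_stage_f1(y_true, y_pred, stages=("W", "N1", "N2", "N3", "REM")):
        """Per-stage F1 for one (channel, age group) slice of the results."""
        scores = f1_score(y_true, y_pred, labels=list(stages), average=None)
        return dict(zip(stages, scores))

    # Placeholder: 1000 epochs of hypnogram labels for one channel/age group.
    rng = np.random.default_rng(0)
    y_true = rng.choice(["W", "N1", "N2", "N3", "REM"], 1000)
    y_pred = y_true.copy()   # stand-in for a staging model's predictions
    print(per_stage_f1(y_true, y_pred))
    ```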

    Survey of Countering DoS/DDoS Attacks on SIP Based VoIP Networks

    Voice over IP (VoIP) services hold promise because of the features they offer and their low cost. Most VoIP networks depend on the Session Initiation Protocol (SIP) to handle signaling functions. SIP is a text-based protocol that is vulnerable to many attacks, among which Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks are the most harmful, because they drain VoIP resources and render the SIP service unavailable to legitimate users. In this paper, we present recently introduced approaches to detecting DoS and DDoS attacks and classify them based on various factors. We then analyze these approaches according to various characteristics and investigate their main strengths and weaknesses. Finally, we provide remarks for enhancing the surveyed approaches and highlight directions for future research toward building effective detection solutions.

    Detecting SPIT Attacks in VoIP Networks Using Convolutional Autoencoders: A Deep Learning Approach

    Voice over Internet Protocol (VoIP) is a technology that enables voice communication to be transmitted over the Internet, transforming communication in both personal and business contexts by offering benefits such as cost savings and integration with other communication systems. However, VoIP attacks are a growing concern for organizations that rely on this technology. Spam over Internet Telephony (SPIT) is a type of VoIP attack involving unwanted calls or messages, which are both annoying and a security risk to users. Detecting SPIT is challenging because it is often delivered from anonymous VoIP accounts or spoofed phone numbers. This paper proposes an anomaly detection model that uses a deep convolutional autoencoder to identify SPIT attacks. The model is trained on a dataset of normal traffic and encodes new traffic into a lower-dimensional latent representation; if the new traffic deviates significantly from the encoded normal traffic, the model flags it as anomalous. The model was tested on two datasets and achieved F1 scores of 99.32% and 99.56%, and it outperformed several traditional anomaly detection approaches on both datasets.
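
    A minimal sketch of this style of detector in PyTorch, assuming fixed-length windows of traffic features; the architecture, window size, and training loop are illustrative, and the anomaly criterion here is reconstruction error, a common stand-in for the latent-space comparison the paper describes.

    ```python
    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        """1D convolutional autoencoder over windows of traffic features."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv1d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv1d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose1d(8, 16, 3, stride=2, padding=1,
                                   output_padding=1), nn.ReLU(),
                nn.ConvTranspose1d(16, 1, 3, stride=2, padding=1,
                                   output_padding=1),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Train on normal traffic only (placeholder windows of 32 features).
    model = ConvAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    normal = torch.randn(256, 1, 32)
    for _ in range(10):
        opt.zero_grad()
        loss = loss_fn(model(normal), normal)
        loss.backward()
        opt.step()

    # Flag windows whose reconstruction error exceeds a percentile
    # threshold estimated on normal data.
    with torch.no_grad():
        errors = ((model(normal) - normal) ** 2).mean(dim=(1, 2))
        threshold = torch.quantile(errors, 0.99)
        new_window = torch.randn(1, 1, 32)
        is_spit = ((model(new_window) - new_window) ** 2).mean() > threshold
    ```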

    Retrieval-Based Transformer Pseudocode Generation

    Comprehending source code is difficult, especially when the programmer is not familiar with the programming language. Pseudocode explains and describes code contents based on semantic analysis and understanding of the source code. In this paper, a novel retrieval-based transformer pseudocode generation model is proposed. The model combines different retrieval similarity methods with neural machine translation to generate pseudocode, and it handles low-frequency words and words that do not exist in the training dataset. It consists of three steps. First, the sentences most similar to the input sentence are retrieved using different similarity methods. Second, the retrieved source code is passed to a transformer-based deep learning model to generate the corresponding retrieved pseudocode. Third, a replacement process is performed to obtain the target pseudocode. The proposed model is evaluated on the Django and SPoC datasets and shows promising performance compared to other machine translation language models, reaching BLEU scores of 61.96 and 50.28 on Django and SPoC, respectively.
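
    A hedged sketch of the retrieval step alone, using TF-IDF cosine similarity as one possible similarity method; the corpus and query are placeholders, and the transformer translation and replacement steps are omitted.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def retrieve_similar(query, corpus, top_k=1):
        """Step 1 of the pipeline: find the training source-code lines
        most similar to the input line (one of several possible
        similarity methods compared in the paper)."""
        vec = TfidfVectorizer(token_pattern=r"\S+")
        matrix = vec.fit_transform(corpus + [query])
        sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
        return [corpus[i] for i in sims.argsort()[::-1][:top_k]]

    # Placeholder training corpus of source-code lines.
    corpus = ["for i in range(10):", "x = int(input())", "print(x + y)"]
    print(retrieve_similar("for j in range(5):", corpus))
    ```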

    Pseudocode Generation from Source Code Using the BART Model

    In the software development process, more than one developer may work on the same program, and bugs may be fixed by a different developer; understanding the source code is therefore an important issue. Pseudocode plays an important role in solving this problem, as it helps developers understand the source code. Recently, transformer-based pre-trained models have achieved remarkable results in machine translation, a task similar to pseudocode generation. In this paper, we propose a novel approach to automatic pseudocode generation from source code based on the pre-trained Bidirectional and Auto-Regressive Transformer (BART) model. We fine-tuned two pre-trained BART models (i.e., large and base) on a dataset containing source code and its equivalent pseudocode, and evaluated the proposed approach on two benchmark datasets (i.e., Django and SPoC). The model based on BART large outperforms other state-of-the-art models in BLEU by 15% and 27% on the Django and SPoC datasets, respectively.
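
    A minimal fine-tuning sketch with the Hugging Face transformers library, assuming the facebook/bart-base checkpoint and a single illustrative (source code, pseudocode) pair; the actual training loop, datasets, and hyperparameters are not those of the paper.

    ```python
    from transformers import BartForConditionalGeneration, BartTokenizer

    # Load the pre-trained BART base model (the paper also uses bart-large).
    tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    # One placeholder (source code, pseudocode) pair; real fine-tuning
    # would iterate over the full dataset with an optimizer such as AdamW.
    src = "if x % 2 == 0: print('even')"
    tgt = "if x is even, print the string 'even'"
    batch = tokenizer(src, return_tensors="pt")
    labels = tokenizer(tgt, return_tensors="pt").input_ids

    loss = model(**batch, labels=labels).loss
    loss.backward()   # backward pass for a single gradient step

    # After fine-tuning, generate pseudocode for unseen source code.
    out = model.generate(**tokenizer("y = len(s)", return_tensors="pt"),
                         max_length=40)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    ```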
