
    Garbage collection optimization for non uniform memory access architectures

    Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access latency overhead and potential bandwidth saturation of the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a Stop-The-World garbage collector when tracing connected objects in NUMA heaps. First, it identifies a rich locality that exists naturally in 'rooted sub-graphs', the connected objects comprising a root object and its reachable set. Second, this dissertation leverages the locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to contain a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads running on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks from the established real-world DaCapo benchmark suite, the widely used SPECjbb benchmark, a Neo4j graph database Java benchmark, and an artificial benchmark. On a multi-hop NUMA architecture, the NUMA-aware garbage collector shows an average performance improvement of 15%, and this gain is shown to result from improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: the policy relies on outdated assumptions and generates a constant thread count, yet the production Hotspot JVM still uses it. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring it yields better collection throughput than the default policy. Fifth, this dissertation designs and implements a runtime technique that uses heuristics from dynamic collection behavior to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average improvement of 21% in garbage collection performance for the DaCapo benchmarks.
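    The node-local stealing policy can be pictured with a minimal sketch in Python; the dissertation's actual mechanism lives inside the Hotspot C++ collector, so the class and function names here (GCWorker, next_reference) are hypothetical. A collector thread drains its own queue first, then steals from siblings pinned to the same NUMA node, and falls back to remote nodes only when no local work remains:

        # Illustrative sketch only; not the Hotspot implementation.
        from collections import deque
        import random

        class GCWorker:
            def __init__(self, worker_id, numa_node):
                self.worker_id = worker_id
                self.numa_node = numa_node   # NUMA node this worker is pinned to
                self.queue = deque()         # object references waiting to be traced

            def pop_local(self):
                return self.queue.pop() if self.queue else None

            def steal_from(self, victim):
                # Steal from the opposite end to reduce contention with the owner.
                return victim.queue.popleft() if victim.queue else None

        def next_reference(worker, all_workers):
            """Prefer local work, then same-node siblings, then remote nodes."""
            ref = worker.pop_local()
            if ref is not None:
                return ref
            same_node = [w for w in all_workers
                         if w is not worker and w.numa_node == worker.numa_node]
            remote = [w for w in all_workers if w.numa_node != worker.numa_node]
            for victim in random.sample(same_node, len(same_node)) + \
                          random.sample(remote, len(remote)):
                ref = worker.steal_from(victim)
                if ref is not None:
                    return ref
            return None  # no work anywhere: the worker can terminate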

    Detecting SPIT Attacks in VoIP Networks Using Convolutional Autoencoders: A Deep Learning Approach

    Voice over Internet Protocol (VoIP) is a technology that enables voice communication to be transmitted over the Internet, transforming communication in both personal and business contexts by offering several benefits such as cost savings and integration with other communication systems. However, VoIP attacks are a growing concern for organizations that rely on this technology for communication. Spam over Internet Telephony (SPIT) is a type of VoIP attack that involves unwanted calls or messages, which can be annoying and can pose security risks to users. Detecting SPIT can be challenging since it is often delivered from anonymous VoIP accounts or spoofed phone numbers. This paper proposes an anomaly detection model that utilizes a deep convolutional autoencoder to identify SPIT attacks. The model is trained on a dataset of normal traffic and then encodes new traffic into a lower-dimensional latent representation. If the network traffic deviates significantly from the encoded normal traffic, the model flags it as anomalous. The model was tested on two datasets and achieved F1 scores of 99.32% and 99.56%. Furthermore, the proposed model was compared to several traditional anomaly detection approaches and outperformed them on both datasets.
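    As a rough illustration of the approach (not the paper's exact architecture: the layer sizes, the number of per-flow features, and the 99th-percentile threshold rule below are assumptions), a convolutional autoencoder can be trained on normal traffic only and new traffic flagged as possible SPIT when its reconstruction error exceeds a threshold learned from the normal data:

        import numpy as np
        from tensorflow import keras
        from tensorflow.keras import layers

        n_features = 64  # assumed number of per-flow features

        def build_autoencoder():
            inp = keras.Input(shape=(n_features, 1))
            x = layers.Conv1D(32, 3, activation="relu", padding="same")(inp)
            x = layers.MaxPooling1D(2, padding="same")(x)
            x = layers.Conv1D(16, 3, activation="relu", padding="same")(x)
            encoded = layers.MaxPooling1D(2, padding="same")(x)  # latent representation
            x = layers.Conv1D(16, 3, activation="relu", padding="same")(encoded)
            x = layers.UpSampling1D(2)(x)
            x = layers.Conv1D(32, 3, activation="relu", padding="same")(x)
            x = layers.UpSampling1D(2)(x)
            out = layers.Conv1D(1, 3, activation="sigmoid", padding="same")(x)
            model = keras.Model(inp, out)
            model.compile(optimizer="adam", loss="mse")
            return model

        # Train on normal traffic only; random data stands in for real features.
        normal = np.random.rand(1000, n_features, 1)
        ae = build_autoencoder()
        ae.fit(normal, normal, epochs=5, batch_size=64, verbose=0)

        # Flag traffic whose reconstruction error exceeds a percentile threshold.
        errors = np.mean((ae.predict(normal, verbose=0) - normal) ** 2, axis=(1, 2))
        threshold = np.percentile(errors, 99)

        def is_spit(sample):
            err = np.mean((ae.predict(sample[None, ...], verbose=0) - sample) ** 2)
            return err > threshold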

    Improved prostate cancer diagnosis using a modified ResNet50-based deep learning architecture

    Prostate cancer, the most common cancer in men, is influenced by age, family history, genetics, and lifestyle factors. Early detection of prostate cancer using screening methods improves outcomes, but the balance between overdiagnosis and early detection remains debated. Using Deep Learning (DL) algorithms for prostate cancer detection offers a promising solution for accurate and efficient diagnosis, particularly in cases where prostate imaging is challenging. In this paper, we propose a Prostate Cancer Detection Model (PCDM) for the automatic diagnosis of prostate cancer and demonstrate its clinical applicability for aiding the early detection and management of prostate cancer in real-world healthcare environments. The PCDM is a modified ResNet50-based architecture that integrates Faster R-CNN and dual optimizers to improve detection performance. The model is trained on a large dataset of annotated medical images, and the experimental results show that the proposed model outperforms both ResNet50 and VGG19 architectures. Specifically, the proposed model achieves high sensitivity, specificity, precision, and accuracy rates of 97.40%, 97.09%, 97.56%, and 95.24%, respectively.
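    A minimal sketch of the general pattern (a ResNet50 backbone with a replaced classification head) is given below; the paper's Faster R-CNN region proposals, dual-optimizer scheme, and exact head layout are not reproduced, so the layer sizes and the binary benign/malignant output are assumptions:

        from tensorflow import keras
        from tensorflow.keras import layers

        backbone = keras.applications.ResNet50(
            weights="imagenet", include_top=False, input_shape=(224, 224, 3))
        backbone.trainable = False  # fine-tune only the new head first

        x = layers.GlobalAveragePooling2D()(backbone.output)
        x = layers.Dense(256, activation="relu")(x)
        x = layers.Dropout(0.5)(x)
        out = layers.Dense(1, activation="sigmoid")(x)  # assumed benign vs. malignant output

        model = keras.Model(backbone.input, out)
        model.compile(optimizer=keras.optimizers.Adam(1e-4),
                      loss="binary_crossentropy",
                      metrics=["accuracy", keras.metrics.Recall(name="sensitivity")])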

    Skeletal age evaluation using hand X-rays to determine growth problems

    A common clinical method for identifying anomalies in bone growth in infants and newborns is skeletal age estimation with X-ray images. Children's bone abnormalities can result from several conditions, including wounds, infections, or tumors. One of the most frequent causes of bone problems is the slow displacement of bones due to pressure applied to the growth plates as children develop. The growth plate can be harmed by a lack of blood supply, separation from other parts of the bone, or slight misalignment. Problems with the growth plate prevent bones from developing, cause joint distortion, and may cause permanent joint injury. Because bone age reflects the actual level of growth, a significant discrepancy between chronological and assessed age may indicate a growth problem. Therefore, skeletal age estimation is performed to look for endocrine disorders, genetic problems, and growth anomalies. To address the bone age assessment challenge, this study uses the Radiological Society of North America's Pediatric Bone Age Challenge dataset, which contains 12,600 radiological images of patients' left hands together with gender and bone age information. A bone age evaluation system based on the hand skeleton guidelines is proposed in this study for the detection of hand bone maturation. The proposed approach is based on a customized convolutional neural network. Different data augmentation techniques are used for the calculation of the skeletal age; these techniques not only increase the dataset size but also affect the training of the model. The performance of the model is assessed against the Visual Geometry Group (VGG) model. Results demonstrate that the customized convolutional neural network (CNN) model outperforms the VGG model with 97% accuracy.
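    A hedged sketch of this kind of pipeline is given below, assuming bone age is regressed in months from a resized grayscale hand X-ray; the input size, augmentation settings, and layer widths are illustrative assumptions rather than the paper's customized CNN:

        from tensorflow import keras
        from tensorflow.keras import layers

        # Augmentation applied on the fly during training.
        augment = keras.Sequential([
            layers.RandomRotation(0.05),
            layers.RandomZoom(0.1),
            layers.RandomTranslation(0.05, 0.05),
        ])

        inp = keras.Input(shape=(256, 256, 1))
        x = augment(inp)
        for filters in (32, 64, 128):
            x = layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
            x = layers.MaxPooling2D()(x)
        x = layers.Flatten()(x)
        x = layers.Dense(128, activation="relu")(x)
        out = layers.Dense(1)(x)  # predicted bone age in months

        model = keras.Model(inp, out)
        model.compile(optimizer="adam", loss="mae")  # mean absolute error in months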

    Design and Development of a Smart IoT-Based Robotic Solution for Wrist Rehabilitation

    In this study, we present an IoT-based robot for wrist rehabilitation with a new protocol for determining the state of injured muscles and providing dynamic model parameters. In this model, the torque produced by the robot and the torque provided by the patient are determined and updated taking fatigue constraints into account. In the proposed control architecture, which is based on EMG signal extraction, a fuzzy classifier was designed and implemented to estimate muscle fatigue, and based on this estimate the patient's torque is updated during the rehabilitation session. The first step of the protocol consists of calculating the subject-related parameters: axis offset, inertial parameters, passive stiffness, and passive damping. The second step is to determine the remaining components of the wrist model, including the interaction torque. The subject must perform the desired movements, providing the torque necessary to move the robot in the desired direction; in this case, the robot applies a resistive torque so that the torque produced by the patient can be calculated. After that, the protocol considers both the patient and the robot as active, and all exercises are performed accordingly. The developed robotics-based solution, including the proposed protocol, was tested on three subjects and showed promising results.
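    The fatigue-driven torque update can be pictured with a simplified sketch; the membership functions, EMG features, and update rule below are illustrative assumptions, not the paper's fuzzy classifier or dynamic model:

        import numpy as np

        def fatigue_level(rms_amplitude, median_frequency):
            """Crude fuzzy-style estimate in [0, 1]: fatigue tends to raise the EMG
            RMS amplitude and lower the median frequency of its power spectrum."""
            high_amp = np.clip((rms_amplitude - 0.3) / 0.4, 0.0, 1.0)
            low_freq = np.clip((80.0 - median_frequency) / 40.0, 0.0, 1.0)
            return 0.5 * (high_amp + low_freq)

        def update_torques(patient_torque, robot_torque, fatigue):
            """As fatigue rises, reduce the torque demanded from the patient and
            let the robot supply the remainder needed for the exercise."""
            demanded = patient_torque * (1.0 - fatigue)
            assist = robot_torque + (patient_torque - demanded)
            return demanded, assist

        print(update_torques(patient_torque=2.0, robot_torque=1.0,
                             fatigue=fatigue_level(0.6, 65.0)))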

    Context Aware Crowd Tracking and Anomaly Detection via Deep Learning and Social Force Model

    The world's expanding population, the variety of human social factors, and densely populated environments leave people feeling insecure. Addressing this insecurity has traditionally required human security officers, but manual monitoring is time-consuming, labor-intensive, and ineffective. Autonomous surveillance frameworks are therefore necessary in the modern day, since they can address these problems. Nevertheless, challenges persist; the central ones are separating the foreground from the scene and understanding the contextual structure of the environment in order to efficiently identify unusual objects. In our work, we introduce a novel framework to tackle these difficulties by presenting a semantic segmentation technique for separating foreground objects: super-pixels are generated using an improved watershed transform, and a conditional random field is then applied to obtain multi-object segmented frames through pixel-level labeling. Next, the Social Force model is introduced to extract the contextual structure of the environment via the fusion of a novel, specially chosen histogram of optical flow and an inner force model. Using the computed social force, multi-person tracking is performed via three-dimensional template association with percentile rank and non-maximal suppression. Multi-object categorization is then performed with a deep learning Feature Pyramid Network. Finally, taking the contextual structure of the environment into account, Jaccard similarity is used to decide on abnormality and identify unusual objects in the scene. The proposed framework is verified through rigorous experiments; it obtained multi-person tracking efficiency of 92.2% and 89.1% on the UCSD and CUHK Avenue datasets, and abnormality detection efficiency of 95.2% and 93.7% on the same datasets, respectively.
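    The final decision step can be illustrated with a tiny sketch of the Jaccard comparison alone; the label sets and the 0.5 threshold are assumptions, and the paper's context model is richer than a plain set of category labels:

        def jaccard(a, b):
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 1.0

        expected_context = {"pedestrian", "bicycle"}  # learned normal context for the scene

        def is_anomalous(frame_labels, threshold=0.5):
            return jaccard(frame_labels, expected_context) < threshold

        print(is_anomalous({"pedestrian", "car"}))  # a car on a walkway -> likely anomalous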

    Depth Sensors-Based Action Recognition Using a Modified K-Ary Entropy Classifier

    Surveillance systems are attracting ample interest in the field of computer vision. Existing surveillance systems usually rely on optical or wearable sensors for indoor and outdoor activities. These sensors give reasonable performance in a simulated environment; however, when used under realistic settings, they can cause a large number of false alarms. Moreover, in a real-world scenario, positioning a depth camera too far from the subject can compromise image quality and result in the loss of depth information. Furthermore, depth information in RGB images may be lost when converting a 3D image to a 2D image. Therefore, much surveillance research is moving toward fused sensors, which have greatly improved action recognition performance. Building on the concept of fused sensors, this paper proposes a modified K-Ary entropy classifier algorithm that maps vectors of arbitrary size to a fixed-size subtree pattern for graph classification and solves complex feature selection and classification problems using RGB-D data. The main aim is to increase the space between the intra-substructure nodes of a tree through entropy accumulation, reducing the likelihood of classifying the minority class as belonging to the majority class. The proposed model works as follows: first, depth and RGB images from three benchmark datasets are taken as input; then, using 2.5D point cloud modeling and ridge extraction, full-body features and point-based features are retrieved; finally, a modified K-Ary entropy accumulation classifier optimized by the probability-based incremental learning (PBIL) algorithm is used. In both qualitative and quantitative experiments, the system achieved 95.05%, 95.56%, and 95.08% performance on the SYSU-ACTION, PRECIS HAR, and Northwestern-UCLA (N-UCLA) datasets, respectively. The proposed system could apply to various emerging real-world applications such as human target tracking, security-critical human event detection, perimeter security, internet security, and public safety.
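    The PBIL optimization step can be sketched on its own (the fitness function below is a stand-in for the paper's classifier objective, and the vector length, population size, and learning rate are assumptions): PBIL maintains a probability vector over binary choices, samples a population from it, and nudges the probabilities toward the best sample each generation:

        import numpy as np

        rng = np.random.default_rng(0)
        n_bits, pop_size, lr = 20, 30, 0.1
        prob = np.full(n_bits, 0.5)  # probability of setting each bit

        def fitness(mask):
            # Placeholder objective: reward the first five bits, penalise mask size.
            return mask[:5].sum() - 0.1 * mask.sum()

        for _ in range(100):
            population = rng.random((pop_size, n_bits)) < prob
            best = max(population, key=fitness)
            prob = (1 - lr) * prob + lr * best  # shift probabilities toward the best sample

        print((prob > 0.5).nonzero()[0])  # bits the search has converged on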

    Deep Learning-Based Intrusion Detection Methods in Cyber-Physical Systems: Challenges and Future Trends

    A cyber-physical system (CPS) integrates various interconnected physical processes, computing resources, and networking units, and monitors the processes and applications of the computing systems. The interconnection of the physical and cyber worlds introduces serious security challenges, especially with the increasing complexity of communication networks. Despite efforts to combat these challenges, it is difficult to detect and analyze cyber-physical attacks in a complex CPS. Machine learning-based models have been adopted by researchers to analyze cyber-physical security systems. This paper discusses the security threats, vulnerabilities, challenges, and attacks of CPS. Initially, the CPS architecture is presented as a layered approach comprising the physical layer, network layer, and application layer in terms of functionality. Then, the cyber-physical attacks affecting each layer are elaborated, along with the challenges and key issues associated with each layer. Afterward, deep learning models are analyzed for malicious URL and intrusion detection in cyber-physical systems. A multilayer perceptron architecture is used for experiments on the malicious URL detection dataset and the KDD Cup99 dataset, and its performance is compared with existing works. Lastly, we provide a roadmap of future research directions for cyber-physical security to investigate attacks with respect to their source, complexity, and impact.
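    A hedged sketch of the multilayer perceptron experiment is shown below; the layer sizes are assumptions, and the feature width of 122 assumes the usual one-hot encoding of KDD Cup99's categorical fields, which the paper may have preprocessed differently:

        import numpy as np
        from tensorflow import keras
        from tensorflow.keras import layers

        n_features = 122  # assumed width of KDD Cup99 after one-hot encoding and scaling

        model = keras.Sequential([
            keras.Input(shape=(n_features,)),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.3),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),  # attack vs. normal
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

        # X, y would come from the encoded dataset; random data keeps the sketch runnable.
        X, y = np.random.rand(512, n_features), np.random.randint(0, 2, 512)
        model.fit(X, y, epochs=3, batch_size=64, verbose=0)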

    Ensemble Learning Based on Hybrid Deep Learning Model for Heart Disease Early Prediction

    Many epidemics have afflicted humanity throughout history, claiming many lives. In our time, heart disease has become one of the deadliest diseases humanity confronts. The proliferation of poor habits such as smoking, overeating, and lack of physical activity has contributed to the rise in heart disease. The deadly feature of heart disease, which has earned it the moniker "the silent killer," is that it frequently presents no apparent warning signs. As a result, research is required to develop a promising model for the early identification of heart disease using simple data and symptoms. This paper proposes a deep stacking ensemble model to enhance heart disease prediction. The proposed ensemble integrates two optimized, pre-trained hybrid deep learning models with a Support Vector Machine (SVM) as the meta-learner. The first hybrid model, CNN-LSTM, integrates a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM); the second, CNN-GRU, integrates a CNN with a Gated Recurrent Unit (GRU). Recursive Feature Elimination (RFE) is also used for feature selection. The proposed model has been optimized and tested on two different heart disease datasets and is compared with five machine learning models, Logistic Regression (LR), Random Forest (RF), K-Nearest Neighbors (K-NN), Decision Tree (DT), and Naïve Bayes (NB), as well as with hybrid models. In addition, optimization techniques are used to tune the ML, DL, and proposed models. The proposed model achieved the highest performance using the full feature set.
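    The stacking step alone can be sketched as follows; the base networks are assumed to be trained elsewhere, and the random probability vectors below merely stand in for their out-of-fold predictions:

        import numpy as np
        from sklearn.svm import SVC

        # Stand-ins for the two hybrid base models' predicted probabilities on the
        # training set (CNN-LSTM and CNN-GRU), plus the heart disease labels.
        p_cnn_lstm = np.random.rand(500)
        p_cnn_gru = np.random.rand(500)
        y = np.random.randint(0, 2, 500)

        # Meta-features: one column per base model; the SVM is the meta-learner.
        meta_X = np.column_stack([p_cnn_lstm, p_cnn_gru])
        meta_learner = SVC(kernel="rbf", probability=True)
        meta_learner.fit(meta_X, y)

        # At inference time, feed the two base-model probabilities for a new patient.
        print(meta_learner.predict([[0.8, 0.7]]))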