
    Genetic and epigenetic determinants of diffuse large B-cell lymphoma

    Diffuse large B-cell lymphoma (DLBCL) is the most common type of lymphoma and is notorious for its heterogeneity, aggressive nature, and the frequent development of resistance and/or relapse after treatment with standard chemotherapy. To address these problems, a strong emphasis has been placed on researching the molecular origins and mechanisms of DLBCL to develop effective treatments. One of the major insights produced by such research is that DLBCL almost always stems from genetic damage that occurs during the germinal center (GC) reaction, which is required for the production of high-affinity antibodies. Indeed, there is significant overlap between the mechanisms that govern the GC reaction and those that drive the progression of DLBCL. A second important insight is that some of the most frequent genetic mutations in DLBCL are those related to chromatin and epigenetics, especially those affecting proteins that “write” histone post-translational modifications (PTMs). Mutation or deletion of these epigenetic writers often renders cells unable to epigenetically “switch on” critical gene sets required to exit the GC reaction, differentiate, repair DNA, and carry out other essential cellular functions. Failure to activate these genes locks cells into a genotoxic state that is conducive to oncogenesis and/or relapse.

    A Novel Feature Set for Application Identification

    Classifying Internet traffic into applications is vital to many areas, from quality of service (QoS) provisioning to network management and security. The task is challenging because network applications are dynamic in nature, tend to use a web front-end, and are typically encrypted, rendering traditional port-based and deep packet inspection (DPI) methods unusable. Recent classification studies proposed two alternatives: using the statistical properties of traffic or inferring the behavioural patterns of network applications, both aiming to describe the activity within and among network flows in order to understand application usage and behaviour. This paper proposes and investigates a novel feature set that defines application behaviour as seen through the generated network traffic by considering the timing and pattern of user events during application sessions, leading to an extended traffic feature set based on burstiness. The selected features were used to train and test a supervised C5.0 machine learning classifier, leading to a better characterization of network applications, with a traffic classification accuracy ranging between 90% and 98%.
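
    As a hedged illustration of the idea, the sketch below derives a few burstiness-style timing features from per-flow packet timestamps and feeds them to a decision tree. The burst-gap threshold, the specific features, and the use of scikit-learn's CART-style DecisionTreeClassifier in place of C5.0 are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch only: burstiness-style timing features per flow,
# classified with a CART tree standing in for C5.0 (assumption).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

BURST_GAP = 0.5  # seconds; hypothetical threshold separating bursts

def burstiness_features(timestamps):
    """Compute simple timing features from one flow's packet timestamps."""
    gaps = np.diff(np.sort(timestamps))
    bursts = np.split(gaps, np.where(gaps > BURST_GAP)[0])  # gap-delimited runs
    burst_sizes = [len(b) + 1 for b in bursts]
    return [
        gaps.mean() if gaps.size else 0.0,  # mean inter-packet gap
        gaps.std() if gaps.size else 0.0,   # gap variability
        len(bursts),                        # number of bursts
        float(np.mean(burst_sizes)),        # mean packets per burst
    ]

# Hypothetical training data: one timestamp array and label per flow.
flows = [np.cumsum(np.random.exponential(0.1, 50)) for _ in range(100)]
labels = np.random.choice(["web", "voip"], size=100)

X = np.array([burstiness_features(f) for f in flows])
clf = DecisionTreeClassifier().fit(X, labels)
print(clf.predict(X[:5]))
```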

    On Internet Traffic Classification: A Two-Phased Machine Learning Approach

    Traffic classification utilizing flow measurement enables operators to perform essential network management. Flow accounting methods such as NetFlow are, however, considered inadequate for classification, requiring additional packet-level information, host behaviour analysis, and specialized hardware, which limits their practical adoption. This paper aims to overcome these challenges by proposing a two-phased machine learning classification mechanism with NetFlow as input. The individual flow classes are derived per application through k-means and are further used to train a C5.0 decision tree classifier. As part of validation, the initial unsupervised phase used flow records of fifteen popular Internet applications that were collected and independently subjected to k-means clustering to determine the unique flow classes generated per application. The derived flow classes were afterwards used to train and test a supervised C5.0-based decision tree. The resulting classifier reported an average accuracy of 92.37% on approximately 3.4 million test cases, increasing to 96.67% with adaptive boosting. The classifier specificity factor, which accounted for differentiating content-specific from supplementary flows, ranged between 98.37% and 99.57%. Furthermore, the computational performance and accuracy of the proposed methodology in comparison with similar machine learning techniques leads us to recommend its extension to other applications in achieving highly granular real-time traffic classification.
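
    A minimal sketch of the two-phased idea under stated assumptions: k-means derives flow classes within each application, and those derived labels then train a supervised tree (scikit-learn's DecisionTreeClassifier standing in for C5.0). The NetFlow-style fields, cluster count, and synthetic data are hypothetical.

```python
# Sketch of the two-phased scheme: unsupervised flow classes per application,
# then a supervised tree. Field choice, k, and data are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier  # stand-in for C5.0

rng = np.random.default_rng(0)

# Hypothetical NetFlow-style records: [duration, packets, bytes, mean_pkt_size]
apps = {
    "http": rng.normal([2, 20, 15000, 750], [1, 5, 3000, 50], (300, 4)),
    "dns":  rng.normal([0.1, 2, 200, 100],  [0.05, 1, 50, 10], (300, 4)),
}

X_all, y_all = [], []
for app, flows in apps.items():
    # Phase 1: derive flow classes within each application via k-means.
    classes = KMeans(n_clusters=3, n_init=10).fit_predict(flows)
    X_all.append(flows)
    # Label each flow "<app>:<class>" so the tree learns both levels.
    y_all.extend(f"{app}:{c}" for c in classes)

# Phase 2: train the supervised classifier on the derived classes.
X_all = np.vstack(X_all)
tree = DecisionTreeClassifier().fit(X_all, y_all)
print(tree.score(X_all, y_all))
```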

    Perspectives on Auditing and Regulatory Compliance in Blockchain Transactions

    The recent advent of blockchain technology is anticipated to revolutionize the operational processes of several industries, including banking, finance, real estate, and retail, and to benefit governmental as well as corporate information management structures. The underlying principles of information immutability, traceability, and verifiability built into blockchain transactions may lead to greater adoption of distributed crypto-ledger applications in auditing automation, compliance monitoring, and guaranteeing high assurance. This chapter discusses the contemporary applications of blockchain technology in information auditing, exploring aspects such as data recording, accuracy, verification, transparency, and the overall value of a decentralized blockchain crypto-ledger for auditors. Opportunities for timeliness, completeness, and reconciliation in appraising the regulatory compliance of organizations employing blockchain-based contractual frameworks are also investigated. The chapter reviews the existing and anticipated challenges blockchain applications pose to traditional regulatory compliance models and the inherent risks for businesses and stakeholders. We highlight the impact of operational concerns such as decentralized transactions, network complexity, transaction reversals, credential management, software quality, and human resources. Finally, the chapter provides perspective on the assurance complexities involved in transforming from a proprietary to a blockchain-based framework while adhering to the IT control obligations dictated by three major auditing standards: the Sarbanes-Oxley Act (SOX), Control Objectives for Information Technologies (COBIT), and International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) 27001.

    Anomaly Detection in Encrypted Internet Traffic Using Hybrid Deep Learning

    An increasing number of Internet application services rely on encrypted traffic to offer adequate consumer privacy. Anomaly detection in encrypted traffic to circumvent and mitigate cyber security threats is, however, an open and ongoing research challenge due to the limitations of existing traffic classification techniques. Deep learning is emerging as a promising paradigm, reducing the manual feature-set determination needed to achieve high classification accuracy. The present work develops a deep learning-based model for the detection of anomalies in encrypted network traffic. Three publicly available datasets, NSL-KDD, UNSW-NB15, and CIC-IDS-2017, are used to comprehensively analyze encrypted attacks targeting popular protocols. Instead of relying on a single deep learning model, multiple schemes using convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and recurrent neural networks (RNNs) are investigated. Our results report a hybrid combination of convolutional (CNN) and gated recurrent unit (GRU) models as outperforming the others. The hybrid approach benefits from the low-latency feature derivation of the CNN and an overall improved fit to the training dataset. Additionally, the highly effective generalization offered by the GRU yields optimal extraction of time-domain-related features, making the CNN and GRU hybrid scheme the best-performing model.
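
    The sketch below shows what such a CNN + GRU hybrid might look like in Keras: a Conv1D front-end for local feature derivation feeding a GRU for time-domain features, with a binary normal/anomalous head. Layer sizes, the one-feature-per-step input shape, and hyperparameters are assumptions; the paper's exact architecture is not reproduced.

```python
# Minimal CNN + GRU hybrid sketch (assumed layer sizes and input shape).
from tensorflow.keras import layers, models

N_FEATURES = 41  # e.g. NSL-KDD records carry 41 features; treated here as a
                 # length-41 sequence with one channel (modeling assumption)

model = models.Sequential([
    layers.Input(shape=(N_FEATURES, 1)),
    # CNN front-end: low-latency local feature derivation.
    layers.Conv1D(64, kernel_size=3, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    # GRU back-end: time-domain-related feature extraction.
    layers.GRU(64),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # normal vs. anomalous
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```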

    Hybrid Deep Learning Techniques for Securing Bioluminescent Interfaces in Internet of Bio Nano Things

    The Internet of bio-nano things (IoBNT) is an emerging paradigm employing nanoscale (~1–100 nm) biological transceivers to collect in vivo signaling information from the human body and communicate it to healthcare providers over the Internet. Bio-nano-things (BNT) offer external actuation of in-body molecular communication (MC) for targeted drug delivery to otherwise inaccessible parts of human tissue. BNTs are inter-connected using chemical diffusion channels, forming an in vivo bio-nano network that is connected to an external ex vivo environment such as the Internet using bio-cyber interfaces. Bio-luminescent bio-cyber interfacing (BBI) has proven promising in realizing IoBNT systems due to its non-obtrusive and low-cost implementation. BBI security, however, is a key concern during practical implementation, since Internet connectivity exposes the interfaces to external threat vectors, and accurate classification of anomalous BBI traffic patterns is required to offer mitigation. However, parameter complexity and the intricate underlying correlations among BBI traffic characteristics limit the use of existing machine-learning (ML) based anomaly detection methods, which typically require hand-crafted feature design. To this end, the present work investigates the employment of deep learning (DL) algorithms allowing dynamic and scalable feature engineering to discriminate between normal and anomalous BBI traffic. During extensive validation using singular and multi-dimensional models on the generated dataset, our hybrid convolutional and recurrent ensemble (CNN + LSTM) reported an accuracy of approximately 93.51%, outperforming other deep and shallow structures. Furthermore, employing a hybrid DL network allowed automated extraction of normal as well as temporal features in BBI data, eliminating the manual selection and crafting of input features for accurate prediction. Finally, we recommend deployment primitives of the extracted optimal classifier in conventional intrusion detection systems as well as evolving non-von Neumann architectures for real-time anomaly detection.
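
    A hedged sketch of such a CNN + LSTM hybrid applied to windows of BBI traffic, including a training call on synthetic stand-in data. The window length, feature count, layer sizes, and the data itself are illustrative assumptions rather than the paper's configuration.

```python
# Hedged Conv1D + LSTM sketch for windowed BBI traffic classification.
import numpy as np
from tensorflow.keras import layers, models

WINDOW, FEATURES = 32, 8  # hypothetical: 32 time steps of 8 BBI traffic stats

model = models.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local patterns
    layers.LSTM(32),                                      # temporal features
    layers.Dense(1, activation="sigmoid"),                # anomalous or not
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data; real BBI traffic traces would replace this.
X = np.random.rand(256, WINDOW, FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(256, 1))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```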

    Information Security Risk Assessment

    Information security risk assessment is an important part of enterprises’ management practices that helps to identify, quantify, and prioritize risks against criteria for risk acceptance and objectives relevant to the organization. Risk management refers to the process of identifying events that can negatively affect the resources of the information system and managing, eliminating, or reducing their likelihood, so that security risks are kept within an acceptable cost of protection. It comprises risk analysis, analysis of the “cost-effectiveness” parameter, and the selection, construction, and testing of the security subsystem, as well as the study of all aspects of security.
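
    As a toy illustration of the quantify-and-prioritize step, the sketch below scores each risk as likelihood × impact and compares it with an acceptance criterion. The 1–5 scales, the threshold, and the example risks are assumptions, not a prescribed methodology.

```python
# Toy risk-prioritization sketch: score = likelihood x impact, compared
# against an acceptance threshold. Scales and threshold are assumptions.
ACCEPTANCE_THRESHOLD = 6  # risks scoring above this need treatment

risks = [
    # (name, likelihood 1-5, impact 1-5)
    ("unpatched server",  4, 4),
    ("phishing exposure", 3, 3),
    ("lost backup tape",  1, 5),
]

scored = sorted(
    ((name, likelihood * impact) for name, likelihood, impact in risks),
    key=lambda r: r[1], reverse=True,
)
for name, score in scored:
    action = "treat" if score > ACCEPTANCE_THRESHOLD else "accept"
    print(f"{name}: score={score} -> {action}")
```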

    Using Burstiness for Network Applications Classification.

    Network traffic classification is a vital task for service operators, network engineers, and security specialists to manage network traffic, design networks, and detect threats. Identifying the type/name of the applications that generate traffic is a challenging task as encrypted traffic becomes the norm for Internet communication. Relying on conventional techniques such as deep packet inspection (DPI) or port numbers is therefore no longer effective. This paper proposes a novel set of flow-statistical features that may be used to classify applications by leveraging machine learning algorithms, yielding high accuracy in identifying the type of applications that generate the traffic. The proposed features compute different timings between packets and flows. This work utilises tcptrace to extract features based on traffic burstiness and periods of inactivity (idle time) for the analysed traffic, followed by the C5.0 algorithm for determining the applications that generated it. Evaluation tests performed on a set of real, uncontrolled traffic indicated that the method has an accuracy of 79% in identifying the correct network application.
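
    A hedged sketch of the idle-time side of this feature set: inter-arrival gaps are split into active and idle periods around a threshold and summarized. The threshold and the summary statistics are assumptions, and the paper derives its features from tcptrace output rather than from raw timestamps as done here.

```python
# Sketch of idle-time feature extraction from a flow's packet timestamps.
# Threshold and statistics are assumptions approximating the tcptrace-based
# features described in the abstract.
import numpy as np

IDLE_THRESHOLD = 1.0  # seconds; hypothetical cut-off for an idle period

def idle_time_features(timestamps):
    gaps = np.diff(np.sort(timestamps))
    idle = gaps[gaps > IDLE_THRESHOLD]      # gaps counted as inactivity
    active = gaps[gaps <= IDLE_THRESHOLD]   # gaps within bursts
    return {
        "idle_count": int(idle.size),
        "idle_total": float(idle.sum()),
        "idle_max": float(idle.max()) if idle.size else 0.0,
        "active_mean_gap": float(active.mean()) if active.size else 0.0,
    }

# Example flow: a burst, a 3 s idle gap, then another burst.
ts = np.cumsum([0.02] * 30 + [3.0] + [0.05] * 20)
print(idle_time_features(ts))
```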