    AppCon: Mitigating evasion attacks to ML cyber detectors

    Adversarial attacks represent a critical issue that prevents the reliable integration of machine learning methods into cyber defense systems. Past work has shown that even proficient detectors are highly affected by just small perturbations to malicious samples, and that existing countermeasures are immature. We address this problem by presenting AppCon, an original approach to harden intrusion detectors against adversarial evasion attacks. Our proposal leverages ensemble learning in realistic network environments by combining layers of detectors, each devoted to monitoring the behavior of the applications employed by the organization. Our proposal is validated through extensive experiments performed in heterogeneous network settings simulating botnet detection scenarios, considering detectors based on distinct machine- and deep-learning algorithms. The results demonstrate the effectiveness of AppCon in mitigating the dangerous threat of adversarial attacks in over 75% of the considered evasion attempts, while not suffering from the limitations of existing countermeasures, such as performance degradation in non-adversarial settings. For these reasons, our proposal represents a valuable contribution to the development of more secure cyber defense platforms.
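    The layered, per-application ensemble idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy detector rules, the flow fields, and the majority-vote combination are all assumptions made for the example.

```python
# Sketch: an application-aware ensemble of detectors.
# Each flow is scored only by the detectors registered for its
# application; the final verdict is a majority vote (assumed rule).

def http_detector(flow):
    # Toy rule: flag unusually small HTTP flows (placeholder logic).
    return flow["bytes"] < 100

def dns_detector(flow):
    # Toy rule: flag flows issuing very many DNS queries.
    return flow["queries"] > 50

DETECTORS = {"http": [http_detector], "dns": [dns_detector]}

def appcon_verdict(flow):
    votes = [detect(flow) for detect in DETECTORS.get(flow["app"], [])]
    if not votes:
        return False  # no detector layer covers this application
    return sum(votes) > len(votes) / 2  # majority vote

flows = [
    {"app": "http", "bytes": 40, "queries": 0},
    {"app": "dns", "bytes": 300, "queries": 80},
    {"app": "http", "bytes": 5000, "queries": 0},
]
print([appcon_verdict(f) for f in flows])  # [True, True, False]
```

    Restricting each detector to one application narrows the input space it must model, which is the intuition behind hardening against perturbed samples.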

    Convex and non-convex optimization using centroid-encoding for visualization, classification, and feature selection

    Classification, visualization, and feature selection are three essential tasks of machine learning. This Ph.D. dissertation presents convex and non-convex models suitable for these three tasks. We propose Centroid-Encoder (CE), an autoencoder-based supervised tool for visualizing complex, potentially large (e.g., SUSY, with 5 million samples) and high-dimensional (e.g., the GSE73072 clinical challenge data) datasets. Unlike an autoencoder, which maps a point to itself, a centroid-encoder has a modified target, i.e., the class centroid in the ambient space. We present a detailed comparative analysis of the method using various data sets and state-of-the-art techniques. We have proposed a variation of the centroid-encoder, Bottleneck Centroid-Encoder (BCE), in which additional constraints are imposed at the bottleneck layer to improve generalization performance in the reduced space. We further developed a sparse optimization problem for the non-linear mapping of the centroid-encoder, called Sparse Centroid-Encoder (SCE), to determine the set of discriminative features between two or more classes. The sparse model selects variables using the ℓ1-norm applied to the input feature space. SCE extracts discriminative features from multi-modal data sets, i.e., data whose classes appear to have multiple clusters, by using several centers per class. This approach seems to have advantages over models which use a one-hot-encoded vector. We also provide a feature selection framework that first ranks each feature by its occurrence; the optimal number of features is then chosen using a validation set. CE and SCE are models based on neural network architectures and require the solution of non-convex optimization problems. Motivated by the CE algorithm, we have developed a convex optimization problem for a supervised dimensionality reduction technique called Centroid Component Retrieval (CCR).
The CCR model optimizes a multi-objective cost by balancing two complementary terms. The first term pulls the samples of a class towards its centroid by minimizing each sample's distance from its class centroid in the low-dimensional space. The second term pushes the classes apart by maximizing the scattering volume of the ellipsoid formed by the class centroids in the embedded space. Although the design principle of CCR is similar to that of LDA, our experimental results show that CCR exhibits performance advantages over LDA, especially on high-dimensional data sets, e.g., Yale Faces, ORL, and COIL20. Finally, we present a linear formulation of Centroid-Encoder with orthogonality constraints, called Principal Centroid Component Analysis (PCCA). This formulation is similar to PCA, except that the class labels are used to formulate the objective, resulting in a form of supervised PCA. We show classification and visualization experiment results with this new linear tool.
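    The core centroid-encoder idea, mapping each sample to its class centroid rather than to itself, can be sketched as follows. This toy example (pure Python, no network training) only illustrates the target construction and the resulting loss; the data and the identity "encoder" are assumptions for illustration.

```python
# Sketch of the centroid-encoder target: instead of reconstructing
# the input x (as an autoencoder does), the network f is trained to
# output the centroid of x's class in the ambient space.

def class_centroids(X, y):
    """Per-class mean of the samples (the training targets)."""
    centroids = {}
    for label in set(y):
        pts = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return centroids

def centroid_encoder_loss(X, y, f):
    """Mean squared distance between f(x) and x's class centroid."""
    cents = class_centroids(X, y)
    total = 0.0
    for x, label in zip(X, y):
        out, target = f(x), cents[label]
        total += sum((o - t) ** 2 for o, t in zip(out, target))
    return total / len(X)

X = [[0.0, 0.0], [2.0, 0.0], [10.0, 10.0], [12.0, 10.0]]
y = [0, 0, 1, 1]
identity = lambda x: x  # stand-in for an untrained network
print(centroid_encoder_loss(X, y, identity))  # 1.0
```

    Driving this loss to zero collapses each class onto a single point, which is what makes the low-dimensional bottleneck useful for visualization.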

    Anomaly Detection in IoT: Methods, Techniques and Tools

    Nowadays, the Internet of Things (IoT), a network of interrelated computing devices with the ability to transfer data over a network, is present in many scenarios of everyday life. Understanding how traffic behaves is easier if the real environment is replicated in a virtualized environment. In this paper, we propose a methodology for a systematic approach to dataset analysis for detecting traffic anomalies in an IoT network. The reader will become familiar with the specific techniques and tools that are used. The methodology has five stages: definition of the scenario, injection of anomalous packets, dataset analysis, implementation of classification algorithms for anomaly detection, and conclusions.
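    The anomaly-detection stage can be illustrated with a deliberately simple statistical detector. This is a generic sketch, not the paper's method: the packet-size feature, the benign training capture, and the 3-sigma threshold are all illustrative assumptions.

```python
# Sketch: flag packets whose size deviates strongly from the mean
# observed in an (assumed benign) training capture.
import statistics

def fit(sizes):
    """Learn mean and standard deviation from benign traffic."""
    return statistics.mean(sizes), statistics.stdev(sizes)

def is_anomalous(size, mean, std, k=3.0):
    """k-sigma rule: anomalous if far from the benign mean."""
    return abs(size - mean) > k * std

train = [60, 62, 64, 60, 61, 63, 62, 60]  # benign packet sizes (bytes)
mean, std = fit(train)
print(is_anomalous(61, mean, std))    # False: typical packet size
print(is_anomalous(1500, mean, std))  # True: injected anomalous packet
```

    Real IoT datasets would use many features per flow and a trained classifier rather than a single threshold, but the fit-then-score structure is the same.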

    Power, Policy, and Digital Switchover: An Analysis of Communication Policy Making and its Challenges for Regulating Ghana’s Digital Television Sector

    This thesis examines communication policy making in Ghana during the country’s digital switchover process launched in 2010. The thesis argues that Ghana’s digital switchover policy making process was an opportunity to refashion policy and regulatory structures towards the public interest that went beyond the modernisation of broadcasting transmission infrastructure and the innovations digital switchover brought. The thesis investigates whether, and the extent to which, structural and institutional characteristics in the communication policy arena facilitated or hindered broadcasting policy making, and explains the persistence of the analogue-era broadcasting regulatory regime in the digital multichannel television market. Ghana’s return to Constitutional rule in 1992 led to the liberalisation of the broadcasting sector, permitting private ownership of broadcast media for the first time in the country’s history, as well as the reconfiguration of the communication policy making arena (and the wider policy environment), with more actors engaged in policy making. Yet the manner in which this was achieved sustained the capability of state policy actors in the communication sector to influence the shape, pace and direction of policy, owing to the concentration of power within the Executive that granted the government excessive power. The thesis draws on political science and sociological concepts and approaches to analyse original qualitative data, based on extensive documentary analysis and elite interviews with policy actors, covering Ghana’s digital switchover policy making process from 2010. The study finds that political events during Ghana’s transition to Constitutional rule in the early 1990s, after ten years of military autocratic rule, were the critical juncture that laid the foundation for a path-dependent communication policy making trajectory.
Over time, this has produced a fractured and uncoordinated broadcasting policy making context in which policy makers act without much consideration for the wider interest of the sector, whilst non-state policy actors remain unable to sustain advocacy that would serve the public interest. This played out during Ghana’s digital switchover process, as the dominance of state-controlled policy actors ensured the framing of domestic digital switchover policy objectives along narrow, externally set priorities at the expense of longstanding and pertinent broadcasting policy and regulatory concerns that could have been part of the country’s digital switchover policy making agenda. The study maintains that, as the full implications of the digital switchover process for Ghana’s broadcasting sector become apparent, the continued lack of an adequate policy and regulatory framework for the new digital television broadcasting market, and indeed the larger broadcasting sector, does not serve the public interest and, as such, impoverishes the broadcasting service available to citizens.

    PrismatoidPatNet54: An Accurate ECG Signal Classification Model Using Prismatoid Pattern-Based Learning Architecture

    Background and objective: Arrhythmia is a widely seen cardiologic ailment worldwide, and is diagnosed using electrocardiogram (ECG) signals. ECG signals can be interpreted manually by human experts, but the process can also be automated. To ease the diagnosis of arrhythmia, an intelligent assistant can be used, and machine learning-based automatic arrhythmia detection models have been proposed to create such an assistant. Materials and methods: In this work, we have used an ECG dataset containing 1000 ECG signals from 17 categories. A new hand-modeled learning network is developed on this dataset; the model uses a 3D shape (a prismatoid) to create textural features. Moreover, a tunable Q wavelet transform with low oscillatory parameters and a statistical feature extractor have been applied to extract features at both low and high levels. The suggested prismatoid pattern and statistical feature extractor create features from 53 sub-bands. Neighborhood component analysis has been used to choose the most discriminative features. Two classifiers, k-nearest neighbor (kNN) and support vector machine (SVM), were used to classify the selected top features with 10-fold cross-validation. Results: The best accuracy of the proposed model is 97.30%, obtained with the SVM classifier. Conclusion: The computed results clearly indicate the success of the proposed prismatoid pattern-based model.
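    The final classification stage, a kNN vote over selected feature vectors, can be sketched as follows. The 2-D toy features and class labels are invented for illustration; the real model operates on prismatoid/wavelet features with 10-fold cross-validation.

```python
# Sketch: k-nearest-neighbor classification of feature vectors.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Majority label among the k closest training vectors."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

train = [([0.10, 0.20], "normal"), ([0.20, 0.10], "normal"),
         ([0.15, 0.15], "normal"), ([0.90, 0.80], "arrhythmia"),
         ([0.80, 0.90], "arrhythmia"), ([0.85, 0.85], "arrhythmia")]

print(knn_predict(train, [0.12, 0.18]))  # normal
print(knn_predict(train, [0.88, 0.84]))  # arrhythmia
```

    In the paper's pipeline the same role is played by kNN or SVM over the top features retained by neighborhood component analysis.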

    The ways of an empire: Continuity and change of route landscapes across the Taurus during the Hittite Period (ca. 1650–1200 BCE)

    Routes are part of broader ‘landscapes of movement’, having an impact on and being impacted by other socio-cultural processes. Most recent studies on connectivity networks remain highly topographic in scope, incidentally resulting in the restitution of a long-term fixity. The anachronistic transposition of the best-known route networks across various ages, irrespective of context-specific circumstances, further enhances this static approach. On the other hand, when changes in connectivity are considered, trends are generally analysed over ‘big jumps’, often spanning several centuries. This article aims to contextualise dynamics of change in route trajectories within shorter and well-defined chronological boundaries, with a case study on the evolution of route landscapes across the Taurus mountains under the Hittite kingdom and empire (ca. 1650–1200 BCE). I will adopt an integrated approach to multiple datasets, aiming to investigate variables operating at different time depths. In the conclusions, I will argue that, while the Hittite route system in the target area was in part rooted in previous patterns of connectivity, some eventful shifts can also be individuated and historically explained. This enables, in turn, an enhanced perspective on the formation and transformation of Hittite socio-cultural landscapes.

    Internet traffic prediction using recurrent neural networks

    Network traffic prediction (NTP) is an essential component in planning large-scale networks, which are in general unpredictable and must adapt to unforeseen circumstances. In small to medium-size networks, the administrator can anticipate fluctuations in traffic without forecasting tools, but in large-scale networks, where hundreds of new users can be added in a matter of weeks, more efficient forecasting tools are required to avoid congestion and over-provisioning. Network and hardware resources are, however, limited; hence resource allocation is critical for NTP with scalable solutions. To this end, in this paper we propose an efficient NTP method that optimizes recurrent neural networks (RNNs) to analyse the traffic patterns that occur inside flow time series and to predict future samples based on the history of the traffic used for training. The traffic predicted with the proposed RNNs is compared with the real values stored in the database in terms of mean squared error, mean absolute error and categorical cross-entropy. Furthermore, predictions on the real traffic samples are compared with those from other techniques, such as the autoregressive integrated moving average (ARIMA) model and the AdaBoost regressor, to validate the effectiveness of the proposed method. It is shown that the proposed RNN achieves better performance than both ARIMA and the AdaBoost regressor when more samples are employed.
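    The evaluation step can be sketched with the two main metrics named above, mean squared error and mean absolute error, computed over predicted versus real traffic samples. Training an RNN is out of scope for a sketch, so a naive last-value forecaster stands in for the trained model; the toy time series is an assumption.

```python
# Sketch: score a traffic forecaster with MSE and MAE.

def mse(y_true, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

traffic = [100, 110, 120, 115, 130, 140]  # toy flow time series
# Naive forecaster: predict each sample as the previous real sample
# (a stand-in for the trained RNN's output).
preds = traffic[:-1]
real = traffic[1:]
print(mse(real, preds), mae(real, preds))  # 110.0 10.0
```

    An RNN (or ARIMA, or AdaBoost) would replace the naive forecaster; the comparison against the stored real values stays exactly the same.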