13 research outputs found

    Assessing the impact of intra-cloud live migration on anomaly detection

    Virtualized cloud environments have emerged as a necessity within modern unified ICT infrastructures and have established themselves as a reliable backbone for numerous always-on services. `Live' intra-cloud virtual-machine (VM) migration is a widely used technique for efficient resource management within modern cloud infrastructures. Despite the benefits of such functionality, several security issues have not yet been thoroughly assessed and quantified. We investigate the impact of live VM migration on state-of-the-art anomaly detection (AD) techniques (namely PCA and K-means) by evaluating live migration under various attack types and intensities. We find that, as shown by their receiver operating characteristic (ROC) curves, the performance of both detectors degrades when intra-cloud live migration is initiated while VMs are under a netscan (NS) or a denial-of-service (DoS) attack.
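    As a rough illustration of the evaluation style described above, the sketch below trains a PCA detector on benign samples, scores points by reconstruction error against the normal subspace, and summarises detection performance with ROC AUC. The data, dimensions and parameters are synthetic stand-ins, not the paper's testbed measurements; a K-means detector could be scored analogously by distance to the nearest centroid.

```python
# Minimal sketch: PCA anomaly detector evaluated via ROC AUC.
# All data below are synthetic stand-ins for cloud traffic features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (500, 10))   # stand-in for benign traffic samples
attack = rng.normal(3, 1, (50, 10))    # stand-in for DoS/netscan samples

pca = PCA(n_components=3).fit(normal)  # model the normal subspace

def score(x):
    # Anomaly score: reconstruction error w.r.t. the normal subspace.
    recon = pca.inverse_transform(pca.transform(x))
    return np.linalg.norm(x - recon, axis=1)

X = np.vstack([normal, attack])
y = np.r_[np.zeros(len(normal)), np.ones(len(attack))]
print("ROC AUC:", roc_auc_score(y, score(X)))
```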

    Hybrid self-organizing feature map (SOM) for anomaly detection in cloud infrastructures using granular clustering based upon value-difference metrics

    We have witnessed an increase in the availability of data from diverse sources over the past few years. Cloud computing, big data and the Internet of Things (IoT) are distinctive cases of such an increase, which demand novel approaches to data analytics in order to process and analyze huge volumes of data for security and business use. Cloud computing has become popular for critical-infrastructure IT, mainly due to cost savings and dynamic scalability. Current offerings, however, are not mature enough with respect to stringent security and resilience requirements. Mechanisms such as hybrid anomaly detection systems are required in order to protect against various challenges, including network-based attacks, performance issues and operational anomalies. Such hybrid AI systems include neural networks, blackboard systems, belief (Bayesian) networks, case-based reasoning and rule-based systems, and can be implemented in a variety of ways. Traffic in the cloud comes from multiple heterogeneous domains and changes rapidly due to the variety of operational characteristics of the tenants using the cloud and the elasticity of the provided services. The underlying detection mechanisms rely upon measurements drawn from multiple sources, yet the characteristics of the distribution of measurements within specific subspaces might be unknown. We argue in this paper that there is a need to cluster the data observed during normal network operation into multiple subspaces, each one featuring specific local attributes, i.e. granules of information. Clustering is implemented by the inference engine of a model hybrid NN system. Several variations of the so-called value-difference metric (VDM) are investigated, such as local histograms and the Canberra distance for scalar attributes, the Jaccard distance for binary word attributes, rough sets, as well as local histograms over an aggregate ordering distance and the Canberra measure for vectorial attributes. Low-dimensional subspace representations of each group of points (measurements), in the context of anomaly detection in critical cloud implementations, are based upon VD metrics and can be either parametric or non-parametric. A novel application of a self-organizing feature map (SOFM) of reduced/aggregate ordered sets of objects featuring VD metrics (as obtained from distributed network measurements) is proposed. Each node of the SOFM stands for a structured local distribution of such objects within the input space. The so-called Neighborhood-based Outlier Factor (NOOF) is defined for such reduced/aggregate ordered sets of objects as a value-difference metric of histograms. Measurements that do not belong to local distributions are detected as anomalies, i.e. outliers of the trained SOFM. Several methods of subspace clustering using expectation-maximization Gaussian mixture models (a parametric approach) as well as local data densities (a non-parametric approach) are outlined and compared against the proposed method using data obtained from our cloud testbed under emulated anomalous traffic conditions. The results, which are obtained from a model NN system, indicate that the proposed method performs well in comparison with conventional techniques.
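    The sketch below conveys the flavour of the map-based detection idea: a small grid of prototypes is trained on normal measurements with a winner-take-all update (a simplification of a full SOFM, which would also update a neighbourhood of the winner), using the Canberra distance mentioned above; points far from every prototype are flagged as outliers. The scoring rule is an illustrative stand-in for the paper's NOOF, and all data are synthetic.

```python
# Minimal sketch: prototype-map anomaly detection with the Canberra distance.
# Winner-take-all update is a simplification of a full SOFM.
import numpy as np
from scipy.spatial.distance import canberra

rng = np.random.default_rng(1)
data = rng.normal(0, 1, (1000, 8))     # stand-in for normal measurements

# Initialise 16 prototype vectors (a 4x4 map flattened).
grid = rng.normal(0, 1, (16, 8))

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))                    # decaying learning rate
    bmu = np.argmin([canberra(x, w) for w in grid])   # best-matching unit
    grid[bmu] += lr * (x - grid[bmu])                 # pull the winner toward x

def anomaly_score(x):
    # Distance to the nearest prototype; large values suggest outliers.
    return min(canberra(x, w) for w in grid)

print(anomaly_score(rng.normal(0, 1, 8)))   # typically small (inlier)
print(anomaly_score(rng.normal(5, 1, 8)))   # typically large (outlier)
```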

    Neural projection techniques for the visual inspection of network traffic

    A crucial aspect of network monitoring for security purposes is the visual inspection of the traffic pattern, mainly aimed at providing the network manager with a synthetic and intuitive representation of the current situation. Towards that end, neural projection techniques can adaptively map high-dimensional data into a low-dimensional space for the user-friendly visualization of monitored network traffic. This work proposes two projection methods, namely cooperative maximum likelihood Hebbian learning and auto-associative back-propagation networks, for the visual inspection of network traffic. This set of methods may be seen as a complementary tool in network security, as it allows the visual inspection and comprehension of the internal structure of the traffic data. The proposed methods have been evaluated in two complementary and practical network-security scenarios: the on-line processing of network traffic at the packet level, and the off-line processing of connection records, e.g. for post-mortem analysis or batch investigation. The empirical verification of the projection methods involved two experimental domains derived from the standard corpora for the evaluation of computer network intrusion detection: the MIT Lincoln Laboratory DARPA dataset.
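    A minimal sketch of the auto-associative idea: a network is trained to reproduce its own input, and the activations of a two-unit bottleneck layer supply the low-dimensional projection for visual inspection. The architecture and data below are illustrative assumptions, not the paper's networks or the DARPA traffic.

```python
# Minimal sketch: auto-associative (autoencoder) projection to 2-D.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(0, 1, (300, 10))        # stand-in for traffic feature vectors

# Auto-association: the network learns to map X back onto itself
# through a 2-unit bottleneck hidden layer.
ae = MLPRegressor(hidden_layer_sizes=(2,), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(X, X)

# Project by applying the first (encoder) layer manually.
Z = np.tanh(X @ ae.coefs_[0] + ae.intercepts_[0])   # shape (300, 2)
print(Z[:3])                                         # 2-D points to plot
```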

    Differentiated Intrusion Detection and SVDD-based Feature Selection for Anomaly Detection

    Most existing intrusion detection techniques treat all types of attacks equally, without any differentiation of the risk they pose to the information system. However, certain types of attacks are more harmful than others, and their detection is critical to the protection of the system. This study proposes a novel differentiated anomaly detection method that can more precisely detect intrusions of specific attack types. Although many efficient intrusion detection methods have been developed, fewer efforts have been made to extract effective features for host-based intrusion detection. In this study, we propose a new framework, based on new viewpoints about system activities, to extract host-based features, which can guide further exploration for new features. Moreover, although feature selection has been studied extensively for both classification and regression problems, few feature selection methods exist for anomaly detection. This study proposes new support vector data description (SVDD)-based feature selection methods, such as SVDD-R2 recursive feature elimination (RFE), SVDD-RFE and the SVDD-gradient method. Experiments with both simulated datasets and the Defense Advanced Research Projects Agency (DARPA) datasets show the promising performance of the proposed methods. These achievements could significantly contribute to the anomaly detection field. In addition, the proposed differentiated detection and SVDD-based feature selection methods could benefit other application areas beyond intrusion detection.
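    The sketch below illustrates the general shape of SVDD-style recursive feature elimination: a one-class SVM with a linear kernel (a common stand-in for SVDD) is refit while the feature with the smallest weight magnitude is pruned at each step. The elimination criterion and the data are illustrative assumptions, not the dissertation's exact SVDD-R2-RFE or SVDD-gradient formulations.

```python
# Hedged sketch: one-class-SVM-based recursive feature elimination.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
X = rng.normal(0, 1, (200, 6))
X[:, 0] += np.linspace(0, 3, 200)      # make feature 0 clearly informative

features = list(range(X.shape[1]))
while len(features) > 2:               # keep the top-2 features
    svdd = OneClassSVM(kernel="linear", nu=0.1).fit(X[:, features])
    weakest = np.argmin(np.abs(svdd.coef_[0]))   # smallest-weight feature
    features.pop(weakest)                        # eliminate and refit

print("selected features:", features)
```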

    Halal supply chain: mediating role of intention on manufacturer’s behaviour to utilize halal transportation services

    As one of the world's major Muslim countries, Malaysia is seeing a growth in demand for halal products. Muslim consumers consider eating halal food a religious obligation for all Muslims. The increased awareness of the importance of halal products creates a sizable market opportunity for producers of halal goods. In line with this, halal transportation services are a critical component of halal product manufacturing. Since halal is distinct and entails intricate regulations and procedures, it requires significant expenditure. As with any other supply chain, transportation costs are passed on to end-users, typically customers, increasing the final product's price. While studies on Muslim customers and their preferences for halal products are fairly widespread, studies on their preferences for halal transportation can still be considered novel. The purpose of this research is to apply a hybrid of the theory of reasoned action (TRA) and the theory of planned behaviour (TPB), popular theories owing to their relative simplicity and flexibility as well as their effectiveness in forecasting customers' intention and actual behaviour, to the use of halal transportation services. From the 1729 manufacturers initially listed, 130 manufacturers were randomly chosen from the food and beverage operators in the Klang Valley listed on the Jabatan Kemajuan Islam Malaysia (JAKIM) website. Three sets of questionnaires were distributed to each halal-certified manufacturer in this survey to maintain homogeneity among the halal food and beverage manufacturers, so the questionnaires were sent to a total of 390 respondents. Foreign multinationals, Malaysian multinationals, small and medium enterprises (SMEs), and larger enterprises were the four types of companies that participated in this study. This study was designed to provide a better understanding of Muslim customers' purchasing behaviour regarding halal transportation and other halal supply chain operations. Additionally, it may assist policymakers in forecasting consumer behaviour toward halal transportation and help businesses enhance their strategies through sharia compliance to better serve Muslim consumers.

    Conceptualizing the concept of disaster resilience: a hybrid approach in the context of earthquake hazard: case study of Tehran City, Iran

    From the natural-hazards perspective, disaster resilience is defined as the ability of a system or community to resist, mitigate, respond to, and recover from the effects of hazards in an efficient and timely manner. How urban communities recover after a disaster event is often conceptualized in terms of their level of disaster resilience. While numerous studies have been carried out on the importance of measuring disaster resilience, few of them suggest how, and by which mechanism, the concept can be quantified. Thus, the primary purpose of this thesis is to advance our understanding of the multifaceted nature of disaster resilience and to answer the general question of how the concept can be operationalized in the context of earthquake hazard. The starting point for conceptualizing disaster resilience is the development of measurement and benchmarking tools for a better understanding of the factors that contribute to resilience and of the effectiveness of interventions to sustain it. Since constructing composite indicators is the approach most often taken to this task in the literature, this research proposes a new hybrid approach for developing a sound set of composite indicators in the context of earthquake hazard. The methodology specifically scrutinizes the data-reduction, factor-retention and indicator-weighting steps, using a hybrid of factor analysis and the analytic network process (F'ANP). It replaces the hierarchical and deductive methods in the literature with an inductive factor-analysis method, and it applies an unequal weighting scheme, instead of equal weighting, in which the interdependencies and feedbacks among all indicators are considered. The 368 urban neighborhoods (within 22 urban regions and 116 sub-regions) of Tehran City were used as a case study and validation tool for developing the new set of composite indicators. The ability to measure disaster resilience, and the issue of resilience building, is important for a community such as Tehran, given that the urban areas within the city tend to be inherently vulnerable, partly because of the high population and building density and partly because of their exposure to earthquake hazard. Visualization of the results (using ArcGIS) provided a better understanding of resilience and its variation at the scale of urban regions, sub-regions and urban neighborhoods. The results showed that the northern areas are relatively more disaster resilient, while the regions located in the south or center of the city show a lower level of disaster resilience. The reliability and validity of the proposed approach were assessed by comparing its results with the results of the DROP and JICA studies using a scatter plot and Pearson's correlation coefficient. The findings indicated a strong positive relationship between the results of this study and those of the other two models.
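    As a rough illustration of the indicator-weighting step, the sketch below extracts latent factors from synthetic neighborhood indicators and weights each indicator by its squared loadings before aggregating into a composite score. It approximates only the factor-analysis half of F'ANP; the ANP interdependency weighting is not modelled, and all data are invented.

```python
# Minimal sketch: factor-analysis-based weighting of composite indicators.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(0, 1, (368, 12))        # 368 neighborhoods, 12 indicators
Xs = StandardScaler().fit_transform(X)

fa = FactorAnalysis(n_components=3).fit(Xs)
loadings = fa.components_              # shape (3, 12)

# Weight each indicator by its squared loading summed over factors
# (an illustrative stand-in for the F'ANP weighting; ANP feedbacks omitted).
weights = (loadings ** 2).sum(axis=0)
weights /= weights.sum()

composite = Xs @ weights               # one resilience score per neighborhood
print(composite[:5])
```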

    Fighting financial crime in the digital age: with special regard to cyber-enabled money laundering

    In order to effectively combat money laundering and terrorist financing carried out by means of cryptocurrencies and cryptoassets, certain conditions must be met. First, there must be a sufficient and appropriate regulatory framework within which the necessary countermeasures can be taken. Several examples show that this legal framework is formed not only by national and supranational state bodies but also by rules created in the private sector through self-regulation. Both regulatory systems are equally limited in their mechanisms of action by data protection law. Money laundering manipulations predominantly, if not as a rule, involve data of the persons concerned, and countermeasures will consequently also have to take these data into account. The question of how to resolve this conflict, between the interest of the business sector and society in combating a specific form of financial crime and the interest in protecting individual privacy, is, as illustrated by a few examples, being addressed not only by state and private regulators but also by the courts. It should also be noted that legally relevant rules must, and can, be applied regardless of their character. Despite the existing, relatively extensive legal framework, users sometimes have practical problems in fulfilling the obligations imposed on them. This concerns financial service providers, who need concrete assistance, especially in the context of risk management; it is therefore recommended that their trade associations offer this assistance on various points through self-regulation. Especially in the case of non-transparent cyber-enabled criminal processes, appropriate organisational structures and technical capabilities must be available to identify them as violations of the law. Only when the criminal structures, i.e. the technical possibilities for abuse and the currently practiced manipulations, are known can crimes be countered preventively and with repressive means. This concerns not only financial institutions but also the administrative and law enforcement authorities and the courts, where a lack of expertise can often be identified. This aspect is dealt with not only in part of this paper but also, in detail, in the larger part of the publications compiled in the last section.

    Anomaly detection in computer networks

    Advisors: Leonardo de Souza Mendes and Mario Lemes Proença Junior. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia ElĂ©trica e de Computação. Abstract: Anomalies in computer networks are unexpected and significant deviations that occur in network traffic as a consequence of diverse situations such as software bugs, unfair resource usage, equipment failures, misconfiguration and attacks. This work proposes an anomaly detection system based on three levels of analysis. The first level of analysis is responsible for comparing the data collected from SNMP (Simple Network Management Protocol) objects with the profile of normal network behavior. The second level of analysis correlates the alarms generated by the first level using a dependency graph, which represents the relationships between the monitored SNMP objects. The third level of analysis correlates the second-level alarms using network topology information and generates a third-level alarm that reports the anomaly's propagation path through the network. Tests were performed on the State University of Londrina network, exploring real situations. The results showed that the proposal presents low false positive rates combined with high detection rates. Moreover, the proposed system is able to correlate alarms generated for SNMP objects at different places in the network, producing smaller sets of alarms that offer the network administrator a wide view of the problem. (Doctorate in Telecommunications and Telematics; Doctor of Electrical Engineering.)
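    A hedged sketch of the second-level correlation step: first-level alarms raised on individual SNMP objects are merged whenever the alarmed objects are connected in the dependency graph, yielding fewer, higher-level alarms. The object names and dependencies below are illustrative, not the thesis's actual monitored objects.

```python
# Hedged sketch: correlating first-level alarms over a dependency graph.
import networkx as nx

# Dependencies between monitored SNMP objects (illustrative).
deps = nx.Graph([("ifInOctets", "ifOutOctets"),
                 ("ifOutOctets", "tcpInSegs"),
                 ("udpInDatagrams", "udpOutDatagrams")])

# Objects on which first-level alarms were raised (illustrative).
first_level = {"ifInOctets", "ifOutOctets", "tcpInSegs", "udpInDatagrams"}

# Correlate: one second-level alarm per connected group of alarmed objects.
sub = deps.subgraph(first_level)
second_level = [sorted(c) for c in nx.connected_components(sub)]
print(second_level)
# e.g. [['ifInOctets', 'ifOutOctets', 'tcpInSegs'], ['udpInDatagrams']]
```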