
    Network entity characterization and attack prediction

    The devastating effects of cyber-attacks highlight the need for novel attack detection and prevention techniques. Over recent years, considerable work has been done on attack detection as well as on collaborative defense. However, an analysis of the state of the art suggests that many challenges remain in prioritizing alert data and in studying the relation between a recently discovered attack and the probability of it occurring again. In this article, we propose a system for characterizing network entities and estimating the likelihood that they will behave maliciously in the future. Our system, the Network Entity Reputation Database System (NERDS), takes into account all the available information about a network entity (e.g. an IP address) to calculate the probability that it will act maliciously; this is achieved using machine learning. Our experimental results show that it is indeed possible to precisely estimate the probability of future attacks from each entity using information about its previous malicious behavior and other characteristics. Ranking entities by this probability has practical applications in alert prioritization, in assembling highly effective blacklists of limited length, and in other use cases. Comment: 30 pages, 8 figures
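    As a rough illustration of the machine-learning step described above, the sketch below trains a classifier on synthetic per-entity features and ranks entities by their predicted probability of future malicious behaviour. The feature set, model choice and labels are assumptions made for illustration; this is not the actual NERDS pipeline.
    # Sketch of reputation-style scoring: fit a classifier on per-entity features
    # (all synthetic and hypothetical here) and rank entities by the predicted
    # probability of future malicious activity.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    rng = np.random.default_rng(0)
    n = 1000
    # Hypothetical per-entity features: alerts in the last week, days since the
    # last alert, number of distinct alert types, presence on a third-party blacklist.
    X = np.column_stack([
        rng.poisson(2, n),
        rng.integers(0, 30, n),
        rng.integers(0, 5, n),
        rng.integers(0, 2, n),
    ])
    # Synthetic label: did the entity attack again in the following week?
    y = (X[:, 0] + 2 * X[:, 3] + rng.normal(0, 1, n) > 3).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    # Rank test entities by predicted probability of future malicious behaviour;
    # the top of this ranking is a natural candidate for a length-limited blacklist.
    scores = model.predict_proba(X_test)[:, 1]
    top10 = np.argsort(scores)[::-1][:10]
    print(scores[top10])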

    Predictive Methods in Cyber Defense: Current Experience and Research Challenges

    Predictive analysis allows next-generation cyber defense that is more proactive than current approaches based on intrusion detection. In this paper, we discuss various aspects of predictive methods in cyber defense and illustrate them with three examples of recent approaches. The first uses data mining to extract frequent attack scenarios and uses them to project ongoing cyberattacks. The second uses a dynamic network entity reputation score to predict malicious actors. The third uses time series analysis to forecast attack rates in the network. This paper presents a unique evaluation of the three distinct methods in the common environment of an intrusion detection alert sharing platform, which allows a comparison of the approaches and illustrates the capabilities of predictive analysis for current and future research and cybersecurity operations. Our experiments show that all three methods have reached a technology readiness level sufficient for experimental deployment in an operational setting, with promising accuracy and usability. In particular, the prediction and projection methods, despite their differences, are both highly usable for predictive blacklisting: the first provides more detailed output, while the second is more extensible. Network security situation forecasting is lightweight and displays very high accuracy, but does not provide details on the predicted events.
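    To make the third approach concrete, the following sketch forecasts near-term attack rates from hourly alert counts using simple exponential smoothing. The smoothing model, the parameter values and the synthetic data are assumptions for illustration; the paper's forecasting method may differ.
    # Sketch of network security situation forecasting: smooth a series of hourly
    # alert counts and extend the last smoothed level over a short horizon.
    import numpy as np
    def exp_smooth_forecast(counts, alpha=0.3, horizon=6):
        """Return `horizon` flat forecasts from the last smoothed level."""
        level = counts[0]
        for c in counts[1:]:
            level = alpha * c + (1 - alpha) * level
        return np.full(horizon, level)
    rng = np.random.default_rng(1)
    hourly_alerts = rng.poisson(lam=50, size=24 * 7)  # one week of synthetic counts
    print(exp_smooth_forecast(hourly_alerts))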

    Addressing practical challenges for anomaly detection in backbone networks

    Network monitoring has always been a topic of foremost importance for both network operators and researchers, for reasons ranging from anomaly detection to traffic classification and capacity planning. Nowadays, as networks become more and more complex, traffic increases and security threats multiply, achieving a deeper understanding of what is happening in the network has become an essential necessity. In particular, due to the considerable growth of cybercrime, research in the field of anomaly detection has drawn significant attention in recent years and numerous proposals have been made. All the same, when it comes to deploying solutions in real environments, some of them fail to meet crucial requirements. Taking this into account, this thesis focuses on filling this gap between the research and the non-research world. Prior to the start of this work, we identified several problems. First, there is a clear lack of detailed and updated information on the most common anomalies and their characteristics. Second, unawareness of sampled data is still common, although the performance of anomaly detection algorithms is severely affected by it. Third, operators currently need to invest many work-hours to manually inspect and classify detected anomalies in order to act accordingly and take the appropriate mitigation measures. This is further exacerbated by the high number of false positives and false negatives and because anomaly detection systems are often perceived as extremely complex black boxes.
    Analysing an issue is essential to fully comprehend the problem space and to be able to tackle it properly. Accordingly, the first block of this thesis seeks to obtain detailed and updated real-world information on the most frequent anomalies occurring in backbone networks. It first reports on the performance of different commercial systems for anomaly detection and analyses the types of network anomalies detected. Afterwards, it focuses on further investigating the characteristics of the anomalies found in a backbone network using one of the tools for more than half a year. Among other results, this block confirms the need for applying sampling in an operational environment, as well as the unacceptably high number of false positives and false negatives still reported by current commercial tools.
    On the whole, the use of sampling in large networks for monitoring purposes has become almost mandatory and, therefore, all anomaly detection algorithms that do not take it into account might report incorrect results. In the second block of this thesis, the dramatic impact of sampling on the performance of well-known anomaly detection techniques is analysed and confirmed. However, we show that the results change significantly depending on the sampling technique used and also on the metric selected to perform the comparison. In particular, we show that Packet Sampling outperforms Flow Sampling, unlike previously reported. Furthermore, we observe that Selective Sampling (SES), a sampling technique that focuses on small flows, obtains much better results than traditional sampling techniques for scan detection. Consequently, we propose Online Selective Sampling, a sampling technique that obtains the same good performance for scan detection as SES but works on a per-packet basis instead of keeping all flows in memory. We validate and evaluate our proposal and show that it can operate online and uses far fewer resources than SES.
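    The idea of sampling that favours small flows can be pictured with the per-packet sketch below. It is only a simplified illustration of biasing a sample toward small flows for scan detection, not the Online Selective Sampling algorithm proposed in the thesis; the flow keys, threshold and probabilities are assumptions.
    # Per-packet sampling biased toward small flows: packets from flows that have
    # sent few packets so far are kept with high probability, packets from large
    # flows are mostly dropped. Threshold and probabilities are illustrative.
    import random
    from collections import defaultdict
    def selective_sample(packets, small_flow_threshold=5, small_p=0.9, large_p=0.01):
        """packets: iterable of (flow_key, payload); yields sampled packets."""
        seen = defaultdict(int)  # per-flow packet counter (bounded in a real system)
        for key, payload in packets:
            seen[key] += 1
            p = small_p if seen[key] <= small_flow_threshold else large_p
            if random.random() < p:
                yield key, payload
    # Synthetic trace: one heavy flow plus many single-packet, scan-like flows.
    trace = [("10.0.0.1>10.0.0.2:80", b"x")] * 1000
    trace += [("10.0.0.1>10.0.0.%d:22" % i, b"syn") for i in range(3, 200)]
    print(len(list(selective_sample(trace))), "packets kept out of", len(trace))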
    Although the literature offers plenty of techniques for detecting anomalous events, research on anomaly classification and extraction (e.g., to further investigate what happened or to share evidence with involved third parties) is rather marginal. This makes it harder for network operators to analyse reported anomalies, because they depend solely on their experience to do the job. Furthermore, this task is an extremely time-consuming and error-prone process. The third block of this thesis targets this issue and brings it together with the knowledge acquired in the previous blocks. In particular, it presents a system for automatic anomaly detection, extraction and classification with high accuracy and very low false positives. We deploy the system in an operational environment and show its usefulness in practice. The fourth and last block of this thesis presents a generalisation of our system that focuses on analysing all the traffic, not only network anomalies. This new system seeks to further help network operators by summarising the most significant traffic patterns in their network. We generalise our system to deal with big network traffic data: it handles source/destination IPs, source/destination ports, protocol, source/destination Autonomous Systems, layer-7 application and source/destination geolocation. We first deploy a prototype in the European backbone network GÉANT and show that it can process large amounts of data quickly and build highly informative and compact reports that are very useful for comprehending what is happening in the network. Second, we deploy it in a completely different scenario and show how it can also be successfully used in a real-world use case where we analyse the behaviour of highly distributed devices related to a critical infrastructure sector.
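    The traffic summarisation step can be illustrated with the sketch below, which aggregates synthetic flow records along two dimensions and reports the heaviest contributors. The real system covers more dimensions (Autonomous Systems, layer-7 application, geolocation) and builds compact multi-dimensional reports; the records and field names here are assumptions.
    # Aggregate flow records by source IP and destination port and report the
    # heaviest contributors; a minimal stand-in for the report-building step.
    from collections import Counter
    flows = [  # (src_ip, dst_ip, dst_port, bytes) -- synthetic flow records
        ("192.0.2.1", "198.51.100.7", 443, 120000),
        ("192.0.2.1", "198.51.100.8", 443, 80000),
        ("192.0.2.9", "198.51.100.7", 53, 2000),
        ("192.0.2.9", "198.51.100.7", 443, 50000),
    ]
    bytes_by_src = Counter()
    bytes_by_port = Counter()
    for src, dst, port, nbytes in flows:
        bytes_by_src[src] += nbytes
        bytes_by_port[port] += nbytes
    print("Top sources:", bytes_by_src.most_common(2))
    print("Top ports:", bytes_by_port.most_common(2))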

    Flow Monitoring Explained: From Packet Capture to Data Analysis With NetFlow and IPFIX

    Flow monitoring has become a prevalent method for monitoring traffic in high-speed networks. By focusing on the analysis of flows, rather than individual packets, it is often said to be more scalable than traditional packet-based traffic analysis. Flow monitoring embraces the complete chain of packet observation, flow export using protocols such as NetFlow and IPFIX, data collection, and data analysis. In contrast to what is often assumed, all stages of flow monitoring are closely intertwined. Each of these stages therefore has to be thoroughly understood before sound flow measurements can be performed; otherwise, flow data artifacts and data loss can be the consequence, potentially without being observed. This paper is the first of its kind to provide an integrated tutorial on all stages of a flow monitoring setup. As shown throughout this paper, flow monitoring has evolved since the early 1990s into a powerful tool, and additional functionality will certainly be added in the future. We show, for example, how the previously opposing approaches of deep packet inspection and flow monitoring have been united into novel monitoring approaches.
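    The packet-to-flow aggregation stage described in the tutorial can be sketched as follows: packets sharing the same five-tuple are merged into a flow record, and a flow is exported once it has been idle longer than a timeout. The record fields, the timeout value and the trace are assumptions for illustration, not a NetFlow/IPFIX implementation.
    # Merge packets into flow records keyed by the five-tuple and export a flow
    # after an idle timeout; remaining flows are flushed at the end of the trace.
    from collections import namedtuple
    Packet = namedtuple("Packet", "ts src dst sport dport proto length")
    Flow = namedtuple("Flow", "key first last packets bytes")
    def aggregate(packets, idle_timeout=15.0):
        active, exported = {}, []
        for p in sorted(packets, key=lambda p: p.ts):
            key = (p.src, p.dst, p.sport, p.dport, p.proto)
            f = active.get(key)
            if f is not None and p.ts - f.last > idle_timeout:
                exported.append(f)  # idle timeout expired: export the flow record
                f = None
            if f is None:
                f = Flow(key, p.ts, p.ts, 0, 0)
            active[key] = f._replace(last=p.ts, packets=f.packets + 1, bytes=f.bytes + p.length)
        return exported + list(active.values())
    pkts = [Packet(t, "192.0.2.1", "198.51.100.7", 4242, 80, 6, 1500) for t in (0.0, 1.0, 30.0)]
    for flow in aggregate(pkts):
        print(flow)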

    Effects of circadian rhythm phase alteration on physiological and psychological variables: Implications to pilot performance (including a partially annotated bibliography)

    The effects of environmental synchronizers upon circadian rhythmic stability in man, and the deleterious alterations in performance which result from changes in this stability, are points of interest in a review of selected literature published between 1972 and 1980. A total of 2,084 references relevant to pilot performance and circadian phase alteration are cited and arranged in the following categories: (1) human performance, with focus on the effects of sleep loss or disturbance and fatigue; (2) phase shift, in which ground-based light/dark alteration and transmeridian flight studies are discussed; (3) shiftwork; (4) internal desynchronization, which includes the effect of environmental factors on rhythmic stability and of rhythm disturbances on sleep and psychopathology; (5) chronotherapy, the application of methods to ameliorate desynchronization symptomatology; and (6) biorhythm theory, in which the birthdate-based biorhythm method for predicting aircraft accident susceptibility is critically analyzed. Annotations are provided for most citations.

    The theme(s) of the Joseph story: a literary analysis

    Since the 1970s the application of narrative analysis to the Joseph Story has enriched its reading. But those who apply this method to the narrative produce significantly different results in terms of what its theme is. The aim of this thesis is to investigate the reasons for this and to articulate as objectively as possible the theme of the Joseph Story. Chapter One establishes the context of this investigation by evaluating the major narrative readings of the Joseph Story. It reveals that those who apply narrative methodologies to the story come to different conclusions about what its theme is. It notes that the different results could be due to different narrative approaches, the literary context of the narrative, and the complex nature of the text itself. We choose Humphreys, Longacre, and Turner as our dialogue partners because they represent different narrative methods of reading the Joseph Story. The reference terms 'narrative criticism' and 'theme' are then defined. Chapter Two argues that the way to overcome the confusion concerning the theme(s) of the Joseph Story is to use a methodology that addresses the limitations of the literary approaches applied to the narrative and takes note of the wider literary context of Genesis and the rich nature of the text. This chapter then proposes a narrative methodology of 'triangulation' that comprises plot analysis, text-linguistics and poetics. Chapters Three, Four and Five apply this methodology to the entire narrative in Genesis 37-50 via a detailed analysis of Genesis 37, 44-45, and 49-50, the beginning, middle and end of the narrative, respectively. The motifs that emerge from our analysis are family breakdown, power, providence, blessing, and land. Chapter Six concludes that each of these motifs is a key concern of the Joseph Story but none by itself adequately articulates the story's theme. It is the ecology of these motifs that enunciates the theme: God's providential work with and through Jacob's dysfunctional family, preserving it and blessing others.

    Building on Progress - Expanding the Research Infrastructure for the Social, Economic, and Behavioral Sciences. Vol. 1

    The publication provides a comprehensive compendium of the current state of Germany's research infrastructure in the social, economic, and behavioral sciences. In addition, the book presents detailed discussions of the current needs of empirical researchers in these fields and of opportunities for future development. The book contains 68 advisory reports by more than 100 internationally recognized authors from a wide range of fields, and recommendations by the German Data Forum (RatSWD) on how to improve the research infrastructure so as to create ideal conditions for making Germany's social, economic, and behavioral sciences more innovative and internationally competitive. The German Data Forum (RatSWD) has discussed the broad spectrum of issues covered by these advisory reports extensively and has developed general recommendations on how to expand the research infrastructure to meet the needs of scholars in the social and economic sciences.

    Metaphors of the sea : a critical study of five Anglo-Saxon poems

    The object of this thesis is to contribute to the appreciation of five selected Anglo-Saxon poems - The Wanderer, The Seafarer, Exodus, Andreas, and Beowulf - by analysing their metaphoric use of the sea. Metaphor is an essential and distinctive element of all poetry and, to be genuine, to be alive, and to be ever-interesting, a poem must achieve itself through metaphor. A poem's unique mode of vision is metaphoric, and whatever it communicates we perceive in and through metaphor. This is an axiomatic tenet of the criticism of modern poetry. But criticism of Anglo-Saxon poetry, if it bases its insights on a detailed reference to metaphor, must justify itself on theoretical grounds