8 research outputs found

    Network anomaly detection research: a survey

    Analyzing data to identify attacks and anomalies is a crucial task in anomaly detection, and network anomaly detection itself is an important issue in network security. Researchers have developed methods and algorithms to improve anomaly detection systems, and survey papers on anomaly detection research are available. Nevertheless, this paper attempts a further analysis and provides an alternative taxonomy of anomaly detection research, focusing on methods, types of anomalies, data repositories, outlier identity, and the most commonly used data types. In addition, it summarizes information on the application network categories of the existing studies.

    A Big Data and machine learning approach for network monitoring and security

    In the last decade the performance of 802.11 (Wi-Fi) devices has skyrocketed. Today it is possible to build gigabit wireless links spanning kilometers at a fraction of the cost of the wired equivalent. In the same period, mesh networks evolved from experimental tools confined to university labs into systems running in several real-world scenarios. Mesh networks can now provide city-wide coverage and can compete in the market for Internet access. Yet, being wireless distributed networks, mesh networks are still hard to maintain and monitor. This paper explains how monitoring, anomaly detection, and root cause analysis can be performed in mesh networks today using Big Data techniques. It first describes the architecture of a modern mesh network, justifies the use of Big Data techniques, and provides a design for the storage and analysis of the Big Data produced by a large-scale mesh network. While proposing a generic infrastructure, we focus on its application in the security domain.

    Big Data Analytics in Static and Streaming Provenance

    Thesis (Ph.D.) - Indiana University, Informatics and Computing, 2016. With recent technological and computational advances, scientists increasingly integrate sensors and model simulations to understand spatial, temporal, social, and ecological relationships at unprecedented scale. Data provenance traces the relationships of entities over time, thus providing a unique view of the over-time behavior under study. However, provenance can be overwhelming in both volume and complexity, and its emerging forecasting potential creates additional demands. This dissertation focuses on Big Data analytics of static and streaming provenance. It develops filters and a non-preprocessing slicing technique for in-situ querying of static provenance, and it presents a stream-processing framework for online processing of provenance data at high receiving rates. While the former is sufficient for answering queries given before the application starts (forward queries), the latter handles queries whose targets are unknown beforehand (backward queries). Finally, it explores data mining on large collections of provenance and proposes a temporal representation of provenance that reduces the high dimensionality while effectively supporting mining tasks such as clustering, classification, and association rule mining; this temporal representation can also be applied to streaming provenance. The proposed techniques are verified through software prototypes applied to Big Data provenance captured from computer network data, weather models, ocean models, remote (satellite) imagery data, and agent-based simulations of agricultural decision making.
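The backward-query idea above can be sketched as a toy in-memory stream processor: provenance edges arrive one at a time, and a backward query walks the accumulated derivation graph. The class, entity names, and edge format below are hypothetical illustrations, not the dissertation's actual framework.

```python
from collections import defaultdict, deque

class ProvenanceStream:
    """Toy online store of provenance edges (derived entity -> source entity)."""

    def __init__(self):
        self.sources = defaultdict(set)  # entity -> entities it was derived from

    def ingest(self, derived, source):
        # Called once per streamed provenance record.
        self.sources[derived].add(source)

    def backward_query(self, entity):
        """Return all ancestors of `entity` (its full derivation history)."""
        seen, frontier = set(), deque([entity])
        while frontier:
            node = frontier.popleft()
            for src in self.sources[node]:
                if src not in seen:
                    seen.add(src)
                    frontier.append(src)
        return seen

stream = ProvenanceStream()
for derived, source in [("forecast", "model_run"),
                        ("model_run", "sensor_a"),
                        ("model_run", "sensor_b")]:
    stream.ingest(derived, source)

print(sorted(stream.backward_query("forecast")))
# prints ['model_run', 'sensor_a', 'sensor_b']
```

Because the target of a backward query is unknown while records stream in, the sketch keeps every edge; the dissertation's framework addresses exactly the cost this naive retention would incur at high receiving rates.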

    A WEB-BASED ENVIRONMENTAL TOOLKIT TO SUPPORT SMES IN THE IMPLEMENTATION OF AN ENVIRONMENTAL MANAGEMENT SYSTEM

    With small and medium-sized enterprises (SMEs) making up the majority of businesses worldwide, it is important that they act in an environmentally responsible manner. Environmental management systems (EMS) help companies evaluate and improve their environmental impact, but they often require human, financial, and time resources that not all SMEs can afford. This research encompasses interviews with representatives of two small enterprises in Germany to provide insights into their understanding and knowledge of an EMS and how they perceive their responsibility towards the environment. Furthermore, it presents a toolkit created especially for SMEs. It serves as a simplified version of an EMS based on the ISO 14001 standard and is evaluated by target users and appropriate representatives. Among the findings: while open to the idea of improving their environmental impact, SMEs do not always feel it is their responsibility to do so, and they seem to lack the means to fully implement an EMS. The developed toolkit is considered useful and usable, and recommendations are drawn for its future enhancement.

    ENERGY CONSUMPTION OF MOBILE PHONES

    Battery consumption is a very important aspect of mobile application development and has to be considered by all developers. This study presents an analysis of different relevant concepts and parameters that may have an impact on the energy consumption of Windows Phone applications. This operating system was chosen because limited research related to it has been conducted, even though there are related studies for the Android and iOS operating systems; a further reason is the increasing number of Windows Phone users. The objective of this research is to categorise the energy consumption parameters (e.g. use of one thread or several threads for the same output). The results for each group of experiments are analysed and a rule is derived; the set of derived rules will serve as a guide for developers who intend to build energy-efficient Windows Phone applications. For each experiment, one application is created for each concept and the results are presented in two ways: a table and a chart. The table presents the duration of the experiment, the battery consumed, the expected battery lifetime, and the energy consumption, while the charts display the energy distribution across the main threads: the UI thread, the application thread, and the network thread.

    Dolus: cyber defense using pretense against DDoS attacks in cloud platforms

    Cloud-hosted services are being increasingly used in online businesses in sectors such as retail, healthcare, manufacturing, and entertainment due to benefits such as scalability and reliability. These benefits are fueled by innovations in the orchestration of cloud platforms that make them fully programmable as Software Defined everything Infrastructures (SDxI). At the same time, sophisticated targeted attacks such as Distributed Denial-of-Service (DDoS) are growing on an unprecedented scale, threatening the availability of online businesses. In this thesis, we present a novel defense system called Dolus to mitigate the impact of DDoS attacks launched against high-value services hosted in SDxI-based cloud platforms. Our Dolus system is able to initiate a pretense in a scalable and collaborative manner to deter the attacker, based on threat intelligence obtained from attack feature analysis in a two-stage ensemble learning scheme. Using foundations from pretense theory in child play, Dolus takes advantage of elastic capacity provisioning via quarantine virtual machines and SDxI policy coordination across multiple network domains. To maintain the pretense of a false sense of success after attack identification, Dolus uses two strategies: (i) dummy traffic pressure in a quarantine to mimic the target response time profiles that were present before legitimate users were migrated away, and (ii) Scapy-based packet manipulation to generate responses with spoofed IP addresses of the original target from before the attack traffic started being quarantined. With the time gained through pretense initiation, Dolus enables cloud service providers to decide on a variety of policies to mitigate the attack impact without disrupting the cloud services experience for legitimate users.
We evaluate the efficacy of Dolus using a GENI Cloud testbed and demonstrate its real-time capabilities to: (a) detect DDoS attacks and redirect attack traffic to quarantine resources to engage the attacker under pretense, and (b) coordinate SDxI policies to block DDoS attacks closer to the attack source(s) where possible.
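The dummy-traffic strategy above can be illustrated with a toy calculation: quarantine responses are stalled so their total latency matches the profile the real target exhibited before migration. The function and latency values below are hypothetical illustrations, not Dolus's actual implementation.

```python
import statistics

def padding_delay(pre_attack_latencies_ms, quarantine_latency_ms):
    """How long (ms) to stall a quarantine reply so its total latency
    matches the median latency the target showed before users migrated."""
    target = statistics.median(pre_attack_latencies_ms)
    return max(0.0, target - quarantine_latency_ms)

profile = [42.0, 40.0, 45.0, 41.0, 44.0]  # recorded before migration (median 42.0)
print(padding_delay(profile, 5.0))   # fast quarantine reply -> pad up to profile
print(padding_delay(profile, 60.0))  # already slower than profile -> no padding
```

The point of the padding is that an attacker timing responses sees no sudden latency drop (or spike) that would reveal the redirect into quarantine resources.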

    User experience and robustness in social virtual reality applications

    Cloud-based applications that rely on emerging technologies such as social virtual reality are increasingly being deployed at high scale in domains such as remote learning, public safety, and healthcare. These applications increasingly need mechanisms that treat robustness and immersive user experience as a joint consideration, to minimize disruption in service availability due to cyber attacks and faults. Specifically, effective modeling and real-time adaptation approaches need to be investigated to ensure that the application functionality is resilient and does not induce undesired cybersickness levels. In this thesis, we investigate a novel 'DevSecOps' paradigm to jointly tune both the robustness and immersive performance factors in social virtual reality application design and operations. We characterize robustness factors in terms of Security, Privacy and Safety (SPS), and immersive performance factors in terms of Quality of Application, Quality of Service, and Quality of Experience (3Q). We achieve "harmonized security and performance by design" by modeling the SPS and 3Q factors of cloud-hosted applications using attack-fault trees (AFT) and performing accurate quantitative analysis via formal verification techniques, i.e., statistical model checking (SMC). We develop a real-time adaptive control capability to manage SPS/3Q issues affecting a critical anomaly event that induces undesired cybersickness. This control capability features a novel dynamic rule-based approach for closed-loop decision making, augmented by a knowledge base of SPS/3Q issues for individual and/or combined events. Correspondingly, we collect threat intelligence on application- and network-based cyber-attacks that disrupt immersiveness, and we develop a multi-label K-NN classifier as well as statistical analysis techniques for critical anomaly event detection.
We validate the effectiveness of our solution approach in a real-time cloud testbed featuring vSocial, a social virtual reality based learning environment that supports delivery of the Social Competence Intervention (SCI) curriculum for youth. Based on our experiment findings, we show that our solution approach enables: (i) identification of the most vulnerable components that impact user immersive experience, to formally conduct risk assessment, (ii) dynamic decision making for controlling SPS/3Q issues inducing undesirable cybersickness levels, via quantitative metrics of user feedback and effective anomaly detection, and (iii) rule-based policies following the NIST SP 800-160 principles and cloud-hosting recommendations for a more secure, privacy-preserving, and robust cloud-based application configuration with satisfactory immersive user experience.
Includes bibliographical references (pages 133-146).

    Knowledge discovery with recommenders for big data management in science and engineering communities

    Recent science and engineering research tasks are increasingly data-intensive and use workflows to automate the integration and analysis of voluminous data to test hypotheses. In particular, bold scientific advances in areas such as neuroscience and bioinformatics necessitate access to multiple data archives, heterogeneous software and computing resources, and multi-site interdisciplinary expertise. Datasets are evolving, and new tools are continuously invented to achieve new state-of-the-art performance. Principled cyber and software automation approaches to data-intensive analytics, built on the systematic integration of cyberinfrastructure (CI) technologies and knowledge-discovery-driven algorithms, will significantly enhance research and interdisciplinary collaborations in science and engineering. In this thesis, we demonstrate a novel recommender approach to discover latent knowledge patterns from both the infrastructure perspective (i.e., a measurement recommender) and the applications perspective (i.e., a topic recommender and a scholar recommender). In the infrastructure perspective, we identify and diagnose network-wide anomaly events to address performance bottlenecks by proposing a novel measurement recommender scheme. Where ground truth in network performance monitoring is lacking (e.g., perfSONAR deployments), it is hard to perform root-cause analysis in a multi-domain context. To solve this problem, we define a "social plane" concept that relies on recommendation schemes to share diagnosis knowledge and work collaboratively. Our solution makes it easier for network operators and application users to quickly and effectively troubleshoot performance bottlenecks on wide-area network backbones. To evaluate our measurement recommender, we use both real and synthetic datasets.
The results show that our measurement recommender scheme achieves high precision, recall, and accuracy, as well as efficiency in terms of the time taken for large-volume measurement trace analysis. In the application perspective, our goal is to shorten time to knowledge discovery and adapt prior domain knowledge for computational and data-intensive communities. To achieve this goal, we design a novel topic recommender that leverages a domain-specific topic model (DSTM) algorithm to help scientists find the relevant tools or datasets for their applications. The DSTM is a probabilistic graphical model that extends Latent Dirichlet Allocation (LDA) and uses a Markov chain Monte Carlo (MCMC) algorithm to infer latent patterns within a specific domain in an unsupervised manner. We evaluate our scheme on large collections of data (publications, tools, and datasets) from the bioinformatics and neuroscience domains. Our experimental results using the perplexity metric show that our model has better generalization performance within a domain for discovering highly specific latent topics. Lastly, to enhance collaborations among scholars to generate new knowledge, it is necessary to identify scholars by their specific research interests or cross-domain expertise. We propose a "ScholarFinder" model to quantify expert knowledge based on publications and funding records using a deep generative model. Our model embeds scholars' knowledge in order to recommend suitable scholars for multi-disciplinary tasks. We evaluate our model against state-of-the-art baseline models (e.g., XGBoost, DNN), and the experimental results show that our ScholarFinder model outperforms them in terms of precision, recall, F1-score, and accuracy.
Includes bibliographical references (pages 113-124).
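The abstract does not detail the measurement recommender's internals; a minimal item-similarity sketch (cosine similarity over hypothetical anomaly-feature vectors) illustrates the general idea of recommending a measurement probe that matches an operator's past diagnosis profile. The probe names and feature encodings are assumptions made for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(history, candidates, top_n=1):
    """Rank candidate measurement probes by similarity of their
    feature vectors to the operator's past diagnosis profile."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: cosine(history, kv[1]), reverse=True)
    return [name for name, _ in ranked[:top_n]]

# Hypothetical features: (loss anomalies, latency anomalies, throughput anomalies)
operator_profile = (1.0, 0.8, 0.0)
probes = {"owamp_latency": (0.9, 1.0, 0.1),
          "iperf_throughput": (0.1, 0.0, 1.0)}

print(recommend(operator_profile, probes))
```

In the "social plane" framing, the history vector could aggregate diagnoses shared by many operators, so that one site's troubleshooting experience guides probe selection at another.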