
    DoWitcher: Effective Worm Detection and Containment in the Internet Core

    Enterprise networks are increasingly offloading the responsibility for worm detection and containment to carrier networks. However, current approaches to the zero-day worm detection problem, such as those based on content similarity of packet payloads, are not scalable to carrier link speeds (OC-48 and upwards). In this paper, we introduce a new system, DoWitcher, which, in contrast to previous approaches, is scalable as well as able to detect the stealthiest worms that employ low propagation rates or polymorphism to evade detection. DoWitcher uses an incremental approach toward worm detection: first, it examines layer-4 traffic features to discern the presence of a worm anomaly; next, it determines a flow-filter mask that can be applied to isolate the suspect worm flows; and finally, it enables full-packet capture of only those flows that match the mask, which are then processed by a longest common subsequence algorithm to extract the worm content signature. Via a proof-of-concept implementation on a commercially available network analyzer processing raw packets from an OC-48 link, we demonstrate the capability of DoWitcher to detect low-rate worms and extract signatures for even polymorphic worms.
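    The abstract names a longest common subsequence step but gives no implementation details; the following is a minimal, illustrative Python sketch of extracting a candidate content signature from a few captured suspect payloads with a classic dynamic-programming LCS. The function names and sample payloads are hypothetical, not DoWitcher's actual code.

```python
def lcs(a: bytes, b: bytes) -> bytes:
    """Classic O(len(a)*len(b)) dynamic-programming LCS of two byte strings."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack to recover one longest common subsequence.
    out = bytearray()
    i, j = m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return bytes(reversed(out))

def extract_signature(payloads: list[bytes]) -> bytes:
    """Fold LCS over the captured suspect payloads to obtain a candidate signature."""
    sig = payloads[0]
    for p in payloads[1:]:
        sig = lcs(sig, p)
    return sig

if __name__ == "__main__":
    flows = [b"GET /a.php?x=EXPLOITCODE1 HTTP/1.1",
             b"GET /b.php?y=EXPLOITCODE2 HTTP/1.1"]
    print(extract_signature(flows))
```

    Because each pairwise LCS is quadratic in payload length, restricting it to the small set of flows matching the flow-filter mask, as the abstract describes, keeps the signature-extraction cost manageable.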

    Detection of network anomalies and novel attacks in the internet via statistical network traffic separation and normality prediction

    With the advent and explosive growth of the global Internet and the electronic commerce environment, adaptive/automatic network and service anomaly detection is fast gaining critical research and practical importance. If the next generation of network technology is to operate beyond the levels of current networks, it will require a set of well-designed tools for its management that provide the capability of dynamically and reliably identifying network anomalies. Early detection of network anomalies and performance degradations is key to rapid fault recovery and robust networking, and has been receiving increasing attention lately. In this dissertation we present a network anomaly detection methodology, which relies on the analysis of network traffic and the characterization of the dynamic statistical properties of traffic normality, in order to detect network anomalies accurately and in a timely manner. Anomaly detection is based on the concept that perturbations of normal behavior suggest the presence of anomalies, faults, attacks, etc. This methodology can be uniformly applied to detect network attacks, especially in cases where novel attacks are present and the nature of the intrusion is unknown. Specifically, in order to provide an accurate identification of normal network traffic behavior, we first develop an anomaly-tolerant, non-stationary traffic prediction technique, which is capable of removing both pulse and continuous anomalies. Furthermore, we introduce and design dynamic thresholds, and based on them we define adaptive anomaly violation conditions as a combined function of both the magnitude and the duration of the traffic deviations. Numerical results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach under different anomaly traffic scenarios and attacks, such as mail-bombing and UDP flooding attacks. In order to improve the prediction accuracy of the statistical network traffic normality, especially in cases where high burstiness is present, we propose, study and analyze a new network traffic prediction methodology, based on frequency-domain traffic analysis and filtering, with the objective of enhancing the network anomaly detection capabilities. Our approach is based on the observation that the various network traffic components are better identified, represented and isolated in the frequency domain. As a result, the traffic can be effectively separated into a baseline component, which includes most of the low-frequency traffic and presents low burstiness, and a short-term component that includes the most dynamic part. The baseline traffic is a mean non-stationary periodic time series, and the Extended Resource-Allocating Network (BRAN) methodology is used for its accurate prediction. The short-term traffic is shown to be a time-dependent series, and the Autoregressive Moving Average (ARMA) model is proposed for the accurate prediction of this component. Furthermore, it is demonstrated that the proposed enhanced traffic prediction strategy can be combined with the use of dynamic thresholds and adaptive anomaly violation conditions in order to improve the network anomaly detection effectiveness. The performance evaluation of the proposed overall strategy, in terms of the achievable network traffic prediction accuracy and anomaly detection capability, and the corresponding numerical results demonstrate and quantify the significant improvements that can be achieved.
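    As a rough illustration of the separation-plus-thresholding idea described above (not the dissertation's actual BRAN/ARMA implementation), the sketch below low-pass filters a traffic series in the frequency domain to obtain a baseline component, treats the remainder as the short-term component, and flags a violation only when the deviation exceeds a rolling dynamic threshold for a minimum duration. The cutoff, window, and threshold parameters are assumptions.

```python
import numpy as np

def separate_traffic(x: np.ndarray, cutoff_bins: int = 10):
    """Split a traffic series into a low-frequency baseline and a short-term
    residual by zeroing all but the lowest `cutoff_bins` frequency bins
    (illustrative cutoff, not the dissertation's)."""
    X = np.fft.rfft(x)
    X_low = X.copy()
    X_low[cutoff_bins:] = 0.0
    baseline = np.fft.irfft(X_low, n=len(x))
    return baseline, x - baseline

def violations(residual: np.ndarray, k: float = 3.0,
               min_duration: int = 5, window: int = 100) -> np.ndarray:
    """Flag anomalies only when the deviation exceeds a dynamic threshold
    (rolling mean + k * rolling std) for at least `min_duration` consecutive
    samples, i.e. a combined magnitude-and-duration condition."""
    flags = np.zeros(len(residual), dtype=bool)
    run = 0
    for t in range(window, len(residual)):
        hist = residual[t - window:t]
        threshold = hist.mean() + k * hist.std()
        run = run + 1 if residual[t] > threshold else 0
        if run >= min_duration:
            flags[t] = True
    return flags
```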

    Modeling emergency department visit patterns for infectious disease complaints: results and application to disease surveillance

    BACKGROUND: Concern over bio-terrorism has led to recognition that traditional public health surveillance for specific conditions is unlikely to provide timely indication of some disease outbreaks, either naturally occurring or induced by a bioweapon. In non-traditional surveillance, the use of health care resources is monitored in "near real" time for the first signs of an outbreak, such as increases in emergency department (ED) visits for respiratory, gastrointestinal or neurological chief complaints (CC). METHODS: We collected ED CCs from 2/1/94 – 5/31/02 as a training set. A first-order model was developed for each of seven CC categories by accounting for long-term, day-of-week, and seasonal effects. We assessed predictive performance on subsequent data from 6/1/02 – 5/31/03, compared CC counts to predictions and confidence limits, and identified anomalies (simulated and real). RESULTS: Each CC category exhibited significant day-of-week differences. For most categories, counts peaked on Monday. There were seasonal cycles in both respiratory and undifferentiated infection complaints, and the season-to-season variability in peak date was summarized using a hierarchical model. For example, the average peak date for respiratory complaints was January 22, with a season-to-season standard deviation of 12 days. This season-to-season variation makes it challenging to predict respiratory CCs, so we focused our effort and discussion on prediction performance for this difficult category. Total ED visits increased over the study period by 4%, but respiratory complaints decreased by roughly 20%, illustrating that long-term averages in the data set need not reflect future behavior in data subsets. CONCLUSION: We found that ED CCs provided timely indicators for outbreaks. Our approach led to successful identification of a respiratory outbreak one to two weeks in advance of reports from the state-wide sentinel flu surveillance and of a reported increase in positive laboratory test results.
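    The paper's exact first-order model is not reproduced here; the sketch below only illustrates the general idea of fitting day-of-week, long-term trend, and seasonal effects to log-transformed CC counts and flagging test-period days that exceed an upper prediction limit. The harmonic form, variable names, and threshold are assumptions.

```python
import numpy as np

def design_matrix(days: np.ndarray) -> np.ndarray:
    """Day-of-week indicators, a linear long-term trend, and one annual
    harmonic (365.25-day period). `days` is an integer day index from the
    start of the training set."""
    dow = np.eye(7)[days % 7]                        # day-of-week effects
    trend = (days / 365.25).reshape(-1, 1)           # long-term trend
    season = np.column_stack([np.sin(2 * np.pi * days / 365.25),
                              np.cos(2 * np.pi * days / 365.25)])
    return np.column_stack([np.ones(len(days)), dow[:, 1:], trend, season])

def fit_and_flag(days, counts, new_days, new_counts, z: float = 3.0):
    """Fit a log-count regression on the training window, then flag
    test-period counts above the upper prediction limit (approximate;
    assumes roughly constant residual variance)."""
    X = design_matrix(np.asarray(days))
    y = np.log(np.asarray(counts) + 1.0)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = np.std(y - X @ beta)
    Xn = design_matrix(np.asarray(new_days))
    upper = np.exp(Xn @ beta + z * sigma) - 1.0
    return np.asarray(new_counts) > upper
```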

    Bridge Structural Health Monitoring Using a Cyber-Physical System Framework

    Highway bridges are critical infrastructure elements supporting commercial and personal traffic. However, bridge deterioration coupled with insufficient funding for bridge maintenance remains a chronic problem faced by the United States. With the emergence of wireless sensor networks (WSN), structural health monitoring (SHM) has gained increasing attention over the last decade as a viable means of assessing bridge structural conditions. While intensive research has been conducted on bridge SHM, few studies have clearly demonstrated the value of SHM to bridge owners, especially through real-world implementations on operational bridges. This thesis first aims to enhance existing bridge SHM implementations by developing a cyber-physical system (CPS) framework that integrates multiple SHM systems with traffic cameras and weigh-in-motion (WIM) stations located along the same corridor. To demonstrate the efficacy of the proposed CPS, a 20-mile segment of the northbound I-275 highway in Michigan is instrumented with four traffic cameras, two bridge SHM systems, and a WIM station. Real-time truck detection algorithms are deployed to intelligently trigger the SHM systems for data collection during large truck events. Such a triggering approach can improve data acquisition efficiency by up to 70% (as compared to schedule-based data collection). Leveraging computer vision-based truck re-identification techniques applied to videos from the traffic cameras along the corridor, a two-stage pipeline is proposed to fuse bridge input data (i.e., truck loads as measured by the WIM station) and output data (i.e., bridge responses to a given truck load). From August 2017 to April 2019, over 20,000 truck events were captured by the CPS. To the author's best knowledge, the CPS implementation is the first of its kind in the nation and offers a large volume of heterogeneous input-output data, thereby opening new opportunities for novel data-driven bridge condition assessment methods. Built upon the developed CPS framework, the second half of the thesis focuses on the use of the data in real-world bridge asset management applications. First, long-term bridge strain response data are used to investigate and model the composite action behavior exhibited in slab-on-girder highway bridges. Partial composite action is observed and quantified over negative bending regions of the bridge through the monitoring of slip strain at the girder-deck interface. It is revealed that undesired composite action over negative bending regions might be a cause of deck deterioration. The analysis performed on modeling composite action is a first in studying composite behavior in operational bridges with in-situ SHM measurements. Second, a data-driven analytical method is proposed to derive site-specific parameters such as dynamic load allowance and unit influence lines for bridge load rating using the input-output data. The resulting rating factors more rationally account for the bridge's systematic behavior, leading to more accurate rating of a bridge's load-carrying capacity. Third, the proposed CPS framework is shown capable of measuring highway traffic loads. The paired WIM and bridge response data are used to train a learning-based bridge WIM system in which truck weight characteristics, such as axle weights, are derived directly from the corresponding bridge response measurements. Such an approach is successfully utilized to extend the functionality of an existing bridge SHM system for truck weighing, achieving the precision requirements of a Type-II WIM station (e.g., gross vehicle weight error of less than 15%).
    PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163210/1/rayhou_1.pd
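    As a loose sketch of the learning-based bridge WIM idea described above (not the thesis's actual pipeline), the example below pairs simple strain-derived features with WIM-measured gross vehicle weights, fits a regressor, and computes the relative gross-weight error that would be checked against the 15% Type-II requirement. The feature choices and model are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def strain_features(strain_ts: np.ndarray) -> np.ndarray:
    """Hypothetical features summarizing a girder strain time history for one
    truck event: peak strain, integrated strain (area under the curve), and
    event length in samples."""
    return np.array([strain_ts.max(), np.trapz(strain_ts), len(strain_ts)])

def train_bridge_wim(strain_events, wim_weights):
    """Fit a regressor mapping bridge-response features to WIM-measured gross
    vehicle weights (the paired input-output data from the corridor)."""
    X = np.vstack([strain_features(s) for s in strain_events])
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, np.asarray(wim_weights, dtype=float))
    return model

def gvw_error(model, strain_events, wim_weights) -> np.ndarray:
    """Relative gross-weight error per event, e.g. for checking the Type-II
    requirement of less than 15% error."""
    X = np.vstack([strain_features(s) for s in strain_events])
    w = np.asarray(wim_weights, dtype=float)
    return np.abs(model.predict(X) - w) / w
```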

    Marshall Space Flight Center Research and Technology Report 2019

    Today, our calling to explore is greater than ever before, and here at Marshall Space Flight Center we make human deep space exploration possible. A key goal for Artemis is demonstrating and perfecting capabilities on the Moon for technologies needed for humans to get to Mars. This year's report features 10 of the Agency's 16 Technology Areas, and I am proud of Marshall's role in creating solutions for so many of these daunting technical challenges. Many of these projects will lead to a sustainable in-space architecture for human space exploration that will allow us to travel to the Moon, on to Mars, and beyond. Others are developing new scientific instruments capable of providing an unprecedented glimpse into our universe. NASA has led the charge in space exploration for more than six decades, and through the Artemis program we will help build on our work in low Earth orbit and pave the way to the Moon and Mars. At Marshall, we leverage the skills and interests of the international community to conduct scientific research, develop and demonstrate technology, and train international crews to operate farther from Earth for longer periods of time than ever before: first at the lunar surface, then on to our next giant leap, human exploration of Mars. While each project in this report seeks to advance new technology and challenge conventions, it is important to recognize the diversity of activities and people supporting our mission. This report not only showcases the Center's capabilities and our partnerships, it also highlights the progress our people have achieved in the past year. These scientists, researchers, and innovators are why Marshall and NASA will continue to be a leader in innovation, exploration, and discovery for years to come.

    Nature-inspired survivability: Prey-inspired survivability countermeasures for cloud computing security challenges

    As cloud computing environments become more complex, adversaries have become highly sophisticated and unpredictable. Moreover, they can easily increase their attack power and persist longer before detection. Uncertain malicious actions, latent risks, and unobserved or unobservable risks (UUURs) characterise this new threat domain. This thesis proposes prey-inspired survivability to address the unpredictable security challenges borne out of UUURs. While survivability is a well-addressed phenomenon in non-extinct prey animals, applying prey survivability to cloud computing directly is challenging due to contradicting end goals. How to manage evolving survivability goals and requirements under contradicting environmental conditions adds to the challenges. To address these challenges, this thesis proposes a holistic taxonomy which integrates multiple and disparate perspectives of cloud security challenges. In addition, it proposes TRIZ (Teoriya Resheniya Izobretatelskikh Zadach, the theory of inventive problem solving) to derive prey-inspired solutions by resolving contradictions. First, it develops a 3-step process to facilitate the interdomain transfer of concepts from nature to the cloud. Moreover, TRIZ's generic approach suggests specific solutions for cloud computing survivability. Then, the thesis presents the conceptual prey-inspired cloud computing survivability framework (Pi-CCSF), built upon the TRIZ-derived solutions. The framework's run-time is pushed to user space to support evolving survivability design goals. Furthermore, a target-based decision-making technique (TBDM) is proposed to manage survivability decisions. To evaluate the prey-inspired survivability concept, a Pi-CCSF simulator is developed and implemented. Evaluation results show that escalating survivability actions improve the vitality of vulnerable and compromised virtual machines (VMs) by 5% and dramatically improve their overall survivability. Hypothesis testing conclusively supports the hypothesis that the escalation mechanisms can be applied to enhance the survivability of cloud computing systems. Numerical analysis of TBDM shows that, by considering survivability preferences and attitudes (which directly impact survivability actions), the TBDM method brings unpredictable survivability information closer to decision processes. This enables efficient execution of variable escalating survivability actions, allowing Pi-CCSF's decision system (DS) to focus on decisions that achieve survivability outcomes under the unpredictability imposed by UUURs.
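    Pi-CCSF and TBDM are described above only at a conceptual level; purely as an illustration of selecting an escalating survivability action from a VM's vitality deficit and assessed threat, weighted by a decision attitude, consider the hypothetical sketch below. Every name, threshold, and the action ladder itself is an assumption, not the framework's specification.

```python
from dataclasses import dataclass

# Illustrative escalation ladder; action names are assumptions.
ACTIONS = ["monitor", "throttle", "isolate", "migrate", "restore_from_snapshot"]

@dataclass
class VMState:
    vitality: float      # 0.0 (failed) .. 1.0 (fully healthy)
    threat_level: float  # 0.0 (benign) .. 1.0 (confirmed compromise)

def escalate(vm: VMState, target_vitality: float = 0.8, attitude: float = 0.5) -> str:
    """Choose a survivability action, escalating with the VM's vitality deficit
    relative to its target and with the assessed threat, weighted by a decision
    'attitude' (risk tolerance) in the spirit of a target-based scheme."""
    deficit = max(0.0, target_vitality - vm.vitality)
    urgency = attitude * vm.threat_level + (1.0 - attitude) * deficit
    index = min(int(urgency * len(ACTIONS)), len(ACTIONS) - 1)
    return ACTIONS[index]

print(escalate(VMState(vitality=0.35, threat_level=0.9)))  # -> "migrate"
```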

    Research theme reports from April 1, 2019 - March 31, 2020


    A Conceptual Model for Quality 4.0 Deployment in U.S. Based Manufacturing Firms

    Manufacturing is currently undergoing a fourth industrial revolution, referred to as Industry 4.0, enabled by digital technologies and advances in our ability to collect and use data. Quality 4.0 is the application of Industry 4.0 to enhance the quality function within an organization. Quality practitioners are uniquely positioned within organizations and already possess data application skillsets. Despite a perception, shared by a majority of industry, that Quality 4.0 will be critical to future success, most companies have not attempted to implement a Quality 4.0 strategy, and those that have report very low rates of success. The goal of this study was to understand the challenges and key factors behind implementation of a Quality 4.0 system and to develop a model for implementation, highlighting those key factors. The model was developed through literature review, case study analysis, and expert interviews. The model indicated that four main constructs exist in Quality 4.0 deployment: digital strategy, enabling factors, methodologies, and technology. A top-level strategy should be developed to address key technology development themes as well as nontechnical business process themes. Strategy should then be executed in the domain of enabling factors and methodologies, with a clear technology application serving as the output. A successful Quality 4.0 implementation will use the technology application to drive tangible quality improvement activities that add value to the business.