    SPARCS: Stream-processing architecture applied in real-time cyber-physical security

    In this paper, we showcase a complete, end-to-end, fault-tolerant, bandwidth- and latency-optimized architecture for real-time utilization of data from multiple sources that allows the collection, transport, storage, processing, and display of both raw data and analytics. This architecture can be applied to a wide variety of applications ranging from automation/control to monitoring and security. We propose a practical, hierarchical design that allows easy addition and reconfiguration of software and hardware components, while using local processing of data at the sensor or field-site level ('fog computing') to reduce latency and upstream bandwidth requirements. The system supports multiple fail-safe mechanisms to guarantee the delivery of sensor data. We describe the application of this architecture to cyber-physical security (CPS) by supporting security monitoring of an electric distribution grid, through the collection and analysis of distribution-grid-level phasor measurement unit (PMU) data as well as Supervisory Control and Data Acquisition (SCADA) communication in the control area network.
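    As a rough illustration of the fog-computing idea described above, the following Python sketch shows a hypothetical field-site node that aggregates raw sensor samples locally, forwards compact summaries upstream, and spools to local disk when the upstream link is unavailable. The window size, spool file, sample format, and send_upstream placeholder are assumptions for illustration, not part of the SPARCS implementation.

    # Hypothetical fog-node stage: aggregate samples locally, forward summaries
    # upstream, and fall back to a local spool file so no sensor data is lost.
    import json
    import statistics
    from collections import deque
    from pathlib import Path

    WINDOW_SIZE = 30                   # samples per local aggregation window (assumed)
    SPOOL_FILE = Path("spool.jsonl")   # local fail-safe buffer (assumed)

    def summarize(window):
        """Reduce a window of raw readings to a small summary record."""
        values = [s["value"] for s in window]
        return {
            "t_start": window[0]["ts"],
            "t_end": window[-1]["ts"],
            "mean": statistics.fmean(values),
            "min": min(values),
            "max": max(values),
            "n": len(values),
        }

    def send_upstream(record):
        """Placeholder for the upstream transport (e.g., a message broker)."""
        raise ConnectionError("upstream unavailable")  # simulate an outage

    def run(sample_source):
        """Consume samples of the form {"ts": <unix time>, "value": <float>}."""
        window = deque(maxlen=WINDOW_SIZE)
        for sample in sample_source:
            window.append(sample)
            if len(window) == WINDOW_SIZE:
                summary = summarize(list(window))
                try:
                    send_upstream(summary)
                except ConnectionError:
                    # Fail-safe: spool locally so a separate process can
                    # replay the summaries once the link recovers.
                    with SPOOL_FILE.open("a") as f:
                        f.write(json.dumps(summary) + "\n")
                window.clear()

    Summarizing each window at the field site is what trades upstream bandwidth for a small amount of edge computation, and the spool file stands in for the kind of fail-safe delivery mechanism the abstract refers to.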

    Designed-in security for cyber-physical systems

    An expert from academia, one from a cyber-physical system (CPS) provider, and one from an end asset owner and user offer their different perspectives on the meaning and challenges of 'designed-in security.' The academic highlights foundational issues and discusses emerging technology that can help us design and implement secure software in CPSs. The vendor's view includes components of the academic view but emphasizes the secure system development process and the standards that the system must satisfy. The user issues a call to action and offers ideas that will ensure progress.

    Anomaly Detection for Science DMZs Using System Performance Data

    Science DMZs are specialized networks that enable large-scale distributed scientific research, providing efficient and guaranteed performance while transferring large amounts of data at high rates. The high-speed performance of a Science DMZ depends on its data transfer nodes (DTNs), which therefore become a critical point of failure. DTNs are usually monitored with network intrusion detection systems (NIDS). However, NIDS do not consider system performance data, such as network I/O interrupts and context switches, which can also be useful in revealing anomalous system performance potentially arising from external network-based attacks or insider attacks. In this paper, we demonstrate how system performance metrics can be applied to securing a DTN in a Science DMZ network. Specifically, we evaluate the effectiveness of system performance data in detecting TCP-SYN flood attacks on a DTN using DBSCAN (a density-based clustering algorithm) for anomaly detection. Our results demonstrate that system interrupts and context switches can be used to successfully detect TCP-SYN floods, suggesting that system performance data could be effective in detecting a variety of attacks not easily detected through network monitoring alone.
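    As a concrete but hypothetical version of the approach described above, the sketch below uses scikit-learn's DBSCAN to flag outlying samples of two system performance metrics, interrupts per second and context switches per second. The synthetic data, feature choice, and eps/min_samples values are assumptions for illustration, not the paper's experimental setup.

    # Points that DBSCAN labels as noise (-1) are treated as potential anomalies,
    # e.g. a TCP-SYN flood driving interrupt and context-switch rates far above baseline.
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Synthetic stand-in for monitored DTN metrics: normal load plus a short
    # burst with abnormally high interrupt and context-switch rates.
    normal = rng.normal(loc=[2000, 5000], scale=[150, 300], size=(500, 2))
    attack = rng.normal(loc=[9000, 20000], scale=[400, 800], size=(8, 2))
    samples = np.vstack([normal, attack])

    # Scale both metrics so a single eps radius applies comparably to each.
    X = StandardScaler().fit_transform(samples)

    # eps and min_samples would need tuning against real baseline data.
    labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X)

    anomalies = np.flatnonzero(labels == -1)
    print(f"{len(anomalies)} of {len(samples)} samples flagged as anomalous")

    Because the burst samples lie far from the dense cluster of normal load and are too few to form their own cluster, DBSCAN leaves them unlabeled as noise, which is the signal an operator would alert on.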

    Big Data and Analysis of Data Transfers for International Research Networks Using NetSage

    Modern science is increasingly data-driven and collaborative in nature. Many scientific disciplines, including genomics, high-energy physics, astronomy, and atmospheric science, produce petabytes of data that must be shared with collaborators all over the world. The National Science Foundation-supported International Research Network Connection (IRNC) links have been essential to enabling this collaboration, but as data sharing has increased, so has the amount of information being collected to understand network performance. New capabilities to measure and analyze the performance of international wide-area networks are essential to ensure end users are able to take full advantage of such infrastructure for their big data applications. NetSage is a project to develop a unified, open, privacy-aware network measurement and visualization service to address the needs of monitoring today's high-speed international research networks. NetSage collects data on both backbone links and exchange points, which can amount to as much as 1 Tb per month. This puts a significant strain on hardware, not only in terms of the storage needed to hold multi-year historical data, but also in terms of the processor and memory needed to analyze the data and understand network behaviors. This paper describes the basic NetSage architecture and its current data collection and archiving approach, and details the constraints of handling vast amounts of monitoring data while providing useful, extensible visualization to end users.
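    To make the storage pressure concrete, the following sketch (not NetSage's actual pipeline) rolls frequent per-link rate samples up into hourly aggregates, the kind of downsampling that lets multi-year history be archived and visualized at a fraction of the raw volume. The sample format and link name are assumptions.

    # Hypothetical rollup: collapse frequent per-link samples into hourly records.
    from collections import defaultdict
    from datetime import datetime, timezone

    def hour_bucket(ts):
        """Truncate a UNIX timestamp to the start of its hour (UTC)."""
        dt = datetime.fromtimestamp(ts, tz=timezone.utc)
        return dt.replace(minute=0, second=0, microsecond=0)

    def rollup(samples):
        """samples: iterable of (timestamp, link_id, bits_per_second) tuples."""
        buckets = defaultdict(list)
        for ts, link, bps in samples:
            buckets[(link, hour_bucket(ts))].append(bps)
        return {
            key: {"avg_bps": sum(v) / len(v), "max_bps": max(v), "samples": len(v)}
            for key, v in buckets.items()
        }

    # Three 5-minute samples for one link collapse into a single hourly record.
    raw = [
        (1_600_000_000, "link-A", 4.0e9),
        (1_600_000_300, "link-A", 6.0e9),
        (1_600_000_600, "link-A", 9.5e9),
    ]
    print(rollup(raw))

    Keeping only averages, maxima, and sample counts per hour preserves what long-term dashboards need while the full-resolution data can be aged out.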

    Resolving the Unexpected in Elections: Election Officials' Options

    This paper seeks to assist election officials and their lawyers in effectively handling technical issues that can be difficult to understand and analyze, allowing them to protect themselves and the public interest from unfair accusations, inaccuracies in results, and conspiracy theories. The paper helps empower officials to recognize which types of voting system events and indicators need a more structured analysis, and what steps to take to set up the evaluations (or forensic assessments) using computer experts.