5,812 research outputs found

    Low-Resolution Fault Localization Using Phasor Measurement Units with Community Detection

    A significant portion of the literature on fault localization assumes (more or less explicitly) that there are sufficient reliable measurements to guarantee that the system is observable. While several heuristics exist to break the observability barrier, they mostly rely on recognizing spatio-temporal patterns, without giving insight into how performance is tied to the system features and the sensor deployment. In this paper, we try to fill this gap and investigate the limitations and performance limits of fault localization using Phasor Measurement Units (PMUs) in the low-measurement regime, i.e., when the system is unobservable with the available measurements. Our main contribution is to show how one can leverage the scarce measurements to localize different types of distribution line faults (three-phase, single-phase to ground, ...) at the level of a sub-graph, rather than with the resolution of a single line. We show that the resolution we obtain is strongly tied to the notion of graph clustering in network science. Comment: Accepted at IEEE SmartGridComm 2018 Conference
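    The sub-graph resolution idea can be illustrated with a minimal sketch: if we treat PMU-equipped buses as the only points where a fault's side can be distinguished, then a fault can be narrowed down only to the connected sub-graph of unmonitored buses containing it. The topology, PMU placement, and partitioning rule below are illustrative assumptions, a simplified stand-in rather than the paper's community-detection method.

    ```python
    # Simplified illustration (not the paper's algorithm): removing the
    # PMU-equipped buses splits the grid into connected components of
    # unmonitored buses; each component is the coarsest "sub-graph"
    # resolution at which a fault inside it can be localized.
    from collections import defaultdict

    def fault_resolution_subgraphs(edges, pmu_buses):
        """Connected components of the graph after removing PMU buses."""
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        unmonitored = set(adj) - set(pmu_buses)
        seen, components = set(), []
        for start in sorted(unmonitored):
            if start in seen:
                continue
            stack, comp = [start], set()
            while stack:
                n = stack.pop()
                if n in comp:
                    continue
                comp.add(n)
                stack.extend(m for m in adj[n]
                             if m in unmonitored and m not in comp)
            seen |= comp
            components.append(sorted(comp))
        return components

    # Toy 9-bus feeder with PMUs at buses 3 and 6 (hypothetical placement).
    edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (3, 8), (8, 9)]
    print(fault_resolution_subgraphs(edges, pmu_buses={3, 6}))
    # -> [[1, 2], [4, 5], [7], [8, 9]]
    ```

    Fewer PMUs merge components and coarsen the resolution, which is consistent with the abstract's point that localization granularity is governed by how the graph clusters around the sensors.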

    SPARCS: Stream-processing architecture applied in real-time cyber-physical security

    In this paper, we showcase a complete, end-to-end, fault-tolerant, bandwidth- and latency-optimized architecture for real-time utilization of data from multiple sources that allows the collection, transport, storage, processing, and display of both raw data and analytics. This architecture can be applied to a wide variety of applications, ranging from automation/control to monitoring and security. We propose a practical, hierarchical design that allows easy addition and reconfiguration of software and hardware components, while utilizing local processing of data at the sensor or field-site ('fog computing') level to reduce latency and upstream bandwidth requirements. The system supports multiple fail-safe mechanisms to guarantee the delivery of sensor data. We describe the application of this architecture to cyber-physical security (CPS) by supporting security monitoring of an electric distribution grid, through the collection and analysis of distribution-grid-level phasor measurement unit (PMU) data, as well as Supervisory Control And Data Acquisition (SCADA) communication in the control area network.

    Designed-in security for cyber-physical systems

    An expert from academia, one from a cyber-physical system (CPS) provider, and one from an end asset owner and user offer their different perspectives on the meaning and challenges of 'designed-in security.' The academic highlights foundational issues and discusses emerging technology that can help us design and implement secure software in CPSs. The vendor's view includes components of the academic view but emphasizes the secure system development process and the standards that the system must satisfy. The user issues a call to action and offers ideas that will ensure progress.

    Anomaly Detection for Science DMZs Using System Performance Data

    Science DMZs are specialized networks that enable large-scale distributed scientific research, providing efficient and guaranteed performance while transferring large amounts of data at high rates. The high-speed performance of a Science DMZ is made viable via data transfer nodes (DTNs), making them a critical point of failure. DTNs are usually monitored with network intrusion detection systems (NIDS). However, NIDS do not consider system performance data, such as network I/O interrupts and context switches, which can also be useful in revealing anomalous system performance potentially arising from external network-based attacks or insider attacks. In this paper, we demonstrate how system performance metrics can be applied to securing a DTN in a Science DMZ network. Specifically, we evaluate the effectiveness of system performance data in detecting TCP-SYN flood attacks on a DTN using DBSCAN (a density-based clustering algorithm) for anomaly detection. Our results demonstrate that system interrupts and context switches can be used to successfully detect TCP-SYN floods, suggesting that system performance data could be effective in detecting a variety of attacks not easily detected through network monitoring alone.
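    The core of the density-based approach can be sketched in a few lines: normal per-second counter readings form a dense cluster, while a SYN flood drives interrupts and context switches far outside it, leaving those samples with too few neighbors. The sketch below implements only the noise-flagging step of DBSCAN (ignoring border-point reachability), and the counter values and thresholds are illustrative assumptions, not the paper's measurements.

    ```python
    # Simplified DBSCAN-style noise detection on (interrupts/s,
    # context switches/s) samples: a point is flagged anomalous if its
    # eps-neighborhood (including itself) holds fewer than min_pts samples.
    # This omits DBSCAN's border-point rule for brevity.
    import math

    def dbscan_noise(points, eps, min_pts):
        """Return indices of points whose eps-neighborhood is too sparse."""
        noise = []
        for i, p in enumerate(points):
            neighbors = sum(1 for q in points if math.dist(p, q) <= eps)
            if neighbors < min_pts:
                noise.append(i)
        return noise

    # Hypothetical per-second counters: steady load, then a flood burst.
    samples = [(1000, 5000), (1020, 5100), (990, 4950), (1010, 5050),
               (9000, 45000)]  # last sample taken during a simulated SYN flood
    print(dbscan_noise(samples, eps=300.0, min_pts=3))  # -> [4]
    ```

    In practice one would normalize the features and tune eps/min_pts on attack-free traffic, since raw interrupt and context-switch counts live on very different scales.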

    Priests and Prophets Negro Ministers and Civil Rights (An Investigation of a Cleveland Sample)

    In the summer of 1966 the racial conflict in the U.S.A. saw a new polarization. The most militant groups in the Negro Revolt joined in a call for 'Black Power', a call that emerged on the Mississippi march in July; for them, 'Black Power' has succeeded the former slogan of 'Freedom Now'. The concept of Black Power is not, however, a completely new phenomenon. It has grown out of the ferment of agitation and activity by different people and organizations in many black communities over the years.

    Big Data and Analysis of Data Transfers for International Research Networks Using NetSage

    Modern science is increasingly data-driven and collaborative in nature. Many scientific disciplines, including genomics, high-energy physics, astronomy, and atmospheric science, produce petabytes of data that must be shared with collaborators all over the world. The National Science Foundation-supported International Research Network Connection (IRNC) links have been essential to enabling this collaboration, but as data sharing has increased, so has the amount of information being collected to understand network performance. New capabilities to measure and analyze the performance of international wide-area networks are essential to ensure end users are able to take full advantage of such infrastructure for their big data applications. NetSage is a project to develop a unified, open, privacy-aware network measurement and visualization service to address the needs of monitoring today's high-speed international research networks. NetSage collects data on both backbone links and exchange points, which can amount to as much as 1Tb per month. This puts a significant strain on hardware, not only in terms of the storage needed to hold multi-year historical data, but also in terms of the processor and memory capacity needed to analyze the data to understand network behaviors. This paper addresses the basic NetSage architecture, its current data collection and archiving approach, and details the constraints of dealing with this big data problem of handling vast amounts of monitoring data, while providing useful, extensible visualization to end users.