Big Data and Analysis of Data Transfers for International Research Networks Using NetSage
Modern science is increasingly data-driven and collaborative in nature. Many scientific disciplines, including genomics, high-energy physics, astronomy, and atmospheric science, produce petabytes of data that must be shared with collaborators all over the world. The National Science Foundation-supported International Research Network Connection (IRNC) links have been essential to enabling this collaboration, but as data sharing has increased, so has the amount of information being collected to understand network performance. New capabilities to measure and analyze the performance of international wide-area networks are essential to ensure end users are able to take full advantage of such infrastructure for their big data applications. NetSage is a project to develop a unified, open, privacy-aware network measurement and visualization service to address the needs of monitoring today's high-speed international research networks. NetSage collects data on both backbone links and exchange points, which can amount to as much as 1 Tb per month. This puts a significant strain on hardware, not only in terms of the storage needed to hold multi-year historical data, but also in terms of the processor and memory capacity needed to analyze the data to understand network behaviors. This paper describes the basic NetSage architecture and its current data collection and archiving approach, and details the constraints of handling vast amounts of monitoring data while providing useful, extensible visualization to end users.
Phasor Measurement Units Optimal Placement and Performance Limits for Fault Localization
In this paper, the performance limits of fault localization are investigated using synchrophasor data. The focus is on a non-trivial operating regime in which the number of Phasor Measurement Unit (PMU) sensors available is insufficient for full observability of the grid state. The proposed analysis uses the Kullback-Leibler (KL) divergence between the distributions corresponding to different fault location hypotheses under the observation model. This analysis shows that the most likely locations are concentrated in clusters of buses more tightly connected to the actual fault site, akin to graph communities. Consequently, a PMU placement strategy is derived that achieves a near-optimal resolution for localizing faults with a given number of sensors. The problem is also analyzed from the perspective of sampling a graph signal, examining how the placement of the PMUs, i.e., the spatial sampling pattern, and the topological characteristics of the grid affect the ability to localize faults successfully. To highlight the superior performance of the presented fault localization and placement algorithms, the proposed strategy is applied to modified IEEE-34 and IEEE-123 bus test cases and to data from a real distribution grid. Additionally, the detection of cyber-physical attacks is examined, where PMU data and relevant Supervisory Control and Data Acquisition (SCADA) network traffic information are compared to determine whether a network breach has affected the integrity of the system information and/or operations.
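The role of the KL divergence in this kind of hypothesis discrimination can be illustrated with a minimal sketch. This is not the paper's actual observation model; it simply assumes each candidate fault location induces a Gaussian distribution over the PMU measurements (hypothetical means and covariance below), so that a small KL divergence between two hypotheses means the corresponding fault sites are hard to tell apart from data, i.e., they fall in the same cluster of likely locations.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL divergence D(N(mu0,cov0) || N(mu1,cov1)) between multivariate Gaussians."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0)
                  + diff @ inv1 @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Hypothetical mean phasor deviations seen by two PMUs under faults at
# two candidate buses, with identical measurement-noise covariance.
mu_a = np.array([1.0, 0.2])   # fault at bus A
mu_b = np.array([0.9, 0.25])  # fault at a nearby bus B
cov = 0.05 * np.eye(2)

# For equal covariances the KL divergence reduces to a Mahalanobis-type
# distance between the hypothesis means; a small value indicates the two
# fault sites are nearly indistinguishable from these sensors.
print(kl_gaussian(mu_a, cov, mu_b, cov))  # → 0.125
```

Under this reading, a good PMU placement is one that keeps the pairwise divergences between distinct fault hypotheses large.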
Modeling and Analyzing Faults to Improve Election Process Robustness
This paper presents an approach for continuous process improvement and illustrates its application to improving the robustness of election processes. In this approach, the Little-JIL process definition language is used to create a precise and detailed model of an election process. Given this process model and a potential undesirable event, or hazard, a fault tree is automatically derived. Fault tree analysis is then used to automatically identify combinations of failures that might allow the selected potential hazard to occur. Once these combinations have been identified, we iteratively improve the process model to increase the robustness of the election process against those combinations that seem the most likely to occur.
We demonstrate this approach for the Yolo County election process. We focus our analysis on the ballot-counting process and on what happens when a discrepancy is found during the count. We identify two single points of failure (SPFs) in this process and propose process modifications that we then show remove these SPFs.
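The fault-tree step can be sketched abstractly. The tree, gate structure, and event names below are hypothetical toy examples, not the actual Little-JIL-derived model; the sketch only shows the general principle that a basic event whose lone failure triggers the top-level hazard (a size-one minimal cut set) is a single point of failure.

```python
# A toy fault tree: the top hazard occurs if the tally is wrong, OR if
# the recount is skipped AND the discrepancy log is lost.
# Internal nodes are ("AND", ...) / ("OR", ...); leaves are event names.
TREE = ("OR",
        "wrong_tally",
        ("AND", "recount_skipped", "log_lost"))

def leaves(node):
    """Collect the basic (leaf) events of the tree."""
    if isinstance(node, str):
        return {node}
    return set().union(*(leaves(child) for child in node[1:]))

def occurs(node, failed):
    """Evaluate whether the top event occurs given a set of failed basic events."""
    if isinstance(node, str):
        return node in failed
    results = [occurs(child, failed) for child in node[1:]]
    return all(results) if node[0] == "AND" else any(results)

def single_points_of_failure(tree):
    """Basic events whose failure alone is enough to trigger the hazard."""
    return sorted(e for e in leaves(tree) if occurs(tree, {e}))

print(single_points_of_failure(TREE))  # → ['wrong_tally']
```

A process improvement that removes an SPF corresponds to restructuring the tree so the offending event sits under an AND gate, e.g., requiring an independent check to fail as well.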
Radiation hardness qualification of PbWO4 scintillation crystals for the CMS Electromagnetic Calorimeter
This is the pre-print version of the article. The official published version can be accessed from the link below. Copyright @ 2010 IOP.

Ensuring the radiation hardness of PbWO4 crystals was one of the main priorities during the construction of the electromagnetic calorimeter of the CMS experiment at CERN. The production of radiation-hard crystals on an industrial scale and their certification over a period of several years represented a difficult challenge both for CMS and for the crystal suppliers. The present article reviews the related scientific and technological problems encountered.
Search for the standard model Higgs boson in the H to ZZ to 2l 2nu channel in pp collisions at sqrt(s) = 7 TeV
A search for the standard model Higgs boson in the H to ZZ to 2l 2nu decay channel, where l = e or mu, in pp collisions at a center-of-mass energy of 7 TeV is presented. The data were collected at the LHC with the CMS detector and correspond to an integrated luminosity of 4.6 inverse femtobarns. No significant excess is observed above the background expectation, and upper limits are set on the Higgs boson production cross section. The presence of the standard model Higgs boson with a mass in the 270-440 GeV range is excluded at 95% confidence level.

Comment: Submitted to JHE