
    On Large-Scale Graph Generation with Validation of Diverse Triangle Statistics at Edges and Vertices

    Researchers developing implementations of distributed graph analytic algorithms require graph generators that yield graphs sharing the challenging characteristics of real-world graphs (small-world, scale-free, heavy-tailed degree distribution) together with efficiently calculable ground-truth solutions for the desired output. Reproducibility for current generators used in benchmarking is somewhat lacking in this respect due to their randomness: the output of a desired graph analytic can only be compared to expected values, not to an exact ground truth. Nonstochastic Kronecker product graphs meet these design criteria for several graph analytics. Here we show that many flavors of triangle participation can be cheaply calculated while generating a Kronecker product graph. Given two medium-sized scale-free graphs with adjacency matrices A and B, their Kronecker product graph has adjacency matrix C = A ⊗ B. Such graphs are highly compressible: |E| edges are represented in O(|E|^{1/2}) memory and can be built in a distributed setting from small data structures, making them easy to share in compressed form. Many interesting graph calculations have worst-case complexity bounds O(|E|^p), and these are often reduced to O(|E|^{p/2}) for Kronecker product graphs when a Kronecker formula can be derived yielding the sought calculation on C in terms of related calculations on A and B. We focus on deriving formulas for triangle participation at vertices, t_C, a vector storing the number of triangles that every vertex is involved in, and triangle participation at edges, Δ_C, a sparse matrix storing the number of triangles at every edge.
    Comment: 10 pages, 7 figures, IEEE IPDPS Graph Algorithms Building Block
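    One simple Kronecker formula of the kind the abstract describes follows from the standard mixed-product property (A ⊗ B)^3 = A^3 ⊗ B^3: vertex triangle counts of the product graph factor into a Kronecker product of the factors' counts. The sketch below illustrates this on toy factor graphs (two triangles K3, chosen here for illustration; the paper's generators are scale-free):

    ```python
    import numpy as np

    # Toy factor graphs (hypothetical, not the paper's scale-free generators):
    # two copies of the triangle K3, so the product graph contains triangles.
    A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
    B = A.copy()

    C = np.kron(A, B)  # adjacency matrix of the Kronecker product graph

    def triangles_at_vertices(M):
        # t[v] = number of triangles containing v = (M^3)_{vv} / 2
        return np.diag(M @ M @ M) // 2

    t_A, t_B, t_C = map(triangles_at_vertices, (A, B, C))

    # Mixed-product property (A (x) B)^3 = A^3 (x) B^3 gives a Kronecker
    # formula for vertex triangle participation: t_C = 2 * (t_A (x) t_B)
    assert np.array_equal(t_C, 2 * np.kron(t_A, t_B))

    # Triangle participation at edges: Delta[u,v] = (C^2)_{uv} wherever C[u,v] = 1
    Delta_C = (C @ C) * C
    ```

    The point of such formulas is that the O(|E|^{3/2})-style work happens on the small factors A and B, never on the materialized C.
    
    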

    An Ensemble Framework for Detecting Community Changes in Dynamic Networks

    Dynamic networks, especially those representing social networks, undergo constant evolution of their community structure over time. Nodes can migrate between communities, communities can split into multiple new communities, communities can merge, and so on. To represent dynamic networks with evolving communities, it is essential to use a dynamic model rather than a static one. Here we use a dynamic stochastic block model in which the underlying block model differs at different times. To capture the structural changes expressed by this dynamic model, the network is split into discrete time segments and a clustering algorithm assigns block memberships for each segment. In this paper we show that using an ensemble of clustering assignments compensates for the variance in scalable clustering algorithms and produces superior results in terms of pairwise precision and pairwise recall. We also demonstrate that the dynamic clustering produced by the ensemble can be visualized as a flowchart which encapsulates the community evolution succinctly.
    Comment: 6 pages, under submission to the HPEC Graph Challenge
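    Pairwise precision and recall score a clustering by treating every co-clustered node pair as a prediction against the ground-truth block memberships. A minimal sketch (the four-node labels are a made-up example, not data from the paper):

    ```python
    from itertools import combinations

    def pairwise_precision_recall(true_labels, pred_labels):
        """Compare a predicted clustering against ground-truth memberships
        by counting node pairs placed in the same community."""
        idx = range(len(true_labels))
        true_pairs = {(i, j) for i, j in combinations(idx, 2)
                      if true_labels[i] == true_labels[j]}
        pred_pairs = {(i, j) for i, j in combinations(idx, 2)
                      if pred_labels[i] == pred_labels[j]}
        tp = len(true_pairs & pred_pairs)   # correctly co-clustered pairs
        precision = tp / len(pred_pairs) if pred_pairs else 1.0
        recall = tp / len(true_pairs) if true_pairs else 1.0
        return precision, recall

    # Toy example: one node misassigned between two communities
    p, r = pairwise_precision_recall([0, 0, 1, 1], [0, 0, 0, 1])
    ```

    Averaging such scores over an ensemble of clustering runs is one way to quantify the run-to-run variance the abstract refers to.
    
    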

    Pricing Weather Derivatives

    This paper presents a general method for pricing weather derivatives. Specification tests find that a temperature series for Fresno, California follows a mean-reverting Brownian motion process with discrete jumps and ARCH errors. Based on this process, we define an equilibrium pricing model for cooling degree day weather options. Comparing option prices estimated with three methods (a traditional burn-rate approach, a Black-Scholes-Merton approximation, and an equilibrium Monte Carlo simulation) reveals significant differences. Equilibrium prices are preferred on theoretical grounds, so they are used to demonstrate the usefulness of weather derivatives as risk management tools for California specialty crop growers.
    Keywords: derivative, jump-diffusion process, mean-reversion, volatility, weather, Demand and Price Analysis
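    A plain Monte Carlo valuation of a cooling-degree-day (CDD) call can be sketched as follows. This is a generic simulation of a mean-reverting temperature process with discrete jumps, not the paper's equilibrium model, and every parameter below (mean-reversion speed, volatility, jump sizes, strike, tick) is hypothetical rather than a Fresno estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical parameters -- NOT the paper's Fresno estimates
    kappa, theta, sigma = 0.3, 78.0, 2.5   # mean-reversion speed, long-run mean (F), daily vol
    lam, mu_j, sig_j = 0.05, 0.0, 6.0      # jump probability per day, jump-size distribution
    n_days, n_paths = 92, 20_000           # one summer season
    base, strike, tick, r = 65.0, 1000.0, 1.0, 0.05

    temp = np.full(n_paths, theta)
    cdd = np.zeros(n_paths)
    for _ in range(n_days):
        jump = (rng.random(n_paths) < lam) * rng.normal(mu_j, sig_j, n_paths)
        temp = temp + kappa * (theta - temp) + sigma * rng.standard_normal(n_paths) + jump
        cdd += np.maximum(temp - base, 0.0)          # accumulate cooling degree days
    payoff = tick * np.maximum(cdd - strike, 0.0)    # CDD call option payoff
    price = np.exp(-r * n_days / 365) * payoff.mean()
    ```

    The burn-rate approach the abstract mentions would instead average the payoff over historical seasons; the simulation simply replaces history with sampled paths of the fitted process.
    
    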

    WEATHER DERIVATIVES: MANAGING RISK WITH MARKET-BASED INSTRUMENTS

    Accurate pricing of weather derivatives is critically dependent upon correct specification of the underlying weather process. We test among six likely alternative processes using maximum likelihood methods and data from the Fresno, CA weather station. Using these data, we find that the best process is a mean-reverting geometric Brownian process with discrete jumps and ARCH errors. We describe a pricing model for weather derivatives based on such a process.
    Keywords: Risk and Uncertainty
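    Testing among candidate processes by maximum likelihood amounts to comparing fitted log-likelihoods, penalized for parameter count (e.g. by AIC). The sketch below compares just two candidates on simulated data with made-up parameters; it stands in for, and is much simpler than, the paper's six-way specification test:

    ```python
    import numpy as np

    def mr_loglik(x, kappa, theta, sigma):
        """Gaussian log-likelihood of a discretized mean-reverting process
        x[t+1] = x[t] + kappa*(theta - x[t]) + sigma*eps."""
        resid = x[1:] - (x[:-1] + kappa * (theta - x[:-1]))
        n = len(resid)
        return -0.5 * (n * np.log(2 * np.pi * sigma**2) + np.sum(resid**2) / sigma**2)

    # Simulate data from the mean-reverting process (hypothetical parameters)
    rng = np.random.default_rng(1)
    x = np.empty(2000); x[0] = 70.0
    for t in range(1999):
        x[t+1] = x[t] + 0.3 * (70.0 - x[t]) + 2.0 * rng.standard_normal()

    # Compare specifications at their (here: known) parameters via AIC = 2k - 2*loglik
    ll_mr = mr_loglik(x, 0.3, 70.0, 2.0)   # mean-reverting candidate (k = 3 params)
    ll_rw = mr_loglik(x, 0.0, 70.0, 2.0)   # random-walk candidate, kappa = 0 (k = 1 param)
    aic_mr, aic_rw = 2 * 3 - 2 * ll_mr, 2 * 1 - 2 * ll_rw
    ```

    In practice each candidate's parameters would be estimated by maximizing its own likelihood before the comparison; here the true generating values are plugged in for brevity.
    
    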

    Cotton Price Policy and New Cereal Technology in the Malian Cotton Zone

    During the last decade, cotton production and area have been declining as a result of depleted soil nutrients and low cotton prices in the cotton zone of Mali. This paper shows that the Malian government's 2011 policy of increasing the farm-gate cotton price in response to the world cotton price increase enhances farm income but has less impact on cotton than on maize production. A complementary policy of introducing new sorghum technologies would have an equal impact on farmers' incomes in the cotton zone of Mali.
    Keywords: cotton prices, improved sorghum technology, discrete stochastic programming, Mali, Agricultural and Food Policy, Farm Management, International Development, Production Economics, Risk and Uncertainty

    RS-88 Pad Abort Demonstrator Thrust Chamber Assembly Testing at NASA Marshall Space Flight Center

    This paper documents the effort conducted to collect hot-fire dynamic and acoustic environments data during 50,000-lb-thrust LOX/ethanol hot-fire rocket testing at NASA Marshall Space Flight Center (MSFC) in November-December 2003. This test program was conducted during development testing of the Boeing Rocketdyne RS-88 development engine thrust chamber assembly (TCA) in support of the Orbital Space Plane (OSP) Crew Escape System Propulsion (CESP) Program Pad Abort Demonstrator (PAD). In addition to numerous internal TCA and nozzle measurements, induced acoustic environments data were also collected. Provided here is an overview of test parameters, a discussion of the measurements, test facility systems, and test operations, and a quality assessment of the data collected during this test program.

    Space Launch System Mission Flexibility Assessment

    The Space Launch System (SLS) is envisioned as a heavy-lift vehicle that will provide the foundation for future beyond-low-Earth-orbit (LEO) missions. While multiple assessments have been performed to determine the optimal configuration for the SLS, this effort was undertaken to evaluate the flexibility of various concepts for the range of missions that may be required of this system. These mission scenarios include single-launch crew and/or cargo delivery to LEO, single-launch cargo delivery missions to LEO in support of multi-launch mission campaigns, and single-launch beyond-LEO missions. Specifically, we assessed options for the single-launch beyond-LEO mission scenario using a variety of in-space stages and vehicle staging criteria. This was performed to determine the most flexible (and perhaps optimal) method of designing this particular type of mission. A specific mission opportunity to the Jovian system was further assessed to determine potential solutions that may meet currently envisioned mission objectives. This application sought to significantly reduce mission cost by allowing for a direct, faster transfer from Earth to Jupiter and to determine the order-of-magnitude mass margin that would be made available from utilization of the SLS. In general, smaller existing stages provided performance comparable to larger new stage developments when the mission scenario allowed for optimal LEO drop-off orbits (e.g. highly elliptical staging orbits). Initial results using this method with early SLS configurations and existing upper stages showed the potential of capturing lunar flyby missions as well as providing significant mass delivery to a Jupiter transfer orbit.
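    The advantage of highly elliptical staging orbits can be illustrated with the vis-viva equation: an upper stage igniting at perigee of an elliptical orbit is already moving faster, so it needs far less delta-v to reach a given departure energy than one burning from circular LEO. The C3 value and orbit sizes below are hypothetical round numbers, not SLS or mission-study figures:

    ```python
    import math

    MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

    def vis_viva(r, a):
        """Orbital speed (km/s) at radius r on an orbit with semi-major axis a."""
        return math.sqrt(MU * (2.0 / r - 1.0 / a))

    r_leo = 6578.0   # 200 km circular parking orbit (Earth radius + altitude, km)
    c3 = 80.0        # hypothetical departure energy (km^2/s^2) for a fast Jupiter transfer
    v_dep = math.sqrt(c3 + 2.0 * MU / r_leo)   # speed needed at that radius for the given C3

    # Burn from circular LEO
    dv_leo = v_dep - vis_viva(r_leo, r_leo)

    # Burn from perigee of a hypothetical 200 km x 100,000 km staging orbit
    r_apo = 106378.0
    dv_ell = v_dep - vis_viva(r_leo, (r_leo + r_apo) / 2.0)
    ```

    With these numbers the elliptical start roughly halves the upper stage's delta-v requirement, which is why a smaller existing stage can match a larger new one when the booster provides the elliptical drop-off orbit.
    
    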

    NDVI With Artificial Neural Networks For SRTM Elevation Model Improvement – Hydrological Model Application

    A digital elevation model (DEM) plays a substantial role in hydrological studies, from understanding catchment characteristics and setting up a hydrological model to mapping flood risk in a region. Depending on the nature of a study and its objectives, a high-resolution, reliable DEM is often desired to set up a sound hydrological model. However, such a DEM is not always available and is generally high-priced. Obtained through radar-based remote sensing, the Shuttle Radar Topography Mission (SRTM) is a publicly available DEM with a resolution of 92 m outside the US. It is a great source of elevation data where no surveyed DEM is available. However, apart from its coarse resolution, SRTM suffers from inaccuracy, especially in areas with dense vegetation cover, because radar signals cannot penetrate the canopy. This leads to improper model setup as well as erroneous flood-risk mapping. This paper attempts to improve the SRTM dataset using the Normalised Difference Vegetation Index (NDVI), derived from the visible red and near-infrared bands of Landsat imagery at 30 m resolution, and artificial neural networks (ANNs). The assessment of the improvement and the applicability of this method in hydrology are highlighted and discussed.