    Calculating energy storage due to topological changes in emerging active region NOAA AR 11112

    The Minimum Current Corona (MCC) model provides a way to estimate stored coronal energy using the number of field lines connecting regions of positive and negative photospheric flux. This information is quantified by the net flux connecting pairs of opposing regions in a connectivity matrix. Changes in the coronal magnetic field, due to processes such as magnetic reconnection, manifest themselves as changes in the connectivity matrix. However, the connectivity matrix will also change when flux sources emerge or submerge through the photosphere, as often happens in active regions. We have developed an algorithm to estimate the changes in flux due to emergence and submergence of magnetic flux sources. These estimated changes must be accounted for in order to quantify storage and release of magnetic energy in the corona. To perform this calculation over extended periods of time, we must additionally have a consistently labeled connectivity matrix over the entire observational time span. We have therefore developed an automated tracking algorithm to generate a consistent connectivity matrix as the photospheric source regions evolve over time. We have applied this method to NOAA Active Region 11112, which underwent a GOES M2.9 class flare around 19:00 on Oct. 16th, 2010, and calculated a lower bound on the free magnetic energy buildup of ~8.25 x 10^30 ergs over 3 days.

    Comment: 36 pages, 14 figures. Published in 2012 ApJ, 749, 64. Published version available at http://stacks.iop.org/0004-637X/749/64 Animation available at http://solar.physics.montana.edu/tarrl/data/AR11112.mp
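    The connectivity-matrix bookkeeping described in this abstract can be illustrated with a minimal sketch. The region labels, flux values, and helper functions below are hypothetical, chosen only to show how entry-by-entry changes are separated from changes in total source flux (the actual MCC analysis works from observed magnetograms):

    ```python
    # Minimal sketch of connectivity-matrix bookkeeping (hypothetical values).
    # M[i][j] = net flux connecting positive source i to negative source j.

    def total_source_flux(M):
        """Total flux of each positive (row) and negative (column) source."""
        pos = {i: sum(row.values()) for i, row in M.items()}
        neg = {}
        for row in M.values():
            for j, f in row.items():
                neg[j] = neg.get(j, 0.0) + f
        return pos, neg

    def connectivity_change(M0, M1):
        """Per-entry change between two times; nonzero entries reflect some
        mix of reconnection and flux emergence/submergence."""
        keys = {(i, j) for M in (M0, M1) for i in M for j in M[i]}
        return {(i, j): M1.get(i, {}).get(j, 0.0) - M0.get(i, {}).get(j, 0.0)
                for i, j in keys}

    # Two snapshots of a toy region (units arbitrary):
    M_t0 = {"P1": {"N1": 5.0, "N2": 1.0}, "P2": {"N2": 3.0}}
    M_t1 = {"P1": {"N1": 4.0, "N2": 2.0}, "P2": {"N2": 3.5}}

    dM = connectivity_change(M_t0, M_t1)
    pos0, _ = total_source_flux(M_t0)
    pos1, _ = total_source_flux(M_t1)
    # P1's total flux is unchanged, so its entry changes look like
    # reconnection; P2's total grew, indicating emerged flux.
    ```

    A real implementation must also keep source labels consistent across snapshots, which is exactly the role of the tracking algorithm the abstract describes.
    
    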

    A multiphase seismic investigation of the shallow subduction zone, southern North Island, New Zealand

    The shallow structure of the Hikurangi margin, in particular the interface between the Australian Plate and the subducting Pacific Plate, is investigated using the traveltimes of direct and converted seismic phases from local earthquakes. Mode conversions take place as upgoing energy from earthquakes in the subducted slab crosses the plate interface. These PS and SP converted arrivals are observed as intermediate phases between the direct P and S waves. They place an additional constraint on the depth of the interface and enable the topography of the subducted plate to be mapped across the region. 301 suitable earthquakes were recorded by the Leeds (Tararua) broad-band seismic array, a temporary line of three-component short-period stations, and the permanent stations of the New Zealand national network. This provided coverage across the land area of southern North Island, New Zealand, at a total of 17 stations. Rays are traced through a structure parametrized using layered B-splines and the traveltime residuals inverted, simultaneously, for hypocentre relocation, interface depth and seismic velocity. The results are consistent with sediment in the northeast of the study region and gentle topography on the subducting plate. This study and recent tectonic reconstructions of the southwest Pacific suggest that the subducting plate consists of captured oceanic crust. The anomalous nature of this crust partly accounts for the unusual features of the Hikurangi margin, e.g. the shallow trench, in comparison with the subducting margin further north.
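    Why converted phases constrain interface depth can be seen in a deliberately simplified vertical-incidence sketch. The velocities below are assumed illustrative values, and real analyses trace rays through a 3-D model rather than a stack of two layers:

    ```python
    # Simplified vertical-incidence traveltime sketch (assumed velocities).
    # A station sits directly above an earthquake in the subducted slab;
    # the plate interface lies between them.

    VP_SLAB, VS_SLAB = 8.0, 4.6    # km/s below the interface (assumed)
    VP_CRUST, VS_CRUST = 6.0, 3.5  # km/s above the interface (assumed)

    def phase_times(eq_depth_km, interface_depth_km):
        """Traveltimes (s) of direct P, the PS and SP conversions at the
        interface, and direct S, for a vertically travelling ray."""
        below = eq_depth_km - interface_depth_km  # path below interface
        above = interface_depth_km                # path above interface
        tP  = below / VP_SLAB + above / VP_CRUST
        tS  = below / VS_SLAB + above / VS_CRUST
        tPS = below / VP_SLAB + above / VS_CRUST  # P below, S above
        tSP = below / VS_SLAB + above / VP_CRUST  # S below, P above
        return tP, tPS, tSP, tS

    tP, tPS, tSP, tS = phase_times(eq_depth_km=40.0, interface_depth_km=20.0)
    # Both conversions arrive between direct P and direct S, and their
    # delays relative to P depend on where the interface sits.
    ```

    Because tPS and tSP shift as the interface depth changes while the direct phases shift differently, the intermediate arrivals add the independent constraint on interface depth that the abstract exploits.
    
    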

    The Jasper Framework: Towards a Platform Independent, Formal Treatment of Web Programming

    This paper introduces Jasper, a web programming framework which allows web applications to be developed in an essentially platform independent manner and which is also suited to a formal treatment. It outlines Jasper conceptually and shows how Jasper is implemented on several commonplace platforms. It also introduces the Jasper Music Store, a web application powered by Jasper and implemented on each of these platforms. Finally, it briefly describes a formal treatment and outlines the planned tools and languages that will allow this treatment to be automated.

    Comment: In Proceedings WWV 2012, arXiv:1210.5783. Added doi references where possible

    Automatic Data and Computation Mapping for Distributed-Memory Machines

    Distributed memory parallel computers offer enormous computation power, scalability and flexibility. However, these machines are difficult to program and this limits their widespread use. An important characteristic of these machines is the difference in the access time for data in local versus non-local memory; non-local memory accesses are much slower than local memory accesses. This is also a characteristic of shared memory machines, but to a lesser degree. Therefore it is essential that, as far as possible, the data that needs to be accessed by a processor during the execution of the computation assigned to it reside in its local memory rather than in some other processor's memory. Several research projects have concluded that proper mapping of data is key to realizing the performance potential of distributed memory machines. Current language design efforts such as Fortran D and High Performance Fortran (HPF) are based on this. It is our thesis that for many practical codes, it is possible to derive good mappings through a combination of algorithms and systematic procedures. We view mapping as consisting of two phases, alignment followed by distribution. For the alignment phase we present three constraint-based methods: one based on a linear programming formulation of the problem; the second formulates the alignment problem as a constrained optimization problem using Lagrange multipliers; the third method uses a heuristic to decide which constraints to leave unsatisfied (based on the penalty of increased communication incurred in doing so) in order to find a mapping. In addressing the distribution phase, we have developed two methods that integrate the placement of computation--loop nests in our case--with the mapping of data. For one distributed dimension, our approach finds the best combination of data and computation mapping that results in low communication overhead; this is done by choosing a loop order that allows message vectorization.
    In the second method, we introduce the distribution preference graph, and the operations on this graph allow us to integrate loop restructuring transformations and data mapping. These techniques produce mappings that have been used in efficient hand-coded implementations of several benchmark codes.
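    The cost model underlying the distribution phase can be sketched with a toy owner-computes example. The block-distribution helper and the shifted-access loop below are illustrative assumptions, not the thesis's actual algorithms, but they show why the choice of distribution determines how many non-local accesses a loop nest incurs:

    ```python
    # Toy sketch of block distribution and owner-computes communication
    # counting (illustrative model; Fortran D / HPF compilers use far
    # richer analyses).

    def block_owner(i, n, p):
        """Processor owning element i of an n-element array that is
        block-distributed over p processors."""
        block = (n + p - 1) // p  # ceiling division for block size
        return i // block

    def nonlocal_accesses(n, p, shift):
        """Count iterations of the loop A[i] = B[i + shift] in which B's
        element lives on a different processor than A's, assuming both
        arrays have identical block distributions and the owner of A[i]
        executes iteration i."""
        return sum(1 for i in range(n - shift)
                   if block_owner(i, n, p) != block_owner(i + shift, n, p))

    # With a unit shift, only the elements at block boundaries require
    # communication -- one per inter-processor boundary:
    count = nonlocal_accesses(n=16, p=4, shift=1)
    ```

    In this model an aligned distribution keeps boundary traffic to one element per processor pair, which is exactly the kind of quantity an alignment/distribution heuristic trades off against load balance.
    
    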