
    Sr-Nd-Pb-Hf isotope results from ODP Leg 187: Evidence for mantle dynamics of the Australian-Antarctic Discordance and origin of the Indian MORB source

    New high-precision PIMMS Hf and Pb isotope data for 14–28 Ma basalts recovered during ODP Leg 187 are compared with zero-age dredge samples from the Australian-Antarctic Discordance (AAD). These new data show that combined Nd-Hf isotope systematics can be used as an effective discriminant between Indian and Pacific MORB-source mantle domains. In particular, Indian mantle is displaced to lower εNd and higher εHf values compared to Pacific mantle. As with Pb isotope plots, there is almost no overlap between the two mantle types in Nd-Hf isotope space. On the basis of our new Nd-Hf isotope data, we demonstrate that Pacific MORB-source mantle was present near the eastern margin of the AAD from as early as 28 Ma, its boundary with Indian MORB-source mantle coinciding with the eastern edge of a basin-wide arcuate depth anomaly that is centered on the AAD. This observation rules out models requiring rapid migration of Pacific MORB mantle into the Indian Ocean basin since the separation of Australia from Antarctica. Although temporal variations in isotopic composition can be discerned relative to the fracture zone boundary of the modern AAD at 127°E, the distribution of different compositional groups appears to have remained much the same relative to the position of the residual depth anomaly for the past 30 m.y. Thus significant lateral flow of mantle along the ridge axis toward the interface appears unlikely. Instead, the dynamics that maintain both the residual depth anomaly and the isotopic boundary between Indian and Pacific mantle are attributed to eastward migration of the Australian and Antarctic plates over a stagnant, but slowly upwelling, slab oriented roughly orthogonal to the ridge axis. Temporal and spatial variations in the compositions of Indian MORB within the AAD can be explained by progressive displacement of shallower Indian MORB-source mantle by deeper mantle with a higher εHf composition ascending ahead of the upwelling slab. Models for the origin of the distinctive composition of the Indian MORB source based on recycling of a heterogeneous enriched component consisting of ancient altered ocean crust plus <10% pelagic sediment are inconsistent with the Nd-Hf isotope systematics. Instead, the data can be explained by a model in which Indian mantle includes a significant proportion of material that was processed in the mantle wedge above a subduction zone and was subsequently mixed back into unprocessed upper mantle.
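
    The εNd and εHf values used throughout are the standard epsilon notation of isotope geochemistry: parts-per-10^4 deviations of a sample's measured ratio from the chondritic uniform reservoir (CHUR). The abstract does not define them, so the conventional formula is reproduced below for reference (shown for Hf; εNd is defined analogously from 143Nd/144Nd):

```latex
% Conventional epsilon notation (standard usage, not taken from this paper):
\varepsilon_{\mathrm{Hf}} =
  \left[
    \frac{\left({}^{176}\mathrm{Hf}/{}^{177}\mathrm{Hf}\right)_{\mathrm{sample}}}
         {\left({}^{176}\mathrm{Hf}/{}^{177}\mathrm{Hf}\right)_{\mathrm{CHUR}}} - 1
  \right] \times 10^{4}
```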

    Algorithms for Provisioning Queries and Analytics

    Provisioning is a technique for avoiding repeated expensive computations in what-if analysis. Given a query, an analyst formulates k hypotheticals, each retaining some of the tuples of a database instance, possibly overlapping, and she wishes to answer the query under scenarios, where a scenario is defined by a subset of the hypotheticals that are "turned on". We say that a query admits compact provisioning if, given any database instance and any k hypotheticals, one can create a poly-size (in k) sketch that can then be used to answer the query under any of the 2^k possible scenarios without accessing the original instance. In this paper, we focus on provisioning complex queries that combine relational algebra (the logical component), grouping, and statistics/analytics (the numerical component). We first show that queries that compute quantiles or linear regression (as well as simpler queries that compute count and sum/average of positive values) can be compactly provisioned to provide (multiplicative) approximate answers to arbitrary precision. In contrast, exact provisioning for each of these statistics requires the sketch size to be exponential in k. We then establish that for any complex query whose logical component is a positive relational algebra query, as long as the numerical component can be compactly provisioned, the complex query itself can be compactly provisioned. On the other hand, introducing negation or recursion in the logical component again requires the sketch size to be exponential in k. While our positive results use algorithms that do not access the original instance after a scenario is known, we prove our lower bounds even for the case when, knowing the scenario, limited access to the instance is allowed.
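
    To make the sketch interface concrete, the following Python fragment illustrates the exact-provisioning baseline for a count query (our illustration, not the paper's algorithm; it assumes a scenario's instance is the union of the tuples retained by its turned-on hypotheticals). It groups tuples by "signature" (the set of hypotheticals retaining them), which answers every scenario without touching the instance, but may need one counter per distinct signature, i.e., up to 2^k entries; avoiding exactly this blow-up by allowing multiplicative approximation is what the paper's positive results achieve.

```python
from collections import Counter

def build_count_sketch(instance, hypotheticals):
    """Exact sketch for count: one counter per distinct 'signature'
    (the set of hypotheticals retaining a tuple). The sketch can hold
    up to min(|instance|, 2^k) entries -- the exponential worst case
    the paper's lower bounds formalize."""
    sketch = Counter()
    for t in instance:
        sig = frozenset(i for i, h in enumerate(hypotheticals) if t in h)
        if sig:  # tuples retained by no hypothetical never appear
            sketch[sig] += 1
    return sketch

def count_under_scenario(sketch, scenario):
    """A tuple survives a scenario iff some turned-on hypothetical
    retains it (union semantics, an assumption in this sketch). The
    original instance is never consulted."""
    on = set(scenario)
    return sum(n for sig, n in sketch.items() if sig & on)

# Toy usage: 4 tuples, k = 2 overlapping hypotheticals.
instance = [1, 2, 3, 4]
hypotheticals = [{1, 2, 3}, {3, 4}]
sketch = build_count_sketch(instance, hypotheticals)
assert count_under_scenario(sketch, {0}) == 3     # tuples 1, 2, 3
assert count_under_scenario(sketch, {1}) == 2     # tuples 3, 4
assert count_under_scenario(sketch, {0, 1}) == 4  # the full union
```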

    The Semantic Grid: A future e-Science infrastructure

    e-Science offers a promising vision of how computer and communication technology can support and enhance the scientific process. It does this by enabling scientists to generate, analyse, share and discuss their insights, experiments and results in an effective manner. The underlying computer infrastructure that provides these facilities is commonly referred to as the Grid. At this time, a number of grid applications are being developed, and there is a whole raft of computer technologies that provide fragments of the necessary functionality. However, there is currently a major gap between these endeavours and the vision of e-Science, in which there is a high degree of easy-to-use and seamless automation and in which there are flexible collaborations and computations on a global scale. To bridge this practice-aspiration divide, this paper presents a research agenda whose aim is to move from the current state of the art in e-Science infrastructure to the future infrastructure that is needed to support the full richness of the e-Science vision. Here the future e-Science research infrastructure is termed the Semantic Grid; the relationship of the Semantic Grid to the Grid is meant to be analogous to that of the Semantic Web to the Web. In particular, we present a conceptual architecture for the Semantic Grid. This architecture adopts a service-oriented perspective in which distinct stakeholders in the scientific process, represented as software agents, provide services to one another, under various service level agreements, in various forms of marketplace. We then focus predominantly on the issues concerned with the way that knowledge is acquired and used in such environments, since we believe this is the key differentiator between current grid endeavours and those envisioned for the Semantic Grid.
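
    As a rough, non-authoritative rendering of that service-oriented perspective (every class and field name below is illustrative rather than drawn from the paper), agents offering services to one another under service level agreements in a marketplace might be skeletonized as follows:

```python
from dataclasses import dataclass, field

@dataclass
class SLA:
    """Terms under which a service is offered; the particular fields
    here are assumptions for illustration."""
    max_latency_s: float
    cost_per_call: float

@dataclass
class Agent:
    """A stakeholder in the scientific process, represented as a
    software agent that offers named services under SLAs."""
    name: str
    services: dict = field(default_factory=dict)  # service name -> SLA

    def offer(self, service, sla):
        self.services[service] = sla

class Marketplace:
    """One simple matching policy (cheapest acceptable offer) among
    the 'various forms of marketplace' the paper leaves open."""
    def __init__(self):
        self.providers = []

    def register(self, agent):
        self.providers.append(agent)

    def negotiate(self, service, budget):
        offers = [(a, a.services[service]) for a in self.providers
                  if service in a.services
                  and a.services[service].cost_per_call <= budget]
        return min(offers, key=lambda o: o[1].cost_per_call, default=None)

# Toy usage: a consumer locates a simulation service within budget.
market = Marketplace()
provider = Agent("simulation-provider")
provider.offer("protein-folding", SLA(max_latency_s=3600.0, cost_per_call=0.10))
market.register(provider)
match = market.negotiate("protein-folding", budget=0.50)
```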

    Doctor of Philosophy

    Serving as a record of what happened during a scientific process, often a computational one, provenance has become an important piece of computing. The importance of archiving not only data and results but also the lineage of these entities has led to a variety of systems that capture provenance, as well as models and schemas for this information. Despite significant work focused on obtaining and modeling provenance, there has been little work on managing and using this information. Using the provenance from past work, it is possible to mine common computational structure or determine differences between executions. Such information can be used to suggest possible completions for partial workflows, summarize a set of approaches, or extend past work in new directions. These applications require infrastructure to support efficient queries and accessible reuse. In order to support knowledge discovery and reuse from provenance information, the management of those data is important. One component of provenance is the specification of the computations; workflows provide structured abstractions of code and are commonly used for complex tasks. Using change-based provenance, it is possible to store large numbers of similar workflows compactly. This storage also allows efficient computation of differences between specifications. However, querying for specific structure across a large collection of workflows is difficult because comparing graphs depends on computing subgraph isomorphism, which is NP-complete. Graph indexing methods identify features that help distinguish the graphs of a collection in order to filter results for a subgraph containment query and reduce the number of subgraph isomorphism computations. For provenance, this work extends these methods to work for more exploratory queries and for collections with significant overlap. However, comparing workflow or provenance graphs may not require exact equality; a match between two graphs may allow paired nodes to be similar yet not equivalent. This work presents techniques to better correlate graphs to help summarize collections. Using this infrastructure, provenance can be reused so that users can learn from their own and others' history. Just as textual search has been augmented with suggested completions based on past or common queries, provenance can be used to suggest how computations can be completed or which steps might connect to a given subworkflow. In addition, provenance can help further science by accelerating publication and reuse. By incorporating provenance into publications, authors can more easily integrate their results, and readers can more easily verify and repeat results. However, reusing past computations requires maintaining stronger associations with any input data and underlying code, as well as providing paths for migrating old work to new hardware or algorithms. This work presents a framework for maintaining data and code as well as supporting upgrades for workflow computations.
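
    A minimal sketch of the change-based storage idea described above (our own illustrative code, not the dissertation's system): each workflow version records only the actions applied to its parent, so a large collection of similar workflows shares common history, and version-to-version differences are cheap to derive.

```python
class VersionTree:
    """Change-based provenance: versions store deltas, not full specs."""
    def __init__(self):
        self.parent = {0: None}  # version id -> parent version id
        self.actions = {0: []}   # version id -> [(op, item), ...]
        self._next = 1

    def commit(self, parent, actions):
        v = self._next
        self._next += 1
        self.parent[v] = parent
        self.actions[v] = actions
        return v

    def materialize(self, version):
        """Rebuild a full workflow spec by replaying deltas root-to-leaf."""
        chain = []
        while version is not None:
            chain.append(version)
            version = self.parent[version]
        spec = set()
        for v in reversed(chain):
            for op, item in self.actions[v]:
                if op == "add":
                    spec.add(item)
                else:
                    spec.discard(item)
        return spec

    def diff(self, v1, v2):
        """Modules unique to each version (a simple, not optimal, diff)."""
        a, b = self.materialize(v1), self.materialize(v2)
        return a - b, b - a

# Toy usage: two workflows sharing a common prefix of actions.
t = VersionTree()
base = t.commit(0, [("add", "read_csv"), ("add", "clean")])
v1 = t.commit(base, [("add", "plot_histogram")])
v2 = t.commit(base, [("add", "fit_regression")])
assert t.diff(v1, v2) == ({"plot_histogram"}, {"fit_regression"})
```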

    Big Data Analysis

    The value of big data is predicated on the ability to detect trends and patterns and, more generally, to make sense of large volumes of data that are often composed of a heterogeneous mix of formats, structures, and semantics. Big data analysis is the component of the big data value chain that focuses on transforming raw acquired data into a coherent, usable resource suitable for analysis. Drawing on a range of interviews with key stakeholders in small and large companies and academia, this chapter outlines key insights, the state of the art, emerging trends, future requirements, and sectorial case studies for data analysis.