
    Ruthenium/Iridium Ratios in the Cretaceous-Tertiary Boundary Clay: Implications for Global Dispersal and Fractionation Within the Ejecta Cloud

    Ruthenium (Ru) and iridium (Ir) are the least mobile platinum group elements (PGEs) within the Cretaceous-Tertiary (K-T) boundary clay (BC). The Ru/Ir ratio is, therefore, the most useful PGE interelement ratio for distinguishing terrestrial and extraterrestrial contributions to the BC. The Ru/Ir ratio of marine K-T sections (1.77 +/- 0.53) is statistically different from that of the continental sections (0.93 +/- 0.28). The marine Ru/Ir ratios are chondritic (C1 = 1.48 +/- 0.09), but the continental ratios are not. We discovered an inverse correlation between shocked quartz size (or distance from the impact site) and the Ru/Ir ratio. This correlation may arise from the difference in Ru and Ir vaporization temperatures and/or fractionation during condensation from the ejecta cloud. Postsedimentary alteration, remobilization, or terrestrial PGE input may be responsible for the Ru/Ir ratio variations within the groups of marine and continental sites studied. The marine ratios could also be attained if approximately 15 percent of the boundary metals were contributed by Deccan Trap emissions. However, volcanic emissions could not have been the principal source of the PGEs in the BC because mantle PGE ratios and abundances are inconsistent with those measured in the clay. The Ru/Ir values for pristine Tertiary mantle xenoliths (2.6 +/- 0.48), picrites (4.1 +/- 1.8), and Deccan Trap basalt (3.42 +/- 1.96) are all statistically distinct from those measured in the K-T BC.
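    A simple two-component mixing calculation illustrates the roughly 15 percent Deccan Trap scenario mentioned above. The sketch below is a minimal illustration only, assuming the quoted fraction applies to the iridium budget so that the mixed Ru/Ir ratio is an Ir-weighted average of the chondritic and Deccan end-member ratios quoted in the abstract.

        # Minimal two-component mixing sketch (hypothetical weighting assumption):
        # if a fraction f of the boundary-clay Ir is volcanic, the mixed Ru/Ir
        # ratio is the Ir-weighted average of the end-member ratios.
        ru_ir_chondrite = 1.48   # C1 chondritic Ru/Ir (from the abstract)
        ru_ir_deccan = 3.42      # Deccan Trap basalt Ru/Ir (from the abstract)

        def mixed_ru_ir(f_volcanic):
            """Ru/Ir of a mixture in which a fraction f_volcanic of the Ir is volcanic."""
            return f_volcanic * ru_ir_deccan + (1.0 - f_volcanic) * ru_ir_chondrite

        print(mixed_ru_ir(0.15))  # ~1.77, close to the mean marine Ru/Ir ratio quoted above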

    The 30-kW ammonia arcjet technology

    The technical results of a 30 kW class ammonia propellant arcjet technology program are summarized. Evaluation of previous arcjet thruster performance, including materials analysis of used thruster components, led to the design of an arcjet with improved performance and thermal characteristics. Tests of the new engine demonstrated that engine performance is relatively insensitive to cathode tip geometry. Other data suggested a maximum sustainable arc length for a given thruster configuration, beyond which the arc may reconfigure in a destructive manner. A flow controller calibration error was identified; it caused previously reported values of specific impulse and thrust efficiency to be 20 percent higher than the real values. Corrected arcjet performance data are given. Duration tests of 413 and 252 hours, and several tests of 100 hours in duration, were performed. The cathode tip erosion rate increased with increasing arc current. Elimination of power source ripple did not affect cathode tip whisker growth. Results of arcjet modeling, diagnostic development, and mission analyses are also discussed. The 30 kW ammonia arcjet may now be considered ready for development for a flight demonstration, but widespread application of 30 kW class arcjets will require improved efficiency and lifetime.
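    To see how a flow controller calibration error propagates into the reported performance figures, the sketch below works through the standard definitions of specific impulse and thrust efficiency. It is a generic illustration with assumed thrust, power, and flow values, not the program's actual correction procedure.

        # How an under-reported propellant flow rate inflates derived performance
        # (generic illustration; the thrust, power, and flow values are assumed).
        G0 = 9.80665            # standard gravity, m/s^2

        def specific_impulse(thrust_n, mdot_kg_s):
            """Isp = F / (mdot * g0), in seconds."""
            return thrust_n / (mdot_kg_s * G0)

        def thrust_efficiency(thrust_n, mdot_kg_s, power_w):
            """eta = F^2 / (2 * mdot * P), jet power over input power."""
            return thrust_n ** 2 / (2.0 * mdot_kg_s * power_w)

        thrust = 2.0                      # N (assumed)
        power = 30_000.0                  # W (assumed)
        mdot_true = 0.25e-3               # kg/s, actual flow (assumed)
        mdot_reported = mdot_true / 1.2   # controller under-reads the flow by 20%

        # Using the erroneous flow rate makes both derived figures ~20% too high:
        print(specific_impulse(thrust, mdot_reported), specific_impulse(thrust, mdot_true))
        print(thrust_efficiency(thrust, mdot_reported, power), thrust_efficiency(thrust, mdot_true, power))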

    Variational Deep Semantic Hashing for Text Documents

    As the amount of textual data has been rapidly increasing over the past decade, efficient similarity search methods have become a crucial component of large-scale information retrieval systems. A popular strategy is to represent original data samples by compact binary codes through hashing. A spectrum of machine learning methods have been utilized, but they often lack the expressiveness and flexibility needed to learn effective representations. The recent advances of deep learning in a wide range of applications have demonstrated its capability to learn robust and powerful feature representations for complex data. In particular, deep generative models naturally combine the expressiveness of probabilistic generative models with the high capacity of deep neural networks, which is very suitable for text modeling. However, little work has leveraged the recent progress in deep learning for text hashing. In this paper, we propose a series of novel deep document generative models for text hashing. The first proposed model is unsupervised, while the second one is supervised by utilizing document labels/tags for hashing. The third model further considers document-specific factors that affect the generation of words. The probabilistic generative formulation of the proposed models provides a principled framework for model extension, uncertainty estimation, simulation, and interpretability. Based on variational inference and reparameterization, the proposed models can be interpreted as encoder-decoder deep neural networks and thus they are capable of learning complex nonlinear distributed representations of the original documents. We conduct a comprehensive set of experiments on four public testbeds. The experimental results demonstrate the effectiveness of the proposed supervised learning models for text hashing.
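    The encoder-decoder interpretation mentioned above can be made concrete with a small variational autoencoder over bag-of-words vectors, where binary hash codes are obtained by thresholding the latent mean. The sketch below is a minimal PyTorch illustration of that general idea, not the paper's actual architecture; the layer sizes and the median-thresholding rule are assumptions.

        import torch
        import torch.nn as nn

        class VDSHSketch(nn.Module):
            """Minimal VAE-style document hashing sketch (illustrative, not the paper's model)."""
            def __init__(self, vocab_size=10000, hidden=500, code_bits=32):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
                self.mu = nn.Linear(hidden, code_bits)       # latent mean
                self.logvar = nn.Linear(hidden, code_bits)   # latent log-variance
                self.dec = nn.Linear(code_bits, vocab_size)  # reconstructs word logits

            def forward(self, bow):                          # bow: (batch, vocab_size) word counts
                h = self.enc(bow)
                mu, logvar = self.mu(h), self.logvar(h)
                z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
                return self.dec(z), mu, logvar

            def hash_codes(self, bow):
                """Binarize the latent mean around its per-bit median to get compact binary codes."""
                mu = self.mu(self.enc(bow))
                return (mu > mu.median(dim=0).values).to(torch.uint8)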

    Fluid Inclusion Petrography and Microthermometry of the Middle Valley Hydrothermal System, Northern Juan de Fuca Ridge

    Middle Valley is a hydrothermally active, sediment-covered rift at the northernmost end of the Juan de Fuca Ridge. Two hydrothermal centers are known from previous work: (1) a 60-m-high sediment mound with a 35-m-high inactive sulfide mound and two 20-m-high sulfide mounds 330 m to the south, one of which is known to be active, and (2) several mounds with attendant active hydrothermal chimneys. These sites (Sites 856 and 858, respectively), as well as other adjacent areas (Sites 857 and 855), were drilled during Leg 139 of the Ocean Drilling Program. Fluid inclusion petrographic observations and microthermometric measurements were made on a variety of samples and minerals recovered from these cores: (1) quartz from hydrothermally altered sediment; (2) low-iron sphalerite and interstitial dolomite in massive sulfide; (3) calcite-sulfide veins cross-cutting sediment; (4) calcite and anhydrite concretions in sediment; (5) anhydrite veins cross-cutting sediment; and (6) wairakite and quartz veins cross-cutting mafic sills and sediment. Trapping temperatures of fluid inclusions in hydrothermal alteration minerals precipitated with massive sulfides range between 90° and 338°C. Fluid inclusions in calcite in carbonate concretions indicate these concretions formed between 112° and 192°C. Anhydrite in veins and concretions was precipitated between 137° and 311°C. Quartz-wairakite-epidote veins in mafic sills and hydrothermally altered sediment were precipitated between 210° and 350°C. For all inclusions, there is a general increase in minimum trapping temperatures with increasing subsurface depth for all sites, with temperatures ranging from around 100°C at 2400 meters below sea level (mbsl) to around 275°C at 3100 mbsl. Eutectic and hydrohalite melting temperatures indicate that Ca, Na, and Cl are the dominant ionic species present in the inclusion fluids. Salinities for most inclusion fluids range between 2.5 and 7.0 equivalent weight percent NaCl. Most analyses are between 3 and 4.5 eq. wt% NaCl and similar to ambient bottom water, pore fluids, and vent fluid from Site 858. Trapped fluids are modified seawater, and there is no evidence for a significant magmatic fluid component. Oxygen isotopic compositions for fluids from which calcite concretions were precipitated, calculated from isotopic analyses of carbonates formed at low temperatures (133° to 158°C from fluid inclusions), are significantly enriched in 18O (δ18O = +9.3‰ to +13.2‰), likely due to reaction with subsurface sediments at low water/rock ratios. Calcite that formed at higher temperatures (233°C) in hydrothermally altered sediment was precipitated from fluid only slightly enriched in 18O (δ18O = +0.4‰). Estimated carbon isotope compositions of the fluid vary between δ13C = -7.0‰ and -35.4‰ and are similar to the measured range for vent fluids.
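    The salinities quoted above (in eq. wt% NaCl) are conventionally derived from the final ice-melting temperature measured during microthermometry. The sketch below applies the widely used Bodnar (1993) freezing-point-depression equation as an illustration; it is not necessarily the calibration used in this study, and the example melting temperature is hypothetical.

        # Convert final ice-melting temperature (deg C, negative) to salinity in
        # eq. wt% NaCl using the Bodnar (1993) equation (illustrative calibration choice).
        def salinity_wt_pct_nacl(t_melt_ice_c):
            theta = -t_melt_ice_c  # freezing-point depression in degrees C
            return 1.78 * theta - 0.0442 * theta**2 + 0.000557 * theta**3

        # An inclusion with final ice melting at -2.2 deg C (hypothetical value):
        print(round(salinity_wt_pct_nacl(-2.2), 2))  # ~3.7 eq. wt% NaCl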

    The Case for Learned Index Structures

    Indexes are models: a B-Tree-Index can be seen as a model to map a key to the position of a record within a sorted array, a Hash-Index as a model to map a key to a position of a record within an unsorted array, and a BitMap-Index as a model to indicate if a data record exists or not. In this exploratory research paper, we start from this premise and posit that all existing index structures can be replaced with other types of models, including deep-learning models, which we term learned indexes. The key idea is that a model can learn the sort order or structure of lookup keys and use this signal to effectively predict the position or existence of records. We theoretically analyze under which conditions learned indexes outperform traditional index structures and describe the main challenges in designing learned index structures. Our initial results show that, by using neural nets, we are able to outperform cache-optimized B-Trees by up to 70% in speed while saving an order of magnitude in memory over several real-world data sets. More importantly, though, we believe that the idea of replacing core components of a data management system through learned models has far-reaching implications for future systems designs and that this work just provides a glimpse of what might be possible.
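    A range index of this kind can be sketched in a few lines: fit a model to approximate the key-to-position mapping over a sorted array, record the model's worst-case prediction error, and finish each lookup with a bounded local search. The code below is a minimal illustration using a single linear model; the paper's recursive model index and neural variants are more elaborate, so treat this as a sketch of the general idea only.

        import bisect

        class LearnedIndexSketch:
            """Minimal learned index: a linear model over sorted keys plus an error bound."""
            def __init__(self, sorted_keys):
                self.keys = sorted_keys
                n = len(sorted_keys)
                # Least-squares fit of position ~ slope * key + intercept (1-D closed form).
                mean_k = sum(sorted_keys) / n
                mean_p = (n - 1) / 2
                cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(sorted_keys))
                var = sum((k - mean_k) ** 2 for k in sorted_keys) or 1.0
                self.slope = cov / var
                self.intercept = mean_p - self.slope * mean_k
                # The worst-case prediction error bounds the local search window.
                self.max_err = max(abs(self._predict(k) - i) for i, k in enumerate(sorted_keys))

            def _predict(self, key):
                return int(round(self.slope * key + self.intercept))

            def lookup(self, key):
                guess = self._predict(key)
                lo = max(0, guess - self.max_err)
                hi = min(len(self.keys), guess + self.max_err + 1)
                i = bisect.bisect_left(self.keys, key, lo, hi)   # bounded binary search
                return i if i < len(self.keys) and self.keys[i] == key else None

        keys = sorted(range(0, 1000, 3))
        idx = LearnedIndexSketch(keys)
        assert idx.lookup(999) == keys.index(999)
        assert idx.lookup(500) is None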

    An Abstraction-Based Framework for Neural Network Verification

    Deep neural networks are increasingly being used as controllers for safety-critical systems. Because neural networks are opaque, certifying their correctness is a significant challenge. To address this issue, several neural network verification approaches have recently been proposed. However, these approaches afford limited scalability, and applying them to large networks can be challenging. In this paper, we propose a framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network, thus making it more amenable to verification. We perform the approximation such that if the property holds for the smaller (abstract) network, it holds for the original as well. The over-approximation may be too coarse, in which case the underlying verification tool might return a spurious counterexample. Under such conditions, we perform counterexample-guided refinement to adjust the approximation, and then repeat the process. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For evaluation purposes, we integrate it with the recently proposed Marabou framework, and observe a significant improvement in Marabou's performance. Our experiments demonstrate the great potential of our approach for verifying larger neural networks.
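    The counterexample-guided abstraction refinement loop described above can be summarized in a few lines of Python. The function parameters (abstract, verify, is_spurious, refine) are placeholders for whatever abstraction scheme and verification back end (e.g., Marabou) is plugged in; this is a generic sketch, not the paper's API.

        def verify_with_abstraction(network, prop, abstract, verify, is_spurious, refine):
            """Generic CEGAR-style loop: verify an over-approximated network, refining on
            spurious counterexamples, until the property is proved or truly violated."""
            abstract_net = abstract(network)          # smaller, over-approximated network
            while True:
                result, counterexample = verify(abstract_net, prop)
                if result == "HOLDS":
                    return "HOLDS"                    # holds on the abstraction => holds on the original
                if not is_spurious(network, prop, counterexample):
                    return ("VIOLATED", counterexample)  # genuine counterexample on the original net
                # Spurious: the abstraction was too coarse around this counterexample.
                abstract_net = refine(abstract_net, network, counterexample)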

    Deglaciation of Fennoscandia

    To provide a new reconstruction of the deglaciation of the Fennoscandian Ice Sheet, in the form of calendar-year time-slices, which are particularly useful for ice sheet modelling, we have compiled and synthesized published geomorphological data for eskers, ice-marginal formations, lineations, marginal meltwater channels, striae, ice-dammed lakes, and geochronological data from radiocarbon, varve, optically-stimulated luminescence, and cosmogenic nuclide dating. This is summarized as a deglaciation map of the Fennoscandian Ice Sheet with isochrons marking every 1000 years between 22 and 13 cal kyr BP and every hundred years between 11.6 cal kyr BP and final ice decay after 9.7 cal kyr BP. Deglaciation patterns vary across the Fennoscandian Ice Sheet domain, reflecting differences in climatic and geomorphic settings as well as ice sheet basal thermal conditions and terrestrial versus marine margins. For example, the ice sheet margin in the high-precipitation coastal setting of the western sector responded sensitively to climatic variations, leaving a detailed record of prominent moraines and ice-marginal deposits in many fjords and coastal valleys. Retreat rates across the southern sector differed between slow retreat of the terrestrial margin in western and southern Sweden and rapid retreat of the calving ice margin in the Baltic Basin. Our reconstruction is consistent with much of the published research. However, the synthesis of a large amount of existing and new data supports refined reconstructions in some areas. For example, we locate the LGM extent of the ice sheet in northwestern Russia further east than previously suggested and conclude that it was reached at a later time than in the rest of the ice sheet, at around 17-15 cal kyr BP, and we propose a slightly different chronology of moraine formation over southern Sweden based on improved correlations of moraine segments using new LiDAR data and tying the timing of moraine formation to Greenland ice core cold stages. Retreat rates vary by as much as an order of magnitude in different sectors of the ice sheet, with the lowest rates on the high-elevation and maritime Norwegian margin. Retreat rates compared to the climatic information provided by the Greenland ice core record show a general correspondence between retreat rate and climatic forcing, although a close match between retreat rate and climate is unlikely because of other controls, such as topography and marine versus terrestrial margins. Overall, the time-slice reconstructions of Fennoscandian Ice Sheet deglaciation from 22 to 9.7 cal kyr BP provide an important dataset for understanding the contexts that underpin spatial and temporal patterns in the retreat of the Fennoscandian Ice Sheet, and are an important resource for testing and refining ice sheet models.
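    Retreat rates of the kind compared above follow directly from the isochron reconstruction: the distance between successive dated margin positions divided by the time between them. A minimal sketch with hypothetical distance and age values:

        # Mean margin retreat rate between two dated ice-margin positions
        # (distance and isochron ages are hypothetical illustration values).
        def retreat_rate_m_per_yr(distance_km, age_older_cal_kyr_bp, age_younger_cal_kyr_bp):
            return (distance_km * 1000.0) / ((age_older_cal_kyr_bp - age_younger_cal_kyr_bp) * 1000.0)

        print(retreat_rate_m_per_yr(50.0, 13.0, 12.0))  # 50 m/yr between the 13 and 12 cal kyr BP isochrons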

    The Challenge of Machine Learning in Space Weather Nowcasting and Forecasting

    The numerous recent breakthroughs in machine learning (ML) make it imperative to carefully ponder how the scientific community can benefit from a technology that, although not necessarily new, is today living its golden age. This Grand Challenge review paper is focused on the present and future role of machine learning in space weather. The purpose is twofold. On one hand, we will discuss previous works that use ML for space weather forecasting, focusing in particular on the few areas that have seen most activity: the forecasting of geomagnetic indices, of relativistic electrons at geosynchronous orbits, of solar flare occurrence, of coronal mass ejection propagation time, and of solar wind speed. On the other hand, this paper serves as a gentle introduction to the field of machine learning tailored to the space weather community and as a pointer to a number of open challenges that we believe the community should undertake in the next decade. The recurring themes throughout the review are the need to shift our forecasting paradigm to a probabilistic approach focused on the reliable assessment of uncertainties, and the combination of physics-based and machine learning approaches, known as the gray-box approach.
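    The shift toward probabilistic forecasting mentioned above is usually evaluated with scores that reward well-calibrated probabilities rather than binary hits and misses. The sketch below computes the Brier score for a set of probabilistic event forecasts (for example, daily flare probabilities); the numbers are made up for illustration.

        def brier_score(probabilities, outcomes):
            """Mean squared difference between forecast probabilities and observed outcomes (0 or 1).
            Lower is better; 0 corresponds to a perfect deterministic forecast."""
            return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

        # Hypothetical daily event probabilities and what actually happened (1 = event occurred).
        forecast_probs = [0.1, 0.7, 0.3, 0.9, 0.2]
        observed = [0, 1, 0, 1, 1]
        print(brier_score(forecast_probs, observed))  # 0.168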

    Reconstruction of three-dimensional porous media using generative adversarial neural networks

    To evaluate the variability of multi-phase flow properties of porous media at the pore scale, it is necessary to acquire a number of representative samples of the void-solid structure. While modern X-ray computed tomography has made it possible to extract three-dimensional images of the pore space, assessment of the variability in the inherent material properties is often experimentally not feasible. We present a novel method to reconstruct the solid-void structure of porous media by applying a generative neural network that allows an implicit description of the probability distribution represented by three-dimensional image datasets. We show, by using an adversarial learning approach for neural networks, that this method of unsupervised learning is able to generate representative samples of porous media that honor their statistics. We successfully compare measures of pore morphology, such as the Euler characteristic, two-point statistics, and directional single-phase permeability of synthetic realizations with the calculated properties of a bead pack, Berea sandstone, and Ketton limestone. Results show that generative adversarial networks (GANs) can be used to reconstruct high-resolution three-dimensional images of porous media at different scales that are representative of the morphology of the images used to train the neural network. The fully convolutional nature of the trained neural network allows the generation of large samples while maintaining computational efficiency. Compared to classical stochastic methods of image reconstruction, the implicit representation of the learned data distribution can be stored and reused to generate multiple realizations of the pore structure very rapidly.
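    A fully convolutional 3D generator of the kind described above can be sketched compactly in PyTorch: transposed 3D convolutions upsample a latent noise volume into a solid/void image, and because the network is fully convolutional a larger latent volume yields a larger output sample. The layer count and channel widths below are assumptions for illustration, not the paper's architecture.

        import torch
        import torch.nn as nn

        class Generator3D(nn.Module):
            """Minimal fully convolutional 3D GAN generator sketch (illustrative sizes)."""
            def __init__(self, latent_channels=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.ConvTranspose3d(latent_channels, 128, kernel_size=4, stride=2, padding=1),
                    nn.BatchNorm3d(128), nn.ReLU(inplace=True),
                    nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1),
                    nn.BatchNorm3d(64), nn.ReLU(inplace=True),
                    nn.ConvTranspose3d(64, 1, kernel_size=4, stride=2, padding=1),
                    nn.Tanh(),  # grayscale solid/void values in [-1, 1]
                )

            def forward(self, z):          # z: (batch, latent_channels, d, h, w) noise volume
                return self.net(z)

        g = Generator3D()
        sample = g(torch.randn(1, 64, 4, 4, 4))   # -> (1, 1, 32, 32, 32) voxel volume
        larger = g(torch.randn(1, 64, 8, 8, 8))   # fully convolutional: bigger latent volume, bigger sample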

    Robustness Verification of Support Vector Machines

    We study the problem of formally verifying the robustness to adversarial examples of support vector machines (SVMs), a major machine learning model for classification and regression tasks. Following a recent stream of works on formal robustness verification of (deep) neural networks, our approach relies on a sound abstract version of a given SVM classifier to be used for checking its robustness. This methodology is parametric on a given numerical abstraction of real values and, analogously to the case of neural networks, needs neither abstract least upper bounds nor widening operators on this abstraction. The standard interval domain provides a simple instantiation of our abstraction technique, and it is enhanced with the domain of reduced affine forms, an efficient abstraction of the zonotope abstract domain. This robustness verification technique has been fully implemented and experimentally evaluated on SVMs based on linear and nonlinear (polynomial and radial basis function) kernels, which have been trained on the popular MNIST dataset of images and on the recent and more challenging Fashion-MNIST dataset. The experimental results of our prototype SVM robustness verifier are encouraging: this automated verification is fast, scalable, and shows high percentages of provable robustness on the test set of MNIST, in particular compared to the analogous provable robustness of neural networks.
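    For the special case of a linear SVM, the interval-style certification idea above has a closed form: over an L-infinity ball of radius epsilon around an input, the decision value w.x + b varies by at most epsilon times the L1 norm of w, so a sample is provably robust whenever that slack cannot flip the sign. The sketch below illustrates this with made-up weights; it is a simplification of the paper's abstract-interpretation framework, which also handles nonlinear kernels.

        def certify_linear_svm(w, b, x, epsilon):
            """Return True if the linear SVM sign(w.x + b) provably keeps the same class
            for every perturbation of x bounded by epsilon in the L-infinity norm."""
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            slack = epsilon * sum(abs(wi) for wi in w)   # worst-case change of the score
            return abs(score) > slack                    # the sign cannot flip within the ball

        # Hypothetical weights and input:
        w, b = [0.8, -0.5, 0.1], 0.2
        x = [1.0, 0.3, -2.0]
        print(certify_linear_svm(w, b, x, epsilon=0.05))  # True: robust within this small ball
        print(certify_linear_svm(w, b, x, epsilon=0.60))  # False: the bound cannot rule out a flip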