
    Non-invasive progressive optimization for in-memory databases

    Progressive optimization introduces robustness for database workloads against wrong estimates, skewed data, correlated attributes, or outdated statistics. Previous work focuses on cardinality estimates and relies on expensive counting methods as well as complex learning algorithms. In this paper, we utilize performance counters to drive progressive optimization during query execution. The main advantages are that performance counters introduce virtually no cost on modern CPUs and enable non-invasive monitoring. We present fine-grained cost models to detect differences between estimated and actual costs, which enables us to kick-start reoptimization. Based on our cost models, we implement an optimization approach that efficiently estimates the individual selectivities of a multi-selection query. Furthermore, we are able to learn properties such as sortedness, skew, or correlation at run-time. In our evaluation we show that the overhead of our approach is negligible, while the performance improvements are convincing. Using progressive optimization, we improve run-time by up to a factor of three compared to average run-times and by up to a factor of 4.5 compared to worst-case run-times. As a result, we avoid costly operator execution orders and thus make query execution highly robust.
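    As a minimal sketch of the idea, not the authors' implementation: the fragment below uses per-predicate tuple counts on a small sample as a cheap stand-in for the hardware performance counters and then orders the selections of a multi-selection query by their observed rather than estimated selectivity. All names, the sampling step, and the reordering rule are illustrative assumptions.

```python
# Hypothetical sketch of progressive optimization for a multi-selection query.
# The paper drives re-optimization with CPU performance counters; here,
# per-predicate tuple counts on a small sample stand in for the measured costs.

def progressive_select(tuples, predicates, estimates, sample_size=1000):
    """predicates: dict name -> callable; estimates: dict name -> estimated selectivity."""
    data = list(tuples)
    sample = data[:sample_size]
    # Monitoring phase: measure actual selectivities on the sample.
    observed = {
        name: sum(1 for t in sample if pred(t)) / max(len(sample), 1)
        for name, pred in predicates.items()
    }
    # Re-optimization: evaluate the most selective predicate first,
    # falling back to the optimizer's estimate if nothing was observed.
    order = sorted(predicates, key=lambda n: observed.get(n, estimates.get(n, 1.0)))
    for name in order:
        pred = predicates[name]
        data = [t for t in data if pred(t)]
    return data
```

    Ordering the selections by measured selectivity is what avoids the costly operator execution orders mentioned above; a real system would refresh its counters continuously during execution rather than sampling once.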

    Modeling lithium-rich carbon stars in the Large Magellanic Cloud: an independent distance indicator?

    We present the first quantitative results explaining the presence in the Large Magellanic Cloud of some asymptotic giant branch stars that share the properties of lithium-rich carbon stars. A self-consistent description of time-dependent mixing, overshooting, and nuclear burning was required. We identify a narrow range of masses and luminosities for these peculiar stars. Comparison of these models with the luminosities of the few Li-rich C stars in the Large Magellanic Cloud provides an independent distance indicator for the LMC. Comment: 7 pages, 2 figures

    Quantifying rapid permafrost thaw with computer vision and graph theory

    With the Earth’s climate rapidly warming, the Arctic represents one of the most vulnerable regions to environmental change. Permafrost, as a key element of the Arctic system, stores vast amounts of organic carbon that can be microbially decomposed into the greenhouse gases CO2 and CH4 upon thaw. Extensive thawing of these permafrost soils therefore has potentially substantial consequences for greenhouse gas concentrations in the atmosphere. In addition, thaw of ice-rich permafrost lastingly alters the surface topography and thus the hydrology. Fires represent an important disturbance in boreal permafrost regions and increasingly also in tundra regions, as they combust the vegetation and upper organic soil layers that usually provide protective insulation to the permafrost below. Field studies and local remote sensing studies suggest that fire disturbances may trigger rapid permafrost thaw, with consequences often already observable in the first years post-disturbance. In polygonal ice-wedge landscapes, this becomes most prevalent through melting ice wedges and degrading troughs. The further these ice wedges degrade, the more troughs will likely connect and build an extensive hydrological network with changing patterns and degrees of connectivity that influence hydrology and runoff throughout large regions. While subsiding troughs over melting ice wedges may host new ponds, increasing connectivity may also subsequently lead to more drainage of ponds, which in turn can limit further thaw and help stabilize the landscape. Whereas fire disturbances may accelerate the initiation of this process, the general warming of permafrost observed across the Arctic will eventually result in widespread degradation of polygonal landscapes. To quantify the changes in such dynamic landscapes over large regions, remote sensing data offer a valuable resource. However, considering the vast and ever-growing volumes of Earth observation data available, highly automated methods are needed that allow extracting information on the geomorphic state of ice-wedge trough networks and on their changes over time. In this study, we investigate these changing landscapes and their environmental implications in fire scars in Northern and Western Alaska. We developed a computer vision algorithm to automatically extract ice-wedge polygonal networks and the microtopography of the degrading troughs from high-resolution, airborne laser-scanning-based digital terrain models (1 m spatial resolution; full-waveform Riegl Q680i LiDAR sensor). To derive information on the availability of surface water, we used optical and near-infrared aerial imagery at spatial resolutions of up to 5 cm captured by the Modular Aerial Camera System (MACS) developed by DLR. We represent the networks as graphs (a concept from computer science to describe complex networks) and apply methods from graph theory to describe and quantify hydrological network characteristics of the changing landscape. Due to a lack of historical very-high-resolution data, we cannot investigate a dense time series of a single representative study area to follow the evolution of the microtopographic and hydrologic network, but instead leverage the possibilities of a space-for-time substitution. We thus investigate terrain models and multispectral data from 2019 and 2021 of ten study areas located in ten fire scars of different ages (up to 120 years between date of disturbance and date of data acquisition).
    With this approach, we can infer past and future states of degradation from the currently prevailing spatial patterns and show how this type of disturbed landscape evolves over time. Representing such polygonal landscapes as graphs and reducing large amounts of data to a few quantifiable metrics supports the integration of the results into, e.g., numerical models and thus greatly facilitates the understanding of the underlying complex processes behind greenhouse gas emissions from permafrost thaw. We highlight these extensive possibilities but also illustrate the limitations encountered in the study, which stem from the limited availability of and access to pan-Arctic very-high-resolution Earth observation datasets.
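    As a toy illustration of the graph representation used here (an assumed example, not the actual processing chain applied to the LiDAR data), degrading troughs can be modeled as edges between trough-junction nodes and the network summarized with standard graph metrics, e.g. with networkx in Python:

```python
# Toy example: represent ice-wedge troughs as a graph and compute simple
# connectivity metrics. Nodes are trough junctions, edges are trough segments;
# real inputs would come from the DTM-based trough extraction described above.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("j1", "j2"), ("j2", "j3"), ("j3", "j1"),   # a connected polygon outline
    ("j4", "j5"),                                # an isolated degrading trough
])

metrics = {
    "n_junctions": G.number_of_nodes(),
    "n_segments": G.number_of_edges(),
    "connected_components": nx.number_connected_components(G),
    "mean_degree": sum(d for _, d in G.degree()) / G.number_of_nodes(),
    "largest_component_share": len(max(nx.connected_components(G), key=len)) / G.number_of_nodes(),
}
print(metrics)  # growing connectivity would indicate a better-developed drainage network
```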

    SNGuess: A method for the selection of young extragalactic transients

    With a rapidly rising number of transients detected in astronomy, classification methods based on machine learning are increasingly being employed. Their goal is typically to obtain a definitive classification of transients, and for good performance they usually require a large set of observations. However, well-designed, targeted models can reach their classification goals with fewer computing resources. This paper presents SNGuess, a model designed to find young extragalactic nearby transients with high purity. SNGuess works with a set of features that can be efficiently calculated from astronomical alert data. Some of these features are static and associated with the alert metadata, while others must be calculated from the photometric observations contained in the alert. Most of the features are simple enough to be obtained or calculated early in the lifetime of a transient after its detection. We calculate these features for a set of labeled public alert data obtained over a time span of 15 months from the Zwicky Transient Facility (ZTF). The core model of SNGuess consists of an ensemble of decision trees, which are trained via gradient boosting. Approximately 88% of the candidates suggested by SNGuess from a set of ZTF alerts spanning from April 2020 to August 2021 were found to be true relevant supernovae (SNe). For alerts with bright detections, this number ranges between 92% and 98%. Since April 2020, transients identified by SNGuess as potential young SNe in the ZTF alert stream have been published to the Transient Name Server (TNS) under the AMPEL_ZTF_NEW group identifier. SNGuess scores for any transient observed by ZTF can be accessed via a web service. The source code of SNGuess is publicly available. Comment: 14 pages, 10 figures, Astronomy & Astrophysics (A&A), forthcoming article, source code https://github.com/nmiranda/SNGues
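    As a hedged sketch of the model class described above (an ensemble of decision trees trained via gradient boosting), the following Python snippet shows a minimal training setup; the feature names, data, and hyperparameters are invented for illustration and are not the actual SNGuess features or configuration.

```python
# Minimal sketch: gradient-boosted decision trees on alert-derived features.
# Features and labels are synthetic placeholders, not the SNGuess feature set.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(20.0, 1.5, n),   # e.g. latest detection magnitude
    rng.exponential(3.0, n),    # e.g. days since first detection
    rng.normal(0.0, 0.3, n),    # e.g. rise rate in mag/day
])
y = rng.integers(0, 2, n)       # label: relevant young SN or not

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```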

    The evolution of ice-wedge polygon networks in tundra fire scars

    In response to increasing temperatures and precipitation in the Arctic, ice-rich permafrost landscapes are undergoing rapid changes. In permafrost lowland landscapes, polygonal ice wedges are especially vulnerable, and their melting induces widespread subsidence, triggering the transition from low-centered polygons (LCPs) to high-centered polygons (HCPs) by forming degrading troughs. This process has an important impact on surface hydrology, as the connectivity of such trough networks determines the rate of drainage of an entire landscape (Liljedahl et al., 2016). While scientists have observed this degradation trend throughout large domains of the polygonal patterned Arctic landscape over timescales of multiple decades, it is especially evident in disturbed areas such as fire scars (Jones et al., 2015), where wildfires removed the insulating organic soil layer. We can therefore observe the LCP-to-HCP transition within only several years. Until now, studies quantifying trough connectivity have been limited to local field studies and sparse time series. With high-resolution Earth observation data, a more comprehensive analysis is possible. However, considering the vast and ever-growing volumes of data generated, highly automated and scalable methods are needed that allow scientists to extract information on the geomorphic state of ice-wedge trough networks and on their changes over time. In this study, we combine very-high-resolution (VHR) aerial imagery and comprehensive databases of segmented polygons derived from VHR optical satellite imagery (Witharana et al., 2018) to investigate the changing polygonal ground landscapes and their environmental implications in fire scars in Northern and Western Alaska. Leveraging the automated and scalable nature of our recently introduced approach (Rettelbach et al., 2021), we represent the polygon networks as graphs (a concept from computer science to describe complex networks) and use graph metrics to describe the state of these (hydrological) trough networks. Due to a lack of historical data, we cannot investigate a dense time series of a single representative study area to follow the evolution of the network, but instead leverage the possibilities of a space-for-time substitution. Thus, we focus on data from multiple fire scars of different ages (up to 120 years between date of disturbance and date of acquisition). With our approach, we might infer past and future states of degradation from the currently prevailing spatial patterns, showing how this type of disturbed landscape evolves over space and time. It further allows scientists to gain insights into the complex geomorphology, hydrology, and ecology of these landscapes, thus helping to quantify how they interact with climate change.

    Development of Readout Interconnections for the Si-W Calorimeter of SiD

    The SiD collaboration is developing a Si-W sampling electromagnetic calorimeter, with anticipated application for the International Linear Collider. Assembling the modules for such a detector will involve special bonding technologies for the interconnections, especially for attaching a silicon detector wafer to a flex-cable readout bus. We review the interconnect technologies involved, including oxidation removal processes, pad surface preparation, solder ball selection and placement, and bond quality assurance. Our results show that solder ball bonding is a promising technique for the Si-W ECAL, and the unresolved issues are being addressed. Comment: 8 pages + title, 6 figures

    THGEM-based detectors for sampling elements in DHCAL: laboratory and beam evaluation

    We report on the results of an extensive R&D program aimed at the evaluation of Thick-Gas Electron Multipliers (THGEM) as potential active elements for Digital Hadron Calorimetry (DHCAL). Results are presented on efficiency, pad multiplicity and discharge probability of a 10x10 cm2 prototype detector with 1 cm2 readout pads. The detector consists of single- or double-THGEM multipliers coupled to the pad electrode either directly or via a resistive anode. Investigations employing standard discrete electronics and the KPiX readout system have been carried out both under laboratory conditions and with muons and pions at the CERN RD51 test beam. For detectors having a charge-induction gap, it has been shown that even a ~6 mm thick single-THGEM detector reached detection efficiencies above 95%, with a pad-hit multiplicity of 1.1-1.2 per event; discharge probabilities were of the order of 1e-6 to 1e-5 sparks per trigger, depending on the detector structure and gain. Preliminary beam tests with a WELL hole structure, closed by a resistive anode, yielded discharge probabilities below 2e-6 at an efficiency of ~95%. Methods are presented to reduce charge spread and pad multiplicity with resistive anodes. The new method showed good prospects for further evaluation of very thin THGEM-based detectors as potential active elements for DHCAL, with competitive performance, simplicity and robustness. Further developments are underway. Comment: 15 pages, 11 figures, MPGD2011 conference proceedings

    Correlated ab-initio calculations for ground-state properties of II-VI semiconductors

    Correlated ab-initio ground-state calculations, using relativistic energy-consistent pseudopotentials, are performed for six II-VI semiconductors. Valence (ns, np) correlations are evaluated using the coupled-cluster approach with single and double excitations. An incremental scheme is applied, based on correlation contributions of localized bond orbitals and of pairs and triples of such bonds. In view of the high polarity of the bonds in II-VI compounds, we examine both ionic and covalent embedding schemes for the calculation of individual bond increments. Also, a partitioning of the correlation energy according to local ionic increments is tested. Core-valence (nsp, (n-1)d) correlation effects are taken into account via a core-polarization potential. Combining the results at the correlated level with the corresponding Hartree-Fock data, we recover about 94% of the experimental cohesive energies; lattice constants are accurate to ~1%; bulk moduli are on average 10% too large compared with experiment. Comment: 10 pages, two-column RevTeX, 3 figures, accepted Phys. Rev.
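    For reference, the incremental scheme mentioned above expands the valence correlation energy in contributions of localized bonds and of pairs and triples of bonds; a schematic form of this expansion (based on the general method of increments, not transcribed from the paper) is:

```latex
% Method-of-increments expansion of the correlation energy (schematic).
\begin{align}
E_{\mathrm{corr}} &= \sum_i \varepsilon_i
  + \sum_{i<j} \Delta\varepsilon_{ij}
  + \sum_{i<j<k} \Delta\varepsilon_{ijk} + \dots \\
\Delta\varepsilon_{ij} &= \varepsilon_{ij} - \varepsilon_i - \varepsilon_j \\
\Delta\varepsilon_{ijk} &= \varepsilon_{ijk} - \varepsilon_i - \varepsilon_j - \varepsilon_k
  - \Delta\varepsilon_{ij} - \Delta\varepsilon_{ik} - \Delta\varepsilon_{jk}
\end{align}
```

    Here \varepsilon_i denotes the coupled-cluster correlation energy obtained with only the localized orbitals of bond i correlated, \varepsilon_{ij} with bonds i and j correlated, and so on; the embedding scheme (ionic or covalent) enters through the environment in which each increment is evaluated.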

    Application of a theory and simulation-based convective boundary mixing model for AGB star evolution and nucleosynthesis

    The s-process nucleosynthesis in asymptotic giant branch (AGB) stars depends on the modeling of convective boundaries. We present models and s-process simulations that adopt a treatment of convective boundaries based on the results of hydrodynamic simulations and on the theory of mixing due to gravity waves in the vicinity of convective boundaries. Hydrodynamic simulations suggest the presence of convective boundary mixing (CBM) at the bottom of the thermal pulse-driven convective zone. Similarly, convection-induced mixing processes are proposed for the mixing below the convective envelope during third dredge-up (TDU), where the ¹³C pocket for the s process in AGB stars forms. In this work, we apply a CBM model motivated by simulations and theory to models with initial masses M = 2 and 3 Mʘ and with initial metal content Z = 0.01 and Z = 0.02. As reported previously, the He-intershell abundances of ¹²C and ¹⁶O are increased by CBM at the bottom of the pulse-driven convection zone. This mixing affects the ²²Ne(α, n)²⁵Mg activation and the s-process efficiency in the ¹³C pocket. In our model, CBM at the bottom of the convective envelope during the TDU represents gravity-wave mixing. Furthermore, we take into account that hydrodynamic simulations indicate a mixing efficiency that, compared to mixing-length theory, already declines about one pressure scale height away from the convective boundary. We obtain the formation of a ¹³C pocket with a mass of ≈10⁻⁴ Mʘ. The final s-process abundances are characterized by 0.36 < [s/Fe] < 0.78, and the heavy-to-light s-process ratio is -0.23 < [hs/ls] < 0.45. Finally, we compare our results with stellar observations, presolar grain measurements and previous work.
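    For readers unfamiliar with the bracket notation in the abundance ranges above, the standard spectroscopic definition (a general convention, not specific to this paper) is:

```latex
% Standard bracket notation for stellar abundance ratios.
[\mathrm{X}/\mathrm{Y}] \equiv
  \log_{10}\!\left(\frac{N_\mathrm{X}}{N_\mathrm{Y}}\right)_{\star}
  - \log_{10}\!\left(\frac{N_\mathrm{X}}{N_\mathrm{Y}}\right)_{\odot}
```

    so that [s/Fe] measures the overall s-process enhancement relative to iron, while [hs/ls] compares the heavy (e.g. Ba, La, Ce) to the light (e.g. Sr, Y, Zr) s-process peak, with positive values indicating relatively stronger heavy-element production.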