1,374,311 research outputs found

    Modeling Lithospheric Rheology from Modern Measurements of Bonneville Shoreline Deformation

    Here I develop a cross-correlation approach to estimating heights of shoreline features, and apply the new method to paleo-shorelines of Pleistocene Lake Bonneville. I calculate 1st-derivative (slope) and 2nd-derivative (curvature) profiles from Digital Elevation Model (DEM) or Global Positioning System Real-Time Kinematic (GPS-RTK) measurements of elevation. I then cross-correlate pairs of profiles that have been shifted by various lags, or shifts in elevation. The correlation coefficient (a normalized dot-product measure of similarity) is calculated as a function of lag within small (~40 m) windows centered at various elevations. The elevation and lag with the greatest correlation coefficient indicate the shoreline elevation at the reference profile and the change in shoreline height for the profile pair. I evaluate several different algorithms for deriving slope and curvature by examining closure of elevation lags across profile triples. I then model isostatic response to Lake Bonneville loading and unloading. I first model lakeshore uplift response to lake load removal assuming an elastic layer over an inviscid half-space. I obtain a best-fit comparison of predicted to observed shoreline heights for the Bonneville level with an elastic layer thickness, Te, of 25±2 km (at 95% confidence) when using only previously published shoreline elevation estimates. The best fit for the Bonneville level when using these estimates plus 44 new estimates suggests a Te of 26±2 km. The best-fit model for the Provo level suggests a Te of 17±3 km. For the Gilbert level, the response is insensitive to the assumed Te. I next model isostatic response to Bonneville loading and unloading assuming an elastic layer over a viscoelastic half-space. This approach assumes constant parameters for the entire loading history, and yields a best-fit model with Te = 70±5 km and viscosity η ≈ 2x10^18 Pa s, with the 95% confidence range spanning ~1x10^18 to ~5x10^19 Pa s, when only the previously published data are used. With the newer data added, the best-fit model has Te = 58±2 km and η ranging from ~1x10^18 to ~1x10^19 Pa s at 95% confidence. The 12-15 m weighted root-mean-square misfit to the best-fitting model is dominated by tectonic signals related to Basin-and-Range tectonics, particularly seismic offsets of the Wasatch fault, and closely mimics the geological-timescale pattern of basin subsidence and range uplift.
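
    The lag search at the core of this method can be illustrated with a short sketch. The function below is a minimal, hypothetical implementation (the name, defaults, and sampling assumptions are mine, not the thesis's): it slides one curvature profile past another in elevation and returns the lag with the highest normalized dot-product correlation.

```python
import numpy as np

def best_lag(ref, other, d_elev=0.5, max_lag=20.0):
    """Scan elevation lags and return the one maximizing the normalized
    cross-correlation between two curvature profiles.

    ref, other : equal-length 1-D arrays of curvature sampled every
                 d_elev metres of elevation on a common grid (a ~40 m
                 window would be cut out by the caller).
    """
    n_max = int(max_lag / d_elev)
    best_r, best_n = -np.inf, 0
    for n in range(-n_max, n_max + 1):
        # overlapping portions of the two profiles at integer shift n
        a = ref[max(0, n):len(ref) + min(0, n)]
        b = other[max(0, -n):len(other) + min(0, -n)]
        a = a - a.mean()
        b = b - b.mean()
        # normalized dot product (Pearson-style similarity)
        r = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if r > best_r:
            best_r, best_n = r, n
    return best_n * d_elev, best_r   # lag in metres, peak coefficient
```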

    Emulation of reionization simulations for Bayesian inference of astrophysics parameters using neural networks

    Next-generation radio experiments such as LOFAR, HERA and SKA are expected to probe the Epoch of Reionization and claim a first direct detection of the cosmic 21cm signal within the next decade. The resulting data volumes will be enormous and can thus potentially revolutionize our understanding of the early Universe and galaxy formation. However, numerical modelling of the Epoch of Reionization can be prohibitively expensive for Bayesian parameter inference, and how to optimally extract information from incoming data is currently unclear. Emulation techniques for fast model evaluations have recently been proposed as a way to bypass costly simulations. We consider the use of artificial neural networks as a blind emulation technique. We study the impact of training duration and training set size on the quality of the network prediction and the resulting best-fit values of a parameter search. A direct comparison is drawn between our emulation technique and an equivalent analysis using 21CMMC. We find good predictive capabilities of our network using training sets as small as 100 model evaluations, which is within the capabilities of fully numerical radiative transfer codes.
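
    The basic workflow (run a small set of simulations, fit a network to them, then query the network inside the likelihood) can be sketched as follows. This is a generic illustration, not the authors' pipeline: the parameter names, network size, and stand-in "simulator" are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training set: 100 simulator evaluations, each mapping
# 3 normalized astrophysical parameters to a binned 21cm power
# spectrum with 20 k-bins.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 1.0, size=(100, 3))
spectra = np.sin(theta @ rng.normal(size=(3, 20)))  # stand-in for the simulator

# Train a small multilayer perceptron as the emulator.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000)
emulator.fit(theta, spectra)

# Inside an MCMC step, the emulator replaces the expensive simulation:
def log_like(params, data, sigma):
    model = emulator.predict(params.reshape(1, -1))[0]
    return -0.5 * np.sum(((data - model) / sigma) ** 2)
```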

    Channel selection in e-commerce age: a strategic analysis of co-op advertising models

    Purpose: The purpose of this paper is to develop and compare two co-op advertising models, one under a traditional channel and one under a dual channel, in order to select the optimal channel structure through which the manufacturer sells products and to derive optimal co-op advertising strategies for the manufacturer and the retailer. Design/methodology/approach: Stackelberg game theory is used to develop the two co-op advertising models: a co-op advertising model under a traditional channel and a co-op advertising model under a dual channel. We then compare the two models to select the optimal channel structure for the manufacturer and to derive optimal co-op advertising strategies for the manufacturer and the retailer. Furthermore, we analyze the impact of product web-fit on these optimal strategies and illustrate it with numerical examples. Based on our results, we provide some significant theoretical and managerial insights, and derive some probable paths for future research. Findings: We provide a framework for researching optimal co-op advertising strategies in a two-level supply chain under different marketing channel structures. First, we discuss the traditional-channel and dual-channel co-op advertising models based on Stackelberg game theory, and we derive optimal co-op advertising strategies. Next, comparing the two channel structures, we find that the manufacturer always benefits from the dual channel, whereas the retailer does not always benefit; the dual channel structure is better than the pure retail channel only under certain conditions. The optimal co-op advertising strategies for the manufacturer and the retailer are also obtained. Research limitations/implications: First, we focus on the aforementioned two channel structures; a further comparison with other channel structures could be investigated. Second, we ignore some factors that influence product demand, such as service and price; future research could start from these factors. Third, how demand uncertainty affects channel selection and co-op advertising strategy is another interesting research topic. Practical implications: Since the manufacturer and the retailer know the impact of co-op advertising on demand in the traditional and direct channels, both would like to choose reasonable strategies to improve channel coordination. It would therefore be best if business managers conducted a market survey before starting a co-op advertising campaign. Originality/value: Two new co-op advertising models for the e-commerce age are developed, and the impact of product web-fit on the optimal strategies is analyzed and illustrated with numerical examples. In addition, the optimal channel structure in the e-commerce age is selected for the manufacturer and the retailer.
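
    To make the leader-follower structure concrete, here is a tiny numerical sketch of a Stackelberg co-op advertising game: the manufacturer (leader) picks a participation rate t, anticipating that the retailer (follower) picks local advertising spend a in response. The square-root demand response and all constants are illustrative assumptions, not the paper's model.

```python
import numpy as np

m_margin, r_margin, k = 2.0, 1.0, 4.0   # unit margins and ad effectiveness

def retailer_profit(a, t):
    demand = k * np.sqrt(a)
    return r_margin * demand - (1 - t) * a   # retailer pays share (1 - t) of ads

def retailer_best_response(t):
    a_grid = np.linspace(1e-6, 50, 5001)
    return a_grid[np.argmax(retailer_profit(a_grid, t))]

def manufacturer_profit(t):
    a = retailer_best_response(t)            # leader anticipates the follower
    return m_margin * k * np.sqrt(a) - t * a

t_grid = np.linspace(0, 0.99, 100)
t_star = t_grid[np.argmax([manufacturer_profit(t) for t in t_grid])]
print(t_star, retailer_best_response(t_star))  # lands near t = 0.6, a = 25 here
```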

    Radiogenic backgrounds in the NEXT double beta decay experiment

    Natural radioactivity represents one of the main backgrounds in the search for neutrinoless double beta decay. Within the NEXT physics program, the radioactivity-induced backgrounds are measured with the NEXT-White detector. Data from 37.9 days of low-background operations at the Laboratorio Subterraneo de Canfranc with xenon depleted in Xe-136 are analyzed to derive a total background rate of (0.84 +/- 0.02) mHz above 1000 keV. The comparison of data samples with and without the use of the radon abatement system demonstrates that the contribution of airborne Rn is negligible. A radiogenic background model is built upon the extensive radiopurity screening campaign conducted by the NEXT collaboration. A spectral fit to this model yields the specific contributions of Co-60, K-40, Bi-214 and Tl-208 to the total background rate, as well as their locations in the detector volumes. The results are used to evaluate the impact of the radiogenic backgrounds on the double beta decay analyses, after the application of topological cuts that reduce the total rate to (0.25 +/- 0.01) mHz. Based on the best-fit background model, the NEXT-White median sensitivity to the two-neutrino double beta decay is found to be 3.5 sigma after 1 year of data taking. The background measurement in a Q(beta beta) +/- 100 keV energy window validates the best-fit background model also for the neutrinoless double beta decay search with NEXT-100: only one event is found, while the model expectation is (0.75 +/- 0.12) events.
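
    As a rough illustration of what a median sensitivity of this kind means, the Asimov approximation (Cowan et al. 2011) gives the median discovery significance for an expected signal s on top of an expected background b. This is a generic statistics sketch, not the collaboration's actual sensitivity machinery, and the signal count below is invented.

```python
import math

def asimov_significance(s, b):
    """Median discovery significance for expected signal s over expected
    background b (Asimov approximation, Cowan et al. 2011)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Illustrative only: with ~0.75 expected background events in the
# Q(beta beta) +/- 100 keV window, even a handful of signal events
# would already stand out at several sigma.
print(asimov_significance(5.0, 0.75))   # ~3.7 sigma
```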

    Causal effects of green infrastructure on stormwater hydrology and water quality

    Applications of green infrastructure to stormwater management continue to increase in urban landscapes. There are numerous studies of individual stormwater management sites, but few meta-analyses that synthesize and explore design variables for stormwater control structures within a robust statistical framework. The lack of a standardized framework is due to the complexity of stormwater infrastructure designs: locally customized designs fit to diverse site conditions create datasets that are messy, non-uniform, and difficult to analyze across multiple sites. In this dissertation, I first examine how hydrologic processes govern the function of various stormwater infrastructure technologies using water budget data from the published literature. The hydrologic observations are displayed on a Water Budget Triangle, a ternary plot tool developed to visualize simplified water budgets, to enable direct functional comparisons of green and grey approaches to stormwater management. The findings are used to generate a suite of observable site characteristics, which are then mapped to a set of stormwater control and treatment sites reported in the International Stormwater Best Management Practice (BMP) database. These mapped site characteristics provide site context for the runoff and water quality observations in the database. Drawing on these contextual observations of design variables, I next examine the functional design of different stormwater management technologies by quantifying the differences among varied structural features and comparing their causal effects on hydrologic and water quality performance. This stormwater toolbox provides a framework for comparing the overall performance of different system types to understand the causal implications of stormwater design.
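
    A ternary plot of this kind places each site's water budget split inside a triangle. The sketch below shows the standard barycentric-to-Cartesian mapping; the choice of axes (runoff, evapotranspiration, infiltration) and the example budgets are assumptions for illustration, since the dissertation's exact Water Budget Triangle layout is not reproduced here.

```python
import numpy as np
import matplotlib.pyplot as plt

def ternary_xy(runoff, evapotranspiration, infiltration):
    """Map a (R, ET, I) water-budget split onto 2-D coordinates
    inside an equilateral triangle."""
    total = runoff + evapotranspiration + infiltration
    r, et, i = runoff / total, evapotranspiration / total, infiltration / total
    x = 0.5 * (2 * et + i)          # barycentric -> Cartesian
    y = (np.sqrt(3) / 2) * i
    return x, y

# Hypothetical sites: a sealed surface versus a bioretention cell.
for label, budget in {"pavement": (0.85, 0.10, 0.05),
                      "bioretention": (0.15, 0.30, 0.55)}.items():
    x, y = ternary_xy(*budget)
    plt.scatter(x, y)
    plt.annotate(label, (x, y))
plt.show()
```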

    Quantity and Quality: Not a Zero-Sum Game

    Quantification of existing theories is a great challenge but also a great chance for the study of language in the brain. While quantification is necessary for the development of precise theories, it demands new methods and new perspectives. In light of this, four complementary methods were introduced to provide a quantitative and computational account of the extended Argument Dependency Model (eADM) of Bornkessel-Schlesewsky and Schlesewsky. First, a computational model of human language comprehension was introduced on the basis of dependency parsing. This model provided an initial comparison of two potential mechanisms for human language processing: the traditional "subject" strategy, based on grammatical relations, and the "actor" strategy, based on prominence and adopted from the eADM. Initial results showed an advantage for the traditional "subject" model in a restricted context; however, the "actor" model demonstrated behavior in a test run that was more similar to human behavior than that of the "subject" model. Next, a computational-quantitative implementation of the "actor" strategy as weighted feature comparison between memory units was used to compare it to other memory-based models from the literature on the basis of EEG data. The "actor" strategy clearly provided the best model, showing a better global fit as well as a better match in all details. Building upon this success in modeling EEG data, the feasibility of estimating free parameters from empirical data was demonstrated. Both the procedure for doing so and the necessary software were introduced and applied at the level of individual participants. Using empirically estimated parameters, the models from the previous EEG experiment were recalculated and yielded similar results, thus reinforcing the previous work. In a final experiment, the feasibility of analyzing EEG data from a naturalistic auditory stimulus was demonstrated, which conventional wisdom holds to be impossible. The analysis suggested a new perspective on the nature of event-related potentials (ERPs), one that does not contradict existing theory yet goes against previous intuition. Using this new perspective as a basis, a preliminary attempt at a parsimonious neurocomputational theory of cognitive ERP components was developed.
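
    The phrase "weighted feature comparison between memory units" can be unpacked with a toy sketch: each argument held in memory is a feature vector, and the evidence for retrieving it under a cue is a weighted count of matching features. The features, weights, and cue below are invented for illustration and are not the dissertation's actual parameterization.

```python
import numpy as np

# Each memory unit is a binary feature vector; retrieval evidence is a
# weighted count of features matching the cue (all values hypothetical).
features = ["+animate", "+1st_arg", "+nominative", "+definite"]
weights = np.array([2.0, 1.5, 1.0, 0.5])

memory = {
    "NP1": np.array([1, 1, 1, 1]),
    "NP2": np.array([0, 0, 0, 1]),
}
cue = np.array([1, 1, 0, 1])   # retrieval cue, e.g. at the verb

for name, unit in memory.items():
    match = np.sum(weights * (unit == cue))
    print(name, match)          # NP1 scores 4.0, NP2 scores 1.5
```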

    Optimal Models for Plant Disease and Pest Detection Using UAV Image

    The use of deep learning methods to detect plant diseases and pests from UAV images is an important application of remote sensing technology in modern forestry. This paper uses a CenterNet-based object detection method to construct models for plant disease and pest detection. The accuracy of the models is influenced by the parameter alpha, which controls the affine transformation in the preprocessing of CenterNet. First, different alphas are sampled for training and testing. Next, the least squares method is used to fit the curve of accuracy, measured by mAP (mean average precision), as a function of alpha; the fitted curve is mAP = -0.22 * alpha^2 + 0.32 * alpha + 0.42. For comparison, an automated machine learning (AutoML) method is also used to automatically search for the best model. The experiments use 5,281 images as the training dataset, 1,319 images as the validation dataset, and 3,842 images as the test dataset. The results show that the best alpha value obtained by the least squares method is 0.733, and the accuracy of the corresponding model is 0.536 in mAP@[.5, .95]. In contrast, the AutoML model is more accurate, at 0.545 in mAP@[.5, .95]. However, the training time and training resource consumption of the AutoML method are about 3 times those of the least squares method. Therefore, in practice, a trade-off should be made according to accuracy requirements, resource consumption, and task urgency.
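
    The least-squares step reduces to fitting a parabola to (alpha, mAP) samples and taking its vertex. A minimal sketch follows, using the paper's fitted coefficients to generate illustrative sample points (the actual measurements are not reproduced here):

```python
import numpy as np

# Fit mAP(alpha) with a quadratic and take the vertex as the best alpha.
alpha = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
map_score = -0.22 * alpha**2 + 0.32 * alpha + 0.42   # + noise in practice

a, b, c = np.polyfit(alpha, map_score, deg=2)
alpha_best = -b / (2 * a)                 # vertex of the parabola (a < 0)
map_best = np.polyval([a, b, c], alpha_best)
print(alpha_best, map_best)               # ~0.73, ~0.54 for these coefficients
```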

    Neural network determination of parton distributions: the nonsinglet case

    We provide a determination of the isotriplet quark distribution from available deep-inelastic data using neural networks. We give a general introduction to the neural network approach to parton distributions, which provides a solution to the problem of constructing a faithful and unbiased probability distribution of parton densities based on available experimental information. We discuss in detail the techniques which are necessary in order to construct a Monte Carlo representation of the data, to construct and evolve neural parton distributions, and to train them in such a way that the correct statistical features of the data are reproduced. We present the results of the application of this method to the determination of the nonsinglet quark distribution up to next-to-next-to-leading order, and compare them with those obtained using other approaches. Comment: 46 pages, 18 figures, LaTeX with JHEP3 class
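
    The Monte Carlo representation mentioned above amounts to generating artificial replicas of the data, fluctuated within the experimental covariance, and fitting one neural network per replica so that the ensemble encodes the data's uncertainty. A minimal sketch with invented numbers (the data values and covariance are placeholders, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)
data = np.array([0.31, 0.27, 0.22])        # illustrative measured values
cov = np.diag([0.01, 0.01, 0.02]) ** 2     # illustrative covariance matrix

# Generate 100 pseudo-data replicas consistent with the covariance;
# each replica would be fit by its own neural network parametrization,
# and the spread of the trained networks gives the PDF uncertainty.
n_rep = 100
replicas = rng.multivariate_normal(data, cov, size=n_rep)
print(replicas.mean(axis=0), replicas.std(axis=0))
```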